2B0ST0N6 day one

This is the first of my posts describing my experiences at SIGGRAPH 2006. I will try to have a post every day summarizing what I see. Everything here is my personal opinion.

The first day of the conference is a little more low-key than the other days, by the look of things. The exhibition is still being set up (you can see it when you walk across the bridges) and there are no paper presentations. There are courses, however, and given that there was not much else going on I decided to go to one and get my learning on.

The course I chose was this one. Basically it was a series of lectures from different speakers, all centered on shading and rendering on the GPU. This area is still growing very fast, and this was a great course for hearing about its history and about the different technologies and techniques.

The first lecture was from David Blythe, a fellow Microsoft guy, who talked about DX10: how the architecture differs from DX9, and what the motivations and improvements are. This was very informative - I have read some slide decks and tutorial code before, but having a talk built around explaining the differences really helps. One interesting thing about the talk was that it also helped me understand DX9 better through the explanations of where the frustrations and bottlenecks are. I knew that state changes were expensive and had some idea why, but the slide deck was good at explaining how the bottlenecks relate to GPU architecture. From what I can tell the DX10 design is a lot cleaner, and it has the potential to free up a lot of CPU time by reducing how much has to be shunted between the CPU and GPU. I can't wait until I get a chance to play with it some - geometry shaders look like they will open up a lot of algorithms that you could not do before.
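
I could not resist sketching what the new stage looks like. Below is a made-up, minimal DX10-style geometry shader in HLSL (held in a C++ string constant); the VSOut struct and the GS entry point are just names I picked, but it shows the basic idea: the shader sees a whole triangle and can emit extra primitives, here a shifted copy of the input.

```cpp
// A made-up, minimal DX10-style geometry shader (HLSL source held in a C++ string).
// It receives a whole triangle and can emit new primitives; here it passes the
// triangle through and then appends a shifted copy of it.
static const char* kGeometryShaderSource =
    "struct VSOut { float4 pos : SV_POSITION; };\n"
    "\n"
    "[maxvertexcount(6)]\n"
    "void GS(triangle VSOut tri[3], inout TriangleStream<VSOut> stream)\n"
    "{\n"
    "    for (int i = 0; i < 3; ++i)\n"
    "        stream.Append(tri[i]);      // pass the original triangle through\n"
    "    stream.RestartStrip();\n"
    "    for (int j = 0; j < 3; ++j)\n"
    "    {\n"
    "        VSOut v = tri[j];\n"
    "        v.pos.x += 0.1;             // emit a second, shifted copy\n"
    "        stream.Append(v);\n"
    "    }\n"
    "}\n";
```

Being able to amplify or cull geometry on the GPU like this is what should make things like single-pass cubemap rendering or shadow volume extrusion possible without round trips to the CPU.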

The next talk, from Michael McCool, discussed RapidMind, a system for writing general-purpose parallel programs, of which shaders are a subset. Some of the renderings that he showed were very impressive, and the idea that you can write your algorithms once and have them run on a variety of hardware and architectures was very cool. What was interesting was hearing about applications beyond rendering, such as flocking algorithms. I love seeing shader programs do things other than shading (I have seen a paper about doing sorting on a GPU, for example).
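
I could not do RapidMind's API justice from memory, but to give a flavour of the kind of non-shading work that gets mapped onto GPUs, here is a small CPU-side sketch (mine, not code from the talk or from that paper) of the bitonic sorting network that GPU sorting schemes typically implement as one data-parallel pass per inner step:

```cpp
#include <cstdio>
#include <cstdlib>
#include <utility>
#include <vector>

// In-place bitonic sort; the array size must be a power of two. Each (k, j) inner
// pass touches every element independently, which is why a GPU version can run it
// as a single data-parallel pass over a texture holding the keys.
void bitonic_sort(std::vector<float>& a) {
    const size_t n = a.size();
    for (size_t k = 2; k <= n; k <<= 1) {          // size of the sequences being merged
        for (size_t j = k >> 1; j > 0; j >>= 1) {  // compare distance within the merge
            for (size_t i = 0; i < n; ++i) {       // this loop is the parallel part
                const size_t partner = i ^ j;
                if (partner > i) {
                    const bool ascending = ((i & k) == 0);
                    if ((a[i] > a[partner]) == ascending)
                        std::swap(a[i], a[partner]);
                }
            }
        }
    }
}

int main() {
    std::vector<float> keys;
    for (int i = 0; i < 16; ++i)
        keys.push_back(static_cast<float>(std::rand() % 100));
    bitonic_sort(keys);
    for (size_t i = 0; i < keys.size(); ++i)
        std::printf("%.0f ", keys[i]);
    std::printf("\n");
    return 0;
}
```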

Next, Marc Olano talked about the OpenGL shading language. This was interesting to me because I have written shaders for DX9 and I have written OpenGL code before, but I have never done shaders in OpenGL and I was curious to see how it is done. The way that you can bind data from the OpenGL state to the shaders through built-in variable names like gl_Color is very elegant. I was also glad to see that for the most part the shaders looked just like their DX counterparts. It is nice that the similarity between shading languages allows for easy transfer of algorithms between them.
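
To illustrate what I mean (this is my own toy example, not from Marc's slides), the pass-through case looks roughly like this. The GLSL source picks up the fixed-function color and transform state through built-in names, and the C++ side compiles and links it with the standard GL 2.0 entry points; it assumes a GL 2.0 context is already current and that something like GLEW has loaded the function pointers, and BuildPassThroughProgram and Compile are just names I made up:

```cpp
#include <GL/glew.h>   // assumes GLEW (or similar) provides the GL 2.0 entry points
#include <cstdio>

// gl_Color in the vertex shader is the per-vertex color set with glColor*();
// gl_Color in the fragment shader is the interpolated primary color.
static const char* kVertexSrc =
    "void main() {\n"
    "    gl_FrontColor = gl_Color;\n"
    "    gl_Position   = ftransform();\n"   // fixed-function transform of gl_Vertex
    "}\n";

static const char* kFragmentSrc =
    "void main() {\n"
    "    gl_FragColor = gl_Color;\n"
    "}\n";

static GLuint Compile(GLenum type, const char* src) {
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);
    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) std::fprintf(stderr, "shader failed to compile\n");
    return shader;
}

// Call once a GL 2.0 context is current; returns the linked program.
GLuint BuildPassThroughProgram() {
    GLuint program = glCreateProgram();
    glAttachShader(program, Compile(GL_VERTEX_SHADER, kVertexSrc));
    glAttachShader(program, Compile(GL_FRAGMENT_SHADER, kFragmentSrc));
    glLinkProgram(program);
    glUseProgram(program);   // subsequent glColor*/glVertex* calls now feed the shaders
    return program;
}
```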

After lunch Mark Kilgard from NVIDIA talked about Cg, as well as giving an overview of the development of NVIDIA hardware, starting way back more than five years ago. There were a lot of graphs demonstrating the increase in performance over the years, as well as breakdowns of where the performance was coming from (clock speed, number of shader units, etc.). There were also some comparisons with current and upcoming CPUs from Intel to give an idea of the difference in speed (which is one or two orders of magnitude in favor of the GPU, depending on how you measure things, apparently). Mark also gave a bit of an overview of Cg itself, and what I found interesting was that the language can target multiple APIs. The idea that you can write a shader once and use it with both DX and OpenGL is pretty compelling.
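
For what it is worth, the runtime side of that is pretty small. Here is a rough sketch (mine, not from the talk) of loading a trivial Cg fragment program through the OpenGL half of the Cg runtime - the kCgSource string and LoadCgFragmentProgram are my own names, and it assumes a GL context is already current. Feeding the same source to Direct3D is a matter of using the cgD3D9 entry points instead:

```cpp
#include <Cg/cg.h>
#include <Cg/cgGL.h>
#include <cstdio>

// A trivial Cg fragment program: just passes the interpolated color through.
static const char* kCgSource =
    "float4 main(float4 color : COLOR) : COLOR { return color; }\n";

// Assumes an OpenGL context is already current.
void LoadCgFragmentProgram() {
    CGcontext context = cgCreateContext();
    CGprofile profile = cgGLGetLatestProfile(CG_GL_FRAGMENT); // best profile the GPU supports
    CGprogram program = cgCreateProgram(context, CG_SOURCE, kCgSource,
                                        profile, "main", NULL);
    if (!program) {
        std::fprintf(stderr, "Cg compile failed: %s\n", cgGetErrorString(cgGetError()));
        return;
    }
    cgGLLoadProgram(program);
    cgGLEnableProfile(profile);
    cgGLBindProgram(program);   // subsequent draws use the Cg program
}
```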

Next up was Thorsten Scheuermann from ATI, who talked about using render to vertex buffer (R2VB) as a way to use the pixel shader hardware for vertex processing. Since the pixel shader is often more powerful and has a richer instruction set, this allows some things to be done much faster than they could be in the vertex shader. The examples that Thorsten showed involved skinned mesh animation with around 10,000 instances of an animation. NVIDIA hardware does not support this operation (apparently), but you can use vertex textures to achieve a similar technique there. What I liked about the presentation was seeing familiar things used with a bit of a twist. I find that seeing more things like this helps you be more creative when working under similar constraints in the future.
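
For reference, the vertex texture route looks something like the sketch below on the GLSL side (my own illustration, not Thorsten's demo). The u_offsets uniform and the choice of gl_MultiTexCoord1 for addressing the texture are made up; the key point is the explicit-LOD texture fetch happening in the vertex stage, which needs Shader Model 3.0 class hardware:

```cpp
// Assumes hardware that exposes at least one vertex texture unit
// (GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS > 0). "u_offsets" is a hypothetical
// floating point texture holding per-vertex (or per-instance) position offsets.
static const char* kVertexTextureShader =
    "uniform sampler2D u_offsets;\n"
    "void main() {\n"
    "    // Texture reads in the vertex stage use the explicit-LOD variant.\n"
    "    vec3 offset = texture2DLod(u_offsets, gl_MultiTexCoord1.xy, 0.0).xyz;\n"
    "    vec4 displaced = gl_Vertex + vec4(offset, 0.0);\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * displaced;\n"
    "    gl_FrontColor = gl_Color;\n"
    "}\n";
```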

At this point there was a break, and I switched to another course, on procedural modelling of urban environments, since the shading and rendering course headed into the non-realtime world from here. I actually regret doing this - even though I am not involved in production rendering, a lot of the same problem spaces are explored and the technology is very similar. When I went back later for the last Q&A session I caught the end of a talk from Fabio Pellacini that looked very compelling.

The urban environments course was still interesting, however. Pascal Mueller talked about a tool he has developed that takes various forms of data (such as GIS information) and combines them with algorithms he has devised in order to procedurally generate models of cities. The end results look amazing to me - considering the work that would be involved in building a detailed 3D model of a metropolis by hand, the fact that you can generate so much modelling data from such small inputs is incredible. Some examples of applying the software included modelling cities where the only data was the small amount gleaned from an archaeological dig.

Ben Watson also presented a technique for creating urban models (down to the street and parcel level) from nothing (or at least from very small inputs, depending on the desires of the user). This differed from Mueller's work in that it was not about modelling an existing place. This was also amazing to me, because coming up with pseudo-random data sets that fit some characteristic (such as looking like a typical city map) is something I have tried to do before without success. What I found quietly amusing was that I looked at a screenshot and thought "that looks like SimCity", and then he mentioned doing work with Maxis and using their engine.
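
Just to show the flavour of the problem (with a toy of my own that has nothing to do with Watson's actual technique), even a crude recursive block subdivision driven by a seeded random number generator produces a parcel layout from essentially no input:

```cpp
#include <cstdio>
#include <cstdlib>
#include <vector>

struct Rect { float x, y, w, h; };

// Recursively split a block across its longer side at a jittered position until
// the pieces fall under a minimum parcel area. The jitter range (40%-60%) is an
// arbitrary choice that keeps the parcels looking plausible.
void Subdivide(const Rect& r, float min_area, std::vector<Rect>& out) {
    if (r.w * r.h <= min_area) {
        out.push_back(r);
        return;
    }
    const float t = 0.4f + 0.2f * (std::rand() / static_cast<float>(RAND_MAX));
    Rect a = r, b = r;
    if (r.w >= r.h) {
        a.w = r.w * t;
        b.x = r.x + a.w;
        b.w = r.w - a.w;
    } else {
        a.h = r.h * t;
        b.y = r.y + a.h;
        b.h = r.h - a.h;
    }
    Subdivide(a, min_area, out);
    Subdivide(b, min_area, out);
}

int main() {
    std::srand(2006);                       // the seed is effectively the only input
    Rect city = { 0.0f, 0.0f, 256.0f, 256.0f };
    std::vector<Rect> parcels;
    Subdivide(city, 400.0f, parcels);
    std::printf("generated %lu parcels\n", static_cast<unsigned long>(parcels.size()));
    return 0;
}
```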

The content of the urban environments talks differed from what I expected. Reading the description of the course now, it seems obvious, but I was expecting more detail about how to render a dataset like a detailed city model in real time. It is interesting how our own biases shape what we see or expect - obviously the problem of rendering the data set is more interesting to me than the problem of creating it (mainly due to my lack of aptitude for the latter).

The last thing that I saw was the Fast-Forward papers preview. This is basically a form of torture for people presenting papers at SIGGRAPH - a representative of each paper has to give a 50-second talk about it. It is a great idea - as well as being entertaining, in around two hours you get a preview of every paper at the conference (which is no mean feat considering how many papers there are). Less than a minute is often enough to decide whether you want to go to the full presentation, so it is very useful. There is so much content at this conference that the problem is knowing how to sift through it without missing anything.
