In this series of blogs, I'm continuing to talk about some of the techniques in video games / CG that perform very well on the CELL chip. Some of these techniques are very advanced, and some are simple. One of the great things about video games, and the CELL chip, is that the graphics can be largely designed, meaning that there are many creative alternatives in video game design, which in turn reflect how the graphics could be generated. This can make the SPU perform extremely well: many of these techniques involve almost NO INPUT DMA transfers to the SPU. With double buffering techniques, this means that the SPU can spend pretty much ALL of its time computing, at full speed, in SRAM, with no DMA wait states. That means continual output of scan lines, vertex buffers, compressed textures, bone matrices, advanced enveloping (skinning) like ILM's MWE (multi-weight enveloping); the sky is the limit. This is one of the concepts about SPUs that I really like.

Advances in the last 10 years or so in image synthesis, as well as motion synthesis, are very much like other methods of synthesis: they are exactly the kinds of algorithms that require little input definition, such as parametrizations of models, and that produce huge volumes of output data, with parameterizable resolutions. We are now at the stage where we can literally have parametrized NURBS faces, meaning that complete crowds can be generated out of an extrapolation of a base set of human faces. Some of the hair demos that are available now are quite amazing; just imagine the creative possibilities of OFFLINE META GENERATION for artists' tools in a professional environment.
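To make the double-buffered, zero-input-DMA idea concrete, here is a plain C++ sketch (no real SPU intrinsics here; `fake_dma_put` is my stand-in for an asynchronous mfc_put, and the generator kernel is a dummy):

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <vector>

// Two "local store" output buffers: while the DMA engine drains one,
// the compute loop fills the other, so the SPU never stalls on DMA.
constexpr std::size_t kBufSize = 256;

// Stand-in for an asynchronous mfc_put: pushes a finished buffer out
// to main memory. On a real SPU this would be tagged and overlapped.
static void fake_dma_put(const float* buf, std::size_t n,
                         std::vector<float>& main_memory) {
    main_memory.insert(main_memory.end(), buf, buf + n);
}

// Purely generative kernel: needs NO input DMA, just an index.
static float generate(std::size_t i) {
    return static_cast<float>(i) * 0.5f;
}

// Produce `total` floats using two ping-ponged output buffers.
std::vector<float> double_buffered_generate(std::size_t total) {
    std::array<std::array<float, kBufSize>, 2> ls_buf{};
    std::vector<float> main_memory;
    int active = 0;
    std::size_t produced = 0;
    while (produced < total) {
        const std::size_t n = std::min(kBufSize, total - produced);
        for (std::size_t i = 0; i < n; ++i)
            ls_buf[active][i] = generate(produced + i);
        // Real SPU code would only wait on the tag of the *other*
        // buffer before refilling it; the put stays asynchronous.
        fake_dma_put(ls_buf[active].data(), n, main_memory);
        produced += n;
        active ^= 1;  // ping-pong
    }
    return main_memory;
}
```

On real hardware the point is that the tag wait for buffer A happens while buffer B is being filled, which is how the compute loop ends up with essentially no DMA wait states.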
The concept of META generation has been around for a very long time, but it has been largely overlooked in the video games industry, mostly due to modelers such as MAX / MAYA, and the data exporters involved with the various parts of the visual art production process. META generation is basically the idea of storing the "creation process" involved in the construction of a geometric model or image. A good example of this kind of thing is the German procedural game .kkrieger, which is ULTRA TINY. GIS systems also use a lot of META generation techniques, as do some paint programs, like Photoshop and Fractal Design Painter. Typically, model data and texture image data are exported into the video games pipeline as vertex data, JPEG data, and other forms of static compressed data. META generation goes way beyond most types of compression. For example, an OpenGL display list of nested display list calls can compress 1000:1 or more, depending on how complex the source display list palette objects are. In the SYNTH video game, there is a nested display list that holds somewhere around 150 MB of just pure transformations and integer display list calls, while the source palette objects sum to somewhere around 15 MB of vertex data. Think of how many verts that would be if it were ever stored flat, when the call lists of ints are themselves huge, and the call objects have 500-600 verts each in some cases. This really is different from a geometry shader, and would kill a shader, with an interesting program running.
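A toy of my own (not SYNTH's actual code) showing where ratios like 1000:1 come from: store a small palette of objects plus a call list of (object id, transform) records, and only ever expand on the fly:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// One "call": draw palette object `id`, translated by (tx, ty, tz).
// A real display list would carry full matrices; a translation is
// enough to show the storage math.
struct Call { int id; float tx, ty, tz; };

// Replay the call list against the palette into flat vertex data,
// the way an OpenGL display list replays nested glCallList calls.
std::vector<Vec3> expand(const std::vector<std::vector<Vec3>>& palette,
                         const std::vector<Call>& calls) {
    std::vector<Vec3> out;
    for (const Call& c : calls)
        for (const Vec3& v : palette[c.id])
            out.push_back({v.x + c.tx, v.y + c.ty, v.z + c.tz});
    return out;
}

// Bytes needed to store the expanded mesh vs. the palette + calls.
std::size_t flat_bytes(std::size_t n_calls, std::size_t verts_per_obj) {
    return n_calls * verts_per_obj * sizeof(Vec3);
}
std::size_t meta_bytes(std::size_t n_objs, std::size_t verts_per_obj,
                       std::size_t n_calls) {
    return n_objs * verts_per_obj * sizeof(Vec3) + n_calls * sizeof(Call);
}
```

With one 500-vert palette object called 10,000 times, the flat mesh is about 60 MB of verts while the palette plus call list is under 200 KB, a few hundred to one before you even nest the lists.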
I won't pretend that there is enough time for me to begin to explain just how good a META modeling package could be. It could be better than MAYA in many ways; it just depends on the kind of game design. If you were looking for realism all the time, and human facial imagery, a high degree of digitizing may be involved, which nullifies the process of modeling and META expression. On the other hand, good skin and hair shaders have been developed, as well as good motion synthesis, similar to research by Petros Faloutsos (the virtual stuntman), and companies like NaturalMotion.
If I were to start to vamp up my video games studio to take advantage of the META generation capabilities of the CELL / PS3, I would start with the idea of META texturing, not to be confused with MEGA-texturing (Carmack). META texturing is, for the most part, a tool-oriented concept, in which the steps of the texturing process are stored instead of the final output. This right away leads to things like brick modeling, and no doubt many kinds of finite element based approximations. The possibilities are endless, but at the same time, many of these models require creative programmers, which is a bit strange.
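A minimal sketch of the tool-side idea, storing the texturing steps rather than the pixels (the op set and command struct here are my own invention, not from any real tool):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// One recorded texturing step; a whole texture is just a list of these.
enum class Op : std::uint8_t { Fill, AddNoise, Brighten };
struct TexCmd { Op op; float a; };  // `a` = value / amplitude / amount

// Replay the recorded process into an actual luminance texture.
std::vector<float> replay(const std::vector<TexCmd>& cmds, int w, int h) {
    std::vector<float> tex(static_cast<std::size_t>(w) * h, 0.0f);
    for (const TexCmd& c : cmds) {
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                float& p = tex[static_cast<std::size_t>(y) * w + x];
                switch (c.op) {
                    case Op::Fill:     p = c.a; break;
                    // Cheap deterministic "noise" from a sine hash.
                    case Op::AddNoise: p += c.a * std::sin(12.9898f * x
                                               + 78.233f * y); break;
                    case Op::Brighten: p += c.a; break;
                }
            }
        }
    }
    return tex;
}
```

The stored "META texture" costs a few bytes per step regardless of output resolution; the same command list replays at 256x256 or 4096x4096.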
One of the most common techniques is to use META commands on already-digitized images of things, where the images are drawn by artists. This is also OK, but the more reliant you become on pure image DMA input patterns, the further you get from being able to run the SPU as a pure generation processor. Some of these source images can be huge. A lot of entry-level META texturing would be just this: composites, blends, and blurs on actual images. It is possible to do a kind of inverse modeling, in order to generate surfaces that approximate sets of digitized images, but it is complex. The future is very bright for this kind of thing, but the software engineers are key, as well as good tools, and good mathematical models.

"SYNTH"
The SYNTH video game, which is 100% pure math art, uses a high degree of META process, but no actual "artist tools" or artistic input went into the graphical process outside of C++ OpenGL "script".
There is some real-time texturing going on in SYNTH: a per-pixel animation is being computed that is quite complicated, and has been sprawled over 2 threads in real time. Most of SYNTH's procedural texturing is done at load time and generation time. This can also be done well in high-memory situations in games, where loading takes place from CD and generation into memory. I estimate that the input parametrization for many geometric generation models, such as house models, building models, and faces, could easily be under 16 KB of input data, while generating huge multi-megabyte models and texture variations very rapidly; that is nothing compared to the time complexity of some of the "per pixel animation" being computed in SYNTH. It would be much faster, with near photo-realism, and would be well suited for AAA games. It sounds a bit weird to parametrize a "house", but it can be done, and
done well. The per-pixel animation in SYNTH is 512x512, and each pixel is subject to around 20-30 calls to sin, sqrt, and other nasties. It reaches near-fractal complexity without the iteration depth: instead of approximately O(N^3), it's O(N^2) with a HUGE constant. The animation produces over 2 GB of uncompressed texture data, but since it's real time, almost none of it is ever resident. I don't recommend per-pixel animation much, but the creative possibilities are very high. To understand what I mean by per-pixel animation: fractal "zooms" are per-pixel animation. It simply means every pixel in an image is computed through a similar means. This is typically much slower than using texture-fragment based generation, colorization, fills, and lighting equations. I was asked why I did not do this in a pixel shader, and the answer was easy: this texture is computed "a frame behind", and it is used a very large number of times, with many other "tricks" happening on it, including pixel shaders.
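The shape of a per-pixel animation loop, sin/sqrt per pixel, evaluated fresh each frame from (x, y, t) (the formula below is my own toy, not SYNTH's):

```cpp
#include <cmath>
#include <vector>

// Compute one whole frame: every pixel goes through the same little
// formula. Cost is O(N^2) in resolution, with a large constant from
// the transcendental calls, exactly the "HUGE C" flavor of cost.
std::vector<float> animate_frame(int size, float t) {
    std::vector<float> frame(static_cast<std::size_t>(size) * size);
    for (int y = 0; y < size; ++y) {
        for (int x = 0; x < size; ++x) {
            float fx = static_cast<float>(x) / size;
            float fy = static_cast<float>(y) / size;
            float r = std::sqrt(fx * fx + fy * fy);   // radial term
            float v = std::sin(10.0f * r + t)         // ripples moving in t
                    + 0.5f * std::sin(25.0f * fx * fy + 2.0f * t);
            frame[static_cast<std::size_t>(y) * size + x] = v;
        }
    }
    return frame;
}
```

At 512x512 and 60 fps this regenerates the full texture every frame, which is how an animation "produces" gigabytes of texture data over a session without any of it needing to exist at once.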
Shown below is an example of some LOAD TIME procedural generation. I call it the PHANTOM SURFACE, for "creative reasons"; the math is ultra complex, but it is, alas, just meant to "look cool".
You can check out the SYNTH video game in the FILES section of this IBM blog, or at ModDB.
This is a picture of "the phantom surface". It was generated at load time, and takes approximately 5-10 seconds to generate a 1024x1024 texture. The picture shows the phantom surface with a symmetry flip done down the vertical axis. It's one of the evilest-looking things I've ever seen, and is generated in blurred black and white. It consists of many calls to sin and sqrt and some high-dimensional projections. It was not planned as a "demon face" generator made from splines or anything; it was a combination/extraction process, where I noticed the "phantom surface" first on a 3D hillside, and decided to "project it down" into a flat image to be reused.
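The symmetry flip itself is the simple part; a hedged sketch (my own code, not SYNTH's) of mirroring a generated image down the vertical axis:

```cpp
#include <cstddef>
#include <vector>

// Mirror the left half of a square luminance image onto the right
// half, producing the kind of bilateral symmetry that turns abstract
// noise into "faces".
void mirror_vertical_axis(std::vector<float>& img, int size) {
    for (int y = 0; y < size; ++y)
        for (int x = 0; x < size / 2; ++x)
            img[static_cast<std::size_t>(y) * size + (size - 1 - x)] =
                img[static_cast<std::size_t>(y) * size + x];
}
```

Run this over any generated half and the result is left-right symmetric by construction, which is a big part of why a procedural surface can read as a face.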
(MORE TO COME!!! STAY TUNED)... or go visit the "audio synthesis" article, which is the same as this one, but for AUDIO.