Realistic Computer Graphics

There’s a really good interview with Tim Sweeney (founder of Epic MegaGames, the company behind Unreal, and, along with John Carmack, one of the gods of game engine programming and realtime 3D graphics) on Gamasutra (I found the link thanks to Reddit).

We’re only about a factor of a thousand off from achieving all that in real-time without sacrifices. So we’ll certainly see that happen in our lifetimes; it’s just a result of Moore’s Law. Probably 10-15 years for that stuff, which isn’t far at all. Which is scary — we’ll be able to saturate our visual systems with realistic graphics at that point.

But there’s another problem in graphics that’s not as easily solvable. It’s anything that requires simulating human intelligence or behavior: animation, character movement, interaction with characters, and conversations with characters. They’re really cheesy in games now.

USA Today managed to report this as “Epic Games founder: Realistic graphics in video games 10-15 years away”. And of course they quoted only the first paragraph.

Back of Envelope

Currently, it takes around 10 hours to render a typical moderate-resolution image with an unbiased renderer like LuxRender. That’s 36,000 seconds for one frame, at less than a typical game’s screen resolution. Assuming 50fps would be acceptable, that’s a factor of 1,800,000. It’s probably not unreasonable to assume that game engine coders will find amazing optimizations and shortcuts when implementing this, that it will benefit enormously from parallelism (it already does), and that it will probably run on future GPUs.
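
A quick sanity check on that factor, as a sketch (the 10-hour figure is the rough render time quoted above, not a benchmark):

    # Back-of-envelope: how far is offline unbiased rendering from real-time?
    render_time_s = 10 * 60 * 60       # ~10 hours for one unbiased frame (rough figure)
    target_fps = 50                    # assumed "acceptable" frame rate
    frame_budget_s = 1.0 / target_fps  # 0.02 seconds per frame

    speedup_needed = render_time_s / frame_budget_s
    print(speedup_needed)              # 1,800,000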

So, if we assume the advantage of GPUs over CPUs gives us roughly a 100x improvement and optimization gives us another 100x, then Sweeney’s off-the-cuff remark looks pretty darn accurate. In other words, if we had 100x as many processors, running 100x-faster code, on hardware 100x faster than today’s — that last factor is about ten years of Moore’s Law — we could do real-time unbiased rendering at 50fps at currently acceptable resolutions.
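
Here’s the same decomposition spelled out, assuming the 100x figures above are anywhere near right; whatever is left over tells us how many Moore’s Law doublings we’d still need:

    import math

    speedup_needed = 1800000   # from the calculation above
    gpu_over_cpu = 100         # assumed GPU-over-CPU advantage
    optimization = 100         # assumed engine-level optimizations and shortcuts

    # Whatever remains has to come from faster hardware.
    hardware_factor = speedup_needed / (gpu_over_cpu * optimization)  # 180

    # Moore's Law as one doubling every 18-24 months:
    doublings = math.log2(hardware_factor)     # ~7.5
    print(doublings * 1.5, doublings * 2.0)    # roughly 11 to 15 years

Which, if Moore’s Law actually held, would land right in Sweeney’s 10-15 year window.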

But… Moore’s Law is Dead, Get Over It

But it seems to me that Moore’s Law hasn’t been delivering the goods lately, and there’s no real reason to expect it to come back. Today’s Mac Pro is roughly 30% faster than its predecessor, which in turn ran about 50% faster than its predecessor — and that’s on tasks that derive maximum benefit from parallelism. That’s a 1.95x improvement in processing capability in about three years, almost all of it from increasing the number of CPU cores; Moore’s Law would predict a 4x improvement in that time. And bear in mind that the Mac Pro’s huge speed bump over the G5 was largely a consequence of the G5 having hit a speed plateau for the two years preceding it, so that’s close to five years of Moore’s Law having fallen by the wayside. We’ve essentially picked the low-hanging fruit from clock speed, and now we’re running out of low-hanging fruit from parallelism.
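
The same arithmetic applied to those Mac Pro numbers, treating the 30% and 50% figures as rough generation-over-generation gains rather than measured benchmarks:

    import math

    observed = 1.3 * 1.5              # ~30% then ~50% per generation -> ~1.95x
    years = 3.0

    predicted = 2 ** (years / 1.5)    # one doubling every 18 months -> 4x

    # Doubling time implied by what actually shipped:
    implied_doubling_years = years / math.log2(observed)
    print(round(observed, 2), predicted, round(implied_doubling_years, 1))  # 1.95, 4.0, ~3.1 years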

And it’s not like GPUs — which are massively parallel — have been doing especially well lately either. The nVidia 8800 is well over two years old now, and when the 9800 came out it wasn’t considered much of an improvement over its predecessor. And here’s a quote from the top hit when I searched Google for benchmarks of the 8800, 9800, and 4870 (the “new hotness” from ATI):

I recently traded my 9800gtx+ OC for the HD 4870 today and i was told i would be amazed at the diffrence. so i installed everything and updated the drivers and wouldnt u know it, no diffrence from the nvidia card.

And then there’s expectations management

If we (reasonably) assume that “acceptable” resolution in ten or fifteen years will be 4000×2000 (1680×1050 today is like 1024×768 less than ten years ago, and today’s digital cameras shoot around 12 megapixels) and that an acceptable framerate will be 100fps, then that’s another 8x or so of processing power. If we also assume that scene complexity will grow (we need physics-driven blades of grass, etc. — most unbiased renders tend to be of fairly sterile and/or static scenes), that’s another who-knows-how-much.
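
Putting rough numbers on that, under those assumed targets:

    # Assumed future targets vs. a ballpark current baseline
    future_pixels = 4000 * 2000       # 8.0 megapixels
    today_pixels = 1680 * 1050        # ~1.76 megapixels
    future_fps, today_fps = 100, 50

    extra = (future_pixels / today_pixels) * (future_fps / today_fps)
    print(round(extra, 1))            # ~9.1x -- call it "another 8x", give or take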

Conclusion

I suspect Sweeney wasn’t thinking of unbiased rendering in his “factor of a thousand”, though. He was probably thinking of raytracing with stochastic sampling, which is the “good enough” rendering used in most production pipelines today. But just as today’s game engines offer scanline-quality rendering (which is what production pipelines were using ten years ago), when they’re doing realtime raytracing in 2020, movies will be using unbiased rendering pipelines and SIGGRAPH papers will be describing something we haven’t even thought of yet to handle problems we haven’t gotten to yet.

And — as Sweeney correctly pointed out — we still won’t be able to model or animate humans realistically enough to fool ourselves.