Visual splendor with no precomputation.
Special thanks to Infinite Realities for the Head Scan model used to demonstrate Duality's capabilities.
More information will be posted as it becomes available. Last update was pushed on August 8th, 2012.
Duality Engine, what is it?
A software framework and its associated tools, enabling game developers to create high-quality games with a frightening amount of detail. It has been in development for several years, both as the backend to our own unnamed game project and as a personal quest of our CTO, Domagoj Pandža, to implement the most precise realtime global illumination renderer governed by physical laws rather than artistic guesswork. And we truly believe that the rendering of one frame must take less than 16.67 ms. No compromise. Anything else destroys immersion.
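Where that 16.67 ms figure comes from is simple arithmetic: one second divided by the display's refresh rate. A minimal sketch (the helper name is ours, not part of Duality):

```cpp
#include <cassert>

// Per-frame time budget in milliseconds for a given refresh rate.
// At 60 Hz: 1000 ms / 60 = 16.67 ms; at 30 Hz: 33.33 ms.
constexpr double frame_budget_ms(double refresh_hz) {
    return 1000.0 / refresh_hz;
}
```

Any work that pushes a frame past that budget causes a missed vsync and a visible stutter.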
Written entirely in C++11 (formerly C++0x), the Duality Engine takes full advantage of the many improvements to C++ that were officially standardized in 2011. Why? To improve performance and to allow for a more streamlined workflow that makes writing low-level code a little less painful, which in turn creates room to grow beyond the original design. We also wanted to evangelize the benefits of modernizing the process.
What is "realtime"? And why is 60 Hz important?
Realtime, in essence, describes anything that updates its data over a very short interval, offering the user a quick response to their actions. The notion is common in science and engineering, but it also applies to games, because we are interacting with virtual worlds: a user performs an action and the world responds in turn. If that response were to slow down, the player's satisfaction and sense of reality would fade very quickly. We exploit our eyes' inability to discern a sequence of closely packed, slightly differing images, which gives the illusion of animation.
The smaller the interval between images, the subtler the transition between them, giving a much more natural illusion of motion without hard jumps. In the past, TVs displayed images by interlacing, combining temporally offset fields: first laying out the odd lines and then filling in the even ones between them. The temporal offset between two fields conveyed much more believable motion by exploiting another property of the human ocular system, the fact that light arriving on the retina triggers a response that takes some time to wear off. Not only did this reduce bandwidth, it produced the effect of a much higher framerate, effectively doubling it.
For similar reasons, console games ran (and unfortunately still run) at 30 frames per second. However, what most tend to ignore is that most displays today are high-resolution progressive-scan panels with refresh rates between 50 and 60 Hz. In effect, such games drive those displays with a much larger interval between frames, giving players a very irritating experience because the sluggish responsiveness is painfully visible. Even worse, to prevent high refresh rates from forcing keyframed animations recorded at 30 FPS to execute twice as fast, some games actually slow down gametime, outputting interpolated animation doubled in length to accommodate a 60 Hz display, effectively reducing gameplay to badly executed bullet-time action.
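The sane alternative to slowing down gametime is to resample the 30 FPS keyframes at whatever time the 60 Hz display asks for. A minimal sketch of that idea, with linear interpolation between keyframes (the function and its signature are our illustration, not Duality's animation API):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Sample a keyframed channel authored at 30 FPS at an arbitrary time, so a
// 60 Hz display gets a proper in-between pose instead of a duplicated frame
// or a slowed-down animation.
double sample_channel(const std::vector<double>& keys30, double t_seconds) {
    const double frame = t_seconds * 30.0;            // time -> keyframe space
    const std::size_t i = static_cast<std::size_t>(frame);
    if (i + 1 >= keys30.size()) return keys30.back(); // clamp at the last key
    const double a = frame - static_cast<double>(i);  // blend factor in [0, 1)
    return keys30[i] * (1.0 - a) + keys30[i + 1] * a; // linear interpolation
}
```

Sampling a channel `{0, 1, 2}` at t = 1/60 s lands exactly between the first two keys and yields 0.5, an in-between value no 30 FPS playback would ever produce.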
Here at Motion, we are obsessed with responsiveness, fast execution and interactive gameplay. And that can only be achieved by matching the framerate with the golden 60 Hz refresh rate and optimizing our engine to give all players a chance to experience its beauty at high framerates. Other engines available on the market that offer "realtime realistic rendering" are usually just slideshows which are more frustrating than waiting for a raytracer to output an image.
On the bright side, some teams appreciate motion and interactivity as much as we do. A positive example is DICE's Frostbite 2, which implements a radiosity engine operating on procedurally simplified geometry to calculate indirect lighting in realtime, with beautiful and scalable performance across many different configurations. If you can't deliver a frame in 16.67 ms on an i7 machine with 16 GB of RAM and an HD6990, your code is bad and you should feel bad.
We can all agree that merely looking good is not a feature if you can't immerse yourself in it. If anything, beautiful scenes with lousy performance just add insult to injury. Since we're all gamers, we understand the joy of immersing oneself in a game universe one falls in love with. And hopefully, you'll enjoy creating games on top of Duality as much as playing them.
64-bit only? DirectX® 11 only? Is that wise?
64-bit hardware and operating systems have been around for years, and DirectX® 11 is the next step in the evolution of the graphics pipeline. We take issue with stagnation and intend to push forward. The 64-bit hardware and OS requirements give us a huge advantage in available address space, allow for fantastic precision and can handle textures of extreme detail.
The best case for 32-bit platforms is 4 GB of addressable space, and that only after Microsoft resolved the MMIO mapping issue which, in the past, reduced available memory to 2 GB. 64-bit, by contrast, gives us an unsigned range from 0 to 18,446,744,073,709,551,615. That is stupendously large: so large, in fact, that current hardware implements fewer address bits because nobody can populate the full 64-bit range. Even 52 bits is overkill.
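The numbers behind that claim are easy to check in a few constants (the constant names are ours, purely for illustration):

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

// Full unsigned 64-bit range: 18,446,744,073,709,551,615 (2^64 - 1).
constexpr std::uint64_t kFull64 = std::numeric_limits<std::uint64_t>::max();

// A 52-bit address space: 2^52 bytes = 4 PiB, still vastly more than any
// machine ships with today.
constexpr std::uint64_t kAddr52 = 1ULL << 52;

// The 32-bit ceiling: 2^32 bytes = 4 GiB of addressable space.
constexpr std::uint64_t kAddr32 = 1ULL << 32;
```

Even the "reduced" 52-bit space is 2^20 (over a million) times larger than everything a 32-bit process can address.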
DirectX® 11 has brought us tessellation. This allows us to implement huge view ranges, since it is basically trivial to build an adaptive LoD system that modulates triangle density as a function of distance from the camera. Furthermore, with graphics cards that offer 4 GB (or more) of video memory, we can save on geometry storage by amplifying it on the fly, creating intricate detail with displacement maps of extreme resolution derived from the high-detail models. Terrain can now be procedurally amplified to generate beautiful vistas. Characters can now look genuinely organic. And to shave off even more overhead, we can animate only the base mesh and let the GPU restore the tessellated detail. We live in exciting times. And this is just the tip of the iceberg.
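One way such an adaptive LoD heuristic can look is a simple mapping from camera distance to a tessellation factor, clamped to the [1, 64] range a D3D11 hull shader accepts. Every name and threshold below is an illustrative assumption, not Duality's actual scheme:

```cpp
#include <algorithm>
#include <cassert>

// Map camera distance to a tessellation factor: full detail (64) up close,
// falling off linearly to a single, unamplified patch (1) past far_d.
float tess_factor(float distance, float near_d = 5.0f, float far_d = 200.0f) {
    const float t = std::min(
        std::max((distance - near_d) / (far_d - near_d), 0.0f), 1.0f);
    return 64.0f + t * (1.0f - 64.0f);   // lerp from 64 (near) to 1 (far)
}
```

A constant hull shader would evaluate something like this per patch, so triangle density follows the camera automatically.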
From a business feasibility point of view, you surely understand that typical development cycles run no less than two years, which puts the earliest release date at the end of 2014 if you take up Duality soon. By then, a large majority of gamers will have upgraded to the specified hardware, not to mention that hardware vendors will surely cut prices as time goes on.
When will the first public demo be available?
When it's ready. To be a bit more specific, it should be demonstrated in the context of our unnamed game project. For now, we're only offering previews to experienced software engineers working for development studios interested in licensing the Duality Engine. The reason for this should be obvious: technical previews can only be appreciated by those who understand the complexities of developing a sophisticated game engine. Even now, it's gorgeous. But that's still not the first impression we're looking for.
What is the significance of Miri64 within Duality?
Miri64 is the 64-bit multithreaded rendering framework within Duality. It is tasked with the most important part of a great game: the actual rendering to the screen. It is profoundly sophisticated, employing stochastic methods to evaluate the complex rendering equation and implementing tile-based deferred rendering to separate geometry from lighting, which vastly increases performance by killing unnecessary overdraw. It was developed with data propagation in mind, which essentially means it performs severe optimizations, linearizing groups of data into adjacent memory blocks that can be stored efficiently in the cache. This reduces the painful latency of accessing main memory caused by the unnecessary memory fragmentation that bad data management can produce.
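The "linearized groups of data" idea is commonly realized as structure-of-arrays storage: each attribute lives in its own contiguous array, so a pass that only touches one attribute streams through tight cache lines without dragging in unrelated fields. A minimal sketch of that layout (our illustration, not Miri64's real data structures):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Structure-of-arrays particle store: positions and lifetimes are kept in
// separate contiguous arrays instead of one array of fat structs.
struct ParticlesSoA {
    std::vector<float> x, y, z;   // positions, each component contiguous
    std::vector<float> life;      // lifetimes, touched by a different pass

    void push(float px, float py, float pz, float l) {
        x.push_back(px); y.push_back(py); z.push_back(pz);
        life.push_back(l);
    }

    // A position-only pass reads three tight arrays and nothing else,
    // so no cache line is wasted on lifetime data it never uses.
    void move_up(float dy) {
        for (std::size_t i = 0; i < y.size(); ++i) y[i] += dy;
    }
};
```

With an array-of-structs layout, the same loop would pull every particle's lifetime through the cache just to update its height.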
It is exclusively 64-bit and uses the advanced vector extensions of modern (Sandy Bridge and newer) instruction sets, such as AVX, to reduce the CPU overhead of the heavy linear algebra needed to let the GPU properly transform the geometry currently in the scene. It employs a custom payload-sensitive scene organization scheme that makes intelligent guesses about the distribution of geometry in the scene, reducing unnecessary visibility-determination computations.
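The cheapest visibility-determination win is rejecting whole objects before any per-vertex work happens, for example by testing a bounding sphere against a culling plane. A coarse scalar sketch of that test follows; a real frustum check runs it against six planes (and Miri64 would batch such tests with AVX), and all names here are illustrative only:

```cpp
#include <cassert>

struct Plane  { float nx, ny, nz, d; };  // normalized: n . p + d = 0
struct Sphere { float cx, cy, cz, r; };  // bounding sphere of some geometry

// Signed distance from the sphere's center to the plane; if the whole
// sphere lies on the negative side, the object can be culled outright.
bool potentially_visible(const Sphere& s, const Plane& p) {
    const float dist = p.nx * s.cx + p.ny * s.cy + p.nz * s.cz + p.d;
    return dist >= -s.r;   // fully behind the plane => not visible
}
```

Each sphere rejected this way saves the full transform cost of every vertex it bounds, which is exactly where a good scene organization scheme earns its keep.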
It is the most interesting part of Duality, the most visceral and tangible. Everything else, in essence, serves to produce scenes for Miri64 to crunch.
Whose work has influenced the shaping of Duality?
[Domagoj Pandža, CTO]: As men of science, we firmly believe in the notion of standing on the shoulders of giants: heroes, like us, who have committed their lives to developing new technologies and expanding our understanding of the world around us. Many people have contributed insights into representing the real world with pure mathematics...
From the famous pioneer who derived the ubiquitous rendering equation, Jim Kajiya, to more recent researchers like Kaplanyan, Dachsbacher, Jimenez, Myers, Enderton, Bavoil and many more. In fairness, I will also publish papers on behalf of Motion Digital covering my insights into global illumination systems, which will hopefully be useful to those interested in computer graphics.