Monday, 5 July 2010

Tech Report: The 3D Behind Crysis 2

During E3, Crytek boasted about its incredible method of displaying games in 3D with minimal performance hit. Previously, many developers had spoken out about how difficult it would be to get high-end, fully featured titles running in this format on consoles, citing performance issues, and especially the limitations caused by having to render every frame twice (once for each eye) for the effect to be possible. However, the ambitious developer of the massively anticipated Crysis 2 has managed to get the game up and running with far less impact on performance.

At E3 Crytek demonstrated a version of Crysis 2 running in full stereoscopic 3D, claiming that the effect cost them only an extra 1.5% of processing power over rendering the game in traditional 2D. Mightily impressive, you might think, and that was exactly my reaction after reading about their presentation. How could it be that a resource-heavy game like Crysis was running in full 3D – and in 720p for that matter – without the kind of noticeable impact on performance that the likes of Killzone 3 and MotorStorm seem to be suffering from? The answer lies in how the effect is created compared to those games.


You see, for Crysis 2 the developers are using a cheat of sorts, a method of 2D to 3D conversion, much like the process that goes on behind the scenes when an old 2D movie is processed so that it displays in 3D, complete with a reasonable level of natural depth. Although Crytek themselves haven't shed any light on the process, it is plainly apparent what options they might be using, and all signs point to a pixel-shift-plus-depth-buffer approach to creating the 3D effect.

This type of 2D displacement tech is very similar to that used in film production, and the process works in much the same way in videogames. The only difference is that you are relying on a mathematical algorithm to fill in any gaps left behind by the pixel shift, and of course using the Z-buffer for depth information. All work in 2D displacement is done at the pixel level; nothing geometry-wise is processed at all, it is a purely post-process effect.
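To put a rough shape on that, the horizontal shift applied to each pixel is typically just a simple function of its Z-buffer value, something along these lines (an illustrative relation, not anything Crytek have confirmed):

    shift = max_separation * (convergence_depth - z)  # per pixel

Pixels sitting on the convergence plane don't move at all and so appear at screen depth, while pixels nearer or further away shift in opposite directions for the two eyes.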

Here's how it works:

Starting off, the Z-buffer gives you the depth information, from a single point of view (POV), required to create the 3D scene. The overall viewing distance for the eyes is then calculated, creating an ideal viewing position from which to determine how far away certain objects are from the screen, and so on. Finally, the pixels are shifted left and right in order to create the images for each eye, using the above Z-buffer info and the calculated viewing distance.
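As a concrete illustration of those steps, here's what such a pipeline might look like. This is a minimal sketch in Python/NumPy, not Crytek's actual implementation, and the eye separation and convergence values are made-up parameters:

    import numpy as np

    def reproject(color, depth, eye_sep=8.0, convergence=0.5):
        # color: (H, W, 3) rendered frame; depth: (H, W) Z-buffer,
        # normalised so 0.0 is the near plane and 1.0 the far plane.
        # eye_sep (max shift in pixels) and convergence (depth of the
        # screen plane) are illustrative values, not Crytek's.
        h, w = depth.shape
        left, right = np.zeros_like(color), np.zeros_like(color)
        hit_l = np.zeros((h, w), dtype=bool)  # pixels actually written
        hit_r = np.zeros((h, w), dtype=bool)
        # Pixels nearer than the convergence plane shift one way for
        # each eye, pixels beyond it the other way.
        disparity = (eye_sep * (convergence - depth)).round().astype(int)
        xs = np.arange(w)
        for y in range(h):
            lx = np.clip(xs + disparity[y], 0, w - 1)
            rx = np.clip(xs - disparity[y], 0, w - 1)
            left[y, lx] = color[y]   # where shifts collide, the later
            right[y, rx] = color[y]  # write wins; a real version would
            hit_l[y, lx] = True      # resolve the conflict by depth
            hit_r[y, rx] = True
        return left, right, hit_l, hit_r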

You now have a rough approximation of two separate frames (one for each eye) that allow the 3D effect to be displayed. Essentially, rendering is done for one viewpoint, and then two different views are created by shifting pixels left and right. However, you may also have a few holes in the image, arising from parts of the scene that become visible from one eye's viewpoint but were hidden in the original frame before the left/right pixel shift occurred.
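The two views then need to be handed to the display in whatever stereo format it expects. Continuing the sketch above (where 'frame' and 'zbuffer' stand in for a rendered image and its depth buffer), a half-width side-by-side packing, one common transport format of the era, might look like this:

    left, right, hit_l, hit_r = reproject(frame, zbuffer)
    # Squeeze each view to half width and pack them side by side.
    side_by_side = np.concatenate([left[:, ::2], right[:, ::2]], axis=1)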

As with the post-process conversion of 2D film stock into a 3D print, these holes need to be filled in with information that is no longer there. But unlike that conversion process – in which a post-production artist manually creates new details on a frame-by-frame basis – for videogame rendering it has to be done in real time by a cleverly designed algorithm instead.

This of course creates problems, seeing as it isn't easy for a mathematical routine to fill in the gaps left behind in the image without some side effects. Just look at how the upscaling process can leave so many unwanted artefacts if not done carefully, and with a high degree of accuracy. The same care is needed here, with the developer having to create something that accurately determines what information has been cut out, and what needs to replace it.
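The crudest possible stand-in for such a routine, continuing the earlier sketch, would be to stretch the nearest valid pixel on the same scanline across each hole. Whatever Crytek actually do is bound to be far smarter, but it shows the shape of the problem:

    def fill_holes(view, hit):
        # Patch disocclusion holes by stretching the last valid pixel
        # along the scanline. A hole in the very first column simply
        # stays black in this sketch.
        out = view.copy()
        h, w = hit.shape
        for y in range(h):
            for x in range(1, w):
                if not hit[y, x]:
                    out[y, x] = out[y, x - 1]
        return out

Stretching pixels across edges like this is exactly the kind of shortcut that produces artefacts around object silhouettes, which is why the quality of the real algorithm matters so much.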

In Crytek's case, their method of converting a 2D image into a 3D one in real time is particularly successful, with little in the way of apparent side effects according to the press who have seen it running. The amount of depth perception is said to be lower than in the likes of Guerrilla Games' Killzone 3 – which actually renders individual frames for each eye – although it still appears fairly natural, with only a slight hint of the cardboard cut-out look that plagues most 2D to 3D conversions.


Impressively, Crytek are also using 3D in a way no other developer seems to be at the moment. Rather than having images (such as explosions and particles) jump out at you during play, they are instead using the effect to create a natural depth which extends into the television set, acting as an extension of your natural peripheral vision.
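In terms of the earlier sketch, that design choice amounts to placing the convergence plane at or in front of the nearest object, so every pixel shifts 'into' the screen rather than popping out of it:

    # Convergence at the near plane: all depth sits behind the screen
    # (values illustrative as before, not Crytek's).
    left, right, hit_l, hit_r = reproject(frame, zbuffer, convergence=0.0)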

So far the 3D tech behind Crysis 2 has certainly impressed, although it seems Crytek are not the only ones to be using it. Sony are also developing a similar process for first and third party usage in order to make 3D a little more achievable on the PS3, negating the heavy performance cost that comes with rendering the effect for real.

With the 3D race now officially on, it will be interesting to see how developers implement the effect in their games, especially with regard to either rendering in proper 3D with individual frames for each eye, or using the 2D displacement tech talked about on this page today.

Crytek have shown that it is possible to convincingly include support for the format without having to completely rewrite their engine, whilst Guerrilla Games have showcased the benefits of trading away some image quality in order to create an unparalleled natural depth that can only be achieved by actually rendering in 3D.

What is apparent is that these two different approaches to including 3D in the latest software releases give developers on both sides of the coin clear options when it comes to adopting the format.
