
Tuesday, 16 November 2010

Tech Report: A Look At The L.A Noire Trailer

Originally intended as a PS3-exclusive release, L.A Noire is now heading to both PS3 and Xbox 360 late next year. With Red Dead Redemption slowly fading from gamers’ minds, and another GTA not on the cards for over a year, attention has begun to turn to Rockstar’s barely talked-about crime thriller, L.A Noire.

A few days ago a brand new trailer was released for the game. It appeared to be put together from in-engine cut-scenes, running in real-time and presenting audiences with a firm impression of what’s to come. Well, in terms of the characters and atmosphere at least – the gameplay, at this point, lurks away somewhere in Rockstar’s gritty 1930s- and 40s-inspired world, yet to be shown publicly.


Check out the HD trailer here

While the trailer hardly looks visually outstanding on first impressions – or on repeat viewing for that matter – one element does stand out above the rest: the game’s incredible facial animation system and some stellar motion capture work – the very reason we’re taking the time out to deliver this short technical look.

Convincing lip-syncing, backed up by decent voice acting, are two requirements for any title whose focus is on providing an intriguing and potentially gripping narrative. Without adequate motion capture and considerably polished facial responses, though, all of this goes to waste. Just look at the likes of Alan Wake or Heavy Rain – both titles lose some of their emotional impact due to poor voice acting, bad lip-syncing, or buggy motion capture work.

One or two of these elements may make for a compelling enough experience, but not an exceptional one. And it is exactly this which L.A Noire hopes to rectify.


Looking at the trailer, we can see that Rockstar have indeed taken the time not only to provide some rather excellent voice work, but also to meticulously craft some of the most sophisticated facial animation tech we’ve seen in any game so far. The way facial muscles move as characters speak, and the subtle changes in normal mapping as muscles expand and contract (around the lips and mouth), all go a long way towards adding a sense of believability to the proceedings – the feeling that you are watching actual characters coming alive on screen, and not simply polygon models.

From a technical point of view this singular element is by far the most advanced, though it is also the only one to really impress as a whole. The superb blending of normal maps offsets the potentially stiff-looking nature of pure geometry movement, allowing for smoother animation as a result. More detail is clearly shown in the way individual elements of the face react, with the normal maps working in tandem with the motion capture animation.


Outside of the solid motion capture work and stand-out facial animation system, the rest of the tech powering the game appears far less impressive – although the washed-out, reduced-contrast nature of the art direction seems to be the main cause of this. The style presented here in L.A Noire is faithful to the unique look of similarly themed movies of the 1930s and 40s; that is to say, contrast has been intentionally adjusted, resulting in a slightly washed-out look to the game in general.

However, we can also see that the skin shaders, and layered shader effects in general, aren’t terribly impressive. In fact they seem quite basic compared to more advanced implementations, such as those seen in the recent Call Of Duty: Black Ops. L.A Noire appears to be using only basic texturing, plus a normal map and colour map for its character faces, with noticeable levels of specular highlighting distinctly absent – the matte, almost shiny look is caused by the normal mapping reacting with the game's lighting model.

Oddly, there is evidence of screen-space ambient occlusion in these screens, though the effect does little to bring additional depth to the scene compared to most titles that use it. Usually SSAO is implemented to expand the impact shadowing has throughout a game, bringing more depth, and indeed detail, to the scene. In L.A Noire, however, it looks like the low level of contrast defeats that slightly. Like in the 360 version of Kane & Lynch 2, its effects go partially unnoticed, with only some artefacting revealing its presence.

Perhaps the SSAO implementation is being used to balance out the low-contrast nature of the game as a whole, delivering more shadowing depth where there would otherwise be even less. This would certainly explain why we aren’t seeing more three-dimensionality in the overall image composition. Or maybe the effect is simply being used subtly, to complement the look the artists are going for.


Moving on, the actual framebuffer itself looks to be native 720p, but contained within a 2.39:1 aspect ratio (1280x540), with black borders at the top and bottom of the screen. It is these borders which account for the 540-line vertical res you are seeing – actual gameplay will of course be presented full screen, so we expect a full 1280x720 image to be available for the duration.
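The letterbox arithmetic is easy to sanity-check. A rough sketch – note that the 2.39:1 figure is our assumption about the intended crop, and trailers are often mastered to slightly rounder numbers, which would explain the 540 lines observed rather than the exact 536:

```python
# Rough letterbox arithmetic: fit a 2.39:1 picture inside a 1280x720 frame.
# The crop ratio is an assumption, not a confirmed figure from Rockstar.
FRAME_W, FRAME_H = 1280, 720

def letterboxed_height(width, aspect):
    """Height of the active picture when a wide-aspect image is fitted to width."""
    return round(width / aspect)

active = letterboxed_height(FRAME_W, 2.39)  # ~536 active lines (540 observed)
bar = (FRAME_H - active) // 2               # black border at top and bottom
```

A rounder 540-line picture works out to roughly 2.37:1, close enough that either mastering choice fits what the grabs show.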

There’s no scaling of any kind going on, despite the blurry nature of the screens on this page. Instead, the loss in IQ is caused by video compression artefacts, which affect not just edges but pixels across the entire scene.


Outside of the obvious technical murmurings, there’s no indication of which platform the trailer’s montage was captured from. Originally L.A Noire was conceived as a PS3 exclusive, with a 360 release announced only months ago, so it is pretty likely that the footage is indeed taken from the Sony version.

Depending on when development shifted from being a single-platform affair to a full multi-platform project, we could be looking at a game leading on PS3. This would mean that parity between both formats should, in theory, be reasonably close, without noticeable cuts being made to the PS3 version in terms of resolution or even alpha buffers – which are likely to be used more sparingly in any case if the PS3 is leading. Uncharted 2, for example, displays relatively few heavy alpha-based scenarios in comparison to COD: Black Ops, and Killzone 2 and 3 both use lower-res buffers with additional blending for better integration.

Again, this is pure speculation at this point – the trailer could well be using the 360 build for all we know. And leading on PS3 can sometimes boost performance of the 360 version as well, with a greater emphasis on optimising code and an increased use of parallelisation delivering a better-looking game as a result.


Overall, L.A Noire is looking pretty interesting as a cinematic title, if not a technical showcase for high-quality motion capture work and impressive facial animation. While the rest of the graphical make-up, from the very little we’ve seen, looks distinctly underwhelming at this point, one can’t help but feel this was a stylistic choice more than anything else, in keeping with the world and imagery the developers are trying to create.

And in that respect L.A Noire looks to be quite intriguing. We’ve seen many games try, and indeed fail, to pull off real cinematic brilliance – Alan Wake and Heavy Rain to name but two – although Rockstar’s track record remains distinctly unblemished in this regard. Both Red Dead Redemption, and especially GTA IV before it, showcased the skill and command the company has in delivering such experiences (different team, but still), and with L.A Noire they certainly look like they could be doing the same again.

Time will tell, but we’ll definitely be keeping our eye on this one.

Thursday, 30 September 2010

Tech Report: Updated 3DS Software Analysis

Yesterday Nintendo announced the worldwide launch details of the 3DS, revealing that the machine would be released in Japan on 26th February, and sometime in March 2011 for both North America and Europe. The system will go on sale in Japan at 25,000 yen, roughly equating to 200 of our GBP, or 300 dollars for those folks over the pond.

But rather than repeating countless details that you can read pretty much everywhere else, we’ll instead be taking a tech-focused look at some of the direct-feed trailers of first-batch 3DS software released by Nintendo and select third parties.

Nintendo themselves displayed a multi-title show reel of games currently in development for the system, although heavy-hitters like Resident Evil Revelations and Dead Or Alive Dimensions managed to get their own individual trailers. Seeing these two titles in motion alone, along with Metal Gear Solid 3, I was left distinctly impressed with the kind of quality on offer from these early, first-gen experiences.

Clearly these titles are already beyond anything seen on the last generation of consoles. Whilst polygon counts may be lower – and in some regards noticeably so – the impressive quality and precision of the effects on offer more than make up for this. In fact, they prove that despite the console’s low paper specs, it can still deliver some graphically accomplished titles whilst touting all the benefits of stereo 3D along with them.

Taking a look at the individual titles themselves, we can see the various strengths and weaknesses of the system in their entirety.

Metal Gear Solid in particular seems to strike a perfect balance between low geometry and the use of advanced visual effects. Whilst the game’s poly counts are noticeably lower than the PS2 version’s, the level of detail on offer is also visibly superior. You can easily see in the trailer that most of the game is normal mapped, and not just the characters.


The environments in particular benefit hugely from this, in some cases looking smoother than in the original PS2 game. The flower scene particularly demonstrates the 3DS’s ability to throw around lots of normal-mapped geometry on screen, along with having enough power left over to handle all the physics calculations needed for so many moving objects.

MGS Trailer

Dead Or Alive seems to feature excellent use of per-pixel lighting and normal mapping, both effects looking far superior to anything seen on the original Xbox. Texture detail is reasonable, filtering is pretty good – maintaining relatively moderate levels of image quality – and the characters also seem to have dynamic shadow maps applied to them, which work well with the rest of the lighting solution. One downside, however, is the lack of any self-shadowing. This makes the game look slightly flat compared to SSFIV and RE Revelations, although seeing as it should be running at 60fps, this is a worthy compromise.

DOA Trailer

Our previous look

We’ve already taken a look at Resident Evil Revelations extensively here, and dare I say seeing it in motion is an incredible experience, comfortably showing off just what the 3DS can do. The quality of the normal mapping is superb, the per-pixel, HDR-infused lighting solution creates an incredible sense of depth, and the texturing is quite possibly the most detailed on the platform.



Obvious issues stand out with the game running in 3D mode, however. Texture shimmering and aliasing are apparent, as is edge aliasing due to the lack of any AA. These issues can be seen in the majority of PSP titles – a clear graphical downside of the Sony system – and in most of the 3DS games revealed so far.

RE: Revelations footage can be found in this line-up trailer

Super Street Fighter IV we’ve already covered here, so to be fair there’s very little we can add to our original report. The game’s downgraded use of assets from the PS3/360 versions makes the title look remarkably close to the high-end console game. Self-shadowing is evident, and is backed up by a slew of impressive shader effects.

SSFIV Trailer

Perhaps one title that didn’t impress as much as the others was Capcom’s Resident Evil: The Mercenaries 3D, a separate game from RE: Revelations. As you can see from the screenshots below, the game suffers from terrible aliasing, both from a lack of AA and from some pretty bad texture shimmering. This is worse than in RE: Revelations due to greater amounts of small geometry being present on screen, along with what looks like a lower level of texture filtering and poorer mip-mapping.

Detail levels are incredibly high though, and it is clear that despite the low image quality, the game looks considerably more impressive than RE4 on the iPhone.



A general trailer showing off a wide range of 3DS games in development can be found below:

Nintendo 3DS Games Line-Up Trailer

Judging from the various videos released yesterday, it is pretty apparent that in many ways the 3DS easily outclasses every last-gen system with regards to visual effects, and in fact pretty much everything seen on the iPhone so far. Epic Citadel may have higher-res texture maps and art assets, along with higher-precision normal mapping, but it is also lacking some, if not most, of the high-end per-pixel effects that so many top-tier 3DS titles are showing off.

Case in point: there is nothing running in real-time on the iPhone that I’ve seen matching RE: Revelations or MGS3 with half as many effects, whether that be an actual game, demo, or otherwise. Maybe that is simply because there is very little incentive to produce titles with such high-quality visuals on the platform. When most high-end titles sell for just £6.99, that is completely understandable, though it is precisely this difference which could make the 3DS stand out. That, and of course the fact that every game will be playable in 3D on the console with most of its visual integrity intact.

However, despite the polished visual mastery on offer with high-end 3DS titles, we can also see that the machine struggles to maintain overall texture fidelity in some cases, with various issues from poor mip-mapping to aliasing being present – very much like what we are seeing on the majority of PSP games. This is one downside that I perhaps didn’t expect to see so obviously, given the nature of the hardware.

When you look at how even early Dreamcast titles managed to feature correct mip-mapping with better filtering, and a lack of texture aliasing (even in titles featuring no AA), we can begin to see the compromises Nintendo has had to make in order to get a reasonably powerful, albeit low-spec, handheld with 3D support out the door at a relatively low cost.

Saying that, the 3DS isn’t the iPhone, nor is it the PSP2. Plus, battery life and a reasonable entry cost are far more important than having the absolute bleeding edge in visual fidelity at your disposal. It’s these factors, along with the desire to innovate, which usually separate Nintendo from the competition.

Even taking into account some of the graphical cut-backs cited, the trailers released thus far show that, for the most part, the 3DS handles top-end visuals extremely well for a handheld device in real-world terms, with a slew of shader effects balancing out the few negatives. Coupled with the fact that developers also have the option of enhancing their games with 2D-only extras (AA and per-object motion blur), what you have here is still a mightily impressive showing – just not quite as flawless as some may have hoped.

Friday, 24 September 2010

Tech Report: Nintendo 3DS Hardware Analysis

We first took a look at how powerful we thought the 3DS might be right here, and then again here, just after we learned which GPU would be powering the system. Now it’s time to do this once again, as a few days ago IGN appeared to accurately reveal the complete 3DS spec sheet, with information encompassing everything from CPU type and final GPU clock speed, to the amount of memory on board, both for system and graphics use. In short, the complete picture has been unravelled right before our eyes.


The full specs list for the 3DS is as follows: the machine is powered by two ARM11 CPUs running at 266MHz, and a DMP PICA200 GPU clocked at 133MHz. It features 4MB of VRAM dedicated to graphics (textures, framebuffer, effects? That’s not clear yet), 64MB of RAM, and 1.5GB of flash memory for storage.

Looking at the above, the 3DS initially appears to be rather underpowered. The GPU speed is incredibly low for a modern handheld device, and the ARM11 CPU was last featured in the original iPhone and iPod touch – certainly more than a fair bit weaker than the A4 powering the new iPhone 4. However, when looking closer at the hardware itself, the resolution it runs at, and just which graphical features will be running and how they will be implemented, it is also clear that the hardware isn’t quite as dead in the water as you might expect.

Current game demos like Resident Evil Revelations and Metal Gear Solid 3 both showcase the machine’s strong graphical capabilities despite the on-paper limitations. And it is also important to point out that, unlike the iPhone and other multimedia / general handheld devices, the 3DS isn’t likely to feature a performance-sapping OS, or a restrictive high-level API limiting what you can do graphically. Nope, it’s almost certain that with the 3DS developers will be able to code directly to the metal, ensuring they get every last drop of juice from what the hardware is capable of.



Taking into account the small screen size, and the small screen resolution itself (800x240), you find that the system’s overall performance is perfectly suited to this type of environment. There’s no point, for example, in rendering dozens of millions of textured, layered, and complexly shaded polygons per second on a small screen at such a low resolution – most of that will almost certainly go to waste. Instead, as we have said before, Nintendo seems to have taken a balanced, economical approach to their next-gen handheld hardware. And this looks to be the right choice. Cost/performance-wise it looks set to get the job done comfortably, and when looking at the individual make-up of the system’s internals we can see why.

The CPU for example, an ARM11 running at 266MHz, is unlikely to be doing any complex physics calculations or highly advanced AI routines – these aren’t really needed for small doses of on-the-go gaming – so it appears low spec, but entirely adequate for the task in hand. Of course, we can expect basic physics and the illusion of advanced AI from the chip – seeing as it is rated roughly in line with an Intel 486 CPU, scripted AI events and arcade-like physics are more than possible, and satisfactory.

Looking at what was achieved on the original iPhone – where developers were still hindered by Apple’s domineering software API – you can easily expect a substantial improvement when coding direct to the metal, or much closer to it, within a less restrictive development environment. Better collision routines, AI and so on: all of that is possible when considering the chip in the context of how the 3DS works in comparison.

The decision to downclock the GPU is a rather interesting one, not least because the standard PICA200 running at 200MHz is already very low spec by today’s standards – trailing way behind the iPhone 4’s SGX535 – but also because the downclock is unlikely to make the chip that much cheaper in itself. Instead, in line with our original assumptions, we suspect this was done in order to preserve battery life, whilst also providing a small but altogether beneficial decrease in overall system cost.

Even with the GPU downclocked to 133MHz, it still packs plenty of punch. The original PICA200 running at 200MHz can push a maximum of around 15.3 million polygons per second in a best-case scenario, although that is unlikely to be achievable in a real-world game environment (30 or 60fps with full effects etc).


In the 3DS, where the clock speed has been lowered to 133MHz, we can expect a further drop in performance. From what we can see with current game demos, the system’s peak polygon performance (on first-gen software at least) looks to be around the 3 to 6 million per-second mark – just over that of the PSP, and equal to the mid-range of what the PS2 can do. Of course this assumes optimised conditions, seeing as none of the software looks like it’s pushing anything more than four, maybe five million polys per second.
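If we naively scale the quoted 200MHz peak linearly with the clock (a simplifying assumption – real throughput depends on shading load, memory and the rest of the pipeline), the numbers line up with that 3 to 6 million estimate:

```python
# Back-of-envelope scaling of the PICA200's quoted peak polygon rate
# with the 3DS's lower clock. Linear scaling is an assumption; treat
# the result as a theoretical ceiling, not a real-world figure.
PEAK_POLYS_200MHZ = 15.3e6  # quoted best-case polys/sec at stock 200MHz

def scaled_peak(clock_mhz, base_clock=200, base_peak=PEAK_POLYS_200MHZ):
    return base_peak * clock_mhz / base_clock

peak_133 = scaled_peak(133)       # ~10.2 million polys/sec theoretical ceiling
per_frame_30fps = peak_133 / 30   # ~340k polys per frame at 30fps, best case
```

With real games typically achieving a third to a half of theoretical peaks once effects and logic are factored in, landing in the 3 to 6 million range is exactly what you’d expect.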

However, such low geometry counts seldom make a big difference these days, especially where advanced shaders and multiple texture layers are concerned. And this is where the 3DS shows us its trump card. With the addition of advanced fixed-function effects which simulate the use of programmable shaders, along with actual vertex shader capabilities, Nintendo’s handheld can do a lot more with less, polygon-wise, thus negating what could be seen as a lack of overall polygon-pushing power. And on such a small screen, huge amounts of geometry are always going to be less beneficial than a string of useful visual effects.


In terms of memory, the system is pretty much on par with the PSP. The 3DS has 64MB of main memory and 4MB of video RAM – basically the same as the PSP Slim & Lite (bar VRAM, of which the PSP S&L has 8MB). Initially, the inclusion of only 64MB of memory for the overall system may seem limiting. However, when you consider that the 3DS is a cart-based system, and that large amounts of data can be streamed in real-time from the format, 64MB appears to be a suitable amount given what’s expected of it.

The same could also be said of the system’s 4MB of video RAM. Although it does seem rather limiting at first - it’s not yet known whether it is simply being used as framebuffer memory, or to hold the entire rendered scene, complete with textures and fixed-function texture layers - it should be enough for most games given the overall make up of the system's architecture. Determining its impact on performance though, is somewhat guesswork at this point.

Saying that, assuming Nintendo has included an efficient texture compression system, 4MB should be more than enough to fit both the framebuffer and graphics data as an all-in-one solution. At the 800x240 resolution games are rendering at, you’re not really going to need much more space for decent image quality anyway.
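A rough check of how much of that 4MB the framebuffers alone would occupy supports this. The byte-per-pixel figures below are our assumptions (32-bit colour, a 32-bit depth/stencil buffer, double buffering), not confirmed hardware details:

```python
# Sketch: how much of the 4MB VRAM would the framebuffers alone consume?
# Bytes-per-pixel and buffering scheme are assumptions for illustration.
W, H = 800, 240        # top screen: two 400x240 eye views side by side
BPP_COLOUR = 4         # assumed 32-bit colour
BPP_DEPTH = 4          # assumed 32-bit depth/stencil

colour = W * H * BPP_COLOUR            # one colour buffer: ~750KB
depth = W * H * BPP_DEPTH              # shared depth buffer
double_buffered = 2 * colour + depth   # front + back + depth

mb = double_buffered / (1024 * 1024)   # ~2.2MB, leaving ~1.8MB for textures
```

Even under these fairly generous assumptions the buffers fit with room to spare, which is why compressed textures plus streaming from cart makes the 4MB figure workable.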

Obviously we don’t know the bandwidth numbers for the system’s graphics memory, although current game demos clearly demonstrate performance beyond that of the PSP, and of the PS2 with regards to visual effects – and that’s whilst pushing a lot more through the graphics pipeline. The standard PICA200 GPU running at 200MHz has a pixel fill-rate of around 800 million pixels per second (more than the GCN but less than the Xbox and Wii), so we can comfortably say that the downclocked 3DS revision features noticeably less than that. By how much, we can’t really say.
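To put that downclocked fill-rate in context, a quick sketch comparing it against what an 800x240 screen actually consumes at 60fps. Again, linear scaling with the clock is an assumption rather than a measured figure:

```python
# Scaling the quoted 800 Mpixel/s fill-rate with the clock, then comparing
# it to what a 60fps game at 800x240 draws per second. Linear scaling is
# an assumption made for illustration.
FILL_200MHZ = 800e6                    # quoted pixels/sec at 200MHz

fill_133 = FILL_200MHZ * 133 / 200     # ~532 Mpixels/sec after downclock
screen_pixels_60fps = 800 * 240 * 60   # ~11.5 Mpixels/sec actually on screen

overdraw_headroom = fill_133 / screen_pixels_60fps  # ~46x
```

Even halved, the fill-rate leaves dozens of passes' worth of headroom per frame at this resolution – exactly the budget those layered fixed-function texture effects need.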

Surprisingly, when looking at the raw numbers of the 3DS’s specifications, you can see that the machine isn’t all that much more powerful than Sony’s PSP, with the amount of memory being the same, and geometry counts very similar, albeit a little closer to the lower mid-range of the PS2. What gives the 3DS its visual edge, it seems, is simply its GPU’s capacity for rendering loads of advanced fixed-function effects on screen in lieu of proper pixel shaders. Per-pixel lighting is supported, as is bump mapping, specular and diffuse reflections, refraction mapping, procedural texturing and soft shadowing.

All of these add serious clout to the final images the 3DS produces, and are exactly why the likes of Resident Evil Revelations and Metal Gear Solid 3 look so good – the former looking closer to current 360 and PS3 games than most titles on the original Xbox.

Lastly, the system also features 1.5GB of flash memory, used primarily for user-based storage. We can expect this space to be occupied by downloadable content, and the various music and media files the user has transferred onto the console. Interestingly, it appears that the system actually features a 2GB flash memory chip inside, leaving 512MB solely in the hands of the OS. It is likely that this will be used to upgrade the machine’s firmware further down the line, adding new functionality to the unit and who knows what else.
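The arithmetic behind that 512MB figure is simple enough, assuming the marketing gigabytes are binary (1GB = 1024MB):

```python
# OS reserve on the flash chip: 2GB total minus the 1.5GB exposed to the
# user. Binary-gigabyte units are an assumption about how the figures
# were quoted.
TOTAL_MB = 2 * 1024           # 2GB chip
USER_MB = int(1.5 * 1024)     # 1.5GB exposed for user storage

os_reserve = TOTAL_MB - USER_MB  # 512MB left for the OS and firmware
```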


With the final specifications of the 3DS revealed (minus the odd bit of info here and there), it is clear that the system is, at first glance, not blindingly more powerful than the PSP, just as it originally appeared. Much of what makes 3DS games so graphically impressive comes from clever implementation of layered texture effects, and some impressive texture compression. Obviously, details like total system bandwidth are still up in the air, although we can see that Nintendo's machine is working smarter, rather than harder.

However, this just might be enough. From what we’ve seen of the software, the machine has no problem overshadowing DC and PS2 games, even bettering some Wii and Xbox 1 titles, so the lacking nature of the machine’s raw specifications is certainly not the be-all and end-all of the story.

As with the N64, GCN, NDS, Wii, and pretty much every games console they’ve ever made, Nintendo have always been clever in selecting cost-effective but capable parts – ones which get the job done without needing as much raw grunt as the competition’s. And this is exactly the case with the 3DS. They could have gone with NVIDIA’s Tegra 2 solution (and evidence points to the fact that they originally were going to); however, for what is likely to be either cost or power efficiency reasons, they decided to switch to the DMP PICA200 chip instead.

The decision, however silly it might seem in the face of vastly superior smartphone tech and the rumoured PSP2, makes sense when you consider that the main draw of the system is its ability to deliver a solid 3D experience without the need for the user to wear glasses, and at what is likely to be a reasonable price. The fact that games for the system currently impress, despite paper limitations, is just another sign that the company has done the right thing, especially given the ever-increasing cost of having cutting-edge hardware in the home.

Balancing impressive graphics hardware and a low entry price with mass-market adoption is not usually an easy task. But Nintendo has shown time and time again that it definitely knows what it’s doing in this sector. And the 3DS looks like being another shining example of just that.

Saturday, 28 August 2010

Tech Report: Vanquish - The Resolution Game

Over the last couple of weeks, various direct-feed screenshots have surfaced of Platinum Games’ Vanquish, with each batch seemingly rendered at a different resolution to the last, and with varying amounts of anti-aliasing. Some of these screenshots came directly from the developer’s blog, looking like authentic framebuffer grabs from an actual 360 or PS3 console, whilst some appeared to be of almost bullshot-like quality without going the whole hog of supersampling.

The question is: what resolution is the game really running at, and which platform did the framebuffer grabs come from? IQGamer takes a brief look, a few days ahead of the game’s demo release, in an attempt to set the record straight.

Vanquish, the latest action game from the mind behind such hits as Resident Evil and Devil May Cry – the legendary Shinji Mikami – is being developed using a modified version of the same engine which powered Bayonetta on Xbox 360 and PS3, with the PS3 version cited as the lead platform this time around.

Like Bayonetta, Vanquish’s engine clearly has its eye on delivering lots of alpha-heavy particle effects and transparencies on screen during combat, which usually causes problems for the bandwidth-starved PS3, especially without further optimisations or multiple-resolution render targets for different graphical effects. Although compared to Bayonetta, Vanquish doesn’t seem anywhere near as alpha-heavy on first impressions, and the quality of the game’s effects does in fact look slightly higher than in Platinum Games’ other title, even if there is still evidence of lower-res buffers in action.

However, we’re not here to talk about the entire engine from a tech perspective today, instead focusing on determining platform and final rendering resolution. Saying that, the particle resolutions we observed in the released screens could well be achieved with a slightly greater saving in memory by reducing the overall framebuffer size somewhat. And this is exactly what appears to be happening when looking at the more genuine screen grabs released by the developer.

The first screenshots released of Vanquish were clearly supersampled bullshots. Shown below is an image of the game supposedly rendered in 720p, but with insanely high amounts of AA and some impressive post-processing effects rarely seen in such high quality at game level. It’s pretty obvious that this isn’t what the final game is going to look like. Not unless a PC version is in the bag and running behind the scenes.


This next screenshot looks a lot more authentic, and is actually rendered in 720p (1280x720) without any additional supersampling or downscaling of the image. However, it also looks incredibly clean, with very high levels of AA – at least 8x MSAA – meaning that additional AA could well have been applied to the final framebuffer image for promotional use whilst keeping the game’s natural rendering res intact. It’s also possible that this could be from a different build – the 360 game perhaps – although the level of AA here is clearly beyond the 4xMSAA possible on MS’s console.


The final screenshot is far more telling, and looks exactly as you’d expect one taken directly from the HDMI output of either console to look. Here you’ll see that there is no anti-aliasing of any kind – not even 2xMSAA or QAA – and that the image quality is noticeably below that of the 720p shot above, supposedly from a real framebuffer grab.

Resolution-wise, the below screen is being rendered at 1024x720 with no AA, meaning that it would clearly fit into the 360’s EDRAM without tiling, and it also appears to be something we’d expect from a standard multiplatform PS3 port – the slight drop in horizontal res and lack of AA giving the game away.
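The tiling claim is easy to verify with a little arithmetic. Assuming 32-bit colour and 32-bit depth buffers (a common setup, though not confirmed for this particular game), the whole frame has to fit inside the 360’s 10MB of EDRAM, multiplied up by any MSAA factor:

```python
# Does a given render target fit in the Xbox 360's 10MB EDRAM without
# predicated tiling? Buffer formats (32-bit colour, 32-bit depth) are
# assumptions for illustration.
EDRAM_BYTES = 10 * 1024 * 1024

def fits_in_edram(w, h, msaa=1, bytes_colour=4, bytes_depth=4):
    """True if colour + depth for the frame fit in EDRAM at the given MSAA."""
    return w * h * (bytes_colour + bytes_depth) * msaa <= EDRAM_BYTES

fits_no_aa = fits_in_edram(1024, 720)            # ~5.6MB -> fits, no tiling
fits_720p_2x = fits_in_edram(1280, 720, msaa=2)  # ~14MB -> needs tiling
```

So 1024x720 without AA sits comfortably in a single tile, whereas full 720p with even 2xMSAA would force a tiled renderer – one plausible motivation for this exact resolution choice.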


But which version is this screen from? And is it simply possible that Vanquish will have no AA and a lower horizontal resolution on both platforms?

Well, there are obviously no straight-up, one-hundred-percent answers at the moment, seeing as we don’t know for sure which version the screens are from. What we do know is that they were taken directly from the video output of one of the current-gen consoles, and that the visual composition of the image makes it likely we are indeed looking at the PS3 build.

Traditionally, when downgrading resolution developers usually only cut back on the horizontal res on PS3, and a mixture of both axes on 360. This is due to the fact that the PS3 has no built-in hardware scaler, other than the broken horizontal scaler found inside its RSX GPU, whereas the 360 has advanced scaling capabilities in Xenos.

The other issue is the lack of AA, and this comes down to memory bandwidth. If resolution is not the first thing to be cut, then anti-aliasing is. The 360, with its 10MB of EDRAM, usually has enough bandwidth to deliver at least 2xMSAA in 720p games (with occasional cuts in overall res where required), whereas the PS3 for the most part does not.

This means it is more than likely that the genuine screenshots released by the developer are from the PS3 version of the game. The fact that Platinum Games have stated throughout Vanquish’s development that the PS3 version is the lead build backs this up, as does the game’s appearance at various press showcases, where it was usually the PS3 build being demonstrated.

Of course, this is simply well-informed guesswork as to which version the screens are from. To know for certain we will have to wait until the demo goes live on Wednesday, September 1st, after which we should have our initial hands-on impressions with the title up on the site, and hopefully a proper tech analysis a few days later.

Until then we can say that Vanquish does appear to render at 1024x720 with no AA on at least one console, if not both, with the PS3 version the strong favourite as the source of the screens.

Wednesday, 11 August 2010

Tech Report: Kinect - The Latency Question


There has been much talk about the high latency surrounding Kinect, along with the heavy burden of the additional CPU usage needed to run the more complicated Kinect games. However, this doesn’t have to be the case: in a recent interview with CVG, Blitz Games’ chief technical officer Andrew Oliver revealed that the studio has developed a way around the limiting latency factor.

“There are various technologies involved. Some people are using a skeletal system, and it takes a little bit of time to calculate. It’s only a split second. We're actually using a different masking system, which can tighten things up. But this is all software-based, so where some people might see some little cracks, they're easily fixable by software. That is, the camera fundamentally works and gives you the input; game designers are running forward in a completely new area and learning this stuff. It's like any console. The first few games will look like nothing compared to second and third generation.”

He then elaborated on how much lag we can expect to see using their approach, and how clever coding can almost eliminate it.

“It depends on what technology you're using. I have seen a few games with a bit of lag, but that is the software choice of the creators; they've programmed it a certain way, and they'll come up with new techniques. We will tighten and tighten it. There doesn't need to be a lag. We can get it down to maybe two frames behind, which is pretty insignificant; you won't notice. We're just learning new tricks. Ours is pretty tight.”

His comments make for interesting reading. Although we have always known that software determines how the image is processed, we had no idea the system in place was so flexible that developers can choose which data they want to use from Kinect.

Ultimately, the way that Kinect works will always produce some lag, even if it is just a very small amount. There is no way around this. The device still needs to provide data to the Xbox 360 console in the form of a depth map, plus RGB camera image, whilst also performing basic set up routines with the sound sensor, before then transmitting it down the USB 2.0 cable to the machine, all of which results in a small amount of fixed lag.

In addition, more lag is added on top of this when the data gets processed by the 360 console. How much depends entirely on how the developer chooses to interpret, and in turn use, the data. They may ignore the RGB camera layer and rely solely on the depth map information, or bypass skeletal tracking altogether – saving processing time and lowering latency, but resulting in a more basic motion tracking system.

Indeed, what we now know is that it’s the developer who decides which parts of the system to take advantage of. In effect they can choose to use all, some, or none of the above tracking methods according to the kind of experience they are looking to create. This means they could produce a fairly basic game not too dissimilar to something you’d find on the EyeToy, but with minimal lag; or something which uses the full extent of the full-body tracking available when combining the depth map with the RGB camera image, resulting in a highly accurate and advanced experience, at the expense of noticeably more lag.

For their project, Blitz Games seem to be treading a line between the two. Although we can’t be sure how much complex data they are choosing to discard, we do know that to get overall latency down to one or two frames you’ll effectively need to scrap most of the depth map information and forgo advanced skeletal tracking – paring the image processing back to a bare minimum of what Kinect can do, whilst still trying to maintain some of its more trademark features.

Having a less laggy system is always preferable for any control method in gaming, whether that’s a standard control pad or a motion-tracking camera. However, in this case a clever solution to the issue might also negate some, or most, of the additional features that Kinect provides over other motion controllers.

You could argue: what is the point in getting the lag down to one or two frames when you have to cut back on many of the things which make Kinect special in the first place? If you’re not doing motion tracking, and are just taking advantage of the standard RGB camera, then why bother using Kinect at all – especially when a simpler solution such as the PS Eye could well be enough to handle the type of software you are trying to create?

So, assuming that Blitz Games aren’t using any kind of depth buffer at all for their title, then Ubisoft could easily convert the game over to the PS Eye with minimal issues. Effectively, if you’re only using a full-colour video stream to create image information for processing, then you can do the same with other basic camera systems, defeating the point of using Kinect.

However, it is likely that the depth buffer is being used in some way to determine specific object tracking within the final image, even if it isn’t as fully featured as the system used in Microsoft’s own titles. The depth buffer provides valuable information, allowing you to single out certain objects for tracking without having to isolate them from an entire full-colour image. Using the depth buffer for this purpose would also cut down on the amount of processing that needs to be done – another plus point for reducing latency with Kinect whilst still taking advantage of some of its advanced features.
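The depth-buffer shortcut described above can be sketched very simply: rather than running full image analysis on a colour frame, you threshold the depth map to pick out whatever sits in an assumed "player" range. The range values and toy data here are purely illustrative; a real Kinect pipeline is far more involved.

```python
# Minimal sketch of using a depth map to isolate a foreground player for
# tracking, instead of segmenting a full-colour image. Threshold values
# are hypothetical, chosen only to make the toy example work.

def segment_player(depth_map, near_mm=800, far_mm=2500):
    """Return a binary mask of pixels within the assumed player range."""
    return [[1 if near_mm <= d <= far_mm else 0 for d in row]
            for row in depth_map]

# Toy 3x4 depth map in millimetres: 0 = no reading, 4000 = back wall.
depth = [[4000, 1200, 1300, 4000],
         [4000, 1250, 1280, 4000],
         [   0, 1300, 1320, 4000]]

mask = segment_player(depth)
print(mask)  # middle columns flagged as the player; wall and dropouts ignored
```

The appeal is that this is per-pixel comparison work with no pattern recognition at all, which is why discarding the colour stream buys so much latency back.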

All this is based purely on assumptions about what Blitz Games are using in their custom software solution. Without knowing exactly which parts of Kinect they are using, and to what extent, it’s a fairly moot point at this stage. Although it does beg the question of whether it is actually possible to produce a nearly lag-free experience when using any kind of advanced body tracking with the device.

One thing we can say for sure is that titles featuring low latency are probably not using the skeletal tracking system, and that titles with a noticeable amount of lag are. It really is that simple – though getting the low-latency solution to work effectively is anything but.

For their solution, Blitz Games have taken to using the Xbox 360’s GPU, with some less strenuous calculations left on the CPU. So far most titles we’ve seen use the CPU to do most of the work, and according to Oliver doing it that way can be pretty slow in comparison.

“Well that's interesting, because obviously if you're trying to run your game and look at these huge depth buffers and colour buffers, that's a lot of processing. And it's actually processing that a general CPU is not very good at. So you can seriously lose half your processing if you were to do it that way. We've found that it's all down to shaders, but turning a depth buffer into a skeleton is pretty hardcore shader programming. What you tend to do is write all your algorithms, get it all working in C++ code, and then work out how to now write that in shaders.”

What he’s describing here is how the team at Blitz Games are effectively doing all the depth map processing and skeletal tracking on the GPU using customised shader routines.

“The GPU on the Xbox is very powerful but we've all only been using it for glossy special effects. A really good example of this is Kinectimals, as the most intensive thing that you can do on a GPU is fur rendering. So that GPU is doing all the fur rendering, and I can guarantee that it's also doing a lot of image processing too. It's brilliant that the Xbox has a really good GPU and can handle both these things, but actually writing that shader code to do image analysis is hardcore coding at its extreme!”

As we mentioned earlier, having the GPU handle the processing of all the image data from Kinect can be incredibly difficult. The GPU definitely appears more suited to the task; it’s just that writing code to take advantage of it requires approaching the problem from a different angle. While this is pretty commonplace in PS3 development, on 360 it’s a far rarer occurrence. However, it seems that for anyone looking to provide a different Kinect experience, that’s exactly what you will have to do.

Certainly, from the comments of Blitz Games’ chief technical officer we can expect titles using the GPU to not only have lower latency, but also retain some of the advanced skeletal tracking features specific to Kinect. Oliver describes how he and his team are already working within two frames of latency, which should translate roughly to around 132ms of lag on top of whatever your HDTV is already adding to the signal.

Comparatively, top titles running at 30fps, such as Halo 3, display 100ms of lag on top of your HDTV’s processing delay, and Sony’s own Killzone 2 is said to feature at least 150ms. By contrast, many first-party Kinect titles are operating at upwards of 200ms of input lag.

Looking at these numbers, it is clear that around 132ms of lag in a best-case scenario is actually rather good, and in line with what we expect from most ordinary games running at 30fps. Even Criterion Games’ Need For Speed: Hot Pursuit exhibits at least 100ms of lag at present, with the developers hoping to reduce that to below the 90ms mark.

PlayStation Move also operates within the same basic threshold, with the system capable of delivering games with only around one or two frames of latency – roughly 66 to 140ms on top of your HDTV. This means that through a custom approach to processing data, Kinect is more than capable of matching the Move’s best numbers with regards to latency.
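The frames-to-milliseconds conversions quoted throughout these comparisons are simple arithmetic, but the answer depends entirely on which update rate you count a "frame" against – camera, game, or display – which is why figures from different sources rarely line up. A quick sketch:

```python
# Converting latency quoted in frames into milliseconds. A "frame" is only
# meaningful relative to an update rate, so the same two-frame claim can
# translate to very different millisecond figures.

def frames_to_ms(frames, fps):
    return frames * 1000.0 / fps

print(round(frames_to_ms(2, 30), 1))  # 66.7  - two frames at a 30Hz update
print(round(frames_to_ms(2, 15), 1))  # 133.3 - two frames at a 15Hz pipeline
```

Note that two frames at the Kinect camera’s 30Hz update works out to roughly 66ms, so the higher figures quoted in frames presumably fold in extra pipeline stages beyond the camera itself.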

So, what has become apparent today is that games specifically created for Microsoft’s Kinect don’t necessarily need to have high levels of latency, with the overall complexity of the motion tracking method used signalling just how much lag will or won’t be present.

This also means there will be a correlation between the kinds of games you can expect to have high latency on Kinect and the ones that don’t. Titles which use the full potential of the device’s advanced body tracking capabilities, such as yoga or fitness games, are also the ones which will feature noticeably more lag. However, they will also be the ones to really demonstrate how different Kinect is from other motion control solutions.

Replicating your movements and translating them to your TV screen accurately is exactly why most people will be buying a Kinect, regardless of the latency. If the experience provides something far in excess of what was possible on the Wii, or on PS Move to the same degree, then it won’t really matter all that much.

What does matter, however, is that developers have finally confirmed that they do indeed have a solution to the problem. Performing image analysis on the GPU is clearly said to be faster than MS’s own CPU-based approach. Plus, there’s always the option of circumventing some of the full-body tracking in favour of a simpler system whilst still taking advantage of the device’s ability to generate a depth map.

In the end, gamers will be sold on Kinect not simply by how much lag they encounter, but by the types of experiences on offer. If those experiences do indeed bring something fresh and innovative to the table, then adjusting to a little lag here and there isn’t likely to influence their decision to jump in. So, whilst the device might not be suitable for the ‘core’ gaming experiences we know and love, that’s not to say that titles catering to both core and casual users won’t come along and provide the best of both worlds: low latency and advanced body tracking.

Ultimately, it’s going to take a year or two to really see the benefits of this technology, and as with learning any new console hardware for the first time, cracks and niggling issues will have to be worked out. Fittingly, this is a statement echoed by Blitz Games, who believe that the kinds of experiences you’ll get from second or third generation software will be far above what we are seeing today.

All that’s left now is for developers to really spur their creativity by fully taking advantage of the various coding tricks and optimisations they’ve learned. And, of course, for Microsoft’s marketing department to convince people that Kinect is really worth that £129 price of entry.

Friday, 23 July 2010

Tech Report: A Look At LBP2's Graphical Upgrades

Despite being a platformer with a slightly cutesy disposition, Little Big Planet is no stranger to technical excellence. You might not think that just by looking at it, but under the hood the original LBP was as interesting from a tech point of view as it was from a gameplay perspective.

Little Big Planet 2, then, appears to be much the same in this regard: the characteristic real-world physics so integral to the very core of the game, backed up by some impressive and downright interesting engine enhancements. This is precisely why we’re taking an extended look at the title. What we have here could almost be described as a full tech analysis of sorts, but in reality it’s more of a small glimpse – and, dare I say, an intriguing update – into what Media Molecule are doing with this sequel.

The first thing you notice when going over the screenshots is the abundance of clean lines and smooth edges – or rather the distinct lack of any jaggies spoiling the scene. You might be thinking ‘supersampled’ when seeing the quality of the screens, and initially that’s exactly how I felt. However, this isn’t actually the case: all the screens you see on this page are direct-feed captures rendered at native 720p (1280x720), and are not downsampled from a higher resolution, as is usually the case with most PR shots released to the press.

The reason for the game’s lack of jagged lines and supersampled look is then clear: morphological anti-aliasing (MLAA) has been implemented in the graphics engine.

Although this anti-aliasing method has been confirmed by Media Molecule themselves, you can still see some evidence of sub-pixel jagged edges – another hint at the inclusion of MLAA, as with supersampling this just wouldn’t occur.


MLAA has featured a few times before here at IQGamer, namely when covering Santa Monica Studio’s God Of War 3 and, more recently, Guerrilla Games’ Killzone 3. It feels like not a month goes by without some new first-party title using the technique – one which is not only cost saving in terms of memory, but also secures the highest levels of image quality in a console-specific release.

From the screens featured on this page it is apparent that MLAA does far more for the image than is possible via more traditional MSAA, beaten only by supersampling (SSAA), which isn’t realistically doable on consoles due to the additional rendering cost incurred.

In LBP2, as with GOW3, the use of MLAA provides up to 16xMSAA-equivalent coverage on some surfaces, and better than 4x on most others. The only area in which this form of AA falls down is sub-pixel aliasing, whereby any polygon edge smaller than a single pixel receives no AA coverage at all. The same is true of MSAA, but not of supersampling, which covers every aspect of the entire image. Regardless, this new form of anti-aliasing is a huge improvement over the 2xMSAA used in the first game.
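To give a flavour of why MLAA works as a pure post-process, here is a heavily simplified, MLAA-flavoured sketch on one greyscale scanline: find discontinuities between neighbours and blend across them. Real MLAA classifies edge shapes (L, Z and U patterns) and derives coverage-based weights; the point here is only that it operates on the final image, with no extra geometry samples needed.

```python
# Heavily simplified, MLAA-flavoured post-process on a greyscale scanline.
# This is an illustrative sketch, not Media Molecule's implementation.

def edge_blend_row(row, threshold=0.5):
    out = list(row)
    for x in range(len(row) - 1):
        if abs(row[x] - row[x + 1]) > threshold:   # discontinuity found
            mid = (row[x] + row[x + 1]) / 2.0      # blend across the edge
            out[x] = (row[x] + mid) / 2
            out[x + 1] = (row[x + 1] + mid) / 2
    return out

jaggy = [0.0, 0.0, 1.0, 1.0]   # a hard stair-step edge
print(edge_blend_row(jaggy))   # [0.0, 0.25, 0.75, 1.0] - edge softened
```

This also shows MLAA’s blind spot: an edge thinner than a pixel never produces a detectable discontinuity pattern, so it receives no treatment – exactly the sub-pixel aliasing noted above.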

Of course, the use of MLAA is just one of many fundamental changes to the underlying graphics engine. For this sequel the developers have also opted for a purely forward rendering approach throughout the entire game, making transparencies and shadowing much easier to handle.


Previously, the original LBP used a deferred solution for rendering lighting and shadowing, which meant issues with displaying certain effects and also a reduction in shadow quality.

By switching to a traditional forward rendering approach this time around, the developers have been able to easily upgrade the shadowing system used in this sequel. Soft shadows are present throughout – looking much nicer than the hard-edged ones used in LBP1 – with shadows cast by every main light source you can see, now without the need to pre-calculate them as shadowmaps as before. These soft shadows also blend in well with the game’s newly implemented screen-space ambient occlusion (SSAO), which is performed in real-time along with most of the lighting and shadowing.

This use of SSAO also gives an even greater depth to the image not found in the last game, complementing the range of visual enhancements on offer.
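The core idea behind SSAO can be sketched in a few lines: darken each pixel according to how many nearby depth samples sit in front of it. The sample pattern and falloff below are arbitrary toys; real implementations use hemisphere sampling, range checks and a blur pass.

```python
# Toy, screen-space-only sketch of the SSAO idea, operating on a small
# 2D depth buffer. Illustrative only - not LBP2's actual implementation.

def ssao_at(depth, x, y, radius=1):
    d = depth[y][x]
    occluders, samples = 0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            ny, nx = y + dy, x + dx
            if 0 <= ny < len(depth) and 0 <= nx < len(depth[0]):
                samples += 1
                if depth[ny][nx] < d:   # neighbour is closer: it occludes
                    occluders += 1
    return 1.0 - occluders / samples    # 1.0 = fully lit, lower = darker

# A pixel at the bottom of a 'crease' (all neighbours closer) gets darkened.
depth = [[1, 1, 1],
         [1, 5, 1],
         [1, 1, 1]]
print(ssao_at(depth, 1, 1))  # 0.0 - fully occluded crease
print(ssao_at(depth, 0, 0))  # 1.0 - nothing in front of it
```

Because it only ever reads the depth buffer, the cost is independent of scene complexity – which is exactly why it pairs so naturally with the soft shadowing described above.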

Seeing as the computational requirements for producing such effects have probably gone up, the resolution of these shadows is still relatively low compared to the rest of the game. I’m not sure exactly how much resolution is being lost, although all shadows look slightly softer than you’d expect, especially compared to how they’d appear if rendered at a resolution matching the rest of the game.

Transparencies, however, now look to be rendered at a higher resolution than in the first game, using proper alpha blending, whereas before they were rendered at a half-res of sorts using the bandwidth-saving alpha-to-coverage technique, which led to a slight screen-door effect on every object that featured it.

The true nature of the translucency is a nice touch, and goes well with other enhancements being made to the whole particle/effects system used through the game.


From what we can see, LBP2’s use of soft dynamic shadowing in combination with SSAO is undeniably impressive, providing an incredibly life-like appearance to how the whole scene is lit at any given time. The tightly woven nature of the lighting, and how it commands the way shadowing occurs in the game, cannot be overstated; and to this the addition of a global illumination (GI) style solution adds more believability into the mix.

In LBP2, light on some stages appears to travel down from a main point (the sun, or a single internal light source) into the world, seemingly reflecting off some objects onto others as it shades and lights the environment. It’s an impressive visual trick, as in reality none of it is being done in real-time; instead it’s a pre-calculated simulation using something along the lines of a lightmap, moved around on certain surfaces to create the effect.

The illusion of proper GI, plus some cleverly implemented god rays, is the focus here, convincingly building on the original LBP’s realistic lighting model. Alone, these elements are merely talking points. But brought together they lend a real sense of naturalness and cohesion to an abstract game world, much as the physics provides some tangible connection between the game and our own experience of reality.

There’s still a disconnect, for sure, seeing as the whole style and notion of the game’s world is completely off-the-wall, but within its own little confines it feels completely organic, succinct even. And that’s perhaps Media Molecule’s biggest success: not the visual wonderment that all these effects and improvements provide, but their ability to casually sink in and blend together so seamlessly into being part and parcel of the experience.

Low-resolution shadows aside, and maybe some stray un-anti-aliased sub-pixel edges, LBP2’s tech has seen some noticeable changes, and some ingenious solutions to problems most lesser developers tend to gloss over. The game isn’t visually outstanding from a ‘wow factor’ point of view, but instead has its moments in delivering small subtleties that do more for the overall experience than mere technical showboating.

Monday, 5 July 2010

Tech Report: The 3D Behind Crysis 2

During E3, Crytek boasted about its incredible method of displaying games in 3D with minimal performance hit. Previously, many developers have spoken about how difficult it would be to get high-end, fully featured titles running in this format on consoles, citing performance issues, and especially the limitation of having to render every frame twice (once for each eye) for the effect to be possible. However, the ambitious developer of the massively anticipated Crysis 2 has managed to get the game up and running with far less impact on performance.

At E3, Crytek demonstrated a version of Crysis 2 running in full stereoscopic 3D, claiming that the effect cost them only an extra 1.5% of processing power over rendering the game in traditional 2D. Mightily impressive, you might think – and that was exactly my thought after reading about their presentation. How could a resource-heavy game like Crysis be running in full 3D – and in 720p, for that matter – without the kind of noticeable performance impact that the likes of Killzone 3 and MotorStorm seem to be suffering from? The answer lies in how the effect is created compared to those games.


You see, for Crysis 2 the developers are using a cheat of sorts: a method of 2D-to-3D conversion, much like the process used to take an old 2D movie and process it so it displays in 3D, complete with a reasonable level of natural depth. Although Crytek themselves haven’t shed any light on the process, it is plainly apparent what options they might be using, and all signs point to a pixel-shift-plus-depth-buffer approach to creating the 3D effect.

This type of 2D displacement tech is very similar to that used in film production, and the process works in much the same way in videogames. The main differences are that you rely on a mathematical algorithm to fill in any gaps left behind by the pixel shift, and of course you use the Z-buffer for depth information. All work in 2D displacement is done at the pixel level – nothing geometry-wise is processed at all; it is a completely post-process effect.

Here's how it works:

First, the Z-buffer gives you the depth information for the scene from a single point of view (POV). Then the overall viewing distance for the eyes is calculated, creating an ideal viewing position from which to determine how far certain objects are from the screen. Next, pixels are shifted left and right using the Z-buffer info and the calculated viewing distance, creating the image for each eye.

You now have a rough approximation of two separate frames (one for each eye) for the 3D effect to be displayed. Essentially, rendering is done once for a single viewpoint, and two different views are created by shifting pixels horizontally. However, you may also have a few holes in the image, arising from changes in what is visible on screen once the left/right pixel shift has occurred.

As with the post-process conversion of 2D film stock into a 3D print, these holes need to be filled in with information that is no longer there. But unlike that conversion process – in which a post-production artist manually creates new detail on a frame-by-frame basis – for videogame rendering it has to be done in real-time by a cleverly designed algorithm instead.

This of course creates problems, seeing as it isn’t easy for a mathematical routine to fill in the gaps left behind in the image without side effects. Just look at how many unwanted artefacts the upscaling process can leave if not done carefully and with a high degree of accuracy. The same care is needed here, with the developer needing to create something that carefully determines what information has been cut out, and what should replace it.
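The whole pipeline described above can be sketched on a single scanline: shift each pixel horizontally by a disparity derived from the Z-buffer (nearer pixels move more), resolve collisions in favour of the nearer pixel, then plug any holes with a neighbouring value. The disparity scale and nearest-neighbour fill here are arbitrary stand-ins; Crytek’s actual algorithm has not been published.

```python
# Minimal sketch of depth-based 2D-to-3D reprojection on one scanline.
# Illustrative only: disparity scale and hole filling are toy choices.

def reproject_row(colour, depth, eye=1, scale=2):
    """Build one eye's view of a scanline; eye=+1/-1 picks the direction."""
    out = [None] * len(colour)
    out_z = [float('inf')] * len(colour)
    for x, (c, z) in enumerate(zip(colour, depth)):
        disparity = eye * round(scale * (1.0 - z))  # near pixels shift more
        nx = x + disparity
        if 0 <= nx < len(out) and z < out_z[nx]:    # nearer pixel wins
            out[nx], out_z[nx] = c, z
    for x in range(len(out)):                       # crude hole filling
        if out[x] is None:
            out[x] = out[x - 1] if x > 0 else colour[x]
    return out

colour = ['A', 'B', 'C', 'D', 'E']
depth  = [1.0, 1.0, 0.5, 1.0, 1.0]   # 'C' is nearer than everything else
print(reproject_row(colour, depth))  # ['A', 'B', 'B', 'C', 'E']
```

In the output, the near pixel ‘C’ has shifted right and occluded ‘D’, and the hole it left behind has been filled by smearing its neighbour ‘B’ – exactly the kind of artefact a production-quality hole-filling routine has to disguise.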

In Crytek’s case, their method of converting a 2D image into a 3D one in real-time is particularly successful, with little in the way of apparent side effects according to the press who have seen it running. The amount of depth perception is said to be lower than in the likes of Guerrilla Games’ Killzone 3 – which actually renders individual frames for each eye – though it still appears fairly natural, with only a slight hint of the cardboard cut-out look which plagues most 2D-to-3D conversions.


Impressively, Crytek are also using 3D in a way no other developer seems to be at the moment. Rather than having images (such as explosions and particles) jump out at you during play, they are using the effect to create a natural depth which extends into the television set, acting as an extension of your natural peripheral vision.

So far the 3D tech behind Crysis 2 has certainly impressed, although it seems Crytek are not the only ones using it. Sony are also developing a similar process for first- and third-party use in order to make 3D a little more achievable on the PS3, negating the heavy performance cost that comes with rendering the effect for real.

With the 3D race now officially on, it will be interesting to see how developers implement the effect into their games, especially with regards to either rendering in proper 3D with individual frames for each eye, or with the 2D displacement tech talked about on this page today.

Crytek has shown that it is possible to convincingly support the format without having to completely rewrite their engine, whilst Guerrilla Games has showcased the advantages of actually downgrading image quality in order to create an unparalleled natural depth that can only be achieved by truly rendering in 3D.

What is apparent is that these two approaches to including 3D in the latest software releases give developers on both sides of the coin a clear path to adopting the format.

Friday, 18 June 2010

Tech Report: How Powerful Is The 3DS?

So, the 3DS is finally out of the bag and the first screens and videos of some visually impressive titles are making their way across the interweb. Tuesday’s unveiling of Nintendo’s latest hardware entry couldn’t have gone better, with a crowd-pleasing assault of titles aimed squarely at the ‘core’ gaming market, and some solid tech backing it up.

This tech is what we’re going to be looking at here today at IQGamer: uncovering the details behind the visuals in the various screenshots doing the rounds, and assessing just how powerful the 3DS really is. Of course, without actual hardware specs there isn’t much to go on outside of released screens and poorly captured internet video. However, pictures do tell a tale – and in the 3DS’s case, a significant one about the underlying hardware. Definitive conclusions you won’t find – that isn’t really possible at the moment – but an insight into just what we can expect from Nintendo’s latest is something we can clearly provide.


Many people were quick to point out in their initial impressions that the 3DS appeared to have PS2- or GameCube-quality graphics. A bold statement indeed, as that would make the hardware incredibly powerful, matching the iPhone in pure polygon capability whilst lacking some of that mobile device’s advanced shader effects. In reality that doesn’t seem to be the case, with the system’s current performance looking to sit between the Dreamcast and the PS2, but with liberal use of bump-mapping and specular effects. Better than the PSP? Yes, but maybe not in terms of raw geometry-pushing power.

Another thing to consider is that the machine renders everything on screen in 3D, which means rendering each frame twice. This takes far more processing power than rendering a single frame for 2D display, and more than likely impacts the level of polygon performance the 3DS is capable of.

In addition, the 3DS lets you adjust how much of the 3D effect is displayed in real-time using a slider next to the screen. The reduction or increase in the effect appears to be calculated on the fly by the processors inside the system, so clearly this eats up power that could have been used for something else.

Maybe if the 3DS didn’t have to render every game in 3D it would exceed the PS2 in terms of graphics overall, matching it in real-world polygon performance while completely topping it in the visual effects stakes. As it is, not quite.

Using screens for comparison we can see just how the 3DS holds up against other formats, and whether or not claims of the machine being close in power to a PS2 or like an enhanced Dreamcast are really true.

It’s pretty clear from the outset that the 3DS’s polygon-pushing power is nowhere near that of the PS2 or the GameCube in mid- or high-level scenarios; instead it resembles low-end, low-key GCN- and PS2-style graphics, but with a greater number of visual effects.

Metal Gear Solid 3 is a good example of this. Here we have a title that is perhaps pushing more around on screen than a Dreamcast – with lots of bump-mapping, specular highlighting, reasonable texturing, and some nice lighting – but that clearly falls short of a high-profile PS2 game, geometry-wise at least. Instead, polygon counts look very similar to top-end Dreamcast games, but with a far more liberal use of special effects. Programmable shaders are also very apparent, clearly putting the hardware ahead of the PSP, PS2 and GCN in the effects department.



The same thing can be found in Resident Evil: Revelations, a game which initially looks strikingly next-generation but hides its low-poly make-up under a veil of bump-mapping and shading. You can see that the character models are in fact a little blocky, lacking the kind of intricate geometry detail found in most PS2 and GCN games. Instead, the game fools you into thinking it is more high-end than it actually is by cleverly using a healthy amount of bump-mapping, plus good texturing and lighting, which allow smoother edges and more apparent detail with less geometry.



To emphasise just how important bump-mapping can be in creating a smooth image, let’s talk about Activision’s Call Of Duty for a second. In an interview, developers Infinity Ward said that the characters in Call Of Duty 4 were made up of fewer polygons than in COD2, yet actually looked noticeably more detailed as a result of improved use of certain effects. The developers pointed out that with better normal mapping (a more advanced technique with similar results to bump-mapping) and better texturing, they were able to create more detailed characters at a lower geometry cost. It’s this very same thing that is happening here with Nintendo’s 3DS.
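The trick described above boils down to per-pixel lighting with a perturbed normal: the surface reads as bumpy because the lighting calculation is fed a detailed normal from a texture rather than one flat normal per polygon. A tiny sketch with hand-picked toy vectors:

```python
# Tiny sketch of why normal mapping adds apparent detail without geometry:
# Lambert diffuse lighting evaluated with a perturbed per-pixel normal.
# Vectors are illustrative toys, not data from any real game.

import math

def normalise(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(normal, light_dir):
    """Diffuse intensity: clamped dot product of normal and light."""
    return max(0.0, sum(n * l for n, l in zip(normalise(normal),
                                              normalise(light_dir))))

light = (0, 0, 1)     # light pointing straight at the surface
flat  = (0, 0, 1)     # the polygon's single face normal
bumpy = (0.5, 0, 1)   # a normal perturbed by the normal map

print(round(lambert(flat, light), 3))   # 1.0   - flat shading, no detail
print(round(lambert(bumpy, light), 3))  # 0.894 - same polygon reads as angled
```

The geometry is identical in both cases; only the lighting input changed, which is why a lower-poly COD4 character can read as more detailed than its COD2 counterpart.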

Compared to Sony’s PSP, the 3DS does appear to approach it for the most part with regards to real-world polygon counts in games, with the exception of top-tier PSP titles such as GTA and OutRun 2, which seem to push closer to that machine’s technical maximum of around 6 million polygons per second. Other than that, the 3DS competes remarkably well, and demonstrates a clear effects advantage over Sony’s machine.

It’s these effects that make some 3DS titles look so much better than what is available on the PSP. Strangely, it appears to be mainly third-party titles that are pushing the hardware, using the wide range of effects the new machine seems to offer. Nintendo’s own games seem far more basic in comparison, using the standard textured-and-shaded approach to graphics rendering. Of course, it doesn’t help that most of the titles Nintendo showed were either N64 ports or what looked to be enhanced DS games.

Capcom’s Super Street Fighter IV is a good example of this. The game appears to have polygon counts approaching PSP levels, but with much more detailed texturing, more advanced use of lighting and shadowing, plus some evidence of advanced shader effects too. It’s noticeably better than anything on either the PSP or the Dreamcast, and like Resident Evil it shows signs of visual effects normally found on the original Xbox.

Particularly impressive is the use of self-shadowing, an effect absent from the original PS3 version of SFIV but later included in the ‘Super’ version. This is not something you’d expect to see in a handheld title, that’s for sure.



As you can see in many of the screens, it looks like the 3DS has programmable pixel/vertex shaders, as with the original Xbox, or the PS3 and 360. Initially some people suggested that Nintendo’s machine could simply be using older fixed-function effects – ones that aren’t programmable in any way but give off a similar look. There are many fixed-function variants of common shader effects, and it could have been that this was just what 3DS games were using. However, it has since been confirmed in an interview with Miyamoto that the hardware is fully capable of using shaders, putting an end to such speculation.

What this means is that although the 3DS is similar to Sega’s Dreamcast in terms of polygon rendering capabilities, and pretty close to the PSP, it is substantially more powerful than either of those two machines – or even the PS2, GCN and Wii – with regards to effects. By supporting shaders, the 3DS goes beyond what any of those machines can do in this area. But how does it fare against the iPhone?


Comparing it to the iPhone is perhaps a little more difficult, not least because few developers have actually tried to push the hardware, but also because a heavy software layer hides direct access to the machine’s graphics hardware, preventing devs from fully exploiting it. However, we do know that the SGX535 GPU inside the iPhone 3GS is clearly capable of better visuals than the 3DS regardless of the software restrictions. Developers using the OpenGL development environment have access to the full range of shader effects the SGX535’s Shader Model 4.0-class core provides, whilst also being able to push levels of geometry similar to leading PS2 games around on screen.

The question with the iPhone is whether or not developers have enough incentive to do this. After all, the iPhone is hardly a hotbed of bleeding-edge games development, and there’s also the challenge of making games look good with the limited resources on offer – neither of which seems to be happening on Apple’s platform. It really is a case of one platform being vastly superior on paper (the iPhone), but with little software to showcase the fact. With that in mind, it’s safe to assume that most third-party 3DS titles will look better than some of the best iPhone games – not because the hardware is more capable, but because it is in developers’ best interests to push it.

At the end of the day, it looks like we can expect graphics quality somewhere between the Dreamcast and the GCN, but with the added use of shader effects seen on the original Xbox and beyond. Is it possible for the 3DS to do more? Maybe, but without seeing the spec sheet we simply don’t know, and it would be foolish to make such assumptions so early on.


In terms of what hardware lies inside the 3DS, we don’t really know for sure, as nothing has been confirmed. But we do know that there must be some kind of ARM-based CPU inside the machine to maintain compatibility with the old NDS – enhanced and clocked at a faster speed to handle the new stuff too – and that the GPU looks likely to be one provided by Japanese firm DMP, seeing as the company already has a chip powering another portable 3D display.

All things considered, the 3DS is a pretty powerful piece of kit, sitting somewhere between the Dreamcast and the original Xbox in terms of overall graphical performance. It might not be able to push polygons around on screen like there’s no tomorrow (fewer than the PS2, GCN and XB), but with a range of impressive visual effects made possible through the use of shaders, it doesn’t need to. In that respect, Nintendo’s latest handheld appears economical in its hardware design, yet perfectly capable for the task at hand. And for a successful handheld, that really is all you need.

As more information surfaces and developments occur, we shall endeavour to revisit our look at the hardware inside the 3DS, updating you with what we hope will be the most informative and accurate report of its technical capabilities around. For now, this little insight into what is possible will have to do.

Thursday, 20 May 2010

Tech Analysis: Alan Wake

I will warn you now that this tech analysis of Alan Wake is an incredibly long and in-depth affair. With Remedy’s latest there is so much going on, so many individual parts making up the overall look and feel of the game, that after you’ve covered one thing you’ve already discovered another.

So, rather than skim over the little details which might actually have a pretty large impact in the overall scheme of things, what we’ve done is try to deliver the most comprehensive look at the tech making up the world of Bright Falls, one of this year’s most defining visual achievements.

Usually it’s Sony’s PS3 exclusives that garner such attention and technical praise, often delving deep into the hardware to deliver a visual experience that on so many technical levels is almost unmatched by titles on competing platforms – PC excluded, of course, but that’s a given really. Microsoft, on the other hand, have a machine with a comprehensive set of development tools that are far more traditional and easier to get to grips with than what Sony have provided for the PS3, so in most cases developers are quite happy to think inside the established box when it comes to crafting the show-stopping visual showcase expected of current-generation games.

However, some developers are beginning to break out of that cycle, creating a graphical experience which rivals that of Uncharted 2, Killzone 2 and God Of War 3. Both Bungie with Halo Reach, and Remedy with Alan Wake, are pushing the envelope of what is possible on MS’s machine, showing gamers and developers that what is possible in a custom-made PS3 title is also possible on the 360.

Last week we took an in-depth look at Halo Reach, casting our critical eye over the overall workings of the tech. Today IQGamer looks closely at Remedy’s Alan Wake, a game no stranger to controversy or to our very own tech analysis. For this feature we will finally be evaluating the title, whilst also attempting to fully uncover the truth behind that 540p rendering resolution.

Alan Wake, to cut away any lingering doubts or speculation, does indeed render at a 960x540 resolution, which is then upscaled by the 360 to form the final 720p image. Not all aspects of the game are rendered at this sub-HD resolution – some effects are actually closer to 720p – and the game does use an impressive 4x multisample anti-aliasing (MSAA) at all times, a side effect of which is not only a clear reduction in jagged lines but also a much cleaner upscale.

The result of rendering at 540p isn’t quite as bad as you might expect, with fine detail still present, and the soft look of the game actually appearing quite clean and clear compared to some other sub-HD titles. No doubt the use of 4xMSAA helps in reducing upscaling artefacts whilst keeping the image relatively clean in the process. Softness, obvious as it is, is the by-product, but its inclusion actually helps in creating the game’s creepy atmosphere, with the shadowy and foggy night-time visuals all feeding a real sense of immersion.
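The scale of the saving is worth spelling out. A quick back-of-the-envelope calculation shows why dropping from 720p to 540p is so attractive to a bandwidth-starved renderer – these figures are simple arithmetic on the two framebuffer sizes, not anything taken from Remedy’s engine:

```python
# Pixel-count arithmetic for Alan Wake's internal resolution vs full 720p.

native = 1280 * 720  # a full 720p framebuffer
sub_hd = 960 * 540   # the game's actual internal resolution

ratio = sub_hd / native

print(native)           # 921600 pixels per frame at 720p
print(sub_hd)           # 518400 pixels per frame at 540p
print(round(ratio, 4))  # 0.5625 -- only ~56% of the pixels need shading
```

In other words, every shaded frame costs a little over half what a native 720p frame would, which is the headroom that pays for the 4xMSAA and the heavy transparency effects.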


It’s pretty clear from the screenshot above just how good Alan Wake looks for a game that is decidedly more sub-HD than many others on the market. But why did Remedy opt for such a low resolution so late in the development cycle? And what happened to the claims of rendering in 720p with 4xMSAA?

Well, you only have to look at last year’s gameplay footage to find out, which does indeed render in full 720p whilst sporting 4x multisample anti-aliasing. Before this, Alan Wake was rendering at the standard 720p with 2xMSAA, in addition to having full-resolution particle and transparency (framebuffer) effects. However, the game suffered from constant screen-tearing and drops in framerate, no doubt as the engine struggled to cope with the game’s intense bandwidth requirements.

It is pretty obvious that Alan Wake is an extremely bandwidth-heavy game, rendering all those fog, mist, smoke, and transparent particle effects. Transparencies are littered all over the game’s fictional town of Bright Falls, and unsurprisingly the engine was struggling to cope.

Originally, Remedy simply decided to change from rendering full-resolution effects to using half-resolution alpha-to-coverage (A2C) effects instead, saving some of the bandwidth needed to keep performance up. However, A2C has the unfortunate side effect of giving all transparencies that use it an unsightly screen-door effect. The only solution is to effectively up the level of anti-aliasing to 4xMSAA in order to blend the A2C away into the rest of the scene.
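The core idea behind A2C can be sketched in a few lines: rather than alpha-blending, a pixel’s alpha value is converted into how many of its MSAA coverage samples get written, and the MSAA resolve then averages those on/off samples back into an apparent transparency. This is a simplified illustration of the general technique, not Remedy’s actual implementation:

```python
# Sketch of alpha-to-coverage under 4xMSAA: alpha becomes a sample count.

SAMPLES = 4  # 4xMSAA: four coverage samples per pixel

def alpha_to_coverage(alpha):
    # Quantise alpha into how many of the 4 samples are covered (0..4).
    return round(alpha * SAMPLES)

# A 50%-transparent pixel covers 2 of its 4 samples. The MSAA resolve
# averages the samples back to roughly half brightness -- but the on/off
# pattern between neighbouring pixels is what produces the visible
# "screen door" dither, which more MSAA samples help to blend away.
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    covered = alpha_to_coverage(alpha)
    resolved = covered / SAMPLES  # what the resolve averages out to
    print(alpha, covered, resolved)
```

Note that with only 4 samples, alpha is quantised to five possible levels – which is exactly why A2C looks dithered up close, and why dropping from 4xMSAA would make the screen-door effect far worse.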

Despite these changes it’s clear that the game still suffered from severe performance issues, with tearing once again at the forefront. In an interview the developers stated that screen tear would be part and parcel of the experience – a sign that maybe they were having trouble keeping things running smoothly. In light of all this, it seems that in order to claw back performance they finally opted to render at a sub-HD resolution, giving them a smooth 30fps update and very little screen tear, whilst still having all the benefits of 4xMSAA improving the overall image quality (IQ).

In the end, the use of A2C does very little to damage the overall look of the game. Alan’s hair, for example (see below screenshot), uses it with very little in the way of noticeable side effects, though if you look closely you can indeed see some of the dithered nature of A2C at work. The 4xMSAA manages to prevent any noticeable upscaling artefacts from being visible, and the soft look provided by the lower resolution isn’t particularly intrusive.


There’s no doubt that the game would look much sharper rendering at 720p over the current 540p framebuffer; however, it is likely that many of the outstanding graphical effects and small visual touches would have had to be sacrificed in order to keep performance levels up. With this in mind, it’s much better to have a smoother, graphically more impressive game as a whole than a clearer albeit simpler one.

With resolution and framebuffer issues out of the way, the rest of Alan Wake’s engine is just as interesting, and serves as a clear technical benchmark for many 360 developers to follow.

The stable framerate, for one, is a pretty exceptional achievement. Alan Wake maintains a rock-solid 30fps ninety-nine percent of the time, with the game opting for screen tearing in situations where the engine is struggling to keep a steady hold on things. As a result the game basically never drops frames, and when it does, the drop is so insignificant that it is barely noticeable. The downside is that the tearing can get pretty messy at times, appearing right in the centre of the screen where it is most visible.

Thankfully, these situations aren’t too commonplace, with most of the tearing appearing for a split second or so before vanishing as quickly as it came. It should be pointed out, though, that the game does tear regularly, although it isn’t at all intrusive in these smaller amounts.

Adding to the stable framerate is the game’s use of camera-based motion blur, which helps create the smooth look and feel that the game has throughout. The motion blur is very subtle, never at all intrusive, instead being an organic part of the camera movement. Its implementation is just another part of the artistic flair running through every aspect of the game’s visual make-up. The screenshot below shows the effect in action, though it seldom has that much of a dramatic impact on the game in motion.


What is surprising is that the engine in Alan Wake runs at an almost constantly solid 30fps with a high level of dynamic visual effects – fog, smoke, and particles – whilst also delivering incredible draw distances without dramatically paring back the visuals through an overly aggressive LOD system. The game is full of sprawling vistas, from dense forests to towering mountains, and all of it is largely reachable, with the player travelling between these iconic places many times.
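For readers unfamiliar with the term, a level-of-detail (LOD) system swaps distant objects for progressively cheaper meshes so that long draw distances stay affordable. A generic sketch of the idea follows – the thresholds and polygon counts are entirely invented for illustration, and say nothing about Remedy’s actual tuning:

```python
# Generic distance-based LOD selection: farther objects get cheaper meshes.

LODS = [
    (50.0, 10000),   # within 50 units: full-detail mesh
    (200.0, 2500),   # mid distance: reduced mesh
    (1000.0, 400),   # far away: very cheap mesh
]

def pick_lod(distance):
    # Return the polygon budget for an object at this distance.
    for max_dist, polys in LODS:
        if distance <= max_dist:
            return polys
    return 0  # beyond the last threshold: cull the object entirely

print(pick_lod(10.0))    # nearby: full detail
print(pick_lod(120.0))   # mid range: reduced mesh
print(pick_lod(5000.0))  # beyond the horizon: culled
```

The art is in where the thresholds sit: push them out too far and distant scenery pops visibly; pull them in too close and the sprawling vistas the game trades on fall apart.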

Perhaps it is for this reason that the engine has had to make sacrifices in other areas. The framebuffer, alpha effects, and textures have all been downgraded to a lower resolution, as have parts of the game’s lighting and shadow system.

Originally Alan Wake was going to be a fully-fledged open-world action horror, in which the player would investigate various paranormal and supernatural occurrences whilst trying to find his wife Alice and fend off scores of ‘The Taken’. In much of the final game you can clearly see the original open-world nature of its design, with large organic environments, multiple paths to take, long draw distances, and the ability to backtrack and go off the beaten path before heading to your next destination.

The engine used in the final game is still highly optimised for such an experience, so despite the change to a more linear and controlled affair, it still has to draw vast distances in high quality, whilst also rendering all those alpha effects, shaders and textures, and keeping the framerate up at the same time. It’s like having a version of GTA or Just Cause with Uncharted 2 levels of graphical polish – something which is beyond any of the current-gen consoles.

A good example can be seen below, in which the game renders detailed scenery for many miles away from the player, with no additional visual effects hiding the incredible draw distances that the engine is capable of.


Texture detail is reasonably high, although the actual textures themselves appear to be of very low resolution. Up close, detail begins to break up, and from a distance textures look clean but at the same time a little blurry. The overall quality is very good given the game’s 540p framebuffer and the various effects the engine is pushing around on screen; it’s just a shame that so much of the intricate detail gets broken up at close range, or blended away at a distance. Nevertheless, there are times when the combination of artistic flair and attention to detail really shows off the texture work.

In terms of filtering, it’s not entirely clear what is going on in Alan Wake. Assessing levels of texture filtering by eye is always a difficult proposition; however, it is possible to make some well-placed judgements as to what is happening.

At times texture detail is visible for a good 16 feet or so into the distance before becoming blurry, whilst in other scenarios texture fidelity is lost just a few feet away from Alan himself. In these latter scenes, it would appear that the game is perhaps using bilinear filtering (BF) at best, although that would fail to explain the clarity in other scenes. Instead, my best guess is that Remedy are actually using a combination of anisotropic filtering (AF) and bilinear filtering for the textures, alternating between large amounts of AF combined with BF, and very little AF with no BF at all.

You could call this a ‘filtering bias’, with some scenes getting more filtering than others. But at the same time, with all the fog, mist and other effects going on at night, it is hard to make a solid judgement call. At the very least, texture filtering is definitely present, with small levels of AF in parts.
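For reference, bilinear filtering – the baseline technique discussed above – simply blends the four nearest texels by the sample point’s fractional position, which is what produces that characteristic blur as textures tilt away from the camera. Here’s a minimal, purely illustrative sketch with made-up texel values:

```python
# Minimal bilinear texture filtering: blend the 4 nearest texels.

def bilinear(texture, u, v):
    # texture is a 2D grid of brightness values; (u, v) is in texel space.
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0  # fractional position within the texel cell
    t00 = texture[y0][x0]
    t10 = texture[y0][x0 + 1]
    t01 = texture[y0 + 1][x0]
    t11 = texture[y0 + 1][x0 + 1]
    top = t00 * (1 - fx) + t10 * fx       # blend across the top row
    bottom = t01 * (1 - fx) + t11 * fx    # blend across the bottom row
    return top * (1 - fy) + bottom * fy   # blend the two rows together

tex = [
    [0.0, 1.0],
    [1.0, 0.0],
]

print(bilinear(tex, 0.0, 0.0))  # exactly on a texel: its own value
print(bilinear(tex, 0.5, 0.5))  # dead centre: an even blend of all four
```

Anisotropic filtering improves on this by taking many such samples along the direction a surface slopes away from the viewer, which is why AF keeps ground textures sharp into the distance where plain bilinear smears them.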


Unmistakably though, it is the use of high levels of visual effects – volumetric fog, smoke, particles, and the impressively accurate dynamic lighting and shadowing system – that puts Alan Wake above so many other titles available on either the 360 or the PS3. These effects work so well with the A2C and 4xMSAA that any hit in pure sharpness from the 540p resolution is, on many occasions, not all that apparent – especially in night-time scenes, in which all the visual effects come together, with loads of moving elements on screen pretty much all of the time.

The fog, shadows, light, foliage and physics-based objects all cast their own shadows – some simply pre-baked shadow maps, others fully dynamic, reacting to a multitude of light sources and environmental objects. The fog, for example, interacts with the game’s lighting, with light shining through it, moving over the trees and foliage, casting shadows from most objects in its path.

A hazy mist is also present in the environments at dusk and at night, which both reacts with light and blends into the black fog that appears as ‘The Taken’ arrive, creating an ambience that adds to the tension felt throughout the experience.

Some of these effects do appear to render at a lower resolution than the rest of the game. The fog in particular is pretty low-res, and tends to blur everything in its path, obscuring detail and almost warping the game world. The blur doesn’t impact too much on the overall graphical feel Remedy is going for, and at times actually benefits it. Sadly, when the fog makes its way into cleaner areas, such as town buildings or remote gas stations, the blur effect is more noticeable, and less impressive.


For shadowing, the game uses a combination of high- and low-resolution shadows, both static (pre-baked) and dynamic. The ones used indoors are most noticeably low-res, as are some of the dynamic shadows cast by the player’s torch as they explore Bright Falls. However, in outdoor areas, in which there are so many other effects going on, it’s incredibly hard to notice the odd poor-quality shadow.

The game’s lighting also helps to mask the mixture of shadow quality amongst the various other visual effects. Light given off by the player’s torch casts shadows from objects all around the environment, as do flares and flashbangs, which dynamically change the surrounding shadows. This is perhaps the most impressive thing about Alan Wake’s graphics engine: the uniformity between light and shadow, the dynamic interaction between the two, and how this helps create a beautifully organic look to the visuals on offer.

It is safe to say that the quality of these effects in Alan Wake is some of the best we’ve seen in any console or PC title to date. Resolution issues aside, the consistency and quality seen here is a pretty impressive feat, given the constraints the developers have had to work with and the many issues faced along the way.


Overall, the tech behind Alan Wake is extremely impressive. The game at times combines several different transparency-heavy effects together, along with a fully dynamic lighting and shadowing system, whilst maintaining incredibly high draw distances at a near-perfect 30fps.

Certainly, things have been sacrificed in order to achieve this level of visual performance, but those sacrifices haven’t damaged the game in any significant way. In fact, most of the issues caused by the low-resolution effects and 540p rendering resolution are barely noticeable during your time in Bright Falls, most of which is spent in a surreal world of darkness. This darkness helps hide much of the game’s graphical shortcomings, blending them in and actually increasing the level of immersion found all through Mr Wake’s adventure.

Remedy, like all the most highly talented developers, have shown just how to work in and around the limitations of the platform they are developing for, making concessions in certain areas whilst scaling back in others, to ensure that the whole visual make-up is as polished as it is balanced.

Alan Wake, in this regard, represents exactly the right design choices for the game at hand. Generating an atmosphere that completely envelops the player is paramount to the experience, and it is something the developers have achieved with the underlying engine. It isn’t always about bringing all the elements together in the most technically advanced way possible, but rather about making sure that each individual piece fits together succinctly, and not just as a separate showpiece to admire.

In conclusion, there’s no doubt that Remedy have achieved exactly what they set out to do with Alan Wake, creating a game which is as gripping as it is visually alluring. For all the high-end tech powering the game, it is the carefully and often cleverly crafted art design which makes the package such a success. And whilst it may not have all the high-definition goodness of Sony’s Uncharted 2, it more than matches it in sheer technical brilliance and pure artistic direction.