Game Graphics GPU Resource Limitations & Defining "Maps"
Posted on March 19, 2016
Art asset creation was one of our key discussion points at GDC 2016. Speaking with Crytek, we looked at some of the particle effects and computational fluid simulation performed at the engine level – the kind of work that drives the visuals in the games we play. Textures and “painted” objects are another critical point of discussion, an aspect of game art that software tools creator Allegorithmic knows intimately. Allegorithmic's “Substance” tools are distributed to and used by major triple-A studios, including Activision's Call of Duty teams, Naughty Dog (Uncharted 4), Red Storm (Rainbow Six: Siege), and more.
In this behind-the-scenes discussion on game creation, we talk GPU resource limitations, physically-based rendering, and the different types of “maps” (what are normal, specular, and diffuse maps?). For a previous discussion of PBR (“What is Physically-Based Rendering?”), check out last year's Crytek interview; PBR, for point of reference, is used almost everywhere these days – but it drew major attention with its Star Citizen integration.
Graphics & Art Asset Creation
Developers use a toolbox to create game assets. Between rigging, animation, 3D modeling, sculpting, texturing, mapping, and the rest, no single tool does it all: Maya is a common 3D modeling solution, ZBrush and Photoshop serve other roles, and cinematic software (like Epic's new tools, replacing Matinee) rounds things out – there's a lot out there. Allegorithmic's Substance suite has its own tools (Painter, for one) that streamline some processes and centralize assets; it also vectorizes textures so that they can be scaled more easily between 1K, 2K, 4K, and other resolutions.
Common Texture Resolutions vs. VRAM
We had two meetings with Substance's team. During the first, we spoke about common texture resolutions in games versus VRAM consumption. According to the Substance team, texture resolutions “aren't as big as you might think,” with many sitting at 1K (1024x1024). Some may scale up to 2K or even 4K for specific features, like cut-scenes with heavily detailed faces, but scaling up in this fashion consumes significantly more VRAM.
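To put numbers on that scaling, here is a minimal sketch of the VRAM cost of an uncompressed RGBA8 texture at common resolutions. Real engines use block compression (BC1/BC7 and similar), which cuts these figures by roughly 4-8x, but the ratio holds either way: each doubling of resolution quadruples memory use, and a full mip chain adds about one third on top.

```python
def texture_vram_bytes(size: int, bytes_per_pixel: int = 4,
                       mipmaps: bool = True) -> int:
    """Memory for a square size x size texture at bytes_per_pixel
    (4 = uncompressed RGBA8). A full mip chain sums every power-of-two
    level down to 1x1, adding roughly a third over the base level."""
    base = size * size * bytes_per_pixel
    if not mipmaps:
        return base
    total, level = 0, size
    while level >= 1:
        total += level * level * bytes_per_pixel
        level //= 2
    return total

for k in (1024, 2048, 4096):
    mb = texture_vram_bytes(k) / (1024 * 1024)
    print(f"{k // 1024}K: {mb:.1f} MB")
```

A single uncompressed 4K texture with mips lands around 85 MB versus roughly 5 MB at 1K – which is why 4K is reserved for hero assets and close-up cut-scene work.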
One challenge that artists face is settling on those resolutions early in the process. If a team made a 1K texture and later decided to increase quality – and had the “budget” (system resources) to do so – the texture would traditionally have to be remade. Scaling upward from a rasterized file made in Photoshop, for instance, wouldn't work well. Some developers design textures at 4K in Photoshop, then scale down as needed – but this is inefficient with dev resources (files take longer to load and are larger on the server). Specialized tools like Substance allow “non-destructive” asset creation, so textures built from vectorized assets can be scaled between resolutions freely. They also keep all the maps – normal, specular, and so forth – in one place, which speeds up development.
Artists are given a “budget” by a game development team's engineers. This budget is non-monetary; it reflects the maximum available system resources, given the parameters that ultimately define the minimum and recommended system specs. For modelers and the people who pull scenes together, a “poly budget” dictates how many polygons can be crammed into each scene (suggested: read about z-buffers here).
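In code, a poly budget check is little more than summing the polygon counts of everything queued for a scene and flagging overruns. The sketch below is illustrative only – the mesh names and the budget figure are made up, not taken from any particular engine or studio.

```python
SCENE_POLY_BUDGET = 2_000_000  # hypothetical per-scene cap set by engineers

def check_poly_budget(meshes: dict,
                      budget: int = SCENE_POLY_BUDGET) -> tuple:
    """Sum polygon counts for a scene's meshes.
    Returns (total_polys, within_budget)."""
    total = sum(meshes.values())
    return total, total <= budget

# Hypothetical scene contents: mesh name -> polygon count.
scene = {"hero_ship": 300_000, "station": 1_200_000, "debris": 400_000}
total, ok = check_poly_budget(scene)
print(f"{total:,} polys, within budget: {ok}")
```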
Some objects – like Star Citizen's ships, with which we're technically familiar – might be composed of hundreds of thousands of polygons. To avoid overloading the CPU with draw calls (DX11) or the GPU with processing commands, developers use LOD (Level of Detail) scaling to ensure only what's needed is drawn. If a 300,000-poly ship is visible in the distance, perhaps several kilometers away, there's no reason to draw all of those polys and use the highest-detail textures.
A few things happen in this scenario: we won't be able to see the entire ship anyway, so polygons and art outside the camera frustum will be culled. Most tools also only work with what's in the z-buffer (aside from tech like voxel-accelerated ambient occlusion) – that is, what's presently visible to the camera. Anything else isn't drawn; it's simply not necessary. That immediately cuts a lot of complexity from the render queue, and does so without impacting quality for the player. The next step is lowering LOD: fine-detail polys are stripped from our sample ship (maybe some of the polygons that comprise a circular porthole can be axed, or small extrusions that add depth when viewed from a few feet away). Texture quality is also dropped; there's no reason to apply a 2K texture with fine scratch/damage detail to something several kilometers away. This saves VRAM, while the poly reductions help reduce core utilization and CPU draw call bottlenecking.
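The LOD half of that scenario can be sketched as a simple distance lookup: the renderer walks a per-asset LOD table and picks the first entry whose distance threshold covers the object. The thresholds, poly counts, and texture sizes below are invented for illustration – real tables are tuned per asset by artists.

```python
# Hypothetical LOD table for the sample ship: each entry is
# (max distance in meters, polygon count, texture resolution).
SHIP_LODS = [
    (50,    300_000, 2048),  # close-up: full detail
    (500,    60_000, 1024),
    (5_000,  10_000,  512),
]
SHIP_LOWEST = (2_000, 256)   # beyond the last threshold: a rough stand-in

def select_lod(distance_m: float, lods=SHIP_LODS, lowest=SHIP_LOWEST):
    """Pick (poly count, texture size) for an object at this distance.
    First matching distance band wins; past the table, use the stand-in."""
    for max_dist, polys, tex in lods:
        if distance_m <= max_dist:
            return polys, tex
    return lowest

print(select_lod(10))      # full 300K-poly model, 2K texture
print(select_lod(3_000))   # several km out: heavily reduced
```

Frustum and z-buffer culling happen before this step, so an object that never reaches the LOD selector costs nothing at all.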
And that's before discussing all the different types of “maps” used to create game graphics. View the video above for commentary from Substance Technical Artist Wes McDermott.
Editorial: Steve “Lelldorianx” Burke
Video Production: Keegan “HornetSting” Gallick