Per-pixel lighting

In computer graphics, per-pixel lighting refers to any technique for lighting an image or scene that calculates illumination for each pixel on a rendered image. This is in contrast to other popular methods of lighting such as vertex lighting, which calculates illumination at each vertex of a 3D model and then interpolates the resulting values over the model's faces to calculate the final per-pixel color values.
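
As an illustration, the following minimal sketch (in Python, with illustrative names, assuming a single directional light and a Lambertian diffuse term) contrasts the two approaches at one point inside a triangle: vertex lighting evaluates the lighting at the three vertices and interpolates the results, while per-pixel lighting interpolates the normal and evaluates the lighting at the pixel itself:

    import numpy as np

    def lambert(normal, light_dir):
        # Lambertian diffuse term: clamped cosine between normal and light.
        n = normal / np.linalg.norm(normal)
        return max(0.0, float(np.dot(n, light_dir)))

    # One point inside a triangle, expressed as barycentric weights.
    weights = np.array([0.2, 0.3, 0.5])
    vertex_normals = np.array([[0.0, 1.0, 0.0],
                               [1.0, 0.0, 0.0],
                               [0.0, 0.0, 1.0]])
    light_dir = np.array([0.0, 1.0, 0.0])  # unit vector toward the light

    # Vertex lighting (Gouraud): light each vertex, then interpolate the results.
    per_vertex = sum(w * lambert(n, light_dir)
                     for w, n in zip(weights, vertex_normals))

    # Per-pixel lighting: interpolate the normal, then light the interpolated value.
    per_pixel = lambert(weights @ vertex_normals, light_dir)

    print(per_vertex, per_pixel)  # 0.2 vs. roughly 0.32: the results differ

The two values differ because lighting is a nonlinear function of the normal; the discrepancy is most visible with specular highlights, which vertex lighting can miss entirely between vertices.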

Per-pixel lighting is commonly used together with techniques such as blending, alpha blending, alpha to coverage, anti-aliasing, texture filtering, clipping, hidden-surface determination, Z-buffering, stencil buffering, shading, mipmapping, normal mapping, bump mapping, displacement mapping, parallax mapping, shadow mapping, specular mapping, shadow volumes, high-dynamic-range rendering, ambient occlusion (screen space ambient occlusion, screen space directional occlusion, ray-traced ambient occlusion), ray tracing, global illumination, and tessellation. Each of these techniques contributes additional data about the surface being lit, the scene, or the light sources, shaping the final look and feel of the surface.

Most modern video game engines implement lighting using per-pixel techniques instead of vertex lighting to achieve increased detail and realism. The id Tech 4 engine, used to develop such games as Brink and Doom 3, was one of the first game engines to implement completely per-pixel shading. All versions of CryEngine, the Frostbite engine, and Unreal Engine, among others, also implement per-pixel shading techniques.

Deferred shading is a more recent development in per-pixel lighting, notable for its use in the Frostbite engine and Battlefield 3. Deferred shading techniques can render potentially large numbers of small lights inexpensively, whereas other per-pixel lighting approaches require full-screen calculations for each light in a scene, regardless of size.[1]
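
The saving comes from bounding each light's influence. As a rough sketch (pure Python, simple pinhole projection, illustrative names; a real renderer would use a tighter, fully conservative bound), a point light's sphere of influence can be projected to a small screen rectangle so that only those pixels are shaded for that light:

    def light_bounds(light_view_pos, radius, focal, width, height):
        # Approximate screen-space rectangle covered by a point light's
        # sphere of influence; camera looks down -z in view space.
        x, y, z = light_view_pos
        z = min(z, -1e-3)                  # clamp lights at/behind the camera
        cx = width / 2 + focal * x / -z    # projected light center
        cy = height / 2 + focal * y / -z
        r = focal * radius / -z            # approximate projected radius
        x0, x1 = max(0, int(cx - r)), min(width, int(cx + r) + 1)
        y0, y1 = max(0, int(cy - r)), min(height, int(cy + r) + 1)
        return x0, y0, x1, y1              # only these pixels need shading

    # A small, distant light touches only a small rectangle of pixels.
    print(light_bounds((0.0, 0.0, -10.0), 0.5, focal=800, width=1280, height=720))
    # (600, 320, 681, 401) -- a tiny fraction of a 1280x720 frame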

History

Although personal computers and video hardware have only recently become powerful enough to perform full per-pixel shading in real-time applications such as games, many of the core concepts used in per-pixel lighting models have existed for decades.

Frank Crow published a paper describing the theory of shadow volumes in 1977.[2] In its modern, hardware-accelerated form, this technique uses the stencil buffer to mark the areas of the screen that correspond to surfaces lying inside a "shadow volume", a shape representing the volume of space eclipsed from a light source by some object. After the scene is rendered to buffers, the areas marked in the stencil buffer are shaded as being in shadow.
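
The per-pixel counting logic of the later depth-pass stencil variant can be sketched as follows (a CPU-side Python illustration with made-up inputs, not the actual GPU stencil pipeline):

    # Depth-pass stencil counting for a single pixel. Each shadow-volume
    # face covering the pixel carries a depth and a facing flag; the scene
    # depth comes from an already-rendered Z-buffer.
    def in_shadow(volume_faces, scene_depth):
        stencil = 0
        for depth, is_front in volume_faces:
            if depth < scene_depth:               # face passes the depth test
                stencil += 1 if is_front else -1  # increment/decrement rule
        return stencil != 0  # nonzero count: pixel lies inside a volume

    # The ray enters a volume before hitting the surface, but the matching
    # back face lies behind the surface, so the surface point is shadowed.
    print(in_shadow([(2.0, True), (9.0, False)], scene_depth=5.0))  # True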

Jim Blinn first introduced the idea of bump mapping in a 1978 SIGGRAPH paper.[3] Blinn pointed out that the earlier idea of unlit texture mapping proposed by Edwin Catmull was unrealistic for simulating rough surfaces. Instead of mapping a texture onto an object to simulate roughness, Blinn proposed calculating the degree of lighting a point on a surface should receive based on an established "perturbation" of the normals across the surface.
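
In modern tangent-space terms, the perturbation is often approximated by tilting the unperturbed normal against the gradient of a height (bump) function; the Python sketch below uses this common approximation rather than Blinn's exact surface-derivative formulation:

    import numpy as np

    def bump_normal(dh_du, dh_dv):
        # Tangent-space normal from height-map slopes: the flat normal
        # (0, 0, 1) is tilted against the gradient of the height function.
        n = np.array([-dh_du, -dh_dv, 1.0])
        return n / np.linalg.norm(n)

    print(bump_normal(0.0, 0.0))  # [0. 0. 1.]: a flat patch keeps its normal
    print(bump_normal(0.5, 0.0))  # tilts toward -u, away from the rising slope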

Hardware rendering

Real-time applications, such as video games, usually implement per-pixel lighting with pixel shaders, allowing the GPU hardware to compute the effect. The scene to be rendered is first rasterized onto a number of buffers storing different types of data to be used in rendering the scene, such as depth, normal direction, and diffuse color. Then, the data is passed into a shader and used to compute the final appearance of the scene, pixel by pixel.
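
The work a simple diffuse pixel shader performs can be imitated by a minimal per-fragment function (Python with illustrative names; real pixel shaders run on the GPU in a shading language such as GLSL or HLSL):

    import numpy as np

    def shade_pixel(albedo, normal, light_dir, light_color):
        # Inputs are the per-pixel interpolated values a pixel shader receives.
        n = normal / np.linalg.norm(normal)  # renormalize after interpolation
        n_dot_l = max(0.0, float(np.dot(n, light_dir)))
        return albedo * light_color * n_dot_l  # Lambertian diffuse only

    color = shade_pixel(albedo=np.array([0.8, 0.6, 0.4]),
                        normal=np.array([0.1, 0.2, 0.97]),
                        light_dir=np.array([0.0, 0.0, 1.0]),
                        light_color=np.array([1.0, 1.0, 1.0]))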

Deferred shading is a per-pixel shading technique that has recently become feasible for games.[4] With deferred shading, a "g-buffer" is used to store all the terms needed to shade the final scene at the pixel level. The format of this data varies from application to application depending on the desired effect, and can include normal data, positional data, specular data, diffuse data, emissive maps, and albedo, among others. Using multiple render targets, all of this data can be rendered into the g-buffer in a single pass, and a shader can calculate the final color of each pixel based on the data from the g-buffer in a final "deferred pass".
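
To make the data flow concrete, the following toy software deferred pass shades a tiny numpy "frame" from a g-buffer (the layout, numbers, and names are illustrative; real g-buffers live in GPU render targets):

    import numpy as np

    H, W = 4, 4  # tiny frame for illustration

    # G-buffer: per-pixel attributes written in a single geometry pass.
    albedo = np.full((H, W, 3), 0.8)                    # surface color
    normal = np.zeros((H, W, 3)); normal[..., 2] = 1.0  # all pixels face +z
    position = np.zeros((H, W, 3))
    position[..., 0], position[..., 1] = np.meshgrid(np.arange(W), np.arange(H))

    # Point lights as (position, color, radius): small radius, few pixels.
    lights = [(np.array([1.0, 1.0, 1.0]), np.array([1.0, 0.9, 0.8]), 2.5),
              (np.array([3.0, 2.0, 1.0]), np.array([0.2, 0.4, 1.0]), 2.0)]

    # Deferred pass: one loop over the g-buffer per light, with cheap
    # distance-based rejection instead of re-rendering the scene geometry.
    frame = np.zeros((H, W, 3))
    for light_pos, light_color, radius in lights:
        to_light = light_pos - position
        dist = np.linalg.norm(to_light, axis=-1, keepdims=True)
        mask = dist < radius                 # pixels this light can reach
        l_dir = np.where(mask, to_light / np.maximum(dist, 1e-6), 0.0)
        n_dot_l = np.clip((normal * l_dir).sum(-1, keepdims=True), 0.0, None)
        frame += albedo * light_color * n_dot_l * mask

    print(frame[1, 1])  # the pixel under the first light picks up its color

Because each light only touches the pixels it can actually reach, adding more small lights adds roughly proportional, local work rather than another full-scene pass.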

Software rendering

Per-pixel lighting is also performed in software by many high-end commercial rendering applications, which typically do not render at interactive frame rates. This is called offline rendering or software rendering. Nvidia's mental ray rendering software, which was integrated with suites such as Autodesk's Softimage, is a well-known example.

Notes

  1. "Forward Rendering vs. Deferred Rendering". https://gamedevelopment.tutsplus.com/articles/forward-rendering-vs-deferred-rendering--gamedev-12342. 
  2. Crow, Franklin C: "Shadow Algorithms for Computer Graphics", Computer Graphics (SIGGRAPH '77 Proceedings), vol. 11, no. 2, 242-248.
  3. Blinn, James F. "Simulation of Wrinkled Surfaces", Computer Graphics (SIGGRAPH '78 Proceedings, vol. 12, no. 3, 286-292.
  4. Shawn Hargreaves|Hargreaves, Shawn (pl) and Mark Harris: "6800 Leagues Under the Sea: Deferred Shading". NVidia Developer Assets.