OpenGL* Settings Guide

Various graphics settings are available in PC games that can improve graphics quality and detail, or increase game performance. These settings let games take full advantage of fast computers while still allowing acceptable performance on slower ones. Some common settings are described below.
Note: These settings are properties of the game and are not set by the graphics driver. Not all games allow the user to change all of these settings; usually, a subset of these options is available on an options screen in the game.
When enabling or disabling different features, there is usually a tradeoff between graphics quality and game performance (or frame rate). Try enabling a feature or increasing the detail level, then check the quality and resulting performance to find the best balance for that particular game.
Screen resolution
Increasing the screen resolution improves graphics quality by increasing the number of pixels displayed at once, delivering sharper detail and reducing the stair-step patterns on the edges of polygons. In most cases, the higher the screen resolution, the lower the frame rate for the game. 800x600 and 1024x768 are common screen resolutions for games. Lower resolutions are good for network play, where the frame rate must stay high in order to compete with other players.
Color depth
Games can give the option to run in either 16-bit or 32-bit color depth. The color depth determines the amount of video memory required for each screen pixel. 32-bit color offers a larger range of colors, resulting in higher quality rendering, but the increased video memory bandwidth it needs reduces the frame rate for the game. With some games, the result is choppier performance.
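As a concrete illustration, the minimal C program below requests a full-screen mode with a particular resolution and color depth through GLUT's game-mode API. The "WxH:bpp@hz" mode-string format is standard GLUT; the specific mode chosen here is just an example.

    #include <GL/glut.h>

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glutSwapBuffers();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);

        /* Request 1024x768 at 32-bit color; a slower machine might
           use "800x600:16" instead to keep the frame rate up. */
        glutGameModeString("1024x768:32@60");
        if (glutGameModeGet(GLUT_GAME_MODE_POSSIBLE))
            glutEnterGameMode();
        else
            glutCreateWindow("windowed fallback");

        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }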
Some games also allow setting the color depth of textures. 32-bit color can dramatically improve the appearance of textures and reduce artifacts, like dithering and banding. The improvement is especially visible when three or more textures are applied to a polygon. A small performance decrease can be seen with 32-bit color textures.
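The following sketch shows one way a game might honor a 16-bit versus 32-bit texture setting, assuming legacy OpenGL and 32-bit RGBA source data; the function name and flag are hypothetical, not from any particular game.

    #include <GL/gl.h>

    /* Hypothetical texture-upload helper; "pixels" holds 32-bit RGBA data. */
    void upload_texture(const void *pixels, int w, int h, int use_32bit_textures)
    {
        /* GL_RGBA8 requests 8 bits per channel (32-bit color); GL_RGBA4
           asks the driver for a 16-bit format, saving memory at the cost
           of visible banding and dithering on smooth gradients. */
        GLint internal = use_32bit_textures ? GL_RGBA8 : GL_RGBA4;

        glTexImage2D(GL_TEXTURE_2D, 0, internal, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }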
Texture detail level
This usually refers to the size or number of textures used in the game. Large textures can take up a lot of video memory, but this can be alleviated by using texture compression, if the game supports it. One way a detail setting can work is sketched below.
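This sketch halves a texture with gluScaleImage before upload when the user selects a lower detail level. It is one plausible approach under those assumptions, not how any particular game implements its detail slider, and the names here are illustrative.

    #include <GL/glu.h>
    #include <stdlib.h>

    /* Illustrative helper: upload a texture at full or reduced detail. */
    void upload_with_detail(const unsigned char *pixels, int w, int h,
                            int low_detail)
    {
        if (low_detail) {
            /* Halve each dimension: one quarter of the video memory. */
            int sw = w / 2, sh = h / 2;
            unsigned char *scaled = malloc(sw * sh * 4);
            gluScaleImage(GL_RGBA, w, h, GL_UNSIGNED_BYTE, pixels,
                          sw, sh, GL_UNSIGNED_BYTE, scaled);
            gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, sw, sh,
                              GL_RGBA, GL_UNSIGNED_BYTE, scaled);
            free(scaled);
        } else {
            gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, w, h,
                              GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        }
    }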
Mipmapping
Mipmapping is a method of improving graphics quality and performance by using different mipmap levels, or texture sizes, depending on how far away the textured surface is. Trilinear mipmapping further improves quality by smoothing the transitions between mipmap levels. Anisotropic filtering improves quality by increasing the amount of detail visible when textures are viewed at oblique angles.
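In OpenGL terms, these three techniques map onto a mipmap chain, the GL_LINEAR_MIPMAP_LINEAR minification filter, and the EXT_texture_filter_anisotropic extension. A minimal setup sketch, assuming the texture object is already bound:

    #include <GL/gl.h>
    #include <GL/glu.h>
    #include <string.h>

    #ifndef GL_TEXTURE_MAX_ANISOTROPY_EXT
    #define GL_TEXTURE_MAX_ANISOTROPY_EXT 0x84FE
    #endif

    void setup_filtering(const void *pixels, int w, int h)
    {
        /* Build the full mipmap chain for the bound texture. */
        gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, w, h,
                          GL_RGB, GL_UNSIGNED_BYTE, pixels);

        /* Trilinear: linear filtering within a mip level plus linear
           blending between adjacent levels. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                        GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        /* Anisotropic filtering is an extension, so check for it first. */
        if (strstr((const char *)glGetString(GL_EXTENSIONS),
                   "GL_EXT_texture_filter_anisotropic"))
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 4.0f);
    }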
Depth buffer
The depth buffer (Z-buffer or W-buffer) is used in 3D games to determine whether the pixels of one polygon are in front of the pixels of another polygon. A higher precision depth buffer, such as 24-bit, prevents pixels from showing up in front of pixels they should fall behind. A 16-bit depth buffer gives higher performance because it requires much less video memory bandwidth.
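The depth precision a game actually received can be queried from a legacy OpenGL context; a small sketch:

    #include <GL/gl.h>
    #include <stdio.h>

    void report_depth_precision(void)
    {
        GLint depth_bits = 0;
        glGetIntegerv(GL_DEPTH_BITS, &depth_bits);
        /* 16 bits favors speed; 24 bits reduces poke-through artifacts
           on distant, closely spaced polygons. */
        printf("Depth buffer precision: %d bits\n", depth_bits);
    }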
Texture compression
Texture compression is a method of reducing the amount of memory and memory bandwidth required for textures, with a small reduction in visual quality. In certain games, where a low-resolution texture is used for a large surface (like a sky image), significant color banding can be seen if texture compression is enabled. Enabling texture compression together with a high texture detail level gives a good balance of quality and performance in many games.
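A sketch of letting the driver compress a texture via the generic compressed internal format from ARB_texture_compression (core in OpenGL 1.3), followed by a query to confirm whether compression actually occurred:

    #include <GL/gl.h>
    #include <stdio.h>

    #ifndef GL_COMPRESSED_RGBA
    #define GL_COMPRESSED_RGBA 0x84EE
    #endif
    #ifndef GL_TEXTURE_COMPRESSED
    #define GL_TEXTURE_COMPRESSED 0x86A1
    #endif

    void upload_compressed(const void *pixels, int w, int h)
    {
        /* Ask the driver to pick a compressed format for this texture. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        GLint compressed = GL_FALSE;
        glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
                                 GL_TEXTURE_COMPRESSED, &compressed);
        printf("Texture %s compressed by the driver\n",
               compressed ? "was" : "was not");
    }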
Lighting model
Common lighting models for games include lightmap and vertex lighting. Vertex lighting gives a fixed brightness to each corner of a polygon. The lightmap model adds an extra texture, called a lightmap, on top of each polygon, producing a variation of light and dark levels across the polygon. Because of the extra texture pass required, the lightmap model usually shows a lower frame rate than vertex lighting, but it also gives the games that use it a much richer look. All Intel chipsets with integrated graphics support both types of lighting.
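A sketch contrasting the two models in immediate-mode OpenGL: vertex lighting is just a color per corner, while a lightmap is bound on a second texture unit (ARB_multitexture, core in OpenGL 1.3) and modulated over the base texture. The texture ids are assumed to be already loaded; real engines structure this differently.

    #include <GL/gl.h>

    /* Vertex lighting: one brightness per corner, interpolated
       across the polygon by smooth shading. */
    void draw_vertex_lit_quad(void)
    {
        glBegin(GL_QUADS);
        glColor3f(0.2f, 0.2f, 0.2f); glVertex3f(-1.0f, -1.0f, 0.0f);
        glColor3f(1.0f, 1.0f, 1.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
        glColor3f(1.0f, 1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
        glColor3f(0.2f, 0.2f, 0.2f); glVertex3f(-1.0f,  1.0f, 0.0f);
        glEnd();
    }

    /* Lightmap model: base texture on unit 0, lightmap darkening or
       brightening it on unit 1; the extra texture pass is the cost. */
    void bind_lightmapped_surface(GLuint base_tex, GLuint lightmap_tex)
    {
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, base_tex);
        glEnable(GL_TEXTURE_2D);

        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, lightmap_tex);
        glEnable(GL_TEXTURE_2D);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    }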
Anti-aliasing
Anti-aliasing is used to reduce stair-step patterns on the edges of polygons in games, giving the edges a smoother, slightly blurred look. Full scene anti-aliasing renders each frame at a larger resolution, then scales it down to fit the actual screen resolution. This feature can lower the frame rate by a large amount while increasing quality only a small amount; usually, increasing the screen resolution is a better trade-off than turning on anti-aliasing. Anti-aliasing is only useful when a lot of extra graphics performance is available. Intel chipsets with integrated graphics do not support full scene anti-aliasing. Anti-aliased lines are supported in OpenGL* applications.
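For the anti-aliased lines mentioned above, the standard OpenGL setup is GL_LINE_SMOOTH plus blending; a minimal sketch:

    #include <GL/gl.h>

    void enable_line_antialiasing(void)
    {
        /* Smooth lines only take effect with blending enabled. */
        glEnable(GL_LINE_SMOOTH);
        glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }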