Graphics Processing Unit (GPU)

A GPU is a dedicated graphics rendering device for the personal computer. GPUs are very efficient at manipulating and displaying computer graphics. They are used for calculations related to computer graphics, as well as to accelerate the memory-intensive work of texture mapping and the polygon calculations used to display images.

There is one important term that needs to be understood before we proceed further:
Shader Programs.

Shader programs are sets of instructions, as in any other programming language, that operate on individual pixels. This is done to modify the attributes of each pixel in order to change its appearance or position. Basically, shader programs determine the final look of what you see on the screen. For example, in the case of CG animation, shader programs take the rendered 3D objects built from polygons and make them look more realistic.

Shader Programs are of 2 types:
1. Pixel Shaders
2. Vertex Shaders

Pixel shaders can be used to alter the lighting, colour and surface of each pixel. This in turn affects the overall colour, texture and shape of 3D objects built from these pixels. Pixel shaders help to smooth out 3D objects by giving them a more realistic texture.
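The idea of a pixel shader can be sketched in plain Python (this is not a real shader language; the function name and the simple diffuse lighting model are illustrative assumptions):

```python
# A minimal sketch of what a pixel shader conceptually does: compute a
# final colour per pixel from that pixel's attributes. A simple
# Lambertian (diffuse) lighting model is assumed here for illustration.

def pixel_shader(base_color, normal, light_dir):
    """Return the lit colour of one pixel.

    base_color : (r, g, b) tuple, each component in 0..1
    normal     : unit surface normal at the pixel, (x, y, z)
    light_dir  : unit vector pointing towards the light, (x, y, z)
    """
    # Diffuse intensity: cosine of the angle between normal and light,
    # clamped at zero so surfaces facing away from the light stay dark.
    intensity = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(c * intensity for c in base_color)

# A pixel facing the light head-on keeps its full colour:
print(pixel_shader((1.0, 0.5, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))
# (1.0, 0.5, 0.0)
```

A real GPU runs a program like this in parallel for every pixel fragment, which is why pixel shading dominates the frame time on modern hardware.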

Vertex shaders work by manipulating an object’s position in 3D space. A “vertex” is a point in 3D space, and one can map the position of an animated object by giving each of its vertices a value: the x, y and z coordinates. By manipulating these values, a vertex shader can create special effects such as “morphing.” In real-time graphics, like the kind you see in video games, shaders run on the graphics processor, making billions of computations a second in order to perform their specific tasks.
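The “morphing” effect mentioned above can be sketched as follows: a vertex shader blends each vertex between two target positions. The function name and the linear blend are illustrative assumptions, not a real graphics API:

```python
# A sketch of vertex-shader "morphing": each vertex is linearly
# interpolated between its position in shape A and its position in
# shape B, controlled by a blend factor t.

def morph_vertex(pos_a, pos_b, t):
    """Blend one vertex between shape A (t = 0.0) and shape B (t = 1.0)."""
    return tuple(a + (b - a) * t for a, b in zip(pos_a, pos_b))

# Halfway through the morph, a vertex sits midway between its two targets:
print(morph_vertex((0.0, 0.0, 0.0), (2.0, 4.0, 6.0), 0.5))  # (1.0, 2.0, 3.0)
```

Animating t from 0 to 1 over successive frames makes one shape appear to flow smoothly into the other.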


A 3D application uses the CPU in the system to generate geometry, as a collection of vertices, which is then sent to the GPU for processing.

(Vertices is the plural of vertex.)

A vertex consists of attributes that define its position in 3D space, along with anything else the developer wants to define, such as a colour for the vertex or some other relevant piece of information.
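On the CPU side, such a vertex might be represented as a small record before being handed to the driver. The field names below are illustrative assumptions, not any particular API’s layout:

```python
# A sketch of a CPU-side vertex: a position plus whatever optional
# attributes the developer chooses to attach (colour, texture
# coordinate, etc.). Field names are illustrative.

from dataclasses import dataclass, field

@dataclass
class Vertex:
    position: tuple                                  # (x, y, z) in 3D space
    color: tuple = (1.0, 1.0, 1.0)                   # optional per-vertex colour
    uv: tuple = (0.0, 0.0)                           # optional texture coordinate

# The application assembles a list of vertices (here, one triangle)
# to send to the GPU for processing:
triangle = [
    Vertex((0.0, 1.0, 0.0)),
    Vertex((-1.0, -1.0, 0.0), color=(1.0, 0.0, 0.0)),
    Vertex((1.0, -1.0, 0.0)),
]
print(len(triangle))  # 3
```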

To begin with, the CPU interfaces with the GPU’s driver and sends it the collection of vertices to start the rendering process using the vertex shader unit. The vertex lists reside in memory accessible to the GPU, which processes them accordingly.

The vertex shader program processes this data and alters the attributes on a vertex-by-vertex basis before they are passed to the next step.

To generate an image, the GPU has to perform the following five steps:-
1. Rasterization of vector data into pixels
2. Pixel processing
3. Displaying the aliased output (rendered pixels) on the screen
4. Anti-aliasing
5. Final output by the GPU


The process of rasterisation takes the geometry processed by the vertex hardware and converts it into screen pixels to be processed by the pixel shader (or, more accurately, pixel fragment) hardware. The GPU performs calculations on this big list of geometry each frame, analyses it per vertex, then outputs pixel fragments for the pixel units to work on. The “fragment” designation comes from the fact that, depending on how the geometry is to appear on screen, parts of the displayed figures can lie inside a pixel on your screen without totally covering it.
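The core decision rasterisation makes, which pixels a piece of geometry covers, can be sketched with a simple inside-triangle test. Real GPUs use heavily optimised edge equations and also handle partial coverage; this sketch only tests pixel centres, and all names are illustrative:

```python
# A much-simplified sketch of rasterisation: deciding which pixel
# centres fall inside a 2D triangle using signed-area edge tests.

def edge(a, b, p):
    # Signed area: positive if point p lies to the left of edge a->b.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(v0, v1, v2, width, height):
    """Return the (x, y) pixels whose centres lie inside the triangle."""
    covered = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)   # sample at the pixel centre
            w0, w1, w2 = edge(v1, v2, p), edge(v2, v0, p), edge(v0, v1, p)
            # Inside if all three edge tests agree in sign.
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.append((x, y))
    return covered

# A right triangle covering the lower-left of a 5x5 pixel grid:
pixels = rasterize((0, 0), (4, 0), (0, 4), 5, 5)
print(len(pixels))  # 10
```

Pixels along the triangle’s hypotenuse are only partially covered by the geometry, which is exactly the situation that gives rise to “fragments” (and, later, to aliasing).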


Pixel processing usually involves the most complex calculations of the graphics rendering process on a modern GPU, and so usually takes the most time. It is handled by the pixel shader program.


Processed pixels are stored in the card’s memory, ready to be resolved into completed screen pixels for output onto your display. This task is handled by a GPU unit called the ROP; a modern GPU implements a number of ROPs. Apart from resolving and drawing pixels on the screen, the ROP hardware also performs a number of optimisations to save memory bandwidth when reading and writing pixels to and from the frame buffer, such as colour compression (even saving 1 byte of colour data per pixel is a sizeable saving in bandwidth terms).


Anti-aliasing works by effectively filtering a high-frequency signal. For example, consider a black line against a white background (the abrupt black-to-white transition being the high-frequency signal). Filtering the signal produces grey along the edge, providing a better representation of the data. This is known as the anti-aliasing effect.
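One common way to do this filtering is supersampling: take several sub-pixel samples and average them, so a pixel straddling the black/white edge resolves to grey. A minimal sketch (the function name is illustrative):

```python
# A sketch of supersampled anti-aliasing: a pixel's final value is the
# average of several sub-pixel samples taken within it.

def resolve_pixel(samples):
    """Average a list of greyscale samples (0.0 = black, 1.0 = white)."""
    return sum(samples) / len(samples)

# A pixel half-covered by a black line against a white background
# resolves to mid-grey instead of being forced to pure black or white:
print(resolve_pixel([0.0, 0.0, 1.0, 1.0]))  # 0.5
```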


This set of steps, from geometry generation through shading, rasterisation and pixel processing, to finally drawing the fully rendered output, is the render process of a modern GPU.

If it’s a digital display being rendered to, the frame buffer data is converted into a binary representation and flashed at high speed to the digital monitor. If it’s an analogue display, the colour data of the pixels is converted to an analogue signal across the scan lines by a DAC (digital-to-analogue converter). This process is repeated for as many frames as you want to draw.
