What are vertex and pixel shaders?


What is the difference between them? Which one is better?


A Pixel Shader is a GPU (Graphics Processing Unit) component that can be programmed to operate on a per-pixel basis and take care of stuff like lighting and bump mapping.

A Vertex Shader is also a GPU component, and like pixel shaders it is programmed using a specific assembly-like language, but it is oriented to the scene geometry and can do things like adding cartoony silhouette edges to objects, etc.

Neither is better than the other; they each have their specific uses. Most modern graphics cards supporting DirectX 9 or better include these capabilities.

There are multiple resources on the web for gaining a better understanding of how to use these things. NVIDIA and ATI in particular are good sources of documentation on this topic.

Vertex and Pixel shaders provide different functions within the graphics pipeline. Vertex shaders take and process vertex-related data (positions, normals, texcoords).

Pixel (or more accurately, Fragment) shaders take values interpolated from those processed in the Vertex shader and generate pixel fragments. Most of the "cool" stuff is done in pixel shaders. This is where things like texture lookup and lighting take place.
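As a minimal sketch of that division of labour, here is an HLSL vertex/pixel shader pair. The names (`VSMain`, `PSMain`, `gWorldViewProj`, the texture bindings) and the hard-coded light are illustrative assumptions, not from any particular engine:

```hlsl
// Minimal HLSL sketch (Shader Model 4/5 style). All names are illustrative.
cbuffer PerObject : register(b0)
{
    float4x4 gWorldViewProj;   // supplied by the application
};

Texture2D    gDiffuseMap : register(t0);
SamplerState gSampler    : register(s0);

struct VSIn  { float3 pos : POSITION;    float3 normal : NORMAL; float2 uv : TEXCOORD0; };
struct VSOut { float4 pos : SV_POSITION; float3 normal : NORMAL; float2 uv : TEXCOORD0; };

// Vertex shader: runs once per vertex, transforms the position into clip
// space and passes the normal and texcoords through for interpolation.
VSOut VSMain(VSIn vin)
{
    VSOut vout;
    vout.pos    = mul(float4(vin.pos, 1.0f), gWorldViewProj);
    vout.normal = vin.normal;
    vout.uv     = vin.uv;
    return vout;
}

// Pixel shader: runs once per covered pixel, using the interpolated values
// for a texture lookup and a simple diffuse lighting term.
float4 PSMain(VSOut pin) : SV_Target
{
    float3 lightDir = normalize(float3(0.0f, 1.0f, -1.0f)); // hard-coded light
    float  diffuse  = saturate(dot(normalize(pin.normal), lightDir));
    return gDiffuseMap.Sample(gSampler, pin.uv) * diffuse;
}
```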

In terms of development, a pixel shader is a small program that operates on each pixel individually; similarly, a vertex shader operates on each vertex individually.

These can be used to create special effects, shadows, lighting, etc...

Since each Pixel/Vertex is operated on individually these shaders lend themselves to the highly parallel architecture of modern graphics processors.

DirectX 10 and OpenGL 3 introduced the Geometry Shader as a third type.

In rendering pipeline order -

Vertex Shader - Takes a single point and can adjust it. Can be used to work out complex vertex lighting calculations as a setup for the next stage and/or warp the points around (wobble, scale, etc.).

each resulting primitive gets passed to the

Geometry Shader - Takes each transformed primitive (triangle, etc) and can perform calculations on it. This can add new points, take them away or move them as required. This can be used to add or remove levels of detail dynamically from a single base mesh, create mathematical meshes based on a point (for complex particle systems) and other similar tasks.

each resulting primitive gets scanline converted and each pixel the span covers gets passed through the

Pixel Shader (Fragment Shader in OpenGL) - Calculates the colour of a pixel on the screen based on what the vertex shader passes in, bound textures and user-added data. This cannot read the current screen at all, just work out what colour/transparency that pixel should be for the current primitive.

those pixels then get put on the current draw buffer (screen, backbuffer, render-to-texture, whatever)
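To make the geometry-shader stage above concrete, here is a hedged HLSL sketch that expands each incoming point into a quad, the classic particle-system use mentioned earlier. All names and the fixed particle size are illustrative assumptions:

```hlsl
struct GSIn  { float4 pos : SV_POSITION; };
struct GSOut { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

// Geometry shader: receives one point primitive and emits up to four
// vertices, producing a quad (as a triangle strip) around the point.
[maxvertexcount(4)]
void GSMain(point GSIn input[1], inout TriangleStream<GSOut> stream)
{
    const float halfSize = 0.05f;   // illustrative particle half-size
    float2 offsets[4] = { float2(-1,-1), float2(-1,1), float2(1,-1), float2(1,1) };

    [unroll]
    for (int i = 0; i < 4; ++i)
    {
        GSOut v;
        v.pos = input[0].pos + float4(offsets[i] * halfSize, 0.0f, 0.0f);
        v.uv  = offsets[i] * 0.5f + 0.5f;  // map each corner to [0,1] texcoords
        stream.Append(v);
    }
}
```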

All shaders can access global data such as the world view matrix, and the developer can pass in simple variables for them to use for lighting or any other purpose. Shaders were originally written in an assembler-like language, but modern DirectX and OpenGL versions have built-in compilers for high-level C-like languages, called HLSL and GLSL respectively. NVIDIA also has a shader compiler called Cg that works with both APIs.
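A hedged sketch of how those application-supplied globals look in HLSL, combined with the vertex "wobble" idea from the pipeline walkthrough. The constant buffer layout and names (`gWorldViewProj`, `gTime`, the amplitude) are illustrative assumptions:

```hlsl
// Application-supplied globals, visible to the shader as a constant buffer.
cbuffer PerFrame : register(b0)
{
    float4x4 gWorldViewProj;  // world-view-projection matrix set by the app
    float    gTime;           // elapsed time, updated by the app each frame
};

// Vertex shader that displaces each vertex vertically with a sine wave,
// making the mesh "wobble" without touching the mesh data itself.
float4 VSWobble(float3 pos : POSITION) : SV_POSITION
{
    float3 p = pos;
    p.y += 0.1f * sin(gTime + pos.x * 4.0f);
    return mul(float4(p, 1.0f), gWorldViewProj);
}
```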


There are now three new shaders used in DirectX 11 for tessellation. The new complete shader order is Vertex->Hull->Tessellation->Domain->Geometry->Pixel. I haven't used these new ones yet, so I don't feel qualified to describe them accurately.

DirectX Specific:

Shader:

A set of programs that implements additional graphical features for objects, beyond what the fixed rendering pipeline defines. Because of this we can have our own graphical effects according to our needs; i.e., we are no longer limited to predefined "fixed" operations.

HLSL: (High-Level Shading Language):

HLSL is a C-like programming language used to implement shaders (pixel shaders / vertex shaders).

Vertex Shaders:

A vertex shader is a program executed on the graphics card's GPU which operates on each vertex individually. This lets us write our own custom algorithms for processing vertices.

Pixel Shaders:

A pixel shader is a program executed on the graphics card's GPU during the rasterization process, once for each pixel. It gives us the ability to access and manipulate individual pixels directly. This direct access to pixels allows us to achieve a variety of special effects, such as multitexturing, per-pixel lighting, depth of field, cloud simulation, fire simulation, and sophisticated shadowing techniques.
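As an illustrative sketch of two of the effects listed above (multitexturing and per-pixel lighting), here is a hedged HLSL pixel shader; the resource names and the modulate-2x blend are assumptions, not a fixed API:

```hlsl
Texture2D    gBaseMap   : register(t0);   // illustrative texture bindings
Texture2D    gDetailMap : register(t1);
SamplerState gSampler   : register(s0);

float3 gLightDir;      // direction and colour set by the application
float3 gLightColor;

struct PSIn { float4 pos : SV_POSITION; float3 normal : NORMAL; float2 uv : TEXCOORD0; };

// Pixel shader: blends two textures (multitexturing) and applies a
// per-pixel Lambert diffuse term using the interpolated normal.
float4 PSMain(PSIn pin) : SV_Target
{
    float4 base   = gBaseMap.Sample(gSampler, pin.uv);
    float4 detail = gDetailMap.Sample(gSampler, pin.uv * 8.0f); // tiled detail layer
    float4 albedo = base * detail * 2.0f;   // common "modulate 2x" blend

    float diffuse = saturate(dot(normalize(pin.normal), -normalize(gLightDir)));
    return float4(albedo.rgb * gLightColor * diffuse, albedo.a);
}
```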

Note: Both vertex shaders and pixel shaders must be compiled against a specific compiler version (shader profile) before use. Compilation is done much like calling any API, passing the required parameters such as the file name, the entry-point function, and the target profile.

There used to be a Flash demo showing the planes of exposure, from close to the viewer out to the distance of the reflection. It represented reflected light travelling back to the viewer through any other plane in its way; anything on those planes can be used as-is, or as a parameterized data value that alters the returning light or colour. It is the simplest explanation of the idea you will find, but the demo is no longer executable for security reasons. A shader that rendered some number of vertical planes along the Z axis and showed the interaction with those planes would make the concept clear: the scene could be drawn at an angle, as a cutaway concept view, so the dissection is apparent. In other words, the view would be an angular cross-section of what the pipeline sees versus what the viewer sees. A tool built around this, where the developer inserts the necessary planes along the Z axis ad hoc and a viewing window off to the side shows the rendered result, would be genuinely useful for teaching how shaders work.