By default, Normalized Device Coordinates (NDC) form a left-handed system.
The glDepthRange default is [0, 1] (near, far), which makes the +z axis point into the screen; with +x to the right and +y up, that is a left-handed system.
Changing the depth range to [1, 0] will make the system right-handed.
Quoting a previous answer from Nicol: (the strike-through is my work, explained below)
I'm surprised nobody mentioned something: OpenGL works in a left-handed coordinate system too. At least, it does when you're working with shaders and use the default depth range.
Once you throw out the fixed-function pipeline, you deal directly with "clip-space". The OpenGL Specification defines clip-space as a 4D homogeneous coordinate system. When you follow the transforms through normalized device coordinates, and down to window space, you find this.
Window space is in the space of a window's pixels. The origin is in the lower-left corner, with +Y going up and +X going right. That sounds very much like a right-handed coordinate system. But what about Z?
The default depth range (glDepthRange) sets the near Z value to 0 and the far Z value to one. So the +Z is going away from the viewer.
That's a left-handed coordinate system. Yes, you can change the depth test from GL_LESS to GL_GREATER and change the glDepthRange from [0, 1] to [1, 0]. But the default state of OpenGL is to work in a left-handed coordinate system. And none of the transforms necessary to get to window space from clip-space negate the Z. So clip-space, the output of the vertex (or geometry) shader is a left-handed space (kinda. It's a 4D homogeneous space, so it's hard to pin down the handedness).
In the fixed-function pipeline, the standard projection matrices (produced by glOrtho, glFrustum and the like) all transform from a right-handed space to a left-handed one. They flip the meaning of Z; just check the matrices they generate. In eye space, +Z moves towards the viewer; in post-projection space, it moves away.
I suspect Microsoft (and GLide) simply didn't bother to perform the negation in their projection matrices.
I did strike one part since it diverged from my findings.
Either changing the DepthRange, or changing the DepthFunc and using ClearDepth(0), works on its own; but when both are applied they cancel each other out, back to a left-handed system.
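To make that concrete, here is a minimal sketch (assuming a current OpenGL context with otherwise default state; the helper name is just for illustration) of the two independent ways to flip window-space Z, and why combining them is a no-op:

```c
#include <GL/gl.h>

/* Assumes a current OpenGL context with otherwise default state:
 * glDepthRange(0, 1), glDepthFunc(GL_LESS), glClearDepth(1). */
void make_window_space_right_handed(int flip_range)
{
    if (flip_range) {
        /* Option A: reverse the depth range; keep GL_LESS and clear depth 1.0. */
        glDepthRange(1.0, 0.0);
    } else {
        /* Option B: keep the default [0, 1] range; flip the test and the clear value. */
        glDepthFunc(GL_GREATER);
        glClearDepth(0.0);
    }
    /* Applying BOTH options reverses Z twice and lands you back in the
     * default left-handed configuration. */
}
```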
OpenGL is right handed in object space and world space.
But in window space (aka screen space) we are suddenly left handed.
How did this happen?
The way we get from right-handed to left-handed is a negative z scaling entry in the glOrtho or glFrustum projection matrices. Scaling z by -1 (while leaving x and y as they were) has the effect of changing the handedness of the coordinate system.
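A quick way to convince yourself of that claim before looking at the actual matrices: a transform flips handedness exactly when its determinant is negative, and for a pure scale the determinant is just the product of the scale factors. A tiny standalone C check (nothing OpenGL-specific):

```c
#include <stdio.h>

int main(void)
{
    /* Scale of (1, 1, -1): for a diagonal matrix the determinant is the
     * product of the diagonal entries. A negative determinant is a mirror,
     * i.e. the handedness of the coordinate system is flipped. */
    double det = 1.0 * 1.0 * -1.0;
    printf("det = %+.1f -> %s\n", det,
           det < 0.0 ? "handedness flipped" : "handedness preserved");
    return 0;
}
```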
For glFrustum, far and near are supposed to be positive, with far > near. Say far = 1000 and near = 1. Then the matrix entry C = -(far + near) / (far - near) = -(1001) / (999) = -1.002.
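For reference, a small standalone C program reproduces that Z row of the glFrustum matrix by hand (the variable names are mine):

```c
#include <stdio.h>

int main(void)
{
    double n = 1.0, f = 1000.0;   /* near and far, both positive */

    /* Z row of the glFrustum matrix:
     *   C = -(f + n) / (f - n),   D = -2 f n / (f - n)
     * The -1 in the bottom row copies -z_eye into w_clip. */
    double C = -(f + n) / (f - n);
    double D = -2.0 * f * n / (f - n);

    printf("C = %.3f, D = %.3f\n", C, D);   /* prints C = -1.002, D = -2.002 */
    return 0;
}
```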
The near and far planes for glOrtho, however, are specified differently. Its near parameter is defined as:
zNear: The distance to the nearer depth clipping plane. This distance is negative if the plane is to be behind the viewer.
and far:
zFar: The distance to the farther depth clipping plane. This distance is negative if the plane is to be behind the viewer.
Here we have a typical canonical view volume: the NDC cube running from -1 to +1 on each axis.
Because the z multiplier is -2/(far - near), the minus sign effectively scales z by -1. This means that z is turned left-handed during the viewing transformation, unbeknownst to most people, since they simply work in OpenGL as a "right-handed" coordinate system.
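As another hand check, here is the Z row of the glOrtho matrix in a short standalone C program; even with all-positive near and far values the z scale comes out negative, which is exactly the handedness flip described above:

```c
#include <stdio.h>

int main(void)
{
    double n = 1.0, f = 1000.0;

    /* Z row of the glOrtho matrix: z_ndc = z_scale * z_eye + z_offset */
    double z_scale  = -2.0 / (f - n);
    double z_offset = -(f + n) / (f - n);

    printf("z_scale = %f, z_offset = %f\n", z_scale, z_offset);
    /* z_scale is negative, so the projection mirrors the Z axis. */
    return 0;
}
```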
The book "WebGl Programming Guide" by Kouichi Matsuda spends almost ten pages on "WebGl/OpenGl: Left or Right Handed?"
According to the book:
- In practice, most people use a right-handed system.
- OpenGL actually is a left-handed system internally.
- Deeper down it's actually neither: at the very bottom OpenGL doesn't care about the z-value. The order in which you draw things determines what is drawn on top (draw a triangle first, then a quad, and the quad overwrites the triangle).
I don't fully agree with the "it's neither" but that's probably a philosophical question anyway.
Just note that OpenGL only knows NDC, and that is a left-handed coordinate system.
No matter which coordinate system you use, left-handed or right-handed, everything has to be mapped into NDC. If you like, you can treat world space as a left-handed coordinate system.
Why do we usually use a right-handed coordinate system in world space?
I think it's simply convention. Perhaps it also serves to distinguish OpenGL from DirectX.
OpenGL is definitely left-handed. You see a lot of tutorials stating the opposite because they negate the z-value in the projection matrix. When the final vertex positions are computed in the vertex shader, that negation converts the vertices you pass from the client side (right-handed coordinates) into left-handed ones, and the vertices are then passed on to the geometry and fragment shaders. If you use a right-handed coordinate system on the client side, OpenGL doesn't care; it only knows the normalized coordinate system, which is left-handed.
Edit: If you don't trust me, just experiment in your vertex shader by adding a translation matrix, and you can easily see whether OpenGL is left-handed or not.
If you use OpenGL's built-in projection and transformation functions, the movements you observe on screen follow the rules of a right-handed coordinate system. For example, if an object in front of your view is translated in the positive z direction, it will move towards you.
The depth buffer is quite the opposite, and this is where NDC (Normalized Device Coordinates) come into play. Since passing GL_LESS to glDepthFunc means that fragments are drawn when they are nearer to you than what's already in the depth buffer, fragments effectively live in a left-handed coordinate system.
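A concrete way to see it: map two NDC z values to window depth by hand, using the fixed mapping from the spec with the default glDepthRange(0, 1), and apply GL_LESS. This is just the arithmetic, done outside of OpenGL:

```c
#include <stdio.h>

/* Window depth from NDC z under glDepthRange(n, f):
 * z_window = (f - n)/2 * z_ndc + (f + n)/2 */
static double window_depth(double z_ndc, double n, double f)
{
    return (f - n) * 0.5 * z_ndc + (f + n) * 0.5;
}

int main(void)
{
    /* Default depth range [0, 1]. */
    double near_frag = window_depth(-0.9, 0.0, 1.0);   /* 0.05 */
    double far_frag  = window_depth(+0.9, 0.0, 1.0);   /* 0.95 */

    /* With GL_LESS the smaller window depth wins, i.e. the fragment at
     * NDC z = -0.9. Larger NDC z means farther away: +z points into the
     * screen, which is the left-handed part. */
    printf("z = -0.9 -> %.2f, z = +0.9 -> %.2f\n", near_frag, far_frag);
    return 0;
}
```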
There's one more coordinate system, and that's the viewport! The viewport's coordinate system is such that +x points to the right and +y points down. I think handedness is moot by then, since we're only dealing with x and y.
Lastly, gluLookAt has to negate the look-at vector under the hood. Since the math builds a vector pointing towards the object being looked at, and the camera looks down -z, the look-at vector must be negated so that it aligns with the camera.
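A minimal sketch of that step (the standard lookAt construction; the function and variable names are mine, and the translation part is omitted): the forward vector f = normalize(center - eye) ends up negated in the third row of the rotation, so the camera looks down its local -z.

```c
#include <math.h>
#include <stdio.h>

/* Sketch of the rotation part gluLookAt builds (translation omitted). */
static void look_at_rotation(const double eye[3], const double center[3],
                             const double up[3], double m[3][3])
{
    double f[3], s[3], u[3], len;
    int i;

    /* f = normalize(center - eye): points at the thing we are looking at. */
    for (i = 0; i < 3; i++) f[i] = center[i] - eye[i];
    len = sqrt(f[0]*f[0] + f[1]*f[1] + f[2]*f[2]);
    for (i = 0; i < 3; i++) f[i] /= len;

    /* s = normalize(f x up) */
    s[0] = f[1]*up[2] - f[2]*up[1];
    s[1] = f[2]*up[0] - f[0]*up[2];
    s[2] = f[0]*up[1] - f[1]*up[0];
    len = sqrt(s[0]*s[0] + s[1]*s[1] + s[2]*s[2]);
    for (i = 0; i < 3; i++) s[i] /= len;

    /* u = s x f */
    u[0] = s[1]*f[2] - s[2]*f[1];
    u[1] = s[2]*f[0] - s[0]*f[2];
    u[2] = s[0]*f[1] - s[1]*f[0];

    /* Rows: side, up, and -f. The negation is what aligns "looking at the
     * object" with the camera's -z axis in a right-handed eye space. */
    for (i = 0; i < 3; i++) { m[0][i] = s[i]; m[1][i] = u[i]; m[2][i] = -f[i]; }
}

int main(void)
{
    double eye[3] = {0, 0, 5}, center[3] = {0, 0, 0}, up[3] = {0, 1, 0};
    double m[3][3];
    look_at_rotation(eye, center, up, m);
    /* Prints 0 0 1 (the zero entries may show as -0): the third row is -f. */
    printf("third row: %g %g %g\n", m[2][0], m[2][1], m[2][2]);
    return 0;
}
```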
Something to chew on: it doesn't make much sense to call the z direction of a right-handed coordinate system a forward vector :). I think Microsoft realized this when they designed Direct3D.