- OpenGL is an API used for drawing 3D graphics.
- OpenGL is not a programming language.
- The API is typically used to interact with a GPU, to achieve hardware-accelerated rendering.
Matrix44. There are multiple coordinate systems involved in 3D graphics:
- Object Space (aka Model Space)
- World Space
- Camera Space (aka Eye Space or View Space)
- Screen Space (reached via Clip Space)
OpenGLWiki. The model matrix is used to describe where an object exists in the world. The view matrix is used to describe the vantage point of our view. The view matrix can be thought of as the position and angle of a camera used to take a picture. The projection matrix is used to give our view perspective such as making close objects appear larger than distant objects. The projection matrix also provides a field of view which can be thought of as a camera lens; you can decide to use a wide-angle lens or a telephoto lens.
Cartesian Space in 3D (left), and Object Space (right).
If we want to position the cube in World coordinates at X=5, Y=0, Z=0, we multiply each vertex by the World matrix:
[1 0 0 5]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]
- Multiply every vertex in World Space by our View Matrix. Each vertex is then in Camera Space – and the scene looks as if we were viewing it through the camera.
- Imagine the camera as an abstract thing which sits at the origin and looks down the negative Z-Axis. Instead of moving the camera, imagine a rotation and translation of the World around you so that you see what you want to see – this rotation and translation is the View Matrix.
- The last Coordinate System transformation is from Camera to Screen Space
- It is essentially nothing more than going from the 3D Coordinate System to the 2D Screen in front of us.
- This transformation is necessary as long as we don’t have real 3D Holographic Screens.
- In the process of transforming from Camera to Screen Space, you can choose either an Orthographic or a Perspective Projection. The difference between the two is that the Orthographic Projection does not apply a perspective distortion, while the Perspective Projection does. Perspective Projection is the natural one, as we humans see in perspective – things farther away from us appear smaller.
- MODELVIEW Matrix. Because moving an object around and “positioning the camera” are mathematically the same operation, the Model and View matrices are usually combined into this single matrix.
The Viewing Volume is also known as the Clipping volume or the Frustum. Here's the visual representation of the viewing volume.
There are two planes, the viewing plane and the far clipping plane. The viewing plane is effectively the screen, and the far clipping plane indicates how far you can "see": whatever is behind the far clipping plane will not be visible. The viewing volume is the space between those two planes. It is sometimes called the clipping volume because you usually want to clip your polygons against it.
VAOs, VBOs, Vertex and Fragment Shaders
- The OpenGL 3.2 core specification removes the majority of the fixed function pipeline previously used, and replaces it with a completely programmable architecture using shaders.
- A Vertex Array Object (VAO) is an object which contains one or more Vertex Buffer Objects and is designed to store the information for a complete rendered object.
- A Vertex Buffer Object (VBO) is a memory buffer in the high speed memory of your video card designed to hold information about vertices. In our example we have two VBOs, one that describes the coordinates of our vertices and another that describes the color associated with each vertex. VBOs can also store information such as normals, texcoords, indices, etc.
- A Vertex Shader in OpenGL is a piece of C-like code written to the GLSL specification which influences the attributes of a vertex.
- A Fragment Shader is similar to a Vertex Shader, but is used for calculating individual fragment colors. This is where lighting and bump-mapping effects are performed.
- Geometry Shaders are used to create additional vertices.
- The shader pipeline behaves as follows: Vertex Shaders -> Geometry Shaders -> (Rasterizing Engine) -> Fragment Shaders.
- The shaders are compiled and then linked together into a Shader Program.
- The shaders receive input data from our VAO through a process of attribute binding, allowing us to perform the needed computations to provide us with the desired results.
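As a sketch, a minimal GLSL 1.50 (OpenGL 3.2 core) vertex/fragment shader pair matching the two-VBO setup above. The attribute names `in_Position` and `in_Color` are assumptions – the application binds them to the VAO's attributes – and the two shaders would normally live in separate source strings or files:

```glsl
// ---- vertex shader ----
#version 150 core

in vec3 in_Position;   // bound to the coordinate VBO (assumed attribute name)
in vec3 in_Color;      // bound to the color VBO (assumed attribute name)
out vec3 pass_Color;   // handed on to the fragment shader

void main(void) {
    // No MODELVIEW/projection applied here for brevity; a real shader
    // would multiply in_Position by those matrices via uniforms.
    gl_Position = vec4(in_Position, 1.0);
    pass_Color = in_Color;
}

// ---- fragment shader ----
#version 150 core

in vec3 pass_Color;    // interpolated per-fragment by the rasterizer
out vec4 out_Color;    // final fragment color

void main(void) {
    out_Color = vec4(pass_Color, 1.0);
}
```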