I'm trying to set up a sort of 2.5D projection, think of the 45-degree angle that old 2D RPGs have. The scene will be 3D, with walls perpendicular to the floor. Then I'll have an orthographic camera on a 45-degree tilt. Something like this:
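(The tilt itself isn't in the repro below yet; this is just the X-axis rotation I plan to use, built with the same row-major layout as my math code, and the sign may need flipping depending on handedness.)

// Planned 45-degree camera tilt: rotation about the X axis.
// Uses Mat4f/identity_matrix from the math code below and cosf/sinf from <math.h>.
Mat4f rotation_x_matrix(real32 radians)
{
    Mat4f result = identity_matrix();
    result.E[1][1] = cosf(radians);
    result.E[1][2] = sinf(radians);
    result.E[2][1] = -sinf(radians);
    result.E[2][2] = cosf(radians);
    return result;
}

// e.g. rotation_x_matrix(45.f * 3.14159265f / 180.f)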
That way I get all the benefits of being 3D (depth buffer, deferred lighting, etc.). I have a prototype set up in Unity, and it works out well; however, when I try to imitate it in my own code, I'm getting different results.
Math code:
struct Mat4f
{
    real32 E[4][4]; // row-major: E[row][column]
    Mat4f operator*(const Mat4f& right);
};

// result = (*this) * right, row-major.
Mat4f Mat4f::operator*(const Mat4f& right)
{
    Mat4f result = {};
    for(uint32 row = 0; row < 4; row++)
    {
        for(uint32 column = 0; column < 4; column++)
        {
            for(uint32 rc = 0; rc < 4; rc++)
            {
                result.E[row][column] += E[row][rc] * right.E[rc][column];
            }
        }
    }
    return result;
}
Mat4f identity_matrix()
{
    Mat4f result = {};
    result.E[0][0] = 1.f;
    result.E[1][1] = 1.f;
    result.E[2][2] = 1.f;
    result.E[3][3] = 1.f;
    return result;
}
// Ortho projection with left = 0, right = viewWidth, top = 0,
// bottom = viewHeight (y grows downward); translation sits in the last row.
Mat4f ortho_projection(real32 viewWidth, real32 viewHeight, real32 zNear, real32 zFar)
{
    Mat4f result = {};
    result.E[0][0] = 2.f / viewWidth;                    // 2 / (right - left)
    result.E[1][1] = 2.f / -viewHeight;                  // 2 / (top - bottom)
    result.E[2][2] = -(2.f / (zFar - zNear));            // -2 / (far - near)
    result.E[3][0] = -1.f;                               // -((right + left) / (right - left))
    result.E[3][1] = 1.f;                                // -((top + bottom) / (top - bottom)), which is 1 here
    result.E[3][2] = -((zFar + zNear) / (zFar - zNear)); // -((far + near) / (far - near))
    result.E[3][3] = 1.f;
    return result;
}
// Translation goes in the last row, matching the ortho matrix above.
Mat4f translation_matrix(Vec3f position)
{
    Mat4f translationMatrix = identity_matrix();
    translationMatrix.E[3][0] = position.x;
    translationMatrix.E[3][1] = position.y;
    translationMatrix.E[3][2] = position.z;
    return translationMatrix;
}
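While debugging I also added this helper. If I have the conventions right, it applies a matrix exactly the way the shader will: I upload with transpose set to GL_FALSE (see the usage below), so GLSL reads my row-major data as column-major, which is the same as multiplying a row vector by my matrix on the C++ side. Vec3f is just my plain x/y/z struct.

// Transforms a point the way the shader will: row vector times the
// row-major matrix, with an implicit w of 1. (w stays 1 for these
// matrices, since their last column is 0, 0, 0, 1.)
Vec3f transform_point(Vec3f v, const Mat4f& m)
{
    Vec3f result;
    result.x = v.x * m.E[0][0] + v.y * m.E[1][0] + v.z * m.E[2][0] + m.E[3][0];
    result.y = v.x * m.E[0][1] + v.y * m.E[1][1] + v.z * m.E[2][1] + m.E[3][1];
    result.z = v.x * m.E[0][2] + v.y * m.E[1][2] + v.z * m.E[2][2] + m.E[3][2];
    return result;
}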
Usage:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);

Mat4f projectionMatrix = ortho_projection(window.width, window.height, 0.01f, 10.f);
Mat4f cameraMatrix = translation_matrix(Vec3f{0.f, 0.f, 2.f});
Mat4f mvp = projectionMatrix * cameraMatrix;
glUniformMatrix4fv(0, 1, GL_FALSE, &mvp.E[0][0]);
Then the shader just multiplies that with each vertex.
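For completeness, the vertex shader is essentially just this (I'm assuming a GL 4.3+ context here, since the explicit uniform location 0 that the glUniformMatrix4fv call relies on requires it):

// Vertex shader, embedded as a C++ string; uniform location 0 matches
// the glUniformMatrix4fv(0, ...) call above.
const char* vertexShaderSrc = R"GLSL(
#version 430 core
layout(location = 0) uniform mat4 mvp;
layout(location = 0) in vec3 position;
void main()
{
    gl_Position = mvp * vec4(position, 1.0);
}
)GLSL";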
In my test scene I have three triangles: one at z=0, one at z=1, and one at z=-1. When I run the code as it is above, with the camera at z=2, only the triangles at z=0 and z=1 are visible. When I set the camera z to 1, all three triangles are visible. When I set z=0, only the triangle at z=-1 is visible.
The camera.z=0 case makes sense because the depth function is LESS, and I suppose the camera.z=1 case also makes sense. However, since my z-far is set to 10... shouldn't everything be visible while the camera is anywhere between 0.01 and 10? In the case above, where the camera z is 2, everything should still be visible because it's within the depth range.
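To check where things end up, I've been printing the post-transform z for each triangle with the transform_point helper from above (the 1280x720 size is made up; viewWidth/viewHeight don't affect the z math):

#include <cstdio>

// Prints the clip-space z each triangle depth gets for a given camera z.
// Anything outside -1..1 here is outside NDC.
void print_clip_z(real32 cameraZ)
{
    Mat4f projectionMatrix = ortho_projection(1280.f, 720.f, 0.01f, 10.f);
    Mat4f cameraMatrix = translation_matrix(Vec3f{0.f, 0.f, cameraZ});
    Mat4f mvp = projectionMatrix * cameraMatrix;

    real32 triangleDepths[3] = {0.f, 1.f, -1.f};
    for(uint32 i = 0; i < 3; i++)
    {
        Vec3f clip = transform_point(Vec3f{0.f, 0.f, triangleDepths[i]}, mvp);
        printf("triangle z=%+.1f -> clip z=%f\n", triangleDepths[i], clip.z);
    }
}

For the camera positions above, the triangles that disappear are exactly the ones whose z lands outside -1..1 here, which is what confuses me about the near/far range.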
My Unity prototype, by contrast, works at various camera heights.