Hello!
I currently have a high-quality, fast SSAO implementation that works a bit backwards compared to most, and I've run into a problem with it. In most (really bad, IMO) tutorials online, SSAO is done by sampling random points in a hemisphere around the view-space position of each pixel, then computing "occlusion" from depth differences. In contrast, I take random samples in a circle around each pixel, compute the view-space position of each sample, then compute occlusion in 3D in view space, taking the distance and direction towards each sample into account. This produces very good results with only a small number of samples (together with a bilateral blur).
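For concreteness, here's a minimal Python sketch of one common way such a per-sample, view-space occlusion term can be formed. The post doesn't give the exact formula, so `occlusion_term`, the normal bias, and the linear distance falloff are all my assumptions:

```python
import math

def occlusion_term(p, n, s, radius=1.0, bias=0.01):
    """One plausible per-sample occlusion term, computed fully in view
    space: p = pixel position, n = unit surface normal, s = sample
    position (all 3D, view space). The formula, 'radius' and 'bias'
    are assumptions, not taken from the post."""
    d = [s[i] - p[i] for i in range(3)]            # vector towards the sample
    dist = math.sqrt(sum(c * c for c in d))
    if dist < 1e-6:
        return 0.0
    dirn = [c / dist for c in d]                   # direction towards the sample
    # samples in front of the surface occlude, attenuated with distance
    facing = max(0.0, sum(n[i] * dirn[i] for i in range(3)) - bias)
    falloff = max(0.0, 1.0 - dist / radius)
    return facing * falloff
```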
However, this technique has an issue with the perspective projection towards the edges of the view: the farther you get from the center, the more "stretched" the view becomes. What I really want is to sample the 2D area of a projected 3D sphere with a certain radius at each pixel, which is what the usual SSAO examples are doing. In addition, I've recently experimented with eye-motion tracking, which can produce extremely skewed/stretched projection matrices that massively worsen this issue. To get this right, I'd essentially need to go from view space to screen space, sample depth at that position, then unproject that pixel back to view space again. That would need roughly 40% more ALU instructions than my current version, making it significantly slower.
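That round trip can be modeled numerically. A plain-Python sketch (hypothetical helper names; a standard OpenGL-style perspective matrix is assumed) of the extra per-sample project/unproject work:

```python
import math

def perspective(fovy, aspect, near, far):
    """Standard OpenGL-style perspective matrix (row-major)."""
    f = 1.0 / math.tan(fovy / 2.0)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0]]

def project(p, m):
    """View space -> NDC: the extra matrix multiply + perspective divide."""
    c = [sum(m[i][j] * (p + [1.0])[j] for j in range(4)) for i in range(4)]
    return [c[0] / c[3], c[1] / c[3], c[2] / c[3]]

def unproject(ndc, m):
    """NDC (plus sampled depth) -> view space, exploiting the known
    structure of the perspective matrix above."""
    z = -m[2][3] / (ndc[2] + m[2][2])    # recover view-space z from depth
    w = -z                               # clip-space w for this matrix
    return [ndc[0] * w / m[0][0], ndc[1] * w / m[1][1], z]
```

In the real shader, `project` would be followed by a depth-texture fetch before `unproject`; it's this per-sample matrix math that the 2D approach avoids.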
This led me to thinking that it'd be great if I could keep the original 2D sampling code and simply compensate for the stretching in 2D. I already have a 2D matrix which handles random rotation and scaling of the sample offsets, so if I could bake the circle elongation into that matrix, I could compensate for the stretching without adding any per-sample cost to the shader! It'd just need some extra setup code.
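As a sketch of that baking idea (all names and the exact parameterization are my assumptions): a 2×2 ellipse basis, whose columns are the major and minor semi-axis vectors, can be multiplied into the existing rotation/scale matrix once per pixel, so each sample still costs only a single 2×2 transform:

```python
import math

def rot_scale(angle, scale):
    """The existing per-pixel random rotation + scaling matrix."""
    c, s = math.cos(angle), math.sin(angle)
    return ((c * scale, -s * scale), (s * scale, c * scale))

def ellipse_basis(major, minor, axis_angle):
    """Maps the unit circle onto an ellipse; columns are the major and
    minor semi-axis vectors (hypothetical parameterization)."""
    c, s = math.cos(axis_angle), math.sin(axis_angle)
    return ((c * major, -s * minor), (s * major, c * minor))

def mat2_mul(a, b):
    """2x2 matrix product, matrices stored as ((m00, m01), (m10, m11))."""
    return ((a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]),
            (a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]))

def mat2_apply(m, v):
    return (m[0][0] * v[0] + m[0][1] * v[1], m[1][0] * v[0] + m[1][1] * v[1])

# baked once per pixel; per sample, only mat2_apply(combined, offset) remains
combined = mat2_mul(ellipse_basis(2.0, 1.0, 0.0), rot_scale(0.3, 0.5))
```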
All this leads to my question: given a sphere in view space (position, radius) and a projection matrix, how do I calculate the shape of the ellipse that results from projecting the sphere? Optimally, I'd want to calculate the center of the ellipse in NDC, plus a 2D matrix which, when multiplied by a random 2D sample offset on a unit circle (x^2+y^2<1.0), yields a position on the ellipse in NDC instead. So far I've found this, which shows exactly the axes/vectors I want to find: https://www.shadertoy.com/view/XdBGzd. However, it doesn't use a projection matrix, and from what I can tell it does all its calculations in view space, which I'd like to avoid if possible. I've been playing around with the math to try to get it to work out, and I think I've almost nailed calculating the center of the ellipse in NDC, but I don't really know where to start on calculating the major and minor axes of the ellipse...
This kind of stretch compensation could have a lot of cool uses, as it ensures that (with properly configured FOV) things that are supposed to be round always look round from the viewer's viewpoint. Theoretically, it could also be used for bloom (to make sure the blur is uniform from the viewer's viewpoint), depth of field, motion blur, etc.
EDIT: Indeed, I believe the center of the ellipse in NDC is
vec2 ellipseCenter = ndc.xy * (1.0 + (r * r) / (ndc.w * ndc.w - r * r));
where ndc.xy is the projected center of the sphere, r is the radius of the sphere and ndc.w is the (positive) view space depth of the sphere.
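This checks out in a 2D slice through the eye and the sphere center: the midpoint of the sphere's two silhouette tangents, projected onto the image plane z = 1, matches the formula exactly (the projection matrix's linear xy scale multiplies both sides equally, so it cancels). A small Python verification, with function names of my own choosing:

```python
import math

def ellipse_center_1d(a, w, r):
    """The EDIT's formula along one radial axis: a / w is the projected
    sphere center, w the positive view-space depth, r the radius."""
    ndc = a / w
    return ndc * (1.0 + r * r / (w * w - r * r))

def brute_force_center(a, w, r):
    """Midpoint of the two silhouette tangents from the eye, projected
    onto the plane z = 1 (2D slice through eye and sphere center)."""
    t = math.hypot(a, w)          # eye-to-center distance
    phi = math.atan2(a, w)        # angle towards the sphere center
    alpha = math.asin(r / t)      # silhouette cone half-angle
    return 0.5 * (math.tan(phi + alpha) + math.tan(phi - alpha))
```

Both reduce to a*w / (w*w - r*r) for any sphere fully in front of the eye (w > r), which supports the formula above.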