I am trying to calculate an irradiance map based on this article: http://www.codinglabs.net/article_physically_based_rendering.aspx
I understand the idea: you place a hemisphere over the normal, sum every incoming radiance, and write the result back to the corresponding pixel of the cube map. However, I didn't understand how the author iterated over the hemisphere, and a direct translation of the HLSL code to GLSL didn't work as I expected: it smooths the original cube map a lot, as if I had dropped the resolution from 1024px to 32px.
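To make the question concrete, here is the irradiance integral I believe we are approximating, checked numerically for a constant environment. This is a small Python sketch of the math, not the shader itself; for L_i = 1 everywhere the exact value of the integral is π:

```python
import math

# Numerically evaluate the hemisphere irradiance integral
#   E = integral over phi in [0, 2*pi], theta in [0, pi/2] of
#       L_i(theta, phi) * cos(theta) * sin(theta) dtheta dphi
# for a constant environment L_i = 1 (the exact value is then pi).
def irradiance_constant_env(n_theta=256, n_phi=512):
    d_theta = (math.pi / 2) / n_theta
    d_phi = (2 * math.pi) / n_phi
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta                 # midpoint rule in theta
        weight = math.cos(theta) * math.sin(theta)  # cosine * solid-angle term
        for j in range(n_phi):
            total += 1.0 * weight * d_theta * d_phi  # L_i = 1 everywhere
    return total

print(irradiance_constant_env())  # ≈ 3.14159 (pi)
```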
I changed the hemisphere iteration as below:
normal = normalize(normal);
vec3 up = vec3(0.0, 1.0, 0.0);
vec3 right = normalize(cross(normal, up));

int index = 0;
vec3 irradiance = vec3(0.0, 0.0, 0.0);

// longi tilts the sample away from the normal (0..90 degrees),
// azi spins it around the normal (0..360 degrees).
for (float longi = 0.0; longi <= 90.0; longi += 3.0)
{
    mat4 trl = rotationMatrix(right, radians(longi));
    for (float azi = 0.0; azi <= 360.0; azi += 3.0)
    {
        mat4 tra = rotationMatrix(normal, radians(azi));
        vec3 sampleVec = (tra * trl * vec4(normal, 1.0)).xyz;
        irradiance += texture(iChannel0, sampleVec).rgb * dot(sampleVec, normal);
        index++;
    }
}

fragColor = vec4(PI * irradiance / float(index), 1.0);
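To quantify "too bright", I mirrored the loop's weighting in Python with a constant white environment, i.e. assuming every texture(iChannel0, sampleVec) fetch returns 1.0, in which case dot(sampleVec, normal) reduces to cos(longi). A rough sketch, not the actual shader:

```python
import math

# Mirror of the shader loop above with a constant white environment:
# each sample contributes 1.0 * cos(longi).
total = 0.0
index = 0
for k in range(31):            # longi = 0, 3, ..., 90 degrees
    longi = 3.0 * k
    for m in range(121):       # azi = 0, 3, ..., 360 degrees
        total += 1.0 * math.cos(math.radians(longi))
        index += 1
result = math.pi * total / index
print(result)  # ≈ 1.99
```

For a constant environment I would expect the convolved map to keep the same value (1.0), but this weighting lands at roughly twice that, which matches how the generated map looks.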
The generated irradiance map looks too bright to me. Also, I don't understand why we average the summed radiance by dividing by "index". Aren't we after the total incoming radiance at a point?
Here is the link to the shader: https://www.shadertoy.com/view/4sjBzV