The first run always fails. Still, I am happy to see something that at least looks like a shadow. I visualized the shadow factors (together with some fog) obtained after PCF filtering at a single sample position. This reveals three noticeable artifacts. First, a bright plane (high shadow factor) seems to subdivide my scene at the spotlight's origin. This appears to be related to the position of my default near z-plane (0.01f). Reducing the near value makes that bright plane disappear, but for some near values flickering is still noticeable (especially while moving). Unfortunately, reducing the near value also seems to make my shadow disappear: the perspective matrix more or less breaks down due to divisions by a very small near value.
Second, I notice a mirrored shadow appearing on the other side of my scene (pinhole-camera principle: geometry behind the light projects through its origin). I don't know whether this is, or can become, problematic, since the light contribution, which still needs to be multiplied by the shadow factor, will be zero there anyway.
Third, both shadows seem to disappear when I move the camera close to them. Of course, a dependence of a static light's shadow on the camera is mathematically impossible. In my code, however, there is a small connection: my shading is performed in camera view space (cview), so positions in camera view space need to be transformed to light projection space in the PS. I use the following matrix chain for creating this cview-to-lprojection matrix on the CPU:
cview_to_lprojection = view_to_world * world_to_lview * lview_to_lprojection
For my shadow-map depth passes, I perform the same transformations to avoid numerical inconsistencies between the two passes: vertices are transformed to camera view space first and then to light projection space in the VS (i.e., not via a single combined transformation matrix).