I currently use three structured buffers for each light type (directional, omni and spot) and a constant buffer for light-related data (ambient, fog, number of each light type). In order to support shadow mapping, I want to add three additional structured buffers, one for each light type, but now for shadow-mapped lights. The light structures for shadow mapping are pretty much the same: one camera-view-to-light-projection (directional and spot) or camera-view-to-light-view (omni) transformation matrix is added. Furthermore, I added two Texture2D arrays (directional and spot) and one TextureCubeArray (omni) for the shadow maps. That way, all lighting can be done in a single pass (excluding the generation of the shadow maps themselves), and there is no limit on the number of lights of each type (except the physical limits of the GPU). Furthermore, tiled and clustered shading are quite trivial to add in the future.
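To make the setup concrete, here is a minimal sketch of the shader-side resource layout I have in mind. All struct members, names and register slots are illustrative assumptions, not my actual code:

```hlsl
// Sketch only: member layout and register assignments are assumptions.
struct DirectionalLightWithShadow {
    float3   irradiance;
    float3   neg_direction;        // direction towards the light (camera view space)
    float4x4 cview_to_lprojection; // camera view -> light projection
};
struct OmniLightWithShadow {
    float3   position;             // camera view space
    float3   intensity;
    float2   distance_falloff;     // start/end of the falloff range
    float4x4 cview_to_lview;       // camera view -> light view
};

StructuredBuffer< DirectionalLightWithShadow > g_sm_directional_lights : register(t0);
StructuredBuffer< OmniLightWithShadow >        g_sm_omni_lights        : register(t1);
// ... analogous structured buffer for shadow-mapped spot lights ...

Texture2DArray         g_directional_sms : register(t3); // one slice per directional light
TextureCubeArray       g_omni_sms        : register(t4); // one cube per omni light
Texture2DArray         g_spot_sms        : register(t5); // one slice per spot light
SamplerComparisonState g_pcf_sampler     : register(s0); // for SampleCmpLevelZero
```

The array index into each shadow-map array would simply be the light's index in the corresponding structured buffer, so the single lighting pass can loop over all shadow-mapped lights of a type and fetch the matching slice or cube.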
I wonder, however, how many shadow maps are typically used? One omni light already results in six shadow maps, each resulting from a separate depth pass, which seems huge, even in the presence of some CPU-side culling (for this reason, I don't want to use the GS). If you are not careful with the falloff distance, the depth passes of a single omni light can already become a bottleneck.
Does one normally support multiple depth formats (16-bit vs. 32-bit) and multiple resolutions?