I've built some simple normal maps from meshes and a custom HLSL shader that writes their normals to the screen. So far I've only used this for tiling normal maps, where I control the orientation of the mesh used to generate the normals, but I don't see why I couldn't do the same for a full-model normal map: place the models in screen space by their UV coordinates rather than their world-space positions, write the normals of the low-poly to one image, the normals of the high-poly to another, and the vector needed to transform the first into the second onto a third image. With the tiling normal maps I've made, I haven't seen any artifacts or oddities. All it takes is one or two models, a relatively simple shader, and a single frame of GPU time.
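For concreteness, here's a rough sketch of the sort of shader I'm describing (simplified from memory rather than copied from my project; the struct and semantic names are just placeholders): the vertex shader places each vertex by its UV instead of its position, so the rasterizer unwraps the model across the render target, and the pixel shader writes the encoded normal.

```hlsl
struct VSIn
{
    float3 pos    : POSITION;
    float3 normal : NORMAL;      // object- or world-space normal
    float2 uv     : TEXCOORD0;   // the UVs the normal map will be laid out in
};

struct PSIn
{
    float4 pos    : SV_Position;
    float3 normal : TEXCOORD1;
};

PSIn VSMain(VSIn input)
{
    PSIn o;
    // Place each vertex by its UV coordinate instead of its world position.
    // UV range 0..1 maps to clip space -1..1; Y is flipped for D3D conventions.
    o.pos    = float4(input.uv.x * 2.0 - 1.0, 1.0 - input.uv.y * 2.0, 0.0, 1.0);
    o.normal = input.normal;
    return o;
}

float4 PSMain(PSIn input) : SV_Target
{
    // Encode the interpolated normal into the usual 0..1 colour range.
    return float4(normalize(input.normal) * 0.5 + 0.5, 1.0);
}
```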
But when I read modelling sites, baking normal maps sounds like a major headache, involving building a cage and running a lengthy bake process. It sounds like the modelling packages are using some kind of raycasting algorithm.
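My rough understanding of that approach (and it is only my understanding, pieced together from those sites, not how any particular package actually implements it) is: for each texel of the low-poly UV layout, cast a ray from the cage surface inward until it hits the high-poly mesh, then record the high-poly normal at the hit point. As a brute-force HLSL sketch, with a made-up buffer layout and no acceleration structure:

```hlsl
struct Tri
{
    float3 p0, p1, p2;   // high-poly triangle positions
    float3 n0, n1, n2;   // high-poly vertex normals
};

StructuredBuffer<Tri> HighPolyTris : register(t0);

cbuffer BakeParams : register(b0)
{
    uint  TriCount;
    float MaxRayDistance;   // roughly the cage thickness
};

struct BakeInput
{
    float4 pos     : SV_Position;
    float3 cagePos : TEXCOORD0;   // cage surface position for this texel
    float3 rayDir  : TEXCOORD1;   // interpolated cage/low-poly normal
};

// Möller–Trumbore ray/triangle intersection.
// Returns the hit distance along rd, or -1.0 on a miss; bary holds the weights.
float RayTri(float3 ro, float3 rd, float3 p0, float3 p1, float3 p2, out float3 bary)
{
    bary = float3(0, 0, 0);
    float3 e1 = p1 - p0;
    float3 e2 = p2 - p0;
    float3 h  = cross(rd, e2);
    float  a  = dot(e1, h);
    if (abs(a) < 1e-6) return -1.0;           // ray parallel to triangle
    float  f  = 1.0 / a;
    float3 s  = ro - p0;
    float  u  = f * dot(s, h);
    if (u < 0.0 || u > 1.0) return -1.0;
    float3 q  = cross(s, e1);
    float  v  = f * dot(rd, q);
    if (v < 0.0 || u + v > 1.0) return -1.0;
    bary = float3(1.0 - u - v, u, v);
    float  t  = f * dot(e2, q);
    return (t > 1e-6) ? t : -1.0;
}

float4 PSBake(BakeInput input) : SV_Target
{
    float3 ro = input.cagePos;
    float3 rd = -normalize(input.rayDir);     // cast inward from the cage

    float  bestT = MaxRayDistance;
    float3 bestN = float3(0, 0, 1);
    bool   hit   = false;

    [loop]
    for (uint i = 0; i < TriCount; ++i)
    {
        Tri tri = HighPolyTris[i];
        float3 bary;
        float  t = RayTri(ro, rd, tri.p0, tri.p1, tri.p2, bary);
        if (t > 0.0 && t < bestT)
        {
            bestT = t;
            bestN = normalize(bary.x * tri.n0 + bary.y * tri.n1 + bary.z * tri.n2);
            hit   = true;
        }
    }

    // On a miss, fall back to a flat normal.
    return float4((hit ? bestN : float3(0, 0, 1)) * 0.5 + 0.5, 1.0);
}
```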
There must be a reason not to do it the way I've been doing it. Can anyone explain the problems with creating normal maps via a shader?