Hi all,
I'm working on a terrain for a project I'm involved in. The terrain is pretty large: ~90 meter (3 arc-second) resolution over a surface 1024x1024 km in size. For that purpose the heightmap texture is large, 11072^2. The original size was actually 11052^2, but I had to pad it to 11072^2 because the rendering algorithm uses 64x64 patches (matching the DX11 tessellation limit of 64) to divide the terrain exactly, and 11072 / 64 gives exactly 173 patches per side, which is what enters the pipeline.
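For reference, the padding is just rounding the raw size up to the next multiple of the patch size (a trivial sketch of the numbers above):

```cpp
// Rounding the raw heightmap size up to the next multiple of the patch size:
const int rawSize        = 11052;
const int patchSize      = 64;   // DX11 max tessellation factor
const int paddedSize     = ((rawSize + patchSize - 1) / patchSize) * patchSize; // = 11072
const int patchesPerSide = paddedSize / patchSize;                              // = 173
```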
Now, I have 2 methods for rendering the terrain which yield EXACTLY the same results (there is a very minor texture difference, but the vertices from both methods sit exactly one on top of the other):
1. A basic method that renders the heightmap as-is through the tessellation pipeline. I had to ditch this method due to severe FP accuracy issues with the art textures, but the geometry itself is OK.
2. A more complicated method by Nvidia, described here - https://www.yumpu.com/en/document/view/32144510/directx-11-terrain-tessellation-nvidia-developer-zone - I took the Nvidia source code, adapted it for my engine, and feed it a constant heightmap instead of the procedural noise the original code generates.
So, these 2 methods render EXACTLY the same geometry, which should represent the heightmap 1:1. I'm sure it's 1:1 because in the 1st method I feed the pipeline the quad patches described above, which the tessellator then divides by factors of 1-64 (I'm actually using power-of-2 tessellation factors, but that shouldn't matter, since the issue I have is anyway only relevant at the most detailed tessellation level).
The heightmap data comes from a RAW texture containing 11072x11072 16-bit values (i.e. the texture is ~234MB). Since the heightmap holds real-world SRTM values in feet, I scale it after loading into grid-local float values (I have a FEET_TO_GRID constant used for the scaling). Because I load the RAW texture as-is, I consider it 100% accurate - it's not compressed like a DDS texture. Technically, for the loading I read the texture data into an array of signed 16-bit short values (the data can be negative), then feed the scaled values into a float array of the same size as the heightmap, i.e. 11072x11072, and finally create an SRV with a 32-bit float format (DXGI_FORMAT_R32_FLOAT). A sketch of this load path follows.
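Roughly, the load looks like this (a simplified sketch, not my exact engine code - FEET_TO_GRID here is a placeholder value):

```cpp
#include <d3d11.h>
#include <cstdint>
#include <cstdio>
#include <vector>

// Sketch of the load path described above: read signed 16-bit feet values
// from the RAW file, scale them to grid-local units, keep the float array
// in system RAM, and upload it as an R32_FLOAT texture.
static const int   HM_SIZE      = 11072;
static const float FEET_TO_GRID = 0.00338f; // placeholder, not my real scale

bool LoadHeightmap(ID3D11Device* device, const char* path,
                   std::vector<float>& heights,        // kept for collision
                   ID3D11ShaderResourceView** outSRV)
{
    const size_t count = size_t(HM_SIZE) * HM_SIZE;
    std::vector<int16_t> raw(count);

    FILE* f = fopen(path, "rb");
    if (!f) return false;
    const bool readOk = fread(raw.data(), sizeof(int16_t), count, f) == count;
    fclose(f);
    if (!readOk) return false;

    heights.resize(count);
    for (size_t i = 0; i < count; ++i)
        heights[i] = float(raw[i]) * FEET_TO_GRID; // feet -> grid units

    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = HM_SIZE;
    desc.Height           = HM_SIZE;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R32_FLOAT;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_IMMUTABLE;
    desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem     = heights.data();
    init.SysMemPitch = HM_SIZE * sizeof(float);

    ID3D11Texture2D* tex = nullptr;
    if (FAILED(device->CreateTexture2D(&desc, &init, &tex)))
        return false;

    const HRESULT hr = device->CreateShaderResourceView(tex, nullptr, outSRV);
    tex->Release();
    return SUCCEEDED(hr);
}
```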
Now to the issue - the float array used to create the SRV is kept in system RAM so I can use it for simple collision detection and triangle normal calculations. The collision and normal calculations are correct, as I'm using barycentric interpolation (example here: https://classes.soe.ucsc.edu/cmps160/Fall10/resources/barycentricInterpolation.pdf). Let's leave normals alone for now; I only care about the height. But I haven't managed to put objects EXACTLY on the terrain surface - objects usually hover slightly above it or sink slightly into it, yet I must have them exactly on the surface. I think it should work, because I'm holding in system RAM exactly the same height field data as the most detailed tessellated patch rendering, so I can't understand why my objects won't stick to the terrain. The height lookup is essentially the sketch below.
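This is a simplified version of my CPU-side lookup (illustrative names; it assumes one fixed diagonal per heightmap cell - more on that in point 4 below):

```cpp
#include <vector>

// Height at (x, z) in grid units: barycentric interpolation over the two
// triangles of a heightmap cell (bounds checks omitted for brevity).
// Assumes the h10-h01 diagonal; see point 4 below for the other split.
float TerrainHeight(const std::vector<float>& h, int size, float x, float z)
{
    const int   ix = (int)x,  iz = (int)z;  // cell origin
    const float fx = x - ix,  fz = z - iz;  // position inside the cell, [0,1)

    const float h00 = h[ iz      * size + ix    ];
    const float h10 = h[ iz      * size + ix + 1];
    const float h01 = h[(iz + 1) * size + ix    ];
    const float h11 = h[(iz + 1) * size + ix + 1];

    if (fx + fz <= 1.0f)  // triangle (h00, h10, h01), weights (1-fx-fz, fx, fz)
        return h00 + fx * (h10 - h00) + fz * (h01 - h00);
    else                  // triangle (h11, h10, h01)
        return h11 + (1.0f - fx) * (h01 - h11) + (1.0f - fz) * (h10 - h11);
}
```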
Things I've thought about and tried, plus some observations that may give more info:
1. I've noticed that the heightmap is being "followed", i.e. I can see the object moving with the terrain surface changes, but there is some shift or scale that keeps the height out of sync. I looked for such a shift/scale distortion in the rendering but couldn't find any, and since I render the same terrain with 2 different methods, there's little chance both introduce the same distortion (I'd expect them to differ).
2. When I use a constant height value while loading the heightmap, the object stands perfectly on the terrain, so there is no FP accuracy issue with the scaling factor etc.
3. Using a sine wave instead of the heightmap values showed the same behavior as #1 - the object moves in a sine-wave fashion but not in sync with the rendered surface.
4. I thought that maybe, since the tessellator splits the tris of a given patch in "different" fashions (i.e. sometimes it's ABC/BCD and sometimes ACD/ABD), my collision lookup uses the wrong diagonal. I changed the collision computation to each fashion in turn, but both give ~the same results, so it can't be the tri-splitting fashion (the first sketch after this list shows the alternate split I compared against).
5. My main concern - could it be that because the heightmap texture size is an irregular number, the shaders introduce some distortion relative to the data? Maybe DX does some up/down scaling of the texture to a more "friendly" size and that creates the distortion? (The second sketch after this list is how I plan to check this.)
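For point 4, this is roughly the opposite-diagonal variant I compared against (same conventions as the lookup sketch above):

```cpp
#include <vector>

// Point 4: the same cell interpolated with the opposite diagonal (split
// along h00-h11 instead of h10-h01), to compare both splits against the
// rendered surface.
float TerrainHeightAltSplit(const std::vector<float>& h, int size, float x, float z)
{
    const int   ix = (int)x,  iz = (int)z;
    const float fx = x - ix,  fz = z - iz;

    const float h00 = h[ iz      * size + ix    ];
    const float h10 = h[ iz      * size + ix + 1];
    const float h01 = h[(iz + 1) * size + ix    ];
    const float h11 = h[(iz + 1) * size + ix + 1];

    if (fx >= fz)   // triangle (h00, h10, h11)
        return h00 + fx * (h10 - h00) + fz * (h11 - h10);
    else            // triangle (h00, h01, h11)
        return h00 + fz * (h01 - h00) + fx * (h11 - h01);
}
```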
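And for point 5, one way to check the UV suspicion is to reproduce the GPU's bilinear fetch on the CPU and diff it against the barycentric result. Note the texel-center convention: if the shader's UVs miss the half-texel offset, or divide by size-1 instead of size, the sampled surface follows the data but comes out slightly shifted, which would match the symptom in point 1 (a sketch, assuming clamp addressing and linear filtering):

```cpp
#include <cmath>
#include <vector>

// Point 5: CPU reproduction of a linear-filtered Sample() on an R32_FLOAT
// texture. D3D places texel centers at (i + 0.5) / size, hence the -0.5
// offset below before the bilinear blend.
float SampleBilinear(const std::vector<float>& h, int size, float u, float v)
{
    const float tx = u * size - 0.5f;  // texel-space coordinates
    const float ty = v * size - 0.5f;

    int x0 = (int)floorf(tx), y0 = (int)floorf(ty);
    const float fx = tx - x0, fy = ty - y0;

    auto clampI = [size](int i) { return i < 0 ? 0 : (i >= size ? size - 1 : i); };
    const int x1 = clampI(x0 + 1), y1 = clampI(y0 + 1);
    x0 = clampI(x0); y0 = clampI(y0);

    const float h00 = h[y0 * size + x0], h10 = h[y0 * size + x1];
    const float h01 = h[y1 * size + x0], h11 = h[y1 * size + x1];

    const float top = h00 + fx * (h10 - h00);
    const float bot = h01 + fx * (h11 - h01);
    return top + fy * (bot - top);
}
```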
Some reference pictures:
Both terrain methods rendered one on top of the other - you can see the vertices match 1:1 exactly. Black is the 1st method and red is the Nvidia method:
A crate sunk into the terrain. If I move it, you can see it hovers/sinks at some points while still following the terrain shape, although the movement mostly isn't exactly in sync with the surface (the distortion I suspect):
Please help, because I'm out of ideas. I know that in some games some distortion is allowed and expected, since the collision data doesn't hold the full-resolution rendering detail, but that's not the case here, and it also can't be accepted: the crate in the picture represents a pretty large object in the terrain I'm working on (a ~9x9 meter building/structure, since its size is 0.1 grid units and a grid unit is ~90 meters).
I've thought of trying to output the vertex world positions directly from the domain shader somehow (I'm not yet sure how, or whether it's possible at all), or alternatively rendering once with full tessellation on all patches into a texture (so the data is at the highest detail), reading that texture back into an array, and comparing it with the data I use for the height calculations (a readback sketch is below). If you have other ideas, please share.
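The readback part of that second idea would look roughly like this, assuming the heights were already rendered into an R32_FLOAT render target (heightRT and the function name are placeholders):

```cpp
#include <d3d11.h>
#include <cstdint>
#include <cstring>
#include <vector>

// Sketch of the readback idea: copy the fully tessellated heights (rendered
// into an R32_FLOAT render target) to a staging texture and read them back
// on the CPU for a diff against the collision array.
bool ReadbackHeights(ID3D11Device* device, ID3D11DeviceContext* ctx,
                     ID3D11Texture2D* heightRT, int width, int height,
                     std::vector<float>& out)
{
    D3D11_TEXTURE2D_DESC desc = {};
    heightRT->GetDesc(&desc);
    desc.Usage          = D3D11_USAGE_STAGING;
    desc.BindFlags      = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.MiscFlags      = 0;

    ID3D11Texture2D* staging = nullptr;
    if (FAILED(device->CreateTexture2D(&desc, nullptr, &staging)))
        return false;

    ctx->CopyResource(staging, heightRT);

    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (FAILED(ctx->Map(staging, 0, D3D11_MAP_READ, 0, &mapped)))
    {
        staging->Release();
        return false;
    }

    out.resize(size_t(width) * height);
    for (int y = 0; y < height; ++y)  // rows are RowPitch bytes apart
        memcpy(&out[size_t(y) * width],
               (const uint8_t*)mapped.pData + size_t(y) * mapped.RowPitch,
               size_t(width) * sizeof(float));

    ctx->Unmap(staging, 0);
    staging->Release();
    return true;
}
```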
Thanx!