D3D9 - How to linearize depth buffer obtained from closed source application

Hey guys, I hope for help with the following problem - hopefully an easy one for you, but I am an absolute beginner...

I want to simulate a lidar in a closed-source D3D9 game (ArmA2-based, I think), meaning I want to get the pixel positions in view space. For that I intercept the D3D9 calls (via a proxy DLL) and retrieve the handle to the depth buffer texture (R32F), which I have identified. I've attached an example of the contents of this depth buffer (just StretchRect'ed onto the back buffer) and the corresponding rendered image.

Now what I am trying to do is transform the depth buffer values to view space, but I keep failing at this. I am trying to do it as described in this stackoverflow post. The only change I made, after I saw that it does not work, is to drop the first line 'z = depth*2.0 - 1.0' and use the depth value directly, because as far as I know the D3D9 projection matrix (from MSDN: https://msdn.microsoft.com/de-de/library/windows/desktop/bb147302(v=vs.85).aspx, Q = Zf/(Zf-Zn)) already outputs a depth in [0,1] - and thus its inverse should also be directly usable for the transformation from clip space to view space. I should mention that I can query the game API for the view frustum values (left + right angle, top + bottom angle, near and far plane), but as you can see a large part of the depth buffer is black.

So here are my questions:

1) What I don't quite understand is that apparently the whole depth buffer value interval is inverted, meaning far objects are dark (a color of (1,1,1,1) is white). Are there any other projection matrices commonly used with D3D9 that have this behavior and which I could try?

2) Is the approach shown in the mentioned stackoverflow post a valid one? In particular I'd really like to know whether the division by w after applying the inverse projection matrix is correct - I thought this division is needed to get from clip space to normalized device coordinates, so why is it necessary here?

3) Is there any other approach I could use to get the pixel positions in view space?

Thanks in advance!
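
To make question 2) concrete, here is a minimal sketch of how I understand the stackoverflow approach (plain C++ on the CPU side; the names like reconstructViewPos and invProj are just my own illustration, not the game's code, and invProj is assumed to be the inverse of a standard D3D9 perspective matrix built from the queried frustum values):

#include <cstdio>

struct Vec4 { float x, y, z, w; };

// Row-major 4x4 matrix; mul() treats v as a column vector (m * v).
struct Mat4 {
    float m[4][4];
    Vec4 mul(const Vec4& v) const {
        Vec4 r;
        r.x = m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w;
        r.y = m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w;
        r.z = m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w;
        r.w = m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w;
        return r;
    }
};

// px, py: pixel coordinates; depth: value read from the R32F buffer (assumed to be in [0,1]);
// invProj: inverse of the projection matrix built from the queried frustum values.
Vec4 reconstructViewPos(float px, float py, float depth,
                        int width, int height, const Mat4& invProj)
{
    // Pixel -> normalized device coordinates (D3D convention: y points up in NDC).
    float ndcX =  2.0f * (px + 0.5f) / width  - 1.0f;
    float ndcY = -2.0f * (py + 0.5f) / height + 1.0f;

    // D3D depth is already in [0,1], so no *2-1 remapping as in the original post.
    Vec4 clip = { ndcX, ndcY, depth, 1.0f };

    // Apply the inverse projection, then undo the perspective divide.
    Vec4 view = invProj.mul(clip);
    view.x /= view.w;
    view.y /= view.w;
    view.z /= view.w;
    view.w = 1.0f;
    return view;
}

If the buffer really is inverted as described in question 1), I guess I would have to remap the depth (e.g. 1 - depth) or use a different projection matrix before this step, but that is exactly the part I am unsure about.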
