Channel: GameDev.net

Comparing depth bias in DX vs Vulkan

Hi,

As part of writing a unified interface on top of DirectX (11 and 12) as well as Vulkan, I have noticed that they represent depth bias in the rasterizer state quite differently.

Depth bias in DX is described here: https://msdn.microsoft.com/en-us/library/windows/desktop/cc308048(v=vs.85).aspx, and typical values seem to be in the tens of thousands.

For the Vulkan counterpart I haven't seen any equally in-depth explanation. The reference page for VkPipelineRasterizationStateCreateInfo, https://www.khronos.org/registry/vulkan/specs/1.0/man/html/VkPipelineRasterizationStateCreateInfo.html, simply states that "depthBiasConstantFactor is a scalar factor controlling the constant depth value added to each fragment". The value is a floating-point number, and typical values seem to be around 2.0-3.0.

I have a PSO description struct containing a value for the depth bias, and I would like to convert it into the format each graphics API expects, but I am unsure what that conversion is. E.g. how would I convert Vulkan's representation (I'm not even sure what unit it is in; some fraction of the depth buffer?) into the one used by DirectX?

Cheers!
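For what it's worth, the two APIs appear to agree on the unit for the constant term: the D3D docs give Bias = DepthBias * r + SlopeScaledDepthBias * MaxDepthSlope, and the Vulkan spec's depth bias equation is o = r * depthBiasConstantFactor + m * depthBiasSlopeFactor, where r is the minimum resolvable depth difference in both cases. Under that reading, only the type differs (D3D takes an INT, Vulkan a float), and a unified PSO field could map through mostly unchanged. A minimal sketch, assuming a hypothetical RasterizerDesc struct (the names are illustrative, not from any real engine):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical unified rasterizer-state description.
struct RasterizerDesc {
    float depthBiasConstant; // in units of minimum resolvable depth r
    float depthBiasSlope;    // multiplied by the polygon's max depth slope
};

// D3D11/12 expects an integer DepthBias; the constant factor is already
// in the same unit (multiples of r), so this is just a rounding cast.
int toD3DDepthBias(const RasterizerDesc& d) {
    return static_cast<int>(std::lround(d.depthBiasConstant));
}

// Vulkan's depthBiasConstantFactor is the same quantity as a float,
// so it passes through unchanged.
float toVulkanDepthBiasConstantFactor(const RasterizerDesc& d) {
    return d.depthBiasConstant;
}
```

If this interpretation is right, the "tens of thousands vs. 2.0-3.0" gap would reflect different conventions in sample code rather than different units; that said, treat this as an assumption to verify against both specs rather than a definitive answer.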

