Using a vertex buffer with the format R16G16B16A16_SINT

In DirectX 9 I would use this vertex declaration element:

    { 0, 0, D3DDECLTYPE_SHORT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 }

with this vertex shader input:

    float4 Position : POSITION0

That is, I would use the SHORT4 vertex buffer format with a corresponding float4 in the shader, and everything worked great.

In DirectX 12 this does not work. When I use the format DXGI_FORMAT_R16G16B16A16_SINT with float4 in the shader, I get all zeros in the shader. If I use int4 in the shader instead of float4, I get numbers, but they are garbled. I can't tell exactly what is wrong with them because I can't inspect them: the Visual Studio shader debugger keeps crashing. The debug layer says nothing when I use int4, but it emits a warning when I use float4.

How can I use the R16G16B16A16_SINT format in my input layout?
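For reference, here is a minimal sketch of the D3D12 setup the question describes, assuming a single position element (the array name, shader struct, and entry point are illustrative, not from the original post). SINT formats are delivered to the shader as integers, so the HLSL input must be declared int4; the integer-to-float conversion that D3D9 performed automatically for SHORT4 has to be written explicitly in the shader:

    #include <d3d12.h>

    // Input layout: four signed 16-bit integers per vertex (8-byte stride).
    static const D3D12_INPUT_ELEMENT_DESC kInputLayout[] = {
        { "POSITION", 0, DXGI_FORMAT_R16G16B16A16_SINT, 0, 0,
          D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
    };

    // Matching HLSL vertex shader input (shown here as a comment):
    //
    //   struct VSInput
    //   {
    //       int4 Position : POSITION0;   // int4, not float4: SINT data
    //   };                               // arrives in the shader as integers
    //
    //   float4 main(VSInput input) : SV_Position
    //   {
    //       return float4(input.Position);   // explicit int -> float cast
    //   }

Note that StrideInBytes in the D3D12_VERTEX_BUFFER_VIEW must match the 8-byte vertex (4 x 16-bit components); a stride mismatch is a common cause of garbage values even when the int4 declaration itself is correct.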
