Hello, I recently started tackling the various mesh decimation and simplification algorithms, mostly from Hoppe and Melax: quadric error metrics mesh simplification and progressive mesh simplification. I found a few good GitHub starting points (VTK, Hoppe's and Melax's repositories, etc.) to test the various implementations, but I ran into some problems:

1. I tried to simplify a few of my own meshes and a few of the meshes provided inside the code bases, and I noticed that not all meshes survive the same amount of simplification. For example, a 400-vertex robot mesh can be simplified down to 10-20 vertices (about 3-7% of the source vertices) without any problem: no missing faces, no large topology distortion. Yet an eagle mesh with 3000 vertices can only be simplified to about 1900-2000 vertices (roughly 60-70% of the source) before the mesh starts to lose faces, which leaves it with quite a noticeable number of holes. I have seen this with quite a few models, both my own and the provided ones. What is the actual cause that stops some meshes from being simplified as far as others? I am probably asking a fairly obvious question. (For reference, my understanding of the basic quadric setup is sketched after this list.)

2. Even if the simplification were perfect, I still have difficulty figuring out how to preserve the vertex appearance attributes (UV coordinates, tangents, normals, etc.). I read a few papers, in particular Hoppe's "New Quadric Metric for Simplifying Meshes with Appearance Attributes", but I did not find any real-world examples, or at least code snippets, that would give me a better grasp of the technique.
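To make question 1 concrete, here is a minimal sketch of the standard Garland-Heckbert quadric construction as I currently understand it (the names `Quadric`, `vertexQuadric`, `edgeCollapseCost` are my own, not from any of the repositories above; the midpoint placement is a simplification, the paper solves a small linear system for the optimal position). The appearance-attribute variant from Hoppe's paper extends this same quadric into higher dimensions that include the UVs and normals, which is the part I have not managed to implement yet.

```cpp
#include <array>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// 4x4 symmetric quadric stored as 10 coefficients:
// Q = [ A  b ; b^T  c ] with A = n n^T, b = d n, c = d^2 for plane n.v + d = 0
struct Quadric {
    double q[10] = {0};  // xx xy xz xw  yy yz yw  zz zw  ww
    void addPlane(Vec3 n, double d) {
        q[0] += n.x*n.x; q[1] += n.x*n.y; q[2] += n.x*n.z; q[3] += n.x*d;
        q[4] += n.y*n.y; q[5] += n.y*n.z; q[6] += n.y*d;
        q[7] += n.z*n.z; q[8] += n.z*d;  q[9] += d*d;
    }
    void add(const Quadric& o) { for (int i = 0; i < 10; ++i) q[i] += o.q[i]; }
    // Error v^T Q v for the homogeneous point (x, y, z, 1)
    double error(Vec3 v) const {
        return   q[0]*v.x*v.x + 2*q[1]*v.x*v.y + 2*q[2]*v.x*v.z + 2*q[3]*v.x
               + q[4]*v.y*v.y + 2*q[5]*v.y*v.z + 2*q[6]*v.y
               + q[7]*v.z*v.z + 2*q[8]*v.z     + q[9];
    }
};

// Accumulate the quadric of one vertex from the planes of its incident faces.
Quadric vertexQuadric(const std::vector<std::array<Vec3, 3>>& incidentFaces) {
    Quadric Q;
    for (const auto& f : incidentFaces) {
        Vec3 n = cross(sub(f[1], f[0]), sub(f[2], f[0]));
        double len = std::sqrt(dot(n, n));
        if (len < 1e-12) continue;            // skip degenerate faces
        n = {n.x / len, n.y / len, n.z / len};
        Q.addPlane(n, -dot(n, f[0]));         // plane: n.v + d = 0
    }
    return Q;
}

// Cost of collapsing edge (v1, v2): sum the two vertex quadrics and evaluate
// them at a candidate position (here simply the midpoint for brevity).
double edgeCollapseCost(const Quadric& Q1, const Quadric& Q2, Vec3 v1, Vec3 v2) {
    Quadric Q = Q1; Q.add(Q2);
    Vec3 mid = {(v1.x + v2.x) * 0.5, (v1.y + v2.y) * 0.5, (v1.z + v2.z) * 0.5};
    return Q.error(mid);
}
```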
Furthermore, what has been bugging me lately is the technique for dynamically rendering different LOD levels when walking over a large terrain mesh. The terrain is an irregular grid, not generated from a height map, noise, or any other 2D data, so clipmapping and similar techniques go in the bin. To give a CS:GO analogy, or take League of Legends as it seems a more appropriate example here: I am not sure they do exactly this, but presume that their maps are mostly hand crafted, and that all trees, boxes, walls and buildings which are not interactive or destructible (i.e. purely static) are modeled and embedded along with the rest of the map and terrain. This can potentially allow for fewer draw calls and a much richer environment. But then how would one optimize such a huge rendering step, where the entire model of the map/terrain may consist of at least 400-500k triangles? Or is it just a flat plane, with everything else rendered as a separate entity on top (instanced where possible, although given that most objects are quite unique and not very numerous, instancing would be limited)?
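Is the usual approach something like splitting the static map mesh into chunks and swapping pre-simplified versions per chunk by distance? This is only my assumption of how it might work, not how those games actually do it; `Chunk`, `lodMeshes` and the distance thresholds below are all made up for illustration.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct Chunk {
    Vec3 center;                      // bounding-sphere center of the chunk
    std::vector<unsigned> lodMeshes;  // mesh handles, index 0 = full detail
};

static float distance(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Pick which LOD of each chunk to draw this frame based on camera distance.
// Thresholds are arbitrary placeholders; a real engine would more likely
// derive the switch points from projected screen-space error.
std::vector<unsigned> selectLods(const std::vector<Chunk>& chunks, Vec3 camera) {
    std::vector<unsigned> drawList;
    for (const Chunk& c : chunks) {
        if (c.lodMeshes.empty()) continue;
        float d = distance(camera, c.center);
        std::size_t lod = d < 50.f ? 0 : d < 150.f ? 1 : 2;
        lod = std::min(lod, c.lodMeshes.size() - 1);
        drawList.push_back(c.lodMeshes[lod]);
    }
    return drawList;
}
```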