What you need to do is split the texture into smaller tiles and provide each tile in several resolutions, depending on the level of detail. When the camera zooms in deep enough you can switch to the highest-resolution textures, but you also have to limit how many of them are displayed at once: with the planet surface that magnified, only a small piece of it fits on the screen, and only the front-facing side of the planet at that. So divide the texture into small tiles, and also create lower-resolution versions of them for the other zoom levels.

You will also need to create custom geometry patches and assign the small high-resolution texture tiles to them. Finally, you have to decide which textured patch to show for the current camera, depending on distance and viewing angle, and, using the view frustum, which tiles are visible in the current scene at all (see the sketches below).

I am currently facing the same problem. I have already created all the sub-nodes and the smaller textures as SCNNodes (don't load the textures up front; they should be loaded only on demand!). However, I don't have a working solution yet for testing which sub-nodes are visible. The scene renderer's isNode(_:insideFrustumOf:) method does not help here, because it only tests the bounding box, and with bounding boxes that large, most tiles will always be at least partially inside the frustum (so I'm currently trying to implement my own test). You will also need the surface normals to check whether a patch is facing the camera.

Unfortunately, because I still don't have a working solution, I cannot post finished code here. I can only describe my "coding plan", which will work (at least in OpenGL, where I implemented something like this years ago). Maybe the basic idea of the solution is already useful for you? Otherwise, perhaps we can figure out the rest together... :-)
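To make the plan more concrete, here is a minimal sketch of how the tile/LOD bookkeeping could look in SceneKit. Everything in it is an assumption on my side: the SurfaceTile class, the texture naming scheme, and the lodLevel(forDistance:) breakpoints are placeholders, not a finished design.

```swift
import SceneKit
import UIKit   // for UIImage; use NSImage on macOS
import simd

// One surface patch of the planet. The class name, the texture naming
// scheme, and the LOD breakpoints below are all placeholders.
final class SurfaceTile {
    let node: SCNNode               // carries the patch geometry
    let centerNormal: simd_float3   // outward unit normal at the tile center
    let textureNames: [String]      // index 0 = lowest res, last = highest

    private var loadedLevel: Int?

    init(node: SCNNode, centerNormal: simd_float3, textureNames: [String]) {
        self.node = node
        self.centerNormal = centerNormal
        self.textureNames = textureNames
    }

    // Load the texture for `level` on demand; never preload all levels.
    func show(level: Int) {
        let clamped = max(0, min(level, textureNames.count - 1))
        if clamped != loadedLevel {
            node.geometry?.firstMaterial?.diffuse.contents =
                UIImage(named: textureNames[clamped])
            loadedLevel = clamped
        }
        node.isHidden = false
    }

    func hide() {
        node.isHidden = true
        // Optionally drop the high-res texture here to reclaim memory.
    }
}

// Map the camera's distance to a tile to a LOD level.
// The breakpoints are arbitrary example values in scene units.
func lodLevel(forDistance distance: Float) -> Int {
    switch distance {
    case ..<2:  return 3   // closest zoom: highest resolution
    case ..<10: return 2
    case ..<50: return 1
    default:    return 0   // far away: the low-res texture is enough
    }
}
```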
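For the visibility test I don't have working code yet, so the following is only the test I am planning to implement: a back-face check via the tile normal, plus a screen-space check of the projected tile corners using SCNSceneRenderer's projectPoint(_:), which, unlike the bounding-box test, rejects tiles that project entirely outside the viewport. The corners array, the margin tolerance, and the simple dot-product threshold are my assumptions.

```swift
import SceneKit
import simd

// Rough per-tile visibility test: a back-face check against the tile
// normal, plus a screen-space check of the projected tile corners.
// `corners` (world-space tile corners) and `margin` are assumptions
// of this sketch.
func isTileVisible(_ tile: SurfaceTile,
                   corners: [simd_float3],
                   renderer: SCNSceneRenderer,
                   cameraPosition: simd_float3,
                   viewSize: CGSize,
                   margin: CGFloat = 100) -> Bool {
    // 1. Back-face test: skip tiles whose outward normal points away
    //    from the camera (the far side of the planet).
    let toCamera = simd_normalize(cameraPosition - tile.node.simdWorldPosition)
    if simd_dot(tile.centerNormal, toCamera) <= 0 { return false }

    // 2. Screen-space test: the tile counts as visible if at least one
    //    corner projects in front of the camera (0 < z < 1) and inside
    //    the viewport, enlarged by `margin` so partially visible tiles
    //    still pass.
    for corner in corners {
        let p = renderer.projectPoint(SCNVector3(corner))
        if p.z > 0 && p.z < 1
            && CGFloat(p.x) > -margin && CGFloat(p.x) < viewSize.width + margin
            && CGFloat(p.y) > -margin && CGFloat(p.y) < viewSize.height + margin {
            return true
        }
    }
    return false
}
```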
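And a sketch of how the two pieces could be driven per frame from an SCNSceneRendererDelegate; again only an illustration of the plan, with tiles, tileCorners, and scnView standing in for your own bookkeeping. Note that loading UIImages directly in the render callback is not ideal; in practice you would probably dispatch the actual texture loading to a background queue.

```swift
import SceneKit
import simd

// Per-frame driver, assumed to be installed as the SCNView's delegate.
// `tiles`, `tileCorners`, and `scnView` stand in for your own bookkeeping.
final class PlanetLODController: NSObject, SCNSceneRendererDelegate {
    var tiles: [SurfaceTile] = []
    var tileCorners: [[simd_float3]] = []   // world-space corners per tile
    weak var scnView: SCNView?

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        guard let view = scnView, let camera = renderer.pointOfView else { return }
        let camPos = camera.simdWorldPosition

        for (i, tile) in tiles.enumerated() {
            guard isTileVisible(tile, corners: tileCorners[i],
                                renderer: renderer,
                                cameraPosition: camPos,
                                viewSize: view.bounds.size) else {
                tile.hide()
                continue
            }
            // Pick the texture resolution from the camera-to-tile distance.
            let distance = simd_length(camPos - tile.node.simdWorldPosition)
            tile.show(level: lodLevel(forDistance: distance))
        }
    }
}
```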