Displaying extremely high-resolution textures on a sphere in SceneKit

In my iOS application I have a set of planet maps in the form of images up to 14K in resolution. I can apply reduced versions of them to spheres to create planet models in SceneKit. However, I want users to be able to zoom in and see the full-resolution image details, without my application running out of memory. Is there a way to automatically swap textures on a sphere, the way Google Maps does, downloading only the parts and resolutions that are actually needed?

+5
2 answers

Two optimization techniques that you can apply with very little programming effort are mipmaps and levels of detail.

If you set the mipFilter property on the SCNMaterialProperty (for example, material.diffuse) that holds your planet map, you get an automatically generated mipmap.
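A minimal sketch of this setup, assuming a placeholder sphere radius and image name:

```swift
import SceneKit
import UIKit

// Build a sphere for the planet; the radius and image name are placeholders.
let sphere = SCNSphere(radius: 1.0)
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "planet_diffuse")
// Ask SceneKit to generate mipmaps and filter between mip levels when the
// texture is minified, which avoids shimmering on a huge planet map.
material.diffuse.mipFilter = .linear
sphere.materials = [material]
let planetNode = SCNNode(geometry: sphere)
```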

If you supply multiple SCNLevelOfDetail instances for your planet's SCNGeometry, you get reduced-polygon versions that are rendered when the sphere is small on screen, saving rendering work and memory.
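A sketch of the LOD setup; the segment counts and screen-space radii below are illustrative assumptions, not tuned values:

```swift
import SceneKit

// Full-resolution sphere used when the planet fills the screen.
let sphere = SCNSphere(radius: 1.0)
sphere.segmentCount = 96

// Cheaper versions with fewer polygons.
let mediumSphere = SCNSphere(radius: 1.0)
mediumSphere.segmentCount = 48
let lowSphere = SCNSphere(radius: 1.0)
lowSphere.segmentCount = 24

// SceneKit switches to a cheaper geometry once the sphere's projected
// radius on screen drops below the given number of points.
sphere.levelsOfDetail = [
    SCNLevelOfDetail(geometry: mediumSphere, screenSpaceRadius: 200),
    SCNLevelOfDetail(geometry: lowSphere, screenSpaceRadius: 80)
]
```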

Both are mentioned in the 2013 WWDC SceneKit talk, and SCNLevelOfDetail is mentioned again in 2014. The 2014 sample code contains examples of both: mipmap generation in the AAPLPresentationViewController, and LOD in slide 58.

+1

What you need to do is split the texture into smaller tiles and provide several sizes, depending on the level of detail. When the camera zooms in far enough, you show the highest-resolution tiles, but you must also limit how many of them are displayed: with the planet surface magnified, only a small patch of it is visible on screen, and only the front-facing part of the surface is shown. So divide the texture into small tiles, and also create lower-resolution versions for the other zoom levels. You will also need to build custom geometry and map the small high-resolution tiles onto it.

Finally, you must decide which textured geometry to show for a given camera position, based on distance or viewing angle. Using the view frustum, you also need to determine which tiles are visible in the current scene.

I am currently facing the same problem. I have already created all the sub-nodes and the smaller textures as SCNNodes (don't load the textures at this point — they must be loaded only on demand!). However, I don't yet have a working solution for testing which sub-nodes are visible. SceneKit's method for testing whether a node is inside the frustum does not help here, because it only tests the bounding box, and with bounding boxes this large, most of them will always be partially inside the frustum (so I'm currently trying to implement my own tests). You will also need surface normals to check whether the front of a surface is facing the camera.

Unfortunately, since I still don't have a working solution, I can't post any code yet. I can only describe my "coding plan", which will work (at least with OpenGL, where I implemented something similar years ago). Maybe the basic idea of the solution is already useful to you? Otherwise, perhaps we can work out the rest together... :-)
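The plan above (distance-based zoom levels plus a front-facing test per tile) can be sketched in plain Swift, independent of SceneKit. This is an illustrative helper with assumed names and thresholds, not the answerer's actual code:

```swift
// Minimal 3-D vector type so the sketch does not depend on SceneKit or simd.
struct Vec3 {
    var x, y, z: Double
    static func - (a: Vec3, b: Vec3) -> Vec3 {
        Vec3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z)
    }
    func dot(_ o: Vec3) -> Double { x * o.x + y * o.y + z * o.z }
}

// Pick a tile zoom level from the camera's distance to the planet surface.
// The thresholds are placeholder values; tune them for your scene scale.
func zoomLevel(forSurfaceDistance d: Double) -> Int {
    switch d {
    case ..<0.5: return 3   // closest: show tiles cut from the 14K map
    case ..<2.0: return 2
    case ..<8.0: return 1
    default:     return 0   // far away: a single low-resolution texture
    }
}

// A tile is a display candidate only if its outward surface normal points
// toward the camera, i.e. the front of the surface faces the viewer.
func tileFacesCamera(normal: Vec3, tileCenter: Vec3, camera: Vec3) -> Bool {
    normal.dot(camera - tileCenter) > 0
}
```

A full solution would combine these tests with a per-tile frustum check, load tile images lazily, and swap a sub-node's material only when its tile actually becomes visible.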

+1

Source: https://habr.com/ru/post/1259420/

