I am developing an image-processing program that relies on OpenGL ES 2.0 so it can be deployed on a wide range of devices. In many cases people will use small images that stay under the texture size limit, but large images that are thousands of pixels in each direction may exceed GL_MAX_TEXTURE_SIZE and fail to display on many devices.
My first thought was to split the image into smaller tiles and render each one individually. That would work for simple filters, but for effects that depend on neighboring pixel values, such as convolutions or warping, it will not be sufficient on its own.
How does Photoshop handle images up to 300,000 x 300,000 pixels now that it uses OpenGL for many of its effects?
What is the most efficient way to perform post-processing tasks on images larger than GL_MAX_TEXTURE_SIZE?
Should I display only the visible area, and downscale images to fit before sending them for processing? That would require me to rework the program just to support zooming and panning the image. The only problem I see with this approach is that the image could not be exported at full size, so it works well until the user tries to save their work.