I implemented an image blending method for seamless blending in plain C++. Now I want to port this code to the GPU (using OpenGL ES 2.0 shaders for mobile devices). Basically, the method builds Gaussian and Laplacian pyramids for each image, which are then blended per level and collapsed from the lowest-resolution level back up to the top (see "The Laplacian Pyramid as a Compact Image Code" by Burt and Adelson, 1983).
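For reference, a minimal fragment-shader sketch of computing one Laplacian level on the GPU, assuming the two Gaussian levels are bound as textures (the names u_gauss, u_gaussNext and v_texCoord are mine, not from the original code) and that bilinear filtering on the coarser level stands in for the EXPAND step:

```glsl
// Sketch: Laplacian level i as the difference between Gaussian level i
// and the upsampled Gaussian level i+1, i.e. L_i = G_i - EXPAND(G_{i+1}).
precision mediump float;

uniform sampler2D u_gauss;      // Gaussian level i (resolution of this pass)
uniform sampler2D u_gaussNext;  // Gaussian level i+1 (half resolution)
varying vec2 v_texCoord;

void main() {
    vec3 g  = texture2D(u_gauss, v_texCoord).rgb;
    // Hardware bilinear interpolation upsamples level i+1 to level i.
    vec3 ge = texture2D(u_gaussNext, v_texCoord).rgb;
    // The difference can be negative, which is where the trouble starts.
    gl_FragColor = vec4(g - ge, 1.0);
}
```

With a plain RGBA8888 render target, the negative half of this difference is clamped to zero on write, which is exactly the problem described below.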
My problem is that the levels of the Laplacian pyramid can contain negative values, but my target devices do not support floating-point or signed integer textures (via the OES_texture_float extension, for example).
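A common workaround, sketched here under the assumption that the Laplacian values lie in [-1, 1] (the texture name u_laplacianEnc is made up), is to remap the signed range into [0, 1] with v * 0.5 + 0.5 before writing to an 8-bit texture, and to undo the mapping when sampling during reconstruction:

```glsl
// Sketch: decode a range-compressed Laplacian level and apply one
// reconstruction step, G_i = EXPAND(G_{i+1}) + L_i.
precision mediump float;

uniform sampler2D u_laplacianEnc;  // level stored as (L * 0.5 + 0.5)
uniform sampler2D u_gaussNext;     // upsampled coarser Gaussian level
varying vec2 v_texCoord;

void main() {
    // Undo the range compression: [0, 1] -> [-1, 1].
    vec3 lap = texture2D(u_laplacianEnc, v_texCoord).rgb * 2.0 - 1.0;
    vec3 ge  = texture2D(u_gaussNext, v_texCoord).rgb;
    gl_FragColor = vec4(ge + lap, 1.0);
}
```

The obvious cost is precision: squeezing a signed range into 8 bits doubles the quantization step, which matches the range-compression artifacts mentioned in the edit below.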
I have already searched for papers on GPU-based pyramids, but haven't found anything really useful.
- How can such a pyramid be implemented efficiently on the GPU?
- Is it possible to compute a given level of the Gaussian/Laplacian pyramid without iterating through the previous levels?
EDIT: There seems to be no "good" way to compute Laplacian pyramids fully on the GPU when signed texture types (like those provided by ARB_texture_float) are not supported, except either using two passes (one for the signs, one for the values) or compressing the value range into bytes when the image data lies within [0..255]. My Laplacian pyramid works fine on GPUs with the ARB_texture_float extension, but without the extension (and with some settings for range compression) the pyramid becomes "wrong" due to the compression of the value range.
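For the two-pass variant mentioned above, a rough sketch (the texture names u_magnitude and u_sign are illustrative): one pass writes abs(L_i) into a byte texture, a second pass writes the sign as 0/1, and reconstruction recombines the two:

```glsl
// Sketch: recombine the two passes into a signed Laplacian value.
precision mediump float;

uniform sampler2D u_magnitude;  // pass 1: abs(L_i)
uniform sampler2D u_sign;       // pass 2: 1.0 where L_i >= 0, else 0.0
varying vec2 v_texCoord;

void main() {
    vec3 m = texture2D(u_magnitude, v_texCoord).rgb;
    // Map the stored {0, 1} flags back to {-1, +1} and restore the sign.
    vec3 s = texture2D(u_sign, v_texCoord).rgb * 2.0 - 1.0;
    gl_FragColor = vec4(s * m, 1.0);
}
```

This keeps the full 8-bit precision for the magnitudes, at the price of a second texture and an extra pass per level.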