16-bit grayscale PNG

I am trying to write (using libpng) a 16-bit grayscale image where each pixel's value equals the sum of its coordinates. The following code should produce a 16-bit PNG, but instead it seems to generate an 8-bit one, for example this image. Why?

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <png.h>

    void save_png(FILE* fp, long int size)
    {
        png_structp png_ptr = NULL;
        png_infop info_ptr = NULL;
        size_t x, y;
        png_bytepp row_pointers;

        png_ptr = png_create_write_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
        if (png_ptr == NULL) {
            return;
        }

        info_ptr = png_create_info_struct(png_ptr);
        if (info_ptr == NULL) {
            png_destroy_write_struct(&png_ptr, NULL);
            return;
        }

        if (setjmp(png_jmpbuf(png_ptr))) {
            png_destroy_write_struct(&png_ptr, &info_ptr);
            return;
        }

        png_set_IHDR(png_ptr, info_ptr,
                     size, size,              // width and height
                     16,                      // bit depth
                     PNG_COLOR_TYPE_GRAY,     // color type
                     PNG_INTERLACE_NONE,
                     PNG_COMPRESSION_TYPE_DEFAULT,
                     PNG_FILTER_TYPE_DEFAULT);

        /* Initialize rows of PNG. */
        row_pointers = (png_bytepp)png_malloc(png_ptr, size * png_sizeof(png_bytep));
        for (int i = 0; i < size; i++)
            row_pointers[i] = NULL;
        for (int i = 0; i < size; i++)
            row_pointers[i] = png_malloc(png_ptr, size * 2);

        // set row data
        for (y = 0; y < size; ++y) {
            png_bytep row = row_pointers[y];
            for (x = 0; x < size; ++x) {
                short color = x + y;
                *row++ = (png_byte)(color & 0xFF);
                *row++ = (png_byte)(color >> 8);
            }
        }

        /* Actually write the image data. */
        png_init_io(png_ptr, fp);
        png_set_rows(png_ptr, info_ptr, row_pointers);
        png_write_png(png_ptr, info_ptr, PNG_TRANSFORM_IDENTITY, NULL);
        //png_write_image(png_ptr, row_pointers);

        /* Cleanup. */
        for (y = 0; y < size; y++) {
            png_free(png_ptr, row_pointers[y]);
        }
        png_free(png_ptr, row_pointers);
        png_destroy_write_struct(&png_ptr, &info_ptr);
    }

    int main()
    {
        FILE* f;
        if ((f = fopen("test.png", "wb")) != NULL) {
            save_png(f, 257);
            fclose(f);
        }
        return 0;
    }
2 answers

The linked image shows up as 16-bit in Windows 7 file Properties. I think you are just seeing various applications falling back to converting it down to 8 bits for display, which (I think) is to be expected, since most display devices don't support 16 bits.
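One viewer-independent way to check is to read the file's IHDR bytes directly: in any PNG, the bit-depth byte sits at a fixed offset (8-byte signature + 8-byte chunk header + 4 bytes width + 4 bytes height = offset 24). A minimal sketch, assuming the test.png produced by the question's program:

    #include <stdio.h>

    int main(void)
    {
        unsigned char hdr[26];
        FILE *f = fopen("test.png", "rb");
        if (f == NULL || fread(hdr, 1, sizeof hdr, f) != sizeof hdr) {
            if (f) fclose(f);
            return 1;
        }
        fclose(f);
        printf("bit depth:  %u\n", hdr[24]);  /* prints 16 for the question's file */
        printf("color type: %u\n", hdr[25]);  /* 0 = grayscale */
        return 0;
    }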


Sorry for resurrecting an old thread, but I got here from a Google search on writing 16-bit grayscale images. I ran into similar problems and thought it would be helpful to post how I solved them.

TL;DR:

a) The bytes must be supplied to the library MSB first, so it works if you swap the two byte-writing lines from the question:

    *row++ = (png_byte)(color >> 8);
    *row++ = (png_byte)(color & 0xFF);
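Alternatively, if you'd rather keep filling the rows least-significant-byte first as in the question, libpng can swap 16-bit samples for you at write time; a sketch using the PNG_TRANSFORM_SWAP_ENDIAN transform in place of PNG_TRANSFORM_IDENTITY:

    /* rows stay little-endian, exactly as in the question ... */
    *row++ = (png_byte)(color & 0xFF);
    *row++ = (png_byte)(color >> 8);

    /* ... and libpng swaps each 16-bit sample to MSB-first on output */
    png_write_png(png_ptr, info_ptr, PNG_TRANSFORM_SWAP_ENDIAN, NULL);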

b) When a 16-bit value is shown on an 8-bit screen, any value below 256 is simply truncated to black. In practice you need values that are multiples of 256 to see anything at all. The color = x + y in the code above probably didn't produce bright enough values.
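For instance, to keep the question's x + y gradient but stretch it over the whole 16-bit range, a hypothetical tweak to the inner loop (assuming the 257x257 image from the question, where x + y peaks at 2*(size-1) = 512):

    uint16_t color = (uint16_t)(((x + y) * 65535UL) / (2 * (size - 1)));
    *row++ = (png_byte)(color >> 8);      /* MSB first, per (a) */
    *row++ = (png_byte)(color & 0xFF);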

How I arrived at the conclusions above:

I started with the code above, using only "x" as the color, not "x + y".

The goal was a gradient fading from black on the left up to the brightest value (max x) on the right.

However, instead of one long gradient, I got several narrow gradients. That screamed "WRONG ENDIANNESS!"

So I swapped the bytes, but then I got an all-black image. It took some time to figure out, but since the screen only displays 8 bits, even my maximum value of 968 was too dark: 968 >> 8 is only 3, so it shows as 2 or 3 on an 8-bit screen, and even with high gamma I couldn't see the difference.

Since I knew my max x was about 1000, and the maximum 16-bit value is 65000-ish, I used (x * 60) as my color. That produced a visible result.
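Putting (a) and (b) together, the row-filling loop from the question would look something like this (the * 60 scale factor is just what suited my roughly 1000-pixel-wide image; adjust to your own value range):

    for (y = 0; y < size; ++y) {
        png_bytep row = row_pointers[y];
        for (x = 0; x < size; ++x) {
            uint16_t color = (uint16_t)(x * 60); /* scale up into the visible range */
            *row++ = (png_byte)(color >> 8);     /* most significant byte first */
            *row++ = (png_byte)(color & 0xFF);
        }
    }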

Thanks for the original post. It was a great example to start with.


Source: https://habr.com/ru/post/905702/

