8-bit to 16-bit precision change errors

Elle Stone
2012-08-22 14:14:04 UTC

I noticed an error in the resulting values when changing image precision from 8-bit integer to 16-bit integer.

For example, in a ten-block 8-bit RGB test image, the block with the linear gamma sRGB 8-bit values of (1,1,1) should be (257,257,257) upon changing the precision to 16-bit integer. But instead, the Gimp 16-bit integer values are (258,258,258). Similar errors occur with the 8-bit values (2,2,2), (4,4,4), (8,8,8), ..., (32,32,32); (64,64,64) and up are accurate.
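
For reference, the exact scaling from 8-bit to 16-bit multiplies each value by 257, since 65535/255 = 257 exactly; this is the same as copying the byte into both halves of the 16-bit value. A minimal C sketch of the expected arithmetic (illustrative only, not Gimp's actual conversion code):

#include <stdio.h>

int main (void)
{
  /* 65535 / 255 == 257 exactly, so v * 257 (equivalently
     (v << 8) | v, i.e. replicating the byte into both halves)
     maps 0 -> 0, 1 -> 257 and 255 -> 65535 with no rounding
     error at any input value. */
  for (int v = 1; v <= 255; v *= 2)
    printf ("%3d -> %5d\n", v, v * 257);
  return 0;
}

By this arithmetic every test block should convert exactly, which is why the observed (258,258,258) stands out.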

Gimp's 32-bit floating point and 32-bit integer values are error-free upon changing the precision from 8-bit and then exporting a 16-bit png (so errors might be hidden by the reduction in bit depth upon export). The (128,128,128) block is off by 3 when changing the precision to 16-bit floating point.

If anyone is interested, the test image and a spreadsheet with the correct values and formulas can be found here:

http://ninedegreesbelow.com/temp/gimp-lcms-4.html#precision

"Round-tripping" back to 8-bit values gets you back where you started. But that is an artifact of collapsing 256 "steps" in the 16-bit image back to 8 bits. The values could by off by as much as 127 in either direction in the 16-bit image, and still "collapse" back to the correct 8-bit value.

Elle