
How precise is Gimp 32-bit floating point compared to 16-bit integer?


Elle Stone
2013-12-16 16:59:20 UTC (over 10 years ago)

How precise is Gimp 32-bit floating point compared to 16-bit integer?

To state the obvious, 16-bit integer offers more precision than 8-bit integer:
* There are 255 tonal steps from 0 to 255 for 8-bit integer precision.
* There are 65535 tonal steps from 0 to 65535 for 16-bit integer precision.
* 65535 steps divided by 255 steps is 257. So for every tonal step in an 8-bit image there are 257 steps in a 16-bit image.
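
Just to make that arithmetic concrete, here is a tiny C sketch (purely illustrative; the 257 multiplier is also the usual exact way to expand an 8-bit value to 16 bits, equivalent to (v << 8) | v):

  #include <stdio.h>

  int main (void)
  {
    printf ("16-bit steps per 8-bit step: %d\n", 65535 / 255);  /* 257 */

    /* Expanding an 8-bit value to 16 bits by multiplying with 257
     * maps 0 -> 0 and 255 -> 65535. */
    unsigned v8 = 200;
    printf ("8-bit %u -> 16-bit %u\n", v8, v8 * 257);           /* 51400 */
    return 0;
  }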

I've read that 16-bit integer is more precise than 16-bit floating point, and that 32-bit integer is more precise than 32-bit floating point. That makes sense, because with floating point the available precision has to be shared between the digits on both sides of the decimal point.

My question is, for Gimp from git, is 32-bit floating point more precise than 16-bit integer? If so, by how much, and does it depend on the machine and/or things like the math instructions of the processor (whatever that means)? And if not, how much less precise is it?

To restate the question, in decimal notation, 1 divided by 65535 is 0.00001525878906250000. So 16-bit integer precision requires 16 decimal places (lop off the four trailing zeros) in floating point to express the floating point equivalent of 1 16-bit integer tonal step, yes? no?

The Gimp eyedropper displays 6 decimal places for RGB values. 0.00001525878906250000 rounded to 6 places is 0.000015. 0.000015 times 65535 is 0.983025.
0.000016 times 65535 is 1.04856.
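
A small C sketch of what a 6-decimal display does to that number (printf formatting only, not GIMP's actual eyedropper code):

  #include <stdio.h>

  int main (void)
  {
    double step = 1.0 / 65535.0;

    printf ("full value: %.17g\n", step);                    /* 1.5259021896696422e-05 */
    printf ("6 decimals: %f\n", step);                       /* 0.000015 */
    printf ("0.000015 * 65535 = %f\n", 0.000015 * 65535.0);  /* 0.983025 */
    printf ("0.000016 * 65535 = %f\n", 0.000016 * 65535.0);  /* 1.048560 */
    return 0;
  }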

How many decimal places does Gimp 32-bit floating point actually provide?

Elle

Daniel Sabo
2013-12-16 17:19:49 UTC (over 10 years ago)

How precise is Gimp 32-bit floating point compared to 16-bit integer?

32bit floats have a precision of 24bits*. The exact size of the ulps** (unit in the last place) in the range [0.0, 1.0] is more complex because the exponent will give you more precision as you approach 0. This gets even more complicated because actually doing any math most likely gives you some rounding error, e.g. the gamma conversions are not precise to 24bits but are to more than 16bits.
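
To make the ulp sizes concrete, here is a standalone C sketch (illustrative, not GIMP code) using nextafterf(); even at 1.0 the spacing of about 1.2e-7 is roughly 128 times finer than one 16-bit step of about 1.5e-5, and near 0 it is far finer still:

  #include <stdio.h>
  #include <math.h>

  int main (void)
  {
    float points[] = { 1.0f, 0.5f, 1.0f / 65535.0f };

    for (int i = 0; i < 3; i++)
      {
        float x   = points[i];
        float ulp = nextafterf (x, INFINITY) - x;  /* distance to the next float up */

        printf ("x = %.9g  ulp = %.9g\n", x, ulp); /* roughly 1.2e-07, 6.0e-08, 1.8e-12 */
      }
    return 0;
  }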

We have never done an error analysis of the entire gimp pipeline, but 16bits is already beyond human perception (in my unscientific opinion). The real value of using floating point is that it can hold out of gamut values.

* https://en.wikipedia.org/wiki/Single_precision#IEEE_754_single-precision_binary_floating-point_format:_binary32
** https://en.wikipedia.org/wiki/Unit_in_the_last_place

Simon Budig
2013-12-16 17:30:06 UTC (over 10 years ago)

How precise is Gimp 32-bit floating point compared to 16-bit integer?

Elle Stone (ellestone@ninedegreesbelow.com) wrote:

My question is, for Gimp from git, is 32-bit floating point more precise than 16-bit integer?

Yes, at least for the range from 0.0 to 1.0.

If so, by how much, and does it depend on the machine and/or things like the math instructions of the processor (whatever that means)? And if not, how much less precise is it?

AFAIK it does not depend on the processor, floating point numbers are defined in IEEE 754, and to my knowledge that is what all processors use.

For 32 bit floats there are 23 bits in the mantissa, so in the range from 0.0 to 1.0 we easily have more precision than with 16 bit ints.
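
A quick way to convince yourself of that (a C sketch assuming the usual value/65535.0 mapping, not GIMP's actual conversion code): every 16-bit code survives the round trip through a 32-bit float in [0.0, 1.0].

  #include <stdio.h>
  #include <math.h>

  int main (void)
  {
    int failures = 0;

    for (int i = 0; i <= 65535; i++)
      {
        float f    = (float) i / 65535.0f;          /* 16-bit code -> [0.0, 1.0] */
        int   back = (int) roundf (f * 65535.0f);   /* ... and back again        */

        if (back != i)
          failures++;
      }

    printf ("codes that fail to round-trip: %d\n", failures);  /* 0 */
    return 0;
  }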

To restate the question, in decimal notation, 1 divided by 65535 is 0.00001525878906250000. So 16-bit integer precision requires 16 decimal places (lop off the four trailing zeros)

You're barking up the wrong tree here. The length of the decimal expansion is not necessarily helpful, because most of those digits just represent rounding error.

(btw. - you divided by 65536)

1.0 / 65535 = 0.000015259021896696422

but

1.0 / 0.00001525913 = 65534.53... --> gets rounded to 65535

and

1.0 / 0.00001525891 = 65535.48... --> gets rounded to 65535

So with 11 decimal digits we easily have all the precision we need to represent the fractions for a 16bit int.
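
The same point as a small C sketch (illustrative only): write the fraction with 11 decimal digits, read it back, and the reciprocal still rounds to 65535.

  #include <stdio.h>
  #include <stdlib.h>
  #include <math.h>

  int main (void)
  {
    char buf[32];

    /* 1/65535 written with only 11 digits after the decimal point. */
    snprintf (buf, sizeof (buf), "%.11f", 1.0 / 65535.0);

    double truncated = strtod (buf, NULL);

    printf ("11-digit value: %s\n", buf);                             /* 0.00001525902 */
    printf ("1 / value rounds to: %ld\n", lround (1.0 / truncated));  /* 65535 */
    return 0;
  }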

How many decimal places does Gimp 32-bit floating point actually provide?

It has a 23-bit mantissa, an 8-bit exponent and 1 sign bit.

For decimal notation it depends a lot on the range: for numbers with a bigger magnitude (exponent > 0) the number of digits after the decimal point becomes smaller.

BTW: You can view 16 bit ints "somewhat like a float with no sign bit, no exponent bits and 16 bits mantissa". I.e. sign is always positive, exponent is always 0. That makes it clear that a 32bit float completely encompasses the 16 bit integer values.

Bye, Simon

Elle Stone
2013-12-16 20:05:01 UTC (over 10 years ago)

How precise is Gimp 32-bit floating point compared to 16-bit integer?

Daniel and Simon, thanks! for answering my questions about Gimp precision.

On 12/16/2013 12:19 PM, Daniel Sabo wrote:

32bit floats have a precision of 24bits*. The exact size of the ulps** (unit in the last place) in the range [0.0, 1.0] is more complex because the exponent will give you more precision as you approach 0. This gets even more complicated because actually doing any math most likely gives you some rounding error, e.g. the gamma conversions are not precise to 24bits but are to more than 16bits.

Gamma conversions are the conversions to and from the sRGB TRC, yes?

We have never done an error analysis of the entire gimp pipeline,

That would be very interesting. I've seen a bit of "if less than some value, round to some other value" code in babl and gegl and wondered how it might affect processing accuracy. How would you do an error analysis of the pipeline?
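
One very naive way to start, I suppose (just a sketch of the idea, not anything babl or GEGL actually do), would be to push every 16-bit code through a float conversion plus a TRC encode/decode pair and record the worst round-trip error; the sRGB-style TRC below is just the standard piecewise formula, used as an example. A real analysis would of course have to cover every operation in the pipeline, not just one TRC.

  #include <stdio.h>
  #include <stdlib.h>
  #include <math.h>

  /* Standard sRGB-style TRC, used here only as an example transform. */
  static float
  to_srgb (float v)
  {
    return v <= 0.0031308f ? 12.92f * v
                           : 1.055f * powf (v, 1.0f / 2.4f) - 0.055f;
  }

  static float
  from_srgb (float v)
  {
    return v <= 0.04045f ? v / 12.92f
                         : powf ((v + 0.055f) / 1.055f, 2.4f);
  }

  int main (void)
  {
    int max_err = 0;

    for (int i = 0; i <= 65535; i++)
      {
        float f    = (float) i / 65535.0f;
        float back = from_srgb (to_srgb (f));        /* encode, then decode */
        int   out  = (int) roundf (back * 65535.0f);
        int   err  = abs (out - i);

        if (err > max_err)
          max_err = err;
      }

    printf ("worst round-trip error: %d 16-bit steps\n", max_err);
    return 0;
  }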

but
16bits is already beyond human perception (in my unscientific opinion).

For LDR photographs, 16bits is plenty to avoid the appearance of posterization, even when using linear gamma image editing - leastways I've never seen banding when using linear gamma image editing at 16bits.

What about various scientific applications? I suppose that's where 32-bit integer comes in, if someone really needs extra precision. Or HDR applications? How precise is 32-bit floating point OpenEXR? It's used to store HDR information, so there must be some cap on the allowed precision of values, yes? no? I'm being lazy, I can look that up.

The real value of using floating point is that it can hold out of gamut values.

On 12/16/2013 12:30 PM, Simon Budig wrote:

Elle Stone (ellestone@ninedegreesbelow.com) wrote:

My question is, for Gimp from git, is 32-bit floating point more precise than 16-bit integer?

Yes, at least for the range from 0.0 to 1.0.

I was curious about how large the RGB values can get, without straying outside the realm of real colors, when converting from a larger to a smaller RGB color space and thereby producing out of gamut values. So I used transicc to see what the equivalent sRGB values are when converting the reddest red, greenest green, etc. from larger color spaces to sRGB. Here are some sample values:

Most saturated:         Red       Green     Blue
AllColors/ACES Red      2.4601   -0.2765   -0.0103
BetaRGB Red             1.6142   -0.0758   -0.0211
BetaRGB Green          -0.5470    1.1023   -0.0823
BetaRGB Blue           -0.0672   -0.0265    1.1035
CIE-RGB Red             1.1944   -0.1329   -0.0062
CIE-RGB Green          -0.3139    1.2592   -0.1469
CIE-RGB Blue            0.1195   -0.1263    1.1531
WideGamut Red           1.8280   -0.2054   -0.0077
WideGamut Green        -0.8815    1.2914   -0.0868
WideGamut Blue          0.0535   -0.0859    1.0945
Rimm/ProPhoto Red       2.0354   -0.2288   -0.0085

(AllColors/ACES and Rimm/ProPhoto bluest blues and greenest greens are imaginary colors.)

So real colors can easily fall outside the range 0.0 to 1.0 if they are converted to sRGB. What happens to the precision when dealing with RGB values up around 2.5 or down around -0.9?

If so, by how much, and does it depend on the machine and/or things like the math instructions of the processor (whatever that means)? And if not, how much less precise is it?

AFAIK it does not depend on the processor, floating point numbers are defined in IEEE 754, and to my knowledge that is what all processors use.

For 32 bit floats there are 23 bits in the mantissa, so in the range from 0.0 to 1.0 we easily have more precision than with 16 bit ints.

To restate the question, in decimal notation, 1 divided by 65535 is 0.00001525878906250000. So 16-bit integer precision requires 16 decimal places (lop off the four trailing zeros)

You're barking up the wrong tree here. The length of the decimal expansion is not necessarily helpful, because most of those digits just represent rounding error.

(btw. - you divided by 65536)

1.0 / 65535 = 0.000015259021896696422

but

1.0 / 0.00001525913 = 65534.53... --> gets rounded to 65535

and

1.0 / 0.00001525891 = 65535.48... --> gets rounded to 65535

Thanks! that makes things more clear.

So with 11 decimal digits we easily have all the precision we need to represent the fractions for a 16bit int.

How many decimal places does Gimp 32-bit floating point actually provide?

It has a 23-bit mantissa, an 8-bit exponent and 1 sign bit.

For decimal notation it depends a lot on the range: for numbers with a bigger magnitude (exponent > 0) the number of digits after the decimal point becomes smaller.

BTW: You can view 16 bit ints "somewhat like a float with no sign bit, no exponent bits and 16 bits mantissa". I.e. sign is always positive, exponent is always 0. That makes it clear that a 32bit float completely encompasses the 16 bit integer values.

Bye, Simon

Elle

Simon Budig
2013-12-16 20:47:46 UTC (over 10 years ago)

How precise is Gimp 32-bit floating point compared to 16-bit integer?

Elle Stone (ellestone@ninedegreesbelow.com) wrote:

So real colors can easily fall outside the range 0.0 to 1.0 if they are converted to sRGB. What happens to the precision when dealing with RGB values up around 2.5 or down around -0.9?

Sign is stored in its own bit. Precision is not affected there.

Floating point numbers are represented as mantissa * (2**exponent), so whenever the exponent gets increased by one you lose one bit of "absolute" precision.

So for numbers up to two you have 23 binary digits of precision (after the "dual" point), and for numbers up to four you have 22 binary digits of precision.
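
A quick C check of that (illustrative only), using the out-of-gamut magnitudes from the table earlier in the thread: the float spacing at 2.5 is coarser than at 0.5, but still far finer than one 16-bit step of about 1.5e-5.

  #include <stdio.h>
  #include <math.h>

  int main (void)
  {
    float points[] = { 0.5f, 1.0f, 2.5f, -0.9f };

    for (int i = 0; i < 4; i++)
      {
        float x   = points[i];
        float ulp = fabsf (nextafterf (x, INFINITY) - x);

        printf ("x = %5.2f  spacing to next float = %.3g\n", x, ulp);
      }

    /* At 2.5 the spacing is 2^-22 ~= 2.4e-7, still about 64 times
     * finer than 1/65535. */
    return 0;
  }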

Bye,
Simon

Daniel Hornung
2013-12-17 01:00:38 UTC (over 10 years ago)

How precise is Gimp 32-bit floating point compared to 16-bit integer?

On Monday, 16. December 2013 11:59:20 Elle Stone wrote:

To restate the question, in decimal notation, 1 divided by 65535 is 0.00001525878906250000. So 16-bit integer precision requires 16 decimal places (lop off the four trailing zeros) in floating point to express the floating point equivalent of 1 16-bit integer tonal step, yes? no?

Not quite:

0/(2^16-1) = 0
1/(2^16-1) ≈ 0.000015259022
2/(2^16-1) ≈ 0.000030518044
3/(2^16-1) ≈ 0.000045777066
...
(2^16-1)/(2^16-1) = 1

So it is quite sufficient to show the first 6 digits after the decimal point; any error after that is smaller than 2% of one step, and even that only in the worst case of the first step.

Actually the big advantage of floating point numbers (IMHO) is that they do not have linear precision; instead, each power-of-two interval (..., 1/2^8 to 1/2^7, ..., 1/2 to 1) is split into 2^23 linear steps.
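
That 2^23 figure is easy to verify, because the bit patterns of positive IEEE 754 floats are ordered the same way as the floats themselves (a small C sketch, illustrative only):

  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>

  /* Reinterpret a float's bits as an unsigned 32-bit integer. */
  static uint32_t
  bits (float f)
  {
    uint32_t u;
    memcpy (&u, &f, sizeof u);
    return u;
  }

  int main (void)
  {
    /* Number of representable floats in [0.5, 1.0) and [0.25, 0.5). */
    printf ("floats in [0.5,  1.0): %u\n", bits (1.0f) - bits (0.5f));   /* 8388608 = 2^23 */
    printf ("floats in [0.25, 0.5): %u\n", bits (0.5f) - bits (0.25f));  /* 8388608 = 2^23 */
    return 0;
  }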

Since our senses work basically logarithmically, this means that we perceive the steps to be of about equal size -- except that we cannot distinguish such small steps visually anymore anyway, be it float32 or int16, as pointed out by Simon already.

Cheers,
Daniel
