Wilt wrote in post #18363217
BigAl007, thanks for your remarks. Let me rebut...
- If each color sensel has 14 bits, each color in the triad can have one of 2^14 = 16384 levels.
- So in combination, the triad of sensels can be translated to a R-G-B pixel which has a color which is one of 4.3 * 10^12 hues.
- 2^16 color space is 65536 * 65536 * 65536, or 2.81 * 10^14 total hues, or 64 times as many colors as the sensels can capture and be interpreted into, or fundamental overkill -- even at 2^16 for each color of the triad!
- So going to 2^32 takes us to 4.29 billion levels for each of the triad, or 7.92 * 10^28 total hues, or 1.8 * 10^16 times as many hues as the sensels themselves can capture! ...which to me is ludicrous levels of OVERKILL.
EACH PIXEL has 1.8 * 10^16 times more capacity than it needs.
Wilt, you are forgetting that each sensel's worth of data has no directly relatable colour information recorded in it. It is simply a value related to the brightness at that location. If you convert each of those values back to a brightness value, you get a monochrome image.
Now, because you know that in front of the sensor there is a Bayer CFA with alternating rows of red/green and green/blue filters, for which you know the spectral response, it becomes possible to calculate an RGB triplet value for each sensel location by using the data from that sensel and those surrounding it.
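To make that concrete, here is a toy sketch in C of how an RGB triplet could be estimated at one green-filtered location of an RGGB mosaic by averaging its neighbours. The function name is made up, it assumes the location is not on the image border, and it is nothing like what a real converter actually does -- it is just the shape of the idea.

#include <stdint.h>

/* Toy bilinear sketch for ONE green sensel that sits on a red/green row of an
   RGGB Bayer mosaic: green is measured directly, red comes from the left/right
   neighbours, blue from the sensels above/below. Assumes x and y are not on
   the image border. Real converters are far cleverer than this. */
void rgb_at_green_on_red_row(const uint16_t *raw, int width, int x, int y,
                             uint32_t rgb[3])
{
    rgb[1] = raw[y * width + x];                                        /* green, measured      */
    rgb[0] = (raw[y * width + (x - 1)] + raw[y * width + (x + 1)]) / 2; /* red, interpolated    */
    rgb[2] = (raw[(y - 1) * width + x] + raw[(y + 1) * width + x]) / 2; /* blue, interpolated   */
}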
Now, as part of that calculation, suppose that you need to multiply two sensel values together. The computer architecture limits you to using 8, 16, 32, or 64 bit data structures. Also don't forget that modern processors are designed to work in 64 bits, since that is the width of the hardware that actually performs the computations. We had 32 bit registers etc. back in the '80s, so even then working in 32 bit was normal.
So if I have two 14 bit numbers, I have to put them in a 16 bit container, and I only get 15 bits to work with, since one bit is used for the sign (+/-). Now you have to consider what happens when you are dealing with numbers near the maximum possible value; small values are not the problem. Say you need to multiply 16203 by 16300: the result is 264,108,900. But hold on, you only have 15 bits to hold the value, so the largest value you can have is 32767. So you get an out of range error, and the program terminates, or at least does whatever it does in that situation.
So in order to prevent the overflow error, you simply allocate the next size up chunk of memory to hold the result of your computation. In this case, the result of multiplying two 16 bit values is held as a 32 bit value.
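Here is a little C sketch of exactly that, using the numbers from my example above: the same multiply first forced back into a signed 16 bit container, then done with 32 bit storage for the result.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int16_t a = 16203;   /* two values that fit comfortably in 14 bits */
    int16_t b = 16300;

    /* The true product, 264,108,900, needs about 28 bits. Forcing it back into
       a signed 16 bit container (max 32767) just throws the top bits away. */
    int16_t overflowed = (int16_t)(a * b);

    /* Widening to 32 bits before storing the result keeps every bit of it. */
    int32_t widened = (int32_t)a * (int32_t)b;

    printf("16 bit container: %d\n", overflowed);  /* garbage */
    printf("32 bit container: %d\n", widened);     /* 264108900 */
    return 0;
}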
One very simple answer to why this is important is white balance. To apply WB to the RAW data, you multiply the values from sensels with a given colour filter by a constant value for that particular light source. For the red or blue channel that can often be a value greater than 2. If you hold the resulting values in 32 bit, you won't get an overflow on WB. This is useful if you ETTR and maximise the exposure without saturating any sensel. If you limit yourself to using only 16 bit values, you can lose a stop or more of maximum usable exposure.
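A quick C sketch of the WB point, using a made-up red multiplier of 2.4 (the exact value depends on the camera and light source), shows why the 32 bit intermediate matters once you have ETTR'd close to the 14 bit ceiling.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const double wb_red = 2.4;   /* made-up daylight multiplier; red and blue are often > 2 */

    int32_t raw_red = 16000;     /* an ETTR'd red sensel, just under the 14 bit ceiling of 16383 */

    int32_t scaled = (int32_t)(raw_red * wb_red);  /* 38400 - no problem at all in 32 bits    */
    int16_t narrow = (int16_t)scaled;              /* will not fit in a signed 16 bit value   */

    printf("32 bit container: %d\n", scaled);
    printf("16 bit container: %d\n", narrow);      /* wrapped, i.e. garbage */
    return 0;
}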
As well as dealing with overflow issues, it is possible to scale the data by choosing where within the 32 bits you drop your 14 bits of data. This gives you room for the computations without overflowing, and also greater precision during the computations. This is good because you don't get rounding errors introduced at each step of the computation; you only have to deal with rounding at the point where the data is converted back to an 8 or 16 bit RGB triplet at the end.
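As a rough illustration of what I mean (the function name, shift amounts and gain are all made up, not anybody's real pipeline), this parks the 14 bits high up in a 32 bit word so the low bits act as fractional headroom, and rounds only once at the very end.

#include <stdint.h>

/* Sketch: keep the 14 bit sensel value in the upper part of a 32 bit word so
   the low bits act as fractional precision during the maths, then round once
   when producing the final 16 bit output value. */
uint16_t scale_and_round(uint16_t raw14)
{
    uint32_t x = (uint32_t)raw14 << 16;            /* 14 data bits + 16 fractional bits */

    /* apply a made-up gain of about 1.19, expressed as 78053/65536 in fixed point */
    x = (uint32_t)(((uint64_t)x * 78053) >> 16);

    /* one rounding step, at the very end, down to the 16 bit output range */
    uint32_t out = (x + (1u << 13)) >> 14;
    return (uint16_t)(out > 65535 ? 65535 : out);  /* clamp rather than wrap */
}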
To end with, I will again point out that the system architecture does the most to determine at what width most computations will be done. If the hardware is optimised to work with 32 bit wide chunks of data, then that is what you feed it; feeding it 8 or 16 bit wide data just leaves the remaining bits empty anyway.
Oh, and it is perfectly possible to derive accurate 32 bit per channel RGB colour data from a 14 bit sensel based sensor. Mostly we don't bother, since it is far more information than we are ever likely to make use of.
Since the vast majority of output devices struggle to present more than about 6 bits of image data to the viewer, is that a good reason to limit all photographic computations to 6 bit?
Alan