Thank you all for the replies.
Wilt wrote in post #17969680
Consider that some of the banding seen in skies is most likely due to too much data compression, via Quality values like 3 or 4 rather than 8 or 9 when the JPG is created, and not at all due to too few bits for color encoding.
As for color differentiation limits: due to limitations of the printing equipment, it might be impossible to see a gradient of hues between 100000 --> 100001 --> 100002 --> 100003 --> 100004, rather than simply seeing the coarser progression of hues 25000 --> 25002 --> 25004.
Put in terms of a different sense: does your skin detect a temperature change of 0.1°F?! Think of the 8-bit-per-color 16.7 Million hues like sensing temperature shifts of 1°F, vs. the 16-bit-per-color ability to detect/store values as small as 0.1°F.
We drank the Kool-Aid about the aRGB ability to hold an extended range of hues compared to sRGB, yet BOTH still only support 16.7 Million different values! And it is very difficult to find a commercial printer who prints aRGB without first converting the file (and losing data) before printing. If aRGB were truly superior to sRGB, it should have 33.4 Million hues, so that it could portray all of sRGB plus more hues.
So 281 Trillion hues (16-bit) is nothing more than a gleam in daddy's eye today, in the hope that the future ever expands beyond the 16.7 Million hues of 8-bit color.
BUT then you have to stop and remember that the human eye can apparently only detect TEN million colors (and the CIE findings of the 1930s claimed only the ability to detect about 2.8 Million colors). Do I hear 'overkill'?! We can already store 6.7 Million more hues than our eyes can see!
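Out of curiosity, I checked the 16.7 Million and 281 Trillion figures myself. A quick, purely illustrative Python sketch (the names are mine, not from Wilt's post):

```python
# Rough arithmetic behind the "16.7 Million" and "281 Trillion" figures quoted above.
levels_8 = 2 ** 8            # 256 levels per channel at 8 bits
levels_16 = 2 ** 16          # 65,536 levels per channel at 16 bits

colors_8 = levels_8 ** 3     # 16,777,216 RGB combinations (~16.7 million)
colors_16 = levels_16 ** 3   # 281,474,976,710,656 RGB combinations (~281 trillion)

print(f"8-bit:  {levels_8:>6,} levels/channel, {colors_8:,} colors")
print(f"16-bit: {levels_16:>6,} levels/channel, {colors_16:,} colors")
```

So both of Wilt's numbers check out, and the ten million colours the eye can supposedly distinguish does indeed sit below the 8-bit total.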
gjl711 wrote in post #17969733
Remember, it's binary. 16-bit color is not double 8-bit, it's 256 times as many levels per channel. Or, working your analogy: if each color and its adjacent neighbor in a JPEG are 1 degree apart, in 16-bit they would be about .004 degrees apart; between any color and its neighbor in a JPEG you have 1 gradation, whereas in 16-bit color you would have 256.
Most of the time it doesn't make a difference, but for images with lots of shades, like sunsets, once you start processing you can get the gradation effect quite easily. A good rule of thumb I use: if I am working from raw, my workflow stays 16-bit until the last step, where I convert to JPEG.
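Just to sanity-check the .004 in that analogy (a trivial aside of my own):

```python
# One 8-bit step spans 256 16-bit steps, so in the "1 degree per JPEG level" analogy:
steps_per_jpeg_level = 2**16 // 2**8      # 256 intermediate gradations
finer_step = 1 / steps_per_jpeg_level     # 0.00390625 degrees, i.e. roughly the ".004" quoted
print(steps_per_jpeg_level, finer_step)
```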
I always use the highest quality jpg setting, and I appreciate the losses inherent to jpg, but I'm talking about what I see when working with a raw file in ACR/PS.
I'm not so bothered about whether I can detect the difference between 8 bit and 16 bit by looking at an image initially, but I'm concerned with how well the image stands up to processing.
I've just done a test. The screenshot below shows a comparison of the same raw file processed in PS as an 8-bit file at the top, and as 16-bit at the bottom. I've done some heavy processing to see what happened. It's a bit hard to see in the screenshot, but the 8-bit version shows significant banding, while none at all is visible on the 16-bit version (you can see a little on the screenshot, but that's not actually visible in PS; it must be down to compression when the screenshot was saved).
That's a big difference for me, so from now on I'll be working in 16 bit.
IMAGE LINK: https://flic.kr/p/GeYYnK ("8 bit vs 16 bit test" by Jack Henriques, on Flickr)
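For anyone curious, the effect in that screenshot can be mimicked with a toy model: apply the same aggressive levels stretch to a smooth sky-like ramp stored at 16-bit and at 8-bit precision, then count how many distinct output levels survive. This is just a numpy sketch of the idea, not what ACR/PS actually does internally:

```python
import numpy as np

# A smooth, dark, sky-like ramp defined at 16-bit precision (values 0..65535).
ramp16 = np.linspace(8000, 12000, 4096).astype(np.uint16)

# The same ramp rounded to 8-bit precision first (values 0..255),
# as if the file had been converted to 8 bits before editing.
ramp8 = (ramp16 // 257).astype(np.uint8)

def stretch(x, lo, hi, out_max=255):
    """Aggressive levels adjustment: map [lo, hi] onto the full output range."""
    y = (x.astype(np.float64) - lo) / (hi - lo)
    return np.clip(y, 0.0, 1.0) * out_max

# Apply the same heavy stretch to both versions, then view both as 8-bit output.
out_from_16 = stretch(ramp16, 8000, 12000).astype(np.uint8)
out_from_8 = stretch(ramp8, 8000 // 257, 12000 // 257).astype(np.uint8)

print("distinct output levels, 16-bit source:", len(np.unique(out_from_16)))  # 256
print("distinct output levels, 8-bit source: ", len(np.unique(out_from_8)))   # 16
```

Counting distinct levels is a crude stand-in for "visible banding", but it shows why the 8-bit version posterises under heavy processing while the 16-bit version still has plenty of intermediate values to draw on.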
tzalman wrote in post #17970212
Since the OP didn't say whether he is doing his Raw processing in LR or ACR, and to avoid any confusion, it should be noted that the internal workings of LR/Develop and ACR are identical, including the fixed working space. The only difference (and it is confusing) is in the preview data sent to the monitor and used for the histogram. In LR it is in a hybrid space called Melissa RGB (unless soft proofing is on) and in ACR it is in whatever space you have set as PS's working space (unless you change it in the blue link in the center of the bottom margin.)
@jack880
Never your monitor profile. That is unique to your monitor and using it in an image file (rather than a universal ICC space) could cause everybody you share the photo with to see it improperly rendered.
I use ACR. I know how to set the colour space (in PS and in ACR); I just don't know what to set it to. OK, so my monitor profile controls how the image is displayed on my particular monitor, but has nothing to do with the actual image itself. Thanks.
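For my own reference, here is how I understand the "convert to a universal space before sharing" advice, as a rough Pillow sketch (untested, file names made up, and it assumes an ordinary 8-bit RGB file with an embedded profile):

```python
import io
from PIL import Image, ImageCms

img = Image.open("edited_photo.tif")  # hypothetical 8-bit RGB export from PS

# Use whatever profile is embedded in the file (e.g. Adobe RGB), never the monitor profile.
embedded = img.info.get("icc_profile")
src_profile = (ImageCms.ImageCmsProfile(io.BytesIO(embedded))
               if embedded else ImageCms.createProfile("sRGB"))

# Convert the pixels to sRGB and embed the sRGB profile so browsers render it as intended.
srgb_profile = ImageCms.createProfile("sRGB")
converted = ImageCms.profileToProfile(img, src_profile, srgb_profile, outputMode="RGB")
converted.save(
    "for_web.jpg",
    quality=95,
    icc_profile=ImageCms.ImageCmsProfile(srgb_profile).tobytes(),
)
```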
Wilt wrote in post #17970556
I don't think there is much debate about aRGB providing hues not seen in sRGB. But given the fact that both color spaces only have room to encode 16.7 Million values, which hues do we have to give up when using aRGB in lieu of sRGB, and what is the visual impact of that trade off?
The question is not 'Is aRGB a paper tiger?' but 'what stripes do we give up, when changing tigers?'
We do know that one largely has to print to one's own home printer that supports aRGB, as fewer than a handful of commercial print vendors in the world can even accept aRGB without needing to first convert the data to sRGB before printing (and losing data in the conversion).
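Out of curiosity I tried to see what that "losing data in the conversion" looks like in numbers. A rough numpy sketch using the commonly published linear RGB-to-XYZ matrices (gamma is ignored, so this is only illustrative):

```python
import numpy as np

# Commonly published linear-light RGB -> XYZ (D65) matrix for Adobe RGB (1998),
# and the XYZ -> linear sRGB matrix.
ADOBE_RGB_TO_XYZ = np.array([
    [0.5767309, 0.1855540, 0.1881852],
    [0.2973769, 0.6273491, 0.0752741],
    [0.0270343, 0.0706872, 0.9911085],
])
XYZ_TO_SRGB = np.array([
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
])

# A fully saturated Adobe RGB green (linear values, gamma ignored).
adobe_green = np.array([0.0, 1.0, 0.0])

srgb = XYZ_TO_SRGB @ (ADOBE_RGB_TO_XYZ @ adobe_green)
print("sRGB equivalent (unclipped):", srgb)                 # red and blue come out negative
print("sRGB after clipping:        ", np.clip(srgb, 0, 1))  # the clipped, "lost" part
```

The negative components mean that green simply has no sRGB representation; converting the file clips it to the nearest in-gamut colour, which is the loss Wilt is describing.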
So is sRGB what I should be using then? I normally just post my images online, and only occasionally print them. When I do, I use both a commercial printer (specialist printer, not supermarket printers) and my own Canon Pixma printer.
I see that this is a much-debated subject, but please can someone tell me what I personally should be using? If it's sRGB, which one? There are many different versions of sRGB in the ACR/PS options. Is it the one in my OP?
Or if it's aRGB I presume I should also set my camera to aRGB? It's currently set on sRGB.
Many thanks