pwm2 wrote in post #8633235
Different technologies often have different terminology. But that doesn't mean that you can't often compare technologies and find similar phenomena.
Absolutely agreed.
Film may be analog, and our Canon sensors digital. But you run into the same problem if you run out of bits to capture the tonal values or if your signal drops below the noise level. Canon could of course have made the sensor logarithmic. This would not have changed the issue - it would just have added IQ problems from the logarithmic multiplier not being perfectly logarithmic - just as film isn't perfectly logarithmic.
Fair enough. All I was trying to do was explain why ETTR is useful not just for maximizing the use of the camera's dynamic range.
What the OP was trying to do was make the detail recorded in the shadows more apparent. Since the shot was already slightly clipped in the highlights, he could not have used ETTR to improve his situation. My point about film was that the quality of the detail he'll get from pushing the shadows is almost certainly lower than what he'd get from doing the same with film, assuming the intensity of the noise is the same between the two. In practice it may not matter at all. Or it may. If it does matter, then film wins. Digital cannot win here without being less noisy than film in the same situation, until the tonal resolution in the darkest stop exceeds what the eye can detect.
First off, you shouldn't stress the "infinite" part of analog film too much. "Infinite" would only be applicable if film were noise-free. As it is, you have to limit your view of "infinite" at a suitable point below the noise level. Too far below the noise level, and you will no longer see a detail modulated by noise.
Right. I did mention that, but I probably didn't sufficiently highlight it.
A linear sensor will have more tonal values in the brightest stops. Way more than we can see. Film will have a constant tonal resolution in each stop, since it is a semi-logarithmic medium. But that is only relevant for the brightest stops. When you get to the dark sections, both a linear and a logarithmic sensor will fail, even if for different reasons.
Well, yes, because both eventually hit their limits in terms of their sensitivity to light relative to noise.
The ADC of the digital sensor will have noise, and it will have a limited number of available bits. So you get few, discrete steps. And the selected steps may be wrong because of noise. But film will also get into trouble. It does not have a limited number of steps in the same way, but instead each grain in the film gets its value based on the roll of a very large die.
Yes.
Essentially, the noise has roughly the same characteristics for both (well, except for pattern noise like the OP talks about), but the signal does not. If the noise is sufficient, the difference in the signals will be lost and you won't be able to tell the difference between the two.
If you capture a dark, evenly lit surface with a small intensity gradient on both a digital sensor and on film, both will show the gradient with a semi-infinite tonal range until you zoom in. The physical mechanism may be different, but both alternatives will contain noise averaged around the expected value. With a patterned surface, on the other hand, the spatial resolution makes a difference. When a detail is small, the noise can no longer average around the expected value. In the end, you will either lose the detail, or you will get the wrong tonal value.
And the important thing: That happens with both film and with a digital sensor.
Right. Such is the nature of noise.
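To make that averaging point concrete, here is a minimal toy sketch (my own arbitrary numbers, not measured from any camera or film):

import numpy as np

# Toy model: noise averages out over a large smooth patch but not over a small detail.
rng = np.random.default_rng(0)
noise_sigma = 4.0                                  # noise amplitude, arbitrary tonal units

# A large, dark, evenly lit patch: true signal 20, sampled 100x100 times.
large_patch = 20 + rng.normal(0, noise_sigma, size=(100, 100))
print("large patch mean:", large_patch.mean())     # very close to 20 - noise averages out

# A small detail: true signal 23, but only a 2x2 group of samples on that 20 background.
detail = 23 + rng.normal(0, noise_sigma, size=(2, 2))
print("small detail mean:", detail.mean())         # can easily land a few tones off

With only a handful of samples, the measured tone can end up further from 23 than the 3-unit difference you were trying to record, which is the "wrong tonal value or lost detail" outcome described above.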
But the difference is this: if you take that darkest recorded stop and expand it to occupy your entire tonal range (black to white), with digital you will see noisy but discrete steps in the tone gradient, while with film you will see a noisy but continuous gradient.
How much of a difference there really is depends on the number of discrete tonal values the sensor is able to record in that one stop of data.
This is why the tonal resolution of the analog to digital converter matters a great deal. It has to be usefully higher than the camera's dynamic range for that dynamic range to be truly useful.
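To make the "discrete steps versus continuous gradient" point concrete, here is a small sketch (a toy model of my own, with arbitrary figures, not a claim about any specific camera): a smooth gradient confined to the darkest of 12 recorded stops, quantized at 14 bits.

import numpy as np

# Toy model: expand the darkest recorded stop to the full tonal range.
# Assumes an idealized, noise-free 14-bit linear encoding and 12 recorded stops.
bits, stops = 14, 12
full_scale = 2 ** bits                         # 16384 code values
low = full_scale / 2 ** stops                  # bottom of the darkest stop (4)
high = full_scale / 2 ** (stops - 1)           # top of the darkest stop (8)

# A smooth gradient that lives entirely inside that darkest stop.
gradient = np.linspace(low, high, 1000, endpoint=False)

digital = np.floor(gradient)                   # ADC quantization to integer codes
print("distinct digital levels:", len(np.unique(digital)))    # 4
print("distinct analog levels: ", len(np.unique(gradient)))   # 1000 (effectively continuous)

Stretched from black to white, those four digital levels show up as bands, while the continuous version stays a smooth ramp (with real film it would be a noisy, but still continuous, ramp).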
If the camera's sensor were a logarithmic medium instead of a linear one, the tonal values would be evenly distributed across the recorded stops, and you'd be able to get away with a significantly smaller tonal resolution while still exceeding the human eye's ability to distinguish between tones in the shadows. With a linear sensor, on the other hand, the darkest stop has to start with the same tonal resolution that the logarithmic sensor gives to any given stop, and every additional stop of dynamic range recorded above it requires an additional bit of tonal resolution.

This means that a camera with a linear sensor needs a tonal resolution (number of bits per color channel) equal to the dynamic range being recorded plus the number of bits required to store the tonal resolution of one stop of the logarithmic sensor's range. So if, say, 16 bits per color channel is the minimum tonal resolution sufficient to record the dynamic range of a logarithmic sensor without two adjacent tonal values being distinguishable to the human eye, and the sensor is recording 16 stops of dynamic range, then each stop gets 12 bits' worth of tonal values (2^16 / 16 = 4096 = 2^12), and the linear sensor would need 12 + 16 = 28 bits worth of tonal resolution to achieve the same tonal resolution in the darkest stop.
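A quick sketch of that bit-counting (same idealized assumptions as the paragraph above, nothing more):

import math

log_bits = 16      # assumed bit depth of the idealized logarithmic encoding
stops = 16         # assumed dynamic range in stops

# Logarithmic: codes are spread evenly across the stops,
# so each stop gets 2**16 / 16 = 4096 values, i.e. 12 bits' worth.
codes_per_stop = 2 ** log_bits / stops
bits_per_stop = math.log2(codes_per_stop)
print("log encoding, bits per stop:", bits_per_stop)                  # 12.0

# Linear: the darkest of N stops only gets 2**(B - N) codes, so matching
# 12 bits in that stop requires B = 12 + 16 = 28 bits.
linear_bits = int(bits_per_stop) + stops
print("linear bits needed for the same darkest stop:", linear_bits)   # 28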
This isn't just an issue with storage space, either. The linear sensor needs an analog to digital converter that can resolve the number of bits required. In the above example, it would need to have 28 bits worth of resolution in the analog domain, while the logarithmic sensor would only need 16 bits. As a result, the linear sensor is at a severe disadvantage here.
In the end, it is possible to create a sensor with a logarithmic or linear capture. But which one you select doesn't matter for the tonal range, as long as the linear sensor has enough bits compared to the dynamic range.
Enough bits, yes. But enough bits is different between the two, with considerably more needed for the linear sensor than the logarithmic one.
If we keep just the top 8 stops, then 14 bits isn't a problem. With a 10-stop image aligned to the right, you still have enough bits to capture details with enough tonality.
For the 10-stop image aligned to the right, that gives you 4 bits worth of tonal resolution in the lowest stop, or 16 discrete values. So the question is: is that "enough"? If, in the target output medium, the human eye can distinguish one of those tonal values from a tone lying anywhere between it and the adjacent value (in the given tonal space), then one could argue that it's not enough.
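As a sketch of where that 16 comes from (idealized linear encoding, noise ignored):

# How many discrete values each stop gets from a 14-bit linear encoding
# when the exposure uses 10 stops aligned to the right.
bits, stops = 14, 10
for stop in range(1, stops + 1):               # stop 1 = brightest, stop 10 = darkest
    print(f"stop {stop:2d}: {2 ** (bits - stop)} values")
# The darkest stop ends up with 2**(14 - 10) = 16 values, i.e. 4 bits,
# which is the figure discussed above.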
But once more - linear or logarithmic doesn't really matter as long as the linear alternative has enough bits. When a linear sensor has enough bits, the only difference is that it will fill the memory card faster, by capturing too much tonality for the bright parts of the picture. A logarithmic sensor is evenly good/bad all through the range. A linear sensor is excessively good at the high end. There are logarithmic AD converters available, but they are normally way worse than linear ones. Just as your logarithmic film has tonality problems through a number of stops.
I'm not sure that a linear sensor run through a logarithmic AD converter is really equivalent to a logarithmic sensor. The reason is that when we talk of "logarithmic" versus "linear" with regard to sensors, we're talking about the response curve of the sensor itself. This matters, I believe, because a logarithmic sensor would, I expect, have a noise signature that is constant in the logarithmic domain, while a linear sensor has a noise signature that is constant in the linear domain. Which means that as you increase the sensitivity of the logarithmic sensor, the noise would increase linearly with the number of stops of sensitivity increase, while the noise in the linear sensor increases exponentially with the number of stops increased. Frankly, I'm now getting beyond what I know of digital sensors and sensors in general, so someone please correct me if I'm wrong in this regard.
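Just to spell out the two growth rates I'm describing (a toy illustration of my speculation above, not a model of any real sensor):

# Toy illustration only: a noise floor fixed in the log domain grows by a
# constant amount per stop of added sensitivity; a noise floor fixed in the
# linear domain doubles per stop instead.
base_noise = 1.0
for extra_stops in range(5):
    log_domain_noise = base_noise * (1 + extra_stops)       # grows linearly with stops
    linear_domain_noise = base_noise * 2 ** extra_stops     # doubles every stop
    print(extra_stops, log_domain_noise, linear_domain_noise)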
Contrast isn't normally a term to use for a sensor. Contrast is normally something to use when talking about the final print. And the contrast will then depend on how you used curves to compress the dynamic range of the sensor into a narrower dynamic range of a print or a monitor.
But what we're comparing here isn't the dynamic range of the camera versus the output medium, we're comparing the dynamic range of the camera to that of the human eye.
Consider what you'd see on your monitor if your monitor had the same dynamic range and color resolution as your camera. You'd still see light gray shades (as you saw directly with your eye when you looked at the scene) mapped to even lighter shades on your monitor, and dark gray shades mapped to even darker shades on your monitor. That is a contrast bump, because the apparent brightness difference between light and dark is greater in the image produced by the camera than it is in real life.
That's what I mean when I say that the camera is a "contrast multiplier".
(And yes, I realize that the intensity level of "white" on the monitor may in fact be less than the intensity of the light you saw in the real scene. The human brain is very good at automatically adjusting its idea of "white" and "black", so we're really talking about perceived contrast here.)
No. That is contrast clipping.
It's dynamic range clipping, which isn't the same thing.
(Is it even meaningful to talk about clipping contrast? Contrast is the difference between light and dark. The greater the difference, the greater the contrast. What would it mean to "clip" it?)
I basically agree with everything else you've said.