davesrose wrote in post #17101108
No, in audio, DR is the range of dB of volume.
Which is exactly what I wrote - the distance from the hissing noise floor up to the maximum level before clipping. In audio it is expressed in dB, since a bel is a very large unit. In photography, the unit is instead stops.
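As a rough illustration, the two units convert with a constant factor (this assumes the 20*log10 amplitude convention commonly used for sensor DR figures; the function names here are just illustrative):

```python
import math

# One photographic stop is a doubling of signal level. Under the
# 20*log10 amplitude convention, a doubling is about 6.02 dB.
DB_PER_STOP = 20 * math.log10(2)  # ~6.0206 dB per stop

def stops_to_db(stops: float) -> float:
    """Dynamic range in stops -> dynamic range in dB."""
    return stops * DB_PER_STOP

def db_to_stops(db: float) -> float:
    """Dynamic range in dB -> dynamic range in stops."""
    return db / DB_PER_STOP

# 14 stops of sensor DR is roughly 84 dB, and 16-bit audio's
# ~96 dB corresponds to roughly 16 stops.
print(round(stops_to_db(14), 1))
print(round(db_to_stops(96), 1))
```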
In photography, it's the range of luminance (be it your scene, what the sensor records, and the contrast range of your output image).
Just that the range doesn't care about the number of steps between lowest and highest. It only cares about the distance. The number of steps is the resolution in audio terms, or the tonality in photography terms. But that is separate from the distance between the noise floor and the maximum (clipping) level.
As for the rest of all your replies, I don't see many contradictions to my original posts. It appears the only thing you're arguing is sensor DR. Never did I say 14-bit processing leaves a clean 14-bit file. You'll see in my last post I prefaced the situation of utilizing all 14 bits with a hypothetical "magic" sensor that can fill a full 14 stops. A 14-bit processor gives you 14-bit tonal "precision".
Nothing "magic" about a sensor that can fill 14 bits. They already exist.
But it doesn't matter if the sensor can capture enough range for even 15 or 16 or 24 bits - the signal conditioning and digitizing stages must also maintain enough margin to capture 14 bits of usable information. And that is where the Canon DSLR bodies currently fall very short.
You're also missing context in some of my posts: my comparison of DVD to blu-ray was in response to teamspeeds assertion that the image quality of an upscaled image can be just as good as a native image.
Not at all. TeamSpead never implied such a thing. Upscaling an image can't present you with features that weren't captured in the original, low-resolution data. It will not magically invent new features unless you use one of the special image packages that apply fractal pattern functions to synthesize extra structure in the upscaled image.
The point being debated here is that the fewer pixels of the 22 MP 5D3 represent the same sensor area as the larger number of pixels of the 36 MP D800E. The correct way to compare is then either to upscale 22 MP -> 36 MP or to downscale 36 MP -> 22 MP. Both alternatives give the same result: they show that the true difference between the 5D3 and the D800E is greater than it looks if you just compare two 100% crops. The 100% crops are not relevant because they do not correspond to what you get if you take the same fraction of the sensor data and create a fixed-size web image or a same-sized print.
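As a toy sketch of why downscaling narrows the gap (assuming uncorrelated Gaussian per-pixel noise and made-up numbers; real raw noise is more complicated):

```python
import math
import random
import statistics

random.seed(0)

# A flat patch: mean signal 100, per-pixel noise sigma 10.
pixels = [random.gauss(100, 10) for _ in range(40_000)]

# Downscale by averaging 4-pixel blocks: noise drops by ~sqrt(4) = 2x.
blocks = [sum(pixels[i:i + 4]) / 4 for i in range(0, len(pixels), 4)]

print(round(statistics.stdev(pixels)))  # ~10
print(round(statistics.stdev(blocks)))  # ~5

# For 36 MP scaled down to 22 MP the averaging factor is 36/22, so the
# per-pixel SNR advantage at matched output size is about a third of a stop.
gain_stops = 0.5 * math.log2(36 / 22)
```

The same reasoning applies in the other direction: upscaling 22 MP to 36 MP spreads the 5D3's per-pixel noise over more output pixels, so either method puts the two files on equal footing.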
Ditto about perlin noise in 3D graphics (that's completely irrelevant as that's intended procedural noise).
Which is why I said that your digitally rendered images are irrelevant: they only contain intended procedural noise and have none of the noise floor issues that real sensor data has. A rendered image with 8-bit or 16-bit or 32-bit pixel data doesn't have some magic "wet cloth" thrown over the data, drenching all the weak shadow information.
As I've already stated, the Sony sensor is clearly superior at noise handling and extra resolution.
Just that most of what you think is superior noise handling isn't superior handling at all, but simply more dynamic range.
The noise you get when using high ISO is noise on top of the image data. But it doesn't really change the intensity of the captured data - it just adds intensity deviations to individual pixels.
With the Exmor sensor, you get additional data that is totally lost in noise with a Canon body. So directly accessible additional dynamic range.
I have yet to see its highlight recovery though: the largest value in tonal DR. With the Sony sensor, you can be more confident there isn't noise while pushing shadows.
This is a side track. The Exmor sensor doesn't get extra shadow stops by losing highlight recovery. It gets extra shadow stops because the total distance from the brightest highlight to the weakest shadow is larger. Which means that any low-ISO exposure setting where the Exmor sensor handles the highlights as well as a Canon body will give the Exmor sensor extra shadow stops. And any low-ISO exposure setting where you get similar shadow quality will allow the Exmor sensor extra stops of highlights without clipping any color channel.
But if we look at tonal DR, that's just 16 shades of grey (if looking at the first 4 stops). Let's say the sensor and situation can record up to 13 stops of light. The last stops (where your highlights are) are much greater than that: over 2048 shades of grey just in the last stop.
You are forgetting that even if a single pixel is limited to 16 shades of gray, that doesn't mean a photo is limited to those 16 shades of gray. All because of noise. If you have a 10x10 pixel area that is exposed to the same level, each of the 100 pixels will capture the input signal +/- a bit of noise. This allows the visible area to represent thousands of levels of gray even though each pixel has just 16 shades.
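A quick sketch of that effect, with made-up numbers: a true level of 7.3 on a 16-shade scale, and roughly one quantisation step of per-pixel noise:

```python
import random
import statistics

random.seed(1)

TRUE_LEVEL = 7.3    # lies between quantisation steps 7 and 8
NOISE_SIGMA = 1.0   # per-pixel noise, roughly one quantisation step

# A 10x10 patch: each pixel captures signal + noise, then is
# quantised to one of 16 integer shades (0..15).
patch = [max(0, min(15, round(random.gauss(TRUE_LEVEL, NOISE_SIGMA))))
         for _ in range(100)]

# No single pixel can store 7.3, but the patch as a whole encodes it:
print(statistics.mean(patch))  # close to 7.3, not 7 or 8
```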
This is also why the right-side image looks like grayscale despite just having black or white pixels:
Make the pixel sizes small enough, and you will end up with an image that our eyes find identical to the left image. This is one of the important concepts used in many printers. An ink-jet printer can only produce a limited number of dot sizes. But it can print with very small dots and vary the distribution of smaller and larger dots, or the distance between the dots.
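The printer trick can be sketched with random dithering (real printers use more structured halftone or error-diffusion patterns, but the principle is the same):

```python
import random
import statistics

random.seed(2)

def halftone(gray: float, dots: int = 10_000) -> list:
    """Render a gray level in [0, 1] with only black (0) and white (1)
    dots: each dot is white with probability equal to the gray level."""
    return [1 if random.random() < gray else 0 for _ in range(dots)]

# Averaged over an area (which is what the eye does at a distance),
# the two-level pattern reproduces the intended gray level.
for gray in (0.2, 0.5, 0.8):
    print(gray, round(statistics.mean(halftone(gray)), 2))
```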
A number of signal-capture devices perform multiple samples of the input while adding noise to the input signal, just to measure what percentage of the samples gets bumped one bit value higher - allowing a 16-bit ADC to capture sound or voltage data with 17 or 18 bits of resolution. In photography, random noise in the sensor + photon noise + our eyes accomplish the same thing - allowing us to see more tonality than the sensor itself can capture.
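A minimal sketch of that oversampling-with-dither idea, using a hypothetical 3-bit converter and a made-up input level, just to show the principle:

```python
import random
import statistics

random.seed(3)

def adc_sample(true_value: float, dither_sigma: float) -> int:
    """One reading from a 3-bit 'ADC' (codes 0..7), with optional
    Gaussian dither noise added before quantisation."""
    return max(0, min(7, round(random.gauss(true_value, dither_sigma))))

# Constant input of 2.4 LSB, sampled 10,000 times.
no_dither = [adc_sample(2.4, 0.0) for _ in range(10_000)]
dithered = [adc_sample(2.4, 1.0) for _ in range(10_000)]

# Without dither every sample reads 2 - averaging gains nothing.
print(statistics.mean(no_dither))
# With dither, the fraction of samples landing on 3 and above encodes
# the fractional part, so the average recovers ~2.4.
print(round(statistics.mean(dithered), 1))
```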
But anyway - this is irrelevant, because we are debating DR while you constantly jump into debating tonality. They are different concepts. And you aren't happy if you get good tonality from the sensor but not enough dynamic range, so the shadows or highlights end up clipped - i.e. with zero tonality.
Effective DR is more significant at the higher stops of light.
You just adjust your exposure to decide if you want more margin for highlights or shadows - as long as the sensor does have enough DR, that's your freedom of choice. Without the DR, it isn't really even worth debating because then you have a tool that isn't up to the task.
From the information I've seen, the Sony sensors really reign supreme for shadow recovery. If we believe the DXO info that the Canons have a higher saturation point (and I suspect so, if their performance exceeds the Sony at high ISO), then it only confirms that you should ETTR more with a Canon.
Note that the Canon sensors can capture more electrons in each well because each well is larger. But that isn't really important. The Sony sensor has two wells for every one well of the Canon sensor, and two Sony wells capture more electrons than one Canon well. Which is why the Canon ends up losing when you make a same-size print. Any exposure where the highlights just about fill the wells of a Canon or a Sony sensor will give an image where the Sony sensor has several extra stops of shadow detail.
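As a back-of-the-envelope sketch (the well capacities below are invented round numbers for illustration, not measured specs for either camera):

```python
import math

canon_well = 90_000        # electrons: one large pixel (hypothetical)
sony_well = 50_000         # electrons: one smaller pixel (hypothetical)
sony_pair = 2 * sony_well  # the same area holds two wells -> 100,000 e-

# Shot-noise-limited SNR at saturation goes as sqrt(collected electrons),
# so per unit of sensor area the pair of smaller wells comes out ahead:
advantage_db = 20 * math.log10(math.sqrt(sony_pair) / math.sqrt(canon_well))
print(sony_pair > canon_well)        # more electrons collected per area
print(round(advantage_db, 2), "dB")  # ~0.46 dB SNR advantage per area
```

The point isn't the exact figure - it's that per-well capacity alone doesn't decide the comparison once you normalize to the same sensor area and output size.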
While any exposure where you instead expose for the shadows will end up with the Sony sensor having several stops extra for highlight recovery.
In the end, it's up to the photographer to decide whether to expose to the right or to the left - keep the extra stops for shadows or for highlights. The sensor doesn't care which choice the photographer makes. Extra stops are extra stops - it's just a question of adjusting aperture or shutter time. A bit like moving a microphone closer to or further from a sound source to adjust the signal strength that reaches the mixing desk.