Wilt wrote in post #17742395
Dexter has been taking some recent heat for his statements, but I think the one he makes above is on target! After all, pros shooting for any print media (advertising, company brochures, 10-K reports, product literature) have to reduce the DR of a scene into a range that can be offset printed -- which is barely even 6 EV of DR -- and this has always been a limitation, regardless of whether they shot B+W or color transparency. That raises a response from folks: "But if I can capture a wider DR, I can nevertheless compress it to fit my output." Which in turn raises the reaction you hear from others when HDR techniques have been applied to a shot: "It (HDR) looks artificial!"
If we accept that monitors (and especially print media) have limited DR, then we must always be looking for ways to represent an image whose DR exceeds that of the output medium. A simple curves or contrast tweak could be sufficient, or dragging the shadows and highlights sliders in Lightroom. Or we could look into HDR techniques.
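To make the simplest option (a global curve) concrete, here's a toy sketch in Python. It linearly rescales per-pixel EV (log2 luminance) values from an assumed 12 EV scene into a 6 EV output range; the function name and the 12/6 EV figures are illustrative, not from any real raw converter:

```python
def compress_dr(ev_values, scene_range_ev=12.0, output_range_ev=6.0):
    """Toy global tone curve: scale every pixel's EV (log2 luminance)
    by the ratio of output range to scene range. Illustrative only --
    real converters use far more sophisticated curves."""
    scale = output_range_ev / scene_range_ev
    return [ev * scale for ev in ev_values]

# A 12 EV scene squeezed into 6 EV of output:
scene = [0.0, 3.0, 6.0, 9.0, 12.0]
print(compress_dr(scene))  # [0.0, 1.5, 3.0, 4.5, 6.0]
```

Note the trade-off: every tonal relationship is preserved, but global contrast is halved everywhere, which is why a plain global squeeze can look flat.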
When people talk about the "HDR look" they are usually referring (probably without realising it) to the tone mapping algorithm that was used to map a high-DR input image into a lower-DR output. The most commonly used technique (at least in the early days) was one where brightness levels are manipulated in specific areas relative to other local detail (often called "local contrast" or "local adaptation") - in order to fool your eye into thinking there's a larger DR in the image than is actually being displayed. It's a similar visual trick to showing a dark grey square on a black background next to another dark grey square on a white background - the squares look different, but it's your brain playing a trick on you.
When pushed hard, those local contrast techniques result in weird looking halos around objects, and give that hyper-real "grungy" HDR look.
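For the curious, here's a very rough 1-D sketch of that local-adaptation idea (my own toy code, not any real HDR tool's algorithm): each value is scaled against a blurred local average, which is also exactly where the halos come from - pixels near a bright/dark boundary get pulled by their neighbours' average rather than their own brightness:

```python
def box_blur(values, radius):
    """Simple 1-D box blur, used as the local brightness estimate."""
    n = len(values)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def local_tone_map(luminance, radius=2, strength=0.7):
    """Toy local-adaptation tone mapping on a 1-D luminance strip.
    Each value is divided by a power of its local average, boosting
    local contrast. Pushing `strength` toward 1.0 exaggerates the
    pull near edges -- the halo effect described above."""
    local = box_blur(luminance, radius)
    return [v / (l ** strength) if l > 0 else v
            for v, l in zip(luminance, local)]
```

On a uniform strip the output stays uniform; near a hard edge, dark pixels inside the blur radius of a bright region come out darker than dark pixels far from it - that band is the halo.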
But... there are more subtle techniques, so compressing a large DR down to something that can be displayed can be done without it looking overly artificial.
Wilt wrote in post #17742395
That leads me to offer this challenging question, to hear the responses:
So if DR compression (e.g. HDR) results in artificial appearance, and our media (offset printed page, our monitors, even photographic prints) are inherently 'limited DR' media, just why is it so necessary to get any more DR than the 12 EV that can be accomplished today (via Sony sensors)?!

As noted, DR compression doesn't necessarily guarantee an artificial result - there are many ways of skinning a cat.
As to why you'd want to capture more than 12EV: when you observe a scene, you look at different elements, your eyes are adjusting to the light levels in each area, and your brain is putting together an image of the whole. If I'm standing in a dark building and looking at the wall/window frame, then look outside to a bright outdoor scene, I don't "see" a clipped outside any more than I "see" a pitch black interior. If I took a single photo of the whole though, I'd get one of the two (depending on my exposure).
More DR at capture gives a better chance of recreating what we "see". Whether the tone mapping techniques used to create the final image are pleasing/realistic/artificial is down to the intention of the shooter (and their PP skill) and the artistic interpretation of the viewer.
davesrose wrote in post #17742656
Yes, because all digital cameras have sensors limited to 14 bits worth of tonal data. I have still seen instances with the D810 where there is clipping in highlight areas... for certain scenarios, I believe there still needs to be better DR for any camera sensor.
If you have an image that has clipping in highlight areas where you wanted detail, the shot was overexposed. Simple as that.
Remember that highlight recovery is a red herring; a pixel is either overexposed (clipped) or it's not. The issue is that the histograms in our cameras usually give us data based on the JPEG rather than the raw, so detail that is flagged as clipped in-camera may (or may not) be clipped in the raw file. If our cameras would give us raw histograms then there'd be no ambiguity (and what was clipped would really be clipped - no chance of recovery).
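To make the JPEG-vs-raw histogram point concrete, here's a toy model (the curve, contrast factor, and 14-bit ceiling of 16383 are made-up illustrations, not any camera's real picture style) showing how a contrast-boosted 8-bit preview can read as clipped while the raw value is still below its ceiling:

```python
def jpeg_preview_value(raw, raw_max=16383, contrast=1.3):
    """Toy in-camera JPEG preview: normalize the 14-bit raw value,
    apply a contrast boost (a stand-in for picture styles), clamp,
    and quantize to 8 bits. The contrast push can hit 255 well
    before the raw value hits its ceiling."""
    x = min(raw / raw_max, 1.0)
    boosted = min(x * contrast, 1.0)
    return round(boosted * 255)

raw_value = 14000                    # below 16383: NOT clipped in raw
print(jpeg_preview_value(raw_value)) # 255 -> the "blinkies" fire anyway
print(raw_value < 16383)             # True: the raw file still has detail
```

That gap between "preview says 255" and "raw is below 16383" is exactly the headroom that gets marketed as "highlight recovery".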
Circling back - the solution to a photo with clipped highlights is obviously to reduce the exposure until any (desired) highlights are no longer clipped. That of course reduces the exposure of all areas in the image, meaning that some/much of the image sits in lower stops and thus has a worse signal-to-noise ratio.
If you then push (brighten) that darker detail in post and it's visually unacceptable due to noise, that's insufficient DR.
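A quick back-of-the-envelope shows why pushing in post can't recover that ratio. Assuming only photon shot noise (SNR = √signal; read noise ignored for simplicity, and the photon counts are purely illustrative):

```python
import math

def shot_noise_snr(photons):
    """Photon shot noise: SNR = signal / sqrt(signal) = sqrt(signal)."""
    return math.sqrt(photons)

# A shadow region collecting 400 photons at the "correct" exposure,
# vs. the same shot taken 3 stops darker to protect highlights:
base = 400
darker = base / 2 ** 3            # 3-stop reduction -> 50 photons

print(shot_noise_snr(base))       # 20.0
print(shot_noise_snr(darker))     # ~7.07 -- pushing in post multiplies
                                  # signal and noise together, so the
                                  # ratio stays this low
```

The only way to get the shadow SNR back while still holding the highlights is more DR at capture (or multiple exposures, which is what HDR merging does).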
In a nutshell: the more DR we have at capture, the better chance we have of holding detail in the highlights (by taking a darker exposure) whilst still getting acceptable quality in shadow areas.