davesrose wrote in post #17671608
Well, that's the only main point I'm taking exception to (we're only debating a fine point, and agreeing overall). This could be getting back to the times we argued about HDR: I'm coming at this from computer graphics.
Within the digital realm, resolution is the absolute measure of visual sharpness (disregarding OOF areas): a digital image will appear very soft if the perceptual resolution is very low, i.e. a large-scale print at 72 DPI viewed from within a foot will look soft. I appreciate that CoC values and the analogue DOF theorems had to do with perceived sharpness (and conceptually, the science is all still equivalent now). Whether or not you follow CoC values and DOF calculators (which also correlate with crop-factor proportions), it's all relative. Because we are working digitally, perceptual resolution (i.e., printed size vs DPI) is the main factor for sharpness. Almost all of your classic examples of judging DOF involve viewing really large-scale prints (which correlates with my premise of perceptual resolution exceeding the CoC limit). Maybe it is because I deal with other graphics standards... I would rather keep DOF as the focus range of the lens aperture (which has always been the definition of DOF anyway).
IMO, it's best to think of DOF as the lens's focus depth, and the perceived sharpness of a print as your perceptual resolution. Again, I defy you to see a difference in sharpness (at any viewing distance) between a high-MP image printed at 5x7 vs 8x10.
For the definition of DoF I quoted, it does not matter (within reason) how sharp the sharpest parts of the image are (those on or near the actual focus plane); what matters is which parts are perceived as "not as sharp as those sharpest areas". (If none of it is perceived as sharp, then you are simply too close for that image presentation - I'm not discussing that condition.)
There is a sensor-side counterpart to depth of field, and that is depth of focus. You can draw ray diagrams from the near and far points of the depth of field back to planes in front of and behind the sensor/film to see the "depth of focus".
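For anyone who wants numbers to go with the ray diagrams, here's a minimal sketch of the standard thin-lens DoF arithmetic (via the hyperfocal distance). This is my own illustration in Python; the focal length, f-number, CoC criterion, and focus distance are placeholder values:

def dof_limits(f_mm, n_stop, coc_mm, s_mm):
    # Near/far limits of depth of field for a thin lens.
    # f_mm: focal length; n_stop: f-number; s_mm: focus distance from the lens;
    # coc_mm: the CoC criterion, i.e. the perceptual threshold being debated here.
    h = f_mm ** 2 / (n_stop * coc_mm) + f_mm            # hyperfocal distance
    near = s_mm * (h - f_mm) / (h + s_mm - 2 * f_mm)
    far = float("inf") if s_mm >= h else s_mm * (h - f_mm) / (h - s_mm)
    return near, far

# Example: 50mm at f/2.8 focused at 3m with the usual 0.030mm full-frame CoC
print(dof_limits(50, 2.8, 0.030, 3000))   # roughly (2730 mm, 3330 mm)

Note that the CoC criterion is the only place "perceived sharpness" enters: change it and the near/far limits move, which is this whole argument in miniature.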
In computer graphics, unless you tell the renderer that (there is a z dimension and) it should do a computed lens blur, it won't have OOF areas. If you do, then you will create an image that (if you did it well) will resemble an actual optical capture from a real lens at that aperture. From then on, the print size and viewing distance will determine the areas perceived by the viewer as "sharp" and "not as sharp as the rest", and the transitions between those two areas mark the edge of the DoF. How you tell the renderer to compute the blur is going to affect the rendered image in the same way as choosing the aperture in the real-world situation - it will constrain the DoF for a given print size and viewing distance. I'll guess this will be different based on the anticipated viewing experience's angle of view (AoV) - close up to an IMAX screen vs on an iPhone screen, for example.
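To make "how you tell the renderer to compute the blur" concrete, here's a rough sketch of the thin-lens CoC formula a depth-of-field pass might use to size its blur kernel per sample. The function names and the pixel conversion are my illustration, not any particular renderer's API:

def coc_diameter_mm(z_mm, s_mm, f_mm, n_stop):
    # Sensor-side blur-circle diameter for a point at depth z (thin lens).
    # s_mm: focus distance; f_mm: focal length; aperture diameter = f/N.
    aperture = f_mm / n_stop
    return aperture * f_mm * abs(z_mm - s_mm) / (z_mm * (s_mm - f_mm))

def coc_pixels(z_mm, s_mm, f_mm, n_stop, sensor_w_mm=36.0, image_w_px=8192):
    # Convert to a blur size in rendered pixels for a given output width.
    return coc_diameter_mm(z_mm, s_mm, f_mm, n_stop) * image_w_px / sensor_w_mm

Open up the simulated aperture (lower n_stop) and the kernel grows everywhere off the focus plane, which is exactly what constrains the DoF for a given print size and viewing distance.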
Rather than your experiment, I'll propose a slightly different one with no need to print (better for internet discussions): a retina display set at a fixed distance behind eyepieces, so that viewing distance cannot be changed and the display has more resolution than the eye can perceive. On the screen is part of a very high-resolution image of (yes) a slanted ruler taken at a relatively wide aperture (where conventional DoF calculators say only a small part of the scale will actually be "sharp"). I set the on-screen magnification to value X and ask a range of viewers to tell me where the distances (the numbers on the scale) start getting soft. I then change it to Y and repeat, then Z and repeat, etc. I then plot the reported distances against magnification and expect to find a good correlation: as I zoom in on the image (cf. increasing print size for a fixed viewing distance), the range of distances on the ruler people report as sharp will go down. (It's easy to approximate this experiment with any image and a good imaging program that can scale an image down from 1:1 image pixel:display pixel.) One of these on-screen magnifications will approximate the 5x7 and one will approximate the 8x10, and with enough observations the graph will show the DoFs are different.
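Here's the trend I'd expect that plot to show, simulated with the dof_limits() sketch above. At a fixed viewing geometry, the blur the eye will tolerate on the display is fixed, so the tolerated sensor-side CoC shrinks as magnification rises, and the reported DoF narrows with it (the 0.2mm acuity threshold at the display and the magnification values are assumptions for illustration):

eye_limit_on_display_mm = 0.2            # assumed acuity limit at the display
for mag in (2, 4, 8, 16):                # display mm per sensor mm
    coc = eye_limit_on_display_mm / mag  # tolerated blur back at the sensor
    near, far = dof_limits(50, 2.8, coc, 3000)
    print(f"mag {mag:2d}x: DoF = {far - near:6.0f} mm")

The computed DoF shrinks monotonically with magnification, which is the same relationship as printing one capture at 5x7 versus 8x10.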
On the other hand, if you didn't get the renderer to do the lens blur, then the whole scene will be rendered as sharp (at your desired resolution), the whole ruler will be legible, there will be no transitions from "sharp" to "not so sharp", and so the whole ruler will be within the DoF. We can do that in the real world by stopping down (and/or using lens movements) - landscapers strive for this condition - to where the CoC and the pixel size are close, so everything appears sharp all the way to 100% on screen. Then the scene is all within the DoF and we take a 50MP image. Now, I'll agree that whether this is printed at 5x7 or 8x10 I won't be able to distinguish which is sharper, because my visual acuity is the limiting factor. Go up to 20x30 and 40x60 and things might change in perceived image sharpness, but not in DoF.
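That "stopped down until everything is sharp" condition has a simple closed form too: set the CoC criterion near the pixel pitch and focus at the hyperfocal distance, and the DoF runs from half that distance to infinity. A quick sketch with illustrative values (roughly 4.1-micron pixels for a 50MP full-frame sensor; the lens and f-stop are placeholders):

pixel_pitch_mm = 0.0041          # ~4.1 micron pixels, 50MP full frame
f_mm, n_stop = 24, 11            # wide lens, stopped well down
h = f_mm ** 2 / (n_stop * pixel_pitch_mm) + f_mm
print(f"hyperfocal ~{h / 1000:.1f} m; sharp from ~{h / 2000:.1f} m to infinity")

With everything inside the DoF there are no sharp-to-soft transitions left for print size to reveal, which is why the 5x7 vs 8x10 comparison can't separate them.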