Capn Jack wrote in post #18769119
Imaging is imaging, whether through the lens of a microscope, telescope, or a camera. In microscopy, we use "numerical aperture" instead of f-stop. A larger NA provides more resolution and light, but a smaller depth of field. The reference I provided didn't mention NA or f/number.
Then it is not directly useful to the conversation. At least for subject distances that are multiples of the focal length or longer, the f-ratio is a useful proxy for the angles of incidence; the actual aperture size is not.
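To put a number on that proxy, here is a minimal sketch (assuming an ideal thin lens focused near infinity, where the marginal-ray half-angle follows tan(theta) = 1/(2N) for f-number N; real lenses with different exit-pupil positions will deviate from this):

    import math

    # Marginal-ray half-angle at the sensor for an ideal thin lens focused
    # near infinity: tan(theta) = 1 / (2 * N), with N the f-number.
    for n in (1.2, 1.4, 2.0, 2.8, 4.0):
        theta = math.degrees(math.atan(1.0 / (2.0 * n)))
        print(f"f/{n}: marginal ray about {theta:.1f} degrees off-axis")

At f/1.2 the marginal rays arrive around 23 degrees off-axis, versus about 7 degrees at f/4, which is why microlens acceptance angles only start to matter at low f-ratios.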
I think you are asserting there is a mismatch between the acceptance angle of the micro-lenses on the sensor and the light coming in through the lens when the aperture is large (see ray diagram below)?
When the aperture is wide open, more of the light hits the sensor at a larger angle, and the microlenses can't accept that light?
Again, it seems that there is an assertion that the angle of some of the light from the camera lens exceeds the acceptance angle of the sensor micro-lenses?
You've said that three times now in this post. Is that to make up for your taking so many days to get what I was talking about?
I'm pretty sure they try to build that into the design of the camera sensor, although there are limits. You may be interested in this link from my employer, though from a different division than the one I work in:
http://blog.teledynedalsa.com …le-on-optical-acceptance/
Please note they work down to f/1.2.
We can directly examine the FSI digital cameras being discussed, which have easily measurable losses at low f-ratios. Anyone can test this by unscrewing a wide-open f/1.2 or f/1.4 lens in all-manual mode, or by examining the black noise levels in RAWs to see how manufacturers try to sweep these losses under the carpet. That is far more relevant than optimistic theory.
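For the RAW black-level check, a minimal sketch (assuming the rawpy and numpy packages; the filenames are hypothetical, and the idea is that if firmware multiplies the data to hide a low-f-ratio loss, the black-frame noise statistics scale along with it):

    import numpy as np
    import rawpy  # third-party LibRaw wrapper

    # Compare black-frame statistics between shots at different f-ratios
    # (same ISO and shutter, no light reaching the sensor). If the maker
    # scales the RAW data up to hide a low-f-ratio loss, the read-noise
    # standard deviation is scaled up with it.
    def black_stats(path):
        with rawpy.imread(path) as raw:
            data = raw.raw_image.astype(np.int32)
            black = np.mean(raw.black_level_per_channel)
            return float(np.mean(data) - black), float(np.std(data))

    for path in ("black_f1p4.CR2", "black_f2p8.CR2"):  # hypothetical files
        mean, sd = black_stats(path)
        print(f"{path}: mean above black {mean:.2f}, std dev {sd:.2f}")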
I note that you use the term "quantum efficiency" again, when I suspect you really mean "optical efficiency". "Quantum efficiency" is a function of the material used to make the sensor and the wavelength detected. "Optical efficiency" is defined by the amount of light hitting the sensor, which depends on the geometry of the photosite (pixel, in this case) and its location with respect to the imaging optics.
There aren't enough terms, perhaps, to break down efficiency into its distinct components. I thought I wrote "effective" efficiency; at least, that is what I meant. The thing is, exposure is the light heading toward the sensor plane from the lens, and QE is generally thought of as the percentage of the photons in that exposure which become charge in the photosites. Most quoted QEs are for a green wavelength in the green-filtered pixels, which hides the actual massive losses of light, especially red light, of which about 92% is typically lost to the color filters. The microlens/photosite loss occurs inside the sensor sandwich, which is why I feel it can be referred to as a loss of efficiency, although it is not a fixed loss. It is part of the exposure-to-capture ratio.
In fact, I also referred to it as a geometrical/optical effect.
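To make that accounting concrete, here is a sketch with assumed, illustrative numbers (none of these are measurements of any specific sensor; the ~92% red-filter loss is the figure cited above):

    # "Effective" efficiency as a product of the stages between exposure
    # and collected charge. All values are illustrative assumptions.
    silicon_qe    = 0.55  # fraction of photons reaching the photodiode that become charge
    cfa_red_pass  = 0.08  # ~92% of red light lost to the color filter array
    microlens_eff = 0.90  # geometric/optical microlens-photosite coupling;
                          # the claim here is that this drops at low f-ratios

    effective = silicon_qe * cfa_red_pass * microlens_eff
    print(f"effective red-light efficiency: {effective:.3f}")  # about 0.04

The point of the multiplication is that a quoted green-pixel QE says nothing about the exposure-to-capture ratio once the filter and microlens losses are folded in.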
See the actual cameras that we shoot; I told you how. Please stop bringing other equipment, with different design concerns, into the conversation as justification for your initial denial. I mentioned a phenomenon in the digital cameras discussed in these forums, and you're trying to use special equipment to show that it is small, if it even exists at all. It can be significant.
The other thing one should observe, if more of the light exceeds the acceptance angle of the pixels at higher f/numbers, is vignetting.
I assume you actually meant "lower f/numbers".
If they are "cooking" the numbers, you should see more "noise" at the outer sides of the image as the manufacturer compensates for light hitting those pixels at larger angles than at the middle of the sensor. "Read noise" is only one aspect of what is seen; the sensor itself has thermal noise too.
I've measured the phenomenon in cameras that don't correct RAWs for darker corners. Canon so far isn't cooking the RAWs in that way, at least not in my Canons: they have the same standard deviation and the same histogram spikes and gaps everywhere in the RAW image. The only noise differences from center to corner are those present when there is low-frequency horizontal banding noise, but that doesn't even budge the standard deviation of the read noise; it mainly shifts the local mean black levels.
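The center-versus-corner check is easy to script; a sketch (again assuming rawpy/numpy and a hypothetical filename; position-dependent scaling would show up as differing standard deviations, or as empty codes in the integer histogram where digital multiplication leaves gaps):

    import numpy as np
    import rawpy  # third-party LibRaw wrapper

    # Compare black-frame statistics in a center patch vs a corner patch.
    with rawpy.imread("blackframe.CR2") as raw:  # hypothetical file
        img = raw.raw_image.astype(np.int32)
        h, w = img.shape
        patches = {
            "center": img[h//2-256:h//2+256, w//2-256:w//2+256],
            "corner": img[:512, :512],
        }
        for name, patch in patches.items():
            counts = np.bincount(patch.ravel())
            lo, hi = int(patch.min()), int(patch.max())
            gaps = int(np.sum(counts[lo:hi+1] == 0))
            print(f"{name}: std {patch.std():.2f}, empty codes {gaps}")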
Thermal noise is firmware-scaled just like every other noise in a blackframe; the firmware cannot separate them. Thermal noise is insignificant anyway with short exposures at room temperature, especially in a DSLR in OVF mode, where the sensor is not being constantly read out.