Wilt wrote in post #17867543
The reminder that needs to be refreshed in everyone's minds...
The LENS can only deliver a finite amount of detail to the focal plane. So if you 'crop' (digital, or what had been done with film in the darkroom for almost 6 generations of photographers) you throw away lens resolution, and it does not matter how many pixels on sensor per square millimeter!
This is true; the lens can only deliver so much resolution to the sensor. Lens resolution is usually measured in line pairs per unit length, although since the introduction of digital it has tended to be normalised per image height, which is a useless measure for this conversation.

To convert the analogue signal that comes from the lens into a digital one, you need to record two digital samples for each line pair. If you record fewer than two samples per line pair, you end up recording a distorted signal at a much lower resolution. This is set by the Nyquist-Shannon sampling theorem (it is not an unproven theory; it is mathematically proven, and is what a lot of people would consider a law). So you have to have a minimum of two samples per line pair; more are fine, but are wasted if the signal never reaches a high enough frequency.

From what I have been able to deduce (most tests normalise to picture height, based on testing with a digital sensor), the very best lenses for DSLRs seem to be capable of providing around 120 LP/mm. To record that signal correctly you need 240 image sensels per mm, so a 22.5×15mm sensor would have to record 5400×3600 pixels, i.e. 19.44 MP from a crop camera or 49.77 MP from a 35mm frame.

If the sensor has less resolution than this, you have to add a filter to reduce the maximum frequency that the lens can transmit; these are known as Optical Low Pass Filters (OLPF) or Anti-Aliasing (AA) filters. The interference caused by aliasing is impossible to filter out AFTER the image has been digitised, because it is impossible to tell a low-frequency signal caused by aliasing from a wanted signal at the same frequency.
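If you want to sanity-check those figures, here is a minimal Python sketch of the arithmetic. The 120 LP/mm lens figure is my own estimate from above, not a measured constant, and the helper names (sensels_per_mm, megapixels) are just mine for illustration:

```python
# Back-of-the-envelope check of the sampling arithmetic above.

def sensels_per_mm(lens_lp_per_mm: float, samples_per_lp: float = 2.0) -> float:
    """Minimum sensel density needed to sample the lens without aliasing.

    Nyquist-Shannon: at least two samples per line pair.
    """
    return lens_lp_per_mm * samples_per_lp

def megapixels(width_mm: float, height_mm: float, density: float) -> float:
    """Total megapixels for a sensor of the given size and sensel density."""
    return (width_mm * density) * (height_mm * density) / 1e6

density = sensels_per_mm(120.0)  # -> 240 sensels/mm for a 120 LP/mm lens

print(f"{density:.0f} sensels/mm")
print(f"APS-C 22.5x15mm: {megapixels(22.5, 15, density):.2f} MP")  # ~19.44
print(f"35mm  36x24mm:   {megapixels(36, 24, density):.2f} MP")    # ~49.77
```

Anything below that density means the OLPF has to throw lens resolution away before the sensor ever sees it.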
Back in the days of analogue film, what you wanted was for the lens to deliver more resolution than the film could record, because the film was the limiting factor, and having too much lens resolution had no drawbacks except, possibly, the cost of the lenses. With digital, though, we need the sensor to record just a bit more resolution than the lens can actually deliver, because now the sensor is the limiting factor, and exceeding the limit of the sensor causes serious problems that cannot be undone.
It seems as if current sensors are just reaching the optical limits of the current best-of-breed lenses, but they still need to improve in resolution quite a bit, as the above figures are for a monochromatic sensor. Most digital cameras use a Bayer colour filter array, with alternating red/green and green/blue rows of filters over the sensels. This effectively reduces the colour resolution: to get full colour we have to use a square group of four sensels in an RGGB layout, so full colour information is really recorded at only half of the sensor's greyscale resolution. Demosaicing tricks can improve things, but colour resolution is still reduced by about 25%. To record the full resolution of the best lenses in colour would require a doubling of the monochromatic sensel density, to about 480 sensels/mm, requiring a roughly 200MP 35mm sensor.
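Extending the same back-of-the-envelope sketch to a Bayer CFA (the 2× linear density factor is the rule of thumb used above, not a derived constant) shows why the pixel count quadruples rather than doubles:

```python
# Doubling the linear sensel density for full colour quadruples the pixel count.

def megapixels(width_mm: float, height_mm: float, density: float) -> float:
    return (width_mm * density) * (height_mm * density) / 1e6

mono_density = 2 * 120            # 240 sensels/mm: Nyquist for a 120 LP/mm lens
bayer_density = 2 * mono_density  # 480 sensels/mm: full colour from RGGB quads

print(f"Monochrome 35mm: {megapixels(36, 24, mono_density):.1f} MP")   # ~49.8
print(f"Bayer 35mm:      {megapixels(36, 24, bayer_density):.1f} MP")  # ~199.1
```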
It is only when the sensor resolution (in sensels/mm) exceeds 4× the maximum resolution of the lens (in LP/mm, for a Bayer CFA) that adding more resolution to the sensor will not result in a better quality final digital image. With the exception of the 5DS/5DSR, the current Canon FF sensors are only capable of recording around 76 LP/mm from a lens, compared to approximately 120 LP/mm for an APS-C sensor. Is it surprising, then, that most lenses seem to max out at about 75 LP/mm when mounted on a FF camera? You have simply hit the sensor's resolution limit. Since we can only see about 6 LP/mm at a 12" viewing distance, current best lens designs, which provide resolution that just about matches the Nyquist limit of our best sensors at 120 LP/mm, allow for 18"×12" images without adding any digital interpolation or reducing the maximum image resolution below that of the average human observer. We can actually reduce the output resolution to about one third of that, around 2 LP/mm (100 PPI), and the image still looks generally very good; if it didn't, we would all be clamouring for 15" 4K displays for editing. Given that, it is possible to take a 5×3.3mm crop from a current APS-C sensor and get a usable 12"×8" image. That is a 60× enlargement, which would give an approximately 85"×57" image at the same resolution from the full frame of the 5DS-series cameras, or 53"×35" from a 7DII.
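The enlargement arithmetic is simple enough to sketch in a few lines. The 6 LP/mm acuity and 120 LP/mm sensor figures are the ones used above, and max_print_inches is my own name for the helper:

```python
# Maximum enlargement = (detail recorded per mm) / (detail needed per mm at the print).

MM_PER_INCH = 25.4

def max_print_inches(w_mm: float, h_mm: float,
                     sensor_lp_mm: float, output_lp_mm: float = 6.0):
    """Largest print from a given sensor area at a given output standard."""
    enlargement = sensor_lp_mm / output_lp_mm
    return (w_mm * enlargement / MM_PER_INCH, h_mm * enlargement / MM_PER_INCH)

# APS-C at the 120 LP/mm limit, eye-limited 6 LP/mm output -> ~18x12"
print(max_print_inches(22.5, 15, 120))                   # (17.7, 11.8)

# Relaxing to ~2 LP/mm (about 100 PPI) gives a 60x enlargement:
print(max_print_inches(5, 3.3, 120, output_lp_mm=2))     # ~12x8" from the crop
print(max_print_inches(36, 24, 120, output_lp_mm=2))     # ~85x57" full 5DS frame
print(max_print_inches(22.4, 15, 120, output_lp_mm=2))   # ~53x35" from a 7DII
```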
Adding extra resolution to our highest-resolution sensors, even without any improvement in lens resolution, will give much better colour resolution in images enlarged to this size, so it is still very much worth doing. Wanting a lens to out-resolve a digital sensor is the biggest misconception in digital photography after the crop factor and DoF.