Most statements about megapixels understate their usefulness. For example, it is often stated that when you're only printing an 8x12, it's impossible to tell the difference between an 8 MP camera and a 15 MP camera. But in many common circumstances, the difference is striking.
That is not to say that high MP is necessarily a requirement for a good photo. I have enjoyed beautiful 20x30 prints that were made with less than 2 MP. Most film theaters only achieve resolution equivalent to 0.9 MP (some digital ones are up to 2.4 MP), yet people sit close to the 50-foot screens and enjoy the cinematography anyway.
At what point does additional resolution contribute no discernible improvement to the displayed photograph? The answer depends on many factors:
* Display size
Given the same resolution per area, a larger display can benefit from more megapixels than a small display (e.g. 20x30 vs 4x6).
* Display resolution
Given the same display size, a high resolution display can benefit from more megapixels than a low-resolution display (e.g. 300 ppi vs 72 ppi).
* Cropping for aspect ratio
An 8x10 at 360 ppi is 10.37 MP. But a DSLR's 2:3 aspect ratio differs from 8x10's 4:5, so after you crop a 10.37 MP DSLR image to 8x10, only 8.64 MP are left. A 12.44 MP DSLR, cropped to 8x10, yields a true 360 ppi file.
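A quick sanity check on that arithmetic (the function name is mine, not from any library):

```python
def mp_after_aspect_crop(mp, sensor_ratio=3/2, print_ratio=5/4):
    """Megapixels left after cropping a sensor_ratio image to print_ratio.

    Ratios are long side / short side. The short side is kept and the long
    side is trimmed, so the retained fraction is print_ratio / sensor_ratio.
    """
    return mp * (print_ratio / sensor_ratio)

print(round(mp_after_aspect_crop(10.37), 2))   # 2:3 cropped to 4:5 leaves ~8.64 MP
print(round(mp_after_aspect_crop(12.44), 2))   # ~10.37 MP, i.e. an 8x10 @ 360 ppi
```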
* Cropping for viewfinder inaccuracy
Most photographers cannot afford a DSLR viewfinder that is 100% accurate. And any DSLR can drift out of tolerance or become miscalibrated. The image seen through the viewfinder will be slightly different from the raw file. The difference may contribute a few extra percent to the amount an image is cropped. Cropping just 3% off of two sides of the photo turns a 6 MP image into 5.6 MP and a 21 MP image into 19.8 MP.
* Cropping for composition
This is of course the most well-known benefit of more megapixels, but I think many people don't realize just how much resolution is lost by cropping even small amounts. For example, cropping just 10% off each side cuts 15 MP down to 9.6 MP. I always strive to get the composition just right before I snap the shutter, but I still find myself cropping by more than 10% on a routine basis. I often change my mind after the photo enters the darkroom, and I try a variety of crops.
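The crop percentages compound per dimension, which is why the megapixel loss is larger than it first appears. A small sketch of the two examples above:

```python
def mp_after_crop(mp, frac_per_side):
    """Megapixels left after trimming frac_per_side off all four edges."""
    keep = 1 - 2 * frac_per_side      # linear fraction kept per dimension
    return mp * keep * keep

print(round(mp_after_crop(15, 0.10), 1))  # 10% off each side: 15 MP -> 9.6 MP
print(round(6 * 0.97 ** 2, 1))            # 3% off two sides:  6 MP  -> 5.6 MP
```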
* Increased contrast from the OLPF
The OLPF (optical low pass filter) reduces aliasing artifacts. One unfortunate side effect is that it also reduces contrast. Ideally, the megapixel count will be so high that the contrast-reducing effects of the OLPF are completely gone from the final display.
Contrast is one of the most important and striking factors in image quality. If you compare the 8x12 of an 8 MP camera vs an 8x12 of a 15 MP camera, the increase in contrast from the OLPF is easily seen.
[The technical reason for this is that smaller pixels place Nyquist at a higher frequency, and the MTF curve of the OLPF is designed relative to Nyquist, so even if it has the same curve, it will affect higher spatial frequencies than what is seen in the final display. Generally, most of the contrast-reducing effect of the OLPF can be negated by increasing spatial resolution by about 30%.]
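Note that the ~30% figure is a linear resolution increase; megapixels scale with area, so the corresponding pixel-count factor is 1.3 squared:

```python
OLPF_LINEAR_MARGIN = 1.30  # ~30% more linear resolution, per the text

def mp_to_beat_olpf(mp):
    """Megapixels needed so the OLPF's contrast loss falls beyond the display."""
    return mp * OLPF_LINEAR_MARGIN ** 2   # area scales as the square

print(round(mp_to_beat_olpf(12.44), 1))   # 12.44 MP -> ~21.0 MP
```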
* Reduced aliasing artifacts
The OLPF, or Anti-Alias filter, usually blurs the image enough to reduce most (but not all) luma aliasing; however, it does not blur the image enough to reduce chroma aliasing. This can result in artifacts.
As the number of megapixels increases, aliasing artifacts move to higher and higher spatial frequencies. That is, they get smaller and smaller for a given print size. At some point, they mostly cease to be visible. That is another benefit of higher megapixels.
Aliasing artifacts can be explained with a metaphor. In real life, when you pour two liters of water into a one liter container, water spills out and makes a mess. Camera design is different: when you pour two liters of water into a one liter container, the water folds back on itself and corrupts the entire container. The amount of water is the level of detail (spatial frequency), and the volume of the container is the number of megapixels in the camera. Aliasing is the corruption. Anti-aliasing filters reduce detail down to a level that can fit within the pixel resolution.
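The "folding back" can be demonstrated in one dimension with a few lines of NumPy (the frequencies here are illustrative): detail at 70 cycles, sampled at 100 samples per unit (Nyquist = 50), produces exactly the same samples as false detail at 30 cycles.

```python
import numpy as np

fs = 100                     # samples per unit length ("pixels")
n = np.arange(32)            # sample positions
f_real = 70                  # real detail, above Nyquist (fs/2 = 50)
f_alias = fs - f_real        # where it folds back to: 30

detail = np.cos(2 * np.pi * f_real * n / fs)
alias = np.cos(2 * np.pi * f_alias * n / fs)

# The sensor cannot tell the two apart: 70-cycle detail is
# recorded as false 30-cycle detail.
print(np.allclose(detail, alias))  # True
```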
Aliases are a certain kind of image artifact; they can be described as jaggies, stair-stepping, unwanted sparkling, "snap to grid", wavy lines, bands, patterns, fringing, popping, strobing, noise, or false detail. Some photographers desire aliasing artifacts and describe them with positive terms such as "crunchiness" or "sharpness". Other photographers perceive the artifacts as an unnatural, unwelcome "digital" look. The only aliasing artifact that is universally disliked is moiré.
Here is an image that demonstrates aliasing artifacts, created by John Sheehy:
One can see how the anti-aliased images are more blurry, with no sharp contrast from one pixel to another. The non-AA images, on the other hand, have more contrast at the pixel level (Nyquist).
Here is an example of moiré, which is the worst kind of aliasing artifact:
It comes from this web site, which has a good explanation of aliasing:
Aliasing is also described in this SD9 review:
* Fewer de-Bayer artifacts
De-Bayer artifacts such as mazing occur with some combinations of demosaic algorithms and images, and are usually exacerbated by aliasing. You can see examples of it here:
As the number of megapixels increases, such artifacts become less and less of a problem.
* Color resolution
Bayer cameras sample chroma (color) at half the resolution of luma. This can be clearly seen in charts and test shots, but does not have a noticeable effect in most real life images. Some images, however, will have high frequency color detail that can be noticed with full chroma sampling. To get that level of color resolution with Bayer requires four times the megapixels.
For example, if you think 2.16 MP is just right for a 4x6, then to get full color resolution in the 4x6 (both red and blue) would require 8.64 MP. Foveon proponents think getting full color resolution is very important.
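The 4x figure follows directly from the half-resolution chroma sampling:

```python
# A 4x6 print at 300 ppi -- the "just right" luma resolution from the text.
luma_mp = (4 * 300) * (6 * 300) / 1e6   # 1200 x 1800 = 2.16 MP

# Bayer samples red and blue at half the linear resolution of luma,
# so matching full chroma sampling takes 2 x 2 = 4x the pixels.
full_chroma_mp = luma_mp * 4            # 8.64 MP

print(luma_mp, full_chroma_mp)
```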
* Horizon correction (rotation)
We strive to get the horizon level at the time of the shot, but sometimes an image still requires this correction, especially in fast-paced shooting. Even a slight rotation blurs the image or adds a lot of artifacts. Having more megapixels to start with allows this correction to occur without any negative effect on the image quality of the final display.
For example, say you have two 12 MP images: one that is level already, and one that was slightly off and then corrected in post. If you print both at 4x6, they may look the same. But print both at 12x18 and the one that was corrected in post will look worse (softer or more artifacts) than the one that was already level.
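How many pixels does leveling actually cost? A sketch using the standard maximum-area inscribed-rectangle formula (a well-known geometric result, not something from the article; the crop's aspect ratio may shift very slightly):

```python
import math

def rotated_crop_fraction(w, h, degrees):
    """Fraction of pixels kept after rotating a w x h image and cropping to
    the largest axis-aligned rectangle with no blank corners."""
    a = math.radians(abs(degrees))
    sin_a, cos_a = math.sin(a), math.cos(a)
    long_side, short_side = max(w, h), min(w, h)
    if short_side <= 2 * sin_a * cos_a * long_side:
        # Half-constrained case (large angles): crop touches two corners.
        x = 0.5 * short_side
        wr, hr = (x / sin_a, x / cos_a) if w >= h else (x / cos_a, x / sin_a)
    else:
        # Fully-constrained case (small angles): crop touches all four sides.
        cos_2a = cos_a * cos_a - sin_a * sin_a
        wr = (w * cos_a - h * sin_a) / cos_2a
        hr = (h * cos_a - w * sin_a) / cos_2a
    return (wr * hr) / (w * h)

# Leveling a 2-degree tilt on a 12 MP 3:2 frame discards roughly 7% of the pixels.
print(round(rotated_crop_fraction(4242, 2828, 2.0), 3))  # ~0.929
```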
* Aberration correction: CA, PF, distortion, deconvolution
There are many types of lens aberrations that can be corrected in post processing:
- Barrel distortion
- Pincushion distortion
- Wavy-line distortion
- Lateral Chromatic Aberration
- Purple Fringing
- Other aberrations that are deconvolved with a defined PSF
* "Shift lens" correction, changing projection, fisheye
- It's possible to reproduce the same effect as costly "shift" lenses in software, removing the keystoning from the image.
- Another popular technique is to fix distorted subjects near the edge of the frame, caused by the normal rectilinear projection of a lens (volume anamorphosis correction).
- It's also possible to change to and from circular fisheye distortion.
In some circumstances, more megapixels will encounter diminishing returns due to diffraction: there will be a loss of contrast. When the noise is not too high, normal sharpening techniques can help restore that contrast, but a specialized algorithm such as Richardson-Lucy deconvolution does much better.
This technique gives higher megapixels a great advantage.
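For the curious, here is a minimal one-dimensional Richardson-Lucy sketch in NumPy. It is illustrative only (real raw converters work in 2-D with measured point spread functions), but it shows the core iteration: re-blur the estimate, compare to the observed image, and correct.

```python
import numpy as np

def richardson_lucy_1d(blurred, psf, iterations=50):
    """Minimal 1-D Richardson-Lucy deconvolution (illustrative sketch)."""
    est = np.full_like(blurred, 0.5)      # flat non-negative starting estimate
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        reblurred = np.convolve(est, psf, mode="same")
        ratio = blurred / (reblurred + 1e-12)   # avoid division by zero
        est = est * np.convolve(ratio, psf_flipped, mode="same")
    return est

# A diffraction-like blur smears a sharp edge; RL largely restores it.
psf = np.array([0.25, 0.5, 0.25])
sharp = np.array([0., 0., 0., 1., 1., 1., 1., 0., 0., 0.])
blurred = np.convolve(sharp, psf, mode="same")
restored = richardson_lucy_1d(blurred, psf)
print(np.abs(restored - sharp).sum() < np.abs(blurred - sharp).sum())  # True
```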
* Faster demosaic algorithms
The reason why current de-Bayer interpolation algorithms are so slow is that they attempt to extract the maximum possible amount of detail through time-consuming context analysis and other techniques. But if the resolution of the camera is sufficiently higher than what is needed for post-processing and display, a different demosaic method can be used, particularly at regular factors (2x linearly).
It's actually *faster* to demosaic 24 MP into a 6 MP RGB image than to demosaic a native 6 MP file, and the quality is higher. When the camera's resolution sufficiently exceeds what's needed, those types of algorithms can be used, resulting in faster post-processing times.
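One way such a 2x method can work (my sketch, not a description of any particular converter): each 2x2 RGGB cell already contains one red, two green, and one blue sample, so a half-resolution RGB pixel needs no interpolation at all.

```python
import numpy as np

def halfres_demosaic(bayer):
    """Demosaic an RGGB mosaic to half resolution: each 2x2 cell becomes
    one RGB pixel, with no interpolation and no context analysis."""
    r = bayer[0::2, 0::2]
    g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0  # average the two greens
    b = bayer[1::2, 1::2]
    return np.dstack([r, g, b])

# A uniform gray scene reconstructs exactly, at half the linear resolution.
mosaic = np.tile(np.array([[0.5, 0.5], [0.5, 0.5]]), (3, 3))  # 6x6 mosaic
rgb = halfres_demosaic(mosaic)
print(rgb.shape)  # (3, 3, 3)
```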
* Film-like storage formats
Most software for storing digital images is not very advanced. Our cameras rely on using a relatively small number of pixels with very high bit depth per pixel (e.g. 14-bits). Film is made of billions of 1-bit "pixels". Individually, they don't look like much, but taken together they store a lot of good image data. In the same way, higher resolutions allow for much more advanced compression systems. REDCODE is one current example: it stores 9.5 MP raw files in just 2 MB.
* Future Proofing
There is also the benefit of having a high resolution original to come back to years later, like a film negative. Whatever size, crop, or post-processing you happen to be using today, in the future you may want to revisit the photograph and do something different.
* Printer input resolution
The native input resolution of printers varies, e.g. 300 ppi, 360 ppi, or 720 ppi. The actual resolution (in line widths per mm) after the image is printed to paper is often lower than the theoretical maximum of the native input resolution.
* Paper resolution
Different types of paper affect resolution as well. On some papers, the ink spreads over a wider area, and so has lower resolution.
* Viewing distance, environment, and visual acuity
Viewing a print or display from across the room reduces the benefit of higher resolutions compared to close inspection from just a few inches. A viewer who is not wearing their glasses will not get the same benefit from higher megapixels as one with 20/20 vision. Acuity also tends to get worse in dim viewing environments.
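Viewing distance and acuity can be turned into a required print resolution with basic trigonometry. Assuming 20/20 vision resolves about 1 arcminute (a common rule of thumb, not a figure from this article):

```python
import math

def required_ppi(viewing_distance_inches, acuity_arcmin=1.0):
    """Print resolution at which a viewer resolving acuity_arcmin
    can no longer see individual pixels."""
    dot_pitch = viewing_distance_inches * math.tan(math.radians(acuity_arcmin / 60))
    return 1.0 / dot_pitch

print(round(required_ppi(10)))    # close inspection: ~344 ppi
print(round(required_ppi(120)))   # across the room:  ~29 ppi
```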
* Diminishing returns
Many factors can cause diminishing returns: camera shake, subject motion, lens aberrations, diffraction, etc. The full benefit of additional pixels will not be realized unless these are avoided. It's also possible that smaller pixels will have too much noise, although I think that most views about that are exaggerated:
Small pixel sensors do not have worse performance
To help illustrate the concept, let's examine a few scenarios and see how many megapixels are needed to reach the ideal.
Most people assume that anything over 250 ppi makes no difference in a print, so any camera over 5 MP (an 8x10 at 250 ppi) is overkill. It isn't.
* An 8x10 print example
Here are the conditions for this example:
- A quality paper/printer combination that can achieve a true 360ppi resolution.
- A viewer whose vision is good enough to actually see that 360 ppi.
- 100% viewfinder, so what he saw was exactly what was recorded to file.
- Ultra precise composition, so that not even 1% had to be cropped.
- The photographer doesn't mind aliasing artifacts or de-Bayer artifacts.
- Full color resolution is not needed: there's no small color details.
- Very careful horizon leveling, so no rotation in post was required.
- No aberration correction needed because the $5,000 lens is completely perfect.
- No "shift lens" effect needed for this shot.
- F-number was wide enough that diffraction could not be measured.
- The photographer has a brand new computer, so speed/size benefits aren't needed.
- Future proofing is not something the photographer cares about: pass.
- A tripod was used to avoid diminishing returns.
- [10.4 MP] 8x10 @ 360ppi
- [12.4 MP] Cropping for aspect ratio
- [21.0 MP] 30% increase in linear resolution to remove OLPF blur
What if some of the conditions were slightly different?
- 95% viewfinder, so what he saw was off by 3% from what was recorded to file.
- Good composition, but the shot would benefit greatly from a 10% crop on all sides.
- Horizon was slightly askew, a small rotation in post is required.
- The lens is only $2,000, and has many aberrations that benefit from software correction.
- [10.4 MP] 8x10 @ 360ppi
- [12.4 MP] Cropping for aspect ratio
- [13.2 MP] 3% crop for the 95% viewfinder that didn't match the file.
- [19.0 MP] 10% crop on all sides for a better composition.
- [32.1 MP] 30% increase in linear resolution to remove OLPF blur.
- [42.8 MP] Sufficient resolution for ideal rotation/aberration correction and OLPF contrast
- [76.0 MP] Resolution needed for full color resolution (to match Foveon).
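The cascade above can be reproduced step by step. The interpretation of the crop factors (each percentage compounding as a squared linear factor, plus a ~4/3 margin for rotation/aberration correction) is my reading of the list's arithmetic:

```python
mp = 8 * 360 * 10 * 360 / 1e6    # 8x10 @ 360 ppi:                 10.4 MP
mp *= 6 / 5                      # aspect-ratio crop (2:3 -> 4:5):  12.4 MP
mp *= 1.03 ** 2                  # 3% viewfinder crop:              13.2 MP
mp *= 1.20 ** 2                  # 10% composition crop:            19.0 MP
after_crops = mp
mp *= 1.30 ** 2                  # 30% linear OLPF margin:          32.1 MP
mp *= 4 / 3                      # rotation/aberration margin:      42.8 MP
print(round(mp, 1))              # 42.8
# Full color resolution: 4x chroma applied to the 19.0 MP cropping total.
print(round(after_crops * 4, 1)) # 76.0
```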
Resolution is not the most important aspect of a photograph, but if you want to get the maximum contrast and resolution, the optimal number of megapixels might be higher than you think.