RDKirk wrote in post #7882286
There appears to be a term missing in this discussion. When you're talking about sensors, you are not talking about "pixels," you're actually talking about "sensor elements" (sometimes called "sensels"). A single sensor element will produce data that will be processed into a single "pixel."
However, that sensor element itself is a complicated device that contains several filtered photon collectors under a microlens and a bit of integrated circuitry. In the overall array of sensor elements, the distance from the center of one sensor element to the center of the adjacent sensor element will be about the same size as the resulting pixel of sensor data.
That was the semantics part. I have seen and even participated in discussions about the distinction between "sensels" and pixels, but at the end of the day, it's unrelated to the point I'm making. My point relates directly to pixels, even using your definition. The more sensor area over which data is collected, by whatever means, the more accurate that data will be. If I call that area a pixel, I don't care whether it is a one-to-one matchup with a sensor element or just a sum produced by the camera's processor, or whether it's carved into silicon by the hammering of microscopic Oompa-Loompas.
In practical terms, we can assume similar technology with current products (there are a few exceptions) and divide the length of the frame by the number of pixels to get an idea of how much area on the sensor feeds each pixel. When that number is bigger, there are more photons being sampled and therefore the resulting number integrates more information. It will thus be more accurate.
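To make that arithmetic concrete, here's a minimal sketch of the frame-length-divided-by-pixel-count calculation. The sensor widths and pixel counts are illustrative round numbers, not any particular camera's specs:

```python
# Rough pixel-pitch arithmetic: divide the frame width by the number of
# pixels across it to see how much sensor area feeds each pixel.
# Dimensions below are illustrative round numbers.
def pixel_pitch_um(frame_width_mm, pixels_across):
    """Center-to-center spacing of sensor elements, in microns."""
    return frame_width_mm / pixels_across * 1000.0

full_frame = pixel_pitch_um(36.0, 6000)   # 36mm-wide frame, 6000 px across
crop       = pixel_pitch_um(22.5, 6000)   # same pixel count on a smaller frame

print(f"full frame: {full_frame:.2f} um/pixel, crop: {crop:.2f} um/pixel")
```

The bigger frame yields a larger pitch for the same pixel count, which is the "more photons per pixel" point in numbers.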
This is just as true if we are talking about grains of silver salts versus electronic sensor elements. And the resulting practical advice remains the same.
1. We choose the medium on the basis of how big a print we want to make. Kodak developed Panatomic-X (ISO32) as a very thin emulsion film for the sole purpose of making 35mm look the way medium format looked with Plus-X (ISO125). We developed Panatomic-X in Rodinal, a high-acutance developer, to wring as much resolution out of it as we possibly could. We did this so we could make 11x14 prints that really looked good. We chose a film whose grain would still be acceptable when enlarged 12 times to make an 11x14 print. 6x4.5 only has to be enlarged five times to make an 11x14 print, and the bigger grain on Plus-X was still within the range of acceptability. And the faster film allowed us to match the depth of field by using a smaller aperture. Even with that, the prints from 6x4.5 looked better because of more subtle tonality reflecting more information.
Large-format photographers use fast film for convenience. I only have to enlarge a 4x5 negative a little more than 2-1/2 times to make an 11x14. ISO400 Tri-X is fine. And they'll develop that Tri-X in an acutance developer to maximize sharpness, knowing the grain won't be an issue. When we used Tri-X in a 35mm camera, we developed it in a solvent developer like Microdol to break the grain down so it wouldn't be so obtrusive, even though it cost us some sharpness.
Just yesterday I read a long-standing complaint among ultra-large-format photographers that the film producers only seem to provide their slowest films in ultra-large sheets (12x20" being a common example). These photographers are making contact prints--they are not enlarging at all and have no need for fine grain. But they do need speed--their lenses are rarely faster than f/9 or f/11 even at maximum aperture.
In the digital world, if our goal is to make nominal 13x19 prints (the largest print most folks can make), we need, at a minimum, 3000x4500 accurate pixels.
If we want to make 30x45" prints, then we'll need 7200x10,800 pixels (about 78MP), unless we put a rope in front of the print keeping viewers from getting too close. And we'll need a camera system that can feed each of those pixels accurately, and that, my friends, is the crux of the matter. It leads directly to...
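The pixel counts above follow directly from print size times output resolution. A quick sketch, using the 240 pixels/inch figure from the Epson discussion:

```python
# Pixels needed for a print at a given output resolution.
# 240 ppi is the Epson output figure referenced above.
def pixels_needed(width_in, height_in, ppi=240):
    w, h = width_in * ppi, height_in * ppi
    return w, h, w * h / 1e6   # width px, height px, megapixels

print(pixels_needed(13, 19))   # roughly the 3000x4500 minimum (~14 MP)
print(pixels_needed(30, 45))   # 7200 x 10800 (~78 MP)
```

Note how the pixel requirement grows with the square of the print dimension, which is why the 30x45" print is so demanding.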
2. We choose the format based on desired image quality (in part). Ansel Adams was once asked how big a camera he used, and his response was "the biggest one I can carry." The camera system must feed each of those picture elements (digital or chemical) with accurate information about the scene. And the bigger the image frame, the more information it will describe.
What makes an accurate pixel? Two things: 1) how good a sample of the scene's illumination and color we are making available to the elements that comprise (or feed) the pixel, and 2) how accurately we measure that illumination and color. Sensors are continually being improved in the second of these categories. But there is more to be gained by improving the first category, and that's the basis for all my comments in this thread including what follows. We buy better lenses for the sole reason of improving the first of these requirements. But increasing the format size has a bigger effect still.
And we know this without having to be told. Assuming ideal optical quality, and assuming that both choices provide a surplus of pixels for our intended print size, which of the following two choices would you prefer:
Choice 1: Making an image with a 400mm lens, or
Choice 2: Making an image with a 200mm lens, and then cropping it to show the same field of view as the first choice.
We know without having to be told that the first option will be better. The second option has a name, digital zoom, and it's something we all recommend against. See? We know that format is fundamentally critical. And that's what the second option is: turning our sensor into a smaller sensor. Again, we assumed that we will in both cases have more pixels than we need to make the desired print. So, that print will integrate more information with the first choice than with the second, irrespective of technology. It's just as true for film as for digital.
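The cost of that crop is easy to quantify: matching a 400mm field of view by cropping a 200mm image keeps only half the frame in each dimension, so only a quarter of the pixels survive. A sketch, with an illustrative 24MP sensor:

```python
# Cropping to match a longer focal length's field of view keeps
# (focal_used / focal_matched) of the frame in each linear dimension,
# i.e. that fraction squared of the pixels. 24 MP is illustrative.
def cropped_megapixels(megapixels, focal_used, focal_matched):
    linear_fraction = focal_used / focal_matched   # 200/400 = 0.5
    return megapixels * linear_fraction ** 2

print(cropped_megapixels(24.0, 200, 400))   # 6.0 MP left after the crop
```

Three quarters of the sensor's information is simply discarded, which is exactly the "turning our sensor into a smaller sensor" point.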
The 13x enlargement needed to make a 13x19 print from a full-frame sensor also enlarges lens faults by 13 times. If a viewer can see 5 lines/mm on the print (which is where we got our 240 pixel/inch requirement for the Epson output), then a 13x enlargement requires 65 lines/mm to be available in the image. That, in turn, requires 130 pixels/mm. That's very demanding, and beyond the capabilities of most of the lenses we will use, even when used optimally. So, we are not only stressing the digital resolution of a 5D sensor when we make a 13x19 print, we are stressing the optical resolution of the lens by an even greater amount. The good news is that we don't really have to achieve that 5 lines/mm standard to make a really good looking print.
But with larger formats, it's easy to do so. For example, the 6x9 format, which has the same aspect ratio as a full-frame sensor, makes an image of about 56x84mm (this varies by equipment a little, but it all uses 120 roll film and puts 8 exposures on a roll). We only have to enlarge that 5.6 times to make our nominal 13x19 print. To achieve 5 lines/mm on the print, we only need 28 lines/mm on the film, which can be represented by 56 pixels/mm. I may scan that film at 4000 "sensels" per inch using my Nikon scanner, but for a 13x19 print I don't need to. I only need to scan it at 53 pixels/mm, or 1360 pixels/inch.
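The whole enlargement chain above can be sketched in a few lines. Using the exact print dimension (19" is about 483mm) the results differ slightly from the round numbers in the text, which is just rounding; the two-pixels-per-line-pair doubling follows the reasoning above:

```python
# Resolution the film/sensor must deliver for a given print, following
# the chain above: enlargement factor, then lines/mm on the medium,
# then pixels/mm (two pixels per line pair).
def demand(frame_long_mm, print_long_mm=483.0, print_lp_mm=5.0):
    enlargement = print_long_mm / frame_long_mm
    lines_mm = print_lp_mm * enlargement        # needed on the film/sensor
    return enlargement, lines_mm, 2 * lines_mm  # and in pixels/mm

print(demand(36.0))   # full frame: ~13.4x, ~67 lines/mm, ~134 px/mm
print(demand(84.0))   # 6x9 film:   ~5.7x,  ~29 lines/mm, ~57 px/mm
```

The larger format cuts the demand on the lens and the medium by the ratio of the frame sizes, which is the entire argument in one function.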
Now, here's where Lowner's point comes into play. Scanning that 6x9 negative at 1360 ppi might not actually gather as much information as scanning it at 4000 ppi, because instead of integrating the light across the bigger area to feed one pixel, the scanner might just be skipping a few pixels and getting a smaller sampling of that light. A larger sensor area devoted to feeding one pixel doesn't necessarily mean it integrates more light into that pixel. But that's why we assume equivalent technology, and I think it's reasonable to assume that the microlens integrates that light pretty consistently in practical terms. So, I would probably still scan my 6x9 negative at 4000 pixels/inch and then downsample it to perform that integration. That would make it a bit more comparable to a camera sensor, rather than a second-generation scan of film.
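The scan-high-then-downsample approach amounts to averaging blocks of samples rather than skipping them. A minimal sketch of that integration, assuming a simple box average (real resampling filters are more sophisticated, but the principle is the same):

```python
import numpy as np

# Downsample a high-resolution scan by averaging non-overlapping blocks,
# so each output pixel integrates light gathered from a larger film area
# -- as opposed to decimation, which skips samples and discards them.
def box_downsample(img, factor):
    """Average factor x factor blocks of a 2-D array."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor            # trim to a multiple
    img = img[:h, :w]
    return img.reshape(h // factor, factor,
                       w // factor, factor).mean(axis=(1, 3))

scan = np.random.default_rng(0).random((4000, 4000))  # stand-in for a scan
small = box_downsample(scan, 3)                       # ~4000 ppi -> ~1333 ppi
print(small.shape)
```

Every input sample contributes to the output, so the downsampled file carries the integrated information that a skipping scan at the lower resolution would have missed.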
Yes, my statement requires the assumption that when a pixel is informed by a larger portion of the sensor, it integrates proportionately more of the light reaching that element area when compared to a pixel fed by a smaller area. Practical photographers who, for the most part, keep up with their technology and view their images critically can, I think, make that assumption reasonably. In fact, medium-format sensors actually integrate light into their sensor elements more accurately than do small-format sensors, because they don't use an anti-aliasing filter.
When comparing camera sensors, we are talking relatively small effects. When comparing formats, we are talking large effects that improve every aspect of the system from optics to printing.
Rick "figuring only Wilt and Lowner made it this far" Denney