shooter mcgavin wrote in post #7478931
Look at where digital imaging has gone in the last 5 years, or where photographic equipment in general has gone in the last 20. Perhaps we're reaching a plateau where advances in digital imaging as we know it are concerned, but I'm willing to bet something new will be introduced that will blow even the best cameras today out of the water, much as any professional DSLR today does to even the best 35mm body.
Where there is room for improvement is in dynamic range, noise, and sensitivity. Moore's Law doesn't apply to the optics--those are constrained by cost and physics--but it does apply to the sensor, and more so to the processor. For example, right now we make an HDR image from three bracketed exposures. It might be possible to do this within a single normal exposure through more complex processing and management of the sensor. So, even if the sensors don't improve exponentially, the processor's ability to derive more from the sensor will.
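To make the bracketing idea concrete, here is a minimal sketch of merging bracketed exposures into one high-dynamic-range result. This is my own illustration, not any camera's actual pipeline: the function name, the hat-shaped weighting, and the assumption of linear 0-to-1 pixel values are all inventions for the example.

```python
import numpy as np

def merge_brackets(images, exposure_times):
    """Merge bracketed exposures into one HDR radiance map.

    images: list of float arrays with linear values scaled 0..1.
    exposure_times: matching shutter times in seconds.

    Each frame is normalized by its exposure time, then averaged with
    a hat-shaped weight that is 1 at mid-tone and 0 at either clipping
    point, so blown or blocked pixels don't pollute the estimate.
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        img = img.astype(np.float64)
        w = 1.0 - np.abs(2.0 * img - 1.0)  # 0 at clip, 1 at mid-tone
        acc += w * img / t
        wsum += w
    return acc / np.maximum(wsum, 1e-12)
```

A scene value that clips in the long exposure is recovered from the short one, because the clipped sample gets zero weight.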
Storage is always an issue, even with storage capacities following Moore's Law (and beyond). But at some point we just won't need more megapixels, and storing them will be an annoyance, so there will be better ways to summarize (i.e., compress) the data in the camera and computer. Raw is already a more compact format than TIFF or PSD, for example. And we might learn to take advantage of having more megapixels by averaging (instead of interpolating), which would improve the fidelity of the image without merely increasing its pixel count.
That might be the solution for noise, for example. Noise is a random artifact laid on top of the pixel's response to light. Suppose we shrank the pixel pitch by a factor of ten (I'll take 5 microns as today's pitch, which provides an approximate resolution of 100 lines/mm); we'd then have a 0.5-micron pixel pitch. That's about 100 times as many pixels as we have now in a full-frame sensor. We think that storing 3,500 megapixels is beyond consideration, but we used to think the same of 35 megapixels (which is not unusual in medium-format sensors). Moore's Law will take care of that. But the progress will be in the realization that we don't even want to store all those pixels. If we averaged 3.5 billion pixels down to 35 million, each pixel we print would combine 100 pixels from the sensor. Since the noise is distributed randomly (and it will be severe, simply because each tiny pixel measures so few photons), averaging 100 samples would knock it down by roughly a factor of ten--the square root of the sample count. That could increase dynamic range considerably by improving the signal-to-noise ratio, and would be a way to take advantage of Moore's Law without running into the limits of optics.
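The square-root payoff from averaging is easy to demonstrate numerically. A minimal sketch, using pure Gaussian noise as a stand-in for a sensor's random noise (the function name and block size are just for illustration):

```python
import numpy as np

def downsample_average(img, factor):
    """Average factor x factor blocks of pixels into single pixels.

    Trims any ragged edge, then reshapes so each block becomes one
    axis pair and takes the mean over it. Averaging N independent
    noise samples cuts the noise standard deviation by sqrt(N).
    """
    h, w = img.shape
    img = img[: h - h % factor, : w - w % factor]
    return (img
            .reshape(h // factor, factor, w // factor, factor)
            .mean(axis=(1, 3)))

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, (1000, 1000))   # sigma = 1.0 per pixel
small = downsample_average(noise, 10)        # 100 samples per output pixel
# small.std() comes out near 0.1: one-tenth the per-pixel noise.
```

Each 10x10 block is 100 samples, so the averaged image's noise is about a tenth of the original's, matching the factor-of-ten claim above.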
And I can see a time when a manufacturer plugs a lens into a test apparatus that measures and records that particular lens's performance characteristics, which the camera then reads and corrects for at exposure time. That is also using software to work around physical limitations.
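One simple case of such a stored lens profile is vignetting: the factory measures how much the lens darkens toward the corners, and the camera divides that falloff back out. A minimal sketch under my own assumptions (a quadratic radial falloff model and a single measured corner-illumination number; nothing here reflects a real manufacturer's format):

```python
import numpy as np

def radial_falloff_map(h, w, corner_gain):
    """Build a per-pixel illumination map from one measured parameter.

    corner_gain is the lens's measured relative illumination at the
    extreme corner (e.g. 0.5 for one stop of vignetting). The falloff
    is modeled as quadratic in normalized radius: 1.0 at the center,
    corner_gain at the corners.
    """
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(y - cy, x - cx)
    r_norm = r / r.max()                       # 0 at center, 1 at corner
    return 1.0 + (corner_gain - 1.0) * r_norm**2

def correct_vignetting(img, falloff):
    """Divide out the recorded falloff to restore even illumination."""
    return img / falloff
```

A uniformly lit scene shot through the modeled lens comes back flat after correction, which is the whole point of carrying the profile with the lens.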
Rick "who already improves images by downsampling them" Denney