Three topics are discussed in this thread:
- Artifacts caused by sRAW
- Misconceptions about compression and sRAW
- Superior alternatives to sRAW
The center of the image (a zone plate) has a low spatial frequency (large details), and the frequency gets progressively higher toward the edges. It's easy to see the artifacts increase for the fine details in the raw image. But the sRAW image shows false detail much sooner, at lower spatial frequencies, in the form of circles. Here is what the original image looks like:
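For anyone who wants to reproduce the test, a zone plate is easy to synthesize. This is a rough sketch (the frequency constant `k` is my own arbitrary choice, not whatever was used for the actual test image):

```python
import math

def zone_plate(size=256):
    """Zone plate test pattern: sin(k * r^2), normalized to [0, 1].

    The local spatial frequency grows linearly with distance r from
    the center, so detail gets finer toward the edges -- which is why
    this pattern makes aliasing so easy to spot.
    """
    k = math.pi / size  # arbitrary scale; reaches high frequency near the corners
    c = (size - 1) / 2.0
    img = []
    for y in range(size):
        row = []
        for x in range(size):
            r2 = (x - c) ** 2 + (y - c) ** 2
            row.append(0.5 + 0.5 * math.sin(k * r2))
        img.append(row)
    return img
```

Downsample that pattern naively (throw away pixels, as sRAW effectively does) and the false circles appear immediately.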
Some photographers desire aliasing artifacts and describe them with positive terms such as "crunchiness", "sharpness", etc. They prefer Sigma Foveon images for the same reason. But aliasing artifacts cause an unnatural, telltale "digital" look that distracts the viewer from the image. Then there are the de-Bayer artifacts, which are exacerbated by aliasing.
There is at least one aliasing artifact that everyone hates: moiré. Moiré does not occur in every image the way the other artifacts do, but it does happen sometimes. In real life, when you pour two liters of water into a one-liter container, water spills out and makes a mess. But camera design is different: when you pour two liters of water into a one-liter container, the water folds back on itself and corrupts the entire container. The amount of water is the level of detail (spatial frequency), and the volume of the container is the number of megapixels in the camera. Moiré is the corruption.
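The "folding back" in the analogy is exactly what sampling theory predicts: any frequency above the Nyquist limit reappears disguised as a lower one. A tiny sketch makes it concrete (the 8 Hz rate and the 3 Hz / 5 Hz tones are arbitrary illustration values):

```python
import math

def sample(freq, rate, n=8):
    """Sample a sine wave of `freq` Hz at `rate` samples/sec."""
    return [round(math.sin(2 * math.pi * freq * t / rate), 6)
            for t in range(n)]

# Nyquist limit for an 8 Hz sampling rate is 4 Hz.
rate = 8
low  = sample(3, rate)  # 3 Hz: below Nyquist, recorded faithfully
high = sample(5, rate)  # 5 Hz: above Nyquist...

# ...folds back to rate - 5 = 3 Hz (with inverted phase), and the
# sampled data cannot tell the two apart:
folded = [round(-v, 6) for v in sample(3, rate)]
assert high == folded
```

In a camera, the "5 Hz tone" is fine fabric texture or distant roof tiles, and the folded-back impostor frequency is the moiré pattern.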
Of course, aliasing is not fundamentally necessary for sRAW. It would be possible for Canon to resample to sRAW with much more care and heavier computation to avoid aliasing at lower spatial frequencies. But even if that were fixed in the future, there would still be one unavoidable downside of sRAW: loss of resolution.
If there were no better and simpler methods for reducing file size, then such a trade in resolution might be an acceptable compromise. But the plain fact is that there are many other alternatives, and they are all superior to sRAW in every way.
The first is to stop adding Marketing bits to the files. None of Canon's high-end cameras (not the 5D2, the 1D3, nor even the 1Ds3) has read noise low enough to warrant 14 bits. The last two bits can be replaced by random noise in post processing and it makes absolutely no difference to the image, because they're already just random noise. This is demonstrated very clearly in this image courtesy of John Sheehy (click the thumbnail):
IMAGE LINK: http://forums.dpreview.com …rum=1018&message=31239793
A longer explanation is available in the Noise, Dynamic Range, and Bit Depth part of Emil's essay.
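The claim is easy to verify yourself: replace the bottom two bits of simulated raw data with random bits and measure how much actually changed. This sketch assumes an illustrative read noise of 6 ADU, not a measurement from any particular camera:

```python
import random

random.seed(0)  # deterministic for the demo

# Simulated 14-bit samples with ~6 ADU of read noise (an illustrative
# figure for the sketch, not measured from any real camera).
READ_NOISE = 6.0
samples = [max(0, min(16383, int(random.gauss(2000, READ_NOISE))))
           for _ in range(20000)]

# Replace the bottom 2 bits of every sample with random bits.
scrambled = [(v & ~3) | random.getrandbits(2) for v in samples]

# RMS of the change is ~1.6 ADU, far below the 6 ADU of noise already
# there; added in quadrature, total noise barely moves (6.0 -> ~6.2).
diff = [a - b for a, b in zip(samples, scrambled)]
rms_diff = (sum(d * d for d in diff) / len(diff)) ** 0.5
assert rms_diff < READ_NOISE / 2
```

The two "extra" bits were never recording signal; they were recording a fresh copy of the noise.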
After getting rid of the two bits that are *always* wasted, Canon should then get rid of the bits that are only wasted at certain settings. That is, remove bits as the analog gain (ISO setting) is increased. ISO 1600 only needs 9 bits to record every last ounce of data from the camera, and ISO 25,600 can only ever make use of 5 bits!
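As a rough sketch of where those numbers come from: once read noise spans many ADU, the levels below the noise floor are indistinguishable, so the needed bit depth is about log2(full scale / read noise). The read-noise figures below are assumptions chosen to illustrate the bit counts above, not measurements:

```python
import math

def usable_bits(full_scale, read_noise_adu):
    """Bits needed to encode every distinguishable level: levels finer
    than the read noise carry no information, so only about
    full_scale / read_noise distinct levels actually exist."""
    return math.ceil(math.log2(full_scale / read_noise_adu))

FULL_SCALE = 16383  # 14-bit ADC

# Hypothetical read noise in ADU, roughly doubling per stop of gain:
print(usable_bits(FULL_SCALE, 32))   # ISO 1600  -> 9 bits
print(usable_bits(FULL_SCALE, 512))  # ISO 25600 -> 5 bits
```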
Next is to allow the user to remove even more bits if they so desire. Most people aren't going to use all 13.5 stops of dynamic range, and sometimes they will be willing to sacrifice finer gradations for smaller file sizes, especially if they know it will never be used at full resolution. The choice to truncate bits and dither to a smaller bit depth would give the user more power.
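A user-selectable truncate-and-dither step might look like this (my own sketch, not anything in a real camera's firmware; the dither turns quantization error into benign noise instead of posterized banding):

```python
import random

def truncate_with_dither(value, drop_bits, rng=random):
    """Quantize a 14-bit value to a coarser step, with uniform dither
    applied before rounding so the error decorrelates from the signal
    and looks like noise rather than banding."""
    step = 1 << drop_bits
    dithered = value + rng.uniform(-0.5, 0.5) * step
    q = int(round(dithered / step)) * step
    return max(0, min(16383, q))

# Dropping 4 of 14 bits: every output is a multiple of 16 and never
# strays more than one step from the input.
random.seed(1)
for v in range(100, 16000, 257):
    q = truncate_with_dither(v, 4)
    assert q % 16 == 0 and abs(q - v) <= 16
```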
In the 50D, as much as *half* of the sRAW2 file size comes from the two embedded JPEGs (thumbnail and full size). The user should have control over the size and compression quality of the large JPEG: some people, such as myself, never even use it as part of the workflow; it just wastes space. Of course, this may result in a trade-off between review speed (i.e. checking focus at 10X) and file size, but that should be left up to the user. One simple solution is to allow the review images to be stored separately from the raw, so they may be deleted in post without changing the read-only state of the raw file, which is highly necessary for safety against bugs.
Most important of all, Canon should stop wasting so many bits on random photon shot noise. Unless they're expecting light itself to stop obeying the laws of physics, there is no reason to keep bloating the file. Emil Martinec explained this concept lucidly in the essay above. Instead, Canon should only use the necessary amount of precision. That is what Nikon does with one of their NEF formats, shown in this image, again courtesy of Emil:
It shows that the number of raw levels corresponds to the amount of photon shot noise.
The NEF format is sometimes called "lossy", but that is really a misnomer. Photon shot noise itself is what causes the loss of level precision. No amount of bit depth is ever going to get that back. Using the minimum number of bits necessary is no more "lossy" than using 100 bits: it's just less bloated with useless data. Using a bit depth that is far larger than necessary is only a delusion of higher accuracy where none actually exists.
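Nikon's actual LUT isn't reproduced here, but a simple square-root companding curve illustrates the principle: the quantization step grows with the square root of the signal, tracking shot noise, so 14 bits of levels fit into roughly 8 bits of codes without losing anything the photons hadn't already randomized:

```python
import math

GAIN = 2.0  # chosen so 14-bit input fits in ~8-bit codes: 2*sqrt(16383) ~ 256

def encode(x):
    """Square-root companding: code step size tracks photon shot noise."""
    return round(GAIN * math.sqrt(x))

def decode(code):
    return (code / GAIN) ** 2

# Round-trip error stays well below shot noise (~sqrt(x)) at every
# signal level, so no visible information is lost:
for x in range(100, 16384, 97):
    assert abs(decode(encode(x)) - x) < math.sqrt(x)
```

This is a sketch of the design principle, not Nikon's exact curve; the point is that the error it introduces is always smaller than the randomness already baked into the light.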
Of course, Nikon made some mistakes in their implementation of this design, such as performing white balance preconditioning and a sub-optimal LUT, that cause posterization (quantization error) in rare circumstances. But those mistakes in the implementation do not take away from the soundness of the underlying design principles.
One of the biggest reasons that users want sRAW is not the file size, but the post-processing time. But why are raw conversions slow? Because converters are designed to spend as much time as necessary extracting every last ounce of detail, with time-consuming context analysis and other techniques. But when only half the resolution is needed (the use case for sRAW), a different demosaic method can be used. In fact, it's not really demosaicing at all, so it can be lightning fast and still match the quality of a full demosaic-and-downsample, particularly at regular factors (2X linearly), such as 24 MP raw -> 6 MP RGB.
In fact, it's actually *faster* to demosaic 24 MP into a 6 MP RGB than to demosaic a native 6 MP itself, and the quality is even higher. The only reason that all raw converters don't have this feature already is that people are still willing to accept long post processing times for higher resolution.
The advantage of just doing a "fast de-Bayer" (whatever the actual algorithm) over sRAW is that if high resolution is needed (years later, even), one can still do the slower demosaic and extract the full detail possible.
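A half-resolution "fast de-Bayer" can be as simple as collapsing each 2x2 RGGB quad into one RGB pixel: no interpolation, no context analysis, one pass over the data. This is a bare-bones sketch (a real converter would at least apply white balance and a color matrix):

```python
def half_res_debayer(raw, width, height):
    """Collapse each 2x2 RGGB Bayer quad into one RGB pixel.

    No interpolation at all: each output pixel gets the real R, the
    real B, and the average of the two real G samples from its quad,
    so there is nothing to guess and nothing to alias.
    """
    out = []
    for y in range(0, height, 2):
        row = []
        for x in range(0, width, 2):
            r  = raw[y][x]          # R at even row, even col
            g1 = raw[y][x + 1]      # G at even row, odd col
            g2 = raw[y + 1][x]      # G at odd row, even col
            b  = raw[y + 1][x + 1]  # B at odd row, odd col
            row.append((r, (g1 + g2) / 2.0, b))
        out.append(row)
    return out
```

Because every output channel comes from a real sample, there are no de-Bayer guesses to get wrong, which is why this path can be both faster and cleaner than demosaicing a native low-resolution sensor.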
With all these simple improvements, which would not take any more time to develop than sRAW, Canon's 21 MP raw would be 5 MB instead of 25 MB, yet it would have absolutely no difference in image quality, even under heavy post-processing and close scrutiny by the staunchest compression hater.
But why stop there? Those are only the simplest ideas and easiest methods for reducing filesize with no loss. There is an entire world of advanced ideas and non-simple methods they could apply to the image to reduce files even more at the cost of an imperceptible (or at least nearly so) effect on the image. Cineform RAW and REDCODE show that technology exists to reduce file size in ways that are undetected by the eye. These are the types of compression that are more fairly described as "lossy", even if they are not lossy to the eye without extreme post processing. At the very least, the "lossy" effect will be much smaller than just dumping entire pixels like sRAW does. However, this type of compression truly does take a lot more development effort than anything else, and may require more horsepower in the camera. Although I could argue that the cost of a few ASICs combined with the increased market size would make up for it, I'd just rather focus on asking Canon to replace sRAW with something equally simple.
Only after all the other options have been exhausted does a scheme like sRAW begin to appear as an acceptable compromise.
All that said, I must admit that I can see the shrewd logic behind sRAW. Canon knew that most photographers are very uninformed about all these issues and are the victims of many misconceptions and myths, namely:
- Photographers should hate anything labeled "compression" (false).
- Compression is worse in some ways than a downsampled raw (false).
- Small pixels are worse (false).
- More bits are always better (false).
- Canon's cameras actually use the 14 bits (false).
By designing sRAW around those myths, Canon wins on every front:
- They can sell to all the people with an irrational fear of small pixels ("I can get 5D1-sized large pixels with sRAW? I'll buy it!").
- And they also get all the people with an irrational fear of compression ("I can get 5D1 file sizes without compression? I'll buy it!")
- But without losing any of the snake-oil believers who were taking the extra 2-bits placebo ("Still 14-bits? I'll buy it!").