Reader Jon Barrett wrote in to comment on my earlier post about the New York Times piece on whether the number of pixels in an image matters beyond a certain point. His email is below:
Since this got to be fairly lengthy, I decided to e-mail rather than comment on-line.
I don't doubt that David Pogue got the results he claimed; however, he used an oversimplified methodology that could only reinforce what one can't help but suspect was a predetermined conclusion.
As some of the comments note, the test was flawed. First and foremost, the subject was apparently a relatively simple image without a lot of detail (a baby against a plain background). Since there isn't much fine detail to capture, you don't need a lot of pixels to present it. We don't know what aperture he used for the original photo; using a wide aperture gives a very shallow depth of field, further reducing the available detail. One of the commenters linked to pictures of trees at two different resolutions, and the additional detail in the leaves/needles of the higher-resolution image was clearly visible.
Secondly, he didn't mention the downsizing technique used. A straight resize, casting out every nth pixel, would give different results than any of the available resampling algorithms. Which algorithm would most closely represent the sampling of a sensor with a smaller pixel count?
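To illustrate that difference, here's a minimal sketch (my own, using Python and the Pillow library; the input file name is hypothetical) comparing a crude pixel-discarding resize with a filtered resample:

```python
# Minimal sketch: "casting out pixels" (nearest-neighbour) versus a filtered
# resample (Lanczos). Requires a recent Pillow (Image.Resampling appeared in
# Pillow 9.1; older versions use Image.NEAREST / Image.LANCZOS instead).
from PIL import Image

src = Image.open("original.jpg")             # hypothetical source file
target = (src.width // 2, src.height // 2)   # half the linear size, 1/4 the pixels

# Nearest-neighbour keeps one source pixel per output pixel and discards the rest.
decimated = src.resize(target, resample=Image.Resampling.NEAREST)

# Lanczos averages a weighted neighbourhood for each output pixel, which is
# closer to how a lower-resolution sensor would integrate the same scene.
resampled = src.resize(target, resample=Image.Resampling.LANCZOS)

decimated.save("decimated.jpg")
resampled.save("resampled.jpg")
```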
What levels of JPEG compression were used in the downsized files? We all know that the degree of JPEG compression makes a significant difference in the quality of the image, and very few cameras make zero compression the default JPEG setting.
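To make the compression question concrete, a minimal sketch (again Pillow; file names are hypothetical) that saves the same downsized image at several JPEG quality settings and prints the resulting file sizes:

```python
# Minimal sketch: one downsized image saved at different JPEG quality levels.
# File names are hypothetical; quality uses Pillow's 1-95 scale.
import os
from PIL import Image

img = Image.open("resampled.jpg")
for quality in (95, 85, 75, 50):
    out = f"resampled_q{quality}.jpg"
    img.save(out, format="JPEG", quality=quality)
    print(f"quality {quality}: {os.path.getsize(out)} bytes")
```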
A better test would have encompassed more than three images and used a more complex subject. I'd suggest starting with the large image, as he did, performing a straight resize as well as a more sophisticated resample down to the target resolutions. Add additional exposures at alternate camera resolutions (as many of the respondents noted, cameras generally offer multiple capture sizes). Then zoom out so that crops of the zoomed pictures, covering the principal subject area, have the reduced megapixel count being tested. Then reposition the camera, without zooming, again so that crops provide the same megapixel count. As a final component, use several cameras from the same product line (released at approximately the same time) to take the same picture; for instance, the Canon PowerShot A640, A630, A540 and A530.
I'd also suggest that the final prints be made by a method the average family snapshooter is more likely to use: take the card down to the local Ritz, Wal-Mart or K-Mart for standard machine processing, or use an inexpensive ink-jet printer. It appears that he went to a specialist printing shop; their customized and customizable workflow is designed to minimize loss of quality from even the worst images.
Part 1 (different resampling methods) helps define the effect of resampling on the image.

Part 2 (different camera resolutions) uses the same lens, settings, and sensor to capture the different resolutions. As all resizing is done in-camera, you're minimizing variations in internal processing as well. Comparison with Part 1 allows you to get a better idea of what resizing/resampling algorithm is used in-camera.
Part 3 (zoom and crop) lets you compare the "native" sensor performance, although it introduces lens variables: image quality at different focal lengths and at different degrees of magnification becomes an external factor.
Part 4 (move and crop) also allows comparison of "native" sensor performance while minimizing the lens variables introduced by zooming, though it still leaves the effect of magnification on lens quality in play. (A small crop helper for Parts 3 and 4 is sketched after Part 5.)
Part 5 (different cameras): by using cameras designed around the same time and from the same product line, you minimize differences in processing and can concentrate on the differences introduced by the sensors themselves.
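For the crop step in Parts 3 and 4, a minimal sketch (Pillow; the megapixel target and file names are hypothetical) of a centred crop that matches a given pixel count while keeping the original aspect ratio:

```python
# Minimal sketch: centre-crop an image so the crop contains roughly the target
# megapixel count, preserving aspect ratio. Numbers and file names are hypothetical.
import math
from PIL import Image

def center_crop_to_megapixels(img, target_mp):
    scale = math.sqrt(target_mp * 1_000_000 / (img.width * img.height))
    w, h = int(img.width * scale), int(img.height * scale)
    left, top = (img.width - w) // 2, (img.height - h) // 2
    return img.crop((left, top, left + w, top + h))

full = Image.open("zoomed_out_10mp.jpg")
crop = center_crop_to_megapixels(full, 6.0)   # e.g. match a 6 MP capture
crop.save("crop_6mp.jpg")
```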
As a quick check, I just took a 6 MP shot from my old D60, resized it in IrfanView to half size (1.5 MP) and also resampled it to the same size. There were noticeable colour differences between the original and the resized images when viewed at the same screen size (10% magnification for the resized ones and 50% for the original). This implies that there's more to megapixels than just resolution: colour reproduction enters into the equation as well.
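A sketch of that quick check in code form (Pillow plus NumPy; the file name is hypothetical), comparing the average RGB values of the original against nearest-neighbour and Lanczos reductions:

```python
# Minimal sketch: compare mean RGB values of an original shot against
# half-size reductions made two different ways. File name is hypothetical.
import numpy as np
from PIL import Image

def mean_rgb(img):
    # Flatten to (pixels, 3) and average each channel.
    return np.asarray(img.convert("RGB"), dtype=np.float64).reshape(-1, 3).mean(axis=0)

original = Image.open("d60_shot.jpg")                  # ~6 MP source
half = (original.width // 2, original.height // 2)     # ~1.5 MP

resized = original.resize(half, resample=Image.Resampling.NEAREST)
resampled = original.resize(half, resample=Image.Resampling.LANCZOS)

for name, img in (("original", original), ("resized", resized), ("resampled", resampled)):
    print(name, mean_rgb(img).round(2))
```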
There's another reason people use the LCD rather than the optical finder (and most Canon PowerShots have optical finders): the framing and parallax of optical finders are a problem, while with the LCD you get WYSIWYG framing for a lot less money than my DSLRs cost. There are compromises that would let people use both, like composing with the LCD and finalizing the shot with the optical finder, but that's awkward too. However, one significant reason IS (image stabilization) is creeping into cameras is not operator technique but lens technology. As lenses get longer and apertures get smaller, exposure times get longer, so camera shake becomes increasingly difficult to control. The small cameras that are so popular amplify this because they're difficult to brace adequately no matter how you view the image. Standard SLR technique has the photographer supporting the camera from under the lens. Have you tried that with a pocketable P&S sporting a 10x, or even a 6x, zoom lens?
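To put rough numbers on the shake problem, a minimal sketch (my own illustration, not from Jon's email; the scene brightness and lens figures are hypothetical) that computes the shutter time forced by a given exposure value and aperture, and compares it with the old 1/(35mm-equivalent focal length) handholding guideline:

```python
# Minimal sketch: shutter time required at a given exposure value (EV at ISO 100)
# and aperture, versus the rough 1/(equivalent focal length) handholding limit.
# EV is defined by EV = log2(N^2 / t), so t = N^2 / 2^EV.
def required_shutter_s(ev100, f_number):
    return f_number ** 2 / (2 ** ev100)

def handhold_limit_s(equiv_focal_mm):
    return 1.0 / equiv_focal_mm

ev = 9              # hypothetical indoor scene, metered at ISO 100
f_number = 5.6      # typical small-sensor zoom at the long end
focal_mm = 210      # hypothetical 35mm-equivalent focal length at full zoom

t = required_shutter_s(ev, f_number)
limit = handhold_limit_s(focal_mm)
print(f"required shutter: {t:.3f} s, handholding limit: {limit:.4f} s")
# Here the required exposure is far longer than the handholding limit,
# which is exactly the gap image stabilization is meant to bridge.
```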
Jon Barrett
Comments (1)
I kind of agree with both.
The point that is missed by both though is that much of the picture quality depends on the quality of the photographer.
If I take a picture with a 1MP or a 10MP camera you won't notice the difference. Unless you want to correct it with Photoshop, then the second pic will show more bad pixels :)
Posted by sjon | November 30, 2006 11:35 AM