09/19/2007 06:40:04 AM · #1 |
I would like to know your opinion about this statement:
"...more pixels are merely splitting your lenses' optical performance (and your technique) in more ways. Maybe the D300 can use 11 pixels to split a hair and the D200 only has 10, but ultimately, this many pixels jammed onto a DX sensor are usually only serving to give you a more precise picture of your lenses fuzziness or the limits of your technique." Ken Rockwell.
Will the same lens show more flaws on a D300 (12 MP) than on a D50 (6 MP)?
Is this true for both CCD and CMOS sensors?
Thank you for your answers.
09/19/2007 06:42:22 AM · #2 |
I don't think he's saying it will show MORE flaws, just that existing flaws will be more obvious, more precisely delineated.
R.
09/19/2007 06:53:43 AM · #3 |
And what flaws are we talking about? Chromatic aberrations? Vignetting?
He is talking about fuzziness, but if the higher-megapixel sensor is able to show more detail, how can this be possible?
Sorry if I am missing something obvious.
09/19/2007 07:31:38 AM · #4 |
Originally posted by GabrielS: And what flaws are we talking about? Chromatic aberrations? Vignetting?
He is talking about fuzziness, but if the higher-megapixel sensor is able to show more detail, how can this be possible?
Sorry if I am missing something obvious.
Think of it this way: the more detail a sensor can capture, the more difference there is between sharp and not-sharp, to name one example. A sensor consisting of a single pixel would show no distinction.
Ever notice how a picture can look sharp in thumbnail and be hopelessly not-sharp at 640 pixels, let alone 3,500 pixels? Those thumbnails are the equivalent of a lo-res sensor, and they mask a multitude of flaws.
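To put rough numbers on that thumbnail effect, here's a toy 1-D sketch in numpy. The 8-pixel box blur is a made-up stand-in for a soft lens; all sizes are illustrative:

```python
import numpy as np

# A 1-D "edge" at full sensor resolution: dark, then light.
edge = np.zeros(512)
edge[256:] = 1.0

# A made-up soft lens: smear the edge with an 8-pixel box blur.
soft = np.convolve(edge, np.ones(8) / 8, mode="same")

def transition_width(signal, lo=0.1, hi=0.9):
    """How many samples the edge spends between 10% and 90% brightness."""
    return int(np.count_nonzero((signal > lo) & (signal < hi)))

# "Thumbnail" the soft edge by averaging blocks of 8 samples.
thumb = soft.reshape(-1, 8).mean(axis=1)

# Measure away from the array ends, where the blur runs off the edge.
full_res_width = transition_width(soft[128:384])
thumb_width = transition_width(thumb[16:48])
print(full_res_width, thumb_width)
```

At full resolution the softened edge is smeared across about seven samples, but after 8x downsampling the same edge fits within a single sample, so the thumbnail looks "sharp" again.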
Remember, Rockwell mentions BOTH deficiencies in the lens and deficiencies in your "technique".
R.
09/19/2007 07:58:05 AM · #5 |
Thank you very much for your answers.
With my D50 I normally had to apply USM to get a really sharp image, but with my D80 I do not have to do anything: when I apply USM, most of the time I am only adding noise to the picture. It seems that the extra detail allowed by the D80's 10 MP sensor (the same physical size as the D50's 6 MP sensor) makes the picture sharper. That is where my confusion comes from.
09/19/2007 11:33:57 AM · #6 |
Are you shooting RAW or JPEG?
09/19/2007 01:17:03 PM · #7 |
Originally posted by GabrielS: ...now with my D80 I do not have to do anything because when I apply USM most of the time I am only adding noise to the picture.
Increase the threshold amount.
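For anyone curious what the threshold actually does, here's a minimal numpy sketch of unsharp masking. The box blur is a cheap stand-in for the Gaussian real editors use, and the function name and defaults are made up for illustration:

```python
import numpy as np

def unsharp_mask(img, radius=1, amount=1.0, threshold=0.0):
    """Toy USM on a 2-D grayscale array: sharpen only where the local
    contrast (pixel minus blurred copy) exceeds `threshold`."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)  # box blur stand-in
    blurred = np.apply_along_axis(
        lambda row: np.convolve(np.pad(row, radius, mode="edge"),
                                kernel, mode="valid"), 1, img)
    detail = img - blurred
    # The threshold is what keeps USM from amplifying noise:
    # differences smaller than it (which are mostly noise) are left alone.
    mask = np.abs(detail) >= threshold
    return img + amount * detail * mask
```

With the threshold at zero, every tiny difference, noise included, gets boosted; raise it above the noise level and only real edges are sharpened.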
09/20/2007 03:40:39 AM · #8 |
I am shooting JPEG (large/fine), why?
Thank you azrifel, I will follow your advice and see.
09/20/2007 04:37:01 AM · #9 |
Another issue is pixel density. If you take the same-sized sensor and cram in more pixels, each pixel receives less light. Amplifying the signal more to reach the same output level for the same exposure time can introduce more noise, resulting in a less sharp-looking picture despite the increase in pixel count.
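A back-of-the-envelope sketch of that trade-off. All numbers here are made up for illustration; real sensors differ in quantum efficiency, microlens design, read noise, and so on:

```python
import math

SENSOR_AREA_MM2 = 372.0   # roughly a DX / APS-C sensor
PHOTON_FLUX = 1.0e6       # photons per mm^2 per exposure, illustrative only

def photons_per_pixel(megapixels):
    """Same sensor area split into more pixels means less light per pixel."""
    return PHOTON_FLUX * SENSOR_AREA_MM2 / (megapixels * 1e6)

def shot_noise_snr(megapixels):
    """Photon shot noise grows as sqrt(N), so SNR = N / sqrt(N) = sqrt(N)."""
    return math.sqrt(photons_per_pixel(megapixels))

for mp in (6, 10, 12):
    print(f"{mp} MP: {photons_per_pixel(mp):.0f} photons/pixel, "
          f"SNR ~ {shot_noise_snr(mp):.1f}")
```

Doubling the pixel count halves the photons per pixel, but the per-pixel shot-noise SNR only drops by a factor of sqrt(2), which is why the penalty is real but not catastrophic.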
09/20/2007 04:57:52 AM · #10 |
Originally posted by GabrielS: I would like to know your opinion about this statement,
"...more pixels are merely splitting your lenses' optical performance (and your technique) in more ways. Maybe the D300 can use 11 pixels to split a hair and the D200 only has 10, but ultimately, this many pixels jammed onto a DX sensor are usually only serving to give you a more precise picture of your lenses fuzziness or the limits of your technique." Ken Rockwell.
as usual, it's Ken Rockwell, so, chill...
anyway, as soon as Ken gets himself a D300 and starts to like it, he will change his words and start praising the new higher-density sensor. It has always been his "thing" to do that - it's kinda funny.
09/20/2007 12:05:22 PM · #11 |
Originally posted by GabrielS: I am shooting JPEG (large/fine), why?
Thank you azrifel, I will follow your advice and see.
I would say the difference in sharpness between the two cameras is simply the in-camera processing, then. Different cameras apply different levels of sharpening during in-camera JPEG creation.
09/20/2007 12:14:46 PM · #12 |
Originally posted by option: Originally posted by GabrielS: I am shooting JPEG (large/fine), why?
I would say the difference in sharpness between the two cameras is simply the in-camera processing, then. Different cameras apply different levels of sharpening during in-camera JPEG creation.
On my D70, I can set the amount of jpeg sharpness using the menus. I'd assume D50 and D80 are the same. I would guess that the default settings on the D50 and D80 are different from each other, and the D80 is set to produce sharper images. You can probably adjust the D50 to be just as sharp right out of the camera.
edit: to get this back on topic...none of this has anything to do with Ken Rockwell's statement, which is basically correct, if a bit simplistic.
09/20/2007 12:41:00 PM · #13 |
First, I apologize if the following seems unduly technical...
The sharpness of the output file from a camera is a product of the combined performance of all the physical, electrical, and software systems in the imaging system, including:
- The lens ('nuf said)
- The hot mirror (It's another optical element)
- The AA filter (big effect)
- The micro-lens assembly (if present)
- The sensor
- The algorithms used to demosaic the sensor data
Each component in the chain degrades the ideal performance. Imagine that you are taking a picture of a checkerboard pattern, and the pattern projected on the sensor precisely matches the sensor pixels, so that each alternating one is light, dark, light, etcetera. In an ideal world, the output image would be a perfect black & white checkerboard. In the real world, that's never the case, even with a "perfect" lens, but some cameras would still come much closer than others. Here are some reasons why:
- The microlens system may not be able to perfectly gather all rays from the lens and focus them on the desired pixel. Some may be misdirected. The smaller the pixels, the harder it is for the microlens system to perform to near-perfection.
- The AA (Anti-Alias) filter is intentionally strong on many cameras. This is changing, with most professional and now, many prosumer DSLRs incorporating weaker AA filters. The purpose of the AA filter is to intentionally blur the image a little bit to reduce moire.
- The sensor itself needs to keep the electrical charges (electrons) in their respective cells (pixels). If some leak to adjacent cells, image "blooming" will result.
- The in-camera demosaic algorithms are pretty good, but since each pixel only collects one color, the luminosity data (bright vs. dark) is mixed up with the color information. It's not possible to perfectly sort it all out. This effect is pretty small compared to some others.
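kirbic's checkerboard thought experiment is easy to simulate. In this 1-D numpy sketch (the 3-tap kernel is a made-up, mild AA filter), stripes that start at full contrast come off the "sensor" at only half contrast:

```python
import numpy as np

# The checkerboard thought experiment in one dimension:
# stripes exactly two pixels wide, at full contrast.
pattern = np.tile([0.0, 0.0, 1.0, 1.0], 32)

# A mild "AA filter": each captured pixel mixes in a bit of its neighbors.
aa_filter = np.array([0.25, 0.5, 0.25])
captured = np.convolve(pattern, aa_filter, mode="same")

def michelson_contrast(x):
    """(max - min) / (max + min): 1.0 for a perfect black/white pattern."""
    return (x.max() - x.min()) / (x.max() + x.min())

ideal = michelson_contrast(pattern)
real = michelson_contrast(captured[4:-4])   # skip the array ends
print(ideal, real)
```

The same pattern is still there, but every light-dark transition has been softened, which is exactly the "degrading the ideal performance" each stage contributes.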
Now, what's the upshot of all this? Bottom line is that most DSLRs with >6 MP are capable of showing up flaws in nearly all consumer-level lenses, and even in many professional-level lenses. Some DSLRs will show flaws to a greater degree than others. How strongly flaws show up is a function of both the pixel density and the total "acuity" of the camera system. Think of acuity as how well our checkerboard pattern is reproduced. It's difficult to predict just where the point of diminishing returns lies with regard to pixel density, since acuity plays into the equation. It is generally accepted, though, that beyond about 22 to 24 MP, a full-frame camera will gain little or no benefit from most lenses under the majority of conditions.
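To see roughly where those diminishing returns come from, you can compute the sensor-side Nyquist limit for a hypothetical 3:2 full-frame sensor. This is only the sensor half of the equation; system resolution also depends on the lens and everything else in the chain:

```python
import math

# Full-frame (36 x 24 mm) sensor; the pixel counts are hypothetical.
WIDTH_MM, HEIGHT_MM = 36.0, 24.0

def nyquist_lp_per_mm(megapixels):
    """Best-case line pairs per mm the sensor can record:
    one line pair needs at least two pixels (the Nyquist limit)."""
    pixels_across = math.sqrt(megapixels * 1e6 * WIDTH_MM / HEIGHT_MM)
    pitch_mm = WIDTH_MM / pixels_across
    return 1.0 / (2.0 * pitch_mm)

for mp in (12, 24):
    print(f"{mp} MP full frame: ~{nyquist_lp_per_mm(mp):.0f} lp/mm at Nyquist")
```

A 24 MP full frame works out to 6000 pixels across, a 6 micron pitch, and about 83 lp/mm at Nyquist, which is already in the neighborhood of what good lenses resolve at high contrast.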
One aspect of all this that we haven't touched on is undersampling vs. oversampling. I'll just say that if a camera outputs a sharp image, that is, light-to-dark transitions span a single pixel, then the image is undersampled. Undersampling is generally considered a bad thing, but customers expect their images to be razor sharp. In reality, if we could afford to process the required amounts of data, and could efficiently collect the light in such small pixels, we would oversample by at least 2x linearly (4x in pixel count) to avoid problems like moire, edge jaggies, and "fake-looking" fine detail. But the resulting images would look "soft" out of the camera, and market pressure will always drive cameras to be sharp (undersampled).
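Undersampling's failure modes (moire, "fake-looking" detail) are aliasing, and that's easy to demonstrate with numpy: sample detail finer than the Nyquist limit and it reappears as a coarser, false pattern. The units here are arbitrary, chosen only for the illustration:

```python
import numpy as np

fs = 100.0        # sampling rate: "pixels" per unit of image width
f_detail = 70.0   # detail frequency, above the Nyquist limit of fs/2 = 50

n = np.arange(200)
samples = np.sin(2 * np.pi * f_detail * n / fs)

# Where does the energy actually land in the sampled signal?
spectrum = np.abs(np.fft.rfft(samples))
f_apparent = np.argmax(spectrum) * fs / len(n)
print(f_apparent)   # 30.0, not 70: the fine detail aliases to a coarse pattern
```

The 70-unit detail folds back to 100 - 70 = 30 units; in a photo that false low-frequency pattern is what shows up as moire.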
Hopefully this long, rambling post provides some useful information, LOL.
09/20/2007 12:51:07 PM · #14 |
Originally posted by kirbic: First, I apologize if the following seems unduly technical...
The sharpness of the output file from a camera is a product of the combined performance of all the physical, electrical, and software systems in the imaging system, including:
See, I *said* that Ken Rockwell's statement was overly simplistic....
Thanks, kirbic. That was really informative.
DPChallenge, and website content and design, Copyright © 2001-2025 Challenging Technologies, LLC.
All digital photo copyrights belong to the photographers and may not be used without permission.