DPChallenge: A Digital Photography Contest
 

DPChallenge Forums >> Hardware and Software >> Futuristic sensor
Showing posts 1 - 13 of 13
02/20/2005 11:58:51 AM · #1
People have posted questions about what they would like to see in a camera or lens, but what about the sensor? With nanotechnology, in the future I see a sensor that is a wafer one atom thick. As you open the shutter, the electrons are excited by the light. When the electrons settle down they give off a charge; this charge would be carried to a processor through a wire bundle, where each wire is also an atom thick, and the charge is associated with a color. The smallest change in charge would be a different shade of a color. There would be no need for ISO, since the electrons act as fast as light. If you want to stop the action, set it to 1/20000 of a second even in the middle of the night; if you want a blur, you can still set a longer exposure. I see sensors that could be a hundred thousand megapixels and could make a night shot look as though it was taken at mid-day.
02/20/2005 12:11:12 PM · #2
In the future, I think software and hardware advances will give us images that are much more scalable, without all the upsampling and sharpening required with today's images.
02/20/2005 12:17:53 PM · #3
Nanotechnology isn't the future...
02/20/2005 12:18:03 PM · #4
Yeah, then the nanobots go haywire (Microsoft had the contract). You put your eye up to the viewfinder and they invade your brain, take over control, and start doing all kinds of weird bad stuff. Then suddenly all you can see is a blue screen with some cryptic error message.

Sorry, haven't had my morning coffee yet.


02/21/2005 12:17:12 PM · #5
I think someone should design a new type of sensor. Why are there only two types of sensor, CCD and CMOS? And both seem to be about as sensitive as film. Is that a coincidence?

Couldn't someone figure out how to make a sensor that's permanently set to the equivalent of ISO 8000 or higher? Then they could add a third light-blocking mechanism: an internal electronic ND filter that can block light in 1/6-stop increments all the way up to opaque. The shutter would control the amount of motion blur, the aperture would control depth of field, and the IEND filter would control exposure.

Sounds feasible to me...
02/21/2005 01:31:00 PM · #6
From a physics perspective, there is only so much that can be done. To address a couple of things brought up by the OP: ISO (or some similar concept) will always be part of the equation. Although we can make gains in "fill factor", and there is still some room for improvement in sensor noise and efficiency, we are actually closing in on the theoretical limits of sensitivity. When the number of available photons is very low, counting statistics limit our signal-to-noise ratio. Raising the ISO is then like turning up the volume on a really distant AM radio station: you get mostly noise.
Thus, ISO is limited by the available number of photons. The only ways to increase the number of photons collected (short of adding light) are:
- Increase the pixel size
- Increase counting efficiency
If we increase the pixel size, we cannot gain much unless we reduce the pixel count or build a larger sensor. We are also nearing the limit of achievable counting efficiency; we might be able to increase it by a factor of two or so, but not with current sensor technology, and in any case a factor of two is not a big increase.
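The shot-noise limit described above can be sketched numerically. This is a toy model (hypothetical photon counts, not measurements from any real sensor): photon arrivals follow Poisson statistics, so the signal-to-noise ratio of a pixel collecting on average N photons is about √N, and quadrupling the light only doubles the SNR.

```python
import math
import random
import statistics

def poisson_sample(lam, rng):
    """Draw one Poisson(lam) sample (Knuth's multiplication method,
    fine for the small means used here)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < limit:
            return k
        k += 1

def measured_snr(mean_photons, trials=20000, seed=1):
    """Simulate many exposures of a pixel that collects `mean_photons`
    on average, and return the measured SNR (mean / standard deviation)."""
    rng = random.Random(seed)
    counts = [poisson_sample(mean_photons, rng) for _ in range(trials)]
    return statistics.mean(counts) / statistics.stdev(counts)

print(measured_snr(9))   # ≈ 3, i.e. √9
print(measured_snr(36))  # ≈ 6: four times the light, only twice the SNR
```

The simulation confirms the "turning up the volume" analogy: no amount of post-capture gain (ISO) changes the √N statistics of the photons actually counted.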
IMO, the ultimate sensor would eliminate the need for a filter array, and would sense the wavelength of each photon counted. Though it is possible to do this, it's not yet possible to do so in an array type sensor that can be installed in a camera. The processing power necessary to deal with all this information would be huge. The imager would in fact be gathering more information than necessary, since our vision is trichromatic (we see essentially three primary colors and sense other colors as combinations of those three). It would be necessary to sort out the data on-chip, possibly in analog circuitry prior to conversion to digital.
Realistically, in the near term the greatest need is to increase the dynamic range of the sensor. That would increase the signal-to-noise ratio (effectively reducing noise dramatically) at the expense of a longer exposure (collecting more photons), and would increase exposure "latitude". Eliminating the color filter array with a "stacked" technology like Foveon would certainly be a good step forward as well.
02/21/2005 01:32:10 PM · #7
Originally posted by Plexxoid:

I think someone should design a new type of sensor. Why are there only two types of sensor, CCD and CMOS? And both seem to be about as sensitive as film. Is that a coincidence?

Couldn't someone figure out how to make a sensor that's permanently set to the equivalent of ISO 8000 or higher? Then they could add a third light-blocking mechanism: an internal electronic ND filter that can block light in 1/6-stop increments all the way up to opaque. The shutter would control the amount of motion blur, the aperture would control depth of field, and the IEND filter would control exposure.

Sounds feasible to me...


Sigma uses the Foveon sensor, which has three different layers: red, green and blue. Because they are stacked, each pixel site records all three colours to make up the resultant colour at that site. This means there is no need for Bayer interpolation or anti-aliasing, and no need to USM in post-processing (this point is debatable). I'm sure someone will correct me on the finer points. ;)
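The difference between the two readouts can be sketched with a toy example (a hypothetical flat grey 4x4 "scene", nothing like real camera firmware): a Bayer site records only one channel and the other two must be interpolated from neighbours, while a stacked site records all three directly.

```python
# Toy comparison: Bayer mosaic capture vs stacked (Foveon-style) capture.
# The scene is a flat grey patch, so every channel should read 100 everywhere.
scene = [[(100, 100, 100) for _ in range(4)] for _ in range(4)]

def bayer_channel(y, x):
    """RGGB pattern: which single channel (0=R, 1=G, 2=B) a Bayer site records."""
    if y % 2 == 0:
        return 0 if x % 2 == 0 else 1
    return 1 if x % 2 == 0 else 2

# Bayer capture keeps one number per site.
mosaic = [[scene[y][x][bayer_channel(y, x)] for x in range(4)] for y in range(4)]

def demosaic(y, x, c):
    """Crude nearest-neighbour demosaic: average all sites of channel c
    in the 3x3 neighbourhood of (y, x)."""
    vals = [mosaic[j][i]
            for j in range(max(0, y - 1), min(4, y + 2))
            for i in range(max(0, x - 1), min(4, x + 2))
            if bayer_channel(j, i) == c]
    return sum(vals) / len(vals)

bayer_pixel = tuple(demosaic(1, 1, c) for c in range(3))  # two channels are estimates
stacked_pixel = scene[1][1]  # a stacked sensor records all three channels at the site
print(bayer_pixel, stacked_pixel)
```

On a flat grey scene both agree; the interpolation step only shows its cost (softening, colour aliasing) near fine detail, which is why Bayer cameras add an anti-aliasing filter and sharpening.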
02/21/2005 02:11:28 PM · #8
I think there are already more than two sensor types, and on top of that there are even more variants; take Fuji, for instance.
02/21/2005 02:21:18 PM · #9
What would happen if you had two sensors that took a picture of the same thing, but from slightly different angles? The camera would then receive a stereoscopic image, giving it a much better idea of how things are placed in the third dimension. From there, it could cancel out the noise, and could interpolate to a much bigger file size based on assumptions drawn from the two images.

Though very complex, I think it would work.
02/21/2005 02:37:07 PM · #10
Originally posted by LEONJR:

I think there are already more than two sensor types, and on top of that there are even more variants; take Fuji, for instance.


There are at least four sensor types currently on the market: CMOS, CCD, Foveon, and whatever that 4-colour thing is that Fuji is using. God knows how many are still under wraps.
02/21/2005 02:43:27 PM · #11
Originally posted by Plexxoid:

What would happen if you had two sensors that took a picture of the same thing, but from slightly different angles? The camera would then receive a stereoscopic image, giving it a much better idea of how things are placed in the third dimension. From there, it could cancel out the noise, and could interpolate to a much bigger file size based on assumptions drawn from the two images.

Though very complex, I think it would work.


You can already do this, if you take two pics and just move the camera left or right about a foot between exposures (you could also use two linked cams with shutters fired in unison). Process the images identically and then view them either with a stereo viewer or using the "cross-eyed" or "wall-eyed" techniques. You'll find that the resulting stereo image appears to have higher resolution and lower noise than either image alone. The analog computer between your ears does all the required processing!
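The noise-reduction part of this is easy to check numerically. A toy model (hypothetical pixel values with Gaussian read noise, nothing from a real camera): averaging two independent exposures of the same scene cuts the noise standard deviation by about √2, which is exactly what your visual system exploits when fusing a stereo pair.

```python
import random
import statistics

def make_noisy_image(signal, noise_sd, rng):
    """One simulated 'exposure': the true signal plus Gaussian read noise."""
    return [s + rng.gauss(0, noise_sd) for s in signal]

def noise_sd(image, signal):
    """Standard deviation of the residual (image minus true signal)."""
    return statistics.stdev(i - s for i, s in zip(image, signal))

rng = random.Random(42)
signal = [100.0] * 50000          # a flat grey patch, 50k pixels
a = make_noisy_image(signal, 5.0, rng)
b = make_noisy_image(signal, 5.0, rng)
avg = [(x + y) / 2 for x, y in zip(a, b)]

print(noise_sd(a, signal))        # ≈ 5, one exposure
print(noise_sd(avg, signal))      # ≈ 5/√2 ≈ 3.5, two exposures averaged
```

The improvement only holds where the two frames see the same scene; in a stereo pair, the parallax-shifted regions would need alignment before any averaging.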

02/21/2005 03:33:59 PM · #12
Originally posted by nsbca7:

Originally posted by LEONJR:

I think there are already more than two sensor types, and on top of that there are even more variants; take Fuji, for instance.


There are at least four sensor types currently on the market: CMOS, CCD, Foveon, and whatever that 4-colour thing is that Fuji is using. God knows how many are still under wraps.


Don't forget Sony's 4-colour RGBE sensor.
02/21/2005 03:38:12 PM · #13
Originally posted by kirbic:

Originally posted by Plexxoid:

What would happen if you had two sensors that took a picture of the same thing, but from slightly different angles? The camera would then receive a stereoscopic image, giving it a much better idea of how things are placed in the third dimension. From there, it could cancel out the noise, and could interpolate to a much bigger file size based on assumptions drawn from the two images.

Though very complex, I think it would work.


You can already do this, if you take two pics and just move the camera left or right about a foot between exposures (you could also use two linked cams with shutters fired in unison). Process the images identically and then view them either with a stereo viewer or using the "cross-eyed" or "wall-eyed" techniques. You'll find that the resulting stereo image appears to have higher resolution and lower noise than either image alone. The analog computer between your ears does all the required processing!


Yes, but that's not what I was thinking. You can't put a stereoscopic picture on the wall for viewing. The camera would have to process the two images into one, based solely on interpolating them together.
DPChallenge, and website content and design, Copyright © 2001-2025 Challenging Technologies, LLC.
All digital photo copyrights belong to the photographers and may not be used without permission.