Steve's Digicams Forums > Digicam Help > General Discussion

Old Oct 24, 2003, 4:54 PM   #1
djb
Senior Member
 
Join Date: Apr 2003
Posts: 2,289
pixel size

i know pixel quantity has been talked about and i guess the general consensus is that more is better. but i am not sure if pixel size has been talked about much. so, does pixel size really matter? how small can pixels be? i know i've seen astrovideo cameras with pixels in the 5 micron to 24 micron range. how large are pixels in the cameras we use here on this forum? is smaller really better? is there a cost factor that limits size and quantity on a ccd or cmos device? how can we find out what size pixels are being used in our cameras? i had asked nikon about the pixel size in the ccd on the d100 and nikon said they wouldn't tell me. i'm not very savvy on these things, so i figured you real gurus can answer and explain the science behind pixels. am i correct to assume it takes 3 pixels to make a color dot? i mean, are there clusters of 3 pixels that each have their own sensitivity to one of the 3 basic colors?

dennis
djb is offline
Old Oct 24, 2003, 10:18 PM   #2
Senior Member
 
Join Date: Oct 2003
Location: Indian Rocks Beach, FL
Posts: 4,036

Pixels don't have sizes. If your screen resolution is 800 x 600, an 800 by 600 pixel image will fill it. A pixel is a single RGB element with code that represents a certain color. You can think of them as little packages of information. If you display it on a large screen it is larger than on a small screen. If you blow up a tiny image you can start seeing the pixels because you are spreading them over a larger area.

A single pixel represents a color; it doesn't take three. This has a good explanation: http://www.scantips.com/basics1b.html

Dots from inkjet printers have sizes, usually the smaller the better. But the pixels the dots represent are a constant.

The sensor is made up of CCD elements. Generally, the closer you pack them the more image noise you generate. That is why the large sensors in digital SLRs give better images than the smaller sensors in regular consumer digital cameras. I find that the extra resolution more than compensates for the noise, and currently both of my digital cameras are 5Mp. Later next year, when the right camera comes out with the new Sony 8Mp CCD, I will likely buy it.

I don't think the size of pixels is a top secret at Nikon. All pixels are 8-bit R, G and B packets of information, and they are the same size in kilobytes unless you get an advanced camera that can extract 12- or 16-bit images. A Nikon 8-bit pixel is exactly the same size as a Canon 8-bit pixel.
slipe is offline
Old Oct 24, 2003, 10:46 PM   #3
djb
Senior Member
 
Join Date: Apr 2003
Posts: 2,289

hmmm, i thought pixels had a size. the sony icx248al ccd sensor used in the stellacam ex astrovideo camera states a pixel size of 8.4 um x 9.8 um. i think that is a size. as for each pixel picking up r, g, or b: why is there foveon technology to create 1 pixel that has 3 different layers, with each layer capturing a different color due to the wavelength of light? i thought this was to try to get around a 3-pixels-per-color effect. that's why a 3 megapixel foveon ccd is supposed to be the equivalent of a 9 megapixel standard type ccd. like i said, i'm not that savvy about this stuff, but i would think pixels need a size, else how would you see them? i'm not talking about a video screen, only a ccd or cmos imager. if they had no size, then how do they count them, and why do they show pixelization when pics are enlarged?

dennis
djb is offline
Old Oct 24, 2003, 10:53 PM   #4
Junior Member
 
Join Date: Oct 2003
Posts: 18

Maybe it is the smallest pixel size possible.
toothpaste100 is offline
Old Oct 24, 2003, 10:57 PM   #5
djb
Senior Member
 
Join Date: Apr 2003
Posts: 2,289

slipe, i just read that link you posted. it is interesting but does not explain the reasoning for foveon technology. it does mention all pixels are the same size and rectangular, but i believe that means all the pixels in a particular sensor are the same. different sensors can have different sized pixels, i.e. sensor a can have 10um x 10um pixels while sensor b might have 25um x 25um pixels. SBIG ccd cameras have a number of different sensors, each with its own sized pixels. i'm still confused.

dennis
djb is offline
Old Oct 25, 2003, 12:07 AM   #6
Senior Member
 
BillDrew's Avatar
 
Join Date: Jun 2002
Location: Hay River Township, WI
Posts: 2,512

I think there is a bit of confusion between "pixels" in your camera and those on your CRT/LCD, i.e., between input and output. To avoid that, it is better to refer to the camera's "pixels" as sensors or detectors.

If the linear dimension of a camera's sensor element is increased by a factor of two, there will be four times as many photons falling on it. Four times the signal. I'd expect the noise to increase much less than that, so a larger signal/noise ratio and less noise in the image. Or that increase can be used to get higher ISOs.
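BillDrew's scaling argument can be sketched numerically: under shot-noise-limited (Poisson) statistics, the signal grows with photosite area while the noise grows only with its square root, so doubling the linear dimension doubles the signal-to-noise ratio. A minimal sketch (the photon flux figure is invented purely for illustration):

```python
# Shot-noise-limited SNR vs photosite pitch. Assumes photon arrivals are
# Poisson, so noise = sqrt(signal); the flux value is illustrative only.
import math

def snr(pitch_um, photons_per_um2=100):
    """Signal-to-noise ratio of one photosite at a given pitch (microns)."""
    signal = photons_per_um2 * pitch_um ** 2  # collected photons scale with area
    noise = math.sqrt(signal)                 # Poisson (shot) noise
    return signal / noise                     # equals sqrt(signal)

small, large = snr(5.0), snr(10.0)
print(small, large, large / small)  # doubling the pitch doubles the SNR
```

The ratio comes out to exactly 2 regardless of the assumed flux, which is why the argument holds without knowing any real sensor's numbers.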
BillDrew is offline
Old Oct 25, 2003, 12:23 AM   #7
djb
Senior Member
 
Join Date: Apr 2003
Posts: 2,289

Bill, yes i am talking about sensors. and the larger the sensor, the higher the "iso". the sigma sd9 has a variable size "sensor" to do that. they bundle or cluster several sensors to create a larger sensor for more sensitivity, or higher iso with less noise. so one of my original questions was: does sensor size matter? and it seems from what we have been stating that sensors do have different sizes. now, do we get finer and finer detail as sensors get smaller? i assume once a sensor is smaller than the finest detail a lens can resolve, there is no need for smaller sensors. now what about the question of colors? i understand that standard (non-foveon) ccds are set up in 3-sensor checkerboard arrays that are filtered so a sensor can only capture a specific color. so that is where i came up with a 3-sensor array to designate 1 color, because we need 3 colors (rgb) to be combined to make all the various other colors we see. is this correct? and in this case does sensor size matter?

dennis
djb is offline
Old Oct 25, 2003, 3:27 PM   #8
Senior Member
 
Join Date: Dec 2002
Posts: 5,803

The term you are looking for is "photosite". That is the name for the part of the sensor which measures/samples the light.

Both the size and density of the photosites on the sensor matter. Pack them too close together, or make them too small, and you get higher noise. As the technology advances, these limitations are being overcome.

It is possible to find out how large the photosites are and how densely packed they are for a given camera. You have to find really technically savvy reviews. I don't know if Steve's reviews has them. dpreview.com might.

As for the Foveon, what they did was allow a single place on the chip to sample each color (RGB.) On standard sensors a given photosite samples only one range of wavelength of light (R, G, or B.)

I'm not sure what you mean by "the larger the sensor the higher the 'iso'." That makes no sense to me. If you mean the highest ISO that the camera (and therefore sensor) can support, ok, that makes some sense. Of course, as the technology advances, smaller and smaller sensors are supporting better quality, higher ISO values.

Your comment about sensors getting better than lenses is a real thing. Some say that the better digital cameras show up flaws in lenses which film doesn't. Usually this is said about the Canon 1Ds (for example.)

No, the pattern of the photosites is not always a checkerboard. I believe there are more green sensors, and their pattern can be different. Some bright engineer looked at some animal's eye (eagle? human?) and found that the sensors in that eye were not laid out that way. When they imitated that pattern, they found the sensor worked better (for some definition of "work" and "better"). I believe that layout is patented in some way, so only they have it.
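The point about there being more green photosites can be illustrated with the common Bayer RGGB tile (a standard, widely described arrangement, not any particular manufacturer's proprietary layout): half the sites are filtered green, a quarter red, a quarter blue.

```python
# Sketch of a standard Bayer mosaic (RGGB tile). Counting the filter
# colors over a small patch shows green photosites outnumber red or blue.

def bayer_color(row, col):
    """Filter color of the photosite at (row, col) in an RGGB mosaic."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

counts = {"R": 0, "G": 0, "B": 0}
for r in range(4):
    for c in range(4):
        counts[bayer_color(r, c)] += 1
print(counts)  # green sites appear twice as often as red or blue
```

The extra green weighting is usually explained by the eye's greater sensitivity to green wavelengths.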

So in general, sensor size does matter, but on the other hand... not really. What matters is the quality of the picture the camera can produce. Since you can't swap sensors between cameras, it's only an interesting intellectual exercise to study the issue at that level. Yes, the sensor makeup and size do affect the camera's output... but so do many other things. Some say that the Canon 300D takes better pictures than the 10D. They have (it's been said) exactly the same sensor. So clearly something else improved.

Eric

ps. As a side note, the 1Ds has a full frame sensor (matches 35mm film). Since the light has to hit the photosite at an angle much closer to perpendicular than film requires, Canon had to develop micro lenses to put over the sensor to alter the angle at which the light hits the photosites. This, I am sure, raises the cost of the 1Ds more than the fact that its sensor is physically large and therefore production yield is lower. So again, size can matter... but mostly to the engineers developing the camera. What I care about is the output quality from the camera, not the sensor.
eric s is offline
Old Oct 26, 2003, 10:46 AM   #9
Senior Member
 
Join Date: Oct 2003
Location: Indian Rocks Beach, FL
Posts: 4,036

Quote:
Originally Posted by djb
slipe, i just read that link you posted. it is interesting but does not explain the reasoning for foveon technology. it does mention all pixels are the same size and rectangular but, i believe that it means all the pixels in a particular sensor are the same. different sensors can have different sized pixels. i.e. sensor a can have 10um x 10um while sensor b might have 25um x 25um pixels. SBIG ccd cameras have a number of different sensors each with it's own sized pixels. i'm still confused.
If you are into astrophysics at all, think of a pixel as a singularity. It has no shape or dimensions. It is theoretically an infinitely small point with color information. Programs and displays give those points of color whatever size and shape they are designed or programmed to use.

Look at your computer screen with a loupe or strong magnifier. CRTs usually display the pixels as circles, hexagons or whatever the grid is designed for, with a black mask around the individual pixels. When you do an extreme blowup, the program making the blowup has to use multiple screen pixels to represent a single image pixel and invent a shape for that cluster. By convention most programs draw a square cluster. They could just as easily cluster the pixels in a round or star shape.
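The square-cluster blowup described here is essentially nearest-neighbour enlargement: each source pixel is repeated into a square block, which is exactly why extreme blowups look blocky. A minimal sketch (the grey values are arbitrary):

```python
# Nearest-neighbour enlargement: every source pixel becomes a square
# block of identical pixels. Input is a tiny 2x2 greyscale "image".

def upscale(img, factor):
    """Repeat every pixel `factor` times in both directions."""
    return [
        [img[r // factor][c // factor]
         for c in range(len(img[0]) * factor)]
        for r in range(len(img) * factor)
    ]

tiny = [[0, 255],
        [255, 0]]
big = upscale(tiny, 2)
print(big)  # each original pixel is now a 2x2 block of the same value
```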

An 8 bit pixel is simply a series of three 8 digit binary codes. 8 binary digits can hold 256 possible values in base 10, so there are 256 possible values for each of red, green and blue. 256 x 256 x 256 gives 16,777,216 possible colors from an 8 bit pixel. The full image file has other information about the relative position of the pixels, compression or decoding information, layers, the size the file would print at, etc. But the pixels are simply code that represents a color.
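The channel arithmetic is just powers of two; a quick check (standard 8-bit-per-channel math, not tied to any particular camera or file format):

```python
# An 8-bit-per-channel pixel: each of R, G and B is one byte,
# so a channel holds 2**8 levels and a pixel combines three channels.
levels = 2 ** 8        # values one 8-bit channel can represent
colors = levels ** 3   # independent R, G, B combinations
print(levels, colors)  # 256 16777216
```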

How a sensor creates a pixel has nothing to do with what a pixel is. A Foveon, a standard CCD, or even the sensor they will have a hundred years from now that can get a full pixel from a single molecule still has to put out the same standard pixel, or programs wouldn't know what to do with them. There is no reason a discourse on pixels would have anything to do with explaining Foveon technology.
slipe is offline
Old Oct 26, 2003, 12:31 PM   #10
Senior Member
 
Join Date: Aug 2002
Posts: 2,162

I think I captured the gist of these threads. Putting precise definitions aside, I thought the first post might have introduced a pertinent issue, which is how sensitivity and noise relate to photosite (pixel) density as a function of area. This is particularly important when manufacturers sell their cams and everyone buys on the MegaPixel label.

So which camera would give the highest sensitivity and lowest noise: a full frame chip filled with photosites and not much wasted area, equal to say 5Mpix; the same 5Mpix but with much smaller photosites on the full frame; or the same 5Mpix, with smaller photosites still, on a common pocket cam? VOX
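VOX's comparison can be made concrete by estimating the photosite pitch implied by the same pixel count on different sensor areas. A rough sketch (the sensor dimensions are typical figures, not taken from any specific camera, and gaps between photosites are ignored):

```python
# Approximate photosite pitch for a fixed 5 Mpix count on two sensor
# sizes. Assumes square photosites tiling the whole sensor area;
# dimensions are typical, illustrative values.
import math

def pitch_um(width_mm, height_mm, megapixels):
    """Approximate photosite pitch in microns, assuming square sites."""
    area_um2 = width_mm * height_mm * 1e6          # sensor area in square microns
    return math.sqrt(area_um2 / (megapixels * 1e6))

full_frame = pitch_um(36.0, 24.0, 5.0)  # 35mm full-frame dimensions
compact = pitch_um(7.2, 5.3, 5.0)       # typical small pocket-cam sensor
print(round(full_frame, 1), round(compact, 1))  # roughly 13 vs 3 microns
```

At the same megapixel count, the full-frame photosites come out several times larger in pitch, so tens of times larger in light-gathering area, which is the sensitivity and noise gap the post is asking about.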
voxmagna is offline