Old Feb 27, 2003, 8:44 PM   #1
Junior Member
 
Join Date: Feb 2003
Posts: 2
Focal Length (2 questions)

There are 2 parts to this question:

1. With a smaller CMOS sensor and a focal length adjustment of 1.6 (like on the Canon EOS 10D), what does this do to the images, their quality and size?

1a. Does this adjustment mean that if I am using a 50mm prime, that 50mm is optical and the other 30mm is digital?

50mm x 1.6 = effective 80mm

2. Is the focal length of the lens recorded with the image when it's stored? I know time/date/aperture/film speed are, but what about the lens I was using?

Thanks!

tgabowaacr
Old Mar 6, 2003, 8:20 PM   #2
Senior Member
 
Join Date: Jun 2002
Posts: 259

1. With a smaller CMOS sensor and a focal length adjustment of 1.6 (like on the Canon EOS 10D), what does this do to the images, their quality and size?

The image is exactly the same as a full frame 50mm image, for example, but uses the central 22.7x15.1mm of what would have been a 24x36mm frame. If you had a full frame CCD or CMOS with proportionately more pixels, the 22.7x15.1mm crop from it would be exactly the same as the D60 image.


1a. Does this adjustment mean that if I am using a 50mm prime, that 50mm is optical and the other 30mm is digital?
50mm x 1.6 = effective 80mm

Not really, if I understand your question. The 50mm lens will still function in every way as a 50mm lens. If you superimposed a frame of 22.7x15.1mm on the 35mm frame, you would be getting only part of the full image, so your effective field of view is smaller. But the subject in the D60 frame would be exactly the same size as it would be in a full 35mm frame.
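To put numbers on it, here's a quick Python sketch of that geometry - a sketch only, using the published D60/10D sensor dimensions; the 1.6x factor is just the ratio of the frame diagonals:

[code]
# Rough sketch of the crop-factor arithmetic: the "1.6x" figure is just
# the ratio of the 35mm frame diagonal to the crop sensor's diagonal.
import math

FULL_FRAME = (36.0, 24.0)    # mm, standard 35mm frame
CROP_SENSOR = (22.7, 15.1)   # mm, Canon D60/10D sensor

factor = math.hypot(*FULL_FRAME) / math.hypot(*CROP_SENSOR)
print(f"crop factor: {factor:.2f}")              # ~1.59, marketed as 1.6x
print(f"50mm frames like: {50 * factor:.0f}mm")  # ~79mm - the "effective 80mm"
[/code]

The lens itself is unchanged; the "effective 80mm" is a framing equivalence, not optical magnification.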

2. Is the focal length of the lens recorded with the image when it's stored? I know time/date/aperture/film speed are, but what about the lens I was using?

Yes, the focal length is stored with the image in its EXIF data.

I hope I haven't added to the confusion...
WalterK
Old Apr 6, 2003, 1:03 AM   #3
Moderator
 
Join Date: Jun 2002
Posts: 1,139

Yes, Walter is right - but let me see if I can perhaps clarify a little about how the "magnification" occurs and why there is usually confusion about the terms.

For just a moment, let's forget about film and just look at digital sensors. Let's take a full frame sensor of, say, six megapixel resolution with a 100mm tele lens, and a 1.6x field of view crop sensor of the same resolution with the same lens, and see how the image ends up with an effective focal length of 160mm.

Think of it like this: the lens gathers light and projects a circular image onto the sensor. This circle is referred to as the "circle of definition." A lens built for the 35mm frame size projects a circle large enough that a 24x36mm rectangle can be inscribed in it; everything outside that rectangle is discarded. That rectangle is what the 35mm full frame sensor captures.

Now imagine that you took a piece of black paper, cut a window in it matching the dimensions of the 1.6x field of view crop sensor, and overlaid it so that it fit exactly in the center of the 35mm frame size sensor. This black paper would mask everything outside the boundaries of the cutout, and what is left exposed inside the reduced dimensions would be the same as if we had taken a contact "print" of the 35mm frame, cropped out this central region and discarded the rest. It would also contain the same frame of information as one would see on the full frame sensor had the 100mm lens been swapped for a 160mm lens.

At this point we have effectively cropped the full frame to exclude all information a 160mm lens would not have captured. Because we actually have a smaller sensor frame, we capture less of the scene than the full frame camera, even though we are using the same 100mm lens.
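A quick way to check the equivalence is to compare angles of view; a minimal Python sketch, using the frame widths above and the simple rectilinear-lens formula:

[code]
# Field-of-view check for the thought experiment: a 100mm lens on the
# 1.6x crop sensor frames (nearly) the same scene as a 160mm lens on a
# full 35mm frame. Angle of view = 2 * atan(frame_width / (2 * focal)).
import math

def horizontal_fov_deg(focal_mm, frame_width_mm):
    return math.degrees(2 * math.atan(frame_width_mm / (2 * focal_mm)))

print(horizontal_fov_deg(100, 22.7))  # ~13.0 deg, 100mm on the crop sensor
print(horizontal_fov_deg(160, 36.0))  # ~12.8 deg, 160mm on full frame
[/code]

The two agree only to within rounding, because 22.7 x 1.6 is not exactly 36mm; the "1.6x" is itself a rounded figure.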

So, then is this exactly the same thing as we would have if we simply cropped the image from the full frame sensor? No, it's not, and this is where the "magnification" comes into play.

We have a "fixed" number of photodiodes on each sensor. Remember, we are comparing a full frame six megapixel sensor with a field of view 1.6x crop six megapixel sensor. When we cropped the full frame image, we tossed out a considerable amount of "resolution." With the field of view crop sensor, we still have the full six megapixels worth of photo receptors.

Let's simplify things and temporarily forget that we are dealing with a Bayer algorithm and just assume that we get a mathematical value from each sampling site (photodiode or photo receptor) and that this value eventually results in the creation of a single pixel which is either displayed on screen or printed on a print device.

Since these mathematical values are directly converted into pixels, with the field of view crop sensor we have six million values producing six million pixels in an array of so many wide by so many tall. With the cropped full frame image, we have lost a good number of these pixels, and therefore have considerably fewer than six million left.

When an image is printed on a print device or displayed on a monitor, each pixel represents a given amount of real estate. That is, it has a real size in two dimensions. This size does not vary: one pixel is the same size as the next. So if you have more pixels in one case (the 1.6x crop factor sensor) and fewer in the other (the cropped full frame), the crop factor sensor will produce a larger image than the cropped full frame simply because there are more pixels used to draw the image. It will also have greater resolution (a higher de facto pixel count) than the crop of the full frame image, even though the field of view (the total content of the frame) is identical.
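The pixel bookkeeping can be put in a small sketch (assuming, as above, that both sensors are exactly six megapixels):

[code]
# Cropping the full frame to the 1.6x field of view trims both linear
# dimensions by 1.6, so the pixel count falls by 1.6 squared; the crop
# sensor keeps all six million photosites for the same framing.
CROP_FACTOR = 1.6

full_frame_mp = 6.0
cropped_mp = full_frame_mp / CROP_FACTOR ** 2
print(f"cropped full frame: {cropped_mp:.2f} MP")  # ~2.34 MP
print("crop-factor sensor: 6.00 MP")

# At a fixed output resolution (e.g. 300 ppi), linear print size scales
# with the square root of the pixel ratio:
print((6.0 / cropped_mp) ** 0.5)                   # 1.6x larger on each side
[/code]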

This, then, is where the "magnification" issue comes from. It's not an optical "magnification"; it happens simply because we are using the full complement of photodiodes to represent a smaller amount of real estate, and because the pixels created from each photo receptor occupy a fixed and identical space in two dimensions. This is true whether we print or display the image on screen.

As an aside, when we crop a film negative or transparency, then enlarge, we lose resolution just as we would when cropping a full frame sensor image. When we capture with a field of view crop sensor, we do not lose resolution vis-à-vis the identical "crop" from a full frame, and we get the "boost" in size simply because these mathematical values end up creating pixels of fixed dimensions and we have more of them.

There is a meaningful difference, then, between a simple crop of a full 35mm frame and a crop factor sensor. It's "similar" to what happens when we crop and enlarge a full frame, but there is a significant difference.

Lin
Lin Evans
Old Apr 6, 2003, 8:14 AM   #4
Senior Member
 
Join Date: Jun 2002
Posts: 259

Wow, Lin--that's quite a graphic explanation. Now my circle of confusion may be getting bigger...

Would I be correct in assuming that if we had a full 35mm frame of 12 megapixels vs. a 6mp frame with the 1.6 crop factor (again excluding the Bayer interpolation), such that the crop on the full frame would end up being 6mp, the two then would not only have the same number of pixels, but would produce the same size image on the monitor, at exactly the same resolution?
WalterK
Old Apr 6, 2003, 9:51 AM   #5
Moderator
 
Join Date: Jun 2002
Posts: 1,139

Hi Walter,
Yes, that's exactly correct. If there were precisely 12 megapixels in the full frame and precisely six megapixels in the 1.6x field of view crop sensor, then a six megapixel crop of the 12 megapixel sensor would be functionally and operationally identical to the 1.6x crop factor's screen display and "resolution."

Of course in reality, the pixel counts of the sensors presently available don't line up that neatly. For example, a 1.6x crop of the 1Ds's image results in a considerable loss of resolution (pixel count) vis-à-vis the 10D or D60.
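To make that caveat concrete with the hypothetical numbers above - a hedged check, since no real sensor pair matches these counts exactly:

[code]
# A 1.6x field-of-view crop reduces pixel count by 1.6 squared, so a
# 12 MP full frame doesn't quite yield a 6 MP crop, and an exact 6 MP
# crop would need a ~15.4 MP full frame.
CROP_FACTOR = 1.6

print(12 / CROP_FACTOR ** 2)  # ~4.69 MP left after a 1.6x crop of 12 MP
print(6 * CROP_FACTOR ** 2)   # ~15.36 MP full frame for an exact 6 MP crop
[/code]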

Obviously there are numerous other factors affecting image quality such as photo receptor well depth and diameter which affect signal to noise ratios, so it's much more difficult to draw specific conclusions about which image might be "superior." Even with its four megapixel resolution and the comparative loss of resolution, the output from a 1D capture compares very favorably with that from a D60/10D, primarily because the well depth and diameter afford favorable signal to noise for the 1D.

The "magnification" portion of the crop factor has several facets, of which the "resolution" is only one. The other issues concern the fact that we have "cropped" to a field of view. For example, let's look at the full frame for a moment. We have "X" number of pixels in the matrix, and these pixels describe an area bounded by the native frame size. When we reduce the frame size without increasing the pixel count as in a true crop, we loose the resolution lost to the crop. So when this cropped image is displayed on a fixed resolution screen, the true size of the figures within the frame remain constant, but the frame dimensions decrease according to the crop factor. But imagine what would happen if we could magically perform this crop and add the lost pixels back into the matrix while maintaining the fixed display resolution. Since the pixels which constitute the elements of the image are of fixed dimensions, the proportional relationships between the figures comprising the area within the crop remain constant, but the overall size increases in a linear relationship vis a vis the resolution increase.

So the confusion often results from "thinking" in terms of 35mm film where a field of view crop results in a smaller "negative" which gets optically "enlarged" at the expense of resolution. With the crop factor sensor the "crop" happens, but not at the expense of resolution. In relation to film, it's as if we could somehow squeeze the number of silver halide crystals in that portion of a 35mm film frame lost to the crop into the cropped area to "keep" the resolution. It's sort of an electronic "sleight of hand" which makes it slightly different from a simple field of view crop.

Of course this is why photojournalists, especially sports photographers, dearly love the crop factor sensors. It gives them a real "boost" in terms of effective magnification without the loss of resolution they would experience with film. Thus they can use shorter focal length lenses - lighter, smaller, etc. - and get results similar to what they would have with longer, heavier glass on a 35mm film frame.

Is it magic? In a way, yes. Think of the incredibly good images, even at print sizes of 14 inches or better, which it's possible to get with the tiny little sensors on the better consumer cameras. You pack five megapixels worth of microscopic photo receptors into a less-than-postage-stamp sized sensor and emulate APS film quality, even rival 35mm film quality at nominal print sizes. All this because you can use tiny true focal lengths and still gather sufficient light with tiny lenses to "magnify" the images an incredible amount and make them look much like a 35mm "equivalent." Of course one gives up favorable signal to noise ratios and must accept much greater depth of field than the true "equivalents" in terms of 35mm frame size. Nobody ever questions that "magnification" has happened there, yet when we talk in terms of using 35mm lenses with crop factor sensors, suddenly "magnification" is somehow a "dirty word," and it has probably caused more arguments and discussions than any other single factor in digital imagery.

I think the confusing part is that when we "think" in terms of 35mm film, we lose sight of what's really happening at the electronic level. Obviously there is no free lunch - there are trade-offs associated with this "magic" - but the magic is still "real" and the results are astounding.

Best regards,

Lin
Lin Evans
Old Apr 6, 2003, 1:16 PM   #6
Senior Member
 
Join Date: Jun 2002
Posts: 259

Whew! Thanks for the explanation. I think I understand it, too. So when are you going to put your knowledge out in book form? You do such a good job of clarifying these issues that people with DSLRs who are interested in the technical side of what they do would fall over each other to get copies! I'm serious about that...
WalterK
Old Apr 6, 2003, 9:49 PM   #7
Junior Member
 
Join Date: Jan 2003
Posts: 13

Lin,
I am very impressed with your knowledge in this area and was wondering if you could offer your comments on a related topic. A while ago I was reading a thread on another site driven by an individual who maintains there is a loss of image quality (on modestly enlarged prints) when apertures above f/11 are used, due to the 1.6x factor of the sensor. I really can't say I fully understand his theory, but he did go to great lengths to explain it. I personally haven't noticed any deterioration in my images above f/11 on my D60. Others had similar comments, but his explanation was simply that our eyes weren't good enough. Have you come across this theory, and if so, do you have any thoughts on it?
Regards,
Roy
R_Patterson
Old Apr 7, 2003, 12:33 AM   #8
Moderator
 
Join Date: Jun 2002
Posts: 1,139

Hi Roy,

Yes, I read that too, but lost interest because it didn't mesh with practical experience.

In the digital photography arena, there are numerous "theories," some prove true, some just don't pan out for one reason or another. There are two basic ways of developing a theory. The first is to simply look at known events, then attempt to explain them from what we know, or think we know about the world and the physics of our universe. The second way is to attempt to understand how things "might" be from deductive reasoning. Simply put, it's inductive versus deductive reasoning.

As a former scientist, I appreciate both approaches, but when it comes to practical physics, I find the inductive method more useful. The theory he was presenting, as I remember, concerned diffraction; but having never experienced a problem at small apertures with the crop factor sensors and serious enlargements, I must assume that somewhere he has missed something important which makes the rest of his conclusions invalid.

In my personal experience, the 1.6x crop factor images made even at f/22 enlarge beautifully if done properly.
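For anyone who wants to put rough numbers on the theory Roy describes, the usual back-of-the-envelope check compares the diffraction (Airy) disc diameter against the sensor's pixel pitch. A Python sketch, assuming green light (~550nm) and a D60 pitch derived from its published specs (3072 pixels across 22.7mm):

[code]
# Back-of-the-envelope diffraction check: the Airy disc diameter (to the
# first minimum) is roughly 2.44 * wavelength * f-number. Whether the
# softening is visible in a modest enlargement is the judgment call
# being debated here.
WAVELENGTH_UM = 0.55                  # green light, in microns
PIXEL_PITCH_UM = 22.7 * 1000 / 3072   # ~7.4 microns on the D60

for f_number in (8, 11, 16, 22):
    airy_um = 2.44 * WAVELENGTH_UM * f_number
    print(f"f/{f_number}: Airy disc ~{airy_um:.1f} um"
          f" (~{airy_um / PIXEL_PITCH_UM:.1f} pixel widths)")
[/code]

By this yardstick the disc spans roughly two pixel widths at f/11 and four at f/22; whether that softening survives into a modestly enlarged print is exactly where the theory and practical experience part ways.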

Lin
Lin Evans
Old Apr 7, 2003, 9:04 PM   #9
Junior Member
 
Join Date: Jan 2003
Posts: 13

Thanks Lin,
Maybe my eye isn't that bad after all...
Regards,
Roy
R_Patterson
Old May 21, 2003, 2:47 AM   #10
Senior Member
 
Join Date: May 2003
Posts: 577

Hi Lin,

Since the 10D sensor is 1.6 times smaller (in linear dimensions) than a standard 35mm frame, don't you lose a lot of light, and therefore won't the 10D perform worse in low light conditions than a standard 35mm SLR? Assuming the same lens, same aperture and shutter speed.

Basically, more of the light from the circle the lens projects is discarded in the 10D's case. Thus fewer photons hit the sensor to record information.
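For what it's worth, a quick sketch of the arithmetic behind the question (the f-number and shutter speed fix the light per unit area, so what changes with sensor size is the total light collected):

[code]
# At the same aperture and shutter speed, the intensity (photons per
# unit area) on the sensor is unchanged; the smaller sensor simply
# collects less total light, in proportion to its area.
import math

CROP_FACTOR = 1.6
area_ratio = 1 / CROP_FACTOR ** 2
print(f"fraction of full-frame light collected: {area_ratio:.2f}")   # ~0.39
print(f"difference in total light: ~{math.log2(1 / area_ratio):.1f} stops")
[/code]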

Barthold
barthold
 