Steve's Digicams Forums > Digicam Help > General Discussion

#121 | Mar 11, 2006, 9:30 PM
DBB (Senior Member; joined Jan 2004; 2,483 posts)

Hi Eric

The problem I have with TMoreau is that he has no comprehension of what "resolving" means. He confuses resolving with magnification. The cropping factor magnifies, it does NOTHING to resolve.

I agree with you here on the faults of Mark's posts. But the larger error is with TMoreau, who has no understanding that the ability to resolve detail is NOT just a matter of adding more MPs to the sensor - meaning his repeated statements that, given enough MPs, even a small focal length with an equivalent crop factor can match a long lens on a camera with fewer MPs.

Dave
DBB
#122 | Mar 11, 2006, 10:18 PM
tmoreau (Senior Member; joined Dec 2005; 477 posts)

eric, the aperture size can be deduced easily (though not exactly) by taking 400 divided by 2.8, which equals 142.86mm; divide by 25.4 (to convert mm to inches) and you get 5.62". That's pretty close to what Canon listed in your link, but a handier way to figure it. Figured this way, you can assume the lens will be a little larger to avoid vignetting and other issues, but it's a theoretical minimum, an effective diameter.
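The arithmetic above can be sketched as a quick check (a minimal sketch; the 400mm f/2.8 figures are the ones from the post):

```python
# Effective aperture (entrance pupil) diameter = focal length / f-number.
# This is the theoretical minimum; real front elements are a bit larger.
def effective_diameter_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

diameter_mm = effective_diameter_mm(400, 2.8)
diameter_in = diameter_mm / 25.4  # convert mm to inches

print(round(diameter_mm, 2))  # 142.86
print(round(diameter_in, 2))  # 5.62
```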

I'm glad to see you addressing that little slip, I've been wondering if I was the only one that noticed!

I'm working on my proper response, but as a few of you have pointed out (thanks, and point taken) we need to look at this more carefully and scientifically, which takes a bit of time. I'll have it ready soon (oohh, the suspense, eh?).
tmoreau
#123 | Mar 11, 2006, 10:43 PM
Aumma45 (Senior Member; joined Jul 2005; 1,940 posts)

Thanks Eric,

In my view you have clarified matters; I wish the other contributors would take a cue from you. This is an interesting argument with lots of merit, but the issues are so mixed up that feeble-minded folks like me can't follow. I am glad that those who have contributed after you have tried to restore some sanity to the matter in question. Regards, Jaki
Aumma45
#124 | Mar 11, 2006, 11:01 PM
Mark47 (Member; joined Jan 2006; 91 posts)

Quote:
You keep saying over and over that somehow a 300mm lens inherently gathers more light than a 200mm lens. That is absolutely and totally incorrect, because that isn't enough information. A 300mm f4 lens gathers *exactly* the same amount of light as a 200mm f4 lens. Period. That is what an f-stop of 4 means, and it is independent of focal length. In fact a 400mm f2.8 lens gathers *more* light than a 500mm f4 lens. Yes, even though the 400mm lens has less focal length. That is the definition of what f2.8 and f4 mean. An f-stop of 2.8 gathers twice as much light as an f-stop of 4. The 400mm f2.8 will weigh more than the 500mm f4 because it has a much bigger front element (11.8 lb vs. 8.5 lb), because a bigger front element (a bigger magnifying glass, like in your example) will gather more light. But that is independent of the focal length of the lens.
You are confusing the amount of light captured with the intensity at which you see it. A 400mm lens will be further from the sensor than a 300mm, so the light will be less intense. A small magnifying glass focused close to your hand will burn you; a larger magnifying glass kept further away will not, yet the larger glass has clearly collected more light. The difference is the way it is focused and the way you see it.

The smaller magnifying glass, however, no matter how close or how far away, will never be capable of producing the same amount of heat or the same intensity as the larger glass.

What you fail to grasp is what enables more optical magnification to produce a better picture. It is the reason video camera manufacturers make a big deal out of advertising that their camera has a bigger OPTICAL zoom. If there were no benefit to a bigger optical zoom, they wouldn't bother; they would all have the same optical zoom and use cheaper digital zoom instead.

Do you not comprehend what optical zoom is? A 300mm lens has more optical zoom than a 200mm lens; it's that simple! If you don't understand this, ask a video camera producer about the benefits of optical zoom as opposed to digital zoom.

A 1.5-crop camera is a crop of a 35mm. This means the sensor of the 35mm camera has been cropped just like you crop a photo. It hasn't been shrunk; it has been cropped. This means that if you take a 35mm 12MP camera and crop it, you are removing the outside pixels to make a smaller sensor with fewer pixels, so it only has, say, 8MP or whatever the actual amount is. It hasn't been shrunk so that you now have a 15mm sensor with 12MP; it has been cropped so that 4MP have been cropped away. So if you were to restore the 15mm to the original 35mm size simply by enlarging it, you would end up with a 35mm sensor with only 8MP.

The sensor is like film: if you want to enlarge the 35mm to A4 size and enlarge the 15mm to A4 size, you will have to enlarge the 15mm more to get to the same size, since it is smaller. This is digital magnification, as you are enlarging it digitally, not optically.
Mark47
#125 | Mar 12, 2006, 12:04 AM
Mark47 (Member; joined Jan 2006; 91 posts)

Optical magnification is not simply making things larger; it's bringing in more detail. If you see a person in the distance, it's hard to recognize them. Is this simply because the image is too small? No. The actual size of the image is not the problem; it is the amount of detail available to you. If you see a very small photograph of someone, say the size of a coin, you have no problem recognizing who it is, yet the actual size of that image is comparable to the size of the image of a person you see in the distance. It is not the size of the image but the amount of detail carried by the available light. The further light has to travel, the more it disperses.

Suppose you have a slide projector and you shine an image on a screen ten feet away, so that there are 10,000 dots per square inch. Now you move the screen back to 20 feet. What happens? The image gets bigger, so there are now fewer dots per square inch. You move it back to 30 feet; what happens? The image gets larger, so there are fewer dots per square inch. The image is not simply getting bigger with a constant 10,000 dots per square inch; it is becoming dilated, so there are fewer dots per square inch. This means that in between the dots there are blanks.

This is what happens when you move further away from an object: the further away from the object, the fewer dots per square inch. The fewer dots per square inch, the less detail. Now suppose you are looking at an image on a wall of someone's face and you are close up, so that all you can see is that person's face. Suppose your eye can only resolve 10,000 dots per square inch. When you are directly in front of the image, all 10,000 dots are made up of that image. You move back, and what happens? You see more. Although your angle of view has increased, the number of dots has not; you still resolve ten thousand dots, but they are now made up not only of the person's face but of the surrounding area as well, so that the image you see of the face contains only 8,000 dots.

You move back further, and what happens? Your angle of view increases, so you see more. The number of dots you see is still the same, but it is divided up between more scenery. So now the number of dots you receive from the face is only 5,000 per square inch. This is why, when someone is far away, you can't recognize them: it's not that the image is too small, but that the amount of information available to you, the dots per square inch, is less, so your mind has to fill in the blanks. It fills in the blanks just as it does when you see a painting of trees: there are no actual leaves or branches painted; it is just your mind filling in the blanks and convincing you it is actually a tree.

What optical magnification does is not simply make the image bigger; it obtains the light CLOSER to the source, where there are more dots per square inch.

Suppose you are looking at the image of a face on a wall and it is 10,000 dots per square inch. You move back and now look at it through a 300mm lens on a 35mm camera, so you have the same field of view. How many dots does the face contain? It contains 10,000 dots. Now you have another 35mm camera with a 200mm lens the same distance from the image. The field of view will be larger, won't it? You won't only see the face; you will see the area around it as well, so the number of dots actually coming from the face will only be 8,000. Now you have a third camera, a 1.5 crop with a 200mm lens, at the same distance. The field of view will be identical to the 300mm lens, correct? However, the field of view is only the same because you have removed, or cropped, the sensor and are seeing a cropped version of what the other 200mm lens is seeing. It is not the optics that have changed but the image size inside the camera. So the 200mm lens is seeing the same thing as the other 200mm lens; therefore, the dots per square inch it is seeing is only 8,000.

Because you are not getting as many dots per square inch of the face, you are not getting as much detail. So it is like the oil painting: you have less detail, fewer dots per square inch, to work with, so your mind has to fill in the blanks. This is why video camera manufacturers make a big deal out of OPTICAL magnification; they wouldn't bother making more powerful optical magnification unless there were a benefit to it, and this is the benefit: the amount of DETAIL. It's not about making things bigger but about bringing in more detail from the source, because light dilates, so you need a more powerful lens to bring it back together and focus it.

Or as NASA put it:
Quote:
Simply stated, the primary function of a telescope is to collect light. The larger the telescope, the more light it can collect. The more light collected, the fainter and more distant the objects that can be observed.

Mark47
#126 | Mar 12, 2006, 12:23 AM
VTphotog (Senior Member; joined Mar 2005; Extreme Northeastern Vermont, USA; 4,214 posts)

Mark47 wrote:
Quote:
The smaller magnifying glass however no matter how close or how far away will never be capable of producing the same amount of heat or the same intensity as the larger glass.

The same amount of heat, no, but the same intensity, yes.

Quote:
A 1.5 crop camera is a crop of a 35mm. This means that the sensor of the 35mm camera has been cropped just like you crop a photo. It hasnt been shrunk, it has been cropped. This means that if you take a 35mm 12mp camera and you crop it, that you are removing the outside pixels to make it a smaller sensor with less pixels, so it only has say 8mp or whatever the actual amount is. It hasnt been shrunk, so that now you have a 15mm sensor with 12mp, it has been cropped so that the 4mp have been cropped away. So if you were to restore the 15mm to the original 35mm size simply by enlarging it, you would end up with a 35mm sensor with only 8mp.

The 1.5 crop factor means that the sensor is 2/3 the size of the 35mm frame. 35mm is 24x36, so a 1.5 crop factor gives you a 16x24mm sensor size. The image circle of a 200mm lens is larger than the sensor, so you are only using 2/3 of the lens. Therefore, to the sensor, looking out into the world, it seems to be seeing the same angle of view as the full frame does with a 300mm lens. In your example, the smaller sensor has fewer pixels, but if they were both 12MP, the image would be the same. This means the pixels on the sensor have to be smaller, though.
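Brian's 2/3 arithmetic can be checked with a short sketch (assuming the frame sizes given in the post):

```python
# A 1.5x crop factor scales each linear dimension of the 24x36mm
# full frame by 1/1.5, i.e. to 2/3 of its size.
crop_factor = 1.5
full_frame_mm = (36.0, 24.0)  # width, height

cropped_mm = tuple(dim / crop_factor for dim in full_frame_mm)
print(cropped_mm)  # (24.0, 16.0), the 16x24mm sensor size quoted above

# Angle-of-view equivalence: 200mm on the crop body frames like
# 200 * 1.5 = 300mm does on full frame.
print(200 * crop_factor)  # 300.0
```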
Quote:
The sensor is like film, if you want to enlarge the 35mm to a4 size and enlarge the 15mm to a4 size, you will have to enlarge the 15mm more times to get to the same size since it is smaller. This is digital magnification as you are enlarging it digitally not optically.

It won't matter if they have the same number of pixels. The problem with digital zoom is that it is cropping the existing sensor. The crop factor on DSLRs is cropping the lens (as near as I can explain it), not the sensor.

I hope this helps clear things up a bit.

brian
VTphotog
#127 | Mar 12, 2006, 12:41 AM
VTphotog (Senior Member; joined Mar 2005; Extreme Northeastern Vermont, USA; 4,214 posts)

DBB wrote:
Quote:
The problem I have with TMoreau is that he has no comprehension of what "resolving" means. He confuses resolving with magnification. The cropping factor magnifies, it does NOTHING to resolve.

Perhaps you are both looking at it from such different perspectives that you cannot see that you are closer to being in agreement than you think.

Quote:
I agree with you here on the faults of Mark's posts. But the larger error is with TMoreau who has no understanding that the ability to resolve detail is NOT just adding more MP's to the sensor - meaning his repeated statements that given enough MP's even a small focal length with a cropping factor equivalent, can match a long lens on a camera with less MP's.

I don't think he said that. He did say, and I agree, that adding more pixels will increase resolution, but only up to the resolving power of the lens. At least that is what I have gathered from his posts. Which is very similar to what you are saying.

I also don't think enough attention has been paid to Jim C's point that the smaller sensor is using the center of the lens circle, which tends to eliminate much of the edge softness in some lenses, allowing higher optical resolution. This means pixel density can be effectively increased and remain within the resolution of the lens.

What the argument seems to be about, to me, is that TMoreau seems to think that lenses are not the limiting factor in resolution so much as other factors. I don't know, but given the history of the technology, I suspect that pixel density will increase faster than lens resolution, and we may very soon get to the time when lenses become the limiting factor, if other issues, such as noise and AA filters, allow.

brian
VTphotog
#128 | Mar 12, 2006, 1:52 AM
Mark47 (Member; joined Jan 2006; 91 posts)

Quote:
The sensor is like film, if you want to enlarge the 35mm to a4 size and enlarge the 15mm to a4 size, you will have to enlarge the 15mm more times to get to the same size since it is smaller. This is digital magnification as you are enlarging it digitally not optically.
Quote:
It won't matter if they have the same number of pixels. The problem with digital zoom is that it is cropping the existing sensor. The crop factor on DSLRs is cropping the lens (as near as I can explain it), not the sensor.

Of course it won't matter from a digital perspective. That is what we are talking about: the same digital resolution with different optical resolution. So if you have a 15mm crop with 12MP and a 200mm lens, and a 35mm with 12MP and a 300mm lens, the 35mm will produce more detail because it is capturing more detail thanks to the more powerful lens.

A cropped sensor is called a crop sensor because it is a 35mm sensor that has been cropped. This means it hasn't been shrunk; it has been cropped. You have cropped the outside of the 35mm so that only 15mm is left. This in turn will crop the field of view of a full-frame lens, though not a DC lens, but it is the sensor that has been physically cropped. It was 35mm and now it is 15; that is cropped, which is why they are referred to as cropped sensors.
Mark47
#129 | Mar 12, 2006, 2:08 AM
tmoreau (Senior Member; joined Dec 2005; 477 posts)

Whoa, Dave almost restated my whole argument in his own words, in a way that implied he agreed with me. Almost. Missing was the idea that the resolving power of the lens leaves us the ROOM to increase the resolving power of the sensor before hitting ANY limitations. Oh, wait, he did say it. So why disagree? Are you simply resisting my perceived misuse of terminology or some such semantic BS? I'll let the recent contradictions lie, since I'm more interested in the topic than the argument.

Alright, let's try to create a starting point then. The purpose of the following is to determine the theoretical limitations of a perfectly constructed lens, and whether you can achieve results as good with a shorter focal length as you would with a longer one, all else being equal. We already know they will capture the same image (composition-wise); the question is whether one has reduced detail due to the focal length difference. The examples will be "real-world" compatible, rather than wild exaggerations that lose scientific basis and real-world relevance.

Baseline Camera Specs:
Camera 1 FF:
--36x24mm sensor
--3000x2000 pixels (83.3 pixels per mm, aka 12 micron photosites)
--200mm f/2.8 lens with 12 degree FOV (10 degrees horizontally, 6.9 vertically)

Camera 2 APS-C:
--24x16mm sensor
--3000x2000 pixels (125 pixels per mm, aka 8 micron photosites)
--135mm f/2.8 lens with 12 degree FOV (10 degrees horizontally, 6.9 vertically)

Let us assume they are constructed identically: lenses of exactly equal quality, ditto sensors, image processing, air quality, temperature, and any other variables you can think of, except the ones listed. Noise differences and other sensor-design complications that may be caused by the size difference of the photosites will be ignored for our purposes (DOF, anyone?).

The cameras are in the same location, and the object being photographed is in the same location. Since the sensors are a different size, and the focal lengths are different by a relative amount, the composition will be the same.

Let's assume the lenses are constructed so that at their respective focal lengths they are optimized for the given FOV (i.e., we are not using a 35mm-format lens on a 1.5x crop camera; it's a full-frame lens on the full-frame camera and a "digital only" lens on the APS-C camera). As we know, a 200mm lens can be constructed so that it has a FOV of 100 degrees or a FOV of 5 degrees. This fact has no bearing on the discussion; the FOV is 12 degrees on both lenses.

Now, with these facts established, what can we determine about the physical construction of the lenses? Is one larger? Does one have higher magnification? Let's put some numbers to those concepts.

Object Being Photographed:
Size: 4.5 feet wide by 3 feet tall (filling the entire frame)
Distance: Approx. 21 feet

Easiest to consider is the magnification factor: camera 1 = 1:38.1 and camera 2 = 1:57.2. Neither, of course, is magnifying the object being photographed. They are reducing it relative to actual size, one more than the other, to fit it on the sensor. This is the case in all examples of photography except macro, where the object is often recorded at 1:1, or photomicrography, where the object is actually enlarged. These are not relevant to this discussion. In fact, the magnification factor isn't either, but the term gets used a lot, so here it is.
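The two ratios can be verified with a short sketch (a sanity check using the subject width of 4.5 feet filling the sensor width, with the 36mm and 24mm sensor widths specified above):

```python
# Reduction ratio = object width / image width, with the 4.5 ft wide
# subject filling the full sensor width.
subject_width_mm = 4.5 * 12 * 25.4  # 4.5 feet = 1371.6mm

for name, sensor_width_mm in (("camera 1 (36mm FF)", 36.0),
                              ("camera 2 (24mm APS-C)", 24.0)):
    ratio = subject_width_mm / sensor_width_mm
    print(f"{name}: 1:{ratio:.1f}")
```

This reproduces the 1:38.1 and 1:57.2 figures quoted in the post.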

Now for lens size: a 200mm f/2.8 lens (camera 1) has an effective diameter of 71.4mm. The 135mm f/2.8 lens (camera 2) has an effective diameter of 48.2mm (i.e., a smaller lens). These would be the aperture diameters if the aperture were placed directly behind the front element; even though it isn't, its ratio remains the same, and the front element must be at least this size, perhaps larger to prevent vignetting.

A few interesting quotes before we continue:

Wikipedia "This means that if you have a 75-300 mm lens, a physically bigger diaphragm opening will be needed at 300 mm than at 75 mm, to maintain the same f-number. More light is needed as the focal length increases, to compensate for the fact that light from a smaller field of view is being spread over the same area of film or detector."

Digiscoping article "In a perfect telescope [the] resolution is limited by the effective aperture and the wave nature of light (i.e., diffraction effects)."

Obviously (and I have stated this before) there is a limit to how much a lens can resolve. Our purpose here is to find whether that limit is something we are on the brink of, or whether it's so far off as to be practically irrelevant. We could stop here and start drawing conclusions (guesses, really) relating to the aperture, but that wouldn't be any fun. It also would NOT answer the question we are pursuing in any scientific manner.

Let's pause to consider for a moment how much detail the sensors can resolve. With each camera capturing the same composition using the same number of pixels, on our 4.5-foot by 3-foot subject each pixel covers .018" square. For scale reference, your average human hair is .003" thick. This size is equal to roughly 12.25 seconds of arc (.0034 degrees).
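Those per-pixel figures can be rechecked with a sketch (the angular value here is derived from the ~10.3 degree horizontal FOV computed further down in the post, which is why it comes out slightly above 12.25):

```python
import math

# Footprint of one pixel on the 4.5 ft (54 inch) wide subject,
# spread across 3000 horizontal pixels.
pixel_inches = 54.0 / 3000
print(round(pixel_inches, 3))  # 0.018

# Per-pixel angle: horizontal FOV of a 200mm lens on a 36mm-wide
# sensor, divided by 3000 pixels, converted to arcseconds.
fov_deg = math.degrees(2 * math.atan(36 / (2 * 200)))
print(round(fov_deg / 3000 * 3600, 1))  # 12.3, close to the quoted 12.25
```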

So how, in a scientific manner, do we answer the resolution question? Could we refer to the numerical aperture? The Airy disk and Abbe equation? It appears that's just what we need.

Code:
Numerical Aperture (N.A.) = n sin θ
Where n is the index of refraction of the medium in which the lens is working (1.0 for air) and θ is the half-angle of the maximum cone of light that can enter the lens, 6 degrees for our purposes. This means that both lenses (camera 1 and 2) have a N.A. of .10453
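Plugging in the stated values (n = 1.0 for air, half-angle of 6 degrees) as a quick sanity check:

```python
import math

# N.A. = n * sin(theta), with n = 1.0 for air and theta = 6 degrees
# (the half-angle of the 12 degree FOV used in both example lenses).
n = 1.0
theta_deg = 6.0
numerical_aperture = n * math.sin(math.radians(theta_deg))
print(round(numerical_aperture, 5))  # 0.10453
```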

A point should be made regarding the demosaicing process: if the 8-12 micron photosites are one color only (assuming a Bayer pattern filter) and have their full color interpolated from surrounding photosites, does the luminosity remain relative to the raw data? To maintain sharpness, an edge must be resolved by the lens onto as few pixels as possible. If a sharp edge is projected onto the sensor as a gradient spanning several pixels, then the sensor would be said to be outresolving the lens. If luminosity is degraded in demosaicing, it would have the same effect as lowering the lens resolution relative to the sensor resolution (i.e., if the lens resolved several orders of magnitude more detail than the sensor, that detail would be lost (of course), and then further degraded). For our purposes, let's assume the sensor works with theoretical perfection and this is not a concern; after all, we're really talking about the lenses here.


Ok, using the Abbe equation,
Code:
R.P.= (Wavelength of light in nm * 0.61)/N.A.
Referring to the visible spectrum, we find that at 400nm we are resolving 2334nm of detail, or 2.334 microns. At 700nm we are resolving 4084nm, or 4.084 microns. That means that in the worst case of our "perfect" example, the lens resolves roughly three times more detail than the 12-micron photosites of camera 1 can use, and roughly twice what the 8-micron photosites of camera 2 can use. Of course this means the smaller sensor with the shorter focal length lens (same FOV, same number of pixels) is closer to the theoretical maximum lens resolution than the larger setup. It's still unimportant in the context of this thread (looking back at the original posts, outrage against lenses turning into super-telephotos).
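The Abbe numbers above can be reproduced directly (a sketch using the N.A. of sin 6 degrees derived earlier; the post's 4084nm figure differs only by rounding):

```python
import math

# Abbe resolution limit: R.P. = 0.61 * wavelength / N.A.
na = math.sin(math.radians(6.0))  # about 0.10453, as computed above

for wavelength_nm in (400, 700):
    rp_nm = 0.61 * wavelength_nm / na
    print(wavelength_nm, round(rp_nm))  # 2334nm at 400nm, 4085nm at 700nm

# Compare the worst case (700nm) against the photosite pitches:
# 12 microns (camera 1) and 8 microns (camera 2).
print(round(12000 / (0.61 * 700 / na), 1))  # camera 1 headroom, about 2.9x
print(round(8000 / (0.61 * 700 / na), 1))   # camera 2 headroom, about 2.0x
```

The smaller ratio for camera 2 is what makes its sensor the one closer to the lens's theoretical limit.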

So, just what resolutions can we achieve on the current 35mm SLR platform?
Let's forget looking up angles of view for different lenses and sensors, and figure it for ourselves.
Code:
Angle of view α = 2 arctan (d/2f)
Where d is the sensor dimension (i.e., 36mm for the horizontal measurement of a full-frame sensor) and f is the focal length of the lens. Arctan can be found on your calculator as tan^-1.

So horizontal angle of view of a 200mm lens on a FF camera: 2*tan^-1(36/(2*200)) = 10.3 degrees (Good, that matches the above data found elsewhere on the net).
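The formula is easy to wrap as a small helper (a sketch; the results match the FOV figures listed in the camera specs above):

```python
import math

# Angle of view: alpha = 2 * arctan(d / (2 * f)).
def angle_of_view_deg(sensor_dim_mm, focal_length_mm):
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_length_mm)))

# 200mm lens on a full-frame (36 x 24mm) sensor:
print(round(angle_of_view_deg(36, 200), 1))  # 10.3 (horizontal)
print(round(angle_of_view_deg(24, 200), 1))  # 6.9 (vertical)
```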

Knowing all of the above formulas, it's easy to put together the following chart. The incredible revelation here is that as you scale the camera down, theoretical lens resolution scales right along with it! This means a small high-res sensor with a short focal length lens performs to the same theoretical maximums as a larger sensor of the same resolution with a longer focal length lens. I KNOW I've said that before, and I KNOW I've been told I have no idea what I'm talking about.

Please, for the love of photography, somebody correct me if I've fudged it all up. I'm not a lens engineer, and while I've checked my data against several sources, we all make mistakes.

Oh, and I DO acknowledge that at these tiny sizes, manufacturing tolerances, glass properties, and the like interfere with the theoretical maximum resolution in a non-linear manner. There are, of course, other limitations to sensor design, too.

I have provided lots of data and figures that I did not use; that is to be thorough, and to give others a shortcut if they want to try to disprove my findings, so please feel free! By the same token, let's not throw around emotionally based ideas and feelings. Prove them, as I have proved mine.

Attached Images: (chart referenced above; not reproduced)
tmoreau
#130 | Mar 12, 2006, 3:22 AM
Mark47 (Member; joined Jan 2006; 91 posts)

Basically what you have done is completely ignore what I have said, block out the fact that it is correct, and find a different way to say the same thing. You have done nothing to show any fault in what I have said, only that you prefer your opinion over mine.

This is an example of your reasoning method:

"If someone runs their car into the back of yours, it's their fault because they were driving too close."

"If you run your car into the back of someone, it's their fault because they braked too hard."

Selective reasoning: if you run your car into someone, it's your fault, full stop, end of story. This, however, doesn't stop you from blocking that fact from your mind as if it doesn't exist, which is exactly what you have been doing with my argument. You block it out even though you know it's correct; you can block it out and pretend you haven't seen the truth. You know you were driving too close behind, you know it's wrong, but still you can block that out by substituting your own version of reality. The truth is irrelevant as long as you can find another version of your own reality. It's a very bizarre human characteristic that people will go to such lengths to prove themselves right, as if the truth is so important to them, yet when it is staring them in the face they can completely ignore it: "I wasn't too close, you braked too hard." How can someone see the correct answer as so important, yet have no interest in it when it is given to them? Quite bizarre!

In this case it is the ever-faithful use of jargon, technical terms, and quotes from books. People use this all the time because they think it makes them look like they know what they are talking about. The problem is that you don't understand it or what relevance it has; since it has none, it just looks "impressive" because of some jargon but is basically blabbering on about nothing of any relevance.

I was talking about the properties of light and how they affect detail. Basically, as was clearly demonstrated, the further you move away from an object, the less light from the source you are getting and the more AMBIENT LIGHT. Whatever you are blabbering on about does nothing to address this FACT. All you do is read through what I have said and go, "He still doesn't agree with me, therefore he is still wrong, therefore I will explain myself a different way." Because you haven't tried to comprehend anything I have said, you are not even talking about the same subject; your post has as much relevance as talking about truck tyres.
Mark47
 



