#121
Senior Member
Join Date: Jan 2004
Posts: 2,483
Hi Eric
The problem I have with TMoreau is that he has no comprehension of what "resolving" means. He confuses resolving with magnification. The cropping factor magnifies; it does NOTHING to resolve. I agree with you here on the faults of Mark's posts. But the larger error is with TMoreau, who does not understand that the ability to resolve detail is NOT just a matter of adding more megapixels to the sensor - hence his repeated claim that, given enough megapixels, even a short focal length with an equivalent cropping factor can match a long lens on a camera with fewer megapixels.

Dave
#122
Senior Member
Join Date: Dec 2005
Posts: 477
Eric, the aperture size can be deduced easily (though not exactly) by taking 400 divided by 2.8, which equals 142.85mm; divide that by 25.4 (to convert from mm to inches) and you get 5.62". That's pretty close to what Canon listed in your link, but a handier way to figure it. Figured this way, you can assume the actual lens will be a little larger to avoid vignetting and other issues, but this is the theoretical minimum, i.e. the effective diameter.
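A minimal sketch (Python) of that arithmetic, in case anyone wants to plug in other lenses; the 400mm f/2.8 figures are just the example above.

Code:
# Effective (entrance-pupil) diameter = focal length / f-number.
# This is a theoretical minimum; real front elements run a bit larger
# to avoid vignetting, as noted above.
MM_PER_INCH = 25.4

def effective_diameter_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

d = effective_diameter_mm(400, 2.8)
print(f"{d:.2f} mm = {d / MM_PER_INCH:.2f} in")   # 142.86 mm = 5.62 in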
I'm glad to see you addressing that little slip; I've been wondering if I was the only one who noticed! I'm working on my proper response, but as a few of you have pointed out (thanks, and point taken), we need to look at this more carefully and scientifically, which takes a bit of time. I'll have it ready soon (oohh, the suspense, eh?).
#123
Senior Member
Join Date: Jul 2005
Posts: 1,940
Thanks Eric,
In my view you have clarified matters; I wish the other contributors could take a cue from you. This is an interesting argument with lots of merit, but the issues are so mixed up that feeble-minded folks like me can't follow. I am glad that those who have contributed after you have tried to restore some sanity to the matter in question.

Regards.
Jaki
#124
Member
Join Date: Jan 2006
Posts: 91
Quote:
The smaller magnifying glass, however, no matter how close or how far away, will never be capable of producing the same amount of heat or the same intensity as the larger glass. What you fail to grasp is what enables more optical magnification to produce a better picture. It is the reason video camera manufacturers make a big deal out of advertising that their camera has a bigger OPTICAL zoom. If there were no benefit to a BIGGER OPTICAL ZOOM, they wouldn't bother; they would all have the same optical zoom and use cheaper digital zoom instead. Do you not comprehend what optical zoom is? A 300mm lens has more optical zoom than a 200mm lens; it's that simple! If you don't understand this, then ask a video camera producer about the benefits of optical zoom as opposed to digital zoom.

A 1.5 crop camera is a crop of a 35mm. This means that the sensor of the 35mm camera has been cropped, just like you crop a photo. It hasn't been shrunk; it has been cropped. This means that if you take a 35mm 12MP camera and crop it, you are removing the outside pixels to make a smaller sensor with fewer pixels, so it only has, say, 8MP or whatever the actual amount is. It hasn't been shrunk so that you now have a 15mm sensor with 12MP; it has been cropped so that the other 4MP have been cropped away. So if you were to restore the 15mm to the original 35mm size simply by enlarging it, you would end up with a 35mm sensor with only 8MP.

The sensor is like film: if you want to enlarge the 35mm to A4 size and enlarge the 15mm to A4 size, you will have to enlarge the 15mm more times to reach the same size, since it is smaller. This is digital magnification, as you are enlarging it digitally, not optically.
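For what it's worth, the pixel arithmetic in that crop example can be checked directly. A minimal sketch (Python) under the standard sizes (36x24mm full frame, 24x16mm for a 1.5x crop), using the post's own 12MP starting figure; the poster hedges on the exact result ("or whatever the actual amount is"), and the area ratio puts it nearer 5.3MP than 8MP.

Code:
# Cropping removes pixels in proportion to the area removed; it does not
# shrink them. Standard sizes assumed: 36x24mm FF, 1.5x crop -> 24x16mm.
ff_w, ff_h = 36.0, 24.0
crop = 1.5
c_w, c_h = ff_w / crop, ff_h / crop

ff_mp = 12.0                                   # the post's starting figure
cropped_mp = ff_mp * (c_w * c_h) / (ff_w * ff_h)
print(f"{cropped_mp:.1f} MP left after a {crop}x crop")   # 5.3 MP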
#125
Member
Join Date: Jan 2006
Posts: 91
Optical magnification is not simply making things larger; it's bringing in more detail. If you see a person in the distance, it's hard to recognize them. Is this simply because the image is too small? NO. The actual size of the image is not the problem; it is the amount of detail available to you. If you see a very small photograph of someone, say the size of a coin, you have no problem recognizing who it is, yet the actual size of that image is comparable to the size of the image of a person you see in the distance. It is not the size of the image but the amount of detail carried by the available light. The further light has to travel, the more it disperses.
Suppose you have a slide projector and you shine an image on a screen ten feet away, so that there are 10,000 dots per square inch. Now you move the screen back to 20 feet; what happens? The image gets bigger, so there are now fewer dots per square inch. You move it back to 30 feet; what happens? The image gets larger again, so there are fewer still. The image is not simply getting bigger while always keeping 10,000 dots per square inch; it is becoming dilated, so there are fewer dots per square inch. This means that in between the dots there are blanks. This is what happens when you move further away from an object: the further away from the object, the fewer dots per square inch, and the fewer dots per square inch, the less detail.

Now suppose you are looking at an image on the wall of someone's face, and you are close up so that all you can see is that person's face. Suppose we say that your eye can only resolve 10,000 dots per square inch. When you are directly in front of the image, all 10,000 dots are made up of that image. You move back and what happens? You see more. Although your angle of view has increased, the number of dots has not; you still resolve ten thousand dots, but that is now made up not only of the person's face but of the surrounding area as well, so the image you see of the face contains only 8,000 dots. You move back further and what happens? Your angle of view increases, so you see more. The number of dots you see is still the same, but it is divided up between more scenery, so the number of dots you receive from the face is now only 5,000 per square inch. This is why, when someone is far away from you, you can't recognize them: it's not because the image is too small, but because the amount of information available to you - dots per square inch - is less, so your mind has to fill in the blanks. It fills in the blanks just as it does when you see a painting of trees: there are no actual leaves or branches painted; it is just your mind filling in the blanks and forcing you to believe it is actually a tree.

What optical magnification does is not simply make the image bigger; it obtains the light CLOSER to the source, where there are more dots per square inch. Suppose you are looking at the image of the face on a wall and it is 10,000 dots per square inch. You move back and now look at it through a 300mm lens on a 35mm camera so you have the same field of view; how many dots does the face contain? It contains 10,000 dots. Now you have another 35mm camera with a 200mm lens at the same distance from the image. The field of view will be larger, won't it? You won't only see the face, you will see the area around it as well; therefore the number of dots actually coming from the face will be only 8,000. Now you have a third camera, a 1.5 crop with a 200mm lens, at the same distance. The field of view will be identical to the 300mm lens, correct? But the field of view is only the same because you have cropped the sensor and are seeing a cropped version of what the other 200mm lens is seeing. It is not the optics that have changed but the image size inside the camera. The 200mm lens is seeing the same thing as the other 200mm lens, therefore the dots per square inch it is seeing is only 8,000. Because you are not getting as many dots per square inch of the face, you are not getting as much detail. So it is like the oil painting: you have less detail - fewer dots per square inch - to work with, so your mind has to fill in the blanks.
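The projector half of that argument is the inverse-square law, which is easy to put numbers on. A minimal sketch (Python), using the post's own starting figure of 10,000 dots per square inch at 10 feet:

Code:
# A fixed number of projected "dots" spreads over an area that grows with
# the square of the distance, so dot density falls as 1/d^2.
ref_d_ft = 10.0
ref_density = 10_000.0        # dots per square inch at 10 feet (from post)

for d in (10, 20, 30):
    density = ref_density * (ref_d_ft / d) ** 2
    print(f"{d:>2} ft: {density:,.0f} dots/in^2")
# 10 ft: 10,000   20 ft: 2,500   30 ft: 1,111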
This is why video camera manufacturers make a big deal out of OPTICAL magnification: they wouldn't bother building more powerful optical magnification unless there were a benefit to it, and this is the benefit - the amount of DETAIL. It's not about making things bigger but about bringing in more detail from the source, because light dilates, so you need a more powerful lens to bring it back together and focus it. Or, as NASA put it: Quote:
#126
Senior Member
Join Date: Mar 2005
Location: Extreme Northeastern Vermont, USA
Posts: 4,309
Mark47 wrote:
Quote:
#127
Senior Member
Join Date: Mar 2005
Location: Extreme Northeastern Vermont, USA
Posts: 4,309
DBB wrote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
#128
Member
Join Date: Jan 2006
Posts: 91
Quote:
A cropped sensor is called a crop sensor because it is a 35mm sensor that has been cropped. This means it hasn't been shrunk, it has been cropped: you have cropped the outside of the 35mm frame so that only 15mm is left. This in turn crops the field of view of a full-frame lens (though not a DC lens), but it is the sensor that has been physically cropped. It was 35mm and now it is 15mm; that is cropped, which is why they are referred to as cropped sensors.
#129
Senior Member
Join Date: Dec 2005
Posts: 477
Whoa, Dave almost restated my whole argument in his own words, in a way that implied he agreed with me. Almost. Missing was the idea that the resolving power of the lens leaves us ROOM to increase the resolving power of the sensor before hitting ANY limitations. Oh wait, he did say it. So why disagree? Are you simply resisting my perceived misuse of terminology, or some such semantic BS? I'll let the recent contradictions lie, since I'm more interested in the topic than the argument.
Alright, let's try to create a starting point then. The purpose of the following is to determine the theoretical limitations of a perfectly constructed lens, and whether you can achieve as good results with a shorter focal length as you would with a longer focal length, all else being equal. We already know they will capture the same image (composition-wise); the question is whether one has reduced detail due to the focal length difference. The examples will be "real-world" compatible, rather than wild exaggerations that lose scientific basis and real-world relevance.

Baseline Camera Specs:

Camera 1 FF:
--36x24mm sensor
--3000x2000 pixels (83.3 pixels per mm, aka 12 micron photosites)
--200mm f/2.8 lens with 12 degree FOV (10 degrees horizontally, 6.9 vertically)

Camera 2 APS-C:
--24x16mm sensor
--3000x2000 pixels (125 pixels per mm, aka 8 micron photosites)
--135mm f/2.8 lens with 12 degree FOV (10 degrees horizontally, 6.9 vertically)

Let us assume they are constructed identically: lenses of exactly equal quality, ditto sensors, image processing, air quality, temperature, and any other variables you can think of, except the ones listed. Noise differences and other sensor design complications that may be caused by the size difference of the photosites will be ignored for our purposes (DOF, anyone?). The cameras are in the same location, and the object being photographed is in the same location. Since the sensors are a different size, and the focal lengths differ by the same relative amount, the composition will be the same.

Let's assume that the lenses are constructed so that at their respective focal lengths they are optimized for the given FOV (i.e., we are not using a 35mm lens on a 1.5x crop camera; it's a full frame lens on a full frame camera and a "digital only" lens on the APS-C camera). As we know, a 200mm lens can be constructed so that it has a FOV of 100 degrees, or a FOV of 5 degrees. This fact has no bearing on the discussion; the FOV is 12 degrees on both lenses.

Now, with these facts established, what can we determine about the physical construction of the lenses? Is one larger? Does one have higher magnification? Let's put some numbers to those concepts.

Object Being Photographed:
Size: 4.5 feet wide by 3 feet tall (filling the entire frame)
Distance: approx. 25 feet

Easiest to consider is the magnification factor: camera 1 = 1:38.1 and camera 2 = 1:57.2. Neither, of course, is magnifying the object being photographed; they are reducing it relative to actual size, one more than the other, to fit it on the sensor. This is the case in all examples of photography except macro, where the object is often recorded at 1:1, or photomicrography, where the object is actually enlarged. These are not relevant to this discussion. In fact, the magnification factor isn't either, but the term gets used a lot, so here it is.

Now for lens size: a 200mm f/2.8 lens (camera 1) has an effective diameter of 71.4mm. The 135mm f/2.8 lens (camera 2) has an effective diameter of 48.2mm (i.e., a smaller lens). These would be the aperture diameters if the aperture were placed directly behind the front element; even though it isn't, its ratio remains the same, and the front element must be at least this size, perhaps larger to prevent vignetting.

A few interesting quotes before we continue:

Wikipedia: "This means that if you have a 75-300 mm lens, a physically bigger diaphragm opening will be needed at 300 mm than at 75 mm, to maintain the same f-number.
More light is needed as the focal length increases, to compensate for the fact that light from a smaller field of view is being spread over the same area of film or detector."

Digiscoping article: "In a perfect telescope [the] resolution is limited by the effective aperture and the wave nature of light (i.e., diffraction effects)."

Obviously (and I have stated this before) there is a limit to how much a lens can resolve. Our purpose here is to find out whether that limit is something we are on the brink of, or whether it is so far off as to be practically irrelevant. We could stop here and start drawing conclusions (guesses, really) relating to the aperture, but that wouldn't be any fun. It also would NOT answer the question we are pursuing in any scientific manner.

Let's pause to consider for a moment how much detail the sensors can resolve. With each camera capturing the same composition using the same number of pixels, on our 4.5 foot by 3 foot subject each pixel is .018" square. For scale reference, your average human hair is .003" thick. This size is equal to roughly 12.25 seconds of arc (.0034 degrees).

So how, in a scientific manner, do we answer the resolution question? Could we refer to the numerical aperture? The Airy disk and the Abbe equation? It appears that's just what we need to refer to.

Code:
Numerical Aperture (N.A.) = n sin θ

A point should be made regarding the demosaicing process: if the 8-12 micron photosites are one color only (assuming a Bayer pattern filter) and have their full color interpolated from surrounding photosites, does the luminosity remain relative to the raw data? To maintain sharpness, an edge must be resolved by the lens onto as few pixels as possible. If a sharp edge is projected onto the sensor as a gradient spanning several pixels, then the sensor would be said to be outresolving the lens. If luminosity is degraded in demosaicing, it would have the same effect as lowering the lens resolution relative to the sensor resolution (i.e., if the lens resolved several orders of magnitude more detail than the sensor, that would be lost (of course), and then further degraded). For our purposes, let's assume the sensor works with theoretical perfection and this is not a concern; after all, we're really talking about the lenses here.

Ok, using the Abbe equation,

Code:
R.P. = (wavelength of light in nm * 0.61) / N.A.

So, just what resolutions can we achieve on the current 35mm SLR platform? Let's forget looking up the angle of view for different lenses and sensors, and figure it for ourselves.

Code:
Angle of view α = 2 arctan (d/2f)

So the horizontal angle of view of a 200mm lens on a FF camera: 2*tan^-1(36/(2*200)) = 10.3 degrees. (Good, that matches the above data found elsewhere on the net.)

Knowing all of the above formulas, it's easy to put together the following chart. The incredible revelation here is that as you scale the camera down, theoretical lens resolution scales right with it! This means a small high-res sensor with a short focal length lens performs to the same theoretical maximums as a larger sensor of the same resolution with a longer focal length lens. I KNOW I've said that before, and I KNOW I've been told I have no idea what I'm talking about. Please, for the love of photography, somebody correct me if I've fudged it all up. I'm not a lens engineer, and while I've checked my data against several sources, we all make mistakes.

Oh, and I DO acknowledge the fact that at these tiny sizes manufacturing tolerances, glass properties, and such interfere with the theoretical maximum resolution in a non-linear manner. There are of course other limitations to sensor design, too. I have provided lots of data and figures that I did not use; that is to be thorough and to give others a shortcut if they want to try to disprove my findings - please feel free! By the same token, let's not throw around emotionally based ideas and feelings. Prove them, as I have my ideas.
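A minimal sketch (Python) chaining the formulas above for the two baseline cameras, in case anyone wants to check or extend the numbers. The 550nm wavelength is my assumption (mid-spectrum green), and N.A. ≈ 1/(2 * f-number) is the standard approximation for a lens focused at infinity; everything else comes straight from the post.

Code:
import math

# Sketch of the formula chain above. Assumptions: 550nm light, and
# N.A. ~= 1 / (2 * f-number) for a lens focused at infinity.
WAVELENGTH_NM = 550.0

def effective_diameter_mm(f_mm, f_number):
    return f_mm / f_number                     # entrance-pupil diameter

def numerical_aperture(f_number):
    return 1.0 / (2.0 * f_number)              # infinity-focus approximation

def abbe_resolution_nm(wavelength_nm, na):
    return 0.61 * wavelength_nm / na           # R.P. = 0.61 * lambda / N.A.

def angle_of_view_deg(sensor_dim_mm, f_mm):
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * f_mm)))

cameras = [("FF, 200mm f/2.8, 36mm wide", 200.0, 2.8, 36.0),
           ("APS-C, 135mm f/2.8, 24mm wide", 135.0, 2.8, 24.0)]

for name, f_mm, fnum, width_mm in cameras:
    na = numerical_aperture(fnum)
    print(f"{name}: pupil {effective_diameter_mm(f_mm, fnum):.1f}mm, "
          f"N.A. {na:.3f}, "
          f"Abbe limit {abbe_resolution_nm(WAVELENGTH_NM, na) / 1000:.2f}um, "
          f"horiz. AOV {angle_of_view_deg(width_mm, f_mm):.1f} deg")

# Both lenses are f/2.8, so the diffraction-limited spot on the sensor
# (~1.88 microns) is identical for both, and well below either photosite
# size (8 or 12 microns) - i.e., neither sensor here is close to
# outresolving a theoretically perfect lens.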
#130
Member
Join Date: Jan 2006
Posts: 91
Basically what you have done is completely ignore what I have said, block out the fact that it is correct, and find a different way to say the same thing. You have done zero to show any fault in what I have said, only that you prefer your opinion over mine.
This is an example of your reasoning method: "If someone runs into the back of your car, it's their fault because they were driving too close." "If you run your car into the back of someone, it's their fault because they braked too hard." Selective reasoning: if you run your car into someone, it's your fault, full stop, end of story. This however doesn't stop you from blocking this factor from your mind as if it doesn't exist, which is exactly what you have been doing to my argument; you block it out even though you know it's correct. You can block it out and pretend you haven't seen the truth - you know you were driving too close behind, you know it's wrong, but still you can block that out by substituting your own version of reality. The truth is irrelevant, as long as you can find another version of your own reality. It's a very bizarre human characteristic that people will go to such lengths to prove themselves right, as if the truth is so important to them, but when it is staring them in the face they can completely ignore it: "I wasn't too close, you braked too hard." How can someone see the correct answer as so important, yet have no interest in it when it is given to them? Quite bizarre!

In this case it's the ever-faithful use of jargon, technical terms and quotes from books. People use this all the time because they think it makes them look like they know what they are talking about. The problem is that you don't understand it or what relevance it has - since it has none - it just looks "impressive" because of some jargon, but is basically blabbering on about nothing of any relevance.

I was talking about the properties of light and how they affect detail. Basically, as was clearly demonstrated, the further you move away from an object, the less light from the source you are getting and the more AMBIENT LIGHT. Whatever you are blabbering on about does nothing to address this FACT. All you do is read through what I have said and go "he still doesn't agree with me, therefore he is still wrong, therefore I will explain myself a different way". Because you haven't tried to comprehend anything I have said, you are not even talking about the same subject; your post has as much relevance as talking about truck tyres.