Steve's Digicams Forums > Digital Cameras (Point and Shoot) > Nikon

Old Oct 1, 2003, 10:16 AM   #21
Senior Member
 
Join Date: Dec 2002
Posts: 5,803
Default

I agree with NHL. The slow part of most (all?) cameras is writing the data to the CF card. Uncompressed TIFF is so large that writing it out makes it a very slow (I would say almost useless) format for a camera. RAW is larger (and therefore slower) than JPEG, but it has its benefits, so people use it. Personally, if the camera supports RAW, TIFF is almost useless. I assume TIFF support is a checkbox item for the marketing people.

Some people claim amazing wonders in photo recovery with RAW formats. I've seen examples where the difference is so small as to not be worth the effort. I assume the truth is somewhere in between: subjective, and picture specific. I haven't switched to RAW, but I expect I probably will. My fear is that it will become another barrier for me when editing pictures. I don't enjoy that part of the process (even though it doesn't take that long), so I put off doing it... adding RAW conversion might only make me do it less frequently. I like looking at the pictures and seeing which ones worked and which didn't... but not the true post-processing stuff.

There was a discussion about parts of what you asked several months ago. The general conclusion was that the camera might do things to a picture before certain noise reduction and filtering is done, which might produce better (or at least different) results than doing it after all that in PS. On the other hand, it is hard to deny that PS can use the computational power of your computer to run better algorithms on the picture. That usually produces better results (but the other approach shouldn't be ruled out).

All RAW formats are company specific, and often they are camera specific. This is especially true when a new sensor enters the mix (the D2h has a brand-new sensor made by Nikon). For example, the Canon 10D's RAW format was different from all the other Canon formats. It was close enough that you could tweak things and get it to work, but all the software that supported RAW conversion had to put out a new version to support it for real.

NHL

As you hinted at, the real benefit of having 48-bit color is in rounding and math. You have more to work with and fewer rounding errors. It's the same with computer graphics cards: their internal data paths keep getting wider even though most people don't run their monitors above 32-bit depth (and the cards' internals are wider than that). It's purely because computer games nowadays do so many rendering passes on a scene that the rounding errors accumulate and become artifacts. With more bits to work with, those errors stay small and are then downsampled away.
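
To make the rounding point concrete, here is a tiny sketch (my own toy example, not anything from a real imaging pipeline) showing how repeated 8-bit rounding drifts compared to doing the math at higher precision and rounding once at the end:

Code:
# Apply the same small brightness tweak many times to one channel value,
# once rounding to an 8-bit integer at every step and once keeping full
# floating-point precision until the end.

value = 100            # an 8-bit channel value (0-255)
gain = 1.03            # a 3% brightness boost, applied repeatedly

eight_bit = value
for _ in range(20):
    eight_bit = min(255, round(eight_bit * gain))    # round to 8 bits each pass

high_precision = min(255, round(value * gain ** 20)) # keep precision, round once

print(eight_bit, high_precision)   # the two results drift apart slightly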

Eric

P.S. There is a reason that Intel was funding multimedia companies in the mid-90's: they needed to generate a market for their new processors, which most people didn't need. Gaming and home video/picture editing help sell new processors.
eric s is offline   Reply With Quote
Old Oct 1, 2003, 8:45 PM   #22
Senior Member
 
Join Date: Sep 2003
Posts: 137
Default

One of the nicest things about this forum is that the more people contribute, the more the "pieces of the jigsaw puzzle" get filled in, and eventually it all starts to make sense.

My brain has figured out a lot of what you said, but before I ask what could be some really dumb questions, let me ask the following.....

My Olympus is supposed to be "4 megapixels". That's a starting point. The "best" resolution it can capture in 'tiff' or 'jpg' is 2240 x 1680. Well, 2240 x 1680 = 3,763,200 which is pretty close to four million.

So, question #1 - is that where the four million came from? If you want to know how many megapixels your camera is capable of shooting in, do you take the maximum resolution as I did, and multiply?


I know, the maximum resolution means on the "image sensor", not on the saved image, but I'm guessing that the sensor is 4 million pixels, and the best resolution is the same, so the image represents each one of the pixels in the image sensor, when the picture was taken. .....is this right?


------------------------------------------------------------------

eric - I wouldn't call the 'tiff' format useless, as my Photoshop program will happily work with it, but not with 'raw'. So, for people like me, who don't know enough to really know what's going on, the "tiff" is more useful than the "raw", but then again, my huge "tiff" file doesn't look any better than my much smaller "jpg" file shot in SHQ mode. I've spent a lot of time comparing the two, and maybe I don't know what to look for, but I can't see any advantage (on my Olympus e-10) in using 'tiff' rather than SHQ "jpg".


I've got a few more questions, but before I go any further, I'd like to find out the answer to the above question, as if it's a wrong assumption on my part, then my follow-up questions are pretty dumb.
mikemyers is offline   Reply With Quote
Old Oct 1, 2003, 9:18 PM   #23
NHL
Senior Member
 
NHL's Avatar
 
Join Date: Jun 2002
Location: 39.18776, -77.311353333333
Posts: 11,547
Default

Quote:
My Olympus is supposed to be "4 megapixels". That's a starting point. The "best" resolution it can capture in 'tiff' or 'jpg' is 2240 x 1680. Well, 2240 x 1680 = 3,763,200 which is pretty close to four million.

So, question #1 - is that where the four million came from? If you want to know how many megapixels your camera is capable of shooting in, do you take the maximum resolution as I did, and multiply?
Well... Yes and No. The 2240 x 1680 is the actual final image size which is close, but the sensor/CCD is able to capture much more than that...

Usually a CCD is rated in absolute terms, i.e. the total number of pixels the sensor is designed for and capable of, but during manufacturing not all the pixels end up good (and are mapped out), and the peripheral ones aren't used, so you end up with an effective number of pixels which is slightly less. Then from these you get the interpolated final image size, which is your 2240 x 1680!
http://www.dpreview.com/learn/key=effective+pixels
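
Just to put rough numbers on that hierarchy (a sketch only; the total/effective figures below are made-up placeholders, not the E-10's actual spec sheet, and only the 2240 x 1680 comes from this thread):

Code:
total_pixels = 4_000_000       # what the sensor is designed with (hypothetical figure)
effective_pixels = 3_900_000   # after bad/peripheral pixels are excluded (hypothetical figure)
final_image = 2240 * 1680      # the image the camera actually writes out

print(final_image)                                      # 3,763,200 -> sold as "4 megapixels"
print(total_pixels >= effective_pixels >= final_image)  # True: total >= effective >= final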


Quote:
I know, the maximum resolution means on the "image sensor", not on the saved image, but I'm guessing that the sensor is 4 million pixels, and the best resolution is the same, so the image represents each one of the pixels in the image sensor, when the picture was taken. .....is this right?
Right, except you only get about 1/3 of the resolution in each monochrome color (if it's RGB). The final total # of pixels is in color, but each one is an interpolation of all 3 monochrome pixels, which I described earlier in my previous post:
http://www.dpreview.com/learn/key=colour+filter+array


Quote:
I can't see any advantage (on my Olympus e-10) in using 'tiff' rather than SHQ "jpg"
This is why Eric and most people call it useless. Use RAW for max quality, and JPEG for space efficiency. TIFF is neither, and is only good if you want to archive your final picture and don't want any compression happening to it... Your TIFF is only 8-bit per channel; if it were 16-bit, only limited functions would be allowed in the current version of Photoshop!
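
Some rough file-size arithmetic for a 2240 x 1680 image shows why (my own ballpark figures; the JPEG number especially depends on compression settings and scene content):

Code:
pixels = 2240 * 1680                  # 3,763,200

tiff_bytes = pixels * 3               # 8 bits x 3 channels, uncompressed
raw_bytes = pixels * 12 // 8          # ~12 bits per photosite, ignoring packing/header overhead
jpeg_bytes = tiff_bytes // 5          # assume very roughly 5:1 compression for a high-quality JPEG

for name, size in [("TIFF", tiff_bytes), ("RAW", raw_bytes), ("JPEG", jpeg_bytes)]:
    print(name, round(size / 2**20, 1), "MB")   # ~10.8, ~5.4, ~2.2 MB respectively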
NHL is offline   Reply With Quote
Old Oct 1, 2003, 11:04 PM   #24
Senior Member
 
Join Date: Dec 2002
Posts: 5,803
Default

mikemyers

I'm talking about a camera producing TIFF. Many cameras have to wait for the TIFF to be written to the flash card before taking another picture. Because there is so much data in a TIFF, that can be upwards of 40 seconds on some cameras, and you are locked out of using the camera for that entire 40 seconds. To me, that makes it unacceptable, and I'd say for most people too. (It should be said that the 40 seconds comes from the D100. The D2h is a different class of camera... it might take less time, I don't know.)
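
A quick back-of-the-envelope sketch of where a number like that comes from (the card speed is my guess at a typical 2003-era CF card, not a measured figure):

Code:
image_pixels = 3008 * 2000           # roughly a D100-class 6 MP image
tiff_mb = image_pixels * 3 / 2**20   # 8-bit RGB, uncompressed -> about 17 MB

card_mb_per_sec = 0.45               # assumed sustained CF write speed (a guess)
print(round(tiff_mb / card_mb_per_sec))   # ~38 seconds, in the ballpark of "40 seconds"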

TIFF as a format is very nice. I use it myself. But why cameras produce pictures in it is a bit beyond me now that RAW support has grown; RAW seems to be supplanting TIFF as an "out of camera" format. If someone ever produces a camera which can write TIFF as fast (or nearly as fast) as JPEG or RAW, that would be great! More color depth without having to learn how to do the RAW conversion. Very nice.

And I also agree with both you (mikemyers) and NHL about the difference between TIFF and JPEG: there really isn't one, so why put up with the in-camera slowness of TIFF?

Eric
eric s is offline   Reply With Quote
Old Oct 2, 2003, 12:42 AM   #25
Senior Member
 
Join Date: Sep 2003
Posts: 137
Default

Great - I feel like I'm in school.... I think you both have answered the biggest question I was wondering about. Let's see if I got it right.

I think you're telling me that the image sensor has some number of pixels, and somehow they capture an image. I think you're telling me that the "raw" image is right from the image sensor - whatever the camera "recorded" electronically is what goes into the "raw" file. It can be processed, if desired, in the camera or on a computer it gets transferred to.

Then, to create a useable image that can be viewed, the camera (or computer) has to take all this data and spread it out into say, 2240 x 1680 (or whatever) number of squares laid out in a geometric pattern, which represents the image. Those 2240 x 1680 squares (pixels?) can be viewed on a CRT screen or any other kind of viewing device.

If this is true, then is that the reason why "raw" is better than the finished image - because the data has to be manipulated and processed to distribute it over those 3,763,200 squares, and it's obvious that some quality will be lost in doing that? If so, then it makes perfect sense why it's "better" to do as much manipulating as possible while the image is in "raw" format, and only when you're all done save it out to a different format that can be viewed, printed, or whatever.

Yes?





Here's a sticky question. I took one of my images and zoomed in as close as I could, so a bunch of those boxes (pixels) can be seen quite obviously. I know you can't "see" a "raw" image, but if you could, would it look like this, or would the boxes be quite different? Specifically, the "grey" boxes that go all around the number "2"... if this image was made with 100 times more "pixels", if I can call them that, would the "2" still "look" like it's made up of lots of colored boxes, or would it then look more like a "2", maybe blurry, but not "pixelated"? (I imagine that if it was captured on film, there would be no "boxes" at all, just a slightly blurry "2".)

Here's the image.... and I guess the question is how much gets "lost" when the millions of pixels of data on the image sensor get processed to fill up a specific number of boxes for a viewable image, in my example, 3,763,200 of them?

[attached image: an extreme zoom-in on the number "2", showing the individual pixel boxes]

mikemyers is offline   Reply With Quote
Old Oct 2, 2003, 12:53 AM   #26
Senior Member
 
Join Date: Sep 2003
Posts: 137
Default

(You know, if what I just posted is correct, and the new D2h Nikon can produce images of 2464 x 1632, compared to my Olympus e-10 which does 2240 x 1680, I think I'm going to love the image quality even though it's "only" rated at 4 megapixels. I've made 24" x 36" enlargements for a show, from my e-10, and everyone was amazed at how well they came out. Heck, I was amazed - I would have had a pretty hard time doing that well with 35mm film!!!

However, suppose I've now got that "raw" (or Nikon format) image stored on my computer. Is there any reason, on the computer, as to why I couldn't save it out to a format that is "bigger" than the camera specification of 2464 x 1632? If I didn't want a gigantic enlargement to look "pixelated", couldn't I have my computer save it out to any size I wanted? This wouldn't make the image any better than what the camera recorded, but it sure would make for a better print, should I ever want to make one that's, say, six feet by eight feet in size...)
mikemyers is offline   Reply With Quote
Old Oct 2, 2003, 5:36 AM   #27
NHL
Senior Member
 
NHL's Avatar
 
Join Date: Jun 2002
Location: 39.18776, -77.311353333333
Posts: 11,547
Default

Quote:
I know you can't "see" a "raw" image, but if you could, would it look like this, or would the boxes be quite different?
Mike - You're still not grasping that each pixel from the CCD is not in color, but monochrome R, G, or B, at roughly 1/3 of the resolution! I've simplified a bit here, but if you refer back to the previous link from dpreview, these pixels are arranged in a Bayer pattern with actually more G than the other two! The picture you posted is the result after de-Bayering (or demosaicing), i.e. after the monochrome pixels have been recombined into full-color pixels!

i.e. for simplification, from the CCD/sensor:
1/3 resolution @ 12-bit monochrome R
1/3 resolution @ 12-bit monochrome G
1/3 resolution @ 12-bit monochrome B

----- in-camera or on-PC computation ------>

to an image which is now interpolated and recombined into the full-resolution picture that you posted:
full resolution, full color (8-bit R, 8-bit G, 8-bit B)

So if a RAW image is displayed, it'll actually be much coarser than your picture (i.e. roughly 1/3 of the resolution for each color), with only the 3 primary colors (no yellow, no peach/purple, not even black/white), but the color depth is 12 bits each! Your image is the result of the computation at the full pixel count, where every pixel is no longer monochrome but finer and in full color, i.e. all 3 colors combined with all their shades, but at only 8 bits of depth per color, for a grand total of 24 bits per pixel!
Capiche?
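
If it helps, here's a toy sketch of that idea (entirely my own simplification; real camera firmware is far more sophisticated): each photosite holds one 12-bit monochrome value, and the two missing colors at each position are guessed by averaging neighbouring photosites of that color, then everything is scaled down to 8 bits.

Code:
# Which color each photosite records, in a standard RGGB Bayer layout:
def bayer_color(row, col):
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# A tiny 4x4 "sensor readout": one 12-bit number (0-4095) per photosite.
raw = [
    [4000,  512, 3900,  500],
    [ 600,  100,  610,   90],
    [3950,  520, 3800,  505],
    [ 590,   95,  605,   85],
]

def demosaic_pixel(row, col):
    """Estimate full R, G, B at one photosite by averaging the nearby samples."""
    sums = {"R": 0, "G": 0, "B": 0}
    counts = {"R": 0, "G": 0, "B": 0}
    for r in range(max(0, row - 1), min(len(raw), row + 2)):
        for c in range(max(0, col - 1), min(len(raw[0]), col + 2)):
            color = bayer_color(r, c)
            sums[color] += raw[r][c]
            counts[color] += 1
    # scale 12-bit (0-4095) down to 8-bit (0-255) for the finished image
    return tuple(min(255, round(sums[k] / counts[k] / 16)) for k in "RGB")

print(demosaic_pixel(1, 1))   # one full-color (R, G, B) pixel guessed from its neighbours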

BTW just to confuse you, the Foveon sensor is full color/pixel! :lol:


Quote:
as to why I couldn't save it out to a format that is "bigger" than the camera specification of 2464 x 1632?
You're doing this all the time when you re-size the picture! :lol: :lol: :lol:

Instead of resizing it down, size it up. Of course you can't create data points where none existed before, but some fractal programs will interpolate quite well with even colors/less detail such as skin tones...
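
Here's a bare-bones sketch of what sizing up by interpolation means (my own toy code, not what any real resizing program ships): the new in-between pixels are only blends of the original samples, so no real detail is created, just smoother transitions.

Code:
def upsample_row(row, factor):
    """Linearly interpolate a row of pixel values to 'factor' times its length."""
    out = []
    for i in range(len(row) * factor):
        pos = i / factor                          # where this new pixel falls in the old row
        left = min(int(pos), len(row) - 1)
        right = min(left + 1, len(row) - 1)
        frac = pos - left
        out.append(round(row[left] * (1 - frac) + row[right] * frac))
    return out

print(upsample_row([10, 200, 60], 3))
# -> [10, 73, 137, 200, 153, 107, 60, 60, 60]: three times the pixels, same underlying detail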
NHL is offline   Reply With Quote
Old Oct 2, 2003, 8:25 AM   #28
Senior Member
 
Join Date: Sep 2003
Posts: 137
Default

You know, I read that link twice last night, but somehow it didn't sink in, even though you've mentioned it before...

OK, for the "raw" image, I've got three "much coarser" sets of data, one for each of the three colors (R G B), but the data for each "color" is 12-bit, meaning many more shades of gray representing each color.

So, simplifying things, based on what I think you just said, if I could "see" just the "red" data from out of a "raw" image, I'd have a much coarser "grid", with much larger "boxes", all the boxes would be "gray", and I'd have many more shades of gray than are used later on in the finished (8-bit) image.

For the "finished" image, made up of lots of "colored and shaded boxes" called pixels, each of those "boxes" contains 8 bits of red data, 8 bits of green data, and 8 bits of blue data, meaning each of the four million boxes contains 24 bits of data, per pixel, and I've got four million of those pixels (at the resolution I picked earlier).




If I've (finally) got that right, then maybe the real reason professionals like dealing with the "raw" image is that they are still working with 12-bit data for each of the three colors? .....and similarly, the reason why Photoshop won't work with these "raw" images is that each camera manufacturer, for each camera model, has a different specification for what their "raw image" is, so there's no "standard", and that's why Photoshop doesn't understand them? .....that, and some use a GRGB sensor, others use a CYGM sensor, and there are probably lots of other things unique to each manufacturer/model?



By the way, I read that link a couple of times, and even though I missed the things that now seem obvious (you pointed out what I missed), the last part of it still doesn't make sense to me. I see where they take the red, green, and blue channel pixels, and combine them to get a "combined" result, but I'm still lost on how they get from that "combined" image to the "finished" image below it. It says they use a "demosaicing algorithm" which looks at the surrounding pixels to somehow figure out what to do. Well, just considering the top right hand corner, how is that going to change a tan area into a white area? Even more puzzling to me, is if that represents a white area, why isn't it white when the three colors are combined?
mikemyers is offline   Reply With Quote
Old Oct 2, 2003, 9:25 AM   #29
NHL
Senior Member
 
NHL's Avatar
 
Join Date: Jun 2002
Location: 39.18776, -77.311353333333
Posts: 11,547
Default

Quote:
So, simplifying things, based on what I think you just said, if I could "see" just the "red" data from out of a "raw" image, I'd have a much coarser "grid", with much larger "boxes"
... Almost right. The boxes are not larger; they are the same size as the finished pixels, but only the red ones will show up, and the green and blue ones will be a void, i.e. a larger blank in between, since a red filter will only look for the red color... The same is true for the other two colors. You're seeing three different sets of 12-bit grey tones, one for each color, with the three together making up the whole...

When all three colors are combined in the demosaic process, the voids in each color get "filled in" by a guess based on the adjoining colors/pixels... You know, most of this explanation (and those pictures) is oversimplified... there are people researching this all the time! :lol:
NHL is offline   Reply With Quote
Old Oct 2, 2003, 11:33 AM   #30
Senior Member
 
Join Date: Dec 2002
Posts: 5,803
Default

Also, you are right on about why PS doesn't (before the Adobe RAW converter plugin) like RAW. It's just a name for loads of different file formats which are all (basically) proprietary. Technically, since many camera manufacturers don't make their own sensors but buy them, many of the RAW formats might be very close to each other... but very close isn't good enough for software.

Now you can buy a plugin for PS (from Adobe) which will convert the RAW files of many cameras into something usable (I assume TIFF or JPEG, I don't know). I've heard mixed things about it. I get the impression that if you get the settings right, the results aren't bad; if not, you get just OK results. A newer version of that plugin is built into the new version of Photoshop (Photoshop CS, which was PS 8.0).

Just to add to the info overload, the actual term for the part of the sensor that records the light is a "photosite". So you have a layout of photosites on the chip, each recording one wavelength band of light (Red or Yellow, for example), and those are assembled into pixels in software (or firmware). If I'm wrong with that description, I'm sure someone will step up and correct me, but I believe it's correct.

As to the aliasing effect on the 2 in your picture: that will happen with every camera if you zoom in far enough (I don't know how far you zoomed in this example, but I assume very far if a single pixel is that large). They do this to smooth out the transitions in curves and lines so they look better. The point is that if you have more data (either from a higher-resolution camera, or because you enlarged the image and used interpolation to add more data) you can print with more unique pixels per inch, which should make your printed pictures look better. Of course, after a certain point the picture won't benefit from more data. And the brain is good at filling in details, so not all pictures require more data to make good prints at higher dots-per-inch.
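
The rough print-size arithmetic behind that (my own numbers; how many pixels per inch counts as "good enough" depends on viewing distance and the printer):

Code:
width_px, height_px = 2240, 1680     # the E-10's native image size
for ppi in (300, 150, 100):
    inches_w = width_px / ppi
    inches_h = height_px / ppi
    print(ppi, "ppi ->", round(inches_w, 1), "x", round(inches_h, 1), "inches")

# 300 ppi -> 7.5 x 5.6 inches; 150 ppi -> 14.9 x 11.2; 100 ppi -> 22.4 x 16.8.
# A 24 x 36 print needs either fewer pixels per inch or interpolated extra pixels.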

Eric
eric s is offline   Reply With Quote
 