Old Sep 28, 2005, 11:32 PM   #1
Senior Member
 
LINEBACKER 2's Avatar
 
Join Date: Aug 2004
Posts: 423
Default

I own a Kodak DX7630, which, for a 6.1 MP camera, is decent. However, as I've learned more about digital cameras, I've found that the compression the camera applies to my images is actually quite brutal, even on the "fine" setting.

Are all point-and-shoot cameras this harsh with compression?

Steve


LINEBACKER 2 is offline   Reply With Quote
Old Sep 28, 2005, 11:48 PM   #2
Senior Member
 
Join Date: Aug 2005
Posts: 283
Default

My Olympus C-7070 in its SHQ mode (its highest setting) outputs a file equivalent in size to the highest quality JPEG in Photoshop, level 12.

How big are your files?
David French is offline   Reply With Quote
Old Sep 29, 2005, 2:58 AM   #3
Administrator
 
Join Date: Jun 2003
Location: Savannah, GA (USA)
Posts: 22,378
Default

LINEBACKER 2 wrote:
Quote:
I own a Kodak DX7630, which, for a 6.1 MP camera, is decent. However, as I've learned more about digital cameras, I've found that the compression the camera applies to my images is actually quite brutal, even on the "fine" setting.

Are all point-and-shoot cameras this harsh with compression?
My best guess (and this is only a guess) is that Kodak is actually varying the amount of compression based on some kind of internal scene analysis.

The amount of compression seems to vary in a peculiar manner (file sizes don't seem to change in the same way you would expect them to). You may have one scene that's very detailed with a file size that would be indicative of very little compression.

Yet you may have another scene that's relatively detailed, too, but its file size may be significantly smaller, with more of a difference than I'd expect to see from typical JPEG algorithms.

There is something odd going on with how they seem to do it. I've often suspected something similar was going on with how they handle color, too (a peculiar way some of their cameras seem to handle sky color -- almost as if they are changing some kind of internal curves when they recognize sky in an image, to make it appear a deeper blue). It seems to be more than just saturation and contrast. Not all models exhibit this behavior.

My guess is that this scene analysis could also be tied back into the JPEG compression scheme (again, just guessing -- based more on a "gut feeling" than anything else, from images I've looked at from some of their models).

I've seen relatively detailed images that don't appear to be overcompressed, yet another photo from the same model, with some parts of the scene roughly the same, may show those same parts looking overcompressed. Hence my thinking that they must be looking at the scene as a whole, then applying more or less compression to the entire image based on some type of analysis.

I'm a bit puzzled by it (but I'm not sure that all of the problem is with the amount of JPEG compression, either).
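If anyone wants to test the quantization-table part of that theory, the tables are stored right in the JPEG file and can be read back out. Here's a minimal sketch using Python and the Pillow library (the file names are just hypothetical stand-ins for two shots from the same camera at the same quality setting):

from PIL import Image

# Hypothetical file names: two shots from the same camera, same quality setting.
for name in ("detailed_scene.jpg", "plain_scene.jpg"):
    im = Image.open(name)
    # For JPEGs, Pillow exposes the embedded quantization tables as a dict of
    # 64-entry tables (table 0 is normally luminance, table 1 chrominance).
    for table_id, table in im.quantization.items():
        # Larger entries mean coarser quantization (heavier compression).
        print(name, "table", table_id, "sum of entries:", sum(table))

If the tables differ between the two files, the camera really is varying the compression per scene; if they're identical, the file-size differences are coming from the scene content itself.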

JimC is offline   Reply With Quote
Old Sep 29, 2005, 10:33 AM   #4
Senior Member
 
Join Date: Sep 2005
Posts: 1,093
Default

A few years back, I did a detailed analysis of JPEG lossiness for a medical imaging product that my company produced. We wanted to get better compression (our raw images were about 100 MB per image, so they could bog down even a hospital LAN if you were moving them around). The final upshot was that JPEG is very hard to predict with respect to its lossy impact on images. If you know the relevant frequency content of the kind of image you care about, you can create optimized quant tables for the lossy filtering step that will leave the frequencies you care about largely intact. But even that is unbelievably difficult to do. Plus, for most imaging, the "magnification" will vary, so deciding what frequencies apply to a given image -- even if you know what they are in absolute terms -- is a very hard problem.
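To make the quant-table idea concrete: most JPEG encoders will let you supply your own quantization tables. Here's a minimal sketch using Python and the Pillow library; the tables and file names below are made up purely for illustration (a real application-specific table would be derived from the frequency content you actually care about):

from PIL import Image

# Toy 64-entry quantization tables: small values for the low-index (roughly
# low-frequency) coefficients, larger values toward the high-frequency end.
# Purely illustrative -- not tuned for any real application.
luma   = [min(255, 2 + i) for i in range(64)]
chroma = [min(255, 8 + 2 * i) for i in range(64)]

im = Image.open("source_image.png").convert("RGB")   # hypothetical input file
im.save("tuned.jpg", qtables=[luma, chroma], subsampling=0)  # custom tables, no chroma subsampling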

The final result of all this is that you just can't tell with JPEG (or any other lossy algorithm with which I am familiar). Even the size of the file is not a very good indicator of the subjective "quality" of the image. The stuff that we want to preserve varies from one image to the next, and is in the eye of the beholder. When that detail is diagnostic information, it is somewhat more objective and can be measured by controlled tests. But the plain fact is that you can't predict, for any specific image, whether diagnostically meaningful information will be lost. On average, you can say with high confidence for a given modality that a given compression won't hurt you. But if you're the outlier patient whose cancer was discarded by a lossy algorithm (or your baby loses the wonderful glint in his eye because of some stupid algorithm), the fact that "on average" the compression doesn't affect the image really doesn't mean much.
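As a rough illustration of why file size doesn't tell you much, you can re-save the same image at several quality settings and compare size against a crude distortion measure like PSNR. A sketch in Python with Pillow and NumPy ("photo.jpg" is just a placeholder, and PSNR is of course only a stand-in for subjective or diagnostic quality):

import io
import numpy as np
from PIL import Image

def size_and_psnr(img, quality):
    # Re-save img as JPEG at the given quality; return (bytes, PSNR in dB vs. the input).
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    size = len(buf.getvalue())
    buf.seek(0)
    a = np.asarray(img, dtype=float)
    b = np.asarray(Image.open(buf), dtype=float)
    mse = np.mean((a - b) ** 2)
    return size, 10 * np.log10(255.0 ** 2 / mse)

img = Image.open("photo.jpg").convert("RGB")   # placeholder file name
for q in (70, 85, 95):
    print(q, size_and_psnr(img, q))

Two images can land at the same file size with very different PSNR (and vice versa), which is the point: the byte count doesn't tell you what was thrown away.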

For what it's worth, in the medical imaging community, "large matrix images" (typical digital camera images would qualify -- video images would be considered "small matrix" and require a bit less compression) are considered "essentially" lossless if the JPEG compression is a factor of ten or less. Of course, the truncation of the image from 12 bits per channel to 8 bits per channel is not even part of the JPEG compression as such, and does not enter into these calculations. (In medical applications, we don't do that: 12-bit data remains 12-bit, although, of course, it must be scaled to 8 bits for display. But if the physician wants to change the window and level -- "brightness" and "contrast" within the image -- we can rescale to reveal more of the relevant data range on his display.) Unfortunately, prosumer JPEG data doesn't allow this -- probably because commercial computer programs are expecting 8-bits-per-channel JPEGs.
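For readers not from the medical side, "window and level" is just a linear remapping of part of the stored data range onto the display range. A minimal sketch of the idea in Python/NumPy, assuming 12-bit data stored in a 16-bit array (the function name and parameters are mine, not any particular product's API):

import numpy as np

def window_level_to_8bit(img12, window, level):
    # level is the center of the data range of interest, window its width;
    # everything outside the window is clipped to black or white.
    lo = level - window / 2.0
    hi = level + window / 2.0
    scaled = (img12.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(scaled, 0.0, 1.0) * 255.0).astype(np.uint8)

# Example: reveal detail in the darker half of a 12-bit (0..4095) image.
img12 = np.random.default_rng(0).integers(0, 4096, size=(512, 512), dtype=np.uint16)
display = window_level_to_8bit(img12, window=2048, level=1024)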

Sorry to blather on about this, but I find this stuff fascinating. I also looked at DAUB4 wavelet compression. The good thing about this kind of compression is that it appears to hold onto detail at higher compressions. The same level of quality that JPEG maintained at 10x compression was held by DAUB4 wavelet compression up to about 20x. In addition, unlike JPEG, you know how much compression you will achieve for any given compression setting. You can ask for a 10x compression, and that's what you get. With JPEG, you set the quant table, and the file sizes can easily vary for that setting by a factor of 2 or more. (BTW, if you have the option of generating the Huffman tables for the specific image, you can achieve as much as an additional factor of 2 compression with no increase in lossiness -- it just takes longer to compress.)
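The Huffman-table trick, at least, is easy to try from a PC: most libjpeg-based tools can regenerate the entropy tables for a specific image, which shrinks the file without touching the quantization step at all. A small sketch with Pillow ("photo.jpg" is again a placeholder); on typical camera JPEGs the gain is usually much smaller than the factor of 2 I mentioned for our data, but the mechanism is the same:

import io
from PIL import Image

def jpeg_bytes(image, **kwargs):
    buf = io.BytesIO()
    image.save(buf, format="JPEG", quality=90, **kwargs)
    return len(buf.getvalue())

img = Image.open("photo.jpg").convert("RGB")     # placeholder input
default_size   = jpeg_bytes(img)                 # stock (sample) Huffman tables
optimized_size = jpeg_bytes(img, optimize=True)  # Huffman tables generated for this image
print(default_size, optimized_size)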

The problem in a medical context for the wavelet stuff is that the artifacts it introduces ("ringing") can look organic, resulting in biopsies that were occasioned by the lossy compression rather than the patient's condition as revealed in the raw data set. With JPEG, the artifacts look very non-organic and do not result in needless procedures. However, in a context that does not have the same constraints as medicine, wavelets are much better compression routines. I imagine that, once JPEG2000 finally gets going, the cameras will switch over to that format. It will be a big win.
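If anyone wants to play with the wavelet idea, it's easy to sketch these days. Here's a rough illustration using Python with the PyWavelets package, where "db2" is the 4-tap Daubechies filter (the one usually called DAUB4), and the "keep only the largest 5% of coefficients" rule is just a stand-in for a real coder's quantization and entropy-coding stages:

import numpy as np
import pywt

def wavelet_sketch(img, wavelet="db2", keep=0.05, level=4):
    # Decompose a 2-D grayscale array, zero all but the largest `keep` fraction
    # of coefficients, then reconstruct. The zeroing stands in for compression.
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    threshold = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < threshold] = 0.0
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

The surviving coefficients would still need to be quantized and entropy-coded to get an actual file, but even this crude version gives a feel for how detail holds up compared with blocky JPEG artifacts at similar discard rates.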

tclune is offline   Reply With Quote
Old Sep 29, 2005, 10:56 AM   #5
Administrator
 
Join Date: Jun 2003
Location: Savannah, GA (USA)
Posts: 22,378
Default

I actually found some notes from someone that managed to "hack" the firmware in one of the Kodak models a while back (I don't remember which model).

They figured out how to change the quantization tables. But the only thing they ended up with was larger file sizes (with virtually no discernible difference in image quality) -- another reason I'm not convinced it's just a JPEG compression issue (although the "hack" may not have been doing what the hacker thought it was doing).


JimC is offline   Reply With Quote
Old Sep 29, 2005, 10:58 AM   #6
Senior Member
 
Join Date: Aug 2005
Posts: 283
Default

Fascinating, tclune -- thanks!
David French is offline   Reply With Quote
Old Sep 29, 2005, 11:13 AM   #7
Administrator
 
Join Date: Jun 2003
Location: Savannah, GA (USA)
Posts: 22,378
Default

JimC wrote:
Quote:
I actually found some notes from someone that managed to "hack" the firmware in one of the Kodak models a while back (I don't remember which model).

They figured out how to change the quantization tables. But the only thing they ended up with was larger file sizes (with virtually no discernible difference in image quality) -- another reason I'm not convinced it's just a JPEG compression issue (although the "hack" may not have been doing what the hacker thought it was doing).

I located the info on the hacked firmware.

In this thread, I quoted the person who modified the firmware's JPEG quantization tables in one of the Kodak models (it was the DX6340):

http://www.stevesforums.com/forums/v...mp;forum_id=18

He ended up reverting to the original firmware.


JimC is offline   Reply With Quote
Old Sep 29, 2005, 11:14 AM   #8
Senior Member
 
Join Date: Sep 2005
Posts: 1,093
Default

JimC wrote:
Quote:
I actually found some notes from someone that managed to "hack" the firmware in one of the Kodak models a while back (I don't remember which model).

They figured out how to change the quantization tables. But the only thing they ended up with was larger file sizes (with virtually no discernible difference in image quality) -- another reason I'm not convinced it's just a JPEG compression issue (although the "hack" may not have been doing what the hacker thought it was doing).


Yeah, quant tables are an interesting subject in themselves. If you read Pennebaker & Mitchell, they explain the justification for the sample quant tables that were included in the JPEG standard. The rationale has absolutely nothing to do with today's equipment (it had to do with the expected spatial resolution of monitors back when JPEG was being standardized). Nonetheless, these sample tables work really well in general on equipment that has nothing to do with the parameters for which they were developed.

There have been some application-specific quant tables developed within my field. They do seem to allow as much as a factor-of-two improvement in compression for the given application, but they don't "port" well to other conditions. Given that the original intent of both the sample quant tables and the sample Huffman tables in the standard was just to provide examples of each, they work surprisingly well over a large range of applications. I tried to develop improved tables, and they were better in limited applications but worse in others. Since it is hard to know exactly what the user will be doing, we decided to stick to the quant tables in the standard -- although it is always a win to generate your own Huffman tables if you can spare the time.

If I were going to change a quant table today, I would limit my attention to the chrominance tables, because they really beat the crap out of the chrominance data. That should be easy to improve on if you weren't unduly concerned about file size.
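For reference, here are the sample tables from Annex K of the standard that Pennebaker & Mitchell discuss. Note how the chrominance table flattens out at 99 almost immediately -- that's the "beating up" of the chrominance data I mean. A small Python snippet just to put them side by side (the table values are from the standard; the rest is only for illustration):

# Sample quantization tables from Annex K of the JPEG standard (row-major order).
LUMA = [
    16, 11, 10, 16, 24, 40, 51, 61,
    12, 12, 14, 19, 26, 58, 60, 55,
    14, 13, 16, 24, 40, 57, 69, 56,
    14, 17, 22, 29, 51, 87, 80, 62,
    18, 22, 37, 56, 68, 109, 103, 77,
    24, 35, 55, 64, 81, 104, 113, 92,
    49, 64, 78, 87, 103, 121, 120, 101,
    72, 92, 95, 98, 112, 100, 103, 99,
]
CHROMA = [
    17, 18, 24, 47, 99, 99, 99, 99,
    18, 21, 26, 66, 99, 99, 99, 99,
    24, 26, 56, 99, 99, 99, 99, 99,
    47, 66, 99, 99, 99, 99, 99, 99,
    99, 99, 99, 99, 99, 99, 99, 99,
    99, 99, 99, 99, 99, 99, 99, 99,
    99, 99, 99, 99, 99, 99, 99, 99,
    99, 99, 99, 99, 99, 99, 99, 99,
]

print("luminance:   mean", sum(LUMA) / 64, "max", max(LUMA))
print("chrominance: mean", sum(CHROMA) / 64, "max", max(CHROMA))
print("chrominance entries pinned at 99:", CHROMA.count(99), "out of 64")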


tclune is offline   Reply With Quote
Old Sep 29, 2005, 10:25 PM   #9
Senior Member
 
LINEBACKER 2's Avatar
 
Join Date: Aug 2004
Posts: 423
Default

My pictures are usually under 2 megs in size. For a 6.1 MP camera, isn't that a little small?

Steve


LINEBACKER 2 is offline   Reply With Quote
Old Sep 29, 2005, 11:07 PM   #10
Administrator
 
Join Date: Jun 2003
Location: Savannah, GA (USA)
Posts: 22,378
Default

LINEBACKER 2 wrote:
Quote:
My pictures are usually under 2 megs in size. For a 6.1 MP camera, isn't that a little small?
Nah... not really. Although some cameras produce larger files in their highest quality modes, a 2 MB file can be pretty decent (from a camera with a good sensor, supporting chipset, and image processing algorithms).

You really have to examine each camera on a case by case basis.

For one thing, depending on the scene content, an image may compress better and give you smaller file sizes. If you've got large areas of the frame that have the same colors without a lot of detail, the image can compress much better (it doesn't take as much space to represent what's being repeated in the same area).

More detail in the scene will not compress as well (taking more space to represent in a JPEG file).

Also, demosaic algorithms (how the camera converts the data from the CCD into a bitmap before processing and conversion to JPEG) can differ a lot between camera models. One camera may have better or worse algorithms than another, leaving more or fewer artifacts, more or less smoothing, etc.
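For anyone curious what "demosaic" actually means here: each sensor photosite records only one of R, G, or B through the Bayer color filter array, and the camera has to interpolate the two missing channels at every pixel. A bare-bones bilinear version for an RGGB pattern is sketched below in Python with NumPy/SciPy; real cameras use far more sophisticated (and proprietary) algorithms, which is exactly where the quality differences between models come from:

import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic_rggb(raw):
    # Naive bilinear demosaic of an RGGB Bayer mosaic (2-D float array) to RGB.
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1.0
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1.0
    g_mask = np.ones((h, w)) - r_mask - b_mask

    # Interpolation kernels: identity at sampled sites, neighbor averages elsewhere.
    k_rb = np.array([[0.25, 0.5, 0.25],
                     [0.50, 1.0, 0.50],
                     [0.25, 0.5, 0.25]])
    k_g  = np.array([[0.00, 0.25, 0.00],
                     [0.25, 1.00, 0.25],
                     [0.00, 0.25, 0.00]])

    r = convolve(raw * r_mask, k_rb, mode="mirror")
    g = convolve(raw * g_mask, k_g,  mode="mirror")
    b = convolve(raw * b_mask, k_rb, mode="mirror")
    return np.dstack([r, g, b])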

Noise can also impact file sizes. For example, some cameras I've seen with very large JPEG files really didn't have very good images (noise in a photo doesn't compress well and can cause larger JPEG file sizes). Images from some cameras with large areas of sky (which would normally compress well) sometimes have much larger files than you'd expect, just because of the sky noise produced by the camera's sensor and/or supporting chipset.
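The noise point is easy to demonstrate: save a flat frame and the same frame with simulated sensor noise at the same JPEG quality, then compare the byte counts. A quick sketch in Python with Pillow and NumPy (the exact sizes depend on the encoder, but the noisy version typically comes out several times larger):

import io
import numpy as np
from PIL import Image

def jpeg_size(arr, quality=90):
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format="JPEG", quality=quality)
    return len(buf.getvalue())

rng = np.random.default_rng(0)
h, w = 1200, 1600
flat  = np.full((h, w, 3), 120, dtype=np.uint8)    # big featureless "sky" area
noisy = np.clip(flat + rng.normal(0, 10, (h, w, 3)), 0, 255).astype(np.uint8)

print("flat: ", jpeg_size(flat), "bytes")
print("noisy:", jpeg_size(noisy), "bytes")   # same scene, much larger file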
JimC is offline   Reply With Quote
 