I actually found some notes from someone who managed to "hack" the firmware in one of the Kodak models a while back (I don't remember which model).
They figured out how to change the quantization tables, but the only thing they ended up with was larger file sizes, with virtually no discernible difference in image quality -- another reason I'm not convinced it's just a JPEG compression issue (although the "hack" may not have been doing what the hacker thought it was doing).
Yeah, quant tables are an interesting subject in themselves. If you read Pennebaker & Mitchell, they explain the justification for the sample quant tables that were included in the JPEG standard. The rationale has absolutely nothing to do with today's equipment -- it had to do with the expected spatial resolution of monitors back when JPEG was being standardized. Nonetheless, these sample tables work really well in general on equipment that has nothing to do with the parameters for which they were developed.

There have been some application-specific quant tables developed within my field. They do seem to allow as much as a factor-of-two improvement in compression for the given application, but they don't "port" well to other conditions. Given that the original intent of both the sample quant tables and the sample Huffman tables in the standard was just to provide examples of each, they work surprisingly well over a large range of applications. I tried to develop improved tables, and they were better in limited applications but worse in others. Since it is hard to know exactly what the user will be doing, we decided to stick with the quant tables in the standard -- although it is always a win to generate your own Huffman tables if you can spare the time.

If I were going to change a quant table today, I would limit my attention to the chrominance tables, because they really beat the crap out of the chrominance data. This should be easy to improve on if you weren't unduly concerned about file size.
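For anyone who wants to poke at this: below is a sketch (in Python, as an illustration only) of the sample luminance and chrominance tables from Annex K of the JPEG standard (ITU-T T.81), plus the quality-scaling rule that libjpeg-style encoders apply to them. The table values are straight from the standard; the scaling formula follows the IJG libjpeg convention, and the mean-value comparison at the end is just my quick way of showing how much harder the chrominance table quantizes than the luminance one.

```python
# Sample quantization tables from Annex K of the JPEG standard (ITU-T T.81),
# in natural (row-major) order. Larger values = coarser quantization.
LUMA = [
    16, 11, 10, 16,  24,  40,  51,  61,
    12, 12, 14, 19,  26,  58,  60,  55,
    14, 13, 16, 24,  40,  57,  69,  56,
    14, 17, 22, 29,  51,  87,  80,  62,
    18, 22, 37, 56,  68, 109, 103,  77,
    24, 35, 55, 64,  81, 104, 113,  92,
    49, 64, 78, 87, 103, 121, 120, 101,
    72, 92, 95, 98, 112, 100, 103,  99,
]

CHROMA = [
    17, 18, 24, 47, 99, 99, 99, 99,
    18, 21, 26, 66, 99, 99, 99, 99,
    24, 26, 56, 99, 99, 99, 99, 99,
    47, 66, 99, 99, 99, 99, 99, 99,
    99, 99, 99, 99, 99, 99, 99, 99,
    99, 99, 99, 99, 99, 99, 99, 99,
    99, 99, 99, 99, 99, 99, 99, 99,
    99, 99, 99, 99, 99, 99, 99, 99,
]

def scale_table(base, quality):
    """Scale a base table by a 1-100 quality setting, IJG libjpeg style.

    quality 50 reproduces the base table; 100 collapses everything to 1
    (near-lossless); lower settings multiply the table up.
    """
    quality = max(1, min(100, quality))
    s = 5000 // quality if quality < 50 else 200 - 2 * quality
    return [max(1, min(255, (q * s + 50) // 100)) for q in base]

# The chrominance table quantizes much more coarsely on average --
# which is the "beats the crap out of the chrominance data" effect.
luma_mean = sum(LUMA) / 64     # ~57.6
chroma_mean = sum(CHROMA) / 64  # ~86.0
```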