For people interested in this sort of technology, the August 2006 issue of IEEE Computer has a section on computational photography. Apart from some generally interesting material (e.g. synthetic-aperture cameras with an effective f/56 (yes, 56, not 5.6) aperture), there's a discussion of various technologies you can use to do this sort of thing. One is an HDR sensor that works either by having multiple sites with different sensitivities to light (a bit like the Fuji sensors) or by exposing different sets of sites on the sensor to the scene over different integration times. The result (after off-camera postprocessing) is the ability to generate HDR images without bracketing.
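The multiple-integration-times idea is essentially exposure bracketing done on-chip: each site's value divided by its integration time is an estimate of scene radiance, and you keep the longer-exposure estimate wherever it hasn't clipped. A rough sketch of that merge step (the function name, weighting scheme, and saturation threshold here are all made up for illustration, not from the article):

```python
import numpy as np

def merge_hdr(short, long_, t_short, t_long, sat=0.95):
    """Combine a short and a long exposure of the same scene into one
    radiance map. Pixel values are assumed linear in [0, 1]; 'sat' is
    the level above which the long exposure is treated as clipped.
    (Illustrative sketch only -- not the scheme from the article.)"""
    short = np.asarray(short, dtype=float)
    long_ = np.asarray(long_, dtype=float)
    # Per-pixel radiance estimate: recorded value / integration time.
    r_short = short / t_short
    r_long = long_ / t_long
    # Trust the long (less noisy) exposure except where it clipped.
    w_long = np.where(long_ < sat, 1.0, 0.0)
    return w_long * r_long + (1.0 - w_long) * r_short

# e.g. merging a 1/1000 s and a 1/60 s capture of the same scene:
# hdr = merge_hdr(img_fast, img_slow, 1/1000, 1/60)
```

Real pipelines use smoother per-pixel weights and noise models, but the principle is the same whether the two exposures come from two shots or from interleaved sensor sites.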
A somewhat less useful technique is to interpose a micromirror array between the lens and the detector and use the mirrors to shut down over-bright pixels. Unfortunately, since each mirror is binary (either on or off), "dimming" a pixel means losing all of the light to that pixel rather than attenuating it.
All of this stuff is purely experimental technology (and unlikely to make it into cameras any time soon unless someone figures out how to rewrite the laws of physics/optics), but it's a cool read nonetheless.