Camera Developments

Desperation is definitely having useful effects in the camera market. Having recently succumbed to temptation and bought a D7000, I’ve been avoiding camera news (and concomitant buyer’s remorse).

Q-Branch’s Latest Silly Gadget

Pentax has beaten Nikon to the small-sensor interchangeable-lens punch with its Q-series. Unfortunately, a 1/2.3″ sensor is, in my opinion, a bad, bad choice. It's smaller than necessary to get the lens size down (the initial lens offering does look tiny, though the mount seems superfluously large and the lens itself far deeper than necessary, essentially a giant lens shade), and it's smaller than the sensors in enthusiast compacts like the XZ-1 and LX-5. I just don't think serious shooters will pick an expensive camera with a new lens system and a 1/2.3″ sensor over a cheaper camera with a good fixed lens and a better sensor, whether as a primary or secondary camera.

Oh, and I think it’s the butt-ugliest design I’ve seen in years (although it looks better in black, assuming you don’t pop out the flash). It’s one thing to make your camera look like a classic rangefinder, but the Q looks more like the clunky Russian Leica knockoff I owned as a teenager.

A Pen for the Pixel Peepers

Olympus has released a slew of new Micro 4/3 cameras, and the E-P3 in particular appears to address every possible complaint about earlier cameras, namely:

  • autofocus speed
  • display resolution and quality
  • video resolution and data rate
  • high ISO performance

This means that Olympus has finally released a compact Micro Four Thirds camera with fast autofocus, sensor-shift stabilization, a good display, and (judging from the high-ISO comparison shots on dpreview) competitive low-light performance. There's no question it trails the NEX and X100, but it's at least in the fight, which is adequate for most users. It makes me wish Apple actually would release a Micro Four Thirds iPhone.

Living Pictures

Lytro is a company commercializing a Stanford research project (one of the committee members who signed the dissertation in question (PDF) is none other than Mark Horowitz of Andreessen Horowitz). The basic idea is that rather than focusing the image with a lens, you record both the color and direction of incoming photons (using microlenses over the sensor). Then, with a whole bunch of math, you can compute an image with more or less depth of field, focused at whatever distance you like.

All this lets you make the same kinds of tradeoffs as in normal photography (use more samples and get less depth of field, or use fewer samples and get more), but you can make those tradeoffs at processing time, or create interactive images with varying focus and depth of field. In theory, you could create 3D scenes with eye tracking to simulate depth of field and focus point based on what you look at.
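To make the "whole bunch of math" a bit more concrete: the standard trick for synthetic refocusing is shift-and-add over the sub-aperture views the microlens array gives you. This is a rough sketch of that idea in Python, not Lytro's actual pipeline; the array layout, the `alpha` parameterization, and the use of `np.roll` in place of proper sub-pixel interpolation are all simplifying assumptions of mine.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocusing by shift-and-add (illustrative sketch).

    light_field: array of shape (U, V, H, W) -- a U x V grid of
                 grayscale sub-aperture views, each H x W.
    alpha: refocus parameter; 1.0 keeps the original focal plane,
           other values shift the virtual focal plane nearer or farther.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset within the
            # aperture; np.roll is a crude stand-in for interpolation.
            du = int(round((u - U // 2) * (1 - 1 / alpha)))
            dv = int(round((v - V // 2) * (1 - 1 / alpha)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)  # average all the shifted views
```

The depth-of-field tradeoff mentioned above falls out naturally: averaging all U x V views is like shooting wide open (shallow depth of field), while averaging only the central views simulates stopping down, at the cost of using fewer samples.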

It’s a simple but brilliant idea. I remember reading about it a few years back (Hacker News?), and it’s amazing how fast things come to (or at least approach) market these days. Of course, it’s still vaporware for the moment, and the big question to my mind is whether it will end up being competitive with more conventional cameras that use brute-force approaches to get similar results (e.g. Sony’s pellicle cameras can shoot rapid bursts and use the additional data to improve low-light performance; they could easily vary focus and aperture and produce similar results to Lytro, at least for still scenes).

Still, I want to play with one.