Cameras as we know them have long been eye-like: a lens captures light and focuses it on film or a detection sensor, just as the lenses in our eyes focus light on our retinas. What would an eye be like without a lens? Capable of receiving light and to some extent discerning color, but otherwise useless, completely unable to focus that light. So a camera without a lens is kind of a deal-breaker, right?
Maybe not. But before we go there, you need to understand something called “compressed sensing” — or at least I did.
Compressed (alternatively, compressive) sensing involves the notion that traditional signal-capture techniques gather more information than they need to — far more than necessary to all but perfectly recreate the original signal, anyway. With compressed sensing, then, you gather only a fraction of the samples of a signal or image, then use a special reconstruction algorithm to reproduce the original all but perfectly.
I don’t pretend to understand the math underlying the theory — if you’re a math geek, this Stanford paper is a math-splosion on the subject — but imagine taking a picture of a flower, only capturing and storing random pixels from the subject (thus saving both time and space). With compressed sensing, an algorithm would then work to fill in the missing pixels, eventually producing a high-definition image that’s essentially indistinguishable from one taken using a traditional “just grab everything” lens-based camera. I’m oversimplifying here, but think of it as a little like working backwards from a compressed signal to an uncompressed one: You’re taking just pieces of something, then using predictive math to assemble those into an accurate whole.
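If you want to see the "pieces to an accurate whole" trick in miniature, here's a toy sketch in Python/NumPy. Everything in it — the signal sizes, the random measurement matrix, the greedy recovery step (a standard technique called orthogonal matching pursuit) — is my own illustrative choice, not anything from the Stanford paper or the Bell Labs work:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 128, 64, 4            # signal length, number of measurements, sparsity
x = np.zeros(n)                 # a signal that's mostly zeros (sparse)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random "sampling" matrix
y = A @ x                                       # only m samples — half the signal length

# Orthogonal matching pursuit: repeatedly pick the signal position that best
# explains what's left of the measurements, then re-fit on everything chosen so far.
idx = []
residual = y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ residual)))
    idx.append(j)
    coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    residual = y - A[:, idx] @ coef

x_hat = np.zeros(n)
x_hat[idx] = coef
print(np.linalg.norm(x - x_hat))   # essentially zero: the full signal is recovered
```

The point of the toy: 64 random samples of a 128-point signal would normally be hopeless, but because the signal is sparse, the algorithm can reassemble it almost exactly — which is the whole bet compressed sensing makes.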
Before you ask: no, as I understand it, it doesn’t work on data that’s already been compressed, say JPEG images or lossy audio files. The trick with compressed sensing is that you’re capturing information at the outset in a way that works with the reconstruction algorithm somehow.
Back to our lensless camera. I chanced on this story at MIT Technology Review and did one of those cartoon double-takes: a camera not only without a lens, but capable of snapping perfectly focused pictures with a sensor the size of a pixel. It’s the work of Bell Labs engineer Gang Huang and others in New Jersey, who report they’ve built an LCD panel that operates using a range of shutter-like apertures that can open or close at random and through which light can pass to a single-pixel sensor.
“The architecture consists of two components, an aperture assembly and a sensor. No lens is used,” Huang told the magazine.
The apertures allow light through at random, grabbing multiple samples of an image source; a compressed sensing algorithm then pores over the information and assembles a relatively complete version. The more samples taken, the higher the image quality, and you need only a fraction of the scene data to create these images.
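The measurement model behind that description is surprisingly simple to simulate. Here's a toy version in Python/NumPy — an 8×8 made-up "scene," random open/closed aperture patterns, and a single number recorded per pattern. The sizes and masks are my own stand-ins, not the Bell Labs setup:

```python
import numpy as np

rng = np.random.default_rng(1)

side = 8
image = rng.random((side, side))          # toy stand-in for the scene
n = side * side

# Each measurement: open a random subset of the LCD apertures (1 = open,
# 0 = closed) and record the total light hitting the single-pixel sensor.
masks = rng.integers(0, 2, size=(n, n))   # one row = one aperture pattern
y = masks @ image.ravel()                 # one number per shutter pattern

# With as many patterns as pixels, recovering the scene is a plain linear
# solve; the compressed sensing angle is that far fewer patterns suffice
# when the scene is sparse in some basis.
recovered = np.linalg.solve(masks, y).reshape(side, side)
print(np.allclose(recovered, image))      # the scene comes back exactly
```

Notice there's no lens anywhere in the model — just aperture patterns, a running total of light, and math, which is why the reconstruction has no focus to get wrong.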
What might you do with a lensless camera? Not have to worry about the sort of optical aberrations that can crop up with lenses, for starters. As Technology Review notes, absent a lens, “The scene is entirely in focus and the resolution of the image depends on the size and number of the apertures and the point-like nature of the light sensor.” The Bell Labs camera is also reportedly built from off-the-shelf components, so it’d be cheap, in theory, to produce.
The only downside: It takes a fair bit of time for the camera to gather its samples, meaning it’s only really workable (for the moment) when observing static scenes. Still, impressive stuff. Imagine this sort of technology in future space missions, for instance, where you might craft a more robust, solid-state image detection system (as Technology Review notes, the technology works at other light wavelengths, including infrared).