Finally, a Camera Without a Lens (and a Sensor the Size of a Pixel)

Engineers have developed a lensless camera that actually works.

Image: IEEE International Conference on Image Processing

Cameras as we know them have long been eye-like: a lens captures light and focuses it on film or a detection sensor, just as the lenses in our eyes focus light on our retinas. What would an eye be like without a lens? Capable of receiving light and, to some extent, discerning color, but otherwise useless, completely unable to focus that light. So a camera without a lens is kind of a deal-breaker, right?

Maybe not. But before we go there, you need to understand something called “compressed sensing” — or at least I did.

Compressed (alternatively, compressive) sensing involves the notion that traditional signal capture techniques gather more information than they need to — far more than necessary to all but perfectly recreate the original signal, anyway. With compressed sensing, then, you gather comparatively few samples of a signal or image, then use a special reconstruction algorithm to reproduce the original perfectly.

I don’t pretend to understand the math underlying the theory — if you’re a math geek, this Stanford paper is a math-splosion on the subject — but imagine taking a picture of a flower, only capturing and storing random pixels from the subject (thus saving both time and space). With compressed sensing, an algorithm would then work to fill in the missing pixels, eventually producing a high-definition image that’s essentially indistinguishable from one taken using a traditional “just grab everything” lens-based camera. I’m oversimplifying here, but think of it as a little like working backwards from a compressed signal to an uncompressed one: you’re taking just pieces of something, then using predictive math to assemble those into an accurate whole.
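To make that a little more concrete, here’s a toy numerical sketch — my own illustration, not anything from the Bell Labs work, and with illustrative sizes and a simplified setup (random weighted sums of an already-sparse signal rather than random pixels of a photo). It records far fewer measurements than the signal has samples, then recovers the whole thing by L1 minimization (“basis pursuit”), a standard compressed-sensing reconstruction:

```python
# Toy compressed-sensing demo (my own sketch, not the Bell Labs code):
# recover a sparse n-sample signal from m << n random measurements by
# solving the basis-pursuit linear program: min ||x||_1  s.t.  A x = y.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 200, 80, 8              # signal length, measurements, nonzero entries

# Ground-truth signal: sparse, with only k of its n entries nonzero.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

# Random sensing matrix: each of the m measurements is a random
# weighted sum over the whole signal.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true                    # the m samples we actually record

# Rewrite min ||x||_1 as a linear program over [x, t]: minimize sum(t)
# subject to -t <= x <= t (so each t_i ends up equal to |x_i|) and A x = y.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_eq = np.hstack([A, np.zeros((m, n))])
A_ub = np.block([[np.eye(n), -np.eye(n)],     #  x - t <= 0
                 [-np.eye(n), -np.eye(n)]])   # -x - t <= 0
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
print("max reconstruction error:", np.abs(x_hat - x_true).max())
```

With these numbers, 80 measurements suffice to pin down all 200 samples because only 8 of them are nonzero — that sparsity is what the “predictive math” exploits. A real photo isn’t sparse pixel by pixel, so in practice the same minimization is applied to the image’s coefficients in a wavelet or similar basis.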

Before you ask: no, as I understand it, it doesn’t work on data that’s already been compressed, say JPEG images or lossy audio files. The trick with compressed sensing is that you’re capturing information at the outset in a way that works with the reconstruction algorithm somehow.

Back to our lensless camera. I chanced on this story at MIT Technology Review and did one of those cartoon double-takes: a camera not only without a lens, but capable of snapping perfectly focused pictures with a sensor the size of a pixel. It’s the work of Bell Labs engineer Gang Huang and others in New Jersey, who report they’ve built an LCD panel with an array of shutter-like apertures that open or close at random, through which light passes to a single-pixel sensor.

“The architecture consists of two components, an aperture assembly and a sensor. No lens is used,” Huang told the magazine.

The apertures allow light through at random, grabbing multiple samples of an image source; a compressed-sensing algorithm then pores over the information and assembles a relatively complete version. The more samples taken, the higher the image quality, and you need only a fraction of the scene data to create these images.
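Here’s a rough sketch of that sampling step — again my own illustration under assumed details (binary open/closed apertures, a noiseless sensor), not the actual Bell Labs design. Each exposure opens a random pattern of apertures, and the lone pixel records one total-brightness number:

```python
# Hypothetical single-pixel sampling step: one scalar reading per exposure.
import numpy as np

rng = np.random.default_rng(1)
scene = rng.random((32, 32))      # stand-in for whatever the camera sees
n = scene.size                    # 1024 "scene pixels" behind the panel
m = 400                           # number of exposures, i.e. samples taken

# One random open(1)/closed(0) aperture pattern per exposure.
masks = rng.integers(0, 2, size=(m, n))

# Each exposure sums the light from every open aperture onto the sensor.
samples = masks @ scene.ravel()

# 'masks' and 'samples' play the roles of A and y in the reconstruction
# sketched earlier; more exposures (larger m) mean a better image.
```

Note that m is only a fraction of n here, which is the sense in which you need only a fraction of the scene data — and it’s also why capture is slow: the samples are gathered one exposure at a time.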

What might you do with a lensless camera? Not have to worry about the sort of optical aberrations that can crop up with lenses, for starters. As Technology Review notes, absent a lens, “The scene is entirely in focus and the resolution of the image depends on the size and number of the apertures and the point-like nature of the light sensor.” The Bell Labs camera is also reportedly built from off-the-shelf components, so it’d be cheap, in theory, to produce.

The only downside: It takes a fair bit of time for the camera to gather its samples, meaning it’s only really workable (for the moment) when observing static scenes. Still, impressive stuff. Imagine this sort of technology in future space missions, for instance, where you might craft a more robust, solid-state image detection system (as Technology Review notes, the technology works at other light wavelengths, including infrared).

4 comments
randomraccoon

This reminds me of trying to watch low-quality videos pre-YouTube. When I saw something interesting in an action sequence that I wanted to look at more closely, the still frames were uselessly corrupted. I could barely make out a thing. But when watching it as video, the individually low-quality still frames became a much higher-quality video. I could see details that I would never see in individual frames, because my brain combined the scant data from each frame into a fuller picture. I guess it surprises me that it isn't more common to apply that same principle to image capture.

IgorCarron

To those of you wondering, the use of compressive sensing and a single pixel is not new and was already featured on the ArXiv blog a while back:

http://www.technologyreview.com/view/412593/why-compressive-sensing-will-change-the-world/

I also provided an explanation on how it worked right here:

http://nuit-blanche.blogspot.com/2007/07/how-does-rice-one-pixel-camera-work.html

Let me also note that the technology developed at Rice for the single pixel camera is already put in products by a company based out of Austin, TX called Inview ( http://inviewcorp.com/ ) and their products are indeed looking into the IR range.

Igor.

PS: I write a small blog on issues related to compressive sensing and machine learning at: http://nuit-blanche.blogspot.com

cosminyes

@DRMINH wrong: Ionut Budisteanu, 19, of ROMANIA was awarded first place for using artificial intelligence to create a viable model for a low-cost, self-driving car at this year's Intel International Science and Engineering Fair, a program of Society for Science & the Public.

DRMINH

Matt Peckham @mattpeckham

Thank you for this article.

The winner of the Intel competition this year, a young Bulgarian student, developed an algorithm for self-driving vehicles that will be much cheaper (by a factor of 100) than Google's. He employs a low-definition 3-D camera and uses artificial intelligence to reconstruct the rest, in contrast to the expensive high-definition 3-D cameras used by Google.

I was told by Masako Ishikawa, a pianist, that she reads only 80% of a score; the rest is filled in by her experience with music theory and practice. We can drive effectively in the evening for the same reason: experienced drivers fill in the missing details of the environment.

This is exactly how the brain works: 2 processes.

I am using this philosophy in my profession to extract information and sentiment from big data: The Sentiment Engine.

drminh.com