The Camera Man

Popular Photography | January 2017


Over a 30-year career in imaging science, Shree Nayar has already transformed the camera in your pocket. Now he’s out to unlock images you’ve never seen before.

Corinne Iozzio

Standing in front of a projection screen in his office at Columbia University’s School of Engineering and Applied Science, Shree Nayar points to a close-up of a human eye. At first glance, it’s nothing remarkable: just a healthy brownish color with striations zigzagging between the edge of the iris and the pupil. But the cornea of the eye, Nayar explains, has a thin film of tears on it that makes it a reflective surface, a mirror. Straight on, that mirror is a circle; at an angle, it’s an ellipse; but it’s always a mirror.

Nayar clicks to the next slide, an inside-out image of that same eye. “You can go back to the picture and figure out exactly what’s falling on the full mirror, which is a wide-angle view of the world around the person.” The subject’s surroundings are apparent in the image, but algorithms developed by Nayar’s lab can isolate the specific thing the person is focusing on. This research, first published in 2004, is only one example of how Nayar believes we can develop new photographic technologies that will reveal our world in ways we’ve never seen before.
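The geometry that makes this possible is simple to sketch: model the cornea as a small spherical mirror, and every pixel on it reflects light arriving from a known direction in the environment. The snippet below is a minimal sketch in Python under assumptions not taken from Nayar’s published work: an orthographic camera, a perfectly spherical cornea, and a typical anatomical radius. The function name and parameters are hypothetical.

```python
import numpy as np

CORNEA_RADIUS_MM = 7.8  # typical anatomical value; an assumption, not from the article

def reflect_off_cornea(px, py, center_x, center_y, scale_mm_per_px):
    """Map an image pixel on the corneal mirror to a world direction.

    Assumes an orthographic camera looking along -z and a spherical
    cornea centered at (center_x, center_y) in the image.
    """
    # Pixel offset from the sphere's center, converted to millimeters
    x = (px - center_x) * scale_mm_per_px
    y = (py - center_y) * scale_mm_per_px
    r2 = x * x + y * y
    if r2 >= CORNEA_RADIUS_MM ** 2:
        return None  # pixel falls outside the corneal mirror
    # Surface point on the sphere and its outward unit normal
    z = np.sqrt(CORNEA_RADIUS_MM ** 2 - r2)
    n = np.array([x, y, z]) / CORNEA_RADIUS_MM
    # The viewing ray travels toward the eye along -z; reflect it: r = d - 2(d.n)n
    d = np.array([0.0, 0.0, -1.0])
    return d - 2.0 * np.dot(d, n) * n
```

Sweeping every corneal pixel through this mapping yields the wide-angle view Nayar describes; a full system would also need to estimate the eye’s position and orientation, which is where the circle-versus-ellipse outline of the corneal mirror comes in.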

Nayar, 53, heads the Columbia Vision Laboratory, where he has been a pioneer in the discipline of computational imaging, or computational photography. Conventional digital photography largely emulates the structure of the original camera obscura, using a lens to project a 3-D scene onto a 2-D plane. Computational imaging instead uses digital processes and novel optics to capture light in ways that would be garbled or unrecognizable to our eyes. Following capture, it’s the sensor or image processor’s job to unscramble the data to reveal a final image. The approach opens up features and functions that would not be possible with traditional photography.
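One way to make the capture-then-unscramble idea concrete is deconvolution: if the optics deliberately blur the scene with a known pattern (a point spread function), software can invert that blur after the fact. The sketch below uses a textbook Wiener filter in Python; it is a generic illustration of the principle, not a description of any specific system from Nayar’s lab, and the noise constant is an assumed value.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_ratio=0.01):
    """Recover an estimate of the scene from a known optical blur.

    blurred: 2-D captured image; psf: point spread function of the optics;
    noise_ratio: illustrative regularization constant (an assumption).
    """
    # Work in the frequency domain, where convolution becomes multiplication
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Wiener filter: divide by H, damped where the optics pass little signal
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + noise_ratio)
    return np.real(np.fft.ifft2(F_hat))
```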

Computational imaging has been trickling into the mainstream for years. Camera-phone sensors have long performed simple facial and object recognition and automatically corrected color and distortion, and HDR capabilities, many of which are based on a 15-year research collaboration between the Columbia Vision Lab and Sony, have quickly become the norm. Dual-sensor camera phones like the LG G5 and the iPhone 7 Plus capture depth information, which allows after-the-fact refocusing. And Canon’s new EOS 5D Mark IV features a novel Dual Pixel Raw mode, which captures separate image info from two photodiodes on each pixel; the extra data allows photographers to subtly adjust focus and correct ghosting in Canon software after the picture is taken.
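Under the hood, HDR merging of this kind typically linearizes each bracketed frame, divides by its exposure time to estimate scene radiance, and averages the results with weights that favor well-exposed pixels. The sketch below shows that core step in Python; it assumes a linear sensor response, uses a simple hat-shaped weight, and is a generic bracketed-exposure merge rather than the Columbia-Sony method the article mentions.

```python
import numpy as np

def merge_hdr(frames, exposure_times):
    """Merge bracketed exposures into a single radiance map.

    frames: list of 2-D float arrays scaled to [0, 1]; assumes a linear
    sensor response. exposure_times: shutter time per frame, in seconds.
    A generic merge, not the Columbia-Sony method from the article.
    """
    numerator = np.zeros_like(frames[0])
    denominator = np.zeros_like(frames[0])
    for frame, t in zip(frames, exposure_times):
        # Hat weight: trust mid-tones, discount near-black and near-white pixels
        w = 1.0 - np.abs(2.0 * frame - 1.0)
        numerator += w * (frame / t)  # this frame's radiance estimate
        denominator += w
    return numerator / np.maximum(denominator, 1e-8)
```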

But Nayar wants to develop technologies that capture all the image data a photographer might need “before the damage has been done,” he says; that is, before the image is stored to memory. Today’s features and tricks are just the tip of the iceberg. Ultimately, this vast volume of data will let us see the world in new ways.
