How Computational Photography Is Letting Mobile Phones Perform Like ILCs
HWM Singapore | January 2019

We take a look at three photo features that are turning up in today’s smartphones and the technology behind them to see how they’re producing ILC-like images.

Marcus Wong

Computational photography is the capture and processing of images using digital computation on top of the regular optical processes. Simply put, instead of just recording a scene as the sensor captures it, computational photography also uses the gathered information to fill in details the sensor missed. Here are three applications of this technique that you'll see in the latest smartphones today.

1. SEEING IN THE DARK WITH GOOGLE

The problem with trying to capture images in low light with digital sensors is image noise, which shows up as artefacts and random spots of colour. Every camera suffers from this, because in low light the number of photons reaching the sensor varies greatly from one exposure to the next.

Traditionally, we counter this by letting more light into the sensor for each exposure. Placing the camera on a tripod works, but you'll then need your subject to hold still for the entire capture. So Google instead takes multiple short exposures, using the optical flow principle to calculate the optimal time for each one.
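To see why stacking short exposures helps, here is a minimal sketch of the multi-frame idea in Python. It is not Google's actual pipeline — the Poisson photon model, scene brightness, and frame count are illustrative assumptions — but it shows the underlying statistics: averaging N aligned frames cuts the noise standard deviation by roughly a factor of the square root of N.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical dim scene: mean photon count of 9 per pixel per short exposure.
scene = np.full((64, 64), 9.0)

def capture(scene, rng):
    """Simulate one short exposure: photon arrivals are well modelled
    by a Poisson distribution, so dim scenes are shot-noise dominated."""
    return rng.poisson(scene).astype(float)

# One exposure is noisy: for a Poisson process the standard deviation
# equals the square root of the mean count.
single = capture(scene, rng)

# Averaging N aligned exposures reduces noise by about sqrt(N),
# without the motion blur of one long exposure.
N = 16
stack = np.mean([capture(scene, rng) for _ in range(N)], axis=0)

print(f"single-frame noise std: {single.std():.2f}")
print(f"{N}-frame stack noise std: {stack.std():.2f}")
```

In a real phone the frames must first be aligned (this is where optical flow comes in, compensating for hand shake and subject motion) before they can be merged; the sketch above skips alignment entirely.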
