Computational Photography

Smart Photography | June 2020

Ashok Kandimalla has been in the photographic field for over three decades and has extensive experience in both film and digital photography. Being an electronics engineer by profession and a photographer, he possesses a unique and deep insight into the technical aspects of digital photography and equipment. He has published more than 100 articles on photography, and some of his writings have also been published in the well-known international magazine Popular Photography. An avid collector of photographic books and vintage cameras, Ashok has a keen interest in the history of photography and a passion for sharing his knowledge on photography through teaching and writing. He is the only Indian photographer to be featured on the Nikon Centenary website. He is presently working as a management and engineering consultant. He can be reached at kashokk@gmail.com.
Ashok Kandimalla

Most serious photographers take a dim view of photography with a smartphone. Or at least they used to, till now. The reasons: the small sensors in smartphones resulted in noisy images, poor dynamic range, not enough pixels, and so on.

That has changed. These days, with a sophisticated smartphone, you can get a perfectly exposed image of an extremely high-contrast scene that would be challenging for a full-frame ILC (Interchangeable Lens Camera, that is, a D-SLR or a mirrorless camera). Or you can take a handheld photograph of a low-light scene with beautiful colours and little noise. Today, images taken with smartphones are good enough to grace the covers of the most prestigious magazines, appear on billboards, and so on.

How did this change come about? The physics behind imaging, you know, has not changed. The reason is the way data is captured and then processed by the computers inside your smartphone, running increasingly sophisticated software to produce images that would have been considered incredible just a few years ago. Since all the magic happens because of computers and the software running on them, the two words that aptly describe it are ‘Computational Photography’. Currently, this technology is found mainly in camera-equipped smartphones, but it will likely, and hopefully, migrate to all cameras.

While there is no rigid definition of what Computational Photography means, one article by the ACM (Association for Computing Machinery), the most authoritative body for all things relating to computing, describes it as: “….use of computational photographic technologies, which utilise algorithms to adjust photographic parameters in order to optimise them for specific situations, users with little or no photographic training can often achieve excellent results. The boundaries of what constitutes computational photography are not clearly defined, though there is some agreement that the term refers to the use of hardware such as lenses and image sensors to capture image data, and then applying software algorithms to automatically adjust the image parameters to yield an image.”

Only the word ‘algorithm’ needs some explanation. As per a standard dictionary, it is ‘a process or set of rules to be adhered to in calculations or other problem-solving operations, usually by a computer’. An algorithm is the basis for the firmware which, when running on the computer inside your smartphone (or camera), performs various tasks. To make the word less intimidating, you can consider a cooking recipe a rudimentary algorithm!
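To make this concrete, here is a toy illustration of an algorithm, written out in Python. It is not any camera’s actual firmware; the pixel values, target brightness, and step size are all made up for the example. It expresses a simple auto-exposure rule as a recipe: keep nudging a gain up or down until the average brightness of a tiny ‘image’ lands near mid-grey.

```python
def auto_expose(pixels, target=0.5, step=0.1, max_iters=50):
    """Adjust a gain (exposure multiplier) until the average
    brightness of the image is close to the target value.
    Pixel values are floats between 0 (black) and 1 (white)."""
    gain = 1.0
    for _ in range(max_iters):
        # Apply the gain, clipping any pixel that would blow out past 1.0
        avg = sum(min(p * gain, 1.0) for p in pixels) / len(pixels)
        if abs(avg - target) < 0.01:
            break  # close enough to the target: stop
        # Too dark? Raise the gain a little. Too bright? Lower it.
        gain *= (1 + step) if avg < target else (1 - step)
    return gain

dark_scene = [0.1, 0.2, 0.15, 0.25]   # a hypothetical underexposed image
gain = auto_expose(dark_scene)        # gain needed to brighten it to mid-grey
```

Each line is just a rule to be followed, in order, which is all an algorithm is.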

Once digital photography started and consequently, manipulating images through software became feasible, many image processing software packages like Photoshop came into existence. These brought techniques and processes which were unheard of or were difficult or even downright impossible to implement in the film era. A few of these like HDR (High Dynamic Range) imaging, panorama creation, etc., are already known to most of us and have been dealt with in detail earlier in Smart Photography.

Computational photography takes all these processes to a different level and adds even more interesting features to bring some wonderful things to photography. More importantly, it makes achieving these results much easier, even to a novice or a non-photographer. No specialist skills are required.

One question you may have: some of the techniques described here, for example, HDR imaging and panorama creation, existed before ‘computational photography’ came along, and perhaps you are using them too. So, what’s new about this? Well, there are some important differences. Earlier, these were done during the post-processing stage with an external computer, and even before that, the photographer had to follow certain techniques at the capture stage to make the needed post-processing possible.

As an example, to make an HDR image, the photographer first needs to recognise that the brightness range of the scene warrants HDR imaging, and then capture three or more images (depending on the contrast), each with a different exposure, to record all details in both highlights and shadows. This only completes the image capture part. Next, he has to go through post-processing, which involves aligning all the images accurately, then blending them, and finally performing tone mapping to get the required HDR image. As you can see, this involves a lot of steps at both the capture and post-processing stages. A smartphone utilising computational photography eliminates most of these, and more importantly, the user needs no knowledge of HDR imaging at all to get the desired result!
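As a rough sketch of the blending step alone, here is a simplified, Mertens-style exposure fusion in plain Python. It assumes the bracketed frames are already aligned, represents each ‘image’ as a short list of pixel values between 0 and 1, and weights each pixel by how close it is to mid-grey, so each part of the scene is drawn mostly from the frame that exposed it best. (The example values are illustrative; real implementations, such as OpenCV’s MergeMertens, work on full colour images and also weigh contrast and saturation.)

```python
import math

def well_exposedness(v, sigma=0.2):
    # Weight pixel values near mid-grey (0.5) most heavily, since
    # they are neither blown out nor buried in shadow.
    return math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(exposures):
    """Blend aligned exposures (lists of 0..1 pixel values) into one image."""
    fused = []
    for pixels in zip(*exposures):  # same pixel position in every frame
        weights = [well_exposedness(p) for p in pixels]
        total = sum(weights)
        # Weighted average: well-exposed frames dominate this pixel
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

# Three bracketed "frames" of a high-contrast scene:
under  = [0.05, 0.10, 0.40]   # underexposed: highlights kept, shadows crushed
normal = [0.20, 0.50, 0.90]   # metered exposure
over   = [0.60, 0.95, 1.00]   # overexposed: shadows opened up, highlights clipped
hdr = fuse_exposures([under, normal, over])
```

A smartphone runs this kind of fusion (plus alignment and tone mapping) automatically, within a fraction of a second of pressing the shutter.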
