How is Computational Photography Revolutionizing Smartphone Cameras?

31/7/23

Computational photography has revolutionized smartphone cameras. But what exactly does this term mean and why has it become so important? This article will help unpack the algorithms behind the stunning images we’ve become accustomed to seeing, and also take a look at how generative AI will impact the field.

What is computational photography? 

Computational photography is a field that uses digital computation to enhance or transform images. A rapidly developing research field, it has evolved from computer vision, image processing, computer graphics and applied optics.

It can improve the capabilities of a camera, introduce new features, or reduce the cost or size of camera elements. Rather than relying on optical processes alone, it applies techniques such as artificial intelligence, machine learning, image stacking, and depth mapping to the captured data.

Where is computational photography used?

Computational photography is most prominently used in smartphones and digital cameras. In fact, it does the heavy lifting behind the great-looking images you see in your smartphone photo gallery, and it has improved photography by introducing features that were not possible with film-based photography. Here are some examples:

High Dynamic Range (HDR) Imaging

HDR combines multiple exposures of the same scene to capture a wider range of light and dark details. The technique blends the different exposures to create an image with enhanced dynamic range.

High Dynamic Range captured using computational photography
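
To make this concrete, here is a minimal exposure-fusion sketch using OpenCV's Mertens merge; the three bracketed file names are hypothetical placeholders, not a production pipeline.

```python
# A minimal exposure-fusion sketch (Mertens) with OpenCV.
# The three bracketed shots (under.jpg, mid.jpg, over.jpg) are hypothetical file names.
import cv2

exposures = [cv2.imread(p) for p in ["under.jpg", "mid.jpg", "over.jpg"]]

# Align the frames first; handheld shots rarely line up perfectly.
cv2.createAlignMTB().process(exposures, exposures)

# Blend the exposures into a single frame with a wider usable dynamic range.
fused = cv2.createMergeMertens().process(exposures)  # float32 values around [0, 1]

cv2.imwrite("hdr_fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```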

Panorama Stitching

Panorama stitching combines multiple images of a scene taken from different perspectives into a single wide-angle or 360-degree image. Algorithms align and blend the images seamlessly to create a cohesive panorama.

Panorama image
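
For illustration, here is a minimal sketch using OpenCV's built-in Stitcher; the overlapping frame names are placeholders.

```python
# A minimal panorama sketch using OpenCV's high-level Stitcher.
# The overlapping frames (left.jpg, middle.jpg, right.jpg) are hypothetical file names.
import cv2

frames = [cv2.imread(p) for p in ["left.jpg", "middle.jpg", "right.jpg"]]

stitcher = cv2.Stitcher_create()        # defaults to panorama mode
status, panorama = stitcher.stitch(frames)

if status == 0:                         # 0 corresponds to Stitcher::OK
    cv2.imwrite("panorama.jpg", panorama)
else:
    print(f"Stitching failed with status code {status}")
```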

Image Stacking

Image stacking involves capturing multiple images of the same subject and combining them to reduce noise and increase detail. This technique is commonly used in astrophotography to capture clear and sharp images of stars and galaxies.
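
A toy version of the averaging step might look like this; it assumes the frames are already aligned, which real pipelines handle with registration first, and the file names are hypothetical.

```python
# A toy stacking sketch: average N aligned frames of the same scene.
# Random noise drops roughly with the square root of N; the frames are
# assumed to be registered (aligned) already.
import cv2
import numpy as np

paths = [f"frame_{i}.png" for i in range(8)]
stack = np.stack([cv2.imread(p).astype(np.float32) for p in paths])

averaged = stack.mean(axis=0)           # per-pixel mean across the stack
cv2.imwrite("stacked.png", averaged.clip(0, 255).astype(np.uint8))
```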


Portrait Mode

Portrait mode is a popular feature on smartphones that uses depth sensing or dual-camera setups to create a shallow depth-of-field effect, blurring the background while keeping the subject in focus. This simulates the bokeh effect typically achieved with larger aperture lenses.

Portrait mode keeps the foreground subject in focus while blurring the background, mimicking the shallow depth of field of high-end professional cameras
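
Here is a simplified sketch of the compositing step, assuming a subject mask is already available; real portrait modes derive that mask from depth maps or segmentation models, and the file names below are hypothetical.

```python
# A simplified portrait-mode composite. photo.jpg and mask.png (white = subject)
# are hypothetical inputs; real pipelines derive the mask from depth or segmentation.
import cv2
import numpy as np

photo = cv2.imread("photo.jpg")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# Heavily blur the whole frame to approximate a wide-aperture background.
blurred = cv2.GaussianBlur(photo, (51, 51), 0)

# Keep original pixels where the mask is white, blurred pixels elsewhere.
alpha = (mask.astype(np.float32) / 255.0)[..., None]
portrait = (alpha * photo + (1.0 - alpha) * blurred).astype(np.uint8)

cv2.imwrite("portrait.jpg", portrait)
```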

Low Light Imaging

In low light conditions, computational photography techniques like noise reduction and image fusion can be employed to improve image quality. These techniques reduce noise and recover detail in the darker areas of the photo. Several smartphone makers have developed a ‘Night Mode’ for taking still photos in low light. However, this feature isn’t yet available for video, which means that shooting video at night remains very challenging, with dark, blurry results. (Side note: this is what we’ve developed at Visionary.ai. Check out our real-time video denoiser to learn more.)
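
As a rough classical illustration of those two ideas, burst fusion followed by spatial denoising, here is a toy sketch; the file names and parameters are illustrative, and this is not the real-time video pipeline mentioned in the side note.

```python
# A classical toy illustration: fuse a short burst of frames, then apply
# spatial noise reduction. File names and parameters are illustrative.
import cv2
import numpy as np

burst = [cv2.imread(f"lowlight_{i}.png").astype(np.float32) for i in range(4)]

fused = np.mean(burst, axis=0).astype(np.uint8)   # temporal fusion averages out random noise

# Spatial denoising on the fused frame (filter strength, template and search window sizes).
clean = cv2.fastNlMeansDenoisingColored(fused, None, 10, 10, 7, 21)
cv2.imwrite("lowlight_clean.png", clean)
```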

Super-resolution

Super-resolution techniques use algorithms to enhance the resolution and detail of an image beyond its original capture. These techniques can be used to upscale low-resolution images or improve the level of detail in digital zoom.

Image upscaling
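
For contrast, here is what a purely classical upscale looks like, a bicubic resize plus a light unsharp mask; learned super-resolution models go well beyond this baseline, and the input file name is hypothetical.

```python
# A purely classical baseline: 2x bicubic upscale plus a light unsharp mask.
# Learned super-resolution models predict finer detail than interpolation can.
import cv2

low_res = cv2.imread("small.jpg")
h, w = low_res.shape[:2]

upscaled = cv2.resize(low_res, (w * 2, h * 2), interpolation=cv2.INTER_CUBIC)

# Unsharp mask: add back a fraction of the high-frequency detail lost in interpolation.
softened = cv2.GaussianBlur(upscaled, (0, 0), 2)
sharpened = cv2.addWeighted(upscaled, 1.5, softened, -0.5, 0)

cv2.imwrite("upscaled_2x.jpg", sharpened)
```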

Image Deblurring 

Computational photography can help reduce the effects of motion blur or camera shake in an image. By analyzing the movement during capture, algorithms can estimate the original scene and deblur the image.
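
A toy sketch of that idea uses Richardson-Lucy deconvolution with an assumed blur kernel; the input file is hypothetical, and the 9-pixel horizontal motion blur stands in for the point spread function a real pipeline would estimate from motion data.

```python
# A toy deblurring sketch: Richardson-Lucy deconvolution with an assumed blur kernel.
import numpy as np
from skimage import img_as_float, io, restoration

image = img_as_float(io.imread("blurry.png", as_gray=True))  # hypothetical input

psf = np.zeros((9, 9))
psf[4, :] = 1.0 / 9.0          # simple horizontal motion-blur kernel

deblurred = restoration.richardson_lucy(image, psf, 30)   # 30 iterations
io.imsave("deblurred.png", (np.clip(deblurred, 0, 1) * 255).astype(np.uint8))
```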

Live Photo and Cinemagraphs

Live Photos (Apple) or Motion Photos (Google) capture a few seconds of video along with a still photo. Cinemagraphs are images where a small portion is animated while the rest remains static. These techniques add motion and interactivity to photographs.

Automatic Scene Detection and Optimization

Smartphones and digital cameras can use computational algorithms to detect the scene being photographed and apply appropriate optimizations automatically. This can include adjusting exposure, color balance, and other parameters to enhance the overall image quality.
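
A hand-written toy heuristic can illustrate the idea, even though production cameras use trained scene classifiers rather than thresholds like these; the file name and cut-off values are illustrative.

```python
# A toy scene-detection heuristic: inspect simple frame statistics and pick a mode.
import cv2

frame = cv2.imread("preview.jpg")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
brightness = hsv[..., 2].mean()
saturation = hsv[..., 1].mean()

if brightness < 60:
    mode = "night"        # e.g. longer exposure, multi-frame denoising
elif saturation > 120:
    mode = "vivid"        # e.g. protect saturated colors and highlights
else:
    mode = "standard"

print(f"brightness={brightness:.0f}, saturation={saturation:.0f} -> {mode} pipeline")
```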

Software Zoom

Software zoom, also known as digital zoom, is a technique used in digital cameras and smartphone cameras to magnify the image by digitally enlarging a portion of the captured frame. Unlike optical zoom, which involves using physical lens elements to zoom in and out optically, software zoom relies solely on software processing to simulate the effect of zooming in.

When you use software zoom, the camera captures the image at its native resolution, and then the software algorithm crops and enlarges a selected portion of the image to create the zoomed-in view. This process essentially enlarges the pixels, resulting in a loss of image quality and detail compared to optical zoom.
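
The core of digital zoom can be sketched in a few lines; the file name and 2x factor are illustrative.

```python
# Digital zoom: crop the centre of the frame and enlarge it back to the original size.
import cv2

frame = cv2.imread("photo.jpg")
zoom = 2.0

h, w = frame.shape[:2]
crop_h, crop_w = int(h / zoom), int(w / zoom)
y0, x0 = (h - crop_h) // 2, (w - crop_w) // 2

crop = frame[y0:y0 + crop_h, x0:x0 + crop_w]
zoomed = cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)  # pixels are enlarged, detail is not added

cv2.imwrite("zoomed_2x.jpg", zoomed)
```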

These are just a few examples of computational photography techniques. The field is evolving rapidly, and new techniques are constantly being developed to push the boundaries of traditional photography.

How will generative AI impact computational photography?

Generative AI is having a significant impact on computational photography, enabling new and innovative techniques for image synthesis, manipulation, and enhancement. Here are some ways it is already influencing the field:

Image Synthesis

Generative adversarial networks (GANs) have been used to generate realistic and high-resolution images. This technology can be employed to create synthetic data for training purposes or to generate entirely new images that resemble photographs. GANs have been used to generate realistic faces, landscapes, and even art.

Style Transfer

Style transfer techniques use generative models to apply the style of one image to another. This allows photographers to apply artistic styles to their photos, giving them a unique look. Neural style transfer is a popular example, where the style of a famous painting can be transferred to a photograph.

Image Editing and Manipulation

Generative AI models can be used for various image editing tasks. For example, image inpainting techniques leverage generative models to automatically fill in missing or damaged parts of an image. This can be useful for removing unwanted objects or restoring old photographs.
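
For comparison, here is what classical (non-generative) inpainting looks like with OpenCV, shown only to illustrate the task; the photo and mask file names are hypothetical, and generative models fill larger regions far more convincingly.

```python
# Classical inpainting with OpenCV's Telea algorithm.
# remove_mask.png marks the region to fill in white.
import cv2

photo = cv2.imread("photo.jpg")
mask = cv2.imread("remove_mask.png", cv2.IMREAD_GRAYSCALE)

restored = cv2.inpaint(photo, mask, 3, cv2.INPAINT_TELEA)  # 3-pixel inpainting radius
cv2.imwrite("restored.jpg", restored)
```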

Image Super-resolution

Generative models, such as Variational Autoencoders (VAEs) and GANs, have been employed to enhance the resolution of images. These models can generate high-resolution details from low-resolution inputs, allowing for better image quality and increased detail.

Image-to-Image Translation

Generative models can be used to translate images from one domain to another. For instance, they can convert a daytime image to a nighttime scene or transform a photo into a painting. This technology has opened up new possibilities for creative expression and artistic exploration.

Augmented Reality (AR)

Generative AI has also played a crucial role in the development of AR applications in computational photography. AR filters and effects that overlay digital elements onto real-world scenes are often created using generative models. These models can generate virtual objects that seamlessly integrate with the live camera feed.

Generative AI has expanded the capabilities of computational photography, enabling photographers and users to create, modify, and enhance images in ways that were previously not possible. It has brought about exciting advancements in image synthesis, editing, and manipulation, pushing the boundaries of what can be achieved with digital photography.

The future of photography

Today, videos shot a decade ago look noticeably worse than footage recorded with the latest technology. That would have been hard to believe in 2013, when the quality already seemed so high. Now, with the added impact of AI and computational photography, it’s safe to assume that the video we capture a decade from now will look better still.

Computational photography has already introduced features that were not possible with film-based photography and has improved the capabilities of digital cameras and smartphones. In the coming years, we can expect more computational photography techniques that run efficiently on low-power smartphones, making stunning images even more accessible and more impactful in our lives.