
Google Explains How The Live HDR+ and Dual Exposure Controls on Pixel 4 and 4a Work

Google’s new Pixel 4a supports the Live HDR+ and Dual Exposure features that were first introduced on the Pixel 4. Following the launch of the Pixel 4a, the company has published a detailed blog post explaining how these features work.

First, what is Live HDR+? It is the name given to the technology that shows a real-time preview of what the final HDR+ photo will look like, using machine learning to approximate the result directly in the viewfinder.

The original HDR+ is a method for capturing scenes with a wide range of brightness. It takes 3 to 15 underexposed photos, aligns them, and then merges them into a single enhanced image.
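The merging step is what makes the shadows usable: averaging several aligned, underexposed frames suppresses random noise roughly in proportion to the square root of the number of frames. The real HDR+ merge is more elaborate (it aligns and robustly combines image tiles rather than whole frames), but a minimal sketch of the core idea, using only NumPy and illustrative names, might look like this:

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of already-aligned, underexposed frames.

    `frames` is a list of HxWx3 arrays with linear (not gamma-encoded)
    pixel values. Averaging N frames cuts random sensor noise by roughly
    sqrt(N), which is what makes the dark regions usable once they are
    brightened later in the pipeline.
    """
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    return stack.mean(axis=0)

# Simulate a burst of 8 noisy, underexposed captures of the same scene.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 0.2, size=(480, 640, 3))
burst = [scene + rng.normal(0.0, 0.02, scene.shape) for _ in range(8)]
merged = merge_burst(burst)
print("single-frame noise:", np.std(burst[0] - scene))
print("merged noise:      ", np.std(merged - scene))
```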

[Image: Live HDR+ and Dual Exposure controls on the Pixel 4 and 4a. Image courtesy: Google Blog]

Google says that because computing HDR+ on 30 frames per second in real time is too demanding, it has improved the viewfinder using a machine-learning-based approximation. Alongside this, it has also created Dual Exposure controls that separately adjust the rendition of shadows and highlights. Combined, these two features provide HDR imaging with real-time creative control.

What is HDR+ and How it Works

[Image: Comparison of a linear RGB image and the final HDR+ result. Image courtesy: Google Blog]

Google explains in its official AI blog post that when the user presses the shutter in the Pixel camera app, the phone captures 3 to 15 underexposed images. These images are then aligned and merged to reduce noise in the shadows. The result is a 14-bit intermediate “linear RGB image” with pixel values proportional to scene brightness. Tone mapping is then applied to this linear image, boosting shadows and compressing highlights to produce the final HDR+ photo. This full HDR+ process is too slow for the live viewfinder, which is why Google developed a new algorithm for Live HDR+.
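To make the idea of tone mapping a linear RGB image concrete, here is a toy global curve: dark values are lifted far more than bright ones, then a gamma step produces a display-ready encoding. This is only an illustration of the concept; the actual HDR+ tone mapper is local and far more sophisticated.

```python
import numpy as np

def tonemap_linear(linear_rgb, shadow_boost=4.0, gamma=1.0 / 2.2):
    """Toy global tone map for a linear-RGB HDR intermediate.

    `linear_rgb` holds values proportional to scene brightness, in [0, 1].
    The logarithmic curve lifts dark values much more than bright ones,
    and the gamma step converts the result to a display-ready encoding.
    The real HDR+ tone mapping applies different curves in different
    regions; this only shows the basic shape of the operation.
    """
    mapped = np.log1p(shadow_boost * linear_rgb) / np.log1p(shadow_boost)
    return np.clip(mapped, 0.0, 1.0) ** gamma

# A dark pixel is lifted far more than a bright one:
print(tonemap_linear(np.array([0.02, 0.5, 0.95])))
```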

Why Live HDR+ Is Needed and How It Works

[Image: Google Pixel linear HDR vs. HDR+. Image courtesy: Google Blog]

Current mobile chipsets lack the computational power to run this processing 30 times per second, so a dash of AI is used instead. The algorithm slices the image into small tiles and predicts the tone mapping for each of them. On the viewfinder, every pixel is then computed as a combination of the tone maps from its nearest tiles.
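Google’s post describes this only at a high level, so the following is a hedged sketch of the general idea (per-tile tone curves blended with bilinear weights, similar in spirit to approaches like HDRnet), with an illustrative function name and curve format rather than Google’s actual code:

```python
import numpy as np

def apply_tiled_tonemap(luma, tile_curves):
    """Blend per-tile tone curves over every pixel.

    `luma` is an HxW image with values in [0, 1]. `tile_curves` is a
    (ty, tx, K) array: for each tile, K samples of a tone curve at K
    evenly spaced input levels. In the real pipeline a small neural
    network predicts these curves from a low-resolution frame; here
    they are simply given. Each pixel looks up its own value in the
    curves of the four nearest tiles and blends the results with
    bilinear weights, which keeps the mapping smooth across tile
    boundaries.
    """
    h, w = luma.shape
    ty, tx, k = tile_curves.shape

    # Continuous tile coordinates of every pixel (tile centres at half-steps).
    ys = (np.arange(h) + 0.5) / h * ty - 0.5
    xs = (np.arange(w) + 0.5) / w * tx - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, ty - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, tx - 1)
    y1 = np.clip(y0 + 1, 0, ty - 1)
    x1 = np.clip(x0 + 1, 0, tx - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]   # H x 1 vertical blend weights
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]   # 1 x W horizontal blend weights

    # Index of each pixel's value into the K samples of a tone curve.
    idx = np.clip(np.round(luma * (k - 1)).astype(int), 0, k - 1)

    def lookup(yi, xi):
        # Run every pixel through the tone curve of one neighbouring tile.
        return tile_curves[yi[:, None], xi[None, :], idx]

    top = (1 - wx) * lookup(y0, x0) + wx * lookup(y0, x1)
    bottom = (1 - wx) * lookup(y1, x0) + wx * lookup(y1, x1)
    return (1 - wy) * top + wy * bottom

# Example: a 4x4 grid of tiles, each with a 16-point gamma-like curve.
rng = np.random.default_rng(1)
curves = np.tile(np.linspace(0, 1, 16), (4, 4, 1)) ** rng.uniform(0.4, 1.0, (4, 4, 1))
frame = rng.uniform(0, 1, (240, 320))
preview = apply_tiled_tonemap(frame, curves)
```

Predicting one small curve per tile and interpolating is much cheaper than computing a full tone map for every pixel, which is what makes a 30 fps preview feasible on a phone.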

HDR+ automatically balances the highlights and shadows, while the Dual Exposure sliders give the user manual control over the process. You can now get the desired look for a photo in the camera app itself, something that previously required processing the raw file afterward.
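As a rough illustration of what separate brightness and shadow control can mean, the toy function below applies one gain to the whole linear image and a second gain only to dark regions via a luminance mask. The real sliders drive Google’s tone-mapping pipeline, so this is an assumption-laden sketch of the idea, not their implementation:

```python
import numpy as np

def apply_dual_exposure(linear_rgb, brightness_ev=0.0, shadows_ev=0.0):
    """Sketch of two independent controls on a linear HDR image.

    `brightness_ev` scales the whole image in stops, like exposure
    compensation. `shadows_ev` adds extra gain only where the image is
    dark, using a smooth luminance-based mask, so highlights stay put.
    """
    img = linear_rgb * (2.0 ** brightness_ev)

    # Luminance-based mask: 1 in deep shadows, fading to 0 by the mid-tones.
    luma = img.mean(axis=-1, keepdims=True)
    shadow_mask = np.clip(1.0 - luma / 0.25, 0.0, 1.0)

    img = img * (1.0 + shadow_mask * (2.0 ** shadows_ev - 1.0))
    return np.clip(img, 0.0, 1.0) ** (1.0 / 2.2)   # simple display encoding

# Lift shadows by ~1.5 stops without touching the overall exposure:
frame = np.random.default_rng(2).uniform(0, 1, (120, 160, 3)).astype(np.float32)
result = apply_dual_exposure(frame, brightness_ev=0.0, shadows_ev=1.5)
```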

If you are looking for a more detailed explanation of how all of this works, you can follow the source link to Google’s official blog post.
