Pixel 8 Pro videos get much brighter with Video Boost if you use it right


When Google introduced Night Sight on the Pixel 3, it was a revelation.

It was as if someone had literally turned on the lights in your low-light photos. Previously impossible shots became possible – no tripod or deer-in-the-headlights flash required.

Five years later, struggling to take photos in the dark is a thing of the past – every phone up and down the price spectrum comes with some kind of night mode. Video, however, is a different story. Night mode for still photos captures multiple frames to build one brighter image, and the mechanics of that feature can’t simply be copied and pasted into video, which is by its nature already a series of images. The answer, as it so often is these days, is to call on AI.
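
To make that contrast concrete, here’s a rough sketch in Python – with made-up numbers, not Google’s actual pipeline – of the multi-frame idea behind still-photo night modes: averaging several noisy short exposures of a static scene cuts random sensor noise by roughly the square root of the number of frames, so the result can be brightened much more cleanly. Video doesn’t get that luxury, because each output frame has to be built from whatever was captured in its own fraction-of-a-second window.

```python
import numpy as np

# Rough sketch of the multi-frame averaging idea behind still-photo night
# modes (illustrative only -- not Google's actual Night Sight pipeline).
# Averaging N noisy short exposures of the same static scene suppresses
# random noise by roughly sqrt(N), so the result can be brightened cleanly.

rng = np.random.default_rng(0)
scene = np.full((4, 4), 10.0)        # dim "true" scene, arbitrary units
read_noise = 5.0                     # assumed per-capture noise level

def capture(scene):
    """One short, noisy exposure of the scene."""
    return scene + rng.normal(0.0, read_noise, scene.shape)

single = capture(scene)                                          # what one video frame gets
stacked = np.mean([capture(scene) for _ in range(15)], axis=0)   # what a night mode gets

print(f"noise in a single frame:   {np.std(single - scene):.2f}")
print(f"noise after stacking 15:   {np.std(stacked - scene):.2f}")
```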

When the Pixel 8 Pro launched this fall, Google announced a feature called Video Boost with Night Sight that would arrive in a future software update. It uses AI to process your video – bringing out more detail and enhancing colors, which is especially helpful for low-light clips. There’s just one problem: the processing happens in the cloud on Google’s servers, not on your phone.

As promised, Video Boost started arriving on devices a few weeks ago with the December Pixel update, including on my Pixel 8 Pro review unit. And it’s good! But it’s not as momentous as the arrival of the original Night Sight. That says something about how powerful Night Sight was when it first launched, but it also says something about the particular challenges video poses for smartphone camera systems.

Here’s how Video Boost works: first, and crucially, you need a Pixel 8 Pro, not the regular Pixel 8 – Google hasn’t answered my question about why that is. When you want to use it, you turn it on in your camera settings and then start recording your video. Once you’re done, the video is backed up to your Google Photos account, either automatically or manually. Then you wait. And wait. And in some cases, keep waiting – Video Boost works on videos up to ten minutes long, but even a clip that’s only a couple of minutes long can take hours to process.

Depending on the kind of video you’re recording, that wait may or may not be worth it. Google’s support documentation says it’s designed to let you create videos in any lighting “in higher quality and with better lighting, colors, and detail” on your Pixel phone. But the main thing Video Boost is in service of is better low-light video – that’s what group product manager Isaac Reynolds tells me. “Think of it as Night Sight video, because all the changes made to the other algorithms are in service of Night Sight.”

All the processing that makes our videos look good in bright light – stabilization, tone mapping – stops working when you try to record video in very low light. Reynolds points out that even the kind of blur you get is different in low-light video. “OIS [optical image stabilization] can stabilize a frame, but only of a certain length.” Low-light video requires longer frames, and that’s a bigger challenge for stabilization. “When you start moving in low light, with frames that long, you can get a certain kind of intra-frame blur that is the residual OIS can’t compensate for.” In other words, it’s very complicated.
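
To put rough numbers on that – all of them assumed for illustration, none of them from Google – here’s how quickly intra-frame blur grows once low light pushes per-frame exposures from a couple of milliseconds toward the full 1/30-second frame time:

```python
# Back-of-the-envelope illustration (all figures assumed, not measured):
# how far the image smears *within* a single frame as exposure time grows.
# OIS can counter motion between frames, but motion during a long exposure
# shows up as blur baked into the frame itself.

frame_width_px = 3840            # 4K UHD frame width
field_of_view_deg = 80.0         # assumed horizontal field of view
px_per_deg = frame_width_px / field_of_view_deg
shake_deg_per_s = 2.0            # assumed residual hand shake while walking

for label, exposure_s in [("bright light", 1 / 500), ("very low light", 1 / 30)]:
    blur_px = shake_deg_per_s * px_per_deg * exposure_s
    print(f"{label}: ~{exposure_s * 1000:.1f} ms exposure -> ~{blur_px:.1f} px of intra-frame blur")
```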

This all helps explain what I’m seeing in my Video Boost clips. In good lighting, I don’t notice much difference. Some colors pop a little more, but I don’t see anything that would compel me to use it regularly when there’s plenty of light available. In extremely low light, Video Boost can recover some of the color and detail that’s completely lost in a standard video clip. But it’s not as dramatic as the difference between a regular photo and a Night Sight photo under similar conditions.

There is a real sweet spot between those extremes, though, where I can see Video Boost actually being useful. In a clip where I’m walking down a path under a dark, vine-covered pergola at dusk, there’s a noticeable improvement in shadow detail and stabilization after the boost. And the more I used Video Boost in ordinary, medium-low indoor lighting, the more I saw the case for it. You start to notice how washed out standard videos look under those circumstances – like one of my son playing with trucks on the dining room floor. Turning on Video Boost restored some of the vibrancy I’d been missing.

Video Boost is limited to the Pixel 8 Pro’s main rear camera, and it records at 4K (the default) or 1080p at 30fps. Using Video Boost results in two clips – an initial “preview” file that isn’t boosted and is available to share right away, and, eventually, a second “boosted” file. Under the hood, though, there’s a lot more going on.

Reynolds explained to me that Video Boost uses an entirely different processing pipeline that hangs on to a lot of the captured image data that’s ordinarily discarded when you record a standard video file – think of the relationship between RAW and JPEG files. A temporary file keeps that information on your device until it has been sent to the cloud; after that, it’s deleted. That’s a good thing, because the temporary files can be big – several gigabytes for longer clips. The final boosted video, though, is a much more reasonable size – 513MB for a three-minute clip I recorded, versus 6GB for the temporary file.
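
For a sense of scale, here’s the quick arithmetic on those two figures – only the 6GB and 513MB numbers come from my recording; the per-minute and bitrate conversions below are my own rough math:

```python
# Quick arithmetic on the file sizes quoted above for a three-minute clip.
# Only the 6 GB temporary file and 513 MB boosted file are from the article;
# the derived per-minute and bitrate figures are rough conversions.

clip_minutes = 3
temp_file_gb = 6.0        # temporary file holding the extra image data
boosted_mb = 513.0        # final boosted video

temp_mb_per_min = temp_file_gb * 1024 / clip_minutes
boosted_mb_per_min = boosted_mb / clip_minutes
temp_mbps = temp_file_gb * 8 * 1000 / (clip_minutes * 60)   # rough, decimal units

print(f"temporary file: ~{temp_mb_per_min:.0f} MB/min (~{temp_mbps:.0f} Mbit/s)")
print(f"boosted video:  ~{boosted_mb_per_min:.0f} MB/min")
```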

My initial reaction to Video Boost was that it seemed like a stopgap – a feature demo of something that requires the cloud to work now but will run on the device in the future. Qualcomm recently showed off an on-device version of something similar, so surely that’s the end game, right? Reynolds says he doesn’t think of it that way. “Things you can do in the cloud will always be more impactful than things you can do on the phone.”

Case in point: he says that, right now, Pixel phones run various smaller, optimized versions of Google’s HDR Plus model on-device. But the full “parent” HDR Plus model that Google has been developing for its Pixel phones over the past decade is too big to realistically run on any phone. On-device AI capabilities will improve over time, so it’s likely that some things that could only be done in the cloud will eventually come to our devices. But equally, what’s possible in the cloud will keep changing, too. Reynolds says he sees the cloud as just “another component” of Tensor’s capabilities.

In that sense, Video Boost is a glimpse of the future – a future where the AI on your phone works together with AI in the cloud. More tasks will be handled by a combination of on-device and off-device AI, and the difference between what your phone can do and what a cloud server can do will fade into the background. It’s hardly the “aha” moment that Night Sight was, but it signals a significant shift in the way we think about our phones’ capabilities.
