The new feature is being tested in Google’s YouTube Stories app, and will do away with the need for a green screen or expensive, time-consuming third-party tools.
The process of video segmentation involves separating the background of a video from the foreground, essentially treating them as two separate visual layers within the same video.
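Google hasn’t published the code behind this feature, but the compositing step that follows segmentation can be sketched in a few lines. Assuming the model outputs a per-pixel mask (1 for foreground, 0 for background), replacing the background is a simple alpha blend; the function name `replace_background` and the toy 2×2 frame below are illustrative, not part of Google’s pipeline.

```python
import numpy as np

def replace_background(frame, mask, new_background):
    """Composite the foreground of `frame` onto `new_background`.

    frame, new_background: (H, W, 3) uint8 images
    mask: (H, W) float array in [0, 1]; 1 = foreground, 0 = background
    """
    alpha = mask[..., None]  # add a channel axis so the mask broadcasts over RGB
    out = alpha * frame + (1 - alpha) * new_background
    return out.astype(np.uint8)

# Tiny demo: a 2x2 "frame" whose left column is marked as foreground
frame = np.full((2, 2, 3), 200, dtype=np.uint8)       # bright foreground pixels
background = np.zeros((2, 2, 3), dtype=np.uint8)      # black replacement background
mask = np.array([[1.0, 0.0],
                 [1.0, 0.0]])                          # left column = foreground
result = replace_background(frame, mask, background)
# left column keeps the original pixels; right column becomes the new background
```

In practice the mask comes from a neural network rather than being hand-written, and a soft (fractional) mask at the edges avoids hard, jagged outlines around the subject.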
This technique is commonplace in professional video editing, but isn’t an option for amateur video creators – not unless they spend a fortune on hiring a professional video editor or constructing a green screen studio.
But it’s about to become something anyone can do thanks to an AI technique that will turn your smartphone into a mobile production studio.
In a blog post announcing the development, Google explains in detail how it leveraged machine learning to identify the difference between foregrounds and backgrounds.
Essentially, Google developers annotated thousands of videos to teach its AI to understand foreground poses and background settings. The end result will give users the option to choose from a range of backgrounds pre-coded by Google.
There are a handful of video editing tools that can handle video segmentation, but the best ones cost money; Google’s functionality, by contrast, will be free and – conveniently – built into the app.
The effect is impressive, particularly given that the gif above is running on a smartphone. Users can replace backgrounds in real time, while maintaining a frame rate of 100 fps on an iPhone 7 and more than 40 fps on a Pixel 2.
“Our new segmentation technology allows creators to replace and modify the background, effortlessly increasing videos’ production value without specialized equipment,” read the blog post.
“Our immediate goal is to use the limited rollout in YouTube stories to test our technology on this first set of effects. As we improve and expand our segmentation technology to more labels, we plan to integrate it into Google’s broader Augmented Reality services.”