Google used a 64-camera rig to train its portrait lighting AI

Google’s Portrait Light feature can make some of your more mediocre photos look a lot better by letting you change their lighting direction and intensity after the fact. The tech giant launched the AI-based lighting feature in September for the Pixel 4a 5G and Pixel 5 before giving older Pixel phones access to it. Now, Google has published a post on its AI blog explaining the technology behind Portrait Light, including how it trained its machine learning models.

To train one of those models to add lighting to a photo from a certain direction, Google needed millions of portraits with and without extra lighting from different directions. The company used a spherical lighting rig with 64 cameras and 331 individually programmable LED light sources to capture the photos it needed. It photographed 70 people with different skin tones, face shapes, genders, hairstyles and even clothing and accessories, illuminating them inside the sphere one light at a time so that each subject was captured under every lighting direction. The company also trained a model to determine the best illumination profile for automatic light placement. Its post has all the technical details if you want to know how the feature came to be.
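One reason a one-light-at-a-time capture is so useful is that light is additive: a portrait lit by any combination of the rig's LEDs equals a weighted sum of the single-LED photos, so a relatively small capture session can synthesize a huge variety of lighting conditions for training. Here is a minimal NumPy sketch of that idea; the image sizes, weights and random "OLAT" stack are stand-ins for illustration, not Google's actual data or pipeline.

```python
import numpy as np

NUM_LIGHTS = 331          # LED count from the rig described above
H, W = 4, 4               # tiny image size, just for the demo

rng = np.random.default_rng(0)
# Stand-in one-light-at-a-time (OLAT) stack: one image per LED,
# shape (lights, height, width, RGB).
olat = rng.random((NUM_LIGHTS, H, W, 3)).astype(np.float32)

def relight(olat_stack, weights):
    """Synthesize a new lighting condition as a weighted sum of OLAT images."""
    weights = np.asarray(weights, dtype=np.float32)
    # Contract the lights axis: sum_i weights[i] * olat_stack[i]
    return np.tensordot(weights, olat_stack, axes=1)

# Example: approximate a soft key light by blending a few neighboring LEDs.
weights = np.zeros(NUM_LIGHTS, dtype=np.float32)
weights[[10, 11, 12]] = [0.5, 1.0, 0.5]

image = relight(olat, weights)
print(image.shape)  # (4, 4, 3)
```

Pairs of such renders, with and without an added light, are exactly the kind of before/after training examples the article describes.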