March 18, 2022
An Adobe VP has signaled his goal to “democratize creative photography” by designing an AI-powered “universal camera app” that uses computational photography to meet the needs of all photographers.
As Adobe’s Emerging Products Group Director, Marc Levoy provided an interesting insight into his vision for the future of photo editing. In an interview on the Adobe Blog, he also made some slightly controversial statements dismissing the existence and merit of “straight photography”.
Who is Marc Levoy?
Levoy may not be a household name among photographers, but as a computer graphics researcher, he played a major role in the development of the smartphone camera. He spent most of the past decade working at Google, where, as chief engineer – and self-described “tastemaker” – for the Pixel smartphone camera from 2014 to 2020, he led the team that designed the HDR+, Portrait and Night Sight modes on Google Pixel phones.
This work, he claims, has helped democratize “good photography”. Because smartphone cameras use computational processes, everyday users can now achieve image quality that previously required some knowledge of camera settings. Pixel’s Night Sight, for example, lets users capture handheld astrophotography that would normally require a DSLR, a tripod and manual exposure settings.
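The core idea behind night modes like Night Sight – merging a burst of short, noisy handheld exposures to average away sensor noise – can be sketched as follows. This is a simplified illustration, not Google’s actual pipeline; in particular, frame alignment for hand shake is omitted:

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of noisy short exposures to reduce shot noise.

    frames: list of HxWx3 float arrays in [0, 1]. Averaging N frames
    cuts noise standard deviation by roughly sqrt(N). Real pipelines
    (e.g. HDR+) also align frames to compensate for hand shake,
    which this toy version skips.
    """
    stack = np.stack(frames, axis=0)
    return stack.mean(axis=0)

# Simulate a static, dim scene captured 16 times with sensor noise.
rng = np.random.default_rng(0)
scene = np.full((16, 16, 3), 0.2)  # uniform dim scene
burst = [np.clip(scene + rng.normal(0, 0.05, scene.shape), 0, 1)
         for _ in range(16)]
merged = merge_burst(burst)
# The merged frame's residual noise is far lower than any single frame's.
```

With 16 frames, the residual noise is roughly a quarter of a single exposure’s – which is why a handheld phone can approximate what once required a long tripod exposure.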
Earlier in his career, Levoy started a Google-funded Stanford research project called CityBlock, which was later commercialized as Google Street View. Before that – in the ’90s – he specialized in cartoon animation, volume rendering and 3D scanning. Learn more about his career biography here.
He was recently elected to the National Academy of Engineering – a private, nonprofit institution whose members are elected by their peers – in recognition of his work in computer graphics and digital photography.
Adobe “democratizes creative photography”
Levoy joined Adobe in mid-2020 and brought his expertise – and business philosophy – to the company. He attributes the rapid development of the smartphone camera in part to the “release culture” of manufacturers, including Google, “which has allowed other companies to become ‘fast followers'”. At Adobe, he encourages his team to do the same and publish research for peer review.
“At Google and now here at Adobe, I’ve tried to hire mostly superstars with PhDs. They’re smart, they’re creative, and they think of things that other people don’t,” Levoy said. “But these people want to be recognized for what they’ve invented, and they want to talk about it at conferences and get feedback from their peers. In short, they want to be part of a research community. To attract this caliber of people, I have to let them publish. Industrial Light & Magic and Pixar under Ed Catmull used the same strategy. In fact, I learned it from him.

“Does this strategy allow competitors to catch up faster? Sometimes it does, and that’s probably why Apple’s smartphone photography has improved so rapidly over the past three years. How can a publishing team respond to this threat? Maybe delay publishing a bit. Otherwise, run faster and breathe deeper. Invent cooler stuff.”
Sharing secrets with competitors is a surprising strategy for Adobe, given that its dominance of photo-editing software wasn’t achieved by playing nice with others. But this approach is apparently essential for Levoy to accelerate his goal of “democratizing creative photography” – that is, making it accessible to the masses.
“Adobe is an attractive place for this because it caters to people who are trying to take their photography to the next level and are therefore willing to spend a little more time composing and capturing an image.”
Levoy does not disclose specifically what his team is working on. But he points out that Adobe is ready to develop tools that “combine professional controls with computational photography image processing pipelines.” And, of course, artificial intelligence (AI) plays a central role.
Good photo editing has always required photographers to have a refined set of skills; many even refer to editing as an “art”. While one-click presets and automated processes are becoming more powerful and intuitive, most professional photo editing remains a manual endeavor. There is an unquenchable appetite among keen photographers to learn photo editing, and no shortage of workshops and masterclasses catering to it. Quality editing is one of the few remaining barriers to entry for certain styles of professional photography, and Levoy’s vision seems aimed at breaking down those barriers.
AI in photography is still relatively new. Most full-frame mirrorless cameras are now equipped with AI-enabled face-, eye-, animal- and object-detection autofocus. Reviewers criticized early attempts as clunky gimmicks, but these features have since been hailed as serious tools. Levoy highlights another achievement of AI in photography: automated white balance.
“Deciding how a scene has been lit, and partially correcting strongly colored lighting, is what mathematicians call an ill-posed problem. Is this park bench yellow because it was painted yellow, or because it was painted white but is lit by a yellow sodium-vapor street lamp? Until about five years ago, white balance was mostly solved by seat-of-the-pants heuristics. As part of our Night Sight paper at Google, we described an AI-based white-balancing algorithm. It worked well. Other cameras undoubtedly use AI-based white balance too. This is a great achievement for AI in photography.”
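The “seat-of-the-pants heuristics” Levoy mentions include the classic gray-world assumption: that the average color of a scene is neutral gray. Here is a minimal sketch of that heuristic – an illustration of the pre-AI approach, not the learned algorithm from the Night Sight paper:

```python
import numpy as np

def gray_world_white_balance(img):
    """Gray-world heuristic: rescale each channel so the image's
    mean color becomes neutral gray.

    img: HxWx3 float array in [0, 1]. This fails in exactly the
    ill-posed cases Levoy describes - a genuinely yellow bench gets
    "corrected" toward gray - which is why learned approaches can
    do better.
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0.0, 1.0)

# A white wall under yellow sodium-vapor light: red and green lifted,
# blue suppressed.
lit_wall = np.full((2, 2, 3), [0.8, 0.7, 0.3])
balanced = gray_world_white_balance(lit_wall)
# After correction, all three channel means are equal (neutral gray).
```

Notice the ambiguity: the same code applied to a photo of an actually yellow wall would wash the yellow out, since the heuristic cannot tell paint from illumination.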
The camera’s AI is now used to detect the sky and apply “special processing”, he says. And smartphone cameras use “AI to classify the type of scene” – photos of food, for example, are processed to make them look “appetizing”.
“AI is also used to estimate depth maps in many phones, which helps them defocus backgrounds for portraits. Several companies are working on AI-based portrait relighting, though so far with mixed results. Adobe is pushing the boundaries in this area with its Sensei-powered Neural Filters in Photoshop, but relighting remains a difficult problem.”
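The depth-map-driven background defocus Levoy describes can be sketched as blending a sharp foreground with a blurred background according to a depth mask. This is a toy illustration only: real portrait modes use learned depth estimation and lens-accurate bokeh kernels, whereas the box blur here is a simple stand-in:

```python
import numpy as np

def box_blur(img, k=3):
    """Naive box blur via a sliding window average (edge pixels are
    left sharp for brevity); real bokeh rendering uses disc-shaped
    kernels that mimic a lens aperture."""
    out = img.copy()
    h, w = img.shape[:2]
    r = k // 2
    for y in range(r, h - r):
        for x in range(r, w - r):
            out[y, x] = img[y - r:y + r + 1, x - r:x + r + 1].mean(axis=(0, 1))
    return out

def synthetic_defocus(img, depth, focus_depth=0.0, threshold=0.5):
    """Blend sharp and blurred copies of the image using a depth mask.

    depth: HxW array, 0 = near (subject), 1 = far (background).
    Pixels whose depth differs from focus_depth by more than
    `threshold` receive the blurred version.
    """
    blurred = box_blur(img)
    mask = (np.abs(depth - focus_depth) > threshold)[..., None]
    return np.where(mask, blurred, img)

# Toy scene: left half is the subject (near), right half is background.
rng = np.random.default_rng(1)
img = rng.random((6, 6, 3))
depth = np.zeros((6, 6))
depth[:, 3:] = 1.0
result = synthetic_defocus(img, depth)
# The subject (left half) is untouched; interior background pixels blur.
```

The hard part in practice is the depth map itself – which is exactly where the AI Levoy mentions comes in.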
AI creating images
On the subject of AI and photography – though unrelated to Levoy and Adobe – another trending development is the generation of artificial images. Below are recent results published by the stock photo agency Smarterpix, and they are amusing in their own way: they just seem a little off.
These synthetic humans are assembled from parts of real humans – an ear or two from Bob, a chin from Maria, someone else’s nose, and so on. The result is cost-effective, litigation-free synthetic content built from legally clean datasets that are also available for licensing.
Part of the pitch is that in five to ten years, stock photography will almost entirely use AI-generated models instead of humans. While not having to pay models or secure model releases can make life easier for the cash-strapped stock photographer, there’s no “window to the soul” with these synthetic people.
Straight photography “a myth”
Levoy’s journey has undoubtedly left him with a unique perspective on photography, but some may dispute his rejection of “straight photography”. Here is the full excerpt from Levoy when asked about “the balance between technology that enhances photography and an artist’s creativity and individual expression”:
“There is a myth in photography of ‘straight photography’. Perhaps the myth originated with Ansel Adams and Group f/64, which he co-founded in 1932. Likewise, cameras often have a processing option called ‘Natural’. But in truth, there is no straight photography and no ‘natural’ processing. The world has a higher dynamic range (the difference in brightness between dark and light areas) than a photograph can reproduce. And our eyes are adaptive sensing engines: what we think we see depends on what’s around us in the scene – that’s why optical illusions work.
“As a result, any digital camera’s processing pipeline adjusts the colors and tones it records, and those adjustments are inevitably partly subjective. I was the main ‘taster’ of Pixel phones for several years. I liked Caravaggio’s paintings, so Pixel 2-4 images looked dark and contrasty. Apple certainly has tastemakers too – I know some of them.
“The key to artistic creativity lies in mastering the image. Traditionally, this happens after capturing the image – Adobe has built a business on this premise. If you capture RAW, you usually have more control, so Adobe Lightroom specializes in processing RAW files (including its own DNG format).
“What’s exciting about computational photography is that – far from taking control away from the artist – it can give artists *more* control, and at the point of capture. An example of this is Pixel’s Dual Exposure controls – separate controls for highlights and shadows, rather than a single exposure-compensation slider. Another example is Apple’s Photographic Styles, which are live in the viewfinder. This is just the tip of the iceberg. We’ll start to see more controls and more opportunities for artistic expression in cameras. I can’t wait!”
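The Dual Exposure idea Levoy cites – independent control over shadows and highlights rather than one global exposure slider – can be illustrated with a simple split tone curve. This is a toy model of the concept, not Google’s actual implementation:

```python
import numpy as np

def dual_exposure_tone(img, shadow_gain=1.0, highlight_gain=1.0, pivot=0.5):
    """Apply separate power curves below and above a luminance pivot,
    joined continuously at the pivot.

    img: float array in [0, 1]. shadow_gain > 1 lifts tones below
    the pivot; highlight_gain > 1 compresses tones above it. The two
    segments meet exactly at (pivot, pivot), so the curve stays
    continuous and monotonic.
    """
    lo = pivot * (np.minimum(img, pivot) / pivot) ** (1.0 / shadow_gain)
    hi = pivot + (1 - pivot) * (
        np.maximum(img - pivot, 0) / (1 - pivot)) ** highlight_gain
    return np.where(img < pivot, lo, hi)

# Lift shadows and tame highlights independently, as two sliders would.
x = np.linspace(0.0, 1.0, 11)
y = dual_exposure_tone(x, shadow_gain=2.0, highlight_gain=2.0)
# Endpoints and the pivot are fixed; shadows rise, highlights fall.
```

The point of exposing such controls live in the viewfinder, as Levoy argues, is that the photographer makes these tonal decisions at capture time rather than in post.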
Do you agree? Let us know in the comments below.
Read the full interview here.