Adobe Firefly is a set of tools built on generative artificial intelligence models that can create and transform audio, video, illustrations and 3D models from text prompts, much like DALL-E and ChatGPT. Adobe first showed off the toolset last month, and on Monday it announced a series of planned upgrades to further empower users of its Creative Cloud video and audio applications. These features will come to Firefly’s beta program later this year.
Firefly’s features span Adobe’s ecosystem, including applications such as Premiere Pro, Illustrator, After Effects and Photoshop, but to try them, users will have to wait until the beta program opens. The features announced Monday are mainly aimed at sparing professional video editors tedious work: improving color levels, inserting placeholder images, adding effects, and automatically recommending B-roll footage suited to a project. Editors simply tell Firefly what they want via text prompts and let the algorithm do the rest.
These new features include “text to color enhancements,” a wide-ranging capability that can adjust brightness and saturation levels, change the time of day and even the season of the year in response to natural-language cues. The generative AI capabilities also extend to audio, letting editors insert background music and sound effects simply by describing what they want. The animated-font feature shown at last month’s Adobe event is also coming soon, along with an automated B-roll feature that analyzes script content to generate storyboards and recommend video clips. Firefly can even create personalized tutorial guides to help new users learn these features.