Author: Allison Green

Creativity & AI Take Flight with Adobe Firefly

Adobe Firefly website displayed on computer mac laptop screen
Credit: Adobe Stock

Riding the wave of the AI buzz, Adobe has released its own standalone generative AI tool: Adobe Firefly. After starting as a beta, it launched for commercial use on September 13, 2023, and it is part of the CSU license with Adobe Creative Cloud. Firefly can be used on its own as a web-based product, but its features are also integrated into other Adobe apps and products such as Photoshop, Illustrator, Lightroom, and Express. With the CSU license, anyone within the Cal State system can harness the power of generative AI, which otherwise has a pay-per-credit model.

According to Adobe's generative AI tutorials and press releases, Firefly's content derives from: 1) Adobe Stock images, 2) public domain content, and 3) user-uploaded images (that is, images you upload yourself to augment). On top of that, you can add textures and styles to regenerate photos and vectors to match your vision!

Here are my impressions of the new Firefly tools to date:  

Text to Image 

This feature allows users to be the most creative. As with ChatGPT and other LLMs (large language models), you can type a description of what you would like to see, and Adobe will generate a handful of images for you to choose from. If none are to your liking, you can ask it to generate more options.

My favorite part of this feature is the ability to adjust the style, or "content type," of the image. Depending on the look and feel you're striving for, you can choose photo, graphic, or art as a style.

One thing to note as you tinker and refine your images: Firefly differs from ChatGPT in that you cannot have an ongoing conversation to track your edits. If you want to add to or change what has been generated, you have to go back to your original text prompt and make the necessary edits there.

Text to Image is better suited to non-human images and art, since some of the generated art featuring people is a little off (e.g., full-body figures are disproportionate and hands or limbs look strange, as is typical of most AI-generated images). Plus, many campuses want to feature their own students and campus landmarks and often have photo banks for that purpose.

Three beagle puppies wearing birthday hats
Text to Image feature with prompt “beagle puppies with party hats in a grassy park”

Text Effects  

Text Effects is probably the least relevant tool for work use cases, in my opinion. You choose a font and size for your text, then apply an effect to help the text come to life. You can apply glitter, metallic, floral, and many other looks, and then specify a particular style within that look. The only professional use case I can readily think of might be designing informal promotions such as student organization posters or social media content. Those who adhere to a university or department style guide may not be able to use this feature to its full capability.

Thin birch trees in the background with "Let It Snow" block letters stylized with metallic snowflakes.
Text Effects feature with prompt “snowflakes and silver glitter”

Generative Fill 

The feature with the most promise, to me, is Generative Fill. Admittedly, I need more practice honing my prompts to get the images I want, but this feature is good for photo augmentation in Express and Photoshop. As a Firefly learner and novice, I found the tool for marking the placeholder frustrating to use in Express: the brush tool (used to mark the placeholder) didn't always translate to where I wanted the image to go or how much space I wanted the new image to take up. Ultimately, the process did not yield results as varied or creative as Text to Image. Lastly, the generated asset is not movable (at least not without some heavy Photoshopping) and can't be adjusted; you have to regenerate with a version of the prompt instead.

Arguably, Generative Fill is better suited for removing elements than adding unless you know how to masterfully describe exactly what you’re looking for. What I found to be most useful, especially when thinking about using stock photos or campus photos, is the ability to remove pesky elements that are distracting or out of place. Much like a wedding photographer would remove someone’s camera phone in a shot of the ceremony, I could remove distracting stickers or logos on student laptops or a Starbucks logo in the background of a shot of the student union.  

If the goal is creative content with more flexibility, such as posters, Generative Fill can be useful for adding snowflakes to a department holiday card or similar designs. But I would use it more for photos.

Students around a table looking at a large computer monitor
Original Image courtesy of CSUCO Marketing Communications
Students around a table looking at a large computer monitor
Image retouched with Generative Fill: posters on curtains removed,
glasses of lemonade replaced with bottled water, and a tablet device added to the table.

Generative Recolor 

The last feature released to date is Generative Recolor. I want to start by saying that Generative Recolor only works on vector objects and does not work on raster-based images such as PNG or JPG files. With this in mind, I would find the most use for this feature in Illustrator or Express (although not exclusively in those apps).

Generative Recolor takes your vector image and can change the entire color palette to match your project's look and feel. The feature also offers sample color palettes, or you can build from a CMYK/RGB sample. After recoloring, you can still recolor specific sections if they are off from your vision or if you want to change particular elements. For example, if you are recoloring an illustration of a cartoon person, it may change the character's skin tone to a non-normative color; you can easily change those elements back to a flesh tone if you wish.

Screenshot of an Adobe Illustrator art board with a colorful vector image
Illustrator gives the option to apply generative recolor (in gray panel) or to recolor with a library or manually

AI Bias 

I would be remiss if I didn't mention the controversy around bias built into AI. Thankfully, Adobe as a company is especially attuned to mitigating and addressing bias and upholding its enterprise values of diversity, equity, inclusion, and justice. As images are generated, users can flag suggestions that are inappropriate or insensitive. Because these features are so new to the public, Adobe also has benchmarks for continually listening to users' experiences. I encourage you to be part of the solution by reporting any bias in images you come across. Firefly will be constantly learning and refining based on the feedback it receives.

On the user side, it can be easy to take generated images at face value, but in what ways can we check our own bias or challenge our assumptions? If we quickly need an image of faculty for a flyer, what might that default look like? As designers, it's important for us all to perform our own gut check and draw on our campuses' anti-bias training or DEIJ measures in place. We want to represent our diverse CSU system employees and student body.

Overall Impressions 

My overall impression of Adobe Firefly and its features is positive! They are fun and really help users expand their horizons and go beyond the previous confines of digital design.

There are still arguments about whether generative AI is more helpful or more harmful; from my standpoint with AI and digital art, it can be a powerful, supportive tool for taking designs to the next level. I would remind anyone who feels overwhelmed or intimidated by AI that you, as the user and creator, are still in control of the art. Adobe allows users to remix, regenerate, and give feedback on its offerings.

Whether you're creating logos, slide decks, digital promotions, or templates, Firefly tools can save a lot of time compared to sifting through pages of stock photos or meticulously creating original assets (unless you want to).

Adobe is currently exploring even more features and technologies. Take a peek at what they're researching: https://research.adobe.com/research/