Photoshop's Generative Fill tool has users flooding Twitter with cool expansions of iconic album covers

Cal Jeffrey

In context: Despite reservations from some groups that want developers to tap the brakes on generative AI, the technology continues to improve at a breakneck pace. The advancement is arguably most evident in the image-generation subsector, which has had time to mature and to simmer down some of the early controversies that large language models are facing now.

Recently, Adobe released a Photoshop feature called "Generative Fill," which lets content creators expand an image beyond its original borders. In a nutshell, the tool is powered by Adobe's image-synthesis model, Firefly. Users can extend the edges of a picture in any direction, and Generative Fill will produce cohesive content with or without contextual prompts.

People have already started going wild with the feature on social media, with many examples going viral. One impressive set of images is from a Twitter user going by "Dobrokotov" (AI Molodca on Telegram). The self-proclaimed Russian multimedia artist used Generative Fill to expand some iconic album covers. His rendition of Nirvana's Nevermind (above) racked up over 2.3 million views in just a few days.

While the Nirvana album looked like a quick job – simply expanding the borders and then letting Generative Fill do its thing – some of the other covers took some prompting and trying out a few generated elements before landing on something stunning. Dobrokotov's take on the Abbey Road album is a prime example (below).

Although the Beatles released Yellow Submarine three years before Abbey Road, Dobrokotov added a yellow submarine off to the side as a neat little tribute. Another effect he likely prompted from the AI is the surrealistic space theme across the top of the image. Additionally, the super-extended crosswalk stripes and the fish-eye effect to the sides make the image fit right in with the 1960s era of psychedelic album art. Speaking of which: Dobrokotov's extension of Muse's Black Holes & Revelations is pretty trippy too (below).

While Dobrokotov didn't share his process for creating his stunning expansions, others have. As you can see in the video below, getting the final image to look good usually takes more than stretching out the borders and calling it a day. Producing a beautiful, eye-catching piece takes a good understanding of image composition, a bit of trial and error, and some final touch-up work. Generative image AI has come a long way but still struggles with a few things, like eyes and fingers.

However, with some creative editing and post-generation touch-ups, artists can mask these flaws well enough to pass the work off as fully human-created. That concern has already been voiced within the AI image-generation scene and is still being debated on several levels, but in the meantime, it's fun to see what people can come up with, with a little (or a lot of) aid from a machine (below, image credit: fangming.li).


 
I was checking the generation of cute girls with Photoshop Beta and all of them got deformed teeth. Next I asked for a monkey with a coconut and bananas, and I got only bananas and no coconut, and again deformed images. Adobe needs to polish their AI.
 
While most people on here seem miserable enough to hate other people's fun takes on music albums, I personally think this type of software expands people's imagination without the need to be a masterful designer.

People on here seriously need to get off the hate horse.
 
It's interesting technology. I found a photo of a dry lake bed on the web and put it into the Photoshop beta. Then I told it to add some clouds to the crystal-clear sky, picked from the 3-4 options it came up with, and added that. Next I told it to add a blue car and picked from one of about 7. Then I drew an egg-shaped circle under the car and told it to add reflective water. VERY minor cleanup and it was perfect.
Took about 10-15 minutes. Had I tried that myself, it would have taken an hour or more just on the reflective water.
As I tell friends & coworkers: just because you see a photo, unless you know what and where to look, it might be fake.
Some people are pretty good, but most don't get reflections, shadows, and bokeh correct when trying to photoshop something.
 
Cool! They did a good job with extending out these albums!

Regarding the concern of generative AI art being passed off as human-made: this won't help with text, but for images and video I think the simple solution would be to have the generative AI software watermark the art. (This would be done outside the AI itself: the AI generates an image, it's run through the watermark system, and THEN it's sent to the user, so the user can't just say "You know, that looks great, but what would it look like without the watermark?" to prompt the AI to leave it off.) Google Maps does something similar: the maps look fine when you view them normally, but zoom WAAY in and you'll find something like "google maps (c) 2023" in various locations all over the map.

Just like the movie companies' fantasy that the right DRM can make movies uncopyable, I'm sure it's a fantasy to think these watermarks couldn't be removed. But it would be easier to just admit an AI helped with some artwork than to claim you did it all yourself and try to scrub the watermarks out.
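None of the comments above describe an actual implementation, and real provenance schemes are far more sophisticated (and more tamper-resistant), but the generate-then-stamp pipeline the commenter imagines can be sketched with a toy least-significant-bit watermark on a grayscale pixel grid. Everything here is illustrative, not any real product's scheme:

```python
# Toy LSB watermark: hide a short provenance string in the least-significant
# bits of a grayscale image (a list of rows of 0-255 ints). Changing only the
# LSB shifts each pixel by at most 1, so the image looks unchanged.

def embed_watermark(pixels, message):
    """Return a copy of the pixel grid with the message bits in the LSBs."""
    # 8 bits per character, most-significant bit first
    bits = [(ord(ch) >> i) & 1 for ch in message for i in range(7, -1, -1)]
    flat = [p for row in pixels for p in row]
    if len(bits) > len(flat):
        raise ValueError("image too small for message")
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & ~1) | b  # overwrite the least-significant bit
    width = len(pixels[0])
    return [flat[i:i + width] for i in range(0, len(flat), width)]

def extract_watermark(pixels, n_chars):
    """Read n_chars characters back out of the LSBs."""
    flat = [p for row in pixels for p in row]
    chars = []
    for c in range(n_chars):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (flat[c * 8 + i] & 1)
        chars.append(chr(byte))
    return "".join(chars)

# A 16x16 "image" of uniform mid-gray pixels
image = [[200] * 16 for _ in range(16)]
marked = embed_watermark(image, "AI")
print(extract_watermark(marked, 2))  # -> AI
# Largest per-pixel change introduced by the watermark:
print(max(abs(a - b) for ra, rb in zip(image, marked)
          for a, b in zip(ra, rb)))  # -> 1
```

As the DRM comparison suggests, a marker this simple is trivially destroyed by cropping, resizing, or re-encoding, which is why robust watermarking and signed provenance metadata are active research and standardization areas rather than a solved problem.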

 
Talented humans can still do it better, but it takes them a lot longer. And we know that evolution favors speed.
 