Exploring generative AI (Midjourney, Adobe Firefly and more)
This year I've been experimenting with various generative AI tools. They're hugely impressive and there's great potential in how they can assist in the creative process, but there are ethical considerations too. Here are my initial observations, lessons and thoughts.
How I’ve been using generative AI tools
I've tried using generative AI tools to create branding assets and logos, social media marketing adverts, original imagery for art direction, photo editing, infographics and website UI design.
The tools I've tried include Stable Diffusion, DALL·E 2, Midjourney, Adobe Firefly, Gamma and many more. I've also used ChatGPT in combination with some of them.
Free vs. paid tools
There's a range of free and paid options and, unsurprisingly, the paid versions performed better. They were more stable and offered features that were missing from the free versions.
You get out what you put in
My first experiments with generative AI tools produced disappointing results. It quickly became clear that producing quality imagery is a skill in itself, and it comes down to how good your prompts are. Over the last few months my prompt-writing skills have developed and the quality of my output has drastically improved. My prompts are now more specific and detailed, referencing things like particular lighting conditions and photographic techniques.
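To make the idea of a detailed prompt concrete, here's a small sketch of how I think about structuring one. The `build_prompt` helper is entirely hypothetical (it's not part of any of these tools' APIs); it just assembles the kinds of ingredients, such as subject, lighting and photographic technique, that I've found make prompts work better.

```python
def build_prompt(subject, style=None, lighting=None, technique=None, extras=None):
    """Assemble a detailed image-generation prompt from structured parts.

    A hypothetical helper: real tools just take the finished text string.
    """
    parts = [subject]
    if style:
        parts.append(f"in the style of {style}")
    if lighting:
        parts.append(f"{lighting} lighting")
    if technique:
        parts.append(technique)
    if extras:
        parts.extend(extras)
    return ", ".join(parts)


prompt = build_prompt(
    "a ceramic coffee mug on a wooden desk",
    lighting="soft golden-hour",
    technique="shot on an 85mm lens, shallow depth of field",
    extras=["high detail", "product photography"],
)
print(prompt)
# → a ceramic coffee mug on a wooden desk, soft golden-hour lighting,
#   shot on an 85mm lens, shallow depth of field, high detail, product photography
```

The finished string is what you'd paste into a tool like Midjourney or Stable Diffusion; the structure simply forces you to be specific rather than vague.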
You can use reference images in addition to text as a starting point and blend multiple images. There are also websites that provide the sample images and the prompt text that created them to inspire you.
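The tools themselves blend images in ways far more sophisticated than simple pixel mixing, but the basic intuition is a weighted average between two sources. This pure-Python sketch (my own simplified illustration, not how any of these tools actually work internally) shows that linear blend for a single RGB pixel:

```python
def blend_pixels(a, b, alpha=0.5):
    """Linearly blend two RGB tuples: result = a + alpha * (b - a).

    alpha=0 returns a unchanged, alpha=1 returns b,
    and alpha=0.5 gives an even 50/50 mix.
    """
    return tuple(round(ca + alpha * (cb - ca)) for ca, cb in zip(a, b))


# Blending pure red with pure blue gives purple.
print(blend_pixels((255, 0, 0), (0, 0, 255)))
# → (128, 0, 128)
```

In the actual tools, the same weighting idea is applied in a learned latent space rather than to raw pixels, which is why blended results look like a coherent new image instead of a double exposure.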
Brainstorming using generative AI
A big takeaway from my experimentation has been that generative AI tools can be helpful for the early idea brainstorming phase of a project. These tools can output a large volume of ideas relatively quickly, helping you get the obvious stuff out of the way so you can move on to the more promising ideas. There's still a significant amount of effort and skill required to generate these ideas quickly but it feels a bit like having a helpful assistant.
Generative AI tools as an alternative to pre-made stock imagery
Finding suitable stock imagery can be tedious and time-consuming, especially if you need something really specific. I’ve had some success in creating the imagery I need from scratch using generative AI tools. You may end up spending the same amount of time as if you were trawling through stock image websites, but the end results can be much more tailored. I use these images as placeholders that indicate the type of thing that could be created from a real-world photoshoot.
Watch out for random visual errors
Many times I've created imagery with these tools that appears perfect, only to spot a random visual error on close inspection. Humans are still definitely needed, not just to input the initial idea, but to check and edit the final output too.
AI needs to be used ethically
With the right prompt instructions, it's possible to create imagery in most styles imaginable. With this power comes great responsibility. Although we can produce imagery in the style of specific artists, is it ethical to do so?
Similarly, it’s possible to create incredibly convincing imagery of fake scenarios and people. There’s the potential that using those types of images could erode trust in an organisation's real work.
The tools are developing rapidly
It feels as though extra features and functionality are being added to these tools every month. You can now quickly create variations of images, change dimensions, adjust the level of stylisation and even edit specific parts of an image. The most significant development I've seen is that some tools are adopting a more conventional button-based UI, which reduces the need to type everything out and puts masses of customisation choices within easy reach.
To be continued…
This blog post is only the start of my generative AI exploration. I'm continuing to use and test these tools, as well as trying new ones regularly, and I'm excited about incorporating them further into my creative process.
I’ll be back with another update soon.