I’ve been a full stack designer for a long time. As part of that, I’ve made a habit of trying new things, with a focus on the visual and front-end side – new code, apps, tools, styles, techniques, benchmarks, and research, continuously and from every source available. I like to always be learning and to keep it fresh, and there’s always something new to try.
Ever since AI generation arrived years ago, I have been deep in it trying to figure out exactly what it is and what it is capable of – to sort the hype from real-world professional use cases, gauge output quality, see which design habits change and how, and immerse myself in the new ways of creating things that AI tools make possible.
I still have the eye of a creative director and the design system awareness of an interactive front-end designer and coder. And I still have the years of vector illustration, interactive animation, and creating things for global brands in continually evolving media. And I still have the deeply ingrained human factors design benchmarks that went into the UI of a number one design app in the App Store, an app that led to an acquisition and that is widely used. And I still have the lens of someone who created visual designs for some of the most visited sites on the internet, and whose work is used by the governments of multiple nations.
So, that’s the lens I bring to testing AI generation in design, and especially visual design. To say I'm just curious is an understatement.
There’s no way to understand something deeply other than to dive in. And for years the output wasn’t very realistic. You could tell it was AI. So what you could do with it was slightly more innocent, since it was basically generating cartoons.
So I tested image generation across many different platforms. I had many years of design experience and a lot of illustration behind me, and out of sheer curiosity there was a massive range of visual things I wanted to see whether AI could generate. As part of the deep dive, I ran prompts for as many things as I could think of, just to see the quality of the output – like that one specific shading technique I had been wanting to explore, or what Midjourney could do with my extensive vocabulary around visual design.
To know the tools’ capabilities you had to use them, even though the training data, like with most of these tools, came from a research posture: they fed in as many kinds of scraped public data as possible to push the tools and see what was possible. That’s different from a commercial posture, where the same data training becomes a completely different thing entirely. AI generation was part of visual design now, so I had to understand it down to the algorithms.
I know things change fast, but how they change matters as well.
As someone who has helped design design tools, and has thought a lot about the subject, one of the benchmarks for making generative AI useful as a professional visual design tool is repeatable output that can be controlled down to the last pixel. And editable. And controllable enough that when you need it to do a specific thing, you can be reasonably sure it will actually do it. Then you can meet your objectives. That matters when a whole team is relying on you.
So one of the things I knew early on (because I never expected AI generation to go away, or to do anything other than evolve) was that for AI gen tools to be useful, they had to make it easy to create consistent characters (not necessarily realistic-looking ones) and offer repeatable (or programmatic) techniques for controlling every kind of output. By now I have done it more than once (quality discussion aside) using many different tools.
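To make "repeatable, programmatic control" concrete, here's a minimal sketch of the basic idea, assuming a local Stable Diffusion setup via Hugging Face's diffusers library; the model ID, prompt, and seed are placeholder assumptions for illustration, not my actual workflow or tools.

```python
# Minimal sketch: repeatable generation via a fixed random seed.
# Assumes the diffusers + torch stack and a CUDA GPU; model ID and prompt are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model ID
    torch_dtype=torch.float16,
).to("cuda")

PROMPT = "studio portrait of the same character, soft rim lighting"  # placeholder prompt
SEED = 42  # fixing the seed is what makes the run repeatable

def render(prompt: str, seed: int):
    # Re-seeding the generator before each run reproduces the same image for the same inputs.
    generator = torch.Generator(device="cuda").manual_seed(seed)
    return pipe(
        prompt,
        generator=generator,
        num_inference_steps=30,
        guidance_scale=7.5,
    ).images[0]

render(PROMPT, SEED).save("character_take_1.png")
render(PROMPT, SEED).save("character_take_2.png")  # should match take 1
```

The point isn't this particular library; it's that a fixed seed plus the same parameters turns generation from a slot machine into something you can rerun, version, and hand off to a team.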
The thing that’s different now is that it’s possible to create a repeatable character so real, and have it speaking immediately (even singing and playing musical instruments), that you simply can’t tell, especially if you’re not looking for it and not aware how far things have come in that area.
I can put AI labels all around it and make its head spin in circles, and out of context certain people will probably still think it’s a real person. So I adjust my expectations.
Every time I achieve something hard with AI (because it has to be visually exacting, repeatable, and controllable), there’s a duality I’m still getting used to as a visual designer. It’s great, and then it opens up more questions that have nothing to do with the beauty of it.
Usually, when you set out to create something beautiful and useful, and then go deep to pull out all the details and bring an idea to life, it’s a satisfying experience when the result meets the goal visually.
Usually the result speaks for itself. Usually it’s something beautiful and useful, and everyone is happy because it’s been filtered through many lenses. Depending on the context, sometimes that means the resources of multiple governments approving visual content I made, or a brand legal team signing off, or designs mapped to healthcare compliance requirements, or vector art meeting the requirements of a platform marketplace. To name a few examples.
But given the way AI gen was introduced to the world, there’s always another layer of something else I’m trying to put my finger on.
As a designer, I wanted to create something beautiful to explore fashion, design, and shopping visually – and explore all the interesting creative work happening there. So I built a workflow for creating characters and fashions, then created and refined a main character for exploring the latest capabilities around AI vision, fashion, virtual try-ons, and that whole realm of platforms and tech. I had a lot of fashion-related designs I wanted to bring to life.
And I found myself trying to understand and put into words what about that is important to know, convey, or pivot around. I think it’s the idea of parasocial relationships. Understanding that connection is the top concept to bring forward.
Parasocial relationships are not necessarily an AI-related thing. But with AI becoming so lifelike and the tools so widely available, they’re an overarching human factors benchmark to elevate in importance going forward. Maybe the guardrails and filters get better. We’ll see.
So I think going forward, there’s going to be a sort of PSA element threaded through all this. As a designer focused on visuals, I need to know what’s possible with AI, but it’s becoming so realistic that I’ve been speechless more than once at what I’ve created.
By the time creators are using it, the technology has, at its source, already evolved far beyond what’s been released. I see that, and then I see the audience at large and current media consumption patterns. And I hypothesize that on both sides of the spectrum, neither the creatives nor the audience have decades of experience creating, presenting, or consuming synthetic content that’s indistinguishable from real life, so continually finding the honorable path among all the new AI generation capabilities is important, even if you have to spend the time and resources to do it.
Surreality aside, I assume most social media content online is at least partially AI-based, and that its creation may have been automated for optimization on some level. I’ve become more aware of it the more fashion-related things I create.
I don’t give advice; I try to show rather than tell, because a picture is worth a thousand words, especially these days. Given that most creatives and audiences today don’t have decades of experience parsing synthetic content, the honorable path for designers seems to be showing what’s possible, maybe threading PSA-style transparency into creative work (figuring out the nuances of what that looks like), and helping set responsible standards for AI-generated media where possible.
This AI-as-PSA idea is to be continued. It's a theme I'm thinking about, especially when it comes to exploring fashion- and design-related social media content (https://www.youtube.com/@styleanddesigninspiration). I'll also be unpacking the multiplicity and complexity of lifelike AI generation.