I’ve been a full stack designer for a long time. As part of that, I have religiously tried new things, with a focus on the visual and front-end side: new code, apps, tools, styles, techniques, benchmarks, and research, continuously and from every source available. I like to always be learning and to keep it fresh, and there’s always something new to try.
Ever since AI generation arrived years ago, I have been deep in it, trying to figure out exactly what it is and what it is capable of – to sort the hype from real-world professional use cases, gauge output quality, see which design habits change and how, and immerse myself in all the new ways of creating things that AI tools make possible.
I still have the eye of an art director and the design-system awareness of an interactive front-end designer and coder. I still have the years of vector illustration, interactive animation, and creating things for global brands in continually evolving media. I still have the deeply ingrained human-factors benchmarks that led to creating the UI for a number-one design app in the App Store, an app that led to an acquisition and that is used by most people. And I still have the lens of someone who created visual designs for some of the most visited sites on the internet, and whose work is used by the governments of multiple nations.
So, that’s the lens I bring to testing AI generation in design, and especially visual design. To say I'm just curious is an understatement.
There’s no way to understand something deeply other than to dive in. And for years the quality was not that realistic; you could tell it was AI. So what you could do with it was slightly more innocent, since it was basically generating cartoons. Or, I should say, I was focused more on exploring models’ line-art capabilities and quality, and not yet on human imagery. I was testing models for line and art-style quality, making things like this: https://exploringallthethings.blogspot.com/2025/04/a-few-characters.html.
And I tested image generation across many different platforms. I had many years of design experience and a lot of illustration work behind me, and I came to it, out of sheer curiosity, with a massive list of visual things I wanted to see whether AI could generate. As part of the deep dive, I ran prompts to generate as many things as I could think of, just to see the quality of the output – like that one specific shading technique I had been wanting to explore, or simply to see what Midjourney could do with my extensive vocabulary around visual design.
To know the tools’ capabilities you had to use them, even though the training data, like that of most of these tools, came from a research posture: feed the models as many kinds of scraped public data as possible and push them to see what is possible. That is not the same as commercial use, where that same training data becomes a completely different thing entirely. It was part of visual design now, so I had to understand it down to the algorithms.
As someone who has helped design design tools, and has thought a lot about the subject, one of the benchmarks for making generative AI useful as a pro visual design tool is repeatable output that is controlled down to the last pixel. And editable. And controllable enough that when you need it to do a specific thing, you more or less know it will actually do that thing. Then you can meet your objectives. That’s important if a whole team is relying on you.
And so one of the things I knew early on (because I never expected AI generation to go away or do anything other than evolve) was that for AI gen tools to be useful, they had to easily create consistent characters (not necessarily realistic-looking ones) and offer repeatable (or programmatic) techniques for controlling outputs for everything. Now I have done it more than once, using many different tools.
The thing that’s different now is that it’s possible to create a repeatable character so real, and have it speaking immediately (even singing and playing musical instruments), that you simply can’t tell – especially if you’re on a tiny mobile screen, not looking for it, and not aware of how far things have come in that area.
I can put AI labels all around it and make its head spin in circles, and out of context certain people will probably still think it’s a real person.
Every time I achieve something hard with AI (because it’s got to be visually exacting, repeatable, and controllable), it’s got a duality I’m still getting used to as a visual designer. It’s great, and then it opens up more questions that have nothing to do with the beauty of it. In the case of AI-repeated characters that are starting to be convincing, the question is how to leverage the high-quality visualization without the character being perceived as real.
Usually when you set a goal to try and create something beautiful and useful, and then you go deep to pull out all the details and bring an idea to life, it’s a satisfying experience when it meets the goal visually.
Usually the result, for the most part, speaks for itself. Usually it’s something beautiful and useful and everyone is happy, because it’s been filtered through many lenses. Depending on the context, sometimes it’s even got the resources of multiple governments approving the visual content I made, or a brand legal team approving content, or it’s designed to map to healthcare compliance. Or it’s vector art meeting the requirements of a platform marketplace. To name a few examples.
But with the way AI gen was introduced to the world, there’s always another level of something else I’m trying to put my finger on as I keep pushing it and trying new things.
By the time creators are using it, the technology at its source has evolved far beyond what is released. I see that, and then I see the audience at large and current media consumption patterns. And I hypothesize that on both sides of the spectrum, neither the creatives nor the audience have decades of experience creating, presenting, or consuming synthetic content that’s indistinguishable from real life. So continually finding the honorable path amongst all the new AI generation capabilities is important, even if you have to spend the time and resources to do it – extra labels, extra policies, extra areas of your design system and brand guidelines. (That is a big subject, so more on that later.)
Meanwhile, I adjust my own expectations: suddenly I can make a thing people react to as if it’s a person.
I don’t necessarily want them to think it's a real person, so the solutions I’m trying are making it more obvious with labels, and considering intentionally using a more hyperreal or cartoon render style.
I’m realizing that the AI labels on YouTube, for example, aren’t really evident on YouTube Shorts. So I have started adding more labels of my own, to make it even more obvious.
To back up a bit: as a designer, I wanted to create something beautiful (a consistent character) to explore fashion, design, and shopping visually, and to explore all the interesting creative stuff happening there. Ideally it would stay consistent no matter the story requirements. So I created a workflow for creating consistent characters and fashions. Then I created and refined a main character for exploring the latest evolving capabilities around AI vision, fashion, and virtual try-ons, and all that realm of platforms and tech. I had a lot of fashion-related designs I wanted to bring to life.
And then it seemed like one person thought it was a real person, and I was really surprised. I found myself trying to understand and put into words what about that is important to know or convey, or how to pivot so that it doesn’t happen and everything stays transparent.
Social media can foster parasocial relationships, and I think understanding that connection might be the top human-factors concept to bring forward with realistic AI and social media. That’s the goal of a lot of YouTube channels: to get you to form a connection so you feel like you know them. My channel is about AI fashion, and the focus is the clothing, not the model. It’s a shame to have to add so many labels, in the sense that this could be automated by the platform and templatized. But for now, I guess figuring out my own labeling system is still a requirement for the use case.
Parasocial relationships are not necessarily an AI-related thing. But with AI becoming so lifelike and the tools so widely available, it’s an overarching human-factors benchmark to elevate in importance going forward, I think. There are a lot of use cases where highly realistic AI visualization can be extremely helpful. So that’s one use case where adding extra transparency can help avoid confusion.
So I think going forward, there’s going to be a sort of PSA element threaded through all this. As a designer focused on visuals, I need to know what’s possible with AI, so I explore a lot of areas, and it’s becoming so realistic that I’ve been speechless more than once at what I’ve created.
Surreality aside, I assume most fashion social media content online is at least partially AI-based, and that its creation might have been automated for optimization on some level. I think I have become more aware of it the more fashion-related things I create. I see that Amazon uses it.
I try to just show rather than tell, because a picture is worth a thousand words, especially these days. Given that most creatives and audiences today don’t have decades of experience parsing synthetic content, and the quality has leapt ahead fairly recently, the honorable path for designers seems to be showing what’s possible, threading PSA-style transparency into creative work (while figuring out the nuances of what that looks like), and helping set responsible standards for AI-generated media where possible.
This idea of AI as PSA is to be continued. It’s a theme I’m thinking about, especially when it comes to exploring fashion-design-related social media content: https://www.youtube.com/@styleanddesigninspiration. I’ll also be unpacking the multiplicity and complexity of lifelike AI generation.