Research Notes: Is there an AI that can see and interact with a desktop app such as Illustrator, like “hands-free” Illustrator creation?

I'm mostly curious to know what can happen when visually sophisticated and intelligent AIs can talk to each other. 

But I'm also curious about the first part of that: what a visual AI can do overall.

And what it could do if it could interface with a desktop drawing tool like Adobe Illustrator.

I lived in Illustrator for a long time. I'm curious whether I could go hands-free, whether I could create a new type of thing, or what could be done that I haven't seen before. My research into that is quickly leading to broader things, like tools that access and manipulate a desktop in general. I try to narrow the focus to visual AI, but sometimes you go broad to go specific.

What AI tool can see my desktop screen and use my Adobe Illustrator application on my desktop, and manipulate the Adobe Illustrator app to create a drawing in Adobe Illustrator on my desktop computer?

Answer from Perplexity Deep Research:

AI Tools for Automated Drawing in Adobe Illustrator

Before diving into the specifics, here's a summary of the key findings: 

Several automation tools can monitor your desktop and control Adobe Illustrator, with SikuliX and Robotic Process Automation (RPA) platforms emerging as the leading solutions. 

While true AI-powered drawing automation is still developing, these tools offer sophisticated ways to automate Illustrator workflows through screen recognition and simulated user actions.

Screen Recognition Automation Tools

SikuliX: Computer Vision Automation

SikuliX is a powerful automation tool that can effectively "see" and interact with your desktop, making it particularly suitable for controlling Adobe Illustrator.

SikuliX works by using image recognition technology to identify and control GUI components on your screen. This allows it to:

  • Automate anything visible on your desktop screen, regardless of whether you're using Windows, Mac, or Linux[1][2]
  • Run mouse and keyboard actions to interact with Adobe Illustrator's interface elements[2]
  • Recognize text elements within the Illustrator interface[2]
  • Execute complex sequences of actions through Python-based scripting[2]

Unlike some alternatives, SikuliX doesn't require direct integration with Adobe Illustrator's API. Instead, it works by recognizing visual elements on your screen and performing actions as if a human were controlling the program. This makes it flexible for automating various drawing tasks in Illustrator, from simple repetitive actions to more complex workflows[2].

One example provided in the search results shows a user asking about automating image drawing by selecting and drawing different shapes on a canvas using SikuliX[1]. This aligns exactly with the type of automation you're looking for.
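The shape-drawing automation described in that thread boils down to template-matched clicks plus coordinate arithmetic. Below is a minimal sketch of the action plan such a driver would execute; the tool-icon filename and the canvas-origin calibration values are hypothetical (SikuliX itself would resolve targets by on-screen image matching):

```python
# Sketch: plan the GUI actions a SikuliX-style driver would run to draw
# a rectangle on the Illustrator canvas. "rectangle_tool.png" and the
# canvas_origin calibration are assumptions for illustration only.

def plan_rectangle(canvas_origin, doc_point, size):
    """Map a document-space point to screen space and emit GUI actions."""
    ox, oy = canvas_origin          # screen position of artboard (0, 0)
    x, y = doc_point                # top-left of the rectangle, document units
    w, h = size
    start = (ox + x, oy + y)
    end = (ox + x + w, oy + y + h)
    return [
        ("click", "rectangle_tool.png"),  # pick the Rectangle tool by its icon
        ("dragDrop", start, end),         # drag from corner to corner
    ]

actions = plan_rectangle(canvas_origin=(300, 150), doc_point=(20, 40), size=(100, 50))
```

The fragile part in practice is the calibration: if the window moves or the zoom changes, every mapped coordinate is wrong, which is why these tools re-match templates on each run.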

Robotic Process Automation (RPA) Solutions

RPA platforms offer another approach to automating Adobe Illustrator through desktop control. These enterprise-grade solutions are specifically designed for business process automation:

UiPath

UiPath is a leading RPA platform that can create workflows to control desktop applications, including Adobe Illustrator:

  • Enables non-programmers to build automation workflows without extensive coding knowledge[3]
  • Can fetch data from external sources (like Google Sheets) and use it to update Illustrator assets[3]
  • Provides a workflow-building interface for creating complex automation sequences[3]
  • Can trigger Illustrator scripts and process the results[3]

A detailed example in the search results describes using UiPath to update Illustrator assets from Google Sheets data. The workflow fetches data, processes it, sends it to an Illustrator script, and then returns the results - demonstrating the bidirectional control capabilities[3].
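The same fetch-process-update loop can be sketched outside UiPath. Here is a hedged Python stand-in: it reads tabular rows (CSV in place of Google Sheets) and generates a .jsx that updates named text frames through Illustrator's documented `textFrames.getByName` API. The frame names in the sample data are invented:

```python
# Sketch: turn rows of data into an Illustrator .jsx that updates text
# frames by name. The CSV content and frame names are made up; the
# generated script would be run inside Illustrator (File > Scripts).
import csv
import io
import json

def rows_to_jsx(csv_text):
    jsx = ["var doc = app.activeDocument;"]
    for row in csv.DictReader(io.StringIO(csv_text)):
        # json.dumps yields safely quoted/escaped JavaScript string literals
        jsx.append("doc.textFrames.getByName(%s).contents = %s;"
                   % (json.dumps(row["frame"]), json.dumps(row["text"])))
    return "\n".join(jsx)

data = "frame,text\nTitle,Summer Sale\nSubtitle,July 1-31\n"
script = rows_to_jsx(data)
```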

JavaScript-Based Automation Options

Adobe ExtendScript (JSX)

While not an AI tool that "sees" your screen, Adobe's native scripting language is worth mentioning as it provides direct control over Illustrator:

  • Allows for deep automation of Illustrator through JavaScript (ExtendScript/JSX)[4]
  • Can automate workflows, speed up design processes, and improve efficiency[4]
  • Enables tasks like batch processing, automated styling, and format conversion[4]

The main limitation is that it requires coding knowledge, though there are emerging solutions to address this:

ChatGPT for Script Generation

A recent development is using ChatGPT to generate Adobe Illustrator scripts without extensive coding knowledge:

  • Can write automation scripts for repetitive and boring tasks in Illustrator[5]
  • Allows users to provide natural language descriptions of desired automation[5]
  • Creates JavaScript code that can be run directly in Illustrator[5]

A YouTube tutorial from February 2025 demonstrates how to use ChatGPT to write an Illustrator script for automating tedious tasks that actions can't handle[5].
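In practice, this workflow works best when the prompt tightly constrains the model's output. A sketch of such a prompt builder follows; the wording is just one reasonable template, not an official recipe:

```python
# Sketch: a prompt template that pins ChatGPT to Illustrator's older
# ExtendScript dialect and forbids prose around the code, so the reply
# can be pasted straight into a .jsx file. The rules below are one
# plausible set, not an authoritative list.

def build_jsx_prompt(task):
    return (
        "Write an Adobe Illustrator ExtendScript (.jsx) that does the following:\n"
        f"{task}\n\n"
        "Rules:\n"
        "- Target ExtendScript (ES3): no let/const, arrow functions, or template literals.\n"
        "- Use only the documented Illustrator DOM (app, documents, layers, pathItems).\n"
        "- Output only the raw script, with no markdown fences or explanation."
    )

prompt = build_jsx_prompt("Rotate every selected object 15 degrees clockwise.")
```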

No-Code Automation Solutions

For those specifically seeking no-code solutions, there are limited but promising options:

  • A Reddit discussion from January 2025 shows users inquiring about no-code ways to automate Illustrator, with one response mentioning that ChatGPT can be used to streamline straightforward tasks[6]
  • Adobe Actions provide some automation capabilities without coding, though they're more limited than scripting[6]
  • Some RPA tools provide visual workflow builders that require minimal coding experience[3]

Practical Implementation Considerations

When implementing any automation solution for Adobe Illustrator, consider these factors:

Workspace Setup Optimization

For optimal automation performance, configure your Illustrator workspace appropriately:

  • Disable the Home screen when no document is open for more consistent automation[7]
  • Consider using the Application Frame to ensure consistent positioning of interface elements[7]
  • Utilize a consistent panel layout to ensure the automation tool can reliably locate interface elements[7]

Keyboard Shortcuts for Automation Efficiency

Many automation tools can leverage keyboard shortcuts for more reliable control:

  • Automation scripts can use shortcuts like Paste in Place (Shift+Ctrl+V/Shift+Cmd+V) to ensure precise object positioning[8][9]
  • Screen modes can be toggled programmatically using the F key for optimal viewing during automation[9]
  • Selection tool shortcuts can be used to reliably select and manipulate objects[9]

Conclusion

While there isn't a single "AI tool" that autonomously creates drawings in Illustrator through screen observation, there are several powerful automation solutions that combine computer vision, scripting, and workflow automation to achieve similar results.

SikuliX offers the most direct approach to "seeing" your screen and controlling Illustrator through image recognition, while RPA tools like UiPath provide more structured, enterprise-grade automation capabilities. For those with coding knowledge, or willing to use ChatGPT to generate scripts, Adobe's native JSX scripting offers the deepest integration.

Each approach has its strengths and limitations, and the best choice depends on your specific requirements, technical skills, and the complexity of the drawing tasks you want to automate.


  [1] https://answers.launchpad.net/sikuli/+question/707195
  [2] https://www.udemy.com/course/sikulix-automate-anything/
  [3] https://www.linkedin.com/pulse/using-rpa-update-illustrator-assets-from-google-sheets-vasily-hall
  [4] https://metadesignsolutions.com/how-to-automate-adobe-illustrator-with-javascript-scripting-jsx-examples/
  [5] https://www.youtube.com/watch?v=yu_ISH6juxo
  [6] https://www.reddit.com/r/AdobeIllustrator/comments/1i5dwm6/any_nocode_way_to_automate_illustrator/
  [7] https://helpx.adobe.com/illustrator/using/workspace-basics.html
  [8] https://keycombiner.com/collections/illustrator/
  [9] https://helpx.adobe.com/illustrator/using/default-keyboard-shortcuts.html

  • It's more curiosity with all the new AI stuff coming out. Can AI control Adobe Illustrator directly? If so, how? Can it manipulate my desktop Illustrator app to draw something I asked it for - so that it looks done in Illustrator?


    ChatGPT said:

    Got it. I’ll look into whether AI can directly control Adobe Illustrator on a desktop — including how it could potentially manipulate the app to draw based on user input — and what current tools, plugins, or automation systems support that. I’ll let you know what I find.

    AI Control of Adobe Illustrator: Current Capabilities and Tools

    AI Interpretation of the Illustrator Canvas

    Currently, AI tools do not inherently “see” or interpret what’s on your screen in Illustrator unless explicitly provided with that information. (My note: I question the timeliness of this answer.) Out-of-the-box language models (like ChatGPT) have no direct access to the Illustrator interface or your artwork – they rely solely on the input you give them. In practice, this means an AI assistant wouldn’t know what shapes or text you have on your artboard unless a plugin or script feeds that data to the AI. Some experimental approaches could capture the canvas (for example, taking a screenshot or reading the document’s object data) and then use computer vision or analysis to interpret it, but there’s no native feature in Illustrator that exposes the live canvas to AI. Any “awareness” an AI has of the artwork must be manually supplied (e.g. a list of layer names or an exported image for analysis). This limitation makes true visual feedback loops challenging – the AI can execute commands, but confirming results or adjusting based on the artwork often requires human oversight or an additional programmed step.

    Workarounds: Advanced users or plugin developers can script Illustrator to retrieve some document information (like layers, selected objects, or their properties) and pass that to an AI in text form. For example, a script could gather all shape names or positions and describe them to the AI. However, such solutions are custom – no general-purpose AI agent today can simply “look” at an Illustrator file the way a human would. This lack of built-in visual context is a key challenge in AI-driven drawing assistants.
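    One concrete shape such a custom solution could take: serialize the document's objects to plain text and prepend that to the AI prompt. A sketch, assuming a minimal hand-rolled schema (type, name, position, size) rather than any real Illustrator export format:

```python
# Sketch: render a list of document objects as prompt context so the AI
# "knows" what is on the artboard. The item dictionaries use an assumed
# schema; a real workflow would populate them from an Illustrator script.

def describe_document(items):
    lines = ["Current artboard contents:"]
    for it in items:
        lines.append(
            f'- {it["type"]} named "{it["name"]}" at ({it["x"]}, {it["y"]}), '
            f'{it["w"]}x{it["h"]} pt'
        )
    return "\n".join(lines)

state = describe_document([
    {"type": "ellipse", "name": "Logo", "x": 10, "y": 20, "w": 50, "h": 50},
    {"type": "rectangle", "name": "Frame", "x": 0, "y": 0, "w": 200, "h": 120},
])
```

    With this prepended, an instruction like "make the circle on the left bigger" becomes answerable, because "Logo" and its position are explicit in the prompt.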

    Methods for AI to Control Illustrator (APIs, Scripting, Automation)

    Adobe Illustrator provides several avenues for programmatic control, which AI tools can leverage indirectly. The primary method is through Illustrator’s scripting APIs. Illustrator has a Document Object Model (DOM) accessible via ExtendScript/JavaScript (and also AppleScript on macOS or VBScript on Windows). In essence, anything you can script (creating shapes, changing colors, moving objects, etc.), an AI could be instructed to do by generating or using those scripts. For example, a user can prompt ChatGPT to write a JavaScript (JSX) script that draws a circle or batch-edits text, then run that script in Illustrator to execute the actions. This approach has been demonstrated in practice: one designer asked ChatGPT to script a Hawaiian shirt pattern – ChatGPT produced an Illustrator script that drew “waves, palm trees, and hibiscus flowers” (albeit in a quirky stylized way). The designer then copied the code into Illustrator’s script panel and ran it, generating a vector pattern without manual drawing. This shows that AI-written scripts can directly manipulate Illustrator’s native vector content.

    Other methods include plugins and extensions. Adobe Illustrator supports plugins built on frameworks like CEP (Common Extensibility Platform) or UXP, which can run code within Illustrator. Developers have created plugins that act as bridges between AI and Illustrator’s API. (We discuss specific AI-driven plugins in the next section.) In theory, an AI could also control Illustrator through general automation software – for instance, using robotic process automation (RPA) or macro tools that move the mouse, click menus, and press keyboard shortcuts. Tools like AutoHotkey or UIPath can automate GUI interactions, and if guided by an AI (provided the AI knows what sequence of actions to perform), they could accomplish Illustrator tasks. However, GUI automation is brittle and lacks the precision of using Illustrator’s API directly. It’s more common to use Illustrator’s official scripting interface for reliable control, since it can create objects and change properties exactly as needed (e.g. adding shapes to layers, adjusting fills/strokes via code). Adobe’s own support for automation (the ExtendScript Toolkit, Visual Basic scripts, etc.) means Illustrator is quite “scriptable,” and AI can take advantage of that by generating the needed script commands.

    In summary, key control methods include:

    • ExtendScript/JavaScript API: The most direct way – AI generates a JSX script which is run in Illustrator, manipulating the DOM (vector objects, layers, artboards, etc.).

    • Third-Party Plugins: Custom plugins can accept AI input (like text prompts) and translate it into Illustrator actions using internal APIs. This offers a more seamless experience (no manual copying of code by the user).

    • Macro or RPA Tools: Less common for creative work, but possible – an AI could theoretically drive an automation tool that clicks through Illustrator’s interface. This doesn’t give vector-precise control and is rarely used in production due to complexity.

    • Creative Cloud APIs/SDKs: Adobe has APIs (e.g. the Adobe Illustrator SDK in C++ or the Creative Cloud API) that might be harnessed in AI systems, but these are more for developers to integrate Illustrator with other software. An AI agent with programming access could use such APIs if configured, though this is not consumer-friendly.

    Real-world uses lean towards the scripting approach. It’s become “shockingly easy to automate… design tasks” when using AI for scripting, as one user noted, without having to manually learn the entire scripting language. This combination of AI + Illustrator’s API is powerful: the AI handles the coding part and Illustrator executes those instructions directly, maintaining a fully native workflow (the output is regular Illustrator vector objects on your canvas, not an imported screenshot or flat image).

    AI Plugins and Natural Language Control in Illustrator

    One of the most exciting developments is the emergence of AI-powered plugins that let users control Illustrator with natural language commands. A leading example is MATE for Illustrator (formerly called Omata AI), an experimental plugin that integrates OpenAI’s GPT models into Illustrator. With MATE installed, you can type in plain English what you want to do – the plugin then translates that into an Illustrator script or action behind the scenes. In other words, “it converts your workflow explanations into scripts directly in Adobe Illustrator”. For instance, a designer could say, “Create a blue rectangle, then duplicate it in a grid 3 by 3”, and the AI would generate and execute the code to draw that on the artboard. Under the hood, the plugin sends the prompt to a GPT-4 or similar model which produces JavaScript code using Illustrator’s API, and then runs that code for the user. This allows hands-free creation and manipulation of vector shapes without writing any code manually.
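    What a plugin has to produce for that grid prompt is mostly coordinate arithmetic wrapped in Illustrator DOM calls. A sketch in Python of generating the equivalent JSX (`pathItems.rectangle` takes top, left, width, height; the 20 pt gap is an arbitrary choice):

```python
# Sketch: generate the .jsx for "a blue rectangle, duplicated in a 3 by 3
# grid". Grid spacing and rectangle size are arbitrary illustration values.

def grid_jsx(rows, cols, w, h, gap):
    lines = [
        "var doc = app.activeDocument;",
        "var c = new RGBColor(); c.red = 0; c.green = 0; c.blue = 255;",
    ]
    for r in range(rows):
        for col in range(cols):
            top = -r * (h + gap)        # Illustrator's y axis points up
            left = col * (w + gap)
            lines.append(
                f"var p = doc.pathItems.rectangle({top}, {left}, {w}, {h});"
                " p.filled = true; p.fillColor = c;"
            )
    return "\n".join(lines)

script = grid_jsx(3, 3, 100, 60, 20)
```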

    Such AI assistants can handle a range of tasks: drawing basic shapes, applying transformations, adjusting colors, or even producing small utilities (one could ask for a script to rename all layers, for example). Because the plugin is operating within Illustrator, it can also potentially query the document. (In practice, current versions still rely mostly on the prompt and known scripting functions – they don’t truly “understand” the image on the canvas, as noted earlier.) Nonetheless, this represents a big step toward conversational control of design tools.

    Aside from Omata/MATE, we are starting to see voice or natural language interfaces conceptually applied to creative software. There isn’t a mainstream voice-control assistant specifically for Illustrator yet (e.g. you can’t yet say “Illustrator, draw a circle” to your computer and have it happen natively), but it’s technically feasible by combining speech-to-text with a system like MATE. In fact, one could use a voice dictation tool to feed prompts into the AI plugin – effectively giving voice commands that result in vector edits. Additionally, operating systems offer basic voice control that could work with Illustrator’s UI (for example, speaking menu commands via Windows Speech Recognition or Mac’s Voice Control), but these are not AI-driven in the creative sense – they’re simply voice equivalents of keyboard shortcuts. The true AI approach is the NLP (natural language processing) route: you describe the desired outcome, and the AI figures out the sequence of drawing operations needed.

    Other integrations and tools: Adobe has been exploring AI in its products under the banner of Adobe Sensei and the newer Adobe Firefly. While not a “type anything” assistant, Illustrator 2024 introduced a feature called Generative Recolor, which uses AI to recolor vector artwork based on a text prompt. For example, you can select an illustration and prompt the AI with “apply a sunset-themed color palette” and it will produce multiple recolored variations using Adobe’s generative model. This is an AI feature working on your actual Illustrator objects (it reads the vector shapes and adjusts their colors). It doesn’t create new shapes, but it shows how AI can interpret existing vector art and make creative changes via natural-language input. Adobe has also previewed a “Text to Vector Graphic” capability, presumably Firefly-powered, aiming to let users generate new vector art from a prompt. This would be analogous to DALL·E or Midjourney, but for vectors inside Illustrator – keeping the artwork editable and layered. As of now, such generative vector features are in early stages or beta. They indicate that direct AI-driven creation of Illustrator artwork is on the horizon within the app itself.

    Outside of Adobe’s ecosystem, there are third-party efforts that bring natural language into design workflows. For example, Magician is a plugin for Figma (a web-based vector design tool) that can “generate unique icons from a simple text prompt” as SVG vectors. While that’s not Illustrator, it shows the demand for AI that understands phrases like “a flat icon of a cat” and produces vector graphics accordingly. Another example is Illustroke, a web service that creates “stunning vector illustrations from text prompts” using generative AI. Users can input a description and get an SVG illustration which they could then import into Illustrator. These tools highlight a key distinction: using AI in a web/cloud environment to make art vs. controlling the Illustrator desktop app directly. Plugins like MATE blur this line by bringing AI into Illustrator, whereas services like Illustroke generate content externally.

    In summary, natural language interfaces for Illustrator are emerging primarily via plugins and Adobe’s own AI features. They allow a more intuitive, hands-free approach: you describe what you want, and the AI handles the mechanics of drawing or editing. This area is rapidly evolving, and we can expect more voice or chat-based control as the technology matures.

    Use Cases and Integrations for Hands-Free Illustrator Artwork

    We already touched on a few examples, but let’s highlight some concrete use cases where AI has been used to generate Illustrator-based artwork with minimal manual effort:

    • Automated Pattern Generation: In a real-world test, a designer used ChatGPT to create a full Illustrator script for a Hawaiian shirt pattern, aiming to place motifs like palm trees and flowers across the canvas. The AI’s first attempt produced something humorous – it “tried its best, generating waves, palm trees, and hibiscus flowers – or at least that’s what it thought”, resulting in abstract shapes that with some imagination looked like parts of a Hawaiian print. After a few refinements, the script was saved and run in Illustrator to produce a quirky randomized pattern. Every execution yielded a new variation of the vector art, showing how AI-generated code can introduce creative randomness. All elements came out as native vector objects (paths for waves, shapes for “trees” etc.), which the designer could further tweak by hand. This example illustrates a hands-free workflow: the human provided the idea and prompt, the AI wrote the drawing instructions, and Illustrator rendered the final artwork.

    • On-the-Fly Automation of Repetitive Tasks: Beyond novel graphics, AI is helping automate tedious Illustrator operations. For instance, a user might frequently apply the same series of transformations or styling across many objects. Instead of manually clicking through menus every time, one can ask an AI assistant to create a script for that. In the Hawaiian shirt project above, the author also had ChatGPT write a helper script to apply a specific Distort & Transform effect with one click. They integrated it into their workflow so that what used to take multiple menu actions could be done “in just one step” via the AI-generated script. This kind of integration shows AI acting as a “co-pilot” inside Illustrator – it might not design entire artworks autonomously, but it can significantly speed up the parts that are more mechanical or rule-based, allowing the human designer to focus on creative decisions.

    • Layer and Object Management via Voice/Text: Consider scenarios where an artist wants to rearrange or tweak a design while staying in a creative flow. Typing or saying commands like “align these three objects vertically with equal spacing” or “duplicate the current layer and mirror it” could be executed by an AI plugin. Early adopters of MATE for Illustrator have reported using it to do exactly this kind of thing. It’s like having an assistant who knows Illustrator’s menu system inside-out – you issue a plain instruction and the assistant (AI) clicks the right buttons or runs the right code. One reviewer noted that MATE “collaborates with ChatGPT” so you describe your workflow and it handles the scripting instantly. This has been especially useful for designers who aren’t comfortable writing code: they can automate complex tasks by simply describing them. For example, “generate 10 variations of this shape with random sizes and colors” could yield a quick series of options on artboards, all done via AI-driven automation rather than manual duplication and editing.

    • Generative Design Experiments: AI’s ability to produce unexpected results has also been embraced in Illustrator for creative exploration. Some users treat the AI like a brainstorming partner. They might prompt it for something imaginative like “draw a surreal landscape using vector shapes” and see what script or artwork comes out, as a starting point to build upon. While the results might need refining (the AI might not get the aesthetics right on the first try), it can spawn ideas that the designer wouldn’t have conceived alone. Adobe’s Generative Recolor is another example here – a hands-free way to get new color schemes that can inspire further design decisions. Because everything remains in vector format, the designer can then adjust any part of the AI-generated output. Eugen Pflüger, the creator of the MATE plugin, described this human-AI co-creation as “MATE works with the same vector paths as users do. MATE generates shapes, users can modify them, and MATE can adjust again”, emphasizing a loop of collaboration. In practice, an artist could take an AI-generated vector graphic, tweak a few points or colors, and even ask the AI to continue or make changes based on the new state (provided the plugin is designed to handle iterative prompts).

    These use cases show that hands-free doesn’t mean completely unattended. Often, the designer is guiding the AI with prompts and making judgment calls, but the heavy lifting of actual pen-tool work, duplication, or computation is offloaded to the AI. The result is a faster workflow where mundane steps are automated and even complex drawings can start with an AI draft.

    Limitations of Current Approaches and Workarounds

    Despite the impressive integrations so far, today’s AI-to-Illustrator workflows have notable limitations:

    • Lack of True Visual Understanding: As discussed, AI doesn’t truly see the artwork in front of you. If you ask an AI to “make the circle on the left bigger than the one on the right,” a generic AI won’t know what “left” and “right” refer to on your canvas without additional programming. The AI relies on either the user’s description or a plugin providing context. This can lead to misunderstandings. For example, an AI might create a new object when you intended it to modify an existing one, simply because it wasn’t “aware” of what you already had. Workaround: In prompts, users must be very explicit about the current state (e.g., “there is a circle named ‘Logo’ on layer 2; make that 50% larger”). Future solutions might involve the plugin querying the document and feeding that info to the AI automatically, but this is still rudimentary in current tools.

    • Accuracy and Reliability of AI-Generated Scripts: When using GPT-style models to generate code, the results are not always correct or optimized for your specific version of Illustrator. AI can produce syntactically correct script code that doesn’t actually do the intended thing. It might call the wrong function or assume a capability that doesn’t exist. Users have found that ChatGPT sometimes produces incorrect or non-existent methods in ExtendScript, requiring the user to debug or retry the prompt. Essentially, the AI doesn’t test the code it writes (it can’t run Illustrator itself), so the first attempt might throw an error. The Ink Kitchen example above noted it took “28 tries to get [the] transform script working efficiently” for a custom task. That iteration can be time-consuming and might negate the convenience if you’re not prepared for it. Workarounds include: breaking tasks into smaller, simpler prompts (so the AI script is shorter and less prone to error), and having some scripting knowledge yourself to fix minor issues. Using a dedicated plugin like MATE can help because the plugin authors likely fine-tuned prompts or provided the AI with additional context to improve reliability – but even then, complex tasks may need a couple of attempts.

    • Limited Creativity and Design Judgment: AI, especially in the context of scripting Illustrator, will do exactly what you ask – which isn’t always what you envisioned. If you give a very high-level prompt like “draw a beautiful logo”, a code-writing AI might either fail (because that’s too vague for a script) or produce something very simplistic (since it has to rely on primitive shapes unless explicitly instructed otherwise). Unlike a human designer, the AI doesn’t inherently understand aesthetic principles or the purpose of the artwork. It has no intuition about composition, style, or branding (unless those are specified in the prompt in a technical way). This means the user must provide clear guidance and often still serve as the art director. Workaround: use AI for what it’s good at – generating variations, handling tedious repetitions, or implementing precise mathematical patterns – and use human creativity to set the direction. Some designers iterate by asking the AI to create a base, then adjusting it manually, then maybe asking the AI to do further operations on that result, mixing human judgment with AI speed.

    • Scope of Control – Not Everything is Exposed: While Illustrator’s scripting API is powerful, not every single feature of the GUI is scriptable. There may be new tools or third-party plugin functions that an AI script can’t directly invoke. For instance, an AI script can’t directly “paint with the brush tool in a freestyle manner” because the scripting API doesn’t record freehand drawing strokes as a series of commands (you’d have to approximate with curves and points). Similarly, certain live effects or plugin-driven effects might not be accessible via code. Thus, an AI might hit a wall if asked to do something that’s only available interactively. Workaround: sometimes a combination of approaches is used – e.g., using AI to generate an approximate shape, then a human or a macro applying a specific live effect. Or using AI to produce a suggestion (like a rough shape as an SVG) which is then refined with Illustrator’s manual tools. In some cases, Adobe might extend the API after seeing demand, but as of now, AI control is essentially limited to what a script can do.

    • Dependency on External Models and Services: Many AI integrations (such as the GPT-based ones) require an internet connection and access to third-party AI services. This introduces considerations like cost (API usage or subscription for plugins like MATE), latency, and privacy. If the AI service changes or goes down, the functionality could break. Also, corporate or offline environments might not permit these tools. Adobe’s native AI features (like Firefly in Illustrator) run through Adobe’s cloud as well, so they carry similar cloud dependency. A current limitation is that there’s no entirely local, offline “AI brain” inside Illustrator – it’s either cloud AI or user-installed scripts. Workaround: for critical needs, users can rely on pre-written scripts or actions (which run locally) and reserve AI for when connectivity is available. As AI tech advances, we might see more on-device AI for creative apps, which would mitigate this concern.
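    On the freehand-stroke limitation above: the usual workaround is to sample a curve into anchor points and pass them to PathItem.setEntirePath, which accepts an array of [x, y] pairs. A sketch that generates such a script, with a sine wave standing in for the stroke (sample count and amplitude are arbitrary):

```python
# Sketch: approximate a "freehand" stroke by sampling a curve into anchor
# points and emitting a setEntirePath call. A sine wave stands in for the
# stroke shape; width, amplitude, and sample density are illustrative.
import math

def wave_jsx(width=300, amplitude=40, samples=30):
    pts = []
    for i in range(samples + 1):
        x = width * i / samples
        y = amplitude * math.sin(2 * math.pi * i / samples)
        pts.append(f"[{x:.1f}, {y:.1f}]")
    return (
        "var doc = app.activeDocument;\n"
        "var p = doc.pathItems.add();\n"
        "p.setEntirePath([" + ", ".join(pts) + "]);\n"
        "p.stroked = true; p.filled = false;"
    )

script = wave_jsx()
```

    The result is a polyline approximation, not true brush painting – close enough for many uses, but smooth curves would need the anchors' direction handles set as well.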

    Despite these limitations, designers are finding creative ways to integrate AI. One common workaround is a human-in-the-loop approach: the AI does the rough stuff, and the human refines the outcome. If the AI misinterprets, the human corrects the course with another prompt or a quick manual fix. Essentially, the current state of AI control in Illustrator is assistive rather than completely autonomous. It can greatly speed up workflows and even surprise you with novel outcomes, but it’s not at a stage where you can leave it alone to design an intricate illustration from scratch without guidance.

    On the horizon, we can expect many of these limitations to be addressed. Adobe and others are actively researching better integrations – for example, an AI that could read your Illustrator file’s structure (shapes, layers) and genuinely understand spatial relationships might eventually be possible. For now, users who embrace these tools do so knowing the quirks: they enjoy the boost in productivity and experimentation, and they handle the gaps with their own expertise. As one early adopter put it, “keep experimenting – you might just find your next favorite tool” in combining AI with Illustrator.

    In conclusion, current AI technologies can control Adobe Illustrator to a significant extent, but usually via scripting and plugins rather than a direct “AI driver” of the interface. AI can’t literally watch your canvas in real time like a person, but through clever use of Illustrator’s APIs and some of Adobe’s new built-in features, it can take high-level instructions (text or voice) and turn them into vector artwork or edits. The integrations available today – from GPT-powered script assistants to generative color and graphic tools – are already expanding what designers can do and how they do it. They maintain the Illustrator-native workflow (producing real vectors, layers, and artboards) which is crucial for professional use.


    Meanwhile, purely web-based AI art tools offer alternative ways to create vector art that can complement an Illustrator workflow, though they don’t “drive” Illustrator itself. It’s an exciting space to watch, as the line between a designer’s instructions and the executed result gets ever more blurred by AI’s involvement. With mindful use of the current tools and awareness of their limitations, artists and designers can already achieve a form of “hands-free” Illustrator creation – essentially guiding the software with ideas and letting the AI fill in the technical details – something that even a couple of years ago would have sounded like science fiction.

    Takeaway:

    I think, right off the bat, hands-free Illustrator is compelling, but what would the pen-tool line-control result be?

    Multimodal AI can script .jsx, so that's lightly hands-free, like drawing with an eye-path tracker.

    I have tested GPT with .jsx to run scripts in Illustrator and After Effects, and back and forth between those two applications. I hadn't, however, thought of setting up that same workflow of scripting files and using them in such a way that the AI can just revise the script each time I ask for an action to be done in Illustrator. Then I would just run the script again.

    I hadn't thought about it as automation, but that's true, it is automation. It's not full automation. I've always thought of this level of automation as just scripting.

    If AI could create the fully complete high-res, high-quality, final illustration, site, photo, etc., I would be more likely to think of that as automated.

    Ok, so: scripting .jsx for Adobe Illustrator, and using AI in such a way that you get a revised .jsx you can run each time you verbally ask GPT. Interesting to think about that workflow a bit more.
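    That workflow reduces to a tiny loop: every new instruction overwrites one fixed .jsx file, and in Illustrator you just re-run that same script (File > Scripts > Other Script, or a shortcut bound to it). A sketch, with get_revised_jsx standing in for the actual GPT call and the filename being an arbitrary choice:

```python
# Sketch: the "ask, revise, re-run" loop. The AI call is stubbed out;
# a real version would send the instruction (plus any document context)
# to a model and write back whatever script it returns.
from pathlib import Path

SCRIPT_PATH = Path("current_action.jsx")

def get_revised_jsx(instruction):
    # Placeholder for the AI call; returns a trivial script for illustration.
    return f"// {instruction}\nvar doc = app.activeDocument;"

def update_script(instruction):
    SCRIPT_PATH.write_text(get_revised_jsx(instruction), encoding="utf-8")
    return SCRIPT_PATH

path = update_script("Scale the selection to 200%.")
```

    Because Illustrator always runs the same file path, the human side of the loop stays constant: speak or type the instruction, then trigger the script.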

    I'm wondering about this partly because of copyright law, the latest guidelines, and in particular the language around "significant human manipulation". I'm trying to really map that to specific scenarios because I find it interesting, and because there are so many. Out of curiosity, I wonder if that auto-painting using brushes can also be emulated by AI. I assume so. I ran a Manus test to find out, and it created algorithms and brushes, then painted my image. Not in high fidelity, but I did get it to do that. Research continues down that path.