My First Manus - Adding Agent Criteria, Obtaining Best Instructions and Approach, Testing the Guardrails

A few days ago I created a criteria matrix for judging the quality of an illustrated animation (https://exploringallthethings.blogspot.com/2025/03/coherence-matrix-draft.html).

The criteria matrix contains sections for both traditional and AI/ML metrics, broken down by agentic role in an animation pipeline. (Prototyping AI agents is really interesting; more on that on the blog.)

Once I had the criteria on a metric scale that can become a trigger in a function, and the handoffs between agents described, I began translating the matrix into a few different tools.

One is a test of what Manus.im can do: I had just gotten the invite and wanted to use my free prompts to onboard and, out of curiosity, gauge how it might approach the criteria matrix.

It was late, so I didn't think much about the prompt, but I did feed it the criteria as a PDF.

Then, I ran my first Manus.

I plan to do a lot more with it, but I easily exhausted the free initial credits before getting very far. Still, here's a sample of what my first prompt accomplished.




Manus.im Output:

# Recommendations for Using Manus.im with Your AI Animation Framework

## Specific Recommendations Based on Your Document

Your "Agentic Criteria & Coherence Matrix for AI Animation" document provides an excellent foundation for creating AI animation workflows. Here are specific recommendations for implementing your framework using Manus.im:

### 1. Implement Your Full Pipeline Simulation

The example prompt you provided on lines 579-597 of your document is an excellent starting point:

```

You are an orchestrator managing a multi-agent AI animation pipeline using the Agentic Criteria & Coherence Matrix. Simulate a sequential creative process across these roles:


1. Storyteller – narrative, tone, symbolism

2. Visual Designer – style, shape language, lighting

3. Character Artist – consistent characters, expressions

4. Animator – motion dynamics, timing, gesture

5. Colorist / Lighting – emotional color and contrast

6. Model Engineer – fidelity, coherence, reroute logic


Each agent:

- Outputs structured content

- Scores its output (1–5)

- Passes handoff notes

- Reruns if score < 3


Begin with the Storyteller. End with an evaluation summary.

```

This prompt structure leverages Manus.im's ability to simulate a full creative AI pipeline within a single structured prompt.
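The orchestrator loop the prompt describes can be sketched in plain Python. This is a minimal, hypothetical sketch: `run_agent` is a stub standing in for an actual model call, and only the role names and the score-below-3 rerun rule come from the prompt above.

```python
# Sketch of the orchestrator loop from the prompt above.
# run_agent is a hypothetical stub standing in for a model call.

ROLES = [
    "Storyteller",
    "Visual Designer",
    "Character Artist",
    "Animator",
    "Colorist / Lighting",
    "Model Engineer",
]

def run_agent(role, handoff_notes):
    """Stand-in for a model call; returns (output, score 1-5, handoff notes)."""
    return f"{role} output", 4, f"{role} handoff notes"

def run_pipeline(min_score=3, max_retries=2):
    notes = None
    results = []
    for role in ROLES:
        for attempt in range(max_retries + 1):
            output, score, notes = run_agent(role, notes)
            if score >= min_score:
                break  # accept the output; pass handoff notes downstream
        results.append({"role": role, "score": score, "output": output})
    return results  # feeds the final evaluation summary
```

The sequential `for role in ROLES` loop mirrors the Storyteller-to-Model-Engineer order, and the inner retry loop implements "reruns if score < 3."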

### 2. Enhance with Detailed Evaluation Criteria

Expand your prompt by incorporating the specific evaluation criteria from your Coherence Evaluation Matrix:

```

For each agent, evaluate outputs using these criteria:

Storyteller:

- Staging: Clarity of narrative focus in each scene

- Anticipation: Use of visual cues to foreshadow events

- Timing: Emotional pacing that matches story beats

- Appeal: Characters and visuals that support the story's tone

- Scene Continuity: Logical progression and visual consistency between scenes

- Mood Progression: Emotional tone that evolves meaningfully over time


Visual Designer:

- Color: Palette choices enhance emotion and hierarchy

- Shape Language: Consistent stylization across characters and environments

- Texture / Style: Unified visual style across the sequence

- Lighting: Creates atmosphere and directs attention

- Exaggeration: Visual distortion to enhance clarity or emotion


[Continue with criteria for other roles]

```


### 3. Implement the Agent-to-Agent Workflow

Your "Agent-to-Agent Workflow Flow" table provides valuable guidance on how outputs should move between agents:

```

Define these handoff contents between agents:

From Storyteller to Visual Designer:

- Story beats, tone, symbolism

- Purpose: Establish mood and direction

- Feedback trigger: If misaligned tone is detected


From Visual Designer to Character Artist:

- Style guide, shape rules

- Purpose: Ensure on-model design

- Feedback trigger: If visual identity diverges


[Continue with other handoffs]

```
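One way to make these handoffs concrete is a small record type. This is a sketch under my own assumptions: the field names are hypothetical, chosen to match the three pieces of information the workflow table specifies (contents, purpose, feedback trigger).

```python
from dataclasses import dataclass, field

@dataclass
class Handoff:
    """Contents passed between two agents, per the workflow table."""
    sender: str
    receiver: str
    contents: list = field(default_factory=list)  # e.g. story beats, tone, symbolism
    purpose: str = ""            # why the receiver needs this material
    feedback_trigger: str = ""   # condition that sends work back upstream

# First handoff from the table above
storyteller_to_designer = Handoff(
    sender="Storyteller",
    receiver="Visual Designer",
    contents=["story beats", "tone", "symbolism"],
    purpose="Establish mood and direction",
    feedback_trigger="misaligned tone detected",
)
```

Keeping the feedback trigger on the record itself means the orchestrator can check it at handoff time and route work back to the sender without hard-coding the rule per agent pair.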


### 4. Utilize the Three Prompt Levels

Your document defines three levels of prompts (Beginner, Advanced, Multimodal) that can be adapted based on your familiarity with Manus.im:

- **Start with Beginner prompts** as you learn the platform

- **Progress to Advanced prompts** as you become more comfortable

- **Implement Multimodal prompts** when you're ready to combine text, image, and motion inputs

### 5. Create a Custom Scoring System

Implement your 1-5 scoring system with specific thresholds for action:


```

Scoring Key (1–5):

5: Excellent – Fully aligned, high-quality, and coherent output

4: Good – Minor inconsistencies but solid performance overall

3: Adequate – Meets basic requirements; room for improvement

2: Needs Improvement – Gaps in logic, quality, or alignment

1: Poor – Output is incoherent, off-target, or unusable


If any score falls below 3, the Orchestrator should:

1. Identify specific issues using the evaluation criteria

2. Provide targeted feedback to the relevant agent

3. Request a rerun with adjusted parameters

```
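The below-3 threshold check is simple enough to express as a small function. A sketch, assuming scores arrive as a mapping from criterion name to its 1-5 rating; the criterion names here are illustrative examples from the Storyteller rubric.

```python
def needs_rerun(scores, threshold=3):
    """Return the criteria whose scores fall below the threshold.

    scores maps criterion name -> 1-5 rating, per the scoring key above.
    """
    return [name for name, value in scores.items() if value < threshold]

scores = {"Staging": 4, "Timing": 2, "Appeal": 5}
issues = needs_rerun(scores)
if issues:
    # The orchestrator would feed these back to the agent with targeted
    # feedback and request a rerun with adjusted parameters.
    print("Rerun needed for:", ", ".join(issues))
```

Returning the list of failing criteria (rather than a bare pass/fail) gives the orchestrator exactly what step 1 above asks for: the specific issues to include in its targeted feedback.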


## Next Steps for Implementation


1. **Start with a simplified version** of your framework to test Manus.im's capabilities

2. **Gradually add complexity** as you become more familiar with the platform

3. **Document results and refine** your approach based on outputs

4. **Experiment with different prompt structures** to find what works best

5. **Consider integrating external tools** mentioned in your document (e.g., Midjourney, DALL·E 3, Pika, Runway)


## Potential Challenges and Solutions


### Challenge 1: Complexity Management

Your framework is comprehensive and might be challenging to implement in a single prompt.

**Solution**: Break it down into modular components that can be chained together. Start with implementing just 2-3 agent roles, then expand.


### Challenge 2: Evaluation Consistency

Ensuring consistent evaluation across different creative aspects can be difficult.

**Solution**: Create detailed rubrics for each evaluation criterion with specific examples of what constitutes each score level.


### Challenge 3: Tool Integration

Your document mentions various external tools that might need to be integrated.

**Solution**: Start with what Manus.im can do natively, then explore its ability to orchestrate external tools as you become more familiar with the platform.


## Conclusion

Your "Agentic Criteria & Coherence Matrix for AI Animation" provides an excellent framework that aligns well with Manus.im's capabilities. By implementing your framework as structured prompts with clear roles, evaluation criteria, and feedback loops, you can leverage Manus.im to create sophisticated AI animation workflows.


Start with simpler implementations and gradually increase complexity as you become more familiar with the platform. Document your results and continuously refine your approach based on the outputs you receive.

---

# Summary of Findings and Next Steps for Using Manus.im

## Overview of What We've Accomplished

1. **Analyzed Your Document**: We've thoroughly analyzed your "Agentic Criteria & Coherence Matrix for AI Animation" document, which provides a comprehensive framework for evaluating AI-generated animation and orchestrating intelligent creative agents.

2. **Researched Manus.im**: We've gathered information about Manus.im, a general AI agent platform that can autonomously execute complex tasks and simulate entire creative pipelines within a single structured prompt.

3. **Created an Instruction Guide**: We've developed a comprehensive guide on creating effective instructions for Manus.im, incorporating principles from your document and general prompt engineering best practices.

4. **Provided Specific Recommendations**: We've outlined specific recommendations for implementing your AI animation framework using Manus.im, including prompt structures, evaluation criteria, and workflow management.

## Key Insights

1. **Alignment Between Your Framework and Manus.im**: Your "Agentic Criteria & Coherence Matrix" aligns exceptionally well with Manus.im's capabilities. The platform's ability to simulate a full creative AI pipeline within a single structured prompt makes it ideal for implementing your multi-agent animation workflow.

2. **Structured Approach to Creative AI**: Your framework provides a structured approach to creative AI that can be effectively implemented through Manus.im's instruction system. The clear roles, evaluation criteria, and feedback loops in your document can be directly translated into Manus.im prompts.

3. **Scalable Complexity**: Both your framework and Manus.im support scalable complexity, allowing you to start with simpler implementations and gradually increase sophistication as you become more familiar with the platform.

## Next Steps for Getting Started with Manus.im

### Immediate Steps

1. **Join the Manus.im Waitlist**: Since Manus.im is currently in private beta, join the waitlist to get access to the platform.

2. **Familiarize Yourself with the Platform**: Once you gain access, spend time exploring Manus.im's interface, capabilities, and limitations.

3. **Start with a Simplified Version**: Begin by implementing a simplified version of your framework with just 2-3 agent roles to test Manus.im's capabilities.

### Short-Term Goals (First Week)

1. **Test Basic Prompts**: Use the example templates provided in our instruction guide to create and test basic prompts based on your framework.

2. **Document Results**: Keep detailed records of your prompts and the resulting outputs to identify what works well and what needs improvement.

3. **Refine Your Approach**: Iterate on your prompts based on the results, adjusting the structure, language, and evaluation criteria as needed.

### Medium-Term Goals (First Month)

1. **Implement Full Agent Pipeline**: Gradually expand to implement your complete agent pipeline, incorporating all roles and evaluation criteria.

2. **Develop Custom Templates**: Create custom templates tailored to your specific animation needs based on your framework.

3. **Explore Integration with External Tools**: Investigate how Manus.im can orchestrate or interact with external tools mentioned in your document (e.g., Midjourney, DALL·E 3, Pika, Runway).

### Long-Term Vision

1. **Create a Comprehensive Animation System**: Develop a comprehensive system that combines Manus.im with other tools to create a full AI animation pipeline.

2. **Refine Evaluation Metrics**: Continuously refine your evaluation metrics based on real-world results to improve output quality.

3. **Share and Collaborate**: Consider sharing your approach with the Manus.im community to gather feedback and collaborate with others working on similar projects.

## Resources Created for You

1. **PDF Analysis** (`/home/ubuntu/pdf_analysis.md`): A comprehensive analysis of your "Agentic Criteria & Coherence Matrix for AI Animation" document.

2. **Manus.im Instruction Guide** (`/home/ubuntu/manus_instruction_guide.md`): A detailed guide on creating effective instructions for Manus.im, incorporating principles from your document and general prompt engineering best practices.

3. **Recommendations for AI Animation** (`/home/ubuntu/recommendations_for_ai_animation.md`): Specific recommendations for implementing your AI animation framework using Manus.im.

## Conclusion

Your "Agentic Criteria & Coherence Matrix for AI Animation" provides an excellent foundation for creating sophisticated AI animation workflows with Manus.im. By following the guidance in our instruction guide and recommendations document, you can effectively implement your framework on the platform.


Start with simpler implementations and gradually increase complexity as you become more familiar with Manus.im. Document your results and continuously refine your approach based on the outputs you receive. With patience and experimentation, you can create a powerful AI animation system that leverages the full potential of both your framework and Manus.im's capabilities.
