Realtime Talking Avatars

Runway announced its realtime custom avatar feature, and I want to try it with the character I created. Details are at dev.runwayml.com.

Watch the video on YouTube: https://youtu.be/IRxGEaap4Wc

It took me a few months to develop my character to the point where I could control her static visual characteristics across different platforms. Then I got to where I could control her motion with fairly consistent accuracy. But I was only using her for clothing and interior design scenarios, like a posing Barbie; she didn't need to speak. I just wanted to see how materials flowed, which clothing styles worked, what made sense as a series, and why. That kind of thing.

Really, a lot of it comes down to the tools you choose and each platform's level of sophistication: what they can do, and what realtime chat with visual consistency now looks like. So I want to try that out.

Now that I have her voice created, I'm figuring out the best way to move forward with the character so she stays the most consistent visually and produces the most useful artifacts.

Short Character Intro: https://youtu.be/jDjgRNVBUM4?si=XFMPovbZWmKlzUP-

The algorithms surfaced the Runway dev demo, and it looks like an interesting thing to test out.

I think by the time you have created one character, you have a hundred more in the outtakes, the bloopers, and on the cutting-room floor.