Cosmonious High contains 18 characters across six species, all created by a team with zero dedicated animators. That means lots and lots of code to create realistic behaviors and Owlchemy-quality interactivity! The 'character system' in Cosmonious High is a group of around 150 scripts that together answer many of the design and animation problems characters pose. Whether it’s how they move around, look at things, interact with objects, or react to the player, it’s all highly modular and almost completely procedural. This modularity enabled a team of content designers to create and animate every single line of dialogue in the game, and allowed the characters to feel alive and engaging even when they weren’t in the middle of a conversation. Here's how it works.

Guest Article by Sean Flanagan & Emma Atkinson

Cosmonious High is a game from veteran VR studio Owlchemy Labs about attending an alien high school that's definitely completely free of malfunctions! Sean Flanagan, one of Owlchemy's Technical Artists, created Cosmonious High's core character system amongst many other endeavors. Emma Atkinson is part of the Content Engineering team, collectively responsible for implementing every narrative sequence you see and hear throughout the game.

The Code Side

Almost all of the code in the character system is reusable and shared between the species. The characters in Cosmonious High are a bit like modular puppets—built with many of the same parts underneath, but with unique art and content on top that individualizes them.

https://gfycat.com/politelegitimateaffenpinscher

From the very top, the character system code can be broken down into modules and drivers.

Modules

Every character in Cosmonious High gets its behavior from its set of character modules. Each character module is responsible for a specific domain of problems, like moving or talking. In code, this means that each type of Character is defined by the modules we assign to it.
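To make that idea concrete, here is a minimal sketch of module-based composition. It is written in Python for brevity (Cosmonious High itself is a Unity game, so the real scripts are C#), and every name and structure below is illustrative rather than Owlchemy's actual code:

```python
# Illustrative sketch (not Owlchemy's actual code): a character type is
# defined by which modules it is assembled from, and characters are not
# required to carry every module.

class CharacterModule:
    """Base class for one domain of character behavior (moving, talking, ...)."""
    def __init__(self, character):
        self.character = character

class CharacterLocomotion(CharacterModule): pass
class CharacterSpeech(CharacterModule): pass
class CharacterEmotion(CharacterModule): pass

class Character:
    def __init__(self, name, module_types):
        self.name = name
        # Instantiate each assigned module and index it by type name.
        self.modules = {t.__name__: t(self) for t in module_types}

    def get(self, module_type):
        # Returns None when a character simply doesn't implement a module.
        return self.modules.get(module_type.__name__)

# A grounded character can move and talk; a wall-mounted one can only talk.
bipid = Character("Bipid", [CharacterLocomotion, CharacterSpeech, CharacterEmotion])
intercom = Character("Intercom", [CharacterSpeech])

assert bipid.get(CharacterLocomotion) is not None
assert intercom.get(CharacterLocomotion) is None
```

Code that drives behavior can then ask a character for a module and degrade gracefully when it is absent, which is what lets one codebase serve very different bodies.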
Characters are not required to implement each module in the same way, or at all (e.g. the Intercom can’t wave). Some of our most frequently used modules were:

CharacterLocomotion – Responsible for locomotion. It specifies the high-level locomotion behavior common to all characters; the actual movement comes from each implementation. All of the 'grounded' characters—the Bipid and Flan—use CharacterNavLocomotion, which moves them around on the scene Nav Mesh.

https://gfycat.com/dependentillinformedbangeltiger

CharacterPersonality – Responsible for how characters react to the player. This module has one foot in content design—its main responsibility is housing the responses characters have when players wave at them, along with any conversation options. It also houses a few 'auto' responses common across the cast, like auto receive (catching anything you throw) and auto gaze (returning eye contact).

CharacterEmotion – Keeps track of the character’s current emotion. Other components can add and remove emotion requests from an internal stack.

https://gfycat.com/frenchwellmadearrowcrab

CharacterVision – Keeps track of the character’s current vision target(s). Other components can add and remove vision requests from an internal stack.

https://gfycat.com/reliablesnoopyhog

CharacterSpeech – How characters talk. This module interfaces directly with Seret, our internal dialogue tool, to queue and play VO audio clips, including any associated captions. It exposes a few events for VO playback, interruption, completion, etc.

It’s important to note that animation is a separate concern. The Emotion module doesn’t make a character smile, and the Vision module doesn’t turn a character’s head—they just store the character’s current emotion and vision targets. Animation scripts reference these modules and are responsible for transforming their data into a visible performance.
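The request stacks behind the Emotion and Vision modules can be sketched in a few lines. This is a hypothetical Python illustration of the pattern as described above (the real implementation is Unity/C#, and the class and requester names here are invented): the module only stores prioritized requests, and falls back gracefully as they are removed.

```python
# Hypothetical sketch of the request-stack idea behind CharacterEmotion /
# CharacterVision: the module *stores* requests, and separate animation
# scripts read the current winner to produce the visible performance.

class RequestStack:
    def __init__(self, default):
        self.default = default
        self._requests = []  # (requester, value) pairs; most recent wins

    def add(self, requester, value):
        self._requests.append((requester, value))

    def remove(self, requester):
        # Dropping a request reveals whatever was requested before it.
        self._requests = [(r, v) for r, v in self._requests if r != requester]

    @property
    def current(self):
        return self._requests[-1][1] if self._requests else self.default

emotion = RequestStack(default="neutral")
emotion.add("conversation", "happy")
emotion.add("startle-response", "surprised")
assert emotion.current == "surprised"

emotion.remove("startle-response")
assert emotion.current == "happy"    # falls back to the earlier request

emotion.remove("conversation")
assert emotion.current == "neutral"  # nothing left; default applies
```

The same structure works for vision targets: a conversation can request eye contact, a thrown object can briefly outrank it, and removing either request restores the other without any coordination between the two systems.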
Drivers

The modules that a character uses collectively outline what that character can do, and can even implement that behavior if it is universal enough (such as Speech and Personality). However, the majority of character behavior can't be captured at such a high level. The dirty work gets handed off to other scripts—collectively known as drivers—which form the real 'meat' of the character system.

Despite their more limited focus, drivers are still written to be as reusable as possible. Some of the most important drivers—like CharacterHead and CharacterLimb—invisibly represent some part of a character in a way that is separate from any specific character type. When you grab a character’s head with Telekinesis, have a character throw something, or tell a character to play a mocap clip, those two scripts do the actual work of moving and rotating every frame as needed.

https://gfycat.com/someoilychameleon

Drivers can be loosely divided into logic drivers and animation drivers. Logic drivers are like head and limb—they don’t do anything visible themselves, but they capture and perform some reusable part of character behavior and expose any important info. Animation drivers reference logic drivers and use their data to create character animation—moving bones, swapping meshes, solving IK, etc. Animation drivers also tend to be more specific to each character type. For instance, everyone with eyes uses a few instances of CharacterEye (a logic driver), but a Bipid actually animates their eye shader with BipedAnimationEyes, a Flan with FlanAnimationEyes, etc. Splitting the job of 'an eye' into two parts like this allows for unique animation per species that is all backed by the same logic.

https://gfycat.com/cloudyimprobablegopher

https://gfycat.com/tintedglassimperialeagle

The Content Side

Having these modules and drivers gave the content team a wide array of tools to create Cosmonious High’s many interactions.
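One of those tools is the logic/animation driver split described above. Here is a hedged Python sketch of the eye example (the shipped code is Unity/C#; the method names, string outputs, and internals below are invented for illustration): one CharacterEye logic driver holds the data, and species-specific animation drivers turn it into a performance.

```python
# Hypothetical sketch: a logic driver computes *what* an eye should do,
# while per-species animation drivers decide *how* that looks.

class CharacterEye:
    """Logic driver: tracks where an eye should aim; does nothing visible."""
    def __init__(self):
        self.target = None

    def look_at(self, target):
        self.target = target

class EyeAnimationDriver:
    """Animation driver base: turns CharacterEye data into animation."""
    def __init__(self, eye):
        self.eye = eye

    def update(self):
        raise NotImplementedError

class BipedAnimationEyes(EyeAnimationDriver):
    def update(self):
        # A Bipid might aim a pupil in an eye shader toward the target.
        return f"shader pupil -> {self.eye.target}"

class FlanAnimationEyes(EyeAnimationDriver):
    def update(self):
        # A Flan renders its gaze differently, backed by the same logic.
        return f"flan eyes -> {self.eye.target}"

eye = CharacterEye()
eye.look_at("player")
assert BipedAnimationEyes(eye).update() == "shader pupil -> player"
assert FlanAnimationEyes(eye).update() == "flan eyes -> player"
```

Content code only ever talks to the logic driver, so a line of dialogue that says "look at the player" works unchanged on every species.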
Rather than creating a new script for each interaction, our team got to play puppet master using all the 'strings' of code provided. At first, our backend structure required that each 'string' (a module or driver) receive content separately. Lines of dialogue, emotions, the character’s gazes, and so on were each slotted into a list and triggered in sequence.

No Time for Fun

Working this way limited our ability to fine-tune timing. Figuring out exactly when to change a character’s expression, by counting how many (milli)seconds from the beginning of a dialogue line the change should occur, was tedious and difficult. Worse, any change to a line or updated audio broke well-timed sequences, which then needed to be redone in the same manner.

Later in development we created the character sequence tool, which let us manipulate all these elements together on a flexible timeline. This meant emotions, gaze targets, and gestures could all be controlled right in line with dialogue. This was sufficient to make the majority of character content in the game lively and responsive. However, certain moments called for a little more drama.

Mocap Studio

For moments of heightened emotion, we used our mocap studio. Built as a tool so that anyone on the team could author more complicated animations, the mocap studio tracks and records the placement of a headset and controllers and maps them onto our characters. This let us motion capture our animations – and do so for specific lines of dialogue.

https://gfycat.com/slushyplainiberiannase

Through the mocap studio, anyone at Owlchemy could access the script, play any audio clip, act out and capture their performance for that line, and save it directly to the audio data and character sequence.

BFFs

Developing expressive characters was essential to making Cosmonious High feel like a school. Achieving that effect while limiting the scope of our bespoke animations meant creating robust systems and layering them on top of each other.
Bringing all those tools together behind one approachable interface, character sequences, made them usable for anyone at the studio, and allowed us to bring our space children to life.
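As a closing illustration, the character sequence idea described earlier amounts to a timeline of events keyed to a single dialogue clip. This is a hedged Python sketch, not the studio's tool: the clip name, channel names, and API are all invented, but it shows why keying emotion, gaze, and gesture to one timeline beats maintaining separate hand-timed lists.

```python
# Hypothetical sketch of a character sequence: events (emotion changes,
# gaze targets, gestures) keyed to times within one line of dialogue,
# so everything stays in sync when the line is re-timed or re-recorded.

class CharacterSequence:
    def __init__(self, vo_clip):
        self.vo_clip = vo_clip
        self.events = []  # (time_seconds, channel, value)

    def key(self, time, channel, value):
        self.events.append((time, channel, value))

    def events_until(self, t):
        """Everything that should have fired by time t, in timeline order."""
        return sorted(e for e in self.events if e[0] <= t)

# Author one line of dialogue with synchronized performance beats.
seq = CharacterSequence("prismi_greeting_01")   # invented clip name
seq.key(0.0, "emotion", "happy")
seq.key(0.4, "gaze", "player")
seq.key(1.2, "gesture", "wave")

# Half a second into VO playback, the emotion and gaze keys have fired.
fired = seq.events_until(0.5)
assert fired == [(0.0, "emotion", "happy"), (0.4, "gaze", "player")]
```

Because every channel hangs off the same clip, swapping in updated audio moves the whole performance together instead of invalidating a pile of hand-counted millisecond offsets.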