Lone Echo nabbed our 2017 Oculus Rift Game of the Year Award for many reasons—amazing visuals, intuitive locomotion, and a strong story, to name a few—but one of the game's unsung innovations is its virtual touchscreen interfaces. While many VR games still rely on less-than-ideal laser-pointer interfaces, developer Ready at Dawn created a framework for surprisingly functional virtual interfaces that are both intuitive and immersive. The studio's lead systems designer, Robert Duncan, joins us to explain the design approach behind the end result.

Guest Article by Robert Duncan

Duncan is the Lead Systems Designer at Ready At Dawn Studios, where he enjoys collaborating with the entire team in the pursuit of awesome. He loves to create compelling, emotionally engaging experiences and stories for others to enjoy, all while ravenously consuming those same types of experiences across a variety of mediums: games, movies, TV, anime… to name just a few. He also loves cooking, physical crafts, and tabletop games (especially with miniatures), even more so when sharing those experiences with others. To that end, he's excited about the storytelling power of VR and the incredible social opportunities it provides.

Objective System Goals

Designing the various user interfaces in Lone Echo and Echo Arena—from simple mechanical interfaces like pull-levers, all the way up to complex virtual interfaces like Jack's augmented-reality touchscreens—was a process that involved an exciting amount of exploration and iteration. Given the unique constraints of developing for VR, we found ourselves all but forced to innovate just to achieve our most basic goals.
Fortunately, that innovation rarely had to be done from scratch; in many cases we were able to pull inspiration from related disciplines (like physical product design or mobile interface design) while making clever adaptations for VR. To illuminate what that process was like, this article will take a deep-dive look into the design of the player character's[1] integrated 'objective system' from Lone Echo, followed by a brief look at the various screens used in the multiplayer lobby of Echo Arena. However, explaining the what without the why wouldn't be nearly as helpful, so to begin we'll examine the overall goals that guided the objective system's development:

Comfort: As with anything we create in VR, comfort is a chief concern. Eye-strain can be quite problematic, especially when considering the use of text.

Usability: While this arguably goes without saying, it is highly important that this system be easy and intuitive for players to use. Aside from the fact that these qualities are almost always beneficial to a system, they became all the more important once we realized we weren't going to be developing a tutorial for the objective system.

Effectiveness: Again, while this goal seems obvious, it's important to enumerate what would make the objective system sufficiently 'effective'. First and foremost, it needs to help players figure out what they are supposed to do in order to make progress through the experience. As such, when critical progression information needs to be delivered, the objective system needs not just to convey this information, but to actively encourage the player to see it. Beyond such critical information, this system is also expected to offer players additional details about their objectives in case they need more thorough support. Lastly, it's key that this system does not artificially imply linear action. That is to say, when players are allowed to do things in the order of their choosing, it's important that this system doesn't make them think they have to do them in a specific order, as we want players to embrace that freedom of choice.

Immersiveness: Given the incredible power of presence that VR offers, one of the chief goals for Lone Echo is to leverage that power as much as possible. Naturally, this extends to all systems of the game, but it's particularly important for the objective system because it's most likely to be used when players are close to having their immersion broken: if a player is getting 'stuck' and not sure what to do, they're probably teetering on the edge of taking their headset off… the last thing they need is for their life-line (the objective system) to push them over the edge! This is tricky, because objective systems are notoriously game-y, abstract, non-immersive systems.

Feasibility: Though often overlooked due to its obvious implication, I find it's important to weigh this practical goal alongside all the rest for one simple reason: if we can't build it with the time and resources we have, the rest doesn't matter! It won't exist!

With the goals laid out, the next step is to determine a high-level design that will meet them.

High-Level Design

It's not uncommon for even the high-level design of a system to change a bit over the course of development, especially as new information is discovered about how the game plays, what players need, etc. In the case of the objective system, we waited until fairly late in Lone Echo's development before designing it, which fortunately[2] allowed us to dodge any need for significant high-level change.
Here is the high-level breakdown of features that we landed on after carefully examining what Lone Echo needed:

Text-based Display: For the sake of feasibility, the tried-and-true means of conveying objective information via text seemed like the obvious choice: it would be easy to author and expand upon, and it could leverage known lessons from traditional gaming objective systems. As it turns out, figuring out comfortable typography in VR is a process all its own, but this was ultimately still the most viable solution for us. It is worth noting, however, that at one point we considered allowing players to replay the dialogue audio associated with an objective, but that feature was dropped due to time constraints[3].

Augmented Reality Theming: Fortunately, given the world of Lone Echo (and even more so Jack's character as an AI), augmented reality is a highly appropriate way to contextualize the objective system. Furthermore, given that a common modern-day use-case for AR is conveying abstract concepts (e.g. travel directions), it's that much more appropriate for an abstract system like objectives.

Conceptualized as a 'To-Do' List from Jack/Liv/HERA: This idea was important for determining how objectives are written and displayed. While many games simply allow this sort of system to exist as an abstract layer 'on top of' the game (accessible via a non-immersive pause-menu or the like), given our immersiveness goal, that level of abstraction simply wasn't an option. Instead, we chose to gear the list of objectives toward something that Jack and Liv might actually use: a dynamically updated list of things to do, written from their perspectives. We found that this also helps reduce any artificial implications of linearity.
Predominantly Opt-in: In order to maintain the presence and exploratory feeling we were targeting for Lone Echo, we found that we had to strike a careful balance with how 'in your face' the system was while ensuring it still met its effectiveness goals. What we ultimately landed on (after lots of testing and sifting through a wide variety of player preferences) was a system that is predominantly opt-in and only grabs the player's attention when absolutely necessary[4].

Now that the big picture is squared away, let's talk details! The objective system (from a user-interface perspective) consists of two key elements: the wrist display and the tablet. We'll go over both below.

— Footnotes —

[1] For those unfamiliar, in Lone Echo the player takes on the role of Jack, an advanced AI (with an android body) working on a mining station within the rings of Saturn.

[2] One downside to this approach was that we didn't have a usable objective system for many of our focus tests. To get around this, we had our test proctors act as de-facto objective systems, giving players very constrained hints at the right times or when requested.

[3] A simplified version of this feature was leveraged to allow players to listen to the audio logs they obtained from Cube-Sats.

[4] Given the additional cognitive load players seem to experience while in VR, determining when (and how) to explicitly tell players what to do was a new challenge in and of itself!

The Wrist Display

Like the rest of Jack's integrated systems in Lone Echo, the objective system's wrist display offers fully ambidextrous support by simply appearing on both wrists[5]. Visually, it consists of only an unlock slider and a new-objective indicator, but it has a few more nuanced features as well. Let's break them all down:

Context Fading: Like UI in many games, the wrist display fades in or out under certain conditions.
Obviously, when new information is added to it, it appears, but the other, more peculiar condition is its position relative to the player's view. Specifically, the closer it is to a specially tuned 'sweet spot' region of the player's view (imagine a spot about 1.5 feet in front of the player's face), the more opaque it appears. This allows the display to stay hidden during normal interaction (e.g. climbing around), but makes it very easy to see when intended.

New Objective Indication: When a new objective is added, an audio cue, a unique rumble tone, and a colorful moving icon are all used to get the player's attention. In particular, the icon 'jumps' off the arm to increase the likelihood that it's spotted by the player even when out of their view, thereby encouraging them to see the text readout.

Unlock Slider: Built around the assumption that most players would be familiar with this interface from using mobile touchscreens, this method for opting in to more objective info was chosen for two key benefits: it's easy to do intentionally (but difficult to do accidentally), and it encourages players to start using their index finger (which they'll need for all future interactions with the objective system). That said, to achieve this ease of use, a lot of time was spent iterating on the precise position and sizing of both the slider's arrow 'button' and the overall travel distance. Furthermore, to avoid proprioception disconnects[6], the slider had to be placed fairly far 'up' off the arm. Additionally, we had to discover the right balance between using the sensor-tracked wrist orientation and the IK arm orientation to position the slider; discounting the wrist results in a bizarre lack of player input over the system, but using the wrist exclusively results in the slider frequently becoming uncomfortably misaligned from the IK arm.
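The view-relative context fading described above can be sketched with a few lines of math: fade opacity as a function of the angle between the player's gaze and the direction to the display. This is only a minimal illustration of the general technique; the function names, angle thresholds, and smoothstep falloff are my assumptions, not Ready At Dawn's actual implementation or tuning.

```python
import math

def view_fade_alpha(gaze_dir, to_display_dir, inner_deg=10.0, outer_deg=30.0):
    """Return a 0..1 opacity based on how close the display sits to a
    'sweet spot' centered in the player's view.

    gaze_dir / to_display_dir: unit-length 3D vectors (tuples).
    Fully opaque inside inner_deg, fully hidden beyond outer_deg,
    with a smooth ramp in between so the fade has no hard edge.
    """
    dot = sum(g * d for g, d in zip(gaze_dir, to_display_dir))
    dot = max(-1.0, min(1.0, dot))            # guard acos against float drift
    angle = math.degrees(math.acos(dot))      # angular offset from view center
    t = (angle - inner_deg) / (outer_deg - inner_deg)
    t = max(0.0, min(1.0, t))
    return 1.0 - (t * t * (3.0 - 2.0 * t))    # inverted smoothstep

# Display straight ahead -> fully visible; far off to the side -> hidden.
print(view_fade_alpha((0, 0, 1), (0, 0, 1)))  # 1.0
print(view_fade_alpha((0, 0, 1), (1, 0, 0)))  # 0.0
```

Each frame the game would feed in the headset's forward vector and the normalized vector from the head to the wrist display, then multiply the result into the display's material opacity.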
Additional Usability Features: Here's a quick rundown of a few additional VR-centric features that improve the usability of the wrist display:

Over-press Distance: Allowing players to push a little bit past the depth of the slider (and hover a little above it) helps make up for the lack of tactile feedback that a finger would normally experience when pressing something.

'Jump' on Poke: When the player pokes the arrow button (instead of sliding it), it jumps in the unlock direction to imply the intended interaction.

Simplified Physics Sim: The arrow button also runs a simplified physics simulation. Specifically, it retains velocity from being swiped, uses 'gravity' to fall back to its origin position, and even bounces a little on landing. Together, these features are important for making the slider easy for our brains to process; as adult humans we already understand the basic results of physics, so instead of having to learn arbitrary rules about how the slider moves, we simply recognize it as a physical object 'falling' in a certain direction and intuitively understand how it works.

Swipe 'Becomes' Tablet: When the swiping gesture of the unlock slider is completed, the tablet appears. More specifically, the tablet appears roughly along the same 3D vector as the swipe, offering two key benefits: it helps first-time users notice the tablet appearing, and (in conjunction with the slider disappearing upon unlock) it helps imply that the information and function of the slider have been transferred to the newly appearing tablet.

That wraps up the slider and leads neatly into the other major component of the objective system: the tablet.

The Tablet

Just as the wrist display has some hidden nuances, so does the tablet. Let's go over its relevant features:

Physicality: Making the tablet a physics object that the player can grab was done to solve our biggest problem: where can we comfortably place text for players to read?
What we found through testing is that—just like in real life—players had different distances at which reading was comfortable for them while in VR. Thus, while the tablet appears at a 'best guess' location[7], its being physical allows players to very naturally adjust its placement to match their own personal comfort. However, in order to leverage this advantage, it is also important that players easily understand that the tablet is physical: by adding a layered frame to create a sense of mass and rounding the corners to make its form more inviting to the hand, we found that players quickly picked up on the feature. This physicality offers a few additional benefits as well:

- It encourages the idea that the player's fingers will work for the touchscreen features of the tablet.
- It affords a fun new way to close the tablet: throwing it away!
- It makes it easy for players to address text clipping[8] issues by simply pulling the tablet out of intervening geometry.

Scrolling List: Inspired by email applications, this list offers players a place to easily see what their current objectives are without diving into too much detail. Its swipe-based scrolling was chosen for its ubiquity on real-life tablets and to maintain the finger-based interaction paradigm established by the unlock slider. As such, like the unlock slider, it leverages a simplified physics simulation, though instead of gravity it uses friction, plus damped springs at its extents to limit over-scrolling. Two additional elements of this scrolling were designed to help players discover it easily: the scrolling always works (even when it isn't needed), and the list can only show three-and-a-half items (so that when four or more items are on the list, the fourth item is partially cropped, implying that scrolling can be used to fully reveal it).

Details View: If the player taps on a specific objective in the list, they're shown a more detailed view of that objective.
To help players understand this relationship, the detail view slides into place, creating a brief moment of connection between the two views while contextualizing the nature of the newly revealed back-arrow button. We found that clarifying this relationship is very important, as a fair number of players accidentally access this feature on their first attempt at using the system.

Additional Usability Features: Here's a quick rundown of a few additional VR-centric features that improve the usability of the tablet:

Smoothed and Reversible Interactions: The majority of the interactions that can be performed on the tablet (moving it, scrolling the list, switching to the detailed view, etc.) are immediately reversible and follow some sort of smoothing curve. The reversibility is key for allowing the UI to remain responsive, while the smoothing curves help approximate the acceleration and deceleration that objects experience in real life[9]. To achieve this, these dynamic elements (including transition animations like the entrance of the details view) are programmatically driven and often tied directly to player motion-control input.

Throw-Away Distance: The distance at which the tablet disappears after being thrown away is carefully tuned based on real-life arm lengths (of both very short and very tall people), both to help avoid accidental throw-aways and to prevent the tablet from getting stuck outside of someone's reach without disappearing. We found that between our tallest person (over 6 feet tall) and our shortest person (slightly under 5 feet tall), there was a 6-inch difference in arm reach! That might seem trivial, but it made a surprising difference in how often accidents occurred. However, given the possibility of children playing the game, we biased toward shorter arms while still trying to keep things comfortable for our taller players.
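The 'simplified physics' that the article describes for both the unlock slider and the scrolling list can be illustrated with a minimal 1D integrator: velocity retained from the player's swipe, friction while the position is within range, and a damped spring once it passes an extent. All constants and names here are illustrative assumptions for the sake of the sketch, not the game's actual code or tuning.

```python
def step_scroll(pos, vel, dt, lo=0.0, hi=100.0,
                friction=2.0, spring_k=200.0, spring_damp=20.0):
    """Advance a 1D scroll position by one frame.

    While inside [lo, hi], the velocity (retained from the player's
    swipe) simply decays under friction; past either extent, a damped
    spring pulls the position back, limiting over-scroll.
    """
    if pos < lo:
        accel = spring_k * (lo - pos) - spring_damp * vel
    elif pos > hi:
        accel = spring_k * (hi - pos) - spring_damp * vel
    else:
        accel = -friction * vel
    vel += accel * dt          # semi-implicit Euler: stable at game framerates
    pos += vel * dt
    return pos, vel

# A fast flick near the end of the list overshoots the extent, then the
# spring bleeds off the energy and the list settles just inside range.
pos, vel = 95.0, 120.0
for _ in range(600):           # simulate ~10 seconds at 60 fps
    pos, vel = step_scroll(pos, vel, 1 / 60)
print(round(pos, 2))           # settles a little below the hi extent
```

The same integrator, with the friction branch swapped for a constant 'gravity' acceleration toward the origin plus a bounce on landing, would model the unlock slider's fall-back behavior.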
Text Size, Weight, and Contrast: In order to make text as readable as possible, we spent a lot of time iterating on text size, font weight (how bold it is), and the level of brightness/contrast between text and its background. What we found is that text needs to be bigger than typical print sizes (roughly 1.75x); that even normal text is better off at what would typically be considered bold (which led us to develop an 'extra bold' font for cases where we needed that emphasis); and that light text on a medium-dark background results in the best readability with minimal eye-strain[10].

That covers the tablet, and thus the remainder of the objective system. Hopefully this detailed examination has provided a useful and interesting look into how even seemingly minute choices can make a difference, especially when those choices are directed by thoughtful goals to create a cohesive whole. Next, we'll move on to Echo Arena's lobby screens, including some of the more unique problems their development faced by being presented in a multiplayer environment.

— Footnotes —

[5] Given the physical nature of motion controls, supporting handedness is a must (though, as a lefty, I might be a bit biased there). However, we wanted to avoid players having to specify their handedness, especially given the tactile nature of Lone Echo's movement model: sometimes a righty will decide to briefly become a lefty if their right hand is busy with something else. Thus, most systems are simply mirrored across both hands.

[6] Specifically, we found that having the slider near the arm resulted in increased awareness of that arm's position (especially if players ended up actually poking their own arms), which in turn made players much more sensitive to the potential inaccuracy of the IK-inferred arm position, resulting in an immersion break.
[7] Specifically, it appears a little past their hand: we found that most players comfortably read within arm's reach. Additionally, this places the tablet in a prime position for further interaction with either hand.

[8] We found that allowing text to render through geometry (as many traditional games do) was a quick way to create eye-strain: trying to sort out the conflicting depths in VR is quite unpleasant for most players. A workaround we leverage (when we have to) is rendering the text as semi-transparent, which allows the brain to evaluate the image as if the intervening geometry were transparent instead, in turn making it feel more like one is simply reading text through tinted glass.

[9] Almost nothing in the physical world reaches its top speed instantly; though subtle, the absence of that brief period of acceleration or deceleration is easily noticed (especially in VR) and can add an uncanny feeling to movement, which in turn can be distracting or even immersion-breaking.

[10] Light coloration is particularly important for smaller/thinner-weight text, as the 'bleed' effect of the headset lenses (where light images bleed into dark images) works in favor of the text. The inverse (dark text on a light background) tends to result in the text appearing thinner in the headset, and thus dark text needs its font weight adjusted accordingly. Fortunately, reducing the contrast between the text and its background reduces the bleed effect, but going too far can result in eye-strain as the brain struggles to separate the text from the background.

Echo Arena Lobby Touchscreens

When developing the touchscreen interfaces for the lobby of Echo Arena, we held to many of the same goals and design ideas found in the single-player objective system. However, as one might imagine, the multiplayer nature of Echo Arena adds a whole new layer of considerations.
We'll be looking at how actions common in multiplayer game menus—specifically entering matchmaking, customizing one's avatar, and setting up a private match—were adjusted from their traditional gaming norms to fit the needs of Echo Arena. Instead of going over each individual feature, we're going to focus on relevant highlights and unique elements:

Menus -> Lobby: Early in the development of Echo Arena, we decided it would be a very worthwhile endeavor to allow all of these meta-game actions to be performed in a lobby with other players, instead of within an isolated menu like most traditional games. As I'm sure many a multiplayer gamer can attest, sitting at a menu while waiting for the matchmaker to find a match can be painfully boring. In traditional gaming, one has the option to shift their attention elsewhere—to look out a window, maybe eat some food, etc. In VR, sitting at a menu means you're completely surrounded by that menu without access to distraction, making an already painful wait unbearable. By allowing VR goers to play around in a multiplayer lobby where they can socially engage with one another, practice their skills, or simply muck around with toys, we hope to have offset that pain considerably.

Making the Space Shareable: In order to fully leverage those benefits of the lobby, we had to make sure that the environment and the UI screens contained within it were appropriately shareable. What we found is that—like with seats on a bus—most players will simply go to the next open spot. Therefore, by providing enough podiums for almost everyone to use, there is practically always an open spot, and therefore a place where someone can access whichever touchscreen they need without trouble. Additionally, the character customization preview 'dummy' can be spun so that players can always make it face them[11].
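The spinnable preview dummy relies on a simple but important separation (noted in the footnotes): its orientation is client-local view state rather than replicated game state, so one player's spin never disturbs another's view. Here is a toy sketch of that separation; all class and field names are hypothetical, not Echo Arena's actual networking code.

```python
class CustomizationPodium:
    """Toy model of a lobby customization podium, separating state that
    is synced over the network from per-client view state."""

    def __init__(self):
        # Replicated state: everyone in the lobby sees the same value.
        self.replicated = {"equipped_skin": "default"}
        # Client-local state: each player's dummy orientation, never sent.
        self.local_dummy_yaw = {}

    def spin_dummy(self, client_id, degrees):
        """Spin the preview dummy for one client only."""
        yaw = self.local_dummy_yaw.get(client_id, 0.0)
        self.local_dummy_yaw[client_id] = (yaw + degrees) % 360.0

    def dummy_yaw_for(self, client_id):
        """Orientation this particular client renders the dummy at."""
        return self.local_dummy_yaw.get(client_id, 0.0)

podium = CustomizationPodium()
podium.spin_dummy("alice", 90.0)
print(podium.dummy_yaw_for("alice"))  # 90.0
print(podium.dummy_yaw_for("bob"))    # 0.0 (unaffected by Alice's spin)
```

Keeping view-only state out of the replicated set both saves bandwidth and, as the article notes, gives each player control (and a bit of privacy) over their own view.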
The Ultimate Button: Creating a 'physical' (albeit holographic) button in VR allows for nearly all of the benefits of both a digital and a physical button. Like a digital button, it can change its appearance to indicate its state (e.g. changing color to indicate that its option has been selected, or even outright disappearing when no longer relevant). However, like a physical button, it can allow for partial pressing, direct haptic feedback upon being pressed (though not to the same degree within VR… yet), and much larger sizes (by not necessarily being bound to a screen). Additionally, that size and physicality allowed us to leverage both the finger-precision benefits of large buttons and the intuitive benefits of buttons physically sticking up above a surface (thereby implying that they can be pressed down to said surface). One drawback these buttons do suffer, though, is that they don't work well in situations where scrolling is required: a cropped-off 3D button ends up looking very strange.

Team Assignment Screen: The development of this screen in particular was one of those lucky cases where feasibility, usability, and effectiveness all aligned. While the original specification involved a three-column list just as the final version does, it relied on additional buttons to switch players between the different teams. By creating a non-functioning prototype of that layout, we found it was difficult to get the UI to fit, tedious to use (requiring lots of button presses), and unintuitive (after moving a player between teams, do they stay selected, or does selection remain in the list they came from? Different users had different expectations…). It was then that we realized we were trying to take a digital interface and move it into the world of physical interaction; by taking a step back and looking at how one would prefer to solve this problem physically, we landed on the current design[12].
As with our other interfaces that aim to leverage physical interaction for its intuitive nature, a certain degree of physical simulation was required (in this case, retaining velocity from sliding) to make the interface feel natural and polished, but I believe the additional work was well worth it when I see how quickly and easily players are able to assign teams.

That covers the particulars of the multiplayer touchscreens and wraps up our look into designing UI throughout Lone Echo and Echo Arena. As you can see, designing UI for VR presents a lot of new challenges, but also opportunities; it is a unique world where the incredible possibilities of the digital meet the rules and expectations of the physical, creating an exciting challenge as we try to discover how to balance the two. As VR advances, there will surely be plenty more discoveries to be made… while we here at Ready At Dawn continue to work toward those discoveries, we also look forward to enjoying the incredible discoveries of others. To that end, we're happy to share the knowledge we've uncovered in the hope that it helps any and all to move forward and onto their next amazing VR discovery.

— Footnotes —

[11] Like all of our UI, this orientation isn't replicated over the network, allowing one player to spin the dummy to their liking without interfering with another player's view. We find this is important both for allowing players to easily do what they want without encumbering others, and for affording players a certain sense of privacy.

[12] Fortunately, the removal of those buttons and the additional state tracking they required actually made the interface easier to build: each nametag independently knows what team it's on and where it should be positioned—the only significant management required is a simple structure that tracks the size of each team.