I’ve mentioned contextual controls in passing before, and described a basic framework for how they might operate in Atma. Today’s post is devoted to exactly how context-based controls might work in the game’s touch-based system: the game interactions the system needs to handle effectively, and the complications that arise from introducing new interactions into the mix. The end result should be a means of handling touch input that, while potentially intricate on the back end, is quite simple and intuitive from the player’s standpoint.
The first question to ask when thinking about a user interface, and the control system it’s attached to, is, “What do my users need to be able to do?” The answers to this question constitute the functionality your user interface is required to provide. In Atma, players need to be able to move their character around; interact (talk) with non-player characters; use skills, which should be able to target the ground, monsters, or other players; manipulate their inventory; attack monsters; and interact with objects on the ground. I’m sure there are a few things I’m forgetting, but this makes a good list to begin with.
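To keep that list in one place while I design, here’s a rough sketch of the interaction categories expressed as data. It’s purely illustrative: the names are my own shorthand for the functionality above, not identifiers from Atma’s actual code.

```typescript
// Illustrative shorthand for the functionality listed above;
// none of these names come from Atma's actual code.
enum InteractionKind {
  Move,            // move the character around the world
  Talk,            // interact (talk) with non-player characters
  UseSkill,        // use a skill on the ground, a monster, or another player
  ManageInventory, // manipulate the inventory
  Attack,          // attack a monster
  PickUp,          // interact with objects on the ground
}

// Skills need a notion of what they can be aimed at.
type SkillTarget =
  | { kind: "ground"; x: number; y: number }
  | { kind: "monster"; id: number }
  | { kind: "player"; id: number };
```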
At this point, I’d like to tie in the concepts of versatility and consistency that I’ve mentioned before. You can see in the list above that some categories of interaction repeat themselves, and thus might benefit from a consistent and versatile method of input; the act of specifying a target for interaction is a major one. In a game where the available interactions depend on which object or character is being interacted with, there has to be some universal means for the player to express the thought of “this is what I want to interact with”, which means there needs to be a single gesture that signifies this concept. So we come to the idea of a “press”, the term I’ll use here to mean that the user has touched their finger to the screen in something’s general proximity (“general” in that the user needs some room for error, since touching an exact point precisely is quite difficult).
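To make the idea of a press concrete, here’s a minimal sketch of how a press with some built-in room for error might be resolved to a target. Everything here is a placeholder of mine (including the tolerance value) rather than anything final from Atma’s engine.

```typescript
// A minimal sketch of resolving a press to a target, with some room for
// error built in. Entity, PRESS_RADIUS, and findPressedEntity are all
// hypothetical names; the radius value is an arbitrary guess.
interface Entity {
  id: number;
  x: number; // screen-space position of the entity's center
  y: number;
}

const PRESS_RADIUS = 24; // how far off a press can be and still count

function findPressedEntity(
  pressX: number,
  pressY: number,
  entities: Entity[]
): Entity | null {
  let best: Entity | null = null;
  let bestDist = PRESS_RADIUS;
  for (const e of entities) {
    const dist = Math.hypot(e.x - pressX, e.y - pressY);
    if (dist <= bestDist) {
      best = e; // keep the closest entity within the allowed radius
      bestDist = dist;
    }
  }
  return best; // null means the press landed on empty ground
}
```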
Pressing was chosen as an informed guess at the most natural indication of interest in a touch-centric control scheme; determining what feels “natural” is a prominent and vital part of the user interface design process, and in this case, the most natural way of telling the game that you want to interact with something is to poke at that thing with your finger. Regardless of how the conclusion was reached, however, what’s important is that we’ve selected an input method to represent the act of selecting or targeting an object or character. Hold on, though! There’s a problem we run into at this point. We’ve chosen pressing as our method for selecting items, characters, monsters, and so on, but there’s another category of action that seems to call for pressing as well: movement. Just as touching something indicates a desire to interact with that thing, touching a place on the ground is the most natural way of indicating a desire to move to that place.
So can’t we just throw that in with the other things that touching does, and call it a day? We could, but that would be lazy, and it would hurt the control scheme for two primary reasons. First, in a third-person game, your avatar is one of the objects visible to you, so letting a player move their avatar without selecting it first would break the control model we’ve just established. Second, if you want to move next to something, and you lack incredibly precise controls (a description that applies to every touch interface I’ve witnessed to date), how do you move next to it without interacting with it? This second point extends further: if the visible ground is covered by people or items, how can you move anywhere at all, given that you must touch the ground to move? And that loops back to the first point, since your avatar is itself a visible object (given the third-person view) that you might want to interact with in the course of the game. A system of movement that offers any sort of precision, or even basic functionality, clearly can’t rely on an input method that gets confused by crossover with the other input methods. So character movement needs its own means of input, one that begins with selecting your character’s avatar the same way you select anything else: by pressing.
The method I’ve come up with so far (and I’m sure I’ve mentioned it before) is “dragging”: after the player presses down on their character, they can drag their finger in whichever direction they want to move, and the character will “follow” their finger, letting the player lead them in the desired direction (I’ve sketched the idea in code at the end of this post). I’ve tested this method for movement a fair bit, and I think it feels very natural once the urge to just poke at the ground (which, as I’ve already discussed, doesn’t work as a control scheme for multiple reasons) is gone. My feelings on the matter are ten a penny, obviously, and I hope to get substantial feedback on the preliminary controls for Atma when I release the first iteration of the game’s engine. Hopefully, such a release isn’t too far away; I’ll be sure to let you know when I’m close enough to a playable state to start making release date predictions.
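As promised, here’s a rough sketch of how press-then-drag movement might be wired up, assuming a press handler like the one sketched earlier has already decided that the touch landed on the player’s own avatar. All of the names and numbers are placeholders, not Atma’s actual API.

```typescript
// A rough sketch of press-then-drag movement. It assumes a press handler
// like the one sketched earlier has already resolved the touch to the
// player's own avatar; every name and number here is a placeholder.
interface Avatar {
  x: number;
  y: number;
  speed: number; // movement speed in units per second
}

let dragging = false;
let fingerX = 0;
let fingerY = 0;

function onAvatarPressed(pressX: number, pressY: number): void {
  // The press resolved to the avatar, so enter movement mode.
  dragging = true;
  fingerX = pressX;
  fingerY = pressY;
}

function onTouchMove(x: number, y: number): void {
  if (dragging) {
    fingerX = x;
    fingerY = y;
  }
}

function onTouchEnd(): void {
  dragging = false;
}

// Called once per frame: the avatar walks toward the finger rather than
// snapping to it, so the player "leads" the character around.
function updateMovement(avatar: Avatar, dt: number): void {
  if (!dragging) return;
  const dx = fingerX - avatar.x;
  const dy = fingerY - avatar.y;
  const dist = Math.hypot(dx, dy);
  if (dist < 1) return; // close enough to the finger; avoid jitter
  const step = Math.min(avatar.speed * dt, dist);
  avatar.x += (dx / dist) * step;
  avatar.y += (dy / dist) * step;
}
```

The detail I care about is that the avatar chases the finger at its own speed rather than snapping to it; that’s what makes the character feel led around rather than flung.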