This post is a response to The Atlantic's recently published article "Video Games Are Better Without Stories," which argues that, because the history and architecture of modern games are grounded in spatial navigation and combat, storytelling makes for a poor complement. I would like to dispute that premise on two grounds. First, the author neglects the fact that the design space surrounding mainstream videogame development has been, until recently, remarkably homogeneous. Second, a significant portion of the author's argument is focused on criticizing the type of environmental storytelling found in first-person games like Bioshock, which I will argue is not wholly representative of storytelling in videogames as a medium.
One of the author's key points is that early videogames, due to the technical constraints of the computational technology of the time, were often designed with a focus on combat within and navigation of wide-open, empty spaces. Although these technological constraints have since been eased (though certainly not erased), much of the design DNA found in games like Doom and Duke Nukem persists today, with many modern AAA videogames placing much of their structural focus on locomotion and combat. Here is where the author and I agree: Doom did not find mainstream success through a focus on narrative, and that was almost certainly by design. The development team behind Doom aspired to create a game about combat, and judging by the game's commercial and critical reception, they were successful in their endeavor. However, because of the success of this style of game within the early and malleable state of the games industry, these design foci defined the AAA space going forward. While this isn't to say that all developers in this era aspired to create shooters (and indeed, the '90s were as vibrant and varied a decade for games as any), it's clear from the strength of the genre even today that first-person shooters built on the blueprint of Doom commanded the direction of the mature AAA market. Because of this predetermined focus on spatial navigation, AAA designers with an interest in telling stories were (and often still are) forced to shoehorn their narrative into the game as a secondary focus -- as books to read (Thief, Dishonored, Deus Ex) or audiologs to listen to (Doom 3, System Shock 2, Bioshock). The author names Bioshock as a chief example, and I tend to agree with the point being made: I don't think this is an ideal way to tell a story, as it sits at odds with the rest of the game's focus on movement and fighting.
With that said, the author's primary argument is that the failure of these methods suggests that games ought not to try to tell stories at all. While I can understand the author's reluctance, I feel that he ignores the substantial proportion of AAA games that attempt storytelling outside of the first-person shooter space, not to mention the bevy of smaller games that exist outside of many of the financial and cultural boundaries that define the AAA space. Role-playing games like Earthbound and the Persona series feature gameplay that acts as a natural extension of the narrative, while smaller and more niche experiences like dating simulators or Sword and Sworcery place a large structural focus on player agency within a story. This isn't to say that all of these games achieve a perfect integration of gameplay with narrative, but their popularity suggests that their approach has found favor with players. Furthermore, this kind of storytelling is inseparable from the interactivity implicit to the medium, suggesting that narrative in games is not a futile pursuit.
Videogames need not be defined by what made Doom a financial success in the 1990s; I would argue they are defined exclusively by their interactivity, as this is the aspect that sets them apart from other media. I would also suggest that to insist otherwise is to severely limit their potential as a medium. The author makes a point at the start of the piece that reads, "the best interactive stories are still worse than even middling books and films." I feel this misses something fundamental, though, in that great works in different mediums are, at their structural core, very different from one another; this kind of comparison seems fruitless unless those differences are sufficiently addressed and considered. The stories told in games like Virginia, Kentucky Route Zero, or Firewatch are certainly simpler and less robust than those found in works by Dickens or Kubrick, but then, how would you compare Great Expectations to Dr. Strangelove? Both are works of art because of their mastery of their respective mediums, and because each engages and enthralls audiences by making use of the key features of those mediums. Games are no different, and in order to create meaningful stories, designers must learn to better engage their medium's own quirks and idiosyncrasies. I would agree with the author that many of the most popular and financially successful works in the medium struggle to do this, but to suggest that designers should simply stop trying denies the medium its incredible storytelling potential before it can manifest.
As games have become more complex over the years, user interfaces (UIs) have had to evolve to accommodate them. While earlier games were often less complicated than their modern counterparts, and as such required similarly simple UIs, more systems-heavy modern games require far denser interfaces in order to convey the requisite information to the player. As a result, it's important to be smart about designing UIs, as overcomplicated interfaces can be difficult to manage and can make playing the game feel like a cluttered and unfocused experience. Many modern games attempt to solve this issue by continually reducing the UI, or by making it invisible until it's absolutely necessary.
This is a perfectly fine practice for some games, and it speaks to the developers' priorities: Naughty Dog wants you to be engaged with the characters and the world when you are playing Uncharted, so they strip away everything else during gameplay unless it's absolutely necessary. In these games, the UI isn't seen as part of the game, but rather as an informational concession made for the player. You don't play the UI, you play the game.
While this works fine for games like Uncharted, I would argue that it needn't be a one-size-fits-all approach. While much of the industry has moved in this direction, I think there is merit to be found in a UI that contributes to the experience of playing the game, rather than simply existing alongside it. This isn't a unique approach, but I do want to highlight a few specific titles and methods that I would argue use UI as an augmentation of the game rather than a lens through which to experience it. The Persona series, for instance, has always had a stylish presentation, but the fifth iteration is a great example of using UI to enrich an experience. The menu systems are intensely varied and are always accompanied by interesting and context-specific animations. Because Persona's combat system is that of a turn-based RPG, much of the player's time is spent in these menus, so it makes sense that the developers would design their UI to feel as varied and spirited as the rest of the game. As such, being in menus doesn't feel like something to get through in order to reach the game, but rather like interacting with the game itself.
Another way that games can use UI to enrich an experience is through the use of diegetic interfaces. Diegetic interfaces are UIs where interface elements (text, icons, etc.) are actually made part of the in-game environment, rather than acting as a static filter on top of the rest of the game. This technique is used frequently in VR games, since the need for an HMD makes screen-mounted UI elements feel intensely uncomfortable (imagine holding a newspaper up to your face and moving it around with your head). However, diegetic UIs don't need to be constrained to VR games; a number of 2D games make use of diegetic UI, with the effect that UI elements feel like part of the game world rather than a departure from it. I think this ends up providing a more immersive experience, as nothing in the game acknowledges that there are elements that exist outside the game world.
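To make the distinction concrete, here's a minimal sketch of the difference between a screen-space HUD element and a diegetic one. The toy pinhole projection and all names here are illustrative assumptions, not any engine's real API: the HUD label ignores the camera entirely, while the diegetic label is projected from a position in the game world and moves (or disappears) as the camera does.

```python
# Sketch: screen-space HUD vs. diegetic UI element (hypothetical names).

def project(world_pos, cam_pos, focal=1.0):
    # Project a 3D world point to 2D screen coordinates (simple pinhole model).
    x, y, z = (w - c for w, c in zip(world_pos, cam_pos))
    if z <= 0:
        return None  # behind the camera: a diegetic label simply isn't drawn
    return (focal * x / z, focal * y / z)

HUD_LABEL = (0.9, 0.9)      # screen-space: pinned to a corner, always drawn
sign_pos = (2.0, 1.0, 4.0)  # diegetic: a sign mounted in the game world

# Moving the camera shifts the diegetic label but never the HUD one.
assert project(sign_pos, (0, 0, 0)) == (0.5, 0.25)
assert project(sign_pos, (1, 0, 2)) == (0.5, 0.5)
assert project(sign_pos, (0, 0, 5)) is None  # walked past the sign
```

The design consequence falls out of the math: a diegetic element lives under the same camera transform as everything else in the scene, which is exactly why it reads as part of the world rather than as a layer on top of it.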
This post isn't meant to be so much a call to action as a reminder that there are a lot of interesting and unique ways that UIs can be employed to different effect. It's worth thinking about what your game is trying to achieve with its interface before committing to a single solution, as a well-designed UI can elevate your game's central purpose, rather than simply filter it.
One of the key points of focus when designing games at a broad, conceptual level is that of player agency. The question of what actions a player can take and what impact they can leave on the world is a big one, one that often shapes the overall structure and experience of the game. Many games take this a step further, birthed entirely from the idea of a player fantasy; developers might want to create a game that gives players superpowers (inFamous), or the ability to explore a world with minimal societal constraints (GTA), or simply the ability to play football really well (Madden or FIFA, depending on where you live). Core to all of these is the concept of agency, the idea that a player can enact their will on the game's world in a way that they ordinarily couldn't in real life.
There's nothing wrong with this; games at their core are meant to deliver a brand of escapism, and what's the point of powerless escapism? However, as these experiences grow more robust and complex, designers strive to introduce challenges that can clash with this central tenet of player agency. Games that ask players to make difficult choices, choices that undermine their ability to control the situation, do so at their own peril. We as designers often strive to give players as much choice and agency as possible, so as to make for a robust and real-feeling world to play in. We also strive to create challenge, and many designers (myself included) are trying to come up with ways to challenge players outside the traditional arena of thumbskill. However, because these "alternative challenges" generally engage players outside of the usual realm of conquest, they can take away from the player's feeling of agency.
A good example of this paradox is the realm of player/NPC romance. Many modern games offer players the opportunity to engage in romance and relationships with characters in the narrative; this is generally a good thing when we're talking about diversifying the ways that players can interact with a game. However, it can often feel heavily constrained by the albatross of player agency; in pursuit of giving the player control, romantic partners simply become new frontiers to conquer and complete. There's the uglier question of consent as well; although players usually have to "play their cards right" to engage in romance with these characters, this creates a strange space where the romantic partner more closely resembles a slot machine for virtual sex than an actual partner with interests and desires of their own. The natural counterargument is a technology-centric one: we simply don't have the algorithmic ability to accurately simulate a real romantic partner. The answer, then, is to approach this problem from a design angle; if we want to create healthier and more realistic romance options in games, we need to be willing to take power away from the player and give it to the game.
Romance is one of many examples where player agency comes into conflict with the goal of providing different kinds of challenge to players. However, it seems key that designers be willing to take this agency away when the time comes in order to better represent these challenges.
Imagine the scene: you're playing your newest favorite game for the first time; you play through the tutorial and first level, and the game begins to unfold itself to you. You find an item or gadget that doesn't appear to do anything just yet, or see a locked gate that you can't open. You might not know what that item does or where that path leads, but one thing is certain: your frame of reference within the game just got larger. You're suddenly aware of the existence of other bits and pieces of this world, and how, chances are, these items and doors are going to get used and opened (respectively, presumably) sooner or later! The world feels larger for these discoveries, and the game's level designers didn't have to do a bit of extra work to make it so.
These techniques and many others strive for what's called worldbuilding, where a game is made to feel less like a series of levels and more like a living, breathing space. While the advent of modern technology has certainly made it possible to create massive, sprawling worlds, for most teams doing so is either prohibitively expensive and time-consuming, or might simply detract from the focus of the game. However, just because a game is smaller or doesn't need a massive open world doesn't mean that it can't feel like a real, navigable space for players. There are a lot of ways to accomplish this, but I'm going to focus on just three in this essay: references, obfuscation, and nonlinear layout.
Likely the easiest of the three, referencing other parts of your game world early on is a great way to clue the player into the fact that your game is more than a simple progression from level to level. Lots of games do this in very simple ways; receiving a new item only to later find that it's effective or necessary for progression in another level is a great way to stitch different areas into a cohesive-feeling whole.
Although Batman: Arkham Asylum gives players the ability to see destructible walls very early on in the game, it doesn't give them the ability to blow them up until later. Getting this ability after already having seen the walls indicates to the player that the world is wider than they previously thought.
This sort of thing gives the feeling that different levels aren't necessarily wholly distinct worlds from one another; if you get a can opener in level B and you saw some cans earlier in level A, levels A & B start to have some continuity with each other. They start to exist in the same universe, and if you can layer these kinds of references across your levels in a natural way, you've taken a big step towards elevating your game from a series of levels into something more.
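The gating pattern behind both examples can be sketched in a few lines. Everything here (the item and gate names, the `World` class) is hypothetical and not drawn from any real game; the point is simply that the gate is visible long before it is usable, which is what stitches the two levels together.

```python
# Minimal sketch of item-gated progression between levels (hypothetical names).

class World:
    def __init__(self):
        self.inventory = set()
        # Each gate names the item required to pass it. The gate can be
        # seen (and teased) long before the matching item is found.
        self.gates = {"sealed_cans": "can_opener",
                      "cracked_wall": "explosive_gel"}

    def pick_up(self, item):
        self.inventory.add(item)

    def can_open(self, gate):
        return self.gates[gate] in self.inventory

world = World()
assert not world.can_open("cracked_wall")  # seen early in level A, unusable
world.pick_up("explosive_gel")             # acquired later in level B
assert world.can_open("cracked_wall")      # levels A and B now connect
```

The data structure is trivial on purpose: the worldbuilding effect comes not from the code but from the order in which the player encounters the gate and the item.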
Another way to use references to aid worldbuilding is to incorporate writing in a way that complements gameplay. References to items, people, or places in the story are a great place to start, but if they foreshadow real, in-game things that players can experience, suddenly the game's writing is justified. Lore documents found scattered around are no longer just a way to keep the writing team busy (and how!), but rather they now represent possible clues to things later in the game. The first time you can "keep your promise" to a player in this way, you build a sort of trust between game and player that writings found in-game can actually come to bear. While this isn't always a good thing (try not to overpromise and underdeliver), it definitely goes a long way.
Bloodborne (like the Souls games before it) often references real locations in books and item descriptions littered around the world. In this case, Cathedral Ward and Old Yharnam are both places that the player can reach later in the game; by delivering on this "promise," the game's writing is validated and given purpose.
Another effective way to make bigger-feeling worlds in games is to obfuscate the parts that define its structure. Hiding a game's length, number of levels, or really any possible indicators of progression that live within menus and substructures will help with this. A level-select screen is both the biggest indicator that players are in a game (rather than a world) and the biggest tip-off of when a game is about to end.
For example: LittleBigPlanet 3 is interesting because it offers the promise of "hub levels," where players can navigate a space and move in and out of distinct story levels. However, in the level select screen, which is shown before players arrive in the hub level (the biggest circle with the gorilla on it), players are shown all of the hub's connected levels. Players are immediately given an idea of the overall scale and possibility space of the upcoming section of game, thereby taking away from the exploration and surprise that the hub levels are meant to provide! It's a strange design choice that likely hearkens back to the earlier two games' level-based structure, but it undermines the player's potential for discovery.
This might represent the most difficult of the three techniques described in this post, but it's also possibly the most effective. Designing your game to incorporate levels that circle back on themselves, have multiple paths, or share unexpected shortcuts and connections with other levels is an excellent way to create a world that feels real.
Your world needn't be as complicated and intertwined as the ones from Castlevania or Metroid, but a key takeaway from this style of game is the multiple entry and exit points in each area. Finding shortcuts that lead to familiar areas builds a sense of understanding of the world as a real, navigable space rather than simply a bunch of rooms.
The real world is rarely as linear as most games, and even hints of nonlinearity in level design can make a level (or even a game) feel broader and more expansive than it perhaps really is. Implementing this is definitely effortful; it's harder to make a branching path than a linear one. However, the sense of surprise and discovery that comes from a player learning that two paths they had previously explored were actually connected is hard to replicate, and it sets a precedent that gives the impression of a genuinely bigger world.
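One way to see why shortcuts land so hard is to model a level layout as a graph. The room names below are hypothetical, and a simple breadth-first search stands in for the player's trek; opening a single shortcut edge collapses a long return trip and retroactively reframes the whole space as connected.

```python
from collections import deque

# Sketch: a level layout as an undirected graph (hypothetical room names).

def shortest_path(edges, start, goal):
    # Breadth-first search over an adjacency list built from edge pairs.
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # unreachable

linear = [("hub", "crypt"), ("crypt", "sewers"), ("sewers", "tower")]
assert shortest_path(linear, "hub", "tower") == 3

# Kicking down a ladder from the tower back to the hub turns a long
# one-way trek into a loop, teaching the player the spaces were
# connected all along.
with_shortcut = linear + [("tower", "hub")]
assert shortest_path(with_shortcut, "hub", "tower") == 1
```

Thinking of the layout this way also makes the design cost explicit: every shortcut is just one extra edge in the graph, but it has to be placed where the player has already traversed the long way around, or the payoff never fires.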
Making use of players' curiosity and surprise through reference, obfuscation, and non-linearity can make for compelling and exciting worlds in otherwise smaller games. Although it can be challenging to effectively implement these techniques, and although they often demand extra time and effort, the net effect can be magical, resulting in not just a game, but something closer to a world.
VR has arrived (again), and it brings with it a new set of challenges for developers. This time, however, not all of them have to do with HMD-friendly UIs and motion sickness. Because the push into VR over the past few years has largely been a shotgun effort, with numerous manufacturers throwing their entries into the ring at drastically different price tiers, VR as a medium has found itself highly stratified and generally niche. At any of the three available price tiers ($20, $100, >$400), consumers have an abundance of options. While this might seem like a good thing through the lens of the free market, it has the potential to confuse new buyers who "just want to try this VR thing." Direct comparison of these various options yields more questions than answers: why is the Vive so much more expensive than the GearVR? What about PlaystationVR? What's a Cardboard? While VR adoption numbers are climbing, they're still not spectacular, and I would argue that this is in no small part due to this sort of first-time user confusion. What happens next, then? A glance back at history would suggest that the answer lies in the software; hardware manufacturers sell their wares by gesturing to the experiences that users can have on their platforms. You can only play Halo on an Xbox; Mario lives on Nintendo consoles (though ironically, both of these statements are looking less true by the day). These are the kinds of appeals that justify to consumers the significant financial investment in a new piece of hardware. So when we look back to VR, it seems evident that once again the task falls to developers to create those selling points. If VR's retail landscape were as simple as Sony vs. Microsoft vs. Nintendo -- three competing boxes to put by your TV that all cost about the same -- consumer choice likely wouldn't be so difficult and I probably wouldn't be writing about it. But it's not that easy.
Because those different price tiers of VR equipment offer dramatically different experiences, the question of buying the best piece of hardware for your home becomes much more difficult. The Vive represents a massively larger financial investment at its $800 price tag (not to mention the cost of a capable PC) than does the GearVR at $100. And this makes sense: the two systems are clearly targeted at different markets, with the Vive vying to be the headset of choice for enthusiasts and the GearVR targeting the uninitiated mass market. However, this discrepancy creates a significant bottleneck for first-time consumers; many are unwilling to invest in enthusiast-level headsets like the Vive and Oculus sight unseen, such that many of the great experiences created for this tier of headsets go underpurchased and underplayed except by a small fraction of VR aficionados and early adopters. Combine this with the increasing cost of creating the kinds of AAA experiences that enthusiast consumers are wont to expect, and you have a recipe for the medium burning itself out in a very short timeframe (not to mention the walled gardens that are the development ecosystems for the three big headsets, but that's another post). As such, I would argue that if VR as a medium is to succeed, there need to be more definitive experiences for the lower price tiers of headsets that justify those tiers' existence. While more great games are always a good thing, it is key that a case be made for VR's validity as a worthwhile concept, rather than a fad, at all possible tiers of investment. Inexperienced buyers need to be convinced that VR is worth their time, and they are not likely to buy a Vive to figure that out. Only if a case for VR's worth can be made on the smaller, cheaper headsets will inexperienced consumers be willing to buy the bigger, more robust headsets for the bigger, more robust experiences.
So again, the task falls to the developers, right? Well, yes, but here's the thing. Mobile VR (and, by extension, VR in general) will never escape the prison of fads without those definitive, meaningful experiences, and this means that developers need to make it their prerogative to create those kinds of games. There exists a kind of strange elitism in many of the game development communities with which I've interacted vis-a-vis mobile VR. Many seem to view mobile VR as an unnecessary and uninteresting counterpart to the "real VR" experiences offered by bigger box headsets: a low-power alternative pushed out by the big phone companies looking to cash in on the excitement. And this is the attitude that will cause VR as a medium to fizzle and die. As a quick return to the sales-verse, do you know the highest-selling VR headset right now? I'll give you a hint: it's not one of the big three. It's the Samsung GearVR at over 5 million units, and that doesn't even factor in Google's Cardboard, whose sales are harder to estimate but are expected to far exceed even Samsung's numbers. Yes, the Vive has amazing tracking capabilities and screen resolution far exceeding anything in the mobile space, but if no one is around to play your game, do the technical specs even matter? Developers have to focus on making compelling and essential experiences on smaller and more accessible platforms if they truly want VR to succeed.
I was lucky enough to have the opportunity to participate in my second Global Game Jam this past weekend, and I came away from the experience exhausted but very proud to have had a part in creating A Matter of Fact (we even won an award!). Although pride and exhaustion are the two biggest takeaways that come to mind right now, the team (and certainly I as an individual) learned a lot about fast prototyping and the constrained type of development process implicit to a game jam. This post is going to act as something of a postmortem of our design and development process, highlighting some of the high-level lessons that we learned -- oftentimes the hard way -- over the course of the weekend.
Lesson 1: Scope not only your ambitions, but your genre too
A Matter of Fact was originally conceived as a response to the jam theme of "waves;" with the inauguration and its implications for the flow of information and news in the US fresh in our minds, we came up with the idea of a strategy game where two players have to position transmission arrays to win the hearts and minds of the people. However, we also wanted to incorporate the idea of propaganda and fake news into this concept; players should shoot for maximum dispersal of their message, rather than (perish the thought!) truth or objectivity. While I'm still intensely proud of the fact that we were able to unite these two disparate gameplay mechanics in a way that feels mostly cohesive, even just the act of describing them in writing makes apparent to me how susceptible uniting the two concepts is to overblown scope and feature creep. Knowing that we were hoping to make a large-scale strategy game should have been a key tip-off that our game would need to be as mechanically simple as possible, so as to afford us enough time for the requisite balancing and playtesting. Though playtesting is key to the success of any game, our choice of genre meant that we would ideally need to spend more effort on this front than, say, a platformer or a shooter would demand. We really weren't able to put the time towards this, though, as much of our design and development time was dedicated to creating and uniting these two concepts. As a result, the game isn't as balanced as I would like it to be; a hard-won design and production lesson to take with me to future jams.
Lesson 2: Take breaks when brainstorming, but decide fast
Thankfully, this lesson comes from one of our successes at the jam. After about half an hour of brainstorming, we broke to go get dinner. On reconvening, we were immediately able not only to come up with the idea that would become A Matter of Fact, but also to decide on its overarching design principles and purpose. When all was said and done, our ideation phase lasted only about an hour, leaving us the remaining forty-seven to actually create the game. I personally attribute a great deal of the success here to resisting the urge to immediately sit in a circle discussing ideas for hours after receiving the theme. By conducting a more informal and active brainstorming session (team members would tag in and out to write ideas on the whiteboard or go get food), we were able to find our idea quickly and devote more time to development. This is in direct contrast to my brainstorming experience at my first game jam, where my team and I spent five hours sitting around a table in a boring white room discussing what our game wouldn't be. The distinction between these two methods and their impact on their respective games is obvious, and it definitely informs the brainstorming approach I intend to take in the future.
Lesson 3: Once you've started, take the time to stop
An issue we had once we were fully locked into our development process was one of inertia. As we got closer and closer to the weekend's deadline, we found it harder and harder to stop and reconvene to make sure we were actually doing the right thing. As a result, we ran into some miscommunication and scoping issues at the very end of the jam, resulting in some far too eleventh-hour feature implementations that could have been avoided with some scheduling effort. Because this endgame cram is predictable, we should have preemptively scheduled checkpoint meetings to discuss delegation, the direction of the game, and what we needed to do next.
Lesson 4: Be careful with random generation