Fashion, Emptiness and Problem Attic

Problem Attic is a game by Liz Ryerson that you can read about and play (for free!) on her website.

problem attic

Any designed work can be decomposed into two different kinds of features: Intrinsic features and extrinsic features. An intrinsic feature is something we judge to be a non-reducible atom of actual value that the audience wants and the work provides—that is, the work’s purpose—while an extrinsic feature is anything that exists solely to realize that purpose, providing no actual value in itself. To design something we must first decide which intrinsic features we hope to provide and then do so as efficiently as possible by devising and iterating on a set of extrinsic ones. Here is a quick example: The intrinsic features of a hammer include ‘delivering impacts to objects’ and ‘inserting/removing nails from rigid structures’, and to best realize these we might iterate on any number of extrinsic ones (such as the hammer’s shape, the materials of which it is composed, or its manufacturing cost).

A product, which is a special kind of designed work, has at least two intrinsic features. One is to perform the task for which it was made; the other is to convince you to buy it. (The next time you hear the phrase ‘ludonarrative dissonance’ ask yourself whether the dissonance you’re discussing might actually stand between ‘what marketing decided would generate money’ and ‘what the designers defiantly attempted to produce’.)

Iterations on a design’s intrinsic features are transformative; they change what the work is on a fundamental level by changing what it does. (A hammer that cannot deliver impacts to objects is no longer, ontologically speaking, a hammer; it has become something else.) Iterations on a design’s extrinsic features are merely ameliorative; they make it better at fulfilling its purpose without changing its nature. Thus we only value extrinsic features insofar as they improve a design’s ability to give us the things we actually want, and we are quite content to discard them as soon as we find more effective ones. Walmart would stop selling hammers if they could figure out how to market telekinesis. Google would stop making iPhone apps if they could perfect the horrifying spider drones that burrow into your brain through your nasal cavity and telepathically communicate bus directions to you.

The intrinsic features of Art media like literature or film, unlike those of hammers and map APIs, are not easily reducible into language. Whereas to design a hammer involves finding ways of realizing features whose value is readily apparent, to make Art is to search for value lying beyond the edges of our understanding: To capture something we know is important to us even though we cannot quite say why. This is what makes ‘Art’ so famously difficult to define, and why we speak not about ‘novel designers’ or ‘film designers’ but about the authors of these works. Authors are a specialized type of designer who work to realize feelings, concepts or moments; often they attempt to connect in some fashion with our shared humanity. We cannot fully express what their work is for because its value transcends understanding. Thus while conventional design undoubtedly remains useful as a means of iterating towards our authorial objectives (the language by which we communicate mood during a film, for example, is the product of very sophisticated design work) it tells us nothing about what our authorial objectives should actually be nor what our Art becomes when we realize them.

Videogames inherit a little from Art but mostly from product design, which has been kind of a problem for us. As an industry we put faith in the idea that there is intrinsic value in the games we develop, although we don’t think very expansively about what that could be; instead we abstract it, using ugly words like “content” as placeholders for value without ever proving that it truly exists. We then set about designing incredible machines that shuttle players towards these placeholders with extremely high efficiency, which as designers is really what we’re good at. We make the interface as usable as we can because players need it in order to learn the rules. We teach the rules very carefully because players need them in order to grok the dynamics. We shape our dynamics strategically because enacting them is what will stimulate players to feel the aesthetics. Somewhere at the core of all this, we suppose, lives the “content” players are attempting to access: That which we have abstracted away so that we could hurry towards doing safe, understandable product design rather than risky, unfathomable Art. In game design we enjoy paying lip service to aesthetics. What, then, shall we say are the aesthetics that we can package up into 5–60 hours of intrinsic value? Challenge? Agency? Story? Fun? Is ‘Mario’ an aesthetic? How do we stimulate the Mario part of the brain? Oh hey, wait, look over there! Someone is confused about what that UI indicator represents! TO THE DONALD NORMAN MOBILE.

The more time I spend examining my professional work and that of my peers in the games industry, the more I come to believe our near-sightedness has crippled us. We have avoided building sophisticated pleasures that demand and reward the player’s investment, preferring instead to construct concentric layers of impeccably-designed sound and fury over an empty foundation of which we are afraid and at which we can hardly bear to look. We gamify our games, and then we gamify the gamifications, so that many different channels of information can remain open all at once, distracting the player by scattering her attention across a thousand extrinsic reward systems that are, in themselves, of no value whatsoever. We delay the realization that our true goal is not to deliver some fragment of intrinsically valuable “content” rumoured to reside, like a mythical unicorn, in the furthest reaches of our product; our true goal is instead to find something, anything to mete out over the course of 5–60 hours that will somehow account for the absence of real intrinsic value. It is not, therefore, the content that truly matters to us; it is the meting out.

Though the products we design ought to provide value for players and money for us, they currently only pretend towards the former function while actually performing solely the latter one. This deception permits us to continue making intrinsically simple products, avoiding transformative changes to our designs that we fear would render them less digestible; we instead rely upon a pattern of amelioration by technical advancement wherein we deliver as few intrinsic features as possible (and the same ones over and over again), but with intricate fashions heaped upon them. We have abandoned game literacy in the process, and as a result we now find ourselves trapped in the business of making increasingly-elaborate pop-up books.

Read more…

Games as Histories

rainbow

The internet has trouble understanding optical phenomena

Why is it that so many of my friends who count Diablo 2 amongst their all-time favourite videogames find themselves so disappointed with its sequel?

It is tempting to point into the horrific maelstrom that is public opinion on the internet and claim that Diablo 3 suffers predominantly from an overabundance of rainbows; now, while I do believe said rainbows represent a tangible detail to which some have pointed in an effort to articulate legitimate concerns with the tone of the game world, that is only one piece of the story. The remainder involves the game’s notorious Auction Houses, which are far more interesting to discuss because they reveal something surprising about loot and its design. Diablo 3 commits the cardinal sin of Game Design Idolatry: It is so fixated on the whirring and buzzing of its item generating machine that it loses sight of the aesthetics for which that system was originally designed, and which made Diablo 2 so memorable.

Read more…

Smacking Some Reason Into Your Akai MAX49

A Photograph of the Akai MAX49 MIDI Controller

I don’t care what the internet says; I think the red paint looks sexy.

I bought an Akai MAX49 MIDI controller recently. It’s an impressive device boasting a nice, stiff keybed, a reasonably-supple set of pads and, of course, its big killer feature: Four banks of eight responsive LED touch faders designed primarily to make your neighbours jealous. A musician, upon unboxing such an instrument, might set about playing music with it; I, however, am no musician.

The first thing I did with my MAX49 was dive into its configuration settings in an effort to set it up just so for use with Propellerhead’s excellent Reason software. Akai offers some presets that work with Reason’s ‘Remote API’, an interesting and mysterious Lua-based scripting system for binding the various physical knobs and dials on any MIDI device to the virtual ones inside Reason. These are pretty good for the most part, but I encountered one major annoyance almost immediately: The controller’s ‘responsive’ faders (which one would expect to work just like the motorized ones on a mixing board) were capable of sending data out to Reason but not of receiving it from Reason. Thus I could use my finger and the MAX49 to move a fader on one of Reason’s virtual line mixers, but if I then used my mouse pointer to tweak that fader in the application the change would not be represented on my physical controller. This was not acceptable.
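What I was after, in other words, was closing a MIDI feedback loop: whenever Reason moves a fader, a Control Change message should be echoed back out to the controller so the LED fader can follow along. Here is a minimal Python sketch of the raw messages involved (the channel and CC numbers are made-up examples for illustration, not the MAX49’s actual assignments):

```python
def cc_message(channel: int, controller: int, value: int) -> bytes:
    """Build a raw 3-byte MIDI Control Change message.
    The status byte is 0xB0 ORed with the channel (0-15);
    the controller number and value are both 7-bit (0-127)."""
    assert 0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127
    return bytes([0xB0 | channel, controller, value])

# Hypothetical feedback handler: when the host application moves a
# fader, echo its new position back out over MIDI so the physical
# LED fader can track it.
def echo_fader(send, fader_cc: int, new_value: int) -> None:
    send(cc_message(0, fader_cc, new_value))

sent = []
echo_fader(sent.append, 20, 64)  # CC 20 is an arbitrary example number
```

In a real setup the `send` callable would write to an actual MIDI output port; the point is simply that the controller’s faders need to be on the receiving end of these messages, not just the sending end.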

Read more…

The Strawberry Game Jam

Screenshot of The Wheel

I did a game jam last weekend! You can look at the result right here.

This game is a parable about time and society. A person must journey all the way around the surface of the globe, following the footsteps of his doomed countrymen into hell and confronting the ancient thing that brought them there.

I developed this project alongside Dylan Bremner between the hours of 7:30pm on a Friday night and 4:00pm the following Sunday. We used Flash’s fancy newish Stage3D (via the somewhat-interesting StarlingPunk framework). The rings consist of rectangular textures bent over circle-shaped triangle fans using some clever UV mapping and programmable shaders. I am very happy with it. It was a productive and enlightening experience.
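For the curious, the basic trick of bending a rectangular texture around a circle can be sketched out like so. This is a Python illustration of the vertex/UV layout rather than our actual ActionScript, and it emits strip-style inner/outer vertex pairs instead of the triangle fans we used; the radii and segment count are arbitrary:

```python
import math

def ring_mesh(inner_r: float, outer_r: float, segments: int):
    """Bend a rectangle around a ring: for each segment boundary,
    emit an inner and an outer vertex. U runs around the circle
    (so the rectangle's width wraps the full 360 degrees) and V
    spans the inner edge (0.0) to the outer edge (1.0)."""
    positions, uvs = [], []
    for i in range(segments + 1):  # the extra iteration closes the loop
        angle = 2 * math.pi * i / segments
        u = i / segments
        cos_a, sin_a = math.cos(angle), math.sin(angle)
        positions.append((inner_r * cos_a, inner_r * sin_a))
        uvs.append((u, 0.0))
        positions.append((outer_r * cos_a, outer_r * sin_a))
        uvs.append((u, 1.0))
    return positions, uvs

pos, uv = ring_mesh(1.0, 2.0, 32)
```

Feed vertex pairs like these to the GPU and the texture’s horizontal axis wraps seamlessly around the ring, which is all the “clever UV mapping” really amounts to.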

Special thanks to Phil and especially Emily for allowing us to stink up their suite for an entire weekend!

Narrative Economy Makes Walking Dead Work

In drama there is a principle known as “Chekhov’s gun”. It goes like this: If, in act one of a play, you place a loaded gun prominently in the middle of the stage so that it becomes notable to the audience, it behooves you to fire the thing before the curtain falls. If you don’t, it means you’ve wasted the audience’s time and attention on a meaningless detail when these precious resources could instead have been spent on something that contributes more earnestly to the quality of the story. The concept is most frequently applied to storytelling, but in fact it’s applicable to all forms of design and can be restated in a way familiar to all creators: Strive to make every feature of your product as purposeful as it can possibly be. Good design is economical; it maximizes utility while minimizing waste.

Previous videogames aiming to provide the player with interesting narrative choices suffer from a lack of economy, and this is partly why we find game stories to be so inferior to those of film or novels. Consider, for example, the time-honoured trope of wheeling two characters the player has never seen before out onto the stage, explaining some conflict in which they’ve mired themselves, then asking the player to decide everyone’s fate. Often the choice allows the player to make some clearly defined moral stance (the proverbial ‘baby save/puppy kick’); occasionally it involves thought-provoking ethical judgments (‘gray area’). Ideally the player asks herself: What is the best way to handle this situation? Pragmatically, however, I can think of a few more pressing concerns: Who the hell are these people, why should I care about them, why should I care what happens to them, and how does any of this affect me?

Read more…

Violence in Games Has Gotten Weird

One afternoon in university a classmate asked me what I wanted to do after graduating. I immediately said “Video games!”, and she was surprised; she thought I’d go off to make weird Arduino-based projects for the Surrey Art Gallery or something. She didn’t think me the type to play or want to make video games. Eying me suspiciously she asked: “Aren’t those really violent?”

I was expecting this; I was, in fact, prepared for it in the way only an obnoxious know-it-all like myself can be prepared. “Well actually…” I began, eager to waste the next two minutes of her life ranting about my favourite subject. “The medium’s fixation on violent conflict is an unfortunate artifact of early design constraints. Just underneath the blood spatter are interesting spatial and temporal problems that constitute the actual game, and designers dress those things up with violence only because the precedent has become deeply ingrained and difficult to erase.”

Read more…

Your Portfolio

A screenshot of the author's ePortfolio

So you’ve just graduated from university, and you find in your hands a scrap of parchment with a bunch of letters on it that don’t make very much sense (say, for example, “Bachelor of Science from the Faculty of Communication, Art & Technology”). You want to work in the creative sector, yet your resume is looking rather thin and few seem interested in hiring a recent graduate of the science arts on her word alone. What can be done?

Well, I think you should consider building an ePortfolio. An ‘ePortfolio’, in addition to being a ridiculous word, is a thing on the internet through which people can learn about you and your work. The idea is that the student projects in which you invested so much time, but which cannot rightly be communicated or listed as ‘experience’ on a resume, may hereby be repurposed into a polished and readily-accessible form that (when designed properly) is both broader and deeper yet more approachable and indeed even more succinct than a resume could ever be.

My portfolio got me hired and yours can do likewise, which is cool. But beyond employment prospects, developing a portfolio provides a good opportunity to work uncompromised on a meaningful (yet also profitable!) personal project. Your portfolio can serve as a milestone; a monument. It can be a source of closure, of renewed opportunity, and even of personal transformation. For while it would certainly be more fun to go travelling for a year (perhaps “finding yourself” at a dive bar in Munich), people in our field should already be aware of the most effective route to self improvement: Good, hard, thoughtful, painful work. And a portfolio requires exactly that. This is a chance to synthesize everything you’ve done in school, for employers, and elsewhere in life; to produce your first proper masterwork. And unlike an unpaid internship or a placeholder job you don’t really want, if you put enough effort in I can guarantee you’ll be glad you did it.

Read more…

When Not to Turn It to 11

“The numbers all go to eleven. Look, right across the board, eleven, eleven, eleven and…”

“Oh, I see. And most amps go up to ten?”

“Exactly.”

“Does that mean it’s louder? Is it any louder?”

“Well, it’s one louder, isn’t it? It’s not ten. You see, most blokes, you know, will be playing at ten. You’re on ten here, all the way up, all the way up, all the way up, you’re on ten on your guitar. Where can you go from there? Where?”

“I don’t know.”

“Nowhere. Exactly. What we do is, if we need that extra push over the cliff, you know what we do?”

“Put it up to eleven.”

“Eleven. Exactly. One louder.”

“Why don’t you just make ten louder and make ten be the top number and make that a little louder?”

“[pause] These go to eleven.”

This Is Spinal Tap

As a civilization we are preoccupied with going to 11. We first like to rank things (dining experiences; movies; potential sex partners) using some set of whole numbers like 1–10 (inclusive); we then seek cases where the object in question is so illustrious as to exceed the arbitrary scale we just invented, perhaps due to outstanding salt prawns or because she hails from ‘Elevenessee’. Our species invented the ‘five-alarm chilli’, then the six-alarm chilli, and then, conceived in the darkest reaches of Flavour Mordor, the insidious eight-alarm chilli. (How much spicier is it? Three alarms.)

On a scale of 1 to 10, where 1 is ‘not doing it’ and 10 is ‘just doing it’ I suspect Nike is currently holding somewhere around 12 or 13, which constitutes a year-over-year growth of 1.2 just do its.

I once read an article about how id Software’s publisher wanted them to fill their games with as many ’11 moments’ as possible. Always be turning it up to 11! More awesomewatts per second! Crank that funometer all the way up, and then crank it even more up! One gets the impression that as game designers we want the minds of our players to be exploding violently on some kind of perpetual basis.

Poster for Daikatana reading "John Romero's about to make you his bitch."

John Romero turns it up to 11

The joke, on a structural level, is that this line of thinking misapplies the concept of upper- and lower-bounded scales to a problem better suited to the more general ideas of greater and lesser. A guitar amplifier has a minimum gain (off) and a maximum gain (the loudest it will let you go), which is to say it has a range, yet the members of Spinal Tap have identified a critical deficiency in this device’s design: Over time, our ears become numb to the intensity of anything played at the same consistent volume. When something first gets loud it seems pretty loud, but if we listen to it for three minutes straight it will at some point stop seeming ‘loud’ and start seeming ‘the same’. To be musical, therefore, it is necessary to let go of the absolute measure of volume and start dealing instead in ‘getting louder’ and ‘getting quieter’, employing something performers call dynamics (which is another word for ‘change’). This is why we characterize the passages of a song not by their decibel rating (our sensory apparatus is actually totally incapable of measuring that reliably anyway), but instead by when the song gets louder, when it gets quieter, and how much its volume changes relative to how loud it used to be.

Read more…

Games as Conversations With God

In one of Giant Bomb’s infamous live E3 podcasts, David Jaffe (the foul-mouthed director of Twisted Metal and God of War) described the plight of videogame narrative in an interesting way. It goes like this: The easiest movie to make is about people sitting in a room talking, while the hardest might involve spaceships and a bunch of explosions, right? When it comes to games, though, the easiest thing to make includes spaceships and explosions whereas simulating a bunch of people sitting in a room talking turns out to be incredibly hard. As such, many game designers (including Jaffe himself on God of War) choose not to simulate these conversations at all, instead writing and recording them as cutscenes to be inserted between explosive spaceship battles. Jaffe, in recent years, has become tired of that, and has done a few interviews (as well as one notable DICE talk) encouraging designers to explore the more procedural experiences at which games specifically tend to excel.

So, let’s unpack this a little bit. Why is it that games are so bad at simulating conversations between humans? Well, mostly it’s because of the dirty little secret living between the walls of the information revolution: Computers suck at almost everything. Unless your problem involves ‘doing arithmetic very fast’, it’s going to be rather difficult to convince your microprocessor to help you out with it. (Indeed, the entire field of computer science is essentially concerned with transforming various complicated problems into the smallest possible amount of arithmetic.) Computers are not naturally good at reading or writing in our languages, at emulating our behaviours, mannerisms and decision-making processes, or even at rendering images of us that don’t look like horrifying robot marionettes. They do not think, speak or act like us. They don’t even know what we are or that we exist by most definitions of the verb ‘to know’. We programmers do not speak to computers on our own terms, like people do in Apple commercials or on the Holodeck in so many episodes of Star Trek. Instead we do so strictly on the computers’ terms, primarily by reading/writing numerical values and doing simple math. The languages with which we instruct them grow increasingly elaborate as we climb the ladder from assembly to C to Java and onward, but they haven’t actually become more ‘human’. Object-oriented programming, for instance, is a useful design strategy, but it bears no real resemblance to English or Mandarin or Latin and, in fact, something like 80% of our population can’t seem to understand or apply the principles of OOP very successfully (or, for that matter, almost any other programming principle).

When futurists talk about how there is going to be a ‘technological singularity’ in which computers develop self-awareness and in several seconds team up to calculate the meaning of everything and enslave/destroy us all, I find myself skeptical. Human consciousness, far from being the default way of existing, is actually this weird thing that resulted from an obscure evolutionary process on a large rock within a universe of matter, energy, light, gravity, magnetism, weird sub-molecular nonsense and so forth. At some point organisms with genetic structures emerged and, through natural selection, they eventually managed to evolve into these funny-looking bipedal critters with these weird ropy actuators that wrap around a hard endoskeleton and are operated via electrical impulses from this gray lumpy thing. For fairly arbitrary reasons, we happen to obsess over survival and, curiously enough, reproduction. Software, by contrast, is not anything like us. It exists in a universe of numbers, patterns and instantaneous transformation, all of it having been designed from scratch by humans for a specific purpose (this is why it’s always way worse at its critical functions than practically any biological organism you could name). If we did manage to build an AI capable of making twenty million increasingly-powerful copies of itself in an instant, what makes us think it would choose to? Reproduction is a biological thing. If our AI could get online to check Wikipedia and thereby absorb the sum of all human knowledge in 2.3 seconds, why would it want to? We humans are naturally curious, but I assure you my installation of Microsoft Excel is not. An AI may not mind dying; it may not consider the constructs ‘life’ and ‘death’ applicable to itself. It may not even recognize the concept of having a ‘self’ or of there being ‘other people’. It may regard our solar system as very similar to a brain, and the nuanced little movements of our planets and space junk as essentially the same thing as the complete works of Shakespeare. There’s a good chance it won’t find any of this stuff particularly ‘interesting’ in the way we understand the word. (Now, that pixel noise pattern in the top half of that webcam feed? Y’know, the one with all those weird face-looking blobs moving around in the bottom of the frame? There is something worth studying!) Our universe sometimes yields humans, but that’s because our universe is weird. Digital environments, being cleaner and featuring less quantum entanglement, are poorly suited to our kind.

The question, then, is how to use these computer things to produce works of art that are relevant to the human experience in all its diversity. Now, perhaps one day computer scientists will squeeze enough human cognition into some set of O(n log n) algorithms that we can indeed all become addicted to the adult-themed Holodeck programs we so desperately crave. Yet I personally shall not hold my breath. Should you be a medium-to-large scale game developer you might try hiring some writers, artists and animators to hand-craft everything that your video game humans will look like and do, which can yield some interesting results. But what if, like Jaffe and many others, you simply don’t want to make a game full of cutscenes, dialog trees and other such forms of inelegance (or you happen to be dirt poor)?

Read more…