March 30, 2004
The fundamental choice, it seems to me, is whether your pieces move simultaneously or in turn. This is completely orthogonal to the issue of "real time" versus "turn based":
| Single move | "Move your piece" |
| Simultaneous | "Plotted movement" |

Examples: Baldur's Gate / Neverwinter Nights, Age of Empires, Starfleet Command 2
As you can see, it's hard to classify some of these games, and the more you think about it the harder it gets. For simplicity, I'm explicitly declaring that games that allow you to pause the action are not disqualified from being considered "real time". All computer combat games are turn-based in some internal implementation sense (e.g., in Warcraft your grunt will get a swing every X quanta of time). In my taxonomy, a game is "turn based" if those quanta are explicitly exposed to the user as "Now is the time when you must enter your orders, or your men will stand around, slackjawed, looking rather like 3D-rendered apes."
AI, in other words, can turn a turn-based game into a real-time one. Neverwinter Nights implements D&D style turns and rounds, but if your avatar is attacked he or she will at least have some default "hit the monster back" type behavior. Likewise, Starfleet Command 2 is an almost direct implementation of the completely turn-based system from the Star Fleet Battles board game, but successfully hides its turn based nature from the player.
So for me the vertical axis is the more interesting one -- simultaneity of movement. "Single move" means that in the context of the game, either the player is moving or shooting, or the enemy is, but not both at the same time. "Simultaneous" means that the enemy can be on the move at the same time as your pieces are.
I find it interesting that most of the squad level games are single move. The compromise that most of them seem to have adopted is "only one piece moves at a time, but if the moving soldier moves into the field of view of an enemy who has movement points left, the enemy may get an attack of opportunity." I've never played Advanced Squad Leader, but I have a suspicion that that is exactly how it resolves combat. Any ASL experts want to enlighten me?
The battalion and division level games are more sophisticated; from as early on as Chris Crawford's Eastern Front, these games generally allow you to input orders for all your units, and then tell the computer "go." A notable exception is the Panzer General series of games, which essentially acts like the squad-level games in my list.
This is an intrinsic advantage of "move your piece" over "plotted movement" type games from a UI perspective: you click the mouse button and something happens. A piece moves. A little animation of a gunshot or an explosion plays. You whack the other guy with the stick. In a plotted movement game, the most that can happen is the computer can play a little happy sound that means "Yes, orders received." UI matters. This is a big hurdle for plotted movement games to overcome.
Scale is another issue that may make it harder to implement simultaneous movement in a squad level turn based game. In a game like Combat Mission, the distance troops need to cover is large enough that they're generally not going to fly past each other by accident. A carelessly implemented squad level game could result in multiple rounds of combat with antagonists running past each other and then pirouetting around to fire off a shot.
Yet if we're honest, the "move your piece" games suffer from their own class of problems: believability. This is less of an issue in the fantasy games, but it does stretch credulity in, say, Jagged Alliance when one of my men has enough action points to saunter up to an enemy and calmly fire a Desert Eagle into his face, and the enemy just stands there for the entire 10 second period because he had higher initiative and therefore has already moved and used up all of his action points. So even though plotted movement games have usability hurdles to overcome, I think it's worth trying to overcome them, because they bring greater realism -- of a sort -- to the player. Clicky-clicky games bring that realism also, but at the cost of requiring more dexterity and -- potentially -- less strategic thought (at least, that's the argument. I'm not actually convinced that this is necessarily true, but I'll put it forward for consideration anyway).
Here are some rules of thumb that I think one could use to make plotted movement games work as well as "move your piece" games:
- Rounds need to be short. The more orders you have to keep in your head before hitting the "go" button, the more confusing the experience will be to the player.
- The UI should encourage players to issue orders to squad members in terms of objectives and tactics, rather than in terms of controlling every step they take. It should be feasible to instruct a squad member "Get to this location on the map, moving cautiously and keeping under cover whenever possible" and not have them run out into the middle of a football field. Corollary: A good pathfinding algorithm is a requirement. Corollary: The interface needs to allow for a unit to report in for more orders if they get stuck or can't complete their objective.
- Likewise, instead of ordering "shoot at target X now," firing orders should be objective based: shoot indiscriminately to provide covering fire, only try to take out this target when you have a good shot, fire at anyone who approaches this area who you have a clear shot at.
- You should be able to issue orders to multiple units as a single squad ("Move to this objective cautiously. Leapfrog and cover each other as you go.")
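These rules of thumb suggest a shape for the underlying engine: collect objective-style orders for every unit up front, then resolve them simultaneously in short ticks, with units reporting in when they finish or get stuck. Here's a minimal sketch of that loop (all names and mechanics are hypothetical, invented for illustration, and not from any real game):

```python
from dataclasses import dataclass, field

@dataclass
class Unit:
    name: str
    x: int = 0
    # Queue of objective-style orders: (target position, stance).
    orders: list = field(default_factory=list)

    def step(self):
        # Advance one tick toward the current objective; report in when done.
        if not self.orders:
            return f"{self.name}: awaiting orders"
        target, stance = self.orders[0]
        if self.x == target:
            self.orders.pop(0)
            return f"{self.name}: objective reached ({stance})"
        self.x += 1 if target > self.x else -1
        return f"{self.name}: moving ({stance})"

def run_round(units, ticks=3):
    # All units act simultaneously each tick -- nobody waits their "turn".
    log = []
    for _ in range(ticks):
        log.extend(u.step() for u in units)
    return log

squad = [Unit("Alpha"), Unit("Bravo")]
for u in squad:
    u.orders.append((2, "cautious"))  # one squad-level order: everyone to x=2
log = run_round(squad)
```

Note that the round is deliberately short (three ticks here), per the first rule of thumb, and that an order is given to the whole squad at once, per the last.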
Or, instead of developing rules of thumb like this, maybe I should just see if I can find someone to fund me to produce Galactic Gladiators 2004. I see maybe a Bruce Willis in the main role, with Julia Roberts as the love interest. It'll have lots of heart. And a happy ending. I promise.
If you enjoyed this article, you might enjoy some of these links, too:
- A narrative history of Chris Crawford's Eastern Front. Crawford also has some interesting discussion of this game in his most recent book.
- An ancient review of Computer Ambush.
- Some interesting commentary from the author of Galactic Gladiators and Adventures. ("I stopped making games for two reasons. I was going broke. And I was beginning to dream in 6502 assembler.")
March 29, 2004
I arrived in Toronto at about 2 in the morning, and the very first thing I did, after parking the car and checking in to the hotel, was to walk down Yonge Street to the nearest street vendor and buy a sausage, slather pickled peppers and mustard and kraut on it, and walk back to the hotel, eating my hot dog, victoriously. Hot dogs taste better in Yankee stadium, or on Yonge street. No one knows why. It's just that way.
I love Pittsburgh, but one of its biggest drawbacks is the near-absolute lack of street cart or vendor culture. The city council doesn't just fail to support it, it actively opposes granting new licenses (and then they write whiny, stupid articles in local papers wondering "Why don't more people move here?") Oh sure, there's the Saigon Sandwich lady on the Strip, who makes the best Saigon sandwich I've ever had, anywhere, and there's Dilly and her ribs, but they're in the Strip District where there's already a ton of great food. Half the joy of a street vendor is that you can find them in neighborhoods where there's nothing but overpriced restaurants and banks. Hand them $2, walk away with the best meal you'll have all day. Other than the Strip vendors, there are a few truck-based places that operate near the university campuses, and one Asian cart on the South Side. That's pretty much it.
Another aspect of Toronto street food that I love: everyone eats it. When I lived in New York, you could pretty much divide people -- on sight -- into those that will eat a hot dog from a cart and those that won't (there was also a third category, which was "I would eat a hot dog from a cart, but I decided to go to Nathan's in Coney Island, instead.") In Toronto, dividing people up this way isn't possible, because they don't let you into the city if you don't eat street vendor hot dogs (all of the vendors sell vegetarian sausages, too, so you're out of excuses). Every time I go, I see someone I wouldn't expect to be chomping down on an Italian sausage in public wolfing one down. Last trip, it was the 60-something Asian grandmother, squatting on the curb (onions, peppers, mustard). This time, a Muslim man and his wife (in a chador!) walking down the street eating their very delicious looking beef dogs (I presume), which were covered in chili.
It was a long drive. But it was a really good hot dog.
More culinary notes from Toronto will follow soon.
March 26, 2004
I'll be in Toronto this weekend, eating good food and visiting good bookstores; probably no updates until Monday.
March 24, 2004
Note: I originally published this article at Tea Leaves.
For a long time I've been fascinated by the idea of being able to buy and drink raw milk (or as some would have it, "real milk") rather than the pasteurized and homogenized product we all know and love. Part of it is the (realistic) fantasy of being able to make real clotted cream and part is the (unrealistic) vision of myself living in the Dordogne making an earthy, runny cheese from lait cru, which I bring to market each week. After the market, I would gather with my fellow peasant workers of the terroir and we'd sit and quietly get drunk on cheap red wine and complain about stupid Americans and the constantly striking truck drivers.
I always assumed that I'd never get the chance to make either of these fantasies come true, but thanks to a well-placed word, I now have a gallon of raw milk. Time to get to work.
Mystery surrounds raw milk. I had always assumed that it was totally illegal to sell it, everywhere in the US. It turns out this isn't true. Despite it being technically legal in my state, I quickly discovered that since everyone (like me) thinks it's illegal, trying to get someone to sell you some in any sort of retail setting is difficult to impossible. It was on a tour of a cheesemaking facility north of Pittsburgh that I got my first useful clue -- we saw how the farmers delivered raw milk, which the cheesemaker then separated the cream out of, and pasteurized. I asked the owner how I could get some raw milk. Looking at me strangely, he said "I guess you should find a farmer."
The only problem was, I didn't know any farmers.
At an Easter dinner, I got into a conversation about cooking and baking; I'd been making a lot of crème fraîche lately, and I let slip my yearning for raw milk to play with, and that I was giving up on ever getting any, since I didn't feel comfortable driving up to some farmer I didn't know and asking him if I could fondle his milk cow's teats. One of the people dining with us said she knew a dairy farmer, and offered to introduce me. I leapt at the offer with enthusiasm. I got a call a few days later setting up a time and place -- "It depends on the milking schedule, see." I was already liking this more and more.
Bob and Tomalee's farm is nestled in Somerset County under the shadow of giant, elegant, and intimidating windmills. Tomalee is short, voluble, and hugs strangers, all the while accompanied by an entourage of placid dogs and scrawny farm cats. Bob's face is cheerful and red, and he clearly relishes playing straight man to Tomalee's trickster. Tomalee seems puzzled by my need for raw milk ("Clotted cream? Never heard of it.") but is thrilled for the chance to show off her menagerie ("my friends") to my friends and me. Suppressing my urge to say "Yeah, yeah, lady, just hand over the loot!" I take the tour. And what a tour.
Tomalee's friends are many and varied. Apart from the milk and meat cows, which don't "count," there are of course the pet cows. Which are like the other cows, only cuter. Or something. But my first indication that something is truly different about this farm are the hens. They have little antennae. "Are those peahens?" I say. "Yep!" says Tomalee. And there, in the corner, is a peacock, with a five foot tail. "Did you know that the peacock is a tropical jungle animal?" she asks. "I learned that in science class." "Huh," I say, resisting the temptation to ask her "So what the hell is this one doing in Western Pennsylvania?" Next to the peacock are the rabbit cages, which are next to the pen with the goats and a single (still unsheared) ram.
On a pile of hay near some of the pet cows a black farm cat is sleeping, near some piled up blankets. Then the blankets move. They're not blankets -- they're a pot bellied pig that has grown to the size of a small washing machine, looking disturbingly like Old Major from the animated version of Animal Farm. "That's Hunter!" says Tomalee. She walks over to Hunter and unleashes a torrent of high pitched baby talk. Tomalee explains that Hunter is real lazy. We walk down an aisle of about 8 "pet" cows. "We don't eat friends," says Tomalee, and I nod. "Never eat anything you've been introduced to, like in Alice in Wonderland." She looks at me funny again, and shakes it off. There's also a horse. I confide that I was scared of cows when I was younger, because they seemed so huge. Tomalee says she's scared of horses, when they're not in a pen, for the same reason.
There are about 50 milk cows in the barn, standing, sitting, eating, pissing (the first time I hear this, I start looking around for the firehose) and generally waiting to be milked. Each animal has a tag on its ear with a number. Above each stall is a sheet of paper with the name of the cow written on it ("Debra", "Marcy", "Vanna"...) and notes on what type of feed it gets -- grass, corn, soy. I've been absentmindedly petting the cows as we walk along. They seem to like being scritched on the sides of their face; several enjoy licking my arm. On my second time trying to pet Debra between her eyes, she lifts her head up and slides it roughly down the side of my arm, turning it, pushing my arm away.
I have just been brushed off by a cow.
Bob is talking to one of my friends about using Bovine Growth Hormone (BGH) to increase milk production, and how he doesn't use it because he thinks it overworks the cows. These people wouldn't think to market their products as organic, yet they have an intrinsic distrust of Monsanto corporation. It's like I'm in heaven. I am salivating at the prospect of trying their milk.
Yellow Golden Pheasant
I promise Tomalee I'll bring some clotted cream with me when I visit again. She says she looks forward to finding out what it is.
The milk itself, when I finally try it, has a musty, almost cheeselike taste, with just a hint of a sulfurous tone. The cream does indeed rise to the top, and although the clotted cream didn't turn out that well, the crème fraîche was awesome. With spring just around the corner, I'm looking forward to going back to visit Tomalee and her friends. I think Debra the cow and I have some unfinished business.
These links might be of interest to you:
- The Campaign for Real Milk maintains a summary of laws about raw milk around the United States.
- The US Department of Agriculture sucks for not allowing us to buy, sell, and eat yummy raw milk cheese.
- The milk you drink might contain Bovine Growth Hormone. Sorry.
- The deceptive advocacy group PCRM is trying to convince people that milk is poison. Here's hoping I'm never invited to any of their dinner parties.
March 23, 2004
Lebanese mourn death of Saruman, Wizard of Isengard
March 22, 2004
Buying vintage port is like going on a blind date in Manhattan. No matter how many close friends vouch for your blind date, you really can't know in advance whether it will be fun or a disaster, and the only thing you know for sure is that it's going to cost a lot of money.
You can't even count on having a good time if the date isn't completely blind. Since vintage port is a wine that we often keep for years, it's not unusual to end up in a situation where one bottle of a given house and vintage is superb, and then the next bottle from the exact same batch is awful because it has spoiled (you didn't rebottle it, you see); this happened to me with a bottle of 1977 Smith-Woodhouse. I'm still recovering from the trauma. There are people who rebottle their vintage port periodically to avoid this outcome. I don't personally know any. (Pet peeve: if the wine industry would just get over itself and admit that "real" corks are completely inadequate to the job they're being asked to do, and move to some less stupid technology such as a bottlecap and wax seal, this wouldn't be an issue.)
So vintage Port, because of value issues, remains for me a rare luxury. Most of the Port I drink, when I drink Port, is just straightforward nonvintage ruby Port; I'm fond of Cálem, but the variance in quality between all of the great Oporto houses in this segment of the market is very small. It's hard to go too far wrong. Occasionally I'll grab an unassuming bottle of cheap white Port, when I can find it, which isn't often. That's a little riskier (and many people don't enjoy white Port). I dislike tawny Port intensely; I'd rather drink Sherry.
Ruby Port is a fine drink. Typically it will run you around $10 to $15 a bottle in the US, a little less overseas. The houses in Oporto, sensing an under-served market ("I want to be snobby about my wine, but I don't want to pay for it.") have developed a class of wines in recent years referred to, broadly, as "Late Bottled Vintage." In one of those amusing false cognates, "Late Bottled Vintage" is Portuguese for "not vintage." Like vintage wines, LBVs come from a single year's harvest, but they spend much longer in oak casks than true vintage Ports, and much less time maturing in the bottle. This forces some complexity into the wine, at the cost of subtlety. They're ready to drink earlier, although they will generally have a lighter body. A typical LBV will run you between $25 and $40.
Late Bottled Vintages have been a thundering success for the houses of Oporto. I believe this isn't so much wannabe wine snobbery as a genuine desire to drink good Port, and drink it now: the only vintage Ports most of us can reasonably afford are the truly young ones. Buying a case of young vintage Port is a risky endeavor, because you don't know how it's going to turn out, and drinking a bottle of vintage Port before it's matured feels like an ethical violation, at least to me. "Infanticide" is the word bandied about by Portophiles. Late Bottled Vintages are an understandable compromise.
The houses, emboldened by the success of the Late Bottled Vintage marketing experiment, have now developed a new product to capture the dollars lying between the $10ish Ruby price point and the $30ish LBV price point. These wines are referred to as "reserve" Ports. "Reserve" has a long and hoary tradition in the wine industry outside of the niche Port market. Roughly, it translates to "this wine isn't any better than our standard wine, but I somehow think I can convince you to pay more for it." On very rare occasions, you'll find a reserve that is actually noticeably superior to its undesignated cousins. But less often than you would hope.
I succumbed to the marketing this week, mostly because my local wine store (I live in Pennsylvania, where human life is cheap and the State controls all liquor sales with an iron fist) didn't carry an adequate selection of ruby Port. I decided to experiment, and picked up two "reserve" class Ports: Graham's Six Grapes (on sale at $19/bottle) and Sandeman's Founder's Reserve ($18). As usual, we tasted the Ports with neutral crackers and a selection of good blue cheeses (Port and Stilton, Roquefort, or Cabrales is truly an incomparable pairing).
The Sandeman's Founder's Reserve was acceptable, although, arguably, still overpriced. Much sweeter than the Six Grapes, it managed to stay just under cloying. It did this mostly by having an interesting middle -- there's a sour note that was equal parts off-putting and intriguing, a bit like the hint of sour milk in Hershey's chocolate. It lacked the attenuated wino-on-Saturday-night attack (and sustain, and release) of the Graham's. The tail was noticeable if not outrageous. I initially thought the nose was a bit on the light side, but that was because I was comparing it to the beaker of rubbing alcohol that was the Six Grapes; when I tasted it again in isolation it was fruity, with almost a bright citrus or sangria aroma.
My foray into the world of "reserve" Ports has, simply, reconfirmed my prejudices: from now on, when I'm not willing to open one of my vintage Ports, I'm going to stick with a standard ruby Port from a house that I know and love. I'm sure there must be some reserves out there that might be worth the premium, but I'm not inclined to experiment much more with this particular marketing label.
If you enjoyed this review, the following links may be of interest to you.
- The Wine Spectator's Port basics page is a good introduction to what Port is and how it is made.
- My favorite house is Cálem. Visit them if you're ever in Oporto.
- Almost as important as finding the right wine is finding the right cheese.
- Daniel Rogov has reviewed hundreds of Ports. He hates Six Grapes too.
One of the digital photography web sites recently published an article on how Sports Illustrated manages its digital photographs. The piece described the process of shooting and editing 16,000 pictures during the Super Bowl. After reading it, I realized that the workflow that I've come up with for managing my own personal digital pictures is similar to SI's.
Workflow is a word that gets tossed around a lot when referring to the management of digital pictures. But, there is nothing really new going on here. Even in the film days, we all had a workflow. To wit:
1. Shoot the film. Organize the shot film into buckets.
2. Process the film buckets. I always used Tri-X and D76 1:1.
3. When the film was dry, cut it into strips, put the strips into storage sheets. Organize the storage sheets somehow.
4. Make contact sheets. Pick the frames you like. Label and organize the contact sheets to relate them to the stored film.
5. Make proofs of the good frames, and pick the ones you want to print well. Label the proofs and final prints so you can find the negative when you need to make more.
And so on.
The idea here is to be systematic about how the pictures are processed and stored so that you find the pictures you need later.
It turns out you need to do exactly the same sort of thing with digital pictures. The medium makes some of this easier and some more tedious. But, it's important to have a structured and repeatable process, since you don't want to lose that one file with a favorite picture in it.
The Big Picture
I shoot with 2 cameras, a small point and shoot that makes JPEG files and a bigger digital SLR that can make JPEG files, but also allows me to store the pictures in a "RAW" format. The following is what I do with the pictures from shooting to final print.
1. With the SLR, I shoot RAW files only, except when I know I might need to shoot more than 100 pictures before downloading. I have a single 1GB flash card, which fits 100 RAW pictures. With the P&S, I shoot JPEG.
2. Every once in a while, I take the card out of the camera and stick it in a card reader. I use iView Media Pro to download and catalog the initial pictures. All pictures for a given year are downloaded to a single folder on my laptop called "2004".
3. I use iView to rename the picture files to be unique by adding on the capture date. iView automatically keeps track of this date for me, so it's handy.
4. For the RAW pictures, I convert each picture from RAW to a small proofing JPEG file using a batch script and Adobe Camera RAW inside Photoshop. For JPEG files, I also make a small proofing JPEG, just to be consistent. All the JPEG proofs go to a different folder called "2004-JPEG". The JPEG files are tagged with an sRGB color profile. More on this later.
5. I browse the JPEG proofs to pick pictures I like.
6. The good ones get reconverted and then turned into files for printing or for display on the web. I have a script that generates the albums on the web site. These albums are driven off another collection of JPEG files that are completely separate from the main catalog. That way if I need to back up the web site it's easy. Also, the image file names on the site match the ones in my catalog so the RAW file or JPEG that the picture came from are easy to find. iView also lets me tag pictures with keywords and such to make them easier to find.
7. All RAW and original JPEGs are backed up to 3 external hard drives stored in two places. I keep the originals online on my laptop for a year, and then move them offline and just keep them on the backup drives.
For prints, I have to come up with a way to organize the print files and attach them to the originals. I don't make many prints, so I haven't worked this out yet.
I came to this particular workflow after about a year of fussing around with a few different schemes. So here is the rationale for the steps I take above.
1. iView is simply the fastest and most straightforward cataloging program I've found. It doesn't have server features, but I don't need that. It reads EXIF keywords, keeps track of color profiles correctly, and so on. It also automatically lets me browse the archives by date, keyword, and other queries. Very nice.
2. JPEG proofs turn out to be very handy to have for multiple reasons. First, browsing a ton of small JPEGs to find the good ones is a lot faster and easier than browsing RAW files or larger JPEG files. My wife likes to do this using the slide show feature in iView, and this is only really usable with smaller JPEGs. Second, the small JPEGs make a good base image for the web albums, so I'd be making them anyway. Finally, having the small JPEGs lets me carry around my whole picture archive on my laptop. The laptop disk isn't really big enough to hold more than a single year of RAW pictures, but I can hold many years of JPEGs and still have enough room to keep my RAW files for the current year online.
3. I use hard disks for backup because burning CDs or DVDs just takes too much time and is no more archival. My plan is to just buy a big firewire drive every year and cycle the entire archive to it. For the foreseeable future, disks will keep getting bigger faster than I can shoot more pictures.
4. The only manipulation I do with the image file names is to tag them with the capture date of the picture. Putting any more meta-data into the filename is problematic in various ways. Better to have it in the iView catalog. It's also handy to know that your file names are always unique so you can use them as database keys. The main thing to do is to make sure you use the same filename for the same picture everywhere the picture appears.
5. I like shooting RAW files on the SLR because it's really easy to post-process them to fix things like color balance under mixed lighting and general underexposure if I or the camera messed up. Doing this with JPEG files is trickier.
Someone asked me how I keep the two catalogs I make in sync. The answer is that I don't, really. The JPEG catalog is for proofing only and I only use it to navigate to the actual NEF files in the main catalog.
But, most of the meta-data (dates, keywords, labels, etc) that I might attach to a file gets put into the EXIF part of the file which is preserved when I convert from NEF to JPEG. If I change a lot of this info, all I have to do to sync the catalogs is reconvert the NEF files and then rebuild the JPEG catalog based on the new information.
Oh, so for reference, what I do that is similar to SI is:
1. Keep both JPEG and RAW around.
2. Proof using JPEG
3. Reconvert the RAW file and process for printing.
4. Use a good fast cataloging program for meta-data.
My final thoughts are a quick meditation on color profiles.
Color profiling and color spaces are a large and confusing topic. Rather than get into a long and abstract discussion about color spaces, device profiles, gamut and so on, I'll outline my reason for using sRGB for everything via a simple problem that I was trying to solve:
I use a Mac. My wife uses a PC. I needed my web pictures to look at least similar on the two machines.
It turns out that the easiest way to do this is to always convert pictures into the sRGB color space, and make sure that the JPEG files on the web site are correctly tagged with this color space. Why should this be?
Historically, Macs and PCs have displayed images very differently. Apple's default display setups use a gamma of 1.8 and some pre-defined color balance. PCs use a gamma of 2.2 and some other predefined color balance. It turns out that sRGB is a color space that is defined to model a sort of generic PC display. What does this mean?
Basically what it means is that for every (r,g,b) value in the image file, the display software will look at the sRGB profile and transform the numbers into a new (r,g,b) so that the whole image displays with an overall gamma of 2.2 and a certain color balance. The exact transformation that takes place depends on the display device. On a Mac, the transformation makes everything a bit darker and more contrasty. On the PC, the transform does almost nothing.
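The arithmetic behind this is easy to sketch. Using the common gamma-2.2 approximation of sRGB (the real spec has a linear toe segment and a 2.4 exponent, which I'm ignoring here), the display software re-encodes each channel for the device's native gamma so that the delivered luminance stays the same:

```python
def srgb_to_device(value, device_gamma):
    """Re-encode one sRGB channel value (0.0 to 1.0) for a display whose
    native gamma differs from the ~2.2 that sRGB assumes.

    The device will display luminance (input ** device_gamma); to keep
    the intended luminance (value ** 2.2) constant, solve for the input.
    """
    return value ** (2.2 / device_gamma)

mid_gray = 0.5
pc = srgb_to_device(mid_gray, 2.2)   # PC: the transform is a no-op
mac = srgb_to_device(mid_gray, 1.8)  # Mac: the value is pushed darker
```

On a gamma-2.2 PC the exponent is 1 and nothing changes; on a gamma-1.8 Mac the exponent is 2.2/1.8 > 1, so midtones get pushed down, which is exactly the "darker and more contrasty" behavior described above.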
But, the overall result is that if I look at the picture on my Mac with software that respects the color profile, it will look similar to what my wife sees on her PC. Happily, most of the display software on the Mac does the right thing because it all hooks into Colorsync. Earlier versions of Mac OS X didn't have this universality, so if you look at my pictures in Safari under Jaguar, they will look too bright and washed out.
In digital photography circles, sRGB is scorned as a poor least common denominator color space. You'll hear a lot of complaints about lack of color gamut for example. This all might be true, but for me, sRGB makes perfect sense because:
1. I'm mostly concerned with web display, and this makes sure that the pictures look right even on PCs that don't have color management.
2. Most inkjet printers and digital minilab machines do automatic color balance based on an assumption that you are using sRGB. Sticking with this saves me an extra calibration headache. I haven't noticed that I'm losing a lot in sRGB. But if I did, I'd just redo the job in another color space if I had to.
3. Most point and shoot cameras implicitly capture in sRGB and tag their JPEGs as such, so I don't have to do anything special.
So there you have it. Maybe another time we'll talk about color calibration some more. Except it gives me a headache.
March 20, 2004
Game mechanics: the underlying rules and goals of a game. How do you decide what a player is allowed to do? When has a player won? How do player actions affect the game? The mechanics are the part of a game that is not narrative.
Some basic game mechanics:
- Run around in a circle; first one to finish wins. (all racing games)
- Kill everyone else and/or capture the flag (most FPSs)
- Move the ball to the scoring zone the most times (sports games)
- Capture and hold victory points (war games)
- Wager and win tokens (gambling games)
Compound games exist: Get the ball in the goal while killing everyone (Deathrow). Run around in a circle while killing everyone (Wipeout XL, Quantum Redshift). There are plenty more. What I've been thinking about today is not so much mechanics in the sense of the specific ways games play, as in these examples, but in what I think of as "meta-mechanics," the classes that the rules themselves fall into. For example...
A similar decision is whether there is a role for random chance in a game. A game such as chess, checkers, or go is a "pure" game; the outcome is determined entirely by where the pieces are placed. On the other side of the spectrum would be Candyland, where the player actually has no control whatsoever over his move, and just obeys the random chance of the card he pulled from the stack. Many in-between shades exist, from The Game of Life (where you get to make essentially one decision: "Go to college" or "Don't go to college") to Monopoly (where in the early part of the game you can decide whether to purchase or not purchase each property you randomly land on) to The Settlers of Catan where you make early wagers on probability and make non-random moves which will provide payouts later (or not) based on a die roll. There is a school of ludological snobbery that says that "pure" games that have no randomness -- such as chess, go, and checkers -- are inherently superior to games with a random element. The logic here seems to go something like: games are a competition, and if I can't prove my superiority every time by applying the power of my gigantic brain then the game sucks. I think that's clearly wrong -- randomness can have a place in games, just like in life. De gustibus non disputandum est.
What other metamechanics can we identify? Why do some "strategy" gamers so vehemently prefer randomness to dexterity, when that randomness is only a part of the genre because doing more sophisticated modeling in a boardgame context is ponderous? Shouldn't the "roll a die" method of determining an outcome in RPGs and strategy games be outmoded, now that we have hyperintelligent transdimensional computers to do the hard modeling work for us? Is "roll a die" still the preferred method of combat resolution simply because it's easier for developers to implement, and we're all lazy?
- Eric observes that talking about these things can be hard.
- There are plenty of people that think about this stuff more carefully than I do.
- Everyone knows where to get videogames, but if you have no local board-gaming shop, you can get great board games from Funagain.
- It bothers me that Carabande (and its inferior remake) are so expensive.
March 18, 2004
In a wonderful rant, psu talks about the perfect cup of coffee, and how there's only one place in Pittsburgh -- La Prima Espresso -- to get it. His conclusion is that it's pointless to buy an expensive espresso machine like a Silvia because it still won't be as good as what we can get at La Prima.
He's right and wrong. Whether or not getting a fancy espresso machine is "worth it" is of course a judgment call, but I agree that generally what you're going to make for yourself isn't going to be as good as what you get at a good café, if only because of what I like to call the "hot dog in Yankee stadium" effect. You can come really close, but fundamentally no matter how interesting I make my house, it's not likely to have old Italian men smoking cigars and playing Scopa and trying to hit on the cute Italian teacher from Shadyside Academy while people come in and buy fresh pastries from Antonio next door and jostle to make sure that Elio pulls their coffee instead of the annoying kid with the buzz cut.
OK, maybe not everyone likes that ambience. But I do. Hot dogs taste better when you're actually in Yankee stadium.
We have a Silvia at work. You can make great coffee with it. But fundamentally, a cappuccino at work doesn't taste as good as one near Campo dei Fiori. There's also the financial angle: a cappuccino is $3/cup at La Prima, rounding up. That's 200 cups of coffee before you break even on a Silvia / Solis Maestro combination, and that's if we only count equipment and not raw materials. Of course, you may find joy in the process of making the coffee yourself, which is hard to put a price tag on. But if you don't, unless you're drinking many per day, it makes sense to pay for your coffee by the dose instead.
Personal to psu: the lousiest, skankiest train station in Rome makes a better espresso than Café de Flore, although I make no promises as to ambience.
Pete's also right and wrong about Starbucks. He's right that it's a shame that they promote the ridiculous Seattle-style cappuccino -- newsflash, geniuses, the head on a cappuccino is supposed to come from the milk mixing with the foam from the coffee, not from freakishly aerated milk -- but I think he forgets that just 10 years ago if you wanted a cup of coffee on the Pennsylvania Turnpike you pretty much ended up drinking stale Maxwell House that had been cooking for 12 hours. Starbucks' brewed coffee, at least, is better than that.
Synchronicitously, Goob gives some tips on how to make good coffee at home without spending $500 on an espresso machine and $100 on a grinder. His advice is good -- freshness really does trump everything else. For home use, I'm pretty happy with my Bodum vacuum pot, which only ran me about $50 or so.
Get your fix on:
- La Prima Espresso has a web site. The guy in the picture in the lower right is Elio, who pulls the best coffee.
- psu's coffee rant. Paul's coffee advice.
- One popular coffee fanatic's site is Whole Latte Love
- For window shopping, I like CoffeeGeek
- Too Much Coffee Man is the cartoon for the web-savvy caffeine addict.
- The Rancilio Silvia is a nice machine.
March 17, 2004
Since my last article was spent talking about console games and how great they are, let me shift gears and talk about a PC game I've been playing lately: Europa Universalis II.
At first blush EU II looks like a standard "conquer the world" civ-alike, perhaps with a feel akin to Merchant Prince, but it's really more of a history simulator than an actual game. It brings some interesting mechanisms to bear on the conquer-the-world genre that I enjoy on their own, yet which fail to cohere into a playable whole. Let's first look at some of these mechanisms by themselves, and at some of the game's positives, and then talk about the negatives.
EU covers the period from 1419 through to the early 1800s. There are optional campaigns that cover smaller lengths of time, if you're not up for the full challenge of 400 years modeled a single day at a time. Yes, I said one day at a time. The days fly by, but this is not a game of lightning warfare and swift technological change; the sands of time drip through the hourglass one by one. I've been playing the "France 1492" campaign for perhaps two weeks now, and it's still only 1526. This is not a quick game.
Combat and Conquest
One of the aspects of the game which I intensely disliked when I started playing but have come to admire is how, exactly, conquest works. In your typical civ-alike (or Risk-alike), you march your troops off to some enemy province, dice are rolled, blood is spilled, and when all is said and done, to the victor goes the spoils. That's kind of how it works in EU II. But not really.
First, your invading army has to dispose of any standing troops in the province; that works pretty much like you'd expect. Then, you have to mount a siege. You can expect your average siege to take between six months and a year, or longer. Eventually, you'll starve the province into submission, and it's yours. Except it isn't: sure, you might have possession of that province, but if other leaders (and God!) don't recognize your claim to it, you are merely a pretender. No, if you want to claim that province you'll have to earn it the old-fashioned way: by sending your best councillors off to negotiate peace, and getting the country you wrested the province from to agree to give it to you (unless the entire country is just a single province, in which case you may simply take it).
Raising a military is an expensive proposition, and you have to worry not only about taxes, but about support, attrition, and the disparity in technology levels between you and your opponent. It is challenging, if a bit unrealistically whack-a-mole-like -- defeated armies "retreat" in seemingly random directions, sometimes deeper into your own territory, so you have 16th-century armies accidentally engaging in what looks like maneuver warfare. It can be frustrating.
Provinces have religions (and cultures, too). Catholic, Orthodox, Protestant, Sunni, Shia, Buddhist, Hindu...all the major religions are represented (although not Zoroastrianism. Yea, verily, the Wise Lord Ahura Mazda shall punish the developers for their perfidy.) Religion is the backdrop against which politics moves throughout the years. It is more difficult to wage a war against a country that shares your religion (it damages the stability of your realm), and provinces of a different religion have a tendency to rebel more frequently (depending in part on how tolerant you decide to be of other religions). You can send missionaries around the world to convert the infidels.
Trade and Colonization
The game has a fairly opaque concept of "centers of trade" where your merchants compete for goods that I couldn't quite wrap my head around or enjoy. Colonization is expensive and takes forever, which makes sense, since it did in real life. You'll fight hostile natives (if there are any), plague (the game accurately models the dictum "Don't try to colonize subsaharan Africa, you idiot"), and the other great and not-so-great powers.
Exploration is a bit...strange. Put simply, you can't just go off into terra incognita and discover the world; you need a special army: an explorer (for seas) or a conquistador (for land). You can't buy them, find them, or hire them. You get one when the game decides to give you one; Spain and Portugal get plenty, early on, and everyone else gets few to none. This is part of what I mean when I describe the game as a "history simulator." There will be no alternate histories here (in the main game, at least) where France discovers the New World.
One of the major changes from EU to EU II was the addition of the ability to play any country, not just the great powers. Want to play as Japan in 1419? You can do it. Brandenburg, Mecklenburg, or Tuscany? You got it. Playing the game as a tiny power has a very different feel from playing as a great power (it's the difference between "Can I conquer the world?" and "Can I survive the next two years?").
One of the things that recommends the EU series: the games are dirt cheap. You can get the original Europa Universalis for $5 on the remainder shelves of most big PC software chains, and EU II is going for around $10 to $15. That was, in fact, what convinced me to take a risk on it, and if I view my investment as "I spent $15 to get a really cool map of the world in the year 1500", it seems sensible to me.
The user interface is terrible. Of course it's clunky and unintuitive: it's a strategy game put out by a European software house, and ergonomics in software are against the law there. More than that, it has a disturbing mouse-feel. By that I mean that the mouse tracking feels sloppy and inadequate, like they somehow misused DirectX and didn't get it right. At first I thought this was specific to the Mac version of the game, but I've since played on the PC and it's the same thing. I can't quite put it into words, but the mouse just doesn't move right. That makes the fact that the UI elements themselves are all undersized and poorly marked that much worse.
The game is slow. No, slower than that. Glacially slow. Not in terms of how fast your pieces move, but just in terms of pacing. Obviously, some people like that in this class of game. I'm not saying it's an out-and-out negative, but it's a strange thing to say to yourself "OK, well, that was a good round of conquest. Now I'll watch the days go by one at a time for three years until my reputation improves."
The biggest negative of the game, for me, is that it fails to coalesce into something more than a toy; at times it feels more like Europa is playing you, rather than vice-versa. What is the game looking for? What inputs can I give it to make it happy? I admit that that is a vague description, but it's how I feel: I have fun looking at the game. I have fun thinking about the game. But I don't have all that much fun actually playing the game. I think the detailed After Action Reports on the publisher's forums by avid players attest to this: what's fun in Europa Universalis is, in large part, the mirror it holds up to the history buff's psyche, rather than the mechanisms and machinations of the game itself.
Here are some links you can follow while you're thinking about how cool maps are:
Home espresso machines are a big business. For a few hundred dollars you can get a machine on par with the stuff that Starbucks is using to make those Venti half-caff double caramel 2-pump vanilla macchiato smoothie drinks for the local teenage set. But here is why I'd rather just go to La Prima Espresso.
Really good coffee is a complicated thing.
My wife never drank, and in fact, hated coffee for at least the first 15 years we were together. This is because coffee in the U.S., for the most part, sucks. I believe this is for a few reasons:
1. The coffee is not fresh. Most places use a service that delivers packets of preground coffee that have been aging for months.
2. The coffee is boiled. Most places use drip coffee machines or worse, those industrial strength brewers that boil the coffee and then put it in a tank to keep at a super hot temperature for hours. By the time you see it, the coffee is long dead.
3. The Starbucks curse. Starbucks has made espresso and boiled fancy coffee almost universally available. Unfortunately, only the busiest Starbucks goes through enough coffee to have fresh grounds and people who know how to make a good shot. So most of the time you get tasteless crap with too much milk.
The first coffee my wife drank was a café au lait at Café de Flore in Paris. This is some of the best coffee I've ever had. It's basically a triple or quad shot of espresso in one small cup, and hot milk in another. You mix it into little mugs at the table. Let us compare:
1. The coffee is super fresh. This café sells so much coffee that they have to buy new stuff every day.
2. The people know how to make espresso. The espresso here is so good that it has a rich nutty flavor and little or no hint of bitterness. This is the result of perfect brewing. Every shot is done fresh and the grind and tamping are perfect for the machine they use. Getting stuff this wired in is the result of years of practice and expertise.
In Pittsburgh, we have a single coffee place that can reach this level of perfection on a regular basis: La Prima Espresso in the Strip. Like the café in Paris, the coffee is always fresh, and the people at La Prima have been making espresso for years and know just how to do it. In fact, when you go, you should always strive to have one of the veteran bar people make your shot, because the less experienced ones will only disappoint.
Finally, La Prima does not commit the Starbucks Cappuccino Sin. To wit, the La Prima cappuccino is not this mutant drink wherein a tiny shot of coffee sits at the bottom of a mile-deep sea of foam. The drink is as it should be: hot, slightly foamy milk mixed in with a perfect espresso shot, so that the foam from the coffee mixes with the steamed milk to form a perfect bond of milky, rich, smooth, nutty coffee goodness.
For me, the La Prima cappuccino is the closest thing to the apotheosis of coffee that I've had, short of that café au lait in Paris. It would take me years to come even close to producing something as perfect in my own home, with expensive equipment that is too loud and takes up too much space in my kitchen.
Therefore, as pretty as the Silvia is, I don't think I'll buy one. I'd just make bad coffee with it, and that would be tragic.
March 16, 2004
I play video games, on average, maybe an hour a day. Sometimes more, sometimes less, but on average probably 1/24th of my adult life is spent playing videogames. That's quite a lot.
I have a love of the game medium that is wide and deep. For the past many years, I've played games both on the PC (Windows and Mac) and more or less every console in vogue. I spend time and money on gaming as a hobby. And lately I notice that a greater percentage of my playing time is devoted to games on a console, as compared to games on the PC.
Why is that?
Although it's popular for people to blame cost, that's not really a major factor for me. I've got my PC. I've got my consoles. They're all paid for. I'm asking "What are the feelings that make me reach for the console rather than the PC when I'm in a gaming mood?"
For one thing, there's the comfort factor. I sit at a desk in front of a computer all day long. Perhaps somewhere in the back of my mind is the feeling that sitting at a desk in front of a computer in my leisure time too is wrong somehow -- playing a game is psychologically transformed into work. This holiday season, when I had a huge stack of games that I hadn't made progress in, I had a brief period where I was actually feeling guilty that I wasn't playing enough games. When games are work instead of fun, I'm less likely to play them.
I can play a console game from the couch, sitting well back from the monitor, which is an attractive, large TV. Friends can play or watch at the same time in the same room, which makes the console experience a bit more social, comparatively speaking. If I'm playing Project Gotham Racing 2, I can play standing up (everyone knows that if you lean when turning left, the car turns faster, right? Just like how you can steer the ball when bowling.)
Lastly, there's the "just works" factor. This weekend while preparing my review of The Battle for Wesnoth, I decided to fire up Warlords III to refresh my memory of it. Same hardware configuration as when I used to play it, same disc, but now, presumably because of some magic Windows update, the game no longer works -- it crashes after a few minutes. I spent about 2 hours downloading various driver updates and trying different configurations, but in the end I was foiled. Congratulations! This is what it's like to play games on the PC. Get used to it.
When I want to play a game on the PS2 or Xbox, I walk up to it, hit the power button, put the disc in the drive, and I'm playing in a minute or so. No fuss. No wondering if there will be some subtle incompatibility between the game and my sound card. It all just works.
Perhaps this is just another example of how specialization brings convenience. If you really want to, you can make toast by sticking a piece of bread on a spit and holding it above a flame, or by putting it in the oven for the right amount of time. Everyone has an oven. Everyone has a stove. But everyone also has a toaster. You don't hear people saying "Hey, don't use that toaster -- this Viking range is much more powerful!" Yet people make that argument about PC games versus console games all the time.
A year ago I would have said that a PC was the better choice for online gaming, but frankly the Xbox Live user experience has so far exceeded my expectations that I no longer hold that opinion. I've drunk the Kool-aid. Here's my money; a few bucks a month to pay for a voice-enabled rendezvous service that lets me play with my friends rather than a bunch of rude 13 year olds is well worth it.
Will I keep playing PC games? Sure, especially the smaller, independent ones. But the sharp dividing line of quality that used to exist between PC and console games no longer exists. As time goes on, I find that the ergonomic advantages of consoles overwhelm PC games for all but those with the quirkiest user interfaces. I'm already choosing to play most games available for both PC and Xbox in their Xbox form.
Toast, you see, should be made in a toaster.
This is what we talk about when we talk about games:
- Brad Wardell of Stardock explains why the writing is on the wall for PC gaming, and how developers are going to have to adapt.
- Silver Spaceship's Chromatron is one of the best little games to come along in a while. PC and Mac.
- One of the last great small game companies is Ambrosia.
- For those of you who wonder "What ever happened to Silas Warner?"
- My review of The Battle for Wesnoth.
March 14, 2004
I have a cheese problem.
My problem centers around the fact that the two best cheesemongers in town (Penn Mac and Whole Foods) are somewhat inconvenient for me to reach without planning. So I often find myself in the local supermarket, Giant Eagle, which purports to have a good selection of cheese. And they do: in the abstract, their selection is "ok." Nothing fabulous, but they often have cheeses that I would like to eat, especially if I haven't had time to pick up something great at Penn Mac.
However, for some reason that I don't fully comprehend, Giant Eagle wraps their cheese in a plastic that makes all of their cheese taste disgusting -- it has an oily, plastic aroma that manages to penetrate to the core of the stinkiest Stilton. Some middle-level manager probably had to go out and do hard research to locate a plastic this effective at completely ruining cheese. So I am caught in an infinite cycle wherein I am craving cheese, but I have no cheese, and I'm in the supermarket, and Giant Eagle has a type of cheese that I want, and I know that it will probably taste like rancid plastic but I still convince myself that somehow it won't taste bad this time, and I buy the cheese anyway, and I bring it home, and it tastes like plastic and I am sad and swear that I'll never do that again.
Since I don't seem to be able to break my habit of buying cheese at the supermarket when I'm craving it, I've developed a tactic designed to minimize the risk: only buy cheeses that were factory-wrapped, rather than cheeses cut and wrapped at the supermarket. These generally end up being not top-of-the-line cheeses, because of the way they are packaged and sold, but my logic is that it's less depressing to buy a cheap cheese that turns out to be not very good than it is to buy a superb stilton that tastes like plastic because of an incompetent cheesemonger's stinky packaging.
Following that tactic today led me to buy "gjetost," which I knew nothing about other than it is Norwegian and comes in a cute small square package.
Gjetost is...odd. I am not entirely convinced it is really cheese. I think it may actually be a very large piece of "Bit-O-Honey" candy.
I was eating this candy cheese, thinking "this can't possibly be right," so I turned to the internet, which confirmed that, yes, this is indeed what gjetost is supposed to be like. It's a cow's- and goat's-milk cheese prepared so that some of the lactose caramelizes. One site suggested that in Norway it's common to enjoy it with a cup of coffee for breakfast. I happened to have just brewed a pot of coffee, so I tried it: it didn't improve the experience substantially.
People describe gjetost as a "love it or hate it" experience. I don't love it or hate it. But I might give out slices to trick or treaters next Halloween.
Remember: a life without cheese is not worth living.
- Information about gjetost.
- How to make your own gjetost.
- Or, as obsessive gjetost fans have discovered, you can buy it over the internet.
- Gjetost is marketed in the US by Nestlé, Inc. as Bit-O-Honey.
- The Pennsylvania Macaroni Company is the best place to buy cheese in Pittsburgh.
- I'd much rather be eating Ami du Chambertin.
March 13, 2004
I do all of my work on my laptop. I have an external drive for large projects, but the desire to keep everything on the laptop means that I really only want to spend internal hard drive space on the essentials.
I love LiveType, but I only use it once in a blue moon, generally when finishing a project up. Unfortunately, Apple's dopey installation program requires that LiveType (like all the Pro apps) be installed completely on the internal drive. Here's how to route around their bogosity.
All we're going to do is use the Unix ability to symbolically link one directory to another, which lets us store the most egregiously large directory on our external drive while the application still finds it in its expected location. Symbolic links are created with the ln command.
(I am assuming that your external drive is hooked up during this procedure. For instructional purposes, let's assume the drive is named external.)
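Concretely, the whole trick is three commands: mkdir a home on the external drive, mv the bulky directory over, and ln -s a symlink back into place. Here's the shape of it, rehearsed on scratch directories -- the paths below are placeholders, not the real LiveType data directory, and for the real thing you'd prefix each command with sudo:

```shell
# Placeholders standing in for the real paths: INTERNAL would be the huge
# LiveType data directory, EXTERNAL a directory on /Volumes/external.
INTERNAL="$HOME/livetype-demo/internal/LiveType Data"
EXTERNAL="$HOME/livetype-demo/external/LiveType Data"

# Set up some fake "huge media" so we have something to move:
mkdir -p "$INTERNAL"
echo "huge media" > "$INTERNAL/sample.dat"

mkdir -p "$(dirname "$EXTERNAL")"   # make a home on the "external" drive
mv "$INTERNAL" "$EXTERNAL"          # move the bulky directory over
ln -s "$EXTERNAL" "$INTERNAL"       # symlink back; the app never notices

cat "$INTERNAL/sample.dat"          # reads right through the link: "huge media"
```

The application keeps opening the same path it always did; the filesystem quietly redirects it to the external drive.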
That's it! You're done. (You could of course just do sudo bash and then issue the mkdir, mv, and ln commands from a root shell, but that violates the principle of being conservative when wearing your root hat).
You can do this not only with LiveType, but with Soundtrack, and in fact even with iTunes, if you're willing to accept parts of your collection being unavailable when you're not tethered to your drive. Note that iTunes goes to greater lengths than most to disallow this technique (in particular, it will actually insist that your "iTunes Music" directory is on a local drive), but you can set up symlinks within that directory for specific artists. I have two little shell scripts to let me migrate data back and forth as I desire. Here's one:
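A minimal sketch of what such an offline.sh looks like -- I make no promises that this is letter-for-letter my script, and the destination path is an assumption. It defaults to a scratch directory here so you can rehearse it harmlessly; point DEST at your real drive (somewhere under /Volumes/external) for actual use:

```shell
# Sketch of offline.sh: move an artist's directory to the external
# drive and leave a symlink behind so iTunes still finds everything.
# DEST is an assumption; it defaults to a scratch directory for a
# harmless rehearsal.
DEST="${DEST:-$HOME/offline-demo/external/iTunes Music}"

offline() {
    mkdir -p "$DEST"
    mv "$1" "$DEST/"         # migrate the directory to the external drive
    ln -s "$DEST/$1" "$1"    # symlink back into place
}

# Rehearsal on fake data:
mkdir -p "$HOME/offline-demo/music"
cd "$HOME/offline-demo/music"
mkdir -p "The Beatles"
echo "Let It Be" > "The Beatles/albums.txt"

offline "The Beatles"
cat "The Beatles/albums.txt"   # still readable through the symlink: Let It Be
```

online.sh is just the inverse: rm the symlink, then mv the directory back from $DEST.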
...so, from within my iTunes Music directory, offline.sh "The Beatles" will migrate the entire Beatles directory to the external drive and set up the correct symlinks so that I can still access it as needed. Of course, there's an online.sh to reverse the process.
One thing of note is that most HFS+-savvy applications hate symlinks and do unexpected things when asked to store data in a symlinked directory. So if after migrating data offline you try to add anything to those directories (for example, by ripping another Beatles album, say Let It Be, in iTunes), it will end up sticking that in ~/Music/iTunes/iTunes Music/Let It Be rather than in ~/Music/iTunes/iTunes Music/The Beatles/Let It Be. It still works, it's just unpleasant. This means that you should always be prepared to undo your symlinks before installing software updates, for example, lest you confuse the poor lost little Finder.
I hope this little trick helps you, and be sure to make backups of any unrecoverable data before trying it, lest a careless slip of the fingers (or a mistake on my part!) cause data loss.
March 12, 2004
Our thoughts and hopes are with you, Madrileños.
You are not alone.
Over the years, one of my favorite types of computer games has fallen under the rubric of "turn-based strategy": basically, traditional units-in-hexes wargames where the computer takes care of all the bookkeeping for me. The first of these was probably Empire for the VAX/VMS system (and yes, I played it).
For my money, the best in class of these games was the Warlords series. The original Warlords, which had both Mac and DOS versions, took place on a large continent named Etheria. 8 kingdoms each started with a single castle, a hero, and an army. Castles generated income, which could be used to fund more armies (and to support them; you had to pay upkeep for your existing troops). The goal was simple: take over the world. I liked Warlords better than the alternatives because the designer made some smart simplifying assumptions about unit management and generation. Often in games of this class the early game is fascinating and then by the late midgame you are staggering under the logistical weight of managing several hundred units, and suddenly it's not fun anymore. Warlords avoids that syndrome.
Steve Fawkner, the author of Warlords, has continued to turn out sequels. Each sequel has, to me, played progressively worse. I hate to say that, because he has always been kind and courteous to me, and I know that there are countless improvements under the hood. Steve is an AI geek. He'll talk about how the pathing in Warlords III blows away the WL II version, and how much more challenging the opponent is, yadda yadda yadda, but all I know is that it took me 15 seconds to play a simple turn of Warlords II Deluxe, and it takes 5 minutes to play a simple turn of Warlords III, 4 minutes and 46 seconds of which is spent watching the lovingly rendered animation of an orc moving deliberately across the screen, pausing to make himself a hot toddy. You can actually watch the molasses drip from the jar into the teacup. It's very detailed. And oh so very slow.
Warlords IV: Heroes of Etheria likewise continues the trend towards more bigger slower. If you don't believe me, download the demo and try it for yourself.
So: newer doesn't always mean better. Prettier graphics don't always mean better. I would argue that some games can be made better by giving the player a view of his units that has less graphic detail, not more. Graphic designers who work in the real world understand this. Consider the icons used by the Olympics to denote sports. We could surely replace those with 3D holographic movies of people actually participating in those sports. And it would probably result in everyone having to take a few more seconds to think about what they were looking at, rather than immediately mapping the simple icon to "Oh, that's skiing".
I feel bad slamming the newer Warlords games in this way, because I'm sure it's not as if Steve walked into the office one day and said "Hey, I've got this great idea: let's degrade the user experience somewhat and make the game slower." I have no doubt that he shopped an incremental improvement to Warlords and got told by a publisher "If it doesn't look as good as this other game over here, we won't be able to get it on the shelves, so therefore we won't publish it for you, so therefore no more Warlords sequels." I've been in that position myself, albeit not with games. If you're not a sales and marketing expert and the marketing guy says "Your product must have feature X or no one will buy it," you either decide that you know more about marketing than he does and ignore him, and then you go out of business and starve to death and people laugh at you, or you shrug, sigh, and go back to your office and add the feature the marketing guy asked for, even if you personally think it's sort of stupid.
Thinking about it a little more, I'm not really saying "I wish Warlords III (or IV) had worse graphics." I'm actually making a complaint about the user interface. I'm saying "In the old version, I could accomplish this task in X seconds, and in the new version that same task takes four times as long." (Please note that I haven't gone and actually measured the times. I'm speaking of impressions here, but I think that that's valid: half of UI design is about managing user perception.) Perhaps if I hadn't used the older, more responsive UI I wouldn't care. Maybe I'm not Steve's market any more. Maybe I'm more persnickety about response time than other old Warlords fans. But fundamentally, this is the same complaint people make about Windows XP taskbar animations, or the 'genie' effect to and from the dock on Mac OS X.
So the reason this all matters is that I'd rather spend time playing the (open source!) game The Battle for Wesnoth than buy Warlords IV, because Wesnoth has a better response time, even though I am 100% positive that Warlords has better graphics, better sound, better production values, fewer misspellings, and superior computer AI. That's how much response time matters to me. I'm playing a strategy game. Strategy games are like chess. How often would you play a chess game if every time you told the computer to move a piece you had to watch it move for 15 seconds? Another way of saying this is: I have more confidence that there will be patches to Wesnoth to improve enemy AI than I do that I'll ever be able to make Warlords III's UI more responsive.
Thinking about it a bit more, I now realize that I have seen this before, almost -- it's very similar to the "hard attack" / "soft attack" handling in Panzer General. That is actually another way in which Wesnoth differs from Warlords. In Warlords the natural unit of control is the city, which provides a defensive bonus for units within it, generates income, and potentially builds new units. Units in Wesnoth, by contrast, exert a Panzer General-style zone of control, so you can actually use your forces to shape the battlefield, rather than simply attacking or defending with them.
The scenarios were designed with care. The tutorial was a bit disappointing, and somewhat disorganized, and the unit recruitment rules are not well explained at first (you quickly figure it out once you move your hero out into the fray and then discover that you can't hire new units if you're not in your keep.) The first real scenario made things much more interesting, however, and the difficulty ramp is appropriately challenging without being impossibly steep. I liked that scenarios support victory objectives other than "kill everyone else" -- the first two scenarios in the first campaign, for example, are "reach this spot on the map without these two VIPs dying" and "kill foozle OR survive for 12 turns", respectively.
The game is still in beta, and there are some areas that need work. For example, while different terrain types exist and are implemented, their effect on the game is opaque; the impact on movement is straightforward enough, but terrain clearly also affects combat, and the game doesn't tell you how. When you decide which attack to use, the game tells you your percent chance of success, so you end up having to intuit the effect of terrain by comparing those percentages across several battles. Personally, I'd rather see something like "+10% bonus due to being in the woods, +10% bonus for bravery" and not know my absolute chance of success than the inverse. Better yet, give me both. Also, while I've already said I like the responsiveness of the game overall, there are definitely GUI nits; right-clicking is supposed to bring up a context menu, but this doesn't work on a Mac running OS X. There are too many spelling and grammar errors; hopefully they'll be fixed soon (I know, I know, it's an open source project: I should volunteer).
The music is nice and understated. I enjoy being soothed while slaying the armies of darkness.
The Battle for Wesnoth is one of the most polished games I've seen released under the GPL thus far. I approve of its SDL/multi-platform nature, and the fact that it's clearly made by people who understand and love the genre. I look forward to seeing it develop more, and I hope the team stays together and works on other projects as well. Perhaps the answer to the unceasing but understandable pressure from commercial game publishers for sequels that focus on flash, and not simply on improving the user experience, is for small projects to demonstrate that fun does not require seed funding from Electronic Arts to exist.
Personal to Steve Fawkner: I can't promise you the sort of mad cash you get for making the top shelf at Best Buy, so I realize that I can't make a business case to you as well as your publisher can. But here's an offer: Your newest game, Warlords IV: Heroes of Etheria, retails for $19.99 (at least at Amazon), and you just released Warlords II Deluxe for the PocketPC. If you update Warlords II Deluxe to work on modern Win32 systems -- most of which I bet you had to do anyway to release the PocketPC edition -- I'll pay you $19.99 for that, making it the third time I will have bought the game. I bet there are a few other people out there who would do the same, although, honestly, probably not more than a few hundred. But, y'know. While we may not be a market, we are dedicated fans.
Don't run it by your marketing team. This can be our little secret.
Some of these links may be of use to those of you who like turn-based strategy games.
- The Battle for Wesnoth is available for download.
- The Warlords series is what everyone else in the consumer space imitated.
- Xconq wasn't the first of this type of game, but it's the oldest ancestor still being actively developed.
- Arguably, the Panzer General games were a better implementation of the same basic gameplay mechanic with a different mise en scene (memo to self: try Fantasy General some time) (additional memo to self: Now stop thinking of General John Shalikashvili in a silk teddy. Ow ow ow ow ow.).
- Heroes of Might and Magic is another series of games in the same class that got worse with each successive iteration after the second. Milk that cash cow, boys! Branding! It's all about branding!
March 09, 2004
Well, the Australian Grand Prix is over, and once again I have to face ridicule from people like Dushyanth, who ask:
"Why do you watch this "sport"? All they do is go round and round in circles, and in the end Schumacher wins."

As time goes by, I have fewer and fewer answers to that question. But instead of talking about Formula 1 as a sport, let's discuss it as a media event.
Supermodel and Troll
Passes were ignored. Cars would burst dynamically through a camera's field of view, about to make a spectacular overtake, and the camera would remain still, pointing at the bereft, empty stretch of road behind the pass. Occasionally, a camera would accidentally be about to catch a pass in progress, at which point the program director would cut to a shot of the pit crew, filing their nails.
FIA likes to blame the local team they contract with for the poor quality of the camerawork at a given race, but given that it is the FIA that is doing the contracting, why shouldn't they be held accountable?
Also new this year are different graphics to represent the running order and gaps between cars. They are as comprehensible as an interpretive dance version of Jude the Obscure. Perhaps this is fallout from last year's super-smooth "FIA blames Tag Heuer for FIA's mistake" PR disaster in Brazil, but these are truly the worst graphics to have appeared on my TV screen since the Coleco Telstar Arcade was released. Periodically, a vertical bar of cells containing the first three letters of each driver's last name, in an unreadable font, would appear. Then it would flicker rapidly, initials shuffling and permuting. Eventually, it would go away. At that point, everyone in the room turned and looked at each other and said "What the heck was that?"
Look at NASCAR. No, really, look at it closely. Even if you don't like oval racing, everything about the NASCAR presentation is carefully designed, well thought out, and professionally implemented. Massive amounts of statistics are presented in a short period of time in a way that even an unsophisticated viewer can interpret. In addition, the camera coverage at a NASCAR race blows away even the best F1 coverage (which would in my opinion probably be found at either of the German races). Even on their road courses, there is practically no event at a NASCAR race that happens off-camera or is not covered immediately after it happens. The coverage is consistent and superb.
Racing fans are easily spoiled. What was acceptable TV coverage in 1984 is completely inadequate for 2004. As long as F1 can't figure out a strategy to improve their production values, they will continue to lose ground, world-wide.
March 08, 2004
The answer is:
"Three Guys and a Router"
No one guessed it, but, well, I've got these prizes and, so, congratulations! The "Tried To Think About it Analytically" prize goes to Kristen. The "Obsessive-Compulsive" prize goes to Francisco. The "Reminded Me about how wonderful Rome is" prize goes to Simone. The "I wish I had thought of that as a name for this site instead of the mega-stupid 'Tea and Peterb'" prize goes to monty. Lucky winners, please email your mailing addresses to "firstname.lastname@example.org" and you will receive your fabulous prizes in the mail shortly.
March 07, 2004
Note: This article may contain spoilers for a number of popular and not-so-popular video games. I'll try to keep my discussion oblique, but you have been warned.
The study of puzzles is one that consumes some people. There are brilliant folks who spend most of their waking hours thinking about historically significant puzzles, trying to develop novel puzzles, and who have a deep, analytical sense of what makes a puzzle challenging and enjoyable.
I am, I confess, not one of those people. Neither, apparently, are many of the people designing video games for the consumer market.
I play a lot of games (video games and otherwise), and I enjoy a good puzzle when I come across one in that context, but personally I'm much more interested in the narrative of a game than in the mechanics of any specific puzzle therein, as this series of articles demonstrates. So I don't speak with authority as an expert on brilliant puzzle construction, but merely as a player who encounters too many tired puzzles in the games he plays.
Let's start with a touchy-feely definition of puzzle to distinguish it from a game. I'm happy with the first two definitions from the American Heritage Dictionary:
1. Something, such as a game, toy, or problem, that requires ingenuity and often persistence in solving or assembling. 2. Something that baffles or confuses.
In a video game context, I'm speaking of puzzles as parts of games. The typical puzzle implementation is self-contained ("Solve this puzzle to advance past this door"), but that is a matter of habit, not a requirement. A puzzle can have a scope that encompasses all of the non-narrative parts of a game, or can even be intimately intertwined with the narrative itself.
I've been enjoying BioWare's Star Wars fantasy game Knights of the Old Republic, lately. It's very well balanced, has a better story than the past three Star Wars movies, and most of the time is very satisfying. I think it's a good game, and everyone should play it. Go buy it, for the PC or Xbox, if you don't have it. I'm saying that up front because I plan on mercilessly criticizing Knights of the Old Republic ("KOTOR") for the next week: even though it is a truly great game, it also has some very evident flaws. It is precisely because I enjoyed the game so much that the flaws irritate me to the extent that they do.
The thing that I'm thinking about today is KOTOR's lackluster puzzles. Periodically, the player is confronted with puzzles -- generally very self-contained, narrow puzzles -- to solve. Some of these are optional; some are required to pass gates. All of them are uniformly terrible; it's like the designers ran across the street to the Barnes and Noble and picked up a copy of "Encyclopedia Brown's 100 Most Tired Brain Twisters." There's the "complete the pattern of numbers" IQ test puzzle, ("Complete this pattern: O T T F F S S..."). There's the "tribute to Gollum Riddle Game" (complete with riddles taken straight out of The Hobbit), where the player faces the overwhelmingly daunting task of being asked a riddle that is easy to begin with and then has to choose the right answer from a list. There's the "You have two buckets, one holds 5 gallons, and one holds 3..." puzzle.
In between these embarrassingly bad puzzles, I was actually having a really good time swinging a lightsaber around. The puzzles were such transparent artifices that they actually destroyed my immersion in the narrative of the game, and I had to spend a few minutes after each one re-acclimating to the game world to recover what the puzzle took away.
And then we got to the Towers of Hanoi puzzle. Which I now officially dub the "lava level" of puzzles: if Towers of Hanoi is in your game, you should just eliminate it and instead put a big sign in your environment that reads "I am completely out of ideas." Word to the wise: if Emacs ships with a module that solves the puzzle you're putting in your game, that's a good sign that the puzzle isn't actually any fun.
So am I saying that it's never legitimate to include a classic oldie like Towers of Hanoi in your game? Well, yes. I am.
What you should do is develop a completely new, novel type of puzzle that no one has ever seen before, that is perfectly integrated into your narrative, and that is neither too difficult nor too easy to solve. That's ambitious, and a lot of work, so I can already hear the commercial game designers muttering under their breath "That'll never happen." Let me therefore suggest some guidelines for puzzle development, selection and use (at least in games where narrative is paramount; obviously, if you're making a game whose raison d'etre is "to be a collection of classic puzzles", you will and should ignore these guidelines):
The puzzle has to have a reason for existing. I mean, here, a good reason, not the stupid one you just came up with. Is it a lock on a door? Does the owner of the mansion (Hi, Strahd!) sit there and play "jump the pegs and leave one in the center" every time he wants to open the lock?
The existence of the puzzle should move the narrative forward somehow, even if only by a little bit. If a puzzle is a lock on a gate, then at least make it a lock that the player will further the narrative by opening in a way other than "now he gets to go through that door." Make the solution rely on some other element of your narrative, or some knowledge the player's character should have picked up elsewhere.
The more integrated into your game world the puzzle is, the better. Myst is the gold standard in this regard; even though the puzzles themselves were mostly not very innovative ("Press these buttons to make this number reach this value.") there was an aura of believability about them that came from the care with which they were placed. A corollary to the integration guideline is "puzzles that you are in are better than puzzles that you are standing in front of." Myst accomplished some of this with trompe l'oeil -- even though sometimes you were just standing in front of "the puzzle" as a control panel, the game went to lengths to convince you that you were inside (for example) a power station. Likewise, even the puzzles in Myst that were just keys to gates often opened a gate somewhere else, and part of the fun was figuring out where (for example, following the power transmission lines from the generator room to the spaceship.) Myst had the advantage of being the first of its kind to achieve wide success. Guess what? You don't have that advantage. If you want us to like your use of puzzles in your game, you've got to do better than Myst.
This one is debatable, but I believe that where possible, the puzzle should be to figure out what the puzzle is. That is, if you have to explain the rules to the player up front, you've already lost. Ideally, the real challenge should be discovering how a puzzle works; once the player has that insight, actually executing the solution should take only moments. For example, most of the puzzles in the Selenitic Age in Myst had this nature; once you realized that you needed to pay attention to your ears rather than your eyes, you could make steady progress; even the maze is actually an anti-maze (you can brute-force your way through, but then you're doing it wrong). Contrast this with Towers of Hanoi, where a player who knows exactly how to solve it still has to spend 20 minutes moving rings around because the designers were lazy.
I suspect that at this point these puzzles appear in games like KOTOR because designers believe that players expect them. I don't believe that players do. As the success of the Final Fantasy games shows, players are content to put up with even the most stultifyingly horrific and boring gameplay if it is attached to a compelling narrative (preferably involving sullen teens and large hair). The inclusion of tired textbook puzzles doesn't just frustrate and bore the sophisticated player, it actually does violence to the quality of the story you are telling. The only people who aren't bothered by bad puzzles are the people who have never seen them before, and frankly, that's not a market worth capturing (especially since it's likely that they will enjoy your game just as much without the Riddle Game as with it.)
Tread carefully. We're counting on you. Bioware, please avoid making the same mistakes in Jade Empire.
Is it a deal?
Here are some games with puzzles that don't suck.
- Cliff Johnson's games The Fool's Errand and 3 in Three have obsessed me for years.
- Andrew Plotkin's game System's Twilight is very consciously a tribute to Johnson's games. And Hunter, in Darkness is the only work of interactive fiction with a maze that I have enjoyed.
- An emacs lisp library to solve Towers of Hanoi
- Bioware's upcoming game Jade Empire, which I hope will have all the good aspects of KOTOR and none of the bad ones.
- The first article in this series.
March 05, 2004
The old URL will continue to work just fine, but I've moved this site to the following address:
You might want to update your bookmarks and links.
As a special contest, I will ship a copy of (your choice) a CD of Charles Mingus' The Black Saint and the Sinner Lady or a paperback of David Wingrove's book The Broken Wheel (tip: pick the Mingus CD) to the first person who can guess, in the comments, what "tgr" stands for.
Your two hints are: each letter stands for a word (i.e., it's not "tiger") and the "t" is not for "tea". No fair entering if you've already heard what it is from me personally.
March 04, 2004
So, you made a half hour film. At DV resolutions that takes up about 5 gigabytes of storage. How are you going to back it up?
Well, yes, you can print to tape. I do that too. Print to tape, keep the tape forever, yes, that's a good idea and all but it seems so...low tech. Where's the excitement? Where's the danger?
Press to DVD, you say? Excellent idea -- just write a DVD-R with the raw DV data, and -- oh, wait. This file is bigger than we can fit on a single DVD. Well, that's OK, we can use Stuffit or some other tool (live dangerously -- use dd!) to split it into manageable chunks. Of course, if one of those DVDs suffers a media failure, you're screwed. And writing multiple copies of multiple DVDs can be such a drag.
Here's what I do, for when I'm feeling really paranoid:
Take your 5 gigabyte movie and split it into 100 or 200 megabyte chunks. Take the chunks and feed them into MacPAR deluxe or its Windows equivalent. Generate about 30 to 50 parity files; for each parity file you generate you'll be able to tolerate a media error in one of your data files. At your leisure, write the data files and parity files to a few DVDs. Most of the media failures I've encountered on DVD-Rs tend to affect individual files rather than the whole disk, so I think it's a reasonable strategy to just split the data and parity files over two discs. If and when you encounter media failures, you just use MacPAR to reconstruct the lost data from parity.
If you want to be super-paranoid, you can even sprinkle the parity files among any other DVD-Rs you're writing at the time (I find that I always have some headroom when writing data DVDs).
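The split-then-parity workflow can be sketched at the command line. This is a hedged illustration, not the MacPAR deluxe procedure itself: `movie.dv` is a stand-in file, the chunk size is scaled down for demonstration, and the parity step is shown as a comment using `par2`, a command-line PAR tool, which I'm assuming as the non-GUI equivalent.

```shell
# Stand-in for the 5 GB capture -- tiny here, just for illustration
dd if=/dev/zero of=movie.dv bs=1024 count=300 2>/dev/null

# Split into fixed-size chunks, the same step you'd perform before
# feeding them to MacPAR deluxe (100k here; 100m-200m for real footage)
split -b 100k movie.dv movie.dv.part.

# The parity step itself, with a command-line PAR tool (assumption:
# par2 or similar is installed; MacPAR deluxe does this via its GUI):
#   par2 create -r15 movie.par2 movie.dv.part.*

# Reassembly is just concatenation in order
cat movie.dv.part.* > movie.rebuilt.dv
cmp movie.dv movie.rebuilt.dv && echo "chunks reassemble cleanly"
```

The parity files are what buy you the redundancy; the split step just keeps each unit of loss small enough that the parity data can cover it.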
Voilà! You have a redundant array of inexpensive DVD-Rs. You are now cyber. Congratulations!
March 03, 2004
I've been toying with the idea of writing an article about pointers and why we use them. When I was first learning about them, I understood the hows just fine, but couldn't ever find a lucid explanation of why one should use pointers. With an air of superiority, however, psu instructed me that the cool kids of today with their crazy dances don't condescend to worry about anything as low level as mere pointers, and I'd merely be making myself look old and unfashionable.
But I haven't worried about looking unfashionable yet, so why start now?
What's a Pointer?
Lots of new programmers think they hate pointers. Don't hate pointers. Love pointers. Pointers are the difference between having someone's address written down in a notebook and carrying their house around with you wherever you go.
Let's limit the discussion to pointers in C and C++, since those are the implementations most of us are familiar with.
A pointer is a number (the object oriented guys hate it when I say that) that describes an address somewhere else in memory. We say that a pointer "points to" the address it contains. Strictly speaking, that definition also encompasses any variable on the left hand side of an assignment (when you say "a = 5;", you're actually saying "put the value '5' in the memory pointed to by 'a'"), but in common usage when we talk about pointers we're talking about an additional layer of indirection over that.
This is the point where I'm supposed to include a picture of a bunch of little cells with arrows pointing from some to others, and that is somehow supposed to make you magically understand pointers. The problem is, it won't: it'll just annoy you when you actually go to use them and there are no friendly arrows or boxes and all you have is a command prompt where you tried to run your program and it promptly responded Segmentation fault (core dumped).
In C, the two operators that are most commonly associated with pointers are "&" (the "address of" operator) and "*" (the "dereference" operator, which should not be confused with the multiplication operator that uses the same glyph). Taking the address of a variable gives you its location in memory. Dereferencing a pointer lets you read or change the contents of whatever is stored in the address that pointer points to. Dereferencing a NULL pointer, or a pointer that does not point to valid storage, will usually result in your program crashing, typically with what we call a segmentation fault. Instead of giving you diagrams, I'm going to give you code. The code will be confusing too, but hopefully you can actually run it and experiment with it yourself to make things work.
Here's a tiny function to try to demonstrate these operators in use, in an extremely non-useful manner (note: don't try to cut and paste these examples; they are line-wrapped and abbreviated to make them more readable on the web. If you want to try to compile these yourself, click the link after each example.)
When I run that, I get this:
The ugly hex numbers in this case are the addresses of memory locations within my process' stack. "a" happened to be at 0xbfbff76c. "a_ptr" was at 0xbfbff768. After we set a_ptr = &a, a_ptr (i.e., the memory on the stack at location 0xbfbff768) contained the value 0xbfbff76c. When we dereferenced a_ptr -- that would be "*a_ptr" -- our program fetched the value contained at 0xbfbff768, followed it to 0xbfbff76c, and told us what the contents of that memory were -- in this case, "5".
If you're confused, don't worry -- that's par for the course. I'm half-convinced that the reason people prefer to talk about objects instead of pointers is that the C syntax for pointers is so horrific. It really is awful. It's especially fun how the "*" glyph means one of several different things depending on whether it appears in a declaration or in a statement context. You're not hallucinating: that really does suck, and don't believe anyone who tells you otherwise. You will, in time, get used to it, but that doesn't mean we can't complain about it in the meantime.
You can also have pointers to pointers, and so on. I won't talk about those in this article.
There are basically three reasons to use pointers:
- Data structure flexibility: allocation of memory at run-time rather than compile-time, and the ability to construct more elaborate data structures (linked lists, trees, hash tables, etc).
- Letting a function modify memory pointed to by its arguments.
- Passing data by reference instead of by value. (This is really the general form of the previous statement.)
There are actually more reasons than these, but they're the three I consider to be most important.
I'm not going to talk too much about data structures here, because that's a subject that is deep enough to warrant its own article (or book, or high-level University class). We can just summarize by saying that there are certain data structures that would be so difficult or expensive to implement without pointers that it is reasonable to describe them as impossible without pointers. It is no exaggeration to say that deciding what data structures to use is probably the single most significant choice you will make on any given program. Likewise, object oriented programming, under the hood, is all about doing clever things, systematically, with pointers (when we talk of a language with OO support, we mean "the language does the clever things with the pointers for me, so that I don't have to go on a killing spree when I put an ampersand in the wrong place.")
Let's talk about passing data by reference instead of by value. There are two implications of passing a pointer rather than the value itself around. The first is a data access issue: if you pass a function a pointer to a variable, it will be able to change the value in that variable, whereas if you pass the value, it can't. Here's an example.
We pass the value of counter to the "val_incr()" function, and pass the pointer to counter ("&counter") to "ref_incr()". Let's run our little test:
Note that after val_incr, when we returned to the scope of main(), counter was the same as what we started with -- 9. However, ref_incr was able to actually change the value stored in main's "counter" variable. The difference arises because C passes arguments by value: the "counter" in val_incr() is a copy that lives at a different location in memory than the "counter" in main(). The "counter" parameter in ref_incr() is a separate variable too, but since it is a pointer, it can hold the address of main()'s "counter", and we can then dereference it and make changes.
Some of you may be scratching your heads right now and saying "But why not just declare 'counter' as a global variable, and then both val_incr() and ref_incr() would be able to change it without any gymnastics, and indeed without even passing it as a parameter?" That's true, and it's a bigger subject than I'm prepared to tackle right now, except to say that we gain significant advantages in maintainability, portability (in the sense of being able to reuse functions we write for one program in a later program) and provability by not using global variables. As a rule of thumb: don't use globals. That's definitely a rule you'll find good reasons to break, but it's to your advantage to learn discipline early.
The other reason we like pointers is for performance. When you pass by value (or do struct assignments) you are doing memory copies: if the variable you're passing is a large structure, then you're spending a lot of your time copying data. Let's look at an example.
Just for fun, I decided to run some numbers to see how expensive struct copies are versus simple pointer copies on a typical system (where "typical" is tautologically defined as "whichever machine I happen to be testing on at the time" -- I am not trying to do super-scientific measurements here, I'm just having fun.) I'm not normally a performance geek; as a developer, it's of much greater concern to me that a given piece of code run correctly, and that it be maintainable, than that it run fast. In my experience, it's usually easier to make readable, correct code faster than it is to make unreadable and buggy fast code correct. But, y'know. Sometimes you just want to play.
So here's some lame synthetic source code:
That compiles down to this:
Doesn't look that different, does it? If we add some instrumentation to count the number of CPU ticks, though, we can see how long each of those segments take. You can download the source of the instrumented version of this program to try yourself here. To compile it on most unix systems, try: cc -o ptrtest ptrtest.c. To generate and look at the assembly code (at least with gcc), do gcc -S ptrtest.c. The output will be in a file named ptrtest.s (and will look a little different from what you see above, since that's the instrumented version).
The number of cycles to do the struct copy increases linearly with the size of the structure being copied, as you'd expect. The pointer copy always takes about the same number of cycles.
Of course, this isn't a truly fair test. Maybe those egregiously uninitialized pointers would point to stack-allocated memory in a real program, but it's more likely that the memory would have to be allocated from, and returned to, the heap. What does it look like in that case? Let's change that first test to allocate the memory, do the pointer copy, and then free it. Suddenly things look a lot different:
That's with the standard FreeBSD malloc implementation; results might vary depending on what malloc you're using. Now you're in real trouble! 30,000 more ticks! Boy, is your boss going to be angry that you went over your tick budget. Those ticks were a precious resource, and you went and squandered them! So, naturally, you go on to implement a chunk allocator which preallocates all the memory your application is likely to use so you can quickly grab items off of a freelist, and then the next thing you know you're drunk, drinking sterno, unwashed, living under a bridge and whimpering "Garbage collection. Why can't I be programming in a language that has automatic garbage collection?" It's a hard knock life, thinking about performance, kid.
Really, though, this last case is worst-case for pointers. It's pretty rare that you allocate memory, copy a pointer once, and then free it right away; in a program of any size it's probable that you'll be doing many copies or assignments over the life of some variable. The bottom line is that (large) programs that don't use pointers will generally perform worse than programs that use them correctly.
I hope this article has proven useful. If you find any inaccuracies, please drop me a line, or comment below.
You can begin your slow descent down to skid row by following these links:
March 02, 2004
In a past life, over at Tea Leaves, I've already discussed my favorite purveyor of fine tea, Upton Tea Imports. I'm not their only fan, either. Another season has come and gone, and with it another care package from Upton. Here are my capsule reviews.
The biggest winner, to my surprise, was the Organic Yunnan Select Dao Ming. I ordered this as an experiment, with some trepidation. "Organic" is a word all too often associated with "stale and tasteless," particularly with respect to comestibles like tea where freshness, among the vast majority of consumers, is (wrongly) not a concern. The Yunnan Dao Ming is very drinky, with a medium body and the expected spicy note. It is not a transformative experience, by any means, but then at $7/125 grams it is also less than half the price of China Yunnan Superior, which strikes me as an entirely reasonable tradeoff.
I am always open to recommendations -- what tea are you drinking?
March 01, 2004
This is the second in a series of articles investigating the question "What makes videogames fun?" The first article in the series can be found here.
In my last article I talked about the importance of location and how the use of a model which points back to the real world can be compelling in and of itself. This is only half the story, though. Not every game can take place in Times Square, nor should every game. It's (arguably) unfair to make a space opera or fantasy story take place in Dayton, Ohio. The question then is: if you are describing a space that doesn't signify a real-world space, how do you make the player care? How do you increase the power of the virtual space you've created so that, when she is done playing your game, the player thinks of it, on some level, as a "real" place? In this article, I'm going to discuss three techniques: familiarity and reuse, signifiers such as maps and text, and geometric and logical consistency.
Familiarity and Reuse
The least trustworthy but easiest technique in our arsenal is reuse: keep the player in a certain portion of your virtual space until they are acclimated to it and begin to identify with it. It is the easiest because it requires the least amount of work to implement; it is the least trustworthy because doing this may not fit the needs of a game's narrative, and if done poorly may cause the player to rebel against monotonous repetition.
There are two variants of reuse commonly deployed: "hub and spoke," which is common in platform games, and "safe house," which is common in CRPGs. A given location may be both a "hub" and a "safe house" at the same time. The difference between the two is largely one of emotional perception: a hub is a place which one transits to reach new, more interesting places. A safe house is a place one goes to be protected from the ravages of the world. Grandma's house in Zelda: The Wind Waker is a safe house. The hideout in Grand Theft Auto 3 is a safe house, and the entire first island of Liberty City is made familiar by preventing the player from leaving it for a time (that also serves as a gate, but that's the subject of a later article). The witch's castle in Banjo Kazooie is sprinkled with hub areas that the player must cross and re-cross. The cities in Diablo II are both hubs and safe houses.
A safe house -- or a series of them, if need be -- can usually be justified in just about any narrative without breaking mimesis. Players will naturally familiarize themselves with the area around safe houses unless you go out of your way to make them extra-boring. Hubs are trickier. They're frequently used as a "magician's choice" by lazy game designers who aren't very good magicians. Consequently, a carelessly designed hub will be seen as a mere distraction and as a connection to the next "good part." When players can see the wires the result is usually boredom. If the hub serves no narrative purpose, get rid of it; if you want to force the player to go someplace else next, then just really force them ("Poof! You're in Emerald City now"), and get it over with. Forcing the player to trudge across your virtual city for no narrative reason will quickly earn you their contempt.
If you provide good reasons for the player to trudge across your virtual city, it will not be a trudge. More on that below.
The player is part of your narrative. For sufficiently advanced games, you may be able to use signifiers to help build the player's map of your space before having to actually depict that space. By signifiers we mean text, maps, roadsigns, graffiti, conversations with other characters, or any technique that allows you to suggest the existence of part of your virtual world to a player before they actually get there.
At its most basic, this can be used to prime the player not only with a knowledge of geography, but of her objectives, as well. "Oh!" said the Princess, "recently the forest to the east has been overrun with wolves!" Right, thinks the player. East, forest, wolves. Got it. With one line you've advanced your narrative and filled in the topology of your insipid little game world somewhat.
Signifiers do not have to be so straightforward, though. They are a mechanism for managing the player's expectations, which means they can be used to subvert expectations as well. The Silent Hill games do this magnificently. At various points in the game, players can obtain maps of the areas they are in. The maps are rough sketches, plans, nothing more -- gas station maps, fire escape route diagrams from apartment buildings, and the like. The player might read the map and say "Oh, I can travel west on King Avenue here and reach the park." The protagonist begins heading west and encounters a water main break -- the road is closed. When he discovers this, the protagonist scribbles a red line on the map across King Avenue -- can't go that way! As clues are collected or obstacles encountered, the player's map is (automatically) updated with notes, circles, jagged lines. The player learns quickly that the map is untrustworthy, but it's all he's got. For me, at least, the zombies and monsters in Silent Hill are not what keep me going -- what keeps me going is the maddening knowledge that there is still an area on that damn map that I can't get to.
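If you wanted to implement this kind of annotated map, one way to structure it is as a static base image plus a growing list of scribbles the game appends as the player discovers things. The following is a minimal sketch, not taken from any real engine; all class and method names here are my own invention.

```python
# A toy Silent Hill-style player map: a rough base map that the game
# annotates automatically as the player hits obstacles or finds clues.
class PlayerMap:
    def __init__(self, base_map_name):
        self.base_map = base_map_name      # the "gas station map" artwork
        self.annotations = []              # what the protagonist scribbles

    def annotate(self, location, kind, note):
        # kind might be "blocked", "clue", or "circle" -- in the real
        # game this would draw a red line or a scrawl over the base art.
        self.annotations.append((location, kind, note))

    def render(self):
        lines = ["[map: %s]" % self.base_map]
        for location, kind, note in self.annotations:
            lines.append("  %s at %s: %s" % (kind.upper(), location, note))
        return "\n".join(lines)

town = PlayerMap("Silent Hill street map")
town.annotate("King Avenue, west side", "blocked", "water main break")
town.annotate("Midwich Elementary", "circle", "that symbol again?")
print(town.render())
```

The point of the structure is that the annotations record what the player has *learned*, not what the world actually contains -- the base map itself is never corrected, only scribbled on.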
I'm not a big believer in overlay or radar maps that give the player a precise view of where they are in the game world. As a rule, they distract the player from the game world you've created. If the player is distracted and not paying attention to the virtual space you've constructed, then they're not internalizing it, and if they're not internalizing it, it's not as meaningful. To reduce this to a simple rule for the designer: never show the player the same map that the game uses internally. Although it seems like a contradiction, disagreement between a signifier and the signified will tend to increase the player's concentration on the environment, as they try to work out why the contradiction happened. Obviously, how you present that contradiction will have great narrative impact -- if a trusted ally simply lies to the player for no good reason, that has one set of narrative consequences, whereas if the ally is merely somewhat mistaken, that has another. And if a signifier, be it text, map, or character, is wrong all the time, the player will simply discard or ignore it.
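The rule above can be made concrete with a toy example: keep the authoritative world map private, and derive the player-facing map from it through a lossy, in-fiction filter. Everything below is hypothetical illustration, not any particular engine's API.

```python
# Internal map: the game's ground truth about which routes are passable.
internal_map = {
    ("town", "forest"): "open",
    ("town", "park"): "collapsed bridge",   # impassable in reality
}

def player_survey(internal):
    # The signifier the player sees: an old survey drawn before the
    # bridge collapsed, so it still marks every route as passable.
    # The disagreement with the world is deliberate -- it is something
    # for the player to discover, not a bug.
    return {route: "open" for route in internal}

survey = player_survey(internal_map)
assert survey[("town", "park")] == "open"          # what the map claims
assert internal_map[("town", "park")] != "open"    # what the world enforces
```

The design point is in the separation: because the player's map is a derived artifact with its own in-fiction history, it can be outdated, partial, or dishonest in ways the internal map never is.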
In Sony's magnificent Ico, the protagonist is a young boy trapped in an ancient, decrepit castle. The edifice itself is more than the environment the protagonist moves in. The edifice is his true opponent. The castle you are trying to escape from is the nemesis. The monsters you encounter, the Witch-Queen who imprisons you, are not half so impressive and oppressive as the place you are trapped in. The monsters are shadowy figures, easily dispatched with the thwap of a stick. The castle is bigger than you. The castle is older than you. The castle has seen thousands of boys like you come, and none have ever left. Hit the walls with your stick. They will not fall.
One thing that makes this work is the internal consistency of the game space, which is intimately tied to its visual style. It's not just that the castle is a consistent, mappable space which obeys the laws of geometry and physics -- it's that this is shown to you, deliberately, time and time again. You leave a room and step out onto a balcony. In the distance is a hexagonal tower. You struggle through three or four more rooms and find another balcony; that tower is closer now. Forty-five minutes later, you are at the base of the tower. Climb it, look back, and you can see the balconies you were on earlier. That is where I was. This is where I am. Over there is where I'm going.
The intro to the classic FPS Half Life is a great example of this. It is positively glacially paced. Nothing happens; you're in a tram car which trundles through the Black Mesa complex, carrying you into your lab at the heart of it. What it accomplishes, though, is that it gives us the sense that this is a fully realized place, with its own geography. On our way out through the shattered complex we will pass some of the places we see on the way in. This self-reinforcement enhances the experience. (It's also my theory that this is one reason why the "Xen" levels near the finale of Half Life are so amazingly weak -- you go from a consistent, thoroughly realized world into a fantastic world. A fantastic world that looks like it could have been ripped straight out of Super Mario World, complete with platforms that move for no discernible reason other than to let the player reach the boss monster. A very disappointing ending to one of the best games of the decade.)
This tradition is as old as computer games, with even the original Colossal Cave text adventure presenting the player with an internally consistent (although in places confusing) plan, complete with foreshadowing of places yet to be visited (think of its famous "mirror room"). A consistent, believable model of physical space is a prerequisite for the location itself having a meaningful impact on a game. There's no guarantee that a consistent, believable model will be interesting, but a model into which no thought has gone will be nothing more than window dressing. This is, obviously, one of those "know the rules to break them" situations -- a dream world might have different rules than the real one, likewise the hallucinogenic flightscapes of Rez. But if you simply throw together a virtual space without a vision of where the player belongs in it, players will be able to detect your laziness on an almost subconscious level.
I hope this article has been helpful in suggesting some techniques to make your game world more compelling and, hopefully, more fun. If you find any inaccuracies in this article, or just wish to comment, feel free. If you enjoyed this article, you might also find the following URLs of interest: