Thursday, November 13, 2014

Game Outcomes Project Methodology

[This blog post is intended to accompany a series of articles that will appear on Gamasutra beginning in mid-December 2014.]

In October and early November of 2014, the Game Outcomes Project team ran a survey targeting game developers asking roughly 120 questions about each respondent's most recent team-based game development effort.  Questions centered on development culture, teamwork, project management, and the project's outcome.

We received 771 responses, of which 302 were completed, and 273 referred to projects that were neither cancelled nor abandoned.

We chose to exclude responses related to cancelled and abandoned projects for reasons we explain at the end of this post.  This blog post explains our survey design methodology and our analytical methodology for the remaining 273 responses, for those who wish to delve into the statistical and mathematical details.

Survey Design Approach


Our survey was designed as follows:
  • Page 1 was merely a qualifier page.  In order to ensure that we received responses only from our target demographic, we asked respondents to complete the survey only for projects with known outcomes within the last 3 years which had a team size of 3 or more and on which they had served in a development role.
  • Page 2 contained background questions such as team size, production methodology, financial incentives offered to the team, and project lifetime.
  • Pages 3 and 4 contained a little over 100 questions around teamwork, culture, and various other factors relating to game development.
  • Page 5 asked only four questions, in which respondents rated the outcome of the project on a 6- or 7-point scale along four different dimensions: project delays or cancellation, return on investment, aggregate review scores (MetaCritic or GameRankings), and the team's internal satisfaction with whether the project had achieved its goals.
In designing the survey, we took several steps to help reduce the risk of bias.  Although some survey respondents have complained about these aspects of the survey, they were entirely intentional:

  • We occasionally asked important questions twice, once with a positive frame (“I rarely had to work overtime”) and once with a negative frame (“I had to work a lot of overtime”). We did this because we were concerned that if we asked all questions with the same frame, respondents who felt positively or negatively about a project would be tempted to simply scroll through the list and click somewhere along the “agree” or “disagree” part of the spectrum; intentionally disrupting this by occasionally shifting the frame of a question forces the respondent to consider each one individually.
  • We sometimes asked slightly different versions of the same question in an attempt to tease out the cause of some phenomenon. For example, we had five questions related to “crunch” and overtime, and one of those questions asked about voluntary overtime while another asked about mandatory, imposed overtime. We felt we could use these subtle distinctions to tease out deeper causal factors (spoiler: we could, and we did).
  • We deliberately removed the section names and enabled randomization of the question ordering via SurveyMonkey. Although this led to a large wall of questions that was off-putting to some respondents, we felt that announcing the section names openly might tip our hand as to what we were looking for in each section, and that allowing the questions in each section to remain clustered together would likely have a similar effect and allow respondents to simply use the same answer for every question in that group. By randomizing the ordering of all the questions on a given page, we greatly reduced the likelihood of these sorts of phenomena.
  • We added a qualification page at the beginning of the survey asking respondents to continue only if they had worked on a team project in a development role within the last 3 years that had some sort of known outcome. We deliberately wanted to avoid continuously-developed projects as the outcomes of these types of efforts are much more difficult to quantify.

The Aggregate Game Outcome Score


Lacking any way to objectively define “success” or "failure," we decided that the best way to quantify the outcome was through the lenses of four different kinds of outcomes – critical reception, return on investment (ROI), delays or cancellation, and the team’s own perception of its success or failure – and later combine them into a single “outcome” score during the post-survey analysis phase.  This led to four questions, each of which allowed answers along a 6- or 7-point scale.
  • Delays: “For the game's primary target platform, was the project ever delayed from its original release date, or was it cancelled?”
  • ROI: “To the best of your knowledge, what was the game's financial return on investment (ROI)? In other words, what kind of profit or loss did the company developing the game take as a result of publication?”
  • MetaCritic: “To the best of your knowledge, was the game a critical success?”
  • Internal: “Finally, did the game meet its internal goals? In other words, was the team happy with the game they created, and was it at least as good as the game you were trying to make?”

We based the decision to combine the four outcome values into a single score on two factors.  First, since all four questions relate to different aspects of project outcomes, it seems intuitively obvious that they are related; moreover, each of these four aspects of a project's outcome was determined after the end of development, and each was caused by the development cycle itself (or by other factors, such as consumer tastes and marketing spend) rather than by the other outcomes.

Secondly, all four outcomes are strongly positively correlated to one another, as shown in the scatterplots below.

Figure 1. Animated GIF of cross-correlations between all four game project outcome factors (on a 4-second delay).

Note that this image is an animated GIF with a 4-second delay; if you don't see it changing, wait a bit longer for it to finish loading.  All data has been randomly "jittered" slightly in these charts to make coincident data points more visible.  Note also that all four of these outcome dimensions have been "normalized" to a 0-1 scale, with "1" being the best possible outcome (the game shipped on time, critics loved it, it made huge piles of cash, or the team was thrilled with the fruits of their own efforts) and lower values quantized equally along the 0-1 scale depending on the number of gradations in the question.

Each of these correlations has a p-value (statistical significance) under 0.05, meaning that if there were truly no relationship between the two variables, there would be less than a 5% chance of observing a correlation this strong.  This makes it very clear that the four aspects of game project outcomes are interrelated.
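For readers who want to run this kind of check on their own data, here is a minimal sketch of testing one such pairwise correlation with SciPy; the data values below are invented for illustration, not our survey responses.

```python
# Sketch: testing whether two outcome dimensions are positively
# correlated. The two lists below are invented illustration data.
from scipy.stats import pearsonr

# Hypothetical normalized (0-1) outcome scores for ten projects.
roi        = [0.2, 0.4, 0.5, 0.6, 0.9, 0.3, 0.7, 0.8, 1.0, 0.1]
metacritic = [0.3, 0.5, 0.4, 0.7, 0.8, 0.2, 0.6, 0.9, 0.9, 0.2]

r, p_value = pearsonr(roi, metacritic)
print(f"r = {r:.2f}, p = {p_value:.4f}")
assert r > 0 and p_value < 0.05  # significant positive correlation
```

The same test, run for each of the six pairs of outcome factors, is what the scatterplots above summarize visually.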

We eventually settled on a simple non-weighted sum for the aggregate outcome score.  Although we were tempted to give each outcome value a coefficient, there is no objective basis for determining the coefficients.

We assigned the best possible outcome for each factor (terrific reviews, makes lots of money) a value of 1.0, and we gave a worse outcome a correspondingly lower score (closer to 0) along a linear scale depending on the number of gradations in the questions asked (some of the outcome questions were asked on a 6-point scale, others on a 7-point scale).  We then added them together.

        Score = (Delays) + (ROI) + (MetaCritic) + (Internal)

We also experimented with exponents for each factor, which we tuned in Solver to try to maximize the cross-correlations between the outcome factors, and with multiplying the factors as probability values instead of simply adding them.  However, we found that simply adding the four outcome factors, in addition to being simplest, achieved the highest correlation, and we could not justify the additional complexity of any other approach.
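As a concrete sketch of the normalization and unweighted sum, here is the calculation in Python; the particular scale sizes assigned to each question below are our assumptions for illustration (the real survey mixed 6- and 7-point questions).

```python
# Illustrative sketch of the aggregate outcome score. Scale sizes per
# question are assumptions, not the survey's exact configuration.
def normalize(answer_index, num_gradations):
    """Map an answer on a scale of 0..num_gradations-1 onto 0..1."""
    return answer_index / (num_gradations - 1)

def outcome_score(delays, roi, metacritic, internal):
    # Hypothetical scale sizes: some questions used 6 points, others 7.
    return (normalize(delays, 6) + normalize(roi, 7)
            + normalize(metacritic, 7) + normalize(internal, 6))

# The best possible outcome on every dimension scores 4.0.
print(outcome_score(5, 6, 6, 5))  # 4.0
```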

Missing Data Handling


Roughly 5% of the data in our survey was missing, as we allowed respondents to leave a small number of questions blank on pages 3-5 of the survey.

For the majority of the questions, we simply averaged the non-blank responses to each question using the AVERAGEIF() function in Excel, and then used this average to fill in the missing data for that question.

For the four outcome questions, given the critical nature of these values, we felt a more exhaustive approach was required.  Here, we used the mean of two values: the average value of all non-empty responses to that question, and the average of all other non-empty outcome values for that response.
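The two imputation rules can be sketched as follows (a Python illustration of the Excel process; the layout, helper names, and data are ours, with None marking a blank answer):

```python
# Sketch of the two missing-data rules used in the survey analysis.
def column_mean(rows, col):
    vals = [r[col] for r in rows if r[col] is not None]
    return sum(vals) / len(vals)

def impute_regular(rows, row_idx, col):
    # Ordinary questions: fill with the average of all non-blank
    # answers to that question (what AVERAGEIF() computed for us).
    return column_mean(rows, col)

def impute_outcome(rows, row_idx, col, outcome_cols):
    # Outcome questions: the mean of (a) the question's average over
    # all non-blank responses and (b) the average of the respondent's
    # other non-blank outcome values.
    others = [rows[row_idx][c] for c in outcome_cols
              if c != col and rows[row_idx][c] is not None]
    return (column_mean(rows, col) + sum(others) / len(others)) / 2

# Two hypothetical responses to the four outcome questions.
rows = [[1.0, 0.5, None, 0.5],
        [0.0, 1.0, 1.0, 1.0]]
print(impute_outcome(rows, 0, 2, [0, 1, 2, 3]))  # (1.0 + 2/3) / 2
```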

Cancelled and Abandoned Projects


We decided to ignore responses that turned out to be for cancelled or abandoned projects. This was a tough decision, but the fundamental problem is that we have no good way to assign an outcome value to a game project that was cancelled or abandoned before completion – the “outcome” has to include its critical reception and ROI for a real direct comparison, and since it was cancelled before completion, these will never be known.

Initially, we felt a 0 was a proper score for a cancelled project. This makes intuitive sense, as surely a cancelled project is the worst possible outcome and has no value, right?

But this isn’t necessarily the case. There’s a world of difference between a team abandoning what could have been a great game 3 months into development because they decided that working on some other, even greater game project would be a better use of their time, and a team slogging through a multi-year death march, impairing their health with extended overtime, and ending up with divorces, only to see their game cancelled at the last moment. Those two games should score very differently in terms of their “outcome,” and for cancelled or abandoned projects, that data does not exist.

There’s also the simple fact that many times, cancellations and abandonment are caused by factors outside the team’s control. Perhaps a key employee ran into health issues, or perhaps despite a team being terrific and working on a very promising game, the parent company ran out of money and had to close up shop. These kinds of stories happen all the time, and of course there would be no way for our survey to detect these things.

That’s not to say that cancellation and abandonment are entirely random. However, we found that the correlations with cancellation were generally far lower, and only a handful of variables correlated reasonably well with this outcome. We hope to discuss the cancellation issue further in a future article, but for main part of our series, we focus solely on non-cancelled game projects.


Predictive Modeling


We looked at a number of different ways of building predictive models that would use all the inputs to predict the aggregate outcome score.  We imported the data into Weka and tried a number of different predictive models; the values below are the correlations each model achieved, on both the full data set and under 10-fold cross-validation:

  • Linear Regression (full): 0.82
  • Linear Regression (10-fold): 0.51
  • M5 Prime (full): 0.89
  • M5 Prime (10-fold): 0.59
  • Additive Regression, 20 learners (full): 0.81
  • Additive Regression, 20 learners (10-fold): 0.62
We also built two linear regression models in Excel, limiting ourselves to inputs which exhibited statistically significant correlations (p-value < 0.05) with the aggregate outcome score (this excluded only roughly 30 of the ~120 survey questions).  The full linear regression achieved a correlation of 0.82, identical to the Weka linear regression above.

However, to avoid overfitting, we later constrained the linear regression so that the regression coefficients had to have the same signs as the raw correlations of their underlying inputs.  This gave us a correlation of 0.73 -- still an excellent correlation.

We also ran cross-validation with separate parts of the data set (excluding 20 data points at a time, roughly 10% of the data set) against this linear regression, with identical results.
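We imposed the sign constraints through Solver in Excel; an equivalent sketch in Python (on synthetic data, not our survey inputs) uses SciPy's bounded least squares:

```python
# Sketch of a sign-constrained linear regression: each coefficient is
# bounded to share the sign of its input's raw correlation with the
# outcome. The data here is synthetic illustration data.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))  # three stand-in survey inputs
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=100)

# Sign of each input's raw correlation with the outcome.
signs = np.sign([np.corrcoef(X[:, j], y)[0, 1] for j in range(3)])

# Bounds forcing each regression coefficient to match that sign.
lb = np.where(signs > 0, 0.0, -np.inf)
ub = np.where(signs > 0, np.inf, 0.0)
fit = lsq_linear(X, y, bounds=(lb, ub))
print(fit.x)  # constrained coefficients
```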

We ultimately used these linear regression coefficients to help us identify the most useful and relevant independently predictive variables in the Self-Reflection Tool and to construct the linear regression model provided in that tool.

Data Verification


We asked respondents to subjectively grade nearly everything in our survey.  Therefore, we cannot independently verify the accuracy of the responses, as we have not worked on the game development teams the respondents report on, and in most cases, we don't even know what specific projects they relate to and have no way to find out.

However, we did ask an optional question at the end regarding the name of the project in question.  Roughly 10% of our respondents answered this question.  This allowed us to do two things:

  • For those that did answer the question, we looked at MetaCritic scores of those game projects, and were able to verify that the question regarding MetaCritic scores had indeed been answered accurately.
  • We had hoped that there would be several cases where different people on the same team reported on their project.  In the end, there was only one case in our data where two respondents reported on the same project AND supplied the name of the project in this optional answer field.  We compared these two responses and found them quite similar, with the answers to most questions differing by 1-2 gradations at most.
Therefore, although we have no way to independently verify the data, those two avenues of investigation gave us no reason to doubt its veracity.

Additionally, although some of our friends at Gamasutra were worried about the survey potentially being overrun or trolled by those who use the "#GamerGate" hashtag on Twitter (as a previous Gamasutra developer survey had allegedly been corrupted by this loose affiliation of individuals apparently angry at Gamasutra itself), we heard no rumblings of any ill will toward our survey on social media, and we felt it was unlikely that anyone would complete an entire 120-question survey just to corrupt the results.  We also felt that anyone attempting that kind of "trolling" would likely reveal themselves with snarky comments at the end of the survey, and we saw no comments whatsoever that appeared snarky, disingenuous, sarcastic, or otherwise likely to have come from anyone other than genuine game developers.  In short, we found no evidence that any such corruption occurred.

Future Directions


We regard the first iteration of the Game Outcomes Project as a surprisingly successful experiment, but it has also given us an excellent guide for refining our questions in the future.

In future versions of the Game Outcomes Project, we expect to be able to narrow our list of over 100 questions down to 50 or so, and add a number of additional questions that we simply did not have room to ask in this version of the survey:
  • What was the working environment like?  Was it mostly comprised of cubicles, 1-person offices, multi-person offices, or open working space?  Did the working space facilitate team communication, and did it foster enough privacy when developers needed to hunker down, focus, and get work done?  Significant research indicates that cubicles, often championed in the industry for fostering communication, actually hinder productivity.
  • Was a significant amount of work on this project thrown away due to re-work?
  • Were the team leads particularly heavy in one discipline (art, engineering, or design), or was there a mix, or was the leadership comprised mostly of producers or managers with little or no discipline-specific experience?
  • How did the team hire new members?  Was there a formal, standardized testing process?  Did the team do any level of behavioral interviewing techniques?
  • How long had the team worked together?  A significant amount of research shows that teams working together for the first time are far more mistake-prone, while teams that have worked together longer are far more productive (Hackman).
  • To what extent was the team working on an innovative design as opposed to a clone or a direct sequel?
  • To what extent was the studio’s structure flat vs hierarchical?
  • Did the game’s production have a discrete preproduction phase, and if so, how good a job did the team do of ironing out the risks during preproduction?
  • Did most team members exhibit professional humility, or were there many know-it-alls who always tried to prove themselves smarter than everyone else?

We also initially included questions about the development team’s gender and ethnic composition and geographic location, but we had to drop these due to space constraints and concerns about spurious correlations; we may bring them back in future versions of the survey.

A number of additional questions were directly or indirectly suggested by survey respondents themselves in our optional text entry boxes at the end of the survey:

  • Was the team's leadership mostly composed of individuals from an art, programming, design, production, or biz dev background, or a combination?
  • What percentage of the design decisions on the game was made by those in the trenches – the artists, programmers, or designers?
  • What percentage of the design decisions were made by people in leadership roles with no formal design authority, such as producers?
  • Did the team generally trust their leadership?
  • If the team disagreed with a decision made by the project’s leadership, was it able to get the decision changed?
  • What was the process for how new team members were trained or incorporated into the team? 
  • To what extent did the team use internally-developed vs. externally-developed tools?
  • How would developers judge their quality of life?
  • Did team members have a good sense of when a feature is complete?
  • Did the team spend a significant amount of time and resources creating demos for upper management or the marketing department?
  • Was the team happy, and did the project’s leadership work to help facilitate happiness?  (There is significant research showing happiness causes higher productivity, not the other way around; Scott Crabtree of Happy Brain Science can tell you more about this)

Monday, September 29, 2014

Call for Participation: The Game Outcomes Project needs your help!



Hi everybody,

The blog has been on hold for a while as I've been working on something much more important: starting up a new studio in Austin, TX, which I look forward to announcing in just a few weeks!

In the meantime, I've been participating in an exciting new project with a team of five producers from the IGDA Production SIG and an advisor from the Wharton School of Business.  We've built a unique study which is, as far as I can tell, the first and most comprehensive study of its kind in the game industry.

If you’re a game developer with at least one team project under your belt, please help us by taking a brief survey about your most recent team project in the industry.

Here's the official description:


The Game Outcomes Project
is a first-of-its-kind experiment to statistically evaluate game development efforts across hundreds of game teams and generate insights as to how teamwork, culture, leadership, and production practices contribute to the success or failure of game development projects.

Currently, there’s little or no actual data on how different factors such as crunch/overtime, production methodologies, or team size impact the success or failure of projects in the game industry.

We hope to change that!

With your help, we hope to gain insights as to what factors produce the greatest risks in game development and what best practices development teams can follow to better manage the risks they face.

You can access the survey here --- >> https://www.surveymonkey.com/s/FDGPLL5 <<


The Game Outcomes Project is not affiliated with the annual IGDA Developer Satisfaction Survey or any other survey or organization.

Please help spread the word!  Share the survey with your colleagues, and follow us on Twitter at @GameOutcomes.

Sunday, December 1, 2013

Decision Modeling and Optimization in Game Design, Part 9: Modular Level Design

This article is the ninth in a continuing series on the use of decision modeling and optimization techniques for game design.  The full list of articles includes:


The spreadsheet for this article can be downloaded here: link

We've had to put off this installment of the series for the last few months, as we've been very busy starting up a brand new game studio in Austin.

We're looking forward to announcing more about that in 2014, but in the meantime, we took advantage of Thanksgiving break to finally write up the long-awaited Part 9 of this series.  We apologize for the delay and hope to get Part 10 completed by January.

The Story of Pirate Planet

It was a warm day in Austin in April of 2007.  As they waited for feedback, the level design team for Pirate Planet on Metroid Prime 3: Corruption grew nervous.  They had been working on building Pirate Planet, one of the game's major worlds, for over five months, but for one reason or another -- due to the inevitable delays and re-prioritizations that always occur in game development -- they had never had a chance to show it to Kensuke Tanabe, their supervisor from Nintendo's Software Planning and Development group.

Now they were only a scant few months from ship, with precious little time for changes, and Tanabe-san -- a man whose frequent and major design changes were almost as legendary as Miyamoto's -- did not seem pleased.

In fact, he seemed rather disappointed.

Instead of offering his usual feedback, he simply said, "Come back tomorrow.  I'll have feedback for you then."

Tanabe-san took a build of the game into his office and shut the door.

When the next day came, the team was stunned.  Tanabe's plan was more than an overhaul -- it was a dramatic redesign.  Every single room was to be connected to a completely different room than before.  The entire world of Pirate Planet was to be completely rearranged, and even many basics such as item pickups were to be moved.

The level design team went through the usual stages that Retro developers experienced when confronted with the often overwhelming feedback from their Nintendo bosses: first, horror at the enormity of the challenge of making so many changes; second, reluctant acquiescence to the fact that they did indeed have to make all the changes listed; and finally, acceptance when they saw the effect of the changes in-game, and realized that nearly all of them were for the better.

The Metroid Prime games benefited enormously from this kind of modular approach to level design, which allowed the developers to build expensive "rooms" and cheaper "hallways" and separate them with identical doors (which also cleverly masked the loading times as new rooms were streamed in).  This gives designers a great deal of flexibility, as it allows them to quickly modify the configuration of an entire level.  Many games use broadly similar approaches, with The Elder Scrolls V: Skyrim being a notable and very successful example.

The configuration of the level, then, can be seen simply as a graph.  Each node in the graph is one room, and its connections to other nodes indicate how its various entrances and exits connect to other rooms.

In theory, if we had a formula to tell us whether one graph configuration was "better" than another, we could use that to optimize an entire level, and find the "best" one.

Of course, that's theory.  Naturally, if you're an experienced level designer or world-builder, you know that far too many factors go into building a level to capture them all in any kind of neat mathematical formula or Excel spreadsheet.  It's an ongoing, iterative process, with countless considerations based on all of the player's possible paths through the space.

But it's useful to take a look at how you would approach this problem in practice to see what can be done and how you can go about doing it, and what the capabilities and limitations of decision modeling techniques are when we apply them to this sort of problem.

And it's useful to know that there's a tool that can answer part of the problem for us, and suggest possibilities, even if we don't end up using decision modeling to figure out how the rooms are connected.

A Sample Problem

Suppose for a moment that our game is based on a reconfigurable room system, just like the Metroid Prime games.  Assume that we have various types of rooms that all fit within a square grid.  We have pre-built rooms that connect 2 neighboring rooms ("L"-shaped or straight line), rooms that connect 3 other rooms ("T"-shaped), and rooms that connect 4 neighbors ("+"-shaped).

We can visualize these as similar to the road tiles in the classic board game Carcassonne (playable on Kongregate here):


Assume we have a fixed "level entry" and "level exit," and we need to figure out the best set of rooms that will connect them.


We can place any of our rooms anywhere inside the grid, and we can rotate them arbitrarily.  Also, for the sake of simplicity, assume that rooms with 2 exits ("hallways") can be arbitrarily bent or un-bent like a garden hose, so that we can switch any such "hallway" piece between straight and L-shaped as needed.

Your problem is as follows: given the entry and exit points marked above, and assuming you have 9 "2-way" rooms, 4 "3-way" rooms, and one "4-way" room, what's a valid way to place all the rooms in the 5x7 grid above such that each room is used exactly once?


Optimizing Connectivity

We'll begin by using the cells in the 5x7 grid as our decision cells.  Each cell will hold either a '0' or a '1' -- a '0' to indicate that there's no room in the cell, and a '1' to indicate the presence of a room.

We also paste '0' and '1' values around the outside of the 5x7 grid to indicate which boundary tiles count as occupied.  The grey tiles on the left and right sides indicate the level entrance and exit, respectively.


Following the formatting standard we laid out in Part 2, the decision cells are colored yellow to make clear that they are decision variables.

Just below that, we add a second table which counts the number of neighbors of each cell, but only if the cell itself has a nonzero value.  This will be our neighbor count.  More on that in a moment -- we'll show you the completed table after we solve.

Just to the right of that, we add up all of the things we want to happen when we optimize this table.



Each row of this table represents a constraint.  The "Actual" column computes some characteristic of the current state of the decision table or the neighbor count table.  The "Desired" column indicates the value we require in each case, as stated in the problem statement in "A Sample Problem" above.  The "Difference" column simply subtracts the "Actual" column from the "Desired" column.

Finally, the "Total Difference" cell will be our objective cell.  This simply adds up all the values in the "Difference" column.  We're trying to minimize that value -- ideally, it should reach 0 to indicate that this problem is "solved."

The first four rows count how many cells of various types exist in the neighbor count table (1-neighbor, 2-neighbor, 3-neighbor, and 4-neighbor cells, respectively) using Excel's COUNTIF() function.  The last row counts whether there are tiles directly adjacent to the entrance and exit.
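A Python sketch of the same objective computation may make it concrete; the grid encoding, entry/exit positions, and the use of absolute differences below are our choices for illustration, not taken from the workbook.

```python
# Sketch of the spreadsheet's objective: count each occupied cell's
# occupied neighbors (including the entry/exit cells just outside the
# grid) and compare the tallies against the desired room mix.
ROWS, COLS = 5, 7
ENTRY, EXIT = (2, -1), (2, COLS)  # assumed positions outside the grid

def neighbor_counts(grid):
    counts = {}
    for r in range(ROWS):
        for c in range(COLS):
            if grid[r][c] == 0:
                continue
            n = 0
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (rr, cc) in (ENTRY, EXIT):
                    n += 1
                elif 0 <= rr < ROWS and 0 <= cc < COLS and grid[rr][cc]:
                    n += 1
            counts[(r, c)] = n
    return counts

def total_difference(grid, desired={1: 0, 2: 9, 3: 4, 4: 1}):
    # Absolute differences penalize mismatches in either direction.
    counts = neighbor_counts(grid)
    tallies = {k: sum(1 for v in counts.values() if v == k) for k in desired}
    return sum(abs(desired[k] - tallies[k]) for k in desired)
```

A straight corridor along row 2, for instance, yields seven 2-neighbor cells and a total difference of 7, so the solver would keep mutating the grid from there.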

What else do we need?

Well, in theory, we also need a way to make sure there's a connection between the entrance and the exit.

The constraints above -- telling the solver that we want 0 1-neighbor tiles, 9 2-neighbor tiles, 4 3-neighbor tiles, and 1 4-neighbor tile -- should ensure that we end up with the right number of tiles and that everything is connected in a way that's valid.

If you think about it, though, it should be clear that this kind of local validity does not guarantee that there will be an actual path between the entry and the exit.  It's possible that individual clusters could form near the entry and exit points, or that there could be separate pieces floating off in a corner, not connected to anything else -- for example, you could get 4 "L-shaped" 2-way tiles in a small square, each connected to two others but none of them connected to the rest of the level.

However, we'll put off dealing with this problem -- proving reachability is overkill for this particular problem, when the fact is, it will probably be fine when we solve it, and if it isn't fully connected as we expect, we can just solve it again to get a new answer.
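That after-the-fact check is cheap to do outside the spreadsheet, too.  Here is a sketch of a breadth-first search that verifies the entry actually reaches the exit, assuming a 0/1 grid with the entry adjacent to the leftmost column and the exit adjacent to the rightmost (both on row 2 in this illustration):

```python
# Sketch: breadth-first search to confirm a solved layout links the
# entry to the exit. Grid encoding (0 = empty, nonzero = room) and
# entry/exit rows are assumptions for illustration.
from collections import deque

def entry_reaches_exit(grid, entry_row=2, exit_row=2):
    rows, cols = len(grid), len(grid[0])
    if not grid[entry_row][0]:
        return False  # no room next to the entry at all
    seen = {(entry_row, 0)}
    queue = deque(seen)
    while queue:
        r, c = queue.popleft()
        if (r, c) == (exit_row, cols - 1):
            return True  # reached the room adjacent to the exit
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < rows and 0 <= cc < cols
                    and grid[rr][cc] and (rr, cc) not in seen):
                seen.add((rr, cc))
                queue.append((rr, cc))
    return False
```

If a solved layout fails this check, we simply re-run the solver with a different random seed, as described above.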

Finally, we run Excel Solver.


Note how simple this is.  We simply set it to minimize the objective cell, then specify the yellow decision cells as the variable cells and add constraints that they must be integers between 0 and 1.  We select the Evolutionary method and press Solve!

Solved!

After running the Evolutionary solver for a while, you may get something that looks like this:



Note that we've enabled the "conditional formatting" feature in Excel to color the second table -- the one that counts the number of neighbors.  As you can see, each cell properly counts how many neighbors it has (if you include the 'entry' and 'exit' cells just outside the table), and there are exactly 9 2-neighbor cells, 4 3-neighbor cells, and one 4-neighbor cell, which is exactly what we wanted.

Problem solved!  With just a tiny amount of work, we've gotten Solver to figure out a whole new level configuration for us based on our specifications.  We can even run Solver several times with different random seed values to see which level configurations it suggests, and from there, figure out which of those configurations seem to work best.

In terms of Carcassonne tiles, our level would look like this:



We can then use Solver to quickly spit out suggestions for alternate room configurations.  For example, if we want to generate a layout with 10 2-neighbor tiles, 6 3-neighbor tiles, and 2 4-neighbor tiles, we could just plug those numbers into our constraint table and re-run Solver to get something like this:




Extending the Example

Now let's say we want to extend the example to include the difficulty level.

Assume that each room has an overall difficulty level that represents how much of a challenge the player will encounter in that room, where '1' represents an easy room, '2' represents a moderately challenging room, and '3' represents a very challenging room, such as one containing a boss monster.  So instead of simply using a '1' in our decision table for each cell with a room in it, we'll use a number between 1 and 3 indicating difficulty.

Our first criterion is that we want some variation in difficulty levels.  So we'll add a new table where each cell measures the change in difficulty between that cell and any nonzero neighboring cells (or 0 if the corresponding cell in the decision table is 0).


Now we add a cell to our constraint table that measures the average difficulty change -- the first row inside the red box in the modified constraint table shown below.  It does this using Excel's AVERAGEIF() function to count the difficulty change from the table above if the corresponding cell in the decision table is nonzero.
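In Python terms, the difficulty-change table and its average look roughly like this; the use of the mean absolute difference against occupied neighbors is our formulation of the measure.

```python
# Sketch of the difficulty-change measure: for each occupied cell
# (difficulty 1-3, 0 = empty), take the mean absolute difficulty
# difference against its occupied neighbors, then average that over
# all occupied cells (the AVERAGEIF() step).
def difficulty_changes(grid):
    rows, cols = len(grid), len(grid[0])
    changes = {}
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0:
                continue
            diffs = [abs(grid[r][c] - grid[r + dr][c + dc])
                     for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= r + dr < rows and 0 <= c + dc < cols
                     and grid[r + dr][c + dc] != 0]
            changes[(r, c)] = sum(diffs) / len(diffs) if diffs else 0.0
    return changes

def average_difficulty_change(grid):
    changes = difficulty_changes(grid)
    return sum(changes.values()) / len(changes)

# A tiny corridor ramping from easy (1) to hard (3).
print(average_difficulty_change([[1, 2, 3]]))  # 1.0
```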


The second row in the constraint table tests whether there's one cell in the second-to-last column to the right (column H) with maximum difficulty.  This will help us ensure that the maximum difficulty is close to the exit, since the exit is at the rightmost side.

We also add a third row to check that every column of the neighbor count table is nonempty.  This will return a value of 1 only if each column of the neighbor count table contains at least one nonzero cell.  Since we know that a valid solution must have no empty columns, this constraint will make it more likely that the generated level is fully connected, with a path between the "entry" and "exit" cells.  By itself, it's not enough to absolutely guarantee that the level Solver generates will be properly connected, but it gives us better odds than the previous spreadsheet did.
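A definitive connectivity test is easy to express outside the constraint framework: a flood fill over the occupied cells. Here is a sketch in Python, assuming the level is stored as a grid where 0 means empty and any nonzero value is a room:

```python
def is_connected(grid):
    """True if all occupied (nonzero) cells form a single orthogonally
    connected region -- a definitive check that the heuristic
    column-nonempty constraint only approximates."""
    rows, cols = len(grid), len(grid[0])
    occupied = {(r, c) for r in range(rows) for c in range(cols) if grid[r][c]}
    if not occupied:
        return False
    stack = [next(iter(occupied))]
    seen = set()
    while stack:
        r, c = stack.pop()
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            if (r + dr, c + dc) in occupied and (r + dr, c + dc) not in seen:
                stack.append((r + dr, c + dc))
    return seen == occupied

print(is_connected([[1, 1, 0],
                    [0, 1, 1]]))  # True
print(is_connected([[1, 0, 1],
                    [0, 0, 1]]))  # False
```

Running a check like this after Solver finishes would catch the rare disconnected layouts that the column-nonempty heuristic lets through.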

Finally, we make sure these three rows are added to the "Total Difference" objective cell, and solve.  Note that the Solver dialog is exactly the same as before, except that we now constrain the decision table cells to have values between 0 and 3 instead of 0 and 1.

Let it run for a few minutes, and voila!


Our decision table looks almost identical to the last one, except that each decision cell has a difficulty level associated with it, and the average difficulty change is 1.

Conclusion

Of course, this is something of a toy problem.  We built a grid-based system simply because that's what we can represent most easily with the grid-based design of Excel, but most component-based level design systems don't lend themselves to this format.

However, with a little work, we could also adapt this to a different, non-orthogonal representation, such as the graph-based representation we used in Part 8.  We could also limit ourselves to swapping compatible rooms within an arbitrary fixed configuration, and use that approach to find the best configuration of rooms within a game level without changing the overall layout.

In any event, the fact that it was so easy for us to build this example should tell us something.  There are clearly a vast number of ways to apply this approach: if we can find a useful fitness function for the right configuration of rooms, it becomes almost trivial to apply this technique to many types of level recombination problems.

Stay tuned for Part 10, due sometime in the next few months, when we'll wrap up our discussion of decision modeling and discuss some very important big-picture issues.

-Paul Tozour

This article was edited by Professor Christopher Thomas Ryan, Assistant Professor of Operations Management at the University of Chicago Booth School of Business.

Sunday, September 22, 2013

Decision Modeling and Optimization in Game Design, Part 8: Graph Search

This article is the eighth in a continuing series on the use of decision modeling and optimization techniques for game design.  The full list of articles includes:


The spreadsheet for this article can be downloaded here: link


    Metroid Prime World Design

    If you’ve ever played any of the games in the Metroid Prime trilogy (or for that matter, any of the Metroid games), then you’re familiar with the way a typical Metroid world is set up.  Each map is a complex, interlocking set of rooms, with many parts of the map sealed off by requiring access to different suits, weapons, and power-ups.  As the player gains access to new technologies such as "morph ball mode," "spider ball mode," and different suits and weapons, different parts of the map structure gradually unlock, forcing a large amount of re-traversal for the player to visit each new unlocked area.

    As a result of this complexity, handling all of the traversal and re-traversal opportunities available to the player at each point in time is one of the major game design challenges in a Metroid-style game.  If we move any rooms, change the connections between rooms, or change the location where the player gets a certain suit, weapon, or power-up, that has the potential to completely change the player's flow and quite possibly break the game.

    The in-game map looks something like this:



    If you’re a programmer, you’d probably look at a map structure like this and immediately realize the potential for all sorts of design tools you could use to build and manage the map structure.  Most likely, you'd make a system where designers could:
    • chart out a map's structure graphically,
    • get instantaneous feedback on where the player could move at each stage of the game,
    • view the combination of rooms the player could access with any combination of power-ups, and
    • instantly determine the fastest route from any room to any other room with any possible combination of power-ups (or indicate that no such path exists).

    Under the hood, you'd probably use an A* search or Dijkstra's search along the graph, taking the player’s movement capabilities into account while crossing each graph edge.
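The core query such a tool would run can be sketched in a few lines of Python. The room names, costs, and power-up requirements below are invented for illustration; a real tool would read them from the game's map data:

```python
import heapq

def shortest_path(edges, start, goal, powerups, requires=None):
    """Dijkstra search over bidirectional weighted edges, skipping rooms
    whose required power-up (if any) the player lacks."""
    requires = requires or {}
    graph = {}
    for a, b, cost in edges:
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    heap = [(0, start, [start])]
    best = {}
    while heap:
        dist, room, path = heapq.heappop(heap)
        if room == goal:
            return dist, path
        if best.get(room, float("inf")) <= dist:
            continue
        best[room] = dist
        for nxt, cost in graph.get(room, []):
            if requires.get(nxt) and requires[nxt] not in powerups:
                continue  # room is sealed off without this power-up
            heapq.heappush(heap, (dist + cost, nxt, path + [nxt]))
    return None  # no route exists with this equipment

edges = [("Landing", "Hall", 2), ("Hall", "Core", 5),
         ("Hall", "Vents", 1), ("Vents", "Core", 1)]
requires = {"Vents": "morph ball"}
print(shortest_path(edges, "Landing", "Core", {"morph ball"}, requires))
# (4, ['Landing', 'Hall', 'Vents', 'Core'])
print(shortest_path(edges, "Landing", "Core", set(), requires))
# (7, ['Landing', 'Hall', 'Core'])
```

The function returns None when no path exists with the given power-ups, which is itself useful design feedback.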

    A tool like that would be relatively easy to create; it would probably only take a talented programmer a few days to write, and it would likely save designers several weeks over the life of the project.

    Surprisingly, though, the creators of the Metroid Prime series never had a tool like that.  When I worked at Retro Studios / Nintendo between 2003 and 2008, we did it the old-fashioned way, and our then-Assistant Producer meticulously maintained large printed maps of all the worlds of each Metroid Prime game, hanging all of them around the company meeting room.  Each time one of the worlds changed, he would edit the associated map graph and re-print all the affected world maps.  These maps looked a lot like this:



    As much as we'd like to actually create a "map design tool" like this, we promised in Part 1 that we wouldn't write any code in this series.  So that kind of a full-featured map design tool is outside the scope of this article.

    What we'll do instead is show how we can handle a problem like this directly inside Excel.  So even if we're designers who can't get any programmer support to build this kind of a tool, we can still create something similar without having to rely on outside help.  And once we're done, we'll have a good template in place that programmers can take and run with to build exactly that kind of a map design tool.

    The Metroid World Example

    To illustrate this problem, we will build an example "Metroid world" and show how to solve it in Excel.  In order to prove that you can actually handle a problem like this from right inside Excel regardless of complexity, we’re going to make this a very complex problem, with 17 rooms and non-obvious connections between them.

    Our example world will look like this:




    Each bubble shows a different room, and the arrows indicate the connections between pairs of rooms.  The number next to each connection indicates the cost of traversing that connection in either direction.  We'll assume these costs fully account for the traversal time to get between any pair of rooms, and there are no costs associated with the rooms themselves (i.e. the traversal costs are to some arbitrary point inside each room).

    We'll explain the coloring of the different rooms later.

    Once we've built the model, we'll be able to ask it questions such as "what's the shortest path for the player to get from the Library to the Sanctuary?"

    A Simpler Problem

    Let's start by working our way up to that with a simpler problem.  Assume we have 6 rooms, 'A' through 'F', with connections as shown in the following directed graph.



    We want to know: what's the fastest way from A to F?  Is it ABEF, ABCEF, ABCDF, ADF  ...?

    You can quickly figure it out yourself by adding up the weights in all four potential paths, but let's see how we'd encode it in Excel Solver.

    First, we make a column with all the possible arcs: A->B, A->D, B->C, etc.  For each arc, we list the total distance of that arc (its traversal cost) in the "Total Distance" column.  To the right of that, we have columns for each room, and for each arc, we put a '1' in the room where the arc originates and a '-1' in the room where the arc terminates (so, for example, "C->D" has a '1' in the 'C' column and a '-1' in the 'D' column).



    We also have a "Decision Variables" column in yellow -- this indicates which arcs the decision model will actually use.  Right now, we'll set all of those to 0 to indicate that they're not currently used (i.e. the problem hasn't yet been solved).

    Because we're following the formatting conventions listed in Part 2, the decision variables are yellow and the predefined values stated as part of the problem description are in blue.

    Now, we add two rows beneath the right part of the matrix to sum up the current target values: this is the "Current" row shown below.



    The "Target" row indicates what we're attempting to achieve.  We simply put in a '1' for the origin, a '-1' for the destination, and '0' for all the other values.  We will use these when we set up the constraints later in the exercise in order to make sure that any potential solution begins and ends where we want it to.

    Finally, we set up the objective function.  This is simply the decision variables multiplied by the respective values in the "Total Distance" column (implemented with the Excel SUMPRODUCT() formula).  In other words, if any arc is used, its decision variable will be 1, and we will multiply it by the respective length of that arc in the "Total Distance" column; otherwise, its decision variable will be 0, and we will get a 0.  This will ensure that the objective function accurately measures the total distance of all the arcs that are actually used in whatever solution we find.

    Now, we solve (see Part 2 if you need a refresher on how to find and use Excel Solver).



    In our objective ("Set Objective"), we specify the orange objective cell.  Since the objective represents the total distance, and we want to minimize the total distance, we select the "Min" radio button below that.  For our decision cells ("By Changing Variable Cells"), we specify the yellow "Decision Variables" column.  In the constraints section, we specify that the yellow decision cells have to be integers that equal 0 or 1.  We also specify that the "Current" row has to equal the "Target" row, to ensure that the path begins and ends at the correct rooms.

    Before we solve, note that "Select a Solving Method" at the bottom has "GRG Nonlinear" specified instead of the usual "Evolutionary" solver.  We'll have a more detailed discussion of the different solver types in a future episode, but for now, suffice it to say that the GRG Nonlinear solver ("GRG" stands for Generalized Reduced Gradient) sometimes handles problems of this type much better than the Evolutionary solver does.

    Now, we hit Solve, and Solver fills in '1' values for the decision variables associated with A->B, B->C, C->D, and D->F, for a total path cost of 8 (note that Solver may take a few minutes to find this solution).  You can verify by hand that this is the correct answer by adding up all of the path costs of the other possible paths, which are all greater than 8.
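As a cross-check, a short script can do exactly what Solver does here: try every 0/1 assignment of the decision variables, keep only those whose "Current" row matches the "Target" row, and pick the one with the lowest total distance. The arc weights below are made up for illustration, since the spreadsheet itself isn't reproduced in the text:

```python
from itertools import product

# Arcs as (origin, destination, cost).  Weights are invented for this sketch.
arcs = [("A", "B", 2), ("A", "D", 6), ("B", "C", 1), ("B", "E", 4),
        ("C", "D", 2), ("C", "E", 3), ("D", "F", 3), ("E", "F", 4)]
rooms = "ABCDEF"
target = {"A": 1, "F": -1}  # +1 at the origin, -1 at the destination

best = None
# Try every 0/1 assignment of the decision variables.
for decisions in product((0, 1), repeat=len(arcs)):
    current = dict.fromkeys(rooms, 0)
    for used, (a, b, _) in zip(decisions, arcs):
        current[a] += used   # +1 where a used arc originates
        current[b] -= used   # -1 where it terminates
    # Keep only assignments where the "Current" row equals the "Target" row.
    if all(current[r] == target.get(r, 0) for r in rooms):
        cost = sum(u * c for u, (_, _, c) in zip(decisions, arcs))
        if best is None or cost < best[0]:
            best = (cost, [arc[:2] for u, arc in zip(decisions, arcs) if u])

print(best)  # (8, [('A', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'F')])
```

With only eight arcs there are just 2^8 = 256 assignments to try, so enumeration is instant; Solver earns its keep on larger graphs where it isn't.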

    You can find this solution in the "Sample solution" worksheet of the attached spreadsheet.

    Searching in the Metroid World

    With that framework in hand, let's generalize it to the large Metroid sample world above.

    One big change in this version is that this example includes bidirectional links instead of directed arcs between rooms.  This means our decision variables will no longer simply be '0' or '1' to indicate whether an arc forms part of the shortest path; instead, we'll also use '-1' to indicate that an arc is traversed backwards.

    [Note that this approach only works because all of the arcs in the Metroid example are bidirectional.  If we had any directional arcs in our example, we would have to use the previous approach, and specify any bidirectional arcs as two separate rows in the spreadsheet (for example, we'd have to specify "Lab -> Oculory" and "Oculory -> Lab" on two separate rows to indicate that the player can move between the two rooms in either direction).]

    Our spreadsheet looks similar to the previous one, except that we now add two columns, "Traversal direction" and "Absolute value of decision variables."  "Traversal direction" is just a visual aid to indicate which direction we'll traverse a link, with a decision variable value of '1' translated into "Forward" and a value of '-1' translated into "Backward."


    We also change the objective function so that it multiplies the total distance column by the "Absolute value of decision variables" column rather than the decision variables themselves (since traversing a link backwards gives us a '-1' in the "Decision Variables" column, and multiplying this by the distance value would give us a negative distance).

    We also need to change our constraints so that they accept integers between -1 and 1 in the "Decision Variables" column instead of 0 to 1.

    In the "Target" row below the room source/destination matrix, we enter a '1' for the Library and a '-1' for the Sanctuary to ask Solver to figure out the shortest path from the Library to the Sanctuary.

    After these changes, we solve again.  Note that because this is a much more challenging problem, we now have to tweak the Solver options a bit.  In the Solver dialog, we hit Options, select the GRG Nonlinear tab, and then check "Use Multistart" and set "Population Size" to 20 (see this link for more info).  This will make Solver do a much more exhaustive search and will make it much more likely to find the global optimum, at the expense of taking a bit more time to solve the problem.  With this option set, Solver may take 3-5 minutes to find the correct solution.



    After Solver finds a solution, we scan the "Traversal direction" column to make sense of the solution that Solver came up with.  We get the following as the shortest path:

    Library -> Sanctum -> Plaza -> Starport -> Chapel -> Sanctuary
    ...  for a total cost of 21.

    You can find this setup in the "Metroid world solution" worksheet of the attached spreadsheet.

    Adding Movement Constraints

    Of course, this still doesn't quite give us what we want.  The actual Metroid worlds have complex movement requirements that limit the player's access at any point depending on the exact combination of equipment he has acquired.  What we really want is a way to limit the search to only those rooms that are valid with a given combination of equipment.

    There are a lot of different ways to do this, but for the sake of simplicity, we'll just assume each room in the Metroid world graph above requires one particular type of equipment.  Then we'll solve that problem and leave any more sophisticated implementations as an exercise for the reader.

    We will assume the following colors in the graph have the following meanings:


    • Pink rooms (Library, Ruins, etc.) don't require any special equipment.
    • Green rooms (Sanctum, Plaza, etc.) require the player to have the morphball technology.
    • Blue rooms (Courtyard, Throne Room, etc.) require the player to have the Varia suit upgrade.
    • Red rooms (Chapel, Furnace, etc.) require the player to have access to missiles.


    The goal is to use Solver to help us figure out how the optimal path for the player changes depending on whether or not he has the morphball, Varia suit, and/or missiles.

    We'll start by adding an "equipment required" table near the top of the spreadsheet:


    This table uses a '1' for each cell to indicate that the player currently has access to that technology, or a '0' if he doesn't.  So far, we've been assuming that the player can travel anywhere, so we've filled in a '1' for each technology.

    Now, we add two rows directly below the arc matrix, "Room category" and "Multiplier."


    Room category specifies the equipment type of the room listed in the same column -- None ('0'), Morphball ('1'), Varia suit ('2'), or Missiles ('3').

    The Multiplier row contains a formula that checks the room category in each column against the "Equipment required" table above.  It produces a '1' if the player possesses the appropriate equipment, and a '0' otherwise.

    The grey cells below that are a table that combines the arc reachability table above with the "Multiplier" row.  It produces a '1' if the room in the same column is included in the path for the corresponding row, and the multiplier in the same column is 1 (in other words, it multiplies the absolute value from the arc reachability table by the multiplier in the same column).

    We also add an additional column to the table just to the right of the "Total distance" column: "Traversable with current equipment."  This column will produce a '1' if both rooms in the arc are reachable with the player's current equipment.  If not, it will produce a value of '999' in order to make the arc prohibitively expensive.
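This "actual cost or 999" scheme can be expressed as a tiny function. The room-to-category mapping below follows the article's color coding, but the base costs are hypothetical:

```python
def arc_cost(base_cost, room_a, room_b, category, owned):
    """Effective arc cost per the spreadsheet's scheme: the true traversal
    cost if the player's equipment admits both endpoint rooms, otherwise a
    prohibitive 999.  `category` maps room -> equipment id (0 = none)."""
    BLOCKED = 999
    for room in (room_a, room_b):
        need = category.get(room, 0)
        if need and not owned.get(need, 0):
            return BLOCKED
    return base_cost

# 1 = morphball, 2 = Varia suit, 3 = missiles (ids as in the article)
category = {"Library": 0, "Sanctum": 1, "Courtyard": 2, "Chapel": 3}
owned = {1: 1, 2: 0, 3: 1}  # has morphball and missiles, no Varia suit

print(arc_cost(7, "Library", "Sanctum", category, owned))    # 7
print(arc_cost(7, "Library", "Courtyard", category, owned))  # 999
```

Feeding these effective costs into the shortest-path objective is what steers the search away from sealed-off rooms.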



    Finally, we change our objective cell (the orange cell called "minimum total distance") to also include this column in its SUMPRODUCT() formula.

    Now, we run Solver exactly as before.  Make sure you have the Options set as specified above (Use Multistart = TRUE with a Population Size of at least 20), or you may not get the desired results.

    First, we'll set the value for "Morphball" in the "Equipment required" table at the top of the spreadsheet to 0 and re-solve (note that this may take 3-5 minutes to solve).  This is the optimal path with no morphball:

    Library -> Ruins -> Survey Room -> Courtyard -> Lab -> Oculory -> Chapel -> Sanctuary
    ...  for a total cost of 25.

    Next, we set "Morphball" back to 1 and set the cell below it to 0 to find out the optimal path when the player doesn't have the Varia suit:
    Library -> Sanctum -> Plaza -> Starport -> Chapel -> Sanctuary
    You'll notice that this is exactly the same as the solution we found in the previous worksheet, before we began taking the player's equipment into account.

    What happens when we set both of these to 0, so the player doesn't have the morphball or the Varia suit?  Re-solving, we get:
    Library -> Ruins -> Foyer -> Mausoleum -> Sanctuary

    for a total cost of 41.  As you can see from the graph above, going down the very expensive Foyer -> Mausoleum arc (with a cost of 27) is the only option available when both the morphball (green) and Varia suit (blue) are disabled  ... and this is, in fact, the shortest path that includes that link.

    Problem solved!

    You can find this spreadsheet in the "With equipment requirements" worksheet of the attached spreadsheet.


    Conclusion

    In this example, we've shown how you can use decision modeling and optimization techniques as a powerful tool to solve graph search and reachability problems.  This particular Metroid game world example is of course only one application of this technique; there are many more.  You could just as easily use it to find the shortest path through a star system in a space game with direct links between stars, for example, or the shortest path along a flight path network in an MMO such as World of Warcraft.

    Tune in next time, when we'll show how you can use decision modeling and optimization techniques to not only search through game world configurations, but to help design those configurations in the first place.


    -Paul Tozour

    This article was edited by Professor Christopher Thomas Ryan, Assistant Professor of Operations Management at the University of Chicago Booth School of Business.

    Sunday, September 8, 2013

    Decision Modeling and Optimization in Game Design, Part 7: Production Queues

    This article is the seventh in a continuing series on the use of decision modeling and optimization techniques for game design.  The full list of articles includes:


    The spreadsheet for this article can be downloaded here: link

    Introduction

    In the previous articles in this series, we introduced the concepts behind decision modeling and optimization and showed how they can be useful in framing and solving a surprising number of design decisions.  We also discussed many of the limitations of these techniques.

    In this post, we'll show how these techniques can be used to optimize an undeniably hard design problem.  So, hang on to your hats: this example is going to be a beast.

    Once we're done, though, it should be very clear how this approach can be useful for tackling some very difficult design problems that can't reasonably be solved any other way.  Although it's a bit of work to set it up, the spreadsheet we end up with will serve as indisputable proof that decision models can answer some questions that are difficult or impossible to solve any other way.

    Our example is based on a production queue for a colony in a classic "4X" strategy game.  As designers, we want to know: what's the best order to build things in our colony?

    This will give us all sorts of insights into whether there is a "dominant strategy" that allows players to most easily win the game, or whether there are problems with the balancing of our game's buildings or the game rules.

    If we were to have some sort of system that allowed us to automatically see the best production queue for a given city or colony, that would be fantastically useful, wouldn't it?  Then we could tinker with the costs and benefits of each building, or experiment with our game rules, and see how the optimal production queue changed.

    In this installment, we're going to show you how to build exactly that kind of system.

    Although this example is limited by a simplified game model and the limitations of Excel and Excel Solver, it should be clear to anyone with a programming background how you could easily extrapolate from this example to your own model in a high-level language such as C++, C#, or Java, and potentially integrate it with your own game as well.

    Master of Orion

    In 1993, a small studio in Texas released a brilliant classic "4X" strategy game called Master of Orion.  Three years later, they followed it up with another classic, Master of Orion II: Battle at Antares.  These games were two of the initial forerunners in the space-based "4X" strategy genre, and laid the groundwork for later franchises such as Stardock's successful Galactic Civilizations series.  Our example in this case will use a colony in a game broadly similar to Master of Orion II.



    A colony in MoO 2 is essentially identical to a city in a typical '4X' strategy game such as Civilization.  It serves as a growing population center, and the player can build one building at a time in the colony to increase its capabilities.



    Let's say you're designing a new game in the style of MoO 2.  You want to know what the optimal initial build order is in the general case -- that is, all things being equal, what's the best sequence of actions for a player to build up a colony?

    Being able to answer this question will be enormously useful to us in helping to balance the game and design all of the different building types.

    Your population begins at 1 million people.  It has a defined "maximum population" of 10 million people.  It grows at a rate that depends on the ratio of the current population to the maximum population:

    • At 10% of maximum population or less, it grows at 2.5% of the current population each turn
    • At 20% of maximum population size, it grows at 3% of the current population each turn
    • At 30% of maximum population size, it grows at 3.5% of the current population each turn
    • At 40% of maximum population size, it grows at 4% of the current population each turn
    • At 50% of maximum population size, it grows at 4.5% of the current population each turn
    • At 60% of maximum population size, it grows at 4% of the current population each turn
    • At 70% of maximum population size, it grows at 3.5% of the current population each turn
    • At 80% of maximum population size, it grows at 3% of the current population each turn
    • At 90% of maximum population size, it grows at 2.5% of the current population each turn
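As a rough Python sketch of this growth table (the text doesn't say how ratios between the listed deciles are handled, so this version floors the ratio to the decile below it):

```python
# Growth rate by population decile (ratio of current to maximum population).
GROWTH = {1: 0.025, 2: 0.03, 3: 0.035, 4: 0.04, 5: 0.045,
          6: 0.04, 7: 0.035, 8: 0.03, 9: 0.025}

def growth_rate(pop, max_pop):
    decile = int(pop / max_pop * 10)
    decile = max(1, min(decile, 9))  # "10% or less" and "90%+" clamp
    return GROWTH[decile]

print(growth_rate(1_000, 10_000))  # 0.025  (at 10% of max)
print(growth_rate(5_000, 10_000))  # 0.045  (peak growth at 50%)
```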
    Also assume that the population is divided into "citizens," with each "citizen" representing 1 million people (rounded down).  Each citizen requires 1 unit of "food."  Each citizen can be dedicated to either farming or production.  Each citizen designated as a farmer will grow 3 units of food, and the colony will automatically allocate as many farmers as needed to growing enough food to ensure all citizens are fed, with the remaining citizens dedicated to production (i.e. they are "workers").

    Also assume that a colony produces 1 unit of "production," plus 2 additional units for each citizen designated as a "worker."
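The allocation rule can be written directly from the two paragraphs above: the colony assigns the minimum number of farmers needed to feed everyone (each citizen eats 1 food, each farmer grows 3), and everyone else works. A minimal sketch:

```python
def allocate(citizens):
    """Split citizens into farmers and workers, then compute production."""
    farmers = -(-citizens // 3)       # ceil(citizens / 3): feed everyone
    workers = citizens - farmers
    production = 1 + 2 * workers      # base 1, plus 2 per worker
    return farmers, workers, production

print(allocate(1))  # (1, 0, 1)
print(allocate(5))  # (2, 3, 7)
```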

    There are 5 initial building types in a colony:

    • A Hydroponic Farm generates 2 additional food every turn.  It costs 32 production units to build.
    • The Biospheres improvement increases maximum population size by 3.  It costs 56 production units to build.
    • The Cloning Center improvement adds 100,000 additional population every turn.  It costs 36 production units to build.
    • The Factory increases the production of each worker citizen by 1.  It costs 44 production units to build.
    • The Robotic Factory adds a flat 5 production units plus an additional 1 production per worker.  It costs 56 production units to build.
    Additionally, there is a sixth thing we can build: Housing.  We can build "Housing" whenever we like; unlike the others, it is not a discrete "building" but a target for us to allocate resources toward.  Every turn that we build Housing, we generate an additional 10,000 population per unit of production.

    This setup is a bit simplified compared to Master of Orion 2, but it's rich enough to prove the point we need to make here.

    Our question is: given the above, what should the player build, and when should he build it, in order to get to the best possible colony as quickly as possible?

    This is an even trickier problem than it first appears.  If it were simply a matter of finding the best order in which to build the Hydroponic Farm, Biospheres, Cloning Center, and so on, then we could just look through all 5! = 120 orderings.

    But in fact, the player can build Housing on any turn, and this has a huge effect on the colony's population.

    So we're left with a much more complicated question: which of the 6 things (5 building types or Housing) should the player build for each of the N turns until all the buildings have been completed?


    The Simulation

    To solve this, we'll build a model conceptually similar to the one we used to solve for the tax rate in Part 2 of this series: we'll put the starting stats for the colony at the top of the attached spreadsheet and then show each new turn of the game on a subsequent row.  You can think of this as "unrolling" the game simulation onto the spreadsheet, with the vertical axis representing time.

    We'll start by making a table of "Production ID numbers."  These are fixed numerical IDs we'll use to refer to each of the five building types, plus Housing.  For example, '2' means "Biospheres."  We'll show why this is useful in a moment.


    Since we're carefully following the formatting standards we laid out in Part 2, these cells are all blue, indicating that they're fixed values that are part of the problem definition.

    Next, we'll set up a table for the population growth rates we described earlier:


    We'll also put in a table with some of the constants we described for the colony in the previous section.



    Now that we have these tables in place, let's build the simulation.


    We start by modeling population growth.  This part of the spreadsheet lists the turns along the leftmost column, and in the subsequent columns, it lists the current population (in thousands), the number of "citizens" (population in millions), the current population maximum, and the ratio of current to maximum population.  Then, the "Growth Rate Next Turn" column looks up the current growth rate value in the "Growth table" above, while the "Growth Next Turn" column multiplies this factor by the current population (since the growth rate is expressed as a percentage of current population).

    You'll notice, however, that "Growth Next Turn" is actually 35 instead of 25 for the first two turns.  This is because we're building Housing for now, which adds another 10k population each turn.  We'll get to that part of the spreadsheet in a moment.

    "Population (in thousands)" grows every turn by adding the previous population to the "Growth Next Turn."  So, the 1000 population on turn 0 becomes 1000+35=1035 on turn 1, and so on.

    Next, we'll model the citizens' job roles:


    We know that each citizen consumes 1 unit of food.  In this part of the spreadsheet, we determine the number of farmers and workers each turn.  Each new "citizen" added to the colony will become a worker by default.  Any time the colony's food demand (= the number of "citizens") exceeds the current food production (= 3 x the number of farmers), we'll reallocate one of these workers to a farming role instead.  The production column lists current production according to the formulas stated at the top (1 unit of production + 2 per worker, plus any modifiers based on whether we've built things like a Factory or a Robotic Factory).

    Now, we model the 5 different buildings.  For each building, we simply list the "production units" required to construct it in the top row.  In each subsequent turn, the production units will decrease if we're working toward that building.  If the value in any of these columns becomes 0 in later turns, it indicates that the building has been completed successfully.


    This gives us an easy way to track both the status of any building (whether it's been created or not), and our progress toward completing each of them.

    Finally, in the rightmost column, we put in our decision cells.  These cells will be integers that express our production decision for what to build each turn.  Because we're following the formatting conventions laid out in Part 2, these cells are yellow, because they're the ones we're going to tweak with Solver.


    The formulas in the spreadsheet are a bit too complex to go into in detail here; the reader is invited to download the attached spreadsheet.  However, the important thing to note is that as we change the cells in this column to any value from 1 through 5 (the Production ID Numbers we listed in the table above), the spreadsheet allocates all our production resources that turn toward any of the five different buildings in the "Production Remaining" column (download the spreadsheet and tinker with the values in this column to see for yourself).  If we keep them at 0, it represents Housing, and it adds to the colony's population instead.
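To make the mechanics concrete, here is a rough pure-Python rendering of the unrolled simulation, one loop iteration per spreadsheet row. The rules follow the article's description, but details the article leaves to the spreadsheet's formulas (the order in which growth, Cloning Center population, and Housing are applied, and how rounding works) are guesses:

```python
# Production costs keyed by the article's Production IDs:
# 1 Hydroponic Farm, 2 Biospheres, 3 Cloning Center, 4 Factory, 5 Robotic Factory
COSTS = {1: 32, 2: 56, 3: 36, 4: 44, 5: 56}
GROWTH = {1: 0.025, 2: 0.03, 3: 0.035, 4: 0.04, 5: 0.045,
          6: 0.04, 7: 0.035, 8: 0.03, 9: 0.025}

def simulate(decisions):
    """decisions: one entry per turn; 0 = Housing, 1-5 = building IDs."""
    pop, max_pop = 1000.0, 10000.0              # population in thousands
    remaining = dict(COSTS)                     # production left per building
    total_food = total_prod = 0.0
    for build in decisions:
        citizens = int(pop // 1000)
        farmers = -(-citizens // 3)             # ceil: feed every citizen
        workers = citizens - farmers
        per_worker = 2 + (remaining[4] == 0) + (remaining[5] == 0)
        prod = 1 + per_worker * workers + (5 if remaining[5] == 0 else 0)
        food = 3 * farmers + (2 if remaining[1] == 0 else 0)
        total_food += food
        total_prod += prod
        if remaining[2] == 0:
            max_pop = 13000.0                   # Biospheres: +3M max pop
        decile = max(1, min(int(pop / max_pop * 10), 9))
        pop += GROWTH[decile] * pop
        if remaining[3] == 0:
            pop += 100                          # Cloning Center: +100k/turn
        if build == 0:
            pop += 10 * prod                    # Housing: 10k pop per unit
        elif remaining[build] > 0:
            remaining[build] = max(0, remaining[build] - prod)
        pop = min(pop, max_pop)
    return total_food, total_prod, sum(remaining.values())

print(simulate([0, 0]))  # (6.0, 2.0, 224): two turns of Housing, nothing built
```

This is the same model the spreadsheet "unrolls" down its rows; having it as a function means any candidate build order can be scored instantly.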

    Optimizing

    There's still one thing we're missing: an objective function to help guide Excel Solver to the right answer.

    Our objective function should be something that expresses how powerful our city is -- its total population and the sum of all its capabilities.  A highly skilled, "min-maxing" player will want to build the colony to its maximum capability level -- that is, maximum population, food production, and production output -- so that it can make the greatest possible contribution to the player's space empire.

    Moreover, such a player will also want to achieve the maximum level of food and production output as early as possible, so that if the colony is interrupted by the need to do something else before this initial production queue is completed -- for example, it's attacked by an alien race and needs to retarget to military production -- it's likely to have a higher production capability at such a point to help it best address that challenge.

    In other words, even though the blue curve and the red curve in the illustration below both get to the same endpoint of total production capability over time, we should prefer the red curve because it gets us to a higher level of total production capability earlier than the blue curve does.  So we care about the total area under the curve, not just the endpoint of the curve.



    We'll build this as a customized function by adding three different values together:
    • The total food production throughout all turns of the simulation.  It might be enough for us to simply take the total value of food production on the last turn, but that might not differentiate between scenarios where we ramped up to maximum food production earlier in the simulation and those where we only ramped up later.  All things being equal, we want to get to maximum food production as early as possible, so we'll take the sum of the food production over all turns.
    • The total production over all turns.  Just as with food production, it makes the most sense to add the sum of production over all turns to reward earlier production gains.
    • An additional penalty for any buildings left unfinished.  We want to ensure that all 5 building types are completed by the end, so we'll take the sum of all the remaining production units required on all 5 buildings and subtract it from the total.  This should also help Solver find the best solutions more easily, since buildings don't have any effect on food or production until they're completed, and this factor helps reward Solver for partially completing buildings, whereas the previous two elements did not.
    We then add those three values together, and we now have the following at the bottom of the spreadsheet.



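    As a sketch, the three-part objective described in the bullets above boils down to a few lines of Python.  The per-turn food and production series would come from the turn-by-turn simulation; the toy numbers below are invented purely for illustration.

```python
def objective(food_by_turn, production_by_turn, remaining):
    """The three-part score: total food plus total production over all
    turns, minus a penalty for any unfinished buildings."""
    total_food = sum(food_by_turn)              # rewards ramping food up early
    total_production = sum(production_by_turn)  # rewards early production too
    unfinished_penalty = sum(remaining.values())
    return total_food + total_production - unfinished_penalty

# Toy example: three turns, one building still 5 units from completion.
score = objective([2, 4, 4], [3, 3, 5], {4: 5, 1: 0})   # -> 16
```

    Because the first two terms are sums over all turns rather than final-turn values, the red curve (early ramp-up) scores higher than the blue curve even when both reach the same endpoint.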
    Now that we've set up the spreadsheet, running Solver is surprisingly easy (again, if you're unfamiliar with Solver or can't find it in your copy of Excel, see Part 2).  The configuration simply sets the orange "total score" cell above as the optimization objective, tells Solver to maximize its value, and sets the decision cells to the entire row of yellow cells in the "Work on what?" column.  It also tells Solver that these need to be integers between 0 and 5 (matching the 6 different values -- Housing plus the five Production IDs -- in the table above).


    Now, we hit Solve.


    Not So Good  ...

    When you run Solver, you'll notice that it takes quite a long time to finish.  Since it's using the Evolutionary solving method (the drop-down box at the bottom of the dialog shown above), it has to experiment with many different candidate solutions as it tries to evolve the best one.

    In our first run, we get a total of 593 for our objective function after 64 turns of game time.  This isn't very good; only the Hydroponic Farm and the Robotic Factory have been built.  Repeated attempts to re-optimize don't seem to improve the solution at all.

    The real problem here is that there are just too many parameters to optimize.  We've asked Solver to optimize 64 separate parameters, which allows a staggering number of possible solutions.  We've given Solver too big a haystack and too small a needle.
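
    It's easy to put a number on the haystack: with six possible values (Housing plus the five Production IDs) for each of 64 independent per-turn decisions, the naive encoding allows an astronomical number of candidate build orders.

```python
# Six choices (0 = Housing, plus Production IDs 1-5) for each of the
# 64 turn-by-turn decision cells:
naive_space = 6 ** 64
print(f"{naive_space:.3e}")    # about 6.3e49 candidate build orders
```

    No evolutionary search is going to cover a meaningful fraction of a space that size in any reasonable amount of time.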

    If you wanted to throw money at the problem, you could probably fix it by licensing a more powerful Solver engine from the folks who make Excel Solver.

    But of course, that's not the approach I would recommend!  There are inefficiencies in our decision model setup, and it would be far better to fix those first and see whether that lets Solver find a good solution.

    As anyone who's done AI pathfinding knows, the best way to optimize your pathfinding usually isn't to tweak the search algorithm itself; it's to simplify the space it needs to search.  The simpler and coarser the pathfinding representation, the more quickly any search algorithm will run.

    So we need to do the same thing: we need to take our 64 individual "What should I build this turn?" decisions and transform them into a much smaller set of decisions that Solver can handle using some kind of dimensionality reduction.



    It Only *Looks* More Complicated ...

    Let's rephrase our question.  We can take advantage of the fact that any time we start constructing a particular building, we'll want to keep working on it until it's finished.  It seems safe to assume that the optimal production queue builds each of the five buildings in one focused stretch -- that is, there's nothing to be gained from pausing a half-finished building to work on something else.

    This assumption allows us to rephrase the question from "What should I build every single turn?" to a much simpler series of questions:

    • How many turns should I build Housing before I create the first building?  (0 or more)
    • Which building should I build first?  (Production ID 1 through 5)
    • How many turns should I build Housing between the first and second buildings?  (0 or more)
    • Which building should I build second?  (Production ID 1 through 5)
    • How many turns should I build Housing between the second and third buildings?  (0 or more)
    • Which building should I build third?  (Production ID 1 through 5)
    • Etc  ...
    Then we continue that pattern until all 5 buildings have been built.

    In order to do this, we first implement a table above the spreadsheet that looks like this:



    The yellow decision cells on the left are the various "How many turns of housing?" questions we will ask Solver.

    The yellow decision cells on the right are the five production IDs of the five different buildings.

    In the center, we have our production table, which lists the sequence of events -- in this example, 18 turns of Housing, followed by building #3 (Cloning Center), then 1 more turn of Housing, then building #5 (Robotic Factory), then 0 turns of Housing, then building #1 (Hydroponic Farm), etc.

    Finally, in order to implement this change, we take the entire spreadsheet and copy it to a new worksheet in Excel.  Then, we replace all the cells in the "Work on what?" column with formulas that express the build order described in the previous paragraph, so that the Production table above effectively "programs" the build order.  We also change these cells from yellow to grey, since they are no longer the decision cells.

    The exact formula is very complicated, and I don't want to derail the article by going into all the minutiae of how it's expressed in Excel, but as always, you're welcome to download the spreadsheet and check it out for yourself if you're curious.  (In short, we break it out into two separate columns, "Turns of housing in a row" that lists how many turns remain to build housing, and "Index in production table" that increments as we go further down the production table.  The "Work on what?" column returns 0 (Housing) for even-numbered rows in the Production Table, and returns the corresponding Production ID for odd-numbered rows.)
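
    For readers who'd rather see the idea in code than in Excel formulas, here's a Python sketch of the same expansion: it turns the compact decision vector (turns of Housing plus a build order) into a per-turn "Work on what?" list.  The population growth rule, production-output formula, and the tiny example costs are all invented placeholders.

```python
def run_schedule(housing_turns, build_order, costs, total_turns):
    """housing_turns -- turns of Housing before each building
    build_order   -- Production IDs in construction order
    costs         -- production units needed to finish each ID
    Returns the per-turn choices plus leftover production units."""
    remaining = dict(costs)
    population = 4                      # placeholder starting population
    queue = list(zip(housing_turns, build_order))
    housing_left, current = queue.pop(0)
    choices = []
    for _ in range(total_turns):
        if housing_left > 0:
            choices.append(0)           # 0 = Housing
            housing_left -= 1
            population += 1             # placeholder growth rule
        elif current is not None:
            choices.append(current)     # stay on this building until done
            output = 2 + population // 4    # placeholder output formula
            remaining[current] = max(0, remaining[current] - output)
            if remaining[current] == 0 and queue:
                housing_left, current = queue.pop(0)
            elif remaining[current] == 0:
                current = None
        else:
            choices.append(0)           # everything built: back to Housing
            population += 1
    return choices, remaining

# Tiny example: 1 turn of Housing, then building #3, then building #1.
choices, rem = run_schedule([1, 0], [3, 1], {3: 3, 1: 2}, total_turns=4)
# -> choices == [0, 3, 1, 0]: Housing, building 3, building 1, Housing
```

    The spreadsheet's two helper columns ("Turns of housing in a row" and "Index in production table") play the same role as the `housing_left` counter and the `queue` index here.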

    We also have to change all of our Solver settings.  The dialog now looks like this:



    Here, the decision cells ("By Changing Variable Cells") are the yellow cells at the left and right sides of the new Production Table we showed above ("Turns of Housing" and "Production ordering").  The constraints simply state that the "Turns of Housing" cells need to be integers between 0 and 40, and the "Production ordering" values need to be integers between 1 and 5.
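
    It's worth pausing to appreciate how much smaller this haystack is.  Assuming five "Turns of Housing" cells (each 0 to 40) and five "Production ordering" cells (each 1 to 5), as the constraints above describe:

```python
# Five Housing-count cells with 41 possible values each, times five
# ordering cells with 5 possible values each:
compact_space = 41 ** 5 * 5 ** 5
print(compact_space)    # 362,050,628,125 -- about 3.6e11 candidates,
                        # roughly 38 orders of magnitude smaller than
                        # the 6^64 turn-by-turn encoding
```

    Same problem, same solutions reachable, but a search space that an evolutionary solver can actually make headway in.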

    [If you've been reading this series diligently, you may be surprised at what's missing in this step.  After all, the Production ordering table on the right side lists cells for the 5 buildings to create, each specified by a Production ID value from 1 to 5.  Don't we need to add some sort of special constraint here to ensure that they end up with unique values, so that we don't cause Solver to try to build the same building twice?

    The answer is no  ...  or at least, probably not.  The objective function we've specified above already rewards Solver for completing every building type, and Solver gains nothing from building the same building more than once, so the values should naturally end up unique.  We'll leave off implementing this constraint until we actually need it (and as you'll see, we never will, so we can forget about it).]


    Conclusion

    Now, we can run Solver again, and in just a few minutes, Solver should give us our final answer, with our objective function having a value of 884 (if Solver doesn't reach this value for you, try re-running the Solver; you may also need to mess with the Solver's evolutionary optimization settings to help it attain this value).

    This solution represents the following:



    • Build Housing for 10 turns.
    • Then build a Cloning Center.
    • Then build a Robotic Factory.
    • Then build a Factory.
    • Then build a Hydroponic Farm.
    • Finally, build Biospheres.
    That's it!

    In a relatively short time frame, we've solved a problem that it would be extremely difficult to solve with guesswork or with any number of testers, and we've gained useful design insights in the process.

    Not only that, we now have a tool that can allow us to quickly see how future design changes will affect the optimal build order.  If we decide at some point down the road to lower the cost of Biospheres from 56 to 40, or change Hydroponic Farms to generate +3 food instead of +2, we can make that change and re-optimize the model to see how it changes the optimal initial build order.

    From here, it's well worth looking at whether Cloning Centers and Robotic Factories are too powerful or too cheap, or whether Hydroponic Farms and Biospheres might be too weak or too expensive.  It would be a good idea to play around with the benefits and the production costs of each of these buildings and find the exact point at which Solver changes its answer.  The point where Solver changes its mind about the ordering of two buildings is the crossover point of those two buildings along a cost/benefit curve.
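
    As a toy illustration of that crossover idea (a stand-in for re-running Solver at each point, not the full workflow): score each building by benefit per unit of cost, sweep one building's cost, and find where the other overtakes it.  The numbers are invented, loosely echoing the +2-food Hydroponic Farm and the 56-cost Biospheres mentioned above.

```python
def score(benefit, cost):
    """A toy cost/benefit measure: benefit per unit of production cost."""
    return benefit / cost

def crossover_cost(benefit_a, benefit_b, cost_b, max_cost=200):
    """Smallest cost for building A at which building B scores at least
    as well -- the point where the optimizer would change its answer."""
    for cost_a in range(1, max_cost + 1):
        if score(benefit_b, cost_b) >= score(benefit_a, cost_a):
            return cost_a
    return None

# A +2-benefit building vs. a +3-benefit building costing 56:
print(crossover_cost(2, 3, 56))    # -> 38
```

    In the real model the "score" would be Solver's full objective rather than a simple ratio, but the sweep-and-watch-for-the-flip procedure is the same.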

    Ideally, we'd like to build a tool that would give us an instant sensitivity analysis illustrating how the value of each building changed as its cost (or any other parameter) changed -- though that's a topic for another day.

    It's also worth noting that the buildings in this example lend themselves to a fairly well-defined objective function.  This won't necessarily be the case in general, as many buildings have more ambiguous or context-sensitive contributions to a colony's success.  For example, the Marine Barracks in the example above generates marines to help with ground defense in case the planet is invaded.  The value of a Marine Barracks scales with the planet's likelihood of being invaded: it will naturally be much more useful close to the empire's borders than on a backwater planet in the farthest sectors of the map where enemies can't reach.  There's no point wasting resources on defenses for a planet that will never be invaded.

    A building's value also depends to a certain extent on how many turns have elapsed in the game, since there is unlikely to be much planetary invasion in the early game, while space empires are still developing, whereas later in the game, enemies become much more aggressive and their technologies allow them to invade more distant planets.

    In these cases, we're left to improvise when determining how such a building should contribute to the production queue's objective function.  For a Marine Barracks, we might give it a contribution that scales over time, or we might model several classes of colonies with different defensive needs (say, Low, Medium, and High), give Marine Barracks and other defensive buildings a fixed weighting in each class, and see how the optimal production queue changes in each scenario.

    Stay tuned! In future articles, we'll build decision models to help us design game levels, analyze game world layouts in a Metroid Prime style of game, and much more.


    -Paul Tozour


    This article was edited by Rob Zubek of SomaSim Games and Christopher Thomas Ryan, Assistant Professor of Operations Management at the University of Chicago Booth School of Business.