Housewifeliness

August 24, 2009

So I’ve been working on an AI to play the game lately, and it’s looking pretty good.  I’ve devised an algorithm that can take any list of letters and come up with all of the words that can be made from them.  It uses my word graph approach to checking word validity but takes it a bit further.
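To give a rough idea of what I mean (this is just a minimal sketch, not the actual game code, and the node layout and names here are made up), the word graph can be walked while keeping track of how many of each letter are still available:

    // Minimal sketch of walking a word graph (trie) with a pool of available letters.
    // The node layout and names here are hypothetical, not the real game code.
    using System.Collections.Generic;

    class WordGraphNode
    {
        public bool IsWord;   // true if the path to this node spells a word
        public Dictionary<char, WordGraphNode> Children = new Dictionary<char, WordGraphNode>();
    }

    static class WordFinder
    {
        // Returns every dictionary word that can be built from the given letters.
        public static List<string> FindWords(WordGraphNode root, string letters)
        {
            var counts = new Dictionary<char, int>();   // how many of each letter remain
            foreach (char c in letters)
            {
                int n;
                counts.TryGetValue(c, out n);
                counts[c] = n + 1;
            }

            var results = new List<string>();
            Search(root, counts, "", results);
            return results;
        }

        static void Search(WordGraphNode node, Dictionary<char, int> counts,
                           string prefix, List<string> results)
        {
            if (node.IsWord)
                results.Add(prefix);

            foreach (var child in node.Children)
            {
                int remaining;
                if (counts.TryGetValue(child.Key, out remaining) && remaining > 0)
                {
                    counts[child.Key] = remaining - 1;   // consume the letter
                    Search(child.Value, counts, prefix + child.Key, results);
                    counts[child.Key] = remaining;       // put it back (backtrack)
                }
            }
        }
    }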

One of my tests involves feeding a random set of letters into the algorithm and looking at the words that come out.  Performance is also a big concern because this process must run very quickly, and I’m pleased with my initial results.  In one test, I fed in 42 random characters that were actually generated in game and looked at all of the words that were found.  Those 42 letters could be used to form 12,918 words in my dictionary, and it took about 0.15 seconds to find them all.  I consider this good performance, but it still will not run in just one frame in the game, so I’m going to have to partition the search to run across multiple frames (or run it on another thread).

I also scored each word and ranked the results by score.  My word score algorithm takes into account the rarity of each letter and also gives bonus points for length.  Scoring a longer word gives a lot more points than a shorter word (per letter, it is practically exponential).  In this test, the highest scoring word was “housewifeliness” which scored a 64.  And yes, that is a real word.
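As a rough illustration of that kind of scoring (the rarity table and length bonus below are placeholder numbers, not my actual tuning):

    // Illustrative word scorer: rare letters are worth more, and the length bonus
    // grows much faster than linearly.  All of the numbers are placeholders.
    using System;
    using System.Collections.Generic;

    static class WordScorer
    {
        // Hypothetical rarity weights for a few letters; anything not listed counts as 1.
        static readonly Dictionary<char, int> Rarity = new Dictionary<char, int>
        {
            { 'q', 8 }, { 'z', 8 }, { 'x', 6 }, { 'j', 6 }, { 'k', 4 }, { 'w', 3 }, { 'f', 3 }
        };

        public static int Score(string word)
        {
            int letterPoints = 0;
            foreach (char c in word)
            {
                int value;
                letterPoints += Rarity.TryGetValue(c, out value) ? value : 1;
            }

            // Length bonus: a 15-letter word earns far more per letter than a 4-letter word.
            double lengthBonus = Math.Pow(1.3, word.Length);

            return letterPoints + (int)lengthBonus;
        }
    }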

These results have made me start to think of how to scale the difficulty of the AI.  There are multiple factors that add up to difficulty in this game:

  1. How good the AI is at making long and complicated (and therefore high-scoring) words.
  2. How fast the AI actually plays the game.
  3. The AI’s knowledge and skill at playing the game.
  4. Character strength, spells available, and anything else tied to a character’s level.

In this case, we are more concerned with #1.  Using the method I have for finding words, I can always find the optimal word to make with regard to score.  That kind of AI would be too difficult for new players (low level characters), so we need a way to “dumb down” the AI.  I haven’t decided exactly how to do this yet, but I have some ideas.  First, the AI could pick one of the non-optimal solutions some percentage of the time.  This would have the effect of making a “mistake” every once in a while, and less intelligent AIs would make mistakes more often.  Second, I could have the search system skip looking for optimal solutions and instead stop searching when some criterion is met based on AI level.  For example, a less intelligent AI might stop searching as soon as it found a shorter word, which would come across as a player with a smaller vocabulary.
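Here is a rough sketch of the first idea (the names and the skill-to-mistake mapping are purely illustrative):

    // Illustrative difficulty scaling: with some probability based on skill, the AI
    // plays a word from further down the ranked list instead of the best one.
    using System;
    using System.Collections.Generic;

    static class AiWordPicker
    {
        static readonly Random Rng = new Random();

        // rankedWords is assumed to be sorted best-first by score.
        // skill runs from 0.0 (beginner AI) up to 1.0 (always optimal).
        public static string PickWord(List<string> rankedWords, double skill)
        {
            double mistakeChance = 1.0 - skill;          // weaker AIs make mistakes more often

            if (rankedWords.Count > 1 && Rng.NextDouble() < mistakeChance)
            {
                // "Mistake": pick something from the lower-scoring part of the list.
                int index = 1 + Rng.Next(rankedWords.Count - 1);
                return rankedWords[index];
            }

            return rankedWords[0];                       // otherwise play the optimal word
        }
    }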

When you combine those techniques with speed and character strength, that should be a good way to create a lot of AI characters that all have different strengths and weaknesses and therefore present different challenges.


Avoiding Unnecessary Content Rebuilds

August 20, 2009

As I continue to work on my project, I’ve run into a nasty situation where a certain sequence of events can lead to a complete content rebuild of the game project.  Sometimes this isn’t a big problem, but my game now takes 2 minutes or so to completely rebuild from scratch.

My situation is as follows: I have a main game project and two other class libraries.  The first class library is the root of the project dependency tree; it’s a library containing shared components.  The second library is a content pipeline extension containing my custom importers and processors.  If I make a change in the first library (let’s call this the “data” library), the content pipeline library needs to be rebuilt.  OK, no problem, it builds in about 5 seconds flat.  The real issue is that when Visual Studio detects that the main game project uses content processors from that pipeline extension library, it decides that all of the game content needs to be rebuilt.  This is bad because it takes so long to rebuild my project.

The solution is to break up the “data” library into two libraries: one contains only the objects that the content pipeline extension library requires, and the other contains everything else.  The library holding the rest of the shared data objects is the one that changes frequently anyway, so the content pipeline library will not need to be rebuilt nearly as often.  That speeds up the entire project build, thanks to the incremental build behavior of the content pipeline.  I’m glad I was able to work around this issue.
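To make the change concrete, here is roughly how the project dependencies look before and after the split (the library names are made up for illustration):

    Before the split:

        Data --> ContentPipelineExt --> (game content build)
        Data ------------------------> Game

        Any change to Data rebuilds ContentPipelineExt, and that triggers a
        full content rebuild for the game project.

    After the split:

        PipelineTypes --> ContentPipelineExt --> (game content build)
        GameData ------------------------------> Game

        The frequently changing code lives in GameData, so ContentPipelineExt
        (and therefore the content) rarely needs to rebuild.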