Multiplayer online games are rapidly growing in popularity, attracting players around the world. That popularity has also significantly raised the bar for game designers: players expect games to be balanced and polished.
To create high-quality gameplay, game developers usually adjust the game balance iteratively:
- Run thousands of playtesting sessions with real testers.
- Incorporate their feedback and redesign the game.
- Repeat steps one and two until both testers and game designers are happy with the result.
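The loop above can be sketched in a few lines of Python. Everything here is an illustrative placeholder; in practice the "run playtests" step means real testers playing the game and reporting back.

```python
# Minimal sketch of the iterative balancing loop; all names are hypothetical.
def run_playtests(config):
    # Placeholder: real testers play the game and report whether it feels balanced.
    return {"balanced": config.get("tuning_passes", 0) >= 3}

def balance_game(config):
    passes = 0
    while True:
        config["tuning_passes"] = passes
        feedback = run_playtests(config)
        if feedback["balanced"]:
            return config, passes
        passes += 1  # redesign based on feedback and repeat

config, iterations = balance_game({})
```

The expensive part is hidden inside `run_playtests`: each pass through the loop costs real human time, which is exactly what the ML approach below tries to reduce.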
This process is slow and imperfect. The more complex the game, the easier it is for a minor oversight to throw it off balance. In games with many different roles, it is very difficult to balance special abilities while keeping plot variations realistic and non-repetitive.
We test game balance variants with machine learning (ML). By running millions of simulations with trained agents and collecting the resulting data, this ML testing methodology lets game developers make the game more enjoyable for real players far more efficiently.
How to begin designing your own game
We developed our project as a prototype that relies heavily on machine learning. Our journey started with the game rules, which we designed in advance around a general concept, aiming to expand the game's possibilities and make it more challenging.
The game's basic principles are:
- Players control creatures that can attack (using their attack attribute), be attacked (which reduces their health score), or cast spells that produce special effects.
- Creatures are summoned into biomes of limited size that physically occupy space on the playing field. Each creature has a preferred biome and takes repeated damage when placed in the wrong biome or in one that is over capacity.
- Each player controls one main creature that starts in a base state. It can evolve and grow stronger by consuming other creatures, which also requires a certain amount of bond energy, generated by various gameplay mechanics.
- The game ends when a player reduces the enemy's health to 0.
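The biome and creature rules above can be expressed as a small data model. This is a hypothetical sketch, not our actual implementation; class and field names are illustrative, and the penalty of 1 health per bad placement is an arbitrary example value.

```python
from dataclasses import dataclass, field

# Illustrative data model for the rules above; names and values are assumptions.
@dataclass
class Creature:
    name: str
    attack: int           # damage dealt when attacking
    health: int           # reduced when attacked
    preferred_biome: str  # takes repeated damage outside this biome

@dataclass
class Biome:
    kind: str
    capacity: int  # biomes have limited size
    occupants: list = field(default_factory=list)

    def place(self, creature: Creature) -> bool:
        """Place a creature; mismatched or overcrowded placement damages it."""
        overcrowded = len(self.occupants) >= self.capacity
        mismatched = creature.preferred_biome != self.kind
        self.occupants.append(creature)
        if overcrowded or mismatched:
            creature.health -= 1  # example per-placement penalty
        return not (overcrowded or mismatched)

swamp = Biome(kind="swamp", capacity=2)
frog = Creature("frog", attack=2, health=5, preferred_biome="swamp")
placed_ok = swamp.place(frog)
```

A correctly placed creature keeps its health, while a mismatched one starts taking the penalty immediately.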
We decided to use a relatively conventional model and studied popular games for best practices. One example we evaluated was AlphaGo, in which a convolutional neural network (CNN) learns to predict the probability of winning from a given game state.
After training an initial model on games in which moves were selected at random, we had the agents play against each other, iteratively collecting game data that was then used to train a new agent. With each iteration, the quality of the training data improved, as did the agents' ability to play.
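This self-play bootstrapping can be sketched as a simple loop. The `Agent` class and the toy win model below are hypothetical stand-ins for the real trained network, and `train` is a placeholder for the actual supervised training step.

```python
import random

random.seed(0)

# Toy stand-in for the trained model; the skill-based win model is illustrative.
class Agent:
    def __init__(self, skill=0.0):
        self.skill = skill

    def play_game(self, opponent):
        # Higher-skill agents win more often in this toy model.
        p = 0.5 + 0.1 * (self.skill - opponent.skill)
        return 1 if random.random() < min(max(p, 0.05), 0.95) else 0

def train(agent, game_records):
    # Placeholder for supervised training on (state, outcome) pairs;
    # each iteration's data is a little better, so playing strength grows.
    return Agent(skill=agent.skill + 0.5)

agent = Agent()  # iteration 0: effectively random play
for iteration in range(5):
    # Self-play: collect outcomes against a copy of the current agent.
    records = [agent.play_game(Agent(skill=agent.skill)) for _ in range(100)]
    agent = train(agent, records)
```

The key idea is that the data-collection opponent is always the latest agent, so the training distribution keeps pace with the agent's own strength.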
When choosing how to represent the game state passed to the model's input, we found that feeding a CNN an encoded "image" of the state gave the best results. It outperformed all the procedural baseline agents and other network types (for example, fully connected ones). The chosen model architecture is tiny and runs on the CPU without noticeable delays.
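Encoding a game state as an "image" typically means stacking one feature plane per property over the board grid. The sketch below is an assumed encoding, not our production one: board size, channel list, and the health normalization constant are all illustrative.

```python
import numpy as np

# Illustrative state encoding: the board becomes a multi-channel "image",
# one plane per feature. Board size and channels are example assumptions.
BOARD_W, BOARD_H = 8, 6
CHANNELS = ["creature_present", "health", "biome_match"]

def encode_state(placements):
    """placements: list of (x, y, health, in_preferred_biome) tuples."""
    planes = np.zeros((len(CHANNELS), BOARD_H, BOARD_W), dtype=np.float32)
    for x, y, health, in_preferred_biome in placements:
        planes[0, y, x] = 1.0                     # a creature occupies the cell
        planes[1, y, x] = health / 10.0           # health, normalized (assumed max 10)
        planes[2, y, x] = 1.0 if in_preferred_biome else 0.0
    return planes

state = encode_state([(2, 3, 7, True), (5, 1, 4, False)])
```

A tensor like this can be consumed by a small CNN exactly as a low-resolution image would be, which is what makes such a compact architecture sufficient.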
Besides driving the game AI's decision-making, we also used the model to show players an estimate of their winning probability during the game.
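If the value model outputs a single raw score, turning it into the displayed win probability is one squashing function away. This assumes a logit-style output, which may differ from the actual model head.

```python
import math

# Illustrative conversion of a raw model score into a displayed win probability,
# assuming the model outputs a single logit.
def win_probability(logit: float) -> float:
    return 1.0 / (1.0 + math.exp(-logit))
```

A score of 0 maps to an even 50% chance, and large positive scores approach certainty.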
This technique let us simulate far more game variations than live players could play in the same amount of time. After collecting data from the simulated games, we analyzed the results to find imbalances between the two player decks we had designed. This made a good starting point for our project's alpha run.
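A first-pass imbalance check over simulated games can be as simple as comparing win rates. The deck names, the sample results, and the 5% tolerance threshold below are all made up for illustration.

```python
from collections import Counter

# Toy analysis of simulated match outcomes between two decks; a win rate
# far from 50% flags a likely imbalance. Data and threshold are illustrative.
results = ["deck_a"] * 620 + ["deck_b"] * 380  # winner of each simulated game

def win_rates(winners):
    counts = Counter(winners)
    total = sum(counts.values())
    return {deck: counts[deck] / total for deck in counts}

rates = win_rates(results)
imbalanced = abs(rates["deck_a"] - 0.5) > 0.05
```

With millions of simulated games, even small systematic advantages become statistically visible long before human playtesters would notice them.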
If you want to create your own game, I hope this explanation helps you find the right approach for your own experiments and makes your scenario more varied, unpredictable, and engaging.
Below is one particular fragment of our work on improving the game.
Based on observations collected over several phases of testing, we made some general improvements. We reduced the amount of bond energy required for an evolution, which the chimera needs to develop, from 3 units to 1. We also added a rest period for the other creatures, doubling the time it takes at least some player characters to recover from hit actions. Additionally, we randomized the damage creatures deal based on the surrounding environment.
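Changes like these are natural to express as configuration deltas. The numeric values below mirror the text; the config keys, the base recovery time, and the ±25% damage spread are illustrative assumptions.

```python
import random

# Sketch of the balance changes above as configuration deltas.
# Key names and the base recovery time are illustrative assumptions.
config = {
    "evolution_energy_cost": 3,
    "hit_recovery_time": 1.0,
}
config["evolution_energy_cost"] = 1  # reduced from 3 units to 1
config["hit_recovery_time"] *= 2     # doubled rest period after a hit

def environmental_damage(base: int, rng: random.Random) -> int:
    # Randomize damage within an assumed +/-25% band for the environment.
    return round(base * rng.uniform(0.75, 1.25))
```

Keeping balance knobs in a single config makes each ML-guided tuning pass a data change rather than a code change.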
After changing the rules and replaying the game, both among ourselves and with the team of testers, we saw that these updates were moving the game in the right direction: the average number of evolutions per game increased, and no single hero dominated any longer.
Based on the collected data, we increased both the players' initial health and the amount of health restored by healing spells. This was meant to encourage longer matches and more diverse strategies; in particular, it allowed a hero to survive for quite a long time. To push the agent toward summoning creatures correctly and placing them in biomes strategically, we increased the penalty for placing creatures in incorrect or crowded biomes. We also narrowed the gap between the strongest and weakest creatures with small attribute adjustments. After several rounds of similar changes, we finally got positive feedback from the testers. The combination of ML and human testing thus brought fruitful results for our game design.
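One simple way to narrow the gap between the strongest and weakest creatures is to pull each attribute toward the mean. The creature names, values, and blending factor below are invented for illustration.

```python
# Illustrative attribute rebalancing: pull each creature's attack toward
# the group mean to narrow the strongest/weakest gap. Data is made up.
attacks = {"imp": 2, "drake": 9, "golem": 6}

def narrow_gap(values, factor=0.5):
    mean = sum(values.values()) / len(values)
    return {k: round(v + factor * (mean - v), 1) for k, v in values.items()}

rebalanced = narrow_gap(attacks)
```

With `factor=0.5`, every creature moves halfway to the mean, so relative strengths are preserved while extremes shrink.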
Playtesting typically takes months to uncover imbalances in new game layouts. With our methodology, we were able both to identify potential sources of imbalance and, within a matter of days, to produce configurations that mitigate their impact. We learned that a relatively conventional neural network is enough to achieve strong play against both human players and conventional game AI. Such agents can be used in other ways as well, for example, to train new players or to uncover unexpected strategies. Since this approach sped up my team's project, I hope this work inspires further research and helps other developers integrate machine learning into game development.