Jason Lynch , 4th Year Individual Portfolio (The Thinking City) - A.I Development

Here you will find a step-by-step layout of the development behind the A.I in "The Thinking City", from generating a simple NavMesh, to animating the movement, all the way up to the A.I state machine itself.

Back To Homepage

Relevance of A.I in our game:

Since "The Thinking City" is a single player game it requires a fully functional A.I with a variety of states such as Idle, Patrol, Alerted, and Pursuit. This is especially true when it comes to the vertical slice of the game my team and I have chosen to build. The lab section of our game that we've chosen to build is a highly tense section and as such requires a threat/obstacle to the player, hence the introduction of the two robots that will be patrolling the area and will hunt you or any noise they hear down with great determination. These two A.Is will have the ability to not only chase the player if seen and attack when close enough but they will also be able to hunt down any noise they may hear from an item you knocked off a table or even your very footsteps if you run along the metal floors.

All these things help convey the atmosphere and tone that we are trying to portray in our game which is why the A.I is not only a big part of the project but also an extremely crucial one.

Technical Challenges:

In terms of technical challenges there are a few that come to mind straight away such as:

  1. I've never built an A.I state machine before
  2. I don't know the full extent of Unity's systems (it could have built-in systems to help)
  3. It has to be done within a deadline
  4. It needs to be balanced with my other tasks/responsibilities

To combat some of these problems I have been building up a pool of resources to look into in order to get a better understanding of how to build an A.I state machine and what help Unity's built-in systems might provide. You can see these to the right of this paragraph. It is my hope that using these resources to combat the first two problems will have a knock-on effect on points 3 and 4, as having references to look at should speed up development compared to doing it alone, and if I can get the A.I developed quicker I will have more time for my other tasks and responsibilities.

References:

I find myself to be much more of a practical learner than a person who picks things up by reading or seeing them, especially when it comes to things like developing in Unity or programming. To this end I have amassed a pool of video resources/online courses that I can use to help me learn the engine, code the A.I, and pick up best practices along the way. Below you can find a link to the resources that I am using throughout development.

It may not seem like a lot of references, but the amount of video tutorials in each will supply me with a lot of information for quite some time.

Keywords searched: Unity, A.I, Unity game development 

A.I Development Process:

Creating a Navigation Mesh for a game

The first thing needed to allow an "Agent" (object) to move around a map is a navigation mesh, or "NavMesh". This is an area of polygons made up of a series of triangles and quadrilaterals that tells the Agent where it can and cannot move on the map. Luckily, Unity has a button for generating a NavMesh for a level, along with values for configuring the agents and the NavMesh such as the agent's height, radius, step height, drop height, jump distance, etc. The NavMesh's max slope can also be altered, which controls the cutoff point for what the NavMesh considers an allowable path.

Creating a Navigation Mesh for a game video

Download Baking NavMesh.mp4 [0.51MB]

A.I Waypoint Network Setup

Now that the NavMesh is set up it's time to get some Agents moving around our scene. However, before I can do that I need to feed them locations to move to. To do this I set up a Waypoint Network. This is a simple list of empty game objects whose transforms are used as locations that the A.I can move between. This is a very simple and small script as it only needs 4 things:

  1. The type of path display mode
  2. The value of the start of the list
  3. The length of the list 
  4. And the list itself

Once this script is attached to an empty GameObject it acts as a holder for the list of waypoints. Simple. You can find the script below as well as some screenshots of the script in action.
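As a rough illustration of how small the holder is, a minimal sketch might look like the following (the field names are assumptions; the attached script is the actual version used in the project):

```csharp
using System.Collections.Generic;
using UnityEngine;

// How the waypoint network should be drawn in the scene view (used by the editor script later on).
public enum PathDisplayMode { None, Connections, Paths }

// Holder for the list of waypoint transforms the A.I can move between.
public class WaypointNetwork : MonoBehaviour
{
    public PathDisplayMode DisplayMode = PathDisplayMode.Connections; // 1) the type of path display mode
    public int UIStart = 0;                                           // 2) the start of the list segment to display
    public int UIEnd = 0;                                             // 3) the end of the displayed segment
    public List<Transform> Waypoints = new List<Transform>();         // 4) the list itself
}
```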

 

A.I Waypoint Network Script

A.I Waypoint Network Setup Done


A.I Waypoint Network Editor Display

Once the waypoint network setup was done I decided that it would be good to have the waypoints display inside the scene view window so we would know the path order. This script was a little trickier to build. It required overriding Unity's OnInspectorGUI() function, which gave me new options in the editor window such as the path display mode and the option to select two points in the list to draw a line between. The other function I had to use was Unity's OnSceneGUI() function. This function was needed as it is called whenever an object using this editor is selected in the scene view window. Using this function, along with code of my own that it calls, I was able to get Unity to draw lines between the waypoints inside a network. You can see the scripts and a video of the results below.
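The rough shape of that editor script is sketched below. It is not the exact attached script (class and field names are assumed, and the Paths display mode is left out), but it shows the OnInspectorGUI()/OnSceneGUI() approach:

```csharp
using UnityEditor;
using UnityEngine;

// Custom inspector for the WaypointNetwork sketched earlier.
[CustomEditor(typeof(WaypointNetwork))]
public class WaypointNetworkEditor : Editor
{
    public override void OnInspectorGUI()
    {
        serializedObject.Update();
        WaypointNetwork network = (WaypointNetwork)target;

        // Extra options in the inspector: the display mode and the range of waypoints to draw.
        network.DisplayMode = (PathDisplayMode)EditorGUILayout.EnumPopup("Display Mode", network.DisplayMode);
        int max = Mathf.Max(0, network.Waypoints.Count - 1);
        network.UIStart = EditorGUILayout.IntSlider("Waypoint Start", network.UIStart, 0, max);
        network.UIEnd = EditorGUILayout.IntSlider("Waypoint End", network.UIEnd, 0, max);

        // Draw the waypoint list itself with the default list UI.
        EditorGUILayout.PropertyField(serializedObject.FindProperty("Waypoints"), true);
        serializedObject.ApplyModifiedProperties();
    }

    // Called by the editor while an object using this inspector is selected in the scene view.
    private void OnSceneGUI()
    {
        WaypointNetwork network = (WaypointNetwork)target;

        // Label every waypoint so the path order is visible.
        for (int i = 0; i < network.Waypoints.Count; i++)
        {
            if (network.Waypoints[i] != null)
                Handles.Label(network.Waypoints[i].position, "Waypoint " + i);
        }

        // Connect consecutive waypoints with lines when the Connections display mode is selected.
        if (network.DisplayMode == PathDisplayMode.Connections)
        {
            for (int i = 0; i < network.Waypoints.Count - 1; i++)
            {
                if (network.Waypoints[i] != null && network.Waypoints[i + 1] != null)
                    Handles.DrawLine(network.Waypoints[i].position, network.Waypoints[i + 1].position);
            }
        }
    }
}
```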

A.I Waypoint Network Editor Display Script

A.I Waypoint Network Editor Display Video


A.I Basic Level Navigation

The video below shows the first progress in the development of our game's (The Thinking City) A.I. To create this movement I baked the navigation mesh into the map and attached a "Nav Mesh Agent" component to the cylinders, allowing them to interact with the navigation mesh. I then created a C# script which used the waypoints created earlier and the A* ("A Star") algorithm to calculate the quickest possible path to the current waypoint and move towards it, avoiding obstacles like walls. As you can see from the video below, once a waypoint is reached they automatically move toward the next waypoint on the list. The script I created is also attached below, but note that this is just a test script; the A.I's code will be much cleaner.
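A stripped-down version of that test script might look like this (assuming the WaypointNetwork holder sketched earlier; the attached file is the actual test code):

```csharp
using UnityEngine;
using UnityEngine.AI;

// Moves a NavMeshAgent from waypoint to waypoint around the baked NavMesh.
[RequireComponent(typeof(NavMeshAgent))]
public class NavigationTest : MonoBehaviour
{
    public WaypointNetwork Network;      // the waypoint network created earlier
    private NavMeshAgent _agent;
    private int _currentIndex = 0;

    void Start()
    {
        _agent = GetComponent<NavMeshAgent>();
        SetNextDestination(false);
    }

    void Update()
    {
        // Once the agent has finished its current path, move on to the next waypoint.
        if (!_agent.pathPending && _agent.remainingDistance <= _agent.stoppingDistance)
            SetNextDestination(true);
    }

    void SetNextDestination(bool increment)
    {
        if (Network == null || Network.Waypoints.Count == 0) return;

        if (increment)
            _currentIndex = (_currentIndex + 1) % Network.Waypoints.Count;

        Transform waypoint = Network.Waypoints[_currentIndex];
        if (waypoint != null)
            _agent.SetDestination(waypoint.position); // the agent plans a path over the NavMesh and follows it
    }
}
```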

A.I Basic Level Navigation Script

A.I Basic Level Navigation Video

Download Basic NavMesh Navigation [88.2MB]

A.I Re-pathing and NavMesh carving

After setting up the NavMesh and A.I navigation I worked on further advancing the A.I by introducing re-pathing and NavMesh carving. The NavMesh carving can be seen in the video below. When the panel is dragged in front of the doorway it removes the NavMesh that linked the room with the outside hallway, cutting off the Agent's ability to enter the room.

The A.I has been modified to account for the addition of the carving feature. The A.I will now realize when its path has been blocked off and will adjust to compensate. If it is still possible to reach its waypoint via a different route it will now take that route. If not, it will move to the nearest possible location to its waypoint before moving on to the next, which can be seen in the video below.
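Building on the navigation sketch above, the re-pathing check boils down to watching the agent's path status each frame; the moving panel itself just needs a NavMeshObstacle component with carving enabled. This is a hedged fragment of the Update() method rather than the attached script:

```csharp
// Inside the Update() of the NavigationTest sketch above.
void Update()
{
    if (_agent.pathPending) return;

    bool arrived = _agent.remainingDistance <= _agent.stoppingDistance;

    if (_agent.pathStatus == UnityEngine.AI.NavMeshPathStatus.PathInvalid)
    {
        // Carving removed the route entirely: give up on this waypoint and move to the next one.
        SetNextDestination(true);
    }
    else if (arrived)
    {
        // Either the waypoint was reached, or (for a partial path) the nearest
        // reachable point to it was reached, so move on to the next waypoint.
        SetNextDestination(true);
    }
}
```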

A.I Re-pathing video

Download A.I Re-Pathing [5.81MB]

NavMesh carving video

Download NavMesh carving [10.45MB]

Humanoid Re-targeting

Now that I have a working NavMesh and navigation agents moving around said mesh, I decided to take a step away and look at incorporating a character model and animation into the movement. One handy thing I've discovered is that if a character is humanoid in its design then you can use an approach called "Humanoid Re-targeting". This essentially means you can re-configure the skeletal rig of a character model to work with any animation that is built for a humanoid model. This saves a massive amount of time as you can re-use animations across multiple models instead of having to make individual animations for each and every one. Below you can see a video of some popular game characters of various shapes and sizes doing the Thriller dance, because why not. The important thing is that they all follow the same humanoid shape, which allows me to map this animation to all of them.

Humanoid Re-targeting Video

Download Humanoid Re-targeting.mp4 [9.44MB]

A.I Navigation & Root Motion (With Vs Without)

When developing the navigation style for the A.I in our game I looked into navigation both with and without root motion. There are pros and cons to both which needed to be considered.

  • Without root motion the character doesn't seem very realistic and almost looks boring. The animation simply plays over the navigation system and there is zero communication between them. This leads to an effect known as "sliding", where the animation doesn't match the movement of the character. You can see this in the left camera in the video. This is NOT a desired trait, especially in our game. The one thing non-root motion has going for it is that it turns corners far quicker than root motion does, as you can see in the video.
  • Root motion appears far superior to non-root motion. The character actually seems more realistic in its movement and behavior. The navigation system is passed the motion contained inside the animations, which tells it how fast it should be moving. This completely eliminates the "sliding" effect. The one drawback to this approach is the turning, as you can see in the video. The turns aren't as sharp as without root motion, which wouldn't fit our game at all as the A.I needs to be able to turn quickly. The reason for this is that since root motion uses the movement contained in the animations, we would need animations to account for all possible angles of turning, or near enough.

We just don't have enough time to make hundreds of turn animations, so to combat this I am using a combination of the two styles: I use the contained root motion while moving, but override it and use a standard rotation for corners that are under a 90 degree turn. You can see from the video on the right, where I have implemented this, that the character uses root rotation for 90 degree turns where necessary but otherwise doesn't use root rotation. Below you can see a video of two characters, one without any root motion and one with a combination of root motion and non-root rotation on turns under 90 degrees (left and right respectively). It is pretty clear that root motion creates a vastly superior look and feel to the character's movement than without.
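A hedged sketch of that mixed approach is shown below. The animator parameter names ("Angle" and "Speed") and the field names are assumptions; the attached scripts are the working versions:

```csharp
using UnityEngine;
using UnityEngine.AI;

// Mixed root motion: the animation drives the movement speed, while turns under
// 90 degrees are handled with a plain Slerp so corners stay sharp.
[RequireComponent(typeof(NavMeshAgent), typeof(Animator))]
public class MixedRootMotionNavigation : MonoBehaviour
{
    public bool MixedMode = true;
    private NavMeshAgent _agent;
    private Animator _animator;
    private float _angle;

    void Start()
    {
        _agent = GetComponent<NavMeshAgent>();
        _animator = GetComponent<Animator>();
        _agent.updateRotation = false; // rotation is handled here, not by the agent
    }

    void Update()
    {
        // Express the agent's desired velocity in local space and feed it to the blend tree.
        Vector3 localDesired = transform.InverseTransformVector(_agent.desiredVelocity);
        _angle = Mathf.Atan2(localDesired.x, localDesired.z) * Mathf.Rad2Deg;

        _animator.SetFloat("Angle", _angle);
        _animator.SetFloat("Speed", localDesired.z, 0.1f, Time.deltaTime);

        // Shallow turns: override root rotation with a straightforward look rotation.
        if (MixedMode && Mathf.Abs(_angle) < 90.0f && _agent.desiredVelocity.sqrMagnitude > 0.01f)
        {
            Quaternion look = Quaternion.LookRotation(_agent.desiredVelocity, Vector3.up);
            transform.rotation = Quaternion.Slerp(transform.rotation, look, 5.0f * Time.deltaTime);
        }
    }

    void OnAnimatorMove()
    {
        // Push the movement baked into the animation back into the agent so the
        // animation and the navigation stay in sync (no "sliding").
        if (Time.deltaTime > 0.0f)
            _agent.velocity = _animator.deltaPosition / Time.deltaTime;

        // Sharp (90 degree or more) turns: use the rotation contained in the animation itself.
        if (!MixedMode || Mathf.Abs(_angle) >= 90.0f)
            transform.rotation = _animator.rootRotation;
    }
}
```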

A.I Navigation & Root Motion (With Vs Without) Scripts

A.I Navigation & Root Motion (With Vs Without) Video

Download With Vs Without Root Motion.mp4 [22.19MB]

A.I State Machine : Setting up the basics

Before I start on writing any scripts there are a few things that need to be set up first that the state machine will rely on:

  1. Layers 
  2. Tags
  3. The sensor collider
  4. The target collider (Target Trigger)

1) Layers: The A.I state machine will use several layers: AI Entity, AI Entity Trigger, AI Trigger, Visual Trigger, Audio Trigger, Player, and AI Body Part. The main layers of focus at the moment are AI Entity, AI Entity Trigger, AI Trigger, Visual Trigger, and Audio Trigger.

  • The AI Entity layer will be assigned to the character model itself identifying it as the actual AI Entity.
  • The AI Entity Trigger layer is assigned to the target trigger. This allows the AI to realize when it has reached a location such as a waypoint, sound, sighting, etc. 
  • The AI Trigger is a layer that will act as the A.I's hearing and sight range. Anything entering its detection area is sent to the state machine for processing.
  • The Visual Trigger layer is assigned to any object that is a visual threat to the AI.
  • The Audio Trigger layer is assigned to any object that is an audible threat to the AI.

2) Tags: The tags act as the identification of objects that the A.I state machine can interact with. For example, you want the A.I state machine to be able to enter the attack state when it gets close to the player, who will have the Player tag. The tags needed at the moment are Player, AI Visual, and AI Sound Emitter.

  • The Player tag will solely be assigned to the Player so the AI knows who they are.
  • The AI Visual tag will be assigned to any threat that the AI can see
  • The AI Sound Emitter tag will be assigned to any object emitting a sound that the AI can hear

3) The Sensor Collider: The sensor collider is an empty game object that is centered on the A.I model and has a sphere collider attached that will act as the A.I's eyes and ears. It is marked with the AI Trigger layer as it lets the AI know when something has entered its hearing/visual range. A script will be required at a later point to link it to the state machine. It is important that the sensor is a child of the A.I so it moves with them.

4) The Target Trigger: The target trigger is essentially a marker that will act as the A.I's location to go to. When on patrol the target trigger is placed on the waypoint; when the player is in view it is on the player; and when the A.I hears a sound it is at the location the sound was generated. The A.I then uses the navigation system to move to the target trigger's location. Once reached, it acts out the appropriate action before the target trigger moves on to the next location. This must not be a child of the A.I so it can move freely without being affected by the A.I's movement.

With all of that done the setup for the A.I is complete and the scripts for the state machine can be started.

 

 

1) Layer Setup


2) Tag Setup


3) Sensor Setup


4) Target Trigger Setup


The A.I Sensor Script

The first script related to the A.I state machine is a small one held by the Sensor object. It holds an instance of the state machine as well as three functions that trigger when an object enters, stays in, or leaves the sensor's trigger radius. This script is essentially the A.I's eyes and ears. It is a very small but important script. The script is attached below and the code is commented to explain what it does. You may see errors in the code, as at the moment it relates to a state machine script that I haven't yet written but will do very soon.
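The shape of the script is roughly the following (the OnTriggerEvent signature and the AITriggerEventType enum are assumptions based on the description; the attached file is the real script):

```csharp
using UnityEngine;

// Forwards Unity's trigger messages from the sensor collider to the owning state machine.
public class AISensor : MonoBehaviour
{
    private AIStateMachine _parentStateMachine;
    public AIStateMachine ParentStateMachine { set { _parentStateMachine = value; } }

    void OnTriggerEnter(Collider col)
    {
        if (_parentStateMachine != null)
            _parentStateMachine.OnTriggerEvent(AITriggerEventType.Enter, col);
    }

    void OnTriggerStay(Collider col)
    {
        if (_parentStateMachine != null)
            _parentStateMachine.OnTriggerEvent(AITriggerEventType.Stay, col);
    }

    void OnTriggerExit(Collider col)
    {
        if (_parentStateMachine != null)
            _parentStateMachine.OnTriggerEvent(AITriggerEventType.Exit, col);
    }
}
```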

A.I Sensor Script

A.I Sensor Attached Image


Current State Machine Diagram


The A.I State Machine

The A.I state system is one of the biggest chunks of the A.I as a whole. This is what the state machine will use as its foundation, and the other state machine scripts will inherit from it. The base AI State Machine script controls the setup for both the target trigger and the sensor trigger. It also handles the waypoint network and passes the waypoint to the patrol state, which will be built later. The script also handles the transition between states, as well as passing trigger functionality up to the current state script.

It would take an awfully long time to type it all out here, so I've attached the script below and added comments on how everything works.

NOTE : The code will not function as-is because it depends on the A.I Robot State Machine script that will inherit from this script.
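The overall shape of the class is roughly the skeleton below. It is heavily trimmed, with assumed names throughout (including the AIStateType and AITriggerEventType enums and the helper members); the attached script is the full version:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.AI;

public enum AIStateType { None, Idle, Alerted, Patrol, Pursuit, Attack, Feeding }
public enum AITriggerEventType { Enter, Stay, Exit }

// Abstract base class shared by all A.I state machines in the game.
public abstract class AIStateMachine : MonoBehaviour
{
    [SerializeField] protected AIStateType _currentStateType = AIStateType.Idle;
    [SerializeField] protected SphereCollider _sensorTrigger = null;   // the sensor collider (AI Trigger layer)
    [SerializeField] protected SphereCollider _targetTrigger = null;   // the target trigger marker
    [SerializeField] protected WaypointNetwork _waypointNetwork = null;

    protected Dictionary<AIStateType, AIState> _states = new Dictionary<AIStateType, AIState>();
    protected AIState _currentState = null;
    protected int _rootPositionRefCount = 0;
    protected int _rootRotationRefCount = 0;
    protected int _currentWaypoint = -1;

    public NavMeshAgent NavAgent { get; private set; }
    public Vector3 TargetPosition { get; protected set; }
    public bool CinematicEnabled { get; set; }
    public bool UseRootPosition { get { return _rootPositionRefCount > 0; } }
    public bool UseRootRotation { get { return _rootRotationRefCount > 0; } }

    // Called by the Root Motion Configurator behaviours on the animator states.
    public void AddRootMotionRequest(int rootPosition, int rootRotation)
    {
        _rootPositionRefCount += rootPosition;
        _rootRotationRefCount += rootRotation;
    }

    // Advances to the next waypoint in the network and sets it as the current destination.
    public virtual void SetNextWaypoint()
    {
        if (_waypointNetwork == null || _waypointNetwork.Waypoints.Count == 0) return;

        _currentWaypoint = (_currentWaypoint + 1) % _waypointNetwork.Waypoints.Count;
        Transform waypoint = _waypointNetwork.Waypoints[_currentWaypoint];
        if (waypoint == null) return;

        TargetPosition = waypoint.position;
        if (NavAgent != null) NavAgent.SetDestination(TargetPosition);
    }

    protected virtual void Start()
    {
        NavAgent = GetComponent<NavMeshAgent>();

        // Register every AIState-derived component on this object and hand each a reference back to us.
        foreach (AIState state in GetComponents<AIState>())
        {
            if (!_states.ContainsKey(state.GetStateType()))
            {
                _states[state.GetStateType()] = state;
                state.SetStateMachine(this);
            }
        }

        if (_states.TryGetValue(_currentStateType, out _currentState))
            _currentState.OnEnter();
    }

    protected virtual void Update()
    {
        if (_currentState == null) return;

        // Let the active state run, then switch states if it asks for a different one.
        AIStateType newStateType = _currentState.OnUpdate();
        if (newStateType != _currentStateType && _states.TryGetValue(newStateType, out AIState newState))
        {
            _currentState.OnExit();
            newState.OnEnter();
            _currentState = newState;
            _currentStateType = newStateType;
        }
    }

    // Called by the AISensor script; simply passed up to the current state for processing.
    public virtual void OnTriggerEvent(AITriggerEventType type, Collider other)
    {
        if (_currentState != null)
            _currentState.OnTriggerEvent(type, other);
    }
}
```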

A.I State Machine Script

Current State Machine Diagram


The A.I State Machine Link

Now that the A.I State Machine base class is constructed I need to build a script that will link the root motion script I will be making to the A.I state machine itself. This will be done through the A.I State Machine Link script. It is a very small script that holds a reference to the A.I State Machine class so that the Root Motion Configurator script can tell the state machine whether or not it should use the root rotation and root position contained in an animation.

You can find the script below with comments. 
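As the description suggests, the whole thing is only a few lines. A sketch is below, shown here as a StateMachineBehaviour attached to animator states (that detail and the property name are assumptions; the attached script is the real one):

```csharp
using UnityEngine;

// Base class for behaviours attached to animator states (such as the Root Motion
// Configurator) so they can hold a reference back to the A.I State Machine.
public class AIStateMachineLink : StateMachineBehaviour
{
    protected AIStateMachine _stateMachine;
    public AIStateMachine StateMachine { set { _stateMachine = value; } }
}
```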

A.I State Machine Link Script

Current State Machine Diagram


The Root Motion Configurator

Now that the linking script is done we can begin building the root motion configurator script. This script is another small but important script as it will tell the A.I state machine whether or not it should use root position and root rotation for any given animation state. This will allow the Navigation Agent to extract the movement contained within the animations for more realistic movement and turning.

You can find the script below with comments inside.
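A sketch of how such a configurator could work is below, using the AddRootMotionRequest counter from the state machine skeleton sketched earlier (still an assumption about the real script):

```csharp
using UnityEngine;

// Attached to an animator state: tells the state machine whether that state
// wants root position and/or root rotation applied while it is playing.
public class RootMotionConfigurator : AIStateMachineLink
{
    [SerializeField] private int _rootPosition = 0; // +1 to request root position for this animator state
    [SerializeField] private int _rootRotation = 0; // +1 to request root rotation for this animator state

    public override void OnStateEnter(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        if (_stateMachine != null)
            _stateMachine.AddRootMotionRequest(_rootPosition, _rootRotation);
    }

    public override void OnStateExit(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        if (_stateMachine != null)
            _stateMachine.AddRootMotionRequest(-_rootPosition, -_rootRotation);
    }
}
```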

Root Motion Configurator Script

Root Motion Configurator Image


Current State Machine Diagram


The Cinematic Enabler

The cinematic enabler script is another small script like the root motion configurator. Its purpose is to tell the A.I State Machine whether or not the A.I robot is in the cinematic part of its animator, which is where the charging animations would have been. This section of the robot has been removed due to time constraints, but the functionality is still there if we manage to get the time to build animations for it.

You can find the script below with comments within.
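It follows the same pattern as the root motion configurator; a sketch is below (the CinematicEnabled flag comes from the assumed state machine skeleton sketched earlier):

```csharp
using UnityEngine;

// Flags on the state machine whether the animator is currently in its cinematic section.
public class CinematicEnabler : AIStateMachineLink
{
    public bool OnEnter = false; // value to apply when this animator state starts
    public bool OnExit = false;  // value to apply when it ends

    public override void OnStateEnter(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        if (_stateMachine != null)
            _stateMachine.CinematicEnabled = OnEnter;
    }

    public override void OnStateExit(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        if (_stateMachine != null)
            _stateMachine.CinematicEnabled = OnExit;
    }
}
```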

Cinematic Enabler Script

Current State Machine Diagram


A.I Robot State Machine

Now that the base abstract A.I State Machine class is built I can build the A.I Robot State Machine, which will inherit from that class and be attached to our A.I character in game. This state machine will hold info such as field of view, sight, hearing, aggression, health, intelligence, satisfaction, replenish rate, and depletion rate. It will also handle the communication with the animation state machine, passing in values like movement, attack, turning, feeding, etc. This is the first of the state machine classes that can actually be attached to the A.I character, and it will hold all of its information that can vary; however, the A.I will not do anything yet, as we still need to build the Idle state, Patrol state, etc.

You can find the script below. It is commented to make it easier to understand.
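A heavily trimmed sketch of what that looks like is below, showing only a few of the listed values and the hand-off to the animator (names are assumptions; the attached script is the full version):

```csharp
using UnityEngine;

// The concrete state machine attached to the robot ("The Tainted") itself.
public class AIRobotStateMachine : AIStateMachine
{
    [SerializeField] [Range(10.0f, 360.0f)] private float _fov = 120.0f;
    [SerializeField] [Range(0.0f, 1.0f)] private float _sight = 0.5f;
    [SerializeField] [Range(0.0f, 1.0f)] private float _hearing = 1.0f;
    [SerializeField] [Range(0, 100)] private int _health = 100;
    [SerializeField] [Range(0.0f, 1.0f)] private float _satisfaction = 1.0f;

    private Animator _animator;
    private readonly int _speedHash = Animator.StringToHash("Speed");

    public float Fov { get { return _fov; } }
    public float Sight { get { return _sight; } }
    public float Hearing { get { return _hearing; } }
    public int Health { get { return _health; } }
    public float Satisfaction { get { return _satisfaction; } set { _satisfaction = Mathf.Clamp01(value); } }

    protected override void Start()
    {
        base.Start();
        _animator = GetComponent<Animator>();
    }

    protected override void Update()
    {
        base.Update();

        // Pass values through to the animation state machine each frame.
        if (_animator != null && NavAgent != null)
            _animator.SetFloat(_speedHash, NavAgent.desiredVelocity.magnitude);
    }
}
```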

A.I Robot State Machine Script

Current State Machine Diagram


The A.I State

After building the A.I State Machine, which our A.I Robot State Machine inherits from, I moved on to building the A.I State script, which the A.I Robot State will inherit from, allowing the various state types to inherit from it in turn. This class contains all the base functions that the various states will need in order to function, such as onEnter(), onExit(), and onTriggerEvent() to name a few. It also holds a reference to the A.I State Machine script as it will need to communicate with it. I've added the script below and commented it explaining each function and the code within.

NOTE : This code alone will not function as it depends on the scripts that inherit from it, such as A.I Robot State, Patrol State, Idle State, etc.
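Its skeleton is roughly as follows (again with assumed names matching the earlier sketches; the attached script is the full version):

```csharp
using UnityEngine;

// Abstract base class every concrete A.I state derives from.
public abstract class AIState : MonoBehaviour
{
    protected AIStateMachine _stateMachine;

    // Called by the state machine during setup so each state can talk back to it.
    public virtual void SetStateMachine(AIStateMachine stateMachine) { _stateMachine = stateMachine; }

    // Default (empty) implementations that concrete states override as needed.
    public virtual void OnEnter() { }
    public virtual void OnExit() { }
    public virtual void OnTriggerEvent(AITriggerEventType eventType, Collider other) { }

    // Every state must report which state it is and which state should run next frame.
    public abstract AIStateType GetStateType();
    public abstract AIStateType OnUpdate();
}
```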

A.I State Script

Current State Machine Diagram


The A.I Robot State

The A.I Robot State script inherits from the A.I State script and controls what the Robot/The Tainted does depending on what the sensor picks up entering its radius. It takes in the layer masks in the scene (such as Default, Wall, Floor, etc.), which allow it to detect whether or not something is blocking its view of the player. Using these layer masks in combination with its field of view allows me to calculate whether the player is within the line of sight of the A.I and to call the relevant state that the A.I should change to. I've added the script below and commented it explaining each function and the code within.

 

NOTE : This script needs to have the different states inheriting from it such as Idle, Alerted, Patrol, Pursuit, etc
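The core of that visibility check is a field-of-view test followed by a raycast against the obstruction layer masks, roughly like this (a simplified, self-contained sketch with assumed field names rather than the attached script):

```csharp
using UnityEngine;

// Simplified line-of-sight check in the spirit of the one described above.
public class LineOfSightExample : MonoBehaviour
{
    public float FieldOfView = 120.0f;   // total horizontal field of view in degrees
    public float SightRange = 15.0f;     // how far the A.I can see
    public LayerMask ObstructionLayers;  // walls, floors, default geometry, etc.

    public bool CanSee(Transform player)
    {
        Vector3 toPlayer = player.position - transform.position;

        // 1) Range check.
        if (toPlayer.magnitude > SightRange) return false;

        // 2) Field-of-view check: is the player within half the FOV of our forward direction?
        if (Vector3.Angle(transform.forward, toPlayer) > FieldOfView * 0.5f) return false;

        // 3) Obstruction check: if a raycast toward the player hits level geometry first, the view is blocked.
        if (Physics.Raycast(transform.position, toPlayer.normalized, toPlayer.magnitude, ObstructionLayers))
            return false;

        return true;
    }
}
```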

A.I Robot State Script

Current State Machine Diagram


A.I Robot State Idle

With the two classes that Idle will inherit from finished, I can now build the Idle state for "The Tainted" A.I. This is a very simple state as it just remains in the same spot until a set timer expires OR it is interrupted by the player, a light, a sound event, etc., in which case it will call the appropriate state. The only real responsibility of this script is that it sets the first waypoint for the patrol state before calling the alerted state to align its direction. There will be more added to this state in the future, as The Tainted A.I will originally be powered down with its holographic head deactivated. The script is also able to trigger the head to flicker on once a condition has been met. I've added the script below and commented it explaining each function and the code within.
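Stripped of the interruption handling and the head flicker, the core of the state looks something like this sketch (built on the assumed base classes from earlier; the attached script is the real one):

```csharp
using UnityEngine;

// Idle: wait out a timer, hand the first waypoint to the state machine,
// then switch to Alerted so the A.I can line itself up with it.
public class AIRobotStateIdle : AIState
{
    [SerializeField] private float _idleTime = 10.0f;
    private float _timer = 0.0f;

    public override AIStateType GetStateType() { return AIStateType.Idle; }

    public override void OnEnter()
    {
        _timer = 0.0f;
    }

    public override AIStateType OnUpdate()
    {
        _timer += Time.deltaTime;

        // Once the timer expires, set the first waypoint and let the Alerted
        // state rotate the A.I to face it before patrolling begins.
        if (_timer >= _idleTime)
        {
            _stateMachine.SetNextWaypoint();
            return AIStateType.Alerted;
        }

        return AIStateType.Idle;
    }
}
```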

A.I Robot State Idle Script

A.I in Powered Down Idle State Image


Current State Machine Diagram


A.I Robot State Alerted

After finishing the Idle state that the A.I will start off in, I can now work on the Alerted state. This state is in charge of aligning the A.I with its next patrol point, a sound it has heard, or the player's direction. If the A.I is aligning with the next patrol point then the alerted state will call the patrol state. If it is aligning with a sound or the player then it will call the pursuit state to chase after the sound/player. This state is also in charge of calculating which direction the A.I should turn to align itself with the target the quickest. This is done by calculating the sign of the angle between the target and the A.I: it returns a value less than or greater than 0, which is passed to the animator to play the appropriate animation. Finally, the alerted state constantly measures the angle between the target point and the A.I so that when the angle comes within a certain threshold it can pass control to the patrol or pursuit state.
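The turn-direction calculation itself boils down to the sign of the angle between the A.I's forward vector and the direction to the target, for example (a minimal illustration, not the attached script):

```csharp
using UnityEngine;

// Minimal illustration of the turn-direction calculation used by the alerted state.
public static class TurnDirection
{
    // Returns a signed angle in degrees: negative means turn left, positive means turn right.
    public static float SignedAngleTo(Transform ai, Vector3 targetPosition)
    {
        Vector3 toTarget = targetPosition - ai.position;
        toTarget.y = 0.0f; // only the horizontal plane matters for turning

        // Vector3.SignedAngle returns a value in (-180, 180]; its sign gives the shortest
        // turn direction, which is passed to the animator to pick the turn animation.
        return Vector3.SignedAngle(ai.forward, toTarget, Vector3.up);
    }
}
```

Once the absolute value of that angle drops below the angular threshold, control is handed over to the patrol or pursuit state.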

I've added the script below and commented it explaining each function and the code within.

You can also find a video below showing the robot turning to its next waypoint once it has reached its current one.

A.I Robot State Alerted Script

A.I Alerted State Video

Download A.I Alterted Turn.mp4 [2.98MB]

Current State Machine Diagram


A.I Patrol State

The next state to be developed after the alerted state is the patrol state. It is developed after the alerted state as it depends on it to align the A.I within the angular threshold of the next waypoint, so that the patrol state can then move the A.I to that waypoint. This state has the option to either use or not use root rotation. When using root rotation it uses the data contained in the animator state's animation to turn when inside the angular threshold. However, we have decided to use a SLERP rotation when the A.I is within the angular threshold, as our animator just doesn't have the time to commit to making a lot of possible turning angles for the A.I. The patrol state also has the ability to call for a new waypoint once it has reached its current one. I originally had it so that the robot would look toward the waypoint it was heading to, but due to the new model not being humanoid in terms of Unity's skeletal definition that functionality has been temporarily removed.
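A hedged sketch of the state is below, using the assumed base classes from earlier (field names and threshold values are assumptions; the attached script is the working version):

```csharp
using UnityEngine;

// Patrol: Slerp toward the agent's desired direction while inside the angular
// threshold, and request a new waypoint once the current one is reached.
public class AIRobotStatePatrol : AIState
{
    [SerializeField] private float _slerpSpeed = 5.0f;
    [SerializeField] private float _angularThreshold = 90.0f;

    public override AIStateType GetStateType() { return AIStateType.Patrol; }

    public override AIStateType OnUpdate()
    {
        var agent = _stateMachine.NavAgent;

        // Rotate with a Slerp while inside the angular threshold instead of relying on turn animations.
        Vector3 desired = agent.desiredVelocity;
        if (desired.sqrMagnitude > 0.01f && Vector3.Angle(transform.forward, desired) <= _angularThreshold)
        {
            Quaternion look = Quaternion.LookRotation(desired, Vector3.up);
            transform.rotation = Quaternion.Slerp(transform.rotation, look, _slerpSpeed * Time.deltaTime);
        }

        // Once the current waypoint is reached, request the next one and drop back
        // to Alerted so the A.I aligns itself before moving off again.
        if (!agent.pathPending && agent.remainingDistance <= agent.stoppingDistance)
        {
            _stateMachine.SetNextWaypoint();
            return AIStateType.Alerted;
        }

        return AIStateType.Patrol;
    }
}
```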

You can find the script below and comments within.

You can also find a video below of the A.I in its patrol state.

 

A.I Patrol State Script

A.I Patrol State Video

Download A.I Patrol.mp4 [11.41MB]

Current State Machine Diagram


The A.I Robot Pursuit State

With the Idle, Alerted, and Patrol states all constructed I can now build the Pursuit state. This state can be called by all of the previous states and is in control of following the player when they are in line of sight, as well as moving the A.I to any sound/light event that it is alerted by. The state uses the root motion contained within the pursuit animation made by our animator Oisin. It does not use root rotation, however, as we want the A.I to be able to make sharp and quick turns to keep up with the player; to achieve this we use a SLERP rotation to keep the movement fluid. This does result in some "sliding" (where foot movements don't match actual movement), but it is a needed sacrifice to allow the robot to keep up with the player, and we figured the player would be running away and would not notice this small detail. The Pursuit script also has a nice piece of functionality that saves CPU cycles by repathing less often the further the A.I is from the target, and more often the closer it gets, so that it can match the target's rotation and not lose sight as easily. It also has the ability to drop into the Attack state (currently under construction) once it has come into melee range of the player. Finally, if it loses the player it will move to the last location it saw the player before dropping into the alerted state to search for them.
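The distance-based re-pathing and the melee hand-off look roughly like this sketch (again built on the assumed base classes; the multiplier and field names are assumptions):

```csharp
using UnityEngine;

// Pursuit: chase the target, repathing less often the further away it is,
// Slerping to face it, and handing off to Attack inside melee range.
public class AIRobotStatePursuit : AIState
{
    [SerializeField] private float _repathDistanceMultiplier = 0.035f; // seconds of wait per metre of distance
    [SerializeField] private float _slerpSpeed = 5.0f;
    [SerializeField] private float _meleeRange = 2.0f;
    private float _repathTimer = 0.0f;

    public override AIStateType GetStateType() { return AIStateType.Pursuit; }

    public override AIStateType OnUpdate()
    {
        var agent = _stateMachine.NavAgent;
        Vector3 targetPosition = _stateMachine.TargetPosition; // last known player/sound position
        float distance = Vector3.Distance(transform.position, targetPosition);

        // Close enough to strike: hand over to the Attack state.
        if (distance <= _meleeRange)
            return AIStateType.Attack;

        // Repath more often when close (keeps sight of the player),
        // less often when far away (saves CPU cycles).
        _repathTimer += Time.deltaTime;
        if (_repathTimer > distance * _repathDistanceMultiplier)
        {
            agent.SetDestination(targetPosition);
            _repathTimer = 0.0f;
        }

        // Keep turning toward the target with a Slerp so the robot can take sharp corners.
        Vector3 toTarget = targetPosition - transform.position;
        toTarget.y = 0.0f;
        if (toTarget.sqrMagnitude > 0.01f)
        {
            Quaternion look = Quaternion.LookRotation(toTarget, Vector3.up);
            transform.rotation = Quaternion.Slerp(transform.rotation, look, _slerpSpeed * Time.deltaTime);
        }

        return AIStateType.Pursuit;
    }
}
```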

You can find the script below with comments within.

You can also find a video of the robot chasing the player, losing them, then reacquiring them below.

A.I Robot Pursuit State Script

A.I Robot Pursuit Video

Download A.I Pursuit.mp4 [11.36MB]

Current State Machine Diagram


The A.I Robot Attack State

With the Pursuit state finished, the Attack state can now be made. The Attack state is called by the Pursuit state once the A.I is in range of the player and controls which attack animation to use as well as the movement speed of the A.I. The A.I uses a SLERP rotation to make sure it is facing the player during an attack. If the player moves out of melee range then the attack state calls the pursuit state to catch back up with the player.
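The facing logic is the same Slerp idea as in pursuit; a short sketch (assumed names again, not the attached script):

```csharp
using UnityEngine;

// Attack: keep facing the player while in melee range, otherwise resume the chase.
public class AIRobotStateAttack : AIState
{
    [SerializeField] private float _slerpSpeed = 5.0f;
    [SerializeField] private float _meleeRange = 2.0f;

    public override AIStateType GetStateType() { return AIStateType.Attack; }

    public override AIStateType OnUpdate()
    {
        Vector3 targetPosition = _stateMachine.TargetPosition; // the player's position while in view

        // If the player backs out of melee range, chase them again.
        if (Vector3.Distance(transform.position, targetPosition) > _meleeRange)
            return AIStateType.Pursuit;

        // Keep the robot facing the player for the duration of the attack.
        Vector3 toTarget = targetPosition - transform.position;
        toTarget.y = 0.0f;
        if (toTarget.sqrMagnitude > 0.01f)
        {
            Quaternion look = Quaternion.LookRotation(toTarget, Vector3.up);
            transform.rotation = Quaternion.Slerp(transform.rotation, look, _slerpSpeed * Time.deltaTime);
        }

        return AIStateType.Attack;
    }
}
```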

You can find the script below with comments within.

You can also find a video of the A.I attacking below.

A.I Robot Attack State Script

A.I Robot Attack Video

Download A.I Attack.mp4 [10.84MB]

Current State Machine Diagram


The A.I Feeding State

The A.I Feeding state is the last of the behavior states developed for the A.I Robot. This state is in charge of the A.I's starvation replenishment. The original idea was to have the A.I need to recharge its batteries or feed off of bodies in the environment when it had no other threat in its sights. This functionality would see the A.I moving to a charge location when its satisfaction rate fell below a certain threshold, or feeding on a body in sight. While the state is built code-wise and worked in my playground testing with an earlier model, it was not implemented in the final game as we felt during testing that it made the A.I less of a threat, since it was stationary more often.

You can find the script below with comments within.

You can also see an early development video below of how the state would have worked.

A.I Feeding State Script

A.I Feeding State Video

Download A.I Feeding.mp4 [9.58MB]

Current State Machine Diagram


The A.I Damage Trigger

The final part of the A.I state machine is the A.I Damage Trigger. This detects whether the A.I strikes the player within the appropriate window of an attack animation. For example, you don't want the player to take damage from the A.I as it pulls its tentacle back in preparation for an attack. This can be handled inside the attack animation itself by setting curves and having the code check if that curve's value is above a certain number. If it is, the damage trigger gets the info of the player it has collided with and calls the TakeDamage() function inside the Character Manager script attached to the player.

Note : The Character Manager script can be found in the General Programming page where I discuss both the original FPS Controller and the VR Controller and how they work with the A.I. 
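In code, the check against the animation curve comes down to something like the following sketch (the curve parameter name, the damage value, and the TakeDamage signature are assumptions; the attached script is the version used in the game):

```csharp
using UnityEngine;

// Sits on the robot's attacking limb with a trigger collider; only applies
// damage while the attack animation's curve says the strike is "live".
public class AIDamageTrigger : MonoBehaviour
{
    [SerializeField] private string _curveParameter = "Attack"; // curve written into the attack animation
    [SerializeField] private float _curveThreshold = 0.9f;      // only deal damage when the curve is above this
    [SerializeField] private int _damageAmount = 10;

    private Animator _animator;

    void Start()
    {
        // The animator lives on the A.I root, a parent of this trigger object.
        _animator = GetComponentInParent<Animator>();
    }

    void OnTriggerStay(Collider other)
    {
        if (_animator == null || !other.CompareTag("Player")) return;

        // The curve only rises during the damaging part of the swing, not the wind-up,
        // so the player can't be hurt while the tentacle is being pulled back.
        if (_animator.GetFloat(_curveParameter) > _curveThreshold)
        {
            CharacterManager manager = other.GetComponent<CharacterManager>();
            if (manager != null)
                manager.TakeDamage(_damageAmount);
        }
    }
}
```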

You can find the script below with comments within.

You can also find a video below of the player losing health in the inspector after each hit from the A.I.

A.I Damage Trigger Script

A.I Damage Video

Download A.I Damage.mp4 [7.22MB]

Completed State Machine Diagram
