AI Finally Listens to Me
This sprint I added functionality to the start button and optimized the AI's movement code. Since the last dev log post, I have added a landscape, moving away from the standard chessboard layout of an autochess and giving the player more freedom in where they place their pieces. Pieces move toward enemies with a different tag, so armies of mismatched sizes have no problem fighting one another. I also gave the pieces a new model so they are easier to visualize as living characters.
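The tag-based targeting described above can be sketched in plain C++. This is a minimal illustration, not the actual Unreal implementation: the `Piece` struct, its field names, and `FindNearestEnemy` are all hypothetical stand-ins for the game's real actors and tags.

```cpp
#include <string>
#include <vector>

// Hypothetical piece representation; the fields are illustrative,
// not the game's actual actor properties.
struct Piece {
    std::string Tag; // e.g. "Red" or "Blue"
    float X = 0.f;
    float Y = 0.f;
};

// Return the index of the nearest piece whose tag differs from the
// seeker's, or -1 if no enemy exists. Armies of any size can fight
// because each piece independently scans the board for opposing tags.
int FindNearestEnemy(const Piece& Seeker, const std::vector<Piece>& Board)
{
    int Best = -1;
    float BestDistSq = 0.f;
    for (int i = 0; i < static_cast<int>(Board.size()); ++i) {
        if (Board[i].Tag == Seeker.Tag) continue; // skip allies (and self)
        const float dx = Board[i].X - Seeker.X;
        const float dy = Board[i].Y - Seeker.Y;
        const float DistSq = dx * dx + dy * dy;
        if (Best < 0 || DistSq < BestDistSq) {
            Best = i;
            BestDistSq = DistSq;
        }
    }
    return Best;
}
```

Comparing squared distances avoids a square root per candidate, which matters once larger armies are scanning the board every tick.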
I have been facing some challenges with the AI over the last couple of sprints. I have found it is easy to make AI... but making it listen to what I want it to do is significantly harder. Piece placing was tricky, but a fun challenge. While I have worked on AI in the past, it was usually created by my programming teams, and I simply built on and balanced what they made. Building the foundation from scratch, while a phenomenal learning experience, is also very much learning from failure.
One of the most significant alterations I made to the system was the transition to Unreal's behavior tree and blackboard system. I initially tried to program the AI with Blueprints inside the AI actor itself, but it quickly became a tangled mess: I had quite literally made spaghetti code. Moving to the behavior tree set me back a sprint, but it makes integrating future abilities much easier.
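To show why the behavior tree untangled things, here is a toy sketch of the core idea in plain C++. Unreal's actual `UBehaviorTree` and `UBlackboardComponent` classes are configured in the editor and work quite differently; this just illustrates how composable selector/sequence nodes replace one big tangle of branching logic.

```cpp
#include <functional>
#include <vector>

// Toy behavior-tree nodes: each node is a callable returning a status.
// Illustrative only; not Unreal's API.
enum class Status { Success, Failure };

using Node = std::function<Status()>;

// Selector: try children in order, succeed on the first success.
Node Selector(std::vector<Node> Children) {
    return [Children]() {
        for (const Node& Child : Children)
            if (Child() == Status::Success) return Status::Success;
        return Status::Failure;
    };
}

// Sequence: run children in order, fail on the first failure.
Node Sequence(std::vector<Node> Children) {
    return [Children]() {
        for (const Node& Child : Children)
            if (Child() == Status::Failure) return Status::Failure;
        return Status::Success;
    };
}
```

A piece's brain then reads as a tree, e.g. `Selector({Sequence({HasTarget, Attack}), MoveTowardEnemy})`, and adding a new ability means adding a branch rather than rewiring existing logic.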
One of the biggest challenges I face on this project is the amount of time I have for it. My college capstone is extremely demanding; as product owner, I have to manage the team while also doing a lot of the work myself. Juggling that with other classes, two part-time jobs, and a healthy personal life is proving difficult, but not impossible. Because of this, I only have one or two nights per week to work on this project. Luckily, I have the resources to get any help I need very easily, and I am making substantial progress each week.