Author: robswor

Fourth Sprint – Gungnir Finalization, Stalwart Tuning

The last three weeks have been pretty tumultuous for everyone, I think – naturally, this has affected the progress of Project Blue significantly. Over these last three weeks, I have finalized the Gungnir enemy’s behavior and worked on tuning the Stalwart, the charging enemy I discussed in my first blog post.

The Gungnir

For this sprint, I finished tuning the Gungnir’s jump scaling, tested it more seriously in a test level, and made some recommendations about how I believed it would be best to move forward with the Gungnir.

Previously, the Gungnir’s jump scaling was not great. It only scaled relative to how high the player was a short period after the Gungnir detected that the player was above a certain height above it… If you can follow that sentence, awesome, because I can’t. It did not work particularly well – it would scale the same whether the player had just jumped from slightly above the Gungnir or had just reached the peak of their jump, and the way I did the math to scale it meant the scale factor was often unnoticeable. Here is how it looked at the time of the last blog post:

Since then, I have changed it significantly. It still detects when the player is a certain height over the Gungnir, but rather than waiting an arbitrary period of time and jumping with an arbitrary force, the Gungnir instead waits for the player’s y-velocity to equal 0 before jumping exactly up to the height the player is at, calculating jump force on each jump. Additionally, the Gungnir does not jump at all if the player is falling. This looks and feels much better than the original setup.
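The jump-force calculation described above can be sketched with basic projectile physics: to peak exactly at the player’s height, solve v² = 2gh for the launch velocity. A minimal Python sketch of the idea – the function names, trigger height, and velocity epsilon are all illustrative, not the project’s actual code:

```python
import math

def jump_velocity(target_height_diff, gravity=9.81):
    """Initial upward velocity needed to peak exactly `target_height_diff`
    above the jumper, from v^2 = 2 * g * h. Illustrative, not project code."""
    if target_height_diff <= 0:
        return 0.0  # player is level with or below the Gungnir: don't jump
    return math.sqrt(2 * gravity * target_height_diff)

def should_jump(player_y_velocity, player_y, gungnir_y, trigger_height=2.0):
    """Jump only once the player hangs at their peak (y-velocity ~ 0)
    and is high enough above the Gungnir; never jump while they fall."""
    if player_y_velocity < -0.01:          # player is falling
        return False
    at_peak = abs(player_y_velocity) < 0.01
    return at_peak and (player_y - gungnir_y) >= trigger_height
```

Because the force is solved from the target height each time, the jump can never noticeably over- or undershoot the way the old arbitrary-force version did.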

It isn’t perfect – if it only shoots at the peak of its hop, and doesn’t jump until the player reaches their own peak, then the Gungnir is all but guaranteed to miss a player who only jumps once. The Gungnir should be able to force the player to keep jumping, though, so I think this is an issue that sort of solves itself. I did increase the projectile speed to compensate, and further testing would be required to be sure it isn’t a problem – but it’s a moot point, as the Gungnir has been cut moving forward.

Even so, I did think about where it should go in levels, so here are some examples:


Both of the above areas ask the player to make one good jump that has consequences for failure – these jumps are both relatively easy, so the Gungnir adds some extra challenge to mix things up a bit.


Here, the Gungnir seeks to mess with the player’s vertical movement instead. Same concept as above.


Here, it simply requires the player to use a little thought when approaching a fairly straightforward section of the level that is otherwise totally unthreatening.

The Stalwart

My other task was to improve and tune the Stalwart. With this, I had quite a bit to do. Not only did it need tuning, but after some discussion with leads, we decided that the Stalwart’s originally planned charging style – where it barrels forward until it hits something and can break blocks – would be better than having it just run towards the player indefinitely, so I needed to implement that. It also had no warmup to its charge, so I needed to add a brief wait period before it charges at the player so that it would not appear out of the blue and kill the player.

The first thing I did was implement the warning. I figured this was most likely going to be done with animation rather than code, so I added a brief wait period. I also tried to make it hop, but this caused some weird bugs that did not seem worth dealing with:

*The Stalwart’s art is not in the project yet, so I used the Twitter/Discord avocado emoji.

Then I dealt with having it charge until it hits something. While this seemed simple at first, it wound up taking a large portion of my time. Originally, it would charge towards the player, and if the player jumped over it, it would slow to a stop and then charge in their direction again. This certainly has the potential to be interesting at times, but it ultimately seems like it would be more annoying than fun:

I think charging until it hits something has better implications for platforming gameplay and level design:

<drawing I sent to matt>

This image depicts a possible section of a level where a Stalwart is baited through a basic platforming challenge before destroying a crystal wall, which the player cannot break on their own.

<Pictures from confluence>

Here, the player will drop into this hallway and become trapped with the Stalwart, and they must dodge it multiple times to continue, lest the avocado turn them into guacamole.
Here, the player pops up from the walljump challenge below and is greeted by a charging surprise of monounsaturated death.
Here, the player drops down from above and must avoid getting creamed by the Stalwart as they land.
Here, the stalwart will run back and forth forever until the player either leaves this small pit area or kills it. This means that the Stalwart can still be used as a combat-focused enemy. While the player could easily escape in this situation, situations could be created where the player must eviscerate the stonefruit in order to continue on.

In these sorts of situations, the Stalwart excels as something that can just throw itself at the player and attempt to get in their way. If there’s a breakable block at the end of the segment that the player needs the Stalwart alive to break, then these segments can become puzzles about how best to get the Stalwart down to the block without it killing itself or the player.

Making the Stalwart charge at the player was easy enough – I just had to make it so that the Stalwart would not turn around. However, it would still stop when it got too far away from the player, defeating the purpose. This took some time to fix, but the code was straightforward enough. The biggest problem was when the Stalwart would crash into walls:

This bug took a while for me to fix – ultimately, raycasting was the solution, but having never worked with 2D raycasts in Unity before, I was shocked to find that they’re structured almost nothing like 3D raycasts! *grumble grumble*
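The raycast fix boils down to checking the space immediately ahead of the Stalwart before moving each update. A one-dimensional Python sketch of that idea – the real version uses Unity’s 2D physics, and everything here, including the tile-based world, is a stand-in:

```python
def charge(start_x, direction, walls, max_steps=100):
    """Advance one tile at a time in `direction` (+1 or -1), checking the
    tile ahead before moving -- the 1D analogue of casting a short ray in
    front of the Stalwart each frame. Returns the tile it stops on."""
    x = start_x
    for _ in range(max_steps):
        ahead = x + direction
        if ahead in walls:      # "raycast" hit: stop flush against the wall
            return x
        x = ahead
    return x
```

Checking ahead of the move, rather than reacting after overlap, is what stops the pinball behavior shown below.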

But, it works now! At first, it looked like this:

*pinball sounds*

But eventually I got it working as intended:

Then, after that, I just had to implement the breakable block:

This requires that the block be on the Terrain layer, which is unfortunate, and the feature is not technically complete – my intent is that this kills the Stalwart, but health is not yet implemented as far as I know. This should be a simple fix, though.

Thankfully, the Stalwart has not been cut. Further testing will be required to perfectly nail down values – it was extremely fast at first and I’ve toned that down, but it may need to be toned down more. Damage and health will also need to be tested once implemented, and we may decide we want to go back to charging towards the player anyway.

Second & Third Sprints – Tuning and Breaking

While my first sprint was light, these two started out a little lighter. As my role is primarily that of a designer, I was bottlenecked by programming and art, and there were not enough programming tasks for me to pick up more. The first sprint of these last two weeks, my only task was to create some prefabs, which went painlessly.

This week, I had some more interesting tasks. I was directed to fine-tune the gungnir enemy, and I also found some extra programming work within the level design pod creating breakable blocks.

The Gungnir

The Gungnir is a basic enemy that shoots projectiles at the player, and can jump up and down in-place to try and hit a player who’s jumping. Here is what the Gungnir looked like when I got to it:

It fires at a decent pace, but the projectile is slow and its jump response is a little laggy. At first, I tried speeding up the projectile and fire rate and making the jump more responsive. I also made it a little less likely to jump in response to the player jumping, as before it would jump when the player made even the tiniest hop.

While this seemed good at first, when I tried to put the Gungnir into the demo level, it became clear to me that I needed to do some more. The large range over which it can shoot at any time can sometimes feel oppressive, and the high fire rate makes it hard to platform around. Additionally, especially in a platforming environment, I thought it felt weird that the gungnir could shoot at any point in its jump – not only did this mean that it could cover the entire vertical spectrum with its shooting, but also that it would sometimes perform “empty” hops – hops where it would never shoot. This meant I needed to make some changes to its AI.

First, I wanted to see if I could make it shoot only at the top of its hop. This would, on paper, give it more predictability and clarity, while also limiting its fire rate a little bit. This is the outcome:

While it isn’t perfect, I think this looks quite clean when compared to before:

Now, it looks clean… but I am unsure whether or not I think it’s better for gameplay. As it is, I have made peak-only shooting an option that I was going to disable and leave for later, but I implemented another feature that I think works nicely with it. Now, the gungnir’s jump scales with how high the player jumps. Before, if the player performed a shorter hop, the gungnir might jump to try and shoot them. However, the gungnir might have jumped twice as high as the player – it seemed weird. Now, it scales to the player’s jump height:

This effect is admittedly rather subtle as it is, but I am still going through the process of tuning it. Currently, it takes a scalar created by taking the height differential between the player and the Gungnir at the time of the jump, dividing that by the buffer used to determine whether or not to jump, and finally doing some totally arbitrary math to make it feel reasonable. I think I’ve made great progress on this so far, but I still have a bit more to do. Obviously, I have done nothing with damage (both dealing and receiving), as that has yet to be set up, and I’m not yet satisfied with the way jumping works. Empty hops are still possible, and I want to avoid messing with weapon cooldowns in the same way I’ve messed with jump height. The jump height still is not perfect, and I may look into combining the current height detection method with reading the player’s jump input – using inputs would allow the Gungnir to more accurately scale its jump to the player’s, while maintaining height reading allows the Gungnir to jump even when the player is not actively jumping but is simply above it.
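Concretely, the scaling described above might look something like this – the clamp range stands in for the “totally arbitrary math,” and every name and value is hypothetical:

```python
def jump_scale(player_y, gungnir_y, jump_buffer, min_scale=0.5, max_scale=1.5):
    """Scale factor for the Gungnir's jump force: the height differential
    divided by the jump-trigger buffer, clamped so the result stays
    noticeable without getting silly. Illustrative stand-in values."""
    raw = (player_y - gungnir_y) / jump_buffer
    return max(min_scale, min(max_scale, raw))
```

Clamping the raw ratio is one simple way to keep small hops from producing invisible scaling and huge jumps from producing absurd ones.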

Breakable Blocks

This is a much smaller section. There was no extra work for me within the enemies pod beyond tuning the gungnir, but I did find an extra programming task within the level design pod: implementing breakable blocks. This task seemed right up my alley – not only was it a good and interesting task, but with the Stalwart design (a charging, block-breaking enemy) I mentioned in my last post, it also related to my own work within enemy design.

This was a fairly easy implementation: following Brackeys’ tutorial on breakable objects, I created an object that breaks into smaller pieces when hit with the player’s sword. The number of hits required to break the block is adjustable, and the script should be applicable to any breakable object.
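The hit-counting part of that script is engine-agnostic. A rough Python sketch of the bookkeeping – the real script is a Unity component, and all names here are made up:

```python
class Breakable:
    """Hit-count bookkeeping behind a breakable object: after a configurable
    number of sword hits, the object shatters into its pieces, each of which
    would get its own physics body on release. Purely illustrative."""
    def __init__(self, hits_to_break=3, pieces=4):
        self.hits_left = hits_to_break
        self.pieces = pieces
        self.broken = False

    def hit(self):
        """Register one sword hit; return the released pieces on the
        breaking hit, and nothing otherwise."""
        if self.broken:
            return []
        self.hits_left -= 1
        if self.hits_left <= 0:
            self.broken = True
            return [f"piece_{i}" for i in range(self.pieces)]
        return []
```

Keeping the counter separate from the visuals is what lets the same script drive any breakable shape.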

Here is the outcome:

Ultimately, I’m happy with how this turned out. The smaller pieces only collide with the environment – an earlier version would have allowed the player to platform off of them. And just to show how well it works out with other shapes:

Very happy with this result – I think it looks great and is quite juicy. Physics are applied to each child of the broken object individually, so it works with any number of objects post-break:

Overall, pretty happy with my work this week. I still have a bit of tuning to do on the Gungnir, but I am pleased with my progress. For future sprints, design should be less bottlenecked by programming, so I’m excited to start to test and tune more of our enemies!

Project Blue – First Sprint, Enemy Design Conceptualization

Hello all! I’m back, and this time I’m designing enemies for WolverineSoft’s new game, currently codenamed Project Blue. Over the last couple of weeks, we started production! In our first sprint, my role as a designer meant I was designing enemies for our first area, the Crystal Caves. I conceptualized three enemies: “Qrabz,” “Stalwart,” and “Cube.” In this post, I’m going to go over these designs, and discuss the merits of each design, as well as some of the weaknesses. As it stands, none of these enemies have been prototyped or had concept art completed for them, so there is little I can share.



The Qrabz starts out as a basic enemy that serves as little more than a jumping and attacking tutorial – it patrols until it sees the player, then rushes them down. The player simply needs to either jump over it and ignore it or to quickly dispatch it with a sword swing or two. As the game goes on, though, the Qrabz evolves with the player. Drawing inspiration from hermit crabs, a Qrabz can take the crystals of the environment, using them as a shell that covers a certain portion of their body. However, a Qrabz will usually have a part of their body uncovered by armor. This means that the player must attack that area of the Qrabz in order to stand a reasonable shot at victory. The Qrabz can also hold a large crystal shield, which can deflect attacks or even deflect the player’s teleport ability, and can be repositioned to face the player. On top of all of that, they can be any size, with health & speed corresponding to their size, which means that a Qrabz can be a small platforming obstacle, a harrier enemy, or even a mini-boss. Due to their environment, they could also hide in the background and ambush the player, like a Bokoblin in Wind Waker hides in a pot. The initial design concept had a charging claw as well, but that is fairly redundant with the existence of the Stalwart, which will be discussed later.


The Qrabz has a lot of pros, in a bunch of categories. Most of them stem from the relative simplicity and flexibility of a Qrabz. Thanks to their basic behavioral patterns, a Qrabz is a good enemy to slot in just about anywhere. Presumably, this will be one of the first, if not the first, enemies encountered in the game, and from then on the player knows pretty much exactly how they’ll act. As the game goes on, a larger Qrabz or an armored Qrabz will pose a higher difficulty and a more interesting challenge, but the gulf of evaluation will remain narrow. The only time a Qrabz’s behavior will change is when the crystal shield is introduced. The shield should add an interesting mixup that does involve a slight behavioral change, but the Qrabz will still move and act the same – now, though, it has a way to protect its weak points from a player who hesitates.

These positives extend into production. Level designers can just drag and drop a Qrabz into a level. If they want it to be a challenge, then they can drag and drop some armor pieces onto the Qrabz, or they can scale it up. For artists, the Qrabz is a flexible enemy that only needs one main sprite and then additional sprites that can be placed on top of it, rather than a new sprite for each variant. The Qrabz can also be placed in any future world with slight sprite changes and, if desired, behavioral changes. As an example, a Windmill Fort Qrabz could have steel for its armor and could walk on walls.


If we overuse the Qrabz, it could become quite dull for players to see the same enemy popping up over and over again. I am also worried that an armored Qrabz may end up being seen as an un-fun nuisance, but I think that can be avoided with fine-tuning.

The Qrabz has been chosen to be included in the game, which has me really excited.



I was told that other members of the team were interested in a charging enemy. At that point, I had already finished the Qrabz concept, which included a charging claw, but I thought that the charging claw on the Qrabz was going to be weird, hard for a player to catch, and would lead to too many behavioral changes, so I agreed that we would want a dedicated charging enemy.

Thus, the Stalwart was born. The stalwart is a larger, tougher enemy that, when aggro’d, charges at the player endlessly. Once it starts a charge, it runs until it hits something solid – even off of platforms or cliffs. Charging is the only way it moves while it is aggro’d by the player. Its charges are highly telegraphed, giving the player plenty of time to react and get out of the way. If the player strikes a charging stalwart in the head, the stalwart stops its charge, but if the player gets hit by the stalwart, they take significant damage. The key thing here, though, is that a stalwart’s charge can break through certain surfaces and objects that the player can’t, such as collapsed walls, opening up new routes. Its charge can also kill other enemies.

Initially, the design had two other behaviors – one where it only charges to the end of a platform, and one where it charges until x-aligned with the player. I think the others are interesting, but not worth the effort of implementation.


The Stalwart’s charge turns this beast into one of the biggest threats likely to be in the game (bosses notwithstanding) while also serving as an extremely valuable puzzle-platforming tool for the player. Getting hit by a Stalwart will be suitably punishing, but the aha moment of baiting a Stalwart towards a cliff to kill itself, getting it to kill a horde of enemies, or tricking it into opening a blocked tunnel will make the risk more than worth it.


I think baiting a stalwart around could get boring if overused, and will require great care to make each puzzle feel right. My other concerns with the enemy stem from simply not totally knowing how the game feels – I don’t know how often a stalwart can work as an enemy you have to fight rather than an obstacle or a tool.

The Stalwart has also been chosen to be included in the game, which I am also extremely excited about. I think the Qrabz is a highly useful and practical enemy, and will hold a lot of importance, but I think that the Stalwart will be a far more interesting enemy to play with.



To be totally honest, the Cube started out as a whim. I wanted to design an enemy that existed solely to punish bad sword throws. The Cube is almost what I got. It’s an enemy that, when a thrown sword touches it, swallows the sword and holds onto it for a bit. If the player teleports to a swallowed sword, they, too, are swallowed, and take damage until they mash out. The Cube never moves, has no real aggressive behavior whatsoever, and cannot be killed by most means. Wooo!

Thankfully, I decided that wasn’t enough. The Cube can also swallow enemies that touch it, and it can only swallow one thing at a time. But what happens if a second thing (enemy, sword, or player) touches it while it’s swallowed something else? They bounce! Hard! The cube would bounce whatever touches it a large distance – farther than the player could normally go with a normal sword throw. This turns the cube from a rather boring obstacle that would probably only be annoying into an excellent platforming hazard that should fit neatly with other mechanics and enemies in the game. Once you bounce off of a bouncy cube, it ejects whatever it’s swallowed and becomes a regular hungry cube again.
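The swallow-then-bounce behavior is a tiny two-state machine. A Python sketch of the concept as described – the Cube was never prototyped, so this is purely illustrative:

```python
class Cube:
    """State sketch of the Cube concept: it swallows the first thing that
    touches it; anything else that touches it while full gets bounced hard,
    and the swallowed thing is ejected, leaving the Cube hungry again."""
    def __init__(self):
        self.swallowed = None

    def touch(self, thing):
        """Return what happened to `thing`: swallowed, or bounced (in which
        case the previously swallowed thing is ejected alongside it)."""
        if self.swallowed is None:
            self.swallowed = thing
            return ("swallowed", None)
        ejected, self.swallowed = self.swallowed, None
        return ("bounced", ejected)
```

The single slot is what makes the bounce readable: players can always tell whether a cube is hungry or bouncy.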


The Cube is a powerful tool, I think. Bouncing is always fun, and I can see a lot of particularly fun moments in mixing bouncing with other mechanics, or with more bouncing. A platforming challenge requiring the player to bounce from cube to cube – bounce, throw the sword at the next cube, bounce off that, and so on – could be fun. I could also see a fun challenge in something like getting a Stalwart across a gap – I think that cubes and Stalwarts are a particularly interesting combo that could make for some really cool puzzles.


I honestly don’t think the Cube would ever be fun when used as a punishment tool. There might be some situations where it could be neat, but I think it would make for an arbitrary roadblock rather than a real, interesting challenge. Other than that… they just aren’t really enemies. They’re cool level design implements, but that’s really about it, I think.

The cube was not chosen to be included… yet. I think it would definitely be a better fit for the highly-vertical Windmill Fort, and I think I could do some revision on the design concept before it might be ready to be accepted. However, I think it has an extremely high fun ceiling, and hope that it will be included in the future.

For this sprint, I think I came up with some great enemy designs, and I’m very much satisfied with what I’ve got. I think I could have maybe come up with one or two more interesting designs, but I think these designs are a great starting place for an opening area, and I think the Qrabz and Stalwart are essentials. Stay tuned for two weeks from now, when I’ll post another update. By then, the Qrabz and Stalwart should have much more to them, and we may be moving onto the Windmill Fort’s designs.



And just like that, it’s over. Yesterday was the engineering design expo, where we showcased our projects.

The Last Week

Over the last week, I was able to fix a number of bugs and work some polish features out. I fixed some issues with scaling, some with touch management, and I managed to fix a major problem I had where I could not get clients to connect to each other reliably (I was dealing with regioning issues, as it turned out). Unfortunately, I was unable to ultimately get both AR and networked multiplayer working reliably in the same client, but I was able to fix a number of issues plaguing both, so I had a phone to show off AR and my laptop to show off Networking at the expo.

The Expo

At the expo, I had a fairly quiet showing. I got some compliments on the arrangement of my poster (based off a D&D 5e character sheet) and the project’s name, which were nice, if unimportant. I had a fairly modest demo, so I wasn’t expecting to catch too many people’s eyes, but I think I had more people stop by than expected. People generally didn’t seem too interested in my demos when they did stop by, nor were they particularly interested in hearing about the troubles I ran into with networking. Most of my good conversations involved why I wanted to do this project, and discussions about how the scope of the project changed and why I opted to include networked multiplayer halfway through.

Most people who stopped by had enough knowledge about D&D to know that combat would at least have the potential to take a long time, and people who played tabletop RPGs generally agreed that a tool like Familiar would be useful.


All in all, it’s hard to call Familiar a success. However, I think in the end I got much closer to my goal than I had expected, and I think I learned a lot along the way. Naturally, this project taught me a fair bit about time management and scoping, and I got some useful skills in ARCore and Photon along the way.

If I were to do this project again, I think the biggest thing I would have done differently would have been to try and stick with Vuforia. Learning ARCore was nice, but I think I let myself get too carried away with the imperfections within Vuforia and modern AR tech in general. I think that, ideally, I would have instead operated with a much less realistic board, with tiles up to something like 8 inches by 8 inches, and created something that could be scaled to something more realistic as the technology improves. This is a line I’m interested in continuing down in the future, as I think it’s promising, and I still believe that the original idea for Project Familiar has potential.

Week 10 – Turns!

I finally made some much-needed progress on turn order this week, and I polished up the ortho view.

(I apologize for the relative lack of images – I will update with those ASAP, but I can’t get pictures off of the Pixel for the moment.)

I started by doing something a little simpler to ease into it – I added zooming & scrolling to the main orthographic view. This was fairly straightforward and only took a couple of hours. There were a few problems here and there, but most had fairly trivial solutions once I diagnosed them. Scroll bars didn’t like to scale properly, as an example – I don’t really understand why this was the case, but it’s not like it was a challenging fix. I spent more time trying to handle issues with object placement: the grid isn’t centered at (0, 0) in ARCore, and mixing that with scaling meant it was extremely easy for the orthographic view to find itself centered nowhere near the grid.

I have also taken steps to greatly simplify my network code – and it looks like it’s paying off! I stripped almost every method that wasn’t either meant to be an RPC call or a helper, and made sure that any logic more complicated than changing a couple of variables based on one input was handled entirely by the host. I also simplified method parameters – previously, I had been using parameters that were too complex to work with, but now they’ve been reduced to a single string at most.
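One way to collapse an RPC’s arguments into a single string, as described above, is to serialize a small record to JSON on the caller and parse it back on the receiver. A Python sketch of that pattern – the field names and functions are invented for illustration, and the project itself uses C# with Photon:

```python
import json

def pack_add_character(name, x, y, is_host_owned):
    """Caller side: collapse the RPC's arguments into a single string
    (field names here are made up for illustration)."""
    return json.dumps({"name": name, "x": x, "y": y, "host": is_host_owned})

def unpack_add_character(payload):
    """Receiver side: recover the arguments from the one-string parameter."""
    d = json.loads(payload)
    return d["name"], d["x"], d["y"], d["host"]
```

Since every RPC then carries exactly one string, the networking layer never has to know how to serialize the more complex types that caused trouble before.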

A before and after of how characters were added. AddCharacter was used in the same situation as AddCharacter_New. AddCharacterStatic is used in the same situation, but only for non-host players.

Here it is in action:

Super exciting stuff, I know. I also managed to get it set up such that a player can no longer drag their character around outside of their turn, but can still take steps to plan their turn, such as viewing characters’ movement radii.

While it may not seem that big, this is massive for me this week, especially since the expo is only 8 days out. The last things I need to do now are:

  • Clean up Touch Management a bit. Due to (I believe) the toll AR takes on devices, touch management can be spotty in AR. I think the phone might be missing TouchEnd events at times, and at other times I think it just loses track of the finger.
  • Multiplayer AR. While Cloud Anchors would be ideal, I may have to settle for getting augmented images working again. It seems like a consistent way to not have to worry about actually networking the grid while ensuring that it is as aligned as possible.
  • General polish (sir). There are smaller issues here and there – AoEs don’t scale properly in AR thanks to the grid scale slider, if it’s not your turn the picture-in-picture doesn’t pop up until you let go, highlighting tiles while dragging doesn’t work anymore, etc. Mostly small things that I can go without, but they should also only take a matter of minutes to fix.

Week 9 – Almost There?

This week was also a little messy, but I did get some stuff done. Unfortunately, I was unable to get either of my main goals from last week done. I opted to focus less on Cloud Anchors this week – after reading more on them, they seemed difficult to implement using Photon, and I wanted to get my other issues in Photon solved first.

Which brings me to my first point: I left off last week in an ugly place with regards to networked multiplayer. I made a little bit of progress this week on that front – my code was looking more than a little al dente, so I decided to first take a few steps back and untangle the noodles a little bit. Before, I had each client processing the turn changes, but now I made sure that as many of those actions as possible fall to the host, which then tells the other clients the outcome.

Thankfully, this fixed the turn-sync issue – both clients now agree on whose turn it is! However, this showed me that the host seemed to believe that it was the owner of each player in the game, though this was a fairly easy fix. I left off here this week with it appearing that ending a turn doesn’t actually rotate through the turn order – very concerning. However, I spent a lot of time and energy on this for this week, and I felt like I needed something more tangible.

So, I went back to AR. I’d made so many changes since I last was working on my AR implementation that I could no longer just drop my code into an ARCore scene and watch it work, so I had to make a surprising number of changes to get back to that environment. It didn’t help that Unity decided that it doesn’t want to work with ARCore’s Instant Preview, so I have to build to a phone to test anything in AR now and I don’t have access to the editor at runtime – which was a considerable slowdown, but not a total blocker. Now, I have it working in AR, though it’s not perfectly stable; touch and drag registration is finicky, but when it works, it looks good! Shout-outs to my contact who lent me a Google Pixel 2, which has proven to be magnitudes more consistent than my own phone with ARCore.

Detected floor…
There we go.
And voila!

For now, this is absolutely acceptable! It’s not perfect, but it works well enough for now. I’ve gotten used to adjusting my touch management on a nearly-weekly basis by this point, anyway. As you can see, I also added a scale slider – this doesn’t work perfectly, and its range is a little crazy but I’m happy with it as a quick-and-dirty implementation of a much-needed feature so far.

In addition, I decided to implement another AR aid feature: a full orthographic view. Many games that use a tile-based grid will offer a minimap, such as in the Fire Emblem series:

Fire Emblem: The Blazing Blade, for the Game Boy Advance

Since I made the Picture-in-Picture view a few weeks ago, a full orthographic view of the entire map seemed like a solid move. I ran into several issues here: Touch management caused more issues, scaling and camera placement was messy (🍝I currently have two scale factors🍝), and swapping between the two views wasn’t as straightforward as I would have liked, but it turned out alright:

Ft. Detected Plane

Ideally, this view would be able to zoom and pan, but I thought this was a fine stopping point for this week. I could also continue to show the AR view in the background, but that might just be visual noise.

Going forward, I’ll set myself some easier, smaller goals – clearly I need to be doing that more often. Specifically, I’ll implement zooming and panning for the orthographic view. I have already set up a basic UI for that, but haven’t implemented the functionality. On a larger scale, I need to figure out the turn order issues. I think I may need to completely restructure my turn code for that, but now that I have a better idea of how to avoid the pitfalls I ran into previously, I’m confident it can turn out alright.

Week 8 – More Networking

This week was rough. Nothing really went my way, and I struggled through a bunch of issues with little resolution. I’ll start with the good: I made a small fix to touch management that makes dragging work properly when the finger isn’t positioned over the grid. It used to be that, if you were not touching a grid tile, lifting your finger would cause the game not to register that you stopped dragging. I also fixed a small issue where, if you were dragging one object and held it over another draggable object, the dragged object would be swapped.

So that’s the good news. My initial plan for this week had been to start moving back towards AR and to implement more rules in a networked environment. Starting with AR: for whatever reason, the Augmented Images feature of ARCore, which I was looking into using (aligning the grid onto the map seems difficult over a network otherwise), stopped working. Even the sample scene stopped functioning. I spent a couple of hours trying to debug this while also trying to write code that would work for my use case, but ultimately came away frustrated. I’m looking into using Cloud Anchors now, as I believe they will also function well, but I will need to test them. Aligning the digital and physical grids may be difficult for users, but I’m not super concerned about that for now.

After that, I decided to work on implementing initiative and turn order. This would likely not be difficult in a local setting, but over a network it is proving extremely difficult. I wanted to start basic: rolling initiative takes no user input – it just randomly selects a number from 1–20 and stores an [initiative, character] pair in a dictionary, using the initiative as the key. The game then counts down the initiative and resets the round when it reaches 0. I’ve run into many issues with this. One is synchronizing initiative values: at first, I tried to add a Photon View (which watches a component and synchronizes specified values) to the EncounterManager object and have it watch the initiative dictionary, but it couldn’t watch the dictionary due to the Character type. This required some mild retooling, but wasn’t super difficult.
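Setting the networking aside, the local half of this is simple. Here is a sketch in Python pseudocode of the roll-and-count-down scheme described above – names are illustrative, and note that keying a dictionary on the initiative value would collide on tied rolls, which is one reason a sorted list of pairs is a common alternative:

```python
import random

def roll_initiative(characters, rng=random):
    """Roll a d20 for each character and return (initiative, character)
    pairs sorted from highest to lowest roll. Ties simply keep their
    roll order here; a real implementation would break them by a
    modifier or a re-roll."""
    rolls = [(rng.randint(1, 20), c) for c in characters]
    rolls.sort(key=lambda pair: pair[0], reverse=True)
    return rolls

def turn_cycle(order):
    """Count down the initiative order forever, restarting the round
    after the lowest initiative acts."""
    while True:
        for initiative, character in order:
            yield initiative, character
```

The hard part, of course, is making sure every client agrees on `order` – e.g. rolling on one authoritative client and replicating the result – which is exactly where the trouble started.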

Additionally, for whatever reason, turns don’t seem to sync properly, and when the game starts with two players, each client consistently says that it’s the other player’s turn.

How the screen looks before the encounter begins
The text in the bottom right of each screen should say the same thing, and one of the clients should have an “end turn” button at the bottom of their screen.

Around the time I started on this, my builds also stopped communicating with an instance running in the editor. I tried setting up a symlink so that I could have two editor instances running on the same project, but I couldn’t get that working either. This made debugging extremely difficult. Ultimately, I spent a lot of time wrestling with these issues this week. This has been frustrating, to say the least.

For next week, I’m hoping to meet with a local AR company to discuss ARCore a bit more, and I’m going to keep working at the initiative issue. I think these are both vital to the project, and I hope to make better progress next week.

Week 7 – Networking

This week, I started work on networking within Unity. At the recommendation of my adviser, I used Photon Unity Networking (PUN) to achieve a networked connection across game instances. This week is fairly light reading, as I don’t have a whole lot to show, but I think I’ve made decent progress nonetheless.

The process of getting networking to work in a basic form was straightforward, as Photon has a good official tutorial. I had two characters instantiated in the game world in just a couple of hours. Of course, there was nothing to determine which player could move which pieces, so I had to work on that. This proved harder than expected. At first, I made it so only the owning player could move their piece – but what happens when the DM needs to move it for them? Then I made it so the DM could move anyone’s character… but this caused a pretty major desynchronization, where a player wouldn’t see their piece move unless they had moved it themselves.

I thought that would be an easy fix at first – I made it so that, when the DM tries to move a piece that isn’t their own, ownership is transferred to them for the duration of the drag, then transferred back to the original owner when the finger is released. This took a while to get right, and seemed pretty important, but eventually I got it to a more consistent point, shown below.

Only the left screen is actually the DM.
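The borrow-and-return scheme is easiest to see stripped of the networking details. A sketch in Python pseudocode – `begin_drag`/`end_drag` stand in for the project’s touch events, and the ownership swap stands in for Photon’s ownership-transfer mechanism, so none of these names are the actual PUN API:

```python
class PieceOwnership:
    """Track who controls a piece, with a temporary DM handoff.

    The DM borrows the piece for the duration of a drag so the move
    replicates from a single authoritative owner, then hands it back
    on release so the player keeps control of their own character.
    """

    def __init__(self, owner):
        self.owner = owner          # player who normally controls the piece
        self.original_owner = None  # set only while the DM is borrowing it

    def begin_drag(self, dragger, is_dm):
        # Players may only drag their own pieces; the DM may drag any.
        if dragger == self.owner:
            return True
        if is_dm:
            self.original_owner = self.owner
            self.owner = dragger    # borrow for the duration of the drag
            return True
        return False

    def end_drag(self):
        # Hand the piece back when the finger lifts.
        if self.original_owner is not None:
            self.owner = self.original_owner
            self.original_owner = None
```

The desync risk lives in the gap between `begin_drag` and `end_drag`: if the release event is dropped (say, by flaky touch management), ownership never returns.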

As it is, if you’re not careful, the players can still be desynchronized, but at this point I believe that is an issue with my touch management that I will have to fix later.

Somewhere in here I ran into a fun little issue: if I closed the game without pressing “leave game” first, the Unity Editor would crash, then get stuck in a loop where it crashed every time I opened the project. That was a mess and a half. The fix ultimately wound up being to snipe another scene into the editor before Unity could load the lobby scene – for a while I was worried I was going to have to nuke the project and redo my setup, but I didn’t. So that’s a happy ending, I guess?

I’ve also added functionality for a player occupying a tile to count as rough terrain – nothing huge for a single user, but in multiplayer it took a little extra learning. Thankfully, the solution there was also straightforward – Photon’s RPC calls are easy enough to use.

Overall, I didn’t make a whole lot of tangible progress this week, but I feel like I made significant strides in understanding Photon and networking, and hopefully I’ll be able to springboard off of this week into a lot of meaningful work.

Week 6 – Progress!

This week felt fairly productive. I spent some time figuring out pathfinding and managed to simplify it significantly, created a small feature that I think will be vital in AR, and got the player actually moving.


I started with pathfinding. At first, I spent a while trying to write a good implementation of Dijkstra’s algorithm – finding the shortest path to every tile that could potentially be reachable. This led to a lot of infinite loops and a frozen Unity. I even delved back into my EECS 281 (Data Structures & Algorithms) projects to get a feel for how to approach pathfinding properly, as that course was the only time I’ve really done it.

Eventually, I had an idea that seemed promising – I could find every tile in range, raycast from each tile to the start, and use that smaller subset of tiles between the start and end to pathfind. Then, if there were obstructions or anything else, I could A* from the point of obstruction to the point on the other side!

…Thankfully, that thought process made me realize how much I was overthinking things. I don’t need to find the shortest path to each point – just a path from start to finish – and I can purge any path once it gets too long. I think I actually started with that thought process but lost it along the way. So instead, I just grow outward from the start, highlighting tiles as I go, and if a path gets too long, its tile is ignored.

The algorithm went from this:

(120 lines)

To this:

(5 lines in CharMove, 50 in GridTile)

Note how much smaller the text is in the old function. Also note that the large blocks in the new function are simple conditionals checking all adjacent tiles. I also feel obligated to make a joke here comparing Pathfinder to D&D 5e: my old code is like Pathfinder, the new code is like 5e.
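For the curious, the grow-outward idea can be sketched quite compactly. This is Python pseudocode of the approach as described, not the project’s actual C# – the tile-cost function is an assumption, and movement here is orthogonal only (the D&D diagonal rule would be a refinement on top):

```python
from collections import deque

def reachable_tiles(start, speed, cost_of):
    """Grow outward from `start`, keeping the cheapest known cost to
    reach each tile, and drop any path once it exceeds the movement
    budget `speed`.

    cost_of(tile) is the movement cost to *enter* a tile: 1 for normal
    ground, 2 for rough terrain, and something huge like 999 to make an
    obstruction effectively impassable. Tiles are (x, y) grid pairs.
    Returns the set of reachable tiles, including the start.
    """
    best = {start: 0}          # cheapest cost found so far per tile
    frontier = deque([start])
    while frontier:
        x, y = frontier.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            cost = best[(x, y)] + cost_of(nxt)
            # Re-expand a tile whenever we find a cheaper way in;
            # this is what keeps rough-terrain costs correct.
            if cost <= speed and cost < best.get(nxt, float("inf")):
                best[nxt] = cost
                frontier.append(nxt)
    return set(best)
```

No priority queue, no explicit paths – the budget check is what terminates the search, which is exactly the “purge any path once it gets too long” insight.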

Implementing and debugging this still took time – there were a fair few weird interactions and edge cases to take into account, but ultimately I got this:

Unreachable tiles are hidden in game view, but not scene view, for the purposes of immersion.

Compare that to the image of what it’s supposed to look like from last week’s blog post:


Additionally, it works with rough terrain! In this image, the red tiles are rough terrain:

And here is how the character would be able to move:

Rough tiles are also hidden if they are unreachable.

I’ve hidden tiles in the game view because, once I transition back to working in AR next week, I won’t want the tiles to obscure the map. The perspective is a little off between the images, but the two red tiles on the right in the first image are the same two red tiles in the bottom right of the third image. I placed those there specifically because they cost two tiles of movement to reach as rough terrain, but would be at the limit of the player’s movement over normal terrain. As for why the player can still reach the edges on the left side: the player can move diagonally between the rough tiles without technically touching rough terrain. I don’t currently have solid obstructions, but I’m not concerned about ease of implementation – worst case, if I made the cost of traversing an obstruction something like ‘999,’ it would be trivial to implement, if not clean.


My plan had originally been to work on networking after that, but after pathfinding that seemed like a daunting dive. Instead, I decided I needed to get the player actually moving first. This was a little more difficult than I had anticipated, as my touch management is definitely not as good as I would like, but I had it working sufficiently in only a couple of hours. Now, when you tap and hold on a character, after a brief moment (a sanity check against accidental taps), they will move to wherever you drag, then snap back onto the grid when you release. From there, you can pathfind as normal. Due to the fluid nature of D&D, I did not lock this to tiles the player “can” move to, as a DM might rule something different. Here is player movement in action:

Unfortunately, that juicy circle and trail effect is built into my laptop – though I would like to implement something similar, time-permitting.

Picture-In-Picture, for AR

To go with this, I also added another feature that I suspect will be vital to the AR implementation: a picture-in-picture minimap of sorts for dragging. Because a player will likely be sitting at an off angle and at a distance from the game, I realized that just scaling tile collisions won’t be enough, so I created a Picture-In-Picture that shows an orthographic top-down view of exactly where the player is dragging on the grid:

The green tile is the one that the finger is directly over.

While it is a little choppy here, I’ve found it extremely useful for precisely selecting a tile. I’m very happy with how this turned out, and it has cost me very little for a whole new feature thus far. Moving forward, I would like each tile to have a chess-like label that only shows up in the PiP – something like A1, etc. – and I’d like the view to rotate with the player, but those are not high-priority tasks. It can be confusing right now, so at some point I will have to fix that, but I have ideas.

Moving Forward

After that, I started running through a tutorial on setting up the PUN – Photon Unity Networking – library and getting networking running. I was able to connect my personal phone as well as a Pixel 2 I received on loan from a contact to a basic sample game (though I couldn’t actually do anything, as the sample game was not designed for touch screens), but I haven’t made significant progress beyond that yet.

Ultimately, I’m happy with my progress for this week! I got through a problem that I knew I would struggle with fairly unscathed, got another function that was harder than expected working well enough, and implemented another function that I suspect will be important surprisingly easily! For next week, I will work more on Networking, and I will also get back to my AR implementation. For now, my plan is to use a ground plane for the map and I will just do more testing on the Pixel 2, which I was given specifically to take advantage of ARCore.

Week 5 – Starting Visualization

This week, I ran into some AR roadblocks and decided to work on visualization tools instead.

I printed out a full, 30-by-30-inch battle map and tested the grid on that. It was, to say the least, not what I expected. Unity and Vuforia did not want to play nice with the setup I had, and nothing looked right in the end. Extended tracking also continued to serve as a point of frustration, so I opted to take a break from the AR implementation before I wasted too much time on it and started working on game rules instead.

First, I had to polish my grid implementation – this was fairly basic work and went smoothly. I also had to touch up the touch manager, which again went fairly smoothly. From there, I started with Areas of Effect. This was a fairly straightforward implementation – after my improvements to the grid and touch, I could easily spawn a cube, sphere, cylinder, or cone and place it in the game world. Once placed, they highlight any tiles and characters they overlap.

I also started work on highlighting player movement. At first, I expected this to be fairly straightforward, but it wound up being my Wall of the Week™. Since D&D operates on a square grid, movement seemed like it should be easy, but once I started working on it I realized there’s a catch: diagonals. Most grid-based games use taxicab movement – you can only take hard-angle turns – which means that in a field with no obstructions, a character with 6 tiles of movement would have a movement range that looks like this:

In D&D, the player can move diagonally, and the rules on that are odd, to say the least. The first space a player moves diagonally costs them 1 tile of movement, as normal – but if they move diagonally immediately afterward, it costs them 2, then 1, then 2, etc. This results in a movement radius that looks more like this:

Both of these images were taken from here.

Taking this into account, alongside the other factors that affect movement – obstructions (walls, trees) and the rough terrain mechanic (crossing a rough terrain tile costs double movement) – this will require more significant pathfinding than I was prepared for. I am currently working on a Dijkstra’s implementation for it, but it has not gone smoothly so far.
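The diagonal rule itself is easy to compute for a given path; it’s searching over paths that gets hard. A worked sketch of the alternating 1-2-1-2 cost in Python pseudocode – names are illustrative, and under one common reading of the rule the alternation persists across orthogonal steps, which is what this does:

```python
def path_cost(path):
    """Movement cost of a path of adjacent (x, y) grid positions under
    the alternating-diagonal rule: the first diagonal step costs 1 tile
    of movement, the next costs 2, then 1 again, and so on. Orthogonal
    steps always cost 1. Rough terrain would double a step's cost on
    top of this; it is omitted here for clarity.
    """
    cost = 0
    cheap_diagonal = True  # next diagonal costs 1 if True, 2 if False
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        if x0 != x1 and y0 != y1:          # diagonal step
            cost += 1 if cheap_diagonal else 2
            cheap_diagonal = not cheap_diagonal
        else:                              # orthogonal step
            cost += 1
    return cost
```

This is why plain taxicab flood-fill isn’t enough: the cost of entering a tile now depends on how you got there, not just on the tile itself.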

Next week, I am planning to finish the movement radius, and would like to begin working on networking. I am debating revisiting ARCore, as well.
