Post-Expo

And just like that, it’s over. Yesterday was the engineering design expo, where we showcased our projects.

The Last Week

Over the last week, I was able to fix a number of bugs and work in some polish. I fixed some issues with scaling, some with touch management, and I resolved a major problem where I could not get clients to connect to each other reliably (as it turned out, I was dealing with server-region issues). Unfortunately, I was ultimately unable to get both AR and networked multiplayer working reliably in the same client, but I fixed a number of issues plaguing both, so at the expo I had a phone to show off AR and my laptop to show off networking.

The Expo

At the expo, I had a fairly quiet showing. I got some compliments on the arrangement of my poster (based on a D&D 5e character sheet) and the project’s name, which were nice, if unimportant. I had a fairly modest demo, so I wasn’t expecting to catch too many people’s eyes, but I think more people stopped by than I expected. Those who did stop generally didn’t seem too interested in the demos themselves, nor in hearing about the troubles I ran into with networking. Most of my good conversations were about why I wanted to do this project, how its scope changed, and why I opted to include networked multiplayer halfway through.

Most people who stopped by had enough knowledge about D&D to know that combat would at least have the potential to take a long time, and people who played tabletop RPGs generally agreed that a tool like Familiar would be useful.

Wrap-up

All in all, it’s hard to call Familiar a success. However, I think in the end I got much closer to my goal than I had expected, and I learned a lot along the way. Naturally, this project taught me a fair bit about time management and scoping, and I picked up useful skills in ARCore and Photon.

If I were to do this project again, the biggest thing I would do differently is stick with Vuforia. Learning ARCore was nice, but I think I let myself get too hung up on the imperfections in Vuforia and in modern AR tech in general. Ideally, I would have operated with a much less realistic board, with tiles up to something like 8 inches by 8 inches, and created something that could be scaled toward realism as the technology improves. This is a line I’m interested in continuing down in the future, as I think it’s promising, and I still believe that the original idea for Project Familiar has potential.

Week 10 – Turns!

I finally made some much-needed progress on turn order this week, and I polished up the ortho view.

(I apologize for the relative lack of images – I will update with those ASAP, but I can’t get pictures off of the Pixel for the moment.)

I started by doing something a little simpler to ease into it – I added zooming & scrolling to the main orthographic view. This was fairly straightforward and only took a couple of hours. There were a few problems here and there, but most had fairly trivial solutions once I diagnosed them. Scroll bars didn’t like to scale properly, as an example – I don’t really understand why this was the case, but it’s not like it was a challenging fix. I spent more time trying to handle issues with object placement: the grid isn’t centered at (0, 0) in ARCore, and mixing that with scaling meant it was extremely easy for the orthographic view to find itself centered nowhere near the grid.
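
To give a sense of what that ended up involving, here’s a minimal sketch of the zoom/pan and re-centering logic. This is my own illustration, not the project’s code – in particular, gridCenter and gridSize stand in for whatever the GridManager singleton (from Week 2) actually reports:

```csharp
using UnityEngine;

// Sketch: center an orthographic camera on a grid whose origin isn't (0, 0),
// then add pinch-to-zoom and one-finger panning. Values are placeholders.
public class OrthoViewController : MonoBehaviour
{
    public Camera orthoCam;          // top-down orthographic camera
    public Vector3 gridCenter;       // from the grid itself - it isn't at the origin!
    public float gridSize = 10f;     // world-space span of the grid
    public float minZoom = 2f;
    public float maxZoom = 20f;

    void Start()
    {
        // Center over the grid, wherever its origin and scale ended up.
        orthoCam.transform.position =
            new Vector3(gridCenter.x, gridCenter.y + 10f, gridCenter.z);
        orthoCam.orthographicSize = gridSize * 0.5f;
    }

    void Update()
    {
        if (Input.touchCount == 2)  // pinch to zoom
        {
            Touch a = Input.GetTouch(0), b = Input.GetTouch(1);
            float prev = Vector2.Distance(a.position - a.deltaPosition,
                                          b.position - b.deltaPosition);
            float curr = Vector2.Distance(a.position, b.position);
            orthoCam.orthographicSize = Mathf.Clamp(
                orthoCam.orthographicSize + (prev - curr) * 0.01f,
                minZoom, maxZoom);
        }
        else if (Input.touchCount == 1 &&
                 Input.GetTouch(0).phase == TouchPhase.Moved)  // drag to pan
        {
            Vector2 d = Input.GetTouch(0).deltaPosition;
            float perPixel = orthoCam.orthographicSize * 2f / Screen.height;
            // Local x/y on a downward-facing camera pan across the grid plane.
            orthoCam.transform.Translate(-d.x * perPixel, -d.y * perPixel, 0f);
        }
    }
}
```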

I have also taken steps to greatly simplify my network code – and it looks like it’s paying off! I stripped almost every method that wasn’t either meant to be an RPC call or a helper, and made sure that any logic more complicated than changing a couple of variables based on one input was handled entirely by the host. I also simplified method parameters – previously, I had been using parameters that were too complex to work with, but now they’ve been reduced to a single string at most.
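
The shape of the simplified calls looks something like this hypothetical sketch, assuming PUN 2 (the names are mine, not the actual AddCharacter code):

```csharp
using Photon.Pun;
using UnityEngine;

// Sketch of the simplified pattern: clients pass at most one string, and
// anything more complex is decided by the host (master client).
public class CharacterSpawner : MonoBehaviourPun
{
    // Any client can request a character; only the host acts on the request.
    public void RequestAddCharacter(string characterName)
    {
        photonView.RPC(nameof(AddCharacter), RpcTarget.MasterClient, characterName);
    }

    [PunRPC]
    void AddCharacter(string characterName)
    {
        if (!PhotonNetwork.IsMasterClient) return;
        // The host owns the real logic: resolve the name, pick a spawn tile,
        // and let Photon replicate the new object to everyone else.
        // (Assumes a prefab with that name under a Resources folder.)
        PhotonNetwork.Instantiate(characterName, Vector3.zero, Quaternion.identity);
    }
}
```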

A before and after of how characters were added. AddCharacter was used in the same situation as AddCharacter_New. AddCharacterStatic is used in the same situation, but only for non-host players.

Here it is in action:

Super exciting stuff, I know. I also managed to get it set up such that a player can no longer drag their character around outside of their turn, but can still take steps to plan their turn, such as viewing characters’ movement radii.

While it may not seem that big, this is massive for me this week, especially since the expo is only 8 days out. The last things I need to do now are:

  • Clean up touch management a bit. Due to (I believe) the toll AR takes on devices, touch management can be spotty in AR. I think the phone sometimes misses the moment a TouchEnd event should fire, and other times it just loses track of the finger.
  • Multiplayer AR. While Cloud Anchors would be ideal, I may have to settle for getting augmented images working again. It seems like a consistent way to not have to worry about actually networking the grid while ensuring that it is as aligned as possible.
  • General polish (sir). There are smaller issues here and there – AoEs don’t scale properly in AR thanks to the grid-scale slider, the picture-in-picture doesn’t pop up until you let go if it’s not your turn, highlighting tiles while dragging doesn’t work anymore, etc. Mostly small things that I can go without, but that should also only take a matter of minutes to fix.
Week 9 – Almost There?

This week was also a little messy, but I did get some stuff done. Unfortunately, I was unable to get either of my main goals from last week done. I opted to focus less on Cloud Anchors this week – after reading more on them, they seemed difficult to implement using Photon, and I wanted to get my other issues in Photon solved first.

Which brings me to my first point: I left off last week in an ugly place with regard to networked multiplayer. I made a little bit of progress on that front this week – my code was looking more than a little al dente, so I decided to first take a few steps back and untangle the noodles a bit. Before, I had each client processing turn changes; now, as many of those actions as possible fall to the host, which then tells the other clients the outcome.

Thankfully, this fixed the turn-sync issue – both clients now agree on whose turn it is! However, it also revealed that the host believed it was the owner of every player in the game, though that was a fairly easy fix. I left off this week with it appearing that ending a turn doesn’t actually rotate through the turn order – very concerning. But I had spent a lot of time and energy on this front, and I felt like I needed something more tangible.
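
For reference, the host-decides pattern I’m moving toward looks roughly like this – a sketch assuming PUN 2, with all names mine:

```csharp
using System.Collections.Generic;
using Photon.Pun;

// Sketch: clients only ever *ask* to end a turn; the host advances the order
// and broadcasts the result, so every client stores the same answer.
public class TurnManager : MonoBehaviourPun
{
    // Initiative order as PhotonView IDs; meaningful on the host only.
    readonly List<int> turnOrder = new List<int>();
    int currentIndex;

    // Mirrored on every client once SetTurn arrives.
    public int CurrentTurnActor { get; private set; }

    public void RequestEndTurn()
    {
        photonView.RPC(nameof(HostEndTurn), RpcTarget.MasterClient);
    }

    [PunRPC]
    void HostEndTurn()
    {
        if (!PhotonNetwork.IsMasterClient || turnOrder.Count == 0) return;
        currentIndex = (currentIndex + 1) % turnOrder.Count;  // actually rotate!
        photonView.RPC(nameof(SetTurn), RpcTarget.All, turnOrder[currentIndex]);
    }

    [PunRPC]
    void SetTurn(int actorViewId)
    {
        CurrentTurnActor = actorViewId;
    }
}
```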

So, I went back to AR. I’d made so many changes since I last worked on my AR implementation that I could no longer just drop my code into an ARCore scene and watch it work, so I had to make a surprising number of changes to get back to that environment. It didn’t help that Unity decided it no longer wanted to work with ARCore’s Instant Preview, so I now have to build to a phone to test anything in AR and I don’t have access to the editor at runtime – a considerable slowdown, but not a total blocker. Now I have it working in AR, though it’s not perfectly stable; touch and drag registration is finicky, but when it works, it looks good! Shout-outs to my contact who lent me a Google Pixel 2, which has proven to be orders of magnitude more consistent with ARCore than my own phone.

Detected floor…
OH!
There we go.
And voila!

For now, this is absolutely acceptable! It’s not perfect, but it works well enough. I’ve gotten used to adjusting my touch management on a nearly weekly basis by this point, anyway. As you can see, I also added a scale slider – it doesn’t work perfectly, and its range is a little crazy, but I’m happy with it as a quick-and-dirty implementation of a much-needed feature.

In addition, I decided to implement another AR aid feature: a full orthographic view. Many games that use a tile-based grid will offer a minimap, such as in the Fire Emblem series:

Fire Emblem: The Blazing Blade, for the Game Boy Advance

Since I made the picture-in-picture view a few weeks ago, a full orthographic view of the entire map seemed like a solid move. I ran into several issues here: touch management caused more problems, scaling and camera placement were messy (🍝I currently have two scale factors🍝), and swapping between the two views wasn’t as straightforward as I would have liked, but it turned out alright:

Ft. Detected Plane

Ideally, this view would be able to zoom and pan, but I thought this was a fine stopping point for the week. I could also continue to show the AR view in the background, but that might just be visual noise.

Going forward, I’ll set myself easier, smaller goals – clearly I need to be doing that more often. Specifically, I’ll implement zooming and panning for the orthographic view; I’ve already set up a basic UI for that, but haven’t implemented the functionality. On a larger scale, I need to figure out the turn-order issues. I may need to completely restructure my turn code for that, but now that I have a better idea of how to avoid the pitfalls I ran into previously, I’m confident it can turn out alright.

Week 8 – More Networking

This week was rough. Nothing really went my way, and I struggled through a bunch of issues with little resolution. I’ll start with the good: I made a small fix to touch management that makes dragging work properly when the finger isn’t positioned over the grid. It used to be that, if you were not touching a grid tile, lifting your finger would cause the game not to register that you had stopped dragging. I also fixed a small issue where, if you were dragging one object and held it over another draggable object, the dragged object would be swapped.

So that’s the good news. My initial plan for this week had been to start moving back towards AR and to implement more rules in a networked environment. Starting with AR: for whatever reason, the Augmented Images feature of ARCore, which I was looking into using (aligning the grid onto the map seems difficult over a network otherwise), stopped working. Even the sample scene stopped functioning. I spent a couple of hours trying to debug this while also trying to write code that would work for my use case, but ultimately came away frustrated. I’m looking into using Cloud Anchors now, as I believe they will also function well, but I will need to test them. Aligning the digital and physical grids may be difficult for users, but I’m not super concerned about that for now.

After that, I decided to work on implementing initiative and turn order. This would likely not be difficult in a local setting, but over a network it is proving extremely difficult. I wanted to start basic: rolling initiative takes no user input – the game just randomly selects a number from 1 to 20 and stores an [initiative, character] pair in a dictionary, using the initiative as the key. The game then counts down through initiative and resets the round when it reaches 0. I’ve run into many issues with this. One is synchronizing initiative values: at first, I tried to add a Photon view (which watches a component and synchronizes specified values) to the EncounterManager object and have it watch the initiative dictionary, but it couldn’t watch the dictionary due to the Character type. This required some mild retooling, but wasn’t super difficult.
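
One plausible shape of that retooling, as a sketch: Photon can serialize ints but not a custom Character type, so the synced data can map initiative to PhotonView IDs, with each client resolving IDs back to local objects (the names here are mine):

```csharp
using System.Collections.Generic;
using System.Linq;
using Photon.Pun;
using UnityEngine;

// Sketch: keep the network-facing data as plain ints (initiative -> view ID)
// and resolve IDs back to local objects on each client.
public class InitiativeTracker : MonoBehaviourPun
{
    readonly Dictionary<int, int> initiativeToViewId = new Dictionary<int, int>();

    public void RollInitiative(PhotonView character)
    {
        int roll = Random.Range(1, 21);  // d20: 1-20 inclusive
        // (Real code would have to break ties; this overwrites on a collision.)
        initiativeToViewId[roll] = character.ViewID;
    }

    // Count down from the highest initiative; a full pass is one round.
    public IEnumerable<int> TurnOrder()
    {
        return initiativeToViewId.Keys
            .OrderByDescending(init => init)
            .Select(init => initiativeToViewId[init]);
    }

    public GameObject Resolve(int viewId)
    {
        return PhotonView.Find(viewId).gameObject;  // back to the local object
    }
}
```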

Additionally, for whatever reason, turns don’t seem to sync properly, and when the game starts with two players, each client consistently says that it’s the other player’s turn.

How the screen looks before the encounter begins
The text in the bottom right of each screen should say the same thing, and one of the clients should have an “end turn” button at the bottom of their screen.

Around the time I started on this, my builds also stopped communicating with an instance running in the editor. I tried setting up a symlink so that I could have two editor instances running on the same project, but I couldn’t get that working either. This made debugging extremely difficult. Ultimately, I spent a lot of time wrestling with these issues this week. This has been frustrating, to say the least.

For next week, I’m hoping to meet with a local AR company to discuss ARCore a bit more, and I’m going to keep working at the initiative issue. I think these are both vital to the project, and I hope to make better progress next week.

Week 7 – Networking

This week, I started work on networking within Unity. At the recommendation of my adviser, I used Photon Unity Networking (PUN) to achieve a networked connection across game instances. This week is fairly light reading, as I don’t have a whole lot to show, but I think I’ve made decent progress nonetheless.

The process of getting networking to work in a basic form was straightforward, as Photon has a good official tutorial. I had two characters instantiated in the game world in just a couple of hours. Of course, there was nothing to determine which player could move which pieces, so I had to work on that. This proved harder than expected. At first, I made it so only the owning player could move their piece – but what happens when the DM needs to move it for them? Then I made it so the DM could move anyone’s character… but this caused a pretty major desynchronization where a player wouldn’t see their piece move unless they were the one who moved it.

I thought that would be an easy fix at first – I made it so that, when the DM tries to move a piece that isn’t their own, ownership is transferred to them for the duration of the drag, and then transferred back to the original owner when the finger is released. This took a while to get right, and seemed pretty important, but eventually I got it to a more consistent point, shown below.
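
The ownership dance itself is short – here’s a minimal sketch, assuming PUN 2 with the PhotonView’s ownership option set to Takeover (names are mine, for illustration):

```csharp
using Photon.Pun;

// Sketch: temporarily take ownership of a piece for the duration of a drag,
// then hand it back to the original owner on release.
public class DraggablePiece : MonoBehaviourPun
{
    Photon.Realtime.Player originalOwner;

    // Called when the DM starts dragging a piece that isn't theirs.
    public void OnDragStart()
    {
        originalOwner = photonView.Owner;
        if (!photonView.IsMine)
            photonView.TransferOwnership(PhotonNetwork.LocalPlayer);
    }

    // Called when the finger lifts: hand the piece back to its player.
    public void OnDragEnd()
    {
        if (originalOwner != null &&
            originalOwner.ActorNumber != PhotonNetwork.LocalPlayer.ActorNumber)
        {
            photonView.TransferOwnership(originalOwner);
        }
    }
}
```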

Only the left screen is actually the DM.

As it is, if you’re not careful, the players can still be desynchronized, but at this point I believe that is an issue with my touch management that I will have to fix later.

Somewhere in here I ran into a fun little issue: if I closed the game without pressing “leave game” first, the Unity Editor would crash, and then get stuck in a loop where it would crash again every time I opened the project. That was a mess and a half. The fix ultimately wound up being to snipe another scene into the editor before Unity could load the lobby scene; for a while I was worried I was going to have to nuke the project and redo my setup, but it never came to that – so that’s a happy ending, I guess?

I’ve also added functionality for a tile occupied by a player to count as rough terrain – nothing huge for a single user, but in multiplayer it took a little bit of extra learning. Thankfully, the solution was straightforward there too – Photon’s RPC calls are easy enough to use.

Overall, I didn’t make a whole lot of tangible progress this week, but I feel like I made significant strides in understanding Photon and networking, and hopefully I’ll be able to springboard off of this week into a lot of meaningful work.

Week 6 – Progress!

This week felt fairly productive. I spent some time trying to figure out pathfinding and managed to simplify it significantly in the end; I created a small feature that I think will be vital in AR; and I got the player actually moving.

Pathfindering

I started with pathfinding. At first, I spent a while trying to make a good implementation of Dijkstra’s algorithm – trying to find the shortest path to each tile that could potentially be reachable. What this led to was a lot of infinite loops and freezing Unity. I even delved back into my EECS 281 (Data Structures & Algorithms) projects to try and get a feel for how to approach pathfinding in a good way, as that was the only time I’ve really done pathfinding. 

Eventually, I had an idea that seemed promising – I could find every tile in range, raycast from each tile back to the start, and use that smaller subset of tiles between the start and end to pathfind. Then, if there were obstructions or anything else in the way, I could A* from the point of obstruction to the point on the other side!

…Thankfully, that thought process made me realize how much I was overthinking things. I don’t need to find the shortest path to each point – just a path from start to finish, and I can purge any path once it gets too long. (I think I actually started with that thought process, but lost sight of it somewhere along the way.) So instead, I just grow outward from the start, highlighting points as I go; if a path gets too long, its endpoint is ignored.
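
As a sketch of that grow-outward pass, under my own naming (the stub GridTile stands in for the real tile class, and the alternating diagonal cost from Week 5 is omitted for brevity):

```csharp
using System.Collections.Generic;

// Stand-in for the project's real tile class.
public class GridTile
{
    public List<GridTile> Neighbors = new List<GridTile>();
    public bool IsRough;
}

public static class MoveRange
{
    // Returns every tile reachable within maxCost movement points.
    public static HashSet<GridTile> FindReachable(GridTile start, int maxCost)
    {
        var best = new Dictionary<GridTile, int> { [start] = 0 };
        var frontier = new Queue<GridTile>();
        frontier.Enqueue(start);

        while (frontier.Count > 0)
        {
            GridTile tile = frontier.Dequeue();
            int costSoFar = best[tile];

            foreach (GridTile next in tile.Neighbors)
            {
                // Rough terrain costs double movement to enter.
                int newCost = costSoFar + (next.IsRough ? 2 : 1);

                // Purge any path once it gets too long.
                if (newCost > maxCost) continue;

                // Only keep growing through a tile if this path is cheaper.
                if (best.TryGetValue(next, out int oldCost) && oldCost <= newCost)
                    continue;

                best[next] = newCost;
                frontier.Enqueue(next);
            }
        }
        return new HashSet<GridTile>(best.Keys);
    }
}
```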

The algorithm went from this:

(120 lines)

To this:

(5 lines in CharMove, 50 in GridTile)

Note how much smaller the text is in the old function. Also, one should note that the large blocks in the new function are simple conditionals checking all adjacent tiles. I also feel obligated to make a joke about/comparison to Pathfinder vs. D&D 5e here: my old code is like Pathfinder, the new code is like 5e.

Implementing and debugging this still took time – there were a fair few weird interactions and edge cases to take into account, but ultimately I got this:

Unreachable tiles are hidden in game view, but not scene view, for the purposes of immersion.

Compare that to the image of what it’s supposed to look like from last week’s blog post:

Bingo.

Additionally, it works with rough terrain! In this image, the red tiles are rough terrain:

And here is how the character would be able to move:

Rough tiles were also hidden if they were unreachable.

I’ve hidden tiles in the game view, as once I transition back to working in AR next week I won’t want the tiles to obscure the map. The perspective is a little off between the images, but the two red tiles on the right in the first image are the same two red tiles in the bottom right of the third image. I specifically placed those there because they would cost two tiles of movement to reach as rough terrain, but would be the limit of the player’s movement in normal terrain. As for why the player is still able to reach the edges on the left side: the player can move diagonally between the rough tiles without technically touching rough terrain. I don’t have solid obstructions at the moment, but I am not concerned about the ease of implementation – worst case, if I made the cost of traversing an obstruction something like ‘999,’ it would be trivial to implement, if not clean.

Movement

My plan had originally been to work on networking after that, but after pathfinding that seemed like a daunting dive. Instead, I decided I needed to get the player to actually move first. This was a little more difficult than I had anticipated, as my Touch Management is definitely not as good as I would like, but I had it working sufficiently in only a couple of hours. Now, when you tap and hold on a character, after a brief moment (a sanity check for accidentally tapping on the player), they will move to wherever you drag, and then snap back onto the grid when you release. From there, you can pathfind as normal. Due to the fluid nature of D&D, I did not lock this to tiles that the player “can” move to, as a DM might rule something different. Here is player movement in action:

Unfortunately, that juicy circle and trail effect is built into my laptop – though I would like to implement something similar, time-permitting.
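
The tap-and-hold check itself is small. A hypothetical sketch of the idea (the delay value is made up):

```csharp
using UnityEngine;

// Sketch: a touch only becomes a drag after a short hold, which filters out
// accidental taps on a character.
public class HoldToDrag : MonoBehaviour
{
    public float holdDelay = 0.3f;  // the "brief moment" before a drag starts
    float heldFor;
    bool dragging;

    void Update()
    {
        if (Input.touchCount == 0)
        {
            heldFor = 0f;
            EndDrag();
            return;
        }

        Touch touch = Input.GetTouch(0);
        if (touch.phase == TouchPhase.Ended || touch.phase == TouchPhase.Canceled)
        {
            EndDrag();  // snap back onto the nearest grid tile here
            heldFor = 0f;
            return;
        }

        heldFor += Time.deltaTime;
        if (!dragging && heldFor >= holdDelay)
            dragging = true;  // from here on, the piece follows the finger

        if (dragging)
            FollowFinger(touch.position);
    }

    void FollowFinger(Vector2 screenPos) { /* raycast to the grid and move */ }
    void EndDrag() { dragging = false; /* snap the piece to the grid */ }
}
```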

Picture-In-Picture, for AR

To go with this, I also added another feature that I suspect will be vital to the AR implementation: a picture-in-picture minimap-type-thing for dragging. Because a player will likely be sitting at an off-angle and at a large distance from the game, I realized that just scaling tile collisions won’t be enough, so I created a picture-in-picture that shows an orthographic top-down view of exactly where the player is dragging on the grid:

The green tile is the one that the finger is directly over.

While it is a little choppy here, I’ve found it to be extremely useful for precisely selecting a tile. I’m very happy with how this turned out, and it has cost me very little for a whole new feature thus far. Moving forward, I would like to give each tile a chess-like label (A1 and so on) that only shows up in the PiP, and I’d like the view to rotate with the player, but those are not high-priority tasks. It can be confusing for now, so at some point I will have to fix that, but I have ideas.
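
For the curious, a view like this can be built from a second orthographic camera rendered into a corner of the screen via its viewport rect – a minimal sketch, with my own names:

```csharp
using UnityEngine;

// Sketch: a dedicated top-down camera renders into a corner of the screen
// and hovers over whatever piece is currently being dragged.
public class PiPCamera : MonoBehaviour
{
    public Camera pipCam;      // a second, top-down camera
    public Transform dragged;  // the piece currently being dragged, if any

    void Start()
    {
        // Normalized viewport rect: the top-right ~third of the screen.
        pipCam.rect = new Rect(0.65f, 0.65f, 0.35f, 0.35f);
        pipCam.orthographic = true;
        pipCam.orthographicSize = 3f;  // a few tiles of context (made up)
    }

    void LateUpdate()
    {
        pipCam.enabled = dragged != null;
        if (dragged == null) return;

        // Hover above the dragged piece, looking straight down at the grid.
        pipCam.transform.position = dragged.position + Vector3.up * 5f;
        pipCam.transform.rotation = Quaternion.Euler(90f, 0f, 0f);
    }
}
```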

Moving Forward

After that, I started running through a tutorial on setting up the PUN – Photon Unity Networking – library and getting networking running. I was able to connect my personal phone and a Pixel 2 I received on loan from a contact in a basic game (though I couldn’t actually do anything, as the sample game was not designed for touch screens). I haven’t made significant progress beyond that yet.

Ultimately, I’m happy with my progress this week! I got through a problem I knew I would struggle with fairly unscathed, got another feature that was harder than expected working well enough, and implemented a third feature that I suspect will be important surprisingly easily! Next week, I will work more on networking, and I will also get back to my AR implementation. For now, my plan is to use a ground plane for the map, and I will do more testing on the Pixel 2, which I was given specifically to take advantage of ARCore.

Week 5 – Starting Visualization

This week, I ran into some AR roadblocks and decided to work on visualization tools instead.

I printed out a full, 30-by-30-inch battle map and tested the grid on that. It was, to say the least, not what I expected. Unity and Vuforia did not want to play nice with the setup I had, and nothing looked right in the end. Extended tracking also continued to serve as a point of frustration, so I opted to take a break from the AR implementation before I wasted too much time on it and started working on game rules instead.

First, I had to polish my grid implementation – this was fairly basic work and went smoothly. I also had to touch up the touch manager, which again went fairly smoothly. From there, I started first with Areas of Effect. This was a fairly straightforward implementation – after my improvements to grid and touch, I could easily spawn a cube, sphere, cylinder, or cone and place it in the game world. Once placed, they highlight any tiles and characters they overlap with.

I also started work on highlighting player movement. At first, I expected this to be fairly straightforward too, but it wound up being my Wall of the Week™. Since D&D operates on a square grid, movement seemed like it should be easy, but once I started working on it I realized there’s a catch: diagonals. Most grid-based games use taxicab movement – you can only move in the four cardinal directions – which would mean that in a field with no obstructions, a character with 6 tiles of movement would have a movement range that looks like this:

In D&D, the player can move diagonally, and the rules on that are odd, to say the least. The first space a player moves diagonally costs them 1 tile of movement, as normal – but if they move diagonally immediately afterward, it costs them 2, then 1, then 2, etc. This results in a movement radius that looks more like this:

Both of these images were taken from here.
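
In code terms, the alternating rule has a tidy closed form – n diagonal steps cost n + n/2 under integer division – as in this hypothetical helper:

```csharp
// Sketch of the variant diagonal rule: diagonal steps alternate costing
// 1 and 2 tiles, so n diagonals cost n + n/2 using integer division.
// e.g. 5 diagonal steps cost 1+2+1+2+1 = 7 = 5 + (5 / 2).
public static class MoveMath
{
    public static int MoveCost(int straightSteps, int diagonalSteps)
    {
        return straightSteps + diagonalSteps + diagonalSteps / 2;
    }
}
```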

Taking this into account, alongside other factors that affect movement, like obstructions (walls, trees) or the rough terrain mechanic (crossing a tile that is rough terrain costs double movement), this will require some more significant pathfinding than I was prepared for. I am currently working on a Dijkstra’s implementation for this, but it has not gone smoothly so far.

Next week, I am planning to finish the movement radius, and would like to begin working on networking. I am debating revisiting ARCore, as well.

Week 4 – Splitting Targets, Not Hairs

This week, I made good progress in both tech and ideas. I was able to get in contact with three people I’d reached out to: a D&D content writer, who I have been exchanging messages with on Twitter; an AR/VR research professor, with whom I met last week; and an AR startup CEO, who I will be meeting with soon. I have been building off of the conversations I’ve had this week and making good progress!

Interviews

I have been exchanging direct messages on Twitter with a D&D content writer, mostly talking about how he runs the game and what features he would like to see implemented. This conversation is still ongoing, but he has floated some really interesting ideas so far; most notably, what he called “afterimaging”: allowing users to “rewind time” to view the game state on previous turns. Seeing how a battle has progressed could be useful for jogging memory or fixing the game after a mistake, and could allow for some interesting homebrew rules, too! Implementing this shouldn’t be too hard, due to the relatively static state of a D&D board – I would just need to save the game’s state after each turn. I just need to be careful of feature creep here.
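
As a rough sketch of why that’s cheap (assuming a per-turn snapshot only needs each piece’s grid position – the names here are hypothetical):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of "afterimaging": because the board state is mostly static, a
// per-turn snapshot can be as small as piece ID -> grid coordinate.
public class TurnHistory : MonoBehaviour
{
    readonly List<Dictionary<int, Vector2Int>> turns =
        new List<Dictionary<int, Vector2Int>>();

    public void SaveTurn(Dictionary<int, Vector2Int> piecePositions)
    {
        // Copy so later moves don't mutate the history.
        turns.Add(new Dictionary<int, Vector2Int>(piecePositions));
    }

    // "Rewind time": fetch the board as it was N turns ago.
    public Dictionary<int, Vector2Int> Rewind(int turnsAgo)
    {
        int index = turns.Count - 1 - turnsAgo;
        return index >= 0 ? turns[index] : null;
    }
}
```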

Last Thursday, I met with a research professor at Michigan’s School of Information who works with AR/VR. He offered a lot of insight, and was interested to see how my project goes even if the app doesn’t work out in the end (phew). I learned a bit about how Vuforia and most AR APIs work, and he helped point me in the right direction regarding splitting up my maps. Rather than cutting them into contiguous chunks, he believed I might get better results if I just picked a few chunks and let the tiles span the gaps. I think this is a promising lead, and I spent a fair amount of time this week moving in that direction. My only issue with it is that it likely rules out Vuforia’s virtual buttons, but I’m not sure those would have been the solution for me anyway.

Splitting Targets

Hot off of the second interview, I went to Vuforia’s developer portal, loaded the Target Manager, and uploaded the battle map I used as an example in last week’s post to see how it would work as a target.

https://www.youtube.com/watch?v=ofqAUeVtPqw

So that’s not going to work. This map has little contrast, few details, and lots of repetition – it was destined to fail. I tried another map I found online, Abbey of the Golden Leaf, by /u/AfternoonMaps on reddit.

Much better than the old one! In fact, this one is so good I felt it might be a waste to chop it up into many smaller targets.

First, I tried cutting it into quadrants. This map is 30×30, so it cut nicely into 15×15 chunks. These scored well (4-5 stars), but they were all touching and the top-right one was hurt by the water (aka: featureless blob) in the corner. I could do better, so I picked areas on the map that seemed like they could score well, using these criteria:

  • High contrast
  • Right Angles/Intersections
  • Little empty space
  • Large (about 6×6 minimum)

From that, I got these:

The bordered areas became their own targets.

These each scored 4-5 stars – not as good as the whole map, arguably not even as good as the quadrants, but 4 stars is still considered “excellent.” I wanted a longer one, a wider one, and a square-er one so that there would be a good target to recognize from different viewing angles – a target that is (from a given perspective) longer will appear more warped as the angle gets more extreme.

One thing worth noting is that, while these targets are worse than the original, they’re more granular. While it might be easier for a device to maintain tracking on the full map in theory, in practice players may sit too close to it to detect it in the first place, so being able to detect smaller portions of the map will be important.

I tried splitting the other map into smaller portions, to see how it would fare:

Take 2: https://www.youtube.com/watch?v=ofqAUeVtPqw

That’s… better, I guess? It’s still pretty bad. I picked the most feature-dense regions of the map, and even this scored 1 star:

I tried messing with filters, drawing random symbols, and generating noise on the map to see if I could raise the score without obfuscating the map itself, but the only thing that worked was simply cranking up the contrast:

2 – 3 stars still isn’t good enough for me, so I figured I would settle with the good map for now and move forward.

Testing Touch on the Grid

I took a break from anything to do with the targets for a bit and whipped up a non-AR scene to test touching the grid. It’s a simple scene with four cameras around a 30×30 grid, where you can tap on grid tiles to highlight them. It was hard at first to touch tiles toward the far end of the grid, so I made their collider heights scale with distance from the camera, and in the end I got this:

Green outlines are colliders – notice the camera towards the top left.

I expected the scaling not to help much, but it works quite well! If you want to see how it feels for yourself and have an Android device, you can download the test APK here. If you give it a try, let me know how it goes! For now, you can only tap each square, but moving forward I intend for tile selection to be done by dragging. Specifically, tiles will be highlighted as they are passed over, but will not be selected until the user lifts their finger. This should feel better than it does now, as it should give the user finer control over their selections.
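
The distance-based scaling itself amounts to something like this sketch (the tuning values are made up):

```csharp
using UnityEngine;

// Sketch: tiles far from the camera get taller colliders, so they stay
// tappable at shallow viewing angles.
public class TileColliderScaler : MonoBehaviour
{
    public float heightPerMeter = 0.05f;  // tuning value, not from the post
    BoxCollider box;

    void Awake() { box = GetComponent<BoxCollider>(); }

    void Update()
    {
        float dist = Vector3.Distance(Camera.main.transform.position,
                                      transform.position);
        // Grow the collider's height with distance; footprint stays 1x1 tile.
        box.size = new Vector3(1f, 0.1f + dist * heightPerMeter, 1f);
        // Keep the collider sitting on the ground as it grows.
        box.center = new Vector3(0f, box.size.y * 0.5f, 0f);
    }
}
```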

Split Targets in Unity

I’ve set up a scene which contains the three arbitrary targets arranged within the 30×30 grid. As it stands, I have tied a reasonable arrangement of the tiles to each target, and made the tiles show up when the target is detected. Here it is in action:

This is from the scene view, so it looks misaligned, but it isn’t – this just seemed like the best way to show that some of the tiles are tied to a subsection of the map. The tiles aren’t actually enabled and disabled by target detection, though; only their renderers are toggled, and the tiles remain interactable regardless. I was unable to test this as well as I would have liked, because printing the full map onto an 8.5×11 sheet of paper meant the subsections were too small to function as targets.

Going Forward

For the future, I think I have a clearer path than before:

  1. Full map! I am going to print a full, 900-square-inch version of the map for next week and give it a test!
  2. Selection fine-tuning! For next week, I’ll implement a drag-and-release system.
  3. Networking! Networking is starting to seem more and more like a necessity, so I’m going to give it another shot.
  4. Visualization! I think I’d like to start with Player Movement – this seems like the best move, as my grid implementation is a little messy and should be cleaned up somewhat.

In closing, I’ve made good progress this week, and I’m excited going forward. Now, here’s a gif of me tapping the map:

Not pictured: excited noises
Week 3 – To Pivot or Not to Pivot

I spent a lot of my time this week thinking about how I can move forward. After last week’s technical issues, I began to wonder whether the technology currently available to me can get the job done in a reasonable way. Recognizing images as small as one square inch without having to move around seems implausible with Vuforia, and the extended tracking it offers is also inadequate. With how inconsistent my tests have been, I’m also beginning to wonder if the current project will even be capable of saving time.

My first thought was that I might be able to pivot. My D&D group is spread across the country, so we’ve tried using digital tabletops like Roll20 for our campaigns; however, in our experience, engagement isn’t as high when you’re not sitting around a table moving physical pieces. This is certainly at least partially due to the lack of any physical presence, but having to use the Roll20 interface is also significantly less engaging than a physical tabletop. So, I thought to use AR to implement a shared digital tabletop overlay that lets physically separated players share game state. Going into this, I knew networking would be hard, but I figured it would be worth researching. Ultimately, this does not seem like it will work out either, but it was worth the attempt.

First, I wanted to try using ARCore over Vuforia and ditch image targets altogether. This was something I could apply to either the original idea of Project Familiar or the pivot, so it seemed like a good next/first step. I spent some time adjusting my grid generation code to work based on tapping on an ARCore plane. Testing ARCore in the editor is more difficult, but doable, and while I ran into some bugs and issues with scaling, at first it looked quite nice:

Not a great image, but notice how many of them there are compared to the blurry 1-foot ruler.

However, the scaling was still too far off for me to want to apply this to a strict grid. I also attempted to use this over a normal D&D battle mat and didn’t even get to build a grid…

That’s a lot of ground planes!

I assume this is a result of 4 years of rolling and unrolling the mat combined with its natural reflectivity – which isn’t going to be uncommon. I had similar results on multiple attempts.

While I was working on the grid, I took a break and decided to take a stab at networked multiplayer. I set up a very basic scene – a purple backdrop with a cube flagged as a player – and set up the basic Unity Networking High-Level API components in my scene. I built the scene for Windows and tested to make sure everything worked as expected, and running multiple instances, I was able to connect them all to the host instance. Success!

The top left is the host, the other two would say “connecting” if they weren’t already connected.

However, I then built for Android, and was totally unable to get the Android instance to connect to any other instance over a LAN connection – even though it could detect them just fine. I couldn’t find many leads on this, figured my time was better spent elsewhere for now, and returned to the grid.

As I was writing this blog post, I realized I may not need to pivot – another feature of Vuforia popped into my head: user-defined targets. While they won’t be perfect, these would allow me to continue using Vuforia, and thus to use the virtual buttons feature. What I could do, then, is require that users print off a map and cut it into smaller chunks (down to something fairly large, like 8 tiles by 8 tiles), like so:

Map taken from Pinterest, uploaded by user Aniket Raj. Was unable to find the original artist.

Then, the user can enter the dimensions, and when that segment of the image is recognized, it will be populated as a grid. From there, I believe I can make some good progress. Moving forward, I’m going to try to stick with the original idea, though I may have to trim the fat on which mechanics I want to digitize. For next week, I will experiment with how a broken-up battlemat works for a grid, finally test virtual buttons, and will try to test user-defined targets.

Week 2 – Grid Generation, Stability Struggles

This week, I experimented with generating a grid based on placed image targets, and ran into some issues with tracking stability.

Grid Generation

As many TTRPGs are played on a grid, it makes sense that an in-engine grid would be necessary for a tool like this to function. While I wanted to use ground planes, combining them with image targets looked to be difficult. I decided to first try and simulate a grid by creating two targets which could be placed on opposite corners of a grid.

Artist’s Rendition

When the first target is found, the app would set that target as the world center in Unity, and then wait until it finds the second target. Once the second target is found, it calculates the distance between the two, rounds it out, and fills in the grid. All things considered, quickly throwing together and testing this process went fairly smoothly. I was able to set up a basic script that allows a specific method to be run when a target is found, and I created a singleton GridManager that creates the grid and holds references to it while the app is running. I ran into minor issues here and there with things like negative grid width, but nothing too major. This is the end result of my preliminary testing:
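
The core of that calculation is small – roughly the following, as a hypothetical sketch (it assumes target 1 sits at the world origin with the grid axis-aligned, and uses an example tile size):

```csharp
using UnityEngine;

// Sketch of the two-corner grid fill: target 1 is the world origin, so
// target 2's position *is* the grid's far corner. Distances are rounded
// out to whole tiles before filling.
public class TwoTargetGrid : MonoBehaviour
{
    public GameObject tilePrefab;
    public float tileSize = 0.0254f;  // one inch in meters, as an example

    // Called once the second corner target has been found.
    public void BuildGrid(Vector3 secondCorner)
    {
        // Round the measured span to whole tiles (and guard against a
        // "negative width" when the corners are swapped).
        int width = Mathf.Abs(Mathf.RoundToInt(secondCorner.x / tileSize));
        int depth = Mathf.Abs(Mathf.RoundToInt(secondCorner.z / tileSize));

        for (int x = 0; x <= width; x++)
            for (int z = 0; z <= depth; z++)
                Instantiate(tilePrefab,
                            new Vector3(x * tileSize, 0f, z * tileSize),
                            Quaternion.identity, transform);
    }
}
```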

As it stands, this looks kind of neat, but I’m questioning how well it will work in the end. For now, the app treats target 1 as the world center, but target 2 still moves freely, which means the grid won’t anchor itself to target 2 at all. If I get this working with a ground plane, it may be an extra struggle to use more than one target in cases where I need to perform a realignment; this raised the question: why bother with two targets? While they make it easy to mark the opposing corners of a grid rather than count out all the tiles you need, the implementation is complicated.

With my second take on grid generation, I tried something different – something I hoped would be significantly simpler to implement. Rather than relying on two targets, I could instead use one large target and build a grid from it based on a specified height and width. My original thought had been that the user should be able to simply place their cards (which would be scaled relative to the size of a real-life grid tile) on the table and have them detected, rather than needing to count out the exact size of their grid – but counting out x and y dimensions is far from demanding.

As expected, this version was much easier to implement, especially since most of my grid creation code itself was already done. As it stands, it looks alright (ideally the grid’s center would be the target’s center), but larger grids would require either a massive target or that the player be willing to get up and examine the target repeatedly. With that, I opted to see how the grid would work completely targetless using the ground plane object.

Both of these objects appear to be completely misaligned with each other and with the ground. However, the area here is rather featureless, and I suspect that with less intense lighting and a more detailed surface it will function more predictably. I tried laying out my humanities readings and the smash bros meme from last week near each other on a table to give it some detail, but…

If you can make out the placement marker on the left image, I applaud your impeccable eyesight, because I can’t even with the red box to help me out. The placement marker will shrink or grow depending on how far away the phone believes the surface is, and Vuforia appears to believe the monster can is significantly farther away than the block of text. While I assume this is caused by variations in the levels of detail between the two pages, this inconsistency concerns me. I tried this on several other surfaces later on, and it performed similarly inconsistently on a wooden table, my laptop keyboard, and a patterned couch. I couldn’t even get it to show up on other surfaces, like a floor with large tiles.

Unfortunately, I didn’t make as much progress this week as I would have liked. I’m happy I was able to generate grids in real space, but I hadn’t expected to struggle with stabilization for as much time as I did. For the next week, I’m going to tackle these stabilization issues (even if that means switching APIs again) and I’m going to try and track players’ locations in Unity.
