Month: October 2019

Week 6 – Progress!

This week felt fairly productive. I finally figured out pathfinding (and managed to simplify it significantly in the end), created a small feature that I think will be vital in AR, and got the player actually moving.

Pathfinding

I started with pathfinding. At first, I spent a while on a proper implementation of Dijkstra’s algorithm – finding the shortest path to every tile that could potentially be reachable. What this led to was a lot of infinite loops and a frozen Unity. I even delved back into my EECS 281 (Data Structures & Algorithms) projects to get a feel for how to approach pathfinding well, as that was the only time I’ve really done pathfinding.

Eventually, I had an idea that seemed promising – I could find every tile in range, raycast from each tile to the start, and use that smaller subset of tiles between the start and end to pathfind. Then, if there were obstructions or anything else in the way, I could A* from the point of obstruction to the point on the other side!

…Thankfully, that thought process made me realize how much I was overthinking things. I don’t need to find the shortest path to every point – just a path from start to finish, and I can purge any path once it gets too long. So instead, I simply grow outward from the start, highlighting tiles as I go, and once a path gets too long, its tile is ignored.

The algorithm went from this:

(120 lines)

To this:

(5 lines in CharMove, 50 in GridTile)

Note how much smaller the text is in the old function. Also note that the large blocks in the new function are just simple conditionals checking all adjacent tiles. I also feel obligated to make the Pathfinder vs. D&D 5e comparison here: my old code is like Pathfinder, the new code is like 5e.
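
For the curious, the core of the grow-outward idea fits in a couple dozen lines. This is a minimal sketch, not my actual CharMove/GridTile code – tiles and costs are abstracted behind delegates, and the alternating diagonal cost (which needs extra state per path) is left out for brevity:

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of the grow-outward approach, not my actual
// CharMove/GridTile code. The alternating diagonal rule would need
// extra state in the search and is omitted here.
public static class MovementRange
{
    public static HashSet<T> Find<T>(
        T start, int budget,
        Func<T, IEnumerable<T>> neighbors, // the (up to) 8 adjacent tiles
        Func<T, int> moveCost)             // 1 normal, 2 rough, 999 "wall"
    {
        var best = new Dictionary<T, int> { [start] = 0 };
        var frontier = new Queue<T>();
        frontier.Enqueue(start);

        while (frontier.Count > 0)
        {
            T tile = frontier.Dequeue();
            foreach (T next in neighbors(tile))
            {
                int cost = best[tile] + moveCost(next);
                if (cost > budget) continue; // purge paths that grow too long
                if (best.TryGetValue(next, out int old) && old <= cost) continue;
                best[next] = cost;           // found a cheaper path to this tile
                frontier.Enqueue(next);
            }
        }
        return new HashSet<T>(best.Keys);    // every tile worth highlighting
    }
}
```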

Implementing and debugging this still took time – there were a fair few weird interactions and edge cases to take into account, but ultimately I got this:

Unreachable tiles are hidden in game view, but not scene view, for the purposes of immersion.

Compare that to the image of what it’s supposed to look like from last week’s blog post:

Bingo.

Additionally, it works with rough terrain! In this image, the red tiles are rough terrain:

And here is how the character would be able to move:

Rough tiles were also hidden if they were unreachable.

I’ve hidden tiles in the game view because, once I transition back to working in AR next week, I won’t want them obscuring the map. The perspective is a little off between the images, but the two red tiles on the right in the first image are the same two red tiles in the bottom right of the third image. I placed those there specifically because they cost two tiles of movement to reach as rough terrain, but would be the limit of the player’s movement on normal terrain. As for why the player can still reach the edges on the left side: the player can move diagonally between the red tiles without technically touching rough terrain. I don’t have solid obstructions yet, but I’m not concerned about implementing them – worst case, making the cost of traversing an obstruction something like ‘999’ would be trivial, if not clean.

Movement

My plan had originally been to work on networking next, but after pathfinding that seemed like a daunting dive. Instead, I decided I needed to get the player actually moving first. This was a little more difficult than anticipated, as my touch management is definitely not as good as I would like, but I had it working sufficiently in only a couple of hours. Now, when you tap and hold on a character, after a brief moment (a sanity check against accidentally tapping the player), they will move to wherever you drag, then snap back onto the grid when you release. From there, you can pathfind as normal. Due to the fluid nature of D&D, I did not lock this to tiles that the player “can” move to, as a DM might rule something different. Here is player movement in action:

Unfortunately, that juicy circle and trail effect is built into my laptop – though I would like to implement something similar, time-permitting.
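
For reference, the drag logic boils down to something like this – a simplified sketch with guessed values (the hold delay, 1-unit tiles), not my exact implementation:

```csharp
using UnityEngine;

// Sketch of the tap-hold-drag-release movement. The hold delay and the
// 1-unit tile size are illustrative guesses, and a real version would
// first confirm the touch actually began on this character.
public class DragMove : MonoBehaviour
{
    const float HoldDelay = 0.3f; // sanity check against accidental taps
    float holdTime;
    bool dragging;

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);

        if (touch.phase == TouchPhase.Began)
        {
            holdTime = 0f;
            dragging = false;
        }
        else if (touch.phase == TouchPhase.Moved || touch.phase == TouchPhase.Stationary)
        {
            holdTime += Time.deltaTime;
            if (holdTime < HoldDelay) return;
            dragging = true;

            // Project the finger onto the grid plane and follow it.
            Ray ray = Camera.main.ScreenPointToRay(touch.position);
            var plane = new Plane(Vector3.up, transform.position);
            if (plane.Raycast(ray, out float dist))
                transform.position = ray.GetPoint(dist);
        }
        else if (touch.phase == TouchPhase.Ended && dragging)
        {
            // Snap back onto the grid on release (assuming 1-unit tiles).
            Vector3 p = transform.position;
            transform.position = new Vector3(Mathf.Round(p.x), p.y, Mathf.Round(p.z));
        }
    }
}
```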

Picture-In-Picture, for AR

To go with this, I also added another feature that I suspect will be vital to the AR implementation: a picture-in-picture, minimap-type thing for dragging. Because a player will likely be sitting at an off-angle and at some distance from the game, I realized that just scaling tile collisions won’t be enough, so I created a picture-in-picture that shows an orthographic, top-down view of exactly where the player is dragging on the grid:

The green tile is the one that the finger is directly over.

While it’s a little choppy here, I’ve found it extremely useful for precisely selecting a tile. I’m very happy with how this turned out, and it has cost me very little for a whole new feature thus far. Moving forward, I’d like each tile to have a chess-style label – A1 and so on – that only shows up in the PiP, and I’d like the view to rotate with the player, but those are not high-priority tasks. It can be confusing right now, so at some point I will have to fix that, but I have ideas.
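
Mechanically, the PiP is cheap because it’s just a second, orthographic camera rendered into a corner of the screen. A rough sketch of the setup (my actual version differs in the details):

```csharp
using UnityEngine;

// Sketch of the picture-in-picture: a second, orthographic camera hovers
// above the drag point and renders into a corner of the screen. The
// camera height and viewport rect are illustrative values.
public class PipCamera : MonoBehaviour
{
    public Camera pipCam;        // set to orthographic in the inspector
    public Transform dragTarget; // the token currently being dragged

    void LateUpdate()
    {
        // Look straight down at the drag point.
        pipCam.transform.position = dragTarget.position + Vector3.up * 10f;
        pipCam.transform.rotation = Quaternion.LookRotation(Vector3.down);

        // Draw into the lower-right corner (normalized viewport coordinates).
        pipCam.rect = new Rect(0.65f, 0.05f, 0.3f, 0.3f);
    }
}
```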

Moving Forward

After that, I started running through a tutorial on setting up the PUN – Photon Unity Networking – library and getting networking running. I was able to connect my personal phone, as well as a Pixel 2 I received on loan from a contact, to a basic sample game (though I couldn’t actually do anything, as the sample wasn’t designed for touch screens). I’ve started working through the tutorial proper, but haven’t made significant progress yet.
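
Assuming the current PUN 2 API, the first step is just connecting to Photon’s cloud and joining a room, roughly like this (the room options are placeholders, not my actual settings):

```csharp
using Photon.Pun;
using Photon.Realtime;
using UnityEngine;

// Sketch of the first step of the PUN tutorial: connect and join a room.
public class NetworkLauncher : MonoBehaviourPunCallbacks
{
    void Start()
    {
        // Uses the App ID stored in the PhotonServerSettings asset.
        PhotonNetwork.ConnectUsingSettings();
    }

    public override void OnConnectedToMaster()
    {
        PhotonNetwork.JoinRandomRoom();
    }

    public override void OnJoinRandomFailed(short returnCode, string message)
    {
        // No room exists yet, so make one and wait for the other device.
        PhotonNetwork.CreateRoom(null, new RoomOptions { MaxPlayers = 4 });
    }
}
```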

Ultimately, I’m happy with my progress this week! I got through a problem I knew I would struggle with fairly unscathed, got a feature that was harder than expected working well enough, and implemented another that I suspect will be important surprisingly easily! Next week, I will work more on networking and get back to my AR implementation. For now, my plan is to use a ground plane for the map, and I will do more testing on the Pixel 2, which I was given specifically to take advantage of ARCore.

Week 5 – Starting Visualization

This week, I ran into some AR roadblocks and decided to work on visualization tools instead.

I printed out a full, 30-by-30-inch battle map and tested the grid on that. It was, to say the least, not what I expected. Unity and Vuforia did not want to play nice with the setup I had, and nothing looked right in the end. Extended tracking also continued to serve as a point of frustration, so I opted to take a break from the AR implementation before I wasted too much time on it and started working on game rules instead.

First, I had to polish my grid implementation – this was fairly basic work and went smoothly. I also had to touch up the touch manager, which again went fairly smoothly. From there, I started with Areas of Effect. This was a fairly straightforward implementation – after my improvements to the grid and touch, I could easily spawn a cube, sphere, cylinder, or cone and place it in the game world. Once placed, they highlight any tiles and characters they overlap.
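
The overlap part is about as simple as Unity lets it be – the placed shape is just a trigger collider. Something like this sketch, where Highlightable stands in for my actual tile/character scripts:

```csharp
using UnityEngine;

// Stand-in for my actual tile/character highlight scripts.
public abstract class Highlightable : MonoBehaviour
{
    public abstract void SetHighlighted(bool on);
}

// Sketch of the AoE overlap check: the placed shape is a trigger collider
// (with a kinematic Rigidbody so trigger events fire), and anything
// Highlightable lights up while it overlaps.
[RequireComponent(typeof(Collider), typeof(Rigidbody))]
public class AoeShape : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        var target = other.GetComponent<Highlightable>();
        if (target != null) target.SetHighlighted(true);
    }

    void OnTriggerExit(Collider other)
    {
        var target = other.GetComponent<Highlightable>();
        if (target != null) target.SetHighlighted(false);
    }
}
```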

I also started work on highlighting player movement. At first, I expected this to be fairly straightforward too, but it wound up being my Wall of the Week™. Since D&D operates on a square grid, movement seemed like it should be easy, but once I started working on it I realized there’s a catch: diagonals. Most grid-based games use taxicab movement – you can only move in the four cardinal directions – which means that in a field with no obstructions, a character with 6 tiles of movement would have a movement range that looks like this:

In D&D, the player can move diagonally, and the rules on that are odd, to say the least. The first space a player moves diagonally costs them 1 tile of movement, as normal – but if they move diagonally immediately afterward, it costs them 2, then 1, then 2, etc. This results in a movement radius that looks more like this:

Both of these images taken from here.
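
In code, the alternating rule means a pathfinder can’t treat diagonal cost as a constant – it has to carry the running diagonal count as state. A quick sketch:

```csharp
// The alternating-diagonal rule as a cost function: the Nth diagonal step
// in a path costs 1 tile of movement if N is odd, 2 if N is even.
public static class DiagonalRule
{
    public static int StepCost(bool isDiagonal, ref int diagonalsSoFar)
    {
        if (!isDiagonal) return 1;
        diagonalsSoFar++;
        return (diagonalsSoFar % 2 == 0) ? 2 : 1; // 1, 2, 1, 2, ...
    }
}
// e.g. four consecutive diagonal steps cost 1 + 2 + 1 + 2 = 6 tiles,
// which is why the movement radius bulges compared to taxicab movement.
```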

Taking this into account, alongside other factors that affect movement, like obstructions (walls, trees) or the rough terrain mechanic (crossing a tile of rough terrain costs double movement), this will require more significant pathfinding than I was prepared for. I am currently working on a Dijkstra’s implementation for this, but it has not gone smoothly so far.

Next week, I am planning to finish the movement radius, and would like to begin working on networking. I am debating revisiting ARCore, as well.

Week 4 – Splitting Targets, Not Hairs

This week, I made good progress in both tech and ideas. I got into contact with three people I’d reached out to: a D&D content writer, with whom I’ve been exchanging messages on Twitter; an AR/VR research professor, with whom I met last week; and an AR startup CEO, with whom I’ll be meeting soon. I’ve been able to build off these conversations and have made good progress!

Interviews

I have been exchanging direct messages on Twitter with a D&D content writer, mostly just talking about how he runs the game and what features he would like to see implemented. This conversation is still ongoing, but he has floated some really interesting ideas so far; most notably, what he called “afterimaging,” or allowing users to “rewind time” to view game state on previous turns. Seeing how a battle has progressed could be useful in jogging memory or fixing the game after making a mistake, or could allow for some interesting homebrew rules, too! Implementing this shouldn’t be too hard, due to the relatively static state of D&D – I would just need to save the game’s state after each turn. I just need to be careful of feature creep here.
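
In its simplest form, that’s just a list of snapshots – something like this sketch, where the snapshot type is whatever serializable game state I end up with:

```csharp
using System.Collections.Generic;

// Sketch of "afterimaging": push a snapshot of the game state at the end
// of every turn so earlier turns can be viewed later. TState is whatever
// (deep-copied, serializable) snapshot type the game ends up using.
public class TurnHistory<TState>
{
    readonly List<TState> snapshots = new List<TState>();

    public int TurnCount => snapshots.Count;

    // Call at the end of each turn with a copy of the current state.
    public void EndTurn(TState snapshot) => snapshots.Add(snapshot);

    // Rewind n turns: 0 is the most recent snapshot.
    public TState TurnsAgo(int n) => snapshots[snapshots.Count - 1 - n];
}
```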

Last Thursday, I met with a research professor at Michigan’s School of Information who works with AR/VR. He offered a lot of insight, and was interested to see how my project goes even if the app doesn’t work out in the end (phew). I learned a bit about how Vuforia and most AR APIs work, and he helped point me in the right direction regarding splitting up my maps. Rather than cutting them into contiguous chunks, he believed I might get better results if I just picked a few chunks and let the tiles span the gaps between them. I think this is a promising lead, and I spent a fair amount of time this week moving in that direction. My only issue is that it likely rules out Vuforia’s virtual buttons, but I’m not sure those would have been the solution for me anyway.

Splitting Targets

Hot off of the second interview, I went to Vuforia’s developer portal, loaded the Target Manager, and uploaded the battle map I used as an example in last week’s post to see how it would work as a target.

https://www.youtube.com/watch?v=ofqAUeVtPqw

So that’s not going to work. This map has little contrast, few details, and lots of repetition – it was destined to fail. I tried another map I found online, Abbey of the Golden Leaf, by /u/AfternoonMaps on reddit.

Much better than the old one! In fact, this one is so good I felt it might be a waste to chop it up into many smaller targets.

First, I tried cutting it into quadrants. The map is 30×30, so it cut nicely into 15×15 chunks. These scored well (4–5 stars), but they were all touching, and the top-right one was hurt by the water (aka featureless blob) in its corner. I figured I could do better, so I picked areas of the map that seemed likely to score well, using these criteria:

  • High contrast
  • Right Angles/Intersections
  • Little empty space
  • Large (about 6×6 minimum)

From that, I got these:

The bordered areas became their own targets.

These each scored 4–5 stars – not as good as the whole map, arguably not even as good as the quadrants, but 4 stars is still considered “excellent.” I wanted a longer one, a wider one, and a squarer one so that there would be a good target to recognize from different viewing angles – a target that is (from a given perspective) longer will appear more warped as the angle gets more extreme.

One thing worth noting: while these targets score worse than the original, they’re more granular. In theory it might be easier for a device to maintain tracking on the full map, but in practice players may sit too close to detect something that large in the first place, so being able to detect smaller portions of the map will be important.

I tried splitting the other map into smaller portions, to see how it would fare:

Take 2: https://www.youtube.com/watch?v=ofqAUeVtPqw

That’s… better, I guess? It’s still pretty bad. I picked the most feature-dense regions of the map, and even this scored 1 star:

I tried messing with filters, drawing random symbols, and generating noise on the map to see if I could up the score without obfuscating the map itself, but the only thing that worked was simply crankin’ up the contrast:

2–3 stars still isn’t good enough for me, so I figured I would settle for the good map for now and move forward.

Testing Touch on the Grid

I took a break from anything to do with targets for a bit and whipped up a simple, non-AR scene to test touching the grid: four cameras around a 30×30 grid, where you can tap grid tiles to highlight them. At first it was hard to touch tiles on the far side of the grid, so I made tile heights scale based on distance from the camera, and in the end I got this:

Green outlines are colliders – notice the camera towards the top left.

I expected the scaling not to help much, but it works quite well! If you want to see how it feels for yourself and have an Android device, you can download the test APK here. If you give it a try, let me know how it goes! For now, you can only tap each square, but moving forward I intend for tile selection to be done by dragging. Specifically, tiles will be highlighted as they are passed over, but not selected until the user lifts their finger. This should feel better than it does now, I think, as it should give the user finer control over their selections.
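
The scaling trick itself is only a few lines – roughly this, though the exact curve and values here are illustrative guesses rather than my tuned numbers:

```csharp
using UnityEngine;

// Sketch of distance-based collider scaling: far-away tiles get taller
// colliders so they present a similar on-screen target to nearby ones.
public class TileColliderScaler : MonoBehaviour
{
    public BoxCollider box;
    const float HeightPerMeter = 0.05f; // illustrative scale factor

    void Update()
    {
        float dist = Vector3.Distance(Camera.main.transform.position,
                                      transform.position);

        Vector3 size = box.size;
        size.y = Mathf.Max(0.1f, dist * HeightPerMeter);
        box.size = size;

        // Keep the collider's base on the tile as it grows upward.
        box.center = new Vector3(box.center.x, size.y / 2f, box.center.z);
    }
}
```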

Split Targets in Unity

I’ve set up a scene which contains the three arbitrary targets arranged within the 30×30 grid. As it stands, I have tied a reasonable arrangement of the tiles to each target, and made the tiles show up when the target is detected. Here it is in action:

This is from the scene view, so it looks misaligned, but it isn’t – this just seemed like the best way to show that some of the tiles are tied to a subsection of the map. Detecting a target doesn’t actually enable the tiles themselves, though: only their renderers are toggled, and the tiles remain interactable regardless. I was unable to test this as well as I would have liked, because printing the full map onto an 8.5×11 sheet of paper made the subsections too small to function as targets.

Going Forward

For the future, I think I have a clearer path than before:

  1. Full map! I am going to print a full, 900-square-inch version of the map for next week and give it a test!
  2. Selection fine-tuning! For next week, I’ll implement a drag-and-release system.
  3. Networking! Networking is starting to seem more and more like a necessity, so I’m going to give it another shot.
  4. Visualization! I think I’d like to start with Player Movement – this seems like the best move, as my grid implementation is a little messy and should be cleaned up somewhat.

In closing, I’ve made good progress this week, and I’m excited going forward. Now, here’s a gif of me tapping the map:

Not pictured: excited noises

Week 3 – To Pivot or Not to Pivot

I spent a lot of my time this week thinking about how to move forward. After last week’s technical issues, I began to wonder whether the technology currently available to me can get the job done in a reasonable way. Recognizing images as small as one square inch without having to move around seems implausible with Vuforia, and the extended tracking it offers is also inadequate. With how inconsistent my tests have been, I’m also beginning to wonder whether the current project will even be capable of saving time.

My first thought was that I might be able to pivot. My D&D group is spread across the country, so we’ve tried using digital tabletops like Roll20 for our campaigns; however, in our experience, engagement isn’t as high when you’re not sitting around a table moving physical pieces. This is certainly at least partially due to the lack of physical presence, but the Roll20 interface is also significantly less engaging than a physical tabletop. So, I thought to use AR to implement a shared digital tabletop overlay that lets physically separated players share game state. Going into this, I knew networking would be hard, but I figured it was worth researching. Ultimately, this does not seem like it will work out either, but it was worth the attempt.

First, I wanted to try using ARCore instead of Vuforia and ditch image targets altogether. This was something I could apply to either the original idea of Project Familiar or the pivot, so it seemed like a good next/first step. I spent some time adjusting my grid generation code to work by tapping on an ARCore plane. Testing ARCore in the editor is more difficult, but doable, and while I ran into some bugs and issues with scaling, it at first looked quite nice:

Not a great image, but notice how many of them there are compared to the blurry 1-foot ruler.
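
The tap-to-place flow itself is short. A sketch in the style of the Google ARCore SDK for Unity (the grid prefab here is a stand-in for my actual generation code):

```csharp
using GoogleARCore;
using UnityEngine;

// Sketch of tap-to-place on a detected ARCore plane.
public class TapToPlaceGrid : MonoBehaviour
{
    public GameObject gridPrefab; // placeholder for my grid generation

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        // Raycast against detected planes at the touch point.
        if (Frame.Raycast(touch.position.x, touch.position.y,
                          TrackableHitFlags.PlaneWithinPolygon,
                          out TrackableHit hit))
        {
            // Anchor the grid to the plane so tracking refinements move it too.
            Anchor anchor = hit.Trackable.CreateAnchor(hit.Pose);
            Instantiate(gridPrefab, hit.Pose.position, hit.Pose.rotation,
                        anchor.transform);
        }
    }
}
```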

However, the scaling was still too off for me to want to try and apply this to a strict grid. I also attempted to use this over a normal D&D battle mat and didn’t even get to build a grid…

That’s a lot of ground planes!

I assume this is a result of 4 years of rolling and unrolling the mat combined with its natural reflectivity – which isn’t going to be uncommon. I had similar results on multiple attempts.

While I was working on the grid, I took a break to take a stab at networked multiplayer. I set up a very basic scene – a purple backdrop with a cube flagged as a player – and added the basic Unity Networking High-Level API components. I built the scene for Windows and tested to make sure everything worked as expected, and running multiple instances, I was able to connect them all to the host instance. Success!

The top left is the host, the other two would say “connecting” if they weren’t already connected.

However, I then built for Android, and was totally unable to get the Android instance to connect to any other instance over a LAN connection – even though it could detect them just fine. I couldn’t find many leads on this, figured my time was better spent elsewhere for now, and returned to the grid.

As I was writing this blog post, I realized I may not need to pivot – another Vuforia feature popped into my head: user-defined targets. While they won’t be perfect, these would allow me to continue using Vuforia, and thus its virtual buttons feature. What I could do, then, is require that users print off a map and cut it into smaller chunks (down to something fairly large, like 8 tiles by 8 tiles), like so:

Map taken from Pinterest, uploaded by user Aniket Raj. I was unable to find the original artist.

Then, the user can enter the dimensions, and when that segment of the image is recognized, it will be populated as a grid. From there, I believe I can make some good progress. Moving forward, I’m going to try to stick with the original idea, though I may have to trim the fat on which mechanics I want to digitize. For next week, I will experiment with how a broken-up battlemat works for a grid, finally test virtual buttons, and will try to test user-defined targets.

Week 2 – Grid Generation, Stability Struggles

This week, I experimented with generating a grid based on placed image targets, and ran into some issues with tracking stability.

Grid Generation

As many TTRPGs are played on a grid, it makes sense that an in-engine grid would be necessary for a tool like this to function. While I wanted to use ground planes, combining them with image targets looked to be difficult. I decided to first try and simulate a grid by creating two targets which could be placed on opposite corners of a grid.

Artist’s Rendition

When the first target is found, the app would set that target as the world center in Unity, and then wait until it finds the second target. Once the second target is found, it calculates the distance between the two, rounds it out, and fills in the grid. All things considered, quickly throwing together and testing this process went fairly smoothly. I was able to set up a basic script that allows a specific method to be run when a target is found, and I created a singleton GridManager that creates the grid and holds references to it while the app is running. I ran into minor issues here and there with things like negative grid width, but nothing too major. This is the end result of my preliminary testing:

As it stands, this looks kind of neat, but I’m questioning how well it will work in the end. For now, the app treats target 1 as the world center, but target 2 still moves freely, which means the grid won’t anchor itself to target 2 at all. If I get this working with a ground plane, it may be an extra struggle to use more than one target in cases where I need to perform a realignment; that begs the question, why bother with two targets? While they make it easy to mark opposing corners of a grid rather than count out all the tiles you need, the implementation is complicated.
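
For reference, the dimension math from the two corners is just a rounded offset in the first target’s local frame – a sketch, with tileSize standing in for the real-world size of one square:

```csharp
using UnityEngine;

// Sketch of the two-corner idea: infer grid dimensions from the offset
// between the two targets.
public static class CornerGrid
{
    public static Vector2Int Dimensions(Transform cornerA, Transform cornerB,
                                        float tileSize)
    {
        // Work in the first target's local frame, since it is the world center.
        Vector3 offset = cornerA.InverseTransformPoint(cornerB.position);

        // Round to whole tiles; Abs guards against the negative-width case
        // that comes up when the second target sits on the "wrong" side.
        return new Vector2Int(
            Mathf.Abs(Mathf.RoundToInt(offset.x / tileSize)),
            Mathf.Abs(Mathf.RoundToInt(offset.z / tileSize)));
    }
}
```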

With my second take on grid generation, I tried something different – something that I had hoped would be significantly simpler to implement. Rather than relying on two targets, I could instead use one large target and build a grid from that based on a specified height and width. My original thought process had been that I wanted the user to be able to simply place their cards (which would be scaled relative to the size of a real-life grid tile) on the table and detect them, rather than need to count out the exact size of their grid, but I think that counting x and y dimensions is far from demanding.

As expected, this version was much easier to implement, especially since most of my grid creation code itself was already done. As it stands, it looks alright (ideally the grid’s center would be the target’s center), but larger grids would require either a massive target or that the player be willing to get up and examine the target repeatedly. With that, I opted to see how the grid would work completely targetless using the ground plane object.

Both of these objects appear to be completely misaligned with themselves and the ground. However, the area here is rather featureless, and I suspect that with less intense lighting and a more detailed surface it will function more predictably. I tried laying out my humanities readings and the smash bros meme from last week near each other on a table to give it some detail, but…

If you can make out the placement marker in the left image, I applaud your impeccable eyesight, because I can’t, even with the red box to help me out. The placement marker shrinks or grows depending on how far away the phone believes the surface is, and Vuforia appears to believe the monster can is significantly farther away than the block of text. While I assume this is caused by the difference in level of detail between the two pages, the inconsistency concerns me. I tried several other surfaces later on, and it performed similarly inconsistently on a wooden table, my laptop keyboard, and a patterned couch. On other surfaces, like a floor with large tiles, I couldn’t get the marker to show up at all.

Unfortunately, I didn’t make as much progress this week as I would have liked. I’m happy I was able to generate grids in real space, but I hadn’t expected to struggle with stabilization for as long as I did. Next week, I’m going to tackle these stabilization issues (even if that means switching APIs again), and I’m going to try to track players’ locations in Unity.
