This week, I made good progress on both tech and ideas. I got in contact with three people I'd reached out to: a D&D content writer I've been exchanging messages with on Twitter, an AR/VR research professor I met with last week, and an AR startup CEO I'll be meeting with soon. I've been building off of these conversations all week!
I have been exchanging direct messages on Twitter with a D&D content writer, mostly just talking about how he runs the game and what features he would like to see implemented. This conversation is still ongoing, but he has floated some really interesting ideas so far; most notably, what he called “afterimaging,” or allowing users to “rewind time” to view game state on previous turns. Seeing how a battle has progressed could be useful in jogging memory or fixing the game after making a mistake, or could allow for some interesting homebrew rules, too! Implementing this shouldn’t be too hard, due to the relatively static state of D&D – I would just need to save the game’s state after each turn. I just need to be careful of feature creep here.
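Just to pin the idea down, here's a minimal sketch of how per-turn snapshots could work, assuming the game state is a simple dict of token positions (all the names here are hypothetical, not actual code from the app):

```python
import copy

class TurnHistory:
    """Stores a deep-copied snapshot of the game state after each turn,
    so earlier turns can be 'rewound' for display."""

    def __init__(self):
        self.snapshots = []

    def record(self, state):
        # Deep copy so later mutations don't alter saved turns.
        self.snapshots.append(copy.deepcopy(state))

    def rewind(self, turns_back):
        # turns_back=0 is the most recently saved turn.
        return self.snapshots[-1 - turns_back]

history = TurnHistory()
state = {"goblin": (3, 4), "fighter": (5, 5)}
history.record(state)          # end of turn 1
state["goblin"] = (4, 4)
history.record(state)          # end of turn 2

print(history.rewind(1)["goblin"])  # goblin's position one turn ago → (3, 4)
```

Since turns only change a handful of tokens, storing full snapshots like this is cheap, which is exactly why D&D's relatively static state makes the feature feel tractable.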
Last Thursday, I met with a research professor at Michigan’s School of Information who works with AR/VR. He offered a lot of insight, and was interested to see how my project goes even if the app doesn’t work out in the end (phew). I learned a bit about how Vuforia and most AR APIs work, and he helped point me in the right direction regarding splitting up my maps. Rather than cutting them into contiguous chunks, he believed I might get better results if I just picked a few distinctive chunks and let the tiles span the gaps between them. I think this is a promising lead, and I spent a fair amount of time moving in that direction this week. My only issue with it is that it likely rules out Vuforia’s virtual buttons, but I’m not sure those would have been the solution for me anyway.
Hot off the second interview, I went to Vuforia’s developer portal, loaded the Target Manager, and uploaded the battle map I used as an example in last week’s post to see how it would work as a target.
So that’s not going to work. This map has little contrast, few details, and lots of repetition – it was destined to fail. I tried another map I found online, Abbey of the Golden Leaf, by /u/AfternoonMaps on reddit.
Much better than the old one! In fact, this one is so good I felt it might be a waste to chop it up into many smaller targets.
First, I tried cutting it into quadrants. This map is 30×30, so it cut nicely into 15×15 chunks. These scored well (4-5 stars), but they were all touching and the top-right one was hurt by the water (aka: featureless blob) in the corner. I could do better, so I picked areas on the map that seemed like they could score well, using these criteria:
- High contrast
- Right Angles/Intersections
- Little empty space
- Large (about 6×6 minimum)
From that, I got these:
These each scored 4 – 5 stars – not as good as the whole map, arguably not even as good as the quadrants, but 4 stars is still considered “excellent.” I wanted a longer one, a wider one, and a squarer one so that there would be a good target to recognize from different viewing angles – a target that is (from a given perspective) longer will appear more warped as the angle gets more extreme.
One thing worth noting is that, while these targets are worse than the original, they’re more granular. While it might be easier for a device to maintain tracking on the larger one in theory, in practice players may sit too close to the larger one to be able to detect it in the first place, so being able to detect smaller portions of the map will be important.
I tried splitting the other map into smaller portions, to see how it would fare:
That’s… better, I guess? It’s still pretty bad. I picked the most feature-dense regions of the map, and even this scored 1 star:
I tried messing with filters, drawing random symbols, and generating noise on the map to see if I could up the score without obfuscating the map itself, but the only thing that worked was simply crankin’ up the contrast:
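For the curious, the contrast boost itself is just linear stretching: push every pixel's brightness away from the mean. A toy sketch on grayscale values (image editors do the same thing per-channel with more care):

```python
def boost_contrast(pixels, factor):
    """Scale each pixel's distance from the mean brightness by `factor`,
    clamping results to the valid 0-255 range."""
    mean = sum(pixels) / len(pixels)
    return [min(255, max(0, round(mean + (p - mean) * factor)))
            for p in pixels]

row = [100, 110, 120, 130, 140]
print(boost_contrast(row, 2.0))  # [80, 100, 120, 140, 160]
```

The intuition for why this helps Vuforia: corner detection keys off sharp local brightness differences, and stretching contrast makes those differences bigger without adding new features.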
2 – 3 stars still isn’t good enough for me, so I figured I would settle on the good map for now and move forward.
Testing Touch on the Grid
I took a break from anything to do with the targets for a bit, and whipped up a simple, non-AR scene to test touching the grid. It’s a simple scene with four cameras around a 30×30 grid, and you can tap on grid tiles to highlight them. It was hard at first to touch tiles near the far edge of the grid, so I made their heights scale based on distance from the camera, and in the end I got this:
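For context, the scaling trick just grows each tile's touch target linearly with its distance from the camera, so that distant tiles stay easy to hit. A rough sketch outside Unity, with made-up constants rather than the ones I actually used:

```python
import math

def tile_height_scale(tile_pos, camera_pos, base_scale=1.0, growth=0.05):
    """Grow a tile's touch-target height linearly with its distance
    from the camera, so far-away tiles remain tappable."""
    distance = math.dist(tile_pos, camera_pos)
    return base_scale + growth * distance

# A tile 5 units away gets a 25% taller touch target than one at the camera.
print(tile_height_scale((3, 0, 4), (0, 0, 0)))  # 1.25
```

A linear ramp is the simplest choice; something matched to the camera's actual perspective projection might be more principled, but this was enough for the test scene.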
I expected the scaling not to help much, but it works quite well! If you want to see how it feels for yourself and have an Android device, you can download the test APK here. If you give it a try, let me know how it goes! For now, you can only tap each square, but moving forward I intend for tile selection to be done by dragging. Specifically, tiles will be highlighted as the drag passes over them, but will not be selected until the user lifts their finger. This should feel better than it does now, I think, as it gives the user finer control over their selections.
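The drag behavior I have in mind is essentially a highlight-on-pass, commit-on-release state machine. A minimal sketch, with hypothetical event names standing in for Unity's touch callbacks:

```python
class DragSelector:
    """Highlights tiles as the finger passes over them; the selection
    is only committed when the finger lifts."""

    def __init__(self):
        self.highlighted = []  # preview shown during the drag
        self.selected = []     # committed on release

    def on_touch_move(self, tile):
        # Highlight each tile the drag passes over, once.
        if tile not in self.highlighted:
            self.highlighted.append(tile)

    def on_touch_up(self):
        # Commit the highlighted tiles and clear the preview.
        self.selected = self.highlighted
        self.highlighted = []

sel = DragSelector()
for tile in [(0, 0), (0, 1), (1, 1), (0, 1)]:  # finger re-crosses (0, 1)
    sel.on_touch_move(tile)
sel.on_touch_up()
print(sel.selected)  # [(0, 0), (0, 1), (1, 1)]
```

Keeping highlight and selection separate also leaves room for a cancel gesture later (drop the highlights without ever committing them).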
Split Targets in Unity
I’ve set up a scene which contains the three arbitrary targets arranged within the 30×30 grid. As it stands, I have tied a reasonable arrangement of the tiles to each target, and made the tiles show up when the target is detected. Here it is in action:
This is from the scene view, so it looks misaligned, but it isn’t. This just seemed to be the best way to show that some of the tiles are tied to a subsection of the map. The tiles themselves are not actually enabled by the targets, though – only their renderers are toggled, so the tiles are interactable regardless. I was unable to test this as well as I would have liked, because printing the full map onto an 8.5×11 sheet of paper meant the subsections were too small to function as targets.
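Conceptually, the tie between targets and tiles is just a lookup from each image target to the region of the 30×30 grid it anchors. A sketch with made-up target names and coordinates, not the actual regions from my scene:

```python
# Hypothetical mapping: each image target anchors a rectangular
# region of the 30x30 grid, given as (x0, y0, x1, y1), inclusive.
TARGET_REGIONS = {
    "target_long":   (0, 0, 5, 17),
    "target_wide":   (10, 22, 27, 28),
    "target_square": (18, 4, 26, 12),
}

def tiles_for_target(name):
    """Expand a target's region into the grid tiles whose renderers
    it should reveal when the target is detected."""
    x0, y0, x1, y1 = TARGET_REGIONS[name]
    return [(x, y) for x in range(x0, x1 + 1) for y in range(y0, y1 + 1)]
```

In Unity this lookup would drive which tile renderers get switched on when a given target is found, while the colliders stay active the whole time.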
For the future, I think I have a clearer path than before:
- Full map! I am going to print a full, 900-square-inch version of the map for next week and give it a test!
- Selection fine-tuning! For next week, I’ll implement a drag-and-release system.
- Networking! Networking is starting to seem more and more like a necessity, so I’m going to give it another shot.
- Visualization! I think I’d like to start with Player Movement – this seems like the best move, as my grid implementation is a little messy and should be cleaned up somewhat.
In closing, I’ve made good progress this week, and I’m excited going forward. Now, here’s a gif of me tapping the map: