
Week 3 – To Pivot or Not to Pivot


I spent a lot of my time this week thinking about how I can move forward. After last week’s technical issues, I began to wonder whether or not the technology currently available to me is going to be able to get the job done in a reasonable way. Recognizing images that are as small as one square inch without having to move around seems to be implausible with Vuforia, and the extended tracking offered is also inadequate. With how inconsistent my tests have been, I’m also beginning to wonder if the current project will even be capable of saving time.

My first thought was that I might be able to pivot. My D&D group is spread across the country, so we’ve tried using digital tabletops like Roll20 for our campaigns; however, in our experience, engagement isn’t as high when you’re not sitting around a table moving physical pieces. This is certainly at least partially due to the lack of physical presence, but the Roll20 interface is also significantly less engaging than a physical tabletop. So, I thought to use AR to implement a shared digital tabletop overlay that lets physically separated players share game state. Going into this, I knew networking would be hard, but I figured it would be worth researching. Ultimately, this does not seem like it will work out either, but it was worth the attempt.

First, I wanted to try using ARCore over Vuforia and ditch image targets altogether. This was something I could apply to either the original idea of Project Familiar or the pivot, so it seemed like a good next/first step. I spent some time adjusting my grid generation code to work based on tapping on an ARCore plane. Testing ARCore in the editor is more difficult, but doable, and while I ran into some bugs and issues with scaling, it at first looked quite nice:

Not a great image, but notice how many of them there are compared to the blurry 1-foot ruler.
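The tap-to-place flow is roughly the following. This is only a sketch, written against AR Foundation’s raycast API rather than the exact SDK wiring in my project; the prefab, grid size, and tile size are all placeholder assumptions:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch: spawn a grid of tiles where the user taps on a detected plane.
// Assumes an ARRaycastManager on the same GameObject and a tilePrefab
// assigned in the Inspector (both hypothetical names).
public class TapToPlaceGrid : MonoBehaviour
{
    public GameObject tilePrefab;      // one grid square
    public int gridSize = 8;           // tiles per side (placeholder)
    public float tileMeters = 0.0254f; // 1 inch in meters

    ARRaycastManager raycastManager;
    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Awake() => raycastManager = GetComponent<ARRaycastManager>();

    void Update()
    {
        if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        // Raycast against detected planes only.
        if (raycastManager.Raycast(Input.GetTouch(0).position, hits,
                                   TrackableType.PlaneWithinPolygon))
        {
            Pose pose = hits[0].pose;
            for (int x = 0; x < gridSize; x++)
                for (int z = 0; z < gridSize; z++)
                {
                    Vector3 offset = new Vector3(x * tileMeters, 0f, z * tileMeters);
                    Instantiate(tilePrefab,
                                pose.position + pose.rotation * offset,
                                pose.rotation);
                }
        }
    }
}
```

The key detail is using the hit pose’s rotation to orient the grid flush with the detected plane, rather than world axes.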

However, the scaling was still too off for me to want to try and apply this to a strict grid. I also attempted to use this over a normal D&D battle mat and didn’t even get to build a grid…

That’s a lot of ground planes!

I assume this is a result of 4 years of rolling and unrolling the mat combined with its natural reflectivity – which isn’t going to be uncommon. I had similar results on multiple attempts.

While I was working on the grid, I took a break to take a stab at multiplayer networking. I set up a very basic scene – a purple backdrop with a cube flagged as a player – and added the basic Unity Networking High-Level API components. I built the scene for Windows and tested to make sure everything worked as expected; running multiple instances, I was able to connect them all to the host instance. Success!

The top left is the host, the other two would say “connecting” if they weren’t already connected.
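Most of the HLAPI setup is components rather than code – a NetworkManager plus a player prefab with a NetworkIdentity – but a minimal host/join script looks something like this (the LAN IP is a placeholder, and note that UNet’s HLAPI has since been deprecated by Unity):

```csharp
using UnityEngine;
using UnityEngine.Networking; // UNet HLAPI (deprecated in later Unity versions)

// Sketch of the host/client setup described above. The player prefab
// registered on the NetworkManager needs a NetworkIdentity component.
public class MinimalLobby : MonoBehaviour
{
    NetworkManager manager;

    void Awake() => manager = GetComponent<NetworkManager>();

    void OnGUI()
    {
        // Hide the buttons once we're already hosting or connected.
        if (NetworkServer.active || NetworkClient.active) return;

        if (GUILayout.Button("Host"))
            manager.StartHost(); // starts a server plus a local client

        if (GUILayout.Button("Join"))
        {
            manager.networkAddress = "192.168.1.2"; // hypothetical host LAN IP
            manager.StartClient();
        }
    }
}
```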

However, I then built for Android and was totally unable to get the Android instance to connect to any other instance over a LAN connection – even though it could detect them just fine. I couldn’t find many leads on this, figured my time was better spent elsewhere for now, and returned to the grid.

As I was writing this blog post, I realized I may not need to pivot – another feature of Vuforia popped into my head: User-Defined targets. While they won’t be perfect, these would allow me to continue to use Vuforia, thus allowing me to use the digital buttons feature. What I could do, then, is require that users print off a map and cut it into smaller chunks (down to something fairly large, like 8 tiles by 8 tiles), like so:

Map taken from Pinterest, uploaded by user Aniket Raj. Was unable to find the original artist.

Then, the user can enter the dimensions, and when that segment of the image is recognized, it will be populated as a grid. From there, I believe I can make some good progress. Moving forward, I’m going to try to stick with the original idea, though I may have to trim the fat on which mechanics I want to digitize. For next week, I will experiment with how a broken-up battlemat works for a grid, finally test virtual buttons, and will try to test user-defined targets.
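A sketch of what that grid-population step might look like, assuming the user-entered tile dimensions and the printed segment’s physical width are known (all names and numbers here are placeholders, not the actual implementation):

```csharp
using UnityEngine;

// Sketch: once a map segment is recognized, lay a grid of tiles over it.
// targetWidthMeters would come from the target's configured physical size;
// tilesAcross/tilesDown are the dimensions the user entered for this segment.
public class SegmentGrid : MonoBehaviour
{
    public GameObject tilePrefab;
    public float targetWidthMeters = 0.2f; // assumed printed segment width
    public int tilesAcross = 8;
    public int tilesDown = 8;

    // Called (e.g. from a tracking-found callback) with the target's transform.
    public void PopulateGrid(Transform target)
    {
        float tileSize = targetWidthMeters / tilesAcross;
        // Center the grid on the target's origin.
        Vector3 origin = new Vector3(-targetWidthMeters / 2f, 0f,
                                     -(tileSize * tilesDown) / 2f);
        for (int x = 0; x < tilesAcross; x++)
            for (int z = 0; z < tilesDown; z++)
            {
                Vector3 local = origin + new Vector3((x + 0.5f) * tileSize, 0f,
                                                     (z + 0.5f) * tileSize);
                // Parent tiles to the target so they follow its tracking.
                Instantiate(tilePrefab, target.TransformPoint(local),
                            target.rotation, target);
            }
    }
}
```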

First Week of Work – API Exploration


This week, I began work on my implementation of Project Familiar. The first step was finding the right API for the job. I had a pretty good idea of what I was looking for, using the following criteria:

  • Free. As an undergraduate, this seems like a no-brainer. Paid AR APIs are mostly fairly expensive, so I’m not considering the paid-only ones. Some APIs, like Wikitude and Vuforia, have free trials with a watermark. Those also seem viable for Familiar, as I have no plans to monetize the app.
  • Android. I have an Android phone, which I will be developing on. This mostly just rules out Apple’s ARKit, which is unfortunate due to its power.
  • Unity Support. I prefer to work in Unity, and for a project that involves gamifying a game, a game engine seems to me to be the way to go.
  • Image Recognition. As it stands, I intend to use image targets to detect tokens that will be used in place of miniatures. I also may want to use image targets to detect the battle mat itself, and might want to use them for other features, like Area of Effect visualization. An important note here is that I am not actually worried about image tracking. Since much of what is on the field in a game of D&D is static most of the time, being able to accurately track a target is nonessential.
  • Extended Tracking. A battle mat can be large – the one I have is something like 26 inches long and 23.5 inches wide. Players will be looking around frequently – asking that they keep the center of the mat in view at all times doesn’t seem reasonable.

ARCore

From these criteria, my first choice was obvious: ARCore. It has the best extended tracking available on Android, and is free. I ran into no issues while setting it up, but after the setup I realized I had missed a vital note on the Augmented Images page: images need to take up at least 25% of the camera’s field of view to be detected. I tested this, hoping that 25% was just a rough estimate, but alas, ’twas not. A target required a significant portion of the screen to be detected. This would mean that every time a player performs an action requiring AR, they would need to hold their token directly up to their phone, or lean in extremely close. To me, that hardly seems like a way to speed up combat, so I looked into other APIs to see what could function in tandem with ARCore – to augment the augmentor, if you will.

Wikitude

I stumbled across Wikitude in my search. Wikitude seems pretty straightforward – it can do image recognition/tracking and extended tracking, and has a watermarked free trial. It uses ARCore for its extended tracking, but from past experience I had noted that APIs which layer on ARCore never seem to track quite as well as ARCore itself. I wanted to use Wikitude, which claims to be able to recognize a target from about 8 feet away, in conjunction with ARCore. I installed them both into the same project, expecting it not to work. It didn’t. I spent about half an hour toying with it, messing with manifest files and such, but I couldn’t find much in the way of tutorials, so I moved on and looked at other SDKs. Wikitude seemed good, but its free version struck me as fairly unremarkable for my purposes.

Vuforia

I looked around a little more, and I liked what Vuforia brought to the table. I had been skeptical, as its extended tracking had always seemed questionable to me in the past, but it offers something else: virtual buttons. Essentially, these let the developer mark certain areas of an image target so that, when obscured, they trigger a response in code. These could be an extremely interesting tool, and may help alleviate the issues with recognizing small or distant images. I started setting up Vuforia just to give it a test, and it seemed to go well at first (I chuckled when the only image in my Google Drive that scored a 5-star target rating was a Smash Bros vaporwave meme), but I ran into an issue where my Android build displayed a black screen. It took more googling and tinkering than I would have liked.

At first, the black screen appeared to be a bug in Vuforia itself on newer versions of Unity, but I’m not using the newest version of Unity. Then I thought it was a problem with shaders and materials, but couldn’t find any leads on that thread. I also wondered if it might have been a problem with my phone. In the end, I fixed it by installing the app to my phone’s internal storage rather than the SD card. I did a bit more testing, and found that Vuforia’s extended tracking, even though it uses ARCore, is still rather lacking. Rotating the device at all while not looking at the target causes noticeable drift, which isn’t ideal, but isn’t a dealbreaker, either. This can probably be ameliorated by using image targets around the battle mat, by splitting the battle mat into smaller targets, or using a large target at the center of the mat, as mentioned earlier.
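For the virtual buttons I want to test next, Vuforia’s Unity integration exposes an IVirtualButtonEventHandler interface; a minimal handler attached to the image target would look roughly like this (type names vary slightly across Vuforia versions, so treat this as a sketch):

```csharp
using UnityEngine;
using Vuforia;

// Sketch: attach to the ImageTarget; registers itself with every
// VirtualButtonBehaviour defined as a child of the target.
public class TokenButtonHandler : MonoBehaviour, IVirtualButtonEventHandler
{
    void Start()
    {
        // Subscribe to press/release events for each virtual button.
        foreach (var vb in GetComponentsInChildren<VirtualButtonBehaviour>())
            vb.RegisterEventHandler(this);
    }

    // Fired when the button's region of the target is obscured.
    public void OnButtonPressed(VirtualButtonBehaviour vb)
    {
        Debug.Log("Virtual button covered: " + vb.VirtualButtonName);
    }

    // Fired when the region becomes visible again.
    public void OnButtonReleased(VirtualButtonBehaviour vb)
    {
        Debug.Log("Virtual button uncovered: " + vb.VirtualButtonName);
    }
}
```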

Going Forward

Vuforia looks like a solid option going forward. It has good image recognition, and the extended tracking is imperfect but should be serviceable. Its virtual buttons offer an intriguing option to solve issues I expect to run into, and if they’re not useful, then I will still be using a perfectly good API. Over the next week, I hope to experiment with virtual buttons a bit, and see if I can simulate a full grid. I also would like to look a little bit more at Wikitude, but it’s not my top priority unless something goes horribly wrong with Vuforia.
