There are two main types of augmented reality experiences: marker and markerless. Markerless AR has a lot of developers (including me) pretty jazzed these days, and with good reason. But why, and what’s the difference between the two?
Marker AR
Traditionally — if you can call anything traditional in augmented reality — AR involves a marker. It’s usually a physical object or a piece of paper on which an icon has been printed, like these postcards we created for thINK. Point your phone at it, and the AR experience appears over it. The marker helps the camera determine where to position the object.
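For developers curious what that looks like in code, here is a minimal Swift sketch of marker-style recognition using ARKit's image-detection API (which arrived in an ARKit update after this piece was written; other SDKs such as Vuforia handle markers in a similar spirit). "Postcards" is a hypothetical asset-catalog group holding the printed marker images, and the overlay plane stands in for whatever content you'd actually show.

```swift
import UIKit
import ARKit
import SceneKit

// A sketch of marker-based AR: the camera looks for known reference images,
// and detected markers tell ARKit where to position the content.
class MarkerViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)

        let configuration = ARWorldTrackingConfiguration()
        // "Postcards" is a hypothetical image group bundled with the app.
        if let markers = ARReferenceImage.referenceImages(inGroupNamed: "Postcards", bundle: nil) {
            configuration.detectionImages = markers
        }
        sceneView.session.run(configuration)
    }

    // Fired when the camera recognizes one of the reference images; the image anchor
    // carries the marker's position and physical size, so content can sit right on top of it.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        let size = imageAnchor.referenceImage.physicalSize
        let overlay = SCNNode(geometry: SCNPlane(width: size.width, height: size.height))
        overlay.eulerAngles.x = -Float.pi / 2   // lay the plane flat over the printed marker
        node.addChildNode(overlay)
    }
}
```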
Markers certainly have a place in the AR space, particularly while the technology is still so young. One of the big benefits of markers is that they don't require your phone to have a gyroscope. For iPhone users, every model from the iPhone 4 on has a gyroscope, so unless you've been using the same phone since 2010, you've got one. With Android or Windows devices, though, it depends on the device, which means markerless experiences aren't available to every user yet.
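If you want to branch between the two styles at runtime, a rough capability check mirroring that gyroscope requirement is only a few lines of Swift. This is a sketch, not a full device-support check; the function name is my own.

```swift
import CoreMotion

// Returns true when the device reports the gyroscope that markerless (SLAM-based)
// tracking relies on; otherwise a marker-based experience is the safer fallback.
func supportsMarkerlessTracking() -> Bool {
    return CMMotionManager().isGyroAvailable
}
```

Call it before deciding whether to offer the markerless experience or to route the user to a marker-based one instead.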
Using a marker is also good if you want to take a user from a specific jumping-off point to AR content they wouldn't otherwise have known about or sought out. We use it with our clients as part of what we call our "paper to pixels" strategy. The print piece (a table tent at an event lunch, for example) launches the viewer into a digital experience, where we can then actually engage with them. In cases like this, a marker is quite useful for guiding people to digital channels.
Markerless AR
Markerless AR is a whole other world. It uses something called SLAM (simultaneous localization and mapping), which lets you place your AR experience pretty much wherever you want, as long as there's a fairly flat surface. The technology has existed for years, but Apple's release of its markerless AR software development kit (SDK), ARKit, is a game changer: it tracks better, it's faster, and it lets you do things that other SDKs don't do as well, or at all. Because of its superior tracking (putting an augmentation in space and keeping it there), the stability of the experience is the best I've seen yet, and I've played with a lot of AR frameworks. Take this for example: we suspended well over a thousand cats in our office, and ARKit kept them floating in space, all at 60 frames per second. We also augmented a jet in our parking lot and walked nearly half a mile away before the object receded into the distance.
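For a sense of what that looks like in code, here's a minimal Swift sketch of markerless placement with ARKit, assuming a bare view controller; the small box geometry is just a placeholder for whatever content you'd actually anchor.

```swift
import UIKit
import ARKit
import SceneKit

// A sketch of markerless AR with ARKit: run world tracking (ARKit's SLAM-style
// visual-inertial odometry), detect horizontal planes, and anchor content to one.
class MarkerlessViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal   // the "fairly flat surface" requirement
        sceneView.session.run(configuration)
    }

    // Called when ARKit finds a new plane; content attached to the anchor's node
    // stays locked in place as you move around it.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        let box = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0))
        box.position = SCNVector3(x: 0, y: 0.05, z: 0)   // sit on top of the detected plane
        node.addChildNode(box)
    }
}
```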
So what can you do with markerless AR that can’t be done with a marker? Aside from the use cases I outlined recently in education, navigation, and healthcare, I envision brands coming to life in ways we have never been able to accomplish before. Imagine you’re at a trade show, and you’ve downloaded the trade show app so that you can make the most of your time there. With markerless AR (and Apple’s CoreLocation framework), it’s possible to trigger an experience simply based on your proximity to a certain booth or landmark within the conference. No need to scan a marker, and no intermediary between you and the experience.
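To make that concrete, here's a hedged Swift sketch of proximity triggering with CoreLocation's region monitoring. The booth coordinate, radius, and identifier are made up for illustration, and a real app would also need the usual location-permission entries in its Info.plist.

```swift
import CoreLocation

// A sketch of location-triggered AR: monitor a circular region around a (hypothetical)
// trade-show booth and kick off the experience when the visitor walks into it.
class BoothTrigger: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()

    func start() {
        locationManager.delegate = self
        locationManager.requestAlwaysAuthorization()   // region monitoring needs Always access

        // Roughly a 20-meter radius around the booth's coordinates (made up here).
        let boothCenter = CLLocationCoordinate2D(latitude: 41.8781, longitude: -87.6298)
        let boothRegion = CLCircularRegion(center: boothCenter, radius: 20, identifier: "booth-42")
        boothRegion.notifyOnEntry = true
        locationManager.startMonitoring(for: boothRegion)
    }

    func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
        // No marker to scan: launch the booth's AR experience directly.
        print("Entered \(region.identifier); launching the AR experience")
    }
}
```

Indoors, where GPS gets fuzzy, the same idea could lean on Bluetooth beacons and CLBeaconRegion instead of a plain circular region.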
Now imagine this happening at an amusement park. Download the park’s app and, based on your location within the park, an augmented park mascot guides you to nearby rides, games, or refreshments.
Or, take the example of the augmented product catalog. With markerless AR, rather than scanning a marker in a print catalog, you can simply browse a digital catalog on your phone and place 3D models of each product in your environment, at true scale. Ikea and Apple have already teamed up to produce an AR app that does just this with Ikea furniture.
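The placement step itself is only a few lines. Below is a hedged sketch that hit-tests a tap against a detected plane and drops a product model there; "chair.scn" is a hypothetical asset name, and the ARSCNView is assumed to already be running a world-tracking session with plane detection on (as in the earlier sketch).

```swift
import ARKit
import SceneKit

// Catalog-style placement: find a real-world point on a detected plane under the tap
// and put a 3D product model there. ARKit works in meters, so a correctly sized model
// shows up at true scale in the room.
func placeProduct(at tapPoint: CGPoint, in sceneView: ARSCNView) {
    guard let result = sceneView.hitTest(tapPoint, types: .existingPlaneUsingExtent).first,
          let productScene = SCNScene(named: "chair.scn"),          // hypothetical asset
          let productNode = productScene.rootNode.childNodes.first else { return }

    // Use the hit-test result's world transform as the product's position.
    let translation = result.worldTransform.columns.3
    productNode.position = SCNVector3(x: translation.x, y: translation.y, z: translation.z)
    sceneView.scene.rootNode.addChildNode(productNode)
}
```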
As I mentioned, there are limitations to markerless SDKs, namely that they don't work on some Android devices (or, in the case of ARKit, on anything outside Apple's newer iPhones and iPads), but I believe there's some serious potential here. ARKit itself is still in beta; when iOS 11 hits the market this fall, developers will be able to release the applications they've been feverishly working on since the SDK's release in June. I have a feeling we're going to see some pretty amazing things… and maybe a few more space cats, too.
About the Author
As a Managing Director and Trekk's Creative Director, Mike Wilson leverages his experience across industries and technologies to ensure client projects are not just beautiful but also strategic, forward-thinking, and on brand. Mike leads a team of designers creating campaigns that span print, web, and new media channels, as well as package and booth design, corporate identity development, and augmented reality. He holds a Bachelor of Fine Arts in Visual Communications from Northern Illinois University.