AR Helpers – Part 1

In this new blog series, we present our ideas for improved usability in Augmented Reality applications. Over the coming months, we will gradually build up a small reference library of various AR usability features.

Since this summer, Augmented Reality (AR) has reached a critical mass of users and is helping them complete their daily tasks. Apple’s CEO, Tim Cook, says that the rise of Augmented Reality can even be compared to the invention of the smartphone. As with the smartphone, however, users first have to get accustomed to this new technology. Many people don’t yet know exactly how Augmented Reality works or how to use the existing apps.

Since we’ve been working with this exciting technology for over six years now, we have, of course, also thought a lot about how to improve the usability of Augmented Reality applications. Based on these reflections, we’ve created a few “AR helpers”, which make using Augmented Reality apps a lot easier for our clients. Today, we’ll start with two helpers whose use seems obvious, but which are nevertheless missing from many Augmented Reality apps.

You need a target, man

By now, there are many different forms of Augmented Reality. Some of them are based on image recognition or image-based tracking and use so-called targets, which are printed pictures or shapes. Two well-known examples of frameworks that offer image-based Augmented Reality are ARToolKit and Vuforia. In these frameworks, the target is used to determine the position of the digital content in the real world. The smartphone’s camera therefore has to be pointed at the target before the Augmented Reality experience can start.
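To make this more concrete, here is a minimal sketch of image-based tracking. The post talks about ARToolKit and Vuforia; the example below uses Apple’s ARKit instead, simply because it is compact, and the idea is the same: register the printed target with the tracking session and attach the content once the framework recognizes the target in the camera stream. The asset catalog group name “AR Targets” and the placeholder box are assumptions for illustration, not part of any real project.

```swift
import UIKit
import ARKit
import SceneKit

class TargetViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.delegate = self

        // Register the printed targets (stored in an asset catalog group) with the session.
        let configuration = ARWorldTrackingConfiguration()
        if let targets = ARReferenceImage.referenceImages(inGroupNamed: "AR Targets", bundle: nil) {
            configuration.detectionImages = targets
        }
        sceneView.session.run(configuration)
    }

    // Called once a target has been recognized; `node` already sits at the
    // target's real-world position, so the content is simply attached to it.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARImageAnchor else { return }
        let content = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0))
        node.addChildNode(content)   // stand-in for the app's actual 3D model
    }
}
```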

One problem with this technique is that, as mentioned above, many users don’t know how Augmented Reality works. They open the app and, before any content can be displayed, are confronted with an empty camera stream that they don’t know what to do with. In many cases, they are (understandably) so frustrated that they immediately close the app and, in the worst case, uninstall it altogether.

The first (and admittedly very basic) of our AR helpers addresses exactly this problem. We deal with it by adding an overlay with a short explanation to the camera stream. You can also time this overlay and adjust the timing to the particular user group you are working with. In some cases, you might want to keep the overlay permanently visible in the camera stream; in others, it might be enough to show it only when no target has been detected for a few seconds. You can see these two timing variants in the videos below.
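For those who would rather read code than watch videos, here is a minimal sketch of this helper, assuming a plain UIView overlay on top of the camera view. The class name, the three-second default, and the callback names are assumptions for illustration.

```swift
import UIKit

// Shows the explanation overlay whenever no target has been seen for a while.
final class HintOverlay {
    private let hintView: UIView        // the overlay with the short explanation
    private let delay: TimeInterval     // seconds without a target before the hint appears
    private var lastTargetSeen = Date.distantPast

    init(hintView: UIView, delay: TimeInterval = 3.0) {
        self.hintView = hintView
        self.delay = delay
    }

    // Hook this up to the tracking framework's "target found" callback,
    // e.g. renderer(_:didAdd:for:) in ARKit or a Vuforia trackable result.
    func targetDetected() {
        lastTargetSeen = Date()
        hintView.isHidden = true
    }

    // Call once per frame, e.g. from session(_:didUpdate:).
    func tick() {
        if Date().timeIntervalSince(lastTargetSeen) >= delay {
            hintView.isHidden = false
        }
    }
}
```

For an overlay that should stay visible at all times, you would simply skip the hiding step; the delay only matters for the “show it after a few seconds without a target” variant.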

Follow me, little 3D model

The second AR helper also addresses the problem discussed above, but takes a slightly different approach. It lets you display your digital content (a 3D model, for example) directly in the camera stream even when no target has been detected. That way, the user isn’t faced with an empty camera stream and can already get an idea of what awaits them in the Augmented Reality view. They might also be more inclined to look for the target when they can already see their goal ahead of them, so to speak.
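A rough sketch of this idea, again using ARKit and SceneKit for brevity: while no target is being tracked, the model is parented to the camera node so that it stays visible in the stream and follows the device. The half-metre distance and the function name are assumptions.

```swift
import ARKit
import SceneKit

// Shows the model floating in front of the camera before any target is found.
func showPreview(of model: SCNNode, in sceneView: ARSCNView) {
    guard let cameraNode = sceneView.pointOfView else { return }
    model.position = SCNVector3(0, 0, -0.5)   // half a metre in front of the lens
    cameraNode.addChildNode(model)            // the model now follows the device
}
```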

Additionally, this AR helper enables you to take the 3D model with you once it has been recognized and positioned via the target. You therefore don’t have to keep the target in your camera stream at all times and can look at the content while relaxing on your sofa. And since that wasn’t fancy enough for us, we’ve extended this helper so that the content flies or snaps from the device to the target, or vice versa. But instead of me trying to explain, just watch the video below, where you can see this effect in two different use cases:
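For the curious: the snap effect can be sketched with SceneKit’s implicit animations. The idea is to reparent the model from the camera to the target’s node without changing its visible pose, and then animate it onto the target; the 0.6-second duration is an arbitrary choice.

```swift
import SceneKit

// Moves the model from wherever it currently is (e.g. attached to the camera)
// onto the detected target's node, animating the transition.
func snap(_ model: SCNNode, onto targetNode: SCNNode) {
    let currentPose = model.worldTransform    // remember the pose in world space
    model.removeFromParentNode()
    targetNode.addChildNode(model)
    model.setWorldTransform(currentPose)      // keep the model visually in place

    SCNTransaction.begin()
    SCNTransaction.animationDuration = 0.6
    model.transform = SCNMatrix4Identity      // animate to the target's origin
    SCNTransaction.commit()
}
```

The reverse direction, from the target back to the device, works the same way: reparent the model onto the camera node instead and animate it to a position in front of the lens.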

And now, stay tuned – more AR helpers are on their way!