Learn ‘o’ Sys – VR/AR App for Learning the Human Body for School Children – Week 3 Blog

ID 672655
Updated 2/1/2019
Version Latest
Public


Hello, Folks. 

 

Welcome to the third week's blog! In the previous two weeks, we introduced you to the platform and the idea on which we are going to build the system.

 

As discussed, we will start by implementing AR for learning.

 

The components involved are the Leap Motion sensor, Vuforia, and their APIs. We need to learn a bit about each of these.

 

David Holz invented the Leap Motion technology; ASUS partnered with him initially, and later Hewlett-Packard joined hands with him to complete the hardware development. The Leap Motion takes input from the motion of the human hands and fingers. The appreciable point is that it does not require any physical contact or touch: the input is recognized entirely contact-free.

The device is just 3 inches long and packs a handful of features. It can be used with a desktop machine or attached to the back of a smartphone for capturing gestures; we have preferred the second option and interfaced the sensor with a smartphone. The device has two monochromatic IR cameras and three IR LEDs, and its coverage area is about 1 meter. Fig. 1 shows the sensor.

 

Fig. 1 Leap Motion Sensor 

 

 

The way it works is really wonderful. The LEDs in the Leap Motion sensor generate IR light, and the two cameras capture about 300 frames per second. These frames are sent through the USB cable to the computer or mobile device, where the software analyzes them using some fairly complex math, reconstructing 3D position data by comparing the 2D frames produced by the two cameras. The interaction space is divided into two zones, hover and touch: the hover zone is used for aiming, while the touch zone is used for creating touch events on screen. From these zones the inputs are captured by the sensor and passed on to the software running on the device.

Fig. 2 First view level. 
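To make this data flow a little more concrete, here is a minimal sketch of reading hand and finger data from the sensor. It is based on the classic Leap Motion C# SDK (names such as Controller, Frame and TouchDistance come from that SDK and may differ in newer versions); it simply polls frames and reports whether each finger is in the hover or the touch zone.

```csharp
using System;
using Leap;  // classic Leap Motion C# SDK

class LeapFrameReader
{
    static void Main()
    {
        // Connects to the Leap Motion service; frames arrive over USB from the sensor.
        Controller controller = new Controller();

        while (true)
        {
            // The most recent frame of tracking data (the device captures ~300 fps).
            Frame frame = controller.Frame();

            foreach (Hand hand in frame.Hands)
            {
                // 3D palm position reconstructed by comparing the two 2D IR camera images.
                Console.WriteLine("Palm position: " + hand.PalmPosition);

                foreach (Finger finger in hand.Fingers)
                {
                    // TouchDistance runs from +1 (far edge of the hover zone)
                    // through 0 (zone boundary) down to -1 (deep in the touch zone).
                    if (finger.TouchDistance < 0)
                        Console.WriteLine("Finger " + finger.Id + ": touch zone");
                    else
                        Console.WriteLine("Finger " + finger.Id + ": hover zone");
                }
            }
        }
    }
}
```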

 

To quickly recollect, the proposed system is shown diagrammatically in Fig. 3. We have covered the Leap Motion sensor and made clear that we are going to use a mobile phone as the device that processes the acquired data.

Fig. 3 How the system works. 

 

Now, it is important to look at the second part of the implementation, i.e., how to use AR with the Leap.

 

In the early days, AR was fully based on markers: a pre-created pattern is required to embed any AR object in the space, and the expected AR content appears only when that pattern is shown in front of the camera. This can be achieved quite easily today with the ImageTarget class. But there is a major challenge in this approach. One cannot carry the pattern around while using a head-mounted display (HMD), and it would be practically impossible to stick patterns on every wall for scanning. Hence, marker-based augmented reality is uncomfortable when combined with an HMD. So, what is the remedy? Go markerless!
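For context, this is roughly what the marker-based flow looks like in a Vuforia Unity script: a handler listens to the tracking status of an ImageTarget and shows the attached content only while the marker is visible. The sketch below uses the older Vuforia API (ITrackableEventHandler, TrackableBehaviour), and anatomyModel is a hypothetical placeholder for our 3D content; names can differ between SDK versions.

```csharp
using UnityEngine;
using Vuforia;  // older Vuforia Unity SDK

// Attach to the ImageTarget GameObject; 'anatomyModel' is a hypothetical
// child object holding the 3D content to show when the marker is found.
public class MarkerVisibilityHandler : MonoBehaviour, ITrackableEventHandler
{
    public GameObject anatomyModel;
    private TrackableBehaviour mTrackableBehaviour;

    void Start()
    {
        mTrackableBehaviour = GetComponent<TrackableBehaviour>();
        if (mTrackableBehaviour != null)
            mTrackableBehaviour.RegisterTrackableEventHandler(this);
    }

    // Called by Vuforia whenever the marker is found or lost by the camera.
    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        bool visible = newStatus == TrackableBehaviour.Status.DETECTED ||
                       newStatus == TrackableBehaviour.Status.TRACKED;
        anatomyModel.SetActive(visible);
    }
}
```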

 

 

Fig. 4 Go markerless. 

 

One can see from Fig. 4 that the AR object, an aircraft, is projected into the real space without any marker, which makes things much easier for the end user. Markerless AR typically uses the GPS of a smartphone to locate and interact with AR resources. We implement markerless AR in Unity with an API called Vuforia, and we use Vuforia classes to place objects in real environments. The user-defined target classes are used to capture the target data through the camera; one should be familiar with their usage and make sure there is minimal perspective distortion. Object recognition also plays a major role in the entire process.

 

The classes used, along with their functions, are listed below:

ImageTargetBuilder – Encapsulates all the functionality needed to create a user-defined target on the fly.
Image – Represents an image; used to expose the camera frame.
ImageTargetAbstractBehaviour – Serves both as an augmentation definition for an ImageTarget in the editor and as a tracked image target result at runtime.
ImageTracker – Encapsulates methods to manage DataSets and provides access to the ImageTargetBuilder and TargetFinder classes.
TrackableBehaviour – The base class of all the trackable behaviour classes.
ImageTarget – A trackable that represents a flat natural-feature target.
ObjectTargetBehaviour – Provides the dataset and target used for object target reference.
DefaultTrackableEventHandler – Responsible for handling tracking callbacks for the ObjectTargetBehaviour.
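To show how a few of these classes might fit together, here is a rough sketch of creating a user-defined target at runtime and anchoring content to it. It follows the older Vuforia Unity API described above (TrackerManager, ImageTracker, ImageTargetBuilder); method names such as StartScan, GetFrameQuality and Build belong to that API and may differ in later SDK versions, and anatomyModel is again a hypothetical placeholder.

```csharp
using UnityEngine;
using Vuforia;  // older Vuforia Unity SDK

// Rough sketch: scan the camera feed, build a user-defined target from the
// current frame, and anchor the learning content to it (no printed marker needed).
public class UserDefinedTargetCreator : MonoBehaviour
{
    public ImageTargetBehaviour targetTemplate;  // template ImageTarget already in the scene
    public GameObject anatomyModel;              // hypothetical 3D content to place
    private ImageTargetBuilder targetBuilder;

    void Start()
    {
        // The ImageTracker manages DataSets and exposes the ImageTargetBuilder.
        ImageTracker tracker = TrackerManager.Instance.GetTracker<ImageTracker>();
        targetBuilder = tracker.ImageTargetBuilder;

        // Start scanning camera frames for a usable natural-feature target.
        targetBuilder.StartScan();
    }

    // Hooked up to a UI button: capture the current view as a new target.
    public void OnCreateTargetPressed()
    {
        // Build only when the frame has enough texture, i.e. minimal
        // perspective distortion and good feature quality.
        if (targetBuilder.GetFrameQuality() == ImageTargetBuilder.FrameQuality.FRAME_QUALITY_HIGH)
        {
            // Name of the new target and its width in scene units.
            targetBuilder.Build("UserTarget", targetTemplate.GetSize().x);

            // In a full implementation the resulting TrackableSource is added to a
            // DataSet via the ImageTracker; here we simply parent the content under
            // the template target so it appears anchored in the real space.
            anatomyModel.transform.SetParent(targetTemplate.transform, false);
            anatomyModel.SetActive(true);
        }
    }
}
```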

 

Well, enough technical information for this week's blog. Next week we shall cover the details of the gestures and related information. Until then, stay tuned! 

 

Further Reading for the week: 

[1] Jonathan J. Hull, Berna Erol, Jamey Graham, Qifa Ke, Hidenobu Kishi, Jorge Moraleda and Daniel G. Van Olst, "Paper-Based Augmented Reality", 17th International Conference on Artificial Reality and Telexistence, 2007.

[2] Ronald Azuma et al., "Recent Advances in Augmented Reality", IEEE Computer Graphics and Applications, November 2001.

[3] Jože Guna, Grega Jakus, Matevž Pogačnik, Sašo Tomažič and Jaka Sodnik, "An Analysis of the Precision and Reliability of the Leap Motion Sensor and Its Suitability for Static and Dynamic Tracking", Sensors, 2014.

[4] F. Permana, H. Tolle, F. Utaminingrum and R. Dermawi, "The Connectivity Between Leap Motion and Android Smartphone for Augmented Reality (AR)-Based Gamelan", Journal of Information Technology and Computer Science, 3(2), pp. 146-158, 2018.

[5] R. T. Azuma, "A Survey of Augmented Reality", Presence: Teleoperators & Virtual Environments, 6(4), pp. 355-385, 1997.