Learn ‘o’ Sys – VR/AR App for Learning the Human Body for School Children – Week 6 Blog


Hi, Readers, 

Welcome back. We are into an interesting phase of the product. Yes, you guessed it right: the demo is ready for you to see.

 

But hold on. Before we get to the demo, we need to cover two important things: the computing engine used for development, and the binocular view and distortion constraints that come into play when the application is ported onto the headset. In other words, we will look at how the product and application are developed on a computer and how they are then moved to the HMD.

 

Thanks go to the Intel NUC8i7HVK, code-named Hades Canyon, which was the savior here. Our regular PCs, with limited graphics capability and speed, were completely stuck while building the application. After the NUC arrived, the process went smoothly, and all the troubleshooting and development were completed much faster than expected. The speed and the elevated graphics support provided by the NUC are the features worth highlighting: the Intel Core i7 processor paired with the AMD Radeon RX Vega M GPU was impressive, and the difference was felt on the very first attempt. Whenever anyone works on AR or VR, the most important factor to consider is real-time performance: the response should be fast, every time, on time. Delayed responses keep users away from the system, and that is exactly what worried us earlier with the test run on the plain PC. The response was delayed, we could feel the difference, and technically it was dead time. The NUC blew that delay away. We only wish we could port it onto the headset too. :)
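To give a rough feel for what "real time" means here, below is a minimal sketch of checking per-frame render times against a display's refresh budget. The frame rate and frame times are made-up illustrative numbers, not measurements from our setup.

```python
# Minimal sketch: checking rendered frame times against a display's
# refresh budget. All numbers are illustrative assumptions, not
# measurements from our setup.

TARGET_FPS = 60                         # assumed display refresh rate
FRAME_BUDGET_MS = 1000.0 / TARGET_FPS   # ~16.7 ms available per frame

# Hypothetical per-frame render times in milliseconds.
frame_times_ms = [12.1, 15.8, 18.4, 14.9, 22.3, 16.0]

missed = [t for t in frame_times_ms if t > FRAME_BUDGET_MS]
print(f"Frame budget: {FRAME_BUDGET_MS:.1f} ms")
print(f"{len(missed)} of {len(frame_times_ms)} frames missed the budget "
      "(felt by the user as lag, i.e. dead time).")
```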

 

The development and test bed setup can be visualized in the architecture diagram below.

Fig. 1 The development setup and test bed architecture.

 

Once development was complete, we tested it rigorously. We even made it a little fun through the games we built with the NUC and the sensor combo: we developed glow hockey and boxing games, and it was all fun amidst the development challenges. Screenshots of those results are presented above; it was very interesting, and we feel the development kits and the NUC have together provided endless opportunities to turn imagination into a real product. One appreciable aspect of the NUC is that it stayed fast even with multiple frames on the screen.

Now it is time to understand what happens when the same setup moves into the headset, with a smartphone as the computing element. It is a bit slower, and we could feel the difference in real-time response. Two aspects come into play here:

  1. Binocular view 
  2. Distortion constraints. 

Let us understand both of these points and then have a look at the demo.

 

Binocular view: 

Since the phone sits very close to the eyes, viewing it directly would be very difficult, at times annoying, and it could even cause headaches. To handle this, we adopted the binocular mechanism: two lenses mounted in the HMD, aligned with the position of each eye. Because of this arrangement, the smartphone screen appears roughly like a 50-inch TV. The binocular view is represented in Figure 2 below for easier understanding, and a small code sketch of the side-by-side layout follows the figure.

Fig.2 Binocular view 
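To make the idea concrete, here is a minimal sketch of a generic side-by-side stereo layout: the phone screen is split into two per-eye viewports, and the two virtual cameras are offset by half the inter-pupillary distance each. The resolution and IPD values are illustrative assumptions, not the exact parameters of our HMD or lens kit.

```python
# Minimal sketch: a side-by-side (binocular) stereo layout on a single
# phone screen. All values are illustrative assumptions, not the exact
# parameters of our HMD or lens kit.

SCREEN_W, SCREEN_H = 1920, 1080   # assumed phone resolution, pixels
IPD_M = 0.063                     # assumed inter-pupillary distance, metres

def eye_viewports(width, height):
    """Split the screen into left/right viewport rectangles (x, y, w, h)."""
    half = width // 2
    return (0, 0, half, height), (half, 0, half, height)

def eye_camera_offsets(ipd_m):
    """Horizontal camera offsets (metres) for each eye, so the scene is
    rendered from two slightly different positions."""
    return -ipd_m / 2.0, +ipd_m / 2.0

left_vp, right_vp = eye_viewports(SCREEN_W, SCREEN_H)
left_cam, right_cam = eye_camera_offsets(IPD_M)
print("Left  viewport:", left_vp, " camera x-offset:", left_cam)
print("Right viewport:", right_vp, "camera x-offset:", right_cam)
```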

The Distortion – A Must-Know Point

When the screen is viewed through the two lenses, the image suffers from what is called distortion. Any deviation from rectilinear projection, i.e. straight lines no longer appearing straight, is referred to as distortion. This is illustrated in Figure 3, and a small sketch of the usual radial model follows the figure.

 

Fig. 3 Distortion
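As a rough illustration, lens distortion is usually described with a radial model: a point's distance from the image centre is rescaled by a polynomial in that distance, so straight lines near the edges appear curved. The sketch below uses made-up coefficients, not those of any particular lens; in practice, VR viewers render a pre-distorted (barrel) image so that the pincushion distortion of the lenses cancels it out.

```python
# Minimal sketch: radial (barrel/pincushion) distortion of points in
# normalized image coordinates. The coefficients k1 and k2 are made-up
# illustrative values, not those of any particular lens.

def distort(x, y, k1=0.22, k2=0.05):
    """Simple radial model: r_distorted = r * (1 + k1*r^2 + k2*r^4),
    applied about the image centre at (0, 0)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Points along a straight vertical line near the edge of the view...
line = [(0.8, y / 10.0) for y in range(-5, 6)]
# ...no longer lie on a straight line after distortion (the line bows).
print([tuple(round(v, 3) for v in distort(x, y)) for x, y in line])
```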

The time has come for the demo. The next section presents the demo screenshots, followed by the video. Figure 4 shows the moment just after launch, where gesture recognition happens; the hand and the gesture being recognized are visible in the same figure. The demo video gives a fuller understanding.

 

Fig. 4 The pinch – gesture recognition.

 

Fig. 5 Augmented Content being accessed through Gestures

 

Fig. 6 – Moving the HUD to a different angle (Move the head) 

 

 

Fig. 7 Skeleton included as an augmented digital component.

One can also see that the complete skeleton can be explored through the gestures we used. The grab and pinch gestures are used for this, which eases access and provides a rich user experience. A minimal sketch of how such gestures can be detected is given below.
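To give a flavour of how such gestures can be detected, here is a minimal sketch that assumes a hand-tracking sensor reporting 3D fingertip and palm positions. The thresholds and the sample landmark values are illustrative assumptions, not our exact implementation.

```python
# Minimal sketch: detecting pinch and grab gestures from 3D hand
# landmarks (thumb tip, index tip, other fingertips, palm centre).
# Positions are in metres; thresholds and sample values are
# illustrative assumptions only.

import math

def is_pinch(thumb_tip, index_tip, threshold_m=0.02):
    """Pinch: thumb tip and index tip come close together."""
    return math.dist(thumb_tip, index_tip) < threshold_m

def is_grab(fingertips, palm_center, threshold_m=0.05):
    """Grab: all fingertips curl in close to the palm centre."""
    return all(math.dist(tip, palm_center) < threshold_m for tip in fingertips)

# Hypothetical landmark positions for one tracked frame (a pinch pose).
thumb = (0.010, 0.000, 0.30)
index = (0.020, 0.005, 0.30)
palm = (0.000, -0.020, 0.32)
tips = [thumb, index, (0.030, -0.010, 0.31),
        (0.035, -0.015, 0.31), (0.050, -0.030, 0.31)]

print("Pinch detected:", is_pinch(thumb, index))   # True for this pose
print("Grab detected:", is_grab(tips, palm))       # False for this pose
```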

 

Fig. 8 Skeleton – Full Skeleton view (Bottom view) 

 

Fig. 9 Skeleton – Full Skeleton view (Side view)

 

The entire demo is presented, along with an apt explanation, in the video below. Hope you like it! Next week, we shall learn about the same application being built for VR. Stay tuned and have a great weekend ahead!

 


Further Reading: 

[1] McNeill, D., 1992. Hand and mind: What gestures reveal about thought. University of Chicago press.

[2] Hotelling, S., Strickon, J.A., Huppi, B.Q., Chaudhri, I., Christie, G., Ording, B., Kerr, D.R. and Ive, J.P., Apple Inc, 2013. Gestures for touch sensitive input devices. U.S. Patent 8,479,122.

[3] Rubine, D., 1991. Specifying gestures by example (Vol. 25, No. 4, pp. 329-337). ACM.

[4] Feiner, S., Macintyre, B. and Seligmann, D., 1993. Knowledge-based augmented reality. Communications of the ACM, 36(7), pp.53-62.

[5] Starner, T., Mann, S., Rhodes, B., Levine, J., Healey, J., Kirsch, D., Picard, R.W. and Pentland, A., 1997. Augmented reality through wearable computing. Presence: Teleoperators & Virtual Environments, 6(4), pp.386-398.

[6] Van Krevelen, D.W.F. and Poelman, R., 2010. A survey of augmented reality technologies, applications and limitations. International Journal of Virtual Reality, 9(2), p.1.