Hi, Readers,
We hope the previous weeks' blogs were interesting and engaging. Over the past few weeks, we discussed the fundamentals of the proposed application, covered AR in detail along with interfacing the sensor with the smartphone, and introduced the gestures used in the system. This week, we move to the next stage.
Bringing AR to the headset is all fun, and this week's blog explains the challenges we faced along the way and how we conquered them.
Coordinates for the sensor in the headset:
During the very initial runs, the sensor was placed flat on the table, literally on the tabletop. That case posed no challenges, since the sensor calibration had already been done for that scenario. But when the sensor is mounted on the headset, its coordinate frame changes, and the software has to be developed to handle this change accordingly.
Fig. 1 Normal Leap coordinates when placed on the table.
The normal, table-mounted coordinates for the sensor are presented in Fig. 1. As discussed, our attention is on the HMD, where the positions of the Y and Z axes certainly change. On the table, the Y axis points upward and the Z axis points to the front. But when the sensor is mounted on the headset, the Y axis points to the front while the Z axis points upward. This change is pivotal and has to be taken care of in the software.
Fig. 2 Coordinates mapping.
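To make the remapping concrete, here is a minimal sketch in Python with NumPy (our choice purely for illustration; the matrix, function name, and sign conventions are assumptions, not the application's actual code) of how points from the head-mounted sensor could be remapped back to the table-mounted convention:

```python
import numpy as np

# Hypothetical remapping matrix: swaps the Y and Z axes so that a point
# reported by the head-mounted sensor lines up with the table-mounted
# convention used by the rest of the software.
TABLE_FROM_HMD = np.array([
    [1.0, 0.0, 0.0],   # X stays X
    [0.0, 0.0, 1.0],   # sensor Z (now "up") becomes world Y
    [0.0, 1.0, 0.0],   # sensor Y (now "front") becomes world Z
])

def remap_hmd_point(p_sensor):
    """Map a 3-D point from HMD-mounted sensor axes to table-mounted axes."""
    return TABLE_FROM_HMD @ np.asarray(p_sensor, dtype=float)

# A fingertip 200 mm along the head-mounted sensor's Z axis ends up
# 200 mm along world Y, i.e. above the user, as expected.
print(remap_hmd_point([0.0, 0.0, 200.0]))   # -> [  0. 200.   0.]
```

Note that swapping two axes flips the handedness of the frame, so depending on the sensor's convention one axis may also need a sign flip; that detail belongs to calibration.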
A second-level view is presented in Fig. 3 below for reference.
Fig. 3 Second-level view.
Now, let's look at the concept of multiple-frame AR.
So far, the discussion has revolved around single-frame AR. Supporting multiple frames makes the application more interactive and intelligent, which shall certainly elevate the user experience. To accomplish this, a simple strategy is followed: whenever a frame is captured, a parent object is created (remember fork()), and whenever an AR object is placed, a child is created under that parent. The child is created because it carries the tag for the AR objects in that particular frame. Similarly, when the camera is turned to a different position, the frame at that place is captured and the same process of creating a parent and a child takes place.
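As a rough sketch of this strategy (the class and method names below are our own, purely illustrative of the parent/child idea rather than the actual implementation):

```python
class FrameNode:
    """Parent object created whenever a new frame is captured."""
    def __init__(self, frame_id):
        self.frame_id = frame_id
        self.children = []          # AR objects anchored to this frame

    def place(self, tag):
        """Create a tagged child for an AR object placed in this frame."""
        child = ARObject(tag, parent=self)
        self.children.append(child)
        return child

class ARObject:
    """Child carrying the tag that identifies the AR content."""
    def __init__(self, tag, parent):
        self.tag = tag
        self.parent = parent

# Turning the camera to a new position repeats the same process:
frames = []
for frame_id, tag in [(0, "brain"), (1, "skeleton")]:
    node = FrameNode(frame_id)      # one parent per captured frame
    node.place(tag)                 # one tagged child per AR object
    frames.append(node)

for f in frames:
    print(f.frame_id, [c.tag for c in f.children])
```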
An instance of this multiple-frame AR is presented below. After launching the application, the brain image (AR) is first presented to the user for interaction; when the HMD is moved to a different position (angle), the skeleton content is presented for interaction instead.
Fig. 4 Multiple frame AR
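For the instance above, the switch between the two contents could look like the following hedged sketch (the 45-degree threshold and the function name are assumptions for illustration, not the application's actual values):

```python
# Hypothetical angle-based selection: pick which captured frame's AR
# content to show based on the current head yaw (threshold is made up).
def select_tag(yaw_degrees):
    """Return the AR tag for the frame nearest the current view angle."""
    return "brain" if abs(yaw_degrees) < 45 else "skeleton"

print(select_tag(10))    # facing forward  -> brain
print(select_tag(90))    # turned sideways -> skeleton
```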
Readers will be able to see these effects during the demonstration. A fully functional demo of this application will appear in next week's blog.
The persistent and locked view is also carefully handled in the application we developed. In next week's blog, we shall cover the binocular view and distortion-related aspects.
Stay tuned, folks, for next week's demo!
Happy week ahead!
Further Reading:
[1] McNeill, D., 1992. Hand and mind: What gestures reveal about thought. University of Chicago press.
[2] Hotelling, S., Strickon, J.A., Huppi, B.Q., Chaudhri, I., Christie, G., Ording, B., Kerr, D.R. and Ive, J.P., Apple Inc, 2013. Gestures for touch sensitive input devices. U.S. Patent 8,479,122.
[3] Rubine, D., 1991. Specifying gestures by example (Vol. 25, No. 4, pp. 329-337). ACM.
[4] Feiner, S., Macintyre, B. and Seligmann, D., 1993. Knowledge-based augmented reality. Communications of the ACM, 36(7), pp.53-62.
[5] Starner, T., Mann, S., Rhodes, B., Levine, J., Healey, J., Kirsch, D., Picard, R.W. and Pentland, A., 1997. Augmented reality through wearable computing. Presence: Teleoperators & Virtual Environments, 6(4), pp.386-398.
[6] Van Krevelen, D.W.F. and Poelman, R., 2010. A survey of augmented reality technologies, applications and limitations. International Journal of Virtual Reality, 9(2), p.1.