Creating an Interactive VR Environment to Learn Coding

ID 659237
Updated 9/5/2018
Version Latest



See it in action!


Built a Solid Base but Need to Keep Iterating

One of the premises of Zenva Sky is to make learning computer science and coding immersive and interactive. Virtual reality allows users to interact with their surroundings in ways that are not possible with 2D or traditional 3D development, which is what we normally do at Zenva when we teach coding.

VR is a new medium, so it is hard to know in advance what will work and what won't. The only way to find out is to build an actual VR prototype. Our methodology and goal for the next two weeks is to continue exploring prototypes and mechanic ideas until we find one that feels suitable.

The prototype we have built so far allowed us to try a few things and yielded some really interesting findings. Unfortunately, it also surfaced a few challenges we will need to solve.

Keep on reading to learn what worked, what didn’t, and how we plan to solve these issues!

Hardware and Software

We’d like to thank Intel and Microsoft for their generous support and the hardware they provided.

Zenva Sky is being developed on Windows 10 Pro with the Unity engine.

Coding the World Around You

Last week we presented an interactive experience where the user can apply “commands” to objects in the scene.

We expanded on that concept by improving the UX so that the prototype can be used entirely in VR, without the use of the keyboard.

1. Select a target object

By using their hand-tracked controllers, the user can point to an object with the “laser pointer” and activate it to prompt a series of commands. For now we only have the command “Move”:

2. Select command parameters

When selecting a command, options for that command are presented. In this case, the user can choose what axis to move on and how much.

There is some progress from last week:

  • The target object now shows the coordinate axes, so that the user knows which direction each axis points. Notice we opted to use a right-handed coordinate system, which is what most people are familiar with. Under the hood, Unity uses a left-handed coordinate system.
  • The user now can cancel this action.
  • The user can now select the axis and amount entirely in VR via a visual dropdown (previously, they needed to type the correct amount on the keyboard).
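Since we show the user right-handed axes while Unity works in a left-handed system, the conversion between the two conventions amounts to flipping one axis. Here is a minimal sketch of a hypothetical helper (the class and method names are ours, not part of Unity):

```csharp
using UnityEngine;

// Hypothetical helper: convert a movement expressed in the right-handed
// axes we show the user into Unity's left-handed world space.
public static class AxisConvention
{
    // Negating the Z component maps between right- and left-handed systems
    // while keeping X (right) and Y (up) unchanged.
    public static Vector3 RightToLeftHanded(Vector3 rightHanded)
    {
        return new Vector3(rightHanded.x, rightHanded.y, -rightHanded.z);
    }
}
```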

The way we implemented these spatial UI windows is by using Canvas objects in Unity and setting their render mode to “world space”, which means the UI is drawn in the 3D world (as opposed to being 2D and overlaid on the screen).

We add box colliders to these canvases so they can register the “laser pointer”. For the laser pointer we are using VRTK version 3.
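Putting those two pieces together, the setup for one spatial window can be sketched roughly like this (a simplified illustration, not our exact component; the collider sizing is an assumption on our part):

```csharp
using UnityEngine;

// Hypothetical setup for a spatial UI window: render the Canvas in world
// space and give it a BoxCollider so the laser pointer raycasts can hit it.
public class SpatialWindow : MonoBehaviour
{
    void Awake()
    {
        var canvas = GetComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;

        // Size the collider to match the canvas rectangle so pointer
        // hits register anywhere on the window.
        var rect = ((RectTransform)canvas.transform).sizeDelta;
        var boxCollider = gameObject.AddComponent<BoxCollider>();
        boxCollider.size = new Vector3(rect.x, rect.y, 0.01f);
    }
}
```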

3. Add a command to the Program

The Program is the main set of instructions that will be executed. The user can add different commands to it (well, for now just Move!).

In this manner, users learn intuitively what a command and a program is. This approach is similar to that taken by popular robots that teach kids coding.

The Program now shows a screenshot of the object that will be moved, in order to make it easy to distinguish which objects the commands apply to.

The main updates from last week here are the ability to remove commands with the “X” button, and to see that screenshot of the object.

The screenshots are taken in-game and saved into a variable. This was done by adding another camera to the game that doesn’t render to the VR headset, positioning it between the user and the target object, and rendering its contents into a texture, which is then placed in the UI.

The camera ignores certain layers of the scene such as the sky and the UI elements.
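The screenshot technique above can be sketched as follows. This is a simplified illustration under our own assumptions (field names, texture size, and the midpoint placement are ours), not the exact code in the prototype:

```csharp
using UnityEngine;

// Hypothetical sketch of the screenshot camera: a second, disabled camera
// that does not render to the headset, placed between the user and the
// target, rendering into a RenderTexture that a UI RawImage can display.
public class ObjectScreenshot : MonoBehaviour
{
    public Camera snapshotCamera;   // kept disabled so it only renders on demand
    public LayerMask visibleLayers; // excludes layers such as the sky and the UI

    public RenderTexture Capture(Transform user, Transform target)
    {
        var texture = new RenderTexture(256, 256, 16);
        snapshotCamera.cullingMask = visibleLayers;
        snapshotCamera.targetTexture = texture;

        // Position the camera halfway between the user and the target,
        // looking at the target.
        snapshotCamera.transform.position =
            Vector3.Lerp(user.position, target.position, 0.5f);
        snapshotCamera.transform.LookAt(target);

        snapshotCamera.Render(); // render a single frame into the texture
        return texture;
    }
}
```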

4. Execute program

When executing the program, all the commands are applied to the corresponding targets, in sequence. If a movement encounters a collision (for instance, you try to make the block go underground), that command ends and the next command executes.

This was already implemented last week, and the only change we made was to send everything back to its initial position a couple of seconds after the program terminates.
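The execution flow described above maps naturally onto Unity coroutines. A minimal sketch, assuming a hypothetical `ICommand` interface of our own design (not the prototype’s actual types):

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of the Program runner: commands execute in sequence,
// a blocked movement ends its command early, and everything resets to its
// initial position a couple of seconds after the program finishes.
public class ProgramRunner : MonoBehaviour
{
    public interface ICommand
    {
        // Yields while the command is running; a Move implementation
        // would finish early when its target collides with something.
        IEnumerator Execute();
    }

    public List<ICommand> commands = new List<ICommand>();

    public IEnumerator Run(System.Action resetTargets)
    {
        // Each command runs to completion before the next one starts.
        foreach (var command in commands)
            yield return StartCoroutine(command.Execute());

        yield return new WaitForSeconds(2f);
        resetTargets(); // send objects back to their initial positions
    }
}
```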


What Could be Improved?

This prototype provided very valuable insights into what a basic programmable environment feels like. However, the following issues were found:

  • It’s hard to get an idea of the distances, especially if the objects are moved above the ground. This could be mitigated by using a grid on the ground, and maybe a 3D grid, but that just felt too complex for the user.
  • It became a complex spatial puzzle game, which was never the goal. To solve the challenge we had in mind, the user would need more spatial / 3D intelligence than computer science or coding. While we think spatial skills are essential in the innovation economy, that is not the vision we had for this application.
  • The UI has become too complex. We found ourselves clicking too many times and not feeling immersed enough. The vision for this app is not to replicate 2D UI interactions in 3D, but to create something natural for the VR environment.

So, What’s the Plan?

There are two variations of this prototype that we want to explore next. The good news is that the components we’ve developed so far can easily be ported to either:

Option 1. Controlling a vehicle / mech robot

  • The user is in a vehicle / walking robot and the commands all apply to this vehicle.
  • The vehicle only moves in the horizontal plane.
  • The floor has a grid so it’s easy to know how many units you need to move. All objects fit exactly within a grid cell.
  • Besides moving and rotating, the robot can activate Boolean logic gates. We can then introduce how these gates work in an interactive manner, so the user has to “code” a sequence that will unlock these gates.
  • The UI can be physical buttons and levers in the vehicle dashboard, so no more “clicking”.
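To make the logic-gate idea concrete, a gate the robot activates could be modeled as a small class like the one below. This is purely an illustrative sketch of the concept (the type and names are hypothetical, nothing here is built yet):

```csharp
// Hypothetical sketch of a logic gate the robot could activate: the gate
// unlocks only when its inputs satisfy the chosen Boolean operation.
public enum GateType { And, Or, Not }

public class LogicGate
{
    public GateType type;

    public bool Evaluate(bool a, bool b = false)
    {
        switch (type)
        {
            case GateType.And: return a && b;
            case GateType.Or:  return a || b;
            default:           return !a; // Not ignores the second input
        }
    }
}
```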

Option 2. “God mode” of Option 1

  • Same as option 1, but instead of being inside the vehicle, the user sees the grid and vehicle from above (as if it were a tabletop board game).
  • In option 2, the Program can run much faster than in option 1. Since the user is not inside the vehicle, there is no risk of motion sickness, so the speed can be higher; however, this comes at the cost of immersion.


What option do you think will work best? Come back next week for an update!