Intel® Multicore Enterprise VR Experience: Integrating Crowd Simulations into Mixed Reality

ID 672605
Updated 8/6/2018
Version Latest
Public


Download the Code sample

By Scott Mocha (scott.mocha@4dpipeline.com) and Jed Fisher (jed.fisher@4dpipeline.com)

1. Intro

Developing experiences in virtual reality (VR) is a fun and exciting journey, but without the proper tools or a helping hand it can become quite daunting. To aid in this process, Intel looks to create a roadmap: a series of signposts and lessons learned that help bridge the gap into the world of VR. To illustrate many of the tools and practices required for this journey, we collectively take on a project: we plan it, build it, and of course test it.

The project requirements are exciting and offer a new realm of VR integration. The task is to build a VR scene that represents an area with a large crowd of people. Within that scene we implement multithreaded crowd simulation and pathfinding on the CPU. The crowd members find their way through the scene, stop at gather points, and queue at those points as if they are purchasing something; they stand there for a few seconds, and then go on their way. Lastly, to add complexity and interactivity, at least one of those gather points is moveable. When the gather point is moved, the crowd flow adjusts accordingly, just as it would in real life. The takeaway is an immersive and compelling simulation that uses the graphics processing unit (GPU) and multiple cores of the CPU to integrate crowd simulation and flow with VR.

2. Planning the Project

Any project is only as good as the plan to execute that project. The way to guarantee that a project falls off schedule is to have a poorly defined schedule in the first place. This seems like such an elementary idea, but it is the core of effective application development. Plan in as much detail as possible, before even a line of code is written, and hopefully the planning process uncovers larger issues long before they become costly mistakes. There is an old adage, "Plan your work, then work your plan," and this is as true here as it is anywhere. Take the time to flowchart the ways you think the application works, knowing it will evolve with time; even that first step of thinking through the logic helps.

For this project, we start with the basic idea that we are looking to create a crowd simulation exercise with multiple CPU cores used for pathfinding in a VR experience. Those basic ideas start to create a blueprint for what is to come. The questions that come up as we think through the processes required begin to shape the project. "What hardware will we use?" "What engine will we develop it in?" "How will we get our crowd?" "How will the crowd find their way through the environment?" "What will this be like for the user?"

2.1 Workflow

These basic questions start to outline the workflow; answering some of the key questions will answer others. For example, choosing the hardware immediately decides for us which engine to use for development and rendering. We discuss this more in the following section, but each choice and requirement has a ripple effect that will be felt further into the project. To minimize the impact we start small and prove the basic concepts.

2.1.1 Starting simple

To start proving these ideas, we break the project down into its simplest form: moving blocks through a space and creating behavior for those blocks.

blocks moving through space with behavior
Figure 1. Simplest form: blocks moving through space with behavior

The green block represents a gather point
Figure 2. Simplest form: the green block represents a gather point

To test multithreading and CPU load, we created a pathfinding iteration variable assigned to the plus and minus keys on the number pad. Increasing iterations creates a heavier workload for the CPUs and shows distribution among the cores.
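
The iteration control itself is not shown in the article; the following is a minimal sketch, assuming the native pathfinding layer exposes a counter that Unity binds to the plus and minus keys. Names such as PathIterations_Increase and UpdatePathfindingLoadTest are illustrative, not the project's actual API; dtCrowd::update() is the standard Detour crowd update step.

#include <DetourCrowd.h>
#include <algorithm>
#include <atomic>

// Number of times the pathfinding/steering pass runs per simulation tick.
static std::atomic<int> gPathIterations{ 1 };

extern "C"
{
	// Bound in Unity to the numeric keypad plus and minus keys.
	void PathIterations_Increase() { ++gPathIterations; }
	void PathIterations_Decrease() { gPathIterations = std::max(1, gPathIterations.load() - 1); }
}

// Called once per simulation tick. Repeating the crowd update multiplies the CPU
// workload, which makes the load distribution across cores easy to observe.
void UpdatePathfindingLoadTest(dtCrowd* crowd, float dt)
{
	const int iterations = gPathIterations.load();
	for (int i = 0; i < iterations; ++i)
		crowd->update(dt, nullptr);  // normal (multithreaded) pathfinding and steering pass
}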

Creating test scenarios for evaluating multithread implementation
Figure 3. Creating test scenarios for evaluating multithread implementation

2.1.2 Adding complexity

First figures replacing blocks in simple scene
Figure 4. First figures replacing blocks in simple scene

Issues with random meshes not aligning properly
Figure 5. Issues with random meshes not aligning properly

We add complexity by introducing generated characters into the block scene. Replacing the moving blocks with the new characters shows how the random character generation is working and reveals any issues with how the meshes line up that need to be addressed.

3. Know Your Hardware

Hardware can define the project, so it is important to determine from the beginning what hardware you are targeting and what hardware you can disregard. These choices influence the entire flow of the project.

3.1 Windows* Mixed Reality

Mixed Reality in each available brand
Figure 6. Mixed Reality in each available brand

In this case, Windows* Mixed Reality was the platform of choice, and choosing it quickly defined our development environment, as Unity* was the primary engine supporting Windows Mixed Reality development at the time. Windows Mixed Reality (Windows MR) is available in multiple flavors from a range of brands.

Intel NUC
Figure 7. Intel® NUC

While each has its own strengths and focus, they are all built on the same platform and, in terms of development, are essentially the same: developing for one is developing for them all. This, as it turns out, is quite a selling point for our project, and further builds the case for choosing Windows MR. We made a conscious decision to try to get this to work on the new Intel® NUC 8 VR machine because it uses the latest Intel® Core™ i7-8809G processor.

3.2 Intel® NUC 8 VR Machine

Mixed reality (MR), VR, and all aspects of real-time rendered, location-based 3D generally require the best and most up-to-date equipment, and the core computer is no different. For each of these media, rendering power generally comes from the GPU, with only a small portion of the load carried by the CPU. For this project, however, adding crowd simulation to an already GPU-intensive VR scene requires the best in CPU power as well, ideally in a small form factor. With the implementation of the latest Intel Core i7-8809G processor, we found the VR-ready Intel® NUC to be surprisingly capable and exactly what we needed for the project. For more information on the Intel NUC 8 VR machine, see Introducing the Intel VR Machine.

4. Mixed Reality in Unity*

Mixed Reality Made in Unity
Figure 8. Mixed Reality Made in Unity

As mentioned previously, choosing Windows MR as our target device immediately establishes Unity as the development platform. This is a great choice, as Unity is becoming more adept at handling VR scenarios and making them accessible to users at all levels. Unity requires a bit of programming, and many tutorials from Unity and Microsoft are available. Currently, Unity supports all widely accepted VR, augmented reality (AR), MR, and extended reality (XR) devices. With all of that support available, it is worth asking again why we would continue to choose the Windows MR devices: what do they bring to the table that can be of use to our project? Are they the best choice for this forward-thinking project?

4.1 Why mixed reality

Lenovo Mixed Reality
Figure 9. Lenovo Mixed Reality

Windows MR at the time of this project is still a relatively new toolset for VR. What it brings to the table is ultra-fast setup, simple hardware configuration, and a fairly high level of resolution with minimal overhead. Our favorite feature is by far the absence of lighthouses, which are used in virtually every other device on the market today. Windows MR uses Microsoft Kinect*-based computer vision technology and similar cameras to scan and interpret the user's surroundings. By interpreting those surroundings (looking for edges, planes, the floor, and so on), the Windows MR device is able to determine its location in 3D space as well as the pitch, yaw, and roll of the head-mounted display. This is a revolution in VR and makes the headset a perfect choice for our project.

4.2 Unique requirements

The fastest path to developing with Unity and the Windows MR devices is through the Windows MR toolkit available through GitHub*. Using the plugin in Unity makes deployment much faster, but note that when building and compiling the project for delivery to a Windows MR device, the only delivery method supported is the Universal Windows Platform (UWP). UWP brings its own limitations: one of the requirements of the project was to show CPU load and core count, but UWP applications cannot gather that information from the operating system due to the platform's security features. To solve this issue, we created an external utility that gathers this information from the CPU; we launch that utility alongside our UWP application and communicate with it, passing the data into the framework as text in a way that doesn't interfere with the platform's security features.
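
The companion utility itself is not included in the article; as an illustration, here is a minimal sketch of a plain Win32 console program that samples overall CPU load with GetSystemTimes() and reports the logical core count as text. The file name cpu_stats.txt and the polling approach are assumptions for this sketch; the actual transport between the utility and the UWP application is not shown.

#include <windows.h>
#include <cstdio>
#include <thread>

static ULONGLONG FileTimeToUInt64(const FILETIME& ft)
{
	ULARGE_INTEGER li;
	li.LowPart = ft.dwLowDateTime;
	li.HighPart = ft.dwHighDateTime;
	return li.QuadPart;
}

int main()
{
	const unsigned cores = std::thread::hardware_concurrency();
	FILETIME idle0, kernel0, user0, idle1, kernel1, user1;

	for (;;)
	{
		GetSystemTimes(&idle0, &kernel0, &user0);
		Sleep(500);
		GetSystemTimes(&idle1, &kernel1, &user1);

		const ULONGLONG idle   = FileTimeToUInt64(idle1)   - FileTimeToUInt64(idle0);
		const ULONGLONG kernel = FileTimeToUInt64(kernel1) - FileTimeToUInt64(kernel0);
		const ULONGLONG user   = FileTimeToUInt64(user1)   - FileTimeToUInt64(user0);
		const ULONGLONG total  = kernel + user;  // kernel time already includes idle time
		const double load = total ? 100.0 * (double)(total - idle) / (double)total : 0.0;

		// Written as plain text; the UWP application reads these values and displays them.
		if (FILE* f = fopen("cpu_stats.txt", "w"))
		{
			fprintf(f, "cores=%u load=%.1f\n", cores, load);
			fclose(f);
		}
	}
}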

4.3 Controls

The Windows MR devices also use Bluetooth® technology controllers for interaction, which is another forward-thinking development in VR. Thanks to Bluetooth technology, setup is very easy and there is nearly zero latency. Initially, we had challenges getting the various test machines to consistently see and use the Bluetooth technology controllers, and found that older Bluetooth technology adapters were not as efficient and reliable as more current ones. If this happens to you, it might be time for an upgrade.

4.4 Building for mixed reality devices

With our target device determined, the task becomes developing for that device instead of around it. To get started, have a look at the Microsoft introduction to Mixed Reality development. It all starts with installing the Mixed Reality tools.

With Windows MR devices, as with any newer device, information and test cases are sometimes difficult to find. Thankfully, more information is becoming available. Still, it helps to have the developer sites bookmarked to allow fast searching through the experiences of others to determine the best course forward; see the Mixed Reality developer site.

5. Character Animation

Adding crowd simulation to any project immediately brings into question where the crowd will come from, or worse yet, "How do we make a crowd?" There are many solutions that are accessible for this, ranging from building your own to using tools that generate the characters for you, often using canned or prefabricated elements, and allowing you to do with them as you please within your own project.

5.1 Using canned or prefab characters

Canned characters have been around for years, from pioneering high-resolution programs like Smith Micro Poser 3D* software to web-based offerings like Autodesk Character Generator* software. There is little reason to create and animate your own characters, unless of course you are looking for something extremely specific. Using character generation tools is a massive time saver and the output looks wonderful. That said, using them often requires a bit of thinking outside the box to make them function the way we want them to.

5.2 Using character generation tools

For our project, character animation was done inside Autodesk 3ds Max* software using a default biped character. The character models were created using Autodesk Character Generator* software (six characters in total) and they were imported into Autodesk 3ds Max software using the FBX format.

Characters available in the Autodesk Character Generator
Figure 10. Example characters available in the Autodesk Character Generator* software

The models come with a bone system, but we replaced it with the default biped system in Autodesk 3ds Max software, which makes it easy either to animate the characters manually or to load *.bip files.

Artist representation of VR
Figure 11. Artist representation of VR

The biped system was skinned to the character meshes once and was then reused for the next model, because all of them share the same mesh topology. The animation consists of two different animation states: one for the walking cycle and another for the standing pose. After the skinning and animation were done, each model was exported back to Unity in FBX format.

The model after importing to Autodesk 3ds Max
Figure 12. The model after importing to Autodesk* 3ds Max software (left), the Autodesk 3ds Max software default biped system (center), and the character skinned and animated (right)

5.3 Challenges with implementation

While canned characters are great, they are often not set up or rigged to do exactly what your project calls for. In our case, we needed a randomly generated crowd that mixed torsos, heads, and lower halves to create a unique-looking sample of people. To do this, we split the original meshes into three different parts (head, torso, and legs), verifying that all parts could be swapped between models and matched as closely as possible to avoid any issue or leak when mixing them. Additionally, we created a set of three different textures for each body part to increase the number of random combinations. When they were complete, each complete model (head, torso, and legs) was exported to Unity in FBX format and the texture packages were exported separately in JPG format.
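
The character-assembly script itself lives on the Unity side and is not reproduced in the article; the stand-alone sketch below simply illustrates the combinatorics of the approach, assuming six source characters split into three parts with three texture variants each (all names here are illustrative, not the project's code).

#include <cstdio>
#include <random>

struct CharacterVariant
{
	int headModel, torsoModel, legsModel;   // 0..5 (six source characters)
	int headTex, torsoTex, legsTex;         // 0..2 (three textures per body part)
};

CharacterVariant MakeRandomVariant(std::mt19937& rng)
{
	std::uniform_int_distribution<int> model(0, 5);
	std::uniform_int_distribution<int> tex(0, 2);
	return { model(rng), model(rng), model(rng), tex(rng), tex(rng), tex(rng) };
}

int main()
{
	std::mt19937 rng{ std::random_device{}() };
	CharacterVariant v = MakeRandomVariant(rng);
	// 6^3 model combinations x 3^3 texture combinations = 5,832 distinct-looking characters.
	std::printf("head %d/%d torso %d/%d legs %d/%d\n",
	            v.headModel, v.headTex, v.torsoModel, v.torsoTex, v.legsModel, v.legsTex);
}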

Software used: Autodesk 3ds Max software, Autodesk Character Generator software, and Adobe Photoshop* software.

6. Environment Selection

More than other media, VR demands attention to the environment. It becomes increasingly difficult to substitute one environment for another when the player is immersed in it. This can be done at times for dramatic effect or to create a certain feel, but in this particular project we were looking for an environment where a crowd would actually form, where they would behave a certain way, and where it would make sense to modify their flow pattern. We evaluated train stations, shopping malls, airports, and sports stadiums. Each had its own challenges, the largest of which was the time required to create a completely custom environment. To minimize that burden we looked to partners and vendors that offered prefabricated environments we could build upon and perfect for this project.

6.1 Prefabricated versus custom made

Making the choice between creating your own objects and environments and finding ones that are already made and ready to use is a common theme with this and every project. Using predefined assets saves a remarkable amount of time and effort, as building each of them from scratch is a daunting task. Yes, it can be done, but in the interest of this project, starting with a predefined environment created the most opportunity for us, as we could focus on customizing that environment to showcase what we needed to show. To do this, we adopted a more hybrid approach.

6.2 Hybrid approach

Prefabricated environments are available in the Unity Asset Store* and are often made in such a way that the pieces can be mixed and matched to create a truly custom feel. There are free, simpler versions, and there are more expensive complete kits that allow much more customization without having to model pieces or elements for each look. For our project, we started with the American Airlines Center located in Dallas, Texas. The environment was initially modeled by InContext Solutions as a simulation to illustrate retail advertisement placement. For the scope of this project, we made some modeling optimizations and texture modifications to fit the project requirements, and added lighting and post processing effects as well.

Example of Prefabricated Environment
Figure 13. Example of Prefabricated Environment

Additionally, custom elements of the interior were modeled to match the scenario more closely and to give the spawn points for each of the walking characters a place to exist off screen.

As a partner on this project, InContext Solutions let us use the model to illustrate the hybrid approach to environment building. With a selection such as this, a high-profile location, licensing and permission had to be established before we could even begin to work with it.

American Airlines Center
Figure 14. American Airlines Center

6.3 Licensing

Licensing is a key element of any project. Whether you are using third-party libraries, images, models, or anything created by (or even resembling) someone else, appropriate permissions must be acquired. In our case, the American Airlines Center's likeness was being used, and we had to reach out to their teams to confirm that we had permission to show that likeness, how it would be used, and what we intended to do with it. With a few emails, some back-and-forth discussions, and a final contract stating our permission, we could move forward.

7. Materials

Materials are what separate good immersive experiences from mediocre ones, as it is the materials that give meshes life. Those materials are then lit, and the lighting creates shadows and reflections that also help convince the eye that something is real. We will get into lighting in a bit, as materials deserve their own section. When creating materials, and defining how they work with models, we must first determine which method of rendering we are using. The simplest division that matters most is the one between real-time rendering and offline rendering. Offline rendering is when scenes, animations, lighting, and so on are all established well before the render, and then executed in a render batch; this is how most animated movies and any prerendered material are created. On the opposite end of the spectrum is real-time rendering, which is just what it sounds like: rendering that happens in real time. If a user in VR looks to the left, the computer determines exactly what the user would see as their head moves through every degree in that direction. In fact, the computer does that calculation twice for every frame (once for each eye) at a rate of 90 frames per second. That is a lot of rendering! As such, VR requires very specific types of materials and material handling to perform at these speeds.

Materials in Unity
Figure 15. Materials in Unity

7.1 VR and materials

The performance of VR is directly related to the number of times the GPU has to pull data and draw it to the screen. Draw calls refer to the number of objects drawn to the screen, how the objects call data, and where that data is called from; see What Exactly is a Draw Call? Having a large number of draw calls directly affects the performance of any project, but it is even more important in VR, AR, MR, and every other type of real-time rendered immersive 3D. Why? Because these experiences often rely on the smallest and sometimes simplest of graphics processors (for example, Microsoft HoloLens*), and to perform correctly they must maintain a certain frame rate. To hit this target frame rate, we keep the number of meshes as low as possible by creating composite meshes and having them share the same materials. Sharing the same materials across a composite mesh is known as using a texture atlas, or texture packing. Creating texture atlases is a critical part of developing VR, MR, and AR experiences, and it is a built-in function of Unity; see the Unity documentation on Texture2D. There are many additional tools available to do this in a number of ways, but for this project we used Autodesk 3ds Max software optimization tools to reduce draw calls.
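
To make the atlas idea concrete, here is a simplified, illustrative sketch (not the project's tooling) of the UV remapping that allows many meshes to share one texture, and therefore one material and fewer draw calls.

// Each source texture occupies a sub-rectangle of one large atlas texture; a mesh that used
// the source texture has its UVs remapped into that rectangle so that meshes sharing the
// atlas can also share a single material.
struct Rect { float x, y, width, height; };   // sub-rectangle in atlas UV space (0..1)
struct UV   { float u, v; };

// Remap a UV coordinate from the original texture into its atlas sub-rectangle.
UV RemapToAtlas(UV uv, const Rect& r)
{
	return { r.x + uv.u * r.width, r.y + uv.v * r.height };
}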

Unoptimized stadium scene
Figure 16. Unoptimized stadium scene with number of draw calls highlighted

Scene optimized showing reduction in draw calls
Figure 17. Scene optimized, showing reduction in draw calls

7.2 Unity and materials

Unity allows working with different material types. Some of them are more complicated than others, but in this particular project we mainly used the Unity standard material for the stadium and one custom, double-sided material for the characters (to avoid any leaks when mixing body parts). The stadium materials use the default specular setup, so most of them use albedo, normal, specular, and emissive maps. For the characters we only used albedo maps.

Example of Unity Shader setup
Figure 18. Example of Unity* Shader setup

8. Lighting

Example of area lights
Figure 19. Example of area lights from Unity.com

Now that we have discussed materials at length, we must look at how those materials are shown, and that is through lighting. Lighting is the virtual projection of light rays onto materials and objects, causing light to bounce, reflect, disperse, or be blocked entirely. In offline rendering, lighting is a process that takes a great deal of time to set up, but it can also be tweaked and adjusted any time prior to the final render. To make the lighting process easier to reason about, applications use traditional lighting names for most light sources and do their best to have those virtual lights behave like their real-world counterparts. We aim and point those lights to create dramatic effects. Lighting in real-time projects is especially challenging, as the main idea is to achieve a very realistic look without affecting performance. There is always give and take in this process, and that is why it is so important to know how to set up lighting and global illumination (GI), and how to bake lighting information inside Unity. For more about lighting, see Guide to 3D Lighting Techniques for Digital Animation.

8.1 Unity GI

Global illumination (GI) is quite important in the lighting process because it improves the realism of any digital scene. GI calculates the bounces of light and how those bounces transport energy to surfaces away from direct lights, which in turn gives a natural and realistic visual effect. The problem with GI is that it is computationally heavy, so doing this calculation in real time is not an option in most cases. To get around this limitation, we bake the GI result into textures to reduce the hit on performance. Baking light into textures can be done in a program like Autodesk 3ds Max software, but it is also done in Unity every time we publish a project. When creating settings for various platforms, Unity allows very granular control over how that lighting is baked; every aspect of performance management is critical to a well-performing game or experience.

without G I
Figure 20. Without GI

with G I
Figure 21. With GI

In our project we apply these ideas by creating a set of lights at every lamp or lighting location within the stadium model. After assigning emissive materials to those objects, we start the lightmap calculation, which computes the GI and the ambient occlusion and how they interact with the materials and textures to create a realistic effect.

To make this happen in Unity, we need to set up all the geometries that will be part of the lightmap calculation with lightmap UVs (to store the lightmaps) and define them as static meshes. This means that those geometries will not move, so the light can be stored without any issue; for animated objects this will not work. After that it is just a matter of adjusting some technical parameters and waiting until the calculation is done. When everything has finished, all of the lights can be disabled, and everything should look the same without the need to keep multiple lights turned on and affecting performance. See more on how Unity handles lightmapping.

8.2 Balancing performance and look/feel

Balancing performance is one of the main tasks that any real-time developer needs to face, and in VR development it is even more important. VR headsets have lower technical specifications than standard workstations, making performance the name of the game. So, no matter how realistic or good-looking a project you are able to achieve in Unity, if it doesn't perform well enough on the selected platform (90 frames per second on VR headsets), it will not be an enjoyable experience for the user.

Post processing (sometimes shortened to post) is an example of a method that achieves amazing results, but at a cost. Post processing refers to effects and changes that are applied to a scene after (post) the render (processing). Examples are lighting effects, atmospherics, color correction, color blending, screen space reflections, and so on. While we were able to use post processing in this project, we had to tread carefully, testing with each effect turned off and on to ensure performance would not suffer. Even so, we loved the look of the scenes with screen space reflections turned on, but in the end had to disable them to maintain frame rate.

Scene without post processing
Figure 22. Scene without post processing from Unity.com

Scene with post processing
Figure 23. Scene with post processing from Unity.com

8.3 Challenges we encountered

The main challenge we encountered was the technology limitation associated with VR. The most current and wonderful device may be very advanced, but it will always be limited in what it can display and compute at the same time. This is demonstrated well by screen space reflections. They brought an amazing effect and looked great in the Unity editor, but adding this effect reduced performance below ideal standards. We opted to disable it and use a more basic reflection technique (reflection probes) that is less realistic but still communicates similar ideas to the eye, while not taxing performance as heavily.

Basic reflections using reflection probes
Figure 24. Image with basic reflections using reflection probes

Image with complex reflections
Figure 25. Image with complex reflections (screen space reflection turned on)

9. Crowd Simulation AI

Crowd simulation is one of the most processor-intensive functions and offers a unique set of challenges in VR, where performance is of paramount concern. To capitalize on Intel® CPU capabilities and available libraries for multithreading, we moved the crowd simulation pathfinding onto all available cores, distributing the load and enabling the performance we need. Still, we started with the question: do we build our own or use an available library? Thankfully, a well-known solution was readily available.

9.1 Recast and detour

extern "C" {
	// NavMeshAgent
	dtCrowdAgent* NavMeshAgent_Create(float pos[3], float radius)
	{
		dtCrowd* crowd = sample.getCrowd();

		dtCrowdAgentParams ap;
		memset(&ap, 0, sizeof(ap));
		ap.radius = radius;
		ap.height = ap.radius * 2.0f;
		ap.maxAcceleration = 8.0f;
		ap.maxSpeed = 3.5f;
		ap.collisionQueryRange = ap.radius * 12.0f;
		ap.pathOptimizationRange = ap.radius * 30.0f;
		ap.updateFlags = 0;
		ap.updateFlags |= DT_CROWD_ANTICIPATE_TURNS;
		ap.updateFlags |= DT_CROWD_OPTIMIZE_VIS;
		ap.updateFlags |= DT_CROWD_OPTIMIZE_TOPO;
		ap.updateFlags |= DT_CROWD_OBSTACLE_AVOIDANCE;
		ap.updateFlags |= DT_CROWD_SEPARATION;
		ap.obstacleAvoidanceType = 3;
		ap.separationWeight = 2;

		int idx = crowd->addAgent(pos, &ap);

		dtCrowdAgent* agent = crowd->getAgent(idx);
		
		dtAssert(agent);

		return agent;
	}
}

Code 1. Here one agent is spawned with fixed rules.

Recast/Detour is a state-of-the-art navigation mesh construction and pathfinding toolset for games. It's well known, and used by Unreal Engine*, Unity, and others. The library is written in C/C++ to squeeze the best possible performance out of most hardware and to offer the highest level of flexibility. While the API is fairly straightforward, the library can also be downloaded as source code and modified to fit your exact needs. By default the pathfinding algorithm is single-threaded but, although it takes some configuration to enable, many parts of the library lend themselves to multithreading.

The library is broken into two components: Recast, which is used for navigation mesh construction, and Detour, the spatial reasoning toolkit that we used for pathfinding. Both parts of the library are used to create our crowd simulation: Recast creates the NavMeshes at runtime, and Detour finds paths through them. In our specific project, the scene terrain is an IntelNavMesh, the characters finding their way through it are IntelNavMeshAgents, and any dynamic obstacle, such as the hot dog cart, is an IntelNavMeshObstacle. Each NavMesh is assigned a cost, and we simply ask the NavMesh to return a path based on a specific start and end point. From this request the NavMesh agent discerns any and all logical paths following the predefined rules, and returns them:

path = NavMesh.PleaseGivePath(FromA, ToB);
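
The IntelNavMeshObstacle wrapper is not listed in the article; the sketch below shows one way a dynamic obstacle such as the hot dog cart could be exposed to Unity, assuming the project routes obstacles through DetourTileCache's cylinder-obstacle API. getTileCache() and the function names are placeholders for this sketch, not the project's actual code.

#include <DetourTileCache.h>

extern dtTileCache* getTileCache();  // assumption: provided by the host application

extern "C"
{
	// Called from Unity when the movable gather point (the hot dog cart) is placed.
	unsigned int NavMeshObstacle_Add(float pos[3], float radius, float height)
	{
		dtObstacleRef ref = 0;
		getTileCache()->addObstacle(pos, radius, height, &ref);
		return (unsigned int)ref;
	}

	// Called when the cart is picked up again; the navmesh tiles around it are rebuilt
	// on the next dtTileCache::update(), and agent paths adjust accordingly.
	void NavMeshObstacle_Remove(unsigned int ref)
	{
		getTileCache()->removeObstacle((dtObstacleRef)ref);
	}
}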

Overall, Recast/Detour is a highly dynamic tool that is well supported and well documented, with in-depth examples available for each of its modules.

// Integrate.
parallel_for(0, nagents, [&](int i)
{
	dtCrowdAgent* ag = agents[i];
	if (ag->state != DT_CROWDAGENT_STATE_WALKING)
		return;

	integrate(ag, dt);
});

Code 2. Integrating

// Find nearest point on navmesh for agents and request move to that location
parallel_for(0, crowd->getAgentCount(), [&](int j)
{
	dtCrowdAgent* ag = crowd->getAgent(j);

	if (ag->active && ag->bHaveValidDestination)
	{
		dtPolyRef targetPolyRef = 0;
		float targetPos[3];
		navquery->findNearestPoly(ag->destination, halfExtents, filter, &targetPolyRef, targetPos);

		crowd->requestMoveTarget(j, targetPolyRef, targetPos);
		ag->bHaveValidDestination = false;
	}
});

Code 3. Example of finding nearest point

// Get nearby navmesh segments and agents to collide with.
parallel_for(blocked_range<int>(0, nagents), [&](const blocked_range<int>& r)
{
	// static nodePool for each thread, deallocated at program exit
	if (gNodePool == nullptr)
	{
		auto deleter = [](dtNodePool* pool) {pool->~dtNodePool(); dtFree(pool); };
		gNodePool = unique_ptr_deleter<dtNodePool>(new (dtAlloc(sizeof(dtNodePool), DT_ALLOC_PERM)) dtNodePool(64, 32), deleter);
	}

	for (int i = r.begin(); i < r.end(); ++i)
	{
		dtCrowdAgent* ag = agents[i];
		if (ag->state != DT_CROWDAGENT_STATE_WALKING)
			continue;

		// Update the collision boundary after certain distance has been passed or if it has become invalid.
		const float updateThr = ag->params.collisionQueryRange*0.25f;
		if (dtVdist2DSqr(ag->npos, ag->boundary.getCenter()) > dtSqr(updateThr) || !ag->boundary.isValid(m_navquery, &m_filters[ag->params.queryFilterType]))
		{
			ag->boundary.update(ag->corridor.getFirstPoly(), ag->npos, ag->params.collisionQueryRange, m_navquery, &m_filters[ag->params.queryFilterType], gNodePool.get());
		}

		// Query neighbour agents
		ag->nneis = getNeighbours(ag->npos, ag->params.height, ag->params.collisionQueryRange, ag, ag->neis, DT_CROWDAGENT_MAX_NEIGHBOURS, agents, nagents, m_grid);

		for (int j = 0; j < ag->nneis; j++)
			ag->neis[j].idx = getAgentIndex(agents[ag->neis[j].idx]);
	}
});

Code 4. Discerning paths around nearby colliders.

10. Utilizing Multiple CPU Cores

Distributing the pathfinding calculations across all available cores is not only a requirement of the project, it is a best practice for any processor-intensive operation. Another great challenge of development, and especially VR development, is utilizing all of the resources available to you to create the best experience for the user. Multithreaded applications do just that; they are more efficient, faster, and surprisingly simple to implement when using the right tools.

10.1 Intel® Threading Building Blocks

Intel provides a library called Intel® Threading Building Blocks (Intel® TBB) that makes it easy to use multiple cores for nearly any process. It is available for free and is relatively easy to implement. For our project, Intel TBB was implemented alongside Recast and Detour to extend the library so that the pathfinding could be moved onto all available cores. Pathfinding can be extremely processor intensive and was a perfect candidate to benefit from a multithreaded approach. While Intel TBB is a very powerful library that can open an entire world of interaction with the individual cores, for our purposes we were able to implement it in its default configuration and immediately capture the desired result.
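
As an illustration of the pattern applied throughout the modified library (compare Codes 2 through 4), here is a minimal, self-contained sketch of the serial-to-parallel transformation, with a simplified Agent type standing in for dtCrowdAgent. With Intel TBB's default configuration the runtime creates one worker thread per available hardware thread, so no explicit scheduler setup is required.

#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>
#include <vector>

struct Agent { float pos[3]; float vel[3]; };

void integrateAll(std::vector<Agent>& agents, float dt)
{
	// Before: for (size_t i = 0; i < agents.size(); ++i) { ... }
	tbb::parallel_for(tbb::blocked_range<size_t>(0, agents.size()),
		[&](const tbb::blocked_range<size_t>& r)
	{
		for (size_t i = r.begin(); i != r.end(); ++i)
		{
			Agent& a = agents[i];
			for (int k = 0; k < 3; ++k)
				a.pos[k] += a.vel[k] * dt;  // independent per-agent work, safe to run in parallel
		}
	});
}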

11. UX/UI for VR

User experience (UX) and user interface (UI) development is an art in and of itself. As applications and machines become an integrated part of our everyday lives, it is the job of the UI/UX developer to create a way to interact with that application or machine that is both intuitive and non-inhibiting, allowing the interaction to not get in the way of the experience, but instead to enhance it. VR rewrites the rule book of UI/UX; the same basic principles are there, but the tools to achieve them are considerably different.

11.1 Limitations

When immersed in a VR experience there are most often no buttons, menus, or user interfaces just floating around in midair. Yes, it does happen, and sometimes it can be intuitive and extremely fun to use such systems. Those are generally not systems with motion, however, and require the user to be in a static location. With VR, the user moves around in every direction and can turn their head, crouch, see their hands, and so on. Developing a UI/UX that incorporates those everyday gestures and motions into something that feels familiar is the key.

11.2 The choices we made

Because we use Windows MR, a button configuration is already defined for us. It was important to study previously developed Windows MR titles to understand how Microsoft intended their controllers to be used. Even so, the presence of trigger buttons and thumb joysticks offers a world of opportunity for interaction, and much of it can feel like real life. So we looked to mimic natural movement as much as possible, allowing the user to grab an object by touching it with their controller and squeezing the trigger. The user can then hold on to that object by keeping the trigger held in, and it releases as they release the trigger. This is becoming a standard for VR experiences and proves to still be the best route with Windows MR. The choices for navigation are defined by the same navigation controls found in the Windows MR home screen; our goal was to make the transition as seamless as possible.

12. Making VR Immersive

Now all of the pieces are in place for an amazing visual experience, with elements of grabbing and interacting with objects as well as full spatial mobility, yet something is still missing. That something is an often-overlooked area of experience development: sound design.

12.1 Sound, 2D, and 3D

Sound design is the use of different sounds to reflect the real-world experience of doing the same thing, being in the same place, or interacting with a similar device or object. It is often referred to as sound effects, but the overall process is much more comprehensive than that. To approach our sound design on this project, we reflected on the experience someone would have in a similar environment. There would be 2D sound, which does not really change as the user turns or changes position, and 3D sound, which comes from an identifiable sound source; 3D sound changes in volume or in stereo position based on the user's position relative to that source. To keep things simple we used one of each. The 2D sound is ambient sound: the sound of people walking into and around the space. This sound stays consistent as the user travels within the scene; it grows initially as the crowd enters the room, but maintains an even volume and stereo image after reaching full volume. The 3D sound is sourced from multiple sports arenas, cut into small snippets, and intermixed with crowd noises from malls and concert halls. The resulting loop is just over two minutes long and is meant to repeat seamlessly, unnoticed by the user. The sound is emitted from the center of the basketball court in the adjoining stadium, which causes the volume to increase as you approach an entrance and decrease as you walk or turn away.

3D sound looped in Unity
Figure 26. 3D sound being placed within the project arena for sourcing and looped in Unity*

12.2 Testing, testing, testing

Immersive VR experiences can be written about, developed, and discussed, but until a user actually puts on the headset and gets inside the experience, no one can tell how things will be perceived. As such, testing is the key to delivering the best experience possible. Test, test, and test some more, tweaking each element that catches your eye or that breaks the illusion of reality from within the experience. With more users engaging in the experience, more data becomes available about what works and what doesn't, and the problem areas reveal themselves. Thankfully, those problem areas can be addressed, and as they are handled, the resulting experience continues to improve.

13. Summary

In the beginning we had little more than a loose project definition, which led us down a path of asking multiple questions about how we would create this VR experience. By answering each of those questions in broad, general terms at first, and then honing them through the planning process, we were able to alleviate some of the headaches and missteps that naturally occur in any project. They still occur, but they are indeed minimized by the planning process.

After planning, we created our application in its simplest form to prove the concepts and address some of the more difficult aspects, such as crowd simulation and multicore development, with simple shapes, allowing us to minimize the variables and focus on specific areas of development. With time, and as we resolved many of our questions, we added complexity: we brought the characters in, had them switch randomly, and evaluated the results. We then added the environment and worked on the materials and lighting to ensure that the right look and feel was achieved while still hitting the performance benchmarks. Once all the elements were together, we worked to improve how each respective piece integrated with and built upon the others. We dialed in the user experience and interface to ensure that people would intuitively know how to use our experience. We added sound and focused on completing the immersive quality of the experience through testing, testing, and more testing. And finally, we incorporated all of the user feedback obtained through our testing processes, corrected the obvious issues, and arrived at our goal. We now have a VR project that meets the project requirements and employs each of the technologies pursued in a way that is hopefully logical and understandable, while still being entertaining.