Programming mobile applications for virtual reality in Windows

Authors: Dawid Borycki

06.03.2017
Modern smartphones are powerful enough to extend their basic functionality into mobile virtual reality (VR). To make your smartphone VR-enabled, you only need a suitable hardware module, which, thanks to the Google Cardboard project, can be as cheap as a cup of Starbucks coffee! It's a springboard to an entirely new use of smartphones and therefore to a new branch of mobile applications that does not require expensive equipment. In this article, I present the basic aspects of mobile VR programming in Windows with Unity 3D and the ALPS VR SDK.

The following text is an English translation of an article by Dawid Borycki, published in the 10/2016 issue of the Polish IT magazine "Programista": https://programistamag.pl/programista-10-2016-53/
The translation was performed by Leaware.

INTRODUCTION - not only for geeks ;)

Remember the Commodore 64? It had only 64 kB of RAM and an 8-bit processor running at 1 MHz. If we compare these values with the parameters of today's smartphones, it is hard to believe that the popular C64 was produced and sold in a practically unchanged configuration for twelve years! Multi-core 64-bit processors with clock frequencies three orders of magnitude higher than in the C64, together with nearly six orders of magnitude more memory, have become the decisive factor in the dynamic development of mobile VR.

Formally, VR is defined as a realistic and immersive three-dimensional simulation of an environment, created with interactive software and hardware and controlled by body movement. In practice, VR simulates the sensation of being in a virtual, alternative world. Thanks to the sophisticated hardware of modern smartphones, they can be transformed very quickly into a functional VR system. All you need is a suitable module (goggles) emulating three-dimensional visual perception. This is essentially the most important element of VR: two two-dimensional pictures (a stereo pair) are generated, one for each eye, and then automatically reconstructed by the human brain, which creates the sensation of distance or, more formally, the depth of the scene.

The pictures are generated and displayed by the graphics chipset of the mobile device, and the VR goggles separate the pictures intended for the left and the right eye. Everything else, that is, the processing of the information registered by the eyes (specifically, the retinas), is handled by the appropriate parts of the human brain. The brain reconstructs the three-dimensional scene on its own, based on its awareness of the distance between the eyeballs.

To grasp this phenomenon, you can try focusing your vision on a small object that is sufficiently far away (e.g. a pen held vertically about 0.5 m from your head). If you look at the object first with the left eye closed and then with the right eye closed, it will appear to shift slightly depending on which eye is open at a given moment. This is the so-called parallax effect: the distance between the object and the left or the right eye is different. In terms of geometric optics, the effect results from the angular difference between the rays of light reflected from the object and reaching the left and the right eye.
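
For the pen example above, this angular difference can be estimated with simple trigonometry. The following back-of-the-envelope C# sketch assumes a typical interpupillary distance of about 65 mm (my assumption, not a value from the article):

using System;

// Back-of-the-envelope parallax estimate (not Unity-specific).
class ParallaxDemo
{
    static void Main()
    {
        const double eyeSeparation = 0.065; // assumed interpupillary distance [m]
        const double objectDistance = 0.5;  // distance to the pen [m]

        // Angle between the lines of sight of the two eyes (the parallax angle):
        double parallaxRad = 2.0 * Math.Atan((eyeSeparation / 2.0) / objectDistance);
        double parallaxDeg = parallaxRad * 180.0 / Math.PI;

        Console.WriteLine($"Parallax angle: {parallaxDeg:F1} deg"); // ~7.4 deg
    }
}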

Even though you can get VR extensions for smartphones in practically every consumer electronics shop, this type of module can also be a DIY construction based on an appropriate Google Cardboard [1] template. Such cardboard modules, compatible with smartphones of different sizes, are relatively cheap, and it is possible to personalise them.

VR itself is not a 21st-century invention. In the mid-1990s, Nintendo presented the Virtual Boy console. Even though it was not a market hit, mainly due to its technological limitations, visually it was the precursor of modern VR headsets. One of the key moments in the development of VR was the acquisition of Oculus VR by Facebook in 2014 for a considerable 2 billion USD. Shortly after that, the HTC Vive and PlayStation VR devices appeared, and currently the largest players in the IT industry are investing large sums of money in VR start-ups. Personally, I'm impatiently looking forward to the first presentations of the Magic Leap project, funded at the beginning of this year with about 790 million dollars by companies such as Google and Qualcomm. Magic Leap, similarly to Microsoft HoloLens, extends beyond virtual reality and implements the more advanced technology of augmented reality, where three-dimensional objects are overlaid directly onto real ones, similarly to the two-dimensional pictures generated by Google Glass. I have already described my first experience with HoloLens on my blog, whereas Magic Leap I know (so far) exclusively from press releases [2], which are very impressive.

VR for beginners

I hope all these observations inspire you to get into programming VR applications for mobile devices, especially since it is a relatively fast and inexpensive way into VR programming. In this article you will find my tips on ready-made tools available for the Windows platform, such as the free Visual Studio 2015 Community, the Unity game engine and the ALPS SDK, a Unity package for VR programming. My plan is to walk you through the individual stages of configuring the development environment, to describe the procedure of creating a simple stereoscopic system in Unity and to present the basic elements of the ALPS SDK. If you are into creating games, and consequently VR applications, in Unity 3D, I recommend the series of articles by Marek Sawerwain [3]. In this article I am going to create a simple mobile VR application for the Universal Windows Platform (UWP). However, it will be easy to extend it with more advanced elements, as presented in [3]; the process of creating the scene and writing the game scripts does not change considerably.

WORKING ENVIRONMENT

The working environment for creating VR applications for the Windows platform is composed of two elements: Visual Studio and Unity 3D. The first is an editor for C# source code, whereas the second is used for creating scenes and 3D objects, defining physical interactions between 3D objects, and so on. In short, the UI/UX of the application is created in Unity, while Visual Studio is used to edit the C# scripts implementing the logic of games and applications.

The installation of Visual Studio, as well as of Unity, is automatic. In this article I am going to use Windows 10 Anniversary Edition, Visual Studio 2015 Community and the latest stable version of Unity 3D, 5.4.2f2 Personal. It can be downloaded for free from https://unity3d.com/get-unity/update. Additionally, it is necessary to install the Unity Metro Support for Editor-5.4.2f2 package in order to be able to prepare a Visual Studio project for a UWP application.

CREATING A PROJECT AND PREPARING A SCENE

After preparing the programming environment, we can proceed to create an application project and a three-dimensional scene. For this purpose, after running Unity 3D, all you need to do is create a new project named MobileVR (see Figure 1). This will open the Unity editor (Figure 2), where we can design a three-dimensional scene and create two cameras offset from one another. They will simulate how the eyes work.

Figure 1. Creating a 3D application project in Unity



Figure 2. Unity editor



All things considered, the Unity editor in the "2 by 3" layout, which I selected from the Layout drop-down list (upper right corner of the editor), is composed of five elements: the scene design view and camera preview (left side), the scene hierarchy and project structure (middle of the editor) and the properties inspector.

First, we add a plane to the scene; this is where we will place the other elements. To add it, all you have to do is select the option 3D Object->Plane (Figure 3) from the Create drop-down list in the Hierarchy window (see Figure 2). This adds the object Plane to the hierarchy of the scene. Let's enlarge the plane now. To do so, we click the item Plane and then, in the inspector window, go to the Transform group, where we change all the X, Y and Z components of the scale from 1 to 10 (see Figure 4). This causes a tenfold enlargement of the plane along each axis of the coordinate system.
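
For reference, the same transformation can be applied from a script rather than the inspector. The snippet below is a minimal sketch that assumes the object keeps its default name Plane:

using UnityEngine;

// Equivalent of the inspector changes above, done from a script.
// Assumes the scene contains an object named "Plane".
public class SceneSetup : MonoBehaviour
{
    void Start()
    {
        var plane = GameObject.Find("Plane");

        // Tenfold enlargement along every axis, as set in the Transform group:
        plane.transform.localScale = new Vector3(10f, 10f, 10f);
    }
}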

Figure 3. Creating a plane



Figure 4. Scaling an object



Now we are going to create the material for the plane. A material defines the optical properties of a 3D object, among others its colour. To create a material and connect it with a selected element of the scene, we proceed as follows: from the Assets menu we select the option Create/Material. The object New Material will be added to the Assets folder of the project (see the lower part of the Unity editor in Figure 2). The next step is to click this object and then, in the inspector window, click the rectangle to the right of the Albedo option. This opens the modal Color window, which allows us to define the colour. I am going to use a colour with the following RGBA values: 0, 0, 64, 255.

To connect the created material with the plane, all you have to do is click the material and drag it onto the object Plane (in the scene hierarchy). As a result, the view of the plane will be updated.
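
Alternatively, the material can be created and assigned entirely from code. The following sketch assumes the built-in Standard shader is available and that the script is attached directly to the plane:

using UnityEngine;

// Creating and assigning the material from code instead of the editor.
// Attach to the Plane object.
public class PlaneMaterial : MonoBehaviour
{
    void Start()
    {
        var material = new Material(Shader.Find("Standard"));

        // The Albedo colour chosen above: RGBA 0, 0, 64, 255.
        material.color = new Color32(0, 0, 64, 255);

        GetComponent<Renderer>().material = material;
    }
}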

In the same way, we add four more objects: Capsule, 3D Text, Tree and Sphere. We configure their properties as follows:
  • Using the Transform group (option Position), we set the position of the 3D Text object to X: 2, Y: 1, Z: 0. Then we go to the Text Mesh group, where in the Text field we enter any text, e.g. Programmer, and using the Color field we choose any colour we want;
  • In a similar way, we change the position of the Tree object to X: -1.5, Y: 0, Z: -3, and the Y value of its scale to 0.25;
  • Finally, we change the position of the capsule to X: 0, Y: 1, Z: 0, and of the sphere to X: -3, Y: 0.5, Z: 0.
We now have a simple scene prepared. Of course, we can modify it freely and complement the individual objects with materials or textures, or replace the plane with terrain, following the description in the aforementioned articles about Unity 3D [3]. However, for implementing the basic aspects of VR this simple scene will be completely sufficient.

STEREOSCOPY

In the previous chapter we did not create anything directly related to VR. To do that, we need an additional camera, the built-in VR camera of Unity 3D, or an appropriate SDK. First, I would like to show you how to create and configure an additional camera yourself, in order to build your own stereoscopic system. As you will see later in the article, a ready-made VR SDK also implements head tracking. Its purpose is to automatically update the location of objects in the scene (formally, the projection of the scene) so that the user has the impression of moving in a virtual world.

For the purposes of this tutorial I am using an emulator of a device with the Windows 10 operating system, which does not offer gyroscope emulation. The gyroscope is the sensor that provides information on the angular velocity of the device. However, the emulator does allow us to simulate readings from an accelerometer (linear acceleration). That is why we are going to create a simple VR system in which the cameras are turned based on data read from the accelerometer. Later I will show you how a ready SDK simplifies the process of creating the cameras and tracking head movement based on automatic readings from the sensors built into the device.

To create stereoscopy in Unity 3D, we proceed as follows: we change the name of the main camera (scene projection) from Main Camera to L, then we add another camera (the option Create/Camera in the scene hierarchy view) and change its name to R. In the next step, in the hierarchy view, we click camera L and go to the inspector window. There we find the Camera group and then Viewport Rect, where we change the parameter W from 1 to 0.5 (see Figure 5). This halves the width of the rectangle representing the camera view. The narrowing is meant to simulate the limited field of view when the right eye is closed.

Figure 5. Camera configuration in Unity



We modify the parameter W of the second camera in the same way. Additionally, we change the property X in the Viewport Rect section from 0 to 0.5. As a result, in the lower part of the Unity editor window (the Game tab) we will see the views from both cameras. The created cameras are independent objects. To simplify moving them in response to updated information from the accelerometer, we are now going to create an additional parent object; a change in its position will also update the positions of both cameras (its child objects). This requires adding an empty object of the GameObject type (menu GameObject/Create Empty). Then all you need to do is change its name to VR Camera and, in the scene hierarchy, drag cameras L and R onto VR Camera. As a consequence, cameras L and R become children of VR Camera (see Figure 6).
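
The same viewport configuration can also be expressed in a script. The sketch below assumes it is attached to the VR Camera object and that the child cameras keep the names L and R:

using UnityEngine;

// Scripted equivalent of the stereo rig configured above.
// Attach to the VR Camera object; its children are the cameras L and R.
public class StereoRig : MonoBehaviour
{
    void Start()
    {
        var left = transform.Find("L").GetComponent<Camera>();
        var right = transform.Find("R").GetComponent<Camera>();

        // Each camera renders to one half of the screen (Viewport Rect):
        left.rect = new Rect(0f, 0f, 0.5f, 1f);    // left half
        right.rect = new Rect(0.5f, 0f, 0.5f, 1f); // right half
    }
}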

Finally, let's also change the position of the VR camera so that its coordinates (Position in the Transform group of the inspector window) have the following values: X: 0, Y: 1, Z: -10. This will make the game preview (the Game tab) look like Figure 7.

Figure 6. Scene hierarchy. Cameras L and R are derived from the object VR Camera


Figure 7. View from the VR camera

USING THE ACCELEROMETER FOR UPDATING THE VR CAMERA

Now we are going to create a simple C# script which will turn the VR camera in response to the position of the device or, more precisely, to the linear acceleration along the X axis received from the accelerometer. This is achieved by adding a C# script to the object VR Camera. First, in the scene hierarchy view we click VR Camera, then in the inspector window we click the button labelled Add Component. This activates a list (Figure 8), where we select New Script, set the script name to AccelerometerInput and from the Language list select C Sharp (see Figure 8). Finally, we confirm the operation with the Create and Add button. Unity adds the file AccelerometerInput.cs to the Assets folder and fills it with the content of Listing 1. A double click on the file AccelerometerInput.cs in the Project window opens it in the source code editor (by default Visual Studio).

Figure 8. Creating a C# script



Listing 1. Default content of the script AccelerometerInput
using UnityEngine;
 
public class AccelerometerInput : MonoBehaviour {
    // Use this for initialization
    void Start () {
    }
   
    // Update is called once per frame
    void Update () {
    }
}
Analysing the code from Listing 1, we can see that the class AccelerometerInput inherits from MonoBehaviour, the base class for all scripts. Furthermore, AccelerometerInput has two methods, Start and Update, whose comments explain what they are intended for. The method Start is triggered once (directly before displaying the first frame) and serves for initialisation, whereas the Update method is triggered once for every frame.

In this example we will use both methods. In the Start method we will lock the screen orientation of the device to landscape, so that the application view does not rotate while the device is moved, and we will save the current transformation describing the rotation (orientation) of the camera. In the Update method we will update the orientation of the VR camera. Furthermore, touching the device screen will restore the default camera parameters.

The implementation of these changes is shown in Listing 2. We can see that the Unity engine makes it easy to read information from the device sensors, as well as information about touching the screen; for this purpose we use the appropriate properties and methods of the Input class. Additionally, Unity allows us to rotate objects (or, more generally, to perform affine transformations) using the transform field. In this example I used this field to turn the VR camera in world space, that is, using the world matrix [4]; in short, this matrix is responsible for transforming local coordinates into world coordinates.

Listing 2. Updating the VR camera using the accelerometer
using UnityEngine;

public class AccelerometerInput : MonoBehaviour
{
    // Last accelerometer reading along the X axis.
    private float previousReading = 0f;

    // Rotation speed in degrees per second.
    private const float scaler = 45f;

    // Camera orientation saved at startup, restored on touch.
    private Quaternion defaultRotation;

    void Start()
    {
        // Lock the view so it does not rotate when the device is moved.
        Screen.orientation = ScreenOrientation.LandscapeLeft;

        defaultRotation = transform.rotation;
    }

    void Update()
    {
        // Touching the screen restores the default orientation.
        if (Input.touchCount > 0)
        {
            ResetRotation();
        }

        Rotate();
    }

    private void Rotate()
    {
        // Linear acceleration along the X axis.
        var currentReading = Input.acceleration.x;
        var delta = currentReading - previousReading;

        // Frame-rate independent rotation step around the Y (up) axis.
        var rotation = scaler * Vector3.up * Time.deltaTime;

        if (delta > 0)
        {
            transform.Rotate(rotation, Space.World);
        }
        else if (delta < 0)
        {
            transform.Rotate(-rotation, Space.World);
        }

        previousReading = currentReading;
    }

    private void ResetRotation()
    {
        previousReading = 0f;

        transform.rotation = defaultRotation;
    }
}
To run the application, we first need to build it in Unity, then open the generated project in Visual Studio and from there launch the application in an emulator, on the development computer or on a mobile device. I am going to use the first option, that is, an emulator.

Building a project in Unity requires clicking the menu File/Build Settings... or using the keyboard shortcut CTRL+SHIFT+B. As a result, the modal window shown in Figure 9 appears, where from the list of available platforms we select Windows Store. Next, in the SDK list we choose the option Universal 10, in UWP Build Type we choose XAML, and then we click the button labelled Build. After selecting the destination folder, e.g. MobileVR-build, and waiting a short while, an XAML/C# project for Visual Studio, MobileVR.sln, will be generated in the specified folder. Once MobileVR.sln is opened, we can run the VR application using the standard techniques.
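
If you prefer to automate this step, the same settings can be applied from an editor script. The sketch below is an illustration based on the Unity 5.4 editor API; the scene path and menu name are my assumptions, not values from the article:

using UnityEditor;

// Editor-side sketch of the Build Settings choices made above
// (Windows Store, SDK: Universal 10, UWP Build Type: XAML).
// Place the file under Assets/Editor; scene path is assumed.
public static class BuildUwp
{
    [MenuItem("Build/Build UWP (XAML)")]
    public static void Build()
    {
        EditorUserBuildSettings.wsaSDK = WSASDK.UWP;
        EditorUserBuildSettings.wsaUWPBuildType = WSAUWPBuildType.XAML;

        BuildPipeline.BuildPlayer(
            new[] { "Assets/MainScene.unity" }, // scene to include (assumed)
            "MobileVR-build",                   // destination folder
            BuildTarget.WSAPlayer,
            BuildOptions.None);
    }
}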

Figure 9. Compilation configuration in Unity



To show how the application works, I ran it in an emulator of a Windows 10 Mobile device, which allows us to emulate the accelerometer that makes the camera turn. As you can see in Figure 10, turning the cameras lets the user "feel" the virtual world. Even though this world contains only simple 3D elements, we can extend it to suit our needs. Such changes, combined with the input information delivered by the Input class, are the essence of creating VR applications.

Figure 10. Turning the device triggers the update of the VR camera position



ALPS VR SDK

In the previous chapter we created stereoscopy very quickly. However, it is a relatively simple system, which does not take into consideration basic optical aspects related to picture distortion. We could of course implement these manually, or we can use one of the ready-made VR SDKs, which are distributed as Unity packages. One of them is the ALPS VR SDK (ms-alpsvr), available at: https://github.com/peted70/mva-vrdemo/blob/master/ms-alpsvr.unitypackage.

To present its use in a Unity project, I saved the scene created earlier as MainScene. It will appear in the Assets folder in the Project window. By opening the context menu of MainScene there, we can export the scene as a Unity package (option Export Package...). As a result, the scene we created can be reused in other projects, so we do not have to repeat our work.
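
The same export can be scripted with the editor API; a minimal sketch, assuming the scene was saved as Assets/MainScene.unity:

using UnityEditor;

// Editor sketch automating the Export Package... step above.
// Place the file under Assets/Editor.
public static class ExportMainScene
{
    [MenuItem("Tools/Export MainScene Package")]
    public static void Export()
    {
        AssetDatabase.ExportPackage(
            "Assets/MainScene.unity",
            "MainScene.unitypackage",
            ExportPackageOptions.IncludeDependencies); // pull in materials etc.
    }
}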

Figure 11. Import of Unity packages


After exporting the scene, we create a new Unity project named MobileVR-ALPS. Next, in the Project window, we open the context menu, select Import Package/Custom Package... (Figure 11) and choose the previously prepared package containing MainScene. This imports the scene into the new project, where it will be visible in the project structure. Let's drag it to the scene hierarchy view and then delete the default scene.

In the same way, let's import the ms-alpsvr package. This supplements the project with a large number of additional objects, in particular the ALPS folder. Inside it, let's find the Prefabs folder and then the object ALPSCamera, which we add to the scene. Let's change the position of this new camera to X: 0, Y: 1, Z: -10. Next, in the ALPS Controller (Script) section of the inspector for the object ALPSCamera, let's select CARDBOARD from the Device drop-down list. Let's also delete our VR Camera object, as we will not need it anymore.

The project itself is ready to run. However, it will not work properly on an emulator, because the head-tracking module needs sensor data that the emulator does not provide; for correct operation we need a physical device. We can, however, use the game preview in Unity, activated with the play (triangle) icon. It lets us preview the application view and move around the scene virtually using the mouse or a touchpad (Figure 12).
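
If you want a similar preview in your own projects, a simple mouse-look helper can stand in for the gyroscope while testing in the editor. The script below is my own sketch and is not part of the ALPS package:

using UnityEngine;

// Minimal mouse-look helper for previewing the scene in the Unity editor.
// Attach to the camera object for in-editor testing only.
public class EditorMouseLook : MonoBehaviour
{
    private const float sensitivity = 2f;
    private float yaw, pitch;

    void Update()
    {
        yaw += sensitivity * Input.GetAxis("Mouse X");
        pitch -= sensitivity * Input.GetAxis("Mouse Y");
        pitch = Mathf.Clamp(pitch, -80f, 80f); // avoid flipping over

        transform.rotation = Quaternion.Euler(pitch, yaw, 0f);
    }
}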

Figure 12. Preview of the VR camera operation from the package ALPS VR SDK



It is worth noting that ALPS VR automatically tracks head movement based on data from the gyroscope. The script ALPSGyro.cs from Listing 3 (folder Assets/ALPS/Scripts) is responsible for this. Additionally, ALPS VR contains a simple script, ExplodingBottle.cs (folder Assets/VisrTestScene), which shows (Listing 4) how to implement virtual interaction between the user and the objects in the scene. In this case it consists of performing a certain action when a ray hits a rigid body.

Listing 3. The content of the ALPSGyro.cs file

using UnityEngine;
 
public class ALPSGyro : MonoBehaviour
{
    private Gyroscope gyro;
    private Quaternion initialRotation;
 
    // Use this for initialization
    void Start()
    {
        Screen.sleepTimeout = SleepTimeout.NeverSleep;
 
        if (SystemInfo.supportsGyroscope)
        {
            Input.gyro.enabled = true;
        }
        else
        {
            Debug.Log("No Gyro Support");
        }
 
        initialRotation = transform.rotation;
    }
 
    // Update is called once per frame
    void Update()
    {
        if (SystemInfo.supportsGyroscope)
        {
            if (Input.touchCount > 0)
            {
                transform.rotation = initialRotation;
            }
 
            Vector3 orientationSpeed = Input.gyro.rotationRateUnbiased * Time.deltaTime;
            transform.rotation = transform.rotation * Quaternion.Euler(
               -orientationSpeed.x, -orientationSpeed.y, orientationSpeed.z);
        }
    }
}
Listing 4. The content of the ExplodingBottle.cs script

using UnityEngine;
 
[RequireComponent(typeof(Rigidbody))]
public class ExplodingBottle : MonoBehaviour {
 
    void OnLookStateAction(RaycastHit rayHit)
    {        
        // Push the bottle away from the gaze ray at the hit point:
        GetComponent<Rigidbody>().AddForceAtPosition(
           rayHit.normal * -5, rayHit.point, ForceMode.Impulse);
    }
}
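
The listing does not show how OnLookStateAction gets invoked; presumably the ALPS gaze code raycasts from the camera and notifies the object that was hit. The sketch below illustrates one way such a dispatcher could work, using SendMessage; it is only an illustration, not the actual ALPS implementation:

using UnityEngine;

// Sketch of a gaze dispatcher that could trigger OnLookStateAction.
// Attach to the camera; every frame it casts a ray forward and
// notifies the first object it hits.
public class GazeDispatcher : MonoBehaviour
{
    void Update()
    {
        RaycastHit hit;
        var ray = new Ray(transform.position, transform.forward);

        if (Physics.Raycast(ray, out hit))
        {
            // Calls OnLookStateAction(hit) on every script of the hit object:
            hit.collider.SendMessage(
                "OnLookStateAction", hit, SendMessageOptions.DontRequireReceiver);
        }
    }
}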
 

SUMMARY

Creating mobile VR applications opens new perspectives for smartphone users and in the near future may give rise to a completely new branch of mobile applications. As I have tried to show in this article, creating this type of application is greatly facilitated by the Unity environment and by dedicated SDKs.

References and links

[1] Google Cardboard: https://goo.gl/QTuUPj
[2] Magic Leap: https://www.magicleap.com/#/home
[3] M. Sawerwain, Unity 3D. Robimy grę. Programista 10/2014, 11/2014 and 12/2014.
[4] J. Matulewski, Macierze w grafice 3D. Programista 11/2014.