A blog by the icapps iOS team
iOS has amazing native frameworks to help speed up development. UIKit and SceneKit are just two examples of the many frameworks we use in our daily development. Unfortunately, a framework with native support for virtual reality applications is not yet available. In this blog we will explain how we combined existing frameworks to create our own virtual reality experience.
1. Defining key aspects of virtual reality
Before we can start developing, however, we need to understand what defines virtual reality.
Virtual reality is a computer-generated simulation of a 3D image or environment that can be interacted with in a seemingly real way by a person using special equipment such as VR glasses with a screen inside. The person’s senses play a vital role in having a realistic experience.
From this we can conclude that we need something that can handle the input and output of our simulated environment, the VR-headset specifics and stereo audio rendering. To capture these functionalities, we use the Google VR SDK for iOS.
The Google VR SDK is made for Cardboard, an accessible and affordable VR platform, which supports both Android and iOS. It enables immersive VR experiences by fusing data from the phone's sensors to predict the user's head position in both the real and virtual worlds. This, combined with an easy-to-use virtual reality headset, makes it ideal for a great VR experience.
We also need a tool that can handle the content that populates our environment (e.g. 3D models with textures, rigs and animations). Luckily, iOS does have a native 3D framework: SceneKit. It even comes with a WYSIWYG editor for 3D content, integrated into Xcode.
2. Designing the architecture
Now that we have our two main components, the only thing left to do is make them work together. Let’s look for some common ground.
Both frameworks are built on top of OpenGL. Google VR exposes its OpenGL context (an EAGLContext on iOS), while SceneKit's SCNRenderer can consume it. This turns out to be a piece of cake!
Last but not least, we built our own custom renderer to delegate information between the Google VR SDK and native SceneKit. This information covers everything related to head tracking (e.g. the position of the user in the virtual world). Here we also define some custom drawable objects' initialisers, programs, shaders and renderers. This is all done in OpenGL so it works efficiently with native SceneKit.
To recap, we have the Google VR SDK passing head tracking information to our custom renderer. The custom renderer then delegates this information, together with the video and custom objects, to the three SCNRenderers available. Three renderers, you ask? Yes, one for each eye and one for the center, which is used for the “magic view” mode. These make up our scene for the user to experience and enjoy.
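As a rough sketch, the fan-out to the three renderers could look something like this in Swift. The class and method names here are ours and purely illustrative; only `SCNRenderer`, the GLKit matrix types and the SceneKit calls come from the actual frameworks:

```swift
import SceneKit
import GLKit
import OpenGLES

// Illustrative sketch: one SCNRenderer per eye, plus one for the
// monoscopic "magic view" mode, all sharing the same GL context.
final class StereoRenderer {
    let leftEye: SCNRenderer
    let rightEye: SCNRenderer
    let center: SCNRenderer

    init(context: EAGLContext, scene: SCNScene) {
        leftEye  = SCNRenderer(context: context, options: nil)
        rightEye = SCNRenderer(context: context, options: nil)
        center   = SCNRenderer(context: context, options: nil)
        [leftEye, rightEye, center].forEach { $0.scene = scene }
    }

    // Called once per eye per frame with matrices derived from the head
    // transform; renders into the currently bound GL framebuffer.
    func draw(with renderer: SCNRenderer,
              viewMatrix: GLKMatrix4,
              projectionMatrix: GLKMatrix4,
              atTime time: TimeInterval) {
        renderer.pointOfView?.camera?.projectionTransform =
            SCNMatrix4FromGLKMatrix4(projectionMatrix)
        renderer.pointOfView?.transform =
            SCNMatrix4Invert(SCNMatrix4FromGLKMatrix4(viewMatrix))
        renderer.render(atTime: time)
    }
}
```

Because all three renderers share the Cardboard view's context, SceneKit draws straight into the framebuffer Google VR has already bound for the current eye.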
3. Let’s make some magic
Time to create our virtual reality world. First, we take Google VR SDK’s CardboardView and find its OpenGLContext to pass to our custom renderer.
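In code, that wiring could look roughly like the sketch below. `GVRCardboardView` and its delegate come from the Google VR SDK for iOS; treat the exact delegate signatures as illustrative for your SDK version, and `StereoRenderer` as a placeholder for our custom renderer:

```swift
import UIKit
import SceneKit
import OpenGLES

// Sketch, assuming the Google VR SDK for iOS is linked into the project.
class VRViewController: UIViewController, GVRCardboardViewDelegate {
    var cardboardView: GVRCardboardView!
    var stereoRenderer: StereoRenderer?   // our custom renderer (placeholder name)
    let scene = SCNScene()

    override func viewDidLoad() {
        super.viewDidLoad()
        cardboardView = GVRCardboardView(frame: view.bounds)
        cardboardView.delegate = self
        cardboardView.vrModeEnabled = true
        view.addSubview(cardboardView)
    }

    func cardboardView(_ cardboardView: GVRCardboardView,
                       willStartDrawing headTransform: GVRHeadTransform) {
        // The Cardboard view owns the GL context; once it starts drawing we
        // grab the current context and hand it to our SceneKit renderers.
        guard let context = EAGLContext.current() else { return }
        stereoRenderer = StereoRenderer(context: context, scene: scene)
    }
}
```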
Then we create an SCNScene and attach it to the three SCNRenderers created by the CardboardView. In this SCNScene we can import any models/objects we want using the visual editor (preferably with a .dae extension).
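A minimal sketch of that scene setup might look like this. The asset name is a placeholder (any Collada model added through Xcode's scene editor works), and the plain `EAGLContext` stands in for the one shared with the CardboardView:

```swift
import SceneKit
import OpenGLES

// Placeholder asset; fall back to an empty scene if it is missing.
let scene = SCNScene(named: "art.scnassets/spaceship.dae") ?? SCNScene()

// One camera node that all renderers look through; per-eye offsets are
// applied later from the head transform.
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
scene.rootNode.addChildNode(cameraNode)

// Stand-in for the CardboardView's shared GL context.
let context = EAGLContext(api: .openGLES2)!

// Left eye, right eye and the center "magic view" renderer, all showing
// the same scene.
let renderers = (0..<3).map { _ in SCNRenderer(context: context, options: nil) }
for renderer in renderers {
    renderer.scene = scene
    renderer.pointOfView = cameraNode
}
```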
Our custom renderer defines functions to read an embedded video and to draw a 360° sphere around the scene onto which this video is projected. It also contains an extension to the GVRHeadTransform class, which is responsible for passing the head-tracking information, so we can easily rotate or move our objects based on the user's current head position.
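The 360° sphere can be sketched as follows: the video plays into a SpriteKit scene via SKVideoNode, which SceneKit then uses as a texture on the inside of a large sphere around the camera. The file name and sizes are placeholders, and the small GVRHeadTransform extension at the end is illustrative (check your SDK version for the exact method name):

```swift
import SceneKit
import SpriteKit
import AVFoundation
import GLKit

// Sketch of the 360° video sphere. The video file is a placeholder.
func makeVideoSphere(radius: CGFloat = 30) -> SCNNode {
    let url = Bundle.main.url(forResource: "video360", withExtension: "mp4")!
    let player = AVPlayer(url: url)

    // SpriteKit scene that hosts the playing video as a texture source.
    let videoScene = SKScene(size: CGSize(width: 1920, height: 1080))
    let videoNode = SKVideoNode(avPlayer: player)
    videoNode.position = CGPoint(x: videoScene.size.width / 2,
                                 y: videoScene.size.height / 2)
    videoNode.size = videoScene.size
    videoScene.addChild(videoNode)

    let sphere = SCNSphere(radius: radius)
    sphere.firstMaterial?.diffuse.contents = videoScene
    // Cull the outside faces so we render the inside of the sphere.
    sphere.firstMaterial?.cullMode = .front

    videoNode.play()
    return SCNNode(geometry: sphere)
}

// Illustrative helper on the SDK's head-transform class: bridge the head
// pose into a SceneKit matrix so nodes can follow the user's gaze.
extension GVRHeadTransform {
    func headRotation() -> SCNMatrix4 {
        return SCNMatrix4FromGLKMatrix4(headPoseInStartSpace())
    }
}
```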
With our 360° projected video and custom objects imported, we can slide our phone into a VR Viewer, sit back, watch and enjoy our very own virtual world.
With the rise of ARKit we will definitely keep looking for native alternatives, because Apple could blow our solution right out of the water if they decide to drop support for OpenGL in favour of their own Metal API. But until then, this home-brew solution should do the trick.