Creating an APK from a WebXR app

Intro

It recently became possible to package Progressive Web Apps into an APK using the tools provided by Oculus. This is a great way to get your app into the hands of users who are not able to use the web version. And it is just as easy as it sounds.

This tutorial is not a detailed walkthrough on how to create a WebXR app or a PWA. It’s a quick overview of the steps you need to take to get your app packaged into an APK and deployed to the Oculus Quest.

WebXR app

Let’s start with a simple example. I created a pretty empty A-Frame example that just shows a cube on a black background. To make everything work offline and to prevent cross-origin issues, I downloaded the A-Frame library and added it to the project.
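Stripped to the basics, the example looks something like this. The path to the local A-Frame copy is just where I happened to put it, so adjust it to your own project:

<!DOCTYPE html>
<html>
  <head>
    <!-- local copy of A-Frame, so the app works offline and avoids cross-origin issues -->
    <script src="js/aframe.min.js"></script>
  </head>
  <body>
    <a-scene background="color: black">
      <a-box position="0 1.6 -2" color="#4CC3D9"></a-box>
    </a-scene>
  </body>
</html>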

Web manifest

The first step in converting the example is to add a webmanifest. Most of the file is pretty much the same for all Progressive Web Apps, with a few exceptions. The biggest and maybe most important addition is "ovr_package_name": "TheCube.Sorskoot.com". This is the name of the package that will be used to install the app on the Oculus Quest. Another thing to mention is the display property. Right now there are two valid values, standalone and minimal-ui; I noticed no difference between them on the Quest, so I will leave it at standalone for now. I hope that soon Oculus figures out a way of opening the app in full immersive mode, maybe by using fullscreen or something. For now, you have to show the UI to enter VR. In the example below I reference my app running at localhost; if you release the app on a server or use a different address on your machine, you need to change this line. Lastly, there has to be at least one image in the icons array, a 512x512 image, with its purpose set to any maskable. More sizes are recommended.
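As a rough sketch, the whole manifest could look like this. The ovr_package_name and display values are the ones discussed above; the name, start_url, and icon path are placeholders from my local setup, so replace them with your own:

{
    "name": "The Cube",
    "short_name": "TheCube",
    "start_url": "https://localhost:8080/index.html",
    "display": "standalone",
    "ovr_package_name": "TheCube.Sorskoot.com",
    "icons": [
        {
            "src": "icons/icon512.png",
            "sizes": "512x512",
            "type": "image/png",
            "purpose": "any maskable"
        }
    ]
}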

Service Worker

For this example, I used a service worker from the Service Worker Cookbook at serviceworke.rs. This service worker has a precache that loads the specified files as soon as possible and then updates them in the background when it can. I’m not going into the details of how this works, because it is fully described at serviceworke.rs. The only downside of this service worker is that an update is always one version behind. I would advise you to have a look at the cookbook and find the recipe that best suits your needs.
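Condensed down, the recipe amounts to something like the sketch below. This is my shorthand version of the idea, not the cookbook code itself, and the precache list is a placeholder for your own files:

const CACHE = 'thecube-v1';
const PRECACHE = ['index.html', 'js/aframe.min.js', 'manifest.webmanifest'];

// Precache the listed files as soon as the service worker installs.
self.addEventListener('install', (evt) => {
    evt.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(PRECACHE)));
});

self.addEventListener('fetch', (evt) => {
    if (evt.request.method !== 'GET') return; // only cache GET requests

    // Answer from the cache first, fall back to the network.
    evt.respondWith(
        caches.match(evt.request).then((cached) => cached || fetch(evt.request))
    );

    // Refresh the cached copy in the background; this is why an update
    // is always one version behind.
    evt.waitUntil(
        fetch(evt.request)
            .then((response) => caches.open(CACHE).then((cache) => cache.put(evt.request, response)))
            .catch(() => { /* offline; keep whatever is cached */ })
    );
});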

To see if it works, you can look it up in the Chrome DevTools, under the Application tab. You should see the service worker in the list of installed scripts.

Updating the app

To finalize the implementation of the progressive web app, we need to add a few lines to the index.html example. In the head section, I added a link tag referring to the web manifest. I also added a small piece of script to load the service worker.
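The additions to the head look something like this; the file names assume the manifest and service worker sit next to index.html:

<link rel="manifest" href="manifest.webmanifest" />
<script>
    // Register the service worker if the browser supports it.
    if ('serviceWorker' in navigator) {
        navigator.serviceWorker.register('service-worker.js');
    }
</script>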

Building the APK

Now that we have a complete PWA, we need to package it into an APK. This is done using the CLI tool provided by Oculus, which you can download here. I believe this also needs Java 1.7 or later to be installed, as well as the Android SDK, or at least the Android SDK Build Tools. I installed the tool on my C: drive in tools/ovr-platform-util-pwa. Running the following from the command line creates an APK file from your PWA:

/tools/ovr-platform-util-pwa/ovr-platform-util.exe create-pwa -o TheCube.apk --android-sdk %localappdata%/Android/Sdk --manifest-content-file manifest.webmanifest

The paths used in the command above match where I installed everything and will probably be completely different on your machine.

Deploy using adb

The fastest and easiest way to get your freshly built APK onto your Oculus Quest is to use the Android debug CLI tool, adb. A single command deploys the APK to the Quest:

adb install TheCube.apk

npm scripts

To speed things up, I added a few npm scripts to the package.json file. You don’t really have to use npm or any packages in your project, but just having these scripts around saves a lot of time. I added the following to the scripts section of the package.json file:

"scripts": {
    "build": "/tools/ovr-platform-util-pwa/ovr-platform-util.exe create-pwa -o TheCube.apk --android-sdk %localappdata%/Android/Sdk --manifest-content-file manifest.webmanifest",
    "deploy": "/tools/scrcpy/adb.exe install TheCube.apk"
}

Now I can just run npm run build and npm run deploy to build the APK and deploy the app to the Quest when it is connected.

Keep in mind that the folders might be different on your machine.

Closing words

At the time of writing, it is not yet possible to release the app in the Oculus Store. You can sign the APK with a keystore, but you can’t do anything more with that than what I’ve shown above. When Oculus starts allowing PWA/APK apps to be uploaded to the store by the general public, I will write part 2 of this tutorial showing what steps to take there.

From KenShape To WebXR

There are a lot of ways of creating content for your WebXR apps. Today I want to introduce you to a very simple one, and one of my favorites: KenShape. KenShape is a tool that at first looks like a pixel-art sprite editor, and the first step is pretty similar. But by providing a depth value, you can turn your drawing into a solid 3D object. And unlike a normal pixel editor, you can use shapes other than squares to draw your models.

Before we go into the details of the editor, I need to clarify one thing. KenShape is not a free tool, but it costs only $3.99 and is, in my opinion, well worth the money.

Creating a model

When you start the tool, you are presented with three options for the size of your model: 16x16, 24x24, or 32x32 pixels. For this example, I created a 16x16 model. Keep in mind that the depth is 8 pixels, or voxels, for every size.

At this point, we are presented with a blank canvas on which we can start drawing our model. At the top, we see the drawing tools: a pen for drawing pixels, a tool for drawing straight lines, and a tool to fill an entire area. The three options next to those let us mirror our drawing horizontally or vertically, or disable the mirroring.

On the left side of the canvas are the different shapes you can use to draw your model. By pressing the spacebar or by using the scroll wheel on your mouse you can rotate the shape.

On the right side, there are 16 colors you can use. Through the palette icon in the top right corner, you can load other palettes. I often use lospec to find palettes.

I decided to draw a little space gun model for this tutorial and loaded a custom palette. As you can see, there are a lot of different shapes in here, not just the square pixels you would get in a normal pixel-art drawing.

Once we have drawn a model, we can start adding depth to it. By assigning the numbers 1 to 8 to our image, we extrude each pixel based on its value, giving it depth. Keep in mind when creating your own models that they are always mirrored, so you can’t add different details to the front and back, or top and bottom, unless you open the final model in another editor.

Now that we’re done modeling our gun, we can review the model. I like it. So, next, let’s export it.

For use in a WebXR project (in this case, I use A-Frame) I export to glTF. I leave the rest of the options as they are. If you want to edit the model further in another program, you might want to create a texture and export to FBX or something else your tool of choice can import. By hitting export... you save your model as a glTF file. Time to use the model.

Using the model

Now that we have successfully created a model, it would be fun to use it in a simple A-Frame scene.

There’s one thing to keep in mind when working with these models. The scale is in meters, where every voxel is 1m by 1m by 1m. So we need to scale everything down significantly.

In the code example below I loaded the model using assets. This way it’s easy to reuse the same model multiple times and, in case of a change, you only have to change it in one location. I called the asset gun-model.

Then I added pretty much the same code twice: once for the left controller and once for the right controller. Since I wanted to show the same gun in both hands, the code is identical. Getting the position and the scale right is just trial and error. I made heavy use of the A-Frame inspector (of which I created a video a while back 😉).
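Put together, the scene looks roughly like this. The file name, scale, and position values are illustrative placeholders; finding values that feel right is exactly the trial-and-error part:

<a-scene>
    <a-assets>
        <!-- Load the exported glTF once and reuse it for both hands. -->
        <a-asset-item id="gun-model" src="gun.gltf"></a-asset-item>
    </a-assets>

    <!-- The same model attached to each controller. -->
    <a-entity hand-controls="hand: left">
        <a-entity gltf-model="#gun-model" scale="0.05 0.05 0.05" position="0 0 -0.05"></a-entity>
    </a-entity>
    <a-entity hand-controls="hand: right">
        <a-entity gltf-model="#gun-model" scale="0.05 0.05 0.05" position="0 0 -0.05"></a-entity>
    </a-entity>
</a-scene>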

Closing words

I hope this tutorial will help you create your own content for your WebXR games. If you have any questions or ideas, hit me up at the WebXR Discord.

WebXR Discord

During the Covid-19 crisis, we noticed a lot of people from all over the world joining our meetups, and members of our community hanging out in other meetups. We decided to try and bring everyone together in one Discord server. Here we can share links to our events and recordings, but also help each other out with issues when developing with A-Frame, Babylon.js, or Unity and running your apps on devices like the Oculus Quest or Microsoft HoloLens. Now we need you to join and help grow this community to become the official WebXR community.

Make sure to share the invite with your friends and colleagues: WebXR Discord, http://discord.gg/Jt5tfaM !

How to debug Microsoft Edge on the HoloLens

I recently started a new job as a Mixed Reality developer, building applications for the Microsoft HoloLens. We build our applications using Unity3D. But, of course, I had to try and run WebXR on the device as well. Microsoft Edge comes with the HoloLens and supports WebVR, and switching to ‘VR’ removes the browser chrome so your 3D model becomes a hologram in the real world. After that, things quickly get different from running a WebXR app on other devices. One of the difficulties I ran into was debugging.

Debugging

When running a web application on the Oculus Quest, for example, you can use Edge on your desktop to debug remotely. Just browse to edge://inspect, and without too much hassle you should be able to debug with the same tools you would use for any other website. The HoloLens, however, runs the ‘old’ UWP version of Edge, which can’t be debugged with the new Chromium Edge. I’ve tried a couple of different approaches, but none worked.

Vorlon.JS

This is where Vorlon.JS comes in. This nifty little tool can replace the web debug tools and creates a connection to debug your website remotely. Just install the npm package globally on your machine (npm i vorlon -g). When you run it, it spins up a small web server locally, by default on port 1337. You’ll have to add a script that is served from this server to your HTML page, and you are pretty much good to go. Except that I run my WebXR application using SSL, with a custom certificate (I have a DNS entry that points to a local IP), and mixing secure and unsecured connections doesn’t work in this case.

To get this to work, you have to find where Vorlon is installed on your Windows machine. In my case it’s %AppData%\npm\node_modules\vorlon\Server. You’ll have to edit the config.json file there. Basically, it comes down to setting "useSSL": true. When you want to run SSL on localhost, this is enough. In my case, I wanted to use my own certificate, so I copied the .crt and the .key files to the /cert folder and updated the config to use these. Don’t forget to update the script tag in your .html file to use the domain name if that’s different now. I added <script src="https://{YOUR-IP}:1337/vorlon.js"></script> to my HTML file.
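The relevant part of the edited config.json ends up looking roughly like this, with the rest of the file left as it was. The certificate key names here are from my memory of the Vorlon docs and the file names are specific to my setup, so verify both against the config.json that ships with your version:

{
    "useSSL": true,
    "SSLkey": "cert/mydomain.key",
    "SSLcert": "cert/mydomain.crt"
}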

After this, you just open the Vorlon URL (https://{YOUR-IP}:1337) on your desktop and the WebXR app on the HoloLens. Anything that is written to the console will show up on the desktop, but you can explore a lot more of the remote page from your desktop machine.

Halloween Scream Stream

This week was Halloween 🎃 and I wanted to do something special for the stream this time, to make everything a bit more spooky. So I added some simple effects to the stream, created some spooky music, and eventually added a few new shaders and wrote a horror script to end the stream with. This is what was added and how it was done. I hope this post will give you some ideas for your own streams. In the future, I might create some detailed tutorials on how to create the effects I used.

Look and feel

For the look and feel of the Halloween stream, I wanted to create an effect inspired by the dark world called the Upside Down from the series Stranger Things.

Overlays

I wanted to keep my overlays as close to normal as possible, with only a few added animations. I went searching and found a few animated GIFs of smoke and floating particles that I liked. Unfortunately, these had a black background instead of a green one. A green background can be removed pretty well using a chroma key filter in OBS, but it turned out a black background can be removed as well. And since these GIFs were black and white, the white parts stayed, which gave me the look I wanted.

To make the combination of animations reusable, I created a new scene in OBS and added the animations there.

Then, in every other scene where I wanted to have the smoke and particles, I could just add a scene source and reference that scene. This way, whenever I needed to make changes to the Halloween scene, I could change it in one place and every reference would update.

At some point, I noticed a higher than usual CPU usage in OBS. It turned out to be the chroma keying of the particle animation. I had to remove it to prevent possible issues while streaming.

Camera

In normal streams, I’ve configured the colors of my camera to be vivid with the default lighting set to blue and purple. This didn’t fit the theme for Halloween and wasn’t spooky enough.

I changed two things with regard to the camera. First, I had only the lights in front of me turned on. To make sure the lights stayed this way, I disabled the light commands in the chat (normally you can type !light with a color name to change the lighting during the streams). I placed the main light on the floor next to me, angled upwards. The other one was set to light my face a little bit from the other side.

Second, I added a color LUT (lookup table) filter to the camera. Normally this is used to change the colors a bit and make the color pop just a little more, but in this case, I dramatically changed it and added a lot more blues while lowering the saturation. This created the look I wanted for the camera.

Music

My main choice for music is the songs from the Monstercat library, but these didn’t fit the horror feel I wanted for my stream. I wanted very slow and long droning sounds. I remembered a tool called Paul Stretch, which can stretch audio to the extreme. I took a couple of famous horror movie themes, like the theme from the movie Halloween, and made them about 20 times as long. This resulted in a couple of songs that were over an hour long. To finish them, I added a bit of EQ and normalization so they all sounded similar.

The script

Now that I had the look and feel down, I wanted to do something special. I wanted to ‘tell a story’ during my stream, inspired by my favorite horror movies. I came up with a story of an old haunted house. Every 66 years, at Halloween, the ghost would come back to haunt the family living in the house at that time, resulting in a lot of unfortunate deaths.

During the stream, I mentioned details of this story, like living in an old house and the big fire in the fall of 1953. The stream started out normal, except for the spooky atmosphere. I also mentioned the rain and thunder outside.

One hour before the end of the stream, I started adding ‘events’ to the stream: supernatural events. It started with knocking sounds and children crying. I left my screen a few times with the camera running. With me gone or looking the other way, the camera would glitch. The door would open by itself, and while I was gone to check it out and close it, a black shadow moved past the camera.

Near the end, the lights flicker and a ghost is seen in the background, after which I decide to end the stream and check on the family because I keep hearing noises. Again, knocking. So I stand up from my desk, the stream starts glitching, and after a bone-chilling scream the stream cuts out.

Shaders

A month ago I created a shader for use in OBS. I decided to do the same again and create a couple of different shaders, using a tool called KodeLife. I added two custom shaders and used one that came with the OBS shader plugin. Neither of the shaders I created is that complex.

I wanted to create a shader that would separate the RGB colors, as you would sometimes see on old CRT TVs.

This clip from the stream shows the effect around the 15s mark.

The shader is activated only when a semi-random value gets above a certain threshold. At that point, the red channel shifts a bit to the left and the blue channel a bit to the right.

The other shader I created renders noise and randomizes when it is shown and how much, again with a certain threshold and a specified speed.

The ghost

The ghost appears twice during the stream. The first time, it passes the bottom of the screen as a dark shadow. This was created by overlaying a transparent animation on top of the camera.

The second appearance was a bit more work. To create the effect of the ghost appearing behind me, I took a screenshot of the webcam without the lights on, but with me looking into the camera. I opened it in Photoshop and added a figure in the back. On its own, the picture looks too fake.

But when I added the image to a scene in OBS (behind the camera) and added a flickering effect to the webcam’s alpha channel, you only got to see it a few milliseconds at a time.

I added a sound effect as well, so it really looks like there’s some glitch in the lighting. By this time during the stream, I was actually getting the chills because of the weird lighting and the droning sounds.

Timing the events

For the timing of everything that was happening during the stream, I created a custom tool that was running in the background. This was a very simple web application that would show me a message and execute a function after a specific time had passed.

The tool reminded me to mention the ‘storm’ outside and the question of the day. I wanted to talk about horror movies during the stream a bit, to try and influence the minds of my viewers. At some point, it started playing audio of knocking and children crying, showing me a message that it had played the sound and reminding me what to do: at first, just listen, but later, switch to certain scenes to show the glitches.
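The core of the tool is no more than a list of timed events, roughly like the sketch below. The timings, messages, and sound file are made up for illustration; the real code is in the repo linked below:

// Each event shows a reminder on screen and optionally plays a sound.
const events = [
    { after: 15 * 60 * 1000, message: 'Mention the storm outside' },
    { after: 60 * 60 * 1000, message: 'Knocking plays now. Just listen, do nothing.',
        action: () => new Audio('sounds/knocking.mp3').play() },
];

for (const evt of events) {
    setTimeout(() => {
        // Show the reminder and run the attached action, if any.
        document.body.insertAdjacentHTML('beforeend', `<p>${evt.message}</p>`);
        if (evt.action) evt.action();
    }, evt.after);
}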

The entire code of this app is available at the Halloween Stream Repo on my Github.

I might reuse this application and add it to a bigger application I’m planning, to create reminders and such during my stream.

I also needed to automate a few things in OBS, because I didn’t want to trigger things noticeably. There’s a plugin for OBS that can help: Advanced Scene Switcher. There’s an option in this plugin that lets you create sequences, automating the switch to a specific scene after some time.

I wanted to have a little bit of time before a scene or glitch happened, so I added a ‘pre’ scene to each glitch and timed it in the sequence. This way I could switch to a fullscreen camera, leave, and then have a glitch occur. I also used this at the end: right as I stood up, I hit the last button, then grabbed my headphones. This way I created the illusion that I didn’t hit any buttons when the final malfunction happened, because my hands were clearly visible.

I programmed everything into my Stream Deck, and the special tool reminded me what to do.

Wrap up

I had a lot of fun creating the Halloween stream, even though a few things went differently than planned. I think I created the atmosphere I wanted; I actually was on edge during the stream. I did, however, forget my lines during the end sequence and forgot to mention there was someone at the door again.

If you’d like to watch the entire ending, you can do so in this highlight.