JS13kGames 2018 Post Mortem

And another JS13kGames competition is over. This year’s theme was ‘Offline’. Since BabylonJS was allowed in the WebXR category this time, and I had been looking into Babylon lately, I decided to go with that. You can play my entry, Lasergrid, on the JS13kGames website.

The first concept I had in mind was set in a factory: the player gets an order on a display in the form of a selection of differently colored objects, then has to look at the conveyor belt and push off everything that’s not in the order. The problem I ran into very quickly was that I couldn’t use physics out of the box with Babylon. Although there is a physics library available for Babylon, it’s an external library that I couldn’t fit into the allowed 13kB. So I went with my second idea: a laser grid in which the player has to rotate objects to guide the laser from start to finish.

Lasergrid

BabylonJS and Virtual Reality

Setting BabylonJS up for virtual reality is very easy. It’s not ‘on’ by default, but there’s a helper function that adds everything, including the button to switch to VR. It is usually called right after initializing the scene: this.vrHelper = scene.createDefaultVRExperience();. Moving around in the browser works right out of the box. Teleportation in VR is also very simple to add. I created a mesh for the ground and called enableTeleportation:

  this.vrHelper.enableTeleportation({
    floorMeshName: ground.name
  });

With that out of the way, I added a simple data structure for the puzzles and created a few basic cubes as objects to work with. When clicked in the browser, these rotate.
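Such a puzzle data structure can be as simple as a grid of cells with a type and a rotation. The sketch below is my own illustration; the names and cell types are hypothetical and not taken from the actual Lasergrid source:

```typescript
// Hypothetical puzzle definition: a grid of cells, each with a piece type
// and a rotation in quarter turns (0..3).
interface PuzzleCell {
  type: "empty" | "mirror" | "wall" | "transmitter" | "target";
  rotation: number; // quarter turns, 0..3
}

interface Puzzle {
  width: number;
  height: number;
  cells: PuzzleCell[]; // row-major, width * height entries
}

const demoPuzzle: Puzzle = {
  width: 3,
  height: 1,
  cells: [
    { type: "transmitter", rotation: 0 },
    { type: "mirror", rotation: 1 },
    { type: "target", rotation: 0 },
  ],
};

// Rotating a piece is just bumping its quarter-turn counter.
function rotateCell(cell: PuzzleCell): void {
  cell.rotation = (cell.rotation + 1) % 4;
}
```

Storing the rotation as quarter turns keeps the click handler trivial: it only has to bump a counter.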

With this part working I was confident enough to start working on the full game.

Challenge 1 - Textures

Since even a PNG gets larger than 13kB very quickly, texturing is always a challenge. When you try to shrink images down, you quickly end up with a pixel art look, so that was the look I went for. Of course, I used my favorite pixel art tool, PyxelEdit. Applying the textures to the models gave me some issues as well, which takes me to the next challenge:

Challenge 2 - Models

The models for the game consist of very simple objects. In the proof of concept, I used a box and an extruded triangle. At first, this worked fine. When texturing the models, however, I learned that Babylon stores the UVW information in the vertices, not in the faces as I expected. This meant I could not map the textures per face: getting the UVWs for one face right meant breaking another. I ended up ‘modeling’ in 3D Studio. Since I needed the UVW information, I recreated the simple meshes in 3D Studio and detached some faces to give them their own vertices and UVWs. I then wrote a custom script to extract all information about vertices, faces, and UVWs to simple JavaScript objects.

Since I was working with 3D Studio already, this gave me the idea to use it for puzzle design as well. So I created another MaxScript to export a scene from 3D Studio to JSON. The scenes are very simple: I just add some cubes in various colors by setting the material ID. This made it possible to rotate the cubes and build entire walls out of them without any problems. By the way, I included these scripts in the GitHub project as well, in case anyone wants to have a look.
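To illustrate why per-vertex UVs force the detaching trick, here is a hypothetical example of the kind of data such an exporter produces (the field names are mine; the actual export format in the repo may differ). Each vertex carries its own UV pair, so a face that needs a different mapping must get its own copies of the vertices:

```typescript
// Hypothetical exporter output: flat arrays, Babylon-style.
// 3 floats per position, 2 floats per UV, faces index into the vertex list.
const exportedMesh = {
  positions: [
    0, 0, 0,   1, 0, 0,   1, 1, 0,   0, 1, 0, // one quad, 4 vertices
  ],
  uvs: [
    0, 0,   1, 0,   1, 1,   0, 1, // one UV pair per vertex
  ],
  indices: [0, 1, 2, 0, 2, 3], // two triangles
};

// Sanity checks an importer could run on this data.
const vertexCount = exportedMesh.positions.length / 3;
const uvCount = exportedMesh.uvs.length / 2;
const allIndicesValid = exportedMesh.indices.every(i => i < vertexCount);
```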

Challenge 3 - Laser

I needed to find a way to calculate the laser beam and decided to go with casting rays. Every object has a rotation value, which I used for this. The first ray is cast from the transmitter object. If it hits an object, I call a method on that object that returns one of four possible values: stop processing, go left, go right, or hit target. I planned on extending that list with other constants but never got around to it. The beam repeats this process until it hits nothing, a wall, or the target. Every time it hits something, it adds a new coordinate to an array. That array is then used to create a tube. I ended up adding a glow effect to the tube to make it look a bit more like a laser.
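The beam logic can be sketched without any engine code. The version below is a simplified 2D grid walk rather than the actual Babylon ray casting, and all names and constants are my own:

```typescript
// Result constants an object can return when the beam hits it.
const STOP = 0, GO_LEFT = 1, GO_RIGHT = 2, HIT_TARGET = 3;

type Vec = [number, number];
type HitHandler = (dir: Vec) => number;

// Turn the current direction a quarter turn left or right.
const turnLeft = ([x, y]: Vec): Vec => [-y, x];
const turnRight = ([x, y]: Vec): Vec => [y, -x];

// Walk the beam from `start` in `dir`, asking each hit object what to do,
// and collect the corner points later used to build the tube mesh.
function traceBeam(
  start: Vec,
  dir: Vec,
  objects: Map<string, HitHandler>,
  maxSteps = 50
): Vec[] {
  const points: Vec[] = [start];
  let [x, y] = start;
  for (let step = 0; step < maxSteps; step++) {
    x += dir[0];
    y += dir[1];
    const handler = objects.get(`${x},${y}`);
    if (!handler) continue; // nothing hit, keep flying
    points.push([x, y]);
    const result = handler(dir);
    if (result === STOP || result === HIT_TARGET) return points;
    dir = result === GO_LEFT ? turnLeft(dir) : turnRight(dir);
  }
  points.push([x, y]); // beam flew off without hitting anything
  return points;
}
```

Each entry in the returned array corresponds to one corner of the tube mesh that visualizes the beam.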

Challenge 4 - Controls

To make the game into a real VR game I wanted to add Oculus Rift support: rotating the blocks and moving around using the Oculus Touch controllers. Babylon.js has some very simple functions to add interactions and teleportation. That is, until you really want to do something with them. There is a mesh selection event you can use, with a filter to limit exactly what you can select in the game. Unfortunately, this event triggers when you point your controller at an object, without even pushing one of the buttons on the controller. I ended up having to track what the user is pointing at and have the controller respond to that. Another thing is that the triggers of the Oculus Touch controllers are not boolean values, but can be anything between 0 and 1. As soon as you slightly touch the trigger this value changes and the event fires, and whenever you slightly change how far you are pressing it, the value changes again. I fixed this by adding some more status values. Although it works, it is far from an elegant solution.
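Those extra status values boil down to hysteresis: only fire a press above a high threshold and a release below a low one, so jitter in between does nothing. A minimal sketch, with example thresholds that are not taken from the actual game:

```typescript
// Treat an analog trigger (0..1) as a button with hysteresis so tiny value
// changes don't fire press/release events over and over.
class TriggerState {
  private pressed = false;

  // Returns "press", "release", or null when nothing changed.
  update(value: number): "press" | "release" | null {
    if (!this.pressed && value > 0.7) {
      this.pressed = true;
      return "press";
    }
    if (this.pressed && value < 0.3) {
      this.pressed = false;
      return "release";
    }
    return null; // jitter between the thresholds is ignored
  }
}
```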

Challenge 5 - Finishing

Finishing the game turned out to be the biggest challenge. I started working on the project as soon as I could, during my summer vacation in France. I was making some progress but was struck down by the flu, which took me out for a whole week. I also had to finish some presentations and a workshop. I did learn a lot, though. I wasn’t planning on continuing to finish the game, but after using it in a demo at our local WebXR NL meetup, people convinced me to keep working on it. I try to stream as much as possible on my Twitch channel and upload the recordings to my YouTube channel.

Wrap-up

The main reason for me to work on compos like JS13kGames is to have a fun project to work on and to learn a lot in the process. The constraints force you to think outside the box. And although I got sick, I managed to create something playable, learned a lot, and had fun.

@end3r, thank you for hosting this great compo every year! Already looking forward to the next.

Substance Painter to AFrame

I was working on a WebVR project the other day, trying to get a model rendered with the correct textures. I was creating the textures in Substance Painter and going back and forth between various tools to get the textured model to render correctly. At first, I was using an .obj model, but I would rather have used a .glTF model. Luckily, there’s actually a very nice way to get directly to .glTF from Substance Painter.

When you are done painting your textures, go to the File menu and look for Export Textures….

Export step 1

In the config dropdown, find glTF PBR Metal Roughness. Depending on where I need the resulting files, I might lower the resolution of the textures to 512x512. When uploading your models to Facebook, for example, you need to do this to decrease the file size.

Export step 2

Make any other configuration changes where needed and hit Export.

When you open the resulting folder, you’ll end up with files like these.

Export result

Depending on the usage you can copy these to your project. If you only need the model with textures, the .glb file is probably the one you need. This file contains the .glTF with textures in a binary format.

To use the file in A-Frame, use the <a-gltf-model> tag. Like so:

<a-scene>

  <a-assets>
    <a-asset-item id="art-model" src="/assets/art.glb"></a-asset-item>
  </a-assets>

  <a-gltf-model id="art" src="#art-model" position="0 2.5 -10" ></a-gltf-model>
      
</a-scene>

And that’s all!

BabylonJS WebVR Hello World

In a few weeks, we have our next WebXR NL meetup. That evening we are going to put a couple of WebVR frameworks head to head: A-Frame, ThreeJS, and BabylonJS. Since I happen to have some experience with BabylonJS, it falls to me to explain how to work with WebVR in BabylonJS. This post is the first part, “Hello World”.

Basics

For this tutorial I use StackBlitz, but any other online or offline editor will work. In my case, I started a new TypeScript project in StackBlitz. This gives you an HTML file, a TS file, and a CSS file. The HTML file is the simplest: it contains only one element in its body, the canvas element. All rendering goes to this canvas.

The CSS file is pretty straightforward as well. It makes sure the canvas element fills the entire screen.

Packages

To get BabylonJS to work we need to install a few packages. First, of course, BabylonJS itself; this package also includes the TypeScript definitions.

BabylonJS expects a couple of additional packages, which you don’t need right away but which may come in handy in the future. However, if you don’t add them, Babylon will complain:

  • Oimo => JavaScript 3D Physics engine
  • Cannon => JavaScript 3D Physics engine
  • Earcut => JavaScript triangulation library

With StackBlitz it is very easy and fast to install them: just enter the name in the ‘enter package name’ field. If you miss one, StackBlitz will offer to install the missing package.

Main Class

I started by clearing the index.ts file with the exception of the import of the styles. I’ve added the import for BabylonJS as well. This will make sure the library is loaded and you can use it.

We need a TypeScript class for our app to run; I named it VRApp. Add an empty constructor and a function named run(). This is the basic outline of the class. After creating the class, instantiate it and call the run function.

Babylon works by having an Engine that talks to the lower-level WebGL API. You also need one or more BabylonJS Scenes. A Scene contains, for example, the geometry, lights, and camera that need to be rendered. I created two private fields for these because they need to be available from different places in the class.

The engine itself is instantiated in the constructor of the VRApp class. You need to pass two parameters to the constructor of the BabylonJS Engine: a reference to the canvas and a boolean to turn antialiasing on. After that, we can instantiate a scene and pass it the engine.
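As a rough sketch, the class at this point has the following shape. The Babylon-specific calls are shown as comments so the outline stands on its own, and the canvas id is an assumption on my part:

```typescript
// Outline of the VRApp class; in the real project the commented lines use
// BABYLON.Engine and BABYLON.Scene, stored in the two private fields.
class VRApp {
  // private engine: BABYLON.Engine;
  // private scene: BABYLON.Scene;

  constructor() {
    // const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement;
    // this.engine = new BABYLON.Engine(canvas, true); // true turns antialiasing on
    // this.scene = new BABYLON.Scene(this.engine);
  }

  run(): void {
    // this.engine.runRenderLoop(() => this.scene.render());
  }
}

// Instantiate the app and start it.
const app = new VRApp();
app.run();
```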

Next, we need to add a few things to the scene to render. We need a light to illuminate the scene. The first light I often create is a hemispheric light. This light has a direction. This is not the direction of the light itself, but the reflection of the light. The hemispheric light is used to create some ambient lighting in your scene. For ambient lighting in combination with other lights, you often point this up. In this case, I kept it at an angle to get some shading going.

Lighting alone won’t do anything; we need some geometry. For the ground, I create a Ground Mesh. This plane is optimized for use as a ground and can be used in more advanced scenarios, like octrees, if you wish in the future.

The rest of the scene is made from a couple of cubes randomly scattered around. I created a simple for-loop in which I create a cube mesh and set its position to a random value.
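The position logic of that loop can be sketched on its own. The range and the fixed height are example values I picked, not necessarily the ones from the tutorial code:

```typescript
// Generate one random position per cube inside a horizontal range,
// keeping each cube resting on the ground plane.
function scatterPositions(count: number, range = 10): [number, number, number][] {
  const positions: [number, number, number][] = [];
  for (let i = 0; i < count; i++) {
    positions.push([
      (Math.random() - 0.5) * range, // x
      0.5,                           // y: half the cube height, so it sits on the ground
      (Math.random() - 0.5) * range, // z
    ]);
  }
  // In the real app, each position would be assigned to a cube mesh.
  return positions;
}
```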

Almost there; we need two more things. The first is an implementation of the run function of the VRApp class. In this function, I provide the BabylonJS Engine created in the beginning with a render loop. The function we hand to the engine is called every frame and is responsible for rendering the scene. It can do more, and probably will in the future, but for now it only calls the render function of the scene.

At this point, you should see an error when running the application using StackBlitz.

And the error is correct: we didn’t create a camera. In a ‘normal’ WebGL application you need to create a camera, and you can do that in our case as well, but you don’t have to. Turning a WebGL project into a WebVR project takes some effort: you need to configure everything and render through a special camera. To make this as easy as possible, BabylonJS has a method that creates all of it for you and converts your application to WebVR: createDefaultVRExperience. This function creates a default VRExperienceHelper. The helper adds the VR button to the UI, checks whether WebVR is available, and by default creates (or replaces) the device orientation camera for you. I’ve added a call to this function at the end of the constructor of the VRApp class.

Result

The result of the tutorial should look something like this; the full code is in here as well:

You can open this code on StackBlitz and play with it yourself. Of course, there’s much more you can do with WebVR, but this is it for this tutorial. If you have any questions, feel free to add a comment or come to our meetup on the 12th of June in Eindhoven, The Netherlands.

Getting started with A-Frame – #1

For a while now I’ve been working with the A-Frame framework for building virtual reality applications in the browser, and I really like it. So it’s time to dive in deeper, and what better way to do that than writing a series of tutorials? Of course, I have to start at the very beginning and work my way through the entire framework.

This first tutorial will explain a little bit about the framework itself and shows you how to get your first polygons on screen in the browser.

What is A-Frame?

What is the A-Frame framework all about? A-Frame is a framework for building Virtual Reality applications using web technology, originally by Mozilla and started in 2015. It is built on top of the WebVR API and uses Three.js. Three.js is built on top of WebGL. This may seem like a dependency on top of a dependency. And, although it is, you probably do not want to write everything for WebGL and WebVR yourself.

A-Frame uses a declarative syntax in your HTML files, which makes them feel very natural, easy to understand, and simple to copy-paste from. A-Frame is, just like the web itself, cross-platform. Applications built with A-Frame run everywhere, from your desktop browser to your mobile devices, on Google Cardboard, Google Daydream, and Samsung Gear VR, and on the HTC Vive and Oculus Rift. It supports the controllers for the various devices as well.

A-Frame uses Three.js in the back, but you are free to use whatever you want next to it on your website. Because it uses the HTML component structure, it combines well with frameworks like Vue, Angular, or React. It also comes with a visual inspector; just hit CTRL+ALT+I when your scene is running. In a later tutorial, I’ll dive deeper into this and show you what it is capable of.

A-Frame uses components that you can build yourself or download from a large repository. This list is curated and can be compared a bit with the Unity Asset Store. These components make it very easy to extend your application and create reusable pieces. More on this in later tutorials as well…

Let’s start a scene

In this first tutorial, we are just going to get our feet wet. Let’s start by having a look at a piece of HTML.
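A minimal A-Frame page looks like the following; the version number in the CDN URL is an example, so check aframe.io for the current release, and the color and position values are just ones I picked:

```html
<!DOCTYPE html>
<head>
  <script src="https://aframe.io/releases/0.8.0/aframe.min.js"></script>
</head>
<body>
  <a-scene>
    <a-box color="red" position="0 2 -5"></a-box>
  </a-scene>
</body>
```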

On line 3 you can see the A-Frame framework scripts being loaded from their CDN. You can use npm or a local, offline copy of the framework, but I personally use the CDN most of the time.

The more interesting parts are on lines 6 through 8. All A-Frame tags start with an “A”, and every A-Frame application must have a scene; the a-scene tag creates this. You can add attributes to the scene as well, but just the tag is sufficient for now. Inside the scene is an a-box. This tag adds a box entity to the scene. There are two attributes defined on it: color and position. Both are pretty self-explanatory.

What’s next?

Next time I’ll dive a little deeper into these attributes: how they work and what you can do with them.

In the meantime, you can have a look at the code for this tutorial (and possibly the next tutorials as well) on my GitHub page. If you’d like to support the series, you can use one of the affiliate links throughout the page. Another way to support it is to become a patron on Patreon. This will give you all kinds of benefits, like early access to pretty much everything I write.

Recording of VR in a Box @ Techdays

Last weekend the recording of my session VR in a Box at the recent Microsoft Techdays was uploaded to YouTube. If you have any feedback, let me know. I might do a follow-up if necessary.

You can view and download the slides over here:

In case you are looking for the APK to try it for yourself: VR_TechDays.zip