Substance Painter to A-Frame

I was working on a WebVR project the other day, trying to get a model rendered with the correct textures. I was creating the textures in Substance Painter and going back and forth between various tools to get the textured model to render correctly. At first, I was using a .obj model, but I would rather have used a .glTF model. Luckily, there's a very nice way to get directly to .glTF from Substance Painter.

When you are done painting your textures, go to the File menu and look for Export Textures….

Export step 1

In the config dropdown, find glTF PBR Metal Roughness. Depending on where I need the resulting files, I might lower the resolution of the textures to 512x512. When uploading your models to Facebook, you need to do this to decrease the file size.

Export step 2

Make any other configuration changes where needed and hit Export.

When you open the resulting folder you’ll end up with files like this.

Export result

Depending on the usage, you can copy these to your project. If you only need the model with textures, the .glb file is probably the one you need. This file contains the .glTF data and its textures in a single binary format.

To use the file in A-Frame, use the <a-gltf-model> tag. Like so:

<a-scene>
  <a-assets>
    <a-asset-item id="art-model" src="/assets/art.glb"></a-asset-item>
  </a-assets>

  <a-gltf-model id="art" src="#art-model" position="0 2.5 -10"></a-gltf-model>
</a-scene>

And that’s all!

BabylonJS WebVR Hello World

In a few weeks, we have our next WebXR NL meetup. That evening we are going to put a couple of WebVR frameworks head to head: A-Frame, ThreeJS, and BabylonJS. Since I happen to have some experience with BabylonJS, it falls to me to explain how to work with WebVR in BabylonJS. This post is the first part: "Hello World".

Basics

For this tutorial I use StackBlitz, but any other online or offline editor will work. In my case, I started a new TypeScript project in StackBlitz. This gives you an HTML file, a TS file, and a CSS file. The HTML file is the simplest: it contains only one element in the body, the canvas element. All rendering goes to this canvas.

The CSS file is pretty straightforward as well. It makes sure the canvas element fills the entire screen.

Packages

To get BabylonJS to work we need to install a few packages. Of course BabylonJS itself; this package also includes the TypeScript definitions.

BabylonJS also depends on a couple of packages that you don't need right away, but that may come in handy in the future. However, if you don't add them, Babylon will complain.

  • Oimo => JavaScript 3D Physics engine
  • Cannon => JavaScript 3D Physics engine
  • Earcut => JavaScript triangulation library

With StackBlitz it is very easy and fast to install them. Just enter the name in the 'enter package name' field. If you miss one, StackBlitz will offer to install the missing package.

Main Class

I started by clearing the index.ts file, with the exception of the import of the styles. I've added the import for BabylonJS as well. This makes sure the library is loaded and you can use it.
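
As a rough sketch (the exact file names depend on the StackBlitz starter), the top of index.ts then looks something like this:

import './style.css';                  // styles from the StackBlitz starter
import * as BABYLON from 'babylonjs';  // loads BabylonJS and its type definitions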

We need a TypeScript class for our app to run; I named it VRApp. Add an empty constructor and a function named run(). This is the basic outline of the class. After creating the class, instantiate it and call the run function.
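
A minimal sketch of that outline:

class VRApp {
    constructor() {
        // initialization will go here
    }

    run(): void {
        // the render loop will be started here
    }
}

const app = new VRApp();
app.run();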

Babylon works by having an Engine that talks to the lower-level WebGL. You also need one or more BabylonJS Scenes. A Scene contains, for example, the geometry, lights, and camera that need to be rendered. I created two private fields for these because they need to be available from different places in the class.

The engine itself is instantiated in the constructor of the VRApp class. You need to pass two parameters to the constructor of the BabylonJS Engine: a reference to the canvas and a bool to turn antialiasing on. After that, we can instantiate a scene and pass it the engine. Right now, your code should look something like this:
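
(A sketch of what that could look like; looking up the canvas with querySelector is my own choice.)

class VRApp {
    private engine: BABYLON.Engine;
    private scene: BABYLON.Scene;

    constructor() {
        // the canvas element from the HTML file
        const canvas = document.querySelector('canvas');
        this.engine = new BABYLON.Engine(canvas, true); // true = antialiasing on
        this.scene = new BABYLON.Scene(this.engine);
    }

    run(): void {
        // the render loop will be started here
    }
}

const app = new VRApp();
app.run();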

Next, we need to add a few things to the scene to render. We need a light to illuminate the scene. The first light I often create is a hemispheric light. This light has a direction; this is not the direction of the light itself, but of its reflection. The hemispheric light is used to create some ambient lighting in your scene. For ambient lighting in combination with other lights, you often point it straight up. In this case, I kept it at an angle to get some shading going.
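
A sketch of such a light (the direction vector here is my own pick, angled a bit), added in the constructor:

// a hemispheric light, angled to get some shading
const light = new BABYLON.HemisphericLight(
    'light', new BABYLON.Vector3(0.5, 1, 0.25), this.scene);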

Lighting alone won't do anything; we need some geometry. For the ground, I create a Ground Mesh. This plane is optimized for use as a ground and can be used in more advanced scenarios, like octrees, if you wish in the future.
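
Something like this, with the size picked arbitrarily:

const ground = BABYLON.MeshBuilder.CreateGround(
    'ground', { width: 40, height: 40, subdivisions: 2 }, this.scene);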

The rest of the scene will be made from a couple of cubes randomly scattered around. I created a simple for-loop in which I create a cube mesh and change its position to a random value.
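
Along these lines (the number of cubes and the spread are my own values):

for (let i = 0; i < 25; i++) {
    const box = BABYLON.MeshBuilder.CreateBox('box-' + i, { size: 1 }, this.scene);
    box.position = new BABYLON.Vector3(
        Math.random() * 20 - 10,  // x
        0.5,                      // y, so the cube rests on the ground
        Math.random() * 20 - 10); // z
}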

Almost there. We need two more things. First, we need an implementation of the run function of the VRApp class. In this function, I provide the BabylonJS Engine created in the constructor with a render loop. The function we provide to the engine is called every frame and is responsible for rendering the scene. It can and probably will do more in the future, but for now, it only calls the render function of the scene.
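
The run function then boils down to something like:

run(): void {
    this.engine.runRenderLoop(() => {
        this.scene.render();
    });
}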

At this point, you should see an error when running the application using StackBlitz.

And the error is correct: we didn't create a camera. In a 'normal' WebGL application you need to create one, and you can do that in our case as well, but you don't have to. Turning a WebGL project into a WebVR project normally takes some effort: you need to configure everything and render through a special camera. To make it as easy as possible, BabylonJS has a method that creates all of this for you and converts your application to WebVR: createDefaultVRExperience. The function creates a default VR experience helper. This helper adds the VR button to the UI, checks whether WebVR is available, and by default creates (or replaces) the device orientation camera for you. I've added the following to the end of the constructor of the VRApp class:
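
(The options are left at their defaults here.)

// at the end of the constructor: set up WebVR, including the VR button
this.scene.createDefaultVRExperience();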

Result

The result of the tutorial should look something like this; the full code is in there as well:

You can open this code on StackBlitz and play with it yourself. Of course, there's much more you can do with WebVR, but this is it for this tutorial. If you have any questions, feel free to add a comment or come to our meetup on the 12th of June in Eindhoven, The Netherlands.

Issues with native WebVR on Daydream

When I was building my entry for the JS13K games contest, I tested my app on the Google Daydream. One issue I ran into was that it was hard to focus my eyes in VR. I thought it was just me at first and during the contest, I didn’t have the time to investigate any further anyway. Now that the contest is over I’ve decided to dig in a little deeper.

Debugging

I'd like to start with a little background on how to debug a WebVR app running on a phone in a Daydream View. Since the app will probably be running in Chrome on the device (although debugging Microsoft Edge works as well!), I use the Chrome remote inspector. The remote inspector can be accessed by starting Chrome on your dev machine and navigating to chrome://inspect. When your phone is connected to your PC via USB it will show up there, but it is possible to find it over WiFi as well. Select the page you want to debug and click 'inspect'.

Now you can debug the page as you would normally.

To test whether the page supports WebVR, you can check if 'navigator.getVRDisplays' exists. Just type this in the console:

navigator.getVRDisplays

You should get one of these three responses:

  • undefined => WebVR isn't available at all, not natively and not through a polyfill.
  • A function implemented in JavaScript => the function comes from the polyfill.
  • A function implemented natively => the best case scenario.

In the last two cases, you'll be able to call the function and get info about the device. The function returns a promise, but with the following line you can view the result in the console:

navigator.getVRDisplays().then(d => console.log(d));

Now you can dig into the result and see what is in there, for example, the device that is being used.

Issue and Fix

It turned out that for some reason my WebVR app was defaulting to 'Google Cardboard' and not Daydream, while other examples had access to native Daydream VR. This seemingly weird behavior is caused by the polyfill, flags in Chrome, and a meta tag.

After a lot of debugging, testing and digging around, I learned the following:

  • The polyfill defaults to Cardboard, even when running on a Daydream device, which results in the wrong settings for the Daydream lenses. This makes it very hard to focus your eyes in VR.
  • To get WebVR running on your Daydream-supported Android device everywhere, on any page, you need to enable it through a flag. Go to about://flags and search for 'WebVR'. After doing that it runs natively.
  • If you don't want your users to have to enable a flag, there's another way: adding a meta tag. You have to enable the experimental feature by registering for an origin trial. This is very easy to do. Start by going to the OriginTrials GitHub and follow the link there to a form to register. You'll receive a token you can add to the page to enable the WebVR feature for every visitor.

That last point, the meta tag, is what is used in the examples from three.js.

JS13k Games Post Mortem | A JavaScript WebVR game using A-Frame for the Js13KGames jam

On the 13th of September, the JS13k Games jam 2017 ended. The challenge in this contest is to build a game in JavaScript that fits in a 13 kB .zip file, in one month. New this year was the A-Frame category. A-Frame is a web framework for building virtual reality experiences on the web. I love 3D and VR, programming in JavaScript, and a challenge, so I decided to participate (again) this year. Eventually, the biggest challenge turned out to be time: I ran out of it, with about 5 kB of the 13 kB still unused.

Since I had always wanted to try creating a roguelike game, that was going to be my game type, one that I hoped I would love as much as I love the no game throws from P4rgaming. The world would be generated randomly, with monsters and loot, and you would have to search for pieces of your spaceplane that crashed onto this planet. At first, I wanted to create portals to go from level to level until the player had found all the pieces. Due to the time constraint, I later changed this to one level with 5 pieces you have to find. I wanted a 'low res', pixel-arty, Minecraft-like look.

A-Frame

A-Frame uses a declarative HTML syntax which makes it very easy to understand (and to copy-paste). A-Frame supports all the major virtual reality devices, mobile devices, and the desktop. It is built on top of three.js. This way you can leverage the full potential of 3D on the web.

A-Frame works by writing components to use in combination with the markup. The way this works reminded me a lot of the way games are built in Unity3D: creating a 'script' and attaching it to a game element. It took me a few days to get into this mindset, but when it clicked I started creating components for every element of the game, like the player, the mobs, the ground tiles and the materials.
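
A minimal sketch of that pattern (the component name and body here are made up, not the actual game code); in the markup the component is then attached as an attribute on an entity, just like in the a-scene example earlier:

AFRAME.registerComponent('mob', {
  init: function () {
    // runs once, when the component is attached to an entity
  },
  tick: function (time, timeDelta) {
    // runs every frame, e.g. to move the mob around
  }
});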

For moving around, I chose to limit movement to 'gazing' at the tile next to your character to move to that tile. This feature is implemented in A-Frame and can be handled by having a 'click' event handler on a ground entity. The event can be filtered by using a CSS class on the ground elements.
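
A rough sketch of the idea (groundEl and playerEl are hypothetical element references; in the game this lives in a proper component):

// move the player to a ground tile when it is 'clicked' (gazed at);
// the cursor's raycaster can be limited to elements with a .ground class
groundEl.addEventListener('click', function () {
  var pos = groundEl.getAttribute('position');
  playerEl.setAttribute('position', pos.x + ' 0 ' + pos.z);
});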

Map Generator

A hard part to build was the map generator. Though there is a lot of information on the web about building dungeon generators, pretty much all of them are too complex for the 13 kB limit.

One thing I kept in mind is the fact that the 13 kB limit applies to the zipped version of your game. Having very repetitive data inside the game should, in theory, compress quite well. I decided to go for a type of generator that places and connects various 'rooms'. The rooms are constructed from 2D arrays containing 0s, 1s and 2s, where 1s are floors and 2s are exits. I also wrote a function that mirrors the rooms horizontally and vertically.
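
To give an idea (this room layout is made up, not one from the game):

// 0 = nothing, 1 = floor, 2 = exit
var room = [
  [0, 2, 0],
  [1, 1, 1],
  [0, 1, 0]
];

// mirror a room horizontally by reversing every row
function mirrorHorizontal(room) {
  return room.map(function (row) { return row.slice().reverse(); });
}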

To save bytes on holding the generated map in memory, I used a canvas instead of an array. The canvas has every function I need to write and read data for the map.

Every tile in the game is represented by a pixel on the map, and each pixel consists of 4 bytes. I used the first to represent the floor, the second to represent an item and the third for a mob. The actual value of these bytes varies per type of mob or item. The best side effect of using a canvas is that I could add it to the DOM to debug the map generator. Below is an image of the map. You can clearly see the bright green pixels, which are the pieces of the plane.
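
A sketch of how reading a tile from such a canvas-backed map could look (the map size and the getTile name are my own):

var map = document.createElement('canvas');
map.width = map.height = 64;
var ctx = map.getContext('2d');

function getTile(x, y) {
  var data = ctx.getImageData(x, y, 1, 1).data; // 4 bytes: r, g, b, a
  return { floor: data[0], item: data[1], mob: data[2] };
}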

The next step after generating the map is to write it to the 3D scene. For this, I used a couple of for-loops to look at each pixel and draw a floor tile if the first byte has a non-zero value. I also gave the A-Frame entities an ID based on their position on the map. That way it became very easy to find an entity based on its position on the map.
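
In sketch form (getTile is the hypothetical helper from above, sceneEl a reference to the a-scene element):

for (var y = 0; y < map.height; y++) {
  for (var x = 0; x < map.width; x++) {
    if (getTile(x, y).floor !== 0) {
      var tile = document.createElement('a-entity');
      tile.setAttribute('id', 'tile-' + x + '-' + y); // position-based id
      tile.setAttribute('position', x + ' 0 ' + y);
      sceneEl.appendChild(tile);
    }
  }
}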

Shaders and Art

To get the game to look the way I wanted, I wrote a couple of shaders: two vertex shaders and a fragment shader. The first vertex shader is pretty simple; it only transforms the vertices to 2D space and calculates the UV coordinates for the sprites. The second vertex shader is slightly more complex. It is used to billboard the sprites, so they are rendered facing the camera at all times. The fragment shader is used to render the sprites and uses a few tricks.

Speaking of sprites. The sprite sheet I used is this:

Note that there are no colored floor tiles, mobs or items in there. The shader I wrote replaces the gray colors of these with colors from a second sprite sheet.

I was planning on creating a lot more ground tiles in different variations. The plan was, for example, to have a grassy biome. This biome would have all kinds of variations of grass tiles. Some all green, some with yellow flowers and some with stone tiles. I even thought of doing animations on lava and water by cycling through a couple of different rows from the palette. But, again, I didn’t have the time to do so.

For creating and testing the shaders I used ShaderTool and Firefox. ShaderTool makes it very easy to write your GLSL code and test it by wiring up the inputs. I used Firefox for fine-tuning and debugging. The GLSL dev tools in Firefox let you modify the shader while it is running. Very handy.

The biggest issue I ran into while creating the shaders was handling the opacity correctly. I eventually added a parameter as a cutoff value. If the alpha drops below this threshold, the pixel is discarded.

Text

Near the end of the contest, someone mentioned on Slack that the a-text component downloads a font. Since using external resources besides the A-Frame library itself was prohibited, using the a-text component wasn't allowed either. I solved this by creating a new component that renders text on a canvas and uses it as a texture on a plane entity. Too bad this took some time to build and caused other items on my todo list to be dropped.
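
The gist of such a component, as a sketch rather than the exact code from the game (component name, canvas size and font are my own choices):

AFRAME.registerComponent('canvas-text', {
  schema: { value: { default: '' } },
  init: function () {
    var canvas = document.createElement('canvas');
    canvas.width = 256;
    canvas.height = 64;
    var ctx = canvas.getContext('2d');
    ctx.fillStyle = '#fff';
    ctx.font = '32px monospace';
    ctx.fillText(this.data.value, 8, 40);

    // apply the canvas as a texture once the plane's mesh is available
    var el = this.el;
    var apply = function () {
      var mesh = el.getObject3D('mesh');
      if (!mesh) { return; }
      mesh.material.map = new THREE.CanvasTexture(canvas);
      mesh.material.needsUpdate = true;
    };
    apply();                                   // mesh may already exist
    el.addEventListener('object3dset', apply); // or arrive a bit later
  }
});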

Future

You can play the game at JS13KGames or at my GitHub, where you can find the source as well. Since the game isn't entirely finished, I'm thinking of continuing to work on it. A few features I'd like to add: a better following camera, more intelligent mobs, an inventory for the items and durability on them so they break, and better biomes.

Just want to close by thanking Andrzej Mazur and c9bets for having this great jam every year!