

Part 2: Normal Mapped Sprites with WebGL

Note: This is not an introductory WebGL tutorial. If you're new to OpenGL/WebGL, I'd recommend starting out with this set of tutorials over at learningwebgl.com.

You'll probably want to start by checking out the demo and source. If you're reasonably familiar with WebGL, the shader sources may be all you're looking for. If you need some more hows and whys answered, then read on. Keep the source handy for reference.

I must confess that this demo ended up being much larger than I originally planned. Part of that is due to all the overhead involved in setting up WebGL: loading shaders and textures, acquiring attributes and uniforms, and so on. But it's also because I couldn't really demonstrate the potential or flexibility of sprites and normal maps without rendering a bunch of them at once. Which segues into...

Yes, in fact, we do need some stinkin' batches

One thing you'll learn (or have probably already learned) about WebGL, OpenGL and hardware-accelerated rendering in general, is that the more you can render all in a single draw call the better. In a 3D app, you tend to activate some shaders, set up a transform matrix and then render a complex model geometry, then you move on to the next. In a 2D, sprite-based app, you tend to have very simple geometry. Sprites are typically just 4 vertices each. Ideally, you'd like to treat all of your sprites, or as many sprites as you can, as a single mesh.

Unfortunately this means you can't go the usual route of setting a transform matrix between each sprite and letting the shader do the transforms for you. Even if you reduce your transform math to a 3x3 matrix and 3D vectors, building that matrix for every sprite takes just about as many arithmetic operations in Javascript as transforming the sprite's four vertices yourself. So let's do that. We'll transform the geometry ourselves (just like back in the day with immediate-mode 3D APIs), then render all those sprites in a single draw call rather than changing states, attributes and uniforms between every 4 vertices.
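
Here's a rough sketch of that idea in Javascript. This is not the demo's actual SpriteBatch code, and the function and variable names (writeSpriteVerts, positions, etc.) are just illustrative, but it shows the per-sprite transform done on the CPU and the single buffer upload and draw call:

    // Transform one sprite's 4 corners in Javascript and write them into a
    // shared Float32Array that gets uploaded once per frame.
    function writeSpriteVerts(positions, offset, x, y, halfW, halfH, rotation) {
      var c = Math.cos(rotation), s = Math.sin(rotation);
      // Corner offsets of the untransformed quad, centred on the origin
      var corners = [-halfW, -halfH, halfW, -halfH, halfW, halfH, -halfW, halfH];
      for (var i = 0; i < 4; ++i) {
        var cx = corners[i * 2], cy = corners[i * 2 + 1];
        // Rotate, then translate to the sprite's position
        positions[offset + i * 2]     = x + cx * c - cy * s;
        positions[offset + i * 2 + 1] = y + cx * s + cy * c;
      }
    }

    // Per frame: fill the array for every sprite, upload once, draw once.
    // (gl, positionBuffer, positions and numSprites set up elsewhere.)
    // gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
    // gl.bufferData(gl.ARRAY_BUFFER, positions, gl.DYNAMIC_DRAW);
    // gl.drawElements(gl.TRIANGLES, numSprites * 6, gl.UNSIGNED_SHORT, 0);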

The example source contains a SpriteBatch class. This class manages a list of sprites to draw, along with the buffers for the position vertices, texture coordinates and some other stuff we'll need. The game engine/app can then work with fairly easy-to-use sprite instances to move things around, rotate them, add/remove them, etc. If you look at the update() function, you'll see how little code is needed to independently animate each monkey.
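
To give a feel for it, here's roughly what that kind of per-sprite animation amounts to. The property names (rotation, spinSpeed, velocity) are placeholders rather than the demo's actual Sprite interface:

    // Hypothetical per-frame update: each sprite animates independently,
    // then the batch re-transforms all vertices and issues one draw call.
    function update(dt) {
      for (var i = 0; i < sprites.length; ++i) {
        var spr = sprites[i];
        spr.rotation += spr.spinSpeed * dt;
        spr.x += spr.velocityX * dt;
        spr.y += spr.velocityY * dt;
      }
    }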

The Quad class is kind of like a template from which Sprite instances are formed. It represents the shape to draw (in this case a square) and the texture coordinates to use from the texture map. The example only uses a single Quad definition, but if you had a sprite atlas with a bunch of sprites packed onto one texture (along with a corresponding normal map texture) you could create a separate Quad for each and draw all sorts of different sprites, with different sizes or shapes, using the same texture, all in the same batch.
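
Conceptually, a Quad boils down to something like this (field names are illustrative, not the demo's exact definition):

    // One region of the sprite atlas: the shape's size plus the UV
    // sub-rectangle it samples from the shared diffuse and normal textures.
    var monkeyQuad = {
      width: 64, height: 64,   // size of the shape to draw
      u0: 0.0, v0: 0.0,        // top-left corner in the texture atlas
      u1: 0.25, v1: 0.25       // bottom-right corner of the atlas region
    };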

You could also render shapes made up of more (or fewer) than 4 vertices each, however that would require a different SpriteBatch implementation. Having a consistent number of vertices per sprite makes things nice and easy (and fast).

Using a different shader would also require a different batch. It would have its own set of uniforms and attributes, more or fewer textures, etc. The idea behind a batch is to render all the things that can use the same render states, texture maps and shader. In a game, you might render a bunch of "3D-style" sprites with lighting in one batch; then another batch for alpha-blended effects like explosions, which aren't affected by lighting, use a different blending mode, don't rotate, and have an additional scaling parameter; then another batch for particle animations with a completely different set of uniforms and attributes; and so on.

Dynamic lighting - what normal maps are for

Compared to the SpriteBatch, the lighting is actually pretty simple. A normal map is basically a 24-bit texture where the red, green and blue components are interpreted as x, y and z values (remapped from the 0..1 texture range to -1..1). This provides each pixel of the diffuse texture with its own "normal" vector. All we need to do is compute the dot product of the pixel's normal and the light direction to get the resulting light intensity. I've attempted to document this as clearly as possible in the vertex and fragment shader scripts at the top of the source.
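
Here's a simplified fragment shader sketch of that calculation. The uniform and varying names are illustrative; the real, commented shaders are at the top of the example source:

    precision mediump float;

    uniform sampler2D u_diffuse;   // colour texture
    uniform sampler2D u_normals;   // normal map texture
    varying vec2 v_texCoord;
    varying vec3 v_lightDir;       // light direction, adjusted per sprite

    void main() {
      vec4 colour = texture2D(u_diffuse, v_texCoord);
      // Remap the normal from the 0..1 texture range back to -1..1
      vec3 normal = normalize(texture2D(u_normals, v_texCoord).rgb * 2.0 - 1.0);
      // Dot product of normal and light direction gives the light intensity
      float intensity = max(dot(normal, normalize(v_lightDir)), 0.0);
      gl_FragColor = vec4(colour.rgb * intensity, colour.a);
    }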

The only tricky thing about it is doing the lighting for a batch of sprites, each with a different rotation value. We handle this by adding a "rotation" attribute to the vertex shader. Each vertex then has a corresponding rotation angle, so the shader can adjust the light direction vector correctly. This is called "rot_arr" in the SpriteBatch class.
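
A matching vertex shader sketch shows how that per-vertex rotation can be used to rotate the light direction into each sprite's frame. Again the names are illustrative, and a real shader would also apply the screen/projection transform to the position:

    attribute vec2 a_position;     // already transformed in Javascript
    attribute vec2 a_texCoord;
    attribute float a_rotation;    // one angle per vertex, from rot_arr

    uniform vec3 u_lightDir;       // directional light

    varying vec2 v_texCoord;
    varying vec3 v_lightDir;

    void main() {
      float c = cos(a_rotation);
      float s = sin(a_rotation);
      // Rotate the light's x/y by the inverse of the sprite rotation so the
      // unrotated normal map still lights correctly.
      v_lightDir = vec3(c * u_lightDir.x + s * u_lightDir.y,
                        -s * u_lightDir.x + c * u_lightDir.y,
                        u_lightDir.z);
      v_texCoord = a_texCoord;
      gl_Position = vec4(a_position, 0.0, 1.0);
    }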

I think that's about it then! If you save a copy of the source and grab the two textures (diffuse and normals), you can run it from your own localhost web server and play around with the code.

Additional References

My initial understanding of normal maps and shaders started with this example for libGDX, a Java-based, cross-platform game library. The libGDX guys were very helpful on their forum getting me through some of my early stumbling blocks. You might find that example useful if you're looking for point lighting with per-pixel accuracy rather than directional lighting.




© by Mike Linkovich. Last updated Feb 20 2014.
