Welcome to number five in my series of WebGL tutorials, based on number 6 in the NeHe OpenGL tutorials. This time we’re going to add a texture to a 3D object — that is, we will cover it with an image that we load from a separate file. This is a really useful way to add detail to your 3D scene without having to make the objects you’re drawing incredibly complex. Imagine a stone wall in a maze-type game; you probably don’t want to model each block in the wall as a separate object, so instead you create an image of masonry and cover the wall with it; a whole wall can now be just one object.
Here’s what the lesson looks like when run in a browser that supports WebGL:
Click here and you’ll see the live WebGL version, if you’ve got a browser that supports it; here’s how to get one if you don’t.
More on how it all works below…
The usual warning: these lessons are targeted at people with a reasonable amount of programming knowledge, but no real experience in 3D graphics; the aim is to get you up and running, with a good understanding of what’s going on in the code, so that you can start producing your own 3D Web pages as quickly as possible. If you haven’t read the previous tutorials already, you should probably do so before reading this one — here I will only explain the differences between the code for lesson 4 and the new code.
There may be bugs and misconceptions in this tutorial. If you spot anything wrong, let me know in the comments and I’ll correct it ASAP.
There are two ways you can get the code for this example; just “View Source” while you’re looking at the live version, or if you use GitHub, you can clone it (and the other lessons) from the repository there. Either way, once you have the code, load it up in your favourite text editor and take a look.
The trick to understanding how textures work is that they are a special way of setting the colour of a point on a 3D object. As you will remember from lesson 2, colours are specified by fragment shaders, so what we need to do is load the image and send it over to the fragment shader. The fragment shader also needs to know which bit of the image to use for the fragment it’s working on, so we need to send that information over to it too.
Let’s start off by looking at the code that loads the texture. We call it right at the start of the execution of our page’s JavaScript, in webGLStart at the bottom of the page (the new line is the call to initTexture):
```javascript
function webGLStart() {
  var canvas = document.getElementById("lesson05-canvas");
  initGL(canvas);
  initShaders();
  initBuffers();
  initTexture();

  gl.clearColor(0.0, 0.0, 0.0, 1.0);
```
Let’s look at initTexture — it’s about a third of the way from the top of the file, and is all new code:
```javascript
var neheTexture;

function initTexture() {
  neheTexture = gl.createTexture();
  neheTexture.image = new Image();
  neheTexture.image.onload = function() {
    handleLoadedTexture(neheTexture)
  }
  neheTexture.image.src = "nehe.gif";
}
```
So, we’re creating a global variable to hold the texture; obviously in a real-world example you’d have multiple textures and wouldn’t use globals, but we’re keeping things simple for now. We use gl.createTexture to create a texture reference to put into the global, then we create a JavaScript Image object and put it into a new attribute that we attach to the texture, yet again taking advantage of JavaScript’s willingness to set any field on any object; texture objects don’t have an image field by default, but it’s convenient for us to have one, so we create one. The obvious next step is to get the Image object to load up the actual image it will contain, but before we do that we attach a callback function to it; this will be called when the image has been fully loaded, and so it’s safest to set it first. Once that’s set up, we set the image’s src property, and we’re done. The image will load asynchronously — that is, the code that sets the src of the image will return immediately, and a background thread will load the image from the web server. Once it’s done, our callback gets called, and it calls handleLoadedTexture:
```javascript
function handleLoadedTexture(texture) {
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, texture.image);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.bindTexture(gl.TEXTURE_2D, null);
}
```
The first thing we do is tell WebGL that our texture is the “current” texture. WebGL texture functions all operate on this “current” texture instead of taking a texture as a parameter, and bindTexture is how we set the current one; it’s similar to the gl.bindBuffer pattern that we’ve looked at before.
Next, we tell WebGL that all images we load into textures need to be flipped vertically. We do this because of a difference in coordinates; for our texture coordinates, we use coordinates that, like the ones you would normally use in mathematics, increase as you move upwards along the vertical axis; this is consistent with the X, Y, Z coordinates we’re using to specify our vertex positions. By contrast, most other computer graphics systems — for example, the GIF format we use for the texture image — use coordinates that increase as you move downwards on the vertical axis. The horizontal axis is the same in both coordinate systems. This difference on the vertical axis means that from the WebGL perspective, the GIF image we’re using for our texture is already flipped vertically, and we need to “unflip” it. (Thanks to Ilmari Heikkinen for clarifying that in the comments.)
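To make that flip concrete, here’s a plain-JavaScript sketch (not part of the lesson code) of what UNPACK_FLIP_Y_WEBGL asks WebGL to do: reverse the order of an image’s pixel rows, so that a file stored top-row-first, like our GIF, matches WebGL’s bottom-up vertical axis.

```javascript
// Reverse the vertical order of pixel rows, as WebGL does for us when
// UNPACK_FLIP_Y_WEBGL is set. "rows" is a row-major grid: an array of
// rows, each row an array of pixel values, first row at the top.
function flipRowsVertically(rows) {
  return rows.slice().reverse(); // copy, then reverse row order
}

var image = [
  ["top-left", "top-right"],
  ["bottom-left", "bottom-right"],
];
var flipped = flipRowsVertically(image);
// flipped[0] is now the original bottom row, so row 0 corresponds to
// t = 0 in WebGL's texture coordinate system.
```

Note that the horizontal order within each row is untouched, just as the horizontal axis agrees between the two coordinate systems.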
The next step is to upload our freshly-loaded image to the texture’s space in the graphics card using texImage2D. The parameters are, in order, what kind of image we’re using, the level of detail (which is something we’ll look at in a later lesson), the format in which we want it to be stored on the graphics card (repeated twice for reasons we’ll also look at later), the size of each “channel” of the image (that is, the datatype used to store red, green, or blue), and finally the image itself.
On to the next two lines: these specify special scaling parameters for the texture. The first tells WebGL what to do when the texture is filling up a large amount of the screen relative to the image size; in other words, it gives it hints on how to scale it up. The second is the equivalent hint for how to scale it down. There are various kinds of scaling hints you can specify; NEAREST is the least attractive of these, as it just says you should use the original image as-is, which means that it will look very blocky when close-up. It has the advantage, however, of being really fast, even on slow machines. In the next lesson we’ll look at using different scaling hints, so you can compare the performance and appearance of each.
Once this is done, we set the current texture to null; this is not strictly necessary, but is good practice; a kind of tidying up after yourself.
So, that’s all the code required to load the texture. Next, let’s move on to initBuffers. This has, of course, lost all of the code relating to lesson 4’s pyramid, which we’ve now removed; a more interesting change is the replacement of the cube’s vertex colour buffer with a new one — the texture coordinate buffer. It looks like this:
```javascript
  cubeVertexTextureCoordBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, cubeVertexTextureCoordBuffer);
  var textureCoords = [
    // Front face
    0.0, 0.0,
    1.0, 0.0,
    1.0, 1.0,
    0.0, 1.0,

    // Back face
    1.0, 0.0,
    1.0, 1.0,
    0.0, 1.0,
    0.0, 0.0,

    // Top face
    0.0, 1.0,
    0.0, 0.0,
    1.0, 0.0,
    1.0, 1.0,

    // Bottom face
    1.0, 1.0,
    0.0, 1.0,
    0.0, 0.0,
    1.0, 0.0,

    // Right face
    1.0, 0.0,
    1.0, 1.0,
    0.0, 1.0,
    0.0, 0.0,

    // Left face
    0.0, 0.0,
    1.0, 0.0,
    1.0, 1.0,
    0.0, 1.0,
  ];
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(textureCoords), gl.STATIC_DRAW);
  cubeVertexTextureCoordBuffer.itemSize = 2;
  cubeVertexTextureCoordBuffer.numItems = 24;
```
You should be pretty comfortable with this kind of code now, and see that all we’re doing is specifying a new per-vertex attribute in an array buffer, and that this attribute has two values per vertex. What these texture coordinates specify is where, in Cartesian x, y coordinates, the vertex lies in the texture. For the purposes of these coordinates, we treat the texture as being 1.0 wide by 1.0 high, so (0, 0) is at the bottom left, (1, 1) the top right. The conversion from this to the real resolution of the texture image is handled for us by WebGL.
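That conversion is worth seeing once in slow motion. The hypothetical helper below (for illustration only — WebGL does this for us) maps a normalized (s, t) coordinate to a pixel position in an image whose rows are stored top-to-bottom, as in our GIF; the flip on the vertical axis is the same one discussed above.

```javascript
// Map normalized texture coordinates, with (0, 0) at the bottom left and
// (1, 1) at the top right, to integer pixel positions in an
// imageWidth x imageHeight image stored top-row-first.
function texCoordToPixel(s, t, imageWidth, imageHeight) {
  // Clamp so s = 1.0 and t = 1.0 stay inside the image.
  var x = Math.min(Math.floor(s * imageWidth), imageWidth - 1);
  // t increases upwards, but image rows count downwards, hence (1 - t).
  var yFromTop = Math.min(Math.floor((1 - t) * imageHeight), imageHeight - 1);
  return { x: x, y: yFromTop };
}

// For a 256x256 image, (0, 0) lands on the bottom-left pixel and
// (1, 1) on the top-right pixel.
var bottomLeft = texCoordToPixel(0.0, 0.0, 256, 256);
var topRight = texCoordToPixel(1.0, 1.0, 256, 256);
```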
That’s the only change in initBuffers, so let’s move on to drawScene. The most interesting changes in this function are, of course, the ones that make it use the texture. However, before we go through these, there are a number of changes related to really simple stuff like the removal of the pyramid and the fact that the cube is now spinning around in a different way. I won’t describe these in detail, as they should be pretty easy to work out; you can see them in this snippet from the top of the drawScene function:
```javascript
var xRot = 0;
var yRot = 0;
var zRot = 0;

function drawScene() {
  gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

  mat4.perspective(45, gl.viewportWidth / gl.viewportHeight, 0.1, 100.0, pMatrix);

  mat4.identity(mvMatrix);

  mat4.translate(mvMatrix, [0.0, 0.0, -5.0]);

  mat4.rotate(mvMatrix, degToRad(xRot), [1, 0, 0]);
  mat4.rotate(mvMatrix, degToRad(yRot), [0, 1, 0]);
  mat4.rotate(mvMatrix, degToRad(zRot), [0, 0, 1]);

  gl.bindBuffer(gl.ARRAY_BUFFER, cubeVertexPositionBuffer);
  gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, cubeVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);
```
There are also matching changes in the animate function to update xRot, yRot and zRot, which I won’t go over.
So, with those out of the way, let’s look at the texture code. In initBuffers we set up a buffer containing the texture coordinates, so here we need to bind it to the appropriate attribute so that the shaders can see it:
```javascript
  gl.bindBuffer(gl.ARRAY_BUFFER, cubeVertexTextureCoordBuffer);
  gl.vertexAttribPointer(shaderProgram.textureCoordAttribute, cubeVertexTextureCoordBuffer.itemSize, gl.FLOAT, false, 0, 0);
```
…and now that WebGL knows which bit of the texture each vertex uses, we need to tell it to use the texture that we loaded earlier, then draw the cube:
```javascript
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, neheTexture);
  gl.uniform1i(shaderProgram.samplerUniform, 0);

  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, cubeVertexIndexBuffer);
  setMatrixUniforms();
  gl.drawElements(gl.TRIANGLES, cubeVertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);
```
What’s happening here is somewhat complex. WebGL can deal with up to 32 textures during any given call to functions like gl.drawElements, and they’re numbered from TEXTURE0 to TEXTURE31. What we’re doing is saying in the first two lines that texture zero is the one we loaded earlier, and then in the third line we’re passing the value zero up to a shader uniform (which, like the other uniforms that we use for the matrices, we extract from the shader program in initShaders); this tells the shader that we’re using texture zero. We’ll see how that’s used later.
Anyway, once those three lines are executed, we’re ready to go, so we just use the same code as before to draw the triangles that make up the cube.
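If the activeTexture/bindTexture/uniform1i dance seems indirect, it can help to picture the texture units as numbered slots. The sketch below is a deliberately simplified model for illustration (the names and structure are my own, not WebGL internals): activeTexture picks a slot, bindTexture fills the currently-picked slot, and a sampler uniform set to n reads from slot n.

```javascript
// A toy model of the texture-unit state machine, for intuition only.
function createTextureUnits() {
  var units = new Array(32).fill(null); // slots TEXTURE0..TEXTURE31
  var active = 0;                       // which slot bindTexture targets
  return {
    activeTexture: function (unit) { active = unit; },
    bindTexture: function (texture) { units[active] = texture; },
    // What a sampler uniform with value n would see:
    textureForSampler: function (n) { return units[n]; },
  };
}

// The three lines from drawScene, replayed against the model:
var model = createTextureUnits();
model.activeTexture(0);            // gl.activeTexture(gl.TEXTURE0)
model.bindTexture("neheTexture");  // gl.bindTexture(gl.TEXTURE_2D, neheTexture)
var samplerValue = 0;              // gl.uniform1i(shaderProgram.samplerUniform, 0)
```

The key point is that the uniform carries the slot *number*, not the texture itself; the shader’s sampler looks the texture up by that number.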
The only remaining new code to explain is the changes to the shaders. Let’s look at the vertex shader first:
```glsl
attribute vec3 aVertexPosition;
attribute vec2 aTextureCoord;

uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;

varying vec2 vTextureCoord;

void main(void) {
  gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
  vTextureCoord = aTextureCoord;
}
```
This is very similar to the colour-related stuff we put into our vertex shader in lesson 2; all we’re doing is accepting the texture coordinates (again, instead of the colour) as a per-vertex attribute, and passing them straight out in a varying variable.
Once this has been called for each vertex, WebGL will work out values for the fragments (which, remember, are basically just pixels) between vertices by using linear interpolation between the vertices — just as it did with the colours in lesson 2. So, a fragment half-way between vertices with texture coordinates (1, 0) and (0, 0) will get the texture coordinates (0.5, 0), and one halfway between (0, 0) and (1, 1) will get (0.5, 0.5). Next stop, the fragment shader:
```glsl
precision mediump float;

varying vec2 vTextureCoord;

uniform sampler2D uSampler;

void main(void) {
  gl_FragColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
}
```
So, we pick up the interpolated texture coordinates, and we have a variable of type sampler, which is the shader’s way of representing the texture. In drawScene, our texture was bound to gl.TEXTURE0, and the uniform uSampler was set to the value zero, so this sampler represents our texture. All the shader does is use the function texture2D to get the appropriate colour from the texture using the coordinates. Textures traditionally use s and t for their coordinates rather than x and y, and the shader language supports these as aliases; we could just as easily have used vTextureCoord.x and vTextureCoord.y.
Once we have the colour for the fragment, we’re done! We have a textured object on the screen.
So, that’s it for this time. Now you know all there is to learn from this lesson: how to add textures to 3D objects in WebGL by loading an image, telling WebGL to use it for a texture, giving your object texture coordinates, and using the coordinates and the texture in the shaders.
If you have any questions, comments, or corrections, please do leave a comment below!
Otherwise, check out the next lesson, in which I show how you can get basic key-based input into the JavaScript that animates your 3D scene, so that we can start making it interact with the person viewing the web page. We’ll use that to allow the viewer to change the spin of the cube, to zoom in and out, and to adjust the hints given to WebGL to control the scaling of textures.
Acknowledgments: Chris Marrin’s spinning box was a great help when writing this, as was an extension of that demo by Jacob Seidelin. As always, I’m deeply in debt to NeHe for his OpenGL tutorial for the script for this lesson.