
Welcome to number eleven in my series of WebGL tutorials, the first one that isn’t based on the NeHe OpenGL tutorials. In it, we’ll show a texture-mapped sphere with directional lighting, which the viewer can spin around using the mouse.

Here’s what the lesson looks like when run in a browser that supports WebGL:

Click here and you’ll see the live WebGL version, if you’ve got a browser that supports it; here’s how to get one if you don’t. You’ll see a white sphere for a few moments while the texture loads, and then once that’s done you should see the moon, with lighting coming from above, to the right, and towards you. If you drag it around, it will spin, with the lighting remaining constant. If you want to change the lighting, there are fields beneath the WebGL canvas that will be familiar from lesson 7.

More on how it all works below…

The usual warning: these lessons are targeted at people with a reasonable amount of programming knowledge, but no real experience in 3D graphics; the aim is to get you up and running, with a good understanding of what’s going on in the code, so that you can start producing your own 3D Web pages as quickly as possible. If you haven’t read the previous tutorials already, you should probably do so before reading this one — here I will only explain the new stuff. The lesson is based on lesson 7, so you should make sure that you understand that one (and please do post a comment on that post if anything’s unclear about it!)

There may be bugs and misconceptions in this tutorial. If you spot anything wrong, let me know in the comments and I’ll correct it ASAP.

There are two ways to get the code for this example: just “View Source” while you’re looking at the live version, or, if you use GitHub, clone it (and the other lessons) from the repository there.

As usual, the best way to understand the code for this page is by starting at the bottom and working our way up, looking at all of the new stuff as we go. The HTML code inside the `<body>` tags is no different to lesson 7, so let’s kick off with the new code in `webGLStart`:

```
function webGLStart() {
  canvas = document.getElementById("lesson11-canvas");
  initGL(canvas);
  initShaders();
  initBuffers();
  initTexture();

  gl.clearColor(0.0, 0.0, 0.0, 1.0);
  gl.enable(gl.DEPTH_TEST);

  canvas.onmousedown = handleMouseDown;
  document.onmouseup = handleMouseUp;
  document.onmousemove = handleMouseMove;

  tick();
}
```

These three new lines allow us to detect mouse events and thus spin the moon when the user drags it around. Obviously, we only want to pick up mouse-down events on the 3D canvas (it would be confusing if the moon spun around when you dragged somewhere else in the HTML page, for example in the lighting text fields). Slightly less obviously, we want to listen for mouse-up and -move events on the document rather than the canvas. By doing this, we can pick up drag events even when the mouse is moved or released outside the 3D canvas, so long as the drag started in the canvas. This stops us from being one of those irritating pages where you press the mouse button inside the scene you want to spin, release it outside, and then find when you move the mouse back over the scene that the mouse-up never took effect: the page still thinks you’re dragging stuff around until you click somewhere.

Moving a bit further up through the code, we come to our `tick` function, which for this page just schedules the next frame and calls `drawScene`; it has no need to handle keys (because we’re not looking at key-presses) or to animate the scene (because it only responds to user input and does no independent animation).

```
function tick() {
  requestAnimFrame(tick);
  drawScene();
}
```

The next relevant changes are in `drawScene`. We start it off with our boilerplate canvas-clearing and perspective code, then do the same as we did in lesson 7 to set up the lighting:

```
function drawScene() {
  gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  mat4.perspective(45, gl.viewportWidth / gl.viewportHeight, 0.1, 100.0, pMatrix);

  var lighting = document.getElementById("lighting").checked;
  gl.uniform1i(shaderProgram.useLightingUniform, lighting);
  if (lighting) {
    gl.uniform3f(
      shaderProgram.ambientColorUniform,
      parseFloat(document.getElementById("ambientR").value),
      parseFloat(document.getElementById("ambientG").value),
      parseFloat(document.getElementById("ambientB").value)
    );

    var lightingDirection = [
      parseFloat(document.getElementById("lightDirectionX").value),
      parseFloat(document.getElementById("lightDirectionY").value),
      parseFloat(document.getElementById("lightDirectionZ").value)
    ];
    var adjustedLD = vec3.create();
    vec3.normalize(lightingDirection, adjustedLD);
    vec3.scale(adjustedLD, -1);
    gl.uniform3fv(shaderProgram.lightingDirectionUniform, adjustedLD);

    gl.uniform3f(
      shaderProgram.directionalColorUniform,
      parseFloat(document.getElementById("directionalR").value),
      parseFloat(document.getElementById("directionalG").value),
      parseFloat(document.getElementById("directionalB").value)
    );
  }
```

Next, we move to the correct position to draw the moon:

```
  mat4.identity(mvMatrix);
  mat4.translate(mvMatrix, [0, 0, -6]);
```

…and here comes the first bit that might look odd! For reasons that I’ll explain a little later, we’re storing the current rotational state of the moon in a matrix; this matrix starts off as the identity matrix (i.e., we don’t rotate it at all), and then as the user manipulates it with the mouse, it changes to reflect those manipulations. So, before we draw the moon, we need to apply the rotation matrix to the current model-view matrix, which we can do with the `mat4.multiply` function:

```
  mat4.multiply(mvMatrix, moonRotationMatrix);
```

Once that’s done, all that remains is to actually draw the moon. This code is pretty standard — we just set the texture then use the same kind of code as we’ve used many times before to tell WebGL to use some pre-prepared buffers to draw a bunch of triangles:

```
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, moonTexture);
  gl.uniform1i(shaderProgram.samplerUniform, 0);

  gl.bindBuffer(gl.ARRAY_BUFFER, moonVertexPositionBuffer);
  gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, moonVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);

  gl.bindBuffer(gl.ARRAY_BUFFER, moonVertexTextureCoordBuffer);
  gl.vertexAttribPointer(shaderProgram.textureCoordAttribute, moonVertexTextureCoordBuffer.itemSize, gl.FLOAT, false, 0, 0);

  gl.bindBuffer(gl.ARRAY_BUFFER, moonVertexNormalBuffer);
  gl.vertexAttribPointer(shaderProgram.vertexNormalAttribute, moonVertexNormalBuffer.itemSize, gl.FLOAT, false, 0, 0);

  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, moonVertexIndexBuffer);
  setMatrixUniforms();
  gl.drawElements(gl.TRIANGLES, moonVertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);
}
```

So: how do we create vertex position, normal, texture-coordinate and index buffers with the correct values to draw a sphere? Conveniently, that’s the next function up: `initBuffers`.

It starts off by defining global variables for the buffers, and deciding how many latitude and longitude bands to use, and what the sphere’s radius is. If you were going to use this code in a WebGL page of your own, you’d parameterise the latitude and longitude bands, and the radius, and you’d store the buffers somewhere other than in global variables. I’ve done it in this simple imperative way so as not to impose a particular OO or functional design on you.

```
var moonVertexPositionBuffer;
var moonVertexNormalBuffer;
var moonVertexTextureCoordBuffer;
var moonVertexIndexBuffer;

function initBuffers() {
  var latitudeBands = 30;
  var longitudeBands = 30;
  var radius = 2;
```

So, what are the latitude and longitude bands? In order to draw a set of triangles that approximate a sphere, we need to divide it up. There are many clever ways of doing this; here’s one simple way, based on high-school geometry, that (a) gets perfectly decent results and (b) I can actually understand without making my head hurt. It’s based on one of the demos on the Khronos website, was originally developed by the WebKit team, and it works like this:

Let’s start by defining the terminology: the lines of latitude are the ones that, on a globe, tell you how far north or how far south you are. The distance between them, as measured along the surface of the sphere, is constant. If you were to slice up a sphere from top to bottom along its lines of latitude, you’d wind up with thin lens-shaped bits for the top and the bottom, and then increasingly thick disc-like slices for the middle. (If this is hard to visualise, imagine slicing a tomato into discs for a salad, but trying to keep the same length of skin from the top to the bottom of each slice. Obviously the slices in the middle would be thicker than those at the top.)

The lines of longitude are different; they divide the sphere into segments. If you were to slice a sphere up along its lines of longitude, it would come apart rather like an orange.

Now, to draw our sphere, imagine that we’ve drawn the lines of latitude from top to bottom, and the lines of longitude around it. What we want to do is work out all of the points where those lines intersect, and use those as vertex positions. We can then split each square formed by two adjacent lines of longitude and two adjacent lines of latitude into two triangles, and draw them. Hopefully the image to the left makes that a little bit clearer!

The next question is, how do we calculate the points where the lines of latitude and longitude intersect? Let’s assume that the sphere has a radius of one unit, and start by taking a slice vertically through its centre, in the plane of the X and Y axes, as in the example to the right. Obviously, the slice’s shape is a circle, and the lines of latitude are lines across that circle. In the example, you can see that we’re looking at the third latitude band from the top, and there are 10 latitude bands in total. The angle between the Y axis and the point where the latitude band reaches the edge of the circle is θ. With a bit of simple trigonometry, we can see that the point has a Y coordinate of cos(θ) and an X coordinate of sin(θ).

Now, let’s generalise that to work out the equivalent points for all lines of latitude. Because we want each line to be separated by the same distance around the surface of the sphere from its neighbour, we can simply define them by values of θ that are evenly spaced. There are π radians in a semi-circle, so in our ten-line example we can take values of θ of 0, π/10, 2π/10, 3π/10, and so on up to 10π/10, and we can be sure that we’ve split up the sphere into even bands of latitude.

Now, all of the points at a particular latitude, whatever their longitude, have the same Y coordinate. So, given the formula for the Y coordinate above, we can say that all of the points around the *n*th latitude of the sphere of radius one and ten lines of latitude will have the Y coordinate of cos(*n*π / 10).

So that’s sorted out the Y coordinate. What about X and Z? Well, just as we can see that the Y coordinate is cos(*n*π / 10), we can see that the X coordinate of the point where Z is zero is sin(*n*π / 10). Let’s take a different slice through the sphere, as shown in the picture to the left: a horizontal one through the plane of the *n*th latitude. We can see that all of the points are in a circle with a radius of sin(*n*π / 10); let’s call this value *k*. If we divide the circle up by the lines of longitude, of which we will assume there are 10, and consider that there will be 2π radians in the circle and thus values for φ, the angle we’re taking around the circle, of 0, 2π/10, 4π/10, and so on, once again by simple trigonometry we can see that our X coordinate is *k*cosφ and our Z coordinate is *k*sinφ.

So, to generalise, for a sphere of radius *r*, with *m* latitude bands and *n* longitude bands, we can generate values for *x*, *y*, and *z* by taking a range of values for θ by splitting the range 0 to π up into *m* parts, and taking a range of values for φ by splitting the range 0 to 2π into *n* parts, and then just calculating:

- x = *r* sinθ cosφ
- y = *r* cosθ
- z = *r* sinθ sinφ
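The three formulas above can be sketched as a small standalone helper; the function name `spherePoint` is my own, for illustration, and is not from the lesson’s code:

```javascript
// Hypothetical helper: the [x, y, z] position on a sphere of radius r,
// for latitude angle theta (0..PI, measured from the top) and
// longitude angle phi (0..2*PI, measured around the Y axis).
function spherePoint(r, theta, phi) {
  return [
    r * Math.sin(theta) * Math.cos(phi),  // x
    r * Math.cos(theta),                  // y
    r * Math.sin(theta) * Math.sin(phi)   // z
  ];
}
```

For example, θ = 0 gives the north pole (0, *r*, 0) whatever φ is, and θ = π/2 sweeps out the equator.
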

That’s how we work out the vertices. Now, what about the other values we need for each point: the normals and the texture coordinates? Well, the normals are really easy: remember, a normal is a vector with a length of one that sticks directly out of a surface. For a sphere with a radius of one unit, that’s the same as the vector that goes from the centre of the sphere to the surface — which is something we’ve already worked out as part of calculating the vertex’s position! In fact, the easiest way to calculate the vertex position and the normal is just to do the calculations above but not multiply them by the radius, store the results as the normal, and then to multiply the normal values by the radius to get the vertex positions.

The texture coordinates are, if anything, easier. We expect that when someone provides a texture to put onto a sphere, they’ll give us a rectangular image (indeed, WebGL, to say nothing of JavaScript, would probably be confused by anything else!). We can safely assume that this texture is stretched at the top and the bottom, following the Mercator projection. This means that we can split up the left-to-right *u* texture coordinate evenly by the lines of longitude and the top-to-bottom *v* evenly by the lines of latitude.

Right, that’s how it works; the JavaScript code should now be incredibly easy to understand! We just loop through all of the latitudinal slices, then within that loop we run through the longitudinal segments, and we generate the normals, texture coordinates, and vertex positions for each. The only oddity to note is that our loops terminate when the index is *greater* than the number of longitudinal/latitudinal lines; that is, we use `<=` rather than `<` in the loop conditions. This means that for, say, 30 longitudes, we will generate 31 vertices per latitude band. Because the trigonometric functions cycle, the last vertex will be in the same position as the first, and so this gives us an overlap so that everything joins up.

```
  var vertexPositionData = [];
  var normalData = [];
  var textureCoordData = [];
  for (var latNumber = 0; latNumber <= latitudeBands; latNumber++) {
    var theta = latNumber * Math.PI / latitudeBands;
    var sinTheta = Math.sin(theta);
    var cosTheta = Math.cos(theta);

    for (var longNumber = 0; longNumber <= longitudeBands; longNumber++) {
      var phi = longNumber * 2 * Math.PI / longitudeBands;
      var sinPhi = Math.sin(phi);
      var cosPhi = Math.cos(phi);

      var x = cosPhi * sinTheta;
      var y = cosTheta;
      var z = sinPhi * sinTheta;
      var u = 1 - (longNumber / longitudeBands);
      var v = 1 - (latNumber / latitudeBands);

      normalData.push(x);
      normalData.push(y);
      normalData.push(z);
      textureCoordData.push(u);
      textureCoordData.push(v);
      vertexPositionData.push(radius * x);
      vertexPositionData.push(radius * y);
      vertexPositionData.push(radius * z);
    }
  }
```

Now that we have the vertices, we need to stitch them together by generating a list of vertex indices that contains sequences of six values, each representing a square expressed as a pair of triangles. Here's the code:

```
  var indexData = [];
  for (var latNumber = 0; latNumber < latitudeBands; latNumber++) {
    for (var longNumber = 0; longNumber < longitudeBands; longNumber++) {
      var first = (latNumber * (longitudeBands + 1)) + longNumber;
      var second = first + longitudeBands + 1;
      indexData.push(first);
      indexData.push(second);
      indexData.push(first + 1);

      indexData.push(second);
      indexData.push(second + 1);
      indexData.push(first + 1);
    }
  }
```

This is actually pretty easy to understand. We loop through our vertices, and for each one we store its index in `first`, then count `longitudeBands + 1` indices forward to find its counterpart one latitude band down (adding the one because of the extra vertices we added to allow for the overlap) and store that in `second`. We then generate two triangles, as in the diagram.
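As a sanity check, the `first`/`second` arithmetic can be pulled out into a tiny helper (the name `quadIndices` is mine, for illustration):

```javascript
// For a vertex grid with (longitudeBands + 1) vertices per latitude row,
// return the six indices of the two triangles covering the quad whose
// top-left corner is at (latNumber, longNumber).
function quadIndices(latNumber, longNumber, longitudeBands) {
  var first = (latNumber * (longitudeBands + 1)) + longNumber;
  var second = first + longitudeBands + 1;
  return [first, second, first + 1, second, second + 1, first + 1];
}
```

With 30 longitude bands, the very first quad uses vertices 0 and 1 from the top row and 31 and 32 from the row below it, which matches the diagram.
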

Right — that's the difficult bit done (at least, the bit that's difficult to explain). Let's move on up the code and see what else has changed.

Immediately above the `initBuffers` function are the three functions that deal with the mouse. These deserve careful examination, so let’s start by considering exactly what we’re aiming for: we want the viewer of our scene to be able to rotate the moon by dragging it.

A naive implementation might keep, say, three variables representing rotations around the X, Y and Z axes, and adjust each one appropriately when the user drags the mouse: if they drag up or down, we adjust the rotation around the X axis, and if they drag from side to side, we adjust it around the Y axis. The problem is that when you apply a number of different rotations to an object, the order in which you apply them matters. Let’s say the viewer rotates the moon 90° around the Y axis, then drags the mouse down. If we rotate around the X axis as planned, they will see the moon rotate around what is now the Z axis; the first rotation rotated the axes as well. This will look weird to them, and it only gets worse once the viewer has rotated, say, 10° around the X axis, then 23° around the rotated Y axis, and so on.

We could put in all kinds of clever logic to say something like “given the current rotational state, if the user drags the mouse downwards then adjust all three of the rotation values appropriately”. But an easier way of handling this is to keep a record of every rotation the viewer has applied to the moon, and to replay them all every time we draw it. On the face of it, this might sound like an expensive way of doing things, unless you remember that we already have a perfectly good way of keeping track of a sequence of geometrical transforms in one place and applying them in one operation: a matrix.

We keep a matrix to store the current rotation state of the moon, logically enough called `moonRotationMatrix`. When the user drags the mouse around, we get a sequence of mouse-move events, and for each one we work out how many degrees of rotation around the *current* X and Y axes, as seen by the user, that drag amounts to. We then calculate a matrix that represents those two rotations, and pre-multiply `moonRotationMatrix` by it. We pre-multiply for the same reason we apply transformations in reverse order when positioning the camera: the rotation is in terms of eye space, not model space. (A note for readers: I’m sure there’s a better way of explaining that, but I don’t want to delay posting this until I’ve had the moment of clarity required. Any suggestions on better phrasing would be gratefully received!)

So, with all that explained, the code below should be pretty clear:

```
var mouseDown = false;
var lastMouseX = null;
var lastMouseY = null;

var moonRotationMatrix = mat4.create();
mat4.identity(moonRotationMatrix);

function handleMouseDown(event) {
  mouseDown = true;
  lastMouseX = event.clientX;
  lastMouseY = event.clientY;
}

function handleMouseUp(event) {
  mouseDown = false;
}

function handleMouseMove(event) {
  if (!mouseDown) {
    return;
  }
  var newX = event.clientX;
  var newY = event.clientY;

  var deltaX = newX - lastMouseX;
  var newRotationMatrix = mat4.create();
  mat4.identity(newRotationMatrix);
  mat4.rotate(newRotationMatrix, degToRad(deltaX / 10), [0, 1, 0]);

  var deltaY = newY - lastMouseY;
  mat4.rotate(newRotationMatrix, degToRad(deltaY / 10), [1, 0, 0]);

  mat4.multiply(newRotationMatrix, moonRotationMatrix, moonRotationMatrix);

  lastMouseX = newX;
  lastMouseY = newY;
}
```

That's the last substantial bit of new code in this lesson. Moving up from there, all of the changes you can see are those required to our texture code to load up the new texture into the changed variable names.

That's it! You now know how to draw a sphere using a simple but effective algorithm, how to hook up mouse events so that viewers can drag to manipulate your 3D objects, and how to use matrices to represent the current rotational state of an object in a scene.

That's it for now; the next lesson will show a new kind of lighting: point lighting, which comes from a particular place in the scene and radiates outwards, just like the light from a bare light bulb.

*Acknowledgments: The texture-map for the moon comes from NASA's JPL website. The code to generate a sphere is based on this demo, which was originally by the WebKit team. Many thanks to both!*

Hi,

You are using mat4.multiply twice.

First one:

mat4.multiply(mvMatrix, moonRotationMatrix);

Second one:

mat4.multiply(newRotationMatrix, moonRotationMatrix, moonRotationMatrix);

In your comment from “June 23” you say:

“That’s the line that applies the new rotation matrix onto the existing one — the last parameter is where the results are stored. If we were able to use normal arithmetic operators to multiply matrices, it would look like this:

moonRotationMatrix = newRotationMatrix * moonRotationMatrix;”

This makes sense to me, but why does the first mat4.multiply have only two parameters and the second one three? Where is the result of the multiplication stored in the first case?

Hi, many thanks for these great tutorials.

My final project in college is a WebGL-based game like “Quake 2”, but I’m not good enough to make this on my own yet.

Please give me some guidance: what should I do?

I’m getting some odd behavior: if I mousedown on the canvas, drag out of the canvas (still on the page) and mouseup, then mousedown and drag on the canvas again, the second drag doesn’t rotate the moon until mouseup. At that point it continues to rotate (with no button pressed) until I press and release the button a third time.

It seems the reason for this is that my browser (Chrome 13 on Ubuntu) interprets the first drag as “select text” (but still forwards the events to the script), while it interprets the second drag as “drag-and-drop selection” (and apparently does not forward the mousemove or mouseup events to the script), at which point any subsequent mousemoves are interpreted by the script as rotations (because it missed the mouseup event and therefore believes the button is still pressed).
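The standard way to stop the browser’s text-selection and drag-and-drop behaviour stealing the drag is to call `preventDefault` in the mouse-down handler. This is a sketch of that change against the lesson’s handler (untested against that exact page, but the technique is well established):

```javascript
// Sketch: suppress native text selection / drag-and-drop when a drag
// begins on the canvas, so we keep receiving mousemove/mouseup events.
var mouseDown = false;
var lastMouseX = null;
var lastMouseY = null;

function handleMouseDown(event) {
  mouseDown = true;
  lastMouseX = event.clientX;
  lastMouseY = event.clientY;
  if (event.preventDefault) {
    event.preventDefault();  // stop the browser starting a selection/drag
  }
}
```
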

The way you are constructing the triangles on the sphere is, I think, against convention: clockwise instead of anticlockwise.

You can see this by adding gl.enable(gl.CULL_FACE) in webGLStart, which is supposed to speed up rendering. The sphere will look a bit odd and behave even more oddly upon dragging.

Actually, I noticed this when I had a simple coloured sphere with no texture: the lighting looked odd.

gl.enable(gl.CULL_FACE), I am told, by default culls the back faces, which again, by default, are faces described by a clockwise order of vertices.

To fix this, the equations for x and z should be swapped:

x = sin(phi)*sin(theta)

z = cos(phi)*sin(theta)

that is, phi should be measured with respect to the y-z plane rather than the x-y plane.

Your drawing of the order of vertices suggests an anticlockwise order but actually they are rendered in clockwise order.



Maybe I have misunderstood the way the texture wraps the sphere, but it sounds to me more like the Equirectangular projection, rather than Mercator projection (?).

Hi. Could you explain why the sphere looks smooth? I can’t figure it out.

I tried rendering the sphere using 300 longitudes and 300 latitudes but then some triangles are missing.

Is there a limitation to be aware of when drawing a shape or a scene?

@Anton: Because vLightWeighting is interpolated.

I think I know why you can’t have 300 longitudes and 300 latitudes.

I couldn’t make sure of it, but I think indexes are limited to uShort values (from 0 to 65535), which means you can have up to around 255×255 vertices.

If someone can confirm this it might be worth mentioning in those lessons.
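The commenter’s suspicion is essentially right: the lesson draws with `gl.UNSIGNED_SHORT`, so each index is limited to 65535, which caps the addressable vertex count at 65536. The sphere code creates (latitudeBands + 1) × (longitudeBands + 1) vertices, so the limit can be checked up front; the helper name below is mine, for illustration:

```javascript
// With UNSIGNED_SHORT indices, at most 65536 vertices are addressable.
// The lesson's sphere has (latitudeBands + 1) * (longitudeBands + 1) vertices.
function fitsInUshortIndices(latitudeBands, longitudeBands) {
  return (latitudeBands + 1) * (longitudeBands + 1) <= 65536;
}
```

30×30 bands are well within the limit, but 300×300 bands create 301 × 301 = 90601 vertices, which is why triangles go missing. Where it’s available, the `OES_element_index_uint` extension lifts the limit by allowing `gl.UNSIGNED_INT` index buffers.
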

Hi, I tried to change the texture’s image source and it does not render. I can only see black. Only moon.gif and nehe.gif from lesson 5 will render. All other graphics files failed. Why is this?

Got it. I changed the size of the image source to a power of two.

@arg I was tripped up by that one for a bit at first too. You need to use a gif of a size that is a power of 2, e.g. a 256×256 gif. Inappropriately sized images (anything that isn’t a power of 2) and compressed images (e.g. png) won’t work.

I used pixlr.com to resize my image, and http://www.pictureresize.org/online-images-converter.html to convert it. Also, the WebGL Inspector http://benvanik.github.com/WebGL-Inspector/ can tell you if your texture is broken or the image isn’t loading. Hope they help you.
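A small correction to the comments above: PNG files load fine as WebGL textures; the usual culprit is the size. WebGL can even display non-power-of-two textures, provided you use `gl.CLAMP_TO_EDGE` wrapping and a non-mipmap filter; it’s mipmap generation (which lessons in this series typically use) that requires power-of-two dimensions. A quick pre-flight check (helper name is mine):

```javascript
// A texture dimension is mipmap-friendly in WebGL only if it is a power
// of two. Classic bit trick: powers of two have exactly one bit set.
function isPowerOfTwo(n) {
  return n > 0 && (n & (n - 1)) === 0;
}
```

You could call this on `image.width` and `image.height` in the texture-load callback and log a warning instead of silently rendering black.
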

Hi, how do I implement the mousewheel command so that scrolling the mouse zooms in and out? Where do I find stuff related to that?

There is something wrong with the implementation of the rotation of the mv matrix.

If you circle the mouse a few times, it produces rotation around the Z axis, and in theory it shouldn’t. Maybe it’s caused by the poor precision of the JavaScript implementation?

@Komio I noticed the exact same problem and solved it by making a copy of moonRotationMatrix at mouse down, and by not updating the lastMouseX and Y values in handleMouseMove:

- Where you declare `var moonRotationMatrix ...`, add a line:

```
var oldMoonRotationMatrix;
```

- Add a line to the function handleMouseDown:

```
oldMoonRotationMatrix = mat4.create(moonRotationMatrix);
```

- Change the multiplication in handleMouseMove to the following:

```
mat4.multiply(newRotationMatrix, oldMoonRotationMatrix, moonRotationMatrix);
```

- And in that same function comment out the last two lines:

```
// lastMouseX = newX;
// lastMouseY = newY;
```

That should do the trick, because no matter what part of the code is skipped or what round-off error is made, as long as you hold your mouse down everything is computed from the exact point where you pressed it.

@Komio and @Paul

Probably a bit late by now, but I think you’ll find this is a perfectly natural result of this kind of accumulative rotation. If you load up a first person space sim, or just use your hand to try out the sequence of pitches and yaws that moving the mouse in a circle would produce, you’ll find your (or ship) hand has rolled a little more after each circle.

It may be even clearer when doing a similar thing with keyboard input, rather than with a mouse.

For @Casi (zoom in, zoom out), in `webGLStart`:

```
if (canvas.addEventListener)
  canvas.addEventListener('DOMMouseScroll', handleMouseWheel, false);
```

Then:

```
var z = -5.0;

function handleMouseWheel(event) {
  var delta = 0;
  if (!event) /* For IE. */
    event = window.event;
  if (event.wheelDelta) { /* IE/Opera. */
    delta = event.wheelDelta / 120;
  } else if (event.detail) { /* Mozilla case. */
    /* In Mozilla, the sign of delta is different than in IE.
       Also, delta is a multiple of 3. */
    delta = -event.detail / 3;
  }

  /* If delta is nonzero, handle it. Basically, delta is now positive
     if the wheel was scrolled up, and negative if it was scrolled down. */
  if (delta)
    handleScroll(delta);

  /* Prevent default actions caused by the mouse wheel. That might be
     ugly, but we handle scrolls somehow anyway, so don't bother here. */
  if (event.preventDefault)
    event.preventDefault();
  event.returnValue = false;
}

function handleScroll(delta) {
  if (delta < 0)
    z = z - 0.25;
  else
    z = z + 0.25;
}
```

And in `drawScene`:

```
mat4.translate(mvMatrix, [0.0, 0.0, z]);
```

Hello, congrats for your great tutorials!

In this example you use an event listener for the whole canvas, is there a way that you use an event listener only for the sphere?

Thanks!
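There’s no per-object event listener in WebGL; the usual answer is picking: convert the mouse position into a ray and test it against the sphere yourself (or use colour-picking with an offscreen render). A minimal ray/sphere intersection test, with names of my own choosing, might look like this:

```javascript
// Returns true if a ray (origin o, *normalized* direction d, both [x,y,z])
// hits a sphere of the given radius centred at c.
// Standard quadratic-discriminant test; the quadratic's "a" term is 1
// because d is normalized.
function raySphereHit(o, d, c, radius) {
  var ox = o[0] - c[0], oy = o[1] - c[1], oz = o[2] - c[2];
  var b = 2 * (ox * d[0] + oy * d[1] + oz * d[2]);
  var cc = ox * ox + oy * oy + oz * oz - radius * radius;
  var disc = b * b - 4 * cc;
  if (disc < 0) return false;          // ray misses the sphere entirely
  var t = (-b - Math.sqrt(disc)) / 2;  // distance to nearest intersection
  return t >= 0;                       // hit must be in front of the origin
}
```

In this lesson the moon sits at (0, 0, -6) with radius 2 in eye space, so you would unproject the mouse coordinates into an eye-space ray and only start the drag when `raySphereHit` returns true.
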

I’ve changed the demo code a bit. In this version there is no texture, just random colours for the squares; I suppose it helps to understand how the sphere can be represented as a set of squares.

Also, mouse wheel zoom as Sime described was added.

http://pastebin.com/wvZdA4zb


Hey, I’m trying to read these tutorials, but when I use the source code, instead of the sphere all I can see is white space.

At GitHub, there is a pull request (https://github.com/gpjt/webgl-lessons/pull/9) which makes this example work for a touch screen too.
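The idea in that pull request can be sketched by forwarding touch events to the existing mouse handlers. This is my own minimal version, not the code from the actual PR; the `handleMouse*` functions here are simplified stand-ins for the lesson’s handlers so the sketch is self-contained:

```javascript
// Minimal touch support: treat the first touch point like the mouse.
// Stand-ins for the lesson's mouse handlers:
var state = { down: false, x: null, y: null };
function handleMouseDown(e) { state.down = true; state.x = e.clientX; state.y = e.clientY; }
function handleMouseUp(e)   { state.down = false; }
function handleMouseMove(e) { state.x = e.clientX; state.y = e.clientY; }

function handleTouchStart(event) {
  event.preventDefault();  // stop the page scrolling while dragging the moon
  var t = event.touches[0];
  handleMouseDown({ clientX: t.clientX, clientY: t.clientY });
}

function handleTouchMove(event) {
  event.preventDefault();
  var t = event.touches[0];
  handleMouseMove({ clientX: t.clientX, clientY: t.clientY });
}

function handleTouchEnd(event) {
  handleMouseUp({});
}

// Wiring, analogous to the mouse wiring in webGLStart:
// canvas.addEventListener("touchstart", handleTouchStart, false);
// document.addEventListener("touchmove", handleTouchMove, false);
// document.addEventListener("touchend", handleTouchEnd, false);
```
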

How can I exclude one axis from rotation entirely? My problem is that when I make circular mouse movements, it rotates not only around the X and Y axes but also slightly around Z.
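If roll is genuinely unwanted, one option is not to accumulate a matrix at all, but to accumulate just two angles (yaw and pitch) and rebuild the rotation from scratch each frame; with only two stored angles there is no way for roll to creep in. This is a sketch of my own, not code from the lesson; the commented-out `mat4` calls follow the glMatrix-0.9-style API the lesson uses:

```javascript
// Accumulate yaw (around Y) and pitch (around X) only; roll can never appear.
var yaw = 0;
var pitch = 0;

function applyDrag(deltaX, deltaY) {
  yaw += deltaX / 10;    // same drag sensitivity as the lesson
  pitch += deltaY / 10;
  // Optionally clamp pitch so the moon can't be flipped upside down:
  pitch = Math.max(-90, Math.min(90, pitch));
}

// Each frame, rebuild the rotation instead of accumulating matrices:
// mat4.identity(moonRotationMatrix);
// mat4.rotate(moonRotationMatrix, degToRad(pitch), [1, 0, 0]);
// mat4.rotate(moonRotationMatrix, degToRad(yaw), [0, 1, 0]);
```

The trade-off is that dragging no longer rotates around the axes as the viewer currently sees them (the feel the lesson’s matrix approach gives), so which behaviour is “right” depends on the application.
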