WebGL Lesson 14 – specular highlights and loading a JSON model

<< Lesson 13 | Lesson 15 >>

Welcome to number fourteen in my series of WebGL tutorials! In it, we’ll add the last bit of the Phong reflection model that we first met in lesson 7: specular highlights, the “glints” on a shiny surface which make a scene look that little bit more realistic.

Here’s what the lesson looks like when run in a browser that supports WebGL:

[Screenshot: the spinning teapot, with a specular glint on the lid]

Click here and you’ll see the live WebGL version, if you’ve got a browser that supports it; here’s how to get one if you don’t. You’ll see a spinning teapot, and as it spins you’ll see a constant glint to the mid-left and on the handle of the lid, plus occasional glints from the spout and the handle when they hit just the right angle to catch the light. You can switch these specular highlights on and off using the checkbox below, and you can also disable lighting completely. Finally, you can switch between three options for the texture: none; the “galvanised” texture that is used by default (a Creative Commons-licensed sample from the excellent Arroway Textures); and, just for fun, a texture showing the planet Earth (courtesy of the European Space Agency/Envisat), which looks strangely attractive on a teapot :-)

You can also control the teapot’s shininess from the text fields below — larger numbers mean a smaller, sharper highlight — and you can adjust the specular reflection’s colour, and, as before, the position and diffuse colour of the point light that’s causing all of these effects. More about that below.

Before we wade into the code, the usual warning: these lessons are targeted at people with a reasonable amount of programming knowledge, but no real experience in 3D graphics; the aim is to get you up and running, with a good understanding of what’s going on, so that you can start producing your own 3D Web pages as quickly as possible. If you haven’t read the previous tutorials already, you should probably do so before reading this one — here I will only explain the new stuff. The lesson is based on lesson 13, so you should make sure that you understand that one (and please do post a comment on that post if anything’s unclear about it!)

There may be bugs and misconceptions in this tutorial. If you spot anything wrong, let me know in the comments and I’ll correct it ASAP.

There are two ways you can get the code for this example: just “View Source” while you’re looking at the live version, or, if you use GitHub, you can clone it (and the other lessons) from the repository there.

Once you have it, open it up in an editor. We’ll start at the top and work our way down, which has the great advantage that we can see the fragment shader pretty much right away — this is where the most interesting changes are. Before we encounter that, there’s one minor difference between this code and lesson 13’s: we don’t have the shaders for per-vertex lighting. Per-vertex lighting doesn’t really handle specular highlights very well (as they get smeared out over an entire face), so we don’t bother with them.

So, the first shader you’ll see in the file is the fragment shader for per-fragment lighting. It starts off with the usual precision setting and declarations of varying and uniform variables. A few of these (marked with comments below) are new, and one, the uniform that used to hold the point light’s colour, has been renamed, as the point light now has separate specular and diffuse components:

  precision mediump float;

  varying vec2 vTextureCoord;
  varying vec3 vTransformedNormal;
  varying vec4 vPosition;

  uniform float uMaterialShininess;      // new in this lesson

  uniform bool uShowSpecularHighlights;  // new in this lesson
  uniform bool uUseLighting;
  uniform bool uUseTextures;

  uniform vec3 uAmbientColor;

  uniform vec3 uPointLightingLocation;
  uniform vec3 uPointLightingSpecularColor;  // new in this lesson
  uniform vec3 uPointLightingDiffuseColor;   // renamed; this used to be the point light's single colour

  uniform sampler2D uSampler;

These shouldn’t need much explanation; they’re just where the values that you can change from the HTML code are fed into the shader for processing. Let’s move on to the body of the shader; the first thing is to handle the case where the user has lighting switched off, and this is the same as it was before:

  void main(void) {
    vec3 lightWeighting;
    if (!uUseLighting) {
      lightWeighting = vec3(1.0, 1.0, 1.0);
    } else {

Now we handle the lighting, and of course this is where it gets interesting:

      vec3 lightDirection = normalize(uPointLightingLocation - vPosition.xyz);
      vec3 normal = normalize(vTransformedNormal);

      float specularLightWeighting = 0.0;
      if (uShowSpecularHighlights) {

So, what’s going on here? Well, we calculate the direction of the light just as we did for normal per-fragment lighting. We then normalise the fragment’s normal vector, once again just as before (remember, when the per-vertex normals are linearly interpolated to create per-fragment normals, the results aren’t necessarily of length one, so we normalise to fix this), but this time we’ll be using it a little more, so we store it in a local variable. Next, we define a weighting for the amount of extra brightness that is going to come from the specular highlight; this is, of course, zero if specular highlights are switched off, but if they’re not, we need to calculate it.

So, what determines the brightness of a specular highlight? As you might remember from the explanation of the Phong Reflection Model in lesson 7, specular highlights are created by that portion of the light from a light source that bounces off the surface as if from a mirror:

[Diagram: a ray from the light source reflecting off the surface, leaving at the same angle at which it arrived]

The portion of the light that is reflected this way bounces off the surface at the same angle as it hit it. In this case, the brightness of the light you see reflected from the material depends on whether or not your eyes happen to be in the line along which the light was bounced — that is, it depends not only on the angle at which the light hit the surface but on the angle between your line of sight and the surface. This specular reflection is what causes “glints” or “highlights” on objects, and the amount of specular reflection can obviously vary from material to material; unpolished wood will probably have very little specular reflection, and highly-polished metal will have quite a lot.

The specific equation for working out the brightness of a specular reflection is this:

  (Rm · V)^α

…where Rm is the (normalised) direction in which a perfectly-reflected ray of light from the light source would travel after bouncing off the point on the surface under consideration, V is the (also normalised) vector pointing in the direction of the viewer’s eyes, and α is a constant describing the shininess: the higher it is, the shinier the surface. You may remember that the dot product of two unit vectors is the cosine of the angle between them; this means that this part of the equation produces a value that is 1 if the light from the light source would be reflected directly at the viewer (that is, Rm and V are parallel, so the angle between them is zero, and the cosine of zero is one), and then fades off fairly slowly as the light is less directly reflected. Taking this value to the power of α has the effect of “compressing” it: that is, while the result is still one when the vectors are parallel, it drops off more rapidly to each side. You can see this in action if you set the shininess constant in the demo page to something large, like (say) 512.
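To put some numbers on that “compression”, here’s a quick back-of-the-envelope check (plain JavaScript, just for illustration) for a viewing direction 25° away from the perfect reflection:

  // cos(25°) is about 0.906; watch what raising it to higher powers does:
  var cosTheta = Math.cos(25 * Math.PI / 180);
  Math.pow(cosTheta, 1);    // ≈ 0.906: a broad, gentle highlight
  Math.pow(cosTheta, 32);   // ≈ 0.043: much tighter
  Math.pow(cosTheta, 512);  // ≈ 1e-22: effectively zero, so only a tiny, crisp glint survives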

So, given all this, the first things we need to work out are the direction of the viewer’s eyes, V, and the direction of a perfectly-reflected ray of light, Rm. Let’s look at V first, because it’s easy! Our scene is constructed in eye space, which you may remember from lesson 10; in effect, this means that we’re drawing the scene as if there were a camera at the origin, (0, 0, 0), looking down the negative Z axis with X increasing to the right and Y increasing upwards. The direction of any point from the origin is, of course, just its coordinates expressed as a vector; so, likewise, the direction of the viewer’s eyes at the origin from any point is just the negative of that point’s coordinates. We have the coordinates of the fragment, linearly interpolated from the vertex coordinates, in vPosition, so we negate them, normalise to get a vector of length one, and that’s it!

        vec3 eyeDirection = normalize(-vPosition.xyz);

Now let’s look at Rm. This would be a bit more involved, if it weren’t for a very convenient GLSL function called reflect, which is defined as:

reflect (I, N): For the incident vector I and surface orientation N, returns the reflection direction

The incident vector is the direction from which a ray of light hits the surface at the fragment, which is the opposite of the direction of the light from the fragment (which we already have in lightDirection). The surface orientation is called N because it’s just the normal, which we also already have. Given all that, it’s easy to work out:

        vec3 reflectionDirection = reflect(-lightDirection, normal);
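If you’re curious what reflect does under the hood, the GLSL specification defines it as I - 2 * dot(N, I) * N (for a unit-length N). Here’s that same formula written out as a plain JavaScript sketch, assuming vectors are represented as three-element arrays:

  // Reflect the incident vector i about the unit-length normal n;
  // this mirrors GLSL's built-in: i - 2 * dot(n, i) * n.
  function reflect(i, n) {
    var d = 2 * (n[0] * i[0] + n[1] * i[1] + n[2] * i[2]);
    return [i[0] - d * n[0], i[1] - d * n[1], i[2] - d * n[2]];
  }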

So, now that we have all that, the final step is very easy indeed:

        specularLightWeighting = pow(max(dot(reflectionDirection, eyeDirection), 0.0), uMaterialShininess);
      }

That’s all we need to do to work out the contribution of the specular component to the fragment’s lighting. The next step is to work out how much the diffuse light contributes, using the same logic as before (though we can now use our local variable for the normalised normal):

      float diffuseLightWeighting = max(dot(normal, lightDirection), 0.0);

Finally, we use both weightings, the diffuse and specular colours, and the ambient colour to work out the overall amount of lighting at this fragment for each colour component; this is a simple extension of what we were using before:

      lightWeighting = uAmbientColor
        + uPointLightingSpecularColor * specularLightWeighting
        + uPointLightingDiffuseColor * diffuseLightWeighting;
    }

Once that’s all done, we have a value for the light weighting which we can just use in identical code to lesson 13’s to weight the colour as specified by the current texture:

    vec4 fragmentColor;
    if (uUseTextures) {
      fragmentColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
    } else {
      fragmentColor = vec4(1.0, 1.0, 1.0, 1.0);
    }
    gl_FragColor = vec4(fragmentColor.rgb * lightWeighting, fragmentColor.a);
  }

That’s it for the fragment shader!
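Before we move on, it’s worth a quick look at how the values from the page’s controls reach those uniforms. As in the earlier lessons, they’re read from the DOM and pushed with gl.uniform* calls every time the scene is drawn. Here’s a minimal sketch; the element IDs and the uniform-location property names are assumptions for illustration, not necessarily the ones the lesson’s index.html uses:

  // Read the shininess text field and push it into the float uniform:
  gl.uniform1f(shaderProgram.materialShininessUniform,
               parseFloat(document.getElementById("shininess").value));

  // A checkbox maps naturally onto a bool uniform (passed as 0 or 1):
  gl.uniform1i(shaderProgram.showSpecularHighlightsUniform,
               document.getElementById("specular").checked ? 1 : 0);

  // Three text fields map onto a vec3 colour uniform:
  gl.uniform3f(shaderProgram.pointLightingSpecularColorUniform,
               parseFloat(document.getElementById("specularR").value),
               parseFloat(document.getElementById("specularG").value),
               parseFloat(document.getElementById("specularB").value));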

Let’s move a bit further down. If you’re looking for differences from lesson 13, the next one you’ll notice is that initShaders is back to its earlier, simpler form, just creating one program, though of course it now also initialises a few more uniform locations for the new specular lighting settings. A little further down, initTextures now loads textures for the Earth and the galvanised steel effects instead of the Moon and the crate. Down a little more, setMatrixUniforms is, like initShaders, once again designed for a single program; and then we reach something a little more interesting.
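Before we get to it, here’s roughly what the initShaders additions amount to: looking up the new uniforms’ locations and stashing them on the program object, in the style of the earlier lessons (the property names here are illustrative):

  shaderProgram.materialShininessUniform =
      gl.getUniformLocation(shaderProgram, "uMaterialShininess");
  shaderProgram.showSpecularHighlightsUniform =
      gl.getUniformLocation(shaderProgram, "uShowSpecularHighlights");
  shaderProgram.pointLightingSpecularColorUniform =
      gl.getUniformLocation(shaderProgram, "uPointLightingSpecularColor");
  shaderProgram.pointLightingDiffuseColorUniform =
      gl.getUniformLocation(shaderProgram, "uPointLightingDiffuseColor");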

Instead of having initBuffers to create the WebGL buffers containing the various per-vertex attributes that define the appearance of the teapot, we have two functions: handleLoadedTeapot and loadTeapot. The pattern will be familiar from lesson 10, but it’s worth going over again. Let’s take a look at loadTeapot (though it’s the second one to appear in the code):

  function loadTeapot() {
    var request = new XMLHttpRequest();
    request.open("GET", "Teapot.json");
    request.onreadystatechange = function() {
      if (request.readyState == 4) {
        handleLoadedTeapot(JSON.parse(request.responseText));
      }
    };
    request.send();
  }

The overall structure should be familiar from lesson 10: we create a new XMLHttpRequest and use it to load the file Teapot.json. The load happens asynchronously, so we attach a callback function that is triggered as the load progresses through various stages, and in that callback we do the real work once the request reaches a readyState of 4, which means the file has been fully loaded.
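Incidentally, if you were writing this today, the fetch API would do the same job rather more tersely. This is just a sketch of the modern alternative, not what the lesson’s code does:

  function loadTeapot() {
    fetch("Teapot.json")
      .then(function (response) { return response.json(); })
      .then(handleLoadedTeapot);
  }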

The interesting bit is what happens then. The file that we’re loading is in JSON format, which basically means that it’s already written in JavaScript; have a look at it to see what I mean. The file describes a JavaScript object containing lists that hold the vertex positions, normals, texture coordinates, and a set of vertex indices that completely describe the teapot. We could, of course, just embed this code directly into the index.html file, but if you were building a more complex model, with many separately-modelled objects, you’d want them all in separate files.
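To give you a feel for it, the file’s shape is roughly this. It’s heavily truncated here, and the numbers are illustrative, but the key names are exactly the ones that handleLoadedTeapot reads below:

  {
    "vertexPositions":     [5.929688, 4.125, 0, 5.387188, 4.125, 2.7475, ...],
    "vertexNormals":       [...],
    "vertexTextureCoords": [...],
    "indices":             [0, 1, 2, 2, 3, 0, ...]
  }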

(Just which format you should use for pre-built objects in your WebGL applications is an interesting question. You might be designing them in any of a plethora of different programs, and these programs can output models in many different formats, ranging from .obj to 3DS. In the future, it looks like at least one of them will be able to output models in a JavaScript-native format, which I suspect may look a bit like the JSON model I’ve used for the teapot. For now, you should treat this tutorial as an example of how you might go about loading a JSON-format pre-designed model, and not as an example of best practice :-)

So, we have code that loads a file in JSON format and triggers an action when it’s loaded. That action converts the JSON text into data we can use; we could just use the JavaScript eval function to turn it into a JavaScript object, but that’s generally frowned upon (eval will happily execute arbitrary code, not just parse data), so instead we use the built-in JSON.parse function. Once that’s done, we pass the result on to handleLoadedTeapot:

  var teapotVertexPositionBuffer;
  var teapotVertexNormalBuffer;
  var teapotVertexTextureCoordBuffer;
  var teapotVertexIndexBuffer;
  function handleLoadedTeapot(teapotData) {
    teapotVertexNormalBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, teapotVertexNormalBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(teapotData.vertexNormals), gl.STATIC_DRAW);
    teapotVertexNormalBuffer.itemSize = 3;
    teapotVertexNormalBuffer.numItems = teapotData.vertexNormals.length / 3;

    teapotVertexTextureCoordBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, teapotVertexTextureCoordBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(teapotData.vertexTextureCoords), gl.STATIC_DRAW);
    teapotVertexTextureCoordBuffer.itemSize = 2;
    teapotVertexTextureCoordBuffer.numItems = teapotData.vertexTextureCoords.length / 2;

    teapotVertexPositionBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, teapotVertexPositionBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(teapotData.vertexPositions), gl.STATIC_DRAW);
    teapotVertexPositionBuffer.itemSize = 3;
    teapotVertexPositionBuffer.numItems = teapotData.vertexPositions.length / 3;

    teapotVertexIndexBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, teapotVertexIndexBuffer);
    gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(teapotData.indices), gl.STATIC_DRAW);
    teapotVertexIndexBuffer.itemSize = 1;
    teapotVertexIndexBuffer.numItems = teapotData.indices.length;

    document.getElementById("loadingtext").textContent = "";
  }

There’s not really anything worth highlighting in that function; it just takes the various lists from the loaded JSON object and puts them into typed arrays (Float32Arrays for the per-vertex attributes and a Uint16Array for the indices), which are then pushed over to the graphics card in newly-allocated buffers. Once all of that is done, we clear out a div in the HTML document that was previously telling the user the model was being loaded, just like we did in lesson 10.

So, that’s the model loaded. What else? Well, there’s drawScene, which now needs to draw the teapot at an appropriate angle (after checking that it’s loaded), but there’s nothing really new there; take a look at the code and make sure you know what’s going on (and please do leave a comment if anything’s unclear), but I doubt you’ll find anything to surprise you.
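For reference, the teapot-drawing part of drawScene follows the same buffer-binding pattern as the earlier lessons. A sketch (treat the attribute property names as assumptions carried over from those lessons’ code):

  if (teapotVertexPositionBuffer == null) {
    return;  // the model hasn't finished loading yet
  }

  gl.bindBuffer(gl.ARRAY_BUFFER, teapotVertexPositionBuffer);
  gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute,
      teapotVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);

  gl.bindBuffer(gl.ARRAY_BUFFER, teapotVertexTextureCoordBuffer);
  gl.vertexAttribPointer(shaderProgram.textureCoordAttribute,
      teapotVertexTextureCoordBuffer.itemSize, gl.FLOAT, false, 0, 0);

  gl.bindBuffer(gl.ARRAY_BUFFER, teapotVertexNormalBuffer);
  gl.vertexAttribPointer(shaderProgram.vertexNormalAttribute,
      teapotVertexNormalBuffer.itemSize, gl.FLOAT, false, 0, 0);

  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, teapotVertexIndexBuffer);
  setMatrixUniforms();
  gl.drawElements(gl.TRIANGLES, teapotVertexIndexBuffer.numItems,
      gl.UNSIGNED_SHORT, 0);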

After that, animate has a few trivial changes to make it spin the teapot rather than make the Moon and the crate orbit, and webGLStart has to call loadTeapot instead of initBuffers. And finally, the HTML code has the DIV to display the “Loading world…” text while the teapot is being loaded up, along with its associated CSS style, and of course it has new input fields for the new specular highlight parameters.
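As an aside, the teapot-spinning in animate is the usual time-based rotation from the earlier lessons; roughly this, with the variable names assumed:

  var teapotAngle = 180;
  var lastTime = 0;

  function animate() {
    var timeNow = new Date().getTime();
    if (lastTime != 0) {
      var elapsed = timeNow - lastTime;
      teapotAngle += 0.05 * elapsed;  // 0.05 degrees per millisecond
    }
    lastTime = timeNow;
  }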

And after that, that’s it! You now know how to write shaders to show specular highlights, and how to load pre-built models that are stored in JSON format. Next time, we’ll look at something a bit more advanced: how to use textures in a slightly different and more interesting way than we do now, with specular maps.

<< Lesson 13 | Lesson 15 >>

Acknowledgments: the galvanised pattern is a Creative Commons sample from Arroway Textures, and the Earth texture is courtesy of the European Space Agency/Envisat.


65 Responses to “WebGL Lesson 14 – specular highlights and loading a JSON model”

  1. wererabit says:

    Hi guys,

    Does anybody know how I can use drawElements on a very large array (up to 100,000 vertices)?

    I am using

    gl.drawElements(gl.TRIANGLES, ….. , gl.UNSIGNED_SHORT, ..);

    but when the index is too big, its value gets cut off, so the model is messed up. I’ve tried changing to gl.UNSIGNED_INT but nothing seems to work.

    Thanks in advance for your help!

  2. raghu says:

    Hi,
    Which tool do you use to generate the vertex positions, normals, and texture coordinates?

    thanks in advance.

  3. [...] tutorial in the Aprende webGL series. It is a non-literal translation of the corresponding tutorial at Learning webGL. Today we will complete the Phong reflection technique by adding highlights to the surfaces that are most [...]

  4. Ray Bellis says:

    I’m puzzled by your description of how the _light_ has a specular colour.

    In every other CG book I’ve read, and code I’ve used, specular colour is controlled as a property of the _surface_, and not of the light.

  5. ChubbyGoat says:

    I know it seems clever to take advantage of the transformation into eye space when calculating the direction to the camera, but it is incorrect. It doesn’t work because A) the light position is still in world space, and B) vPosition is a world-space coordinate – it only represents the direction to the camera when the camera is placed at (0,0,0).

    vec3 eyeDirection = normalize(-vPosition.xyz);

    To make this code work for when camera moves around the scene, pass a uniform variable representing the camera position into the shader program before drawing the scene and implement it like this:

    vec3 eyeDirection = normalize(uCameraPosition.xyz-vPosition.xyz);

    There might be another way to solve this problem, but this way worked for me. Hope this helps, and thanks for making these tutorials! :)

  6. Ramt1n says:

    Great Tutorial, Thanks!!

    Have a question: how can I load/draw multiple copies of the same object (like your teapot, for example) into the same canvas? Let’s say 100 teapots?

    Thanks for a quick answer:)

  7. Jonipichoni says:

    @Ramt1n:

    You can find a good example of static model instancing at
    tojicode’s blog:

    http://blog.tojicode.com/2011/11/building-game-part-4-static-model.html

  8. Hristo says:

    Thanks for the tutorial! I had a question about the JSON file. I’m trying to extract the “faces” of the teapot. The way I’m interpreting the “vertex indices” is that they describe the faces of the teapot; each face is a triangle and each vertex of the triangle is given by the vertex indices.

    For example…

    [
    0,1,2,
    2,3,0,
    ...
    ]

    … (0,1,2) are the vertex indices of a face of the teapot and these indices correspond to these 3 vertices:

    v1 = (5.929688, 4.125, 0)
    v2 = (5.387188, 4.125, 2.7475)
    v3 = (5.2971, 4.494141, 2.70917)

    … which are the first 9 entries in the “vertexPositions” array.

    Am I understanding things correctly? I guess my question is, what do the vertex indices correspond to? Does a ‘0’ in the vertex indices array correspond to the first 3 entries in the vertex positions array? If my understanding is incorrect, how would I figure out the faces (triangles) of the teapot?

    Thanks,
    Hristo

  9. Antonio says:

    Hi! Thanks for the good tutorial! Which tool can I use to generate simple JSON models like the one in this tutorial?

  10. Novaterata says:

    @wererabit: I had this problem in C++; make sure all your GLushorts are changed to GLints regarding the Elements/Faces IBO or whatever you are using. I think I had to change mine in 3 places: the array, when adding each element to the array, and the draw function.

  11. Peter says:

    Hi! Nice tutorial! The same question as Antonio: which tool did you use to generate JSON models like the one in this tutorial?

  12. subhojyoti chakraborty says:

    Please let me know how I can convert a Collada (.dae) file to JSON and load it in my WebGL apps.

  13. Thor says:

    Well, for starters, this thing works. The only drawback is that Blender does not export to JSON… not natively, anyway :(
    It would be nice if I could find a way to do that; I tried the exporter from Three.js but its output is not well formed. Of course, I’m trying to load the model (the JSON file) from a local filesystem and not through a server…

  14. Ismail says:

    There is a converter application in Python that can be found on GitHub (the three.js site) under the utils folder. Here is the link: https://github.com/mrdoob/three.js/tree/master/utils/converters/fbx

    Nice tutorial!

  15. [...] WebGL-friendly JSON formats. He started with a version that exported to the JSON featured in Lesson 14 but quickly moved to a Three.js version. It is early in its development but very promising… [...]

