WebGL Lesson 7 – basic directional and ambient lighting

<< Lesson 6 | Lesson 8 >>

Welcome to number seven in my series of WebGL tutorials, based on the part of number 7 in the NeHe OpenGL tutorials that I didn’t go over in lesson 6. In it, we’ll go over how to add simple lighting to your WebGL pages; this takes a bit more work in WebGL than it does in OpenGL, but hopefully it’s all pretty easy to understand.

Here’s what the lesson looks like when run in a browser that supports WebGL:

Click here and you’ll see the live WebGL version, if you’ve got a browser that supports it; here’s how to get one if you don’t. You should see a slowly-spinning cube, with lighting appearing to come from a point that is towards the front (that is, between you and the cube), slightly above and to the right.

You can use the checkbox underneath the canvas to switch the lighting on and off so that you can see what effect it’s having. You can also change the colour of the directional and ambient lights (more about precisely what that means later) and the direction of the directional light. Do try playing with them a bit; it’s particularly fun to try out special effects with directional lighting RGB values greater than one (though if you go much above 5 you lose much of the texture). Also, just like last time, you can use the cursor keys to make the box spin around faster or more slowly, and Page Up and Page Down to zoom in and out. We’re only using the best texture filter this time, so the F key no longer does anything.

More on how it all works below…

The usual warning: these lessons are targeted at people with a reasonable amount of programming knowledge, but no real experience in 3D graphics; the aim is to get you up and running, with a good understanding of what’s going on in the code, so that you can start producing your own 3D Web pages as quickly as possible. If you haven’t read the previous tutorials already, you should probably do so before reading this one — here I will only explain the differences between the code for lesson 6 and the new code.

There may be bugs and misconceptions in this tutorial. If you spot anything wrong, let me know in the comments and I’ll correct it ASAP.

There are two ways you can get the code for this example: just “View Source” while you’re looking at the live version, or, if you use GitHub, you can clone it (and the other lessons) from the repository there. Either way, once you have the code, load it up in your favourite text editor and take a look.

Before we get into the details of how to do lighting in WebGL, I’ll kick off with the bad news: WebGL has absolutely no built-in support for lighting. Unlike OpenGL, which lets you specify at least eight light sources and then handles them all for you, WebGL leaves you to do everything yourself. But — and it’s a big “but” — lighting is actually pretty easy once it’s explained. If you’re comfortable with the shader stuff we’ve gone over so far, you’ll have no problems at all with lighting — and having to code your own simple lights as a beginner makes it much easier to understand the kind of code you’ll have to write when you’re just a little more advanced! After all, OpenGL’s lighting is too basic for really realistic scenes — it doesn’t handle shadows, for example, and it can give quite rough effects with curved surfaces — so anything beyond simple scenes needs hand-coding anyway.

Right. Let’s begin by thinking about what we want from lighting. The aim is to be able to simulate a number of light sources within the scene. These sources don’t need to be visible themselves, but they do need to light up the 3D objects realistically, so that the side of an object that faces the light is bright, and the side that faces away from it is dark. To put it another way, we want to be able to specify a set of light sources, and then for each part of our 3D scene work out how all of the lights affect it. By now I’m sure you know WebGL well enough to realise that this is going to involve doing stuff with shaders. Specifically, what we’ll do in this lesson is write vertex shaders that handle the lighting: for each vertex, we’ll work out how the light affects it, and use that to adjust its colour. We’ll only do this for one light for now; handling multiple lights is just a case of repeating the same procedure for each light and adding the results together.

One side note here; because we’re working out the lighting on a per-vertex basis, the effects of the light on the pixels that lie between vertices will be worked out by doing the usual linear interpolation. This means that the spaces between the vertices will be lit up as if they were flat; conveniently, because we’re drawing a cube, this is exactly what we want! For curved surfaces, where you want to calculate the effects of lighting on every pixel independently, you can use a technique called per-fragment (or per-pixel) lighting, which gives much better effects. We’ll look at per-fragment lighting in a future lesson. What we’re doing here is called, logically enough, per-vertex lighting.

OK, on to the next step: if our task is to write a vertex shader that works out how a single light source affects the colour at the vertex, what do we do? Well, a good starting point is the Phong Reflection Model. I found this easiest to understand by starting with the following points:

  • While in the real world there is just one kind of light, it’s convenient for graphics to pretend that there are two kinds:
    1. Light that comes from specific directions and only lights up things that face the direction it’s coming from. We’ll call this directional light.
    2. Light that comes from everywhere and lights up everything evenly, regardless of which way it faces. This is called ambient light. (Of course, in the real world this is just directional light that has been scattered by reflection from other objects, the air, dust, and so on. But for our purposes, we’ll model it separately.)
  • When light hits a surface, it comes off in two different ways:
    1. Diffusely: that is, regardless of the angle at which it hits the surface, it’s bounced off evenly in all directions. No matter what angle you’re looking at it from, the brightness of the reflected light is governed entirely by the angle at which the light hits the surface — the more glancing the angle at which the light strikes, the dimmer the reflection. This diffuse reflection is what we normally think of when we’re thinking of an object that is lit up.
    2. Specularly: that is, in a mirror-like fashion. The portion of the light that is reflected this way bounces off the surface at the same angle as it hit it. In this case, the brightness of the light you see reflected from the material depends on whether or not your eyes happen to be in the line along which the light was bounced — that is, it depends not only on the angle at which the light hit the surface but on the angle between your line of sight and the surface. This specular reflection is what causes “glints” or “highlights” on objects, and the amount of specular reflection can obviously vary from material to material; unpolished wood will probably have very little specular reflection, and highly-polished metal will have quite a lot.

The Phong model adds a further twist to this two-by-two system, by saying that all lights have two properties:

  1. The RGB values for the diffuse light they produce.
  2. The RGB values for the specular light they produce.

…and that all materials have four:

  1. The RGB values for the ambient light they reflect.
  2. The RGB values for the diffuse light they reflect.
  3. The RGB values for the specular light they reflect.
  4. The shininess of the object, which determines the details of the specular reflection.

For each point in the scene, the colour is a combination of the colour of the light shining on it, the material’s own colours, and the lighting effects. So, in order to completely specify lighting in a scene according to the Phong model, we need two properties per light, and four per point on the surface of our object. Ambient light is, by its very nature, not tied to any particular light, so we also need somewhere to store the ambient light level for the scene as a whole; sometimes, though, it can be easiest to just specify an ambient level for each light source and then add them all up into a single term.

Anyway, once we have all of that information, we can work out the ambient, diffuse and specular reflections of light at every point, and then add them together to work out the overall colour. There’s an excellent diagram on Wikipedia showing how that works. All our shader needs to do is work out, for each vertex, the contributions to the vertex’s red, green and blue colours from ambient, diffuse and specular lighting, use them to weight the colour’s RGB components, add them all together, and output the result.
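
To make that concrete, here’s a sketch of what the full Phong sum might look like in shader code. This is purely illustrative: every name in it is invented for the sketch, and (as you’ll see in a moment) in this lesson we’ll only be implementing the ambient and diffuse parts.

  // Illustrative GLSL only; all of these variable names are made up.
  vec3 ambient = materialAmbientColor * sceneAmbientColor;
  float diffuseFactor = max(dot(surfaceNormal, directionToLight), 0.0);
  vec3 diffuse = materialDiffuseColor * lightDiffuseColor * diffuseFactor;
  vec3 bounced = reflect(-directionToLight, surfaceNormal);
  float specularFactor = pow(max(dot(bounced, directionToEye), 0.0), shininess);
  vec3 specular = materialSpecularColor * lightSpecularColor * specularFactor;
  vec3 pointColor = ambient + diffuse + specular;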

Now, for this lesson, we’re going to keep things simple. We’re only going to consider diffuse and ambient lighting, and will ignore specular. We’ll use the textured cube from the last lesson, and we’ll assume that the colours in the texture are the values to use for both diffuse and ambient reflection. And finally, we’ll consider only the simplest kind of diffuse lighting — directional lighting. It’s worth explaining what that means with a diagram.

[Diagram: directional lighting]

Light that’s coming towards a surface from one direction can be of two kinds — simple directional lighting that is in the same direction across the whole scene, and lighting that comes from a single point within the scene (and is thus at a different angle in different places).

For simple directional lighting, the angle at which the light hits the vertices on a given face — at points A and B on the diagram — is always the same. Think of light from the sun; all rays are parallel.

[Diagram: point lighting]

If, instead, the light is coming from a point within the scene, the angle the light makes will be different at each vertex; at point A in this second diagram, the angle is about 45 degrees, whereas at point B it is about 90 degrees to the surface.

What this means is that for point lighting, we need to work out the direction from which the light is coming for every vertex, whereas for directional lighting we only need a single value for the light source. This makes point lighting a little bit harder, so this lesson will only use simple directional lighting; point lighting will come later (and shouldn’t be too hard for you to work out on your own anyway :-)

So, now we’ve refined the problem a bit more. We know that all of the light in our scene will be coming in a particular direction, and this direction won’t change from vertex to vertex. This means that we can put it in a uniform variable, and the shader can pick it up. We also know that the effect the light has at each vertex will be determined by the angle it makes with the surface of our object at that vertex, so we need to represent the orientation of the surface somehow. The best way to do this in 3D geometry is to specify the normal vector to the surface at the vertex; this allows us to specify the direction in which the surface is facing as a set of three numbers. (In 2D geometry we could equally well use the tangent — that is, the direction of the surface itself at the vertex — but in 3D geometry the tangent can slope in two directions, so we’d need two vectors to describe it, while the normal lets us use just one vector.)

Once we have the normal, there’s a final piece required before we can write our shader: given a surface’s normal vector at a vertex and the vector that describes the direction the light is coming from, we need to work out how much light the surface will diffusely reflect. This turns out to be proportional to the cosine of the angle between those two vectors; if the angle between the light and the normal is 0 degrees (that is, the light is hitting the surface full-on, at 90 degrees to the surface itself), then the surface reflects all of the light. If the light’s angle to the normal is 90 degrees, nothing is reflected. And for everything in between, it follows the cosine curve. (If the angle is more than 90 degrees, then we would in theory get negative amounts of reflected light. This is obviously silly, so we actually use the cosine or zero, whichever is the larger.)

Conveniently for us, calculating the cosine of the angle between two vectors is a trivial calculation, if they both have a length of one; it’s done by taking their dot product. And even more conveniently, dot products are built into shaders, using the logically-named dot function.
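
If you want to convince yourself of that, here’s a quick standalone JavaScript check (it’s not part of the lesson’s code; the dot function is hand-rolled purely for illustration):

    // For vectors of length one, the dot product equals the cosine of the
    // angle between them.
    function dot(a, b) {
      return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }
    var normal = [0.0, 0.0, 1.0];                          // surface facing along +z
    var toLight = [0.0, Math.sqrt(0.5), Math.sqrt(0.5)];   // 45 degrees off the normal
    console.log(dot(normal, toLight));                     // 0.7071... = cos(45 degrees)
    console.log(Math.max(dot(normal, [0.0, 0.0, -1.0]), 0.0));  // light behind: clamped to zero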

Whew! That was quite a lot of theory to get started with — but now we know that all we need to do to get simple directional lighting working is:

  • Keep a set of normal vectors, one for each vertex.
  • Have a direction vector for the light.
  • In the vertex shader, calculate the dot product of the vertex’s normal and the lighting vector, and weight the colours appropriately, also adding in a component for the ambient lighting.

Let’s take a look at how that works in the code. We’ll start at the bottom and work our way up. Obviously the HTML for this lesson differs from the last one, because we have all of the extra input fields, but I won’t bore you with the details there… let’s move up to the JavaScript, where our first port of call is the initBuffers function. In there, just after the code that creates the buffer containing the vertex positions but before the code that does the same for the texture coordinates, you’ll see some code to set up the normals. This should look pretty familiar in style by now:

    cubeVertexNormalBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, cubeVertexNormalBuffer);
    var vertexNormals = [
      // Front face
       0.0,  0.0,  1.0,
       0.0,  0.0,  1.0,
       0.0,  0.0,  1.0,
       0.0,  0.0,  1.0,

      // Back face
       0.0,  0.0, -1.0,
       0.0,  0.0, -1.0,
       0.0,  0.0, -1.0,
       0.0,  0.0, -1.0,

      // Top face
       0.0,  1.0,  0.0,
       0.0,  1.0,  0.0,
       0.0,  1.0,  0.0,
       0.0,  1.0,  0.0,

      // Bottom face
       0.0, -1.0,  0.0,
       0.0, -1.0,  0.0,
       0.0, -1.0,  0.0,
       0.0, -1.0,  0.0,

      // Right face
       1.0,  0.0,  0.0,
       1.0,  0.0,  0.0,
       1.0,  0.0,  0.0,
       1.0,  0.0,  0.0,

      // Left face
      -1.0,  0.0,  0.0,
      -1.0,  0.0,  0.0,
      -1.0,  0.0,  0.0,
      -1.0,  0.0,  0.0,
    ];
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertexNormals), gl.STATIC_DRAW);
    cubeVertexNormalBuffer.itemSize = 3;
    cubeVertexNormalBuffer.numItems = 24;

Simple enough. The next change is a bit further down, in drawScene, and is just the code required to bind that buffer to the appropriate shader attribute:

    gl.bindBuffer(gl.ARRAY_BUFFER, cubeVertexNormalBuffer);
    gl.vertexAttribPointer(shaderProgram.vertexNormalAttribute, cubeVertexNormalBuffer.itemSize, gl.FLOAT, false, 0, 0);

Just after this, also in drawScene, is code related to the fact that we’ve removed the code from lesson 6 that allowed us to switch between textures, so we have only one texture to use:

    gl.bindTexture(gl.TEXTURE_2D, crateTexture);

The next bit is a little more involved. Firstly, we see if the “lighting” checkbox is checked, and set a uniform for our shaders telling them whether or not it is:

    var lighting = document.getElementById("lighting").checked;
    gl.uniform1i(shaderProgram.useLightingUniform, lighting);

Next, if lighting is enabled, we read out the red, green and blue values for the ambient lighting as specified in the input fields at the bottom of the page and push them up to the shaders too:

    if (lighting) {
      gl.uniform3f(
        shaderProgram.ambientColorUniform,
        parseFloat(document.getElementById("ambientR").value),
        parseFloat(document.getElementById("ambientG").value),
        parseFloat(document.getElementById("ambientB").value)
      );

Next, we want to push up the lighting direction:

      var lightingDirection = [
        parseFloat(document.getElementById("lightDirectionX").value),
        parseFloat(document.getElementById("lightDirectionY").value),
        parseFloat(document.getElementById("lightDirectionZ").value)
      ];
      var adjustedLD = vec3.create();
      vec3.normalize(lightingDirection, adjustedLD);
      vec3.scale(adjustedLD, -1);
      gl.uniform3fv(shaderProgram.lightingDirectionUniform, adjustedLD);

You can see that we adjust the lighting direction vector before passing it to the shader, using the vec3 module — which, like the mat4 we use for our model-view and projection matrices, is part of glMatrix. The first adjustment, vec3.normalize, scales it up or down so that its length is one; you will remember that for the dot product of two vectors to equal the cosine of the angle between them, they both need to be of length one. The normals we defined earlier all had the correct length, but as the lighting direction is entered by the user (and it would be a pain for them to have to normalise vectors themselves), we convert this one. The second adjustment is to multiply the vector by the scalar -1 — that is, to reverse its direction. This is because we’re specifying the lighting direction in terms of where the light is going, while the calculations we discussed earlier were in terms of where the light is coming from. Once we’ve done that, we pass it up to the shaders using gl.uniform3fv, which puts a three-element Float32Array (which is what the vec3 functions deal with) into a uniform.
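
If it helps to see what those two vec3 calls are doing, here’s the same adjustment written out by hand in plain JavaScript (purely an illustration; the lesson’s code uses glMatrix as shown above):

    // Scale the user-entered direction to unit length, then reverse it so
    // that it describes where the light is coming from rather than where
    // it is going.
    var x = lightingDirection[0], y = lightingDirection[1], z = lightingDirection[2];
    var len = Math.sqrt(x * x + y * y + z * z);
    var adjustedLD = [-x / len, -y / len, -z / len];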

The next bit of code is simpler; it just copies the directional light’s colour components up to the appropriate shader uniform:

      gl.uniform3f(
        shaderProgram.directionalColorUniform,
        parseFloat(document.getElementById("directionalR").value),
        parseFloat(document.getElementById("directionalG").value),
        parseFloat(document.getElementById("directionalB").value)
      );
    }

That’s all of the changes in drawScene. Moving up to the key-handling code, there are simple changes to remove the handler for the F key, which we can ignore, and then the next interesting change is in the function setMatrixUniforms, which you will remember copies the model-view and the projection matrices up to the shader’s uniforms. We’ve added four lines to copy up a new matrix, based on the model-view one:

    var normalMatrix = mat3.create();
    mat4.toInverseMat3(mvMatrix, normalMatrix);
    mat3.transpose(normalMatrix);
    gl.uniformMatrix3fv(shaderProgram.nMatrixUniform, false, normalMatrix);

As you’d expect from something called a normal matrix, it’s used to transform the normals :-) We can’t transform them in the same way as we transform the vertex positions, using the regular model-view matrix, because normals would be affected by our translations as well as our rotations — for example, if we ignore rotation and assume we’ve done an mvTranslate of (0, 0, -5), the normal (0, 0, 1) would become (0, 0, -4), which is not only too long but is pointing in precisely the wrong direction.

We could work around that. You may have noticed that in the vertex shaders, when we multiply the 3-element vertex positions by the 4×4 model-view matrix, we extend each position to four elements by adding an extra 1 onto the end so that the two are compatible. That 1 is required not just to pad things out, but also to make the multiplication apply translations as well as rotations and other transformations; it so happens that by adding a 0 instead of a 1, we could make the multiplication ignore the translations. This would work perfectly well for us right now, but unfortunately wouldn’t handle cases where our model-view matrix included other transformations, specifically scaling and shearing. For example, if we had a model-view matrix that doubled the size of the objects we were drawing, their normals would wind up double-length as well, even with a trailing zero — which would cause serious problems with the lighting. So, in order not to get into bad habits, we’re doing it properly :-)
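
To put that trailing-1-versus-0 trick in shader terms, it would look something like this in GLSL (illustration only; as explained above, it’s not what our shader actually does):

  // With w = 1.0, the multiplication applies translations too; right for
  // positions, wrong for normals:
  vec4 movedNormal = uMVMatrix * vec4(aVertexNormal, 1.0);
  // With w = 0.0, translations are ignored, but any scaling or shearing in
  // the matrix would still distort the normal:
  vec4 rotatedNormal = uMVMatrix * vec4(aVertexNormal, 0.0);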

The proper way to get the normals pointing in the right direction is to use the transposed inverse of the top-left 3×3 portion of the model-view matrix. There’s more on this here, and you might also find Coolcat’s comments below useful (he made them regarding an earlier version of this lesson). (Thanks also to Shy for further advice.)
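
If you’d like a concrete example of why the inverse transpose is the right thing, here’s a tiny two-dimensional illustration in plain JavaScript (nothing to do with the lesson’s code). Take a surface whose tangent is (1, 1), so that its normal is (-1, 1), and scale the scene by 2 along the x axis only:

    function dot2(a, b) { return a[0] * b[0] + a[1] * b[1]; }
    var scaledTangent = [2, 1];      // the tangent (1, 1) after the x-only scaling
    var naiveNormal = [-2, 1];       // the normal scaled the same way: wrong
    var properNormal = [-0.5, 1];    // the normal times the inverse transpose, diag(1/2, 1)
    console.log(dot2(scaledTangent, naiveNormal));   // -3: no longer perpendicular
    console.log(dot2(scaledTangent, properNormal));  // 0: still a valid normal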

Anyway, once we’ve calculated this matrix and done the appropriate magic, it’s put into the shader uniforms just like the other matrices.

Moving up through the code from there, there are a few trivial changes to the texture-loading code to make it just load one mipmapped texture instead of the list of three that we did last time, and some new code in initShaders to initialise the vertexNormalAttribute attribute on the program so that drawScene can use it to push the normals up to the shaders, and also to do likewise for all of the newly-introduced uniforms. None of these is worth going over in any detail, so let’s move straight on to the shaders.
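
For reference, though, the new lookups in initShaders will be along these lines (the names come straight from the code we’ve seen already; check the live source for the definitive version):

    shaderProgram.vertexNormalAttribute = gl.getAttribLocation(shaderProgram, "aVertexNormal");
    gl.enableVertexAttribArray(shaderProgram.vertexNormalAttribute);

    shaderProgram.nMatrixUniform = gl.getUniformLocation(shaderProgram, "uNMatrix");
    shaderProgram.useLightingUniform = gl.getUniformLocation(shaderProgram, "uUseLighting");
    shaderProgram.ambientColorUniform = gl.getUniformLocation(shaderProgram, "uAmbientColor");
    shaderProgram.lightingDirectionUniform = gl.getUniformLocation(shaderProgram, "uLightingDirection");
    shaderProgram.directionalColorUniform = gl.getUniformLocation(shaderProgram, "uDirectionalColor");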

The fragment shader is simpler, so let’s look at it first:

  precision mediump float;

  varying vec2 vTextureCoord;
  varying vec3 vLightWeighting;

  uniform sampler2D uSampler;

  void main(void) {
     vec4 textureColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
     gl_FragColor = vec4(textureColor.rgb * vLightWeighting, textureColor.a);
  }

As you can see, we’re extracting the colour from the texture just like we did in lesson 6, but before returning it we’re adjusting its R, G and B values by a varying variable called vLightWeighting. vLightWeighting is a 3-element vector, and (as you would expect) holds adjustment factors for red, green and blue as calculated from the lighting by the vertex shader.
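
One small point worth knowing: the * operator on two vectors in GLSL works component-wise, so each channel is multiplied separately. For example (made-up values):

  vec3 textureColor = vec3(0.8, 0.4, 0.2);
  vec3 weighting = vec3(0.5, 0.5, 1.0);
  vec3 result = textureColor * weighting;  // (0.4, 0.2, 0.2)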

So, how does that work? Let’s look at the vertex shader code; the new parts are the aVertexNormal attribute, the lighting uniforms, and the calculation of vLightWeighting at the end:

  attribute vec3 aVertexPosition;
  attribute vec3 aVertexNormal;
  attribute vec2 aTextureCoord;

  uniform mat4 uMVMatrix;
  uniform mat4 uPMatrix;
  uniform mat3 uNMatrix;

  uniform vec3 uAmbientColor;

  uniform vec3 uLightingDirection;
  uniform vec3 uDirectionalColor;

  uniform bool uUseLighting;

  varying vec2 vTextureCoord;
  varying vec3 vLightWeighting;

  void main(void) {
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    vTextureCoord = aTextureCoord;

    if (!uUseLighting) {
      vLightWeighting = vec3(1.0, 1.0, 1.0);
    } else {
      vec3 transformedNormal = uNMatrix * aVertexNormal;
      float directionalLightWeighting = max(dot(transformedNormal, uLightingDirection), 0.0);
      vLightWeighting = uAmbientColor + uDirectionalColor * directionalLightWeighting;
    }
  }

The new attribute, aVertexNormal, of course holds the vertex normals we’re specifying in initBuffers and passing up to the shader in drawScene. uNMatrix is our normal matrix, uUseLighting is the uniform specifying whether lighting is on, and uAmbientColor, uDirectionalColor, and uLightingDirection are the obvious values that the user specifies in the input fields in the web page.

In light of the maths we went through above, the actual body of the code should be fairly easy to understand. The main output of the shader is the varying variable vLightWeighting, which we just saw is used to adjust the colour of the image in the fragment shader. If lighting is switched off, we just use a default value of (1, 1, 1), meaning that colours should not be changed. If lighting is switched on, we work out the normal’s orientation by applying the normal matrix, then take the dot product of the normal and the lighting direction to get a number for how much the light is reflected (with a minimum of zero, as I mentioned earlier). We can then work out a final light weighting for the fragment shader by multiplying the colour components of the directional light by this weighting, and then adding in the ambient lighting colour. The result is just what the fragment shader needs, so we’re done!
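
To trace some made-up numbers through that calculation: suppose the ambient colour is (0.2, 0.2, 0.2), the directional colour is (0.8, 0.8, 0.8), and the light hits the surface at 60 degrees to the normal, so that the dot product is cos(60 degrees) = 0.5. Then:

    // vLightWeighting = ambient + directional * weighting
    //                 = (0.2, 0.2, 0.2) + (0.8, 0.8, 0.8) * 0.5
    //                 = (0.6, 0.6, 0.6)
    // ...so a texel of (1.0, 0.5, 0.0) ends up on screen as (0.6, 0.3, 0.0).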

Now you know all there is to learn from this lesson: you have a solid foundation for understanding how lighting works in graphics systems like WebGL, and should know the details of how to implement the two simplest forms of lighting, ambient and directional.

If you have any questions, comments, or corrections, please do leave a comment below!

Next time, we’ll take a look at blending, which we will use to make objects that are partly transparent.

<< Lesson 6 | Lesson 8 >>

Acknowledgments: the Wikipedia page on Phong shading helped a lot when writing this, especially in trying to make sense of the maths. The difference between the matrices required to adjust vertex positions and normals was made much clearer by this Lighthouse 3D tutorial, especially once Coolcat had clarified things in the comments below. Chris Marrin’s spinning box (with an extended version by Jacob Seidelin) was also a helpful guide, and Peter Nitsch’s spinning box helped too. As always, I’m deeply in debt to NeHe for his OpenGL tutorial for the script for this lesson.

76 Responses to “WebGL Lesson 7 – basic directional and ambient lighting”

  2. 131 says:

    var normalMatrix = mat3.create();
    mat4.toInverseMat3(mvMatrix, normalMatrix);
    mat3.transpose(normalMatrix);
    gl.uniformMatrix3fv(shaderProgram.nMatrixUniform, false, normalMatrix);

    As I read the WebGL specs, I found that the 2nd arg of the uniformMatrix* functions is “GLboolean transpose”; does this mean you can obtain the same result by simply doing

    var normalMatrix = mat3.create();
    mat4.toInverseMat3(mvMatrix, normalMatrix);
    gl.uniformMatrix3fv(shaderProgram.nMatrixUniform, true, normalMatrix);

    ?

  4. xema25 says:

    Hi. How can I get two different directional lights on the box?

    Thanks :D

  5. Matthew Mitchell says:

    Copying all the source code to a local directory and running it on chrome fails but the online version works. I copied the gif and the two libraries. It doesn’t work. Odd.

  6. baluv12 says:

    Hi!

    If I don’t use texture, how can I write this section?!

    vec4 textureColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
    gl_FragColor = vec4(textureColor.rgb * vLightWeighting, textureColor.a);

    I would like to use it for a simple coloring (vColor)…

    e.g. for a red cube…

    thanks

  7. Anton says:

    In GLSL, what exactly does it mean when you use the * operator with two vectors?

    For example in the Fragment shader we do the following:

    textureColor.rgb * vLightWeighting

    My guess is that the first element of each vector is multiplied and the product is used as the first element of the resulting vector. This would continue for all the elements in the vectors.

    Am I correct in this assumption?

  8. Alex Bunting says:

    @Anton – yes, presumably by multiplying it is taking the cross product of the two.

    In questions of my own, I am having trouble getting this to work — verbose mode and the JavaScript console don’t give me any errors, but the cube won’t show. The lack of errors suggests the thing has just gone black, and yet the values for how bright it is are all set high.

    Interestingly when I take out the:

    shaderProgram.vertexNormalAttribute = gl.getAttribLocation(shaderProgram, "aVertexNormal");
    gl.enableVertexAttribArray(shaderProgram.vertexNormalAttribute);

    lines, it starts to work, but without the directional lighting. Just the ambient.

  9. Alex Bunting says:

    ah, nevermind, had an issue with my variable itemSize for the normals variable.

    Another great tutorial by the way, thanks. Especially liking your nice lengthy descriptions :)

  11. It looks like the link to the lighthouse3d.com page about normal matrix has been moved to

    http://www.lighthouse3d.com/tutorials/glsl-tutorial/the-normal-matrix/

  12. Maruthi says:

    Hi,

    Thanks for the tutorials. They are of great help in getting me started.
    I had a question.

    In this tutorial, the ambient light is added to the directional light while computing the lightWeighting. However the lightWeighting is multiplied (kind of dot product) with the Texture Color Vec3 in the shader. At One place the colors are additive (ambient+Directional) and at other place they are multiplicative (TextureRGB*lightWeighting). I can understand that the multiplication produces different colors, but wanted to understand why addition at one place and multiplication at the other

    Also, it looks like the main link http://www.lighthouse3d.com is broken. I found another good link that explains OpenGL Normal Vector Transformation. Might be of help to readers. http://www.songho.ca/opengl/gl_normaltransform.html

  13. Sergejack says:

    When you describe “Point lighting” shouldn’t the light be computed for every fragment not every vertex?

  14. Eugene says:

    @Maruthi

    In the (ambient + directional) part we’re calculating a lighting level, not a colour. For example, if the scene’s ambient colour has a power of 0.2 and the vertex’s directional lighting has a power of 0.3, the total vertex lighting level will be half (0.5).

    To apply that lighting level to the pure texture colour, we use multiplication; in this example we halve the base colour’s intensity.

  15. James says:

    I am exporting an .x3d from Blender.

    When I do my directional lighting upon the normals/indices I have been given from Blender with settings:

    Apply modifiers/ Triangulate/ Normals / Hierarchy
    Forward = Y forward
    Up = Z up

    I come across a problem. The lighting seems to spread across the face it points at but also leaks onto the sides in a diffusion.

    This leads me to think the normals and indices that are being given to me need to be translated in some way.

    If I look at the x3d XML that i get, I see some extra values I have not been using:

    I looked into what IFS stood for and realised it could be translations for the normals.

    Here are 2 questions.

    1. Can you rotate/scale normals without rotating/scaling the vertices (which have already had translations applied)?

    2. How does this work: http://www.jamestrusler.co.uk/files/webglquestion/question1.html

  16. James says:

    OMFG.. YOU WONT BELIEVE WHAT THE BUG WAS. I FEEL LIKE A COMPLETE NOOB!
    obj[a].Transform.Group.Shape[b].vertexIndexBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, obj[a].vertexIndexBuffer);

    LOLLLL can u believe.. 3 days wasted to 1 bug. That’s what happens when you’re a complete NOOB!

  17. capnramses says:

    @AlexBunting @Anton To clarify;

    Anton is correct r,g,b,a * r,g,b,a = (r*r,g*g,b*b,a*a). This is called “component-wise” multiplication and is achieved in GLSL with the asterisk operator. Useful for vectors that hold colours, or for multiplying a vector by a scalar (float).

    Cross-product multiplication is quite different, and is achieved with the vectorC = cross (vectorA, vectorB) built-in function. You use this for direction vectors to get a vector that is perpendicular to 2 other vectors.

  18. Chris B says:

    @Matthew Mitchell, you need something like WAMP or MAMP server to run webgl from localhost

  19. Karen Reed says:

    Your website would be a whole lot more useful if the user could remove the panel of ads going down the right side of the screen. There doesn’t appear to be any way to close it so you can’t read the text on the website.

  20. Danny says:

    I don’t get this, how come we don’t need to do a shaderProgram.vertexNormalAttribute = gl.getAttribLocation(shaderProgram, "aVertexNormal"); and enable it?? Can someone tell me please :(

  21. Grüse says:

    Danny, why would you think you don’t need to do that? Check the source code, it’s right there, in the initShaders() function.

  22. Michael says:

    var normalMatrix = mat3.create();
    mat4.toInverseMat3(mvMatrix, normalMatrix);
    mat3.transpose(normalMatrix);

    Here it’s unnecessary to inverse or transpose the matrix.
    Since for orthogonal transformations the inverse and the transpose are the same thing you’re just canceling the previous step and end up where you started.
    You just need to copy the top-left 9 numbers from mat4 to mat3.

  23. Michael says:

    On light direction.
    You could mention that the vertex shader is in clip space, which is left handed (-z points towards you), while the code in your app before sending stuff to the vertex shader is in eye space, which is right-handed (-z points away from you).
    That’s why the light direction works as expected in the x and y directions, while the z direction works differently.
    For example, consider these light directions:
    [1.0, 0.0, 0.0] lights up the right face of the cube, as expected
    [-1.0, 0.0, 0.0] the left face, as expected
    [0.0, 1.0, 0.0] lights up the top face, as expected
    [0.0, -1.0, 0.0] the bottom face, as expected, however
    [0.0, 0.0, 1.0] oops, lights up the back face, and
    [0.0, 0.0, -1.0] oops again, lights up the front face
    because, as mentioned, in the vertex shader the z axis is inverted because the clip space is left-handed.

    This effect also shows up in your demo.

  24. Sakari says:

    Thanks for these great tutorials ^_^

    I decided to use the newest version of glmatrix, the 2.x series.
    So if you are using the new glmatrix library, you will have to change the following parts in this tutorial in order for it to work properly:

    in setMatrixUniforms, instead of

    var normalMatrix = mat3.create();
    mat4.toInverseMat3(mvMatrix, normalMatrix);
    mat3.transpose(normalMatrix);
    gl.uniformMatrix3fv(shaderProgram.nMatrixUniform, false, normalMatrix);

    You can use:

    var normalMatrix = mat3.create();
    mat3.normalFromMat4(normalMatrix, mvMatrix);

    Seems that in 2.x this has been combined to one function.

    Also, in drawScene, instead of:

    var adjustedLD = vec3.create();
    vec3.normalize(lightingDirection, adjustedLD);
    vec3.scale(adjustedLD, -1);

    You will have to swap the parameters around like this:

    var adjustedLD = vec3.create();
    vec3.normalize(adjustedLD, lightingDirection);
    vec3.scale(adjustedLD, adjustedLD, -1);

    Took me a while to figure this out, but also a good thing so I had to take a look what glmatrix is actually doing :)

  25. Kendzi says:

    As I understand it, the value of the variable vLightWeighting should be in the range [0, 1], but when I look at the equation uAmbientColor + uDirectionalColor * directionalLightWeighting, it can become a vector of 2s. That could happen when directionalColor and ambientColor are both (1,1,1) and the dot product is in the direction of the light.

    Is it correct?
