Next steps for the lessons, and a demo

I’m wondering what I should cover next in the lessons. The longer-term plan I’m working to right now is to put together something a bit larger than my lessons have been so far: a complete scene demonstrating most of what I’ve covered, but also showing more about program structure. I’ll aim to build it while I write a few more lessons (because there’s more I want to put into it than I’ve tried to explain so far), and then do one large post working through the structure of the big demo.

I’ve started putting something together; it’s called Spacelike (for no reason other than that I felt like writing something called Spacelike) and you can check it out here (a warning — the sky texture is quite large and will take a while to load). The only controls right now are orbiting the spacecraft by dragging, and zooming in and out with Page Up / Page Down. Here’s a quick video for those without a WebGL-enabled browser:

Working on this has led me to think of a bunch of different things that feel like they would be worth teaching, some of which are already used in the demo and some of which aren’t:

  • The way cameras interact with lighting.
  • Sky-spheres and -boxes — that is, background imagery that is always behind objects in the scene.
  • Picking — the ability to handle the user clicking on an object in a scene.
  • Shadows — objects casting shadows on other objects.
  • Exporting meshes from Blender (like the spacecraft in the demo).
  • Normal-mapping — like specular maps, but with the normals, allowing you to give the impression of much more complex surfaces.
  • Reflections — shiny stuff.
  • Particles — flames, smoke, explosions, etc., like in this Google demo.
  • Heads-up displays.

What do you think? Do any of these sound particularly cool? Is there something else that I should be looking at?


38 Responses to “Next steps for the lessons, and a demo”

  1. murphy says:

    Spacelike crashes both Safari and Firefox on Mac OS for me :(

    Shadows would be very cool. I haven’t got the stencil buffer working yet, so I wonder what technique you’ll be using :)

    Picking is another nice topic.

  2. giles says:

    Hi murphy — that’s a pain; I suspect it might be my use of insanely large textures… do you get anything at all displayed, or does it go down straight away?

    I’ll put you down with a first-choice vote for shadows (I agree, they might be challenging right now, but still…) and a second choice for picking.

  3. steve says:

    Picking and Shadows are the two most painful, and therefore most worthy. :)

  4. Paul Brunt says:

    I really like the idea of a bigger topic for the lessons. Showing people what they could do with what they’ve learned has to be a plus.

    I’d agree picking and shadows are a challenge and a lesson on them would be really cool.

    But I would really like to see normal maps, as I found them more challenging: you have to start thinking about tangent space to use them properly, which I found very difficult to get my head round.

    Why not add parallax mapping to the list? I’d love to see a lesson on that. For such a cheap and impressive technique, I think it’s way underused. Although I’m a bit biased, having just added it to GLGE ;-)

  5. steve says:

    The challenge of picking is that the drawn shape can be different from the initial data, so the JavaScript side of things just about needs to replicate the pipeline to figure out what happened.

    Another approach is to use picking shaders, one with a different colour per object (to find which object) and one with linear UV colouring for where the object was picked.

    This will handle vertex transforms, and bump mapping can be handled by linear changes to the colour in the fragment shader.

    I’m sure you already know this; I was just concerned that we might see a lesson based on ray intercepts against primitive shapes for picking. :)

  6. jd says:

    Been going back and forth with Chris Marrin. In order to promote reuse, we need to start using XMLHttpRequest for downloading vertex/fragment shader code. Otherwise, common code used in both vertex and fragment shaders will have to be cut and pasted into both. The “src” attribute on the <script> tag can’t handle shader code.

  7. giles says:

    @steve — thanks for that, it wasn’t something I’d properly appreciated, so very timely! It makes a lot of sense. So, if I understand the process correctly, if we consider just being able to detect which object was picked and not where on the object the pick action was performed, we render the scene twice — once with the normal shaders to produce the image, and once with simplified shaders: no lighting, no textures, and only the pickable objects (which should leave us with a fragment shader that just does a static colour for everything). Each pickable object is a different colour, and we do this second rendering to a texture, so when the user performs a picking action we can just get the location, look up the colour of the pixel in the texture, and find out which object has been hit.

    Is that roughly right? If so, perhaps I need to introduce render-to-texture first in a simpler form: perhaps a scene containing a mirror?
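
    Something like this sketch is what I have in mind (pickFramebuffer, drawSceneFlatColours and colourToObject are hypothetical names, and the framebuffer setup isn’t shown):

    function pickObjectAt(gl, mouseX, mouseY) {
      // Render the pickable objects to an offscreen framebuffer, each with
      // its own flat colour: no lighting, no textures.
      gl.bindFramebuffer(gl.FRAMEBUFFER, pickFramebuffer);
      gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
      drawSceneFlatColours();
      // readPixels uses a bottom-left origin, so flip the mouse Y coordinate.
      var pixel = new Uint8Array(4);
      gl.readPixels(mouseX, gl.viewportHeight - mouseY, 1, 1,
                    gl.RGBA, gl.UNSIGNED_BYTE, pixel);
      gl.bindFramebuffer(gl.FRAMEBUFFER, null);
      // Look the colour up in a table built when the colours were assigned.
      return colourToObject[pixel[0] + "," + pixel[1] + "," + pixel[2]];
    }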

    @jd — that’s a good point generally; I think for the larger-scale lesson I should look at moving the shaders into separate files just like the meshes. Also, if I understand steve’s comment above correctly, then I guess you’re also saying that separating out the shaders like that will make it easier to do a render with different shaders for picking — is that right?

  8. rodolfo says:

    How about a lesson on collisions? That would be very helpful!

  9. jd says:

    Sorry, I was off topic with my comment. I just wanted to make it clear that keeping all shader code within <script> tags hampers reuse of code that can be leveraged in both vertex and fragment shaders.

    Yes, shader code should be in separate files like the meshes. Then what we’re calling id=”shader-fs” and id=”shader-vs” can be ‘linked’ with other shader files (after being fetched with XMLHttpRequest) containing code that is common to both fragment and vertex shaders.

    In a nutshell, all shader code should be kept in separate files, downloaded with XMLHttpRequest, and then linked within the page.

    This isn’t a big deal right now with simple shaders. But as shaders get more complex, reuse is going to be a bigger issue.
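
    In sketch form it might look like this (the file names and the compileShader helper are hypothetical):

    function loadShaderSource(url, callback) {
      var request = new XMLHttpRequest();
      request.open("GET", url, true);
      request.onreadystatechange = function () {
        if (request.readyState === 4 && request.status === 200) {
          callback(request.responseText);
        }
      };
      request.send(null);
    }

    // Fetch the shared code once, then prepend it to each shader's own
    // source before compiling, so nothing is cut and pasted between them.
    loadShaderSource("common.glsl", function (common) {
      loadShaderSource("shader.vs", function (vertexSource) {
        compileShader(gl.VERTEX_SHADER, common + "\n" + vertexSource);
      });
    });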

  10. jd says:

    Something filtered out text in my posting above…

    what we’re calling id=”shader-fs” and id=”shader-vs” can be “linked”…

  11. giles says:

    @rodolfo — that’s a good idea; once I’ve done picking and shadows, I might be able to re-use the ideas from them for that.

    @jd — ah, thanks. Makes sense.

  12. Titan says:

    Any news about stencil buffers? (My attempts keep failing.)

  13. giles says:

    Nice idea! I’ll have to learn about them…

  14. Nathan says:

    The demo seems to fail, but the video looks excellent. I would LOVE to see a further explanation of this demo. Specifically, I am interested in handling camera movement. I have a 3D scatter plot from DB data. Performance is awful because of the amount of data shown. It also has problems with selecting elements for more information. I have come to discover this is because I am rotating the model instead of the camera around the model, so the specific elements are at a different position when I try to select one.

    So I am interested in camera movement, and details about this awesome demo you put together. :)

  15. Nathan says:

    Picking is another interesting topic. All my efforts have been with O3D, but everything I have learned here applies to that. The picking element is extremely slow, though, as it takes 4-5 seconds if you have a significant number of objects rendered. There has to be a more optimal approach than traversing the entire tree. I would like to see your approach to picking as well.

    Love this site!

  16. giles says:

    Hi Nathan — many thanks for the heads-up on Spacelike; I’d forgotten to upgrade it to the latest WebGL API! It should work fine now.

    Since I did this post, I’ve started on a new lesson to introduce render-to-texture — an interesting topic in itself, but more interesting because it’s the technique underlying GPU picking, which lets you handle picking at the same speed as repainting. It’s been languishing for a while because I’ve been very busy in my day job, but hopefully when it’s done it’ll be of use to you.

  17. Nathan says:

    They are all extremely useful. Thanks for all the insight you bring.

  18. giles says:

    @Nathan — glad to help :-)

  19. john says:

    Here is my attempt at implementing picking with WebGL.

    http://dl.dropbox.com/u/5095342/WebGL/pickingLesson2.html

    I based it on Lesson 4, and when you click on one of the rotating boxes it will turn yellow. The picking algorithm is based on the following tutorials.

    http://eigenclass.blogspot.com/2008/10/opengl-es-picking-using-ray-boundingbox.html

    http://trac.bookofhook.com/bookofhook/trac.cgi/wiki/MousePicking

  20. giles says:

    Hi John — that’s very cool! I’ll put it in the next “around the net” post if you want :-) BTW I’m thinking of using GPU picking (that is, render-to-texture with a different colour per object, then using the colour to determine what’s been picked) for my own tutorial on this.

  21. john says:

    Sure, go ahead and put it in the “around the net” post. I have 2 samples for picking using the math from the 2 tutorials. The first picking demo

    http://dl.dropbox.com/u/5095342/WebGL/pickingLesson1.html

    uses the picking math from this tutorial.

    http://trac.bookofhook.com/bookofhook/trac.cgi/wiki/MousePicking

    and the second picking demo

    http://dl.dropbox.com/u/5095342/WebGL/pickingLesson2.html

    uses the math from this tutorial.

    http://eigenclass.blogspot.com/2008/10/opengl-es-picking-using-ray-boundingbox.html

    Both of these picking demos use the ray-intersects-geometry technique of picking. In order to calculate the intersection of the ray with the geometry, I used the Line and Plane objects from the sylvester.js library.

    http://sylvester.jcoglan.com/docs

    I used the line-intersecting-a-plane math from this library to intersect the pick ray with the plane representing the front face of the cube object.
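
    In sylvester.js terms it comes out something like this (a sketch: the anchor, direction and normal vectors are placeholders, and the unprojection that produces the pick ray isn’t shown):

    // Build the pick ray and the plane of the cube's front face, then
    // intersect them; intersectionWith returns a Vector, or null if the
    // ray is parallel to the plane.
    var pickRay = Line.create($V([0, 0, 10]), $V([0, 0, -1]).toUnitVector());
    var frontFace = Plane.create($V([0, 0, 1]), $V([0, 0, 1]));
    var hit = frontFace.intersectionWith(pickRay);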

    I also used your WebGL Lesson 4 as a starting point for my demos. The only thing you need to watch out for in using these picking demos is getting the location of the top left pixel of the canvas object. I hard-coded it to x=20 and y=20 in my demos. When listening for mouse events on the page, you get the X and Y mouse location on the web page and not the canvas, so you need to subtract the canvas location from the mouse location for picking to work. Do you know the proper way to determine the location of the top left pixel of the WebGL canvas on an HTML page?
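
    For reference, one library-free way is getBoundingClientRect, which reports the canvas’s position in the same viewport-relative coordinates that the mouse event uses; a minimal sketch, where canvas and event are whatever the handler receives:

    var rect = canvas.getBoundingClientRect();
    // clientX/clientY are viewport-relative, like the rectangle, so the
    // difference is the mouse position relative to the canvas's top left.
    var canvasX = event.clientX - rect.left;
    var canvasY = event.clientY - rect.top;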

  22. john says:

    I used Yahoo’s YUI library to get the location of the WebGL canvas on the web page, so that there are no longer any hard-coded values in the picking demo. Here is the latest version of the picking demo, modified to use YUI.

    http://dl.dropbox.com/u/5095342/WebGL/pickingLessonYUI1.html

  23. Colin says:

    Hi,

    I’ve experimented with unprojecting and casting a ray from the screen, which was then checked for collisions with objects in the scene, as well as the GPU rendering approach.

    Source code for my implementation of the GPU rendering approach is available at https://github.com/sinisterchipmunk/webgl/blob/master/public/javascripts/engine/world.js, and the demonstration at http://webgldemos.thoughtsincomputation.com/engine_tests/picking uses this code. The ray intersection approach is used at http://webgldemos.thoughtsincomputation.com/engine_tests/interface to control the creatures.

    One major drawback to GPU rendering is that you don’t get the exact world space point of intersection, which you do get with the ray intersection approach. However, ray intersection is much less accurate if you use any kind of bounding volume; the only way (that I know of) to get around that is to check collision against every desired triangle, which (as in the case of my interface demo, above) is far too slow in JavaScript to be effective. To work around the speed issues in my interface demo, I split the objects to be tested into chunks (essentially like an octree, but without the sublevels) and first tested those against the frustum. This removed the need to check a lot of triangles, so that it is at least acceptable, but it’s still somewhat slow. (Prior to the workaround, the collision detection took roughly 1.0 to 1.5 seconds to complete, running only against the height map, which consists of approx. 30,000 triangles.)

    GPU rendering is slow, too, because you’re essentially rendering the majority of the scene again — but in my experience it was much faster because I could at least hardware accelerate the rendering.

    One really nice advantage of GPU rendering (beyond the speed debate) is that you can use readPixels() to read in a rectangle of pixels, not just a single one, and get a list of all objects in that rectangle. Of course, it can’t see “behind” an object and transparency poses a problem. It also removes the need for a lot of the matrix math, which I’m personally still having a hard time with. :)
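
    Reading a rectangle looks much the same as reading a single pixel; a sketch, where objectsByColour is a hypothetical lookup table built when the flat picking colours were assigned:

    function pickRectangle(gl, x, y, width, height, objectsByColour) {
      var pixels = new Uint8Array(width * height * 4);
      gl.readPixels(x, y, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
      var picked = {};
      for (var i = 0; i < pixels.length; i += 4) {
        var key = pixels[i] + "," + pixels[i + 1] + "," + pixels[i + 2];
        if (objectsByColour[key]) {
          picked[key] = objectsByColour[key];  // every object visible in the rectangle
        }
      }
      return picked;
    }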

    As my demos are based on Rails, which ships with Prototype, I used that library to get the canvas position on the screen. See the #registerMouseListeners function in https://github.com/sinisterchipmunk/webgl/blob/master/public/javascripts/engine/context.js for a reference to that.

    I hope this is helpful.

  24. john says:

    Here is my latest rendition of the picking demo. I added navigation to the canvas using the arrow keys, so that you can move around in the 3D world and also do picking. I also added an extra WebGL canvas on the web page, so that there are two independent WebGL canvases running the same picking demo, and again used the YUI JavaScript library from Yahoo.

    http://dl.dropbox.com/u/5095342/WebGL/pickingLessonYUINav.html

  25. john says:

    I reworked my picking example again to use GPU rendering, based on Colin Mackenzie’s picking example. So this version of the picking demo uses GPU rendering:

    http://dl.dropbox.com/u/5095342/WebGL/pickingLesson3.html

  26. john says:

    Here is a side-by-side comparison of picking using both the ray intersection approach and the GPU rendering approach.

    http://dl.dropbox.com/u/5095342/WebGL/pickingLessonYUINav2.html

    To put together the above comparison I made use of the YUI JavaScript library, so that both demos are standalone modules that run in their own “sandbox” on the web page and do not share any global JavaScript variables.

    Here is another example of several demos from lessons 1 through 6 running on the same web page using this technique.

    http://dl.dropbox.com/u/5095342/WebGL/webglExamples.html

  27. giles says:

    @john, @Colin — thank you both so much for all of the picking demos! These are really useful; I’ll highlight them in the next roundup, and they’ll hopefully inspire me to get going with my own tutorial (which will of course include appropriate shout-outs to both of you).

  28. corentin says:

    In the tutorial on render-to-texture, will you explain how to encode the depth in the four colour channels? It can be of use for some post-processing (DoF, SSAA, SSAO…).
    Then you can easily get the position of the clicked object by unprojecting it.

  29. giles says:

    For the RTT tutorial I just want to cover the basic concept of rendering to a texture; in the tutorial after that I want to do picking, and then after that shadow mapping — so depth-encoding is required for the last of those.
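
    For reference when I get there: the usual trick is to pack the depth value into the four 8-bit RGBA channels in the fragment shader and reverse it when sampling. A GLSL sketch of the standard pack/unpack pair (both functions must use the same constants):

    // Pack a depth value in [0, 1) into the four 8-bit RGBA channels,
    // keeping successively finer fractions in successive channels.
    vec4 packDepth(float depth) {
      vec4 enc = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
      enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
      return enc;
    }

    // Recover the depth value from the packed colour.
    float unpackDepth(vec4 rgba) {
      return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
    }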

  30. Rod says:

    What about volume lights?
    You know, the type of effect a sun ray produces as it crosses fog, or the effect of Iron Man’s ARC reactor (the blueish light on Iron Man’s chest).

    The topics you suggested are really nice, too.

    Again, great job with the site. It’s been extremely useful.

    Rod

  31. giles says:

    Hi Rod — glad the site was useful. That’s a great suggestion, I’ve added it to my list!

  32. Steven says:

    Can anybody tell me why this code doesn’t work?

    gl.pMatrix = M4x4.makePerspective(45, gl.viewportWidth / gl.viewportHeight, 0.1, 1000, gl.pMatrix);

    gl.pMatrix = M4x4.makeLookAt([0, 0, -10], [0, 0, 0], [0, 1, 0], gl.pMatrix);

    I’m using mjs.js.

  33. giles says:

    Hi Steven — I don’t normally use makeLookAt, but if I understand correctly it builds a viewing transform, which belongs in the model-view matrix; as it stands, you’re overwriting the projection matrix you just built. Give that a go and let me know if it helps!
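
    In other words, something like this (a sketch; the gl.mvMatrix name just follows your gl.pMatrix convention, and I’m assuming mjs’s usual optional result parameter):

    // The perspective transform goes into the projection matrix...
    gl.pMatrix = M4x4.makePerspective(45, gl.viewportWidth / gl.viewportHeight, 0.1, 1000, gl.pMatrix);
    // ...but makeLookAt positions the camera, so its result should become
    // the model-view matrix rather than replacing the projection matrix.
    gl.mvMatrix = M4x4.makeLookAt([0, 0, -10], [0, 0, 0], [0, 1, 0], gl.mvMatrix);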

  34. Nobody says:

    I love your tutorials!

    Right now I’m most interested in the following three things:
    – Picking
    – Efficiently handling shared data between two canvases on the same page.
    – Efficiently handling back face culling where some vertex arrays must be culled and others (in the same scene) must not.

  35. giles says:

    Thanks! Picking is definitely the next topic for that (seemingly never-coming) day when I get a chance to write the next lesson.

  36. DK says:

    Hello,

    I am trying to get my mesh (a human face) from Blender to show up in a web page… I have tried many things, but I just can’t figure it out.

    Could anyone help me with this?
    I’m thinking it would be the same way that the spacecraft in this example was created… please let me know, thanks!

    e-mail me please at [email protected]
    THANKS so much!

  37. chirurgie says:

    Picking is essential… The http://dl.dropbox.com/u/5095342/WebGL/pickingLesson3.html examples are great, but how would you determine which named entity in JavaScript the picked object corresponds to?

  38. Raidan says:

    I have a problem with picking.
    Can you help me?
    http://www.khronos.org/message_boards/viewtopic.php?f=43&t=5404
