[UPDATE -- the WebGL spec has changed since this blog post, so the example will no longer work. However, there is now a WebCL standard in progress.]
Aaron Babcock, whose fallingsand WebGL demo I linked to last month, has gone ahead and implemented the first steps towards this, using shaders and render-to-texture to persuade the GPU to multiply two 1024×1024 matrices. This is really impressive; here’s his own description (with a few edits; errors are doubtless mine):
As a proof of concept I took the Sylvester matrix library and modified the multiply function to execute on the GPU. Matrix values are packed into textures, a GLSL program computes the product into a framebuffer, and readPixels is used to retrieve the result. In my benchmarks, stock Sylvester takes about 35 seconds to multiply two 1024×1024 matrices; the GPU-enabled version can do the same in about 5 seconds. Perhaps one day complex GPU-accelerated distributed computing projects like SETI@home could use a simple webpage as their client.
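To make the approach concrete, here is a rough sketch (mine, not Aaron's actual code) of the kind of fragment shader involved. It assumes 1024×1024 matrices stored one value per texel in the red channel of floating-point textures (available via the OES_texture_float extension), sidestepping the byte-packing issues discussed below. Each output pixel computes one element of the product:

```glsl
precision highp float;      // float precision in fragment shaders is implementation-dependent
uniform sampler2D matA;     // left matrix, one value per texel in the red channel
uniform sampler2D matB;     // right matrix, same layout
const float SIZE = 1024.0;  // matrix dimension; constant so the loop bound is legal in GLSL ES

void main() {
  // gl_FragCoord sits at the pixel centre, so dividing by SIZE gives
  // the texel-centre texture coordinate of this output element directly.
  float col = gl_FragCoord.x / SIZE;
  float row = gl_FragCoord.y / SIZE;
  float sum = 0.0;
  // Dot product of one row of matA with one column of matB.
  for (float k = 0.0; k < SIZE; k += 1.0) {
    float a = texture2D(matA, vec2((k + 0.5) / SIZE, row)).r;  // A[row][k]
    float b = texture2D(matB, vec2(col, (k + 0.5) / SIZE)).r;  // B[k][col]
    sum += a * b;
  }
  gl_FragColor = vec4(sum, 0.0, 0.0, 1.0);  // one result element, in the red channel
}
```

Host-side, the whole multiply is then a single draw of a screen-filling quad into a 1024×1024 framebuffer with this shader attached, followed by a readPixels call to fetch the result. Aaron continues: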
I think it is a pretty cool proof of concept, but it would be very interesting to get a discussion going with more experienced OpenGL developers. Here are some problems I see:
- Only works in WebKit; it locks up Firefox, and Chromium produces incorrect results. [UPDATE: it now works for matrices smaller than 500×500 on Minefield.]
- Packing and unpacking matrix values to and from textures (it only handles integers now; could it be more efficient?) [See the packing sketch below.]
- The alpha channel in textures cannot be used to store matrix data, leaving only 3 bytes per pixel. [UPDATE: Aaron writes "I found that if I changed the alpha value, it affected all pixel values that resulted from the readPixels call" — some kind of alpha premultiplication going on, perhaps?]
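To illustrate the last two items (again my sketch, not Aaron's code; the function names are hypothetical): with alpha off-limits, each pixel carries only 24 usable bits, so one way to pack a non-negative integer matrix value is to spread it across the R, G and B bytes and pin alpha at 255:

```js
// Hypothetical packing scheme: one integer in [0, 2^24) per RGBA pixel,
// spread across the R, G and B bytes; alpha is pinned at 255 so it never
// carries data (working around the premultiplied-alpha problem above).
function packValue(pixels, offset, value) {
  pixels[offset]     = (value >> 16) & 0xff;  // R: high byte
  pixels[offset + 1] = (value >> 8)  & 0xff;  // G: middle byte
  pixels[offset + 2] = value         & 0xff;  // B: low byte
  pixels[offset + 3] = 255;                   // A: fixed, never touched
}

function unpackValue(pixels, offset) {
  return (pixels[offset] << 16) | (pixels[offset + 1] << 8) | pixels[offset + 2];
}

// Round-trip one value through a single RGBA pixel:
var pixels = new Uint8Array(4);
packValue(pixels, 0, 123456);
console.log(unpackValue(pixels, 0));  // 123456
```

Aaron again: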
Right now the demo is WebKit-only, although the concept should work on any WebGL implementation. I’d be very interested to know what you and other WebGL developers think.
Let me second that — it would be great to hear what people think about this.