Planet #1024 - my JS1K entry

So, the JS1K contest is closed for submissions. The goal was to create something cool in just 1024 bytes (or less), and there have been lots and lots of entries, some of them pretty damn cool. I just had time to submit something before the deadline: a planet generator featuring random zooming, rotating planets, complete with a cloud system, shading, atmospheric glow and a star-filled background. Check it out here.

Should work in current versions of Firefox, Opera, Chrome and Safari.

3D chess developments

A long time ago I started working on a 3D chess game using canvas but never got around to finishing it. Well, Stefano Gioffré took it upon himself to add some very cool features and, not least, integrate a chess engine allowing you to play against the computer. Awesome work, Stefano.


Check it out here: http://htmlchess.sourceforge.net/demo/example.html

HTML5 audio visualizations

For a while now I've been playing around with music visualization; you can see some of my previous endeavours here. The canvas element gives us everything we need to draw all sorts of cool stuff in the browser, but the audio part still needs Flash to be any fun. While the new HTML5 audio element does allow for audio playback, it doesn't give us much in the way of audio data - which is crucial if the visualizations are to react to the music. In another experiment of mine I tried to solve this by doing a pre-analysis of the MP3 file and saving the data as JavaScript. The result was decent, I think, but having to pre-bake the data is a pretty severe limitation.

However, a few people have been hacking away at Firefox, attempting to provide just that functionality for the audio element. Some documentation of the new API is available here and if you're interested, you can follow the development in this bugzilla thread.
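To give an idea of what reading raw audio data looks like, here's a rough sketch. It uses the event and property names the API eventually settled on (MozAudioAvailable, frameBuffer, mozChannels); the patched builds have gone through a few renames, so treat the exact names as an assumption and check the documentation linked above. The "player" element id is made up for the example.

var audio = document.getElementById("player"); // hypothetical audio element
audio.addEventListener("MozAudioAvailable", function(event) {
    var samples = event.frameBuffer;       // raw samples for this frame
    var channels = audio.mozChannels;
    // crude volume meter: average the magnitude of the first channel
    var sum = 0;
    for (var i = 0; i < samples.length; i += channels) {
        sum += Math.abs(samples[i]);
    }
    var volume = sum / (samples.length / channels);
    // ...feed "volume" (or an FFT of the samples) into the visualization
}, false);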

Anyway, I've been wanting to mess around with this and I figured that retro-fitting my "Pocket Full of Canvas" experiment to use HTML5 audio would be a fairly easy task. You can check out the result here.

At the moment you'll need a patched version of Firefox to get access to these extras, but if you're not comfortable with building your own version, there are builds available here. It will still run in other (canvas-enabled) browsers, falling back to SoundManager2 for the audio. While Flash audio is not that exciting anymore, it does let the demo run in e.g. Chromium and Opera - and man, that's something else. Even if Firefox/Minefield has the upper hand with the audio stuff, it feels really sluggish compared to those two (and you should definitely be using a nightly Minefield build). Chromium has always been speedy and Opera has gotten really crazy fast as well; running the visualizations in full-screen is just great (click the viz area to toggle).

And if you feel like playing around with the demos yourself, just click the "Show code" button and start hacking. I made a quick write-up of the very simple API here. The emphasis has been on making it easy to get from zero to something and not so much on making a lot of fancy functionality, so it is not very advanced.

Try it out: http://www.nihilogic.dk/labs/pocket_full_of_html5/

Worlds of WebGL

Here's another thing I did while I was first playing around with WebGL, working on the musical solar system thing. Due to changes in the WebGL spec and the various implementations, it got broken before I had a chance to put it online and it's just been sitting there collecting dust ever since. I just tried fixing some of the issues and it seems to run OK now (let me know if it doesn't). It's a small experiment with a whole bunch of particles/planets forming larger shapes. Enjoy!

Link: Worlds of WebGL

Make sure you have a WebGL capable browser. Check out this page over at Learning WebGL if you need help getting one.

WebGL Cheat Sheet

WebGL has been getting a fair amount of buzz lately - and rightfully so, because it is cool. For those who don't know, WebGL is the 3D extension of the canvas element, based on OpenGL ES 2.0. Having a standardized low-level graphics API like that available in the browser is pretty exciting stuff if you ask me. 3D canvas graphics has been a long time coming, and it only recently got to an interesting state when the first signs of WebGL showed up in both the (Mac) WebKit and Firefox nightly builds.
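For the uninitiated, just getting at the API looks roughly like this. Note that the context name has varied between builds ("moz-webgl" in early Firefox, "webkit-3d" in WebKit, later "experimental-webgl"), so the list of names below is a guess at what works at any given moment rather than gospel.

var canvas = document.getElementById("glCanvas"); // hypothetical canvas element
var gl = null;
var names = ["experimental-webgl", "moz-webgl", "webkit-3d"];
for (var i = 0; i < names.length && !gl; i++) {
    try { gl = canvas.getContext(names[i]); } catch (e) {}
}
if (gl) {
    // a standard OpenGL-style clear: opaque black
    gl.clearColor(0.0, 0.0, 0.0, 1.0);
    gl.clear(gl.COLOR_BUFFER_BIT);
}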

Note: This cheat sheet has become a bit out of date, as the specs have since been released and changes have been made that are not reflected in the cheat sheet. I will update it as soon as possible. Cheers.

The sheet is still very much a work in progress and the official specs haven't been made public yet; I'm not even sure how far they (Khronos, the working group responsible) are in settling on the specs. That means there isn't much in the way of references out there for people like me who are anxious to play around with this new toy. There are a few example demos from both the WebKit and the Mozilla camps, and cool sites like Learning WebGL are starting to pop up. There's of course the OpenGL ES 2.0 reference, and the source code for both the WebKit and Mozilla implementations is also readily available. So, I decided to just make my own reference sheet by combining those sources, and the result is a condensed WebGL cheat sheet which fits on 4 pages - or 2 if you have good eyes and print two on each page.

As an added bonus, this exercise forced me to dig through the entire OpenGL ES 2.0 spec, which was great since I'm an OpenGL newbie and learning stuff is cool!

Of course, given the current state of WebGL, any of the information in this document is subject to change from day to day. I have most certainly missed a bunch of things as well, due to lack of insight and good references. In addition, there also seem to be a few differences between the two implementations, so those will be corrected once I know what actually needs correcting.

I also don't have access to a Mac, and since the WebKit implementation is Mac-only for now, I haven't actually seen it in action. There might be more differences between the two than I've found just by glancing over the source.

Anyway, here it is in both PDF and HTML:

WebGL Cheat Sheet PDF

WebGL Cheat Sheet HTML

The HTML version has the extra bonus feature of tooltip information when you hover the mouse over (most of the) function parameters and enum values.

And corrections and suggestions are of course most welcome!
Strange attractors - beautiful chaos and canvas

Math has the ability to be both totally awesome and beautiful and to make me bang my head into the wall. While the actual math involved at times goes way above my head, some things are just so damn elegant, and when, on top of that, they can be visualized with pretty pictures, I'm sold. To make things even better, we have <canvas> and with it the ability to throw some JavaScript at this magic math. Links to the gallery and generator are at the bottom if you want to skip the details.

I'm sorry if I offend any math-enabled people with this post; I am but a mere mortal, so bear with me if I mess up any of the math.

Anyway, fractals like the well-known Mandelbrot set (and many others) have that ability, as does another category of mathematical creatures known as strange attractors. So what are they? I asked Wikipedia:
"An attractor is a set to which a dynamical system evolves after a long enough time."
While in everyday use one might think of an attractor as something that attracts stuff, in this context it's really the pattern or result of what's going on in a system. E.g. if you're measuring and graphing the relationship between a number of variables over time, you could refer to the pattern formed by connecting those points as an attractor.

A point attractor is a simple form of attractor. Consider a pendulum: release it and it will always, eventually, end up at rest in the same place. That resulting pattern (the point) would be the attractor for that system.

What about strange attractors, then?
"An attractor is informally described as strange if it has non-integer dimension or if the dynamics on it are chaotic."
Chaos, alright. Among other things, that involves being very sensitive to initial conditions (butterfly -> wing-flapping -> tornado, etc.). It's more complex than that, but that's where it goes beyond what I can grok.

Ok, so what we need is a function that, when called over and over again, shows unpredictable and radically different behaviour if we change some initial conditions (even just slightly). Fortunately there are smart people who already found such functions.

One of those is the quadratic map given by

x_{n+1} = a_0 + a_1 x_n + a_2 x_n^2 + a_3 x_n y_n + a_4 y_n + a_5 y_n^2,
y_{n+1} = b_0 + b_1 x_n + b_2 x_n^2 + b_3 x_n y_n + b_4 y_n + b_5 y_n^2

where a_0 through a_5 and b_0 through b_5 are constants that make up that attractor. Another is the Peter de Jong attractor using trigonometric functions:

x_{n+1} = sin(a y_n) - cos(b x_n),
y_{n+1} = sin(a x_n) - cos(b y_n)

I've only played around with a few but there are many more. These, however, produce some very interesting visuals.
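To make the idea concrete, here's a minimal sketch (not the generator's actual code) that iterates the Peter de Jong map above and plots each point on a canvas with low alpha, so frequently visited regions show up brighter. The constants a and b are arbitrary example values.

var a = 1.4, b = -2.3;     // arbitrary example constants
var x = 0.1, y = 0.1;      // initial conditions
var canvas = document.createElement("canvas");
canvas.width = canvas.height = 400;
document.body.appendChild(canvas);
var ctx = canvas.getContext("2d");
ctx.fillStyle = "#000";
ctx.fillRect(0, 0, canvas.width, canvas.height);
ctx.fillStyle = "rgba(255,255,255,0.08)";
for (var i = 0; i < 100000; i++) {
    var xn = Math.sin(a * y) - Math.cos(b * x);
    var yn = Math.sin(a * x) - Math.cos(b * y);
    x = xn; y = yn;
    // sin/cos keep the values within [-2, 2], so scale to the canvas
    ctx.fillRect((x + 2) / 4 * canvas.width, (y + 2) / 4 * canvas.height, 1, 1);
}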

One problem, at least in the case of the quadratic attractor, is that if the a_0-a_5 and b_0-b_5 coefficients are chosen randomly, very few combinations produce a useful chaotic system (around 1% for the quadratic map; the other formulas seem to have a higher success rate). Determining which ones do is done using something called the Lyapunov exponent, which I won't go into here, but look it up if you're interested.

All this was heavily inspired by the work done by Paul Bourke who has done all sorts of awesome math visualization. The code itself is also partly based on a program available on Bourke's site.

View the gallery for pretty pictures or make your own using the generator. The way it works is that you pick a formula/attractor type and click "Generate". It then searches for potentially nice images by selecting random values for the a_i and b_i coefficients. When a chaotic attractor is found, it is drawn on the screen, optionally with some pretty and colorful compositing (courtesy of Pixastic). You can recreate any attractor by using its seed number (displayed after the name). If you find some really nice ones, leave a comment with the seed and attractor type so I and others can see them.

Also, if you're going to generate your own images, I really suggest using Chrome (or WebKit, although it has problems with the compositing). Any recent canvas-enabled browser should work, though.

View the gallery
Try the generator

Pocket Full of Canvas

One thing I found interesting when I did the JuicyDrop music visualization was MilkDrop's deformation effects. Rather than processing deformations for each and every pixel, it works on a grid of points and then just interpolates the results for the actual pixels. I sort of mimicked that in JuicyDrop, but in a simpler way. The grids used in JuicyDrop are something like 5x5 to 9x9, whereas MilkDrop uses much higher resolution grids, and instead of doing per-pixel interpolation, the grid points are used to cut out triangles from the previous frame and paint them on the new frame, slightly transformed.


Since the deformations are usually very small when seen on a frame-by-frame basis, you can get some pretty good results even with fairly low resolution grids, and most recent browsers are more than capable of rendering 100 or even more triangles on a canvas. In the end, I was pleasantly surprised at how well everything turned out since I wasn't even sure I was going to get anything remotely close to what the original MilkDrop plugin produced.
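For the curious, the triangle trick boils down to clipping to the destination triangle and applying the affine transform that maps the source triangle onto it. A rough sketch (not JuicyDrop's actual code) of warping a single triangle from the previous frame:

// src and dst are arrays of three {x, y} points; prevFrame is a canvas
function drawWarpedTriangle(ctx, prevFrame, src, dst) {
    ctx.save();
    ctx.beginPath();
    ctx.moveTo(dst[0].x, dst[0].y);
    ctx.lineTo(dst[1].x, dst[1].y);
    ctx.lineTo(dst[2].x, dst[2].y);
    ctx.closePath();
    ctx.clip();                       // only paint inside the destination triangle
    // solve for the affine transform that maps src onto dst
    var x0 = src[0].x, y0 = src[0].y, x1 = src[1].x, y1 = src[1].y, x2 = src[2].x, y2 = src[2].y;
    var u0 = dst[0].x, v0 = dst[0].y, u1 = dst[1].x, v1 = dst[1].y, u2 = dst[2].x, v2 = dst[2].y;
    var det = x0 * (y1 - y2) + x1 * (y2 - y0) + x2 * (y0 - y1);
    var a = (u0 * (y1 - y2) + u1 * (y2 - y0) + u2 * (y0 - y1)) / det;
    var b = (v0 * (y1 - y2) + v1 * (y2 - y0) + v2 * (y0 - y1)) / det;
    var c = (u0 * (x2 - x1) + u1 * (x0 - x2) + u2 * (x1 - x0)) / det;
    var d = (v0 * (x2 - x1) + v1 * (x0 - x2) + v2 * (x1 - x0)) / det;
    ctx.transform(a, b, c, d, u0 - a * x0 - c * y0, v0 - b * x0 - d * y0);
    ctx.drawImage(prevFrame, 0, 0);   // previous frame, warped into place
    ctx.restore();
}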

While I still plan on doing some more work on JuicyDrop eventually, I decided to rip out just the grid deformation part and build something new around it. You see, every now and then I get the urge to just throw something quick together and make some flashing lights or dancing balls or whatever, but usually that urge comes when I only have 30 minutes to spare. So I figured I'd try to build a mini framework for making stupid demo effects and stuff like that.

I've ended up with a small application that loads simple scripts, exposes a bunch of functions to these scripts and then takes care of rendering and processing whatever the script tells it to. It's probably best explained by just taking a look at it. The whole idea was to make it as easy as possible for me to throw some silly effect together real quick and hopefully not write too much code in the process, so it might not be the most well-thought-out design, but it gets the job done. The functions available range from basic drawing and image processing (via Pixastic) to audio data and 3D (via Pre3D).

There's no larger goal with this, and there are already more robust and more elaborate frameworks out there for programming and animating graphics with JavaScript/canvas (Processing.js, for instance), so this is just my own little time-sink. You're of course welcome to play around with it, modify the existing scripts or even make your own.

To wrap things up, I made a little demo comprised of scripts I cooked up while testing and developing this thing (as well as a few adaptations of other people's work). I totally recommend using Chrome and, if possible, the dev channel, as it's given me by far the best performance and visual appearance.

Watch the demo here

Play around with the application here
Canvas Cheat Sheet update

As zcorpan was kind enough to point out, my Canvas Cheat Sheet wasn't quite up to date, and I finally got around to fixing it up. Here are the links to the revised version:

PDF document
PNG image

Besides a few minor corrections, the only significant change is that createPattern and drawImage can both take HTMLVideoElements now.
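For example, painting the current frame of a playing video onto a canvas now looks something like this (the element ids are made up):

var video = document.getElementById("myVideo");
var ctx = document.getElementById("myCanvas").getContext("2d");
function paint() {
    ctx.drawImage(video, 0, 0);   // the video element is a valid image source
    // a repeating pattern works the same way:
    // ctx.fillStyle = ctx.createPattern(video, "repeat");
    if (!video.paused) {
        setTimeout(paint, 40);    // redraw roughly 25 times per second
    }
}
video.addEventListener("play", paint, false);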
Canvas Visualizations of Sorting Algorithms

Via Simon Willison, I was made aware of an old but interesting post dealing with the visualization of sorting algorithms. Aldo Cortesi explains his dislike for animated visualizations and argues that their explanatory power equals that of a "glob of porridge flung against a wall".

He decided to make something better and ended up with some pretty cool static visualizations rendered with Python using the Cairo graphics library. Now, I don't know if they're really that much more informative than other attempts (especially if you're comparing algorithms), but I do think they're quite pretty.

Anyway, I thought they could use a little canvas love so I've spent my morning making a quick and dirty JavaScript / canvas port of Aldo's original Python program. It's a bit rushed and I don't have much experience with Python, so I might have missed a few details in the code, but it looks to be producing similar results.

See the canvas visualizations here

You can adjust the number of elements in the array and the dimensions of the canvas. When you click the "Render" button, an array of length NumElements is filled with random numbers and sorted using the algorithm of choice.
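The general idea behind such visualizations (roughly what the port does, though this isn't the actual code) is to run the sort while recording a snapshot of the array after every swap, so the path of each element can be drawn afterwards:

function tracedBubbleSort(arr) {
    var trace = [arr.slice()];             // initial state
    for (var i = 0; i < arr.length; i++) {
        for (var j = 0; j < arr.length - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                var tmp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
                trace.push(arr.slice());   // snapshot after each swap
            }
        }
    }
    return trace;   // one array state per recorded step, ready to plot
}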

And make sure to read Aldo's original post for the full story.
A few Pixastic updates

A few minor updates to Pixastic before I begin working on the next edition of the photo editor (the one that will eventually also work as a Ubiquity command): undo/revert functionality, a color histogram action, and more.

A few people have requested a way to revert the image back to the original, so Pixastic now remembers the original image and lets you call Pixastic.revert(img) to undo all the processing done on an image. It's important to know that the resulting image from a process() call is not the same element as the one passed in: Pixastic creates a new canvas element, which means that most properties, attributes and events are not carried over. For instance, if you are making a mouseover/out effect on an image, you'll have to listen for the mouseout event on the new element after calling process() in the mouseover handler. The example on the introduction page has been reworked to use this.

After processing an image, the options object now holds the resulting canvas in a property called resultCanvas. Example:
  var options = {};
  Pixastic.process(image, "action", options);
  options.resultCanvas; // <- holds new canvas
The canvas is also returned by the Pixastic.process() method itself, but only if the image is completely loaded at the time of the call (if it is not, the actual processing is deferred until the image's onload event fires).

The options object can now also take a boolean leaveDOM option that will leave the DOM untouched after processing. If not set (or set to false), Pixastic behaves as it did before and replaces the original image with the new canvas element. The new revert() method will also put the original image element back in the DOM, if possible.
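Putting the pieces above together, a typical round-trip looks something like this (the element id is made up; the calls are the ones described above):

var img = document.getElementById("photo");
var options = { leaveDOM : true };     // keep the original <img> in the document
Pixastic.process(img, "desaturate", options);
var result = options.resultCanvas;     // the new canvas element
document.body.appendChild(result);     // do something with the canvas
// ...and later, undo all processing done on the image:
Pixastic.revert(img);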

Bill Mill did some color histogram code for his article series on canvas image processing. This code has now been integrated (and slightly modified for consistency with the brightness histogram). Thanks Bill!

Lastly, I made a few performance improvements in some of the actions (brightness/contrast, color adjust, desaturate). More will come as I get around to it.
JavaScript + Canvas + SM2 + MilkDrop = JuicyDrop

More canvas music visualization - now with 100% more Winamp-iness.

A couple of weeks ago I played around with music visualization using JavaScript/canvas and SoundManager2. Well, I couldn't leave it at that, and as I mentioned in the comments, I had my eye on the MilkDrop plugin for Winamp. The result so far is a little Winamp lookalike called JuicyAmp with its own music visualizer, JuicyDrop, which feeds on MilkDrop preset files.

If you just want to see the pretty colors -> CLICKY. (But please use Chrome or Firefox 3+)

MilkDrop is nice because, although there's a built-in editor in the plugin, the presets are in plain text. They are basically just lists of variables and equations that, with a bit of mangling, can be evaluated as JavaScript. There are also extensive guides that explain how to author presets and how variables are passed around between the different equations. And, even better, the source code for the plugin was released a couple of years ago.
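To give a flavor of the "mangling": per-frame equations in a preset look something like "per_frame_1=zoom = zoom + 0.01*bass;". Here's a crude sketch of turning such lines into a callable JavaScript function - nothing like JuicyDrop's actual parser, and the format details are from memory, so take it purely as an illustration:

function compilePerFrame(lines) {
    var body = lines
        .filter(function(line) { return /^per_frame_\d+=/.test(line); })
        .map(function(line) { return line.replace(/^per_frame_\d+=/, ""); })
        .join("\n")
        // crude mapping of the preset math functions to Math.*
        .replace(/\b(sin|cos|tan|abs|sqrt|pow)\(/g, "Math.$1(");
    // run the equations with the preset variables as properties of "v"
    return new Function("v", "with (v) { " + body + " } return v;");
}

var update = compilePerFrame(["per_frame_1=zoom = zoom + 0.01*bass;"]);
var vars = { zoom : 1, bass : 0.5 };
update(vars);   // vars.zoom is now 1.005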

MilkDrop presets consist of a number of different elements (waveforms, shapes, per-pixel effects, etc). Some of them I haven't touched at all, but JuicyDrop supports enough at the moment that a good handful of presets run just fine. That said, there are a whole bunch of problems to work out and the presets included were selected because they looked alright and didn't make it blow up.

I strongly recommend using Chrome for this. Firefox 3 can play too but is probably somewhat slower. There's something screwy going on with Safari - it's as if it refuses to update the display (try holding down a key on your keyboard) - and I'm not sure what exactly is causing it. (Edit: at least a few people have reported that Safari is working fine for them, so YMMV.) Opera is a mixed experience for me, and it seems to have a problem playing the music that I haven't found the reason for yet.

The issue with open Flash/sound-using tabs, etc. is of course still here as well (read the comments here for more).

A couple of keyboard controls:
  • Z : Switch to a smaller (128x128) visualization view (in case of low framerate)
  • X : Switch back to the normal (256x256) view
  • D : Toggle rendering of deformation mesh points
  • 1 : Toggle basic waveform
  • 2 : Toggle custom waves
  • 3 : Toggle custom shapes
  • 4 : Toggle borders
  • 5 : Toggle per-pixel effects
  • 6 : Toggle video echo
Note that not all presets use all of the features that you can toggle with the keys 1-6.

So without further ado, go have yourself a canvas trip.

I'm not sure where to go from here (besides a lot of optimizing). I'm not done with this, but I'm not sure I want to try to get full MilkDrop support. I believe the presets of today use a lot of pixel shaders anyway, which obviously is no good here. What I might do, though, is add another preset format (probably JSON?) that's a bit easier to work with and then shape that into something more suited for canvas and JavaScript. But for now I have a date with Pixastic.

Music visualization with Canvas and SoundManager2

I'm not much of a Flash person, and I guess I just hadn't been paying attention, since I only found out about the new audio features when Scott Schiller added support for them in his SoundManager2 JavaScript/Flash library and later posted a (very cool) favicon VU-meter that would just dance all night to the sound of your music. Since then I've been wanting to do something with those abilities, since I figured canvas and live audio data would be a perfect match for some groovy audio visualization.

This is the first of hopefully many such experiments. It's a fun little music video of sorts, where I've just thrown in all sorts of things to the tune of some Radiohead. If you don't care about all the details, scroll down to the end for the link.

So, apparently Flash has a function called computeSpectrum which returns the current audio state, either as frequency data or as a waveform. In SM2 this has been split into two separate properties of the SMSound object (you have to specifically enable both with a setting and it's only available in Flash 9 mode). The data is available as 256-element arrays of values between 0 and 1.

For the frequency bars at the bottom, I've simply summed the values in 9 broader groups rather than paint the full spectrum. The same goes for the binary values on the left side: they're simply averaged in 16 groups and then converted to binary. At the top of the screen you'll see the waveform, drawn at intervals of 8 for performance reasons.
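For reference, collapsing the 256-element spectrum into a handful of bars is about as simple as it sounds. A small sketch (not the demo's exact code), taking the 0..1 spectrum array described above:

function drawBars(ctx, spectrum, width, height) {
    var bars = 9;
    var perBar = Math.floor(spectrum.length / bars);
    var barWidth = width / bars;
    ctx.clearRect(0, 0, width, height);
    for (var i = 0; i < bars; i++) {
        var sum = 0;
        for (var j = 0; j < perBar; j++) {
            sum += spectrum[i * perBar + j];
        }
        var value = sum / perBar;    // average level for this band, 0..1
        ctx.fillRect(i * barWidth, height - value * height, barWidth - 2, value * height);
    }
}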



The fun part is the flock of boids that is spawned at the beginning of the song. They are controlled by simple rules similar to those described in Craig Reynolds' original 1987 model for emergent flocking behaviour (cohesion, separation, alignment), but they will also start to react to the beat of the music. Every frame, the audio is analysed, and if certain areas of the spectrum reach a threshold, a "pulse" is fired, making the boids attract each other and then repel each other again. When this happens, they also form grids and shapes that are drawn on the canvas. As the song progresses, the drawing takes the shape of Thom Yorke's lovely face (see the image at the beginning), but each playback is random and will thus produce a new drawing.

You can pause the whole thing by clicking on the main area. Clicking on the waveform at the top controls the playback position.

Naturally, this is only for canvas-enabled browsers (no Internet Explorer). It runs a lot better for me in Chrome than in any other browser. I haven't tested it in Firefox 2, but Firefox 3 runs OK, if slowly (it might be better on more powerful machines than mine), as does Opera 10. Safari 4 is fine as well, although it stutters a bit at times. I'm sure there's plenty of room for more performance improvements, but that will have to wait for another time.

Another puzzling issue I came across: I get a security error from Flash's computeSpectrum when trying to run this if I also have a tab open with a YouTube video (probably some other Flash sites as well). Exact error from SM2: "(Flash): computeSpectrum() (waveform data) SecurityError: Error #2122". Happens in at least Firefox and Chrome and a bit of googling tells me it's a Flash issue.

I chose Radiohead because I'm a big fan and it just seemed like a nice place to start, given the Flash based visualization of their House of Cards "video" last year. And in case you're wondering, the track is Idioteque off of Kid A, taken from a 2001 performance in Paris that can be seen on YouTube here (just remember the tab issue I just mentioned).

I'm not aware of any other music visualization projects using JavaScript and canvas as the output medium and Flash as the "backend" (besides simple dancing bars and such), so I'd be very interested if you know of any.

Now have fun with the demo!

If you like this, you should definitely check out JuicyDrop, another music visualization project of mine (think Canvas meets Winamp).
HTML5 Canvas Cheat Sheet

My memory isn't very good and I often find myself looking up simple things in various specs, but sometimes they're just too damn long-winded when you're simply looking for argument x of function y. That's where cheat sheets and reference cards come in handy, with their compact, bare-bones information crammed into, at most, a few pages. There are cheat sheets for just about anything out there, but I couldn't find one for the HTML5 canvas element, so I decided to do something about that - mostly for my own sake, but if other people find it useful, that's all the better.

The information is pretty much just a copy of what is found in the WHATWG specs, just condensed and hopefully a bit easier to read. There are virtually no explanations, however, and no examples other than some graphics for compositing values and a few other things (the appearance of which is very much inspired by those found in Mozilla's examples). So, it's basically just a listing of the attributes and methods of the canvas element and the 2d drawing context.

Choose between a 2-page PDF document or a PNG file. Thanks!

Corrections and comments are welcome!

Photoshop blend modes with Pixastic

I've added a new action to the Pixastic library called "blend". This action lets you blend two images using different blend modes like the ones available in Photoshop (multiply, screen, exclusion, etc.).

Example usage

var img = new Image();
img.onload = function() {
    var blendImg = new Image();
    blendImg.onload = function() {
        // blend the two images once both have loaded
        Pixastic.process(img, "blend", {
            amount : 1,            // blend strength, 0-1
            mode : "multiply",     // blend mode name
            image : blendImg       // the image blended on top of the base
        });
    };
    blendImg.src = "blendimage.jpg";
};
document.body.appendChild(img);
img.src = "myimage.jpg";
The image to which you apply the action is considered the base image when blending. You can use either an img element or a canvas element for the "blend image". It expects the images to be the same size, but will simply crop the blend image if it's too large, or leave untouched areas if it's smaller than the base image.
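Conceptually (this is not Pixastic's actual code), a blend mode is just a formula applied per channel to the base and blend pixel values (0-255). For instance:

function multiply(base, blend) {
    return base * blend / 255;
}
function screen(base, blend) {
    return 255 - (255 - base) * (255 - blend) / 255;
}
// the "amount" option then mixes the blended result back with the base:
// out = base + (blended - base) * amount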

Check the demo page to see it in action.

Below are shown all the blend modes included (the original post shows each mode applied to a sample base image and blend image):
Blend modes

Normal
Darken
Multiply
Color Burn
Linear Burn
Darker Color
Lighten
Screen
Color Dodge
Linear Dodge
Lighter Color
Overlay
Soft Light
Hard Light
Vivid Light
Linear Light
Pin Light
Hard Mix
Difference
Exclusion


So, basically all the Photoshop modes sans "dissolve" and the HSL-based ones (which I guess I'll add at some point as well).

Genetic algorithms, Mona Lisa and JavaScript + Canvas

About a month ago, there was an interesting article about using genetic algorithms to "evolve" images. Roger Alsing had made a small program and put it to the test by letting it make a very good approximation of the Mona Lisa with 50 layered, semi-transparent polygons. I figured I'd try to do something similar with JavaScript and Canvas.

Genetic algorithms

So, the basic idea of genetic algorithms is that you have a population of individuals, each carrying a DNA string representing a possible solution to the problem (in this case, the polygonal likeness of the Mona Lisa). The initial population is assigned random DNA, and subsequent generations are then created by mixing the DNA of the fittest individuals of the current population. To ensure diversity in the population, there's a small chance of mutation, where a DNA value is randomly changed. However, Roger Alsing's project actually uses a population of only one parent, making it more like a hill-climbing algorithm where the current solution is altered slightly and, if the result is a better fit, the old one is discarded.
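In pseudo-JavaScript, one generation of such an algorithm looks roughly like this (a generic sketch, not the demo's actual code; computeFitness is a hypothetical function that renders the DNA and compares it to the target image):

function nextGeneration(population, cutoff, mutationChance, mutationAmount) {
    // sort by fitness (lower difference = better) and keep the top slice as parents
    population.sort(function(a, b) { return a.fitness - b.fitness; });
    var parents = population.slice(0, Math.ceil(population.length * cutoff));
    var children = [];
    while (children.length < population.length) {
        var mother = parents[Math.floor(Math.random() * parents.length)];
        var father = parents[Math.floor(Math.random() * parents.length)];
        var dna = mother.dna.map(function(gene, i) {
            // uniform crossover: pick each value from either parent
            var value = Math.random() < 0.5 ? gene : father.dna[i];
            // occasionally nudge the value to keep some diversity
            if (Math.random() < mutationChance) {
                value += (Math.random() * 2 - 1) * mutationAmount;
            }
            return value;
        });
        children.push({ dna : dna, fitness : computeFitness(dna) });
    }
    return children;
}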

I tried to go for a more proper genetic algorithm approach with an adjustable population size, selection, DNA mixing and everything. Now, Roger used just short of a million generations to get to his result (which was very accurate). It took 3 hours for his (compiled) program to generate the resulting image, and of course, even if JS engines are getting faster, it's going to take a bit more time than that to get as nice a result as his using JavaScript. Still, even after a few hundred generations/a few minutes in my demo, with the default parameters, you should see the shape of Mona Lisa starting to take form. I'm unfortunately not very patient, so I'm not sure if my experiment can even create as good an approximation, given the necessary time.

There are also a few other images you can play with. I've made the images pretty small (100x100) so that evolution would be as speedy as possible. The fitness function actually uses an even smaller (50%) version.
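The fitness measure itself is straightforward: render the candidate's polygons to a canvas, grab the pixels with getImageData and sum the per-channel differences against the target. A hedged sketch along the lines of the "Difference squared" option described below (here taking an already-rendered candidate canvas rather than raw DNA):

function computeFitness(candidateCtx, targetData, width, height, useSquared) {
    var data = candidateCtx.getImageData(0, 0, width, height).data;
    var diff = 0;
    for (var i = 0; i < data.length; i += 4) {     // step over RGBA, skip alpha
        for (var c = 0; c < 3; c++) {
            var d = data[i + c] - targetData[i + c];
            diff += useSquared ? d * d : Math.abs(d);
        }
    }
    return diff;   // lower is better
}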

Options

Some of the parameters can be changed before starting the evolution. They are:
  • Number of polygons: The number of polygons used in the image approximation.
  • Polygon complexity: The number of vertices in each polygon.
  • Difference squared: If checked, the squared differences of the RGB values are used when calculating fitness, otherwise simply the absolute differences.
  • Population size: The number of different candidates in each generation.
  • Successful parents cutoff: The percentage of candidates selected for breeding the next generation, e.g. 0.25 = the fittest 25% of the current population.
  • Mutation chance: The chance that a value will mutate when breeding new candidates, for example 0.02 = 2% chance of mutation.
  • Mutation amount: The amount the mutated value will be changed, for example 0.01 = 1% means a random change between -1% and 1%.
  • Uniform crossover: If checked, values are mixed at random one by one from each parent, otherwise a single random cut in the DNA string is made and one part from each parent is used.
  • Kill parents: If checked, the new generation will consist entirely of the children of the old generation. If not checked, the parents are left alive and will compete against their offspring.

If the parents are not carried over to the new generation, you will notice that the best fitness value in the new generation might actually be worse than in the previous one. On the other hand, that could make it easier to avoid dead ends and premature convergence towards local optima.

Note that changing the parameters won't have any effect until the evolution is restarted.

A few results

Here are a few quick runs, showing the results.


Mona Lisa after 25 minutes


Firefox logo after 25 minutes


Opera logo after 40 minutes


Mondrian after 14 minutes


Microbe after 9 minutes

Browsers

Since we're using canvas there's no support for IE. Furthermore, we're using the getImageData method, so only Firefox, Opera 9.5+ and WebKit nightlies will work. I suggest using either the latest Firefox beta/preview or a recent WebKit nightly as they seem to yield the best performance.

One last note

Only now, after I'd been playing around with this, did I notice that someone had already made a JavaScript/canvas version of Alsing's program (where you can even use your own images) back when the original article was published - for some reason I missed it. That version stays closer to what Alsing was doing, though, whereas mine differs in a lot of ways.

I think my approach gets to something resembling the target image faster, but it seems to have problems getting the details in place after that. I haven't had the patience to let it run for more than a couple of hours and it's quite possible that the other techniques are able to get a better approximation in the long run.

Play with it here
