Mez is a visual instrument that I've developed to VJ for live music events. I also collaborate with artists in advance to make their shows even better, such as by including specific imagery or developing custom scenes for certain songs.
But what does it mean to visualize a sound? Intuitively, what you see should correspond to what you hear, but there's more to it than that. Synesthetes offer a hint: for them, certain sounds are associated with particular patterns and colors of light.
My goal in developing the music analysis and rendering systems of Mez is to simulate and enhance that. Visualizing a song, an album, or an entire trance set is rather more complicated. Different genres of music have totally different acoustic signatures, and so they should look quite different. Making this work well depends on both music theory and the psychology of audio and visual perception.
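As a concrete illustration of one way an acoustic signature could drive a visual one, the sketch below maps a spectrum's "brightness" (its spectral centroid) to a color hue: bass-heavy sounds land near red, bright treble near violet. This is a hypothetical example I'm supplying, not Mez's actual analysis pipeline; all function names and the frequency range are assumptions.

```python
# Hypothetical sketch: map a sound's spectral centroid to a hue.
# Not Mez's actual code; names and ranges are illustrative.

def spectral_centroid(magnitudes, sample_rate):
    """Magnitude-weighted mean of bin frequencies; higher = 'brighter' sound.

    `magnitudes` is an n-bin half-spectrum; bin k corresponds to
    frequency k * sample_rate / (2 * n).
    """
    n = len(magnitudes)
    total = sum(magnitudes)
    if total == 0:
        return 0.0
    return sum(k * sample_rate / (2 * n) * m
               for k, m in enumerate(magnitudes)) / total

def centroid_to_hue(centroid_hz, lo=100.0, hi=8000.0):
    """Map centroid (Hz) to a hue in [0, 0.8]: dark bass -> red (0.0),
    bright treble -> violet (0.8). Capped below 1.0 so the hue wheel
    doesn't wrap back around to red."""
    t = (centroid_hz - lo) / (hi - lo)
    return max(0.0, min(1.0, t)) * 0.8

# A single spike at bin 2 of a 4-bin spectrum at 44.1 kHz
# sits at 2 * 44100 / 8 = 11025 Hz, well into the "bright" range.
hue = centroid_to_hue(spectral_centroid([0, 0, 1, 0], 44100))
```

A real analyzer would work on a stream of FFT frames and combine many such features (onsets, key, tempo), but the core idea of feature-to-parameter mapping is the same.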
As for the other half of Mez: when we experience music, we're affected by more than just the BPM, the notes and chords played, the key signatures, and so on. Our external and internal environments shape how we experience a song at least as much as the music itself. I've designed Mez to do more than just react to sound. It's a live visual instrument — I impart my own experience of the music into the show, making every rendition unique.
Right now I use a Leap Motion to control the analogue parameters. I've found no other tool that allows so many simultaneous axes of control. Programming this instrumentation has been at least as much of a challenge as the other systems. It should be intuitive for new users, powerful and precise in experienced hands, and also easy to work with programmatically.
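To give a feel for what "many simultaneous axes" means in code, here is a minimal sketch of one plausible approach: raw per-frame axis readings (palm position, rotation, grab strength, and so on) are smoothed with an exponential moving average so tracking jitter doesn't make the visuals flicker. The class and axis names are my own assumptions for illustration, not the Leap Motion SDK's API or Mez's internals.

```python
# Hypothetical sketch of many-axis control with jitter smoothing.
# Axis names ("hue", "zoom", ...) and the AxisMapper class are
# illustrative assumptions, not part of Mez or the Leap Motion SDK.

class AxisMapper:
    def __init__(self, smoothing=0.8):
        # smoothing in [0, 1): higher = steadier but laggier response
        self.smoothing = smoothing
        self.state = {}

    def update(self, raw):
        """raw: dict mapping axis name -> reading in [0, 1].

        Applies an exponential moving average per axis and returns
        the current smoothed values for all axes seen so far."""
        a = self.smoothing
        for name, value in raw.items():
            prev = self.state.get(name, value)  # first reading passes through
            self.state[name] = a * prev + (1 - a) * value
        return dict(self.state)

mapper = AxisMapper(smoothing=0.5)
mapper.update({"hue": 0.0, "zoom": 1.0})            # first frame seeds state
smoothed = mapper.update({"hue": 1.0, "zoom": 1.0}) # jump in hue is damped
```

Each tracked hand contributes several such axes at once, which is what makes this kind of controller so much denser than a bank of MIDI knobs; the same structure also makes the mapping easy to drive programmatically.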
Below is a version of Mez from several months ago.