Choir Music Video


We just published a new video for the band Choir:

It was made entirely with Processing, generating image sequences that react to the frequencies of the song (thanks to the Minim library). Basic editing was done with Blender, and voilà.

Thanks to Daniel Shiffman for his video about Perlin noise and flow fields, which inspired me technically.


My favorite part was the transition that began at 3:19 in the video.

Did you make multiple sketches and assemble cuts, or is the video one long “take” and the transitions are built into a single sketch?

Actually, I use the Processing library as a Maven dependency in my project, but you can consider it more of a multiple-sketches approach, yes. All frames are rendered to PNGs in separate directories, then assembled into a video with the audio track using Blender.
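For what it's worth, the per-scene directories can be as simple as a file-naming convention. A hypothetical helper in plain Java (the directory and file names here are mine, not the author's) might look like:

```java
public class FramePaths {
    // Build a zero-padded PNG path such as "scene01/frame-0042.png", so each
    // sketch writes its frames to its own directory and the files sort in
    // playback order when imported as an image sequence.
    static String framePath(String sceneDir, int frame) {
        return String.format("%s/frame-%04d.png", sceneDir, frame);
    }
}
```

Inside an actual Processing sketch, `saveFrame("scene01/frame-####.png")` produces the same kind of zero-padded sequence, with the `#` characters standing in for the frame number.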

I am working on a similar project making videos to accompany a friend’s music. So far I have only completed one video (posted to the gallery of this forum a little while ago) that was made just by recording the sketch itself while it was running.

I thought about capturing images while the sketch ran and converting those to video later, like what you describe. But I’ve found that my sketch suffers from severe clipping if I try to save the frames to images.

Do you slow down the frame rate of the sketch to capture each frame without clipping? And if so, a follow-up question: do you play the music so that the sketch is visualizing live input, or do you turn the frequency data into a table or array so the sketch can run without the music playing?

Sorry for all the questions, just interested in improving my process :slight_smile:

My sketches do not run live at all :blush: What I did was the following:

  • write the result of the audio analysis to a file (though storing it in an array during sketch initialization would do just fine)
  • let the sketch run without audio as slowly as it needs to ^^ (mine runs at something like 2 fps when writing the PNG files)
  • then you’re left with your source audio track and an image sequence. All you need to do is use another program to create a video combining your image sequence and your audio.
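The first step above can be sketched in plain Java. This is a guess at the shape of the data, not the author's actual format: one CSV row per video frame, one column per frequency band, written during analysis and read back before rendering so the sketch never needs live audio.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class AnalysisFile {

    // Write one CSV row per video frame; each column is one band level.
    static void write(Path file, float[][] bandsPerFrame) {
        StringBuilder sb = new StringBuilder();
        for (float[] bands : bandsPerFrame) {
            for (int i = 0; i < bands.length; i++) {
                if (i > 0) sb.append(',');
                sb.append(bands[i]);
            }
            sb.append('\n');
        }
        try {
            Files.writeString(file, sb.toString());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Read the file back so a sketch can index it by frame number.
    static float[][] read(Path file) {
        try {
            List<String> lines = Files.readAllLines(file);
            float[][] out = new float[lines.size()][];
            for (int r = 0; r < out.length; r++) {
                String[] cells = lines.get(r).split(",");
                out[r] = new float[cells.length];
                for (int c = 0; c < cells.length; c++) {
                    out[r][c] = Float.parseFloat(cells[c]);
                }
            }
            return out;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Convenience round trip through a temp file, handy as a sanity check.
    static float[][] roundTrip(float[][] bandsPerFrame) {
        try {
            Path tmp = Files.createTempFile("bands", ".csv");
            tmp.toFile().deleteOnExit();
            write(tmp, bandsPerFrame);
            return read(tmp);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

In a real pipeline the rows would come from an FFT (e.g. Minim's analysis of the track); the round trip matters because `Float.toString`/`Float.parseFloat` preserve the values exactly.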

If you need or want to run multiple sketches for a single video, you just need to offset the index used to access the audio data array, and write the frames to a separate directory.
This way, you can let your sketches run at their “natural” speed without worrying too much about performance. The small drawback is that you have to assemble the video afterwards in video editing software, but that may also make it easier to include titles in the final product.
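The index offset described above can be sketched in a few lines of Java. The parameter names are my own, and the clamp is an extra safety I'm adding, not something the author mentions:

```java
public class SceneOffset {
    // Map a sketch-local frame number to an index into the shared
    // audio-analysis array. sceneStartFrame is where this sketch's segment
    // begins in the full video; clamping keeps an over-long render from
    // indexing past the end of the analysis data.
    static int audioIndex(int sceneStartFrame, int localFrame, int totalRows) {
        int idx = sceneStartFrame + localFrame;
        return Math.min(idx, totalRows - 1);
    }
}
```

So a sketch whose segment starts at frame 300 of the video reads row 300 on its first frame, row 301 on its second, and so on, while writing its PNGs to its own directory.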

Thanks so much for those details about the process!

I’ve already written a sketch that stores the music data to .csv, so I should be able to try running visualization sketches using the file data as input instead of playing the track while the sketch runs.
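One wrinkle worth anticipating: if the .csv rows weren't captured at exactly one row per rendered frame, the sketch needs a small rate conversion when picking which row to visualize. A sketch of that mapping (the rates in the comment are assumptions, not from this thread):

```java
public class FrameToRow {
    // Map a rendered frame number to a row of the analysis CSV when the
    // analysis rate and the render frame rate differ (e.g. analysis captured
    // at 30 rows per second, video rendered at 60 fps).
    static int rowForFrame(int frame, double fps, double rowsPerSecond, int rowCount) {
        int row = (int) Math.floor(frame / fps * rowsPerSecond);
        return Math.min(row, rowCount - 1); // clamp at the last row
    }
}
```

If the capture sketch writes exactly one row per frame at the same frame rate as the render, this collapses to using the frame count directly as the row index.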

Now I will need to study up on using Blender as I’ve never used video editing software for more than capturing my screen before. :notebook:

Well, Blender is not the easiest tool for this, but it is rock solid. Here are some pointers if you download the latest Blender (2.92), because the video editor is not easy to find (Blender is mostly a 3D program):

  • Go to File > New > Video Editing (this will set up Blender’s layout for your purpose)
  • Put your mouse pointer over the lower part of the screen (the large stripes representing tracks)
  • Then Shift+A will pop up a menu to “Add” an audio or image sequence

Luckily, there are many resources out there about Blender. Its video editor is sometimes referred to as the “VSE” (Video Sequence Editor), which might help when searching for tutorials. Good luck on your journey ^^