At the beginning of this year, I created a Processing application to paint the artwork for an album.
You can read about the “making of” in this blog post: https://hamoid.com/code/2019-a-point-between/
The album was released recently on Bandcamp, Spotify, iTunes, YouTube, etc. You can listen to it here:
I also made a video clip which included some coding, but only with Python in Blender (no Processing involved this time).
I’m very much looking forward to mixing generative approaches with the (sur)realistic rendering and animation possibilities Blender provides.
Would you mind explaining what part of your work in Blender involved scripting? (Is it the animation, the textures, the meshes, …?)
I would also love to hear your thoughts on Blender as a creative coding environment. I’ve read some interesting tutorials and think the full Python API makes it a very attractive tool, but the rather complex interface (a plethora of options, menus, and shortcuts) didn’t inspire me to investigate further.
I used scripting basically to work faster. Since there are 16 stones taking part in a choreography, and each one has animated properties in a timeline with hundreds of keyframes, I wanted to avoid repetitive work like randomizing animation curves or activating and deactivating things manually. But the most crucial part was something rather technical: I had the movie split into scenes, and I needed to transfer the physics engine state from the end of one scene to the beginning of the next. So I wrote a script to do that.
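The curve-randomizing idea can be sketched outside Blender with plain Python. This is a hypothetical stand-in, not the actual bpy API or the script described above: it just jitters the values of a list of (frame, value) keyframes, which is the kind of repetitive tweak a bpy script would apply to an F-curve.

```python
import random

def jitter_keyframes(keyframes, amount=0.1, seed=None):
    """Return a copy of (frame, value) keyframes with each value
    randomly offset by up to +/- `amount`. Frames are left untouched."""
    rng = random.Random(seed)
    return [(frame, value + rng.uniform(-amount, amount))
            for frame, value in keyframes]

# Hypothetical keyframes for one stone's Z position.
stone_z = [(1, 0.0), (25, 2.0), (50, 0.5)]
jittered = jitter_keyframes(stone_z, amount=0.2, seed=42)
print(jittered)
```

Inside Blender, the same loop would walk an object's animation F-curves and offset each keyframe point instead of a plain list.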
And yes, the results can be amazing, but it’s not very inviting for experimentation with code. The documentation is not always easy to understand, nor complete. Maybe I should play with it and then make some tutorials.
One thing I would point out: working with Blender for two months changed my view of creative coding. Often, when you work with Processing or openFrameworks for a while, you start to need a framework that includes saving and loading configuration files, maybe animation timelines, a GUI, and lots of non-visual code. Many people spend years iterating and building their own interactive system, which is cool, and probably always unique. But when I see the massive GUI in Blender and all those thousands of options, I think that I will never build such a complex environment myself, and I may just as well use Blender instead. It’s just an idea, not a decision.

Of course you don’t need to build something so complex to create amazing work. But the truth is that I really enjoyed creating materials with nodes in the shader editor, replaying animations a hundred times while tweaking animation curves… It’s different. With Blender you are probably trying to do something polished with a fixed duration, instead of a living real-time system (what you would create with Processing or similar). Combining both tools is another interesting option.
Secondly, on a very simple level since I’m still a beginner, I used Blender’s Video Sequence Editor to put together a video (Ice Terrain) that I put in this gallery a few months ago. I used Processing to make some shots and then edited them in the VSE. If I were more skilled, I probably could have made one Processing file to create the entire animation, but I saw that this Blender tool would let me make the edits and transitions without resorting to code. Thanks, @hamoid, for re-inspiring me to go back to combining tools.
Lastly, it was Blender that led me to Processing. I was learning how to use Blender when I realized that I really wanted to learn how to code to make generative art. So, at first, I started to learn Python, but then I found Processing which has helped me to learn how to code much faster and better than any of my previous attempts over the years.
Also, for anybody thinking of taking on Blender, it does have a formidable interface (although the latest version is much better), but remember that you don’t have to learn all of it. Just learn the parts you want to use. For me, that’s just VSE right now!
Actually I think it’s a good idea to generate a bunch of different shots with Processing, and then compose them in some way in Blender or some other video editing program like Kdenlive. I think there may be more variety than if you output everything directly from one program (because video editing is also a creative process, so you can play with cuts, rhythm, transitions and effects in a way that is not trivial when coding).
The visuals in our live performance are actually rendered with Processing in real time. I control them with 70 MIDI knobs and buttons. Maybe I should scale down a bit. A teaser video:
I don’t need to turn them all at once; I react to the sound and control those creatures with the knobs, changing material, motion, and behavior properties.
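The knob-to-parameter mapping can be sketched in plain Python. All names here are hypothetical (in an actual Processing sketch this logic would live inside a MIDI controller-change callback): each 7-bit controller value (0–127) is rescaled onto a parameter range.

```python
def midi_to_range(cc_value, lo, hi):
    """Map a 7-bit MIDI controller value (0-127) onto an arbitrary
    parameter range, e.g. a creature's speed or material roughness."""
    return lo + (cc_value / 127.0) * (hi - lo)

# Hypothetical mapping table: one entry per knob.
knobs = {
    21: ("speed", 0.0, 5.0),      # motion property
    22: ("roughness", 0.0, 1.0),  # material property
}

params = {}

def on_controller_change(cc_number, cc_value):
    """What a MIDI callback would do: update the named parameter."""
    if cc_number in knobs:
        name, lo, hi = knobs[cc_number]
        params[name] = midi_to_range(cc_value, lo, hi)

on_controller_change(21, 127)  # knob turned all the way up
on_controller_change(22, 0)    # knob turned all the way down
```

With 70 controllers, the table simply grows; the callback stays the same, which is what makes reacting live to the sound practical.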
When you practice, it becomes like any other instrument, like a guitar or a piano. It can be played without looking much at the controllers, just focusing on the visuals and the sound. A very nice feeling.
P.S. Think of a piano: it also has many more keys than we have fingers.
Very inspiring to see how you merge coding with your creative processes, Hamoid; it’s really nice to get an insight into some of the steps you take along the way. Hope to see more of your blog posts in the future!
Thank you! I’ll keep it in mind and try to share more of the process.
And thank you @Chrisir! You remembered! It was a very nice evening! A friend is editing some video footage; I hope to share it soon. In a few days, another performance!