Hello World and dear Processing community!
Here’s my first Processing project I’d like to share with you guys.
The idea:
- feed the program with a database of images
- calculate the average color of each image in the database
- feed the program with an input image
- rasterize/split the input image into several equal cells
- calculate the average color of each cell
- replace each cell with the closest color match from the database
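For anyone curious, the core loop boils down to something like this. It's a simplified sketch with made-up names, not my exact code, and it assumes the database is already loaded into an `ArrayList<PImage>` with its average colors precomputed:

```
// Average color of an image (or of a cell cut out of the input image).
color averageColor(PImage img) {
  img.loadPixels();
  float r = 0, g = 0, b = 0;
  for (int i = 0; i < img.pixels.length; i++) {
    color c = img.pixels[i];
    r += red(c);
    g += green(c);
    b += blue(c);
  }
  int n = img.pixels.length;
  return color(r / n, g / n, b / n);
}

// Index of the database image whose average color is closest (squared RGB distance).
int bestMatch(color target, color[] dbAverages) {
  int best = 0;
  float bestDist = Float.MAX_VALUE;
  for (int i = 0; i < dbAverages.length; i++) {
    float dr = red(target) - red(dbAverages[i]);
    float dg = green(target) - green(dbAverages[i]);
    float db = blue(target) - blue(dbAverages[i]);
    float d = dr * dr + dg * dg + db * db;
    if (d < bestDist) {
      bestDist = d;
      best = i;
    }
  }
  return best;
}

// Split the input into cols x rows cells and draw the best-matching database image into each.
// dbAverages[i] is averageColor(db.get(i)), computed once at startup.
void buildMosaic(PImage input, ArrayList<PImage> db, color[] dbAverages, int cols, int rows) {
  int cellW = input.width / cols;
  int cellH = input.height / rows;
  for (int y = 0; y < rows; y++) {
    for (int x = 0; x < cols; x++) {
      PImage cell = input.get(x * cellW, y * cellH, cellW, cellH);
      int idx = bestMatch(averageColor(cell), dbAverages);
      image(db.get(idx), x * cellW, y * cellH, cellW, cellH);
    }
  }
}
```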
I was looking for something like this:
That first milestone was reached pretty quickly, so I decided to explore some variations of it…
Let’s make a video:
I first wanted to see how this would look on a sequence of images.
- I needed a much larger database of images, since there would now be around 25 frames per second to replace
- I searched the internet for a free image database that would let me download images via an API
- I stumbled upon pixabay.com, which lets you download a reasonable number of images simply by using a token in an HTTP request
- So my little .pde would now first download a fresh database of images on each run
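In case someone wants to try the same, the download step can be done with Processing's built-in JSON and image loading. Something along these lines; the endpoint, parameters and field names below are from my memory of the Pixabay API docs, so double-check them and use your own key:

```
// Rough sketch of the download step, not my original code.
String apiKey = "YOUR_PIXABAY_KEY";   // placeholder, get your own token

void downloadDatabase(String query, int count) {
  // Pixabay (as far as I remember) returns a JSON object with a "hits" array of image entries.
  String url = "https://pixabay.com/api/?key=" + apiKey
             + "&q=" + query + "&per_page=" + count;
  JSONObject response = loadJSONObject(url);
  JSONArray hits = response.getJSONArray("hits");
  for (int i = 0; i < hits.size(); i++) {
    JSONObject hit = hits.getJSONObject(i);
    // "webformatURL" is the assumed field holding a medium-sized image URL.
    PImage img = loadImage(hit.getString("webformatURL"), "jpg");
    if (img != null) {
      img.save("database/" + query + "_" + i + ".jpg");
    }
  }
}
```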
I was pretty excited to see the first result, but unfortunately it turned out to look more like a purée of wrong pixels than the interesting, pleasing mosaic look I was hoping for… But let's not lose hope yet!
(I regrettably can't show you the result since I deleted it (#idiot), and my .pde won't run anymore since my Pixabay access token has expired… But trust me, you didn't miss anything!)
An infinite zoom?
Alright, so what could be the next variation of our little mosaic program, since videos wouldn't do it?
Well, I have an image made out of images, so why not create something close to an infinite zoom?
If we scale one of the cells up to fullscreen, then do the same with one of its cells, and so on, we could maybe get something interesting.
And so it was!
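The mechanic itself is easy to prototype: draw the mosaic, scale it up around one chosen cell every frame, and once that cell fills the screen, swap in the next mosaic and reset the scale. Here's a small self-contained toy version that uses a random-colored grid as a stand-in for the real image mosaic (all names and numbers are just illustrative):

```
PGraphics mosaic;
int cols = 16;          // cells per row (and per column, assuming a square canvas)
float zoom = 1.0;
int targetCol, targetRow;

void setup() {
  size(640, 640);
  mosaic = makePlaceholderMosaic();
  pickTargetCell();
}

void draw() {
  background(0);
  float cellSize = width / (float) cols;
  // Keep the chosen cell centered while everything scales up around it.
  translate(width / 2, height / 2);
  scale(zoom);
  translate(-(targetCol + 0.5) * cellSize, -(targetRow + 0.5) * cellSize);
  image(mosaic, 0, 0);

  zoom *= 1.01;
  if (zoom >= cols) {                  // the cell now covers the whole screen
    mosaic = makePlaceholderMosaic();  // in the real sketch: build the next mosaic from that cell's image
    pickTargetCell();
    zoom = 1.0;
  }
}

// Stand-in for the real mosaic: a grid of random colors.
PGraphics makePlaceholderMosaic() {
  PGraphics pg = createGraphics(width, height);
  pg.beginDraw();
  pg.noStroke();
  float cellSize = width / (float) cols;
  for (int y = 0; y < cols; y++) {
    for (int x = 0; x < cols; x++) {
      pg.fill(random(255), random(255), random(255));
      pg.rect(x * cellSize, y * cellSize, cellSize, cellSize);
    }
  }
  pg.endDraw();
  return pg;
}

void pickTargetCell() {
  targetCol = int(random(cols));
  targetRow = int(random(cols));
}
```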
But we need a story!
Since I am also into filmmaking, I wanted a bit more than just a so-called “effect”. Maybe we could use it to narrate something, or even to retell something.
So I needed a story, and a story that people would know.
A Space Odyssey:
My professor at uni came up with the idea of Kubrick’s classic “2001: A Space Odyssey”, and I decided to give it a shot with an infinite zoom-out!
- To build the database, I exported one frame per second of the movie as a .jpg into a ‘frames’ folder
- Now I had around 9000 images
- I manually picked stills from the movie and exported them into a ‘keyframes’ folder
- The .pde would now first go through the ‘frames’ folder and save only those images that differ enough from each other into an ArrayList (a rough sketch of this step follows the list)
- That would reduce the amount from 9000 to around 400 images to play with
- Afterwards it would process still n+1 into a mosaic and place still n in its center cell
- The n+1 mosaic would then be drawn so that the centered still n fills the screen
- It would then scale the n+1 mosaic down on every draw() call until still n+1 fills the screen
- And so on, until the last still.
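The filtering step is basically just a per-pixel color difference against the last frame that was kept. A rough sketch; the threshold, the file naming scheme and so on are placeholders, not my actual values:

```
// Walk through the exported frames in order and keep a frame only if it differs
// enough from the last kept frame.
ArrayList<PImage> filterFrames(String folder, int frameCount, float threshold) {
  ArrayList<PImage> kept = new ArrayList<PImage>();
  PImage last = null;
  for (int i = 0; i < frameCount; i++) {
    PImage frame = loadImage(folder + "/frame_" + i + ".jpg");  // hypothetical naming scheme
    if (frame == null) continue;
    if (last == null || difference(frame, last) > threshold) {
      kept.add(frame);
      last = frame;
    }
  }
  return kept;
}

// Mean per-pixel color distance between two images of the same dimensions.
float difference(PImage a, PImage b) {
  a.loadPixels();
  b.loadPixels();
  float sum = 0;
  for (int i = 0; i < a.pixels.length; i++) {
    float dr = red(a.pixels[i]) - red(b.pixels[i]);
    float dg = green(a.pixels[i]) - green(b.pixels[i]);
    float db = blue(a.pixels[i]) - blue(b.pixels[i]);
    sum += sqrt(dr * dr + dg * dg + db * db);
  }
  return sum / a.pixels.length;
}
```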
See for yourself:
It still has some issues, like the moiré that comes from the downscaling algorithm in Processing.
The next step would be to check whether an image from the database has already been picked, to get more variance in the final mosaics.
I hope you guys enjoyed it, and I’d be glad to get some feedback!
Credits
I would really like to thank my good friend Ole, who made this possible since he is the brain behind all the math here.