Neural style transfer in Processing

Does anyone know if it can be done in processing or are there similar examples?


Sure, it can be done. But this is not exactly a trivial task. What do you have so far?

Here are some pointers for getting started with Processing and neural style transfer (I am not an expert):


An interesting overview from Hannu Töyrylä on moving from Processing to neural style transfer, then re-incorporating the results into a Processing workflow.

…more related posts:

try it out

If you are looking to try out a working sketch in Processing, Bryan Chung of Magic & Love has a neural style blog post with results from a Processing (Java) sketch that applies pre-trained Torch models to live video.

The Processing sketch is here:

I believe Bryan gets the models from Johnson et al’s “fast-neural-style” feed-forward style transfer system. That code (and a model download script) is here:

…specifically, I think the sketch needs the resource “composition_vii.t7”, which is available here:
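For background on what those .t7 models encode: feed-forward systems like fast-neural-style are trained against Gatys-style losses, where a layer’s “style” is summarized by its Gram matrix, the inner products between feature channels. Here is a toy, pure-Python illustration of that idea (my own sketch, not taken from Bryan’s or Johnson’s code; the tiny hand-made “feature map” stands in for a real conv layer’s output):

```python
# Toy illustration of the Gram matrix used as the "style" representation
# in neural style transfer. Pure Python, no deep-learning libraries.

def gram_matrix(features):
    """features: list of C channels, each a flat list of H*W activations.
    Returns the C x C matrix of inner products between channels,
    normalized by the number of spatial positions."""
    n = len(features[0])
    return [[sum(a * b for a, b in zip(fi, fj)) / n for fj in features]
            for fi in features]

def style_loss(ga, gb):
    """Sum of squared differences between two Gram matrices -- the
    quantity a style loss drives toward zero during training."""
    return sum((x - y) ** 2
               for ra, rb in zip(ga, gb)
               for x, y in zip(ra, rb))

# A fake 3-channel, 2x2 feature map (each channel flattened to 4 values).
feats = [
    [1.0, 0.0, 1.0, 0.0],   # channel 0
    [0.0, 1.0, 0.0, 1.0],   # channel 1
    [1.0, 1.0, 1.0, 1.0],   # channel 2
]
G = gram_matrix(feats)
```

The point is just that “style” here is a set of channel-correlation statistics, independent of where features occur in the image, which is why a style can be transferred onto arbitrary content.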

implement a full system

If you are interested in doing it end-to-end in Processing (Java), one way might be to incorporate dl4j (Deeplearning4j). They have a neural style transfer example in their examples list, and you could try including the library and the example class in a Processing sketch and then calling it.

There is a related dl4j Java tutorial here:

Note that another route is using TensorFlow.

do it with python

If you are thinking about Python rather than Java, there is an introductory walkthrough in Python here, and a great step-by-step tutorial here:

…so you could walk through the example in Python 3 and then use p5py to incorporate it into a Processing-like API.

Note this requires p5py: the examples are written in Python 3 and, more importantly, use PyTorch, which I believe is not compatible with Processing’s Python mode (Jython), even in older Python 2.7-compatible versions, due to its reliance on CPython.
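To give a flavor of what those Python tutorials do: classic (Gatys-style) transfer iteratively updates the pixels of a generated image by gradient descent so that its content and style statistics approach the targets. Below is a hugely simplified, pure-Python toy of that loop (my own sketch: the “image” is four numbers, the “style statistic” is just the mean, and the gradient of this quadratic loss is written out by hand instead of using autograd):

```python
# Toy version of the iterative style-transfer optimization loop.
# Real systems descend on VGG feature losses with autograd; here the
# "content" term matches target pixels and the "style" term matches a
# target mean, standing in for real style statistics.

CONTENT = [0.2, 0.8, 0.5, 0.1]   # target "content" values
STYLE_MEAN = 0.9                 # target "style" statistic
ALPHA, BETA = 1.0, 10.0          # content vs style weights

def loss(img):
    content = sum((p - c) ** 2 for p, c in zip(img, CONTENT))
    mean = sum(img) / len(img)
    style = (mean - STYLE_MEAN) ** 2
    return ALPHA * content + BETA * style

def grad(img):
    """Hand-derived gradient of loss() with respect to each 'pixel'."""
    n = len(img)
    mean = sum(img) / n
    return [ALPHA * 2 * (p - c) + BETA * 2 * (mean - STYLE_MEAN) / n
            for p, c in zip(img, CONTENT)]

img = [0.0] * 4                  # start from a blank "image"
for _ in range(500):             # gradient descent on the pixels
    g = grad(img)
    img = [p - 0.05 * gi for p, gi in zip(img, g)]
```

The result is a compromise between the two targets, which is exactly the content-weight vs style-weight trade-off the tutorials let you tune.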