Hello, I am attempting to get audio/video sync working for exporting a music visualization. I'm working with the withAudioViz.pde demo sketch from the VideoExport library.
I am able to generate the analysis .txt file and get the synced example working. However, when I try to integrate it with the ProcessingCubes visualizer I run into problems.
The crux of my issue is that the VideoExport example buckets the FFT output from an AudioSample ahead of time, while the visualizer calls fft.getBand() directly on live playback after calling fft.forward() each frame. Is it possible to do something like fft.forward() with an AudioSample? It doesn't appear to have a live buffer I can forward. Sorry if this is a bit hard to follow; happy to give additional information or follow up. Thank you!
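For reference, Minim's bundled offlineAnalysis example suggests a pattern that might bridge the gap: copy the AudioSample's channel data into fixed-size chunks and call fft.forward() on each chunk. This is only a sketch of what I think should work, not tested against ProcessingCubes; "song.mp3" and the FFT size of 1024 are placeholders.

```processing
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;

void setup() {
  minim = new Minim(this);
  // "song.mp3" is a placeholder filename
  AudioSample sample = minim.loadSample("song.mp3", 2048);

  // Pull the whole left channel as one float array
  float[] left = sample.getChannel(AudioSample.LEFT);

  int fftSize = 1024; // placeholder size
  FFT fft = new FFT(fftSize, sample.sampleRate());
  float[] chunk = new float[fftSize];

  int totalChunks = (left.length / fftSize) + 1;
  for (int ci = 0; ci < totalChunks; ci++) {
    int start = ci * fftSize;
    int len = min(fftSize, left.length - start);
    System.arraycopy(left, start, chunk, 0, len);
    // Zero-pad the final partial chunk
    for (int i = len; i < fftSize; i++) chunk[i] = 0;

    // The offline equivalent of fft.forward(player.mix):
    fft.forward(chunk);
    for (int band = 0; band < fft.specSize(); band++) {
      // fft.getBand(band) is available here, e.g. to write
      // the values out to the analysis .txt file
    }
  }
  sample.close();
}
```

If that's roughly the right approach, I assume the visualizer's per-frame fft.getBand() calls could instead read the precomputed rows back from the .txt file, the way the withAudioViz example does.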