I'm building an audio visualization project with p5.js, and I was wondering if it's possible to merge multiple audio files together, automatically trimming them to the same length first (say 10 seconds: just use the first 10 seconds of each clip and output a single merged file).
I realise this is probably beyond the capabilities of p5.sound, so if anyone has any good ideas for how I can do this with plain JavaScript or another plugin/library, please let me know.
The idea is to upload multiple files (using file inputs and PHP?), and have the program run a background process that normalizes the clips to the same volume (if possible), trims each clip to 10 seconds if it isn't already, and then merges them into one audio file.
Would this be possible?
Any tips would be appreciated.
Thanks
Gershy13
I normally recommend looking into the Web Audio API instead of trying different JS libraries, because those libraries are basically wrappers around the native API. I think this would help: AudioBuffer - Web APIs | MDN
for upload you can also use drag and drop: dragOver and dragLeave do not work - #6 by micuat
and for saving a file: wavefile - npm
If it's more of a batch operation (i.e., you already know the start/end times to trim), you can also consider editing on the server, which may be easier but gives you less flexibility in the editing.
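To give a rough idea of the browser-side route, here is a minimal sketch of getting an uploaded file into an AudioBuffer with the native API (untested, and the helper name is just illustrative):

```javascript
// Minimal sketch: decode a user-selected file into an AudioBuffer so its
// samples can later be trimmed/merged. fileToAudioBuffer is an illustrative
// helper name, not part of any library.
const audioCtx = new AudioContext();

async function fileToAudioBuffer(file) {
  // "file" is a File object from an <input type="file"> or a drop event
  const arrayBuffer = await file.arrayBuffer();
  return await audioCtx.decodeAudioData(arrayBuffer);
}
```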
Yeah, it literally seems simple and doable with the API in my head, but for the life of me I cannot figure out how to actually do it.
All it needs to do is take 2 audio files and cut the ends off (if they are longer than 10 seconds), then join them into a single audio track and output that file for drawing on the next page.
Hmm, I didn't say it's simple, and I don't think it is. I can't make an example because 1) I feel it would take me at least 2-3 hours and I don't really need that example myself, and 2) you seem to have your own specific workflow (JS + PHP), and I fear that whatever I code may be completely useless in your case. In your case especially, it sounds like it would be far more efficient and simpler to run a command (e.g., ffmpeg) on the server side, which has nothing to do with p5.js or even JavaScript.
I understand it is not simple in practice; by "simple" I meant it is a basic thing (trimming an audio clip) that I thought would have been much easier to code, but I do understand that it is pretty complex in reality.
I thought there would have been some libraries or something that would include it already…
I found an ES6 module called Crunker which joins two audio files together, but it doesn't trim them.
How would I run an ffmpeg command on a website?
The final result of this project will be hosted on a standard web hosting service (I am currently using infinityhost)
I’m sorry for all the questions, I’m still very new to coding and all this and it’s hard to learn fast. (Uni deadlines are fun xD)
OK, I have some general advice: if it's a school project, ask around whether the lecturers or the TAs can help you. Or try student groups or whoever else at the school - use your network to find someone who knows how to do this kind of project and can spend some time helping you. The problem here is that what you are trying to do is quite a big project (by the way, it may be better to revise the goal of the project if possible), and you need to first frame it: what tools to use, what the milestones are. Students who already took this course can give you concrete advice. These projects are meant to be doable (well… not always, I know).

Of course you can ask here or on similar platforms like Stack Overflow, but we don't know the curriculum, which means we don't know what you have already learned and what you are supposed to learn from the project, so we may end up giving you guidance that is completely irrelevant. These forums are more suitable when you have a specific issue (e.g., a synth doesn't make any sound output, or you cannot figure out what the callback parameters of SoundFile do).
I wish you good luck! BTW, I added the homework tag to the topic.
You can definitely split/trim audio files in JavaScript. The p5.sound library doesn't have support for this though, so you would need to leverage the underlying AudioBuffer API to some extent. Here's a code snippet that does it:
let trimSlider; // a p5.js slider element created in setup
let audioFile; // a p5.SoundFile instance loaded previously
function trimAudioFile() {
// Get the end time we want to trim to in seconds
let endTime = trimSlider.value();
// Convert the time to a whole number of samples (Float32Array lengths must be integers)
let endOffset = Math.floor(endTime * audioFile.sampleRate());
let channelBuffers = [];
// Copy data for each channel in our source file
for (let channel = 0; channel < audioFile.channels(); channel++) {
// Create a new array for each channel
let channelDataArray = new Float32Array(endOffset);
// Copy data to the array (this will only copy up to the size of the array)
audioFile.buffer.copyFromChannel(channelDataArray, channel, 0);
channelBuffers.push(channelDataArray);
}
// setBuffer expects an array of channels which are expected to be Float32 arrays
audioFile.setBuffer(channelBuffers);
// audioFile now contains the trimmed sound data and can be used like any p5.SoundFile
}
You can find the full working sample here: Audio File Visualizer + Trimmer - OpenProcessing
And here’s a more extensive code example for slicing native AudioBuffers: How to slice an AudioBuffer » Miguel Mota | Software Developer
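In the same spirit as that linked example, here is a rough, untested sketch of trimming a native AudioBuffer to its first N seconds without p5.sound, assuming you already have an AudioContext:

```javascript
// Rough sketch: copy only the first `seconds` worth of samples from
// sourceBuffer into a new, shorter AudioBuffer.
function trimAudioBuffer(audioCtx, sourceBuffer, seconds) {
  const frameCount = Math.min(
    sourceBuffer.length,
    Math.floor(seconds * sourceBuffer.sampleRate)
  );
  const trimmed = audioCtx.createBuffer(
    sourceBuffer.numberOfChannels,
    frameCount,
    sourceBuffer.sampleRate
  );
  for (let channel = 0; channel < sourceBuffer.numberOfChannels; channel++) {
    // copyFromChannel only copies up to the destination array's length
    const data = new Float32Array(frameCount);
    sourceBuffer.copyFromChannel(data, channel, 0);
    trimmed.copyToChannel(data, channel, 0);
  }
  return trimmed;
}
```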
Thank you! This is really helpful and what I was looking for…
Usually I would do that and ask them, but as it's close to the end of term, they don't really have much support available as far as I can tell. And it's so close to the deadline that the project goal can't be revised. But thanks for all your help anyway!
Hey @KumuPaul, just curious if it's possible to merge multiple audio tracks using a similar method? Currently I am using another library called Crunker to do this; I was wondering if there is a way to do it in p5 / with the Audio API method that the trimming is using?
It certainly is. As you can hopefully see from the example, audio buffer data is just a series of sound wave amplitudes. Since you can extract this data out of the AudioBuffer using copyFromChannel, and you can create a new AudioBuffer using the p5.SoundFile's setBuffer method (or copy data directly into an AudioBuffer using copyToChannel if you don't want to use p5.sound), the possibilities for splitting, joining, and otherwise manipulating this data are pretty much infinite. However, note that the Float32Array object is a little different from your typical JavaScript array in that it has a fixed length and lacks some of the standard methods of normal arrays, like concat. So in order to combine two Float32Arrays by concatenation, you need to allocate a new Float32Array whose length is the sum of the input lengths and then copy in the contents of the two source arrays:
function concatFloat32Arrays(ary1, ary2) {
let result = new Float32Array(ary1.length + ary2.length);
// Copy all of ary1 into the beginning of the new array
result.set(ary1);
// Copy all of ary2 into the new array, at the offset of the number of elements in the first array.
result.set(ary2, ary1.length);
return result;
}
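As a sketch of how that helper could then be used to join two clips (untested, and assuming both p5.SoundFiles have the same channel count and sample rate; the function name is just illustrative):

```javascript
// Sketch: join two p5.SoundFiles end to end, channel by channel.
function mergeSoundFiles(fileA, fileB) {
  let mergedChannels = [];
  for (let channel = 0; channel < fileA.channels(); channel++) {
    // Pull the raw samples out of each file's underlying AudioBuffer
    let samplesA = new Float32Array(fileA.buffer.length);
    let samplesB = new Float32Array(fileB.buffer.length);
    fileA.buffer.copyFromChannel(samplesA, channel, 0);
    fileB.buffer.copyFromChannel(samplesB, channel, 0);
    // Concatenate and store as this channel's data for the merged clip
    mergedChannels.push(concatFloat32Arrays(samplesA, samplesB));
  }
  // Reuse fileA to hold the merged audio, just like setBuffer in the trimming example
  fileA.setBuffer(mergedChannels);
  return fileA;
}
```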
Thanks! I got it working…
One more thing: I am using your original example of the visualizer, and it's working, but it seems to have a problem with some audio clips where the visualization looks off (the only thing I changed is that I turned fill() on).
It's like the fill isn't working properly.
It looks like this.
Any ideas?
EDIT: The blue line in the middle is generated by me to show where the middle of the canvas is…
Yeah, the reason the graph looks like that in some cases is that when the final point is not at 0 amplitude, the closing segment of the shape goes from the starting point directly to the ending point. So areas where both the curve and the diagonal segment closing the shape are above the center line will not be filled in.
Here’s a simplified example:
To fix this you just need to add a synthetic point at the end of the curve:
vertex(maxX, centerY);
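Here's a rough sketch of what the drawing loop might look like with synthetic points at both ends, so the filled shape always closes along the center line (variable names like peaks, amp, centerY, and maxX are illustrative, not from the original sketch):

```javascript
// Sketch: close the waveform shape down to the center line at both ends
// so fill() has a well-defined region to paint.
noStroke();
fill(100, 150, 255);
beginShape();
vertex(0, centerY); // synthetic starting point on the center line
for (let i = 0; i < peaks.length; i++) {
  let x = map(i, 0, peaks.length, 0, maxX);
  vertex(x, centerY + peaks[i] * amp);
}
vertex(maxX, centerY); // synthetic ending point, as described above
endShape(CLOSE);
```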
Thanks!!! It’s working perfectly now…