Passing voice data to Processing

Hi,

I am a beginner with Processing. I would like to make a project where voice commands, given in real time through the microphone of a notebook/tablet/smartphone, are passed to Processing to display events linked to specific words (e.g. if one says “Milk” into the microphone, Processing shows in the display window an image of a milk bottle saved in the project’s data folder). Is it possible? How?
Thanks
Best regards
Humus

Hi @humus welcome to the forum.
I don’t think it is easy for a beginner, but it is possible. You will need the Google speech recognizer.
Here is code for an Android device that can capture words.
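For reference, the core of such a sketch looks roughly like this. This is a hedged sketch, not the linked code: it assumes Processing’s Android mode (where `getActivity()` returns the Activity), uses Android’s native `SpeechRecognizer`, and needs the RECORD_AUDIO permission enabled in the sketch settings.

```java
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import java.util.ArrayList;

SpeechRecognizer recognizer;
String lastWords = "";

void setup() {
  // The recognizer must be created on the UI thread.
  getActivity().runOnUiThread(new Runnable() {
    public void run() {
      recognizer = SpeechRecognizer.createSpeechRecognizer(getActivity());
      recognizer.setRecognitionListener(new RecognitionListener() {
        public void onResults(Bundle b) {
          // The recognized phrases arrive as a list of candidate strings.
          ArrayList<String> r = b.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
          if (r != null && !r.isEmpty()) lastWords = r.get(0);
        }
        // Empty stubs for the remaining listener callbacks:
        public void onReadyForSpeech(Bundle b) {}
        public void onBeginningOfSpeech() {}
        public void onRmsChanged(float v) {}
        public void onBufferReceived(byte[] b) {}
        public void onEndOfSpeech() {}
        public void onError(int e) {}
        public void onPartialResults(Bundle b) {}
        public void onEvent(int t, Bundle b) {}
      });
      recognizer.startListening(new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH));
    }
  });
}

void draw() {
  background(0);
  text(lastWords, 20, height/2);  // show the last recognized phrase
}
```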

Thanks Noel,

But how do I build the entire code? I tried to visit the link you indicated, but I can’t understand it… thanks

But did you run the code?
Did it work?

No, because it needs libraries that I can’t find…

These are native libraries that already exist on the device. Just run the code. What Android version are you using?

But do I have to run it on an Android device? How do I run Processing on an Android device? Thanks

Download the APDE app from the Play Store. You speak the words into your device’s microphone and use the OscP5 library to transmit them over Wi-Fi to a sketch on your PC, which can display the images.
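The phone-to-PC link can be sketched as a pair of OscP5 sketches; this is a hedged illustration, where the IP address, port, message address `/word`, and image file name are all my assumptions, not values from the linked sample.

```java
// Phone side (runs in APDE): send each recognized word to the PC over Wi-Fi.
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress pc;

void setup() {
  osc = new OscP5(this, 12000);                    // local OSC instance
  pc = new NetAddress("192.168.1.10", 12000);      // your PC's local IP (assumption)
}

void sendWord(String word) {
  OscMessage m = new OscMessage("/word");          // hypothetical address pattern
  m.add(word);
  osc.send(m, pc);
}
```

And on the PC, the companion sketch receives the word and shows the matching image:

```java
// PC side: listen on the same port and display the image for the word.
import oscP5.*;

OscP5 osc;
PImage img;

void setup() {
  size(400, 400);
  osc = new OscP5(this, 12000);                    // same port as the sender
}

void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/word")) {
    String word = m.get(0).stringValue();
    if (word.equals("milk")) img = loadImage("milk.jpg");  // file name is an assumption
  }
}

void draw() {
  background(255);
  if (img != null) image(img, 0, 0);
}
```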

Where do I have to install the OscP5 library? On the Android device, or in the PC version of Processing?

Also, how do I set up Wi-Fi to pass everything to the PC? Do I need a separate router?

thanks

On both. Here you can download the Android version and a sample sketch for communication between phone and PC. The library for the PC you can install from the IDE.
It’s supposed to work on your home network. You do use one, don’t you?

It doesn’t work. I will try again with a dedicated router; that is probably the problem.

What I would like to realize, as an alternative, is:

  1. record a voice sample;
  2. take from that recording the table of amplitudes, instant by instant (e.g. at time 00"0032 the amplitude is 0.0654, on a normalised scale);
  3. associate each amplitude with a pixel colour (e.g. 200, 100, 50 in RGB);
  4. show on the display a video that links the pixel colours (or groups of pixels) to every instant.
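Step 3 can be sketched as a small mapping function. This is only an illustration; the particular colour mapping (louder is redder, quieter is greener, blue held constant) is my assumption, since the thread doesn’t specify one.

```java
// Hypothetical mapping from a normalised amplitude (0..1) to an RGB colour.
public class AmpToColor {
    static int[] ampToRgb(float amp) {
        float a = Math.max(0f, Math.min(1f, amp));   // clamp to the 0..1 range
        int r = Math.round(a * 255);                 // red grows with amplitude
        int g = Math.round((1f - a) * 255);          // green shrinks with amplitude
        int b = 128;                                 // constant blue channel
        return new int[] { r, g, b };
    }

    public static void main(String[] args) {
        int[] c = ampToRgb(0.0654f);                 // the example amplitude above
        System.out.println(c[0] + ", " + c[1] + ", " + c[2]);  // prints "17, 238, 128"
    }
}
```

In a Processing sketch you would then call `set(x, y, color(r, g, b))` (or write into `pixels[]`) for each instant to build up the frame.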

This would be the basis from which to start an animation (e.g. linking pixels of the same colour with a line). It would be interesting to make it a 3D video, in order to export the final result as a 3D solid (or as a 2D image, too).

Thanks
Best regards
Humus

Do you mean frequency or amplitude recognition?
Here’s a video:

I’ve used OK Google before to switch on lamps etc. I used my phone with voice recognition and an Arduino over Bluetooth. The hot word “OK Google” is important so you don’t have to keep your hands on the phone all the time.
But indeed you need a router, because a telephone company’s data connection would be too costly.
You can, however, use a hot word on the PC as well, with this Google browser extension.


And here is another video about voice recognition:

Once you have the phrase in a String, you just filter it with the String class functions to pick out the words that trigger the action you want.
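That filtering step can be sketched like this; the helper name and trigger list are hypothetical, and it simply scans the recognized phrase for the first known trigger word.

```java
// Hypothetical filter: scan a recognized phrase for known trigger words.
public class TriggerFilter {
    static String matchTrigger(String phrase, String[] triggers) {
        String p = phrase.toLowerCase();             // case-insensitive matching
        for (String t : triggers) {
            if (p.contains(t)) return t;             // first trigger found wins
        }
        return null;                                 // no trigger word in the phrase
    }

    public static void main(String[] args) {
        String[] triggers = { "milk", "bread" };
        System.out.println(matchTrigger("Please show the Milk bottle", triggers));  // prints "milk"
    }
}
```

The returned trigger can then be used to pick the image to display, e.g. `loadImage(trigger + ".jpg")`.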

Hi Noel, thanks for your reply. I have just taken a look at the links you attached, and in the code there are keywords like “let” or “function” that the Processing version I downloaded and installed on my PC does not recognize: why?

Thanks
Humus

Oh, that’s because it’s the p5.js version of Processing.
You’ll have to study the translation.
For example, function() becomes void().
You also have to declare the type of each variable, because ‘let’ can hold any type of variable or object.
In that respect, p5.js is easier.
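A minimal side-by-side illustration of that translation (my own example, not code from the linked videos):

```java
// p5.js:
//   function setup() {
//     let x = 10;
//     let name = "milk";
//   }
//
// Processing (Java) equivalent — every variable gets an explicit type:
void setup() {
  int x = 10;
  String name = "milk";
}
```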

What is the difference between Processing and p5.js? Is there a translation manual? How do I import p5.js code into Processing? Thanks

Thanks! And from P5.js to Processing?

I’m not aware of a translator, but I would watch this video, including the second part.

Thanks! Can the p5.js code be run in Processing? If not, how do I run it, and with what? Thanks