Idea: Library that involves drawing and sound

Hi all!

My name is Heng Yeow and I’m an open-source enthusiast. I’m interested in K12 education and developing creative learning experiences.

I’m thinking of building a p5 library that combines audio with drawing on the canvas. Function calls that draw similar things would be mapped to the same audio, helping students identify similar function patterns through sound.
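To make the idea concrete, here is a minimal sketch of the mapping I have in mind. The `toneMap` frequencies and the `tonesFor` helper are purely illustrative, not a real p5 API — a real library would wrap p5’s drawing functions and schedule p5.sound oscillators instead of returning numbers.

```javascript
// Hypothetical mapping: each drawing function gets a fixed tone, so
// repeated calls to the same function produce the same sound.
const toneMap = {
  rect: 261.63,    // C4
  ellipse: 329.63, // E4
  line: 392.0,     // G4
};

// Collect the tones a sequence of drawing calls would trigger.
// Unknown calls map to null (silent) in this sketch.
function tonesFor(calls) {
  return calls.map((name) => toneMap[name] ?? null);
}

// Two rects map to the same tone, so a learner hears the repetition.
console.log(tonesFor(["rect", "rect", "ellipse"]));
// → [261.63, 261.63, 329.63]
```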

I’d like to hear the community’s thoughts and feedback on it. Thank you :smiley:


Do you mean, for example, that rect and ellipse might produce different oscillator pitches? Or that 3x3 rects would be louder than one rect?


To put this into an analogy, consider a tree. A tree :evergreen_tree: is made up of leaves, branches, a trunk, etc. When each part is drawn separately, a different sound is played. This could help students intuitively learn that a program is made up of multiple function calls, based on the different sounds they hear.

Some questions I have: Is this feasible and useful for p5? Would a p5 library be a suitable vehicle for this kind of concept? Are there similar projects out there?

I’m not that familiar with developing for the p5 ecosystem yet, but I’m willing to pick up the necessary skills :smile: Let me know what you think!


This seems to be the key part of your concept. By default, a collection of instructions in Processing / p5.js all apply to a single frame (~1/60th sec) – there is no automatic temporal dimension / animation the way there is in, say, Logo, and no built-in hierarchy in the way of, say, Context Free.

So, this tree-like random windmill code describes a trunk, branches, and leaves which evolve randomly. However, it draws every element every frame by default, so all the associated sounds would play all at once, at 60fps.

https://editor.p5js.org/jeremydouglass/sketches/4LTEH7w0C

So in order to align sounds with visual instructions in a sensible way – one that animates a tree, with different tones for trunk, branches, and leaves – you would need some kind of framework to infer that animation; otherwise the sketches themselves will be quite complex, and not appropriate for learners.
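One possible shape for such a framework: queue up the drawing instructions and reveal one per frame, so each part of the tree makes its sound as it appears rather than all at once at 60fps. The instruction list, `step` function, and sound names below are all hypothetical — just a sketch of the sequencing idea, not a proposed API.

```javascript
// Hypothetical instruction queue for drawing a tree, revealed one
// element per frame instead of everything every frame.
const instructions = [
  { part: "trunk", tone: "bang" },
  { part: "branch", tone: "ding" },
  { part: "leaf", tone: "ting" },
];

// Given a frame number (1-based, like p5's frameCount), return which
// instructions are visible so far and the one sound to play this frame.
// Only the newly revealed part makes a sound; once the whole tree is
// drawn, later frames redraw it silently.
function step(frameCount) {
  const visible = instructions.slice(0, Math.min(frameCount, instructions.length));
  const sound = frameCount <= instructions.length
    ? instructions[frameCount - 1].tone
    : null;
  return { visible, sound };
}

console.log(step(1).sound); // "bang" — trunk appears
console.log(step(4).sound); // null — tree complete, no new sound
```

In a real p5 sketch, `step(frameCount)` would be called inside `draw()`, drawing every visible part each frame but only triggering audio for the newest one.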

Your animation framework concepts would determine the sound attachment points. So, if you were doing recursive descent and trying to animate a tree, you would need to know whether to play each branch followed by its leaves (bang ding ding, bang ding ding) or every branch and then every leaf (bang bang, ding ding ding ding).
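Those two playback orders correspond to depth-first vs breadth-first traversal of the tree. A small sketch on a made-up two-branch tree (the node shape and sound names are illustrative):

```javascript
// Tiny hypothetical tree: two branches ("bang"), each with two leaves ("ding").
const leaf = () => ({ sound: "ding", children: [] });
const branches = [
  { sound: "bang", children: [leaf(), leaf()] },
  { sound: "bang", children: [leaf(), leaf()] },
];

// Depth-first: each branch immediately followed by its own leaves.
function depthFirst(nodes, out = []) {
  for (const node of nodes) {
    out.push(node.sound);
    depthFirst(node.children, out);
  }
  return out;
}

// Breadth-first: every branch at one level, then every leaf.
function breadthFirst(nodes) {
  const out = [];
  const queue = [...nodes];
  while (queue.length) {
    const node = queue.shift();
    out.push(node.sound);
    queue.push(...node.children);
  }
  return out;
}

console.log(depthFirst(branches).join(" "));
// → "bang ding ding bang ding ding"
console.log(breadthFirst(branches).join(" "));
// → "bang bang ding ding ding ding"
```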

It is an interesting problem! Would love to hear more ideas about it.


Just saw this, looks interesting: