Procedural Sound

I think I will do a game with a sort of real-time sound synthesis engine: generating NumPy arrays on the fly and queueing them with pygame's channels. Maybe pyglet for the UI, if I can mix them.
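A minimal sketch of that idea, using pygame's mixer and `pygame.sndarray` — the sample rate, amplitude, and frequencies here are just illustrative:

```python
import numpy as np
import pygame

SAMPLE_RATE = 44100

def sine_buffer(freq, duration, amp=0.4):
    """Synthesize a stereo int16 buffer holding one sine tone."""
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    mono = (amp * 32767 * np.sin(2 * np.pi * freq * t)).astype(np.int16)
    return np.column_stack([mono, mono])  # duplicate the channel to stereo

try:
    # mixer settings must match the array: 16-bit signed, 2 channels
    pygame.mixer.init(frequency=SAMPLE_RATE, size=-16, channels=2)
    channel = pygame.mixer.Channel(0)
    channel.play(pygame.sndarray.make_sound(sine_buffer(440, 0.1)))
    # queue() starts the next buffer as soon as the current one ends
    channel.queue(pygame.sndarray.make_sound(sine_buffer(660, 0.1)))
except pygame.error:
    pass  # no audio device available (e.g. a headless machine)
```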


Comments

I'm not sure pyglet and pygame are a good mix, and I do think that this is easier to do in pygame. Have you looked at some of the ui libraries available in pygame?
I was hoping I could use something very OpenGL-oriented, with abstractions on top, to make it work. Also, it might not be pygame; it really depends on the ability to queue lots of small sound samples and get them played without glitches (clicks in the middle).
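For what it's worth, those clicks usually come from phase discontinuities at buffer boundaries. One fix is to carry the oscillator phase across buffers, so consecutive blocks join seamlessly. A pure-NumPy sketch (the class and names are made up for illustration):

```python
import numpy as np

SAMPLE_RATE = 44100

class SineOscillator:
    """Generates successive sample blocks with continuous phase,
    so consecutive buffers join without an audible click."""

    def __init__(self, freq, samplerate=SAMPLE_RATE):
        self.freq = freq
        self.samplerate = samplerate
        self.phase = 0.0  # carried over between calls to block()

    def block(self, nframes):
        t = np.arange(nframes)
        step = 2 * np.pi * self.freq / self.samplerate
        out = np.sin(self.phase + step * t)
        # advance the phase by exactly nframes samples, wrapped to [0, 2*pi)
        self.phase = (self.phase + step * nframes) % (2 * np.pi)
        return out
```

Two 512-frame blocks from one oscillator are then sample-for-sample identical to a single 1024-frame block, which is exactly the property that keeps the seams silent.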

I think Pygame and Pyglet can work together. Pygame will create a window with an OpenGL context that Pyglet will render to.

For sound, Pygame wraps SDL_mixer, and recent work with it (for pgzero tone generation) made me think it was pretty weak. I wrote my own PortAudio bindings with CFFI for my wedding light display, where I wanted real-time audio analysis.

I am playing with https://github.com/bastibe/SoundCard which is a multiplatform CFFI binding; I might get it to work if I can work around the blocking. I could not find CFFI in your code, but did find alsaaudio references?
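A common workaround for a blocking `play()` call is to run it in its own thread, fed from a queue, so the game loop just drops sample blocks in and never waits. A sketch against SoundCard's `default_speaker()` / `player()` API — the block size, sample rate, and queue depth are guesses, not recommendations:

```python
import queue
import threading

import numpy as np

try:
    import soundcard as sc  # pip install SoundCard
except ImportError:
    sc = None  # lets the non-audio parts run without an audio stack

SAMPLE_RATE = 48000
BLOCK = 1024

# small backlog: enough to absorb jitter, short enough to keep latency low
audio_queue = queue.Queue(maxsize=8)

def playback_worker():
    """Run SoundCard's blocking play() in a dedicated thread,
    pulling float32 blocks from audio_queue."""
    speaker = sc.default_speaker()
    with speaker.player(samplerate=SAMPLE_RATE, blocksize=BLOCK) as player:
        while True:
            block = audio_queue.get()
            if block is None:  # sentinel: shut down
                break
            player.play(block)

def start_audio():
    t = threading.Thread(target=playback_worker, daemon=True)
    t.start()
    return t
```

The game loop would then call `audio_queue.put(next_block, block=False)` each frame, dropping blocks (or reusing the last one) if the queue is full rather than stalling.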
My CFFI code is here.
I mean, that's a lot of technical implementation, but what about, like... approaches to music theory? How are you going to structure it so the combination doesn't sound like discordant ass or, worse, a Philip Glass piece?
@ikanreed lol. Well, without too much work you could probably get it to play 4'33".
@mauve thanks, I took a look. I need to understand how portable PortAudio is to use as-is on all three platforms, and see if I can understand what it does. Again, thanks a lot.


@ikanreed in my previous PyWeek entry ( https://pyweek.org/e/elspirit/ ) I did a non-realtime synth to create the notes, and varied the scale and speed depending on the game situation, mostly randomizing note selection. This time, as I would like to have a realtime synth, I expect the variations to come more from timbre, as a realtime reaction to user actions, than from pitch. But who knows!
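The randomized-note-selection part can be sketched in a few lines; the C major scale and the equal-temperament conversion below are just illustrative choices, not what the entry actually used:

```python
import random

# MIDI note numbers for one octave of C major (illustrative scale)
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]

def midi_to_freq(note):
    """Equal-temperament conversion, with A4 (MIDI 69) = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

def random_melody(scale, length, rng=random):
    """Pick `length` notes uniformly at random from the scale."""
    return [midi_to_freq(rng.choice(scale)) for _ in range(length)]
```

Varying the game situation would then swap out `C_MAJOR` for another scale, or change the note duration, before handing the frequencies to the synth.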

That's cool, so the algorithmic approach to that is just to pick a timbre (why is that pronounced "tamber"? I can't find out anywhere), and then assign the notes from a predefined musical structure?
Yes, maybe :)


My goal is to showcase the synth with a game, to make the game a sonic experience. I'll figure out how scores help me with that. I am afraid I'll spend the whole week on the synth :)



Also, I don't know anything about linguistics, but I guess this is related to your other question:

https://en.wikipedia.org/wiki/Great_Vowel_Shift

where /iː/ turns into /aɪ/, but being followed by an m, a labial consonant, could be an exception (?!?!).

And maybe the -bre to -ber ending is the same as centre to center.

But be warned, I don't know what I am talking about.

Timbre is a French word and is pronounced accordingly. "Ti" sounds like "ta" before m or n, as in gratin.

:like: