Now, a program that needs soft realtime, infinite recursion and hot code deployment should be written in Erlang. But that would be too easy. So here it is in Python, along with a simple sample: ts.py ts.txt No audio clips yet. It seems at some point I lost my sense of pitch and forgot what makes a good chord. If you want to use this yourself, you'll need to install funcparserlib and edit the path to the soundfont and the fluidsynth launch command. I swear I did not choose funcparserlib just because the homepage had a big Arch logo.
Two years ago, I was bitten by the procedural audio bug and cranked out CFA. Today I felt like doing it again. CFA was a mess because you had to build all of your samples out of sine waves and envelopes. It was also pretty slow, requiring minutes of render time to get a few seconds of sound.
Once again, I am taking the two really good parts of ContextFreeArt (the inspiration for ContextFreeAudio) for this project. First, infinite recursion is a good thing; in fact, it is the only way to get anything done. Second, non-deterministic function calls: if you have three functions with the same name, each is called one third of the time.
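A minimal sketch of the non-deterministic call idea, with hypothetical names (`define`, `call` and the `rules` table are illustrative, not the project's real API): every body registered under the same name is equally likely to be invoked.

```python
import random

# Rule table: one name can map to several bodies.
rules = {}

def define(name, body):
    """Register another definition under the same rule name."""
    rules.setdefault(name, []).append(body)

def call(name, rng=random):
    """Pick one of the name's definitions uniformly at random and run it."""
    return rng.choice(rules[name])()

# Two definitions of "MutualA": each fires half the time.
define("MutualA", lambda: 80)
define("MutualA", lambda: 85)
```

With three same-named definitions, each would be chosen a third of the time, exactly as described above.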
MIDI seemed like a much better way to go. FluidSynth is probably the easiest of all MIDI synths to control: just dump every little command into standard input! This also means you have to take care of every little thing yourself. There is nothing to help keep notes in sync, so the layer above FluidSynth needs to be fairly realtime.
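Driving FluidSynth this way can be sketched as a thin wrapper over a pipe. `noteon` and `noteoff` are real fluidsynth shell commands; the soundfont path is a placeholder you would edit, and the `Synth` class itself is illustrative, not the project's actual wrapper.

```python
import subprocess

class Synth:
    """Write fluidsynth shell commands to a stream (normally its stdin)."""

    def __init__(self, stream=None):
        if stream is None:
            # Placeholder launch command and soundfont path -- edit to taste.
            proc = subprocess.Popen(
                ["fluidsynth", "/path/to/soundfont.sf2"],
                stdin=subprocess.PIPE, text=True)
            stream = proc.stdin
        self.stream = stream

    def send(self, *words):
        self.stream.write(" ".join(str(w) for w in words) + "\n")
        self.stream.flush()

    def noteon(self, channel, key, velocity=100):
        self.send("noteon", channel, key, velocity)

    def noteoff(self, channel, key):
        self.send("noteoff", channel, key)
```

Accepting any writable stream keeps the wrapper testable without launching a synth.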
Making an initial wrapper for FluidSynth was pretty easy, but a lot of features were dropped. Now it is much more complete. Support for dynamics is pretty weak though.
On top of this went a second wrapper, the music engine. It looks at the programmed rules and tells FluidSynth what notes to play. This layer knows about things like chords, durations and channels. There are three major components within it: one for soft realtime operation, another to keep the infinite recursion in check, and a final component that processes an abstract syntax tree containing the entire composition.
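The soft realtime component can be sketched in a few lines, assuming the same "fairly realtime" goal as above: schedule each beat against an absolute deadline so sleep jitter never accumulates as tempo drift. The `run_beats` function and its parameters are illustrative, not the engine's real interface.

```python
import time

def run_beats(bpm, beats, play):
    """Call play(beat) once per beat at the given tempo."""
    period = 60.0 / bpm
    start = time.monotonic()
    for beat in range(beats):
        play(beat)
        # Deadline is computed from the start time, not the previous beat,
        # so small sleep errors do not pile up.
        deadline = start + (beat + 1) * period
        delay = deadline - time.monotonic()
        if delay > 0:  # if we overran the beat, skip sleeping and catch up
            time.sleep(delay)
```

Sleeping a fixed `period` each iteration instead would slowly fall behind, since the work between sleeps takes nonzero time.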
Finally, a third wrapper provides a crude interface. You specify a file on the command line. When the file is edited, the changes are read in, parsed and dynamically loaded into the syntax tree. The new rules take effect immediately.
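The hot-reload loop can be sketched as simple mtime polling, with `parse` standing in for the real funcparserlib-based parser; the `watch` function and its arguments are illustrative assumptions.

```python
import os
import time

def watch(path, parse, poll=0.5, stop=lambda: False):
    """Poll the rule file's mtime and re-parse whenever it changes."""
    last = None
    while not stop():
        mtime = os.stat(path).st_mtime
        if mtime != last:
            last = mtime
            with open(path) as f:
                parse(f.read())  # new rules take effect immediately
        time.sleep(poll)
```

Polling is crude but portable; an inotify-style watcher would avoid the poll interval at the cost of platform-specific code.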
Here is how the rules work:
Single
    chord 80 for 1
    next Single

Several
    chord 32 37 39 44 for 8
    next Several

Tune
    chord 80 for 2
    chord 85 for 2
    chord 87 for 2
    next Tune

Forkbomb
    chord 32 for 1
    chord 33 for 1
    next Forkbomb Forkbomb

MutualA
    chord 80 for 1
    next MutualB

MutualA
    chord 85 for 1
    next MutualB

MutualB
    chord 87 for 1
    next MutualA
These are crude examples and are missing channel set up. "Single" plays the same note every beat, looping forever. "Several" plays the same chord every eight beats, looping forever. "Tune" plays a pattern of three notes over and over again. "Forkbomb" will try to destroy your computer, while playing the Jaws theme. After a few forks, the engine catches on and stops it from doing damage. The "Mutuals" are mutually recursive and play a slightly random pattern. Primitive, but this is just the proof of concept.
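Keeping a Forkbomb in check can be sketched as a cap on live branches. The `step`/`expand` shape and the limit of 64 are assumptions for illustration; the real engine's guard may work differently.

```python
MAX_BRANCHES = 64  # assumed cap; the real engine's limit may differ

def step(branches, expand):
    """Advance every live branch one beat. expand(branch) returns that
    branch's successors; a rule like Forkbomb returns two, doubling the
    population, so anything past the cap is simply dropped."""
    successors = []
    for branch in branches:
        successors.extend(expand(branch))
    return successors[:MAX_BRANCHES]
```

With doubling, the population hits the cap within a handful of beats and then stays flat instead of growing without bound.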
Originally, I had spec'ed out a huge and complex language to replace this. It was fun, but led to death by architecture astronaut. Now I am building it incrementally, only adding a feature if it reduces the size of a composition by half. Here are some of the current meta operations:
channel 2 inst 0-48 volume 50

start
    chord 0 for 4 on 2
    next a

a*3
    chord 40 for 1 on 2
    next a

a
    chord 42 for 1 on 2
    next a
You can set instrument bank and volume for each channel. A chord of zero counts as a rest. The "on 2" means to play the chord on channel 2. To bias probability towards a particular rule, append "*N" to its name, making that rule N times more probable. Current pain points: poor dynamic control, no way to manipulate lists of chords, no tempo changes, and it falls apart above 300 BPM. Those last two will probably require a huge rewrite of the engine. I am also considering replacing floating point with fractions.
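The "*N" bias boils down to weighted choice. A minimal sketch, with `pick` and the `(name, body)` pair format as assumptions rather than the real internals: a rule named "a*3" gets weight 3, a plain "a" gets weight 1.

```python
import random
import re

def pick(definitions, rng=random):
    """definitions: list of (name, body) pairs; a name ending in '*N'
    makes that definition N times more probable."""
    weights, bodies = [], []
    for name, body in definitions:
        m = re.fullmatch(r"(.+?)\*(\d+)", name)
        weights.append(int(m.group(2)) if m else 1)
        bodies.append(body)
    return rng.choices(bodies, weights=weights)[0]
```

So with the "a*3"/"a" pair from the example above, the weighted rule fires roughly three quarters of the time.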