Studying the effects of waveforms in a sonic environment and transmitting those waveforms to audio cortices is a mission of iBoD. To that end, we have partnered with three synthesizers to explore the ways of waveforms. First came the Behringer Neutron, which is a puzzle and a playful soundshaper. My first challenge was (and continues to be) getting sound out of it (the VCA Bias and Overdrive Level knobs need to be wide open). Nuet has lots of internal routings and movable parameters, including blending between oscillator waveform shapes, multiple LFOs, and VCFs. Next came the Moog Subharmonicon, which is extremely fun, responsive, and more intuitive. These two are “tied” together by the Make Noise O-Control, which serves as a sequencer for Nuet and a clock for Moogie. So that is my crew for this event: Nuet, Moogie, OC and me.
Using internal and external routings, iBoD explores the shifts in timbre and rhythm presented by the synthesizers. These instruments make sound from oscillating frequencies shaped by waveforms and envelopes, which are the basic building blocks for timbre (and EVERYTHING, but THAT is another story, which can be read in the links below). All of the action is triggered, directed, and massaged by control voltages. Sparks of electricity drive the whole show, which makes for a lot of unpredictability and malleability.
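For readers who want to see the idea in code, here is a minimal Python sketch of an oscillator shaped by an envelope. The sample rate, frequency, and envelope times are arbitrary choices for illustration, not anything the synths themselves run:

```python
import math

def oscillate(freq, seconds, sample_rate=8000):
    """A bare sine oscillator: the raw vibration before any shaping."""
    n = int(seconds * sample_rate)
    return [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]

def envelope(samples, attack=0.25, release=0.25):
    """A simple attack/release envelope: ramp the level up, hold, ramp down."""
    n = len(samples)
    a, r = int(n * attack), int(n * release)
    shaped = []
    for i, s in enumerate(samples):
        if i < a:
            gain = i / a            # attack: fade in
        elif i >= n - r:
            gain = (n - i) / r      # release: fade out
        else:
            gain = 1.0              # sustain at full level
        shaped.append(s * gain)
    return shaped

# half a second of A3, shaped so it swells in and fades out
tone = envelope(oscillate(220.0, 0.5))
```

In hardware, the envelope is itself just a control voltage sweeping the VCA level, which is why opening the VCA Bias knob matters: with the gain stuck at zero, the oscillator vibrates silently.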
A friend asked me “What is timbre?” Yes, we are all familiar with rhythm, but timbre is a kind of behind-the-scenes aspect of musical sound that isn’t as easily apprehended, precisely because it is so essential. Timbre is the “DNA” of sound, presented in harmonic code. Our brains decipher these codes so we can discern a foghorn from a racing engine from a baby crying. Please read these previous posts where I offer my understanding of timbre:
So that is what I will be playing with on Saturday. My favorite part of symphony concerts is the settling in and tuning the orchestra does before the program begins, so Timebral Artifacts will begin with some tuning and retuning of parts, followed by propagation and meanderings until an undercurrent of structure appears. When this happens, I will play Native American flute in call-and-response with the synths.
iBoD will play from 2-2:45 pm. See map of Sculpture Garden for exact location (2a)
Eight months ago, the Elektron Model:Samples became part of my sounding board. (Here is a blog post about some preliminary explorations: https://wp.me/p5yJTY-Ch) When the MS started having key-sticking issues, and the Global Reset rendered repairs inaccessible, the sounding board landed in limbo land. During this dreamy time, I realized/remembered that I want to play Control Voltages. The MS is an audio/MIDI device that can sequence synths that accept digital signals. CVs are another realm entirely! Although many synths can be stimulated by both types of signals, the depth and breadth of CVs is unparalleled to my ear.
Thanks to the flexibility of Sweetwater Music and the take-charge attitude of my Sweetwater Sales Agent, Paul Allen, the MS (and a little cash) was exchanged for a Make Noise O-Control!! FINALLY, I have a voltage-controlled modular setup as I pair this sweet thing with the Behringer Neutron!! Here you have it:
Today is the fifth day of play with my mini-mod, and I am happily patching, playing, studying manuals and/or Patch and Tweak, and deeply listening. Here are some sound samples from my first week’s play.
Our Waking Lives sometimes flow and sometimes glitch, with the main point being “don’t mind whatever happens”. My personal practice is to turn the “oh, no!” into an “aha, what’s this now?” Easy to do sometimes, other times not so much. Immersed in feelings of failure, I sometimes need a few weeks to make that turn-around.
And so it goes in the world of Jude’s Soundlings. Everything is in transition; some stuff is shiny and new, other stuff is old and (semi)reliable! New, like the Behringer Neutron with the Make Noise O-Control as sequencer/modulator, routed through good ole Ableton as harmonics flinger. I am learning so much: I made a filter sweep and put together some kind of Heinz 57/Swiss Army Knife FX rack. Got both FX racks MIDI-mapped to my Novation Launch Control. This is soooo cool! The harmonics shatter, shimmer, echo, melt, propagate and obscure each other.
Sometimes the harmonics from source audio get caught within the Effects Channels. The source stops, but the soundscape lingers on. I was taken aback at first when this happened. Stopping the source audio track did not stop the sound!? Sonic material continued pulsating in the active FX tracks, so I rerouted other FX Channels to pick up audio from the channel that was pulsating. This sound went on for one to two minutes while I passed it around through different AAC tracks. Several times I couldn’t figure out how to stop it and had to turn it all down and close the project. I enjoy this mystery and remain curious: I recently read something about MIDI feedback loops! Perhaps that was where we were caught! And they can definitely be played!!
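That lingering-sound behavior can be sketched in a few lines of Python. This is only a toy model of audio routed back into its own input; the delay length and gain are made-up numbers, not Ableton’s actual signal path:

```python
def feedback_delay(source, delay=50, gain=0.9, total=400):
    """Route the output back into the input: y[n] = x[n] + gain * y[n - delay].
    With gain near 1, sound keeps circulating long after the source goes silent."""
    out = []
    for n in range(total):
        x = source[n] if n < len(source) else 0.0        # source has stopped here
        fb = out[n - delay] if n >= delay else 0.0       # what comes back around
        out.append(x + gain * fb)
    return out

burst = [1.0] * 10           # a short source burst, then silence
tail = feedback_delay(burst)
# hundreds of samples after the source stops, energy is still circulating
```

With the gain below 1, the loop slowly decays on its own; at 1 or above, it sustains or grows until someone pulls the fader down, which matches the experience of having to turn everything down to make it stop.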
After 8 years of creating electronic soundscapes in Ableton Live using electronic instruments, I have learned a lot about sculpting sound! I enjoy the process of creating the movement of sounds around and through space. Ableton is a wonderful mixing environment. Their plugins are malleable enough without getting into writing programs. Now is the time for an expansion! I am hearing a lush carpet of sound in highly structured harmonic streams.
Currently, the final analysis of the data for the Sourdough Project is poised to happen. Up to now, my approach to data sonification has involved pitch class to designate the presence of something and amplitude to demonstrate the magnitude of that something. Pretty basic, but it worked for the Baby Lemur Biome Song. (https://wp.me/p5yJTY-tD) The Sourdough data is more demanding, and may call for conceptual frameworks based on the data, in contrast to using numerical data to specify the sound directly. Here is the link to work I did with some of the Sourdough Project data using my pitch/amplitude method. (https://wp.me/p5yJTY-yN) In this example, the yeast growth can be heard as a sequence of steps illustrating rapid or gradual growth during each 12-hour period. These two sonifications have captured presence, magnitude, and growth within time frames. As I study the Sourdough data, these three methods for sonic capture need to be brought together as interactions that change/modulate/meld over time to create a Sourdough ecology, which begins with water and flour and ends in the smell/taste/feel of the bread itself.
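As a rough illustration of the pitch/amplitude method (not the actual Sourdough code), here is a hypothetical Python sketch. The organism names, the pitch-class mapping, and the readings are all invented for the example:

```python
# Each reading becomes a note event: the pitch class marks WHAT is present,
# and the amplitude tracks HOW MUCH of it there is.
PITCH_CLASSES = {"yeast": 0, "lactobacillus": 7}  # assumed mapping: C for yeast, G for bacteria

def sonify(organism, readings, base_midi=60):
    """Map one organism's readings to (midi_note, amplitude) event pairs."""
    peak = max(readings)
    note = base_midi + PITCH_CLASSES[organism]
    # amplitude is each reading scaled against the series peak (0.0-1.0)
    return [(note, round(r / peak, 2)) for r in readings]

# a hypothetical yeast growth curve, one reading per 12-hour period
events = sonify("yeast", [2, 5, 11, 24, 30])
```

Presence is the fixed pitch class, magnitude is the amplitude, and growth emerges as the amplitude contour over successive time frames; weaving the three together into interacting, modulating streams is the part still to be worked out.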
Feeling a bit stuck here at the moment. Must be time to play!!