Thanks to Nancy Lowe from AS IS Center near Penland School, I am “new best friends” with Mark Boyd. Mark is a sound artist who records and amplifies the “voices” of plants, ants, and flowing water: the realm of the tiny vibratory world. Talk about deep listening! Mark has been running electrode sensors on plants into a Volca synth to listen to the electrical life force within the plant! He sent me recordings of his “biologues” with Bleeding Heart and Fern, and with Dogwood. The playback presents us with a lot of fast-paced random sounds. Mark is interested in transducing this data into something people might listen to. Here is an excerpt from Mark’s Bleeding Heart and Fern biologues:
Ableton Live contains numerous tools for transposing/transducing/converting sonic data. An audio clip, such as the one you just heard, can be converted into MIDI clips: one that renders melody information, and another that renders harmonic information. So now we have the audio information rendered as two packets of MIDI information. Within Ableton, MIDI clips can be collapsed or stretched across a timeline and still maintain the integrity of their rhythmic intervals. The MIDI data can be assigned to a “voice” that feels representative of the sound artist’s impressions of the particular plant that is speaking. And the MIDI data can be fed back into a synth such as the Volca to complete the circle.
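Ableton’s melody and harmony converters are proprietary, but the general idea behind the melody pass can be sketched outside the program. Here is a minimal Python sketch using librosa’s pYIN pitch tracker and pretty_midi as stand-ins; the file names are placeholders, and this is an analogue of the idea, not a recreation of Ableton’s algorithm:

```python
# A rough sketch of audio-to-MIDI melody extraction, assuming a roughly
# monophonic source. Ableton's converter is proprietary; librosa's pYIN
# tracker is used here as a stand-in. File names are hypothetical.
import librosa
import numpy as np
import pretty_midi

y, sr = librosa.load("biologue_excerpt.wav", sr=None)

# Track the fundamental frequency frame by frame.
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C7"), sr=sr)
times = librosa.times_like(f0, sr=sr)

pm = pretty_midi.PrettyMIDI()
inst = pretty_midi.Instrument(program=0)  # the "voice" gets assigned later

# Group consecutive voiced frames with the same rounded pitch into notes.
current_pitch, note_start = None, None
for t, hz, v in zip(times, f0, voiced):
    pitch = int(round(librosa.hz_to_midi(hz))) if v and not np.isnan(hz) else None
    if pitch != current_pitch:
        if current_pitch is not None:
            inst.notes.append(pretty_midi.Note(100, current_pitch, note_start, t))
        current_pitch, note_start = pitch, t
if current_pitch is not None:
    inst.notes.append(pretty_midi.Note(100, current_pitch, note_start, times[-1]))

pm.instruments.append(inst)
pm.write("biologue_melody.mid")
```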
The scaling of the time frames of the MIDI clips is exactly what is needed to help us “hear” the biofeedback from the plants. Doubling the length of the MIDI clip slows the overall “tempo” and helps us listen into a kind of river of sound emitted by the plants. Slowing down allows us to tune into a rhythmic cohesiveness that is obscured by the frantic pace of the plant’s raw electrical impulses. We inject spaciousness into the mix in just the right amount, and it sounds like something is being communicated.
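To make the scaling concrete: stretching a MIDI clip amounts to multiplying every delta time by a constant, which changes the overall length while preserving every rhythmic ratio. A small sketch with the mido library (the file name and stretch factor are assumptions):

```python
# A minimal sketch of the time-scaling idea: doubling every delta time
# halves the tempo while keeping all rhythmic relationships intact.
import mido

STRETCH = 2.0  # 2.0 doubles the clip length; placeholder value

mid = mido.MidiFile("biologue_melody.mid")
stretched = mido.MidiFile(ticks_per_beat=mid.ticks_per_beat)

for track in mid.tracks:
    new_track = mido.MidiTrack()
    for msg in track:
        # msg.time is the delta time in ticks; scaling it uniformly
        # preserves every interval in the same proportion.
        new_track.append(msg.copy(time=int(msg.time * STRETCH)))
    stretched.tracks.append(new_track)

stretched.save("biologue_melody_x2.mid")
```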
After finishing the new rendering of the data, I sent this to Mark:
Excerpt Biologue BHnF Remix dejacusse
He was ecstatic, over the top about all the possibilities of Ableton. He downloaded the Lite version and took off with it. We had some great email exchanges and he sent me samplings of his tests and experiments with the flora around his mountain home. Here is a beautiful example with a plant in Mark’s home:
I look forward to Mark putting together an orchestra of local flora in concert in the near future. In the meantime, I am enjoying dialoguing with another human being who is listening as deeply as I am.
Love this phase in your work totally. I’m intensely interested in plant voices. Does the initial recording represent the raw data culled from his electrode sensors, or did he need to modify it in an audio program in order to render it loud enough for us to hear? I’m curious about our audible (or audio?) encounter of a plant’s energy. Apparently plants emit energy in changing wavelengths, all the time. They may have sequences of wavelength changes that are “familiar,” because they are either repeated or deemed characteristic (by what criteria?). When you input the “raw” data into your program, my understanding of “representation” slides away. Are two MIDI tracks enough? Is this all that is possible? What data is “lost,” and how much is “added,” relative to the original sensory recording? I’m enamored of the original, as it conjures Pollock, Mondrian (boogie-woogie), de Kooning, Mies van der Rohe.
Wow, Leah, you have opened up a whole new analysis idea: looking for identifiable repeated patterns in these recordings. Mark is using a Volca synthesizer to receive the electrical signals from the plants and give them a “voice,” a means of being heard. This is a perfect matchup, as synths are “played” via CVs (control voltages). Converting the audio to MIDI allows it to be heard as a packet of extrapolated information. Think of it as an audio microscope: just as a microscope gives us a new way of seeing, the MIDI gives us a new way of hearing into the data. And the wild thing is that many synths can be played via MIDI as well as CV, so this extrapolated MIDI clip can be run through a synth to create a new audio clip that could then be converted to MIDI, on and on.
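To hint at how that circle closes in practice: a MIDI clip can be streamed from a computer out to any MIDI-capable synth, a Volca included. A hedged sketch with mido; the port name here is hypothetical, so list your own with mido.get_output_names():

```python
# A sketch of sending a stretched MIDI clip to a hardware synth
# (e.g. a Volca over a USB-MIDI interface). Port name is an assumption.
import mido

mid = mido.MidiFile("biologue_melody_x2.mid")

with mido.open_output("Volca MIDI Out") as port:  # hypothetical port name
    # MidiFile.play() yields messages in real time, pausing between them.
    for msg in mid.play():
        port.send(msg)
```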
I find the original audio very interesting to listen to as well. The visual artists you mention worked within a frame, which tells the eye where to look to see their creations. Audio frames need to be created by the phrasing of melodic lines, changes in tempo, silence, etc. These frames emerge when we time-stretch the MIDI clips. The stretching keeps everything to scale, so everything stays in the same relation, just with more space in which the natural phrasings can emerge. I haven’t played around with this yet, but I feel there is an “ideal,” more tuned-in length at which each plant’s biologue can be heard. Now you have suggested that there may be identifiable repeated patterns in there too! Lots to play with here! I will send you Mark’s tests. I just got one yesterday of an Oak Tree!!
Oh good. I wonder if the oak tree, or that lovely plant he recorded, emits different impulses on different days, or when parched, or overwatered, or in too much sun or shade; in different conditions or circumstances, e.g. someone nearby who is singing, or more complex sound: noise, buzz, harmonics, melodics, percussion.