The Art of a Scientist is an annual exhibit curated by Duke University graduate students who are interested in promoting dialogue between art and science. STEM graduate students submit images from their scientific research to the AoS Committee. I answered a call for artists to work with the project, and was paired with a graduate medical student who submitted a video.
The video shows the vascular system of a mouse hind leg. It begins with a view of the murine vascular tree branching out in red. We see the side of the leg rotating; then it tilts, rotates around, and disappears. An angled plane (resembling a microscope slide) moves from the bottom left of the screen to the top right, revealing the leg with the soft tissue enclosing the vascular system. The image rotates, then the layers melt away as the leg disappears. Then the image reverses and the layers swirl back together. The image stops before the layers finish, rotates a half turn, and is complete.
The video is a collaboration between Hasan Abbas, an MD/PhD student, and the Shared Materials Instrumentation Facilities at Duke University. The mouse leg visualization can be used to model healthy and diseased cardiovascular systems. When I asked Hasan what he heard while creating the video, he said, “scholarship”, “elegance” and “discovery”. I love this as a jumping off point for a soundscape – “elegant discovery”.
And then there is the mouse! My first thought while curating samples for this project was to “give voice to the mouse”. Modern medical research is built on the backs of mice, so it seems right to honor and acknowledge their participation. I found hours of recordings of mice squeaks and scratches. Another element I wanted to capture was the branching of the vascular structure at the beginning of the video. So the creak and cracking of a large body of ice was layered into the sound bed. These sounds were synthesized into a liquidy flowing underbed (suggestive of blood flow) over which orchestral voices swell in wonder. This piece is called O Men and Mice.
Hasan Abbas and I exchanged several emails. I sent him my first soundscape and he gave wonderfully useful feedback. In all of his correspondence, Hasan spoke with keen interest about the technology used to create the video. The method is called diceCT and the instrument is a micro-CT scanner. Hasan described the technology as “a thousand tiny X-ray” images stitched together to create the detailed 3D image of the mouse hind leg. This made me wonder how 1000 Tiny X-rays might sound. So a second soundscape was born.
For this piece, while the primary idea is the sound of 1000 Tiny X-rays, I also wanted to convey a sense of excitement and pride in an amazing technological accomplishment. diceCT is a new way of seeing living matter that could reveal hidden organic structures or systems. Drum rolls, claps and cymbal crashes are iconic sounds of triumph, so these were used as the sound source. In the video, when the tilted slide-like plane moves from bottom left to top right, the full leg emerges, and there is a feeling of a “great reveal”. This feeling is emphasized by a drum roll and cymbal splash into a moment of silence in the soundscape. For the sound of thousands of X-ray images being taken, granular synthesis was applied to the drum sounds as they built up in dense layers. Interestingly, granular processing does a similar thing to audio as the diceCT method does to matter. Hasan provided me with a video that was slower in pace for this piece. The layers of the whole leg system as they swirl away and return are so beautiful and perfectly fitted together, I wanted it to take more time.
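Granular processing, in outline, chops audio into tiny windowed grains and re-scatters them in time, much as diceCT slices matter into a thousand tiny images. Here is a minimal Python/NumPy sketch of the idea; the grain size, density and jitter values are illustrative, not the settings used in the piece:

```python
import numpy as np

def granulate(audio, sr=44100, grain_ms=50, density=4, jitter_ms=30, seed=0):
    """Chop `audio` into short windowed grains and scatter them in time,
    layering `density` grains at each (overlapping) grain position."""
    rng = np.random.default_rng(seed)
    grain = int(sr * grain_ms / 1000)
    jitter = int(sr * jitter_ms / 1000)
    out = np.zeros(len(audio) + jitter, dtype=float)
    window = np.hanning(grain)                       # smooth the grain edges
    for start in range(0, len(audio) - grain, grain // 2):
        for _ in range(density):
            offset = start + rng.integers(-jitter, jitter + 1)
            offset = max(0, min(offset, len(out) - grain))
            out[offset:offset + grain] += audio[start:start + grain] * window
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out           # normalize to avoid clipping

# Example: granulate one second of a drum-like decaying 200 Hz tone
t = np.linspace(0, 1, 44100, endpoint=False)
drum = np.sin(2 * np.pi * 200 * t) * np.exp(-6 * t)
cloud = granulate(drum)
```

Stacking many such grain clouds, each denser than the last, is one way to get the building-up effect described above.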
The Art of a Scientist will open Saturday April 6 at the Golden Belt Grand Gallery (800 Taylor St. Durham) and will run through June 23, 2019.
My quest to synthesonize Ableton Live has taken an exciting new turn. Last Sunday, we discovered that by miking The Bells at the Central Park School Soundgarden, I can run that sound through Ableton and into the various synth modules and FX racks I am building. What happens is that the Abeju Synth Modules and FX Racks capture most of the harmonics that arise from Eleanor’s bell playing. The harmonics can be shaped by envelopes and attenuation and, of course, granular synthesis. My goal is to gradually shape the bell harmonics into a watery stream sound. This will be part of the soundscape for The Place ReSounds of Water (TPRSW) on April 14th at 4pm for SITES Season 2018-19.
When iBoD first started playing with The Bells, I recorded and analyzed their harmonic content. These bells are former compressed air tanks with the bottoms cut off, so the metal is not pure, it is some kind of alloy. This translates to lots of harmonic AND enharmonic content! A pure metal would render more pure harmonics. These pure harmonics are pretty, often beautiful, but my ear grows tired of the stasis of it all. The idea of purity in all of its forms is an illusion that leads to much misunderstanding and anguish in the world. Think about what striving for purity has given us: genocides, fascism, chronic autoimmune diseases, disconnection from and attempts to conquer nature, diminished empathy, and on and on. It is my prayer that riding and faithfully playing All the en/harmonic waveforms will encourage evolutionary growth. That is what I am going for!
TPRSW is my first attempt to sync up with the National Water Dance. My timing is off, as this is not the year for the National Water Dance; however, I am hoping this will kick off some interest for 2020. The idea for TPRSW is to give prolonged loving attention to water in the form of sound, light and the liquid itself. The soundscape will consist of Eleanor Mills playing The Bells, with dejacusse aka Jude Casseday capturing and playing the en/harmonic waves from The Bells and morphing them into a watery-feeling sound bed. Then Susanne Romey will play Native American flute over that for a while, then we start the wave again. The movers will pour water from vessel to vessel. An altar of flowers may be built. The whole thing is a mystery.
Our location at the Soundgarden at Central Park School gets full afternoon sun, so the visuals might include sparkles and shimmers of water. We could be lit up! If it is overcast, the air will be moist and the sounds of water will carry more clearly. If it threatens rain on Sunday, we will do it on Saturday instead! Or, perhaps, we will figure something else out and perform as it rains.
My first pass at working with Ableton as a modular synth was to explore “dummy” clips. These are tracks that contain an audio clip with the actual content muted. The clip is then transformed into a container for audio effects automation that can be applied to any audio routed through whatever audio track the clip is in. The clip becomes an Envelope template that can be pulled into other projects.
Ableton Live allows the sound designer to see and manipulate many parameters within each individual clip. This is most amazing, and something I have not paid enough attention to. Here is a picture of the clip console from the Ableton Manual:
For Audio Animation Clips, the focus is on the Sample box and the Envelopes box. The slider in the Sample box is used to mute the audio by simply turning it completely down. Now the clip is ready to be shaped into an envelope. (That is how Audio Animation Clips will function in my Abeju Synth Station – as Envelope Generators (AAC/EG).) The two drop-down windows in the Envelopes box give the sound designer access to all the effects and mix processing available on the track, for this clip only. So whatever I draw onto the audio clip itself only happens when that particular clip is playing. In the case of an AAC/EG, the clip will play the animation of the envelope, and the envelope will be applied to any audio routed through the track.
In order to create an AAC/EG, we need an audio track and an audio clip. The length of the clip doesn’t seem to matter because it loops. (Currently, I am sticking with 8, 12 or 24 bars, though I hear potential for longer clips with more complex movement in the automation.) The steps:

1. Use the volume slider to mute the audio within the clip. (Addendum: Experience has taught me that the original audio in the clip should be low volume. I am going to start experimenting with creating audio specifically for these clips.)
2. Assemble a rack of the audio effects you want to use to shape sound and insert them on the track.
3. In the original audio clip, use the drop-down box in the clip to access each effect, and turn them all to Device Off by moving the red line in the clip box to the off position. Now you have an open audio sleeve with no effects enabled. When source audio is routed through this clip, the audio will sound as is, with no effects present.
4. Duplicate the clip. This will be the first AAC/EG. Go to the drop-down box and turn on the devices you want to use. The top box takes you to the main effect, and the second brings up every adjustable parameter within that effect.
5. When you identify a parameter you want to change, go to the red line in the audio clip window (on the right above) and draw in the animation you want.
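Stripped of the Ableton GUI, a dummy clip is just an envelope template: a looping automation curve stored apart from any audio, applied to whatever signal is routed through the track. Here is a minimal Python sketch of that separation; the class, breakpoint values and bar lengths are mine, purely illustrative, not an Ableton API:

```python
import numpy as np

class EnvelopeTemplate:
    """A 'dummy clip': no audio of its own, just automation breakpoints
    that loop over any signal routed through it."""
    def __init__(self, breakpoints, length_s):
        # breakpoints: list of (time_in_seconds, value) pairs, times increasing
        self.times, self.values = zip(*breakpoints)
        self.length_s = length_s

    def apply(self, audio, sr=44100):
        t = (np.arange(len(audio)) / sr) % self.length_s   # loop the clip
        gain = np.interp(t, self.times, self.values)       # drawn automation curve
        return audio * gain

# A 2-bar (4 s at 120 BPM) swell-and-cut template, reusable on any source
swell = EnvelopeTemplate([(0.0, 0.0), (3.0, 1.0), (3.1, 0.0), (4.0, 0.0)],
                         length_s=4.0)
tone = np.sin(2 * np.pi * 440 * np.arange(44100 * 8) / 44100)
shaped = swell.apply(tone)   # the template loops twice over 8 s of audio
```

Here the envelope drives gain, but the same structure could drive a filter cutoff or delay feedback, which is what the Envelopes box does per parameter.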
Once I finally understood all this, I began to design AAC/EG Modules. Each module is a track with 2 to 4 different envelope template clips. Each instance of a clip can be shaped by adjusting the automation (which I am calling animation) of the effects that are on the track. One technique I like is layering three effects over three clips: the track contains three effects (a delay, an EQ and a reverb), so the first clip has one effect on, the second has two, and the third has three. Initially, I tried linking several modules (tracks containing the clips) together but found this too cumbersome. The option to layer modules lives in the routing of each particular set.
To use the modules, insert another audio track that will hold the samples you are using. I label this track Source. Now there are several routing options, but the main idea is that the AAC/EG modules need sound coursing through them in order to perform. On the In/Out tab on the track, audio can be routed from someplace and routed to some place. The best mixing option so far is to have the AAC/EG modules receive audio from source and then everybody sends audio to the Master Volume. This allows the faders to act as dry/wet attenuators. The Source can be heard as is or through the AAC/EG Modules and Clip Templates.
So far, I have created four modules: Combed, Mangled, Stutter Delay and Harmonics Flinger. [Post Publication Addendum: More modules have been added to the above and all are being fine-tuned. Our April 14th SITES event was rained out (rescheduled for May 19th), so the debut of the Abeju Synth Station will be tonight as part of the Electronic Appetizers program at Arcana. I am playing a piece called Perimeter Centre, which will feature the Ripplemaker semi-modular iOS synth as my sound source for the Abeju Synth Station. Come listen!]
Since releasing Audiorigami (Meditations on the Fold), my sonsense as to how to explore the Fold has shifted. This shift is in sync with Glenna Batson’s return to Durham and the start of a monthly Human Origami Jam. Glenna is interested in exploring folds through a variety of deep somatic frameworks. She narrates the biomolecular potentials that the body travails from utero through the many modulating intersections of growth. My own sonsense of the Fold is opening to the quantum aspects of sound and further harmonic interplay. I sense that these sonic realms might allow access to some basic templates of life. Perhaps sound, in the form of patterned frequencies, guides life into being. Perhaps harmonic frequencies are part of a template for the growth and movement of life forms through space and time. That is what I am playing with here.
The focus of Audiorigami will now be to explore the changing shapes of sounds themselves. Audiorigami will propagate, excavate, and modulate the folds that emerge from and disappear into the waveforms that are the vehicle of sound. Modular/Granular Synthesis and Frequency Modulation are the methods for engaging with sound media. I plan to more carefully curate the sound sources I use and to do more sampling from my own recorded sounds.
Here are some excerpts from the Human Origami Jam which happened last month at ADF Studios. Glenna leads an exploration of lines and trajectories, corners and angles. The soundscape is my first rendering with some of the Abeju Synth Station modules I created from “dummy clips” in Ableton, coupled with the TAL-Noisemaker VST synth plugin and Ripplemaker on the iPad.
Glenna Batson is back in town and we have started a monthly Human Origami Jam at ADF Studios on Broad Street in Durham. Join us this Friday, February 15th, for an exploration guided by the foldings of cells – the building blocks of nature. While Glenna guides you from a macroscopic to a microscopic sense of the cell, dejacusse will sculpt sonic forms in the atmosphere of the room. The soundscape will swathe you in harmonics, whispers, and bounce, then return to the ambient sound of the room. Sound as water is my theme, and since cells are mostly made of water…
Since excitedly sharing the results of a sonic analysis of Lemur Gut Microbiomes, I have been working up a soundscape based on the quick sketch included in the first Sonifications and Life Forms post. (You can hear both below.) In order to get feedback on the work, I sent the first blog post to Mark Ballora, with whom I had taken a data sonification workshop in May. His response helped me realize the need to clarify my sonification process. So here is a description of the project:
The purpose of the sonification is to illustrate the changes in baby lemur microbiomes from birth to weaning. Microbiome data was captured through fecal samples taken at birth, through nursing, introduction to solid foods, regular solid foods, and two times while the babies were weaning. The sonification will illustrate changes in the type and amount of bacterial phyla present at each of the six sampling stages for all three lemur babies. In addition, the mother’s microbiome was sampled at the time she gave birth, so her profile, which was assumed not to change, provides a baseline adult profile with which to compare the babies’ changes.
There were 255 strains of bacteria collected over the course of the study. These fell into 95 classes and 35 phyla. I focused on the phyla, as my plan was to assign a note value to each bacterial data point, so I needed a smaller data set. The data set was narrowed further (and made more interesting) by focusing on a family: a mother, Pryxis, and her triplets, Carne, Puck and Titan. This group allows us not only to hear the variety of changes in the babies’ microbiomes, but to compare the changes as well.
The original data set included 9 lemur babies and 7 mothers. So the first step was to go through the phylum data sheets and pull out the profiles for Pryxis, Carne, Puck and Titan. A phylum profile is the type and amount of each phylum present at each data collection point; the profile changes from one collection point to the next. The microbiomes of these four lemurs housed 15 phyla (at a density of >.001) out of the 35 found in the entire study group.
The next step was to assign a note value to each phylum. Since there are only 12 distinct notes in the chromatic scale, some phyla would need to share a note, albeit in different octaves. Same note, different octave lends a tonal consonance to the profiles. So what might this consonance represent? There were 5 phyla that had the greatest density and presence in all the samples, so I assigned those to the note G from octave 1 to 5. The remaining 10 phyla were assigned note values based on their presence throughout the profiles, and on their consonance/dissonance with the tonal center G.
In order to capture the density of each phylum, a MIDI velocity range was aligned with the decimal percentage of the phylum in each profile. MIDI velocity settings determine the force with which a note is played; thus the velocity ranges render a clear sense of presence or loudness for each note. The decimal percentages ran from .001 to 1.0 and the MIDI velocity range runs from 1–127. Here is a chart of how these ranges overlap:
So, for example, Proteobacteria present at .25473 would be represented by the note G at octave 3 set at velocity 40. The largest sample in all the data points captured for this project was around .9 and the smallest was .001 (this was a cutoff point, as there were bacterial phyla present down to the .0001 range). Here is the chart for Titan showing note assignment and density values through each sample stage:
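The density-to-velocity mapping can be sketched as a banded lookup table. The band boundaries below are hypothetical stand-ins for the chart (which is an image), chosen so the 0.25473 example lands on velocity 40:

```python
# Hypothetical density -> velocity bands; each (upper_bound, velocity) pair
# assigns one MIDI velocity to a band of decimal proportions.
BANDS = [(0.001, 1), (0.01, 10), (0.05, 20), (0.1, 30),
         (0.3, 40), (0.5, 70), (0.7, 100), (1.0, 127)]

def density_to_velocity(density):
    """Map a phylum's decimal proportion (0.001-1.0) to MIDI velocity (1-127)."""
    for upper, velocity in BANDS:
        if density <= upper:
            return velocity
    return 127

print(density_to_velocity(0.25473))  # -> 40 with these illustrative bands
```

Whatever the exact bands, the point of the banded design is that barely-present phyla stay near-silent while dominant ones ring out.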
My sounding board for this data comparison is Ableton Live, a digital audio workstation (DAW). The individual lemurs are represented by a “voice”/MIDI instrument in Ableton. Puck, Titan and Carne are bell-like voices that blend together, while Pryxis, the mother, is a warm, pervasive woodwind. She envelops and contains the changes in the babies’ phylum profiles.
All lemurs had a Phylum Profile Chart like the one above. In the DAW, the instrument tracks for Titan, Puck and Carne each contain a MIDI clip of the notes for the phylum colonies present at each stage of dietary change, which were then laid out as a “scene” in Ableton. As an example, Titan’s Phylum Profile at birth was
Proteobacteria (Note Value=G3) set at 101/127 in intensity
Euryarchaeota (Note Value=A2) set at 34/127
Firmicutes (Note Value=G2) set at 11/127
Cyanobacteria (Note Value=A#4) set at 1/127
Other Bacteria (Note Value=B3) set at 1/127
Spirochaetes (Note Value=G5) set at 1/127
Titan’s Birth Phylum Profile is the multi-octave chord G–A–A#–B. Three of the phyla were barely present, so those tones are almost inaudible in the chord; however, the two Gs and the A ring out. The total number of phyla present in each dietary stage varied from 3 to 14, so the multi-octave chord becomes more dense and dissonant when the phyla are more varied. Here is a look at the tracks (individual lemur voices) and the “scenes” (the phylum profiles from all 3 babies at each stage):
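Titan’s birth profile translates directly into one MIDI chord. Here is a minimal sketch, assuming Ableton’s convention that middle C is C3 (MIDI note 60); the note names and velocities come from the profile listed above:

```python
# Map note names to MIDI numbers, assuming middle C = C3 = MIDI note 60
# (Ableton's default octave convention).
NOTE_OFFSETS = {'C': 0, 'C#': 1, 'D': 2, 'D#': 3, 'E': 4, 'F': 5,
                'F#': 6, 'G': 7, 'G#': 8, 'A': 9, 'A#': 10, 'B': 11}

def to_midi(name):
    pitch, octave = name[:-1], int(name[-1])
    return NOTE_OFFSETS[pitch] + (octave + 2) * 12   # so C3 -> 60

# Titan's birth profile: the six phyla as (note, velocity) pairs
profile = [('G3', 101), ('A2', 34), ('G2', 11),
           ('A#4', 1), ('B3', 1), ('G5', 1)]

chord = [(to_midi(note), vel) for note, vel in profile]
```

Each “scene” in the set is just one such chord (or riff) per baby, played simultaneously.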
The first sketch was just the mother’s phylum profile droning under the three babies’ profiles expressed as a stacked megachord. All 3 baby profiles rang out together four times at each stage, starting with birth and ending with the second wean. What could be heard was a homogeneity and consonance between the mother and babies at birth that gradually became more diverse and dissonant as solid food was introduced. However, by the second wean, the babies’ and mother’s profiles become more consonant again. The researcher said this illustrated the conclusions of her study.
As a soundscape artist, I felt there was more here than just that basic chordal movement. The babies’ phylum profiles were quite different from each other as well, which is lost in the chord presentation. For example, Carne’s birth profile has only 3 phyla, while Titan’s has twice that number. One way to hear this level of contrast in the baby profiles is to articulate the chords into riffs. Now we can hear the interplay of the changes in their microbiomes. In addition, we can hear how consonant/dissonant and dense the phyla become as outside food is introduced into their systems. Titan’s phylum profiles arpeggiate down, Puck’s go up, and Carne’s go down then up. A practiced deep listener could key in on a particular profile and follow it through to the end. I played around with rhythmic shifts to create more movement in the stages where the phylum profiles were incredibly dense and diverse. The last two arpeggiating riffs you will hear are all of the phylum notes sounding through twice. And listen for the elevated levels of Proteobacteria in all 3 profiles at birth – that G3 rings out at that point.
As I put this full family profile together, another more nuanced movement in the data appeared. In the chord rendering, I heard the data get more dissonant and dense from nursing through the first wean, and then the phyla thinned out and became more consonant at the last wean. In the riff rendering, I can hear a contraction and more consonance at the Intro to Solid Foods stage as well as at the second wean. That was not clear in the chord presentation. When I checked my data records, there was a drop in the number of phyla present between the Nurse and Intro stages. I love that a nuance appeared in the listening that made me go back and check the data. That is exactly how I hope this process will work.
Some other things for future consideration:
Aligning each phylum tone to a particular beat might help the listener hear the differences from stage to stage more clearly.
When assigning notes to data points, closer attention to the harmonic overtone series might help clarify the role consonance and dissonance play in hearing the data.
The voices of the baby profiles have similar timbre as a unifying element. The profiles could have very distinct voices which might make the variances in their profiles more audible.
During a trip to Washington D.C. in 2017, granddaughter Jahniya purchased a silver necklace from the International Spy Museum gift shop. The necklace is the word “strength” in Morse Code with round beads for the dots and longer tube beads for the dashes. When she showed me the necklace, I saw the dots and dashes as stabs and long tones and began wondering about soundscapes with words and phrases spelled out in Morse Code.
This idea is a perfect companion to TRIC Questions and my interest in using overlapping “predetermined” sound articulation patterns in music. Morse Code is also a way to embed words into soundscapes. People seem to prefer music with words. We like a story, an image and the sound of the human voice. For me, the singer and the words hijack a large percentage of the ear-brain in any listening situation. That is why I prefer my music sans words – let the instruments convey the narrative. Since 70% of meaning is derived from the non-verbal components of speech, including cadence and tonality, music should be capable of conveying information and meaning quite effectively. Morse Code brings a verbal aspect into the soundscape without words taking over the show.
Here is the simple instruction sheet for creating Morse Code:
This will be my idiosyncratic metric template. The time signature is open and based on units: 1 unit = 1 beat = the length of a dot. The measures can be laid out in the number of units it takes to complete the word or phrase in Morse Code with the 7 unit break between iterations of the word.
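That unit template can be written out directly. Here is a minimal Python sketch of rendering a word as on/off events measured in units, using the standard Morse proportions (dot = 1 unit on, dash = 3 on, 1 off inside a letter, 3 off between letters, and the 7-unit break before the word repeats):

```python
# Just the letters needed for 'peace'; a full Morse table would extend this.
MORSE = {'p': '.--.', 'e': '.', 'a': '.-', 'c': '-.-.'}

def word_to_units(word):
    """Render a word as a list of (on, length_in_units) events."""
    events = []
    for li, letter in enumerate(word):
        for si, symbol in enumerate(MORSE[letter]):
            if si:
                events.append((False, 1))            # gap inside a letter
            events.append((True, 1 if symbol == '.' else 3))
        # 3 units off between letters, 7 before the word repeats
        events.append((False, 3 if li < len(word) - 1 else 7))
    return events

pattern = word_to_units('peace')
total = sum(n for _, n in pattern)   # loop length in units/beats: 48 for 'peace'
```

Because each word works out to a different total length in units, clips built this way drift against each other when looped, which is exactly the rhythmic offset described below.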
Feeling sure that modern composers have explored this idea already, I googled “Morse Code music”. What I found were instances of Morse Code used as drum patterns. The code was always tied to a time signature, so the actual prescribed template for Morse Code is not what is heard. Plugging these patterns into Ableton voices means that the exact Morse Code template is played and preserved in time. This rhythmically offsets the clips, since they are different lengths. I have played with this idea for over a year using different words like love and joy, or om shanti. I did an iteration of “peace” over and over for International Peace Day 2017.
Now it is September 2018, and I am playing off the Peace Day soundscape in preparation for the Wake Forest Dance Festival this coming weekend (9/29/18). Justin Tornow was invited as a guest choreographer. She is creating a solo for dancer Maggie Page Bradley, which I will accompany. The sound piece is called PSS – short for Peace Shalom Salam. Each clip is a Morse Code Template of peace, shalom or salam. Each template follows the unit pattern described in the Morse Code description above. Some units are quarter notes and some are eighths, adding another layer of rhythmic offset. After laying out the templates, I squeezed several of them into smaller time frames (oh, the things you can do in Ableton), thus speeding them up while maintaining the integrity of the Morse Code Template. In addition, I am throwing into this randomness one of my favorite Ableton audio effects, called Fade to Grey. This is a high-pass and a low-pass filter that move towards each other and basically squish the sound to nothing, while a ping-pong delay holds a bit of the sound in an echoic pattern. This effect allows the sound to be fractured and reflected in interesting ways. The effect is on every track, so there is the potential to break the sound into multiple glimmering pieces. Here is a short sample of this effect at work:
If you are in Wake Forest close to Joyner Park tomorrow, please come to the Wake Forest Dance Festival. There is an open tech rehearsal in the morning, movement around the park in the afternoon and the dance performances will be from 5-6:30. As always, I appreciate your support!
When I was a child, we often visited our grandparents in Elkins, WV. Elkins is home to the Mountain State Forest Festival and is my birthplace. My Mother’s family has a long history with Elkins. Her grandfather was one of the first mayors and one of two doctors after the town’s 1890 incorporation. I am not sure how my Dad’s mother got there. Mamaw lived in a brick row apartment with a porch and stoop to play on. And she lived one block from the volunteer fire department.
When I slept over with Mamaw, there was always a fire in Elkins, sometimes two. The volunteers had to be called in from all over town, and what called them was the longest, most mournful sound my young ears had ever heard. As loud as it was (remember we were one small block away) the siren also sounded ghostly. It went on and on and on for an eternity and then it stopped! A lovely silence would fall and gently wash away the residue of the wailing. If it happened at night, I would return to sleep; by day, it was back to play. Either way, the siren always elicited a jolt of free-floating anxiety.
The Mountain State Forest Festival takes place the first weekend in October in Elkins and has for 85 years (with a short hiatus during WWII). This Festival was a highlight of each and every year of my growing up. We got out of school for two days and traveled through the gorgeous colors and crisp fall air to spend several days with carnivals, exhibits, parades and pageantry. One of the parades took place on Friday night and involved 100 firetrucks sounding their sirens at the same time. The Fireman’s Parade attracted fire departments from all over West Virginia and into Virginia and Maryland. The trucks would line up at one end of town and slowly make their way down the main street blaring the siren song of their station, their truck. The sound of 100 firetrucks calling their warning song together cannot be described. People flocked to the sidewalk, laughing, trying to talk to each other over the din. My brother Matt is famous in our family for having slept through the Fireman’s Parade when he was a babe. Even back then, I enjoyed the interplay of the various intervals that make up a siren song.
A few years ago, my cohorts from iBoD (idiosyncratic Beats of Dejacusse) were discussing ideas for soundscapes. The one sound artifact that really stands out in the urban growth we are experiencing in Durham NC is the frequency of emergency sirens. This became the basis for an iBoD piece called The Sound…of Sirens. One online resource said the intervals of sirens telegraphed who’s coming: the police are a perfect fifth, ambulance is a fourth, and fire trucks are a whole tone. I designed the soundscape with those intervals. We all started with the basic intervals, and as the piece went on, we threw different intervals into the mix. The ending is a big crescendo and all out except the tail of the reverbed voices of the scape, which I turn up to a final fading shriek. We played the piece at a few venues. I thought of it as a novelty song.
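Those intervals correspond to simple frequency ratios, so the two pitches of each siren type can be worked out from any base pitch. A small illustration (the 440 Hz base is arbitrary, for the math only, not a measured siren pitch):

```python
# Just-intonation ratios for the siren intervals described above:
# police = perfect fifth (3:2), ambulance = perfect fourth (4:3),
# fire truck = whole tone (9:8).
INTERVALS = {'police': 3 / 2, 'ambulance': 4 / 3, 'fire truck': 9 / 8}

base = 440.0
pairs = {name: (base, base * ratio) for name, ratio in INTERVALS.items()}
for name, (lo, hi) in pairs.items():
    print(f'{name}: {lo:.0f} Hz <-> {hi:.0f} Hz')
```

Oscillating between the two pitches of each pair gives each emergency vehicle its own recognizable two-note wail.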
I talked about all of this in an interview with Margaret Harmer, who produces electronic music as Shifting Waves. Margaret is producing an album of work from 15 to 20 women electronic artists from all over the world. She asked each of us to think back to a sound in our childhood, to find the story around that sound, and bring it forward into a piece. (I actually added that last part, Margaret did not say the story had to be about the piece for the album, and it sure did flow that way for me.) Here is a link to the interview.
I took the soundscape for The Sound…of Sirens and began to analyze it harmonically and timbrally. The piece was sculpted from thick resonant voices (several synth pads and strings). This allowed me to carve out the movement of the sirens – the Doppler effect of approach and recede, the abruptness of a nearby siren suddenly starting or stopping – the psychoacoustic impact we experience in our communities. Now called Song of Sirens, the piece is a fountain of siren voices overflowing and receding. There are several short repeated interludes during the first section, several crescendos, and several interesting places where the sound drops out, leaving space in the front of the mix. This is most obvious when listening through headphones. It has piqued my interest in how we define the sonic space a piece takes up, and how to keep the full space alive when the sound recedes.
The Sirens’ song in mythology is characterized as an intentional “luring” of sailors onto the rocks. This sounds like one side of the story to me. Who was doing the hearing, and to what end? Was the siren song seductive, plaintive, demanding? Was it the call of grey seals, baying and mournful, resounding in the range of the female voice – a voice the sailors had not heard in years? Perhaps the sailors drove themselves onto the rocks looking for women to rape. There are many possible scenarios when all points of view are considered.
I wanted to put an intention of comfort and nurturing from female voices into Song of Sirens. How interesting that modern-day emergency sirens call out warning, answer your cry for help, or pursue you – all at once. How to embody all of this while flipping the mythology of blaming the women? So I recorded Trudie, her daughter Sheila, and three granddaughters singing phrases of Brahms’ Lullaby and wove them in and around the siren soundscape.
We are creating a new mythology as our brains and consciousnesses go through an extraordinary evolutionary shift. The reptilian brain – the one that fights or flees – is softening into the polyvagal brain. We are moving from survival of the fittest to survival of the kindest. Feminine consciousness knows how to be kind, not just benevolent. As the Song of Sirens sounds the death knell of the reptilian brain, grandmothers, mothers and granddaughters sing a soothing lullaby, swaddling the panicky cries.
Song of Sirens will be released as a track on Voices from Eris, produced by Shifting Waves studios. Stay tuned for more on fundraising and release date. I appreciate your listening!