Branching Songs Production Methods
June 2021
Post by Sam Street and Cara Jacobsen

Overview
Branching Songs, a collection of immersive soundscapes and accompanying 360 images, is a response to the planned removal of trees along the route of the Trans Mountain Pipeline (TMX), and the resulting damage to the surrounding environment. Inspired by the work of 1308 Trees, an activist project that aims to stop TMX and to bring awareness to the 1308 trees slated for removal along the pipeline route, Branching Songs commemorates multiple trees under threat from this fossil fuel expansion project. Each soundscape features interactions between the Wild Empathy team and individual trees, recorded through contact microphone interaction, as well as our field recordings of the environment surrounding each tree. The sections below describe the approaches used.

Contact Microphone Interactions
The interaction stage of this project involved using contact microphones to record gestural touch-interactions through the body of the tree, moving as vibration through the wood. This technique’s methodology was developed by Julie Andreyev for a third-year media course at ECUAD, and was built upon for one of Wild Empathy’s first workshops, Performing With Trees.

The first touch-recordings for this project were conducted with several trees at Stoney Creek. Starting off strong with a beautiful large red cedar, we began with a four-contact-microphone setup running into two MixPre recorders. We placed two microphones on either side of the trunk, and two among the low branches, using flagging tape to secure the flat area of the contact mics against the tree. Using multiple microphones allowed us to listen through multiple perspectives, to hear gestures refracted across the tree’s body. After getting situated, we recorded three performances, a solo each for Sam and Julie, then a duet — moving around the tree, plucking branches, rubbing bark, tapping across the surface. The touch interactions were improvisational, responding to the sounds heard on site, and the specific features of the tree.

Another tree selected for performance was a small, ivy-covered tree not far along the path from the large cedar. A dual contact mic setup with a Roland R-26 recorder was used for recording. One contact mic was attached quite high on the tree’s trunk, while the other was placed much lower, and both were secured using flagging tape. The cables were wrapped around the trunk to help pick up the sounds of various gestures. Cara recorded several improvised performances with the tree, including tapping on the ivy leaves and trunk, brushing the bark, and moving closer and farther from the mics. It was a windy day, so sometimes the wind and the shaking of the ivy could be heard through the mics.

Also selected for performance was a large red cedar along Trans Mountain trail, a walking path just within the Trans Mountain site property line (listen to Red Cedar at Trans Mountain Trail). One contact mic was attached to a low branch using flagging tape and the other was nestled against the bark on the trunk. A series of tapping, plucking, and brushing gestures were used both on the trunk and the branch. The mics also picked up some of the construction noise from the adjacent site, as well as birds overhead, translated through the bark.

Field Recording and Soundscape Composition
The soundscapes in Branching Songs utilize a wide range of field recording techniques, brought together to represent both the wider local environment and the tree itself. To begin, the primary microphone we use for the ambient “foundation” of the soundscape is the RODE NT-SF1 ambisonic microphone. Rather than a single directional microphone, the NT-SF1 uses four tetrahedrally positioned capsules to record the entire “sphere” of sound around the virtual listener. This is extremely useful in the field, as it helps get around issues of microphone placement that might otherwise miss unexpected sounds (common in wild spaces). After recording, we used RODE’s companion Soundfield processing plugin to position the “sound field” around the listener as desired, then mix the four-channel recording down to a two-channel binaural file, ready for stereo listening.
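The Soundfield plugin handles this positioning and mixdown for us, but the underlying math can be sketched in a few lines. The Python snippet below is illustrative only, not the plugin's actual processing: it assumes a first-order B-format signal with FuMa-style W, X, Y, Z channels, rotates the sound field with a yaw matrix, and uses a pair of virtual cardioid microphones as a crude stereo (not binaural) mixdown.

```python
import numpy as np

def rotate_yaw(w, x, y, z, angle_deg):
    """Rotate a first-order B-format sound field about the vertical axis."""
    a = np.deg2rad(angle_deg)
    x2 = x * np.cos(a) - y * np.sin(a)
    y2 = x * np.sin(a) + y * np.cos(a)
    return w, x2, y2, z  # W (omni) and Z (height) are unaffected by yaw

def virtual_cardioid(w, x, y, azimuth_deg):
    """A virtual cardioid microphone pointed at the given azimuth."""
    a = np.deg2rad(azimuth_deg)
    return 0.5 * (w + x * np.cos(a) + y * np.sin(a))

def to_stereo(w, x, y, width_deg=90.0):
    """Crude stereo mixdown: a pair of virtual cardioids at +/- width/2."""
    left = virtual_cardioid(w, x, y, +width_deg / 2)
    right = virtual_cardioid(w, x, y, -width_deg / 2)
    return np.stack([left, right])
```

A true binaural render, like the one the plugin produces, would additionally convolve each virtual direction with head-related transfer functions; the sketch stops at the simpler stereo decode.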

The next ingredient for the soundscapes in Branching Songs comes from the ground up, with the use of a geophone. This device uses a finely tuned magnetic system to detect low-frequency vibrations travelling through the ground, and is typically used in seismic detection or geological applications. However, these waves can also be translated into sound, and with the LOM Geofón, built specifically for this purpose, we can listen through the ground or even solid objects to gain a new sonic perspective. For Branching Songs, we placed the geophone in the earth amongst the roots of our chosen tree, and listened, or perhaps rather felt, the wider environment as the roots would. In the semi-urban locales that the 1308 Trees movement has highlighted, the geophone has proven useful as a translator of anthropogenic activity, be it joggers, distant construction at the TMX Burnaby site, or passing Skytrains. While the recorded sounds are all very low-frequency, when recorded in sync with the ambisonic microphone they provide a deep, warm “glue” that reinforces the soundscape composition.
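In the mix, this “glue” role amounts to filtering the geophone track down to its rumble and laying it quietly under the ambisonic bed. A minimal Python sketch of that idea (not our actual mixing chain; the cutoff and gain values are illustrative assumptions):

```python
import numpy as np

def one_pole_lowpass(signal, cutoff_hz, sample_rate):
    """Simple one-pole low-pass: keeps the geophone's rumble, tames hiss."""
    # Standard one-pole coefficient for the given cutoff frequency
    a = np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)
    out = np.zeros_like(signal)
    prev = 0.0
    for i, s in enumerate(signal):
        prev = (1.0 - a) * s + a * prev
        out[i] = prev
    return out

def mix_under(bed, rumble, rumble_gain=0.5):
    """Lay the filtered geophone track under the ambisonic bed, at low gain."""
    n = min(len(bed), len(rumble))
    return bed[:n] + rumble_gain * rumble[:n]
```

Because the two recorders ran in sync, the geophone track lines up with the ambisonic bed sample-for-sample, so the mix is a straightforward sum.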


Most sparingly, we also made use of a hydrophone to record the aquatic aspects of certain environments. In Cedar at Stoney Creek, this refers to the nearby creek that provides the lifeblood of the ecosystem. As the name suggests, this microphone is placed underwater and records the sounds within: the roll and turbulence of the stream and the small clicks of rocks moving. In the Alders at Stoney Creek soundscape, this aquatic element served as a reminder of the totality of the ecosystem—all the local dynamic elements working together to create the space we inhabit.
Finally, with all the pieces assembled, we composed the soundscape. This process started with close listening to the base ambisonic recording, listening for sonic events or moments detailing the location — nonhuman actors playing out a loose narrative “body” of the soundscape. Within this main background, or “stage,” we find space to insert the tree performance into the composition. Perhaps there is a moment of quiet within which the contact microphone performance can arise, or maybe it plays in harmony with the soundscape (one example being the syncing of performance and recording via a passing Skytrain on Alders at Stoney Creek). Each composition emerges through an organic process based entirely on the material we gathered from the field. Then the last elements are brought in: a creek fading to a whisper, a low swell of the geophone recording, small volume adjustments to highlight subtle moments. Any harsh edges are rounded off, the transitions are smoothed, and we do a final mix, arriving at a finished soundscape.
360 Image
And finally, we documented each tree with a 360-degree photo using an Insta360 ONE X camera and tripod. The camera connects remotely to the Insta360 phone app, allowing us to trigger the shutter once out of sight of the camera, rather than relying on a timer. These panoramas appear alongside the Branching Songs soundscapes to give immersive context for the environment surrounding each tree.
Immersive video recording using an Insta Pro 2 camera
July 31, 2019
Post by Julie Andreyev
We finally arrived at a good quality video recording technique for the VR headset experience, Tree Earth Sky. This post describes our methods using the Insta Pro 2, a dedicated VR camera that records 360-degree stereoscopic video. We are very happy with the results. I’ll explain why.
We hired Olivier Leroux, a VR cinematographer with the company Impact VR, to do the shoot, and he was able to locate an Insta Pro 2 that was available on short notice for our location production stage (thanks, Olivier!). The Insta Pro 2 is a VR camera that records stereoscopic 360-degree video at 8K, 30 fps, and has greater colour depth than the previous model, the Insta Pro, which we tested in the first phase of research. The Insta Pro 2 has ambisonic sound recording capability, but we chose to do the sound recording using a dedicated Zoom ambisonic recorder. The settings on the Insta Pro 2 are accessed through the Insta app via an iPhone or iPad. This is handy for shooting 360, when the camera operator and other crew need to leave the scene before recording.
This shoot was our final production stage for recording immersive video, and we only had one day on location, so we needed to carefully plan our work stages. We had some knowledge from the previous location experiments with the Sony and nodal system, and the stereo GoPro system. We knew that the light changes in the forest, and that it would be difficult to predict which specific tree and grove would have a reasonable amount of light, and be accessible for the tripod and camera. We planned to arrive on location before 8am, and spend a good 6-7 hours recording a variety of shots. Getting into the forest early was also a goal for Simon, who was intent on recording the dawn chorus of birds. To help accommodate the unpredictable conditions of shooting in the forest, we used interspecies approaches developed by Simon and myself in previous Animal Lover projects. This involved taking cues from nonhuman beings and natural forces, such as sunlight, to guide the process.
Video recording in a forest in the morning presents a few difficulties. As with most old forests, ours has a well-developed canopy, so sunlight that reaches the forest floor is sparse and intermittent. We set out to locate the exact grove that we used in the research experiments, but when we arrived, the light turned out to be dim and flat. So, we decided on a different approach: not to rely on this location, but to shoot in a few locations, even unfamiliar ones. We explored the general area, and evaluated a few groves of trees as potential locations to shoot. Since we were not using the known location, we had to come up with criteria to evaluate these other potential locations. The knowledge gained from our previous experiments helped us create a checklist to evaluate each location:
- The presence of an old growth tree (to be our main subject in the video)
- The ability to set up the Insta Pro 2 approximately 1.5-2 meters away from the old growth tree, a distance needed to emphasize the old tree as being the main feature in the VR space
- Surrounding plants, no less than 1 meter away from the camera
- The movement of the leaves of plants with the breeze
- Dappled light on the tree and surrounding plants
According to the manufacturer and the cinematographer, the Insta Pro 2 works best if the subject is more than 1.5 meters from the lenses, so this limited our selection of potential locations. Some locations had too much undergrowth to allow access, or their leaves were too close to the lenses. The best locations had wonderful plants surrounding the trees that are typical of old growth forests on Vancouver Island. These included sword ferns and vanilla leaf—a plant traditionally used by Indigenous people that, when dried, functions as an insecticide (and smells like vanilla!). In video recordings, these undergrowth plants provide good depth cues for the VR experience. The static elements of the tree and plant locations helped us choose some of the potential shooting locations. However, the unpredictable elements, such as movement and light, needed a special approach.
Movement and light are important to include in an immersive video experience because they help emphasize the spatial quality of the location, and its natural vitality. Recording in a forest in the morning when there is very little breeze makes it virtually impossible to predict movement in nearby plants and leaves. Any wind is usually felt higher up in the canopy, not at ground level. So, we aimed simply to record a number of locations, in the hope that a few of the recordings would capture some movement.
In forests, light can also be unpredictable. As the sun moves higher in the sky, the light can change before you’ve even had a chance to set up the camera. The sunlight will be indirect and subdued by the branches. In the best case, this will create dappled lighting conditions on the plants and trees. Because of the unpredictability of light we decided to allow the light to direct us—it was our guide along a path to each shooting location. For example, when we spotted a small amount of sunlight hitting an old growth tree, we moved to that location. We quickly evaluated the location using the above criteria and, if found to be appropriate, set up the camera. We made a five-minute video recording, even though we only needed a one-minute final clip. This allowed us to record the changing light, and any physical movement of the plants and tree branches. After recording, we made a quick sketch of the location that included the types of trees and a brief description of each; we measured the distance from the camera to the main trees in the shot, and took photos. This information was useful in the post-production stage to estimate the distances of objects in the scene and create a realistic 3D stitch depicting the spatial qualities of the location. Olivier recorded a few stereo photos of the ground below the camera. This was useful in the post-production stage as material to overlay onto the tripod that was recorded in the shot. After we completed these steps, we’d look around for the next location, determined by the light.
In this way, we were able to record seven individual locations. Back in the studio, Olivier made a rough stitch of each location using the Insta software. We evaluated each one in an Oculus Quest headset, and chose the most compelling footage, containing the best combination of the above criteria. Our selected footage had a magnificent old Douglas fir as our main subject, lit with a changing pool of sunlight, a nearby fern moving slightly in the breeze, and vanilla leaf plants carpeting the forest floor. Later, we used post-production techniques to create the final immersive space of the forest.
Recording sounds of forests and trees
October 2018 – July 2019
Post by Julie Andreyev
This post describes the sound recording stages in our old forest location on Vancouver Island, where we gathered the real-world sonic material needed for the Wild Empathy projects. I’ll describe two stages of the production process, using different approaches to record the ambient sounds of the forest and the specific sounds of trees.
During our first production stage in October 2018, we aimed to record the forest’s ambient sounds, such as bird calls, insects and frogs. It was the afternoon by the time we found our location in the forest, so the ambient sounds were less rich than we had hoped. Birds inhabiting Pacific Northwest forests tend to do most of their calling in the morning and evening, and are quiet in the afternoon. Despite this, we decided to see what we could get. Simon chose a location far away from the rest of the team in order to record without interference from human sounds. He used a stereo pair of Earthworks TC30K omni condenser mics and a portable Sound Devices MixPre 3 audio recorder. The Earthworks were arranged at 90 degrees to each other and fitted with windsocks (see photo below). Simon’s technique for recording in wild settings is to set up the equipment and leave it to record, then walk away, sit quietly and listen. Using this technique, he was able to make 30-minute recordings in a few locations. One of the best captured water in a stream and the calls of frogs.
The second recording stage was set for July 31, 2019. From my experience, the best time to record birds in a Pacific Northwest forest is in the morning from dawn to around 8:30am. So we arrived at the forest by 7:30am, and managed to catch the last hour of the dawn chorus. Simon entered the forest on his own while the rest of the team stayed behind at the car. This allowed him to quietly set up the recording equipment and complete two 30-minute recordings of the dawn chorus without interference from human sounds. He used the same technique as in the October production stage, setting up the equipment to record, then walking away to sit quietly and listen. In this way, the birds carried on with their calls without feeling intimidated by his presence.
For this recording session, the intention was to record immersive sound, so Simon used a Zoom H3-VR recorder. This recorder allows you to record and process spatial audio for use in VR environments. It has a built-in ambisonic mic system with four capsules. (The recorder is also capable of stereo binaural monitoring and recording.) For our purposes, Simon was able to record the spatial sounds of the forest for our VR headset experience Tree Earth Sky.
Recording with the Zoom H3-VR was successful: we achieved our goal of capturing a rich soundscape of birds and other animals. We identified two loons singing a duet, Steller’s jays, a Swainson’s thrush, squirrels, and even mosquitoes flying past the recorder. The H3-VR has a high noise floor relative to the quiet forest ambience, which proved difficult in the post-production process. However, given its portability and ease of use, it is a good recorder for capturing spatial sound for VR projects.
Later in the afternoon, Simon and I conducted the last recording session, with the goal of recording tree sounds. I have described our earlier experiments with recording the sound of trees moving, and the sound of ‘performing’ a tree. Using the knowledge gained from those experiments, we focused on recording the sounds of performing a tree. We selected a small fir tree on which we could attach the contact mic. We used a buffered contact mic (from a local manufacturer called Cold Gold Audio) and a Sound Devices MixPre 6 audio recorder. We attached the mic using a double-sided adhesive strip, a material manufactured by 3M and sold by Cold Gold Audio, that transmits soundwaves. This time, we used flagging tape to secure the mic, because of its slightly stretchy quality (see image below).
We took turns performing with the small fir tree, using our hands to stroke, scratch, hold and gently shake the tree while we recorded (see image at top). By listening carefully, we were able to draw out a variety of interesting sounds. We used methods similar to those used by musicians in improvisational settings. In a conventional improvisational situation, musicians carefully listen to each other in order to creatively respond on their instruments in a way that builds the music on the fly. Performing with the tree, we would listen to the sound of touch and respond with another type of touch to build a sound composition. These recordings were eventually used for our they speak in whispers immersive art project, where they were treated using granular synthesis techniques to represent the tree’s inner life.
they speak in whispers
Mixed media immersive art
Post by Julie Andreyev
Early on in our discussions with Science World, they showed us the old-growth Western red cedar remnant, called the hollow tree, that’s been on permanent display in their Search Gallery. The hollow tree has a carved out interior, with seating that accommodates three adults (or more children), and small stairs for visitors to climb. The tree is open at the top to the ceiling. Maria and I were immediately struck by the presence of the tree, and had the idea to activate it with sound and video. It could be developed into an immersive experience for visitors entering the cedar, offering a visual and sonic understanding of a living tree as part of a forest ecosystem. The acoustical qualities inside the hollow tree offered potential to create an immersive sound environment, and the opening at the top afforded a natural place to install a rear-projected forest canopy video. We imagined that a soundscape of the forest, and how it is heard by the tree, could be recreated using software and speakers installed in the interior space.
During our July 31 location recording session in the old growth forest by Antler Lake, Vancouver Island, we set aside time in the afternoon to record forest canopy video, and the sound of the forest biophony. Damien located a few excellent spots in the forest that offered a nice view of the tree canopy. He set up his Sony FS7 to record 4K raw, with the lens pointing directly up. Recording this point of view would create a realistic perspective when shown in the hollow tree as a projection in the top opening. He recorded a few two-minute clips at each location, exposing for the tree branches. This duration ensured we would capture any movement in the branches due to breeze, and any light changes—important features for a realistic illusion.
Back in the lab, we selected a canopy clip that had good detail, and movement in the branches. The clip was colour corrected using DaVinci Resolve and trimmed to exactly 2 minutes.
The hollow tree has features that allowed for the projected image to be easily installed in the top of the tree. It has a bracket fastened inside at the top to hold the bark together, and this was the perfect location to install the projection screen. We measured the bracket to make sure it would take a circular screen, and hired a local company, Best Films Service, to fabricate a custom 60” diameter circular frame out of square profile aluminum tubing, with a white rear projection screen material stretched over the frame. They produced an amazing screen that is beautiful and light, and projects a fantastically clear image.
Our situation was unique in that it required the projector to be hung from the ceiling, pointing down at the screen. We purchased an Epson LS100 ultra-short-throw projector. This model is full HD, with 4,000 lumens, good for locations that have some ambient light. Its laser light source allows the projector to be installed pointing down without overheating, as would happen with the bulb in a conventional projector.
Simon used Max/MSP to create a software patch to play the video from a Mac mini. The video needed a circular mask, to match the projection screen, and it needed to play in a seamless loop. To create a seamless video loop, the first frame and the last frame need to be the same. Because this is not possible with a clip that changes over time, we used a palindrome technique based on a previous project called Wait: the software plays the video forward, then immediately plays it in reverse, then forward, then reverse, and so on. This creates a seamless quality for viewers because the subject matter reads well even when played backwards. These techniques created a visual experience continuous with the materiality of the hollow tree.
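The looping logic itself is simple. Our patch was built in Max/MSP, but the frame ordering can be sketched in Python (illustrative only, not the actual patch):

```python
def palindrome_order(num_frames):
    """Frame indices for one cycle of a seamless forward/reverse loop.

    Playing frames 0..N-1 and then N-2..1 means the end of one cycle
    joins the start of the next without a visible jump.
    """
    forward = list(range(num_frames))
    backward = list(range(num_frames - 2, 0, -1))
    return forward + backward
```

For a four-frame clip this yields `[0, 1, 2, 3, 2, 1]`, which repeats seamlessly back to frame 0: no frame is shown twice in a row, and every cycle starts and ends adjacent to frame 0.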
The soundscape creation for they speak in whispers required a bit of experimentation to process the field recordings, build the generative computational soundscape, and determine the right speaker system. Simon needed to edit the field recordings, mostly to reduce the noise levels, since the Zoom ambisonic recorder we used in the field has a high noise floor. He first listened to the recordings in Adobe Audition, logging and marking the locations that would be exported as clips. These marked locations were exported as individual files, then taken into the Zoom ambisonic software and exported as stereo files. The stereo files were brought back into Audition, treated using the iZotope RX 7 Spectral De-noise plug-in, and edited with a tiny fade in/fade out. These files were then exported for use in the playback software Simon designed using Max/MSP.
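The tiny fades at the clip boundaries prevent audible clicks where a waveform would otherwise start or stop abruptly. A minimal Python sketch of that editing step (our actual fades were applied in Audition; the 10 ms length here is an illustrative assumption):

```python
import numpy as np

def apply_fades(clip, sample_rate, fade_ms=10.0):
    """Apply short linear fade-in and fade-out to avoid clicks at clip edges."""
    n = int(sample_rate * fade_ms / 1000.0)
    n = min(n, len(clip) // 2)  # never let the fades overlap
    out = clip.astype(float).copy()
    ramp = np.linspace(0.0, 1.0, n, endpoint=False)
    out[:n] *= ramp          # fade in from silence
    out[-n:] *= ramp[::-1]   # fade out to silence
    return out
```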
Simon and I discussed compositional techniques to use for the soundscape playback software. We listened to all the exported files, and categorized them according to their intrinsic qualities. These were sorted into four folders: soloists, ambient, loud, insects. The soloists were recordings of prominent calling animals, such as loons, squirrels, woodpeckers and Steller’s jays. The ambient files were recordings of calling animals who were further away from the recorder, and therefore background-sounding. The loud files were loud, close-up animal calls; the insects were recordings of flies and mosquitoes flying by the recorder. Simon created a sound file player for each of the four folders that would randomly play files from that folder. He designed rules for how often the files in each folder would play, in order to create a natural depiction in the final playback: two ambient files played together at any one time; a soloist played for 45 seconds and then there was a pause; the loud and insects files played less frequently. In addition to these animal sounds, Simon created a tree voice sound that played in response to the soloists. This used a sound recording from our field tree recording session, treated with a granular synthesis patch to lower the pitch and give it a more pronounced vibrational quality. The compositional intention was to create a tree voice as a low, rumbly sound, because the tree is so old and slow. The tree voice was reproduced using bass shaker speakers in the installation, so visitors would be able to sense it through sound and touch.
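These playback rules were implemented in Max/MSP, but the scheduling logic can be sketched in Python. Everything below is illustrative: the file names are hypothetical placeholders, and the rest timings for the soloist, loud and insect layers are assumptions, not measurements from the installation.

```python
import random

# Hypothetical file lists standing in for the four folders of exports.
FOLDERS = {
    "ambient":  ["amb_01.wav", "amb_02.wav", "amb_03.wav"],
    "soloists": ["loon.wav", "squirrel.wav", "jay.wav"],
    "loud":     ["close_call.wav"],
    "insects":  ["mosquito.wav"],
}

def schedule_step(now, state):
    """Decide which files to start at time `now` (in seconds)."""
    events = []
    # Rule 1: keep two ambient layers running at all times.
    while len(state["ambient_playing"]) < 2:
        f = random.choice(FOLDERS["ambient"])
        state["ambient_playing"].append(f)
        events.append(("start", f))
    # Rule 2: a soloist plays for ~45 s, followed by a pause.
    if now >= state["next_soloist"]:
        events.append(("start", random.choice(FOLDERS["soloists"])))
        state["next_soloist"] = now + 45 + random.uniform(15, 60)  # play + rest
    # Rule 3: loud and insect files fire only occasionally.
    for folder, key in (("loud", "next_loud"), ("insects", "next_insect")):
        if now >= state[key]:
            events.append(("start", random.choice(FOLDERS[folder])))
            state[key] = now + random.uniform(120, 300)
    return events

# Initial scheduler state: nothing playing yet.
state = {"ambient_playing": [], "next_soloist": 0.0,
         "next_loud": 60.0, "next_insect": 90.0}
```

Calling `schedule_step` periodically (for instance once per second) keeps the four layers in their intended proportions while the random choices prevent the loop from becoming predictable.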
Science World wanted all the equipment to be out of reach, so we needed to identify a speaker system that would work with the tree acoustics, and could be installed in the upper portion of the tree’s interior. We tried building a system out of 3” speaker components, but these didn’t adequately reproduce the sonic fidelity of the recorded bird calls. Plus, they accentuated the noise in the sound recordings. Instead, we went with a Polk RC60i stereo pair that has a good range, and seemed to reduce the noise. The speakers are designed for home use, to be installed in ceilings. We connected them to a 90-watt Pyle PFA300 compact stereo amplifier. This system worked out well, and sounded great in the space of the tree. We asked Emma, our graduate research assistant, to spray-paint the speakers to match the interior of the tree, and to create a brown paper surround to semi-camouflage them against the tree’s texture.
As mentioned above, we wanted to include a tactile quality for the soundscape in the tree, so we included bass shaker speakers mounted on the interior of the tree. Bass shakers are normally used to add vibration to the very low frequency sounds in movies and games. In our situation, they could be used to create vibration for people sitting inside the tree, or touching its interior surfaces. We chose bass shakers made by Dayton Audio, including one 50W BST-1 bass shaker to reproduce the low frequency sound waves, and one 40W DAEX32EP-4 to reproduce mid-range frequencies. Both are powered through a Dayton APA150 150-watt amplifier. Emma also created paper enclosures for the bass shakers so that they looked good in the tree but didn’t draw too much attention to themselves.