The recording from the performance on the 19th of September is now online.
In search of a more organic live sound – instead of using just a bank of oscillators – I have implemented spectral samplers in the live process.
The latest pieces are now on the sarana SoundCloud page.
Sample of the live set at Akusmata sound art gallery
The whole 30-minute performance is an exploration into the spectral content of a small choir recording and whispers.
sarana Live at Akusmata on 2014-11-05
Sample of the live set at warehouse L3 in Helsinki. The warehouse was transformed into an urban park for a day. The set was performed in conjunction with a morning yoga session.
The two hour set
Friday the 19th saw a party happening by a field, next to an ancient rock wall. There were two bunkers, and the main party took place in one of them. The other bunker was sadly unusable as a chill-out cave, so the setup was arranged outside by the rock. Two sarana sets were performed: one around midnight and another in the morning.
They are now available: https://archive.org/details/srn2014-09-19
Sample on Soundcloud: https://soundcloud.com/sarana/kivi
I think that generative and emergent music mean different things. In my view, generative music is based on algorithms that try to mimic natural dynamics. Emergent music, on the other hand, is not based on algorithms but on an analysis (of natural processes), and the outcome can’t be mathematically formulated.
The dynamics are in constant change, yet the sound of waves or wind, for example, is based on the physical properties of those processes and the environment. They are not random, but they also never repeat themselves; there is constant change, evolution.
I’m thinking about this because I’ve maybe found a label for the music I create – emergent spectral ambient.
As an example, and to make my point: yesterday I was experimenting with the Spectral Toolbox for Max/MSP. It includes a wonderful tool that remaps the target sound’s partials to the nearest overtones of a harmonic series, where the fundamental frequency can come from another sound. Instead of just one harmonic set I created a multi-path system, which was then fed by a frequency analyzer in real time.
In the experiment the sound of wind provides the partials, and a short cello progression the fundamentals. The combination of natural dynamics and the continuity errors in the analysis process creates something unexpected: a whole piece of music, with melody and harmony, that could go on for eternity.
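The core of the remapping can be sketched outside Max/MSP in a few lines of Python/NumPy (the function name and example frequencies are my own illustration, not the Spectral Toolbox implementation): each detected partial is snapped to the nearest overtone of a harmonic series built on a fundamental taken from the other sound.

```python
import numpy as np

def remap_to_harmonic_series(partials_hz, fundamental_hz):
    """Snap each detected partial to the nearest overtone of a
    harmonic series built on the given fundamental frequency."""
    partials = np.asarray(partials_hz, dtype=float)
    # Nearest harmonic number for each partial (never below 1)
    n = np.maximum(1, np.round(partials / fundamental_hz))
    return n * fundamental_hz

# e.g. wind partials snapped onto a series rooted roughly on a cello C2
remap_to_harmonic_series([100.0, 333.0, 910.0], 65.4)
```

Because the cello progression keeps moving the fundamental, the same wind partials snap to different overtones from moment to moment – that is where the melody and harmony emerge from.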
The scintillating piece is also on SoundCloud.
The latest set from Uunolan Ukotus now on Archive.org
The live process I’ve built over many years is slowly taking the shape I’ve dreamt of. The current phase is moving the control elements from the computer to an external device. This is achieved with an iPad and the wonderful, infinitely customizable Lemur application. This in turn returns the live performance to where it was before: in the music and sound creation.
The system still lacks the fluidity and elements to make it a seamless experience, but I feel more and more confident that this can actually work. I’ve had doubts many times, but the clouds are shifting.
It’s rather interesting that I cannot in any way foresee the frequencies the system picks up, so I have to be acutely aware of what’s happening in the moment. It’s exhilarating that I’m not in total control and can only guide the process and progression in a certain direction. On the other hand, the music has no progression if I do nothing. Balancing somewhere in between makes me a listener and the creator at the same time. The system doesn’t let go of me – and yet it does, if I let go of it.
I was honored to be a performer in Dark Ambient Friday, a concert preceding the conference.
The 30 minute set on Archive.org
Excerpt of the set on Soundcloud
I’ve implemented a couple of new techniques in the live process. The timing of the sound triggers is now more flexible and follows the spectral content of the source material. The triggers for individual spectral bands can be adjusted separately, and the end result is livelier than before. I’ve ended up using a central timing mechanism based on fixed fractions rather than free oscillation. This gives the resulting music a nice fundamental structure. The base sync is usually a few beats per minute.
I’ve recorded a small demo of the system. In the demo a thunderstorm recording is transformed into music. In this example the higher frequencies produce shorter envelopes for the synthesizers in use. Enjoy a spectralization of a thunderstorm. The process happens in real time, so it never quite repeats itself, even when the storm recording is played in a loop.
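A rough sketch of the two mappings in Python/NumPy (the ranges, names and base period are illustrative assumptions, not the actual Max/MSP patch): higher band frequencies yield shorter envelopes, and trigger times snap to fixed fractions of a slow base period.

```python
import numpy as np

BASE_PERIOD = 15.0  # seconds; i.e. a base sync of 4 "beats" per minute

def envelope_length(band_center_hz, lo=100.0, hi=8000.0,
                    longest=12.0, shortest=0.5):
    """Map a spectral band's center frequency to an envelope
    length: higher frequencies get shorter envelopes, scaled
    logarithmically so each octave shortens by the same ratio."""
    f = float(np.clip(band_center_hz, lo, hi))
    t = (np.log(f) - np.log(lo)) / (np.log(hi) - np.log(lo))
    return longest * (shortest / longest) ** t

def quantize_trigger(t_s, fraction=0.25):
    """Snap a trigger time to a fixed fraction of the base
    period, giving the music its fundamental timing structure."""
    grid = BASE_PERIOD * fraction
    return round(t_s / grid) * grid
```

So a trigger fired by a high band lands on the same slow grid as one from a low band, but decays much faster – which is why the thunderstorm’s crackle reads as short synth notes over long bass swells.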
The other technique I’ve been really fascinated by is slowing the playback of a sound down to a standstill. This is enabled by modified extreme time-stretching algorithms provided by Jean-Francois Charles. Typically in a live performance I play the source material at between 1/100 and 1/1000 speed, and the music and other elements follow the slowly changing spectral content analyzed from the material.
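One common way to realize such a standstill is a spectral freeze: keep resynthesizing output from the magnitudes of a single FFT frame while randomizing the phases, so the sound holds still without turning into a buzzing loop. The sketch below is a plain Python/NumPy illustration of that idea, not the Jitter-based implementation used in the live process.

```python
import numpy as np

def spectral_freeze(frame, n_out_frames, hop=256):
    """Freeze one FFT frame: resynthesize many overlapping output
    grains from the same magnitude spectrum with randomized
    phases, producing a static yet non-looping texture."""
    n = len(frame)
    mags = np.abs(np.fft.rfft(frame))
    window = np.hanning(n)
    out = np.zeros(hop * (n_out_frames - 1) + n)
    rng = np.random.default_rng(0)
    for i in range(n_out_frames):
        phases = rng.uniform(0.0, 2.0 * np.pi, len(mags))
        grain = np.fft.irfft(mags * np.exp(1j * phases), n)
        out[i * hop : i * hop + n] += grain * window
    return out
```

Sliding the frozen frame through the source at 1/100–1/1000 speed then gives the slowly evolving spectrum the rest of the process follows.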
I recently performed live at a library main room in Helsinki. The recording of the spectral ambient set is now available.
The 5th part of the Sounds of Calligraphy series has been published. The original images were provided by Vuokko Koho. Uncials on Canal Grande comprises two calligraphy works, which transform into one another in a video. The work was presented as the opening for a workshop held in Venice in March 2013.
The live video matrix is analyzed, and sound waves and processing properties are derived from the image information. The pitches are based on color-channel luminosity: the luminosity controls the frequency, and the matrix is read in three parallel processes. The resulting sound is fed into an image-controlled 16×3-channel filter with image-controlled cutoff, resonance and panorama.
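The luminosity-to-pitch mapping can be sketched as follows (an illustrative Python/NumPy reduction of the Jitter process; the frequency range and the exponential mapping are my assumptions): each color channel’s mean luminosity is mapped to one frequency, giving the three parallel pitch streams.

```python
import numpy as np

def pitches_from_matrix(rgb, f_lo=55.0, f_hi=1760.0):
    """Map the mean luminosity of each color channel of an image
    matrix to a pitch. The mapping is exponential, so equal
    luminosity steps give equal musical intervals."""
    rgb = np.asarray(rgb, dtype=float) / 255.0
    lum = rgb.reshape(-1, 3).mean(axis=0)   # mean luminosity per R, G, B
    return f_lo * (f_hi / f_lo) ** lum      # 0.0 -> f_lo, 1.0 -> f_hi
```

In the real patch the matrix is of course read cell by cell rather than averaged, but the principle – brighter color, higher frequency – is the same.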
The StillStream set is now on Archive.org. It was supposed to be a three-hour set but ended up shorter because of unexpected software glitches in the music creation process.
The second recording is of an online session with Blu from Britain. Blu provided most of the played material, which I used in the process as input and then created new re-synthesized sounds and spectral mixes as my part of the dialogue.
I created a video documentary / music video of the Ambient² installation:
The calligraphy exhibition performance was continued with a commissioned work to realize three calligraphy pieces as music. The playlist consists of three different images, chosen because of their potential as musical compositions.
Recordings were done in real time from the Max/MSP/Jitter process, and the form of the sounds and music was decided by the composer. The process consists of 128 oscillators and filters per color channel, and the stereo panorama follows the position of the visual cues. Envelope attack and release values are modulated by the strength of the color in question, as is the volume of each individual oscillator.
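In outline, one color channel of such an oscillator bank could look like this in Python/NumPy (a simplified sketch; the harmonic tuning, sample rate and normalization are illustrative assumptions, not the actual patch): each of up to 128 oscillators is weighted by the strength of its color value.

```python
import numpy as np

def additive_channel(strengths, base_hz=55.0, dur_s=0.5, sr=8000):
    """One color channel as a bank of oscillators: oscillator k
    sounds at base_hz * (k + 1), with its amplitude scaled by the
    strength of the corresponding color value."""
    strengths = np.asarray(strengths, dtype=float)
    t = np.arange(int(dur_s * sr)) / sr
    freqs = base_hz * (np.arange(len(strengths)) + 1)
    sig = (strengths[:, None]
           * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
    peak = np.abs(sig).max()
    return sig / peak if peak > 0 else sig
```

Modulating the attack and release per oscillator from the same color strengths, as the text describes, would then shape each partial’s envelope in addition to its level.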
Great thanks to the Viiva & Viiru artists for their input and openness to the abstract.
The performance on the Dream House 24H experimental sounds radio show: http://archive.org/details/srn2012-05-05
The latest live set from Trip To Goa trance party chill out space: http://archive.org/details/srn2012-09-14
The three-hour exploration into the spectral characteristics of human, nature and instrument material is now listenable and downloadable at http://archive.org/details/srn2012-07-30. The set was performed at the Ambient Source stage from 05 to 08 am.
In many ways the set is still a work in progress, and it will continue to evolve and encompass new elements.
The live ambient music generating process is nearly ready. In this piece a single vocal track is transformed into an ambient soundscape. The piece uses a multitude of FFT-based objects, with which the original material is cross-synthesized, malformed, re-synthesized and filtered. This enables fluid transformations from any original sound into music.
The second demo uses a recording of waves hitting shore as the source material (non harmonic content).
The system as it is will be the foundation of my future live performances, on top of which a couple of external hardware synth modules and effects will provide more texture and depth. The re-synthesis process and the subsequent musical device control are microtonal in the sense that the perceived frequency content is replayed directly rather than fitted into a preset musical system. Besides, the looping sound players are never analyzed at precisely the same spots.
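For contrast, here is what the process deliberately avoids – quantizing analyzed frequencies to a tuning system. A small Python/NumPy sketch of 12-tone equal-temperament snapping (illustrative only); the live process skips this step and replays the analyzed frequency as-is.

```python
import numpy as np

def quantize_12tet(freq_hz, a4=440.0):
    """Snap a frequency to the nearest 12-tone equal-temperament
    pitch -- the step the microtonal re-synthesis omits."""
    semitones = np.round(12 * np.log2(np.asarray(freq_hz, dtype=float) / a4))
    return a4 * 2 ** (semitones / 12)

# A detected partial at 450 Hz would be forced to 440 Hz by 12-TET;
# the direct-replay process keeps it at 450 Hz.
```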