Spectral ambient music engine

01 Jul 2012 10.47

Spectral live desk on Max/MSP

The live ambient music generation process is nearly ready. In this piece a single vocal track is transformed into an ambient soundscape. The piece uses a multitude of FFT-based objects, with which the original material is cross-synthesized, deformed, re-synthesized and filtered. This enables fluid transformations from any original sound into music.
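As a rough illustration of the cross-synthesis step, here is a minimal offline sketch in Python rather than the actual Max/MSP patch; the frame size, hop and the particular magnitude/phase split are my own assumptions:

```python
import numpy as np

def cross_synthesize(carrier, modulator, frame=2048, hop=512):
    """Combine the magnitude spectrum of `modulator` with the phase
    spectrum of `carrier`, frame by frame, with overlap-add."""
    window = np.hanning(frame)
    out = np.zeros(len(carrier))
    for start in range(0, min(len(carrier), len(modulator)) - frame, hop):
        c = np.fft.rfft(carrier[start:start + frame] * window)
        m = np.fft.rfft(modulator[start:start + frame] * window)
        # magnitude from one sound, phase from the other
        hybrid = np.abs(m) * np.exp(1j * np.angle(c))
        out[start:start + frame] += np.fft.irfft(hybrid) * window
    return out / np.max(np.abs(out))  # crude normalization
```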

[Audio demo 1]

[Audio demo 2]

The second demo uses a recording of waves hitting the shore as the source material (non-harmonic content).

The system as it is will be the foundation of my future live performances, on which a couple of external hardware synth modules and effects will provide more texture and depth. The re-synthesis process and the subsequent musical device control are microtonal in the sense that the perceived frequency content is directly replayed rather than fitted into a preset musical system. Besides, the looping sound players are never analyzed at precisely the same spots.

Advancing methods

02 Oct 2011 16.04

I have advanced my setup a bit. Here’s a demo of a real-time spectral process where audio is analyzed and reconstructed with eight sine wave oscillators.

[Audio demo]

I have used this kind of process before too, but this is more fine-grained and delicate. It’s based only on frequencies, not on any kind of existing tuning system (even though it may mimic one). Besides, the amplitudes of the frequencies are extracted and reused.
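A minimal sketch of what such an analysis/resynthesis stage could look like, as offline Python rather than the actual real-time patch; the frame size, hop and peak-picking rule are assumptions of mine:

```python
import numpy as np

def resynthesize(signal, sr, n_osc=8, frame=4096, hop=1024):
    """Per frame, drive `n_osc` sine oscillators with the strongest
    FFT peaks; only raw frequencies and amplitudes are used,
    no tuning system."""
    window = np.hanning(frame)
    out = np.zeros(len(signal))
    phases = np.zeros(n_osc)
    t = np.arange(hop) / sr
    for start in range(0, len(signal) - frame, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame] * window))
        bins = np.argsort(spectrum)[-n_osc:]             # strongest bins
        freqs = bins * sr / frame                        # bin index -> Hz
        amps = spectrum[bins] / (spectrum[bins].sum() + 1e-12)
        for i in range(n_osc):
            out[start:start + hop] += amps[i] * np.sin(
                2 * np.pi * freqs[i] * t + phases[i])
            phases[i] += 2 * np.pi * freqs[i] * hop / sr  # keep phases continuous
    return out
```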

The system allows me to input, for example, a single waveform with an uneven frequency distribution of harmonics and use that as the base. It wouldn’t be very difficult to add an option for reading color information and using live video as the source too. There really are no limits.
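Purely to illustrate the video idea, here is a toy mapping from color to pitch; the hue-to-frequency rule and the ranges below are invented for the example, and it assumes a frame arrives as an RGB array:

```python
import colorsys
import numpy as np

def hue_to_frequency(frame_rgb, f_low=80.0, f_high=2000.0):
    """Map the average hue of a video frame onto a log frequency scale."""
    r, g, b = frame_rgb.reshape(-1, 3).mean(axis=0) / 255.0
    hue, _, _ = colorsys.rgb_to_hsv(r, g, b)   # hue in [0, 1)
    return f_low * (f_high / f_low) ** hue
```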

The original vocal recording is from the Freesound user Digifishmusic: Katy_Sings_LaaOooAaa.wav

New devices

17 Sep 2011 19.46

I have recently acquired an Access Virus TI2, which will be an integral part of my future live performances. I’ll no longer play live music with the devices I’ve used so far; the ESQ-1 and Polysix have been retired to studio use only. I’m building a completely new setup and a completely new style of creating music live, and the setup will also be much easier to carry around.

I need to abandon the methods and automations, the preset rules and systems, and break through to something much more unique and personal. I will create my own fluid and infinite tuning system as well as integrate myself physically with the music creation process via my brainwaves. The process is just at the beginning. The devices of the former setup had become an intuitive part of me over the years, and getting a decent touch with the new ones takes a bit of time.

Mind sync

08 Mar 2011 15.25

I have purchased an EEG headset from Emotiv, the Emotiv EPOC research model.

This will allow me to integrate my brainwaves into my musical output. The device also makes it possible to control sound by thought. At the moment I’m studying Max/MSP, which will be the centerpiece of my future live sets. With Max it’s possible to freely mix all of the elements, including possible light and video effects.
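One possible shape for that integration, sketched in Python. Everything here is an assumption of mine: it presumes some external bridge already publishes EEG band power as OSC messages, and the addresses and ports are placeholders, not anything Emotiv or Max define out of the box:

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

to_max = SimpleUDPClient("127.0.0.1", 7400)  # port the Max patch would listen on

def on_alpha(address, power):
    # map band power (assumed 0..1) to a filter cutoff in Hz
    cutoff = 200 + power * 4000
    to_max.send_message("/synth/cutoff", cutoff)

dispatcher = Dispatcher()
dispatcher.map("/eeg/alpha", on_alpha)  # hypothetical bridge address
BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()
```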

With additional programming the system would also allow outside participation over the internet.

Exploration continues

01 Feb 2011 21.40

I’ve just finished the sound design of an indie movie, and I’m heading back towards sound exploration. I’m building a fluid tuning system on Max/MSP, with which I’ll be able to play along with any soundscape or sound, live or recorded.

The H-Pi microtonal keyboard I ordered should be ready in a month or so, and I’ve also acquired new sound devices to play with and make the live setup more compact.

There will be a new sarana album this year. Musically it’ll be a little like the first one, but with a unifying feel. I’ll also begin releasing tracks from my archives again soon.

New music

12 Oct 2010 12.34

A classical composition is basically a mathematical construction. By classical in this context I mean all music that conforms to some rigid notation system and is performed live. The ideal tones and frequencies are precise and in some given harmony with each other. It’s possible to create a perfect electronic reproduction of any composition. But is the result then perfect? Is the ideal of a performance to be as precise as an electronic rendition of the same composition? In classical music one can feel the muscles and individual psychophysical characteristics of the performers, which all contribute to the sound and the composition in a slightly new way.

It’s this interpretation that is interesting. The errors are slight but significant. By comparing a performance to an electronically created ideal, the errors themselves could be made audible. The differences in timing and tuning could provide completely new music. The resulting sound would essentially paint a musical portrait of the performer rather than of the composer or the composition.

This new music would be dullest when the performance or the performer is perfect (no differences in tuning and timing results in silence) and most interesting when the performer is carried away and interprets the composition (continuous or intermittent differences in tuning and timing). Practising a piece would also provide interesting musical material, as would a complex and difficult composition.

The differences of just a few hertz and milliseconds would have to be magnified in order to make them audible.
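In numbers, the magnification could be as simple as scaling the raw deviations; this is only a sketch, and the gain factors are arbitrary assumptions:

```python
def magnify(ideal_freq, played_freq, ideal_onset, played_onset,
            pitch_gain=50.0, time_gain=20.0):
    """Scale small pitch (Hz) and timing (s) errors into audible ones."""
    freq_error = played_freq - ideal_freq      # a few hertz
    time_error = played_onset - ideal_onset    # a few milliseconds
    return (ideal_freq + pitch_gain * freq_error,
            ideal_onset + time_gain * time_error)

# e.g. a 2 Hz error on A4, 8 ms late:
# magnify(440.0, 442.0, 1.000, 1.008) -> (540.0, 1.16)
```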

A dialogue could be formed between the interpretation and the performer. The error parameters could be used to create a new composition, which the performer has to play immediately (or after a short delay). Notation software would provide the notes and timings in real time. If one hit the correct times and notes throughout, there wouldn’t be any music to play.

For example, there would first be silence. The performer then creates a tone on a violin. As long as the performer keeps hitting exactly the same note and keeps the timing (derived from the pause between notes), the composition consists of one note struck at a constant interval. When the performer deviates from that, the differences are transformed into new notes and timings, which the performer should follow.
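The core of that feedback rule fits in a few lines; again this is only a sketch, the tolerances and gains are invented, and events are plain frequency/onset pairs:

```python
def next_expected(expected, played, pitch_gain=50.0, time_gain=20.0):
    """expected, played: (frequency_hz, onset_s) pairs."""
    exp_f, exp_t = expected
    got_f, got_t = played
    # a "perfect" repetition keeps the score static...
    if abs(got_f - exp_f) < 0.5 and abs(got_t - exp_t) < 0.005:
        return expected
    # ...while any deviation is magnified into the next note to play
    return (exp_f + pitch_gain * (got_f - exp_f),
            exp_t + time_gain * (got_t - exp_t))
```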

The resulting new music would follow quite an interesting pattern and end up being too complex and difficult to follow exactly (thus creating new music). The subtle individual characteristics of the performer and the performance become immensely magnified, and the performer becomes the composer.

Breaking out

17 Sep 2010 21.30

My efforts to free myself from the 12-key hegemony continue. I’ve ordered a two-octave, 422-key Tonal Plexus TPX2 keyboard from H-Pi Instruments. It’s a two-octave keyboard only in the classic sense, because all of the keys can be individually tuned to any frequency. Models with more octaves and keys are also available. With the accompanying software the tuning should be easy and intuitive.

Light controlled

18 Aug 2010 22.01

Light controlled ambient piece – recording of an intuitive musical performance.

The ambient piece acted as the musical introduction to an open discussion between photographer Victoria Schultz, psychoanalyst Heikki Majava and a sound explorer. The topic of the discussion was My Body and I – Synchronic Image, Vision and Sound. The discussion was held on August 13th.

The pitches of the piece were controlled with light-dependent resistors. I had built two simple enclosures for the resistors, which I held in my hands. In a slow dance I moved about and explored the light and dark areas of the gallery, Laterna Magica, where the event was held.

Victoria’s photos were on display in the gallery, and I thought that light controlled music would bridge the media and weave the topic together. I had built a light controlled audio device a few years ago, which was perfect for the idea.

It has two oscillators, which can be controlled independently. I connected the stereo out of the device to a laptop, on which modular sound processing software turned the pitches of the oscillators into synthesizer control messages. One channel was used for texture and the other had a more soloistic character.

I wasn’t able to control the sound as much as I desired. In possible future performances I will use the signal difference between the two resistors as the source of modulation. The difference can be explored via complex mathematical equations or logic analysis, for example.
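One speculative sketch of that modulation source follows; it assumes the two resistor readings arrive as floats between 0 and 1 (for instance from a microcontroller over serial), and the folding function is just one arbitrary choice of “complex equation”:

```python
import math

def modulation_from_ldrs(left, right):
    """Derive a bipolar modulation value from two light readings."""
    diff = left - right               # -1 .. 1
    # fold the difference through a sine so small hand movements
    # produce rich, non-linear modulation curves
    return math.sin(math.pi * diff) * abs(diff)
```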

Spectralized atmosphere

03 Aug 2010 14.35

The set from last Sunday is now available for download and listening on archive.org. The set took place at a compact sound art festival, which was held on an island. The soundscape of the island was turned into synthesizer control messages by using the Plogue Bidule modular sound processing software. The software analyzed the spectrum of the soundscape and picked frequencies from it at a certain tempo. The frequencies were then turned into MIDI note information. The notes were replayed by software and hardware synthesizers. The performance was an application of spectral music technique.
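The frequency-to-note step reduces to the standard MIDI formula; this is a sketch of my own, not the actual Bidule patch:

```python
import math

def freq_to_midi(freq_hz):
    """Round a frequency to the nearest MIDI note number.
    MIDI note 69 = A4 = 440 Hz, 12 semitones per octave."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

# e.g. freq_to_midi(261.6) -> 60 (middle C)
```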

Link to the original image on Flickr: Harakka island by rmrz

I had no role in selecting the notes, and I’m glad I’ve found a way to further diminish my role in the process. I’m exploring the possibilities of working with frequencies rather than just the 12-note harmonic system. I’m planning to set up a system that would create the intervals automatically from the spectrum. The system would sample the dominant frequencies of any material and create an octave using the sampled frequencies as the base. The number and logic of the octave divisions could be determined separately.
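A sketch of how that planned system might derive a scale; the equal-division rule below is one guess, and the configurable part is exactly the number and logic of the divisions:

```python
def scale_from_spectrum(dominant_freqs, divisions=12, octaves=4):
    """Build a scale rooted at the lowest sampled frequency,
    dividing each octave into `divisions` equal steps."""
    base = min(dominant_freqs)
    ratio = 2 ** (1.0 / divisions)      # equal division of the octave
    return [base * ratio ** step for step in range(divisions * octaves)]

# e.g. scale_from_spectrum([217.0, 433.0, 651.0], divisions=9)
# builds a 9-step-per-octave scale rooted at 217 Hz
```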

The island is called Harakka (Magpie). Harakka is inhabited by birds, and most of the smallish island is preserved. The premises were originally built for the Finnish army, and the place was used to design and study explosives and related chemistry. The buildings, tunnels and bunkers are now used by artists.

The set was performed in the tight wooden auditorium of the main building, which was built in 1929. It’s the kind of place where students and colleagues might once have witnessed the latest achievements of science. In this case a sound surgery took place.

A thirty-minute exploration into the sound of an island.