Amazing! S3 v.2 (picture frame) improv session with Nicola 10.03.2021

Tonight, a refined Saundaryalahari system was ready, and we improvised with very little discussion necessary (!). This system involved audiovisual patches that sent responsive visual information to envelope trackers, random voltage generators AND analogue waveforms generated by the 3Trins. The effect was quite responsive and easy to control, as the visual patches were left with only 3 or 4 variables at a time - much more manageable. The audio manipulation was primarily achieved through Spectral Mode in Clouds (again!).

The score of Saundaryalahari 3 v.2 was cut up, put into slides and played back in random order (similar to the technique used by Mauricio Kagel in Prima Vista), with the Sricakra establishing the beginning and end of the work. So, this process was fully aleatoric.

Listen for yourselves (also with an excellent Zoom recorded complete visual performance, including score and cameras). Click on each image to play the two improvisations from this week’s session.

Saundaryalahari 3 v.2 rehearsal 03.03.2021 with live online system

Today, Nicola and I experimented with 4 takes of version 2 of Saundaryalahari 3. It was a very productive session with some great dialogue to help understand some of the processes at play.

After the first two chaotic takes, we began a great dialogue on the concept of response in an audio-visual system. Nicola – speaking from the interpretational role of the solo instrument – felt that the videos were random and unresponsive, and not quite as they were before. (He was right; the patches as they were set up were not nearly as responsive as I had hoped, so I was struggling to control them.)

The electronic sounds seemed to be excessive and overcomplicated. This was, in my opinion, partly due to the overall system – including the online recording and routing system – being too complicated to control at once, but also because of another factor in the work. The Saundaryalahari score does not contain many complicated “searches” for sound or dynamically active passages that could, potentially, actuate the visual systems. This is intentional, because the Saundaryalahari concept – as a compositional concept – is to represent a platform for finding and establishing SELF and thus peace. This means that the interpreters – the musicians and also the electronics artists – are the ones interfacing with the deeper notions of internal dissonance, conflict and “actuation of things”. So, the music remains calm, but it doesn’t have to stay that way!

This led to the beginning of a wonderful dialogue on the question of embellishment and co-composition: Nicola feels more compelled to respond to visually responsive systems and wanted to know from me – as the composer and “director” of the work – whether he could make more dynamic changes to his interpretation of the work if he sees it working in the audio-visual (and potentially electronic sound) systems. Yes, he can and should! So, a great outcome.

I think the composition should remain stable, but as a co-performer rather than a restrictive device, with the human interpretation permitting the natural agitation that changes the work differently each time. Indeed, perhaps a 3rd version of Saundaryalahari 3 should be made with even simpler structures, so that this natural SELF/co-composition method can be clarified and emphasized further.

Click on the first image below to play the Take 3 version of the session, which had the electronic systems integrated. After this, click the next image below to hear the full S3 v.2, in which we chose to eliminate the unnecessary audio effects (except for the reverb) to concentrate on visual interaction.

This was a great rehearsal and I look forward to next week for further new experiments.

The Score to Saundaryalahari 3 version 2 can be found here.

Online improvisation with Nicola and Yati 05.02.2021

Tonight was an improvisation session with cellist Nicola Baroni, this time working with the new feedback systems together with the geometric audio effect formations. Unlike last session, in which I used an envelope follower to alter the effect outline of the visuals, I sent the signals instead to the VCOs of the 3Trins, which created a more complex sort of visualisation feedback. This was not as effective in relaying the visual information online, especially since there is a lag on the performer’s end. Nevertheless, the layering of the active formations was very pleasing and at times almost like a hand-drawn animation, or like the forms of Walter Ruttmann from the early 20th century. We will aim to use a mix of techniques from the last session next time.

Note: the interface on Nicola’s end was acting up, so the signal is not very clean. This is an experimental recording but also nice to watch!

(click on the image below to stream the 10min improvisation)

This last week: Feedback! (processes)

I’ve been very active this week experimenting with works utilising various feedback parameters, between the Structure and 3Trins modules, but also internally via feedback nodes in the Structure. Some of the works have been directly “composed” with audio circuits simultaneously, but many have been produced “silent”. I don’t think these processes are intrinsically much different from each other anyway…

The feedback processes can obviously create “infinite tunnel” style effects, particularly when they are controlled with LFOs or audio signals, but they appear to really come alive when exploring the various signal settings and parameters to find “fringe events”, which sometimes create unpredictable outcomes. In particular, when interfacing the digital Structure with the analogue 3Trins, there are some voltage-variable situations in the pathways that make them seem particularly organic.

Perhaps expectedly, the outputs tend to mimic Saundaryalahari geometric and organic forms (triangles, lotuses, etc.). I don’t pretend to imagine these are not just “what I want to see”; however, it is interesting that the control parameters used to create them involve feedback processes – thinking about how the Saundaryalahari involves symbiotic co-creative processes – so that is certainly feeling right!

The upper selection of images was from sessions around 28.01.2021 and the lower selection of images was from around 02.02.2021.

Rehearsal 21.01.2021 with working online Saundaryalahari 3 system

The current system allows for online monitoring and visual feedback to the player via Zoom. It’s all working!

Now the work begins: to establish a properly reactive and intuitive visual feedback system. The current new system sends the audio through a pitch and envelope follower, which feed various 3D parameters. These react smoothly and rather slowly, so as to not oversaturate the visual feedback.
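As an illustration only (my own Python sketch, not the actual patch), the smooth-and-slow behaviour described above can be modelled as a one-pole envelope follower with a fast attack and a deliberately slow release, so the visual parameters glide rather than flicker:

```python
import math

def envelope_follower(samples, sample_rate=48000, attack_ms=50.0, release_ms=400.0):
    """One-pole envelope follower: rectify the signal, then smooth it
    with a fast attack and a slow release so the control value moves
    gently enough to drive visual parameters without oversaturating."""
    atk = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env, out = 0.0, []
    for s in samples:
        level = abs(s)                       # rectify the input sample
        coeff = atk if level > env else rel  # rising? use attack; falling? release
        env = coeff * env + (1.0 - coeff) * level
        out.append(env)
    return out
```

The attack/release times here are illustrative guesses; in practice they would be tuned by ear against the visuals.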

The issue of timbre is a complex one, and there seems to be no better solution than to use Pixivisor to visually represent this. This is because timbre and distortions of resonance are particularly complex in resonant instruments such as the cello. Pixivisor has – when properly tuned – an almost analogue representation of this phenomenon, rather like an oscilloscope, so that the player can feel the organic nature of the sound when playing to the visuals.

The following is 11mins of playing (click the picture below to watch the video) with various settings to establish reactivity. Nicola was pleased with the reactivity of the envelope follower – and he is such an excellent musician that the visual delay he encountered had little effect on his playing!


First virtual S.3 rehearsal with Nicola Baroni

So, we are trying now to develop a method to allow the Saundaryalahari system to be used online. After the performance in October, it was apparent that we needed to do quite some work on several aspects of the performance system in order to better reciprocate the processes involved.

Originally, we had planned to have S.3 in a new ensemble constellation and a few performances in Northern Italy in early 2021. Since we are still in uncertain circumstances and unable to travel, it makes sense to rehearse online - potentially also developing a system that can work over the internet.

The system is quite similar to the previous one, in that we can establish reactive audiovisual systems to perform with, but so far there is no avoiding a significant latency in processing on either end. I suppose the best and most sophisticated solution would be to develop a server to process externally from both computers - like a video game server - but the programming is extremely daunting - I won’t go there!

There is a latency in getting the signal from the performer in Italy; then the processes can be synced on my end, and the visual response will have an additional latency in getting back. Overall, it will be several hundred milliseconds at least, even with wired connections. This is perhaps an example where it is helpful that the performer is a seasoned, highly experienced professional. The performer adjusts to different playing latencies and adapts – much more effectively than any musical or technological system!
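As a back-of-envelope illustration of why the round trip reaches several hundred milliseconds, the one-way legs add up quickly. The figures below are hypothetical placeholders, not measurements from our sessions:

```python
# Hypothetical latency budget for the online loop:
# performer -> my system -> visuals -> back to performer.
# All figures are illustrative guesses, not measured values.
legs_ms = {
    "performer audio over Zoom (one way)": 150,
    "audio analysis and visual synthesis": 40,
    "video return to performer over Zoom": 180,
}
total_ms = sum(legs_ms.values())
print(f"round trip ~{total_ms} ms")
```

Even optimistic per-leg numbers put the total well past the threshold where a performer must consciously adapt.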

On the processing end, the signal I receive will be in sync for recording and archiving, at least within half of the latency experienced by the performer. I’m using a routing system through DP that allows the Zoom call to be internally routed via Soundflower input and output to the visual system and through a monitoring system.

The next task is to develop the reactivity of the visuals using the Structure coupled with the 3Trins – something we should have spent more time doing in rehearsal in Zurich in October 2020. Our rehearsal yesterday produced the following comments for developing reactivity in the system. We will meet weekly to develop the system, with the following components to work on:

  1. High and low frequency should be measured.

  2. Timbre: the timbres of string instruments are quite variable, and the frequency response of the body of the cello is irregular. Perhaps use a multi-band gate or spectrum map – especially since, as Saundaryalahari is quite melodic, there may not be so many timbre variations.

  3. Perhaps we should measure noise.

  4. Distortion (visual) could be a form of feedback.

  5. Look around frequencies of 1000–3000 Hz; maybe put a gate on them to prevent decibel saturation.

  6. Consider the centroid pitch and its pitch variability. Distortion can also change that variability – low pitches and high pitches move the centroid.

  7. Develop the system to respond within a reasonable amplitude.
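To make items 1, 5 and 6 of the list concrete, here is a rough Python/NumPy sketch (my own illustration, not part of the actual Reaktor/DP system) of measuring band energies, gating the 1–3 kHz band, and computing the spectral centroid of a single analysis frame:

```python
import numpy as np

def frame_features(frame, sr=48000):
    """Toy analysis of one audio frame, sketching list items 1, 5 and 6:
    low/high band energies, a crude gate on the 1-3 kHz band, and the
    spectral centroid. Thresholds and band edges are illustrative."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    low = spec[freqs < 500].sum()                   # item 1: low-frequency energy
    high = spec[freqs > 4000].sum()                 # item 1: high-frequency energy
    band = spec[(freqs >= 1000) & (freqs <= 3000)].sum()
    gated_band = band if band > 1e-3 else 0.0       # item 5: gate against saturation
    centroid = (freqs * spec).sum() / max(spec.sum(), 1e-12)  # item 6: centroid
    return {"low": low, "high": high, "band_1k3k": gated_band, "centroid": centroid}
```

In a real-time patch these values would be computed per frame and smoothed before being mapped to visual parameters.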


Images from new processes: Structure with 3Trins and Saundaryalahari system

The Erogenous Tones had to be sent back for repairs – and a replacement unit arrived on Christmas Eve! As holiday greetings, I created a series of still images produced by the Saundaryalahari system, with the Structure processing the output of the 3Trins. These are sonic audiovisual “images” that represent a musical flow.

As still images, they are very interesting and always unique, which is very visually appealing. The next step after this process will be to construct a complete small-scale performance of a Saundaryalahari work and record both the audio and visual elements concurrently. The still images from that performance will relate to precise times in the performance – so that they will have meaning in both non-real and real time, as well as in both audio and visual form – a potentially more complete form of reciprocity!

Below is a selection of still images, some of them in printed and framed versions. Happy holidays!


Seeing Sound 2020 – Link to the paper: Saundaryalahari – a search for a reciprocal audio-visual system

Please click the image below for the paper delivered at Seeing Sound 2020 on 12.12.2020 entitled:

Saundaryalahari – a search for a reciprocal audio-visual system

Abstract: The Saundaryalahari project is a series of works based on an 8th Century Indian literary work in Sanskrit written by Adi Shankara called the Soundarya Lahari or Saundaryalahari (Sanskrit: सौन्दर्यलहरी) meaning "Waves Of Beauty”. The outputs of this project explore through music, sound and visuals the “non-verbal” creativity found in these ancient texts while utilising the structuralism of the spiritual/graphic formation of the Sricakra, which defines and arranges the verses from outside to inside. As part of this, the Saundaryalahari sound art “system” utilises signal modified electronics and interactive visuals to attempt to translate image into sound and back again – providing the opportunity for the music to have a “feed forward” loop to regenerate musical materials within a composition and interact directly with the performer or ensemble during a performance.

In this paper, I will explain the genesis and evolution of the current Saundaryalahari reciprocal audio-visual system and why going beyond an arbitrary representation of visuals to audio/music allows for more distinct co-compositional approaches in performance and creation. Considerations on research into synaesthesia, experimental animation and improvisation will be discussed along with an introduction to the most recent Saundaryalahari audio-visual sound art works by the author. The project is funded by a Creative Scotland – Sustaining Creative Research Grant with the goal of exploring how Sound Art works can play a role in potentially transcending inequities of excessive miscommunication (i.e. social media, fake news and other overly prolific verbal/text-based communication in media).

Link to the paper: Saundaryalahari – a search for a reciprocal audio-visual system


Addition to the audiovisual arsenal: an Erogenous Tones Structure!

With some of the funding from the Creative Scotland grant, I purchased a new visuals module: the Erogenous Tones Structure! The intention was to have the possibility of processing the analogue video generated by the 3Trins, in order to have more control over the vertical and horizontal dimensions of the 3Trins’ visual generators, which tend to be generated only horizontally, from left to right. My rationale is that, if the intention is to create analogous visuals from audio signals and/or reactivity, there should be ways of controlling the creation of more sophisticated structures. The Structure seems to do this incredibly well, even in the few short hours I’ve had with it so far!

Here are some examples of the new forms of experiments as still photos. I’ve also included a video of the most compelling example so far, the “Endless Kaleidoscope”, at the bottom. It is self-generating and utilises the 3Trins in oscillation (from several different internal and external oscillators) sent through the Structure, which is set to a sort of kaleidoscope effect. It is simply wow… and I look forward to many, many more amazing discoveries.

The next step after this will be to devise the proper audio-reactive/generative reciprocal setup for further Saundaryalahari experimentation. More to come…

Endless Kaleidoscope 10.12.2020 (click for video)


New reciprocal experiment 25.11.2020

I’ve experimented with a new work using the reciprocal system attempted at the Zurich performance of Saundaryalahari 3 last month. This audio-visual work involved placing a pre-recorded sound field generated by a Ciat-Lonbarde Cocoquantas 2 into the same responsive tracks of the system as a live recording (previously where the cello was recorded). Since there is only one player (in contrast with two, where one plays and the other manipulates the audio-visual system), the performance was done in two takes, so it perhaps cannot be considered entirely reciprocal. The Pixivisor system, which contributes to the initial system, was also not active.

Here is a link to the streaming performance: https://drive.google.com/file/d/1hIBO9TpanhBIJQJ4LZ9NXRkbjAYctSGa/view?usp=sharing

Report on the first performance of Saundaryalahari 3 v.1

So, the first performance of Saundaryalahari 3 v.1 has taken place at the IMMSANE Zurich Congress on 2nd October. Firstly, the performance was much different than anticipated: there were some technical issues in the incredible ZHdK Dolby Atmos theatre where it took place, in that the cinema projector did not want to resolve the analogue PAL signal coming from the Blackmagic interface. Ultimately, this prevented very much (or any) rehearsing from taking place. The performance really was just a test of how well the notation and system worked.

I’m happy to say that it went extremely well, and the feedback I have received from the audience expressed positivity about the form, “roundedness” and maturity of the piece. The audience sat patiently and attentively for more than 40 minutes!

The notated music score is here

Click the image to stream the performance below.

Saundaryalahari 3 (v.1) programme information IMMSANE Congress 2020

IMMSANE Zurich Congress 2020

Zurich University of the Arts, Zurich, Switzerland

8:00pm 3.K17 Dolby Atmos

Saundaryalahari 3 (v.1) 2020

for solo violoncello and audio-visual electronics performance

Nicola Baroni - Violoncello

Yati Durant - audio-visual electronics

Saundaryalahari 3 (v.1) for solo violoncello and audiovisual electronics performance is part of the Saundaryalahari audio-visual sound art compositions series. The performance uses signal modified electronics and interactive video-synthesised visuals utilising hardware and software to translate image into sound and back again – providing the opportunity for the music to have a “feed forward” loop to regenerate musical materials within the performance. 

The works are inspired by an 8th Century Indian literary work in Sanskrit written by Adi Shankara called the Soundarya Lahari or Saundaryalahari (Sanskrit: सौन्दर्यलहरी), meaning "Waves Of Beauty”, and are performed as a meditation on the spiritual/graphic formation of the Sricakra, which defines and arranges the verses from outside to inside and includes aspects of universal sociality and spirituality that affect many things, especially sources of creativity.

The Saundaryalahari project involves researching audiovisuality using important synonyms that are meant to be understood, as much as possible, using non-verbal methods. Music and moving image (film) provide the most complete possible communication to achieve this, especially as they are synesthetically co-related within the creative media arts. It is representative of our contemporary society that a great deal of the population is now focussed on online audio/visual communication. This leads to inequities of excessive miscommunication (i.e. social media, “fake news”, etc.), often due to overly prolific verbal/text-based communication in media. Music and sound in combination with moving image media (film), therefore, may play a significant role in transcending these problems.

As such, these audio-visual compositions explore how a conscientious focus on the reciprocity of music and sound within images might potentially inspire others to think about the positive benefits of non-verbal communication. 

A research blog outlining the current research progress can be found here: http://www.yatidurant.com/saundaryalahari-blog. Many thanks to Creative Scotland for their support in enabling this compositional research.


Visual experiments before 1st performance of Saundaryalahari 3

The first performance of Saundaryalahari 3 is taking place in a little more than a week at the IMMSANE Zurich Congress. I’m testing the reactivity and robustness of the current setup, consisting of a 3TrinsRGB+, OP-Z, Blackmagic Ultrastudio HD Mini and MOTU 828es controlled by a mic into MOTU DP 9 and NI Reaktor (for reactivity into the 3Trins).


This time, I included Pixivisor to “listen” to the audio input and superimposed this back into the video input of the 3Trins. The results are pretty complex (at first) but make more sense in terms of creating a reciprocal audiovisual reactive system. The trick seems to be setting the correct frame rate and contrast in Pixivisor, and there is still some fine-tuning to do.



What is becoming apparent is that this system – in contrast with a generative system or “intelligent” (A.I.) based system – is utilised sort of like an instrument, in the way that it needs to be adjusted to create the correct reactivity and response with the performer. This is good. I would rather the performers manually adjust their audiovisual reactive “instrument” to their own liking than have an invisible system try to do it for them. I believe some things are meant to be done by hand.

Here are some examples (in lo-res) of the visual system in action:






Development of a system for Saundaryalahari 3

I’m very pleased that Creative Scotland has awarded me with the Sustaining Creative Development fund grant to support developing Saundaryalahari 3 over the next few months. As part of this, there are several stages of development I am aiming to complete by the start of 2021.

A performance at the IMMSANE Zurich congress on 2nd October 2020 with Nicola Baroni:

Saundaryalahari 3 will have its first performance, as a duo interactive sound art performance, together with myself and the Bologna-based cellist (and good friend) Nicola Baroni, at the IMMSANE Zurich congress in the University of the Arts (ZHdK) on 2nd October 2020 at 8pm. The performance will utilise a new audio/visual system (as described below) and the notated material derived from the large-orchestra version of Saundaryalahari 3 currently in composition.

The aim is to test the new system in a performance atmosphere, as well as the way the visuals affect the performance of the work. There are also plans to further perform S3 with Nicola, and other musicians, in a concert tour of Italy early in 2021.

A system for interactive reciprocal visuals:

Developing a system that not only produces visuals to accompany sound and music, but especially interacts with sound to CREATE visuals and thus co-compose with the live interpretation of the work, is a key goal for the upcoming performances of Saundaryalahari 3. I’ve made some changes to the interactive system and moved slightly away from only using Pixivisor to produce and resolve audio-visuals. The 3TrinsRGB+ visual synth, developed by Gieskes, is a more analogue and interactive visual platform that I have now included in the performance setup.

One of the interesting developments is how the 3Trins modulates its video oscillators with audio signals rather than CV signals. As with Pixivisor, the system is extremely sensitive and requires precise settings to resolve steady waveforms. Also, those audio signals are generally not tonal or compatible with tempered scales; they loosely work with certain frequencies built upon the overtone series.

So it was necessary to improve my knowledge of Reaktor Blocks to develop a consistent series of envelope-followed oscillators that let certain factors of the performance alter the visuals – not just accompany them. The way the system works is by altering the waveforms of the oscillators, not the pitch, and sending them to the oscillators of the 3Trins. With a bit of fiddling, they look very similar. Then I can also interact with the visual synth by twisting knobs and adjusting effects, audio reactivity, etc., to provide a sort of audio/visual feedback for elements that allow improvisation in the work.
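A minimal sketch of that idea in Python (my own reconstruction for illustration; the real patch lives in Reaktor Blocks): an envelope value morphs the oscillator’s waveform between a sine and a square while the frequency stays fixed, so the shape changes rather than the pitch:

```python
import math

def shape_morph_osc(freq, env_values, sr=48000):
    """Oscillator with fixed pitch whose waveform is morphed by an
    envelope: env = 0 gives a pure sine, env = 1 gives a square, and
    intermediate values crossfade between the two shapes."""
    out, phase = [], 0.0
    for env in env_values:
        sine = math.sin(2 * math.pi * phase)
        square = 1.0 if sine >= 0 else -1.0
        morph = max(0.0, min(1.0, env))          # clamp envelope to [0, 1]
        out.append((1.0 - morph) * sine + morph * square)
        phase = (phase + freq / sr) % 1.0        # advance phase; pitch unchanged
    return out
```

Feeding such a shape-varying audio signal to the 3Trins’ video oscillators then changes the visuals without retuning them.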

Future plans and Creative Scotland fund purchases:

The first item I purchased with the grant was a video interface that could stream and capture performances: a Blackmagic Ultrastudio HD Mini, which will allow the 3Trins to be recorded on the computer in real time, along with an analogue mix of all the audio components for archiving. In performance, this will allow real-time monitoring of the visual components and recording of the performance, along with an HDMI projection for the audience. So far, the quality looks great.

I’m aiming to develop different and relevant forms of visuals for future iterations of S3. Perhaps these should be more related to actual Sricakra constructions, triangles and things – I’m not entirely sure yet – through vector-based visuals and other visual synthesisers. I’m looking at the LZX Chromagnon, which promises to do a lot of this, or perhaps also a Radiator. Certainly, I will invest in a good quality projector and possibly a laser for performance.

Closer to reciprocal processes: adding Pixivisor to 3TrinsRGB+1c

I have gained a lot more information and understanding of how audio reactivity works when patching the 3Trins. The waveforms from each CV input can trigger a cut-off of the colour channel oscillators, which can follow the shape of the imported video shapes. This has good potential for achieving reactive linear audio/video accompaniment – and, depending on the location of the CV input in the chain, also allows for the design of delayed or non-congruent video accompaniment (much more interesting).

I managed to record a good-sounding piece last night, but encountered some significant problems with the recorded format. Since they are just experiments, the system I developed had very little monitoring and levelling control and – unfortunately – recorded most audio on a single channel of the stereo image. So, this first piece is only a good bit of fun and improvisation, but not presentable.

I spent the morning rerouting the audio and video through an interface and mixer. I also received an HDMI to Composite AV adapter today, after much deliberation as to a more flexible analogue video input scenario. This system will allow the computer to generate images and feed them back into the video synth system - all the while able to reroute internal audio signals. This allows for a reciprocal system to be experimented with.

The first example is simply sending a digital oscillator to both visual systems (3Trins and PV) and experimenting with the linearity of a single audio-reactive system. It seems to work, though the phasing particulars of each system need to be calibrated. Also, the apparent increased fluidity of the image in the analogue system seems no comparison with the rich and flowing output of the 3Trins. One can complement the other, however, and there is a lot to experiment with.

The second experiment has to do with a single PV system broadcasting visuals and sound into the CV input of the 3Trins. This works very well, and while it is not yet so reactive (the audio to the 3Trins), it is the best way forward to develop a more reciprocal system.