Visual experiments before 1st performance of Saundaryalahari 3

The first performance of Saundaryalahari 3 takes place in a little over a week at the IMMSANE Zurich Congress. I’m testing the reactivity and robustness of the current setup: a 3TrinsRGB+, an OP-Z, a Blackmagic UltraStudio HD Mini, and a MOTU 828es, controlled by a mic routed into MOTU DP 9 and NI Reaktor (which generates the reactive control signals for the 3Trins).
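At its core, the mic-to-3Trins stage is an envelope follower: the microphone level is turned into a smoothed control signal that then drives the video parameters. The actual patch lives in NI Reaktor; below is only a minimal Python sketch of the idea, where the `sounddevice` library, block size, and smoothing constant are my own illustrative assumptions, not the performance setup itself.

```python
# Conceptual sketch of the mic -> control-signal stage.
# The real version is an NI Reaktor patch; this only illustrates
# the envelope-follower idea. Library choice (sounddevice), block
# size, and smoothing coefficient are assumptions.
import numpy as np
import sounddevice as sd

SMOOTHING = 0.9   # higher = slower, steadier response (assumed value)
level = 0.0       # current smoothed envelope

def audio_callback(indata, frames, time, status):
    """Compute an RMS level per audio block and smooth it over time."""
    global level
    rms = float(np.sqrt(np.mean(indata[:, 0] ** 2)))
    level = SMOOTHING * level + (1.0 - SMOOTHING) * rms
    # In the real setup, this value would be scaled and sent on as the
    # reactive control signal for the 3Trins' video parameters.
    print(f"control level: {level:.3f}", end="\r")

with sd.InputStream(channels=1, samplerate=48000,
                    blocksize=512, callback=audio_callback):
    sd.sleep(10_000)  # listen for ten seconds
```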

[Image: IMG_7074.jpeg]

This time, I included Pixivisor to “listen” to the audio input and superimposed its output back onto the video input of the 3Trins. The results are pretty complex (at first) but make more sense in terms of creating a reciprocal audiovisual reactive system. The trick seems to be setting the correct frame rate and contrast in Pixivisor, and there is still some fine-tuning to do.
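To make the frame-rate/contrast trade-off concrete: at a fixed sample rate, a higher frame rate leaves fewer audio samples per frame, so each frame carries less detail, while the contrast setting decides how hard the incoming levels are pushed toward black and white. Here is a rough sketch of that relationship; Pixivisor’s actual encoding is its own, and the resolution, brightness mapping, and numbers below are assumptions for illustration only.

```python
# Rough illustration of how frame rate and contrast interact when an
# audio stream is rendered as video, Pixivisor-style. The real
# Pixivisor encoding differs; resolution and mapping are assumptions.
import numpy as np

SAMPLE_RATE = 48000

def audio_to_frame(samples: np.ndarray, fps: float, width: int = 64,
                   contrast: float = 1.0) -> np.ndarray:
    """Pack one frame's worth of audio into a grayscale image."""
    samples_per_frame = int(SAMPLE_RATE / fps)
    height = samples_per_frame // width   # fewer rows at higher fps
    block = samples[: height * width].reshape(height, width)
    # Map -1..1 audio to 0..1 brightness, applying contrast around grey.
    img = 0.5 + 0.5 * block * contrast
    return np.clip(img, 0.0, 1.0)

noise = np.random.uniform(-1, 1, SAMPLE_RATE)  # one second of "audio"
print(audio_to_frame(noise, fps=12.5).shape)   # (60, 64): slower, more detail
print(audio_to_frame(noise, fps=25.0).shape)   # (30, 64): faster, coarser
```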

[Image: IMG_7073.jpeg]


What is becoming apparent is that this system, in contrast with a generative or “intelligent” (A.I.) based system, is used rather like an instrument: it needs to be adjusted by hand to create the correct reactivity and response with the performer. This is good. I would rather have the performers manually adjust their audiovisual reactive “instrument” to their own liking than have an invisible system try to do it for them. I believe some things are meant to be done by hand.

Here are some examples (in lo-res) of the visual system in action: