Talkative Gods: an interactive theater production
Studio performances took place on Dec. 13, 14, 15, and 16, 2000 at Suddensite Studio, NYC
Concept and Text – Bruce Gremo
Direction – Kyle De Kamp, Bruce Gremo, Rene Beekman
Composition, Max/MSP programming, MSP performer – Bruce Gremo
Video, nato programming, nato operator – Rene Beekman
Actress, MSP performer – Kyle De Kamp

Talkative Gods is a text-based work, and it is multimedia in at least two ways. First, it uses a three-computer instrument concept, developed by composer Bruce Gremo and video artist Rene Beekman, which enables independent real-time DSP improvisation in both the audio and video domains and also lets the two performers cross-route control, so that the audio performer controls the visuals and vice versa. What they primarily improvise with is the configuration of this routing, constantly changing the control surface and its topography; in this regard, it is perhaps more accurate to describe the work as cross-media. Second, in its final form the piece will include a website instrument component enabling remote control, an idea that thematically permeates the text. In the version to be realized this year with the support of Harvest Works, the website component will not be implemented.
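The cross-routing idea can be sketched in pseudocode-like Python. This is purely illustrative (the actual system is built in Max/MSP and nato, not Python, and all names here are invented for the example): control messages from either performer pass through a routing table that can be rewired mid-performance, which is what the performers improvise with.

```python
# Illustrative sketch only: a routing table that can be re-patched on the fly,
# so a performer's control stream can be sent to a different medium mid-show.
# Names ("gremo", "audio", etc.) are hypothetical labels for this example.

class ControlRouter:
    def __init__(self):
        # Maps a control source to the destination currently receiving it.
        self.routes = {}

    def patch(self, source, destination):
        """Rewire a source to a new destination in real time."""
        self.routes[source] = destination

    def send(self, source, value):
        """Deliver a control value to whatever destination is patched in."""
        destination = self.routes.get(source)
        return (destination, value) if destination else None

router = ControlRouter()
router.patch("gremo", "audio")    # musician controls his own domain...
router.send("gremo", 64)          # -> ("audio", 64)
router.patch("gremo", "video")    # ...then cross-routes into the video
router.send("gremo", 64)          # -> ("video", 64)
```

Changing the routes dictionary is the improvisational act: the "instrument" is the topography of the routing itself, not any fixed set of knobs.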

Talkative Gods uses the software packages Max/MSP and nato: the former for audio and music composition, the latter for real-time video performance. In its current form, the piece requires three performers: Gremo and Beekman in their respective audio and video roles, and an actress, Kyle De Kamp, who recites the text. The musician manipulates musical and pre-recorded text materials, as does the actress. The musician's control comes entirely from instrumental sources; the computer responds to pitch and pitch-sequence triggers, interval and interval-sequence triggers, gesture triggers, rhythm triggers, and pitch-bend and interval-direction scrolling techniques. The musician provides a background polyphony of text and music to the foreground manipulations of the actress. As an alternative to the highly complex process of speech recognition, the spoken voice is here treated like a musical instrument: it is subjected to pitch, intensity, bandwidth, and gesture analyses, then converted into MIDI, which is in turn routed for control purposes. In other words, by treating the voice as a musical instrument, the actress can provide a live feed through a microphone and can, through her vocal gestures and inflections, instruct the computer to process her voice or, for example, call upon pre-recorded materials to cross-synthesize with her live voice. At the same time, the text functions at its own semantic level. Meanwhile, the video performer projects, on a screen behind the actress, images that comment on the text through both mimicry and indifference. His instrument of control is the Wacom tablet.
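The pitch-to-MIDI step described above can be illustrated with the standard conversion formulas. This is a minimal sketch, not the actual MSP patch: it assumes the analysis stage has already extracted a frequency (in Hz) and a normalized amplitude from the voice, and shows how those become MIDI note and velocity values fit for routing.

```python
import math

# Illustrative sketch (not the actual MSP patch): detected vocal pitch and
# intensity are quantized into MIDI note and velocity values, which can then
# be routed as control data like any instrumental input.

def hz_to_midi(freq_hz):
    """Map a detected frequency to the nearest MIDI note number (A440 = 69)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def amplitude_to_velocity(amplitude, floor=0.0, ceiling=1.0):
    """Scale a normalized amplitude into the 0-127 MIDI velocity range."""
    clamped = min(max(amplitude, floor), ceiling)
    return round((clamped - floor) / (ceiling - floor) * 127)

# A vocal gesture becomes a stream of (note, velocity) control events:
print(hz_to_midi(440.0))           # 69 (concert A)
print(hz_to_midi(261.63))          # 60 (middle C)
print(amplitude_to_velocity(1.0))  # 127
```

The point is that once the voice is in MIDI form, the distinction between "actress" and "instrumentalist" disappears at the control layer: both feed the same routing network.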

The actress's control can extend into the video, just as the musician's can, and the video performer can of course do the same in kind. Three control sources: musician, video performer, actress. Three control destinations: music, text, and video imagery. In addition, the actress provides a real-time camera feed; similar types of control can be achieved with this feed when the camera image is analyzed for certain types of movement, which can then be scaled into MIDI control.
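The camera-feed analysis can be sketched as simple frame differencing. This is a hypothetical illustration, not the production's actual nato analysis: frames are modeled as flat lists of grayscale pixel values, and the average amount of change between successive frames is scaled into the 0-127 MIDI range.

```python
# Hypothetical sketch of movement-to-MIDI scaling: successive camera frames
# are compared, and the amount of change is mapped into MIDI control range.
# Frames here are flat lists of grayscale values (0-255); the real system
# analyzes a live video feed.

def motion_amount(prev_frame, next_frame):
    """Mean absolute pixel difference between two frames."""
    diffs = [abs(a - b) for a, b in zip(prev_frame, next_frame)]
    return sum(diffs) / len(diffs)

def motion_to_midi(prev_frame, next_frame, sensitivity=255.0):
    """Scale detected movement into a MIDI control value (0-127)."""
    scaled = motion_amount(prev_frame, next_frame) / sensitivity * 127
    return min(127, round(scaled))

still = [10, 10, 10, 10]
moved = [10, 10, 200, 200]
print(motion_to_midi(still, still))  # 0 (no movement)
print(motion_to_midi(still, moved))  # 47
```

With the `sensitivity` parameter (an assumed knob for this sketch), small gestures by the actress could be made to produce large control swings, or vice versa.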

Studio performances of the work in progress will take place on Dec. 13, 14, 15, and 16, beginning at 8:00.

185 Lafayette St. (between Broome and Grand) New York NY 10013

Text: Bruce Gremo