There is a problem with having audio in and audio out in the same room.
As we know, the amplifier takes from the microphone, the microphone takes from the amplifier, and the world collapses. I recently read about what exactly makes that ‘feedback’ sound, but I’ve forgotten how exactly it works. It is a bad sound.
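The usual account, if I have it right, goes something like this: any frequency whose gain around the loop, microphone to amplifier to speaker and back to microphone, comes out above one grows on every pass until something clips. A toy simulation, with every number invented for illustration, makes the runaway visible:

```python
import numpy as np

# Toy model of acoustic feedback: the speaker's output reaches the
# microphone again after a short delay and gets re-amplified.
# All numbers are invented for illustration; this is not a real room.
delay = 40           # samples of speaker-to-microphone travel time
loop_gain = 1.2      # round-trip gain; above 1.0 means runaway
n = 2000

signal = np.zeros(n)
signal[0] = 0.001    # a tiny initial disturbance: someone coughs
for i in range(delay, n):
    # what the mic hears now includes the amplified sound
    # from one round trip ago
    signal[i] += loop_gain * signal[i - delay]

print(f"peak amplitude: {np.abs(signal).max():.2f}")
# with loop_gain = 1.2 the cough is now thousands of times louder;
# set loop_gain = 0.95 and the same loop lets it die away
```

(The real effect picks out whichever frequencies the room and the gear favor, which is why feedback has a pitch and not just a volume.)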
And of course if you’re devising an environment-aware music system, there’s the problem that the machine will play along with itself, creating its own environment. It’s like the opposite of ‘garbage in, garbage out’: there’s no difference between the garbage and the food. The recent past becomes the near future becomes the recent past, and they have to be different in order to move… forward.
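Here is that collapse in its most minimal form. The response rule is a deliberately dumb invention of mine, answer a step above whatever you hear; the point is the loop, not the rule:

```python
# A sketch of the self-listening trap: an 'environment-aware' player
# whose environment is mostly its own output. The respond() rule is
# a toy invention; any rule would do.
def respond(heard_pitch):
    return heard_pitch + 1   # answer a step above whatever you hear

room = 60                    # seed: someone plays middle C (MIDI 60)
played = []
for _ in range(8):
    heard = room             # the machine cannot tell self from other
    note = respond(heard)
    played.append(note)
    room = note              # its own output becomes its next input

print(played)                # [61, 62, 63, 64, 65, 66, 67, 68]
# the recent past and the near future have collapsed into one signal;
# the machine is chasing itself up the keyboard
```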
And if you’re playing with a person… how does that person understand his separateness?
The person is not making an echo, and not playing the same ‘thing’ as you, which helps. But he’s not depending only on sound, either. And he’s deciding what features to respond to — not to your sound, necessarily, but to you, through some combination of features and imitations.
Making a machine respond musically to a gesture is not in itself so problematic… but picking out something to play with? In an engaging way? The question is open.
When we listen to each other, we do not necessarily respond to the sound we hear, at least not as a physical sensation. We can store the sound as an idea or shape, remember it, draw from it, vary it, put it on our own terms, and then respond to it (or not). We actually require ourselves to do something different. We separate ourselves in order to establish our own selves.
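That sequence of storing, varying, and answering is something you can at least gesture at in code. Here is a sketch under my own assumptions: the heard phrase is reduced to a melodic contour, an idea of the sound rather than the sound, and the reply is required to differ by inverting that contour. Both the representation and the inversion rule are mine, chosen only to illustrate the ‘put it on our own terms’ step:

```python
# Reduce a heard phrase to its contour: +1 for an upward step,
# -1 for a downward step, 0 for a repeat. The shape is remembered,
# not the sound itself.
def contour(notes):
    return [(b > a) - (b < a) for a, b in zip(notes, notes[1:])]

# Reply by inverting the remembered shape: answer up-moves with
# down-moves. A deliberately simple stand-in for doing something
# different with the other player's idea.
def reply(notes):
    shape = contour(notes)
    out = [notes[-1]]                   # begin where the other left off
    for step in shape:
        out.append(out[-1] - 2 * step)  # move by a whole step, inverted
    return out

heard = [60, 62, 64, 62]                # C D E D
print(contour(heard))                   # [1, 1, -1]
print(reply(heard))                     # [62, 60, 58, 60]: related, not an echo
```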
And what would you do to have a machine remember itself — its own sound — as different from yours? Well, first it would have to have ‘a sound’. It’s not clear that it does, if it doesn’t physically resonate across space. But of course, synthesis is plenty successful on the large scale, so why exclude it?
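The crudest version of such a memory that I can picture: keep a buffer of what the machine just played, and check incoming sound against it by correlation, treating a close match as its own voice coming back. The threshold and the test signals here are arbitrary choices of mine:

```python
import numpy as np

# Self-memory by correlation: does the incoming audio match what
# the machine itself recently played? The threshold is an arbitrary
# illustration value, not a tuned one.
def is_probably_self(incoming, own_recent, threshold=0.9):
    a = (incoming - incoming.mean()) / (incoming.std() + 1e-9)
    b = (own_recent - own_recent.mean()) / (own_recent.std() + 1e-9)
    corr = np.correlate(a, b, mode="full") / len(a)
    return np.abs(corr).max() > threshold

t = np.linspace(0.0, 0.25, 2000)
own = np.sin(2 * np.pi * 220 * t)      # what the machine played: A3
echo = 0.6 * own                       # the room returning it, quieter
other = np.sin(2 * np.pi * 327 * t)    # a person playing something else

print(is_probably_self(echo, own))     # True: my own sound, ignore it
print(is_probably_self(other, own))    # False: someone else, respond
```

Which is, of course, just the machine doing crudely what the person in the room does without thinking.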