Interview by Guy Harries

Interviewer: Guy Harries, vocalist/composer/performer/sound-artist,
for his website Live Electronic Sound
3 June 2013

https://www.liveelectronicsound.com/dafna-naphtali

Excerpt below; see the link above for the full interview.

Q: Your performance involves a lot of interactive electronics. Could you describe your set-up?

A: Most of the work I’ve done has been me controlling live sound-processing using software I developed in Max. Initially, I used an Eventide H3000 as my main instrument, and all these years later it still sounds really great. I had Peavey sliders going into a Max patch which had routings and combinations of parameters as “presets” that I used for controlling and sequencing the processing I was doing with the Eventide. I made all these Max patches in grad school and then built more and more stuff around them, occasionally throwing away things that were no longer needed or viable.

Usually I’m looking to make multiple simultaneous parameter shifts, because that gets interesting results: things like changing the pitch-shift and increasing the feedback suddenly, and at the same time. Over time, I started grouping parameters and making them into my own presets. Then I figured out that it’s fun to sequence the presets, so I started sequencing these radical shifts in parameter changes, using a polyrhythmic metronome I had programmed.
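
As a rough illustration of that idea (in Python rather than Max, with invented parameter names and values, not the actual patch), a polyrhythmic metronome that steps through grouped presets might look something like this:

    # Sketch only: several metronome rates running at once; every tick of the
    # first rate recalls the next "preset", i.e. a group of effect parameters
    # (pitch-shift, feedback, delay time) that all jump together.
    import itertools
    import time

    PRESETS = [
        {"pitch_shift": +7,  "feedback": 0.85, "delay_ms": 350},
        {"pitch_shift": -12, "feedback": 0.20, "delay_ms": 90},
        {"pitch_shift": +3,  "feedback": 0.60, "delay_ms": 1400},
    ]

    def apply_preset(preset):
        """Stand-in for sending the grouped parameter changes to the processor."""
        print("preset ->", preset)

    def polyrhythmic_metronome(rates_hz, duration_s=10.0):
        start = time.time()
        next_tick = {r: start for r in rates_hz}
        presets = itertools.cycle(PRESETS)
        while time.time() - start < duration_s:
            now = time.time()
            for rate in rates_hz:
                if now >= next_tick[rate]:
                    next_tick[rate] += 1.0 / rate
                    if rate == rates_hz[0]:
                        apply_preset(next(presets))   # radical grouped shift
                    else:
                        print("pulse at", rate, "Hz")
            time.sleep(0.005)

    if __name__ == "__main__":
        polyrhythmic_metronome([1.5, 1.0])   # a 3-against-2 feel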

 

New Music Box blog posts Oct 2017

Live Sound Processing, Improvisation – aesthetics and technical choices in real time.

Below are four blog posts by Dafna Naphtali (originally written for New Music Box) about Live Sound Processing and Improvisation. Part tutorial, part manifesto, the series is Naphtali’s take on electronic musicianship for performers.

These posts can be found in the New Music Box archive and will be moved here soon.

LIVE SOUND PROCESSING AND IMPROVISATION
https://nmbx.newmusicusa.org/live-sound-processing-and-improvisation/
October 5, 2017

DELAYS AS MUSIC

October 12, 2017

DELAYS, FEEDBACK, AND FILTERS: A TRIFECTA

October 19, 2017

RESONATING FILTERS: HOW TO LISTEN AND BE HEARD

October 26, 2017

Musical Instruments in the 21st Century

“What if Your Instrument is Invisible?”

is my chapter in the new book “Musical Instruments in the 21st Century: Identities, Configurations, Practices” (December 2016, Springer Singapore).

Abstract: As an electronic musician I am largely occupied with the capture and manipulation of sound in real time—specifically the sound of instruments being played by other musicians.

Also being a singer, I’ve found that both of my instruments are often perceived as “invisible”.

This article discusses various strategies I developed, over a number of years, in order to “play” sound manipulations in musically reactive ways, to create a live sound-processing “instrument”.

I encountered problems in explaining what I do, technically and musically, to other musicians, audiences, and audio engineers. These difficulties caused me to develop specific ways to address the aesthetic issues of live sound-processing, and to better incorporate my body into performance, both of which ultimately helped alleviate the invisibility problem and make better music.

You can download/read my chapter here — “What if Your Instrument is Invisible?” or purchase the chapter here: http://www.springer.com/gp/book/9789811029509

Interview on Cycling74.com

“An Interview with Dafna Naphtali”  (from Cycling74.com — Sept. 20, 2011)
http://cycling74.com/2011/09/20/an-interview-with-dafna-naphtali/


Interviewing Dafna Naphtali was especially exciting for me. Sure, it could be because she’s one of the few women involved with Max since its earliest days. But I’m especially interested because she’s a vocalist, composing beautiful vocal music amidst an electronic world. A lovely break from straight synthesis, her work is a mix of organic source and digital processing. I believe that her fantastic input during her collaboration with Eric Singer’s Lemur Robots really added life to what could have been a very sterile piece. She’s also an inspiring teacher and has some interesting approaches to teaching Max.

Can you describe your work to me?

I have a very eclectic music background. I’ve been performing since I was a teenager, all different kinds of music. I was very interested in Near-Eastern and Greek music because of my cultural background.

The majority of what I’ve been doing for the last 15 years with Max has been live sound processing in an improvised music context. I’m a vocalist, so I process my own voice as well as the other musicians I play with.

Branching out from there I composed some chamber works. I found that in each piece I ended up adding some processing. I wrote something for Disklavier (for Kathleen Supové), and I have an early flute piece of mine that I’m reworking to make it more of a Max piece. But I think I even used Max to generate the tape part back when I was in grad school.

More recently I wrote music for Eric Singer’s LEMUR Robots. I controlled the GuitarBot, the percussion ModBots, and live sound processing using Wii controllers and my voice, using various rhythmic algorithms and Morse code. I used texts created by online poetry robots. It was a lot of fun to work on.

I often have big aspirations for my projects, but the only big ideas that end up being realized are the ones for which I’m lucky enough to find some funding. For example — I’d been working for a couple of years with a vocal group (Magic Names) that sings Stimmung by Stockhausen (as a singer only). I found it beautiful how this vocal piece is so much like his electronic music in the way it is constructed and even sounds. As my response to Stimmung, I proposed to the American Composers Forum to write a piece for six voices and live electronics. There’s a lot of wonderful vocal music out there, obviously, but nothing combined with electronics and live sound processing in the way I wanted to do it.

We premiered the piece, “Panda Half-Life,” a year ago and recorded it this past April. It’s me plus the five other singers in Magic Names, each of us going into my Max patch where everybody gets processed or looped in real time, using gestural controllers (Wiis and iPhone – I ran the whole piece using the c74 app!). It’s a work in progress, and has evolved a bit since the premiere. The next thing to do is to work more on the electronics to make them sound better and run more efficiently.




The sung parts are about the Tower of Babel, and draw on Balkan music, electro-acoustic music, sound poetry, liturgical chants, even early tape constructions (Hugh Le Caine’s Dripsody is reflected in a section called “Dripsodisiac”). Like many of my pieces there is a Middle Eastern influence — because that’s usually what just comes naturally out of my mouth. But I also came from a jazz background, which I find influences all of the music I write as well.

A lot of my music has improvised and aleatoric elements [chance; from Latin alea, ‘dice’]. My project What is it Like to be a Bat? (a “digital punk” trio with Kitty Brazelton) combined rigorous contemporary classically scored music with a punk noise fest, unpredictable sections, and my craziest Max patches to rhythmically manipulate feedback and live processed sound (hands free — I played electric guitar and sang). Another project is my duo with Chuck Bettis (a hyper-creative Max programmer/performer who processes his voice). The CD Chatter Blip is an “interstellar multi-character audio operetta using a multitude of human, alien, and machine voices, and a mash-up of primal and classic sci-fi and electro-acoustics…” It’s pretty wild, and we started experimenting with controlling Jitter using our voices and Wiis.

Školská 28 Prague

It’s refreshing to talk to a vocalist. I like contemporary vocal music a great deal. But you’re also a musician. What was the first instrument you ever played?

I played piano from when I was seven until I was nine. I didn’t get very far, but I remember listening to classical music at the time and being really enthralled by it. I was just not really getting anyplace with the lessons. Then I composed my first piece in high school. I wasn’t a songwriter, and I didn’t play any instrument through high school, but I started singing in choirs. I was in a madrigal choir in my middle-school years that performed a lot in New York and took us around.

The instructor (Robert Sharon) was amazing, and I met with him again recently and realized he spawned like a thousand musicians in New York. Everybody loves this music teacher. He’s like a Mr. Holland’s Opus guy.

Other than singing, I was at a specialized math-science high school (Stuyvesant in NY), and I wasn’t involved in any music there. But in an English class we had to do something on Hamlet, so I remember I wrote a piece taking one of the soliloquies and turning it into something I sang and played on the piano and put on a cassette tape — and people liked it. I started playing guitar at sixteen, and performing the next year.

So, how did you get into Max?

I made a piece when I took a class with Robert Rowe at NYU in 1992. He had written Cypher, music software that behaved like an improvising musician, and he introduced us to many ideas, and to Max. He wrote Interactive Music Systems and other books that have been very influential.

While I was taking Robert’s class he brought a friend of his, Cort Lippe, to NYU. It was right when Cort was writing Music for Clarinet and ISPW [IRCAM Signal Processing Workstation] for Esther Lamneck. We had an ISPW on loan for a while and I got really hooked.

I was amazed by the live sound processing — being able to use natural sound, take it apart, and simultaneously use it as a control source and as material. I didn’t have words for what I was hearing yet, but it just made the electro-acoustic music really come alive for me in a visceral way.

So after that, that’s pretty much all I did: work on interactive pieces that involved live processing. I think I made a tape piece at some point, but we had the ISPW at NYU only briefly, and then it was gone. The problem then was that although we had Max to control MIDI, there was no MSP audio processing yet on a regular Mac. But luckily, NYU had an Eventide H3000, and its MIDI implementation was really thorough and really great. I started making Max patches that control every aspect of the Eventide, and I really loved the sound of it.




Did you know the late Richard Zvonar?

Of course. Richard Zvonar was the only person I knew who had worked as extensively as I had with the Eventide, extended controllers, and Max (except maybe Joel Ryan). But Zvonar approached it very differently. When he performed live, when I saw him with Robert Black, he was working the front panel. He used Max to create an elaborate librarian program to help him make and develop these great patches that he would then use to perform live. But when he was performing, he was using the buttons you can assign on the front panel, and the internal LFOs.

I made a patch that would allow me to control all those parameters in real time and store collections of parameters into presets. Then I controlled the patch using controllers, using various algorithms I was making, and sequences of presets (often very extreme changes, which I liked very much).
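
Here is a minimal sketch of that idea in Python rather than Max: storing groups of parameter values and recalling them as a bundle of MIDI control changes sent to an outboard processor. It assumes the mido library, and the CC numbers and preset values are invented for illustration, not the actual Eventide mapping or the original patch.

    # Hedged illustration: recall a "preset" by sending every parameter in the
    # group at once as MIDI control-change messages to a hardware processor.
    import mido

    # Hypothetical mapping of parameters to MIDI CC numbers.
    CC = {"pitch_shift": 20, "feedback": 21, "delay_time": 22}

    PRESETS = {
        "glassy":  {"pitch_shift": 110, "feedback": 30,  "delay_time": 10},
        "runaway": {"pitch_shift": 40,  "feedback": 120, "delay_time": 127},
    }

    def recall_preset(port, name):
        """Send the whole collection of parameter values as grouped CCs."""
        for param, value in PRESETS[name].items():
            port.send(mido.Message("control_change",
                                   control=CC[param], value=value))

    if __name__ == "__main__":
        with mido.open_output() as port:   # default MIDI output
            recall_preset(port, "runaway")

A sequence of such presets, triggered from sliders or from a metronome, is then enough to make very extreme, coordinated changes in the processing.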

Some students were watching me give a lecture once and one called out, “Oh my God, there’s a file in there from 1994. I was only two years old!” I’m like, “Shut up.” [Laughs.] I first made the patch around 1994 — something that controls everything on the Eventide. But it did not take long for me to realize that I can’t control everything at the same time while performing. So I started making algorithmic controls and grouping things, and kind of had a complete instrument by around ’95, and then started gigging a lot around New York doing live sound processing and singing.

What I was doing was very seat-of-the-pants, because I really didn’t know anybody else who was doing this, in New York anyway. I got in with a crowd of people doing kind of avant-jazz or ecstatic jazz, free improv under all kinds of names. So I organized some gigs: for myself, Tom Beyer (drums), Paul Geluso (bass), Leopanar Witlarge (woodwinds, brass) and Daniel Carter (sax, flute, trumpet), who were veterans of the whole loft-music scene. I would grab little bits of what they were doing, and create new rhythms and sounds out of it. They’d been playing together for quite some time, and always experimenting, but I was doing something they hadn’t tried before. There really weren’t many people in NY in that music scene who understood what I was doing then, but it is much more prevalent now.

Anyway, I couldn’t actually sample anyone, because I didn’t have MSP (it did not exist yet). And I wasn’t using a sampler. I just had two 1400 ms delay lines and a bunch of filters on the Eventide. But I found all kinds of fun things to do — extracting all kinds of things, filtering and playing with delay times. And I wrote a master’s project about real-time time-domain effects processing.
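
To give a sense of what that kind of effect does, here is a small Python sketch of a feedback delay line (capped at 1400 ms, as on the hardware) with a simple one-pole low-pass filter inside the loop. The parameter names and values are mine, chosen only to illustrate the general technique.

    # Sketch: delayed copies of the input feed back on themselves, and a
    # one-pole low-pass filter darkens each successive repeat.
    import numpy as np

    def feedback_delay(signal, sr=44100, delay_ms=1400, feedback=0.6, damp=0.3):
        """Return the signal mixed with a filtered, feeding-back delayed copy."""
        delay_samples = int(sr * delay_ms / 1000)
        out = np.zeros(len(signal) + delay_samples * 8)  # room for the tail
        buf = np.zeros(delay_samples)                    # circular delay buffer
        lp = 0.0                                         # filter state
        write = 0
        for n in range(len(out)):
            dry = signal[n] if n < len(signal) else 0.0
            delayed = buf[write]                         # oldest sample
            lp += damp * (delayed - lp)                  # low-pass each repeat
            out[n] = dry + lp
            buf[write] = dry + feedback * lp             # feed the echo back in
            write = (write + 1) % delay_samples
        return out

    if __name__ == "__main__":
        sr = 44100
        click = np.zeros(sr); click[0] = 1.0             # an impulse "ping"
        echoes = feedback_delay(click, sr, delay_ms=350, feedback=0.7)

Changing the delay time and feedback on the fly, as described above, is what turns a utility effect like this into playable material.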

When MSP finally came out, I started using it to just serve up some audio files, and maybe record a little bit. Since then, over the years, though I’m still basically using the same patch, my computers have gotten way more powerful — so I added more MSP, I added VST plug-ins. I use GRM Tools for some things, because, as I feel about my Eventide, although I’m a good programmer, I’m not five DSP engineers. I know that no matter what I do it’s not going to sound as good as some of the stuff that the engineers behind GRM or Eventide can make for me — I’m totally happy with having some parts of my setup be made by other talented people. I want to mold the tools to make them do what I want, but I don’t feel like I have to grow the wheat and grind the wheat and do absolutely everything. A good deal of my time goes into the other half of the equation — keeping my voice in shape, being a good musician, and, very importantly, keeping my electronic sounds meaningful as a musical instrument, and not just as a kind of add-on to the music.

The tools that I build are usually the result of a musical need.

How do you think that affects your personal programming style?

Well, it’s like when I started doing free improvised music. I was playing with a drummer who likes to play polyrhythms, (as do I). Another programmer might have thought to herself, “Oh, gee, let’s analyze the audio to figure out what the polyrhythms are and do some kind of artificial intelligence thing to re-create them.” And that’s perfectly acceptable. There’s some great work out there doing that.

But that’s not what I wanted to do. What I wanted was something that would let me play with the drummer, and be in control of my effects processing and sampling in a way that was going to allow ME to play the polyrhythms.

So I build little tap-delay-time patches and little polyrhythmic metronomes and other things to use as control sources. A couple of years ago I got really interested in Morse code — so there’s Morse code built into my main patch now too. When needed, I can use a piece of text related to a particular piece, and Morse code will become part of the rhythm-scape. In the case of Robotica (my piece for the LEMUR bots), the bots are playing Morse code for the word “robot.” Do you care as an audience member? No. You don’t have to.
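
A rough Python sketch of the Morse idea: turn a word into dots and dashes and read them off as a rhythmic pattern, using the standard Morse timing scheme (dot = 1 unit, dash = 3, gap inside a letter = 1, gap between letters = 3). How the resulting events would drive the bots or the patch is only suggested here, not taken from the actual piece.

    # Only the letters needed for "robot" are included, for brevity.
    MORSE = {"r": ".-.", "o": "---", "b": "-...", "t": "-"}

    def word_to_rhythm(word, unit_ms=120):
        """Return a list of (duration_ms, is_sounding) pairs for the word."""
        events = []
        for i, letter in enumerate(word.lower()):
            code = MORSE[letter]
            for j, symbol in enumerate(code):
                events.append((unit_ms if symbol == "." else 3 * unit_ms, True))
                if j < len(code) - 1:
                    events.append((unit_ms, False))      # gap inside a letter
            if i < len(word) - 1:
                events.append((3 * unit_ms, False))      # gap between letters
        return events

    if __name__ == "__main__":
        for dur, on in word_to_rhythm("robot"):
            print("hit" if on else "rest", dur, "ms")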

But in a holistic way, it helps me in my process, that a meaningful word expressed as a code that has an interesting rhythm, can become part of my entire musical process, even my vocal line, and control of the electronics themselves.

Do you use any external hardware controllers?

Yes. I’ve been using a Peavey PC-1600X for years, and even though it’s wearing out, I keep fixing it and will eventually get another one. It’s very flexible, very programmable, very solid. I’ve had people say, “Oh, she’s got one of those old things.” I was like, “Yep, it works.” The other fader controllers I have looked at don’t have 16 faders and 16 buttons that I can program in so many different ways, and it also has two CV inputs, which I use for pedals.

Also I use two Wii controllers in tandem sometimes. And I used iPhones in my vocal piece. That was fun.

How do you use the iPhones?

I was using them because running six Wii controllers is really unreliable and I needed a backup plan. My piece is very reliant on the technology working, and it was giving me nightmares. The older iPhones have a little problem if you’re trying to use the accelerometer and don’t want the screen to flip over. Otherwise I found the c74 app to be very easy to use and trustworthy.

Usually I’m running my own shows, and this was a 15-channel setup at Issue Project Room in New York. Since I was one of the singers for the vocal sextet I had somebody back at the desk running the show for me. I’m not used to being away from my computer while performing, and it was a bit uncomfortable. But then I realized, “Oh, I can run the cues for the whole show from my iPhone.” So I did.

You’re the first person I’ve talked to that uses it for the whole show. That’s great.

It’s perfect. I was able to make my own decisions about the processing and audio events that were happening, even though I was a performer on stage (and without my head buried in a laptop, which can be boring to see). You don’t have to have a laptop on stage if all you’re going to do is hit the space bar to start, and yet I can still monitor what is happening and change little things if I need to.

Do you teach Max?

I’ve been teaching Max at Harvestworks since the mid-’90s. I taught a beginners’ class for many years at NYU, and then passed it on to Joshua Fried when I had my babies and wanted to do something new. I started teaching an advanced class for grad students. This fall I teach electronic music performance (I alternate with Joel Chadabe, who teaches in the spring), and I teach private composition students, maybe students looking to make elaborate performance projects and other kinds of things. I have also taught and consulted privately for a good many people over the years, and in artist-in-residence programs for Harvestworks and Engine 27.

Do you have a philosophy or a theory about how to teach Max?

Yeah. I found that, when I used to teach a lot with Luke DuBois, as well as with others, we all started our classes in similar ways: by making our students create some kind of random “atonal” thing. I think Luke is the one who coined the term Random Atonal Crap Generator for this first patch we had our students make in a class. I like to think of it as a skeleton or Platonic ideal of an algorithmic patch. (It’s a metro connected to a random connected to some MIDI-note or sound-generating combination of objects.) We then start adding on to this, altering the patch.

We start by making something that works, and then flesh it out so that everybody understands that we have bangs that we’re scheduling, we use them to generate some numbers, and then we have to use the numbers to create some kind of output. Then, in between these areas in the patch, we may have to do a little bit of scaling or massaging of numbers, to make the right range of numbers come out for the kind of output that we have chosen.
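
In text-language terms (Python rather than Max), the same skeleton might look like this; the note range, velocities, and timing are arbitrary choices for illustration, standing in for Max’s metro, random, scale, and makenote/noteout objects.

    # Sketch of a "random atonal crap generator": a scheduler, a random source,
    # a little scaling/massaging, and a note-like output.
    import random
    import time

    def racg(interval_ms=250, how_many=16):
        for _ in range(how_many):
            raw = random.random()                  # 0.0 .. 1.0, like [random]
            note = int(48 + raw * 24)              # scale into a two-octave range
            velocity = random.randint(60, 110)     # massage a second number stream
            print("note", note, "velocity", velocity)   # stand-in for noteout
            time.sleep(interval_ms / 1000)         # stand-in for [metro]

    if __name__ == "__main__":
        racg()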

So I always start with that patch. With someone who’s new to Max, it’s important to get them to understand that this one thing I’m showing them is not everything Max does. The “random atonal crap generator” could be many, many things — for creating music, or changing video parameters, or controlling the movement of a robotic arm, or it could be a fan that’s turned on and off. [Laughs.] My friends and I have played a lot of parlor games about what kind of crazy interactive patches we could bake, and what else we could do with this basic patch.

Beyond that, my philosophy is to always try, when I’m teaching, to have the students ask me for something before I give it to them. I try to lead them to water… but I don’t give them the idea to drink it [laughs.]

So, you don’t show them patches as examples?

I feel that if they don’t yet know that they need an object, I’m going to wait to show it to them. Barring the first couple of hours, of course, when they are learning the most basic things. But they need to learn to program for themselves, and they do this best when they discover good programming practice and modular thinking because they need it. And yes, of course, I do show them lots of patches so they can see what is out there and what interesting ideas are being implemented in the community.

The tutorials are wonderful, and very thorough, but I tend to skip around in them. It’s like, say I’m going to Thailand next week, and I’m not going to study Thai grammar. No, I’m going to figure out how to say, “I would like a cup of coffee,” and “thank you very much” and “Where’s the bathroom?” First I want to learn only the words that I would need to function and get around. Then, I’ll start filling in the gaps.

That’s the way I start off when I’m teaching Max. Let’s get enough understood about the different kinds of objects and good programming practice, without getting into every detail about every object. Then I have my students go back and do the tutorials as a way to fill in the blanks. They learn best by making patches that do something they personally need or want to do, and by making mistakes and finding solutions to their problems. That’s my teaching philosophy.

And that is essentially how I learned Max, MSP and Jitter – by getting curious about some audio or musical idea, then finding various ways to express that idea — in the process learning more about how to program more efficiently and in ways that are expandable. I also took a couple of computer science classes to learn more about data structures and good programming practice, and this definitely helped my programming. But having worked with so many artists on their projects and installations and performance pieces over the years really helped me try out a wide variety of ideas and solve way more problems than I would have encountered working alone and only on my own projects.

What’s next for me is a greater focus on solo performance. My initial work with live sound processing made me very dependent on the sounds that other people made, and this was fine, since I really love playing with other people and the interaction on stage. Wanting more independence sonically, I started working with feedback systems controlled by rhythm generators too, and with audio as a control source, all of which allowed me to be more comfortable performing solo and have it be fun (and I try to build in plenty of aleatoric, unpredictable elements so I won’t miss being around other musicians, and to keep me on my toes). I plan to get more of my sound-processing music from the past 15 years out where it can be heard, plus the new things I am doing, and definitely put out another CD with my husband, Hans Tammen. On November 5th I will perform some new solo work at the Vital Vox Festival in New York.