
David Kristian

Head 2 Head

Since our first interview with David Kristian in 2002, he has accumulated an impressive number of releases in feature films, shorts, videos, and on CDs…

Since his last CD release, Rhythms for a Rainy Season (2005), he has focused more on music for films. David's latest work is a collaboration with Emmy Award-winning composer Ed Dzubak to score Douglas Buck's Sisters. He also created music and soundscapes for filmmaker Francois Miron's The 4th Life. Over 70 minutes of David's unsettling music and soundscapes are featured along with the work of legendary sound designers Glenn Freemantle and Andy Wilkinson (28 Days Later, V for Vendetta), and composer Alfons Conde (who wrote the opening titles music) in Nacho Cerdà's long-awaited The Abandoned.

Since 2012, David has worked as an audio artist and co-composer on Ubisoft's much anticipated Watch Dogs game, which features next-gen graphics and sound design.

Be sure to visit David's website for his complete filmo/discography!


A few words with David Kristian

You have been doing a lot of music for horror movies recently. What are the particularities involved in writing music for this kind of project?

Genre films have accumulated a lot of conventions and clichés over the past few decades, but I find the ones that are hardest to break or reinvent are the ones born out of trends. Whenever I get the opportunity to work with a director who understands the importance of having an original soundtrack, I get inspired to try new things. I find too many current horror films are scored like either action flicks or music videos, so whenever possible, I'll suggest a less predictable approach where music and sound design blend in a seamless manner. Filmmakers Nacho Cerdà and Karim Hussain are very aware of the importance of the music of sound as opposed to the sound of music. In The Abandoned, some of the music feels like it's seeping out of the walls and getting under your skin.

Is there a particular challenge dealing with the dynamics needed to scare people?

It's important for the score to support what you see onscreen, but it should never take the audience out of the film. My two principal scare tactics are to creep the audience out through subtle atmospheric music and soundscapes, and the well-worn but effective stinger. I think both can work in tandem at creating tension and release, but the tricky part is balancing the hypnotic buildup and the jump scare. My favorite kind of jump scare is the one where you get frightened by a shocking image before the sound, which is then used to sustain the shock.

I'm very interested in scoring games, where musical elements are more modular. This is an area of soundtrack work where your sense of textures and layering is really put to the test.

When scoring a movie, how do you balance your time between recording and editing a performance?

I usually record and edit everything track by track, but I always go back to tweak things. The reason I'm not big on MIDI sequencing is that a lot of hardware and software instruments cannot reproduce a performance exactly the same way twice when controlled via MIDI. Analogue synths drift, and newer physical modelling virtual instruments have a more random, organic feel that distinguishes them from sampled instruments. Way back when most of my music was made using samples, the MIDI sequencer was king, but when I switched (back) to analogue, I would record everything live, so I got used to playing my instruments: not just the keys, but the wheels, knobs, and sliders.

Being able to tweak things on the fly is important when you're doing soundtracks, especially during takes when you're underscoring a scene with many kinetic elements.

Your latest soundtrack was made using a limited number of instruments, yet it has an amazing orchestral quality. Could you tell us more about how it was created?

The last soundtrack I worked on was Douglas Buck's re-imagining of Brian De Palma's Sisters, which I scored in collaboration with American composer Edward Dzubak. We both used virtual and "real" instruments, but most of the richly layered stuff was done by combining sampled sounds with live recordings of acoustic and electric instruments. The ironic thing is that a lot of the acoustic instruments were used to create effects, and the virtual ones were made to sound orchestral.

At that time, I was really into using my lap steel guitar for string and orchestral effects, but that was before I started using String Studio, which is now my main source of string sounds and effects. Most of my latest work features a combination of sampled orchestral instruments, virtual instruments made on either String Studio or Tassman, and the few analogue synths and effects I have kept for good measure, and in hopes that someone will ask me to do something that sounds like a classic John Carpenter score (laughs).

How well do the String Studio sounds and sampled instruments complement each other?

Anyone working on soundtracks would probably agree that the main reason using a real orchestra sounds better is that sampled libraries are usually limited in terms of articulations and performance effects. Sustained strings are harder to tell apart in a mix, but a sampled run or effect will stand out like a sore thumb. String Studio is really useful for adding realism to sampled string lines, but while imitating acoustic instruments is one of its strong points, it can also be used to invent instruments that have never existed, but blend perfectly with an orchestra.

You'd be amazed at how many of my clients start off by telling me they don't like electronic or "MIDI" sounding things, but the truth is they rarely have any complaints once they hear what can be done with physical modelling.

I guess your selection of instruments has changed since we did the first interview back in 2002. Are you using more software instruments these days?

It makes very little sense to own digital hardware synths and modules these days, but I do believe there is still room for analogue. My studio is almost completely software-based at the moment, with the exception of a few hardware effects and analogue synths and sequencers. Not that all analogue synths sound better than digital ones mind you, as I used to own some of the new modular instruments, and they sounded thin compared to some of the virtual ones.

Ultra Analog and Tassman sound quite warm even without enhancements, and a synth like Tassman has the added bonus of a nearly unlimited number of modules, as opposed to what you can afford, and physically fit, in hardware.

Computer CPUs are now faster than ever, so there are fewer glitches and freeze-ups than there were in 2002. One thing that really needs to be improved is the build quality of hardware controllers, and by build quality I'm not just talking about metal versus plastic, but the implementation of soft labels and better visual feedback.

You're known for liking analog synthesizers, and you use a Moog Little Phatty both live and in the studio. Is there any particular reason you chose this synthesizer?

One of my favorite synths of all time is the Moog Source, so I really felt I should also own one of the latest machines, and one which was designed by the man himself. Contrary to what many believe, the Little Phatty has more in common with the Minimoog than The Source, but it truly is unique and rich sounding; a perfect lead instrument. I also love the design, the build quality, and the fact that there are CV inputs for the VCO, VCF, and VCA. Moog have been very good at updating the LP with new software features, which makes it a living instrument as opposed to a run-of-the-mill product.

How are Tassman and String Studio used with the rest of your live setup?

Now that I'm busy with soundtracks, I rarely get to play out, but there are exceptions, and I have started taking a laptop to shows to complement the hardware instruments. I'm really into sound-on-sound looping, and I like to combine elements generated on the computer with Tassman and String Studio with those of live instruments and effects. The juxtaposition of textures is quite nice, and it lets me avoid the funnel effect, where everything turns into mush because it's all coming from one source going through one looper.

Some people have told me my live sets sometimes feel like film soundtracks, and I agree up to a point, but one main difference is that in a live context, I'll provide the music for you to visualize, whereas in the studio, someone else will provide the visuals and it will be up to me to imagine the music.

Thanks David!