David Kristian is a music-maker, film score composer and sound designer who has been involved in audio for media since the early 1980s. For over a decade he has mostly been making sound and music for games, but he remains active as a music-maker, whose latest release, Relevance and Serendipity, is a classic Berlin School synthesizer album from 2020. Always musically reinventing himself, his latest projects focus on a blend of eerie nostalgia and auditory illusions.
Genre films have accumulated a lot of conventions and clichés over the past few decades, but I find the ones that are hardest to break or reinvent are the ones born out of trends. Whenever I get the opportunity to work with a director who understands the importance of having an original soundtrack, I get inspired to try new things. I find too many current horror films are scored like either action flicks or music videos, so whenever possible, I'll suggest a less predictable approach where music and sound design blend in a seamless manner. Filmmakers Nacho Cerda and Karim Hussain are very aware of the importance of the music of sound as opposed to the sound of music. In The Abandoned, some of the music feels like it's seeping out of the walls and getting under your skin.
It's important for the score to support what you see onscreen, but it should never take the audience out of the film. My two principal scare tactics are to creep the audience out through subtle atmospheric music and soundscapes, and the well-worn yet effective stinger. I think both can work in tandem to create tension and release, but the tricky part is balancing the hypnotic buildup and the jump scare. My favorite kind of jump scare is the one where you get frightened by a shocking image before the sound, which is then used to sustain the shock.
I'm very interested in scoring games, where musical elements are more modular. This is an area of soundtrack work where your sense of textures and layering is really put to the test.
I usually record and edit everything track by track, but I always go back to tweak things. The reason I'm not big on MIDI sequencing is that a lot of hardware and software instruments cannot reproduce a performance exactly the same way twice when controlled via MIDI. Analogue synths drift, and newer physical modelling virtual instruments have a more random, organic feel that distinguishes them from sampled instruments. Way back when most of my music was made using samples, the MIDI sequencer was king, but when I switched (back) to analogue, I would record everything live, so I got used to playing my instruments — not just the keys, but the wheels, knobs and sliders.
Being able to tweak things on the fly is important when you're doing soundtracks, especially during takes when you're underscoring a scene with many kinetic elements.
The last soundtrack I worked on was Douglas Buck's re-imagining of Brian De Palma's Sisters, which I scored in collaboration with American composer Edward Dzubak. We both used virtual and "real" instruments, but most of the richly layered stuff was done by combining sampled sounds with live recordings of acoustic and electric instruments. The ironic thing is that a lot of the acoustic instruments were used to create effects, and the virtual ones were made to sound orchestral.
At that time, I was really into using my lap steel guitar for string and orchestral effects, but that was before I started using String Studio, which is now my main source of string sounds and effects. Most of my latest work features a combination of sampled orchestral instruments, virtual instruments made on either String Studio or Tassman, and the few analogue synths and effects I have kept for good measure, and in hopes that someone will ask me to do something that sounds like a classic John Carpenter score (laughs).
Anyone working on soundtracks would probably agree that the main reason using a real orchestra sounds better is that sampled libraries are usually limited in terms of articulations and performance effects. Sampled sustained strings are harder to tell apart from the real thing in a mix, but a sampled run or effect will stand out like a sore thumb. String Studio is really useful for adding realism to sampled string lines, and while imitating acoustic instruments is one of its strong points, it can also be used to invent instruments that have never existed, yet blend perfectly with an orchestra.
You'd be amazed at how many of my clients start off by telling me they don't like electronic or "MIDI" sounding things, but the truth is they rarely have any complaints once they hear what can be done with physical modelling.
It makes very little sense to own digital hardware synths and modules these days, but I do believe there is still room for analogue. My studio is almost completely software-based at the moment, with the exception of a few hardware effects and analogue synths and sequencers. Not that all analogue synths sound better than digital ones mind you, as I used to own some of the new modular instruments, and they sounded thin compared to some of the virtual ones.
Ultra Analog and Tassman sound quite warm even without enhancements, and a synth like Tassman has the added bonus of a nearly unlimited number of modules, as opposed to what you can afford to buy, and find room for, in hardware.
Computer CPUs are now faster than ever, so there are fewer glitches and freeze-ups than there were in 2002. One thing that really needs to be improved is the build quality of hardware controllers, and by build quality I'm not just talking about metal versus plastic, but the implementation of soft labels and better visual feedback.
One of my favorite synths of all time is the Moog Source, so I really felt I should also own one of the latest machines, one designed by the man himself. Contrary to what many believe, the Little Phatty has more in common with the Minimoog than with The Source, but it truly is unique and rich sounding; a perfect lead instrument. I also love the design, the build quality, and the fact that there are CV ins for the VCO, VCF and VCA. Moog have been very good at updating the LP with new software features, which makes it a living instrument as opposed to a run-of-the-mill product.
Now that I'm busy with soundtracks, I rarely get to play out, but there are exceptions, and I have started taking a laptop to shows to complement the hardware instruments. I'm really into sound-on-sound looping and like to combine elements generated on the computer with Tassman and String Studio with those of live instruments and effects. The juxtaposition of textures is quite nice, and lets me avoid the funnel effect, where everything turns into mush because it's all coming from one source going through one looper.
Some people have told me my live sets sometimes feel like film soundtracks, and I agree up to a point, but one main difference is that in a live context, I'll provide the music for you to visualize, whereas in the studio, someone else will provide the visuals and it will be up to me to imagine the music.