For the first installment of our new Tech Talk feature, it seemed only fitting to try to provide some insight into how the technology behind AAS products has evolved and where it's headed. And who better to do that than our very own CTO, Philippe Dérogis?
I have always been interested in understanding how things work. When I was a kid I spent a huge amount of my time building things with Lego using axles, wheels, and gears, then Meccano, electrical components, and eventually computing on the ZX81 and VIC-20...
In parallel, my math and physics courses fascinated me because they showed how a handful of relatively simple concepts can explain a huge range of different phenomena and accurately predict how a system will evolve and function over time. Everything can be known...
Later, at university, I completed my master's in maths, focusing on dynamical systems, chaos, and fractal theory. These fields are very interesting because they go against the above: they show that very simple systems can have very complex behavior and sometimes be totally unpredictable... fascinating...
In those fields of research, computers are of great help because numerical models are often the only way to know what is going to happen. These situations are very paradoxical: science can show that such systems exist, but it can't accurately predict how they will evolve, even the simplest ones. So in these cases nothing can be known. Yes, I know, very far from music...
Combine math and physics with my "passion" for playing piano and synth, add an article in a physics journal describing quasi-periodic and chaotic behavior in the sound produced by a reed instrument, and my path was set: I did my PhD thesis in the field of acoustics.
During my lab work at that time I developed in-depth physical models describing the behavior of the different parts of musical instruments such as strings, beams, plates, reeds, bows, and so on. Over those years I also gained solid experience in digital signal processing.
So "la boucle est bouclée" (the loop is closed): build individual blocks modeling the acoustic objects mentioned above, along with analog filters and generators, and provide a program that lets you connect them any way you want and hear, in real time, how they sound as they react to external control.
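To give a feel for this patching idea, here is a toy sketch in Python. Nothing here is Tassman's actual code or architecture; the block names and the per-sample `tick` interface are invented for illustration. Each block maps an input sample to an output sample, and a patch is just a chain of blocks evaluated once per sample, which is exactly the shape a real-time audio loop takes.

```python
import math

# Toy sketch of a modular synth patch (illustrative only, not Tassman's
# design): generators and filters share one per-sample tick() interface,
# and a "patch" is a chain of such blocks.

class Sine:
    """Generator block: a fixed-frequency sine oscillator."""
    def __init__(self, freq, rate=44100.0):
        self.phase = 0.0
        self.step = 2 * math.pi * freq / rate
    def tick(self, _=0.0):
        self.phase += self.step
        return math.sin(self.phase)

class OnePole:
    """Filter block: simple one-pole low-pass, y += a * (x - y)."""
    def __init__(self, a=0.2):
        self.a, self.y = a, 0.0
    def tick(self, x):
        self.y += self.a * (x - self.y)
        return self.y

def render(blocks, n):
    """Run the patch: feed each sample through the chain of blocks."""
    out = []
    for _ in range(n):
        s = 0.0
        for b in blocks:
            s = b.tick(s)        # output of one block feeds the next
        out.append(s)
    return out

patch = [Sine(440.0), OnePole(0.2)]   # oscillator into a low-pass filter
samples = render(patch, 1000)
```

In a real-time engine the same loop runs once per audio sample, with control input (MIDI, knobs) read between buffers; the point is that arbitrary graphs of small blocks compose naturally.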
This was the idea behind Tassman.
There is a buzz around the term "physical modeling" these days, and perhaps some misunderstanding too. I recently read a paper on physical modeling which started like this:
"In this discussion, as in the pro audio industry, 'physical modeling' refers to the use of digital computers to model (simulate) the sound of musical instruments."
Well, according to this definition, any synthesis method which uses a computer could be called physical modeling. So I will concentrate on what is behind "AAS physical modeling", and on the advantages of this method versus other techniques (subtractive, sampling, FM, etc.).
In our understanding, physical modeling means finding an accurate mathematical model of the object you want to simulate, basically a set of ordinary or partial differential equations, and then solving those equations in real time, using numerical integration techniques, in order to generate the sound produced by the object.
For example, imagine a hammer hitting a string. With a physical model you simulate the motion of the hammer, the motion of the string, and the interaction between them when they are in contact. As a consequence, if you hit the string while it is already in motion rather than at rest, you get a different resulting sound, because the initial condition of the system is different each time. This is the sort of advantage physical modeling provides over sample-based methods. The sound reflects real acoustic behavior and changes according to the control parameters received, naturally reproducing timbre changes with loudness, vibrato, rich and evolving transients, and so on. This makes the result sound more realistic and gives the performer or listener a more lively experience.
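As a rough illustration of that hammer-and-string scenario, here is a minimal sketch, assuming the string is reduced to a single damped mode and the hammer to a point mass with a stiff one-sided contact spring. All the constants are invented for illustration, and real instrument models are far more detailed; the point is only that the initial state of the string changes the resulting sound.

```python
# Minimal hammer-string sketch (illustrative constants, not a real
# instrument model): the string is one damped mode, the hammer a point
# mass, and their one-sided contact is integrated step by step in time
# with semi-implicit (symplectic) Euler.

def simulate_strike(steps=2000, dt=1.0 / 44100.0,
                    string_pos=0.0, string_vel=0.0):
    """Return string displacement samples after a hammer strike.

    string_pos / string_vel set the string's initial condition, so
    hitting a string that is already moving gives a different output.
    """
    omega = 2 * 3.141592653589793 * 440.0   # 440 Hz string mode
    zeta = 0.001                            # light damping
    m_string = 0.01                         # modal mass (kg)

    m_hammer = 0.005                        # hammer mass (kg)
    h_pos, h_vel = -0.001, 2.0              # 1 mm away, moving at 2 m/s
    k_contact = 1e6                         # stiff contact spring (N/m)

    x, v = string_pos, string_vel
    out = []
    for _ in range(steps):
        # Contact force acts only while the hammer compresses the string
        compression = h_pos - x
        force = k_contact * compression if compression > 0 else 0.0

        # Semi-implicit Euler: update velocities, then positions
        a_string = -omega**2 * x - 2 * zeta * omega * v + force / m_string
        a_hammer = -force / m_hammer
        v += a_string * dt
        x += v * dt
        h_vel += a_hammer * dt
        h_pos += h_vel * dt
        out.append(x)
    return out

rest = simulate_strike()                  # string initially at rest
moving = simulate_strike(string_vel=0.5)  # string already vibrating
# The two outputs differ: the initial condition shapes the sound.
```

Because the hammer and string are simulated as interacting bodies rather than triggering a recording, every strike depends on the full state of the system at the moment of contact.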
For analog synths, physical modeling provides exactly the same advantages over wavetable or sampling techniques as it does for acoustic instruments. Since analog systems use continuous voltages to control the synthesis, they can produce an infinity of slightly different sounds; they are thus a very bad candidate for sampling.
Another point is the genuine warmth of analog sound, which is often regarded as nearly impossible to reproduce through digital methods. I think that, by going back to the equations governing the behavior of real biased electronic circuits, physical modeling is the best solution for reproducing the warmth and liveliness of these instruments, and it is certainly the best candidate to dispel this perceived coldness of digital synthesis.
As a piano player, I'm a fan of old analog and electromechanical instruments, especially electric pianos. These pianos can produce a very wide palette of sounds, from crystal-clear tones to very nasty ones.
You're absolutely right in talking about excellent multi-sample solutions; I have tried a couple of those at home, and there are definitely some good ones! The point is that when you play them for the first time, you find the sound amazing, but after one or two weeks you begin to find it a little cold, and after a month you've decided that it's good for practice but can't provide the truly pleasurable sensation of playing the real instrument.
My goal in developing Lounge Lizard was to provide an alternative for true fans of electric pianos, players and listeners alike, that goes further than just replaying prerecorded sounds, by producing the sound that results from these instruments functioning as a whole, accurately modeled part by part.
I'm surely not the right guy to answer this question! It's true, though, that in testing our applications we use all the hosts, sound cards, MIDI interfaces, and other audio gear, so...
For the computer I would have preferred a PowerPC running Linux, but there's no good Linux distribution for the Mac and no standard host applications running on Linux at this point. Too bad...
Back to the question, I guess I'd go with a PC running Windows and a MIDIMAN Delta 66 sound card. For the sequencer my heart goes to Sonar, and for the speakers, Genelec (I'm French).
For the MIDI keyboard I still prefer my old DX7, as it's got good velocity response.
It's certainly true that, done correctly, physical modeling requires a lot of computation and thus a certain amount of CPU power. But I would not go so far as to say it isn't a viable synthesis solution. If we look at the field of image processing and special effects, for years things were accomplished simply by layering various 2D images. Then along came people who worked out heavy 3D mathematical models for synthesizing images.
Take a look at the special effects in fairly recent films such as Jurassic Park or the new Star Wars: could you say that 3D image synthesis is not a viable solution at this point? What can we expect from computer hardware, if not to keep getting faster? And what can we expect from physics research, if not to carry on?
As I said at the beginning of the interview, my main interest is the study of dynamical systems, or more specifically nonlinear sets of differential equations. In acoustics, you find that sort of behavior in self-oscillating systems such as brass, reeds, and bowed strings. You also find such systems in electronics and analog oscillators, with the well-known van der Pol equation. All these systems have a great richness of behavior; just think how many different sounds you can make with a cello, from a small squeak to a huge warm sound. This is definitely a field that I will continue to explore in more depth.
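As a small illustration of such self-oscillation, here is a sketch that numerically integrates the van der Pol equation x'' - mu(1 - x^2)x' + x = 0 with semi-implicit Euler; the parameters and step size are illustrative choices, not tied to any particular circuit. Whatever the starting point, the trajectory settles onto a limit cycle, which is why these systems sustain their own oscillation.

```python
# Van der Pol oscillator sketch (illustrative parameters): a classic
# self-oscillating system. For small displacements the nonlinear term
# pumps energy in; for large ones it damps, so any small perturbation
# grows into a sustained limit-cycle oscillation.

def van_der_pol(mu=1.0, x0=0.1, v0=0.0, dt=0.001, steps=50000):
    """Integrate x'' = mu*(1 - x^2)*x' - x with semi-implicit Euler."""
    x, v = x0, v0
    out = []
    for _ in range(steps):
        a = mu * (1 - x * x) * v - x
        v += a * dt          # update velocity first...
        x += v * dt          # ...then position (symplectic-style step)
        out.append(x)
    return out

samples = van_der_pol()
late = samples[-10000:]      # after the transient, the limit cycle
# A tiny initial displacement has grown into a steady oscillation
# whose amplitude is set by the system itself, not by the input.
```

The same qualitative picture holds for a reed, a bowed string, or an analog oscillator: the amplitude and waveform are determined by the system's own nonlinearity rather than by the excitation alone.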