Synthesis - What's Next?
The history of audio synthesis
Original subtractive synthesisers used oscillators to create waveforms, then filters and envelope shapers to try to recreate the sounds of musical instruments. Over the years these sounds and others (additive synthesis, wavetable, FM, physical modelling and so on) have themselves become accepted as musical instruments in their own right, and modern synthesisers try to recreate the sounds of both real-world musical instruments and early synthesised sounds.
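That classic chain (oscillator into filter into envelope shaper) can be sketched in a few lines of Python with NumPy. This is a minimal illustration only: the function names, filter design and envelope times are my own choices, not any particular instrument's.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def sawtooth(freq, seconds):
    """Oscillator: a naive sawtooth waveform."""
    t = np.arange(int(SR * seconds)) / SR
    return 2.0 * (t * freq - np.floor(0.5 + t * freq))

def lowpass(signal, cutoff):
    """Filter: a one-pole low-pass that darkens the raw waveform."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / SR)
    out = np.zeros_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)
        out[i] = y
    return out

def adsr(n, a=0.01, d=0.1, s=0.6, r=0.2):
    """Envelope shaper: attack/decay/sustain/release, times in seconds."""
    an, dn, rn = int(a * SR), int(d * SR), int(r * SR)
    sn = max(n - an - dn - rn, 0)
    return np.concatenate([
        np.linspace(0.0, 1.0, an),   # attack
        np.linspace(1.0, s, dn),     # decay
        np.full(sn, s),              # sustain
        np.linspace(s, 0.0, rn),     # release
    ])[:n]

# One "note": oscillator -> filter -> envelope
note = sawtooth(220.0, 1.0)
note = lowpass(note, 800.0)
note *= adsr(len(note))
```

Everything the paragraph describes is visible here: the filter and envelope do not add information, they only shape the fixed waveform the oscillator provides.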
In the early eighties Fairlight invented the sampler almost by accident: an electronic musical instrument capable of recording audio and playing it back at different pitches from a musical keyboard. Static freeze-frames of instruments were captured in a sound library, and it was possible to create entire compositions using these, though the results were often expressionless and a little sterile.
In the late eighties Roland brought sample playback to the masses with their D-50 keyboard, mixing sampled attacks with looped digital waveforms to create a sampler-style sound using very little ROM, an expensive commodity at the time. Having digital waveforms, filters and envelopes, it could also sound a bit like an analog synth.
Modern synthesisers can manipulate complex waveforms by re-synthesising sampled recordings. This makes it possible to play with the time and pitch elements of a sound in real time using intelligent adaptive processing, giving a player infinitely more variation in sound than a static waveform or sound sample. Coupled with massive amounts of sample ROM, modern synthesisers can accommodate a huge library of pre-recorded sounds. When you're fed up with those, you also have the ability to add your own recorded sounds.
Synthesisers have evolved, but nearly all of them still use the principle of waveforms, filters and envelopes, as well as providing the user with a familiar interface and a universal set of parameters. Therein lies a problem. Some can even morph from one sound to another, but what good is this when the basic sounds are not properly represented? Is a waveform, even a complex waveform such as a mono or even stereo audio recording, enough information to recreate a musical instrument? I say no!
My hunch is that synthesisers are not evolving, and that it is probably because of the way that scientists and mathematicians define sound. Perhaps the problem is that they have not been trained to listen? I am 43 years old and confound my doctor by having better ears than the average 25-year-old. Do I think this is true? Probably not. I think I fare well in a hearing test because I have been trained to distinguish between different levels and frequencies where your average 25-year-old hasn't. I wonder how old Hertz was when he defined frequency? When he died he was only 36, so it is unlikely that he had 30 years' experience in music and sound engineering when he came up with his theories. Theories, I might add, that are now well over 100 years old.
Anyway, I promised not to get technical, so let's just say that "sound in the real world is not conveniently squashed into a mathematical cube". It's not my intention to completely reinvent how we look at sound here; let's leave that to the scientists. What I do know is that "sound is far more complex than simply being amplitude and frequency measured over time", and I can only hope that what I am writing impels some Nobel-prize-winning sort of thinker to throw a spanner in the works somewhere and come up with a new theory to explain it.
I think many synthesiser designers have simply missed the point by spending too much time emulating other manufacturers' synthesisers, when they should have gone back and rethought the way we define sound. Even the cleverest synthesiser that demixes sound into its component parts is working at a compromise, because a computer just can't predict all the random elements that went into creating an instrumental sound. Humans have much better perception than synthesisers, and perhaps the human ability to predict what is going to happen next should be built into any interface.
Musicians talk about vibe, yet synthesiser designers continue to give us VCO and VCF: terms that musicians are supposed to understand, yet which, ironically, are only a representation in an all-digital world. If nothing else is possible, I would like to see a synthesiser that uses more interesting terms to define its structure, and not merely by repackaging it and changing the names of the knobs.
It sounds as if you want to be Harry Partch, but you're waiting for it to happen inside of a box, which could take a generation or more, and reality will still be more complex and beautiful....
19-Feb-07 10:23 AM
Yeah, synths are in a rut, more due to corporate domination of the market than anything else. Corporations are risk-averse, so they crank out the same crap year after year, attracting and rewarding mediocrity in their design staff - which makes them easier to manage.
Your dream synth description strikes me as pie-in-the-sky. Personally, I want a genuinely new form of controller tightly coupled to a predominantly physical-modelling synthesis engine. A guitar can keep me entertained for hours and it doesn't have a 3D operating system. I want something more like that.
23-Feb-07 05:42 AM
It's true, Mark. Maybe some of your ideas are a bit radical, but with a virtual plasma screen or something like that, floating let's say 25cm above the "main device" - say, a synthesiser - we could do miracles already. Of course it would be all touch-screen and manipulable in whatever way, so imagine applying it to making music! It's all possible, but it's just a matter of the hard cash. It's still too expensive to manufacture. Or do I have to remind you that this particular Fairlight, which was indeed revolutionarily spectacular in '80, had a luxury-car price tag, and those guys from Kraftwerk, who did something on their own, were already millionaires too, as was Giorgio Moroder? It will probably take another genius rich lunatic to come up with such a thing, but it's just a matter of time...
27-Feb-07 09:49 AM
'The only limit is your imagination' has been used to advertise synths from the beginning. The magic is not in the instruments, but in the harmonic movement that touches the emotions. Anyone can do it, yet no one can explain it. I encourage every electronic musician to explore the keys, intervals and inversions that music is made of. Finally, be conscious of the proportions in your composition: tension vs drone, loud vs quiet, noise vs music, vocal vs instrumental, electronic vs natural, simple vs complex. Explore the extremes and evolve your music!
27-Feb-07 01:56 PM
One only has to read the comments you received about this article to understand why companies don't want to risk a large investment in new technology when users are satisfied with the things they are given. They just want another guitar. I feel the main problem is that most users cannot grasp much more than a "virtual analog" with two oscillators and an LP filter. Yamaha fought this with FM, and we'll see the industry continue to pump out sample playback and virtual analog until there is a market of end users who are ready for something more.
03-Mar-07 08:32 PM
I agree with the main thesis of the article: synthesisers are still VCOs and VCFs. Why, especially given the buckets of CPU and RAM available, don't synths do something new? I guess, since it is only sample manipulation, the author doesn't think the V-Synth is new. Although there's a bit of COSM in there too.
I don't agree that frequency, phase and amplitude are limited concepts. Granted, trying to understand sound in these terms means we lose out, because we can't deal with all the complicated information that a full description in terms of frequency, phase and amplitude carries. Direct, intuitive manipulations of these quantities will always be fairly simple and predictable (VCF, VCA, ring mod and so on). I don't think that FM synthesis is such a radical departure either: it still uses simple manipulations of frequency, amplitude and phase.
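Just how simple that manipulation is can be seen in a two-operator FM sketch: one sine oscillator wobbles another's phase, and nothing more. The following Python/NumPy fragment is illustrative only; the function and parameter names are mine, not any synth's.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def fm_tone(carrier_hz, mod_hz, index, seconds):
    """Two-operator FM: the modulator perturbs the carrier's phase.
    `index` scales how far the instantaneous frequency deviates,
    which controls how many audible sidebands appear."""
    t = np.arange(int(SR * seconds)) / SR
    modulator = np.sin(2.0 * np.pi * mod_hz * t)
    return np.sin(2.0 * np.pi * carrier_hz * t + index * modulator)

# index = 0 reduces to a plain sine; raising it brightens the timbre
plain = fm_tone(440.0, 110.0, 0.0, 0.5)
bright = fm_tone(440.0, 110.0, 5.0, 0.5)
```

Despite the rich spectra it produces, the entire "synthesis method" is one addition inside a sine function, which rather supports the point that FM is still just frequency, amplitude and phase.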
So it appears that the author's proposal is a dynamic object/space acoustic simulation. It's a nice idea, but not, to my ears, a new one. For what is proposed, a huge cluster of PCs would be necessary for realtime operation, plus a darn good GUI/hardware interface to make it an instrument and not a simulation. Still, all those manipulations sound like great fun. Imagine striking a gong, morphing it into a tubular bell and then tossing it down a grassy hillside...
09-Mar-07 10:51 AM
I recommend reading the following article. http://www.musicwords.net/musictech/sucks.htm
10-Mar-07 01:27 PM
"My hunch is that synthesisers are not evolving and that it is probably because of the way that scientists and mathematicians define sound."
Add marketing to this and forget that synthesizer evolution will advance even an inch. Many of the manufacturers regard the synthesizer as a sort of toy, believing that what counts most is to have a set of flashing LEDs on the panel. It's not quite so. I believe that even in this context the artisan way must be resumed, and most of all there must be a deep and close collaboration with musicians in order to produce expressive and modern instruments. Look at the old Yamaha CS80: if many people still use it and restore it even though it weighs 100kg, it's not because they are still trying to imitate Vangelis. They use it because it sounds, behaves and, most of all, can be played like a real, even acoustic, instrument. The goal, to me, should be to get something you can express yourself with, not merely the newest synthesis method. Let the manufacturers produce tools with natural velocity response, poly aftertouch, ribbon controllers, breath controllers, all sorts of levers and pedals. Let these tools be expensive; at least we'll have instruments, not toys.
18-Mar-07 12:07 PM
I guess my experience and intuition lead me to feel that the issue ultimately lies with the playing interface, not the programming interface (although improvements to the latter might not hurt). I'm not certain what a better one would look like, but I don't think it's more sliders, buttons, knobs and pedals.
When I play my trombone, there are many ways I can affect nuances of the sound with tiny adjustments of muscles and breath. In some ways, the possibility of getting a really awful tone, rather than a mediocre but consistent one as from an average synth, is a gauge of the potential control (or lack thereof). I suggest that greater expressiveness, not greater programmability, will lead to greater musicality.
Maybe a reverse "Dance-Dance Revolution" interface, allowing you to use nuanced motion of your whole body, or as much of any part of it as you want, would be a direction to explore. Game controllers are going this way anyway. best, bentropy
24-Mar-07 12:11 PM
Completely bonkers. Sound IS defined in terms of frequencies, wavelengths, harmonics and so on. The author's fantasy synth wouldn't change that. Science is fact.
20-Feb-08 06:42 AM