Synthesis - What's Next?
a possibility for the future
What I have wanted from a synthesiser for the last 15 years is an interface that lets me build models of real-world things in a three-dimensional space and then computes what they should sound like. My concept for a synthesiser is not about the science; it is about the interface. Though my interface would probably give the scientists some clues.
My ability to place a sound in a mix comes down to using a part of my brain responsible for complex spatial reasoning. Creating an operating system that can model this ability, rather than the circuitry of some decrepit analogue system, is the key to getting synthesiser design out of a rut.
I think the key areas synthesiser manufacturers should be looking at are:
sounds not sound
A very simple distinction is that an instrument does not make a sound: it makes sounds. Because sounds that happen at the same time are perceived together, we hear them as a single sound and treat them as such. In contrast, a recording of an object is a sound, a single electronic or digital waveform, so it won't work very well as a musical instrument. It will sound static and expressionless.
Audio recording is fine for film soundtracks, sound effects or radio... It is not very good in synthesisers. Manipulating the sound using granulation or convolution won't work either. It is the same sound being manipulated, and while these processes add a certain degree of expression to a sound, you will always be left with the synthesiser's sonic fingerprint: all sounds being manipulated in the same way by the same synthesiser or plug-in, making music sterile and expressionless.
My interface is probably not that revolutionary. It starts with a multi-dimensional space, similar to a 3D graphics program, into which I can put things that make or modify sounds. The boundaries of the space are limited only by what I can hear: placing a top-fuel drag bike several miles away with nothing but air in between keeps it audible, while placing a bee more than a few hundred yards away makes it disappear.
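The idea that distance and air alone decide what stays audible can be sketched with the standard inverse-square spreading law plus a frequency-dependent air-absorption term. This is a minimal illustration, not part of the proposal; the function name and the absorption coefficient are my assumptions, not measured values:

```python
import math

def level_at_listener(source_db, distance_m, freq_hz,
                      absorption_db_per_m_per_khz=0.005):
    """Sound level at the listener: inverse-square spreading loss plus a
    crude frequency-dependent air-absorption term (coefficient is an
    illustrative assumption, not a measured value)."""
    if distance_m <= 1.0:
        return source_db
    spreading_loss = 20.0 * math.log10(distance_m)  # 6 dB per doubling
    air_loss = absorption_db_per_m_per_khz * (freq_hz / 1000.0) * distance_m
    return source_db - spreading_loss - air_loss

# A very loud drag bike (~150 dB at source) heard from ~5 km away is
# quieter but still audible; a quiet bee (~45 dB) drops below the
# threshold of hearing within a few hundred metres.
bike = level_at_listener(150.0, 5000.0, 200.0)  # low rumble, far away
bee = level_at_listener(45.0, 300.0, 400.0)     # wing buzz, 300 m away
```

Even this toy model reproduces the article's two examples: the bike survives the distance, the bee does not.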
Perhaps this synthesiser uses physical modelling in combination with complex multisamples and convolution reverb - perhaps not! We've worked out the equations needed to alter a signal to represent acoustics in software, but are we representing the right signal? Or should I say, what have we missed? The glaringly obvious? Is it so obvious, in fact, that none of us can see it?
Anything in the virtual space can be defined as an object and each object has a simple set of controls. These controls could be SIZE, ranging from small to large; SHAPE, ranging from prism, through cube and ball to random or multifaceted; DENSITY, ranging from hollow to solid and absorbent to resonant, as well as the object's co-ordinates.
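The control set described above maps naturally onto a simple parameter record. As a sketch only: the parameter names follow the text, but the class itself and the 0.0-1.0 normalisation are my assumptions:

```python
from dataclasses import dataclass

@dataclass
class SoundObject:
    """One object in the virtual space. Parameter names follow the
    article's description; the 0.0-1.0 normalisation is an assumption."""
    name: str
    size: float = 0.5       # 0.0 = small ... 1.0 = large
    shape: float = 0.5      # 0.0 = prism ... 1.0 = random/multifaceted
    density: float = 0.5    # 0.0 = hollow/absorbent ... 1.0 = solid/resonant
    position: tuple = (0.0, 0.0, 0.0)  # the object's co-ordinates

    def __post_init__(self):
        for p in (self.size, self.shape, self.density):
            if not 0.0 <= p <= 1.0:
                raise ValueError("parameters are normalised to 0.0-1.0")

bell = SoundObject("bell", size=0.3, density=0.9, position=(1.0, 2.0, 0.5))
```

Keeping every control on a common normalised range is what would make the morphing described next straightforward.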
An object can also be defined as a striking object, a struck object or a benign object and I can make morphometric transitions between these states so that I can see what it sounds like if a string strikes the pick instead of a pick striking the string. I can even throw stones at a guitar string that uses a bucket full of water to amplify it, or make pedantically accurate representations of Stradivarius violins.
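At their simplest, the transitions between object states described above could be linear interpolation between two complete parameter sets. A sketch under that assumption; the parameter names here are hypothetical, not from the article:

```python
def morph(state_a, state_b, t):
    """Linearly interpolate every shared numeric parameter between two
    object states: t=0.0 gives state_a, t=1.0 gives state_b."""
    if not 0.0 <= t <= 1.0:
        raise ValueError("t must lie in [0, 1]")
    return {k: (1.0 - t) * state_a[k] + t * state_b[k]
            for k in state_a.keys() & state_b.keys()}

# The pick striking the string vs the string striking the pick:
pick_strikes = {"striker_mass": 0.02, "struck_mass": 0.005, "velocity": 2.0}
string_strikes = {"striker_mass": 0.005, "struck_mass": 0.02, "velocity": 2.0}
halfway = morph(pick_strikes, string_strikes, 0.5)  # masses meet in the middle
```

Sweeping t continuously is what would let the player hear the striker and the struck object exchange roles rather than simply swap.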
These objects start with me, the listener, and I can alter my point of view - or should I say, point of listening. My physical presence in the room has an effect on the sound. I can be fat, thin, etc. If I am a cello player, my leg and its proximity to the instrument I am playing is important. It will alter the sound, almost to the point that it matters whether I am wearing jeans or a skirt.
If I want to be indoors, I need to define walls, windows, doors, a roof and so on. And with a synthesiser being primarily aimed at making music, in this graphically oriented model it is simple for me to visualise the instruments I am making by drawing on references to real-world objects. These should be objects which I have seen and have an understanding of: strings, pipes, skins, bows, picks, hammers, breath and air, as well as other elements like fire and water.
And before I push recording techniques completely off the knife edge into reality! Yes... I would probably like some of my old skool recording equipment in there. So as well as having ears to listen in this environment, I could have a vast range of virtual microphones as well.
When defining the building blocks of this instrument, the designer will have made strategic decisions. They will have researched the component sounds of musical instruments and found new ways of representing them. Perhaps by isolating component sounds in a vacuum to define each as a building block, or maybe they will have found that molecules in air have an important function in the way sound works and will have analysed the space between them? After all, air is critical in this model and would have to be yet another object that I could define.
It sounds as if you want to be Harry Partch, but you're waiting for it to happen inside of a box, which could take a generation or more, and reality will still be more complex and beautiful....
19-Feb-07 10:23 AM
Yea, synths are in a rut, more due to corporate domination of the market than anything else. Corporations are risk-averse, so they crank out the same crap year after year, attracting and rewarding mediocrity in their design staff - which makes them easier to manage.
Your dream synth description strikes me as pie-in-the-sky. Personally, I want a genuinely new form of controller tightly coupled to a predominantly physical modeling synthesis engine. A guitar can keep me entertained for hours and it doesn't have a 3D operating system. I want something more like that.
23-Feb-07 05:42 AM
It's true, Mark. Maybe some of your ideas are a bit radical, but with a virtual plasma screen or something like that, floating, let's say, 25 cm above the "main device" - say, a synthesiser - we could do miracles already. Of course it would be all touch-screen and manipulable in whatever way, so imagine applying it to making music! It's all possible, but it's just a matter of the "hard cash". It's still too expensive to manufacture. Or do I have to remind you that the Fairlight, which was indeed revolutionary and spectacular in the '80s, had a luxury-car price tag? And those guys from Kraftwerk did something on their own and were already millionaires too, as was Giorgio Moroder. It will probably take another rich genius lunatic to come up with something like this, but it's just a matter of time...
27-Feb-07 09:49 AM
'The only limit is your imagination' has been used to advertise synths from the beginning. The magic is not in the instruments, but in the harmonic movement that touches the emotions. Anyone can do it, yet no one can explain it. I encourage every electronic musician to explore the keys, intervals and inversions that music is made of. Finally, be conscious of the proportions in your composition: tension vs drone, loud vs quiet, noise vs music, vocal vs instrumental, electronic vs natural, simple vs complex. Explore the extremes and evolve your music!
27-Feb-07 01:56 PM
One only has to read the comments you received about this article to understand why companies don't want to risk a large investment in new technology when users are satisfied with the things they are given. They just want another guitar. I feel the main problem is that most users cannot grasp much more than a "virtual analog" with 2 oscillators and an LP filter. Yamaha fought this with FM, and we'll see the industry continue to pump out sample playback and virtual analog until there is a market of end users that are ready for something more.
03-Mar-07 08:32 PM
I agree with the main thesis of the article: synthesisers are still VCOs and VCFs. Why, especially given the buckets of CPU and RAM available, don't synths do something new? I guess, given that it is only sample manipulation, the author doesn't think the V-Synth is new - although there's a bit of COSM in there too.
I don't agree that frequency, phase and amplitude are limited concepts. OK, trying to understand sound in these terms means we lose out, because we can't deal with all the complicated information that a full description in terms of frequency, phase and amplitude carries. Direct intuitive manipulations of these quantities will always be fairly simple and predictable (VCF, VCA, ring mod etc.). I don't think that FM synthesis is such a radical departure either. It still uses simple manipulations of frequency, amplitude and phase.
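The commenter's point that FM is still just a manipulation of frequency, amplitude and phase can be made concrete with the classic two-operator FM formula. This is a generic sketch of basic FM, not any particular synthesiser's implementation:

```python
import math

def fm_sample(t, carrier_hz=440.0, mod_hz=110.0, mod_index=2.0):
    """One sample of two-operator FM: the modulator's output is simply
    added to the carrier's phase, which is all 'FM' amounts to here."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + mod_index * math.sin(2 * math.pi * mod_hz * t))

sr = 44100
tone = [fm_sample(n / sr) for n in range(sr // 10)]  # 0.1 s of an FM tone
```

The rich sidebands FM is famous for all fall out of that single phase term, which is exactly the commenter's argument: complex timbre, but a simple manipulation.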
So it appears that the author's proposal is a dynamic object/space acoustic simulation. It's a nice idea, but not, to my ears, a new one. For what is proposed, a huge cluster of PCs would be necessary for real-time operation, and a darn good GUI/hardware interface to make it an instrument and not a simulation. Still, all those manipulations sound like great fun. Imagine striking a gong and then morphing it into a tubular bell and then tossing it down a grassy hillside.....
09-Mar-07 10:51 AM
I recommend reading the following article. http://www.musicwords.net/musictech/sucks.htm
10-Mar-07 01:27 PM
«My hunch is that synthesisers are not evolving and that it is probably because of the way that scientists and mathematicians define sound».
Add marketing to this and forget that synthesizer evolution will go even an inch further. Many of the manufacturers regard the synthesizer as a sort of toy, believing that what counts most is to have a set of flashing LEDs on the panel. It's not quite so. I believe that even in this context the artisan way must be resumed, and, most of all, there must be a deep and close collaboration with musicians in order to produce expressive and modern instruments. Look at the old Yamaha CS80: if many people still use it and restore it even though it weighs 100 kg, it's not because they are still trying to imitate Vangelis or so. They use it because it sounds, behaves and, most of all, can be played like a real, even acoustic, instrument. The goal to me should be to get something you can express yourself with, not only the newest synthesis method. Let the manufacturers produce tools with natural velocity response, poly aftertouch, ribbon controllers, breath controllers, all sorts of levers and pedals. Let these tools be expensive even - we'll have instruments, not toys, at last.
18-Mar-07 12:07 PM
I guess my experience and intuition lead me to feel that the issue ultimately lies with the playing interface, not the programming interface (although improvements to the latter might not hurt). I'm not certain what a better one would look like, but I don't think it's more sliders, buttons, knobs and pedals.
When I play my trombone, there are many ways I can affect nuances of the sound with tiny adjustments of muscles and breath. In some ways, the possibility of getting a really awful tone - rather than a mediocre but consistent one, as from an average synth - is a gauge of the potential control (or lack thereof). I suggest that greater expressiveness, not greater programmability, will lead to greater musicality.
Maybe a reverse "Dance-Dance Revolution" interface, allowing you to use nuanced motion of your whole body, or as much of any part of it as you want, would be a direction to explore. Game controllers are going this way anyway. best, bentropy
24-Mar-07 12:11 PM
Completely bonkers. Sound IS defined in terms of frequencies, wavelengths, harmonics and so on. The author's fantasy synth wouldn't change that. Science is fact.
20-Feb-08 06:42 AM