In-depth Feature: Synthesis - What's Next?
Mark Tinley writes:


The history of audio synthesis
Original subtractive synthesisers used oscillators to create waveforms, then filters and envelope shapers to try to recreate the sounds of musical instruments. Over the years these sounds and others (additive synthesis, wavetable, FM, physical modelling et al.) have become accepted as musical instruments in their own right, and modern synthesisers try to recreate the sounds of both real-world musical instruments and early synthesised sounds.
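To make that signal chain concrete, here is a minimal sketch of a subtractive voice in Python. It assumes numpy is installed, and the oscillator, filter and envelope are deliberately crude illustrations of the idea rather than any particular instrument's design:

import numpy as np

SR = 44100  # sample rate in Hz

def saw_osc(freq, seconds):
    # Naive sawtooth oscillator: a ramp from -1 to 1 once per cycle.
    t = np.arange(int(SR * seconds)) / SR
    return 2.0 * (t * freq % 1.0) - 1.0

def one_pole_lowpass(x, cutoff):
    # The crudest possible "filter": a one-pole low-pass.
    a = np.exp(-2.0 * np.pi * cutoff / SR)
    y = np.empty_like(x)
    state = 0.0
    for i, s in enumerate(x):
        state = (1.0 - a) * s + a * state
        y[i] = state
    return y

def ar_envelope(n, attack=0.01, release=0.4):
    # A linear attack/release envelope shaper.
    env = np.ones(n)
    na, nr = int(SR * attack), int(SR * release)
    env[:na] = np.linspace(0.0, 1.0, na)
    env[-nr:] = np.linspace(1.0, 0.0, nr)
    return env

note = saw_osc(220.0, 1.0)            # harmonically rich raw waveform
note = one_pole_lowpass(note, 800.0)  # filter tames the upper harmonics
note = note * ar_envelope(len(note))  # envelope shapes the loudness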

In the early eighties Fairlight invented the sampler almost by accident: an electronic musical instrument capable of recording audio and playing it back at different pitches from a musical keyboard. Static freeze-frames of instruments were captured in a sound library, and it was possible to create entire compositions using these, though the results were often expressionless and a little sterile.
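The core trick is simple to sketch: transposition is just reading through the recording faster or slower. In the Python sketch below, 'recording' is a hypothetical mono numpy array at 44.1 kHz and, as on early samplers, shifting pitch this way also changes the duration:

import numpy as np

SR = 44100

def play_at_pitch(recording, semitones):
    # Read through the sample at a different rate to change its pitch;
    # linear interpolation fills in between the original samples.
    rate = 2.0 ** (semitones / 12.0)  # equal-tempered pitch ratio
    positions = np.arange(0, len(recording) - 1, rate)
    idx = positions.astype(int)
    frac = positions - idx
    return recording[idx] * (1.0 - frac) + recording[idx + 1] * frac

# A note seven semitones up plays back ~1.5x faster, and so shorter:
# fifth_up = play_at_pitch(recording, 7)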

In the late eighties Roland brought sample playback to the masses with their D-50 keyboard, mixing sampled attacks with looped digital waveforms to create a sampler-style sound using very little ROM, which was an expensive commodity at the time. Having digital waveforms, filters and envelopes, it could also sound a bit like an analogue synth.
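That trick is also easy to sketch: splice a short sampled attack onto a cheap looped waveform, so only the transient needs precious ROM. This is a loose illustration of the idea, not Roland's actual implementation; 'attack_sample' is a hypothetical recorded transient:

import numpy as np

SR = 44100

def la_style_voice(attack_sample, freq, seconds, xfade=0.01):
    # Crossfade a sampled attack transient into a looped sustain wave.
    n = int(SR * seconds)
    t = np.arange(n) / SR
    sustain = np.sign(np.sin(2.0 * np.pi * freq * t))  # square-wave loop
    out = sustain.copy()
    na = min(len(attack_sample), n)
    out[:na] = attack_sample[:na]  # sampled attack up front
    nx = min(int(SR * xfade), na)  # short crossfade region
    fade = np.linspace(0.0, 1.0, nx)
    out[na - nx:na] = (attack_sample[na - nx:na] * (1.0 - fade)
                       + sustain[na - nx:na] * fade)
    return out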

Modern synthesisers can manipulate complex waveforms by re-synthesising sampled recordings. This makes it possible to play with the time and pitch elements of a sound in real time using intelligent adaptive processing, giving a player infinitely more variation in sound than a static waveform or sample. Coupled with massive amounts of sample ROM, modern synthesisers can accommodate a huge library of pre-recorded sounds, and when you're fed up with those, you can add your own recordings.
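One way to play with time independently of pitch, roughly in the spirit of that adaptive processing, is overlap-add granular time-stretching. The sketch below is a bare-bones version of that idea; real resynthesis engines are far more sophisticated:

import numpy as np

def time_stretch(x, factor, grain=2048, hop=512):
    # Stretch duration by 'factor' without transposing: cut the input
    # into small windowed grains, then lay them back out at a wider
    # (or narrower) spacing and overlap-add the result.
    window = np.hanning(grain)
    out = np.zeros(int(len(x) * factor) + grain)
    norm = np.zeros_like(out)
    out_hop = int(hop * factor)
    pos_in, pos_out = 0, 0
    while pos_in + grain < len(x):
        out[pos_out:pos_out + grain] += x[pos_in:pos_in + grain] * window
        norm[pos_out:pos_out + grain] += window
        pos_in += hop
        pos_out += out_hop
    return out / np.maximum(norm, 1e-6)  # undo the window overlap gain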

Synthesisers have evolved, but nearly all of them still use the principle of waveforms, filters and envelopes, as well as presenting the user with a familiar interface and universal set of parameters. Therein lies a problem. Some can even morph from one sound to another, but what good is that when the basic sounds are not properly represented? Is a waveform, even a complex waveform such as a mono or stereo audio recording, enough information to recreate a musical instrument? I say no!

My hunch is that synthesisers are not evolving, and that it is probably because of the way scientists and mathematicians define sound. Perhaps the problem is that they have not been trained to listen? I am 43 years old and confound my doctor by having better ears than the average 25-year-old. Do I think this is true? Probably not. I think I fare well in a hearing test because I have been trained to distinguish between different levels and frequencies where your average 25-year-old hasn't. I wonder how old Hertz was when he defined frequency? He was 36 when he died, so it is unlikely that he had 30 years' experience in music and sound engineering when he came up with his theories. Theories, I might add, which are now well over 100 years old.

Anyway, I promised not to get technical, so let's just say that "sound in the real world is not conveniently squashed into a mathematical cube". It's not my intention to completely reinvent how we look at sound here; let's leave that to the scientists. What I do know is that "sound is far more complex than simply being amplitude and frequency measured over time", and I can only hope that what I am writing impels some Nobel Prize-winning sort of thinker to throw a spanner in the works somewhere and come up with a new theory to explain it.

I think many synthesiser designers have simply missed the point by spending too much time emulating other manufacturers' synthesisers when they should have gone back and rethought the way we define sound. Even the cleverest synthesiser that demixes sound into its component parts is working at a compromise, because a computer just can't predict all the random elements that went into creating an instrumental sound. Humans have much better perception than synthesisers, and perhaps the human ability to predict what is going to happen next should be built into any interface.

Musicians talk about vibe, yet synthesiser designers continue to give us VCO and VCF: terms that musicians are supposed to understand, but which, ironically, are only a representation in an all-digital world. If nothing else is possible, I would like to see a synthesiser that uses more interesting terms to define its structure, and not simply by repackaging the old one and changing the names of the knobs.

