Sonic State
In-depth Feature:  Synthesis - What's Next?
Mark Tinley writes:

Page 4 of 4
...continued

a possibility for the future
What I have wanted from a synthesiser for the last 15 years is an interface that lets me build models of real-world things in a three-dimensional space and then computes what they should sound like. My concept for a synthesiser is not about the science; it is about the interface. Though my interface would probably give the scientists some clues.

My ability to place a sound in a mix comes down to a part of my brain used for complex spatial reasoning, and creating an operating system that can model this ability, rather than the circuit of some decrepit analog system, is the key to getting synthesiser design out of a rut.

I think key areas synthesiser manufacturers should be looking at are:

sounds not sound

A very simple distinction is that an instrument does not make a sound: it makes sounds. Because sounds that happen at the same time are perceived together, we hear them as a single sound and treat them as such. In contrast, a recording of an object is a sound, a single electronic or digital waveform, so it won't work very well as a musical instrument. It will sound static and expressionless.

Audio recording is fine for film soundtracks, sound effects or radio... It is not very good in synthesisers. Manipulating the sound using granulation or convolution won't work either: it is the same sound being manipulated, and while these processes add a certain degree of expression, you will always be left with the synthesiser's sonic fingerprint. With all sounds manipulated in the same way by the same synthesiser or plug-in, the music becomes sterile and expressionless.

the vision

My interface is probably not that revolutionary. It starts with a multi-dimensional space similar to a 3D graphics program into which I can put things that make or modify sounds. The boundaries of the space are only limited by what I can hear, so placing a top fuel drag bike several miles away with nothing but air in between makes it audible. Placing a bee more than a few hundred yards away makes it disappear.
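The drag-bike-versus-bee limit described above is, in essence, the inverse-square law plus air absorption. A minimal sketch in Python (the flat per-kilometre absorption figure and the source levels are assumptions for illustration; real air absorption is frequency dependent):

```python
import math

def audible_level_db(source_db: float, distance_m: float,
                     air_absorption_db_per_km: float = 5.0) -> float:
    """Estimate a source's level at the listener, assuming only
    spherical spreading (inverse-square law) and a flat per-kilometre
    air-absorption figure, referenced to 1 metre."""
    spreading_loss = 20.0 * math.log10(max(distance_m, 1.0))
    absorption_loss = air_absorption_db_per_km * distance_m / 1000.0
    return source_db - spreading_loss - absorption_loss

# A very loud source (say ~130 dB SPL at 1 m for a drag bike) is still
# audible kilometres away; a quiet one (a bee at ~50 dB) is not.
print(audible_level_db(130.0, 3000.0))  # still well above audibility
print(audible_level_db(50.0, 300.0))    # around the threshold of hearing
```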

Perhaps this synthesiser uses physical modelling in combination with complex multisamples and convolution reverb - perhaps not! We've worked out the equations needed to alter a signal to represent acoustics in software, but are we representing the right signal? Or should I say, what have we missed? The glaringly obvious? Is it, in fact, so obvious that none of us can see it?
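One long-established physical-modelling technique of the kind mentioned above is the Karplus-Strong plucked string: a delay line filled with noise, filtered in a feedback loop, decays into a pitched tone. A minimal sketch, not any particular product's algorithm:

```python
import random

def karplus_strong(frequency_hz: float, duration_s: float,
                   sample_rate: int = 44100) -> list[float]:
    """Minimal Karplus-Strong plucked string: the delay-line length sets
    the pitch, and the two-point averaging filter in the feedback loop
    acts as the string's energy loss, damping high frequencies first."""
    period = int(sample_rate / frequency_hz)
    buf = [random.uniform(-1.0, 1.0) for _ in range(period)]  # the "pluck"
    out = []
    for _ in range(int(duration_s * sample_rate)):
        out.append(buf[0])
        buf.append(0.5 * (buf[0] + buf[1]))  # lossy feedback
        buf.pop(0)
    return out

tone = karplus_strong(220.0, 0.5)  # half a second of a decaying A3 string
```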

Anything in the virtual space can be defined as an object and each object has a simple set of controls. These controls could be SIZE, ranging from small to large; SHAPE, ranging from prism, through cube and ball to random or multifaceted; DENSITY, ranging from hollow to solid and absorbent to resonant, as well as the object's co-ordinates.
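As a sketch of how such an object might be represented in software (all names and ranges here are illustrative, not a real API):

```python
from dataclasses import dataclass

@dataclass
class SoundObject:
    """One object in the virtual space, with the controls described
    above: SIZE, SHAPE, DENSITY and the object's co-ordinates."""
    size: float                            # 0.0 (small) .. 1.0 (large)
    shape: str                             # "prism", "cube", "ball", "random", ...
    density: float                         # 0.0 (hollow/absorbent) .. 1.0 (solid/resonant)
    position: tuple[float, float, float]   # co-ordinates in metres

# A small, irregular, fairly absorbent object hovering nearby:
bee = SoundObject(size=0.01, shape="random", density=0.3,
                  position=(0.0, 1.5, 2.0))
```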

An object can also be defined as a striking object, a struck object or a benign object and I can make morphometric transitions between these states so that I can see what it sounds like if a string strikes the pick instead of a pick striking the string. I can even throw stones at a guitar string that uses a bucket full of water to amplify it, or make pedantically accurate representations of Stradivarius violins.
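The transition between those states could be expressed as a single blend parameter between the two roles, so "pick strikes string" morphs continuously into "string strikes pick". A hypothetical sketch (the role names and excitation split are my own illustration):

```python
from enum import Enum

class Role(Enum):
    STRIKING = "striking"
    STRUCK = "struck"
    BENIGN = "benign"

def morph_roles(blend: float) -> dict[str, float]:
    """Blend between 'pick strikes string' (blend=0.0) and 'string
    strikes pick' (blend=1.0); intermediate values split how much of
    the excitation energy each object contributes."""
    blend = min(max(blend, 0.0), 1.0)
    return {"pick_excitation": 1.0 - blend,
            "string_excitation": blend}

print(morph_roles(0.25))  # the pick still supplies most of the energy
```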

These objects start with me, the listener, and I can alter my point of view, or should I say my point of listening. My physical presence in the room has an effect on the sound. I can be fat, thin and so on. If I am a cello player, my leg and its proximity to the instrument I am playing is important. It will alter the sound, almost to the point that it matters whether I am wearing jeans or a skirt.

If I want to be indoors, I need to define walls, windows, doors, a roof and so on. And with a synthesiser being primarily aimed at making music, in this graphically oriented model it is simple for me to visualise the instruments I am making by drawing on references to real-world objects. These should be objects which I have seen and understand: strings, pipes, skins, bows, picks, hammers, breath and air, as well as other elements like fire and water.
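Walls matter in such a model because they create reflections, and the classic way to compute them is the image-source method: mirror the source in the wall, and the extra path length gives the reflection's delay. A one-dimensional sketch, assuming a rigid wall:

```python
def first_reflection_delay_ms(source_x: float, listener_x: float,
                              wall_x: float,
                              speed_of_sound: float = 343.0) -> float:
    """Image-source method in one dimension: reflect the source in the
    wall, then the difference between the reflected and direct path
    lengths gives the reflection's delay relative to the direct sound."""
    direct = abs(listener_x - source_x)
    mirrored_source = 2.0 * wall_x - source_x   # source's mirror image
    reflected = abs(listener_x - mirrored_source)
    return (reflected - direct) / speed_of_sound * 1000.0

# Source 1 m in, listener 2 m in, wall at 5 m: the reflection travels
# 6 m further than the direct sound, arriving ~17.5 ms later.
print(first_reflection_delay_ms(1.0, 2.0, 5.0))
```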

And before I push recording techniques completely off the knife edge into reality! Yes... I would probably like some of my old skool recording equipment in there. So as well as ears to listen with in this environment, I could have a vast range of virtual microphones.
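Virtual microphones are one part of this that is already well understood mathematically: any first-order polar pattern is just a blend of an omnidirectional response and a figure-of-eight. A sketch:

```python
import math

def mic_gain(angle_deg: float, pattern: float) -> float:
    """First-order microphone polar pattern:
    gain = (1 - p) + p * cos(theta).
    pattern=0.0 is omnidirectional, 0.5 a cardioid, 1.0 a figure-of-eight."""
    theta = math.radians(angle_deg)
    return (1.0 - pattern) + pattern * math.cos(theta)

print(mic_gain(180.0, 0.5))  # a cardioid rejects sound arriving from the rear
```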

When defining the building blocks of this instrument, the designer will have made strategic decisions. They will have researched the component sounds of musical instruments and found new ways of representing them. Perhaps by isolating component sounds in a vacuum to define each as a building block, or maybe they will have found that atoms in air have an important function in the way sound works and will have analysed the space between them. After all, air is critical in this model and would have to be yet another object that I could define.


 

Copyright Sonic State Ltd. 1995-2024. All rights reserved.
Reproduction in whole or in part in any form or medium without express written permission from Sonic State is prohibited.
