In a slightly unexpected move, Google has released details of the NSynth Super, an experimental physical interface for the NSynth algorithm. Unexpected in that I hadn’t imagined Google moving into hardware synthesis, but only “slightly” because they’ve already demonstrated an interest via their work with Web MIDI in Google Chrome and their various musically interactive Google Doodles. So, what is this all about?
It’s an ongoing experiment by Magenta, a research project within Google that explores how machine learning tools can help art and music in new ways. This machine is based on the NSynth algorithm, which uses a deep neural network to learn the characteristics of sounds and then uses that knowledge to create new ones. Does that sound an awful lot like physical and acoustic modelling? Yes, it does, but somehow cleverer.
While messing about with the algorithm, they’ve been developing hardware to let people interact and play with the sounds in a more human and less science-y kind of way. The NSynth Super is a prototype created in collaboration with Google Creative Lab. It lets you fiddle about in the space between 4 sound sources set in the corners. As you move your finger around the Kaoss Pad-inspired touchscreen you’re not crossfading between the sounds; instead, the algorithm is generating completely new sounds based on the mix of characteristics from the original 4.
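Conceptually, the touch surface is doing something like bilinear interpolation in the network’s learned embedding space rather than crossfading audio. Here’s a minimal sketch of just the weighting step, with plain NumPy vectors standing in for real NSynth encodings and all names being my own invention, not anything from the actual codebase:

```python
import numpy as np

def blend_embeddings(corners, x, y):
    """Bilinearly weight four corner embeddings by touch position.

    corners: dict with keys 'nw', 'ne', 'sw', 'se' mapping to 1-D vectors
    x, y: finger position in [0, 1] (x: left to right, y: bottom to top)
    """
    w_nw = (1 - x) * y
    w_ne = x * y
    w_sw = (1 - x) * (1 - y)
    w_se = x * (1 - y)
    return (w_nw * corners['nw'] + w_ne * corners['ne']
            + w_sw * corners['sw'] + w_se * corners['se'])

# Toy 4-dimensional "embeddings" standing in for real encoded sounds.
corners = {
    'nw': np.array([1.0, 0.0, 0.0, 0.0]),
    'ne': np.array([0.0, 1.0, 0.0, 0.0]),
    'sw': np.array([0.0, 0.0, 1.0, 0.0]),
    'se': np.array([0.0, 0.0, 0.0, 1.0]),
}

# Dead centre of the pad: equal parts of all four sources.
print(blend_embeddings(corners, 0.5, 0.5))  # [0.25 0.25 0.25 0.25]
```

In the real instrument the blended vector would then be fed through the NSynth decoder to synthesise audio; the point is that the mixing happens on learned sound characteristics, not on the waveforms themselves.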
In the video below they sampled 16 sound sources across 15 pitches and fed them into the algorithm. Out the other end came over 100,000 new sounds. That seems impressive, but aren’t you essentially going to end up with thousands of sounds that sound very, very similar? Dial up 4 sources with the 4 knobs and NSynth will combine their acoustic qualities relative to the position of your finger.
This sort of morphing acoustic modelling is not exactly new. I remember combining acoustic characteristics on the Technics WSA1 back in 1995, and since then physical modelling has become a standard feature in both hardware and software synthesis. I guess what’s new here is the generation of sound through interpolation of data taken from existing sources. The question, as ever, is whether it generates anything of musical worth or usefulness.
The NSynth Super is available as an open-source project, so you can build your own, perhaps with a slightly less impressive box. All the libraries used are open source, and the code, schematics and design templates are available for download on GitHub. There’s no actual product or kit you can buy; if you want to get into NSynth you’ll have to roll up your sleeves and get your hands dirty.
Check out the video below for the whole story of Magenta and what they are about. Hands up who wants to be that Doug guy? The most interesting thing he said was that “we’re not trying to make music making easier, we’re trying to build some sort of machine learning tool that gives musicians new ways to express themselves.” I wonder if they are recruiting…