by Robin Vincent | Approximate reading time: 3 Minutes

NSynth Super  ·  Source: Magenta


In a slightly unexpected move, Google has released details of NSynth Super, an experimental physical interface for the NSynth algorithm. Unexpected in that I hadn’t imagined Google moving into hardware synthesis, but only “slightly” because they’ve already demonstrated an interest via their work with Web MIDI in Google Chrome and their various musically interactive Google Doodles. So, what is this all about?


NSynth Super

It’s an ongoing experiment by Magenta, a research project within Google that explores how machine learning tools can help art and music in new ways. This machine is based upon the NSynth algorithm, which uses a deep neural network to learn the characteristics of sounds and then uses that knowledge to create new ones. Does that sound an awful lot like physical and acoustic modelling? Yes, it does, but somehow cleverer.

While messing about with the algorithm, they’ve been developing hardware that lets people interact and play with the sounds in a more human and less science-y kind of way. The NSynth Super is a prototype created in collaboration with Google Creative Lab. It lets you fiddle about in the space between four sound sources set at the corners of a touchscreen. As you move your finger around the Kaoss Pad-inspired surface you’re not simply crossfading between the sounds; instead, the algorithm generates completely new sounds based on the mix of characteristics from the original four.
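Under the hood, NSynth works on learned representations (embeddings) of the source sounds rather than on the raw audio itself. Here’s a minimal Python sketch of the blending idea: bilinear interpolation between four corner embeddings driven by the touch position. The function and variable names are my own illustration rather than Google’s code, and in the real instrument the NSynth model decodes the blended representation back into audio.

```python
import numpy as np

def blend_embeddings(corners, x, y):
    """Bilinearly blend four corner embeddings from a touch position.

    corners: dict with keys 'nw', 'ne', 'sw', 'se', each a latent vector
             standing in for one of the four source sounds
    x, y:    finger position on the pad, each between 0.0 and 1.0
    """
    top = (1 - x) * corners['nw'] + x * corners['ne']
    bottom = (1 - x) * corners['sw'] + x * corners['se']
    return (1 - y) * top + y * bottom

# Example: four made-up 16-dimensional embeddings, finger near the centre
rng = np.random.default_rng(seed=0)
corners = {name: rng.normal(size=16) for name in ('nw', 'ne', 'sw', 'se')}
blended = blend_embeddings(corners, x=0.5, y=0.5)
# A neural decoder would then turn 'blended' back into audio.
```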

In the video below they sampled 16 sound sources across 15 pitches and fed them into the algorithm. Out the other end came over 100,000 new sounds. Dial up four of those sources with the knobs and NSynth will combine their acoustic qualities according to the position of your finger. That seems impressive, but aren’t you essentially going to end up with thousands of sounds that all sound very, very similar?



This sort of morphing acoustic modelling is not exactly new. I remember combining acoustic characteristics on the Technics WSA1 back in 1995, and since then physical modelling has become a standard feature in both hardware and software synthesis. I guess what’s new here is the generation of sound through interpolation of data taken from existing sources. The question will always be whether it generates anything of musical worth or usefulness.

Open Source

The NSynth Super is available as an open-source project. You can build your own completely, perhaps with a slightly less impressive box. All the libraries used are open source, and the code, schematics and design templates are available for download on GitHub. There’s no actual product or kit you can buy; if you want to get into NSynth you’ll have to roll up your sleeves and get your hands dirty.

Magenta

Check out the video below for the whole story of Magenta and what they are about. Hands up who wants to be that Doug guy? The most interesting thing he said was that “we’re not trying to make music making easier, we’re trying to build some sort of machine learning tool that gives musicians new ways to express themselves.” I wonder if they are recruiting…

More information

Video

https://youtu.be/iTXU9Z0NYoU

