Deep Audio Visualization ~ Visual Music Perceptions


Audio visualization usually relies on hand-crafted features such as intensity, timbre, or pitch. These metrics are defined by humans and are biased towards our cultural representation of sound. In this project, we trained a neural network to learn such features directly from spectrograms, in an unsupervised way. We thus get rid of this bias and hope the resulting visualizations can help us perceive music in different ways.

https://br-g.github.io/Deep-Audio-Visualization/web-app/
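To make the idea concrete, here is a minimal sketch of the kind of unsupervised setup described above: a small autoencoder compresses log-spectrogram frames into a low-dimensional embedding, and the trajectory of that embedding over time can drive the visuals. This is an illustration under stated assumptions, not the project's actual code; the architecture, the 3-D latent size, the file name "track.wav", and all hyperparameters are placeholders.

```python
# Sketch only: an autoencoder that maps spectrogram frames to a 3-D
# embedding usable as visualization coordinates. Architecture and
# hyperparameters are assumptions, not taken from the repository.
import numpy as np
import librosa
import torch
import torch.nn as nn

# Load audio and compute a log-magnitude spectrogram (frames x bins).
y, sr = librosa.load("track.wav", sr=22050)      # "track.wav" is a placeholder path
spec = np.abs(librosa.stft(y, n_fft=1024, hop_length=512))
frames = torch.tensor(np.log1p(spec).T, dtype=torch.float32)
n_bins = frames.shape[1]

class SpectrogramAutoencoder(nn.Module):
    def __init__(self, n_bins, latent_dim=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bins, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_bins),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = SpectrogramAutoencoder(n_bins)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Unsupervised training: the only target is the spectrogram frame itself.
for epoch in range(20):
    recon, _ = model(frames)
    loss = nn.functional.mse_loss(recon, frames)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The 3-D latent trajectory over time is what drives the visualization.
with torch.no_grad():
    _, embedding = model(frames)   # shape: (n_frames, 3)
```

One plausible way to use such an embedding (again, an assumption rather than a description of the web app) is to smooth the latent trajectory over time and map its coordinates to position, colour, or camera parameters in the renderer.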
