Audio visualization usually relies on hand-crafted features such as intensity, timbre, or pitch. These metrics are defined by humans and are biased towards our cultural representation of sound. In this project, we trained a neural network to learn such features directly from spectrograms, in an unsupervised way. This removes the hand-crafting bias, and we hope the resulting visualizations can help us perceive music in new ways.
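
The text does not specify the architecture, so here is a minimal sketch of the general idea, assuming PyTorch and a small autoencoder: the network compresses each spectrogram frame into a low-dimensional feature vector and is trained only to reconstruct the frame, so no human-defined labels are involved. The layer sizes, `n_freq_bins`, and `n_features` are illustrative placeholders, not values from the project.

```python
# Sketch of unsupervised feature learning from spectrograms (assumed
# autoencoder approach; the actual project's model may differ).
import torch
import torch.nn as nn

class SpectrogramAutoencoder(nn.Module):
    def __init__(self, n_freq_bins=128, n_features=8):
        super().__init__()
        # Encoder: compress one spectrogram frame into a few learned
        # features, standing in for hand-crafted intensity/timbre/pitch.
        self.encoder = nn.Sequential(
            nn.Linear(n_freq_bins, 64), nn.ReLU(),
            nn.Linear(64, n_features),
        )
        # Decoder: reconstruct the frame, so training needs no labels.
        self.decoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_freq_bins),
        )

    def forward(self, x):
        z = self.encoder(x)            # learned visualization features
        return self.decoder(z), z

# Toy training loop; random data stands in for real spectrogram frames.
model = SpectrogramAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.rand(256, 128)          # (n_frames, n_freq_bins), placeholder
for step in range(100):
    recon, z = model(frames)
    loss = nn.functional.mse_loss(recon, frames)
    opt.zero_grad()
    loss.backward()
    opt.step()
# After training, the per-frame vectors `z` can drive the visualization
# in place of hand-crafted metrics.
```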