This research transforms a given image into a soundscape. The soundscape is not generated, however, but retrieved from an existing database.

Paper: https://nips2017creativity.github.io/doc/Imaginary_Soundscape.pdf
Demo: http://imaginarysoundscape2.qosmo.jp/view/v26G-hZG6H/384210_1661766_0

What is interesting is that it provides a way to match an image embedding to a sound embedding. For [[human-AI creative collaboration]], this could be used as a way for a creator to input an idea into the system through a visual representation. More interestingly, using something like [[Dall-E]], the user could express an image in words, Dall-E would generate the image, and the image would then either be fed into a decoder to generate music or matched with an [[embedding]] for a sound in a database. This could give the user a way to interact dialogically with a creative AI ([[Dialogic Creative Artificial Intelligence (DCAI)]]). A sketch of what that matching step might look like is below.

For [[Memu]], this could be helpful in many ways. It could provide a way for a creator to engage dialogically with the system. Moreover, it opens a new creative avenue through AI: the ability to create music from visual descriptions, which is really cool.
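As a rough illustration of the image-to-sound matching step, here is a minimal sketch of cross-modal retrieval: an image embedding is compared against a precomputed database of sound embeddings by cosine similarity, and the closest sounds are returned. This is not the paper's actual pipeline; the embedding dimensionality, the random stand-in vectors, and the `sound_files` names are all assumptions for illustration, and in practice the embeddings would come from image and audio encoders trained or aligned to share one space.

```python
import numpy as np

def cosine_similarity(query, database):
    """Cosine similarity between one query vector and each row of a database matrix."""
    query = query / np.linalg.norm(query)
    database = database / np.linalg.norm(database, axis=1, keepdims=True)
    return database @ query

def match_image_to_sound(image_embedding, sound_embeddings, sound_files, top_k=5):
    """Return the top-k sound files whose embeddings lie closest to the image embedding."""
    scores = cosine_similarity(image_embedding, sound_embeddings)
    best = np.argsort(scores)[::-1][:top_k]
    return [(sound_files[i], float(scores[i])) for i in best]

if __name__ == "__main__":
    # Stand-ins for real encoders: here everything is random, but in practice the
    # image embedding would come from a visual model and the sound embeddings from
    # an audio model aligned to the same space.
    rng = np.random.default_rng(0)
    dim = 512                                               # assumed embedding size
    sound_files = [f"sound_{i}.wav" for i in range(1000)]   # hypothetical database
    sound_embeddings = rng.normal(size=(len(sound_files), dim))
    image_embedding = rng.normal(size=dim)                  # would come from an image encoder

    for path, score in match_image_to_sound(image_embedding, sound_embeddings, sound_files):
        print(f"{path}: {score:.3f}")
```

The same retrieval loop would slot in after a text-to-image step: the user's words go to Dall-E, the generated image goes to the image encoder, and the resulting vector is matched against the sound database as above.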