A [[latent space]] is a map of compressed representations of objects that exist in a higher dimension. For example, a neural network can learn to compress a 500x500-pixel photo into a 256-dimensional vector that retains almost all the information needed to recreate the image. This vector is an [[embedding]]. Embeddings capture the essence of an object: they boil it down to its most fundamental features.

In [[human-AI creative collaboration]], humans co-navigate latent spaces ([[A creative collaboration is a co-exploration of a conceptual space]]). This navigation can happen very explicitly, through interfaces like the ones in Runway ML, where you literally move a cursor through a two-dimensional space, or through language interfaces, which translate what you say into the compressed representation. Even translation between languages works this way: an utterance in language A is encoded into its compressed representation and then decoded back out in language B. What is remarkable about machine translation is that these models learn the latent shape of languages. Because languages share similar structures, a translation is merely taking a point in one structure and mapping it onto the corresponding point in the other.

This is related to [[Margaret Boden]]'s idea of [[conceptual spaces]], which artists navigate in the creative process. One can conceptualise these conceptual spaces mathematically as latent spaces.

**Dialogue and latent spaces**

Each participant in a dialogue has different ideas in their head. Each has a different representation of the world, which can be understood as a latent space: it is hidden inside the person, and perhaps not even fully visible to them. One can understand dialogue as the process of aligning those latent spaces ([[Dialogue is the process of aligning meaning]]) and of working out where in the latent space we are actually standing. #Idea
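
As a concrete illustration of the compression described above, here is a minimal sketch in PyTorch. The architecture (a fully connected autoencoder), the layer sizes, and all names are illustrative assumptions rather than a reference implementation; real image models typically use convolutions.

```python
import torch
import torch.nn as nn

# A toy autoencoder: compresses a flattened 500x500 grayscale image
# (250,000 values) down to a 256-dimensional latent vector, the "embedding".
class Autoencoder(nn.Module):
    def __init__(self, input_dim=500 * 500, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, latent_dim),  # the bottleneck: a point in latent space
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, input_dim),  # reconstruct the original pixels
        )

    def forward(self, x):
        z = self.encoder(x)          # z is the embedding
        return self.decoder(z), z

model = Autoencoder()
image = torch.rand(1, 500 * 500)     # stand-in for a real photo
reconstruction, embedding = model(image)
print(embedding.shape)               # torch.Size([1, 256])
```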
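
Navigating a latent space, as in the Runway ML-style cursor interface mentioned above, can be as simple as walking a straight line between two embeddings. A minimal sketch, reusing the `Autoencoder` from the previous block; linear interpolation is just one navigation strategy among many:

```python
import torch

def interpolate(z_a, z_b, steps=10):
    """Walk in a straight line through latent space from z_a to z_b."""
    return [torch.lerp(z_a, z_b, t) for t in torch.linspace(0.0, 1.0, steps)]

# Moving the "cursor" between two embeddings; with a trained model, decoding
# each intermediate point would yield a gradual morph between the two images.
path = interpolate(torch.randn(256), torch.randn(256))
frames = [model.decoder(z) for z in path]
```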
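
The translation-as-latent-mapping idea can also be sketched, though only at toy scale. Everything below is a hypothetical assumption for illustration: it averages token embeddings into a shared 256-dimensional latent point and decodes it with the other language's output head, whereas real machine translation systems use sequence-to-sequence models trained on parallel text.

```python
import torch
import torch.nn as nn

# Toy shared-latent translator: both languages map into the same latent
# space; translating from A to B means encoding with A's encoder and
# decoding with B's decoder.
class SharedLatentTranslator(nn.Module):
    def __init__(self, vocab_a=10_000, vocab_b=12_000, latent_dim=256):
        super().__init__()
        self.encode_a = nn.EmbeddingBag(vocab_a, latent_dim)  # language A -> latent
        self.decode_b = nn.Linear(latent_dim, vocab_b)        # latent -> language B

    def translate_a_to_b(self, tokens_a):
        z = self.encode_a(tokens_a)   # a point in the shared latent space
        return self.decode_b(z)       # the same point, re-expressed in B's vocabulary

translator = SharedLatentTranslator()
tokens = torch.randint(0, 10_000, (1, 7))   # a toy 7-token sentence in language A
logits = translator.translate_a_to_b(tokens)  # scores over language B's vocabulary
```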