One of humanity's enduring fascinations has been gazing into the night sky. As Jo Dunkley puts it, beyond its aesthetic appeal, the space above us has inspired questions about how we fit within this larger universe. For millennia, however, our visual exploration of space, and our capacity to answer these questions, was limited to what we could see with the naked eye. The invention of the telescope opened up new regions of space for us to see.

I think of art and creativity in a similar way. Humanity has always been intrigued by exploring the space of creative possibilities, and art has helped us conceptualise how we fit within the larger cosmos and grapple with our most existential questions. Just as the telescope did for astronomy, creative instruments open up new ways for us to peer into the space of creative possibilities. First we used our hands to explore what cave walls and pigments made from crushed insects could afford us. Since then, we have invented many other creative instruments: complex paint brushes, cameras, guitars, synthesisers, and now, generative algorithms (which are ultimately, funnily enough, just electricity passing through rocks).

## Generative neural networks as a new telescope

Generative AI algorithms are a particularly interesting kind of telescope. They learn the distribution of their training data, be it images, text, or music, creating a space of possibilities known as the latent space. We can peer into this space through different interfaces, much as we peer into the sky through a telescope. Currently, the dominant interface is text: we craft prompts that guide us to specific regions within that space.

> Side note: the metaphor linking telescopes and diffusion models is not only figurative. Just as a telescope gradually focuses light to reveal distant objects, going from a blurry to a clear image, generative models, especially diffusion models, start from noise and progressively denoise over a series of time steps until an image matching the prompt emerges. They transition from a coarse resolution to a finely detailed image, much like adjusting a telescope to see a region of space in greater detail.
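To make the side note concrete, here is a toy sketch of that progressive refinement in Python. It is not a real diffusion model: the "denoiser" below is a stand-in that simply nudges the sample toward a known clean signal, where a real model would predict the noise with a neural network conditioned on the prompt and the timestep.

```python
import numpy as np

rng = np.random.default_rng(0)

target = np.sin(np.linspace(0, 2 * np.pi, 64))  # stand-in for a "clean image"
x = rng.standard_normal(64)                     # step 0: pure noise

num_steps = 10
for t in range(num_steps):
    # Fake "denoiser": in a real diffusion model this prediction comes
    # from a trained network, not from peeking at the target.
    predicted_clean = target

    # Move a fraction of the way toward the prediction, like a
    # telescope being focused a little more with each turn. The small
    # noise term mimics the stochasticity of each reverse step and
    # fades as we converge.
    alpha = (t + 1) / num_steps
    x = (1 - alpha) * x + alpha * predicted_clean \
        + 0.05 * (1 - alpha) * rng.standard_normal(64)

    print(f"step {t + 1:2d}: distance to clean signal = "
          f"{np.linalg.norm(x - target):.3f}")
```

Running it, the distance to the clean signal shrinks step by step: the numerical analogue of the image coming into focus.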
## Beyond text

The telescope metaphor is not only illustrative: it can also help us design better tools. Telescopes are primarily instruments of exploration. Early astronomers swept their telescopes across the sky in search of interesting regions, then focused in on them, in an exploration-versus-exploitation dynamic. However, generative AI models today mainly require users to craft text prompts to reveal a region. This limits exploration, as it assumes users already know where they want to go in the creative space. But how can they know what's out there if they've never seen it? Instead, like telescopes, these tools need to support exploration: the ability to move around, up and down, through the latent space.

What could these new interfaces look like? I envision navigable interfaces that allow users to move through the latent space. This is a significant challenge because the latent space of generative models can have hundreds or even thousands of dimensions, while humans can typically navigate only two dimensions on a screen, or three if we get creative. How do we reduce that high-dimensional space into a meaningful 2D or 3D representation (one possible starting point is sketched below)? Beyond moving up and down on a screen, how else can we navigate it? Can we involve the body using motion sensing? Could we treat brain signals as directives, using brain-computer interfaces (BCIs)? These are open questions, and they are ones I'm excited to explore now at Leonardo.Ai.
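As a minimal sketch of the dimensionality-reduction question above, the snippet below projects a set of latent vectors onto a 2D plane with PCA, then lifts a screen position back into the full latent space. The latent vectors here are random stand-ins; in practice you would encode a dataset or sample the model's prior. PCA is chosen over alternatives like t-SNE or UMAP because its projection is linear and therefore easy to invert, which is exactly what a navigable interface needs.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical setup: 1,000 latent vectors from a 512-D latent space.
# Random data stands in for real latents here.
rng = np.random.default_rng(42)
latents = rng.standard_normal((1000, 512))

# Fit a 2-D projection so each latent gets a position on a screen.
pca = PCA(n_components=2)
coords_2d = pca.fit_transform(latents)  # shape (1000, 2)

# Navigation: the user drags the cursor to a new 2-D point on screen...
cursor = np.array([[1.5, -0.3]])

# ...and we lift that point back into the full latent space.
latent_at_cursor = pca.inverse_transform(cursor)
print(latent_at_cursor.shape)  # (1, 512)
```

Feeding `latent_at_cursor` to the model's decoder would render whatever "lives" at that point on the map, turning the 2D plane into a viewfinder for the latent space.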