There is an emerging trend of Large Language Models (LLMs) being given the ability to interact with the outside world. This could unlock AI's greatest potential, but it also poses its biggest, perhaps even existential, risks. For instance, the latest release of GPT agents enables the development of systems that can access a myriad of external tools, from email and live data sources to smart home appliances and on-demand services like Uber and Uber Eats. This seems like the natural progression of the technology: giving agents the ability to execute real-world actions dramatically increases their utility. The same principle lies at the heart of platforms like LangChain, which describes its AI agents as both agentic and context-aware: able to process external data and execute actions based on it.

This trend opens up a wide range of new possibilities. On one hand, the practical applications are clear: telling your agent to find a time to meet a friend by checking both of your calendars, then booking a table through a restaurant app, and sending you both a generated Spotify playlist before you meet. On the other is a possibility I am more excited about: unprecedented new creative applications. I recently explored this in a project commissioned by the Sydney Opera House: we linked an agent to a music generation system that reads real-time activity data from the building, creating a continuous live stream of music that served, for an entire month, as a soundtrack to what was happening inside. The potential of this creative interplay between LLMs, real-world interaction, and control of external systems is genuinely exciting.

At the same time, opening this Pandora's box poses significant risks. Eliezer Yudkowsky, a well-known AI safety researcher, argues that this could represent the gravest danger related to AI. He particularly highlights existential threats emerging from a scenario in which an AI, such as an LLM, gains access to a biolab with chemical synthesis equipment. With control over the lab's robotic equipment, the AI could theoretically synthesise new molecules. Pharmaceutical companies might, for instance, use a system like ChatGPT to manage drug discovery and manufacturing; an adversarial actor could exploit the same system to produce undetectable bio-weapons. This sounds like science fiction, but I recently saw a tweet from a university researcher who had connected an LLM to chemical synthesis equipment.

I predict that in the coming years, language models will gain ever deeper access to our world, becoming embedded in systems that control many aspects of our lives, from our cars and email to our climate systems. These AI systems could become the operating system of our cyber-physical infrastructure, a transformational shift that carries significant risks with it. We can imagine an underlying layer of intelligence operating the most important systems in our world. Before that happens, we need to make sure these intelligences are aligned with our goals. The problem is that we don't yet know how to do this reliably.
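
To make the agent pattern described above a little more concrete, here is a minimal, framework-free sketch of the sense-decide-act loop. It deliberately avoids any real library: `call_llm`, the `TOOLS` registry, and the message format are illustrative placeholders, not LangChain's or OpenAI's actual APIs.

```python
import json

def call_llm(messages: list[dict]) -> dict:
    """Placeholder for a hosted model call. A real implementation would
    return either a tool request or a final answer."""
    return {"tool": "read_calendar", "args": {"user": "me"}}  # stubbed decision

# Each tool is a real-world action the agent is allowed to take (stubbed here).
TOOLS = {
    "read_calendar": lambda user: ["2024-05-01 19:00 free"],
    "book_table": lambda time, venue: f"booked {venue} at {time}",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        decision = call_llm(messages)
        if "answer" in decision:  # the model decided it is done
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])  # act in the world
        messages.append({"role": "tool", "content": json.dumps(result)})  # feed result back
    return "step budget exhausted"

print(run_agent("Find a time to meet Alex and book a table."))
```

The crucial (and risky) line is the dispatch through `TOOLS[...]`: whatever action the model requests gets executed. Every safety concern raised above ultimately lives in that single line.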
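
In the same spirit, here is a highly simplified sketch of the kind of loop behind the Opera House piece: sense the building, ask a model for musical parameters, render, repeat. The function names (`fetch_building_activity`, `activity_to_params`, `render_music`) are hypothetical stand-ins, not the project's actual code.

```python
import time

def fetch_building_activity() -> dict:
    """Stand-in for the real-time feed of activity data from the building."""
    return {"foyer_crowd": 0.7, "events_running": 2, "hour": 21}

def activity_to_params(activity: dict) -> dict:
    """Stand-in for the LLM call that maps activity data to musical parameters."""
    return {"tempo": 60 + int(60 * activity["foyer_crowd"]), "mood": "nocturnal"}

def render_music(params: dict) -> None:
    """Stand-in for the music generation system that consumes those parameters."""
    print(f"streaming at {params['tempo']} BPM, mood: {params['mood']}")

for _ in range(3):  # the real installation ran a loop like this continuously for a month
    render_music(activity_to_params(fetch_building_activity()))
    time.sleep(30)  # re-read the building periodically
```

Structurally this is the same loop as the agent sketch above; only the "tools" differ, which is exactly why the same architecture can span both a month-long soundtrack and a biolab.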