In 2023, large language models burst onto the scene in full force. Unquestionably, language models, and generative AI more generally, have been the most transformative technologies of recent years. But I think what lies in the next decade will be many times more transformational.

Recently, OpenAI announced the launch of GPTs, which allow developers (and non-developers) to construct their own GPT agents that can integrate with external online systems such as calendars, web browsing, apps, and real-time data sources and knowledge. Through the implementation of "functions", GPTs are able to control external services and things in the real world (a minimal sketch of what such a function definition looks like appears at the end of this post). In the beginning this will predominantly involve digital services, with the AI handling tasks like making bookings and sending emails on the user's behalf.

However, we are already beginning to see large language models controlling robotic systems. This was recently demonstrated by Boston Dynamics, who integrated ChatGPT into their robot dog. ChatGPT sends commands that drive the robot's movements; it can also analyse the robot's visual inputs and converse with human users while moving around a room. This experiment hints at the near future of human-AI interaction.

https://www.youtube.com/watch?v=djzOBZUFzTw

But things will get even crazier a few years after that. Brain implants and brain-computer interfaces are currently hot areas of research and development, with promising results. A wild example is a monkey that was able to play a game of Pong using only its brain, facilitated by an interface that interprets the brain signals linked to specific intentions and actions.

https://www.youtube.com/watch?v=2rXrGH52aoM
https://www.youtube.com/watch?v=xHSgUNz8C-Q

This technology, developed by Elon Musk's company Neuralink, allows individuals to control digital objects using their brains. It's not hard to imagine AI models being integrated into these brain-computer interfaces. You could ask ChatGPT, or whatever LLM is embedded in the interface, a question simply by thinking it, and the response would be sent directly into your brain. This is purportedly Musk's ultimate vision for his company: a complete merger of human brains with AI.

Another example of AI embedding, and a hint in that direction, is the recently launched Humane Pin, a portable device powered by a language model. It interacts with users through voice and touch, and connects to many of the user's services.

https://www.youtube.com/watch?v=CwSeUV3RaIA

I think the Humane Pin is merely a stepping stone towards a future of full human-AI merging. What happens when the LLM can send commands into the user's body? This is reminiscent of Kurt Vonnegut's The Sirens of Titan, in which an army of ex-Earthlings is unknowingly controlled by brain implants that trigger certain feelings.
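
As promised above, here is a minimal sketch of what a "function" looks like in practice, using the OpenAI Python SDK's function-calling interface. The create_calendar_event function and its parameters are hypothetical stand-ins for whatever external service a GPT might be wired up to; the model itself never executes anything, it only asks for the call.

```python
# Minimal sketch of OpenAI-style function calling. The model is told about a
# hypothetical create_calendar_event function and may respond with a request
# to call it; the name and parameters are illustrative, not a real service.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [
    {
        "type": "function",
        "function": {
            "name": "create_calendar_event",  # hypothetical external service
            "description": "Create an event in the user's calendar.",
            "parameters": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "start_time": {"type": "string", "description": "ISO 8601 datetime"},
                    "duration_minutes": {"type": "integer"},
                },
                "required": ["title", "start_time"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Book a 30-minute meeting with Sam tomorrow at 10am."}],
    tools=tools,
)

# If the model decides to use the function, it returns the arguments it wants
# to call it with; the developer's own code then performs the real-world action.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    print(tool_calls[0].function.name, tool_calls[0].function.arguments)
```

The division of labour is the point: the language model only emits a structured request, and it is ordinary application code that actually touches the calendar, the inbox, or, eventually, the robot.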