5 August 2024
One of the most important skills of the future will be learning how to interact with different intelligences, including artificial intelligence. It can be incredibly useful, or incredibly frustrating and counterproductive. How do we interact with different intelligences? A few ideas: understand how they are different. Stay in the loop; do not outsource everything. They make mistakes, they are probabilistic. Be clear and concise. Understand how they work underneath; this helps you phrase interactions better. For developers: do not anthropomorphise. Instead, recognise their unique strengths and weaknesses and design around them. For example, LLMs make mistakes often. Can we add a layer of verifiability? Can we make those mistakes visible? Can we create products that are robust to an 80% success rate? What if we go beyond text? Might some AIs be better interacted with through different interfaces? Is text the best way to explore image or music latent spaces? I don't think so. We can design with this in mind. Use examples: models are better at copying than reasoning from first principles. Prompt for step by step.
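A minimal sketch of what such a verifiability layer could look like, just as an idea rather than any existing product. `generate` and `verify` are hypothetical stand-ins for an LLM call and a cheap automated check; the point is to design around the ~80% success rate instead of pretending it is 100%.

```python
# Sketch: wrap a probabilistic generator with an explicit verification step,
# retrying a few times and making failure visible instead of hiding it.
# `generate` and `verify` are hypothetical callables, not a real API.

def call_with_verification(generate, verify, prompt, max_attempts=3):
    """Retry a probabilistic generator until an explicit check passes."""
    for attempt in range(1, max_attempts + 1):
        candidate = generate(prompt)
        if verify(candidate):
            return candidate, attempt
    return None, max_attempts  # surface the failure to the user


if __name__ == "__main__":
    import random

    # Toy stand-ins: a "model" that is right ~80% of the time, and a checker.
    def flaky_model(prompt):
        return "42" if random.random() < 0.8 else "banana"

    def is_integer(text):
        return text.isdigit()

    answer, attempts = call_with_verification(flaky_model, is_integer, "What is 6 * 7?")
    print(answer, "after", attempts, "attempt(s)")
```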
5 August 2024
To write: the main issue people run into when using AI now is that we are trying to anthropomorphise a very different intelligence. It is like trying to fit a square peg into a round hole. Instead, we need to learn to interact with, and create interfaces for interacting with, artificial intelligence as a different form of intelligence, in all its weirdness and its different way of processing information. For example, when coding, or working on iterative projects, it is easy to make the model go crazy and start writing nonsense by pulling it in different directions. If I try something in code, then realise it won't work and say, "ok, scrap that, let's try this other direction", that previous context stays there and will condition the AI's output. Part of the attention will still be placed on that incorrect solution, affecting the probability of the output tokens and likely biasing them towards the wrong answer. The more unnecessary information in the prompt, the worse the output. With a human this is not a problem; people are better at forgetting wrong information. But the way LLMs work, they will ALWAYS reference it to a certain degree. When this happens, I am forced to start a new chat with GPT or Claude, which loses progress and is very inconvenient. The way to address this is to acknowledge the nature of how LLMs work and make a simple interface change: let the user delete those unnecessary messages, "cleaning the conversation", without having to start again.
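A minimal sketch of that interface change, assuming the common role/content chat message format. `send_to_model` is a hypothetical placeholder, not a call to any real API; the idea is only that dead-end turns can be removed from the context before the next request.

```python
# Sketch: a chat history the user can prune, so wrong directions stop
# biasing the model, without starting a fresh conversation.

class PrunableChat:
    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})
        return len(self.messages) - 1  # index the UI can reference later

    def delete(self, *indices):
        """Remove dead-end turns so they no longer sit in the context."""
        drop = set(indices)
        self.messages = [m for i, m in enumerate(self.messages) if i not in drop]

    def send_to_model(self):
        # Placeholder: here the pruned self.messages would go to the chat API.
        return self.messages


if __name__ == "__main__":
    chat = PrunableChat("You are a coding assistant.")
    bad_q = chat.add("user", "Let's parse the file with a regex.")
    bad_a = chat.add("assistant", "Here is a regex-based parser...")
    chat.add("user", "Scrap that, let's use a proper parser instead.")
    chat.delete(bad_q, bad_a)  # "clean the conversation" without starting again
    print([m["content"] for m in chat.send_to_model()])
```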
5 August 2024
Often I have felt that my PhD somehow became too crowded. With AI becoming the buzz field it has become, everyone seems to be working on it and contributing to it, and that somehow makes me feel irrelevant, or overwhelmed.
In the early days of my PhD, when nobody knew about LLMs, it felt like I was working on something niche with huge potential, where I had the chance to think in peace. Now it feels like I am thinking in a crowd full of screaming people. This has made me enjoy it less. However, that is also down to a specific framing and way of engaging with the outside world. First, engaging with social media now feels super crowded, full of buzzwords and charlatans. Second, I should also feel incredibly lucky to be working in the field that is now more or less the hottest thing, and to have been working in it beforehand. The most useful approach is to disengage from social media as much as possible and keep working on the things I find interesting, in silence, only releasing things out into the world, be they ideas, products, experiments or research. I can trust that if something is of value, it will get attention, this being such an active field.