I just came back from a three-week trip to Europe to attend the International Conference on Computational Creativity in Sweden and the Sound and Music Computing Conference in Porto, and I presented a paper at each. Here, I'll collect the main takeaways.

**Is temperature the creativity parameter of LLMs?**

Large language models (LLMs) are applied to all sorts of creative tasks, and their outputs vary from beautiful, to peculiar, to pastiche, to plain plagiarism. The temperature parameter of an LLM regulates the amount of randomness, leading to more diverse outputs; therefore, it is often claimed to be the creativity parameter. Here, we investigate this claim using a narrative generation task with a predetermined fixed context, model and prompt. Specifically, we present an empirical analysis of the LLM output for different temperature values using four necessary conditions for creativity in narrative generation: novelty, typicality, cohesion, and coherence. We find that temperature is weakly correlated with novelty and, unsurprisingly, moderately correlated with incoherence, but there is no relationship with either cohesion or typicality. Overall, the results suggest that the LLM generates slightly more novel outputs as temperatures get higher, but the influence of temperature on creativity is far more nuanced and weak than the "creativity parameter" claim suggests. Finally, we discuss ideas to allow more controlled LLM creativity, rather than relying on chance via changing the temperature parameter.

Jonathan Demke and Dan Ventura. _[Overcoming Algorithmic Bias as a Measure of Computational Creativity.](https://computationalcreativity.net/iccc24/papers/ICCC24_paper_42.pdf)_

**Chess paper:** the first move looks dumb, but after a few more steps it reveals itself as a brilliant move.
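As a side note on the mechanism behind the temperature discussion above: temperature divides the logits before the softmax, so low temperatures sharpen the distribution toward the top token and high temperatures flatten it toward uniform. A minimal sketch with toy logits (hypothetical values, not the paper's actual setup):

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    """Convert raw logits into a probability distribution.
    Low temperature sharpens it; high temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-token logits for a 3-token vocabulary
logits = [2.0, 1.0, 0.1]

cold = softmax_with_temperature(logits, 0.2)  # near-greedy: top token dominates
warm = softmax_with_temperature(logits, 1.0)  # unscaled softmax
hot = softmax_with_temperature(logits, 2.0)   # flatter: more randomness

# Sampling from the flattened distribution picks unlikely tokens more often
token = random.choices(range(len(logits)), weights=hot, k=1)[0]
```

This makes the paper's point concrete: temperature only reshapes an existing distribution, so it injects diversity by chance rather than steering generation toward anything creative.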
https://computationalcreativity.net/iccc24/papers/ICCC24_paper_200.pdf

A brilliant move is often classified as lower quality at first, but is evaluated as brilliant further down the line.

**New ideas: AI helps us move up levels of abstraction: does this hurt or enable creativity?**

One notable benefit of employing Artificial Intelligence (AI) is the opportunity to ascend through levels of abstraction. There are two relevant levels: the execution space and the conceptual space. Creatives move between the two. AI can enable individuals to spend more time in the conceptual space, while directing the AI to do the grunt work in the execution space. In some cases this can be highly beneficial for creativity: for example, it can help coders execute ideas beyond their level of expertise. I personally use it a lot, not only to do things that I don't know how to do, but also to speed up my process. Instead of focusing on the intricacies of Python syntax, chasing syntax errors, and searching Stack Overflow, I can focus on the conceptual elements and have the AI do the low-level heavy lifting.

From the psychology of creativity perspective, this can be highly useful for enabling creative flow by closing the skill-difficulty gap: an established concept in the psychology of creativity which states that flow is achieved when there is an optimal balance between skill and difficulty.

![[Pasted image 20240709123403.png]]

However, it also presents a potentially negative impact on creativity. According to Schön, author of The Reflective Practitioner, the process of creation allows the creator to reflect on the process itself. There is a dialogue between the conceptual space and the execution space: doing things in the execution space changes perspectives in the conceptual space.
For example, as articulated by William Zinsser in his book Writing to Learn, the process of writing is itself a process of thinking. Through writing, the writer undergoes a learning process and generates new perspectives. Consequently, transferring the core operational tasks of writing to an AI in a way outsources the grunt work of thinking, and can ultimately lower the level of writing and research someone can produce.

Ultimately, there is the issue of skill loss. Heersmink has argued that language models can contribute to skill loss and decreased cognitive abilities if we default to outsourcing entire tasks to the models with little user involvement: https://www.nature.com/articles/s41562-024-01859-y

On the other hand, Essel and colleagues have shown empirically that using ChatGPT helped students increase their critical-thinking skills and creativity (https://www.sciencedirect.com/science/article/pii/S2666920X23000772). However, they emphasise that it is crucial that the tool is used to guide students, provide feedback, and help them understand complex topics, rather than automating thinking outputs entirely. Something similar was observed in the McKinsey study, where consultants were able to improve their performance.

**Shared playground**

Large Language Models (LLMs) are now being employed as aids across a wide range of language-related tasks, from writing emails and speeches to crafting blog posts and research articles. Following the introduction of ChatGPT, chat interfaces have become the prevalent form of interaction with these systems. While this design has made models more user-friendly and accessible (ref), a purely chat-based interface presents important limitations for effective human-AI co-creativity. In particular, purely chat-based interfaces impose substantial friction on active user involvement in artifact production. Consider, for example, co-writing a blog-post manuscript.
If the user wants to edit an output of the model, they have to copy and paste the text into an external text editor, make modifications, and then paste it back into the chat to continue iterating collaboratively on the manuscript with the model. As a result of this friction, users tend to become less involved in making active contributions to the creation. Rather than being an active co-creator, the user adopts an outsourcer role, merely asking the model to perform certain tasks and then copy-pasting the model's response with little oversight or modification. Previous research has shown that a lack of active involvement from the user leads to increased errors, diminished performance, and skill loss over time. On the other hand, it has been shown that when the user is actively involved, performance can be greater than that of the AI or the human working alone, errors are diminished, and, rather than facilitating skill loss, interaction with the model can contribute to greater critical thinking and creativity.

We propose an alternative design that presents a simple yet effective change to facilitate active user involvement: a shared creative playground where both the user and the AI can add, edit and modify outputs. While a chat window remains, it is no longer the primary communication channel; the chat becomes a space to discuss the joint creation being worked on in the shared playground. This model of interaction is what we term interaction through and about the artifact, where the user and the AI not only chat about the common creation, but also share a space in which both contribute directly to it. We examine whether this interaction design results in greater user involvement and higher perceived co-creativity. We conducted a user study with two groups: one used a purely chat-based interface for a writing task, and the other received a tool featuring our shared playground model, allowing direct contribution to a shared text editor and discussion in a chat window.
Our findings indicate that X. Artificial intelligence presents an important opportunity for enabling greater human performance across a range of tasks by enabling co-creativity and collaboration. However, interaction design is a crucial element in materialising this possibility. Our work highlights how a simple interaction change that enables a shared playground contributes to higher user involvement.

I find this interesting:

![[Pasted image 20240710215149.png]]

Before 2020, they had reduced emissions five years in a row. Since the introduction of generative AI in their products, emissions have risen, and since 2020 they have almost doubled. This is hugely important. Artificial intelligence is proposed as something that will help us solve all our problems. Let's take a step back. Sam Altman, the founder of OpenAI, talks openly about the dangers of AI. Elon Musk has obviously said a lot about this. Many researchers and thinkers consider that AI poses an existential danger to the human race. One would think: why do they build it, then? If the very people building it, like Sam Altman, believe it poses a grave danger, why do they do it? The argument is that, while the dangers are great, the potential benefit is even greater. That is, if we can avoid the dangers, the benefits will be enormous. In particular, having a superintelligence would help us solve many of our problems.

![[Pasted image 20240710223013.png]]

Climate change is the biggest problem we have, and AI is the biggest technology of our time, and they seem to be at odds with each other. Will human civilisation burn itself with the heat generated by its mega brain?

**Execution Space and Conceptual Space**

AI allows you to move more into the conceptual space, but you lose touch with the execution space. When coding, it's often better to just understand it first, and then maybe use the AI to do the heavy lifting, but not to take you there from scratch.
Reading documentation, and reading in general non-generatively, still has a lot of value: engaging with static text over generative text.