Artificial Intelligence


Artificial intelligence (AI) is the simulation of human intelligence by computers. Overall, it involves the design and creation of non-human systems with human cognitive capacities, such as language, perception, and decision-making.* Julian Peller et al., Ok, Pandora (Buenos Aires: El Gato y la Caja, 2024), 160. https://elgatoylacaja.com/ok-pandora/indice Originally in Spanish, translation to English by the author. AI is a broad field with a long history. The earliest substantial work and theoretical development in the field began with Alan Turing. Turing’s formal model of computation is now commonly referred to as the universal Turing machine, and it serves as the foundation for all modern computers.* Alan M. Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem," Proceedings of the London Mathematical Society s2-42 (1936): 230–265; Alan M. Turing, "Computing Machinery and Intelligence," Mind 59, no. 236 (1950): 433–460.

Linking AI to human cognition, and ultimately to creative practice, is both possible and illustrative: it reveals meaningful parallels. As I proposed in the ‘Introduction’ of this reflection, it is possible to follow Margaret Boden’s advice and try to understand certain forms of cognition as they exist outside the human brain –for example, in a computer– in order to get a clearer picture of how they exist and develop within our mind. At the same time, many crucial differences between them must be understood for any meaningful comparison.

Similarities with human cognition

Essentially, AI can learn, recognize patterns, solve problems, predict, make decisions, and adapt to new situations. These are all traits that resemble equivalent capacities of human cognition. Beyond these broad similarities, there are more nuanced parallels. One of the most striking lies in how both systems encode input data. In humans, this occurs through sensory systems that process environmental stimuli; in computers, it happens through algorithms that process digital inputs, such as text, images, or numerical data. Both systems then reduce the dimensionality* To better understand the idea of dimensionality, we can revisit Gärdenfors’ notion of conceptual space and quality dimensions, viewing it as analogous to the concept of dimensionality in AI. High-dimensional data primarily refers to complex information represented using many similarity –or quality– dimensions. Reducing its dimensionality, therefore, means representing the data using fewer quality dimensions. of this data, essentially simplifying and organizing the complexity of raw inputs into a more manageable and meaningful form.
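To make the idea of dimensionality reduction concrete, here is a minimal sketch in Python (my own illustration, not part of the cited sources): it performs principal component analysis via NumPy’s singular value decomposition, projecting synthetic five-dimensional data onto a single dimension, the kind of simplification of raw inputs described above.

```python
import numpy as np

# Illustrative only: reduce the dimensionality of synthetic data with PCA,
# projecting high-dimensional points onto fewer "quality dimensions".
rng = np.random.default_rng(0)

# 100 points in 5 dimensions, but with most variance along one hidden direction
base = rng.normal(size=(100, 1))
data = base @ rng.normal(size=(1, 5)) + 0.05 * rng.normal(size=(100, 5))

# Center the data and take the top principal component
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ vt[:1].T  # shape (100, 1): one dimension instead of five

print(data.shape, "->", reduced.shape)  # (100, 5) -> (100, 1)
```

Because the synthetic data varies mostly along one direction, a single dimension is enough to retain nearly all of its structure; this is the sense in which dimensionality reduction organizes complexity into a more manageable form.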

A somewhat analogous form of representing data in AI models is the notion of a latent space. A latent space refers to a hidden or underlying space onto which high-dimensional data is projected, revealing essential patterns or structures. Usually, these latent variables or dimensions are not directly observable. In addition to the idea of latent space, there is a related concept from AI that is often used interchangeably with it, although the two have nuanced differences: embedding spaces.

Latent spaces and embedding spaces are both used in machine learning to represent high-dimensional data in a more compact form. However, embedding spaces refer to the transformation of data into continuous vectors.* A vector is a mathematical concept often represented as an ordered list of numbers corresponding to coordinates in a space. These embeddings thus capture relationships or similarities in the data, aiming to preserve these similarities through vectorization. In other words, while latent spaces focus on uncovering hidden structures, embedding spaces focus on transforming data into vectors that preserve similarity.
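As a concrete sketch of how embeddings preserve similarity through vectorization, consider the following Python toy example (the three-dimensional vectors are hand-made by me and purely illustrative; real embeddings are learned and have hundreds of dimensions): semantically related words receive nearby vectors, and cosine similarity recovers that closeness.

```python
import math

# Hypothetical, hand-made "embeddings" (illustrative only):
# semantically related words are given nearby vectors.
embeddings = {
    "winter": [0.9, 0.1, -0.3],
    "snow":   [0.8, 0.2, -0.2],
    "guitar": [-0.4, 0.9, 0.5],
}

def cosine_similarity(a, b):
    """Similarity of two vectors: near 1.0 = same direction, near 0 or below = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

print(cosine_similarity(embeddings["winter"], embeddings["snow"]))    # high: related concepts
print(cosine_similarity(embeddings["winter"], embeddings["guitar"]))  # low: unrelated concepts
```

The point is not the particular numbers but the principle: vectorization turns semantic relatedness into geometric proximity, which is what embedding spaces are designed to preserve.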

This is another similarity between humans and AI: the human brain encodes information in a way that reflects semantic similarity.* Semantic similarity measures the degree to which two concepts, words, or pieces of text share meaning or are related in context. See Abhilasha A. Kumar, "Semantic memory: A review of methods, models, and current challenges," Psychonomic Bulletin & Review 28, no. 1 (2021), https://doi.org/10.3758/s13423-020-01792-x. Neural representations in the brain are believed to cluster semantically related concepts. For example, certain areas of the brain, such as the temporal lobe, are involved in organizing and storing semantic memory, where related knowledge is stored in close proximity. This facilitates the efficient processing and retrieval of information based on categories of similarity. This form of neural organization can be seen as a neural correlate of Gärdenfors’ theory of conceptual spaces.

Differences

In the chapter “AI y Conciencia” (AI and Consciousness),* Peller et al., Ok, Pandora. Enzo Tagliazucchi proposes that the biggest difference between AI and human cognition is the emergence of consciousness in humans –namely, the capacity of humans, and potentially other animals, to have subjective sensations. However, this distinction leads to problems, as it is impossible to establish measurements of consciousness. We humans essentially claim that we are conscious and experience subjective sensations, but so far there is no way to prove it outside ourselves. Furthermore, it is not even clear what neural mechanisms allow the emergence of consciousness. Therefore, if an AI system were conscious, there would currently be no way to prove it true or false. Consciousness might be a capacity that AI will never achieve, might already have been achieved by the most advanced models, or might be close to being achieved. It is impossible to tell so far.

Another important difference is the notion of embodiment. As AI models are disembodied –at least from an organic human body– they are incapable of experiencing the multiplicity of sensations and perceptions that ultimately shape our cognitive processes –I will discuss this further in the section ‘The Embodied Perspective.’ When we think of art creation, embodiment seems to be a crucial factor that determines fundamental aspects of a practice. Boden exemplifies this using a writer and the metaphor of “Winter.” The embodied experience of cold, darkness, lack of vitamin D, flu, and so on certainly has an impact on how a writer might create a poem about the winter, or on how winter metaphorically relates to unrequited love, and so forth. In this and countless other examples, we can see how embodied sensations exert strong agency in the creative process.

However, two strong arguments challenge the notion that lack of embodiment is a determining factor differentiating AI from human cognition. On the one hand, the development of organic computation has shown that it is now possible to compute using organic substrates,* Ramiz Daniel et al., "Synthetic analog computation in living cells," Nature 497, no. 7451 (2013), https://doi.org/10.1038/nature12148; Jacob R. Rubens, Gianluca Selvaggio, and Timothy K. Lu, "Synthetic mixed-signal computation in living cells," Nat Commun 7, no. 1 (2016), https://doi.org/10.1038/ncomms11658. which could potentially lead to AI existing in an organic medium. On the other hand, multiple sensory capacities are gradually being integrated into the most advanced AI systems. GPT-4o, for example, integrates visual and auditory recognition and is trained on datasets from these domains. This could potentially bridge the gap between AI and humans concerning embodiment.

Last but not least, recent research has tested some of the most advanced AI models on intelligence tasks, with astonishing outcomes on problems that were not previously conceived as solvable by disembodied agents. Against these odds, some models are showing sparks of general intelligence, even without having a body.* Sébastien Bubeck et al., "Sparks of Artificial General Intelligence: Early experiments with GPT-4," arXiv preprint arXiv:2303.12712 (2023). Even though these findings are still disputed, it seems that the continued advancement of these models will inevitably lead to the emergence of an AGI* Artificial General Intelligence (AGI) refers to a form of AI with the ability to understand, learn, and apply intelligence across a wide range of tasks at a level comparable to human cognition. without the need for an embodied existence.

Finally comes another difference that Tagliazucchi highlights as fundamental: the human brain and AI models process information in fundamentally different ways. Specifically, the brain relies on globally recurrent interactions among specialized modules for conscious information processing.* Stanislas Dehaene, Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts (New York: Penguin, 2014). This involves massively interconnected neurons that allow information to be distributed to specialized cortical regions, which is crucial for consciousness and attention. In contrast, AI systems lack significant recurrent connections and do not have a global workspace* In the context of cognitive psychology and neuroscience, a "workspace" refers to a theoretical construct known as the "global workspace," which posits that distributed and integrated neural activity allows diverse information to be broadcast across the brain's cortical regions, facilitating conscious access, processing, and integration of information. or specialized processing modules akin to those of the human brain. In practice, AI progress has not been delayed by the absence of a global workspace, since consciousness in humans acts as a bottleneck that filters most sensory information and limits the capacity for multitasking. Rather, the pursuit of efficient AI often focuses on parallel processing, which stands in contrast to the serial processing of conscious experience.

OK. How does all this connect to music? And further, how does it relate to the artistic outcomes of the project?

As I mentioned in the introduction, the discussion surrounding each piece in the artistic result is framed within one of the three paradigms of human cognition, along with a fourth approach related to abstraction, which intersects the symbolic, connectionist, and embodied paradigms. Moreover, some of the tools I use for the composition of these works come from AI and machine learning, and can in some ways be seen as digital counterparts to certain aspects of human cognition. I will delve further into this last aspect in due time, and hopefully, the relevance of the concepts discussed earlier will become clearer in the chapters that follow.