The AImnesia project focuses on the concept of hybrid human memory that can be augmented, influenced, and modified by AI. Echoes of this seemingly fantastic, futuristic concept can already be found in the way social media algorithms learn our online behavior, process data, and recommend relevant content; in new cameras with built-in AI; and in advances in brain-computer interfaces. Today, artificial neural networks play an unseen but crucial role in our digital ecosystem by defining and recommending what should be seen, listened to, and read. We are beginning to explore a new, technological aspect of our nature by using machine learning algorithms to push the boundaries of our understanding of what human consciousness is. Through the experience of an absurd video installation about the creation of a hybrid memory, made using different ML algorithms, Chris Kore intends to prompt a discussion of evolving algorithms that can be trained on our online photos and can fill in memory gaps by creating fake memories plausible enough to be perceived as real.
Chris Kore is a digital dreamer, multidisciplinary artist, and designer interested in exploring an ever-changing mediated reality and its influence on human perception. Her works touch on the philosophical and psychological sides of our technological nature and question the expanding development of AI, mixed realities, and digital traces.
Ian Goodfellow: They will still have the same meanings. It has always been possible to tell false stories from true stories, even if they were statistically indistinguishable.
Gray Scott: With this question, we are getting to the deeper end of a philosophical swimming pool. This is the territory of Timothy Leary, Terence McKenna, and Alan Watts: the question of what is real, which philosophers have been attempting to answer since the beginning of time. We assume that what we are experiencing is real, and that is a major assumption for an animal that can only see a very small sliver of the light spectrum; that is just one example. So, we assume that what we are experiencing is a true reality, when in fact we have very little evidence. Now, with quantum computers and quantum physics scratching at the door of science, there is something in nature and the cosmos that has yet to reveal itself to us. I think it has been here since the very beginning. Some people call it god or divine intelligence, but I would not describe it that way; it is no more miraculous than the code in a seed that turns into a tree. It is just the way the universe is structured, and we are not yet capable of fully seeing it. My working theory is that the future is the portal inward and that technology is a mirror we will find inside that portal. When we develop a true and holistic understanding of what our reality is, we might discover that this is the illusion and that the next stage is simply a higher level of reality. This is hard to get across to people, because it is like telling a fish about water. How would it know? Suddenly, one fish becomes enlightened and says, "Oh my God, we are in the water!" The other fish will think he is crazy.
Gene Kogan: This is a difficult one. What is real? That is a question for philosophers. A more pragmatic perspective is to consider events that actually happened as "real." It will become extremely hard to distinguish "real" from "fake" content on the web. We sometimes imagine that only people who are inexperienced with the Internet can be easily tricked by fake news. I think that, in the future, people who are very familiar with the Internet will also be easily tricked, because fake content is becoming incredibly realistic. With current "deepfakes," it is still possible to determine that content is computer-generated, but in another year fake content may be indistinguishable from real content, especially in digital format. For example, GPT-2, a text generator developed by OpenAI, produces samples of machine-written text that are incredibly coherent and realistic; the text sounds and looks as if it were written by humans. It can generate articles, comments, and summaries, and systems like Google Duplex can already call a restaurant to order pizza or book an appointment at a hair salon. We are definitely entering a period in which non-human entities will be able to produce realistic sounds, texts, and images, which will make the process of distinguishing them far more difficult. I do not think that these developments change the concepts of what is real and what is fake; they simply mean that fake content will look incredibly close to real.
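Kogan's point about machine-written text can be illustrated without a full transformer. Even a toy Markov chain, which only learns which words tend to follow which short contexts, produces text that statistically mimics its training data. The sketch below is a deliberately minimal illustration of this sample-from-learned-statistics idea (the miniature corpus and function names are invented for illustration; GPT-2 itself is a vastly larger transformer model, not a Markov chain):

```python
import random
from collections import defaultdict

def train_markov(text, order=2):
    """Record which word tends to follow each (order)-word context."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        model[context].append(words[i + order])
    return model

def generate(model, length=20, seed=0):
    """Sample a plausible-looking word sequence from the learned statistics."""
    rng = random.Random(seed)
    context = rng.choice(list(model.keys()))
    out = list(context)
    for _ in range(length):
        followers = model.get(tuple(out[-len(context):]))
        if not followers:
            break  # reached a context never seen in training
        out.append(rng.choice(followers))
    return " ".join(out)

# Tiny invented corpus, standing in for web-scale training text.
corpus = ("the model writes text the model reads text "
          "the reader cannot tell the model wrote the text")
model = train_markov(corpus)
print(generate(model))
```

Every word the generator emits comes from the training data, which is why the output superficially resembles its source; large language models scale this same principle up by many orders of magnitude.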
Tomo Kihara: This is a really good question. I would say that it will matter, but the boundary between fake and real will become fuzzier than ever. For the past few centuries, humans have been putting effort into making synthetic things that resemble original, real objects. Some attempts, such as creating synthetic diamonds, were successful to the point where some people even consider the fake diamonds more beautiful than natural ones. Nevertheless, people still value real diamonds. The value of the "real" will remain, but I think that people will also start to value fakes in a new way. "The fakes are more real than the original because they strive to be real" is a quote from Fake Tales in Japan that captures this notion. The book tells the story of how a master art forger can be more valuable than the artist. The notion here is that the act of creating an original thing is easy, while the act of replicating the original sometimes requires more effort and skill.
Gray Scott: I think that is what will happen to us, but the industry for this has not yet developed. However, companies are circling your idea right now; a few are working on this, but none of them has done what you are referring to yet. I understand exactly what you are describing. I would say that, eventually, a digital contact lens or a brain "Neural Lace," something able to record and store in real time, like live vlogging, will let us look back at any event with clarity. We know that memory is not accurate; consider court cases, in which many witnesses provide incorrect accounts because memory is suggestible. If the prosecutor asks the witness whether he or she saw a man in a blue shirt, the witness suddenly thinks of blue shirts, even though the man may have been wearing a red shirt. Thus, I think we are moving towards an age of complexity in terms of understanding what reality is. You are going to have all these fractured realities (virtual, augmented, and natural) that will form a techno-reality between which we move back and forth. Kevin Kelly calls it the "Mirrorworld." Imagine having a pair of digital contact lenses: an AI is constantly recording what you are seeing at every moment in the real world, but it is also recording, in another layer, the augmented version of it and, in yet another, the virtual layer you are experiencing at the moment. You would have three different reality files to look at, which is likely more than the average person can handle. That is where we are moving. I do think that there will be a memory- and dream-recording industry. I do not know whether we are going to use GANs to paint an impression of those things or something that will produce a one-to-one image of what we are seeing and thinking.
However, in the beginning it will be GANs, because using them will be much, much easier, as we already have these datasets on the Internet. When you have a vision of a cat, an AI can pull it up in two seconds. And, in fact, the results may not be one-to-one. If it has to be a black cat, we have millions and millions of images of black cats; an AI just needs to match what the room is like, and suddenly there is the cat, exactly how it looked in your memory, even though it is not actually your memory. It is an extrapolation and interpretation of what you are seeing, but, because it is so close to one-to-one, you probably will not be able to tell the difference. Concern about the use of GANs to fill gaps in memories is urgent, because I fear that politicians, corporations, and oligarchs may start (and I think they are already starting) to tamper with the memories of cultures and peoples. How would we be able to stop them? How would we even know? I have written before about the simulation singularity, the point at which a simulation becomes so realistic that there is no way for us to tell the difference; it can be reached through virtual or augmented simulations. Today, we are in the cartoon phase of augmented reality, with jellybeans and dancing dragons, which is an infantile stage. I am talking about the moment when, in winter, you can overlay summer with the exact same trees and not be able to tell the difference. So, in your memory, you are suddenly experiencing summer while the person next to you is experiencing winter. The continuity of consciousness is what we are talking about. I think that Google is going to buy a few of the EEG companies like Muse, which produces EEG headbands that people use to meditate and to control clouds and waves on a screen through brain waves.
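Scott's idea of an AI "matching what the room is like" and retrieving a plausible filler can be sketched, in a deliberately simplified form, as patch retrieval: choose, from a library of reference patches, the one whose context best matches the surroundings of the gap. A real GAN learns such a mapping implicitly from data; the toy below just performs an explicit nearest-neighbor search over a handmade library, with a 1-D signal standing in for an image region (all values and names are hypothetical):

```python
def distance(a, b):
    """Sum of squared differences between two equal-length patches."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def fill_gap(signal, gap_start, gap_len, library):
    """Overwrite signal[gap_start:gap_start+gap_len] using the library entry
    whose leading context best matches the samples just before the gap."""
    left = signal[max(0, gap_start - gap_len):gap_start]
    best = min(library, key=lambda p: distance(left, p[:len(left)]))
    filled = list(signal)
    filled[gap_start:gap_start + gap_len] = best[len(left):len(left) + gap_len]
    return filled

memory = [1, 2, 3, 4, 0, 0, 7, 8]  # zeros mark a forgotten stretch
# Each library entry is a context followed by its continuation.
library = [[1, 2, 5, 5], [3, 4, 5, 6], [9, 9, 9, 9]]
result = fill_gap(memory, 4, 2, library)
print(result)  # -> [1, 2, 3, 4, 5, 6, 7, 8]
```

The filled-in values come entirely from the library, not from the original signal, which mirrors Scott's warning: the reconstruction can be seamless while being someone else's data.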
Guillen Fernandez: There are experiments in mice in which researchers have implanted artificial memories using biological techniques; therefore, a hybrid memory is possible. A mouse can learn to be afraid of a particular room despite never having been in it before. However, such studies used molecular methods, not computational ones, which work by artificially strengthening synaptic connections in the networks that represent a fear memory. Such experiments were first done by my colleague Susumu Tonegawa.
Gene Kogan: I think that people have been pursuing this since as early as the 1960s, with inventors such as Douglas Engelbart, who was interested in devices that would help people recall things. Devices like that are definitely possible; the Memex, for example. I think that our interfaces will become increasingly adept at retrieving information for us, which I can imagine would be very useful. Memories are very fuzzy and, on occasion, may not reflect actual events.
Tivon Rice: There are already practical echoes of this happening in the way people provide information to the media. If we are talking about actually implanting new or lost memories in a brain, that is a science-fiction scenario. However, artists who speculate about how machine intelligence works are creating images and texts that creatively imagine how humans will respond to augmented memories, which is actually going to happen in the near future. The way Google Photos curates images is an early example of how algorithms intervene in photographic memories. Another example is the Google Clips camera, which rolls constantly, capturing 30 frames a second, with its internal AI determining what to leave out based on the photos taken previously. The AI learns how often the user takes pictures of faces, architecture, or animals, and, based on that data, gives the user a curated snapshot of his or her day. It decides which specific moments of the user's life should be memorable. Currently, this is probably the nearest example of AI creating memories for us and deciding what we should remember and which moments matter most.
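The curation Rice describes can be imagined, very roughly, as scoring each captured frame against learned per-user preferences and keeping only the top-scoring moments. The sketch below is a hypothetical illustration of that selection step, not Google Clips' actual algorithm; the feature labels, weights, and function names are all invented:

```python
def score_frame(frame, preferences):
    """Weight each detected feature (e.g. 'face', 'pet') by a learned preference."""
    return sum(preferences.get(feature, 0.0) for feature in frame["features"])

def curate(frames, preferences, keep=3):
    """Return the highest-scoring frames, preserving their original time order."""
    ranked = sorted(frames, key=lambda f: score_frame(f, preferences), reverse=True)
    chosen = {id(f) for f in ranked[:keep]}
    return [f for f in frames if id(f) in chosen]

# Invented preference weights, standing in for what the camera would
# infer from the user's past photo-taking habits.
preferences = {"face": 1.0, "pet": 0.8, "building": 0.2}
frames = [
    {"t": 0, "features": ["building"]},
    {"t": 1, "features": ["face", "pet"]},
    {"t": 2, "features": []},
    {"t": 3, "features": ["face"]},
    {"t": 4, "features": ["pet"]},
]
highlights = curate(frames, preferences, keep=2)
print([f["t"] for f in highlights])  # -> [1, 3]
```

Everything outside the `keep` budget is silently discarded, which is exactly the editorial power Rice is pointing at: the device, not the user, decides which moments survive as memories.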
Tomo Kihara: Software has been augmenting our perception for the last ten years. At present, we have a notion of a hybrid of human and machine; in a way, our smartphones have become our organs. I have recently experimented with Google Photos, and it is quite disturbing: it sometimes knows more about my memory than I do. It reminds me that I was at an event five years ago, and I cannot recall what happened there. I started using the platform because I was curious. However, as it stores my photos, it knows who I am and identifies all of the people in my life. It knows when I met someone, as it adds names to photos. It uses facial recognition, location tracking, and other tools to evolve rapidly. Google Photos is more accurate than my memory, which is somewhat frightening.
Joel Simon: In a sense, having these massive datasets on the Internet is already a form of collective digital memory. Will we develop tools that can create alternative memories based on our data? It would be interesting to train a model on all of the photos on my phone. The question then, I imagine, is whether what it generates would also be my memories.