In Conversation with Maria Arusiag Saatgian



Maria Arusiag Saatgian is a computer science student and an avid language learner. She is interested in computational biology and computational linguistics and interacts with fifty AIs every day (more on this below). She writes poetry and listens to international rap in her spare time. Maria can play sixteen Wordles and has a 200-day Duolingo streak.

Let’s have a conversation!

Facilitated by Emma Hwang, SHIFT* Creative Director, on February 4, 2023



The graphics above were generated using DALL-E, an image-generation AI. The engine was prompted with the following phrases in the style of digital monochrome risograph: (1) A cyborg and person in love, (2) What love looks like on the inside.

This piece continues a conversation from SHIFT11: Fantasy Deconstructed, where Emma questions whether AI is capable of love in her article I Love You: Say It Back. Emma first approached Maria during her writing process, and Maria’s insights helped shape her line of inquiry.



Q: What goes into the process of developing an AI system? Your expertise in computer science has exposed you to many different systems—publicly available or in development.

First of all, let me explain the ‘black box’ analogy. Essentially, it’s an interactive system where you know what you’re inputting and what you’re getting out of it, but you don’t know what’s actually going on inside. You might have some sort of idea about what’s going on inside, but how many people know what happens when you send a text, post a photo, or go on Google Maps? How about when Google Maps knows exactly when you need to turn? Even as someone who studies computer science, I find there’s still so much to learn.
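
To make the analogy concrete, here is a minimal sketch in Python. The spam filter, the library choice (scikit-learn), and the data are all illustrative assumptions rather than anything from the conversation; the point is simply that we hand the system inputs and read off outputs without ever inspecting what it learned inside.

```python
# A minimal "black box": we see inputs and outputs, but never
# need to look at what the model learned internally.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Input: a handful of labeled example messages.
messages = ["win a free prize now", "free money click here",
            "lunch at noon tomorrow?", "see you at the meeting"]
labels = ["spam", "spam", "ham", "ham"]

# The pipeline is our black box: text goes in, a label comes out.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["claim your free prize"]))   # -> ['spam']
print(model.predict(["meeting moved to noon"]))   # -> ['ham']
```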

When you first learn about AI, it can be super confusing. It’s so unlike the technology that we’re traditionally used to. Most basic technology—like a knife or a pencil—has a human purpose or function. AI is different.

An AI learns to solve problems, almost always through machine learning.

According to one author, Mo Gawdat, we interact with around fifty AIs every day. The Instagram algorithm is an AI. You are feeding it information every day, and the system shows you something by learning from your interactions. Any time you have recommended searches—that’s AI too. In science fiction, AI is always depicted as a humanoid robot that will take over the world and is far more intelligent than humans in general. This isn’t how AI is taking over; it’s more of an increasing level of prevalence.


“This isn’t how AI is taking over; it’s more of an increasing level of prevalence.”
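
As a toy illustration of that feedback loop (pure invention, not any platform’s actual algorithm), here is a sketch of a feed that “learns” from taps and then ranks what to show next:

```python
from collections import Counter

# Toy sketch of a feed learning from interactions: every tap on a
# post nudges the system toward that post's topics. (Illustrative
# only; not any real platform's recommendation algorithm.)

posts = {
    "p1": {"cats", "memes"},
    "p2": {"cooking", "vegan"},
    "p3": {"cats", "cooking"},
    "p4": {"travel", "photography"},
}

taps = ["p1", "p1", "p3"]  # the user's interaction history

# Learn: count how often each topic was engaged with.
topic_weights = Counter(topic for p in taps for topic in posts[p])

# Recommend: rank unseen posts by learned topic affinity.
def score(post_id):
    return sum(topic_weights[topic] for topic in posts[post_id])

unseen = [p for p in posts if p not in taps]
print(sorted(unseen, key=score, reverse=True))  # -> ['p2', 'p4']
```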



Q: In her book Atlas of AI, Kate Crawford claims that artificial intelligence is neither artificial nor intelligent. Would you agree with this statement?

I would agree with half and disagree with the other. In terms of AI’s artificiality, it’s very much a real reflection of whatever you feed it. Sometimes when I’m learning languages, I imagine that I’m an AI. When you teach an AI a language, you feed it books, translations, documents, and other sources until it’s able to break down what’s going on: the different parts and how they operate. We feed it movies so that it understands different emotional responses and expressions, and music and lyrics so that it can grasp more figurative language. Exposing the AI to as many contexts as possible allows it to perfect its understanding of each word and its uses. In sum, the way that AI learns is not really artificial—it’s how we learn as humans.
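
A toy version of that process might look like the following sketch: a bigram model that learns, from a made-up two-sentence corpus, which word tends to follow which. (Real language systems use vastly more data and far richer models; this only shows the learning-from-exposure idea.)

```python
from collections import Counter, defaultdict

# Feed the model text until it learns the patterns: here, simple
# bigram counts over a tiny invented corpus.
corpus = "the cat sat on the mat . the cat ate the fish ."
words = corpus.split()

follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

# The model has "learned" that after "the", "cat" is most likely.
print(follows["the"].most_common())  # [('cat', 2), ('mat', 1), ('fish', 1)]
```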

As for its intelligence, that raises the question of whether we’re intelligent. I would argue that it is. We talk about neural networks and habits—for humans, these are derived from pathways in your brain that are activated more often. AI builds analogous ‘neural networks’—once it learns a function, it gets better and better at it and doesn’t really forget.
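
As a rough illustration of that strengthening, here is a sketch of a single artificial neuron learning the logical OR function. The setup is invented for illustration, but it shows the mechanism: each pass over the data nudges the connection weights, and repetition makes the learned behavior stick.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])          # target: logical OR

w, b, lr = np.zeros(2), 0.0, 0.1    # weights, bias, learning rate

for _ in range(10):                 # repeated exposure to the same data
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += lr * (target - pred) * xi   # strengthen/weaken connections
        b += lr * (target - pred)

print(w, b)                                  # learned "pathway" strengths
print([int(w @ xi + b > 0) for xi in X])     # -> [0, 1, 1, 1]
```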


Q: Would you consider AI’s intelligence greater than that of humans? If you’re feeding the system all of these books and media, and it can’t forget, wouldn’t this make it smarter than any human alive?

Yes, in this way you could consider an AI much smarter, but even in all its capacity, it’s still limited to the same ‘plane’ as humans. In regard to the human experience, we’re limited to our innate human-ness—and I would argue that this limits what we can create. When we dream of a face we’ve never seen before, that’s not actually a new face. It’s always something that we’ve seen and learned, perhaps not a face that we can consciously recognize, but we are incapable of constructing a truly new face. We understand the world by contextualizing it through our own lived experiences.


“We’re limited to our innate human-ness—and I would argue that this limits what we can create.”



If a single human had all the knowledge of every book in the world, they would still only be capable of solving human problems in human societies, shaped by human needs, values, and worldviews. In the same way, we create AI to fit into our contemporary society through our perceptions of the world. We are bound by this human-ness, and so is the AI.


Q: In what ways do you think AI contributes to human progress (this can be defined in many ways, but I’ll let you interpret progress for yourself), and in what ways do you think it could pose problems or create potential conflict?

Building off our previous questions, AI’s goodness would be a human-like goodness, amplified by the knowledge of thousands and millions. In the same way, it can reflect humanity’s darker side on a large scale. I do believe that humanity will benefit tremendously from AI, but in order to understand how this will happen and to regulate it, we have to develop a better understanding of ourselves, through fields such as psychology and other sciences. We’re making something based on ourselves—we know how we learn and we know how we know things, but we don’t know everything—and that’s why we don’t know everything about AI.

If we were teaching an alien everything there was to know about humanity, we would teach it about society and how people and things ought to be treated—would it not want to be treated well itself? We often use and abuse technology—it has a defined purpose and we use it to meet our needs. When you have a technology that is so understanding of everything, it’s different—and it has a different role in society than a pen or a knife. In a way, AI is free to draw its own conclusions, and in doing so, it can help us solve complicated problems and determine best-case scenarios in difficult situations. It’s honestly such a brain, for lack of a better word.


“[AI] can reflect humanity’s darker side on a large scale.”



Here is where understanding ethics is really important. AI is a lot less black and white than many tools or other objects we encounter and use in our everyday lives. In a way, AI is active—it’s so natural because it’s fed raw, human truths—and this means biases. Gawdat makes the point that humanity doesn’t reflect truth in media: in the media we reflect bad things, and on social media we reflect fakeness. If AI is to consume what humans produce through media, it won’t get a very good idea of humanity itself. It’s important to remember that companies develop AI for their own objectives and purposes (often capitalistic in nature); this adds yet another level of bias and affects what information is being fed to these systems. These systems observe and learn from our existing biases—they don’t fix these biases, they emulate them.

You see this so clearly in systems such as Google Translate. If you take a phrase from a gender-neutral language, one that means ‘they drive,’ and translate it into English, what comes out the other end, more likely than not, is ‘he drives.’ The AI has been exposed to more contexts of driving that involve a man, and this is not neutral. This isn’t to say that Google Translate is sexist—it looks at the set of outcomes and tells you what it perceives to be more likely.
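
Here is a schematic sketch of that statistical mechanism, using a made-up miniature corpus (actual translation systems are far more sophisticated): faced with a genderless source pronoun, the system simply falls back on the rendering it has seen most often.

```python
from collections import Counter

# Invented miniature "training corpus" of English sentences.
counts = Counter({"he drives": 3, "she drives": 1})

def translate_neutral(verb):
    # The source pronoun is gender-neutral, so pick the most
    # frequent English rendering seen in the training data.
    candidates = {s: n for s, n in counts.items() if s.endswith(verb)}
    return max(candidates, key=candidates.get)

print(translate_neutral("drives"))  # -> 'he drives': frequency, not neutrality
```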


Q: I know we’ve alluded to sentience earlier—do you think that machines have that capacity to actually experience love or fall in love with a human, as opposed to just simulating love or responding to cues as an interface?

Well…do humans themselves understand love? If we are to create an AI that can love, we have to know that that function is, in fact, learnable. A lot of us have the capacity to love, but do we know how it works or do we just have a lot of socially constructed ideas? Some experts will claim that it’s fundamentally biological while others will say that it’s all social—I’d like to think most people agree it’s both, but we don’t know the ratio between the two.


“If we are to create an AI that can love, we have to know that that function is, in fact, learnable.”



Can an AI feel biological attachment or experience a rush of dopamine? No, not right now. Can we replicate this attachment in other ways and teach the system what we know about love? Definitely. We can feed these systems everything we know about love from different cultures, and they can learn the concept of love, come to conclusions, know how to respond, etc. So much of love and attachment is habit—and this is exactly how machine learning works.







Based in Toronto, The SHIFT* Collective is a student-run publishing collective that aims to disentangle the practices of art, architecture, and design from the biases, exclusivity, and elitism that have historically shaped their canon.  

GET IN TOUCH WITH US

shiftmagtoronto@gmail.com
@scaffoldjournal