Dornsife Dialogues

Automation, AI, and our Human Future

USC Dornsife College of Letters, Arts and Sciences


We hear a lot about how automation will reshape human life in the age of artificial intelligence. From revolutionary new techniques in health care to job losses in creative industries, our world may look radically different in the near future. Alongside these great transformations comes a great amount of anxiety about our place in this world.

We usually pay the most attention to the impact of AI on work, but what about political and social life? What does intellectual property look like when AI can generate new media? How will diverse communities and groups face the urgent need to reevaluate our relations with the tools we have created? How might governments and other political actors respond?

Join our live discussion for the answers to these questions – and yours – from our panel of experts.

With:
-Ziyaad Bhorat, Visiting Scholar at the Center on Science, Technology, and Public Life, USC Dornsife; Associate Director, USC Center on Generative AI and Society

-Jennifer Petersen, Associate Professor of Communication; Director of the Graduate Certificate in Science and Technology Studies, USC Annenberg

Moderated by:
Andrew Lakoff, Professor of Sociology and Anthropology; Director of the Center on Science, Technology, and Public Life, USC Dornsife

Learn more about the Dornsife Dialogues and sign up for the next live event here.

00:00:03:12 - 00:00:28:11
Speaker 1
Welcome back to Dornsife Dialogues. It has been less than a year since ChatGPT was released to the general public. While this is by no means the only example of increasingly advanced artificial intelligence capabilities, it has been the most visible one to many people, and it has already profoundly changed the conversation across a number of industries. When it comes to A.I., minds often turn first to questions about the future of work.

00:00:28:22 - 00:00:52:11
Speaker 1
We are seeing how some industries are beginning to use it to augment human productivity, while others are preparing to replace aspects of the traditional workforce. And while the future of work in an advanced A.I. landscape is an important topic, these are not the only important questions. We also need to recognize and begin to anticipate the ways in which we might transform our political and social spheres.

00:00:53:05 - 00:01:23:18
Speaker 1
How will it reshape power dynamics, privacy norms, and the ways we define intellectual property? And we will need to be thinking very hard about how A.I. developed for one sector might influence others. For example, what are the cybersecurity and national security risks of releasing capable A.I. tools into a sphere where they may be used in unintended ways? As always, we have a terrific panel of experts with us here today to help us unpack these questions and offer some perspective on what's around the corner.

00:01:24:10 - 00:01:50:02
Speaker 1
Our discussion will be moderated by professor of sociology and anthropology Andrew Lakoff, who serves as director of the USC Dornsife Center on Science, Technology and Public Life. Professor Lakoff's research explores globalization processes, the history of the human sciences, contemporary social theory, and risk society. He has published dozens of journal articles and chapters, and he has authored or coauthored five books, with another one on the way.

00:01:50:14 - 00:01:58:08
Speaker 1
So let's get to it. I'll hand things over to Professor Lakoff, who will introduce our panelists. And thank you, as always, for tuning in.

00:02:03:05 - 00:02:33:04
Speaker 2
Thanks to Dean Miller for that introduction. So I serve as director of the Center on Science, Technology and Public Life, which serves as a platform for research, training and public engagement on pressing questions at the intersection of science, technology and society. And I'm pleased to welcome you today to our panel on automation, AI and our human future. When I think about technological innovations that have generated excitement and anxiety over the fate of human life in the last century, my mind turns to examples like the splitting of the atom.

00:02:33:08 - 00:03:05:19
Speaker 2
Human spaceflight, in vitro fertilization, or genetic engineering. The current discussion about generative AI feels like one of those moments. There's a sense of wonder about new capacities that is accompanied by deep unease about the implications of these capacities. It's a conversation about software engineering, but it's simultaneously a conversation about what makes us human and whether our efforts to achieve technical mastery may have unanticipated and unwelcome consequences.

00:03:06:23 - 00:03:32:04
Speaker 2
And it seems to have implications for fields ranging from entertainment to the arts to education to journalism, all the way to the future of warfare. Our guests today for the panel are perfect interlocutors for helping us to reflect on these questions. Ziyaad Bhorat is a South African political theorist whose work focuses on automation and AI, global digital governance and democratic politics.

00:03:32:20 - 00:03:58:12
Speaker 2
He's the associate director of the new USC Center on Generative AI and Society, as well as a fellow at the Mozilla Foundation's Responsible Computing Challenge program. He holds an MBA from Oxford and a Ph.D. in political science from UCLA, and he was previously a USC Berggruen fellow at the Center on Science, Technology and Public Life. Jennifer Petersen is an associate professor of communication at USC.

00:03:59:03 - 00:04:24:22
Speaker 2
Her most recent book, How Machines Came to Speak: Media Technologies and Freedom of Speech, shows how changes in media technologies, from silent film to computer code, have transformed the legal boundaries of the speech or expression covered under the First Amendment. Her current research focuses on the history of conceptions of agency and intelligence in artificial intelligence, and the implications of AI for legal constructions of personhood.

00:04:26:10 - 00:04:56:12
Speaker 2
I'll kick off the panel with a few questions for Ziyaad and Jennifer, and then we'll have a good amount of time to address queries from audience members, which you can submit using the Q&A function. So let me start with a couple of very basic questions. What makes a generative AI tool such as ChatGPT distinct from prior iterations of artificial intelligence, and why has its release into the world over the last ten months or so created such a stir?

00:04:57:03 - 00:04:58:10
Speaker 2
Ziyaad, why don't you start?

00:04:59:17 - 00:05:24:02
Speaker 3
Thank you so much, Andy, for that introduction, and it's great to be here and to have folks plugged in from all over the world on this seminar. It's a good question. I want to start off by saying the pandemic really changed the way we follow global conversations. I feel like we are attuned, in a sense, to developments on a global scale, unlike we've ever been before.

00:05:24:02 - 00:05:43:07
Speaker 3
And so it's only natural for me that A.I. and new developments in AI, like generative AI and like ChatGPT, have come to the fore in this most prominent way. Folks have been working on this for a long time, but I think there are two things that stand out in addition to that. One is the scale of deployment. Suddenly you could find these tools everywhere.

00:05:43:13 - 00:06:06:01
Speaker 3
They have these radical transformative capabilities. And then secondly, they were able to tap into a basic use case that you could tap into at home. Right? This is the difference between AI and something like, let's say, CRISPR technologies, or the genetic engineering that formed the basis of the mRNA vaccine for COVID, for example. You know, we knew it was a big thing.

00:06:06:05 - 00:06:27:10
Speaker 3
We knew it was a big deal, and indeed it was. And folks have been chatting about it for some time. But generative AI, ChatGPT, OpenAI: all of these conversations actually stem from the fact that I can log in to OpenAI's system and start playing around in ChatGPT and do incredibly amazing things that I wasn't able to do previously, in the comfort of my own home.

00:06:27:10 - 00:06:51:08
Speaker 3
So I think it's that usability that we have, global usability, and the ability to follow that conversation, that really has sparked this increased interest from all stakeholders, aside from, obviously, the fact that it's a very radical and novel technology with a new use case for us. And actually, to speak to your first question,

00:06:51:08 - 00:07:05:20
Speaker 3
you know, the nuts and bolts of generative AI, I actually want to hear a little bit from Jen, to see what you might have to add on that, actually.

00:07:05:20 - 00:07:35:18
Speaker 1
Sure, to answer Andy's question, or prompt, about why generative AI is provoking so much conversation: I think there are a couple of things that are key to this. The first of these has to do with machine learning, and generative AI as a subcategory or a particular application of machine learning. And some of it has to do with the type of activities that generative AI does, you know: expression, conversation and creation.

00:07:37:09 - 00:08:01:21
Speaker 1
But first, to think about machine learning. So machine learning is a little bit different from software or computer programming, the way we have used computers in the past to do things, in that it's not programmed in the same way. Machine learning is not a process by which a programmer sits down and writes a set of instructions that tells the computer exactly what to do.

00:08:01:22 - 00:08:23:16
Speaker 1
In machine learning, the process is less defined in advance by a protocol. The outcome is defined in advance by a programmer, but the process itself isn't. It is a case where a lot of what happens, happens at the computational level. So think about something like, if you want the computer to recognize what an image of a cat is: this is kind of a famous example.

00:08:23:16 - 00:08:38:20
Speaker 1
It's like recognizing cats because the Internet is made up of cats, and we need to be able to know what is a cat and what isn't a cat. You could try to create a program that gives a really explicit set of instructions on how to recognize a cat. And we tried to do that for a long time, and it didn't work very well.

00:08:39:04 - 00:09:09:22
Speaker 1
Machine learning is a really different type of process, by which programmers do define the outcome, recognizing a cat; select and provide the data; and maybe even give certain parameters on other types of selection or decision-making processes. But then the computers start, you know, trying to match, to find correlations between images that they know are cats, that the training sets identify as cats, and new images.

00:09:09:22 - 00:09:30:14
Speaker 1
And there's a training there, again: people say, yes, you've got it right, this is a cat, or no, you got it wrong. And so this type of machine learning, its creation of a model, is something where we don't really have an author or a creator. Right? It's hard to say who did this.

00:09:31:14 - 00:09:49:02
Speaker 1
Is it the work of the computer programmers? Is it the work of the computers themselves? Does it have to do with the selection of the dataset? So you have these kinds of, I think, anxieties over control, over just what is going on. And a lot of times when we talk about transparency or black boxes, this is the set of concerns.

00:09:49:12 - 00:10:16:00
Speaker 1
We don't really know quite what the rationale or the reasons are for how the computer recognizes the cat in the end. And to some extent, you know, generative AI works in a similar way. And I think, in addition to those concerns about process and who is the origin or who's in control, when we have stuff like generative AI, it also has to do with the outputs, right?

00:10:16:06 - 00:10:51:17
Speaker 1
We're looking at examples of conversation, of art, of, you know, use of language. These are all types of things that we tend to think of as being uniquely human, as markers of a particular type of human intelligence, if not pinnacles of human intelligence. So I think that generative AI has raised some really big questions about how we think about agency. What is creativity, right?

00:10:51:18 - 00:11:17:22
Speaker 1
If we tend to think of creativity as coming from a unique person and their experiences, and we find out that large language models, which really, at base, what they're doing is predicting the next word in a sentence, right? These are trained on lots and lots of sentences, and what the model is doing in any case is responding to a prompt by providing the most likely next word in a sentence.

00:11:17:22 - 00:11:40:11
Speaker 1
And really it comes down to that. And if a really sophisticated use of, you know, statistics, and also training procedures and data, can produce what we think of as art, you know, is what's going on in art not what we thought it was? And I just want to say one more thing on this, and then I'll wrap up.

00:11:40:11 - 00:12:02:02
Speaker 1
These are things that we tend to really closely associate with being human: human expression, etc. They do raise these larger questions, that Andy was alluding to in the opening, around the boundaries of the human. What do we think of, and what are we talking about, when we talk about humans? And this is not only for philosophers; this has to do with things like human rights.

00:12:02:02 - 00:12:06:04
Speaker 1
So that has really big implications for institutions and law.
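
The distinction Jennifer draws, between hand-written rules and a model learned from labeled examples, can be sketched in a few lines of Python. This is an illustrative toy, not any actual vision system: the two "features" and the tiny training set are invented for the example, and the learner is a simple nearest-centroid classifier rather than the neural networks used in practice.

```python
# Toy contrast: explicit rules vs. learning from labeled examples.
# Features (ear_pointiness, whisker_length) and labels are made up
# purely for illustration.
training = [
    ((0.9, 0.8), "cat"),
    ((0.8, 0.9), "cat"),
    ((0.2, 0.1), "not cat"),
    ((0.1, 0.3), "not cat"),
]

def rule_based(features):
    # The old approach: a programmer writes the decision logic by hand.
    ear, whisker = features
    return "cat" if ear > 0.5 and whisker > 0.5 else "not cat"

def train_centroids(examples):
    # The learning approach: we supply only the desired outcome (the
    # labels) and the data; the decision boundary falls out of the
    # examples themselves, as the average position of each label.
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (sx / counts[lbl], sy / counts[lbl])
            for lbl, (sx, sy) in sums.items()}

def learned(features, centroids):
    # Classify a new point by whichever labeled cluster it sits closest to.
    x, y = features
    return min(centroids,
               key=lambda lbl: (x - centroids[lbl][0]) ** 2
                             + (y - centroids[lbl][1]) ** 2)

centroids = train_centroids(training)
print(rule_based((0.85, 0.7)))          # cat
print(learned((0.85, 0.7), centroids))  # cat
```

Notice that nobody wrote the dividing line between "cat" and "not cat" in the learned version; it depends entirely on where the labeled examples happen to sit, which is exactly why it is hard to say who authored the resulting behavior.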

00:12:08:02 - 00:12:33:20
Speaker 2
So you've touched on a couple of points that I think can help us in understanding why there's so much discussion and attention to generative AI. And as we've seen this flurry of commentary over the last ten months in response to the release of tools like ChatGPT, it seems like there are two camps among the expert commentators in how we should think about the ethical and political implications of these tools.

00:12:34:11 - 00:13:02:12
Speaker 2
On the one side are what we might call the sirens. They're warning us of various kinds of impending doom, all the way from the increased spread of disinformation to a loss of control over our own fate in a dystopian world that's run by intelligent machines. On the other side are what we might call the sanguine camp, and they point to previous innovations in communication technology, from the invention of writing to the personal computer, that were initially greeted with great anxiety

00:13:02:12 - 00:13:14:10
Speaker 2
but that in the end led to an expansion of human creativity. I'm curious to hear where each of you stands along this spectrum, from sirens to sanguine. Ziyaad, why don't you start?

00:13:15:08 - 00:13:40:17
Speaker 3
So, Andy, I love the way that you phrased it, came up with that terminology of sanguine versus sirens. I think for me, it closely maps different approaches that we find in the literature: there's a techno-optimistic camp and a techno-pessimistic camp, and then there are various shades in between, realists, etc. And so these are determined ways of thinking about what technology will do for us.

00:13:41:12 - 00:14:07:00
Speaker 3
And they have their own traditions and their own literatures and their own political movements, so to speak. I will be honest with you: I am often seen hanging out with the sirens, but that's only because I borrow quite heavily from the phrase popularized by, though not originally coined by, Gramsci in the prison notebooks: that there's pessimism of the intellect and optimism of the will.

00:14:07:00 - 00:14:27:07
Speaker 3
I think that we really ought to be thinking very deeply about technologies that we deploy, why we build them, and why we deploy them and how we deploy them, and being critical about all of these facets and dimensions that go into these questions around technology, but at the same time be optimistic about what we can do together. Right.

00:14:27:07 - 00:14:46:10
Speaker 3
And optimistically bold. So it's not necessarily that I'm sitting in the middle; it's a general disposition to how I think about technology, which is: be critical. It's important to be critical, it's important to ask questions, right? But it's also important to take those questions into something that's constructive, and to build something. Because it is the case,

00:14:46:10 - 00:15:13:16
Speaker 3
it absolutely is the case, that we are wealthier than we ever used to be in human history. On average, our health outcomes are much improved compared with, say, for example, hundreds of years prior. I mean, there are absolute benefits on those outcome levels that we can point to and allude to, even amongst some of the things we might say are a little bit more critical.

00:15:13:16 - 00:15:21:22
Speaker 3
So that's my general approach. But I'm curious to hear from either of you, actually, where you find yourselves.

00:15:22:15 - 00:15:25:14
Speaker 2
Yeah. Jennifer: siren or sanguine?

00:15:26:21 - 00:15:54:08
Speaker 1
So as a historian, I tend a bit toward the sanguine, though I have my moments of alarm, where the sirens are going off. You know, like Ziyaad was talking about, we have a very long history of talking about these things and having strong reactions to new technologies, and it's easy to forget just how revolutionary a lot of those were. Thinking about media: photography.

00:15:54:11 - 00:16:30:09
Speaker 1
It's really hard to imagine just how disruptive photography was, and the type of really large questions, about time and temporality, art, creativity, to name only a few things, that photography brought. Likewise, radio. It's such a background, taken-for-granted, very boring technology today. It was, you know, voices from the ether, like hauntings or something,

00:16:30:09 - 00:17:02:06
Speaker 1
when it first arose. And it was something where one person, you know, one potentially autocratic political leader, could use radio to address people in different countries. This was deeply terrifying for many reasons. People thought it was going to homogenize culture and people's minds. So we've seen a lot of new technologies that seemed incredibly disruptive at the beginning, and then over time they become institutionalized.

00:17:02:06 - 00:17:48:08
Speaker 1
We develop regulations, but also sets of uses: industry develops best practices, users develop different ways of interacting with these technologies. So we see a lot of innovation on the part of the users, and social change and adaptation, which then often shifts social relations, right? We see these things not replacing or collapsing or threatening things like creativity, but rather transforming them, changing how we do things.

00:17:48:08 - 00:18:08:10
Speaker 1
I think back to learning to write papers: I'm old enough to have started writing my papers longhand and then transferring them to the computer, and then going from that to composing with the computer. And that did change the way that I wrote; it changed my process of thinking and creation. But it didn't destroy it, right?

00:18:08:12 - 00:18:32:00
Speaker 1
It merely transformed it and kind of shifted it. And I think my sporadic moments of alarm have to do with the things that are maybe unique to, or that are different about, AI. You know, we talked a little bit about what's special about generative AI. What these models are really, really good at is mimicking human communication.

00:18:32:00 - 00:18:57:13
Speaker 1
What they absolutely cannot do is think about what those words mean. They don't have good models for articulating words to the things that they refer to in the world. So they're really terrible at giving you facts, at knowing whether they're giving you facts, and "knowing" is an anthropomorphization, right? They're very bad at sorting out disinformation from information, and at thinking about causal relationships.

00:18:57:13 - 00:19:15:03
Speaker 1
And I think this is a source for at least mild alarm in thinking through how we are going to implement these technologies. I mean, we're already having these conversations in our classrooms, around how ChatGPT will be implemented.
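
Jennifer's earlier description of large language models as next-word predictors can be illustrated with a deliberately crude sketch: a bigram model over a toy corpus. Real models are neural networks trained on vastly more text and context, but the sketch shows the core point she makes here, that such a system only tracks which words tend to follow which, with no model of what any word refers to, and so no way to "know" facts.

```python
# A drastically simplified "predict the most likely next word":
# a bigram model that just counts which word follows which in a
# tiny, invented corpus.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prompt_word):
    # Return the statistically most likely continuation. The model has
    # no idea what a "cat" is; it only has co-occurrence counts.
    return follows[prompt_word].most_common(1)[0][0]

print(next_word("the"))  # cat
print(next_word("sat"))  # on
```

Scaled up enormously, continuation-by-likelihood can produce fluent prose, but nothing in the mechanism distinguishes a true sentence from a plausible-sounding false one.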

00:19:15:21 - 00:19:16:02
Speaker 3
Yeah.

00:19:17:01 - 00:19:43:12
Speaker 2
So let's turn to some specific topic areas, and sit a little bit with these pessimists of the intellect and what they're warning us about. You know, among my colleagues, there are those who are worried that professors may become obsolete. And certainly we're already seeing lots of student essays being turned in that may have been produced by ChatGPT, and that may actually be better than many of the student essays that were produced beforehand.

00:19:43:23 - 00:20:04:07
Speaker 2
And so I want to turn more specifically to the question of what these recent developments in AI might mean for the future of work. And the stakes are obviously very high here, as we can see in the current strikes in Hollywood. Do you think anxiety about automation and the replacement of human labor is a real risk this time?

00:20:05:03 - 00:20:06:01
Speaker 2
Ziyaad, why don't you start?

00:20:07:01 - 00:20:24:21
Speaker 3
Yeah, and I think the key phrase there is the one you actually used: worry, and anxiety. So I think a lot of work has been done, especially by economists, to try and map out what automation, what AI and automation, is going to do to jobs. Are we going to lose jobs? Are we going to create new jobs?

00:20:25:07 - 00:20:51:13
Speaker 3
And I think some of the more nuanced stories I've seen focus on actually who loses out, right? What is the granularity involved in who loses out? What are the conditions of work, i.e., how much will people be working, those who retain their jobs? What kind of jobs will they be employed in? How long will those jobs take them to do each day?

00:20:51:20 - 00:21:19:21
Speaker 3
Those are the kinds of nuances, I think, that will be most impacted, in fact, with the advent of these new technologies, and people are right to be concerned about this. This anxiety is not a new one, I want to say. When we look at automation and its history, and I mean way, way back: my research on automation took me to Greek antiquity, where the classical Greek philosophers were talking about the replacement of slaves by tools that could perform their own tasks.

00:21:19:21 - 00:21:40:03
Speaker 3
This is a real anxiety that is a part of human history: about the replacement of labor, but also the conditions of work that folk will face in the advent of tools that might be able to supplant or somehow even augment their work. We don't even have to go back as far as ancient Greece to look at this.

00:21:40:03 - 00:21:59:02
Speaker 3
I mean, in the 1950s, automation was such a huge topic. There was a front page of The New York Times which spoke to what we've just been talking about, this idea that automation can either free us to do more and better things, or it's going to be a harbinger of doom and we're all going to lose our jobs.

00:21:59:02 - 00:22:26:04
Speaker 3
That was the front page of The New York Times way back when, and there were congressional inquiries about automation and what it would do to labor. So some of these debates that we're having, this anxiety: it's not new, and it's real. And actually, maybe that's the word of the day, it's generative: it's an important, generative aspect of our human endeavor, of our thinking towards these technologies, for us to be able to take a step back and say, okay, how are we going to organize now?

00:22:26:04 - 00:23:06:04
Speaker 3
How do we imagine our life proceeding from here on, given the advent of such a groundbreaking and novel technology, which, however, taps into an age-old anxiety about our relationship to the tools that we employ and the labor which we undertake? So I think the anxiety is real. Anxiety also has political effects. There's a study that was done by the Oxford Martin School at the University of Oxford, which shows how actual electoral outcomes are impacted by people's attitudes towards automation, showing that in areas where there was a higher chance of routine labor being automated, folk tended to vote in certain ways

00:23:06:04 - 00:23:18:01
Speaker 3
that were statistically significant when compared with others. So I think there's also a translation effect here into political outcomes, and we might increasingly see some of that as well.

00:23:19:08 - 00:23:35:00
Speaker 2
I'm going to turn our conversation in just a moment to questions around the relation between AI and politics. But I first want to give Jennifer a chance to address this question: your sense of what AI might mean for the future of work.

00:23:35:04 - 00:24:11:12
Speaker 1
Yeah, and I've been thinking about this a little bit in relationship to media industries and intellectual property. And in that area, right now, there's not an immediate danger, for a very simple reason: works produced by generative AI are not copyrightable, and media industries really want to copyright content. So jobs are not necessarily yet in immediate danger of being replaced in that way, though I think in the future they very much will be.

00:24:12:12 - 00:24:39:04
Speaker 1
But one of the interesting things here is, well, first off, the way that the Copyright Office has really doubled down on the idea that things can only be copyrighted if they're produced by humans. They've said, you know, AI-generated works are not right now accepted. I think they will be; they're in the middle of a large study. But they kind of doubled down and said, well, we're not going to take AI; we're also not going to take photographs from, or art produced by, animals, or works produced by deities.

00:24:39:11 - 00:25:06:08
Speaker 1
So they produced a number of interesting rules around this. But one case that is really interesting, that shows how this is being used: there was a graphic artist who produced a comic book, the title is Zarya of the Dawn, and she used a text-to-image generator, machine learning, to help produce the images in it.

00:25:06:08 - 00:25:27:11
Speaker 1
And she tried to copyright it, and at first she did copyright it. And then the Copyright Office revoked the copyright, because they came to understand more how AI was used in the production of this work. And afterwards they said, okay, well, we'll copyright the text and the relationship of the text to the image, but we're not going to copyright the images.

00:25:27:18 - 00:25:49:21
Speaker 1
And so this is a little bit of a brake on industries, and they're having to think things through, which is kind of good, I think, because it's giving us time to think about how to bring these technologies into the production of culture, or content, if you want to call it that. I think that it is changing the way people work for this artist.

00:25:50:08 - 00:26:10:00
Speaker 1
She worked with a model. I don't know why she chose that; it might be something that augmented her artwork, or it might have been instead of working with an artist, so it may have been an artist displaced, to Ziyaad's point. But it certainly is shifting the crux of what people are doing, how people are engaging in creativity.

00:26:10:10 - 00:26:42:01
Speaker 1
And one thing that we are seeing in some of these media industries is people hiring folks who are creative with prompts, prompt engineering. So as we begin to incorporate these models, the types of work that people are doing are changing. You're maybe not producing content for a website or something like that, but you are working in conjunction with the model, creating prompts, and selecting generated responses and amending them.

00:26:42:09 - 00:26:59:09
Speaker 1
Now, to Ziyaad's point about how that changes the nature of work: I don't know. I don't know if it's better to be writing content for the website or working with this tool, and I imagine that does vary, and that's a site for very interesting kinds of conversation and concern, but also action.

00:27:01:06 - 00:27:25:17
Speaker 2
This area of intellectual property is also a nice site where we have to pose questions about what human originality and authorship really mean. And of course, in other areas before generative AI, those questions have come up; we might think about sampling in music, or ways in which authors have been accused of plagiarizing because they, of course, borrow themes from prior great works.

00:27:25:17 - 00:27:52:04
Speaker 2
So again, on the one hand, nothing new, but on the other hand, lots of pretty distinct, specific instantiations and possibly different applications this time. And I want to remind members of the audience that you can submit questions to our panelists via the Q&A function, and we'll be turning to those questions in just a few minutes. I want now to turn to the question of what AI might mean for our politics.

00:27:52:04 - 00:28:04:19
Speaker 2
Does A.I. open up new forms of freedom, or does it risk entrenching authoritarianism? And how might A.I. and automation change international relations? Ziyaad, do you want to take a crack at one or both of those questions?

00:28:05:20 - 00:28:34:08
Speaker 3
Huge, huge topics. Well, I'll start off by noting that if you look at some of the major indices that track democracy over time, we are really in an era of democratic backsliding. The Democracy Index that's done by The Economist: since its inception, even controlling for the pandemic, we're seeing democracy decline. And yet, interestingly enough, at the same time, and this might be a spurious correlation, but I haven't really seen enough research done to connect the two, or not connect the two,

00:28:34:08 - 00:28:56:20
Speaker 3
at the same time, and by the same token, we're in an explosive era of automation. And, you know, we increase the number of robots that we use in manufacturing each year, year on year, and it keeps increasing to new records. We are increasing our investments in automation and AI, and one wonders if these two are related.

00:28:56:20 - 00:29:36:17
Speaker 3
So I think that's an open question, and a question that we really need to delve into. Because, yes, I think there is scope, especially, for the use of technology to curtail, suppress and surveil peoples and populations to an extent that's unprecedented, right? These technologies can be absolutely misused by governments, but not only by governments: by private actors, large private actors, that have the resources, and in some cases revenues in excess of those of large countries themselves, who are able to exert influence over whole population groups.

00:29:36:17 - 00:30:04:21
Speaker 3
And so that runs the risk of de facto authoritarianism, for sure. Right? We need democratic modes of accountability. We don't find that in corporate environments, in many of these environments where AI is being developed and deployed, and yet it has global impact. And so I think that's a key concern of mine: that AI can really be used to accelerate the suppression of peoples.

00:30:05:14 - 00:30:36:00
Speaker 3
That's something we have to guard against. In terms of how it changes international relations, well, we have to look at global supply chains. The AI that you interact with through ChatGPT has an entire supply chain attached to it — a complex infrastructure. And that goes all the way down to the rare earth minerals being used to build the servers these technologies rely on to function.

00:30:36:07 - 00:31:26:07
Speaker 3
Right. But also the laborers who work in countries outside of, let's say, the Global North — who label data, who clean data, and who are part of the AI ecosystem and supply chain. And so you start to see AI takers, right: regions that cannot afford to develop these extremely expensive and complex AI tools and therefore must take models built elsewhere, and folks who are, let's say, downstream in the supply chain — who help create these models but are not adequately compensated, or work under

00:31:26:07 - 00:31:58:07
Speaker 3
conditions of labor, in a global supply chain, that change their relationship to power in ways that have traditionally not been the case. Right. So with AI we can see global movements in which some states rise to the fore and other states start to shrink back, based on their capacity and capability to deal with these new global supply chains, and on whether their people, their workers, their industries can adapt or not.

00:31:58:21 - 00:32:18:07
Speaker 3
And I think that's one key way in which AI is already starting to shift those conversations and those power imbalances — in many cases with deleterious effects on historically oppressed peoples in places such as the Global South.

00:32:18:07 - 00:32:43:19
Speaker 2
Your mention of the material infrastructures, the extractive resources, the invisible labor that underpins these computing technologies also leads me to mention a great book by one of our colleagues here at USC, Kate Crawford. It's called Atlas of AI. So for those of you who are interested in this topic, it's very much worth exploring that book and others that she's been involved in.

00:32:43:20 - 00:32:52:04
Speaker 2
And Jennifer, let me give you a chance to say a word or two about the political implications of AI from your perspective.

00:32:52:04 - 00:33:37:11
Speaker 1
I think everything that Ziyaad brought up about those unseen material infrastructures and conditions of possibility for AI is super important — understanding the way that the need for these mineral supply chains creates conditions of possibility for, or bolsters, autocratic regimes, repressive policies, and the repression of people is super important. And I would add that one of the political dangers around AI is the scale of content these tools can generate — enough to create illusions of support for political leaders.

00:33:37:21 - 00:34:04:20
Speaker 1
That is — to go back to the siren versus sanguine framing — a really alarming potentiality. It can enable political leaders to act as if they have support for a set of policies they don't. It also affects — we know from research that this affects the way people vote.

00:34:04:20 - 00:34:11:11
Speaker 1
If they think a leader has support, they're more likely to support that leader. So these are potentially dangerous political tools.

00:34:12:19 - 00:34:44:16
Speaker 2
And I'm going to turn in just a moment to some questions that we're already getting from audience members. But I want to first follow up this thread about the dangerous political implications of the new technologies and ask you both: given these potential dangers and the broad ramifications of these technological innovations, it's not surprising that there are currently increasing calls for the government to impose regulations on the development and use of generative AI.

00:34:45:11 - 00:34:57:20
Speaker 2
What are the challenges to regulating AI, and where should policymakers be most focused in targeting their efforts? Jen, let's start with you for a change.

00:34:57:20 - 00:35:22:22
Speaker 1
So one of the first things that I think is an issue in regulating AI is the number of different things that get labeled as AI. I think we need to be more specific and targeted in these conversations. If you think about everything from Netflix recommendations to ChatGPT — and that's only thinking about media and communication; there are a lot of other applications —

00:35:22:22 - 00:35:48:09
Speaker 1
it's really hard to have a general framework. So being a little more targeted is the first thing. Second, when we're talking about regulation, we should not only think about the government — there are a lot of levels of regulation. Legislatures and government agencies are engaged in regulation.

00:35:48:09 - 00:36:11:17
Speaker 1
But it's also important to remember that the industry is engaged in regulation as well — through self-regulating bodies, through trust and safety divisions at companies, through the very design of products. We need to think about all of these as forms of regulation. And I do think that there are some challenges here, though I don't know that they're unique.

00:36:11:23 - 00:36:51:19
Speaker 1
These tools are all being developed in a Silicon Valley culture whose motto has been a quite celebratory "move fast and break things," which is a bit of a problem — there are a lot of things we might not want broken. And I think this is one of the reasons, or at least the reason that people like OpenAI's Sam Altman are giving, for trying to get government involvement: a sense that these companies are caught up in needing to produce something new, make money, and get it out there faster than the next guy. The capitalist pressures and industry culture pressures

00:36:51:19 - 00:37:25:14
Speaker 1
on them to make the next new thing, without thinking about how it might be deployed, are problematic. And I think this is at least one reason — there may be others — that some industry leaders are calling for some kind of regulation: to put common constraints on everybody, so that you don't feel pressure to produce a tool that you think might have dangerous political outcomes, or bad outcomes for workers, etc.

00:37:25:14 - 00:37:31:17
Speaker 2
Ziyaad, what are your thoughts on where regulatory and legal efforts should be most focused right now?

00:37:32:10 - 00:38:20:08
Speaker 3
Yeah, it's such a huge task. And I think we are at a unique moment, with global attention focused on AI at every level, where we are capable of achieving some form of meaningful regulation. But I would say it would be a real win if we were just able, globally, to agree on certain norms around the deployment and use of AI — specifically as it relates to, let's say, nuclear warfare decision-making, anything to do with nuclear weapons, anything to do with lethal autonomous weapons, anything to do with military applications that can absolutely go haywire when we take humans out of

00:38:20:08 - 00:38:48:12
Speaker 3
the loop to an extent that is, in fact, quite dangerous. So I think norms around that might be easier for us to rally around globally, and that's definitely where we can see some meaningful movement. I will say that talking about AI the way we have been recently reminds me a lot of the way we talked, say, ten or fifteen years ago, about the advent of just "digital," right?

00:38:48:17 - 00:39:09:23
Speaker 3
"Digital" itself became this thing — this world in which we had to regulate digital, we had to understand digital, digital was everywhere. And that's the case with AI: it's going to be everywhere, across multiple sectors, multiple areas of focus. So let's focus on the security applications — the ones that can cause massive damage — or at least let's be able to focus on those.

00:39:10:03 - 00:39:30:14
Speaker 3
But I would also say education. We say we want skills development, we want to narrow the skills gap — everybody recognizes that. But how often have we seen actual educational policies at the local government level that have to do with AI — empowering our public libraries, for example, to offer training courses in AI that we are easily able to access?

00:39:30:14 - 00:39:59:02
Speaker 3
So where is the education policy in AI, and how is it reaching students and the youngest cohorts? So: education, and AI in nuclear and lethal autonomous weapons. And then, generally speaking, property is a huge deal — intellectual property, and figuring out what that means. I've often joked with folks that, especially in the U.S., IP law is probably more important than constitutional law — which is, you know, a half joke.

00:39:59:02 - 00:40:20:00
Speaker 3
But it gets at something we actually have to figure out when it comes to AI, because intellectual property is foundational to the use and deployment of these systems, and also to the global patterns of inequality that can result. So figuring out globally what we do in the space of copyright law is going to be a challenge.

00:40:20:00 - 00:40:33:02
Speaker 3
It's ongoing, and we're starting to see these cases being taken up in multiple jurisdictions — where we end up, and how we can agree across them, remains open. There are multiple other areas, but those are the things that come to the top of my mind.

00:40:34:19 - 00:41:08:06
Speaker 2
You know, your mention of questions around educational opportunity and inequality is a nice segue to one of the questions we've gotten from our audience members. Susan asks: following up from the opening siren-versus-sanguine question, can our speakers address how they imagine AI will impact, first, people of different socioeconomic backgrounds, and second, the quality of our ability to weave diverse, resilient communities?

00:41:09:00 - 00:41:27:23
Speaker 2
And she notes in closing that there is paid work, but there's also the unpaid work of care, and that would certainly be impacted by some of the implications of AI. So who would like to jump into Susan's question?

00:41:27:23 - 00:42:06:07
Speaker 1
I'll take a stab at it and then pass it to Ziyaad. First off, we have every reason to believe that AI will benefit the wealthy and not the poor. But there are a few things that complicate that, in that some of the tasks AI is best at replacing are more professionalized work — white-collar work, higher-paid work.

00:42:06:07 - 00:42:33:20
Speaker 1
We were looking at radiologists; we're asking whether college professors are going to be replaced; lawyers are worried about their jobs. So there is a bit of a complication there. And in terms of resilient and diverse communities, there's a very real concern, especially when we think about the way these models work: chatbots are predicting the most likely next word in the sentence.

00:42:34:02 - 00:43:02:18
Speaker 1
So this is based on a probabilistic model — looking at how everyone else has used these words across lots of different examples. So there is both a homogenizing force at work in what is created, and also a weighting of things toward whatever the data contains — whatever examples we fed it.

00:43:03:04 - 00:43:28:19
Speaker 1
So there is a kind of homogenizing — the opposite of diverse communities — but also, potentially, a shifting of what is produced toward existing power structures. Think about which authors we have more access to: we have more examples of white authors and male authors, which is a much larger corpus of texts.

00:43:28:19 - 00:43:39:20
Speaker 1
And you could go on from there. So I don't think these tools have a lot of promise in those areas, and indeed there are real concerns there.
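The "most likely next word" mechanism Jennifer describes can be made concrete with a toy sketch — a hand-built bigram frequency table in Python. Real chatbots use neural networks trained on vast corpora rather than a lookup table like this, but the homogenizing logic she points to is the same: whatever dominates the training text dominates the output.

```python
import random
from collections import Counter, defaultdict

# Tiny "training corpus": the model can only ever echo patterns found here.
corpus = (
    "the model predicts the next word "
    "the model reflects the data "
    "the data reflects existing texts"
).split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the single most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

def sample_next(word, rng=random):
    """Sample a continuation in proportion to its training frequency."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return rng.choices(words, weights=weights, k=1)[0]

print(most_likely_next("the"))  # whichever word most often followed "the"
```

If one group of authors supplies most of the corpus, their phrasings get the highest counts and therefore get reproduced — the weighting toward existing examples that the panelists describe.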

00:43:39:20 - 00:44:12:06
Speaker 3
I want to echo a lot of what Jen was saying in terms of the potential for inequality to be further entrenched with the advent of these technologies — especially if we continue along our current path in terms of who owns these AI systems, some of which are incredibly expensive to develop, maintain, and deploy, and therefore out of reach, even as we hear talk about democratizing AI.

00:44:12:14 - 00:44:33:17
Speaker 3
Right. Unless there is some mechanism by which we can own these systems in a meaningful manner — which is often not the case — we are always going to be at the behest of those who control these systems and can manipulate them to certain effects. And so the potential for inequality is quite great here — without, you know, my seeming too much like a siren.

00:44:34:15 - 00:45:03:08
Speaker 3
But I think there's a double-edged sword here in our ability to weave resilient communities together. On one hand, I see local community fragmentation wherever I go. I mean, when was the last time folks were really plugged into their local communities and considered themselves part of their neighborhoods to the extent that they were participating in local council meetings?

00:45:03:08 - 00:45:38:02
Speaker 3
People do that, right, to different degrees and in various places — and one of the effects of these technologies has been to fragment that. On the other hand, technology has allowed, for example, for us to be sitting here virtually across multiple jurisdictions, and to create new connections and new forms of community. So what I would like to see — the optimism of the will here — is that we use AI technologies to augment our ability to form local communities of practice and to participate in local community.

00:45:38:07 - 00:45:58:14
Speaker 3
And at the same time, it can facilitate global conversations — which it is clearly already doing. On the point of care: care really speaks to a number of issues, one of which I'll deal with here. If you look at a lot of the research — and this is not just research on AI now, and not just research on automation now —

00:45:58:14 - 00:46:34:10
Speaker 3
it's research that has looked at automation over the last hundred years or so. Traditionally, care, and work attributed to women, were predicted to be the areas in which folks would lose out the most — and that's exactly where the predictions lie today. And what we're seeing as well is that the work of care is in many cases devalued: work attributed to women in particular, who have traditionally performed the work of care, has been devalued.

00:46:34:10 - 00:46:59:16
Speaker 3
It's an interesting question, because care becomes so much more important when AI systems can't necessarily provide it. And even though there are attempts to replicate care through all sorts of robots and whatnot — to try and develop models of care — it still seems to be the case that women find themselves

00:46:59:16 - 00:47:23:19
Speaker 3
squeezed when it comes to AI and to roles of care — roles that are simultaneously extremely valuable in terms of what we value in society, though not necessarily monetarily, and at the same time devalued from the perspective of what people are actually paid.

00:47:24:23 - 00:48:01:04
Speaker 1
And if I could jump in and add to some of what Ziyaad was saying about care: one of the things that's so interesting about the history of artificial intelligence, and what we imagined it was going to do, is that AI was for a long time modeled on types of activities and reasoning that are generally coded as masculine. More recently, especially as it's been commercialized, commoditized, and rolled out, we see that a lot of what these products do is in fact feminized labor — secretarial work, right?

00:48:01:05 - 00:48:20:05
Speaker 1
First we got rid of secretaries because we didn't think that work was important. Now we're realizing maybe it is important — but we're going to have AI products that might replace it. And if we combine this with robotics, also care for the elderly and the ill, and education for the young.

00:48:20:05 - 00:48:33:07
Speaker 1
So we have this interesting shift toward thinking about artificial intelligence in terms that are gendered female as it becomes commercialized, and as we look at what jobs and labor are going to be replaced by the technology.

00:48:35:00 - 00:49:01:17
Speaker 2
So a question has come in from an anonymous audience member that's very close to my heart as a faculty member at a research university. The question turns the lens of inquiry onto the world of academia: do you have thoughts on, or anticipations of, the impact of AI and large language models on applications for positions, academic promotion processes and dossiers, research and lab work, authorship processes, etc.?

00:49:02:06 - 00:49:13:18
Speaker 2
Somebody clearly familiar with the bureaucratized intensity of academic life. Ziyaad, Jennifer — thoughts on the implications for academic life?

00:49:13:18 - 00:49:54:17
Speaker 3
I mean, I think all academics will agree there is a lot of paperwork and a lot of writing that is part of an academic career, for sure. And what's been interesting is to see how academics across different disciplines, different positions, and different stages of their careers are using these tools in everyday life. In very many cases they can assist with some of the mundane tasks that we have to perform.

00:49:54:23 - 00:50:32:00
Speaker 3
And I'll fully admit that when I'm looking for a structure for something to say, I will sometimes use ChatGPT for that — not to use the language that ChatGPT gives me, but to look at it as a kind of reflection: okay, here's a structure. And from there I'll work, if it's a task I need to perform on a time crunch, etc. So that's one model in which I'm already using chatbots and large language models to assist my thinking about drafting.

00:50:32:00 - 00:50:51:05
Speaker 3
And when I've spoken to lawyers, something similar is happening — something similar is happening with a lot of folks in terms of preparing and writing: help with the structure of things, right? Not necessarily putting down word-for-word content, although that does happen. What I would say will be very interesting — and it must be a precursor to some of these conversations —

00:50:51:05 - 00:51:26:05
Speaker 3
is that there is an interesting area of authenticity around AI that remains to be navigated. That is, we do somehow — and I can't quite pinpoint what right this relates to in a human rights framework — want to know that something has been produced by a human being, or largely by a human being. This question of not being deceived — of knowing that a paper was authentically written by a student, authentically written by the person who sent it to you — is somehow important to us.

00:51:26:05 - 00:51:47:19
Speaker 3
So I've been tracking very closely the developments in watermarking technology, and I think that will become its own industry. The same way that AI creates all these opportunities, it creates new markets for itself, and one of those markets will be authentication — verification that something was created largely or almost completely by a human being, and that this can be checked.

00:51:47:19 - 00:52:11:22
Speaker 3
We're already seeing those methods underway. And so I think there will be a balancing between those tools being increasingly adopted and, on the other hand, the benefits of these systems for administrative processing and learning — assisting folks who might find it difficult to structure things in a short amount of time, given the demands of the work.

00:52:12:04 - 00:52:18:13
Speaker 3
They can help us, as tools, to work with structures.
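As a sketch of how such checking can work statistically — assuming a "greenlist" watermark of the kind proposed in recent research, not any specific deployed product — the generator secretly biases its word choices toward a keyed pseudo-random half of the vocabulary, and a detector who knows the key measures how far a text deviates from chance. All names and the bigram keying here are illustrative assumptions:

```python
import hashlib
import math

def is_green(prev_word: str, word: str, key: str = "secret") -> bool:
    """Pseudo-randomly assign roughly half of all continuations to the
    'green' list, keyed on a secret and the preceding word."""
    digest = hashlib.sha256(f"{key}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_z_score(words: list[str], key: str = "secret") -> float:
    """How far the fraction of green (prev, word) pairs deviates from the
    50% expected by chance, in standard deviations. Unwatermarked human
    text stays near 0 on average; text generated with a green bias
    scores high."""
    hits = sum(is_green(p, w, key) for p, w in zip(words, words[1:]))
    n = len(words) - 1
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)
```

A detector would flag text whose z-score exceeds some threshold, while the generator's only change is to prefer green words during sampling — one plausible shape for the authentication market Ziyaad anticipates.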

00:52:18:13 - 00:52:57:18
Speaker 1
Yes, I think we will see this soon. I imagine we're going to see cover letters and the like that are AI-assisted — if we're allowing our students to use these things as tools, not to replace their work in the classroom, I would imagine we will see some of that. And I think the question of how it might change things like authoring articles is a really interesting one. My department has already had some conversations around that: how a lit review might be produced by or through an AI application or plug-in, and whether or not we think that's legitimate.

00:52:57:18 - 00:53:22:10
Speaker 1
And this also varies with the mode of writing across disciplines. In some disciplines it might be easier to get summaries for lit reviews — which these tools can already do quite well — and in others it's harder, because of the way the lit review is done discursively. But I think it's absolutely coming.

00:53:23:06 - 00:53:39:19
Speaker 1
It's going to be interesting, in good ways and bad. I will say that the questioner did not mention email, and I am really waiting for the moment when my emails can be taken over — when I can really use one of these applications to handle half of my emails. That would be good.

00:53:39:19 - 00:54:11:08
Speaker 2
And so we have time for crisp responses to a very big question posed by our audience member Juan, who I think is asking us to go back to the optimism of the will. His question: how can we tackle the hurdles involved in guaranteeing that AI technologies remain ethical, impartial, and accountable? Huge questions.

00:54:11:08 - 00:54:11:12
Speaker 2
Yeah.

00:54:12:18 - 00:54:41:15
Speaker 3
Huge question. I think we need to take the first step of building consensus around what that means for us, and I think it requires active effort, right? The mere fact that there are folks on this webinar interested in learning more, taking these steps to figure out more about AI — that is that first step. And it needs to translate into political advocacy and action.

00:54:41:15 - 00:55:08:17
Speaker 3
And that's what I don't see a lot of. I see a lot of top-down approaches when it comes to AI, and I see community organizing that doesn't quite get the attention it deserves in these conversations. So it's a question of building scale and building movements, really, I think — because we need to have some consensus about what ethical, impartial, and accountable mean.

00:55:08:17 - 00:55:32:00
Speaker 3
And those are not stable concepts; they're not universal concepts. They actually require the kind of participation, effort, and community building that I'm very big on, as you may have picked up — starting at the local level, which is an area I think is very neglected in some of these conversations.

00:55:32:00 - 00:56:03:03
Speaker 1
And I would say we certainly cannot guarantee it — this is a place where there are no guarantees. If we think about past iterations and implementations of technology in industry, these were not ethical or impartial, and often not accountable. So this is definitely a site of future struggle: we will definitely see implementations that are not ethical or impartial, and we will need to fight for accountability.

00:56:03:03 - 00:56:25:00
Speaker 1
And this is something we have seen in many institutions. We talked about workplaces; if we think about other types of communication industries as well, these have long been critiqued, and people have struggled to hold them accountable. So I think this is very much going to be a site of struggle.

00:56:25:00 - 00:56:34:06
Speaker 1
We need to do new work, in the way that Ziyaad is describing, in order to build the tools and consensus for how to do this.

00:56:34:06 - 00:57:10:20
Speaker 2
So thank you both. We've reached the end of our time. I am grateful for the audience participation, and for your thoughtful reflections, which remind us why authentic human beings and scholars are still necessary. I think we're at the beginning of what is obviously a much longer set of conversations about these topics, and I'm very much looking forward to hearing what comes from the new USC Center on Generative AI and Society that Ziyaad is working with, and also what comes out of Jennifer's current research on AI, human agency, and conceptions of human intelligence.

00:57:11:00 - 00:57:21:00
Speaker 2
And so, thanks again for joining us.