After a decade of study, pioneering sociologist Zeynep Tufekci says social media is “specifically designed to draw you in, and waste your time, and distort your thinking.” With the rise of artificial intelligence, the role and power of social media may shift radically. But in what ways? This may be the most urgent question of the next decade. Tufekci—the Henry G. Bryant Professor of Sociology and Public Affairs at Princeton University and columnist for the New York Times—brings a deeply informed and pragmatic approach to finding answers.

Tufekci will speak at the University of Vermont's Alumni House, Silver Pavilion, 61 Summit St., Burlington, VT, on Wednesday, March 6, at 4:30 p.m. The event, part of UVM’s Presidential Lecture Series, is free and open to the public.

UVM science writer Joshua Brown spoke with Tufekci to learn more about her views on social media and AI—and what she might have to say to UVM students about how to respond.


 

In your upcoming lecture, you’re going to be speaking about lessons learned from a decade of observing social media. What are some of these lessons?

There are lessons from social media, and I’m interested in applying them to the big change happening right now—the rise of artificial intelligence. One lesson is not to jump too quickly to conclusions about the winners and losers. It’s important to consider the entirety of the system, from technology to social institutions—and their interactions. We need to be specific: how does the technology work, what are the human incentives, what are the structural dynamics? And we should also be mindful that these interactions and dynamics don’t just happen—they are going to depend on choices we make. There’s no single, set path that AI will take.

 

I often hear people say, “Technology is neutral. It's all about what you do with it.” Do you think the moral nature of AI depends on how we use it?

There’s a very common saying, “Oh, it's just a tool. It depends on whether we use it for good or bad.” While that may be true in some very abstract sense, it’s misleading because particular tools don't have infinite sets of possibilities. Certain outcomes are more possible and more likely, and technologies have certain structures. So you can't just say, “Oh, here it is, and I can do anything with it.” You have to be cognizant of the question: which direction is this tool pulling me? And which direction is society going to pull this technology because of the way society works? An open-ended, anything-can-happen scenario is misleading.

I would warn people against that. Your intentions matter greatly, but AI or other technologies are not things that you can do anything with. Specific technologies, specific scientific advancements, have certain equilibriums that they facilitate—and certain equilibriums they don't. Take the example of nuclear weapons. There are only so many ways a world with nuclear weapons can continue. And one of those ways is not that we have a nuclear war every other year. That is not a viable path. You either have a world in which there is a significant barrier to their use or you don't have a world. And there are not really many alternatives to that, because of the nature of the technology. Nuclear war is not something you can kind of do.

And it’s similar when you look at artificial intelligence; it's not some abstraction. It is a very specific set of technologies: machine learning, reinforcement learning, a particular way of being trained. It's not some abstract intelligence. It's a very concrete application of a particular computational technology, which means it can do certain things but not others. And it has weaknesses and costs and trade-offs, but they're really specific. And that's what we should talk about.

 

Who is this "we" and what choices should we be considering?

There are lots of actors who would like to be making those choices! My view is that, as a society, we should be making those choices based on democratic legitimacy and the public interest. It should not be just a few companies making money from these technologies. That's what I argue. But that is not always what happens, because of a complicated process with these companies: they make money, and they become very friendly with politicians who also want to use these tools. One of the lessons from social media is that many of these decisions have been made by very few unaccountable actors rather than by society as a whole, through mechanisms of democratic legitimacy.

 

This makes me think about climate change. Some people think of climate change as the big problem and others think of climate change as just a symptom of an even bigger problem of—I don't know, it depends on who you talk to, right?—capitalism or greed or technological naivete or failed markets. So when you think about artificial intelligence, how big of an issue is it?

I understand what you're saying, and here's what I would say: You can have a problem posited at multiple levels. You can, for example, argue that climate change is downstream of a particular type of capitalism and greed. You might be right. Other people could argue that socialist and communist countries were very polluting too. But, in some sense, it doesn't really matter, because you need to address climate change! So if your argument is that there is no way to address climate change without solving, say, capitalism, I would argue that we have made progress in lots of areas of life without necessarily solving the bigger problems within which they're embedded.

Perhaps what is most important is understanding the power of short-term interests. The short-term interest of powerful people, rather than the long-term interest of the people in general, is the problem—in climate change and AI. It’s not a capitalism-only problem.

 

What do you recommend undergraduate students do about artificial intelligence?

I’d have a lot of recommendations if I ran the world, which, obviously, I don't! But I think the most important recommendation for undergraduates is to become involved, because it is not a world in which people who are sitting it out are getting heard, right? That's just not happening. So if you want to have a say in how the world works, you have to get involved in how the world works. And I know a lot of students are interested in doing that, but usually their concept of what that involves is various forms of activism. Activism is very good. I was an activist myself when I was in college, and beyond, and it's something I've studied. But there are a lot of other ways to shape the world—including becoming part of the political system and running for office and trying to directly influence policy; or running NGOs; or running companies that come up with innovations. I would encourage undergraduates to keep their sense of the possible open.

One thing I tell my own students is to recognize that in the academy we have disciplines: sociology, computer science, this or that. And they stand separated, in different departments, but that's not the way the world works! Of course, you have to get a degree in one major and maybe one minor, but it’s an advantage to learn broadly, with an open mind and curiosity—and then make those connections, because the world is not separated into neat disciplines that map onto historically defined majors.

 

The New York Times has written that you “keep getting the big things right,” things that other experts, government officials, and pundits got wrong—election forecasts, the effects of digital media on American politics, the risk of a pandemic. What are the big things now that we should know about—and how should we behave to get them right?

I'm not making predictions. That's not really what I do. Predictions are not the right target, because what happens depends on what we choose to do. But what's important is to make a proper analysis of the trade-offs. And I think what that article was referring to is that I have pointed to trade-offs—when a lot of people were pointing to an upside or a downside in the tech world. Forever I’ve been asked, “are you an optimist or a pessimist?” And I always say “yes,” because that's not really a meaningful question. It is useful to understand the trade-offs that exist among the possibilities in the technological developments in the current world, and the current sociology, and the current power structure. I work to look at them realistically and with the public interest in mind.

Once you do that, that's an advantage. It’s easier to do this as an academic because I'm not trying to make money from these choices. I'm trying to understand, to the best of my ability; I’m trying to advance these conversations, which is quite freeing, because there are ways in which trying to make money from these hard questions influences the way one thinks. To the degree that I have succeeded, it’s because I draw people's attention to trade-offs. Usually, there's a wave of optimism and then there's a wave of pessimism, and then, in the end, we start discussing the trade-offs. And those trade-offs are what I've been discussing from early on.

People like me are not predicting rigid paths; we're just saying, “look, here are some choices and here are some things these developments are likely to do. Let's try to analyze them in their full complexity without pretending we know exactly how it's going to go, because nobody does, because it depends on what we do. But let's try to see the world of possibilities in a realistic and analytic way and try to have society make better choices, in the public interest, from this framework.” That's what I aspire to do.

 

In your work in the academy, and also in your personal life, how much time do you spend on social media and what do you do there?

This is not a good question for me because I study social media! So I spend more time on it than I would if I were not studying it. It’s kind of like asking a pathologist, “how much time do you spend with microscopes?” A lot! But that's not a good indicator of what people should do. I think social media has specific uses for some topics. There are some communities that exist there. But if I weren't studying it, I would spend a lot less time on it, because it is designed to waste your time. It is very specifically designed to draw you in and waste your time, and it distorts your thinking. Social media is tribalizing. It’s an in-group, out-group pushing environment—just trying to keep you there. It creates distortions in your thinking. If I were not also studying these things, I would limit my time on social media purely because I think it would make my thinking less useful. It would mislead me. It would distort my thinking and my emotions. Even when I'm studying it—because I need to understand something—I feel, “oh gosh, I have to take a break,” because I am a person too. I start having certain inaccurate impressions about the world that I know are just coming from social media.

And I'm like, all right, I’ve got to go take a break from this and talk to people who aren't in these small groups! It’s not that there’s nothing useful in social media. There are genuine and helpful communities there. But it's important to realize that it is a tool designed to suck you into an in-group/out-group process and distort your thinking. So you need to approach it defensively. That doesn't mean don't use social media. There are good reasons to use it: to keep in touch with people. I enjoy doing that myself. But I think that defensive attitude is healthier.