
Author asks ChatGPT for advice on her book about tech — here's what it said

TERRY GROSS, HOST:

This is FRESH AIR. I'm Terry Gross. Here's the kind of conflicted relationship my guest has with Big Tech. Tech journalist and award-winning novelist Vauhini Vara has ethical reasons why she shouldn't shop on Amazon and at least as many reasons why she does. Then there's Google. She's opposed to how Google monetizes our personal information to sell ads geared to our interests. But she appreciates the archive of her own stored searches, many of which she lists in her book because of what they reveal about different periods of her life.

As a tech reporter, she got access to a predecessor of ChatGPT. She loves playing with AI and has found ways it can be helpful, but she's skeptical of its use as an aid for writers. She's written twice about testing a chatbot in that capacity - first, in an essay that went viral called "Ghosts," in which she asked AI to help her write about her sister's death, and now again in Vara's new book, "Searches: Selfhood In The Digital Age." After she wrote chapters of the book, she fed the chapters to ChatGPT and asked for help with her writing. Then she analyzed the advice and what it says about the abilities, shortcomings and biases of the chatbot. She added her interactions with ChatGPT to her book. The theme of the book is how tech is helping and exploiting us.

Vara started as a reporter at the Stanford University campus paper, where she edited its first article about Facebook when Stanford became the third university to get access to it. She covered tech for The Wall Street Journal, was a tech writer and editor for the business section of The New Yorker's website and now contributes to Bloomberg Businessweek. Her novel, "The Immortal King Rao," was nominated for a Pulitzer Prize. Her short story, "I, Buffalo," won an O. Henry Award.

Vauhini Vara, welcome to FRESH AIR. And thank you for getting here. Your windshield got shattered by who knows what on the way over to the studio. I'm so grateful to you for making it.

VAUHINI VARA: It was worth it. I was like, I'm getting to that studio.

GROSS: And you did. Thank you, and welcome. And I enjoyed your book. So you did this exercise of feeding chapters of your book to ChatGPT, asking for advice. What did you tell the chatbot? Why did you tell it you wanted its help?

VARA: I'm glad you asked the question that way because I'm really interested in the way in which we sort of perform different versions of ourselves when we communicate, whether it's with other human beings or with technologies. And so I was definitely playing a role with the chatbot. I told the chatbot that I needed help with my writing, and I was going to feed it a couple of chapters of what I was working on, and I wanted to hear its thoughts. The reality was that I wanted to see how ChatGPT would respond. And so the interplay between sort of my performance and its performance was super interesting to me.

GROSS: I have an ethical question for you, Vauhini. Is it ethical to mislead a chatbot...

VARA: (Laughter).

GROSS: ...And ask questions under a kind of false pretense?

VARA: A hundred percent. I say that as a journalist, with the full expertise and authority of my role as a journalist. You know, I think so. I think our relationship with these products is really different from our relationships with other human beings. I feel really strongly about, obviously, things like accuracy and ethical standards in my daily life when I talk to other human beings, whether it's as a reporter or not. What I think is really interesting about technologies, whether it's ChatGPT or something else, is the way in which we can sort of play with these ideas of how people are supposed to communicate in ways that are, I think, pretty interesting and freeing.

GROSS: After you got some feedback on the first couple of chapters, you asked the chatbot if it's OK to share a couple of more chapters. And ChatGPT answered, absolutely. Feel free to share more chapters. I'm looking forward to reading them and continuing our discussion. And that gets to a very fundamental question that you asked ChatGPT about, which is, when a chatbot uses the first person I, what exactly does it mean? Because it is not a person. It is not an I. It is artificial intelligence. It's a computer program. It's - you know, it's basically a machine. So what is the I? What does that mean, the chatbot's use of I?

VARA: Yeah. I mean, I would argue that that I is a fictional creation of the company OpenAI that created ChatGPT. So we think about these technologies, I think, sometimes as being very separate from human experience and human desires and goals. But in fact, there's this company called OpenAI, whose investors want it to be very financially successful. And the way to be financially successful is to get a lot of people using a product. The way to get a lot of people using a product is to make people feel comfortable with the product, to trust the product. And one device that a company might use in order to do that is to use - have the product use language that makes you feel a bit like you're talking to another person.

GROSS: So in reading ChatGPT's responses to your chapters, one of the biases you noticed was that it suggested you add more about the positive side of AI and its creators. I thought that was interesting. Did that say to you that it was revealing the chatbot's bias or pointing out your negative bias or both?

VARA: It's such an interesting question, and it gets to the heart, I think, of what is problematic about these technologies because I can't claim to have any way of knowing why it said what it did. So basically, I fed it these chapters about big technology companies, and I said, what feedback do you have for me? And it said, you could be more positive. And then later, it goes on to provide these sample paragraphs it thinks I should include in the book about how Sam Altman, the CEO of OpenAI, is a visionary leader who's also a pragmatist, like, this really glowing stuff. It would be fun to be - and it would support a strong critique to be able to say, oh, clearly OpenAI has built this product in such a way that it's deliberately having the product spout this propaganda about its CEO that's positive.

It's certainly possible that that's the case, but there are all kinds of other explanations for it, too. It's possible that the language that the technology behind the chatbot absorbed in order to "learn," quote-unquote, how to produce language happens to be somewhat biased toward people like Sam Altman. There are all kinds of possible reasons, and we just don't know. But I think that not knowing is a problem.

GROSS: Did you use any of the chatbot's advice about balancing your reactions to AI and including more positive aspects of it?

VARA: I didn't. So one thing that I wanted to be really thoughtful about was actually not writing a book that was influenced in any way by the rhetoric of the chatbot that I was then conversing with about the book. And so I wrote the entire book, and after doing that, I fed it to the chatbot. And certainly, later, there were these edits that I made in the editing process with my editors at my publishing company. But those were not in response to integrating anything that the chatbot said because on a philosophical level, I did not want to integrate anything the chatbot suggested.

GROSS: So you asked the chatbot about how it seemed programmed to flatter and to sound empathetic and kind because before it gives you any kind of criticism ever, it tells you, like, this is very well written, and this brings out, like, a combination of, like, tech history with your personal history. And, oh, I have to say - let me just interject here - that just to see what happened, I asked ChatGPT, what questions would Terry Gross ask Vauhini Vara in a FRESH AIR interview? And it was very flattering to me. It praised me as a good interviewer with, like, sensitivity and deep questions. So, you know, just another example that it can be very flattering. So, you asked it about how it seemed programmed to flatter and to sound empathetic and kind, and it responded, the way I communicate is designed to foster trust and confidence in my responses, which can be both helpful and potentially misleading. And I thought, that's so true. It can be...

VARA: Yes.

GROSS: ...Helpful and misleading. And I thought maybe the chatbot is actually very transparent.

VARA: So, OpenAI has said that ChatGPT and its other products are designed to foster trust, to appear helpful. So it says those things explicitly. And, right, by saying, this is how I function, yeah, I guess ChatGPT is being as transparent as it can be. I don't know that I would trust it to always be as transparent as that, or even to know how to be transparent - right? - because it's just a machine that's generating words. It has no way of definitively always generating material that's even accurate, right? And so I have fairly low confidence that that transparency is always going to be there. I am curious, though, about what you thought of the questions that it produced, whether any of them were, like, at all interesting to you.

GROSS: They were kind of pretty broad general questions. They all touched on themes in your book.

VARA: Yeah.

GROSS: They were all subjects I wanted to explore. But they were so, like, broad in nature that, you know, there was no personality. I'm not saying, like, well, look at my personality, but there was no point of view, and there was no follow-up. It was just, like, a list of questions. But there was no follow-up to try to go deeper after the answer. And I know it wouldn't have heard your answer, but still, when I prepare an interview, like, one question leads to the next question to go deeper into that answer. And there was nothing like that.

VARA: Yeah. Well, what's interesting, I think, about that, and what I experienced, too, using ChatGPT in the context that I did is that there's something fundamental about human communication about, like, two people talking to each other, or the fact that right now, for example, you and I are talking to each other, but I imagine you always also have an awareness of the fact that eventually, millions of other people will hear the same conversation. And so both you and I are keeping in mind, like, this kind of complex idea about who we're communicating to, what the communication is for, and, like, our own backgrounds and experiences and ways of communicating come into it. And a chatbot is, like, not doing any of that. It seems like it is 'cause the words it produces sound kind of like the language humans use, but it's not using language in any way that's remotely like how we do.

GROSS: My guest is tech reporter and fiction writer Vauhini Vara. Her new book is called "Searches: Selfhood In The Digital Age." She's now writing for Bloomberg Businessweek. We'll be right back. This is FRESH AIR.

(SOUNDBITE OF PAQUITO D'RIVERA QUINTET'S "CONTRADANZA")

GROSS: This is FRESH AIR. Let's get back to my interview with tech journalist and fiction writer Vauhini Vara. Her new book, "Searches: Selfhood In The Digital Age," is about how tech is helping and exploiting us and about her own conflicted relationship with big tech like Google, Amazon and AI. She fed the chapters of her book to ChatGPT, asking for advice, and used the chatbot's responses as the basis for her critique of the chatbot.

So in 2021, when you wrote your essay "Ghosts," which went viral - it was published initially in The Believer and adapted into a This American Life story. And the premise of this was you had wanted to write about the death of your sister. She was diagnosed with cancer, Ewing sarcoma, when you were a freshman in high school and she was a junior, and she died when you were both in college. And you felt like you just didn't have the words to describe how bereft you were, what a life-changing event this was in every way. And so in 2021, while you were playing around with a predecessor of ChatGPT, you asked it to help you write the essay. And I want you to describe what your process was.

VARA: This was before ChatGPT was out. This was before AI was a big part of our lives, but I got early access to this AI model. And the way it worked was that there was this webpage, and it had a big white box, and you could type in that box and then press a button and it would seem to complete the thought for you. It would complete the text for you. And so I'd been playing around with it for a while. I would just, like - I put in the beginning of "Moby Dick," which is my favorite novel, and hit the button just to see what would happen. I did all kinds of stuff like that. And then I started to think about what the promise was that a technology like this was making. And it seemed to me that the promise was that it could produce words for you when you were at a loss for words. And because I'm a writer, I tend to want to make the effort to come up with my own words to describe my experiences or things I've observed.

But there was this thing and continues to be this thing that I have a really hard time finding words for, which is the death of my sister and my grief over it. I mean, I think anybody who has experienced death or any other kind of loss will be familiar with that feeling of not knowing how to come up with the words to describe this experience that was so profound. And so I kind of took this technology somewhat at face value. I thought to myself, well, if this is the promise this technology is making, let me try to get it to communicate for me about my sister's death. And so I sat down and I wrote this sentence, which was when I was in my freshman year of high school and my sister was in her junior year, she was diagnosed with Ewing sarcoma. And then I hit the button, and it produced this story that really had very little to do with my actual experience and my actual sister. And the last line of that little, you know, three- or four-paragraph story it produced was, she's doing great now.

GROSS: (Laughter) Yeah.

VARA: And - which was, like - it was, like, the opposite of what I wanted this chatbot to do, right? Like, it was producing a lie, a falsehood that was sort of like the worst possible falsehood that it could produce, right? Because my sister - my challenge was trying to communicate the reality of what had happened, and this was, like, the opposite of that reality.

And so I thought, OK, well, I know enough about these technologies to know that if I give it more words and hit the button, then it'll have more to go off of, and it might match more closely my experience. And so I erased everything it wrote, kept my first sentence. I wrote a little more, hit the button again. And in some ways, it did get a little closer. And I did that over and over, sort of deleting what the chatbot wrote every time. And the strange thing that happened was that as I did that, the technology did get closer to describing something that resembled grief or my experience of it in a way that was, like, weirdly moving to me and impressive to me.

But ultimately, this machine was not me, and so it couldn't say anything authentic about my actual experience. And so I realized, eventually, toward the end of writing this thing, that there was nothing that the technology could come up with that was actually going to fulfill my desire to be able to communicate because it wasn't me. It wasn't doing the thing that I wanted to do, which was to communicate, myself, on my own. And so, ultimately, I published that experiment as an essay. And I thought it was so interesting how it showed both the ways in which a product like this can legitimately produce language that somebody can find moving and intellectually stimulating and interesting and yet be doing something very different from what a human is doing when we're trying to communicate.

GROSS: Well, a couple of things. Early on when you were giving it very little information, it twice had you as, like, very athletic.

VARA: Yes.

GROSS: I don't know if you are or not, but, like, in one, you're like, a lacrosse player. And in the other, it's like, you run for, like, miles and miles and miles. And it also seemed to have a bias toward a happy ending. You know, she's fine now. It's like he watched too many mediocre movies (laughter).

VARA: Yeah. I mean...

GROSS: Notice how I personalized it. Like, he watched.

VARA: Well, and it's...

GROSS: And I genderized (ph) it.

VARA: Totally. And it's interesting that you genderize it, too, because in that second one, the one where it thought that I was a runner, it seemed to think that I, the writer, was male. So I think...

GROSS: Oh.

VARA: ...There are all kinds of things going on. But then later, it realizes that I'm female and then ends up generating this meet-cute between me and this, like, handsome professor who helps me deal with my grief.

GROSS: (Laughter).

VARA: And so there are all these tropes and biases that are embedded in what it's producing.

GROSS: So I want to give an example of some writing that is, like, very dramatic but, also, like, very puzzling, very odd. So this is about how hard it is to describe your sister and what you felt for her.

VARA: This is the technology's.

GROSS: Right. This is AI speaking here. This is AI writing on your behalf.

VARA: Exactly.

(Reading) So I can't describe her to you, but I can describe what it felt like to have her die. It felt like my life was an accident - or worse, a mistake. I'd made a mistake in being born, and now to correct it, I would have to die. I'd have to die, and someone else, a stranger, would have to live in my place. I was that stranger. I still am.

GROSS: What? Like, it sounds very dark and interesting, but I'm not sure it makes any sense. What do you think?

VARA: It's funny because I think so much of the experience of reading is about making your own meaning as a reader. And so, for me, I think there's something that, like, in my reading of it is kind of poignant. I read it as saying, when somebody who's very close to you whose existence is a big part of your identity dies, you have to then rebuild a new version of yourself - right? - like, a kind of new identity. So I read this as talking about the period after my sister died, and I had to become a new version of myself, and I was learning who that new version of myself was. And so that person was kind of like a stranger. And in a way, there's a sense of estrangement that continues.

What's interesting is, like, I read it that way, but I read it that way because I'm a reader making meaning from language that a technology generated with no particular intent, no knowledge of what my experience of grief was.

GROSS: I'm thinking of how weird it is that you and I are doing literary criticism of AI.

(LAUGHTER)

GROSS: It must sound a little strange, don't you think?

VARA: Yeah, I agree. And I think the funny thing about it is that we're two human beings trying to make meaning out of something that is fundamentally, one could argue, meaningless in that the entity that created the language wasn't doing it with any consciousness - right? - any intent behind it.

GROSS: Well, let's take another short break here. If you're just joining us, my guest is Vauhini Vara. Her new book is called "Searches: Selfhood In The Digital Age." There's a lot more to talk about, so stick around. We'll be right back. I'm Terry Gross, and this is FRESH AIR.

(SOUNDBITE OF MUSIC)

GROSS: This is FRESH AIR. I'm Terry Gross. Let's get back to my interview with tech journalist and fiction writer Vauhini Vara. She currently writes for Bloomberg Businessweek. Her new book, "Searches: Selfhood In The Digital Age," is about her conflicted relationship as a consumer with Big Tech like Google, Amazon and AI. She also asks big questions about what it means when chatbots talk to us as if they're human beings, while we're beginning to outsource our own writing, research and thinking to AI. She fed chapters of her new book to ChatGPT, asking for advice. She used those interactions to critique AI's ability to serve as an editor and writer. She reprints those interactions in her new book.

So based on your interactions with AI, what are your thoughts about chatbots and the use of AI for writing or editing? It wasn't that useful for you. It was very instructive about AI. But do you think there are other people that it would be very useful for?

VARA: There was a study out of Cornell a couple of years ago that I was - found really interesting, where they had some people write an essay about social media just on their own. And then they gave these two other groups special AI models. For one group, they gave them an AI model that was predisposed to produce positive opinions about social media. And then they gave this other group an AI model that was predisposed to produce negative opinions about social media. What they found was that when people wrote essays with the help, quote-unquote, of these AI models, they were twice as likely to produce essays that reflected the quote-unquote opinion of the AI model.

It seems, from that research and other research that's emerged since then, that even if we are using these AI companies' products to edit our work or ask for feedback on our work, there's a real danger that the responses that we're going to get are going to change our writing in fundamental ways that we might not even be aware of.

GROSS: Your father uses AI, including...

VARA: Yes.

GROSS: ...To write haiku. So how does he use it, and what do you think of that?

VARA: Oh, my gosh. We could do a whole interview about my dad's use of AI. My dad has recently started sending me messages on WhatsApp where the whole text of the message is something he asked ChatGPT to write. So, for example, he recently sent me one that was, it's hard to decide whether to retire in Canada, the United States or India. Here are some pros and cons for each option. So he never said to me, I'm wondering whether I should move...

GROSS: (Laughter).

VARA: ...To India or Canada for my retirement. He just sent me that response. And so the subtext - like, what he's communicating through ChatGPT - is the thing that's actually unsaid. And so there are a lot of people out there - I think my dad is one of many people who want to communicate something. My dad was explaining to me on the phone the other day that he's not a writer. He can't communicate these things himself. But if he gives ChatGPT enough of a prompt, it can communicate the thing he wants it to communicate. And there are things that I find problematic about that, for sure.

GROSS: You know, AI in some ways is being used like a personal Cyrano de Bergerac. Like, you want to express your love for someone. You don't have the words, so you have this other guy write it as if you're saying it and signing your name to it.

VARA: Yeah.

GROSS: So how do you use AI for real in your life?

VARA: So the truth is that I use AI in very limited ways. The fact that I fed large portions of this book to ChatGPT might give people the impression that I'm some huge AI superuser, which I'm not. I'm a journalist who writes about AI. So to the extent that that's part of my work, I think it's really important for me to engage with the products. At the same time, I'm really concerned about all the things we don't know about how these products function and how the companies behind these products are - might ultimately use everything we're putting into their products to exploit us, to expand their own wealth and power.

I sometimes use ChatGPT. A use that comes to mind is, like, if there's a word on the tip of my tongue, I'll go to ChatGPT and write a sentence and - with a blank in it and kind of explain the gist of what I'm looking for. And one thing it's pretty good at is coming up with what that word was that was on the tip of my tongue. So that's a small example of how I use it. When I do use it, I tend not to log into it. I tend to just go to ChatGPT, use the interface without logging in so that my use of it is not associated with my account. I do still have an account because again, as a journalist, I want to be able to have access to these products.

GROSS: So I'm unclear, since I've only used it twice, each time to ask it questions to help me understand how it worked for the interview I was about to do, as I did in your case. Because, you know, AI is at the center of the interview, so I wanted to ask it some meta questions. You know?

VARA: Yeah.

GROSS: But, you know, I used it for free. I don't have an account. I just put my question in, and it came up with stuff. What are some of the ways you expect it to be monetized in the future that it's not monetized for yet? 'Cause I feel like being able to use it at all for free is kind of like a teaser until, like...

VARA: Yeah.

GROSS: ...No one can use it for free. I'm...

VARA: Yes.

GROSS: ...Just speculating. I have no knowledge.

VARA: Yeah. So I'm speculating to an extent, too, but these products are really, really expensive to build. And so investors are putting a lot of money. Companies themselves are putting a lot of money into building these products. And some small number, some small percentage of users are paying for premium versions of the product, but that's just not enough to turn these companies into the enormous businesses that the investors are betting that they are going to be.

And so that leaves us in this really interesting situation in 2025 where the companies are starting to say, OK, we're going to need to figure out how to monetize our free users, is how they put it. And the CFO of OpenAI said to the Financial Times last year that the company is looking into advertising as an option. Other AI companies, and here I'm talking about big companies like Google and Microsoft, also seem to be thinking about this.

So this is speculation, but here's one way in which it would be obvious for AI companies to monetize our use of the products. When people trust these products a lot, they end up going to these products with all kinds of personal information - their marital struggles, their conflict with their boss at work. And while we focus a lot on the question of, like, how accurate or unbiased or useful the information is that these products are giving us, I think something we kind of forget about is everything we're providing to the makers of these products in asking them these questions about really intimate details of our lives.

And so eventually, these companies are going to know a lot about who we are, about what kind of language can be used with each specific user to persuade them of something, to influence them in a particular way. And that puts these companies in a position to, for example, recommend products to us using language that's geared toward us specifically and our circumstances and our vulnerabilities, and ultimately collect this huge database of all of us who are using these products, who we are and what makes us tick.

GROSS: Yeah. And it sounds like, you know, as you're saying, that AI and the companies that own the AI products are going to know a lot more than, say, knowledge based on what I search for on Google or the books I bought on Amazon or the TV shows I'm watching on Netflix, where the algorithms are going to recommend what I want to buy or watch next.

VARA: Exactly. Yeah.

GROSS: There's parts of your book where you describe your life through your searches because you don't like the fact that Google has a lot of information on you based on your searches, but you do like the fact that your searches have been archived. You can access that archive and learn about where you were at different periods of your life based on what you searched for. How did you start thinking of searches as a record of your life?

VARA: So the first thing that I ever wrote that ended up in this book was this chapter made up entirely of my Google searches. I wrote it in 2019. I had been covering tech for a long time by that point, and so I knew that Google kept records of our searches unless we turned off its ability to do that. And I could have sworn I'd turned it off, but I hadn't. And so for the past, you know, 15 years - off and on, but mostly on - Google had been collecting all my searches.

Realizing that - it freaked me out on one level. But then also, I found it fascinating because as a writer, as a journalist, I'm always interested in archives, right? And I used to keep a diary when I was a kid, but I haven't in a long time. And it occurred to me that probably the best possible archive of my life was the archive of everything I'd searched for over the years. And it made me think about the way in which, like, it's sort of too simplistic to say, these companies exploit us and we have no say in the matter, or to do what the companies say in turn, which is, you're only using these products because they're useful to you. You could stop using them tomorrow if you really wanted to.

I think there are these, like, very binary positions. And I think the reality is that the exploitation and the usefulness totally go hand in hand with all of these products. And I think what makes that really uncomfortable for us as users is that then we have to contend with our own complicity. Like, our own role in the exploitation that's taking place when these companies, you know, collect our personal information and use it to become more wealthy, to become more powerful, to influence political systems. We have to admit, like, well, that's partly our fault because here we are using these products and giving them permission to keep archives of our lives.

GROSS: Well, let's take another break here. Let me reintroduce you. If you're just joining us, my guest is Vauhini Vara. Her new book is called "Searches: Selfhood In The Digital Age." We'll be right back. This is FRESH AIR.

(SOUNDBITE OF JEFF BABKO'S "NOSTALGIA IS FOR SUCKAS")

GROSS: This is FRESH AIR. Let's get back to my interview with tech journalist and fiction writer Vauhini Vara. Her new book, "Searches: Selfhood In The Digital Age," is about how tech is helping and exploiting us and about her own conflicted relationship with Big Tech like Google, Amazon and AI.

Is the internet and social media making you feel obsolete as a novelist?

VARA: (Laughter).

GROSS: And also worried that all your writing - your essays, your journalism - will be appropriated by AI?

VARA: Yeah. I mean, what I would like to think is that we have choices here. And part of the reason that in this book, and when I think about my own personal use of these products, I'm so interested in, like, our choice and agency in the matter is that if it's the case that big technology companies are just going to continue to amass more wealth and power, and AI is here and so AI is going to be even bigger in the future and take everything over - like, that suggests that we don't have a choice in the matter, right? However, if we say we have a choice in the matter, and we can actually decide to choose a different future because we are unhappy with the one we're currently in, then we can potentially build a future that's different from the one that we're in now.

But I think, like, in 2025, we're in this really interesting, crucial period where not as many people are using AI as I think we think. So in the U.S., for example, most people have never tried ChatGPT - still, in 2025. And so we're in this interesting position where we can actually decide as individuals, as communities, as societies the extent to which we want AI to be a part of the future, the extent to which we want AI generating novels or generating something that is going to substitute a newspaper or magazine article or a radio show.

GROSS: I have to ask you about the spelling bee.

VARA: (Laughter).

GROSS: A moment of semi-fame in your younger years was when you were third in the National Spelling Bee. I always wonder - what is the point of asking young people to spell obscure words that no one uses and no one even can define? Can you explain that? 'Cause it makes no sense to me.

VARA: Yes. I have so many thoughts on this, Terry. I continue to love spelling. I love language. I think - you know, to get philosophical about it, I think we could ask that question about anything, honestly. I think we could say, what's the point of trying to run a three-minute mile, right? Or, like, I've taken up rock climbing. What's the point of, like, trying to climb to the top of a rock wall when elevators exist, right (laughter)?

GROSS: Yes.

VARA: And I think it speaks to this thing about AI, too. What's the point of trying to write an essay if AI can write it for you? And I think the point is that we as humans are, like, idiosyncratic, curious creatures, and we created this thing called language that's really important to us. And for me personally - who knows why? I don't know why. I love words. I love language. I love knowing how sounds fit together and produce meaning. Like, that feels - that's always been fascinating and meaningful to me. And so, like, I love it because I love it. And I kind of love about spelling that there is - you're right - there is no point, especially in the age of spell-check and AI. Like, there's probably even less of a point than there was in the mid-'90s. But strong supporter of spelling. I think everybody should be in spelling bees.

GROSS: Yes, but the question, though, was about asking young people to spell words that nobody ever uses, that no one can even define, really obscure words. Do you take pleasure in spelling words that you didn't even know existed?

VARA: Yes. I love words that I didn't even know existed. I take pleasure in spelling them. I take pleasure in, like, knowing that they exist when previously I didn't know that they exist. I - no, I love it. And I think they do, too. I think there's this misconception about spelling bee kids that, like, they're all doing it 'cause their parents are making them so that they can get into college 10 years later or something. But every - I wrote an article about spelling bee kids in 2018, I think it was, for the magazine Harper's. And so I got to spend time with, like, a more recent generation of spelling kids. And the thing that I think they had in common with the spelling kids I knew in the mid-'90s is just, like, this genuine, idiosyncratic, strange love for words and how they're put together.

GROSS: So you placed third in the National Spelling Bee. What was your losing word?

VARA: Oh, Terry, the losing word was periplus.

GROSS: Periplus?

VARA: Can you spell it?

GROSS: Yeah, but again, I've never heard the word before.

VARA: (Laughter).

GROSS: I have no idea what it means. I would spell it - peraplus (ph) or paraplus (ph)? 'Cause one sounds like an animal and the other sounds like an amount.

VARA: OK. So the word is periplus. There is an alternate pronunciation, which is periplouse (ph).

GROSS: OK. So I would spell it P-A-R-A-P-L-U-S or P-L-U-S-S or if it's periplouse...

VARA: (Laughter) This is cheating.

GROSS: ...P-L-O-U-S-E. Oh, I know, but this isn't a spelling bee, so I'm allowed to do this.

VARA: You're out. You're out, Terry.

GROSS: I'm out. I'm - after three or four tries, I'm out. OK.

VARA: (Laughter) You're still out. Yeah. So it's P-E-R-I...

GROSS: Oh, but you said paraplus.

VARA: I know. I think it...

GROSS: You didn't say periplus.

VARA: That's fair. It might be my pronunciation. But it is P-E-R-I-P-L-U-S, and I spelled it P-E-R-I-P-L-U-S-S-E.

GROSS: Ah, OK. You got a little fancy there.

VARA: (Laughter).

GROSS: You over-fancified it.

VARA: Yeah, exactly. Exactly.

GROSS: And what does it mean?

VARA: It has to do with, like, the - I don't know the - I don't remember the exact definition. But it has to do with, like, the log that is kept on ships when they circumnavigate, like, when they're trying to see where the borders of islands or continents are.

GROSS: Well, I think it's only fitting that we started the conversation doing a literary critique of AI's writing, and we're ending it with spelling. (Laughter) So...

VARA: It's amazing. Perfect. Full circle.

GROSS: Yes.

(LAUGHTER)

GROSS: Thank you. It was really great to talk with you. I really enjoyed it. And thank you again for coming in spite of the fact that your windshield shattered on the way over.

VARA: (Laughter).

GROSS: I should let you go and get it repaired before it rains or whatever.

VARA: I appreciate it. This was so fun. It was a real honor to get to talk to you, Terry.

GROSS: Vauhini Vara's new book is called "Searches: Selfhood In The Digital Age." Our TV critic David Bianculli will talk about the significance of CBS's cancellation of "The Late Show With Stephen Colbert" after a break. This is FRESH AIR.

(SOUNDBITE OF THE DAVE BRUBECK QUARTET'S "UNSQUARE DANCE")

Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.
