AI and the Trust Revolution

When experts worry about young people’s relationship with information online, they typically assume that young people are not as media literate as their elders. But ethnographic research conducted by Jigsaw—Google’s technology incubator—reveals a more complex and subtle reality: members of Gen Z, typically understood to be people born between 1997 and 2012, have developed distinctly different strategies for evaluating information online, ones that would bewilder anyone over 30. They do not consume news as their elders would—namely, by first reading a headline and then the story. They do typically read the headlines first, but then they jump to the online comments associated with the article, and only afterward delve into the body of the news story. That peculiar tendency is revealing. Young people do not trust that a story is credible simply because an expert, editorial gatekeeper, or other authority figure endorses it; they prefer to consult a crowd of peers to assess its trustworthiness. Even as young people mistrust institutions and figures of authority, the era of the social web allows them to repose their trust in the anonymous crowd.

A subsequent Jigsaw study in the summer of 2023, following the release of the artificial intelligence program ChatGPT, explored how members of Gen Z in India and the United States use AI chatbots. The study found that young people were quick to consult the chatbots for medical advice, relationship counseling, and stock tips, since they thought that AI was easy to access, would not judge them, and was responsive to their personal needs—and that, in many of these respects, AI advice was better than advice they received from humans. In another study, the consulting firm Oliver Wyman found a similar pattern: as many as 39 percent of Gen Z employees around the world would prefer to have an AI colleague or manager instead of a human one; for Gen Z workers in the United States, that figure is 36 percent. A quarter of all employees in the United States feel the same way, suggesting that these attitudes are not solely the province of the young.

Such findings challenge conventional notions about the importance and sanctity of interpersonal interactions. Many older observers lament the rise of chatbots, seeing the new technology as guilty of atomizing people and alienating them from larger society, encouraging a growing distance between individuals and a loss of respect for authority. But seen another way, the behavior and preferences of Gen Z also point to something else: a reconfiguration of trust that carries some seeds of hope.

Analysts are thinking about trust incorrectly. The prevailing view holds that trust in societal institutions is crumbling in Western countries today: a mere 2 percent of Americans say they trust Congress, for example, compared with 77 percent six decades ago; and although 55 percent of Americans trusted the media in 1999, only 32 percent do so today. Indeed, earlier this year, the pollster Kristen Soltis Anderson concluded that “what unites us [Americans], increasingly, is what we distrust.”

But such data tells only half the tale. The picture does seem dire if viewed through the twentieth-century lens of traditional polling that asks people how they feel about institutions and authority figures. But look through an anthropological or ethnographic lens—tracking what people do rather than what they simply tell pollsters—and a very different picture emerges. Trust is not necessarily disappearing in the modern world; it’s migrating. With each new technological innovation, people are turning away from traditional structures of authority and toward the crowd, the amorphous but very real world of people and information just a few taps away.

This shift poses big dangers: in 2024, the mother of a Florida teenager who had died by suicide filed a lawsuit accusing an AI company’s chatbots of encouraging her son to take his own life. But the shift could also deliver benefits. Although people who are not digital natives might consider it risky to trust a bot, many in Gen Z seem to think that it is as risky (if not riskier) to trust human authority figures. If AI tools are designed carefully, they might help—not harm—interpersonal interactions: they can serve as mediators, helping polarized groups communicate better with one another; they can counter conspiracy theories more effectively than human authority figures can; and they can provide a sense of agency to people who are suspicious of human experts. The challenge for policymakers, citizens, and tech companies alike is to recognize how the nature of trust is evolving and then design AI tools and policies in response to this transformation. Younger generations will not act like their elders, and it is unwise to ignore the tremendous change they are ushering in.

TRUST FALL

Trust is a basic human need: it glues people and groups together and is the foundation for democracy, markets, and most aspects of social life today. It operates in several forms. The first and simplest type of trust is that between individuals, the face-to-face knowledge that often binds small groups together through direct personal links. Call this “eye-contact trust.” It is found in most nonindustrialized settings (of the sort often studied by anthropologists) and also in the industrialized world (among groups of friends, colleagues, schoolmates, and family members).

When groups grow big, however, face-to-face interactions become insufficient. As Robin Dunbar, an evolutionary biologist, has noted, the number of people a human brain can genuinely know is limited; Dunbar reckoned the number was around 150. “Vertical trust” was the great innovation of the last few millennia, allowing larger societies to function through institutions such as governments, capital markets, the academy, and organized religion. These rules-based, collective, norm-enforcing, resource-allocating systems shape how and where people direct their trust.

The digitization of society over the past two decades has enabled a paradigm shift beyond eye-contact and vertical trust to what the social scientist Rachel Botsman calls “distributed trust,” or large-scale, peer-to-peer interactions. That is because the Internet enables interactions between strangers without eye contact. For the first time, complete strangers can coordinate travel through an app such as Airbnb, trade through eBay, entertain one another by playing multiplayer video games such as Fortnite, and even find love through sites such as Match.com.

To some, these connections might seem untrustworthy, since it is easy to create fake digital personas, and no single authority exists to impose and enforce rules online. But many people nevertheless act as if they do trust the crowd, partly because mechanisms have arisen that bolster trust, such as social media profiles, “friending,” crowd affirmation tools, and online peer reviews that provide some version of oversight. Consider the ride-sharing app Uber. Two decades ago, it would have seemed inconceivable to build a taxi service that encourages strangers to get into one another’s private cars; people did not trust strangers in that way. But today, millions do that, not just because people trust Uber, as an institution, but because a peer-to-peer ratings system—the surveillance of the crowd—reassures both passengers and drivers. Over time and with the impetus of new technology, trust patterns can shift.
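To make that mechanism concrete, here is a minimal sketch in Python of how a peer-to-peer ratings system can turn scattered crowd feedback into a single trust signal. The smoothing approach and constants are illustrative assumptions, not Uber’s actual, unpublished formula.

```python
# A minimal sketch of a crowd-based trust signal, assuming a simple
# Bayesian-smoothed average. The prior_mean and prior_weight values
# are hypothetical; real platforms tune (and do not publish) theirs.

def trust_score(ratings, prior_mean=4.5, prior_weight=10):
    """Blend a platform-wide prior with accumulated peer ratings.
    New drivers start near the prior; with more trips, the crowd's
    verdict gradually dominates."""
    return (prior_mean * prior_weight + sum(ratings)) / (prior_weight + len(ratings))

print(round(trust_score([5, 4, 5, 5, 3]), 2))  # new driver, five trips -> 4.47
print(round(trust_score([5] * 200), 2))        # established driver -> 4.98
```

The design point is that no central authority vouches for the driver; the score is simply an aggregate of peer judgments, which is precisely why riders treat it as trustworthy.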

NO JUDGMENT

AI offers a new twist in this tale, one that could be understood as a novel form of trust. The technology has long been quietly embedded in daily life, in tools such as spell checkers and spam filters. But the recent emergence of generative AI marks a distinct shift. AI systems now boast sophisticated reasoning and can act as agents, executing complex tasks autonomously. This sounds terrifying to some; indeed, an opinion poll from Pew suggests that only 24 percent of Americans think that AI will benefit them, and 43 percent expect it to “harm” them.

But American attitudes toward AI are not universally shared. A 2024 Ipsos poll found that although around two-thirds of adults in Australia, Canada, India, the United Kingdom, and the United States agreed that AI “makes them nervous,” a mere 29 percent of Japanese adults shared that view, as did only around 40 percent of adults in Indonesia, Poland, and South Korea. And although only about a third of people in Canada, the United Kingdom, and the United States agreed that they were excited about AI, almost half of people in Japan and three-quarters in South Korea and Indonesia did.

Meanwhile, although people in Europe and North America tell pollsters that they fear AI, they constantly use it for complex tasks in their lives, such as getting directions with maps, identifying items while shopping, and fine-tuning writing. Convenience is one reason: getting hold of a human doctor can take a long time, but AI bots are always available. Customization is another. In earlier generations, consumers tended to accept “one size fits all” services. But in the twenty-first century, digitization has enabled people to make more personalized choices in the consumer world, whether with music, media, or food. AI bots respond to and encourage this growing desire for customization.

Another, more counterintuitive factor is privacy and neutrality. In recent years, there has been widespread concern in the West that AI tools will “steal” personal data or produce biased results. Such concerns may sometimes be justified. Ethnographic research suggests, however, that a cohort of users prefers AI tools precisely because they seem more “neutral,” less controlling, and less intrusive than humans. One of the Gen Zers interviewed by Jigsaw explained her affinity for talking to AI in blunt terms: “The chatbot can’t ‘cancel’ me!”

Another recent study of people who believe conspiracy theories found that they were far more willing to discuss their beliefs with a bot than with family members or traditional authority figures, even when the bots challenged their ideas, which suggests one way that human-machine interactions can trump eye-contact and vertical trust mechanisms. As one person told the researchers: “Now this is the very first time I have gotten a response that made real, logical, sense.” For people who feel marginalized, powerless, or cut off from the elite—like much of Gen Z—bots seem less judgmental than humans and thus give their users more agency. Perhaps perversely, that makes them easier to trust.

FROM HAL TO HABERMAS

This pattern might yet shift again, given the speed of technological change and the rise of “agentic intelligence,” the more sophisticated and autonomous successor to today’s generative AI tools. The major AI developers, including Anthropic, Google, and OpenAI, are all advancing toward new “universal assistants” capable of seeing, hearing, chatting, reasoning, remembering, and taking action across devices. This means that AI tools will be able to make complex decisions without direct human supervision, which will allow them to bolster customer support (with chatbots that can meet customer needs) and coding (with agents that can help engineers with software development tasks).

New generations of AI tools are also gaining stronger persuasive capabilities, and in some contexts they seem to be as persuasive as humans. This invites obvious dangers if these tools are deliberately created and used to manipulate people—or if they simply misfire or hallucinate. Nobody should downplay those risks. Thoughtful design, however, can potentially mitigate this: for example, researchers at Google have shown that it is possible to develop tools and prompts that train the AI to identify and avoid manipulative language. And as with existing apps and digital tools, agentic AI allows users to exercise control. Consider wearable technology, such as a Fitbit or an Apple Watch, that can monitor vital signs, detect concerning patterns, recommend behavioral changes, and even alert health-care providers if necessary. In all these cases, it is the user, not the bot, who decides whether to respond to such prompts and which data will be used in the AI programs; your Fitbit cannot force you to go jogging. So, too, with financial planning bots or those used for dating: technology is not acting like a dictator but like a member of an online crowd of friends, offering tips that can be rejected or accepted.

Having an AI tool act in this way can obviously make people more efficient and also help them better organize their lives. But what is less evident is that these tools can potentially also improve peer-to-peer interaction within and between groups. As trust in authority figures has faded and people have customized their information sources and online “crowds” to their individual tastes, societies have become more polarized, trapped in echo chambers whose members do not interact with or understand one another. Human authority figures cannot easily remedy that, given widespread distrust. But just as AI tools can translate between languages, they are starting to show the potential to translate between “social languages”: that is, between worldviews. A bot can scan online conversations between different groups and find patterns and points of common interest that can be turned into prompts, potentially enabling one “crowd” of people to “hear” and even “understand” another’s worldview better. For instance, researchers from Google DeepMind and the University of Oxford have developed an AI tool called the “Habermas Machine” (an homage to the German philosopher Jürgen Habermas) that aspires to mediate disputes between groups with opposing political perspectives. On a given political issue, it generates statements that reflect both the majority and the minority viewpoints within a group and then proposes areas of common ground. In studies involving over 5,000 participants, people preferred the AI-generated statements to those written by human mediators, and using them led to greater agreement about paths forward on divisive issues.
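To give a flavor of how such mediation can work, here is a minimal sketch in Python of the selection step in a Habermas Machine-style pipeline. The candidate statements and participant ratings are hypothetical; in the actual system, a large language model drafts the candidate group statements and a learned model predicts participants’ preferences. The sketch shows only one plausible selection rule: favor the statement most acceptable to the least-satisfied participant.

```python
# Illustrative selection step for an AI mediator, assuming hypothetical
# candidate statements and 1-7 approval ratings from five participants.

from statistics import mean

ratings = {
    "statement_a": [6, 5, 2, 6, 3],  # popular with one side only
    "statement_b": [5, 5, 4, 5, 4],  # broadly acceptable
    "statement_c": [7, 2, 1, 7, 2],  # highly polarizing
}

def egalitarian_score(scores):
    """Rank statements by their worst rating first (a max-min rule),
    breaking ties with the average, so no faction is left behind."""
    return (min(scores), mean(scores))

best = max(ratings, key=lambda s: egalitarian_score(ratings[s]))
print(f"Selected group statement: {best}")  # -> statement_b
```

Whatever rule the real system uses, the point of the sketch is that a mediator optimizing for the least-satisfied voice, rather than for average applause, is one way to surface common ground.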


So how can societies reap the benefits of AI without falling prey to its dangers? First, they need to recognize that trust is a multifaceted phenomenon that has shifted before (and will keep shifting) and that technological change is occurring amid (and exacerbating) social flux. That means AI developers need to proceed cautiously and humbly, discussing and mitigating the risks of the tools they develop. Google, for its part, has tried to do this by publishing an ambitious 300-page collection of recommendations on the ethics of advanced AI assistants, exploring how to maintain safeguards that prevent AI from emotionally manipulating users and what it would mean to measure human well-being. Other firms, such as Anthropic, are doing the same. But much more attention from the private sector is needed to tackle these uncertainties.

Consumers also need real choice among developers, so that they can select the platforms that offer the most privacy, transparency, and user control. Governments can encourage this by using public policy to promote responsible AI development, as well as open science and open software. This approach can create some safety risks. But it also creates more checks and balances by injecting competition between different systems. Just as customers can “shop around” for banks or telecom services if they dislike how one system treats them, they should be able to switch between AI agents to determine which platform offers them the most control.

Increasing human agency should be the goal when thinking about how people interact with AI platforms. Instead of viewing AI as a despotic, robotic overlord, developers need to present it more as a superintelligent member of people’s existing online crowds. That does not mean people should place blind faith in AI or use it to displace human-to-human interactions; that would be disastrous. But it would be equally foolish to reject AI simply because it seems alien. AI, like humans, has the potential to do good and bad and to act in trustworthy and untrustworthy ways. If we want to unlock the full benefits of AI, we need to recognize that we live in a world where trust in leaders is crumbling, even as we put more faith in the wisdom of crowds—and ourselves. The challenge, then, is to use this digital boost to the wisdom of crowds to make us all wiser.
