Why HI + AI = Trust
There’s an AI confidence gap. Here’s how a human filter can help bridge it.
It’s a foregone conclusion that AI will change the way we work.
Nearly all chief communications officers (89%) believe that AI will fundamentally transform business operations, according to the latest Ipsos Reputation Council report.
But amid that speed and transformation, there’s a stark gap in confidence: Only 11% feel existing ethical policies are sufficient for the widespread adoption of AI.
Corporate leaders are using AI, but still skeptical of it
89% believe AI will fundamentally transform business operations
11% believe existing ethical policies are sufficient for widespread AI adoption
Source: The Ipsos Reputation Council 2025, a survey of 161 senior-level executives across 19 global markets
The fact that even true believers, those who see AI as the ironclad future of business, perceive major hurdles illustrates the widespread trust barrier businesses face in the AI era.
And as AI platforms mature and look for revenue to make up for their immense capital costs, they risk further alienating their core users.
Nearly two in three Americans (63%) say that having ads in AI search results will make them trust the results less. And that’s not just non-users being skeptical: The number is consistent among Americans who say they’re familiar with AI (65% would lose trust) and among full-time employed Americans (67% would lose trust).
These are the people who will be the growth engine for platforms as AI technology goes from toy to tool — and if trust is lost as companies pivot towards practicality, the bottom is at risk of falling out.
63%: The number of Americans who say having ads in AI search results will make them trust the results less
Source: The Ipsos Consumer Tracker, fielded Jan. 27-28, 2026, among 1,085 Americans
A similar story of mass adoption paired with hesitancy is playing out among B2B sellers and buyers: 94% of buyers now use AI in their purchase process, as do 88% of sellers, according to LinkedIn and Ipsos’ report, “The Trust Advantage.”
That ubiquity is creating another problem: When everyone has access to the same tools and information, everything starts to sound the same. Less than half of B2B buyers describe the sellers they encounter as trustworthy.
So what’s the key to building trust? A human filter: Human sources and human connections working in harmony with modern technology.
A growing body of work emphasizes that collaborative intelligence (human intelligence + artificial intelligence) yields the best outcomes. Rather than having AI simply replicate or replace human cognition, this camp focuses on complementarity: How humans and AI systems can cooperate to solve problems more effectively than either can alone.
In medicine, studies have shown that AI systems can match expert radiologists in cancer detection, but performance improves further when clinicians and AI work together. In forecasting, hybrid human–algorithm models outperform unaided experts by combining statistical pattern recognition with contextual judgment. And in creative and problem-solving tasks, experiments show that AI expands the range of ideas generated, while humans refine, evaluate, and strategically select the most viable options.
Working Americans are on board with humans and AI working together. Across 11 AI use cases, we found that humans using AI were trusted more than AI alone every time. And in two key areas, analyzing business data and innovating new products, full-time employed Americans say they trust humans using AI as much as or more than they trust humans alone.
Would you trust humans, AI, or humans using AI to do the following:
Source: The Ipsos Consumer Tracker, fielded July 29-30, 2025, among 502 full-time employed Americans
The key lies in knowing exactly where and how to deliver personalized, expert insight at the right moments.
Manuel Garcia-Garcia, PhD, Global Science Lead at Ipsos, lays out the most effective actions in his new book, “Smarter Together”:

Emotional and social signals drive early trust: Humans instinctively use social and emotional cues, such as empathetic tone, responsiveness, and conversational fluency, to judge trustworthiness, even with machines. Because our brains apply social heuristics (like CASA — computers are social actors — and anthropomorphism), AI that feels emotionally attuned can quickly earn a sense of trust, sometimes before functional reliability has been demonstrated. This makes empathy-aware design a strategic lever in early AI adoption.

Trust must be calibrated, not just elevated: Trust in AI is a calibration problem — over-trust (automation bias) can lead users to rely on incorrect recommendations, while under-trust (algorithm aversion) can cause them to dismiss accurate insights. Both extremes reduce value: one creates risk, and the other leaves potential benefits unrealized. Effective trust design aligns perceived competence with actual performance.

Explainability and predictability anchor functional trust: Users demand transparency to form functional trust. Unlike people, whose behavior we can explain through mental models, AI can feel opaque, which increases skepticism. AI that clearly communicates how and why it reached a decision embeds predictability and reliability into the interaction, helping users feel confident collaborating with it.

Design choices shape accountability expectations: Human-like features can make AI feel approachable, but they also raise expectations of fairness, responsibility, and ethical behavior. When anthropomorphism increases perceived agency, failures are judged more harshly. Trust design must balance social cues with clear boundaries and accountability signals that reflect real capabilities.

Context and culture influence trust preferences: People’s trust in AI versus humans varies with context and cultural background. In environments where institutional trust is low, individuals may prefer algorithmic decisions; in emotionally charged or high-stakes domains, users may demand human oversight. Understanding these contextual drivers is essential for designing AI systems and strategies that fit user expectations and norms.
Once you’ve added human intelligence to artificial intelligence in the right places, you’re well on the path to trust that can bolster your business internally and externally. But which humans you trust matters, too. Read on to learn more.
