Public Trust in AI Declines as Americans Embrace Use

A national survey finds Americans using AI more but trusting it less. This analysis explains the paradox, examines public concerns about jobs and data centers, and outlines practical steps for rebuilding trust in AI.

Public trust in AI is falling even as adoption rises

A recent national survey of nearly 1,400 Americans reveals a striking paradox: more people are using artificial intelligence for research, writing, work and data analysis, yet trust in AI remains low. Three out of four respondents reported trusting AI only rarely or sometimes, while roughly one in five said they trust AI most or almost all of the time. At the same time, the share of Americans who have never used AI dropped from about one-third to roughly one-quarter in the last year.

Why do Americans distrust AI?

The gap between adoption and confidence points to several persistent anxieties. Respondents cited worries about job displacement, environmental and infrastructure impacts from data centers, inadequate transparency from companies, and weak government oversight. These concerns are not abstract: many people expect AI to change daily life in ways that could be net-negative.

Key survey findings

  • 76% say they trust AI only rarely or sometimes; 21% trust it most or almost all of the time.
  • Only 6% report being very excited about AI; 62% are either not so excited or not at all excited.
  • 80% are either very concerned or somewhat concerned about AI’s future effects.
  • 55% believe AI will do more harm than good in their day-to-day lives; about one-third think it will do more good than harm.
  • 65% oppose building AI data centers in their communities, citing electricity and water use.
  • 70% think AI advancements will reduce job opportunities; only 7% think AI will increase jobs.
  • Among employed respondents, 30% worry AI could make their job obsolete (up from 21% last year).

These results underscore a broad lack of confidence in how the technology is being developed, deployed and governed.

How can businesses and policymakers close the trust gap?

Addressing public distrust will require coordinated action across industry, government and civil society. The survey shows two-thirds of Americans believe businesses are not transparent enough about AI use, and the same share believe regulators are not doing enough. That twin perception of opacity and regulatory weakness fuels skepticism and resistance.

Practical steps to rebuild trust

  • Transparency and explainability: Companies should publish clear explanations of how systems make decisions and what data they use.
  • Independent audits: Regular third-party audits of accuracy, bias and safety can reduce uncertainty.
  • Workforce transition programs: Invest in reskilling, apprenticeships and job-matching services to support workers displaced by automation.
  • Local impact assessments: Before building data centers or deploying large-scale systems, assess community-level energy and water impacts and share mitigation plans.
  • Stronger regulation with public input: Design rules that reflect civic priorities and include mechanisms for enforcement and redress.

These approaches are complementary: transparency without enforcement may be insufficient, and regulation without clear technical standards can be toothless. The public wants both accountability and the assurance that AI will deliver social benefit.

What do Americans fear most: jobs, energy or safety?

The survey identifies three dominant concern clusters: labor market disruption, environmental and infrastructure strain from AI data centers, and safety or misuse of AI systems.

Jobs and the labor market

Concerns about employment are front and center. A large majority expect AI to reduce the number of available jobs, and younger cohorts are especially pessimistic. Gen Z respondents were the most likely to predict fewer opportunities in the labor market. Yet many employed respondents do not believe their own jobs are immediately at risk, suggesting a view that disruption will be broad but diffuse.

For deeper context on how AI is reshaping work and what policy can do to ease the transition, see our coverage of AI Job Displacement: Early Signs, Skills Gap, and Policy.

Energy, water and local infrastructure

Public opposition to local AI data centers is high. Respondents worry about electricity consumption, water use for cooling, and the broader environmental footprint of intensive compute. Even supportive communities often demand clear plans for energy sourcing and community benefits before welcoming new facilities.

For an evidence-driven discussion about AI energy consumption and practical solutions, consult our report on AI Energy Consumption: Myths, Facts & Solutions 2026.

Safety and misuse

Safety concerns range from misinformation and deceptive outputs to more severe harms. The public expects companies to prevent misuse, and many respondents feel current safeguards are inadequate. Trust erodes when systems produce misleading, biased or harmful results without clear accountability.

Explore our analysis of chatbot safety and legal lessons in AI Chatbot Safety: What the Gemini Lawsuit Teaches, which highlights the stakes of accountability and the need for robust safety culture.

How can the private sector demonstrate good faith?

Businesses building and deploying AI must move beyond marketing and invest in credible, measurable practices that address public concerns:

  1. Publish model cards, dataset summaries and risk assessments in accessible language.
  2. Create clear channels for reporting errors and harms, with timely remediation.
  3. Offer community benefit agreements where infrastructure projects are proposed.
  4. Fund retraining partnerships with local educational institutions and industry consortia.
  5. Support independent review boards and make audit results public when possible.

When companies act transparently and provide tangible benefits to workers and communities, trust can begin to recover. But trust-building is an ongoing process, not a one-off PR campaign.

What explains rising AI use but low trust?

Short answer: convenience with caution. Many Americans use AI because it speeds tasks and amplifies productivity, yet they remain wary because of unclear governance, documented harms, and broader social disruption. Key drivers include:

  • Immediate utility: people adopt tools that save time on writing, research and data tasks.
  • Perceived opacity: users often do not know how models generate outputs or what data they were trained on.
  • High-profile failures: visible mistakes or abuses lower confidence even among routine users.
  • Macro concerns: fear that AI will reduce job opportunities and strain local infrastructure.

What readers should watch next

Expect several trends to shape public sentiment in the coming year:

  • More legislation and state-level rulemaking that will clarify obligations for safety and transparency.
  • Growing demand for independent audits and model provenance as an industry standard.
  • Increased corporate investment in workforce transition programs to address job risk narratives.
  • Local debates about data center siting and energy sourcing that will influence community acceptance.

How to evaluate new AI claims

When you encounter new AI products or company announcements, ask these questions:

  • Is there a clear explanation of what the model does and its limitations?
  • Has the system been independently tested or audited?
  • Are there documented measures to protect workers and communities?
  • What channels exist for reporting harms, and how are they enforced?

Conclusion: Adoption without assurance is fragile

The survey paints a picture of cautious adoption. Americans are willing to use AI because it delivers convenience and capability, yet they remain unconvinced that companies and regulators are handling the technology responsibly. That combination—widespread use married to deep hesitation—creates a fragile environment where a few high-profile failures could dramatically reduce public support.

Rebuilding trust will require concrete, sustained commitments: transparency from developers, enforceable regulation from governments, investments in workforce resilience, and meaningful community engagement around infrastructure projects. Without these elements, the benefits of AI risk being overshadowed by legitimate public concerns.

Take action: join the conversation

What do you think policymakers and companies should prioritize to restore public trust in AI? Share your thoughts in the comments, subscribe for ongoing analysis, and read our related coverage on job displacement, energy impacts, and AI safety linked above.

Call to action: Subscribe to Artificial Intel News for weekly briefings on AI policy, infrastructure and workforce trends, and sign up to receive our deep-dive guides on building trustworthy AI.
