Human vs. AI Trust Study
My Role
· LMS Training Coordinator
· Data Analyst
· Committee Member
Organization
San Diego County,
Public Health Services
Scope
· 5 trainings
· 8 PHS branches
· 812 total staff
Focus
· Racial equity
· Workforce development
· Community health

Background
As AI gets embedded into more decisions that affect people's lives, most products assume users will trust it. But trust isn't a given. It's built, withheld, and highly dependent on context.
This study set out to understand how people actually form trust in AI versus human agents, and what that means for the products we design.
Research Question and Hypothesis
Research Question: Is there a meaningful difference in how people trust AI compared to humans, and does that difference shift depending on the type of task?
Hypothesis: Participants would report lower trust in AI than in humans overall, with the gap widening for tasks involving judgment, emotion, or personal context.
The Problem
Why this research matters
AI-powered experiences are being deployed across healthcare, finance, social services, and civic systems, often in sensitive, high-stakes contexts. Many of these products are designed around capability, not perception.
But if users don't trust an AI system in the context in which it's being used, capability is irrelevant. The product fails before the feature ever runs.
Without clarity on how trust is formed or withheld across different situations, designers and product teams are building on assumptions rather than evidence.
Study Design
How the research was structured
This was a quantitative survey study comparing trust perceptions toward AI and human agents across multiple real-world scenarios.
Participants represented a range of ages and backgrounds, allowing for comparison across trust contexts rather than demographic segmentation. Survey responses used Likert-scale trust measures and were analyzed comparatively across task categories.
The scenarios were designed to span a spectrum from objective and data-driven to subjective and emotionally sensitive, because that range was where the hypothesis predicted the most meaningful variation.
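To make the comparative analysis concrete, here is a minimal sketch of how Likert-scale trust ratings could be compared across agent type and task category. The file name, column names, and the choice of a Mann-Whitney U test are illustrative assumptions, not the study's actual analysis pipeline.

```python
# Sketch only: comparing trust ratings for AI vs. human agents by task category.
# Assumes a long-format CSV with hypothetical columns:
#   participant_id, agent ("ai" | "human"), task_category, trust (1-7 Likert)
import pandas as pd
from scipy import stats

df = pd.read_csv("trust_survey.csv")  # hypothetical file name

# Mean trust per agent type within each task category, plus the human-AI gap
summary = (
    df.groupby(["task_category", "agent"])["trust"]
      .mean()
      .unstack("agent")
)
summary["gap_human_minus_ai"] = summary["human"] - summary["ai"]
print(summary)

# Within each category, test whether AI and human ratings differ
# (Mann-Whitney U is a common choice for ordinal Likert data)
for category, group in df.groupby("task_category"):
    ai_ratings = group.loc[group["agent"] == "ai", "trust"]
    human_ratings = group.loc[group["agent"] == "human", "trust"]
    u_stat, p_value = stats.mannwhitneyu(ai_ratings, human_ratings, alternative="two-sided")
    print(f"{category}: U={u_stat:.1f}, p={p_value:.4f}")
```

The point of the sketch is the structure of the comparison: trust is summarized per task category rather than pooled, which is what lets context-dependent differences surface.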
What the Data Showed
Three findings with direct design implications
1. Trust in AI is context-dependent. Participants expressed meaningfully higher trust in AI for objective, data-driven tasks. That trust dropped significantly when tasks involved nuance, subjectivity, or emotional stakes. AI isn't universally distrusted — it's situationally distrusted.
2. Humans are trusted more in ambiguous situations. When a task required empathy, judgment, or interpretation, participants consistently favored human agents. The more a situation felt personal, the more a human presence mattered.
3. Transparency shifts willingness to trust. Participants reported greater openness to AI when its limitations and oversight structures were clearly communicated. Transparency didn't eliminate skepticism, but it meaningfully reduced resistance.