AI Companions: Public Sentiment on Digital Friendship and Emotional Support

The rise of sophisticated AI companions prompts a critical question: are we ready for digital confidants? This research explores public attitudes towards AI designed for friendship and emotional support, revealing a society largely apprehensive yet generationally divided on the potential and perils of these emerging technologies.

Artificial intelligence is increasingly woven into the fabric of daily life, and its next frontier may be our most personal connections. The concept of AI companions – virtual entities designed for friendship, emotional support, or even romance – is rapidly moving from science fiction to plausible reality. As these technologies evolve, they present a complex tapestry of opportunities and challenges.

This research investigates public perceptions surrounding AI companionship in May 2025. We explore the initial reactions, perceived benefits, potential drawbacks, and the profound ethical questions that arise. Understanding societal readiness and concerns is crucial for navigating the development and integration of AI in such intimate roles. Are AI companions viewed as a comforting evolution of connection or a troubling step into the unknown? This study aims to shed light on these critical questions.

How this data was generated:

The insights presented here are derived from a simulated survey campaign run on the SocioSim platform. An audience profile of 897 simulated respondents, reflecting the diverse demographics of the general public in May 2025, was defined to capture a broad spectrum of opinions. The survey questionnaire, focusing on perceptions, attitudes, and concerns regarding AI companionship under the theme "AI Pals: Next Best Friend or Digital Nightmare?", was developed using SocioSim's AI-assisted tools. Responses were then simulated based on the defined audience profile and the survey structure, reflecting potential public reactions to this emerging technology.

Key Findings

1. AI Companions: Predominantly "Creepy" Public Reaction, Except Among Younger Generations

Initial public sentiment towards AI companions is largely negative. Data from the '“When you hear the term "AI companion" (a virtual entity designed for friendship, emotional support, or even romance), what is your immediate, gut reaction?” (Distribution)' slice (Slice Index 0) reveals that a combined 80.94% of respondents find the concept 'Creepy' (51.06%) or 'Very Creepy' (29.88%). Only a small fraction reported feeling 'Comforting' (5.35%) or 'Very Comforting' (0.22%).
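As a quick sanity check, the combined shares quoted above can be reproduced from the category-level percentages (a minimal sketch; the figures are those reported for Slice Index 0):

```python
# Category shares (percent of 897 respondents) from the gut-reaction
# distribution reported above (Slice Index 0).
shares = {
    "Very Creepy": 29.88,
    "Creepy": 51.06,
    "Comforting": 5.35,
    "Very Comforting": 0.22,
}

# Combined negative and positive sentiment, rounded to two decimals.
negative = round(shares["Creepy"] + shares["Very Creepy"], 2)
positive = round(shares["Comforting"] + shares["Very Comforting"], 2)

print(negative)  # 80.94 -- the combined 'Creepy' / 'Very Creepy' share
print(positive)  # 5.57  -- the combined positive share
```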

However, a significant generational divergence emerges from the '“When you hear the term "AI companion" (a virtual entity designed for friendship, emotional support, or even romance), what is your immediate, gut reaction?” by “Age Group (as of 2025)”' slice (Slice Index 27). Apprehension is concentrated in older cohorts: Boomers (61+ years old) account for 60.07% of all 'Very Creepy' reactions, while Gen Z (18-28 years old) contribute only 1.12% of them. Conversely, among the 48 respondents who found AI companions 'Comforting', Gen Z constituted 66.67% of the group while Boomers made up 0.00%. Of the two 'Very Comforting' reactions, 50.00% came from Gen Z and 50.00% from Millennials, with none from Boomers or Gen X. This points to a clear shift in perception among younger demographics.

Generational Divide in Gut Reactions to AI Companions
Stacked bar chart showing gut reactions (Comforting, Creepy, Neutral, Very Comforting, Very Creepy) to AI companions, broken down by age group (Boomers, Gen X, Gen Z, Millennials). Highlights Boomers' high 'Very Creepy' percentage versus Gen Z's higher 'Comforting' and 'Neutral' percentages.

Figure 1: Gut reaction to AI companions by age group. Column percentages shown (each reaction category sums to 100% across age groups). Source: AI Pals Survey, May 2025 (n=897).

Detailed Data Table
Question: "When you hear the term 'AI companion' (a virtual entity designed for friendship, emotional support, or even romance), what is your immediate, gut reaction?" by Age Group (as of 2025). Cells show each age group's share of the reaction category (column percentages).

Age Group (as of 2025) | Very Creepy (N≈268) | Creepy (N≈458) | Neutral / Unsure (N≈121) | Comforting (N≈48) | Very Comforting (N≈2)
Gen Z (18-28 years old) (N≈217) | 1.1% | 26.6% | 48.8% | 66.7% | 50.0%
Millennials (29-44 years old) (N≈240) | 5.6% | 36.7% | 35.5% | 27.1% | 50.0%
Gen X (45-60 years old) (N≈219) | 33.2% | 24.0% | 14.0% | 6.2% | 0.0%
Boomers (61+ years old) (N≈221) | 60.1% | 12.7% | 1.7% | 0.0% | 0.0%

Note: The 'Very Comforting' category had only 2 respondents overall, so percentages within this row should be interpreted with extreme caution. However, the trend for 'Comforting' (n=48) and 'Very Creepy' (n=268) reactions shows substantial generational differences.
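The percentages in the table above are column percentages: each reaction category is distributed across the age groups. The computation can be sketched as follows, using counts back-calculated from the published percentages for the 'Comforting' column (approximate and illustrative only, not source data):

```python
# Approximate counts for the 'Comforting' reaction (N≈48), back-calculated
# from the published column percentages -- illustrative, not source data.
comforting_counts = {
    "Gen Z (18-28)": 32,
    "Millennials (29-44)": 13,
    "Gen X (45-60)": 3,
    "Boomers (61+)": 0,
}

total = sum(comforting_counts.values())  # 48

# Column percentage: each group's share of this reaction category,
# so the values sum to 100% down the column.
column_pct = {
    group: round(100 * n / total, 1) for group, n in comforting_counts.items()
}

print(column_pct)
```

Under these assumed counts, Gen Z contributes about two-thirds of the 'Comforting' reactions, matching the 66.7% cell in Figure 1.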


2. The AI Companion Conundrum: Many Potential Users Expect Worsened Human Relationships

A striking paradox emerges from the data: individuals who would consider using AI companions for core relational needs like combating loneliness or emotional support simultaneously believe these AIs will weaken human-to-human relationships. According to the '“For which ONE of the following primary purposes would you MOST LIKELY consider using an AI companion?” by “Do you believe AI companions will ultimately strengthen or weaken human-to-human relationships?”' slice (Slice Index 37):

  • 100% of respondents who would primarily use an AI companion for 'Combating loneliness' (n≈132 for this purpose) believe AI companions will 'Moderately Weaken' human relationships.
  • Similarly, 89.66% of those considering AI for 'Emotional support' (n≈29 for this purpose) also believe they will 'Moderately Weaken' human ties.

This suggests that, for a segment of the population, the perceived immediate benefits of AI companionship outweigh concerns about broader societal impacts on human connection; alternatively, these respondents may view AI companions as a supplement compensating for existing deficits in human connection rather than as a direct cause of its weakening.

Conflict: Using AI for Support Despite Expecting Harm to Human Ties
Stacked bar chart showing how people who would use AI companions for various purposes (e.g., Combating loneliness, Emotional support) believe AI will impact human-to-human relationships (Strengthen, Weaken, No Impact). Highlights that users for loneliness/support largely expect a weakening effect.

Figure 2: Belief about AI's impact on human relationships by primary reason for considering an AI companion. Column percentages shown (each purpose column sums to 100%). Source: AI Pals Survey, May 2025 (n=897).

Detailed Data Table
Question: "For which ONE of the following primary purposes would you MOST LIKELY consider using an AI companion?" by "Do you believe AI companions will ultimately strengthen or weaken human-to-human relationships?" Cells show, within each purpose column, the distribution of beliefs (column percentages).

Belief about impact | Casual conversation / chit-chat (N≈30) | Combating loneliness (N≈132) | Emotional support (N≈29) | Advice / Problem-solving (N≈189) | Entertainment / Games (N≈27) | Practicing social skills (N≈56) | Romantic companionship (N≈0) | I would not consider using an AI companion for any purpose (N≈434)
Significantly Weaken (N≈250) | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 57.6%
Moderately Weaken (N≈515) | 53.3% | 100.0% | 89.7% | 55.0% | 44.4% | 73.2% | 0.0% | 42.4%
No Significant Impact (N≈101) | 30.0% | 0.0% | 10.3% | 32.8% | 48.1% | 25.0% | 0.0% | 0.0%
Moderately Strengthen (N≈28) | 13.3% | 0.0% | 0.0% | 11.6% | 3.7% | 1.8% | 0.0% | 0.0%
Significantly Strengthen (N≈3) | 3.3% | 0.0% | 0.0% | 0.5% | 3.7% | 0.0% | 0.0% | 0.0%

Note: The group considering AI for 'Emotional support' (n≈29) is relatively small, but the 100% agreement among those considering it for 'Combating loneliness' (n≈132) is a strong signal.


3. Generational Gap in Believing AI Can Genuinely Simulate Emotions

Perceptions of AI's capacity to simulate emotions convincingly are sharply divided by age. The '“To what extent do you think AI companions could ever genuinely possess or convincingly simulate emotions in a way that feels real to the user?” by “Age Group (as of 2025)”' slice (Slice Index 97) reveals a clear trend:

  • Boomers (61+ years old) are the most skeptical, accounting for 56.87% of all 'Not at all possible' responses.
  • In stark contrast, Gen Z (18-28 years old) are the most optimistic, contributing 59.79% of 'Very possible, to an extent that could feel genuinely real' responses.
  • Millennials (29-44 years old) also lean towards possibility, supplying 37.13% of 'Possible, to a moderate extent that might feel somewhat real' responses and 29.90% of 'Very possible' responses.

This generational disparity in belief about AI's emotional capabilities likely underpins differing attitudes towards acceptance and potential uses of AI companions.

Age Gap in Belief: Can AI Genuinely Simulate Emotions?
Stacked bar chart illustrating the extent to which different age groups believe AI can simulate emotions. Shows Boomers' skepticism versus Gen Z's strong belief in AI's capability.

Figure 3: Belief in AI's emotion simulation capability by age group. Column percentages shown (each belief category sums to 100% across age groups). Source: AI Pals Survey, May 2025 (n=897).

Detailed Data Table
Question: "To what extent do you think AI companions could ever genuinely possess or convincingly simulate emotions in a way that feels real to the user?" by Age Group (as of 2025). Cells show each age group's share of the belief category (column percentages).

Age Group (as of 2025) | Not at all possible (N≈313) | Possible, but only in a very superficial way (N≈215) | Possible, to a moderate extent that might feel somewhat real (N≈272) | Very possible, to an extent that could feel genuinely real (N≈97) | Unsure / Hard to say (N≈0)
Gen Z (18-28 years old) (N≈217) | 1.3% | 15.8% | 44.5% | 59.8% | 0.0%
Millennials (29-44 years old) (N≈240) | 10.5% | 35.8% | 37.1% | 29.9% | 0.0%
Gen X (45-60 years old) (N≈219) | 31.3% | 29.3% | 17.6% | 10.3% | 0.0%
Boomers (61+ years old) (N≈221) | 56.9% | 19.1% | 0.7% | 0.0% | 0.0%

4. Experience Shapes Perception: Frequent Chatbot Users and Tech Experts More Open to AI Companions

Familiarity with AI, particularly through chatbot interactions and self-assessed tech savviness, strongly correlates with a more positive gut reaction to AI companions. The '“When you hear the term "AI companion" (...)?” by “Experience with AI Chatbots/Virtual Assistants”' slice (Slice Index 33) shows:

  • All 'Comforting' and 'Very Comforting' gut reactions came from respondents who 'Use them frequently for complex interactions/conversation'.
  • Conversely, respondents who 'Never use them' accounted for 57.46% of all 'Very Creepy' reactions.

Supporting this, the '“When you hear the term "AI companion" (...)?” by “Self-Reported Tech Savviness”' slice (Slice Index 31) indicates that 97.92% of 'Comforting' reactions came from self-rated 'Expert' tech users, as did 100% of 'Very Comforting' reactions. By contrast, respondents who were 'Not at all savvy' or 'Slightly savvy' accounted for 37.31% and 52.61% of 'Very Creepy' reactions, respectively. This suggests that direct experience and confidence with AI technologies substantially reduce apprehension towards AI companionship.

Chatbot Experience Softens 'Creepy' Reaction to AI Companions
Stacked bar chart comparing gut reactions to AI companions based on respondents' experience with AI chatbots/virtual assistants. Highlights positive reactions among frequent users versus negative reactions among non-users.

Figure 4: Gut reaction to AI companions by AI chatbot experience. Column percentages shown (each reaction category sums to 100% across experience levels). Source: AI Pals Survey, May 2025 (n=897).

Detailed Data Table
Question: "When you hear the term 'AI companion' (a virtual entity designed for friendship, emotional support, or even romance), what is your immediate, gut reaction?" by Experience with AI Chatbots/Virtual Assistants. Cells show each experience group's share of the reaction category (column percentages).

Experience with AI Chatbots/Virtual Assistants | Very Creepy (N≈268) | Creepy (N≈458) | Neutral / Unsure (N≈121) | Comforting (N≈48) | Very Comforting (N≈2)
Never use them (N≈154) | 57.5% | 0.0% | 0.0% | 0.0% | 0.0%
Rarely use them (N≈170) | 38.8% | 14.4% | 0.0% | 0.0% | 0.0%
Use them occasionally for simple tasks (N≈252) | 3.7% | 48.0% | 18.2% | 0.0% | 0.0%
Use them frequently for simple tasks (N≈111) | 0.0% | 17.0% | 27.3% | 0.0% | 0.0%
Use them frequently for complex interactions/conversation (N≈210) | 0.0% | 20.5% | 54.5% | 100.0% | 100.0%

Note: The 'Very Comforting' gut reaction group is very small (n=2 overall). The trend for 'Comforting' reactions (n=48) provides a more robust view of positive sentiment.


5. Near-Universal Demand for Ethical Guidelines and Regulation of AI Companions

There is overwhelming public consensus on the need for governmental or regulatory oversight of AI companions. Data from the '“How important do you think it is for governments or regulatory bodies to establish ethical guidelines and regulations for AI companions?” (Distribution)' slice (Slice Index 8) shows:

  • A vast majority, 73.91%, believe establishing such guidelines is 'Essential'.
  • An additional 23.86% consider it 'Very Important'.

Combined, approximately 97.77% of respondents see a strong need for regulation. This near-unanimous call for oversight underscores widespread societal concern about the potential implications of AI companions and a desire for proactive measures to ensure ethical development and deployment.
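The combined 97.77% figure follows directly from the respondent counts in the detailed data table for this finding; a minimal check:

```python
# Respondent counts from the Finding 5 distribution (n=897).
counts = {
    "Not at all Important": 4,
    "Slightly Important": 6,
    "Moderately Important": 10,
    "Very Important": 214,
    "Essential": 663,
}

n = sum(counts.values())  # 897 respondents in total

# Share of respondents calling regulation 'Essential' or 'Very Important'.
support = round(100 * (counts["Essential"] + counts["Very Important"]) / n, 2)
print(support)  # 97.77
```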

Overwhelming Support for AI Companion Regulation
Horizontal bar chart showing the distribution of responses to the importance of regulating AI companions. Highlights 'Essential' and 'Very Important' as the dominant categories.

Figure 5: Perceived importance of government/regulatory guidelines for AI companions. Source: AI Pals Survey, May 2025 (n=897).

Detailed Data Table
Question: "How important do you think it is for governments or regulatory bodies to establish ethical guidelines and regulations for AI companions?"

Response | Respondents | Percentage
Not at all Important | 4 | 0.4%
Slightly Important | 6 | 0.7%
Moderately Important | 10 | 1.1%
Very Important | 214 | 23.9%
Essential | 663 | 73.9%

6. Parental Status Amplifies AI Companion Skepticism, Especially Among Parents of Adults

Parental status significantly influences perceptions of AI companions, with parents generally showing more skepticism. The '“When you hear the term "AI companion" (...)?” by “Parental Status”' slice (Slice Index 29) reveals distinct differences in gut reactions:

  • Parents of adult children exhibit the greatest apprehension, accounting for 76.49% of all 'Very Creepy' reactions.
  • By contrast, respondents who are 'Not a parent' contributed only 9.70% of 'Very Creepy' reactions.
  • Furthermore, 100% of 'Very Comforting' reactions and 91.67% of 'Comforting' reactions came from non-parents.

Concerns among parents, particularly those with adult children (who may have witnessed more life stages and relationship dynamics), are pronounced. For example, data from '“Which ONE of the following AI companion scenarios, if any, do you find MOST unsettling or 'creepy'?” by “Parental Status”' (Slice Index 137) shows that parents of adult children account for 73.90% of respondents who found an AI mimicking a deceased loved one most unsettling, versus 23.16% contributed by non-parents.

Parental Status Hardens Stance Against AI Companions
Stacked bar chart comparing gut reactions to AI companions based on respondents' parental status. Shows significantly higher 'Very Creepy' reactions among parents of adult children compared to non-parents.

Figure 6: Gut reaction to AI companions by parental status. Column percentages shown (each reaction category sums to 100% across parental-status groups). Source: AI Pals Survey, May 2025 (n=897).

Detailed Data Table
Question: "When you hear the term 'AI companion' (a virtual entity designed for friendship, emotional support, or even romance), what is your immediate, gut reaction?" by Parental Status. Cells show each group's share of the reaction category (column percentages).

Parental Status | Very Creepy (N≈268) | Creepy (N≈458) | Neutral / Unsure (N≈121) | Comforting (N≈48) | Very Comforting (N≈2)
Not a parent (N≈398) | 9.7% | 49.6% | 81.8% | 91.7% | 100.0%
Parent of young children (under 12) (N≈116) | 6.3% | 18.6% | 9.1% | 6.2% | 0.0%
Parent of teenagers (12-17) (N≈95) | 7.5% | 14.6% | 5.8% | 2.1% | 0.0%
Parent of adult children (N≈288) | 76.5% | 17.2% | 3.3% | 0.0% | 0.0%

Note: The specific concerns from Slice Index 137 are mentioned for context; the primary chart visualizes gut reactions from Slice Index 29. The 'Very Comforting' category (n=2) and 'Comforting' (n=48) show strong skews towards non-parents.


Voices from the Simulation

The open-ended questions provided deeper context into how individuals perceive the potential value of AI companions. Here are some recurring themes and illustrative (synthesized) quotes from their responses:

What would an AI companion have to be able to do or provide for you to genuinely consider it a valuable companion, rather than just a novelty or tool?

  • Unwavering Preference for Human Connection: Many respondents expressed a fundamental belief that AI cannot replicate or replace the depth and authenticity of human relationships, regardless of its capabilities. They emphasized that true companionship is inherently human and not replicable by machines.

    Frankly, nothing an AI could do would make it a 'companion' in my eyes. Companionship is about shared human experience, empathy, and genuine connection, things I find with people, not programs. For me, a machine is just a machine, no matter how smart it gets.

  • Aspiration for a Safe, Non-Judgmental Confidant and Coach: For AI companionship to be valuable, particularly for those open to the concept, it must offer a secure and understanding space for personal growth, advice, and emotional expression without the complexities or pressures of human judgment. The ability to remember past interactions and offer tailored support was also highlighted.

    I'd consider it valuable if it could provide consistently non-judgmental feedback, helping me practice social interactions or explore thoughts in a truly safe environment. It would need to remember our conversations and offer genuinely thoughtful support, almost like a personal coach or a diary that listens and responds with understanding, without ever making me feel exposed.

  • Demand for Sophisticated, Trustworthy Practical Support: Beyond emotional or social aspects, the utility of an AI companion hinges on its ability to provide intelligent, proactive assistance in daily life, manage complex tasks, and offer reliable, unbiased advice. Paramount to this was the absolute assurance of data privacy and security.

    For an AI to be a genuine companion beyond just a tool, it would need to intelligently anticipate my needs, manage my schedule effectively, and offer practical, unbiased advice that truly helps simplify my life. Crucially, I'd need ironclad guarantees about data privacy and security before entrusting it with personal aspects of my life or relying on it for important reminders.


Limitations of this Simulation

It's important to note that this data is based on a simulation run via the SocioSim platform. While the audience profile and response patterns are designed to be representative based on sociological principles and LLM capabilities, they do not reflect responses from real individuals. The simulation provides valuable directional insights and hypotheses for further real-world investigation.

Key limitations include:

  • Simulated data cannot capture the full complexity and unpredictability of human attitudes and behaviors
  • The model is based on general patterns observed in similar demographic groups rather than specific individuals
  • Cultural nuances and rapidly evolving attitudes toward technology may not be fully represented
  • Regional differences in technology access and adoption are not fully accounted for

Read more about simulation methodology and validation.

Conclusion

This simulated survey exploring public sentiment towards AI companions in May 2025 reveals a complex and often contradictory landscape. While a segment of the population, notably younger and more tech-literate individuals, expresses openness and even optimism about AI's potential in relational roles, the overarching initial reaction is one of apprehension and skepticism. The perception of AI companions as "creepy" is prevalent, particularly among older demographics and parents.

Key takeaways indicate a significant generational divide not only in acceptance but also in the belief regarding AI's capacity to convincingly simulate emotions. Furthermore, a notable paradox exists where potential users acknowledge AI's utility for combating loneliness or seeking advice, yet simultaneously anticipate a negative impact on genuine human-to-human connections. Despite these divergent views on the technology itself, there is near-unanimous agreement on the critical need for robust ethical guidelines and governmental regulation. These simulated findings underscore the societal imperative to proactively address the ethical, social, and psychological implications of AI companionship as the technology continues to advance.


Conduct Your Own Sociological Research with SocioSim

Unlock deeper insights into your specific research questions.

  • Define Complex Audiences: Create nuanced demographic and psychographic profiles
  • AI-Assisted Survey Design: Generate relevant questions aligned with your research goals
  • Rapid Simulation: Get directional insights in hours, not months
  • Explore & Visualize: Use integrated tools to analyze responses (Premium)
  • Export Data: Download simulated data for further analysis (Premium)

Join the waitlist · Request a demo