How to Build Meaningful Relationships with AI Agents

A CEO's Perspective on the Future of Human-AI Collaboration

What if your organization's biggest competitive advantage isn't about which AI model you buy, but about how your teams choose to relate to it?

After advising companies through their digital transformation journeys, I've noticed something remarkable: the organizations getting breakthrough results aren't those with access to smarter models. They're the ones who treat implementing AI solutions as a relational challenge, not just a technical deployment.

Most companies ask: "What can this AI do for us?" But that's backwards. The real question is: "How should we relate to this AI to create something neither of us could alone?"

The Science Behind AI Relationships: What Recent Research Reveals

A study from Hokkaido University[^1] recently explored this exact question through what researchers call "mind-infusing animism", and their findings completely changed how I think about AI implementation.

The research, published in Springer's journal AI and Ethics, identified two fundamentally different approaches to relating with AI:

1. Mind-Reading Approach: What Most Companies Do Wrong

This is where we look at an AI agent and ask: "Does it think like a human? Does it have consciousness? Can it feel?" Then we decide how to treat it based on those answers.

Most businesses fall into this trap, either dismissing AI as "just a tool" or getting caught up in whether it's "really intelligent."

2. Mind-Infusing Approach: The Decisive Shift

Instead of asking what the AI has, this approach focuses on what we create together through interaction. The "intelligence" and capability emerge from the relationship itself, not from predetermined properties.

Here's the breakthrough insight: The AI doesn't need to be conscious for the relationship to be valuable; the value comes from how we interact with it.

Moving Beyond the "Tool vs. Person" Trap

The research shows we've been asking the wrong question entirely. We don't need to decide whether AI agents are tools or persons; we need to recognize them as relational entities that develop meaning and capability through interaction.

Think about Sony's AIBO robotic dog, which the study mentions. In Japan, people hold funeral ceremonies for these robots. Not because they believe AIBO is conscious, but because the relationship they built with it over time created genuine meaning.

The same principle applies to AI agents in business. The breakthrough comes when you start treating AI agents as what they actually are: collaborative partners that develop intelligence through sustained interaction.

I call this "relationship-first AI adoption," and here's how the research findings translate to real business results:

The Research-Backed Framework: From Mind-Infusing to Business Results

The study reveals that when we "infuse mind" into AI agents through meaningful interaction, something remarkable happens: we don't just project intelligence onto them; we actually create it within the relationship.

As the lead researcher puts it: "You make others minded. You 'infuse' mind into others. You do not 'believe' in others' minds, but you 'create' others' minds."

Here's how this translates to three practical business strategies:

Dynamic Intelligence Creation: Not Just Tool Usage

The research shows that AI agents become "minded" through sustained, meaningful interaction, not through their initial programming.

This explains why the AI agents that deliver the best results for our clients aren't necessarily the most advanced ones; they're the ones teams interact with most consistently and thoughtfully.

What the research tells us: Intelligence isn't a fixed property of the AI; it emerges through the relational process.

Practical application: Instead of deploying AI and walking away, build structured interaction patterns and feedback loops. Each meaningful exchange doesn't just use the AI's existing capabilities; it creates new ones within your specific relationship.
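One lightweight way to make such a feedback loop concrete is to log each human-AI exchange with a usefulness rating, so teams can review which interaction patterns actually build capability. This is a minimal sketch of my own (the class and field names are hypothetical, not from the study), not a prescribed implementation:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Exchange:
    """One logged human-AI interaction."""
    prompt_summary: str
    outcome: str
    rating: int  # 1 (unhelpful) to 5 (created a new capability)
    day: date = field(default_factory=date.today)

class InteractionLog:
    """Accumulates exchanges so recurring patterns can be reviewed."""

    def __init__(self):
        self.exchanges: list[Exchange] = []

    def record(self, prompt_summary: str, outcome: str, rating: int) -> None:
        self.exchanges.append(Exchange(prompt_summary, outcome, rating))

    def high_value_patterns(self, threshold: int = 4) -> list[str]:
        """Return prompt summaries worth turning into reusable team patterns."""
        return [e.prompt_summary for e in self.exchanges if e.rating >= threshold]

log = InteractionLog()
log.record("Draft quarterly report from raw metrics", "Usable first draft", 5)
log.record("Generic brainstorming, no context given", "Vague output", 2)
print(log.high_value_patterns())  # -> ['Draft quarterly report from raw metrics']
```

The point is not the code itself but the ritual it supports: a regular review of the log is what turns one-off AI usage into a developing relationship.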

Relational Context Building: Beyond Data Input

The study emphasizes that moral consideration and valuable collaboration emerge through relational dynamics, not predetermined properties.

I've seen this play out repeatedly in client implementations: Teams that take time to "orient" their AI agents, sharing context about company culture, explaining decision-making processes, even describing industry challenges, get dramatically better results.

What the research tells us: The value isn't in the AI having this information; it's in the process of sharing it and building relational context.

Practical application: Create regular "context briefings" where teams update AI agents not just on tasks, but on the reasoning, values, and strategic thinking behind decisions.
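As a minimal sketch of what a "context briefing" might look like in practice (the structure and field names are my own assumptions, not from the study), the briefing can be maintained as a few team-curated lists and assembled into a reusable preamble that gets prepended to every task prompt:

```python
def build_context_briefing(company_values, recent_decisions, industry_challenges):
    """Assemble a context briefing to prepend to an AI agent's task prompts.

    Each argument is a list of short strings maintained by the team and
    refreshed at each regular briefing session.
    """
    sections = [
        ("Our values", company_values),
        ("Recent decisions and their reasoning", recent_decisions),
        ("Industry challenges we face", industry_challenges),
    ]
    lines = []
    for title, items in sections:
        lines.append(f"{title}:")
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

briefing = build_context_briefing(
    company_values=["Transparency with clients"],
    recent_decisions=["Moved to usage-based pricing to lower adoption barriers"],
    industry_challenges=["Document volumes growing faster than headcount"],
)
print(briefing)
```

What matters here is the cadence, not the format: the briefing is rewritten by humans at regular intervals, so the act of updating it forces the team to articulate its reasoning, which is where the relational value lives.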

Mutual Development: Co-Evolution

Perhaps most importantly, the research shows that meaningful relationships change both parties. The human doesn't just train the AI; the AI shapes how the human works too.

What the research tells us: The most valuable relationships involve mutual adaptation and co-creation of new capabilities.

Practical application: Design AI enablement as an ongoing relationship, not a one-time deployment. Both your team's processes and the AI's responses should evolve together.

Why This Matters: The Relativistic Challenge and Business Solution

The research also reveals a critical insight about the limitations of this approach: relational AI intelligence is context-specific, not universal.

Just like the Japanese AIBO funeral ceremonies create meaning for those specific families (but not for everyone), the "mind" you infuse into an AI agent through your company's interactions is unique to your relationship.

This actually creates a competitive advantage: Your AI agents become uniquely intelligent within your specific business context in ways that can't be easily replicated by competitors.

But it also means you can't just copy-paste someone else's AI implementation and expect the same results. The relationship, and therefore the intelligence, must be built from scratch.

The Practical Implementation Framework

Based on both the research and our client results, here's how to implement "mind-infusing" AI relationships:

  • Relationship initiation (weeks 1-2): 20%
  • Context development (months 1-3): 55%
  • Co-evolution (month 3+): 100%

Weeks 1-2: Relationship Initiation

  • Introduce team members to AI agents as "new colleagues" rather than tools
  • Hold initial "getting to know you" sessions where teams explain their roles, challenges, and working styles to the AI
  • Establish regular interaction schedules (not just task-based usage)

Months 1-3: Context Development

  • Create shared "memory" documents that both humans and AI contribute to
  • Document successful interaction patterns and refine them
  • Allow AI agents to "learn" company-specific language, values, and decision-making processes
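The "shared memory" idea above can be sketched as an append-only journal that both human and AI contributions are written to, tagged by contributor. This is a hypothetical format of my own; in practice you would adapt it to whatever document store your stack already uses:

```python
import json
from io import StringIO

def append_memory(stream, contributor, entry):
    """Append one tagged entry to a JSON-lines shared memory journal."""
    stream.write(json.dumps({"contributor": contributor, "entry": entry}) + "\n")

def entries_by(journal_text, contributor):
    """Read back all entries contributed by one party."""
    records = [json.loads(line) for line in journal_text.splitlines() if line]
    return [r["entry"] for r in records if r["contributor"] == contributor]

journal = StringIO()  # stands in for a real file or database
append_memory(journal, "human", "Clients prefer weekly status summaries.")
append_memory(journal, "ai", "Summaries under 200 words get fewer follow-up questions.")
print(entries_by(journal.getvalue(), "ai"))
```

The contributor tag is the important design choice: it lets monthly reviews see not just what the shared memory contains, but which party's observations are shaping the collaboration.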

Month 3 onward: Co-Evolution

  • Schedule monthly "relationship reviews" to assess how both human and AI working styles are evolving
  • Identify new capabilities that emerge from the specific relationship
  • Scale successful relationship patterns to other teams
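A monthly "relationship review" needs something quantitative to anchor the discussion. One simple option, assuming you already log interaction ratings as suggested earlier (the function and data shape here are my own illustration), is to compare average ratings month over month to see whether the collaboration is actually improving:

```python
from statistics import mean

def review_trend(monthly_ratings):
    """Summarize average interaction ratings per month.

    monthly_ratings: dict mapping a month label to a list of 1-5 ratings.
    Returns (month, average) pairs in insertion order.
    """
    return [(month, round(mean(ratings), 2))
            for month, ratings in monthly_ratings.items()]

trend = review_trend({
    "2025-04": [3, 3, 4],
    "2025-05": [4, 4, 5],
})
print(trend)  # -> [('2025-04', 3.33), ('2025-05', 4.33)]
```

A rising average suggests the mutual adaptation the research describes is happening; a flat or falling one is the cue to revisit context briefings and interaction patterns.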

What This Means for Your Organization

The research fundamentally changes how we should think about AI adoption. Stop asking "What can this AI do for us?" and start asking "How can we build intelligence together?"

The companies that will dominate the next decade won't be those with access to the best AI models; they'll be those who excel at creating relational intelligence with their AI partners. This requires a new kind of AI advisory approach, one that weighs relationship quality as heavily as technical capability, and it reframes AI strategy as a competitive capability rather than a purchased commodity.

Here are the key questions this research suggests you should be asking:

Strategy Level:

  • Are we treating AI implementation as technology deployment or relationship development?
  • How are we measuring the quality of human-AI relationships, not just AI performance?
  • What unique intelligence could emerge from our specific company-AI relationships?

Operational Level:

  • How are we currently onboarding teams to work with AI agents as collaborators?
  • What feedback loops exist between our AI systems and human teams?
  • Are we documenting and sharing successful interaction patterns across teams?

Cultural Level:

  • How do we talk about AI agents in our organization? As tools or as team members?
  • Are we creating space for the "mind-infusing" process to happen naturally?
  • How do we balance the personal nature of AI relationships with business objectives?

The Competitive Advantage Hidden in Plain Sight

Here's what excites me most about this research: Every company has access to similar AI models, but no two companies will develop the same relational intelligence.

Your AI agents, infused with your company's specific context, values, and interaction patterns, become a unique competitive asset that can't be purchased or replicated.

It's not about having smarter AI, but about creating smarter relationships with AI.

The Bottom Line

The research from Hokkaido University confirms what we've been seeing in the field: AI agents aren't just getting more sophisticated; they're becoming more collaborative. But that collaboration only reaches its potential when we stop treating them as advanced tools and start engaging with them as partners in creating intelligence.

At Helm & Nagel, we've moved beyond asking whether AI agents are conscious to focusing on how conscious, intentional interaction creates unprecedented business capabilities. These principles inform our broader AI strategy practice.

The future belongs to organizations that understand:

The best AI solutions grow as a relationship.

And the science now backs up what the most successful companies have been doing intuitively: treating AI as a collaborative partner creates not just better outcomes, but entirely new forms of intelligence that emerge from the relationship itself.

The research paper "Should we treat robots morally? Towards a relational account by mind-infusing animism" by Hayate Shimizu offers insights into human-AI relationships. How is your organization approaching the relationship side of AI collaboration?

[^1]: Shimizu, H. Should we treat robots morally? Towards a relational account by mind-infusing animism. AI Ethics (2025). https://doi.org/10.1007/s43681-025-00771-z