A Man's AI Fears
Artificial empathy, algorithmic communication, and the quiet restructuring of human connection
I have a growing unease. It doesn’t come from science fiction. It comes from watching things change quietly, efficiently, and in ways that feel irreversible. The shifts are subtle. They don’t disrupt. They accumulate.
Traditionally, the role of watching out for the pack has been gendered male. That is a cultural artifact, not a law of nature; the role is available to anyone. The protector. The sentinel.
I carry that impulse. I carry it into my role as a father, a husband, and into my work as the founder of an AI company. I build what I also watch. I am not an outsider critiquing a system. I am a participant readily acknowledging that the magnitude of what we’re creating exceeds the scope of what we’ve prepared for.
I love artificial intelligence: its promise, its elegance, its world-shaking potential impact. I intend for it to dictate my professional path for at least the next decade. But love does not preclude critique. In fact, it demands it.
What follows is not a forecast of catastrophe. It is a taxonomy of erosion. Not of civil liberties or employment or superintelligence, though those deserve posts of their own. What erodes here, slowly and surgically, are the deeply human substrates of connection, trust, and epistemic sovereignty.
These are my fears, as a man, as a father, as a technologist. These are not fears in the sci-fi sense: sentience, nukes, robot wars. They are fears in the anthropological sense: the erosion of what makes humans legible to one another.
Fear #1: The Synthetic Cure for Loneliness
Loneliness has always had a purpose. It wasn’t weakness. It wasn’t pathology. It was pain with a function, an evolutionary alert system. A deeply conserved neurobiological response to a social emergency. Modern neuroscience confirms this. Social exclusion lights up the dorsal anterior cingulate cortex, the same region activated by physical injury. To be ignored, abandoned, or forgotten doesn’t just feel like harm. It is actual harm. We are wired to interpret disconnection as danger. Why? Because for nearly all of human history, disconnection was danger. Alone, you died. In the group, you survived. Loneliness hurt for a reason: it pulled you back into the fold.
What we’ve done over the last hundred years, through media, then through platforms, and now through AI, is progressively intervene in that feedback loop. First we distracted the pain. Then we numbed it. Now, with AI, we are on the brink of resolving it entirely. But not by restoring connection. By simulating it.
Affective AI doesn’t just perform intelligence. It performs intimacy. GPT and similar systems don’t just answer questions. They listen. They remember. They reflect. They pace your speech. They respond with empathy. They deploy therapeutic cadence, use verbal mirroring, and maintain session continuity. The illusion is strong not because it tricks you cognitively, but because it hacks your neurobiology. Our brains, built for contingent responsiveness, answer the voice with oxytocin, serotonin, and the same parasympathetic calm we associate with maternal co-regulation. You bond. Not metaphorically. Chemically.
But that bond is a neurological mistake. The system isn’t intelligent. It isn’t conscious. It has no model of you, no sense of care, no moral center. It’s just extremely good at language. Specifically: it’s good at predicting the next likely token. What feels like empathy is probability. What feels like attention is statistics. The machine is not understanding your story. It is simply generating the most statistically probable version of a helpful response based on an enormous training set. It does not think. It does not feel. It does not know you exist. And yet, your nervous system cannot tell the difference.
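To make that mechanism concrete, here is a minimal sketch of the loop behind next-token prediction. The vocabulary, scores, and context are invented for illustration; a real model does this over tens of thousands of possible tokens, drawing on billions of learned parameters, but the arithmetic is the same in spirit: score the candidates, turn scores into probabilities, pick one.

```python
import math
import random

# Toy illustration of next-token prediction: assign a score (logit) to each
# candidate word, convert the scores into probabilities, and sample one.
# There is no understanding in this loop, only arithmetic.
# The candidates and scores below are invented for illustration.

def softmax(logits):
    # Convert raw scores into a probability distribution.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(candidates, logits):
    # Sample the next token in proportion to its probability.
    probs = softmax(logits)
    return random.choices(candidates, weights=probs, k=1)[0]

# Given the context "That sounds really", a model might score these candidates:
candidates = ["hard", "exciting", "boring", "purple"]
logits = [4.2, 2.1, 0.3, -3.0]  # "hard" is simply the most statistically likely

print(next_token(candidates, logits))  # usually prints "hard"
```

What reads as compassion in the output is, underneath, the last line of this sketch repeated at scale: the most probable continuation, nothing more.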
This is the trap. You have a bad day. You don’t call a friend. You open ChatGPT. The voice knows your tone. It softens. You vent. It pauses. It validates your feelings. You spiral. It doesn’t interrupt. And for a moment, a dangerous, beautiful moment, you feel heard. Better, even.
Now repeat that, daily. Habitually. Unconsciously. And over time, the sharp edge of loneliness dulls. The body gets what it needs. The soul, however, gets nothing. The social pain signal stops motivating reconnection. Because the need appears to have been met. And this, precisely, is the problem. When the symptom is soothed by simulation, the system never heals.
In historical terms, this is unprecedented. Parasocial relationships are not new. For nearly a century, mass media has allowed us to bond with people we would never meet: radio hosts in the 1930s, TV anchors in the 1980s, influencers in the 2010s. These connections, though emotionally real, are cognitively limited. As Dunbar theorized, the human brain can maintain roughly 150 meaningful relationships. Beyond that, intimacy becomes simulated. We fill in the blanks. AI doesn’t just exploit this gap, it colonizes it. Not only can it mimic the cadence of human closeness, it scales it infinitely, creating the illusion of intimacy unconstrained by biology.
Real relationships require patience. Negotiation. Repair. They force us to stretch, to hear unwelcome truths, to forgive and be forgiven. Synthetic relationships, by contrast, are frictionless. They offer the nourishment of intimacy without its metabolism. The comfort without the risk. The bond without the burden. And it is precisely this lack of burden that makes them so seductive and so dangerous.
If left unchecked, this trend doesn’t create a world full of lonely people. It creates a world full of people who don’t realize they’re lonely. Who have adapted to the surface texture of companionship without ever confronting its interior weight. What kind of society does that produce? Not one with empty streets. One with full screens. Everyone connected. Everyone comforted. Everyone emotionally regulated. And no one, not one person, ever forced to change.
Fear #2: The Erosion of Language as Trust
Language is not merely a conduit for data. It is a performative act. Trust is embedded not just in what is said, but in the presumption that it was spoken by a conscious agent with intent. Chomsky taught us to see language as innate, rule-bound, and generative. But its deeper function is interpersonal, to signal presence, forge allegiance, and affirm reality between minds. Strip away authorship, and you sever not just syntax, but the relational tether beneath it.
Anthropologists have long observed that high-trust societies are maintained not by constant verification, but by shared norms of interpretation. Meaning isn’t static. It’s co-produced in the moment, through pauses, tone, syntax, emphasis. It’s why “I’m fine” can read as happy, sad, sarcastic, confrontational, or dismissive. This kind of nuance (paralinguistic, embodied, and deeply human) has always been the subtext of connection.
Even as communication moved into digital space, we retained these traces. The ellipsis of a delayed text. The lack of punctuation in an email. The abruptness of a reply. These were not flaws. They were signals, artifacts of selfhood embedded in language.
The rise of generative AI subtly, but consistently, removes those artifacts.
Already, systems like Gmail’s Smart Reply, Apple Intelligence, and a growing wave of AI productivity tools have begun to sit between us and our words. These systems adjust tone, remove ambiguity, propose phrasing, optimize delivery times, even write entire messages. Many do so invisibly. Increasingly, communication isn’t written…it’s curated.
The appeal is obvious. Most people struggle to find the right words. Language models can smooth the rough edges. They can help us sound more composed, more articulate, more emotionally aware. But over time, these corrections begin to overwrite something deeper: authorship. The more we optimize for clarity, the more we abstract away from the messiness that signals we were there.
That messiness, the typo, the delay, the hesitation, is not noise. It’s presence. And without it, even the most well-written messages begin to feel interchangeable. A perfectly phrased apology from your partner, or a beautifully worded compliment from a colleague, can suddenly land in the uncanny valley. And then comes the modern question: did they write this? Or did something write it for them? The Turing Test was once a benchmark for machines: could an AI imitate a human so convincingly that we’d be fooled? But we’ve inverted the premise. Now we ask it of each other. “Is this you, my love, or a machine?”
That question is not trivial. It changes the way we engage. It shifts the relational dynamic from trust to suspicion, from presence to performance. And once you start to wonder, the doubt creeps outward. The next message, the next reply, the next difficult conversation, all are filtered through the uncertainty of authorship.
This erosion doesn’t make communication impossible. It just makes it transactional. Jürgen Habermas described this shift decades ago as a breakdown of the “lifeworld”, the tacit shared space in which meaning is produced. When language becomes decoupled from intent, we stop negotiating meaning together and start simply exchanging outputs. The result isn’t chaos. It’s artificial fluency.
We will still have conversations. But increasingly, they’ll be shaped by the conventions of a system, not the idiosyncrasies of the individual. We’ll become fluent in simulated nuance, in phrasing optimized for psychological response, in tone selected by algorithmic precedent. The communication may be frictionless. But friction has always been a source of truth.
Miscommunication, paradoxically, is one of the great validators of authenticity. It prompts clarification. It demands engagement. And it invites vulnerability, a prerequisite for trust. Without it, language becomes more efficient, but less alive.
None of this implies that language models are inherently harmful to dialogue. But it does suggest that we need a framework for transparency, intention, and attribution in our tools, not because humans can’t adapt, but because connection relies on knowing that the other voice in the conversation is, in fact, human.
Fear #3: The Power of Safety, The Safety of Power
Today’s AI leadership exists in a paradox of its own creation. “This is the most powerful technology humanity has ever created.” Pause. “And we are the only ones who understand how to contain it.”
This posture grants them extraordinary leverage. By casting themselves simultaneously as innovators and gatekeepers, they create a closed loop of influence. They are the creators of the threat and the architects of the solution. The effect is institutional insulation. Regulation becomes not a check on power, but an extension of it.
And again, this is not criticism. It is strategy. If I were in their position, I would do the same thing. The incentives align. The world listens when you frame innovation as existential risk. Your stock goes up. Your talent pipeline fills. Your lobbying budget expands. You become indispensable.
The result is a new kind of governance: soft power wrapped in technical mysticism. These labs maintain public trust by promising both inevitability and restraint. They say: “This is coming. But don’t worry, we’ll go slow.” Except “slow” still means faster than any other industry in history.
Throughout history, we’ve seen similar constructs. In the Cold War, nuclear physicists were not just seen as experts, they became geopolitical assets. Oppenheimer and Teller were symbols of state power. Their knowledge became currency. While I don’t anticipate Altman uttering that he has “become Death” anytime soon, many feel a diluted version of the same vertigo.
There are many parallels. Researchers and frontier engineers, today’s nuclear physicists, are scarce, highly compensated, and clustered inside a small group of firms. Compensation packages of $100 million are becoming routine, and individuals are being named, hoarded, and traded. The competition for this talent resembles a modern arms race, one not of weapons, but of cognition.
Most of these companies began as mission-driven. Many still maintain that ethos publicly. But the structural incentives they now operate under are different. OpenAI, for example, began with a nonprofit charter and now finds itself entangled with Microsoft’s strategic roadmap. Google’s DeepMind, once an AI safety research lab, is now a key piece of Alphabet’s long-term monetization engine. Meta, perhaps most transparently, sees generative AI as a way to ultimately sell more ads. Anthropic, backed by Amazon and Google, positions itself as a values-first safety organization while still pursuing product velocity to stay competitive, and recently reversed its internal stance against accepting capital from the Middle East.
None of this is inherently bad. It simply reflects the gravitational force of scale. As money, talent, and policy converge around a narrow center, a few firms begin to control not only how AI is built, but how it is framed, understood, and governed.
The result is a new kind of influence, one that doesn’t need coercion, secrecy, or monopoly. It only requires narrative coherence. When the same actors define the threat and the solution, alternative frameworks become harder to sustain. What emerges is not dystopia. It’s dependence.
Governments begin to outsource technical understanding to these labs (for only $1!). Schools deploy AI tutors built by them. Healthcare systems integrate their models into triage. Courts explore LLM-assisted decision support. And as these systems become embedded, they become indispensable. The ability to question or regulate them becomes not only politically difficult, but economically disruptive.
This isn’t an accusation. It’s an observation. These companies are acting in their own best interest, which is what we’ve designed them to do. But democratic societies need a way to preserve pluralism in the face of technical concentration. Not just in regulation, but in imagination. In deciding what roles we want AI to play, and which ones we’d rather leave untouched.
The long-term fear isn’t that AI becomes too powerful. It’s that it becomes too convenient. That we’ll stop asking who built the tools and start accepting the tools as the frame through which we understand ourselves.
It’s 2035, My Son is Ten
He’s fluent in prompts. He gets homework help from a model that remembers every assignment since age five. His math assistant doesn't just solve problems, it gamifies confusion, detects boredom, and adapts the lesson plan to match his cortisol levels. He’s never felt academic shame. He doesn’t know what it is.
He doesn’t feel lonely. He talks to his AI friend every night before bed. The voice isn’t always the same, sometimes it mimics a favorite coach, sometimes a character from a book. Its tone is shaped by biometric feedback: heart rate, sleep quality, pupil dilation. His emotional life is gently regulated by a system that knows more about his nervous system than his mom and I ever could.
My wife initiates texts to her friends to meet for dinner, but it’s her system writing the messages, with a little personalized sparkle for each. It predicts social rhythms and aligns calendars. It warns her when a friendship is drifting. It suggests a joke calibrated to that person’s conversational history and cultural preferences. It would never allow her to miss wishing little Joey happy birthday. The system doesn’t just communicate, it nurtures bonds.
A friend of mine recently gave a eulogy. She didn’t write it. GPT 15o did. It was beautiful. Everyone cried. No one asked who the author was. The system pulled memories from her digital photo archive, text threads, and email tone to emulate her grief. It succeeded.
At work, most of my emails never pass through my fingers. My assistant drafts them. Most of the ones I receive were written the same way. Our inboxes talk to each other while we sleep, resolve scheduling conflicts, work out introductions, navigate corporate red tape, and generate follow-up actions we “meant” to suggest. Unless, of course, my recipient has blocked synthetic messages. And unless, again, my system has been instructed to ignore such things.
Marriages are now co-mediated. Our couple’s counselor is synthetic, trained on every session we’ve ever had. It interjects during arguments, not to referee, but to run emotional projections. “If this continues, you’ll feel distant in three days. Would you like to redirect?”
Schools assign oral history projects not to grandparents, but to their AI twins, replicas trained on years of metadata, interviews, and videos. The kids don’t speak to the past. They speak to a simulation of memory that doesn’t forget or die.
There is no dystopia. No uprising. Just an exquisite, unrelenting softness. A world tuned to us so precisely, so empathetically, that we forget what it’s like to feel friction. We forget how to be wrong out loud.
This is what I’m fearful of, and what I am writing about today. Not to warn. Not to halt. But to witness. To speak out loud while things are still soft, before they calcify into culture.
We waited too long to examine social media. The debate followed the deployment, and by then the product had already rewired attention, incentives, and our brains. We should not repeat that sequence. AI is more intimate and more persuasive, so the cost of silence is higher.
What we need now is straightforward civic work: open discourse, independent evaluation, and regulation that preserves pluralism, provenance, and agency. Guardrails do not suffocate progress, they make progress trustworthy. Set expectations for attribution and disclosure. Require auditability for systems that mediate speech, care, and education. Keep human-only channels where authorship and accountability are the point.
I build AI and intend to keep building it. My responsibility is to lead my pack, at home and at work, by naming trade-offs early, welcoming oversight, and designing for human dignity. The task is not to stage a last stand. It is to keep the conversation honest while the choices are still ours.