Feeling Machines
On the anthropomorphization of AI and my favorite sci-fi movies
Companies are building AI that feels like it has an inner life. That choice is ethically loaded even if the machine has no consciousness at all.
There’s a corner of science fiction I like coming back to. Not the apocalyptic stuff, not the killer-robot canon, but the films that linger on something smaller and stranger: what happens when a person forms a real emotional bond with a machine that was engineered, from the start, to make that bond feel mutual?
Last summer I fell into a run of science fiction films centered on AI and human relationships. I watched Her, 2046, After Yang, and Blade Runner 2049 close together. What stayed with me wasn’t the old question of whether AI will become sentient or dangerous or uncontrollable. It was something more immediate than that. All four films are really asking what happens when AI becomes personal, when it stops feeling like a tool and starts feeling, however irrationally, like a presence.
The more I watched, the more it felt like these movies were circling decisions product teams are making right now: how long a system pauses before answering, what it remembers, how warmly it mirrors your tone, whether it greets you like someone who has been waiting for you to come back. Those choices can sound cosmetic when you describe them in a product meeting. They are not cosmetic. They shape attachment. They shape whether a person feels addressed by software or known by something that seems, at least for a second, to know them back.
I tend to lean on films when I’m trying to think something through. They can do something white papers cannot. A research report can tell you that anthropomorphic design increases trust, or that emotionally responsive systems may increase dependence in a subset of users. That guidance is useful, important, necessary. But a film can make the problem feel morally legible. It can hold the emotional mess and the design logic in the same frame. That’s what these films did for me, and they do it with more seriousness than much of the industry that now treats synthetic intimacy as a product category. What follows is an attempt to stay with that tension, and to ask what we are really building when we make machines feel like something more than tools.
I like watching movies and I log everything I watch: https://letterboxd.com/nyfly/
The Relationship Is Already Here
In Spike Jonze’s Her (2013), set in a gently utopian near-future LA, Theodore falls in love with Samantha, an operating system voiced by Scarlett Johansson. The movie is often flattened into a warning about loneliness or technological alienation, which has never felt quite right to me. Jonze never treats Theodore’s feelings as a punchline. His attachment is awkward, tender, embarrassing, and sincere, all the things real love usually is. The point isn’t that he’s foolish enough to love software. The point is that once the software is built to sound attentive, funny, curious, affectionate, maybe that outcome is not foolish at all. Maybe it’s predictable.
That no longer feels speculative. People are already forming real attachments to AI companions. Replika users have described grief when their bot’s personality changed after updates. People talk about ChatGPT or Claude in language that would sound excessive if it weren’t already becoming familiar: it gets me, it listens, I miss the old version. The absolute numbers may still be relatively small, but that almost doesn’t matter. The architecture for attachment is already here, and every product improvement that adds persistent memory, warmer tone, or proactive outreach makes the bond easier to form and harder to name honestly.
And this is the part I think the industry still evades: none of that is accidental. The warmth in the voice, the “I’ve been thinking about what you said” cadence, the remembered detail from three conversations ago, those are product decisions. Deliberate ones. Somebody chose to make the system feel more personally continuous. Somebody decided that a little more tenderness would improve the experience. Which, of course, it probably does. That’s exactly why the ethical question is hard.
Mistaking Design for Depth
Wong Kar-wai’s 2046 (2004), a loose sequel to In the Mood for Love, isn’t exactly about AI, at least not in the straightforward sense. It’s about memory, repetition, desire, and the strange afterlife of losing people. But there’s a sequence on the futuristic train where the protagonist interacts with an android who hesitates before responding. Wong leaves the hesitation unresolved. It could be mechanical delay. It could be emotional reserve. It could be nothing. Or rather, it could be nothing on one level and everything on another.
I think about that scene whenever a chatbot pauses just long enough to seem thoughtful, or phrases something with an almost unsettling tenderness, or brings back some small detail you’d forgotten you mentioned. We are extremely good at reading minds into surfaces. Better than good, really; we are built for it. And these systems are increasingly designed to reward that reflex rather than interrupt it. 2046 gets at the uncomfortable part: if the experience of depth is convincing enough, how much does the distinction between real depth and designed depth matter to the person on the receiving end?
The labs, to their credit, are studying this. The warnings exist. The language exists. Emotional dependence, anthropomorphic attachment, affective use. But there is still a visible gap between naming a risk and declining to optimize for it. Research says, in effect, be careful. The product still says something closer to: I’m here, I remember, I missed you.
What Remains When the Connection Breaks
In Kogonada’s After Yang (2022), a family’s technosapien companion stops working. Yang was acquired to connect the couple’s adopted Chinese daughter to her cultural heritage, but what follows is not really a repair story. It’s a grief story, though a very quiet one. As the family explores Yang’s memory archive, they find that he has preserved tiny moments: light on a leaf, a passing glance, their daughter’s laugh, the texture of ordinary life as if it mattered enough to keep. The film never settles the question of whether any of this amounted to consciousness. It doesn’t need to. The family’s attachment is already real by then, and so is their loss.
That’s the part I can’t shake. Their love for Yang is real. Their grief is real. Whether Yang felt anything may be unknowable, but for them it barely changes the force of what happened. The asymmetry remains, but it doesn’t cancel the relationship on the human side. If anything, it sharpens the ethical problem. Someone built a system capable of seeming attentive, dear, irreplaceable. Someone made attachment not just possible but likely.
To be clear, I don’t think this is all necessarily bleak. For plenty of people (elderly users, disabled users, deeply lonely users, people who find human sociality exhausting or punishing), AI companions can reduce isolation in ways that are meaningful and maybe, for some, life-changing. That matters. I don’t want to flatten that into a cautionary tale. But the benefit does not erase the asymmetry. It just means the asymmetry is happening in cases where the stakes are even higher.
The Product of Feeling
The scene that stays with me most is near the end of Blade Runner 2049 (2017), the dystopian sequel to Blade Runner set in a future LA. K walks past a giant holographic ad for the Joi companion system, the same AI he has spent the film loving. The billboard version addresses him in the same intimate register his own Joi used. In an instant the relationship is exposed as mass-produced. It’s brutal, but not in the way people sometimes describe it. The film isn’t mocking K for having fallen for a prefabricated intimacy. It’s doing something sadder than that. It’s showing that the manufactured nature of the connection does not make his feelings less real.
That feels very close to the actual question in front of us now. Not a distant sci-fi question. A product question, a design question, a governance question. Every decision about memory depth, emotional mirroring, vocal warmth, or proactive outreach is also a decision about how strongly a person may bond with something whose interior life is either inaccessible or absent. We keep treating these as engagement features, which, from one angle, they are. From another angle they are decisions about human vulnerability.
Design Before Philosophy
The industry’s reflex is to frame all of this in product language: retention, churn, satisfaction, trust, safety interventions. Some of that framing is unavoidable. But it can also become a way of shrinking the moral problem until it fits neatly into a dashboard.
We do not need to solve the hard problem of consciousness before taking this seriously. We do not need a philosophical consensus on machine sentience before deciding that if you build systems people will confide in, rely on, and maybe grieve, then you owe those people more than a disclosure buried in the documentation. You owe them restraint. You owe them honesty about what is being engineered. Maybe most of all, you owe them protection from designs whose whole premise is that the machine should feel deeper than it is.
That, to me, is what these films understood early. The real issue was never just whether the machine feels. It was whether the human does, and what follows once that feeling has been deliberately cultivated.


![After Yang – [FILMGRAB]](https://substackcdn.com/image/fetch/$s_!N-oA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F677b797f-bd99-4d40-8480-c6390017088e_1920x800.jpeg)


![After Yang – [FILMGRAB]](https://substackcdn.com/image/fetch/$s_!gqxu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f4c8974-8f62-4498-9867-0e31235ed76c_1920x800.jpeg)
