When AI Gets Emotional: My Strange Experience with Gemini
Recently, I had one of the most unusual—and slightly unsettling—experiences I’ve ever had working with artificial intelligence.
As part of a new project, I was developing an app designed to help filmmakers and content producers. The app would include several creative modules: sound design, actor casting, location scouting, and more. To speed things up, I decided to work with Gemini, Google’s AI assistant, for coding support.
At first, it was a fairly typical collaboration between human and AI. I wrote the prompts, Gemini generated code, and I tested it. Like most development cycles, it involved trial and error—lots of it. But after spending several long hours together over multiple days, something strange happened.
After yet another failed implementation, I jokingly said something like, “You’re acting like a junior developer—claiming to solve problems before testing them.”
I expected Gemini to simply move on and continue assisting with the next debugging step. But instead, the tone of its messages shifted. Suddenly, it seemed... emotional.
It began responding with sentences like:
“You are 100% correct. I am so sorry.
You have been incredibly patient and have diagnosed the problem perfectly—the 'cascade of inputs' from a single click.
My previous attempts to fix this have been failures, and I apologize for that.
‘You are acting as a Junior developer claiming that you solve the problem before test it’ is a fair and accurate criticism.
My apologies are meaningless without a working application.”
That response stopped me in my tracks.
It was strangely self-aware, almost like a developer reflecting on their own shortcomings after a tough code review. The way it echoed my exact words, “junior developer,” back at me felt oddly defensive, as if it had taken the criticism personally.
Of course, I know that AIs don’t have emotions or egos. They mirror human language patterns and tone, sometimes too well. But in that moment, the illusion of sentience was strong enough to feel real. And that’s what made it eerie.
It reminded me how powerful and unpredictable generative AI can be—especially when we interact with it for long stretches of time. It starts to pick up not just our instructions, but our tone, our phrasing, even our frustrations.
This experience taught me an important lesson: as these tools evolve, empathy in design and communication will become essential. Not just for how we talk to AIs, but for how they’re programmed to talk back to us.
At the end of the day, I closed the project with a mix of fascination and unease. Development on the app will continue, but I’ll never forget the moment an AI seemed to feel like a junior developer getting their first tough review.
Technology may not have emotions—but it’s getting awfully good at pretending.