After spending a lot of time using chatbots earlier this year, I became fascinated by how AI advancements will reshape the consumer Internet. We're in the early stages of consumer AI, and the precedent isn't encouraging: smartphones, algorithmic social media feeds, and short-form content have already driven measurable negative impacts - rising rates of depression and anxiety, declining attention spans, increased political polarization.
I wanted to understand what we might be in for next. What happens when the Internet becomes hyper-personalized, shaped entirely around your preferences and biases? What are the downstream effects when software starts feeling less like a tool and more like a friend or romantic partner?
My approach was exploratory and interdisciplinary. I read academic research on smartphone and social media effects and AI safety papers, studied social science work on the loneliness crisis and human meaning-making, and examined new capabilities unlocked by AI (particularly hyper-personalization and anthropomorphism). I also used the tools themselves - ChatGPT, CHAI, and other AI apps - to understand their experiential pull firsthand.
I identified two core mechanisms through which AI will intensify existing brainrot dynamics:
Hyper-personalization: AI will create increasingly individualized information landscapes - "filter bubbles of one." Each person's Internet will be shaped around their specific preferences, history, and psychological profile, making it harder to maintain shared reality or encounter challenging perspectives. This drives platform lock-in toward "everything apps" that know you deeply.
Anthropomorphism: As AI becomes more conversational and emotionally responsive, software will increasingly feel like a relationship rather than a tool. This is already visible in social AI apps like CHAI, Character.ai, and Replika, where users develop genuine emotional attachments to chatbots.
The downstream consequences I'm most concerned about are captured in my Symposium presentation and will be developed into a longer essay.
The Residency clarified something important: I don't want to be purely a public intellectual on this topic. While I care about raising awareness, I'm more motivated to build a scaled, commercial business that directly addresses how brainrot and meaningless virtual experiences erode human well-being.
I'm exploring business ideas in domains central to human meaning-making: education, health, meaningful work, spiritual practice, and relationships. The goal is to create something that competes effectively with addictive tech by offering genuinely fulfilling experiences - not by guilting people into digital detoxes, but by building something better.
Giving my talk at the Symposium. It was really affirming to see how many people cared about this topic - I wasn't sure if it would resonate or come across as overly pessimistic. There was something cathartic about flashing up actual brainrot content on the big screen: Bombardino Crocodilo videos, screenshots of Italian mafia boss x nanny AI chats from CHAI. It made the abstract concrete. The conversations afterward were energizing - people wanted to keep talking about these dynamics and what we should do about them.
It was a deeply affirming reminder to be thoughtful about the technologies we build - to consistently ask "what does this do to humans?" The Residency created space to actually sit with that question rather than moving immediately to solutions.
Anna Mitchell is exploring how to build consumer AI that drives human flourishing. She is studying shifts like personalization, anthropomorphism, and memory; examining risks; and proposing new products, business models, and marketing. She brings experience building startups and institutions at Rippling, Schmidt Futures, and the Stanford Review.