A few days ago, a new study from UC Berkeley landed in Harvard Business Review with a finding that made me want to read it right away: "AI doesn't reduce our workload, it intensifies it."
I read the headline and felt seen in a way I didn't expect.
Let me tell you what happened to me last fall.
I was on fire. With AI tools at my fingertips, I was producing at a pace I'd never experienced before. I built comprehensive training sessions in days that used to take weeks. I aligned our ombuds office's strategic goals with organizational priorities in ways that felt sharp and compelling. I drafted communications that landed with more impact. I brainstormed engagement strategies that worked.
Every request that came my way got a "yes, absolutely" because I could do it all. The tools were there. The capability was there. Through genuine engagement and collaboration with AI, we created output that was really good: valuable, impactful work that made a difference.
And I was exhausted and anxious in a way I'd never been before.
Not the familiar tired-and-stressed of a busy season. Something different. My brain was moving faster than it wanted to move, faster than it naturally could move, and I struggled to slow it down. I was doing more in less time, and while the productivity felt amazing in the moment, I couldn't sustain it. I couldn't even keep up with my own pace.
I knew something was off. I could feel the dissonance between AI's pace and human pace, but I didn't have language for it, and I didn't have data to back up what I was experiencing. Then this article named it directly and clearly, with my favorite thing: data!
The Research That Named It
UC Berkeley researchers Aruna Ranganathan and Xingqi Maggie Ye spent eight months embedded in a tech company, watching what happened when 200 employees got AI tools and the freedom to use them however they wanted. Their findings, published in "AI Doesn't Reduce Work—It Intensifies It", revealed a pattern they call "workload creep."
What they found: people worked faster, took on broader tasks, extended their work into more hours of the day, all voluntarily. Product managers started writing code. Designers tackled engineering tasks. People sent "one last prompt" during lunch breaks or after hours because it felt wasteful to let AI sit idle.
Here's what really got me: employees felt they had a "partner" that could help them move through their workload. That sense of partnership created momentum.
I feel that too! Especially as someone who works from home by myself all day, that partnership matters. It makes work more exciting and motivating. There's a genuine benefit to having what feels like a collaborator who's always available, always ready to help me think through a challenge or build something new. The momentum is real!
The research showed the other side, too. The same partnership that created momentum also created constant context-switching, frequent checking of AI outputs, and a growing number of open tasks.
AI processes in seconds. Humans process in days, months, years. AI doesn't need breaks, sleep, or time to just be. Humans absolutely do.
The study found that cognitive fatigue and burnout offset any productivity gains. The researchers warned that companies need intentional norms, an "AI practice," to prevent short-term wins from becoming unsustainable overwork.
What I'm Learning (Still Figuring Out)
I'm experimenting with what sustainable AI use looks like. I don't want to lose the partnership and momentum; I value them too much. So I'm thinking about how my AI partner can also create momentum around boundaries and self-care, not just productivity.
Here's what I'm trying:
One or two AI projects per day, maximum. Not one or two AI interactions. I mean substantial projects where AI is doing significant work. This forces me to be selective about where I deploy these tools.
One thing at a time. We already know multitasking spreads us thin. AI supercharges our ability to multitask, running multiple agents in parallel, juggling manual work alongside AI-generated alternatives. I'm trying to resist that pull. What's my morning project? What's my afternoon project? Not both at once.
Spacing projects out. Building in processing time between AI-assisted projects. Letting my brain move at human pace for part of the day.
Protecting genuinely human time. Breath. Nature. Relationships. Laughter. Physical presence. These things exist at human pace and require human pace. They can't be sped up, they can't be outsourced. And I certainly wouldn't want them to be.
Asking my AI partner to help with boundaries. Sometimes that means Claude telling me it's time to go to sleep. Or reminding me to take a break. The partnership can support the work, but it can also support the rest.
Am I succeeding at all this every day? No. But I'm trying and experimenting.
What This Means for Ombuds Work
This research matters for us professionally, not just personally. While this study focused on a tech company, these patterns apply wherever people are using AI tools.
Start by getting curious about AI implementation in your organization:
Who's leading the rollout of AI tools and trainings? Who's setting policies around usage? In many organizations, this might be IT, HR, learning and development, or specific departmental leaders. Sometimes it's all of the above, with or without much coordination.
This is an opportunity. Connect with these individuals. In addition to learning what tools are available for your own work, you can influence the structures and boundaries that become organizational culture around AI use. We can help shape intentional, responsible, thoughtful AI practices before the patterns become entrenched.
Then watch for these specific patterns the UC Berkeley study identified:
- Employees working longer hours voluntarily, with blurred boundaries around breaks and personal time
- Increasing exhaustion despite productivity gains
- Task expansion where people take on work outside their role (managers coding, designers engineering)
- Constant context-switching between multiple AI-assisted threads
- Quality issues and resentment when people fix others' AI-generated work
- Decreased human collaboration as AI becomes the primary "partner"
Questions to consider with leadership and AI implementation teams:
- What norms exist around when and how employees use AI tools?
- How are you protecting human processing time in project timelines?
- Are you measuring burnout alongside productivity metrics?
- What boundaries prevent AI work from bleeding into personal time?
- How are you addressing workload expansion when AI makes tasks faster?
The Journey Ahead
I don't know where this is all heading. Technology is moving fast. The nature and makeup of our work and organizations are shifting. We're all figuring this out in real time.
What I do know is that ombuds work, the deeply human work of listening, understanding context, reading what's unsaid, and helping people navigate complexity, can't be automated. The UC Berkeley study showed exactly why: AI gives generic advice without the nuance, without understanding the specific relational dynamics, without the human ability to help someone articulate what they actually need.
That work still requires us. We need to be rested enough, grounded enough, and clear-headed enough to do it well.
So this is where I am right now. Still experimenting. Still learning. Still sometimes working too fast and having to pull myself back. Still grateful for the partnership with AI, and also learning to set boundaries around it.
If you're experiencing something similar, you're not alone. If you're wondering how to navigate this in your own practice or organization, I'm wondering too.
We can figure it out together.