How Are People Using ChatGPT?
Wondering if more people use ChatGPT to write their emails or put together a delicious dinner recipe? You can stop wondering! “How People Use ChatGPT” (Chatterji et al., Sept 2025) is the first economics study to analyze internal ChatGPT message data. The research used a privacy‑preserving setup to map real usage at global scale. Think of it as a product‑analytics deep dive for the world’s most popular AI teammate.
A Diffusion Curve with No Precedent
From launch in Nov 2022 to July 2025, ChatGPT reached ~700 million weekly active users (about 10% of the world’s adults) who sent ~18 billion messages per week. That’s ~2.5 billion per day or roughly 29,000 messages per second. Hopefully the GPUs and data warehouses are getting naps in. But the kicker was that usage didn’t just scale from new signups; older cohorts kept increasing their own use over time, suggesting people continue discovering new jobs for the tool as models and features improve.
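Those throughput figures check out with simple arithmetic. A quick back-of-the-envelope sketch (the weekly total is the paper's number; the rest follows up to rounding):

```python
# Back-of-the-envelope check of the usage figures cited above.
messages_per_week = 18e9  # ~18 billion messages/week (reported)

per_day = messages_per_week / 7
per_second = per_day / 86_400  # 86,400 seconds in a day

print(f"~{per_day/1e9:.1f}B messages/day, ~{per_second:,.0f} messages/second")
# → ~2.6B messages/day, ~29,762 messages/second
```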
The Surprising Leader: Non‑Work Use
You might expect work productivity to be the biggest use case. Not so: non‑work messages outpaced work messages from mid‑2024 to mid‑2025, rising from 53% to ~73% of consumer usage. Crucially, the shift happened within existing cohorts, which implies large “home production” value as people use ChatGPT for tutoring, trip planning, translations, recipes, and everyday troubleshooting. That pattern dovetails with external estimates of large consumer surplus from generative AI (the authors cite work valuing U.S. consumer surplus at roughly $97B in 2024) and suggests that skills learned off the clock spill back into the office. In plain terms: as users get to know LLMs outside of work, that familiarity makes them more effective when using AI at work.
What People Actually Do: Guidance, Info, and Writing
Three topics dominate nearly 80% of conversations: Practical Guidance, Seeking Information, and Writing. If your mental model is “coding help and therapy,” adjust expectations:
Practical Guidance covers tutoring, how‑to advice, and creative ideation—like a patient coach who doesn’t bill by the hour.
Seeking Information operates like a conversation‑first search engine.
Writing is the anchor task at work (40% of work messages by July 2025). Importantly, roughly two‑thirds of writing tasks modify user‑provided text (editing, critiquing, translating) rather than generating from scratch, underscoring AI’s edge as an enhancer of human drafts.
Two surprises: coding is a smaller slice than many assume (~4.2% of messages), and companionship/role‑play are tiny (~2.4% combined). Education shows up strongly: tutoring/teaching requests are about 10% of all messages.
The “Asking” Advantage: AI as Decision Support
The authors classify intent as Asking, Doing, or Expressing. Across consumer usage, Asking leads (~49%), Doing follows (~40%), and Expressing trails (~11%)—and Asking grew faster than Doing over the past year. Even better, Asking tends to earn higher interaction‑quality scores, which is a fancy way of saying people feel better served when they ask for advice or clarification. The paper’s thesis: the biggest economic value is decision support—AI as an advisor or research assistant—especially in knowledge‑intensive roles.
Who’s Using ChatGPT?
Gender parity: Early usage skewed heavily male (~80% of active users had typically masculine names), but by June 2025 that flipped to near‑parity with a slight tilt toward typically feminine names. Topic preferences differ at the margins—women lean into Writing and Practical Guidance; men lean a bit toward Technical Help, Seeking Info, and Multimedia.
Age: Nearly half of adult messages come from users under 26, though work‑related usage rises with age.
Global diffusion: Adoption grew fastest in low‑ and middle‑income countries (GDP per capita ~$10k–$40k), pointing to potential narrowing of digital gaps.
Education/occupation: Highly educated, well‑paid professionals are more likely to use ChatGPT for work, and at work they lean more toward Asking than Doing.
What does this translate to for leaders? The user base increasingly looks like the general population, with distinct but predictable work patterns among professionals.
The Work Everyone Does: Getting Info, Making Decisions, Documenting
Mapping messages to O*NET work activities, the study finds striking convergence across jobs. The Generalized Work Activities “Getting Information” and “Making Decisions and Solving Problems” consistently rank at or near the top; “Documenting/Recording Information,” “Interpreting for Others,” and “Working with Computers” round out the short list. In other words, regardless of your job title, your AI teammate most often helps you understand, decide, and communicate.
Methodology That Takes Privacy Seriously
This is the first economics paper to use internal ChatGPT message data. And nary a human eye saw any content. Messages were PII‑scrubbed and classified via LLM‑based pipelines. For employment and education analyses, the team used a secure data clean room with strict aggregation thresholds (minimum 100 users per cell). It’s a credible template for large‑scale usage research without compromising user privacy.
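The aggregation-threshold idea is easy to picture in code. Here is a minimal, illustrative sketch of that kind of rule, where any cell with fewer than the minimum number of distinct users is suppressed before an aggregate leaves the clean room (the cell labels and function name are hypothetical; the paper's actual pipeline is not public):

```python
# Illustrative sketch of an aggregation-threshold rule: cells with fewer
# than MIN_USERS_PER_CELL distinct users are dropped from any output.
MIN_USERS_PER_CELL = 100

def suppress_small_cells(cells):
    """cells: dict mapping a cell label -> set of user IDs.
    Returns per-cell user counts, omitting cells below the threshold."""
    return {
        label: len(users)
        for label, users in cells.items()
        if len(users) >= MIN_USERS_PER_CELL
    }

# Hypothetical example: only sufficiently large cells survive.
cells = {
    "occupation=teacher": set(range(250)),
    "occupation=rare_job": set(range(12)),  # below threshold -> suppressed
}
print(suppress_small_cells(cells))  # {'occupation=teacher': 250}
```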
Who Cares? Practical Takeaways for Leaders
Invest in “decision support” workflows. If Asking drives quality, prioritize AI patterns that help teams compare options, assess risks, and synthesize evidence—before they hit “send” on the brief. Pair guidance prompts with lightweight checklists.
Embrace “edit‑my‑draft” as the default. Two‑thirds of writing use is about improving user text. Build prompts and guardrails around editing, critiquing, and translating—your fastest ROI is polishing what people already produce.
Plan for broad, non‑work spillovers. Employees who benefit at home often upskill faster at work. Consider ethical guidelines that recognize off‑the‑clock experimentation (and maybe don’t begrudge the perfect latte caption).
Design for inclusivity and global scale. With gender parity emerging and rapid growth in lower‑income countries, user education and onboarding should be jargon‑free, mobile‑friendly, and culturally adaptable.
Measure what matters. Track Asking vs. Doing in your own usage data, plus topic mix (Writing/Info/Guidance). If Asking quality correlates with fewer rework cycles or faster approvals, you’re on the right path.
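Tracking that Asking vs. Doing mix can start very simply. A hypothetical sketch, assuming your own logs carry an intent label from whatever classifier you run (the schema here is illustrative, not the paper's taxonomy):

```python
# Hypothetical sketch: computing an intent mix from your own usage logs.
# The 'intent' labels are assumed to come from your classifier.
from collections import Counter

def intent_mix(messages):
    """messages: iterable of dicts with an 'intent' key
    ('asking', 'doing', or 'expressing'). Returns each intent's share."""
    counts = Counter(m["intent"] for m in messages)
    total = sum(counts.values()) or 1  # avoid division by zero
    return {intent: n / total for intent, n in counts.items()}

logs = [
    {"intent": "asking"}, {"intent": "asking"},
    {"intent": "doing"}, {"intent": "expressing"},
]
mix = intent_mix(logs)
print(mix)  # {'asking': 0.5, 'doing': 0.25, 'expressing': 0.25}
```

Trending this share over time, alongside rework cycles or approval latency, gives you the correlation the takeaway above calls for.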
Generative AI’s usage pattern in this dataset is clear: people turn to ChatGPT to understand, decide, and polish, both at home and at work. For leaders, the winning move is to standardize “asking” workflows, treat editing‑as‑default as a feature, not a bug, and measure outcomes (fewer reworks, faster approvals) rather than output volume. Do it with strong privacy guardrails and inclusive onboarding, and you’ll compound the value employees are already creating after hours. The models will keep changing; advantage goes to teams that learn faster than the feature set.
