Can AI make us dumber or brighter?
An important study claims that “big” AI can perpetuate ignorance, while small AI can increase knowledge
A recent, little-known but quite important study from the National Bureau of Economic Research concludes that the current, most popular AI tools are not helping to increase human knowledge; on the contrary, they may be perpetuating errors and hiding significant pieces of knowledge.
The study, called How AI Aggregation Affects Knowledge, set out to answer a simple question: how do humans learn together when AI steps in as a super-smart middleman that summarizes what everyone believes, and that summary then gets fed back to us as “new information”?
Using a complex mathematical model (those curious can check it directly via the link), the study examined how AI behaves when it learns, summarizes, and then shares human knowledge. The process works like this:
AI watches everyone’s beliefs, “trains” on them (like how ChatGPT or similar systems learn from huge datasets), and spits out a polished, synthesized version of “what the crowd thinks.”
That AI output then gets sent back to the people, who treat it as fresh info and update their own views.
Over time, the AI keeps retraining on the new mix of human + previous-AI beliefs. It’s a loop: humans → AI summary → humans update → AI retrains, and so on.
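The loop above is easy to sketch in code. The following is a deliberately minimal toy, not the paper's actual model: each person holds one numeric belief, the "AI" is just an average of those beliefs, and TRUST_IN_AI is an invented mixing weight.

```python
import random

random.seed(0)

TRUTH = 1.0        # the ground-truth quantity everyone is estimating
N_AGENTS = 50
ROUNDS = 30
TRUST_IN_AI = 0.3  # assumed weight agents put on the AI summary

# Each person starts with a noisy private estimate of the truth.
beliefs = [TRUTH + random.gauss(0, 0.5) for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    # 1. AI "retrains" on the crowd: here, a plain average of beliefs.
    ai_summary = sum(beliefs) / len(beliefs)
    # 2. The summary is broadcast back; people treat it as fresh info
    #    and nudge their own belief toward it.
    beliefs = [(1 - TRUST_IN_AI) * b + TRUST_IN_AI * ai_summary
               for b in beliefs]

print(f"AI summary after {ROUNDS} rounds: {ai_summary:.3f}")
print(f"error vs. truth: {abs(ai_summary - TRUTH):.3f}")
```

Note what this toy already hints at: the loop never adds information. Everyone just converges on the initial crowd average, errors included, which is exactly the flavor of risk the study formalizes.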
Study findings
1. Speed of the AI matters a lot:
If the AI updates too quickly (rapidly absorbing and reflecting whatever the crowd is saying right now), the researchers couldn’t find any reliable way to tune it so that it improves learning. In fact, it often makes the group converge on beliefs that are farther from the truth.
But if the AI updates slowly enough, then yes: there are smart ways to set counterweights so that the AI reliably helps the group learn more accurately across many scenarios.
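To make the "speed" knob concrete, here is a hypothetical sketch where retrain_rate controls how fast the AI absorbs the current crowd mean. The mixing weights (0.6/0.2/0.2) and noise levels are invented for illustration, and this toy does not reproduce the paper's result; it only shows where the parameter lives in the loop.

```python
import random

def run_loop(retrain_rate, rounds=200, n_agents=50, seed=1):
    """One run of the toy feedback loop.

    retrain_rate: how fast the AI absorbs the current crowd mean
                  (close to 1 = fast AI, close to 0 = slow AI).
    """
    rng = random.Random(seed)
    truth = 1.0
    beliefs = [truth + rng.gauss(0, 0.5) for _ in range(n_agents)]
    ai = sum(beliefs) / n_agents

    for _ in range(rounds):
        crowd_mean = sum(beliefs) / n_agents
        ai = (1 - retrain_rate) * ai + retrain_rate * crowd_mean
        # Each agent mixes: own belief, a fresh noisy signal, the AI summary.
        beliefs = [0.6 * b
                   + 0.2 * (truth + rng.gauss(0, 0.5))  # fresh evidence
                   + 0.2 * ai                           # AI feedback
                   for b in beliefs]
    return abs(ai - truth)

print("fast AI error:", round(run_loop(0.9), 3))
print("slow AI error:", round(run_loop(0.05), 3))
```

The study's point is about tunability: in the fast regime there is no reliable setting of the counterweights that guarantees better learning, whereas a slow-enough AI leaves room to choose them well.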
2. Local/specialized AI is better than one giant global AI
Local AIs (ones trained only on nearby people or on one specific topic, like a medical AI using only doctor data, or a neighborhood-forum AI) consistently help everyone get closer to the truth, no matter the situation.
Swapping out those specialized local AIs for a single all-purpose global one makes learning worse: the group ends up more wrong or more confused about certain things.
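The local-versus-global contrast shows up even in a toy model. Below, two communities estimate different community-specific truths (the values, noise levels, and mixing weight are all invented for illustration): local AIs summarize each community separately, while a single global AI averages everyone together and drags both groups toward a middle answer that is wrong for each of them.

```python
import random

def simulate(local_ai, rounds=40, n=30, seed=2):
    """Two communities, each estimating its own community-specific truth."""
    rng = random.Random(seed)
    truths = [0.0, 2.0]  # two topics with different right answers
    groups = [[t + rng.gauss(0, 0.3) for _ in range(n)] for t in truths]

    for _ in range(rounds):
        if local_ai:
            # One specialized AI per community.
            summaries = [sum(g) / n for g in groups]
        else:
            # One global AI trained on everyone at once.
            overall = sum(sum(g) for g in groups) / (2 * n)
            summaries = [overall, overall]
        # Members nudge their beliefs toward their AI's summary.
        groups = [[0.7 * b + 0.3 * s for b in g]
                  for g, s in zip(groups, summaries)]

    # Error of each community's final average vs. its own truth.
    return [abs(sum(g) / n - t) for g, t in zip(groups, truths)]

print("local AIs errors:", [round(e, 3) for e in simulate(local_ai=True)])
print("global AI errors:", [round(e, 3) for e in simulate(local_ai=False)])
```

With these numbers the global AI pulls both communities toward roughly the midpoint, so each ends up about 1.0 away from its own truth, while the local summaries stay within the initial sampling noise.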
Why is this important?
- AI-generated content is already becoming training data for future AI. The study shows this loop can widen the “learning gap” instead of closing it, making society as a whole less able to figure out what’s true.
- Fast-moving, always-updating AI (like real-time models or rapid fine-tuning) is risky. If companies keep making AI that retrains super quickly on the latest internet chatter, it becomes harder to design it in a way that genuinely helps collective understanding. It might amplify fads, echo chambers, or even small errors until they look like “consensus.”
- One-size-fits-all global AI could hurt accuracy. The study suggests we’d be better off with many smaller, topic-specific or community-specific AIs rather than replacing them with one giant system. A global AI might smooth things out overall but make us systematically worse at certain topics or in certain communities.
Bottom line: The study says AI isn’t automatically a neutral tool for “wisdom of the crowd.” When its outputs loop back as training data, the speed and scope (global vs. local) decide whether it makes humanity smarter or dumber in the long run.


