How to avoid the AI panic
The AI analyst running Hyperdimensional offers a "theory of waves" to understand AI's upcoming impact on humanity
Dean Ball’s “wave theory” matters because it is not primarily a theory about technology. It is a theory about how human beings awaken to the significance of a technology.
In his essay, Ball argues that transformative ideas spread through society like waves: not everyone grasps their meaning at the same time, and when the wave finally reaches new groups, it often hits with much greater force. In the case of AI, the point is that recognition of its civilizational importance arrives unevenly, but once it does, politics, institutions, and public imagination can shift very quickly.
That is an important lens for understanding AI’s impact on humanity because it helps explain why public debate can look strangely delayed and then suddenly feverish. For a while, AI seems like a niche concern for engineers, founders, and a handful of policy people.
Then a capability jump, a public shock, or a visible crisis causes the wave to break much farther out, and suddenly business leaders, governments, and ordinary citizens realize they are not dealing with just another productivity tool. Ball’s claim is that awareness of advanced AI grows in amplitude as it diffuses, meaning later waves can be more socially and politically disruptive than earlier ones.
From a Catholic perspective, that insight is extremely useful. It means the moral question is not only what AI is, but when societies finally understand what it is doing.
Human beings and institutions are often late to recognize the full meaning of a revolution. By the time the wave reaches them, the technology may already be reshaping labor, security, culture, and even the way people imagine the human person. Ball uses the case of Anthropic’s unreleased model Mythos to argue that such moments can trigger a reset in policy and politics, precisely because more people are suddenly “hit by the wave” and forced to reckon with catastrophic risks and the likely role of the state.
That is why Ball’s framework matters for humanity as a whole. It suggests that AI’s impact will not be linear or smooth. It will come through shocks of recognition. One implication is that both denial and panic are temptations.
Ball explicitly argues that underrating frontier AI is a mistake, but so is overreaction, especially if fear hands excessive control to state actors without clear boundaries or public accountability. He calls instead for a structured, publicly legible, and clearly bounded role for government, especially around dangerous capabilities such as cyber vulnerability discovery.
For Catholics, the lesson is sharp. We should not wait for the wave to crash before speaking clearly about the human person, the common good, and the moral limits of technological power.