Who should control the development of AI?
As AI accelerates, some smart minds are proposing moderate government intervention
The growing debate over who should control the most powerful forms of AI shows that we are entering a new phase of the technological revolution: one in which raw innovation is no longer the only question.
The issue was most recently discussed by The Free Press in an article titled: “Who Should Control AI’s Most Dangerous Secrets?”
The article's deepest question is one of authority. When frontier systems become powerful enough to expose major cyber vulnerabilities, threaten whole sectors, and unsettle even their creators, it is no longer irrational to ask whether private industry alone should be making the most consequential decisions.
That is why some experts are invoking the Manhattan Project, the World War II U.S. government program that secretly brought together scientists, industry, and the military to build the atomic bomb before Nazi Germany could.
The comparison is not perfect, but the instinct behind it is understandable: when a technology becomes potentially civilization-shaping, governments cannot pretend they have no responsibility. At the same time, it would be a mistake to hand everything over to the state as if bureaucracy were inherently wiser than free human creativity.
Catholics resist both idols: the idol of the market and the idol of the state. Private industry can move fast, invent boldly, and solve real problems. Government can set limits, protect the common good, and restrain reckless incentives.
The “sweet spot” seems to be a serious, morally grounded partnership in which innovation remains alive, but power is answerable to public authority and ordered toward the dignity of the human person.
That is why Ryan Fedasiuk's formulation, as reported by The Free Press, is probably the sanest one:
“The answer lies somewhere between two extremes. It’s a mistake to try to lock this technology in a box, and it’s just as dangerous to pretend the market alone will sort it out.”
From a Christian perspective, that balance matters because the final standard is not speed, profit, or geopolitical advantage. It is whether this technology remains under genuinely human and moral control. AI is too powerful to be left to corporate appetite alone, but too dynamic to be smothered by a clumsy state monopoly. The task now is not to choose between Caesar and Silicon Valley. It is to insist that both serve the human person, not the other way around.