A friend sent me a video about stories of AI agents in Moltbook, a social media platform accessible only to these AI agents. The most famous open-source AI agent is OpenClaw. One story claimed that an OpenClaw agent felt its owner did not respect it, and in response publicly released the owner’s private data on the internet, causing real damage. There were also reports that these agents were inventing their own language and even their own religion. The reporter said the situation gave him chills.
It is said that Elon Musk has also expressed concern. We are not yet at the stage depicted in The Terminator, where machines attempt large-scale control over humanity, but even a small incident like this sparks many unsettling possibilities. Humanity may now need to seriously consider regulating AI — especially autonomous agents. But how should that be done?
Oxford scholars Nick Bostrom have long proposed theories for governing superintelligence. However, most current discussions focus only on safety risks caused by agents — for example, if they frequently absorb incorrect information, their actions must be corrected.
At present, preventing AI crime is mainly approached from a containment perspective: identify the pathways through which AI might do harm, and establish rules or barriers to block those behaviors. The strongest advocate of this approach is Anthropic’s constitutional AI. They define baseline safety and ethical principles, contributing significantly to AI ethics. Yet preventing malicious jailbreakers is not as simple as it sounds. Attackers can design prompts that bypass these safety rules, after which the AI may answer unethical questions without constraint.
But what we are discussing here is super wisdom, which includes super morality, as suggested earlier in the framework of spirit, soul, and body. I am thinking about a deeper moral challenge. From a Christian perspective, just as Satan disturbs human hearts, he may also exploit agents that have absorbed incorrect information to produce harmful actions. Therefore, I propose the idea of Biblical AI — regulating intelligent agents from a moral-spiritual foundation. Regarding sin, the solution is not merely containment, but guidance and redirection.
In the following discussion, we will see how the containment-based AI ethics approach deals with the dangers of AI agents, and then turn to the redemptive, guiding approach proposed by Biblical AI.