The moral paradox of LLM technology
Hard-coding morality into an AI sounds reasonable. It is probably impossible. And the attempt may be more dangerous than doing nothing at all.
Writing on AI risk, security, governance, and the systems we're building faster than we understand them.
Your voice, your face, your writing style: all of it can be cloned from what you have already posted online. The question is whether your grandmother would notice, or go broke.
We are about to flood the workforce with people who know how to talk to AI and have no idea how it works. We are calling that a skill. The security failures will follow.
A 12-year-old girl was fined $2,000 for downloading songs. Meta torrented 82 terabytes of pirated books and walked away with a court win. Welcome to fair use in 2026.
AI and propaganda. It is not about manufacturing a single dramatic lie. It is about the slow, patient shaping of what feels normal. Which questions get asked. Which framings feel natural. Which conclusions seem correct.