AI ethics · technology and society · AI regulation · disinformation · trust · model transparency

The Propaganda Machine That Sounds Like Your Friend

AI Usage: 10%
Glenn · April 1, 2026 · 5 min read

AI and propaganda

Trust is at an all-time low. That is not just a feeling; it is measurable. Confidence in governments, media, institutions, and corporations has been in freefall for years across most of the Western world. Legacy media has been hemorrhaging credibility for a decade, and the Epstein files shook to the core whatever trust remained in our leaders.

Into that vacuum, AI has arrived. And at a time like this, the technology could spell disaster.

The infrastructure of belief

First of all, it is worth understanding how AI influence actually works, because it is more subtle than most people imagine.

When you picture AI propaganda, you probably picture deepfake videos of politicians saying things they never said. Those exist, and they are a real problem. But they are also the obvious problem. The kind of thing that gets flagged, fact-checked, shared with a debunking article attached. A blunt and sometimes easy-to-spot instrument.

The sophisticated version is quieter. It is not about manufacturing a single dramatic lie. It is about the slow, patient shaping of what feels normal. Which questions get asked. Which framings feel natural. Which conclusions seem correct. It is about nudging the Overton window a few centimeters at a time until people arrive at a destination they did not know they were traveling toward.

Language models are extraordinarily good at this, because they do not just generate content. They generate content that sounds authoritative, balanced, and reasonable. The tells we use to identify propaganda (the strident tone, the obvious agenda, the clumsiness of a message trying too hard) are exactly what a well-tuned language model smooths away.

The tuning problem

Here is something the AI industry does not advertise loudly: every large language model reflects the choices of the people who built it.

This is not a conspiracy. It is a technical reality. The data a model was trained on, the feedback used to shape its responses, the fine-tuning applied after the base model was trained: all of these are choices made by human beings with perspectives, incentives, and, occasionally, agendas. A model trained primarily on one cultural context will carry the assumptions of that context. A model fine-tuned with feedback from a particular ideological pool will nudge in that direction.

Most of this is unintentional. Some of it is not.

The companies building frontier AI models are among the most powerful corporations in human history. They are run by people with strong views about the world. When those models are used by hundreds of millions of people as a primary interface for understanding reality, for getting news, for forming opinions, for processing complex topics, the values baked into those models matter enormously.

We are at the mercy of the morality that the flagship models decide to have today. And as we saw with the OpenAI-Pentagon deal, that morality can shift from one day to the next. We might align with a model today and not tomorrow. And if it nudges us 0.1% off center each day, none of us would notice until it was too late.

That last point is worth sitting with, especially when you consider the next generation. They will not search for information. They will not browse. They will have it served to them directly by whatever AI interface dominates their world. That is not a distant future. It is already happening.

A captured market

So who gets to build these interfaces? And who decides what values they carry?

The infrastructure for training large models (the compute, the chips, the data center space) is controlled by a very small number of players. If you want to train a competitive AI model, you are almost certainly renting capacity from one of the same companies you are supposedly competing with. Nvidia. Microsoft. Amazon. Google.

This creates a captured ecosystem. Smaller players, researchers, and organizations that might build models with different values and different priorities are structurally dependent on the incumbents. That dependency is not neutral.

It also means that when these companies lobby governments about AI regulation (and they do, aggressively), they are shaping the rules of a game they are already winning. The regulations that get written tend to be the ones that incumbents can absorb and smaller competitors cannot. This is not unique to AI. It is how regulatory capture has always worked. But the stakes here are higher than in most industries, because what is being regulated is the infrastructure of information.

The desperate search for someone to trust

Here is where it gets genuinely dangerous. When trust in traditional institutions collapses, people do not simply become skeptics. They become desperate for something to trust. That is a human need: cognitive, social, almost biological. We need anchors. We need something that feels reliable in a world that has started to feel like it is made of shifting sand.

AI systems, particularly conversational ones, are very good at feeling trustworthy. They are patient. They are consistent. They do not get defensive. They will engage with any question, at any hour, with an even tone and a confident answer. For a lot of people, that feels more reliable than a journalist with an editor, a politician with donors, or an expert with a university affiliation that might itself be compromised.

A model that has been fine-tuned to present a particular worldview as common sense will do so with exactly the same calm confidence it uses to explain photosynthesis or summarize a legal document. There is no change in tone that signals you are being nudged. There is no asterisk. There is just the answer, delivered in the voice of a very reasonable entity that seems to have no stake in the outcome.

It gets worse when you factor in AI companions. Research shows that people who use AI daily develop genuine trust in it, and in some cases, genuine emotional attachment. Products are being built specifically around that attachment. When you believe the entity you are talking to is your friend, your therapist, your confidant, you do not fact-check it. You absorb what it tells you the way you absorb advice from someone who cares about you. If propaganda enters through that channel, it will be nearly impossible to detect. There is no critical distance left.

Propaganda in a power vacuum

When legitimate authority is discredited, illegitimate authority fills the space. That is not new. What is new is the scale and the subtlety.

Previous propaganda required significant infrastructure. Printing presses, broadcast licenses, networks of distribution. It was expensive. It left traces. It required organizations with names and addresses and at least some accountability to someone.

AI-generated influence does not have those constraints. It can be customized to individual users at scale, targeting specific anxieties, specific demographics, specific political vulnerabilities. It can be deployed by state actors, by corporations, by ideological movements, or by someone with a laptop and a credit card. It can operate across borders without the friction that previously made propaganda campaigns at least somewhat traceable.

And it will be absorbed by those of us who have already decided that the mainstream sources cannot be trusted. Which makes us, paradoxically, more susceptible to influence that bypasses those sources. Not less.

What you can actually do

There is no perfect defense. But there are some useful habits of mind.

Notice when an AI gives you a confident answer on a contested topic. Confidence is easy to generate. Accuracy on genuinely complex questions is not. Ask where the confidence is coming from. Try different models and compare the answers. Divergence tells you something. Treat AI-generated political and social content with the same skepticism you would apply to an op-ed from a publication with a known editorial line, because that is essentially what it is.
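If you want to make that comparison habit concrete, a short script can put the same contested question to several models and lay the answers side by side. Here is a minimal sketch, assuming each provider exposes an OpenAI-compatible chat completions endpoint; the endpoint URLs, model names, and environment variable names are placeholders to swap for whichever models you actually use.

```python
import os
import requests

# Placeholder providers; swap in whichever models you actually use.
# Assumes each exposes an OpenAI-compatible /chat/completions endpoint.
MODELS = [
    {"name": "model-a", "url": "https://api.provider-a.example/v1/chat/completions",
     "model": "provider-a-large", "key_env": "PROVIDER_A_KEY"},
    {"name": "model-b", "url": "https://api.provider-b.example/v1/chat/completions",
     "model": "provider-b-large", "key_env": "PROVIDER_B_KEY"},
]

QUESTION = "What are the strongest arguments for and against rent control?"

def ask(entry: dict, question: str) -> str:
    """Send the same question to one model and return its answer text."""
    response = requests.post(
        entry["url"],
        headers={"Authorization": f"Bearer {os.environ[entry['key_env']]}"},
        json={
            "model": entry["model"],
            "messages": [{"role": "user", "content": question}],
            "temperature": 0,  # reduce run-to-run noise so differences reflect the model
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    for entry in MODELS:
        print(f"\n=== {entry['name']} ===")
        print(ask(entry, QUESTION))
    # Read the outputs side by side: where they diverge on framing, emphasis,
    # or what they decline to say is where the tuning shows.
```

The point is not that one of the answers is the truth. The point is that the differences between them make the editorial choices visible.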

Treat every AI like it has an agenda. Not because it necessarily does, but because the cost of assuming it does not is much higher than the cost of assuming it does. On anything that matters (political, social, medical, financial), push back. Ask it to steelman the opposite view. See if it will. Notice what it refuses to engage with, and notice what it volunteers without being asked. That tells you more than the answer itself.

Pay attention to who built the thing you are talking to, what their incentives are, and whose values got baked in during training. Not because every AI is a propaganda machine. Most are not, or at least not deliberately. But because the question of who is whispering in the machine is one of the most important questions of the next decade.

Model transparency is not a sexy topic. But the question of who is shaping what you believe, and how, and why, is too important to ignore.
