The Emotional Ecosystem of Propaganda: Digital Dynamics of Emotion, Technology, and Policy

Abstract

Contemporary propaganda thrives not on ideology but on emotion. This article introduces and applies the Emotional Ecosystem of Propaganda (EEP) framework (Maxwell 2025) to explain how fear, anger, and pride—augmented by hope, love, and empathy—fuel manipulative influence in today’s digital media landscape. We integrate interdisciplinary literature in communication, political psychology, and neuroscience to illustrate how emotional triggers hijack attention, how algorithmic and social feedback loops amplify these emotions, and how group identity solidifies misperceptions. Through case studies ranging from viral misinformation campaigns and microtargeted political advertising to empathy-based humanitarian appeals, we demonstrate the EEP model’s components in action. The analysis reveals a self-reinforcing cycle of emotional activation, rationalization, and social reinforcement that undermines critical thinking and pluralistic dialogue. We conclude with a discussion of ethical frameworks and concrete policy recommendations—such as transparency standards, algorithmic accountability, and public resilience-building—to mitigate the harms of manipulative emotional propaganda while harnessing prosocial emotional engagement for the public good.

Introduction

Around the world, policymakers and scholars are grappling with the proliferation of “post-truth” propaganda in digital ecosystems. Unlike the propaganda of the 20th century that often centered on coherent ideologies or overt censorship, modern propaganda inundates society via social networks, viral memes, and targeted advertisements that systematically exploit human emotions. Emotions like fear, anger, and pride serve as psychological hooks, grabbing attention and short-circuiting critical reasoning. Propagandists blend these visceral appeals with more “positive” emotions—hope, love, empathy—to broaden their appeal while masking manipulative intent. The result is a dynamic interplay of emotional triggers and technological amplification that sustains misinformation and extreme partisanship even in the face of contrary facts.

Brian Maxwell’s refined Emotional Ecosystem of Propaganda (EEP) framework offers a systematic way to understand this phenomenon. The EEP moves beyond linear models of influence (“Leader → Masses”) and instead conceptualizes propaganda as an interactive emotional ecosystem. In this ecosystem, propagandists strategically activate emotions, audiences actively participate in spreading content, and algorithmic platforms amplify the most engaging (i.e. emotional) messages. Feedback loops form as emotionally charged content circulates and intensifies, reinforcing beliefs and group identities. Over time, the public discourse can become dominated by what Maxwell calls a “constant emotional charge,” in which narratives thrive or fade based on emotional resonance rather than factual merit.

This article, aimed primarily at policymakers but with theoretical depth for scholars, develops the EEP framework into an academic analysis of how emotional mechanisms underpin propaganda in the digital age. We first review relevant literature on emotion in persuasion, social identity, and digital media dynamics to ground the framework empirically. We then explicate the components of the EEP model: the core emotional drivers (fear, anger, pride), the supplemental appeals to hope, love, and empathy, the rational-ideological veneer that gives emotional narratives an appearance of legitimacy, and the technological and social amplifiers that create a self-sustaining cycle. Through real-world case studies—including online disinformation networks, the Facebook–Cambridge Analytica microtargeting scandal, and viral advocacy campaigns—we illustrate each aspect of the model in practice. Finally, we discuss the ethical implications of the EEP, proposing criteria to distinguish benign emotional appeals from manipulative ones, and present actionable policy interventions to foster resilience against toxic propaganda. In doing so, we aim to equip both policymakers and academics with a deeper understanding of how propaganda weaponizes emotion, and how multi-pronged oversight can help protect the integrity of public discourse in an age of algorithmic amplification.

Literature Review: Emotion, Propaganda, and Digital Media

Emotions as Drivers of Persuasion: Decades of research in psychology and communication have demonstrated that emotional appeals can be extraordinarily persuasive, often more so than rational arguments. Fear appeals, for instance, have been shown to influence attitudes and behaviors under certain conditions (Tannenbaum et al. 2015). By triggering alarm and a sense of threat, fear can bypass deliberative thought and prompt immediate action; in evolutionary terms, a fearful stimulus demands quick response rather than careful analysis. Neuroscientific studies underscore the role of the amygdala in processing threats and negative stimuli, linking higher sensitivity to fear with differences in political attitudes (Oxley et al. 2008; cf. Kanai et al. 2011). Anger, meanwhile, tends to confer confidence and urgency—Lerner and Tiedens (2006) found that anger can create optimism bias and risk-seeking, emboldening people to support aggressive actions. Pride, as a positive self-conscious emotion, has received less experimental attention, but political psychology observes that appeals to group pride or nationalist sentiment can strongly cement in-group loyalty (Smith et al. 2008). These insights affirm a core premise of EEP: emotions can rapidly shape perceptions of credibility and urgency, often irrespective of the quality or logic of the information.

Cognitive Biases and Misinformation: Emotional arousal often intertwines with well-documented cognitive biases that make misinformation “stickier.” Confirmation bias leads individuals to favor information that validates their pre-existing beliefs and to discount contradictory evidence. Once an emotionally charged belief is internalized, people selectively attend to supporting anecdotes or data. Studies in political communication have found that corrections frequently fail to reduce misperceptions among the intended audience and can even backfire by reinforcing the initial false belief (Nyhan and Reifler 2010). In one experiment, providing a factual correction to a misinformation item not only did not change the minds of strongly committed partisans but sometimes increased their belief in the misinformation (the “backfire effect”). The EEP framework builds on this by suggesting that an emotional investment in a narrative creates social and psychological incentives to reject corrections—admitting error may threaten one’s group identity or sense of security. Indeed, social identity theory (Tajfel and Turner 1979) posits that a portion of one’s self-concept derives from group memberships. People have a motivation to preserve a positive social identity, which can mean clinging to narratives that cast their in-group as righteous, even in the face of contrary evidence. When propaganda offers a sense of belonging or moral superiority, individuals may interpret attempts to debunk it as personal or group attacks. This dynamic helps explain the “sealed” nature of many propaganda-driven worldviews: believers treat disconfirming information as suspect, coming from “outsiders” who don’t share the in-group’s values.

Digital Media Algorithms and Echo Chambers: The rise of algorithm-driven platforms has drastically altered the scale and speed at which emotional content spreads. Engagement algorithms on platforms like Facebook, Twitter, and YouTube prioritize content that elicits strong reactions—whether likes, shares, comments, or viewing time. Research by Brady et al. (2017) found that the presence of moral-emotional language in tweets increased their retweet rate by roughly 20% per additional moral-emotional word, on average. In other words, posts that expressed outrage or heartfelt moral sentiment (for example, indignation at an injustice) spread farther and faster within homogeneous networks than more neutral posts (Brady et al. 2017). This leads to algorithmically sustained “echo chambers” (Sunstein 2017) where users are continually fed content that resonates with their existing emotions and views. Personalization filters reinforce this effect: if a user frequently engages with content asserting a particular conspiracy theory or partisan narrative, the platform’s recommendations will funnel more of the same to that user. Over time, individuals may rarely encounter dissenting perspectives in their feeds, a phenomenon observed in both observational studies and experiments on social media behavior (Cinelli et al. 2021). The EEP framework incorporates these findings by describing technology as an amplifier and accelerator of emotional propaganda. The “networked” nature of propaganda means that everyday users—through retweets, shares, and comments—become unwitting propagators of the message, adding their own emotional interpretations as fuel. As we will discuss, this participatory aspect makes modern propaganda part bottom-up and part top-down, blurring the line between orchestrated manipulation and organic grassroots passion.
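
To make the scale of that effect concrete, the sketch below works through the reported ~20%-per-word increase under a simple multiplicative model. The baseline retweet count and the model itself are our own illustrative assumptions, not Brady et al.’s specification.

```python
# Back-of-the-envelope illustration of the ~20%-per-word effect reported by
# Brady et al. (2017), assuming a multiplicative model and a hypothetical
# baseline retweet count. Illustrative only, not the authors' actual model.

BASELINE_RETWEETS = 100      # hypothetical baseline for a neutral post
EFFECT_PER_WORD = 1.20       # ~20% increase per moral-emotional word

def expected_retweets(n_moral_emotional_words: int) -> float:
    """Expected retweets under the assumed multiplicative model."""
    return BASELINE_RETWEETS * EFFECT_PER_WORD ** n_moral_emotional_words

for n in range(6):
    print(f"{n} moral-emotional words -> ~{expected_retweets(n):.0f} retweets")
# 0 -> 100, 1 -> 120, 2 -> 144, 3 -> 173, 4 -> 207, 5 -> 249
```

Even under these toy assumptions, a post carrying four or five moral-emotional words is expected to spread more than twice as far as a neutral one, which is why outrage-laden phrasing is so consistently rewarded.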

Propaganda and Democratic Discourse: Scholars of media and democracy have raised alarms about the implications of emotionally charged misinformation for civil society. A predominance of fear and anger in public discourse correlates with polarization and distrust (Iyengar et al. 2019). Intense negative partisanship—when political affiliation is driven less by support for one’s own side than by hatred of the other side—is both fueled by and a driver of emotional propaganda. Meanwhile, positive emotional appeals can also be co-opted. Hope and empathy are pillars of healthy social movements and pro-social campaigns, but propaganda can mimic these to create false legitimacy (Stanley 2015). For example, a disinformation campaign may adopt the veneer of humanitarian concern (“think of the children!”) to justify extreme policies or to demonize opponents as heartless. This complicates the task of regulation: not all emotional appeals are harmful, and indeed democratic politics relies on a certain amount of passion to mobilize participation. The challenge for policymakers is discerning when emotional messaging crosses into deception or manipulation. As a framework, EEP contributes to this need by outlining specific criteria (intent, truthfulness, transparency, outcome) to evaluate emotional communications. Before turning to those normative criteria, we will map out the inner workings of the emotional ecosystem itself—laying out how each component operates and reinforces the others.

In summary, existing literature indicates that (a) emotional stimuli have outsized influence on attention and memory, (b) individuals in emotionally aroused states or tight in-groups are less receptive to counter-arguments, and (c) digital media architecture magnifies the reach of emotional content. The EEP framework synthesizes these insights to explain contemporary propaganda’s power. We next present the framework in detail, dissecting its components and illustrating them with concrete examples.

The Emotional Ecosystem of Propaganda (EEP) Framework

The Emotional Ecosystem of Propaganda framework conceives of propaganda as a cycle of emotional activation and reinforcement rather than a one-directional message from propagandist to public. Figure 1 (conceptual diagram) can be envisioned as a loop: emotional triggers → public reaction → algorithmic amplification → social reinforcement → feedback to propagandist, which refines further emotional triggers. In this section, we break down the EEP into its key components: (1) core emotional drivers (fear, anger, pride), (2) an expanded palette of positive or prosocial emotions (hope, love, empathy) that propagandists also harness, (3) the rational-ideological overlay that gives the emotional appeals an appearance of legitimacy or logic, and (4) technological and social amplifiers that turn individual reactions into large-scale movements. Throughout, we integrate case study illustrations to show these elements in action.

Core Emotional Drivers: Fear, Anger, and Pride

Maxwell’s model centers on three potent emotions—fear, anger, and pride—as the core drivers of propaganda’s power. These emotions are repeatedly invoked because they serve distinct but complementary functions in securing audience compliance and enthusiasm. Each creates a different psychological pressure point:

  • Fear is used to grab attention and induce a sense of threat. In the EEP, fear is the entry point: it makes an audience feel endangered or at risk, thus overriding other considerations. Fear tactics may involve alarmist narratives about public safety, health, or security. For example, a viral disinformation campaign in 2020 circulated false rumors of impending nationwide lockdowns and military enforcement during the COVID-19 pandemic, causing panic-buying and distrust in official information. By exaggerating or fabricating threats, propagandists exploit fear to disrupt critical thinking. Psychologically, fear triggers the amygdala’s fight-or-flight response, narrowing individuals’ focus to the source of threat. Under these conditions, people become more susceptible to authoritative guidance or simple solutions that promise safety. In propaganda, the function of fear is to create a population of anxious followers looking for a protector or savior. Mechanism: Repetition of frightening claims (e.g. “crime is skyrocketing in your city” or “a deadly disease is being covered up”) causes chronic anxiety. Even if the claims are unsubstantiated, the emotional impact accumulates. Outcome: Audiences internalize a sense of crisis. They may come to see the propagandist (or whoever echoes the fear message) as an indispensable guardian. Alternative voices that attempt to downplay the threat are often ignored or seen as naïve. Over time, fear-based propaganda can normalize extreme measures (surveillance, crackdowns, even violence) as acceptable protections against the inflated threat.

  • Anger is marshaled once fear has set the stage. If fear asks “what’s going to happen to us?”, anger offers an answer: “it’s their fault.” Anger focuses attention on a target or culprit and galvanizes people to action. In propaganda, anger often manifests as hate speech, scapegoating, or calls for punitive justice. For instance, the “Pizzagate” conspiracy theory in 2016 falsely alleged that a Washington D.C. pizzeria was harboring a child-trafficking ring linked to political figures. This outrageous claim (fueled by fear for children’s safety) quickly morphed into anger toward the supposed perpetrators, leading one believer to attack the restaurant with a firearm. Anger in the EEP serves to mobilize and unify an in-group against an out-group. By channeling the diffuse anxiety induced by fear into a concrete antagonist (“the corrupt elite,” “immigrants taking our jobs,” “traitors among us”), propagandists create an “us vs. them” mentality. Mechanism: Anger-based messages use moral outrage—language of betrayal, injustice, and barbarity attributed to the out-group—to raise indignation. Social media posts heavy with such content tend to go viral as like-minded users amplify their collective outrage. Notably, anger can be addictive in a social sense; expressing anger can garner social reward in echo chambers (likes, agreement) which reinforces one’s position. Outcome: Anger justifies aggressive behavior and silences empathy. Once anger is stoked, audiences may support or even participate in harassment campaigns, stringent punishments, or exclusionary policies. The emotional high of righteous anger also deepens loyalty to the cause or leader that validates that anger, while further vilifying anyone outside the circle.

  • Pride is leveraged to cement loyalty and a sense of superiority or virtue. Pride in this context is not just personal pride but collective pride—patriotism, religious pride, cultural pride, ideological pride. Propagandists appeal to pride by flattering the audience: “you are part of a great nation/faith/movement that is morally or historically exceptional.” Pride has a binding function in the EEP. It makes followers feel that supporting the propagandist’s message is a mark of honor or nobility. For example, state propaganda in authoritarian regimes often emphasizes national pride and a narrative of unique greatness (e.g., slogans about being the “true defenders” of a cherished value or the “real patriots”). In democratic contexts, political movements both left and right use pride to rally bases—consider how campaign rhetoric frames supporters as the “true Americans” or “champions of justice,” implicitly casting opponents as less virtuous. Mechanism: Prideful propaganda glorifies symbols, history, or values and ties them to the cause. Maxwell (2025) notes the use of selective historical narratives—myths of a golden past or heroic victories—to imbue followers with a sense of participating in something grand. Outcome: Pride reinforces group cohesion and resistance to criticism. When people are proud of their identity as “members” of the propaganda-fed group, they react defensively to outside critique. Dissenting information is not just factually disputed; it is interpreted as an attack on the group’s honor. Thus, pride helps “lock in” beliefs by making them integral to one’s positive self-image. It also encourages public displays of loyalty (e.g., sharing propaganda content to demonstrate one’s alignment), further spreading the message.

Within the EEP, these three emotions often operate in sequence and synergy. Fear alerts and unsettles people, anger directs them toward a target and an action, and pride reassures them that they are on the right side. Together, they create what Maxwell terms an “emotive triad” that can be far more persuasive than any logical argument or evidence. Empirical observations of online extremist communities show this pattern clearly: first, warnings of existential threats (fear), then blaming of an enemy (anger), then self-congratulation about belonging to an enlightened or virtuous group (pride). Once this cycle is established, it tends to be self-perpetuating. As one component intensifies, it reinforces the others—fear of outsiders increases group pride; shared pride makes the out-group seem more threatening, which in turn can spark new anger if that out-group is seen as challenging the in-group’s status, and so on. The emotional triad thus forms the beating heart of the propaganda ecosystem.

Beyond Fear and Anger: Hope, Love, and Empathy as Propaganda Tools

While fear, anger, and pride are often viewed as negative or divisive emotions, the EEP framework emphasizes that propaganda also harnesses ostensibly positive emotions to broaden its appeal. Hope, love, and empathy can be double-edged swords: they inspire and unite, but when manipulated, they can mislead and entrench false narratives just as effectively.

  • Hope – the promise of a better future or imminent victory – is a powerful motivator. Hope is typically considered constructive, and indeed many social movements use hope to galvanize change. However, propagandists can wield hope to make extreme or baseless claims attractive. Mechanism: The propagandist paints a rosy picture (“Make X Great Again” or a utopian vision of society after eliminating some “problem”) and positions their agenda as the only path to that future. Importantly, hope in propaganda is often paired with fear: a looming catastrophe and a singular solution. For example, a leader might say: “We are on the brink of disaster (fear), but if you give me power, I alone can fix it and restore glory (hope).” This one-two punch creates a dependency; people cling to the hopeful solution because the alternative (the fearful scenario) is intolerable. Outcome: Audiences under the spell of hope may suspend skepticism. The excitement of a promised reward – economic revival, moral renewal, etc. – can override questions about feasibility or truth. In political advertising, microtargeting campaigns have used tailored optimistic messages to specific voters (e.g., promising economic resurgence in a depressed factory town) to win trust, even if the broader campaign rhetoric is divisive. Hope propaganda often results in disillusionment if promises fail, but propagandists may continually extend the horizon of hope (“true change takes time…”) to keep followers engaged.

  • Love – here referring to social or patriotic love, devotion to a leader or cause – can manifest as “love of the in-group or leader.” Authoritarian regimes frequently cultivate a personality cult where the leader is to be loved as a paternal or maternal figure. Democratic movements too sometimes build campaigns around highly charismatic figures with devoted followings. In propaganda terms, love is invoked through messages of unity, solidarity, and care: “We are all one family/nation; our leader cares for us like a parent; we must care for each other and our cause.” This sentiment can indeed foster positive community, but propagandists can also use it to smother dissent. Mechanism: Language of familial or unconditional love is attached to the leader or ideology (“Our Dear Leader,” “the Motherland,” “brothers and sisters in this movement”). Rituals, songs, and symbols reinforce affectionate loyalty (think of mass spectacles or even hashtag trends that lavish praise). Outcome: “Love” propaganda encourages a form of emotional loyalty that makes critical thought seem like disloyalty or ingratitude. Followers come to feel that questioning the leader or cause is an act of betrayal, since they have bound their identity in a relationship of trust and reverence. This dynamic has been observed in cults and extremist groups where members endure hardships yet remain committed due to emotional bonds. In political contexts, it can lead to a scenario where a leader’s misconduct is forgiven by supporters (“he has our best interests at heart”), and opponents are derided as having no heart or being hateful. Love in propaganda thus can shield leadership from accountability and encourage a forgiving, even zealous attitude among followers.

  • Empathy – the ability to feel others’ pain – is typically a virtue. However, empathy-based appeals can be highly selective and manipulative in propaganda. Propagandists may spotlight particular victims or injustices intensely to evoke public compassion, while ignoring or downplaying other contexts that would paint a more complex picture. A classic example is a viral humanitarian appeal: a single poignant image or story (often a child in distress) is shared widely to rally support for a cause. In 2015, the photograph of a drowned Syrian toddler, Alan Kurdi, sparked global empathy and influenced European refugee policy discussions. While that empathy was genuine and arguably positive, propagandists can use similar tactics for ulterior motives. They may present one group’s suffering in a vacuum to justify extreme measures (“look at these victims of Opponent X; any action is justified to save them”). Mechanism: Identifiable victims and emotional anecdotes are deployed to create an immediate empathetic reaction. Social media again plays a role: platforms favor emotionally gripping personal stories, which go viral more readily than abstract analysis. Outcome: Empathy-driven propaganda can generate strong public support for interventions or policies without a full consideration of facts. For instance, a fabricated or staged atrocity story can compel citizens to back military action or draconian laws out of compassion for victims who may not even exist as described. Moreover, empathy is selective: focusing on one group’s suffering can blind the public to other groups’ suffering (including those caused by the proposed solution). In conflict propaganda, each side highlights its own civilian casualties to justify further aggression, appealing to the empathy of their base while dismissing the enemy’s pain. Thus, empathy—when channeled through a biased lens—“bypasses scrutiny,” as Maxwell writes, making people feel morally righteous in supporting a narrative that might be incomplete or deceptive.

It is crucial to note that in the EEP model, positive emotions do not operate in isolation from the negative triad. Rather, they complement and modulate the fear-anger-pride cycle. A propagandist might oscillate: after inducing fear and anger, they offer hopeful or loving messages to prevent burnout and to cast the campaign as not purely negative. For example, a political demagogue may thunder about threats (fear) and corrupt enemies (anger) in one speech, and in the next breath extol the “love for our nation” and a bright future (pride and hope) if people stand with him. This calibrated emotional rollercoaster keeps audiences engaged—tension followed by emotional reward. It also widens the net: those resistant to negative emotions might be drawn in by the positive framing. As Maxwell observes, “not all positivity is innocent.” A regime can organize feel-good rallies, charity drives, or optimistic propaganda pieces (“progress is on the horizon”) to soften its image and recruit citizens who seek constructive outlets, all the while continuing to suppress dissent and manipulate information behind the scenes.

In summary, the EEP’s expanded emotional palette acknowledges that hope, love, and empathy can be weaponized just as fear and anger are. They play a critical role in sustaining the propaganda ecosystem by balancing the harsher emotions and cloaking manipulative intent in a veneer of virtue. Figure 1 would illustrate these as the “softer” emotional currents that feed into the main cycle, ensuring that propaganda can appeal both to those motivated by rage and those motivated by idealism. The interplay of positive and negative feelings creates a compelling emotional symphony that can captivate a broad audience, making propaganda incredibly adaptive and resilient. Next, we examine how propagandists add a layer of apparent reason and morality on top of these emotions, to further insulate their messaging from critique.

Rational and Moral Veneer: Selective Facts, Buzzwords, and Conspiracy

Emotional appeals alone, powerful as they are, often come packaged with a rational-ideological overlay that gives audiences permission to believe with less guilt or doubt. The EEP framework highlights how propagandists use selective facts, moral buzzwords, and conspiracy theories to create a veneer of logic and morality that justifies the raw emotions being stoked. This overlay exploits people’s desire to feel not only emotionally validated but intellectually and morally justified in their beliefs.

  • Selective Data and Facts: Propagandists frequently cherry-pick statistics, studies, or factoids that appear to support the emotional narrative, while ignoring context or contrary data. This gives the impression that the propaganda is backed by evidence. For instance, an anti-vaccine propagandist might cite a handful of isolated cases of adverse reactions out of millions of vaccine doses to “prove” vaccines are dangerous, omitting the overwhelming scientific consensus on safety. By highlighting small truths to tell a big lie, they make fear and anger seem rational responses. As Maxwell notes, this tactic creates an “illusion of empirical rigor.” Mechanism: Repetition of the cherry-picked facts (e.g., a statistic about crime or a quote from a dubious expert) across tweets, videos, and speeches reinforces their salience. The audience, bombarded with the same few data points, comes to believe “where there’s smoke, there’s fire,” unaware of how skewed the presentation is. Outcome: Audiences feel “intellectually vindicated” in their emotional stance. For example, someone who is emotionally inclined to distrust a minority group will feel confirmed in their prejudice after seeing a chart (perhaps misleading or out of context) that purports to show higher crime rates for that group. The selective use of facts thus inoculates the propaganda against factual correction—any contrary evidence can be dismissed as an anomaly or a lie, since “we have our facts too.” In sum, selective facts turn emotional narratives into seemingly evidence-based arguments, bolstering their credibility among followers. (A short base-rate sketch after this list shows how different the same raw count looks once the omitted denominator is restored.)

  • Moral Buzzwords and Frames: Propaganda often employs loaded buzzwords like “freedom,” “family values,” “patriotic duty,” “social justice,” or “law and order.” These terms carry positive connotations and moral weight in society. By associating their cause with such values, propagandists imbue their emotional messages with a moral aura. For instance, an extremist group might dub itself a “freedom movement,” instantly framing its anger and fear-mongering as a fight for a noble cause. Mechanism: Buzzwords work as cognitive shortcuts; audiences do not dissect policy details if the label triggers an automatic moral approval. A term like “family values” can evoke a constellation of cherished ideas (home, tradition, stability) without specifics. Propagandists wield these terms to discourage dissent—who could be against “freedom” or “human rights”? In practice, of course, these words are redefined to serve the agenda at hand (e.g., “religious freedom” as a pretext to vilify a particular group’s practices). Outcome: The use of moral frames creates an environment where doubting the propaganda is cast as doubting the virtue itself. Critiques can be labeled unpatriotic, anti-family, or anti-justice, which pressures neutral observers to stay silent or even converts them to the cause out of social desirability. Over time, entire movements can coalesce around these emotionally charged slogans (“Make X Great Again” became more than a campaign slogan—it was a tribal identifier laced with nostalgia and pride). Buzzwords and moral framing essentially blur the line between factual debate and moral imperative, raising the emotional stakes of the conversation and thus shielding propaganda from rational refutation.

  • Conspiracy Theories: Conspiracy narratives provide a grand pseudo-intellectual framework that ties together unrelated events into a seemingly coherent whole. They serve the EEP by supplying a compelling story that validates fear and anger: the reason for your woes is a hidden, nefarious plot. Conspiracy theories (e.g., “a secret cabal controls the government” or “the media is lying about event X as part of a cover-up”) claim to offer deep analysis or insider knowledge—knowledge that flatters followers, who feel privileged to learn it. Mechanism: Conspiracy propaganda often involves complex “evidence” – charts of connections, leaked “documents,” or expert-sounding jargon – that creates an illusion of depth. In reality, these are a mix of real tidbits and speculative leaps. Social media algorithms have unfortunately facilitated the proliferation of conspiratorial content; recommendation systems on video platforms have been known to lead users from innocuous content to conspiracy videos through successive suggestions. A well-known case is the QAnon conspiracy (2017–2021), which started on fringe internet forums but spread to millions via mainstream platforms. QAnon’s emotionally charged core (children in danger, a battle of good vs. evil) was bolstered by an ever-expanding web of supposed secret information that followers would decode. Outcome: Believers feel “uniquely enlightened” and thus empowered. This cements their loyalty: they are no longer just followers of a narrative; they become part of an “exclusive” community that sees the hidden truth. Conspiracy theories also insulate the propaganda ecosystem by making it self-sealing: any counter-evidence can be dismissed as part of the conspiracy. For example, fact-checkers are then seen as paid shills, negative media reports as deliberate smears orchestrated by the conspirators. Over time, conspiracy-minded propaganda creates an echo chamber of unreality where adherents only trust information from within their group and treat all outsiders with extreme skepticism. This is the apotheosis of the EEP – an emotionally charged community entirely resistant to factual correction, sustained by an internally coherent (if externally baseless) belief system.
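
The denominator trick described under Selective Data and Facts can be shown with a few lines of arithmetic. The sketch below uses entirely hypothetical numbers; it illustrates the general mechanism, not any specific real-world claim.

```python
# Toy base-rate illustration (all numbers hypothetical): a raw count of
# adverse events sounds alarming until it is divided by the omitted
# denominator of total doses administered.

adverse_events = 50                # hypothetical cherry-picked count
doses_administered = 10_000_000    # hypothetical denominator the claim omits

rate = adverse_events / doses_administered
print(f"Headline framing: '{adverse_events} people harmed!'")
print(f"With the denominator: {rate:.4%} of doses, "
      f"or about {rate * 100_000:.1f} events per 100,000 doses.")
```

The raw count is technically true, which is precisely what makes the framing hard to rebut; only restoring the denominator changes the emotional reading of the same number.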

Taken together, these rational and moral overlays form what we can call a “facade of justification.” They do not replace the emotional core of propaganda, but rather reinforce it. Maxwell describes this as assembling “loose fragments of facts and values into a coherent argument” that masks the emotional roots of the narrative. In the EEP model diagram, one might imagine a thin layer enveloping the emotional cycle, labeled “Justificatory Narratives.” This layer interacts with emotions in that it soothes cognitive dissonance: a fearful person is relieved to have data that “proves” their fear is grounded; an angry mob is reassured that “experts agree” with their anger or that history is on their side. The veneer of logic thereby amplifies commitment – people fight harder for a cause when they feel both heart and head are in it.

Importantly, the rational veneer also serves a propagandist’s strategic needs. It broadens the appeal to those who might not be swayed by raw emotion alone. Some segments of the public pride themselves on being logical; providing talking points and “evidence” caters to them. Additionally, it creates plausible deniability. The architects of propaganda can claim they are just raising legitimate questions or citing legitimate sources, which can muddy the waters for regulators or fact-checkers. It complicates content moderation on platforms as well: outright hate speech is easy to identify, but content that cites real statistics in misleading ways or couches extremist ideas in scholarly language is harder to judge and remove.

In sum, the rational-moral overlay is an integral part of the emotional ecosystem. It is the glue that binds the emotional impulses to a stable narrative, ensuring that propaganda is not dismissed as “hysteria” but perceived as “truth-seeking” or “justice-seeking.” This further entraps audiences in the ecosystem. Next, we examine how technology and social networks amplify all of these elements, turning small-scale emotional manipulation into massive societal currents.

Technological and Social Amplifiers

Modern propaganda’s reach and impact would be severely limited without the affordances of 21st-century communication technology and networked communities. The EEP framework dedicates a component to Technological & Network Amplifiers, recognizing that digital platforms and participatory culture turbocharge the emotional and conspiratorial messages at the framework’s core. We identify four major sub-components: algorithmic amplification, participatory propagation, coordinated inauthentic behavior (astroturfing) creating grassroots appearances, and real-time feedback loops for message optimization.

  • Algorithmic Amplification: Social media algorithms determine which content users see most frequently. As noted earlier, these algorithms generally prioritize content with high engagement metrics – a proxy that often corresponds with emotional resonance. Function: Within the EEP, algorithms act as a force-multiplier for fear, anger, and pride. When a provocative post triggers an onslaught of comments or shares (even if many are to condemn it, the algorithm doesn’t distinguish sentiment), the platform is likely to show that post to more users. Over time, algorithmic curation can make emotionally charged propaganda ubiquitous in a user’s online experience. Example: Facebook’s newsfeed algorithm (circa 2016-2019) was found to disproportionately spread sensational political news, some of which was false or extremely partisan, because such articles got more clicks and reactions (Facebook later adjusted the weighting of “angry” reactions as they realized it was promoting toxic content). Similarly, YouTube’s recommendation system was reported in studies to often lead users from innocuous political content toward more extreme videos through suggestion chains. Outcome: High-arousal content dominates feeds. Nuanced journalism or moderate viewpoints struggle to gain traction in this landscape, a dynamic that has worried many observers of democratic discourse (Persily and Tucker 2020). The algorithmic boost not only widens propaganda’s audience but also can create a false consensus effect: if “everyone” on your timeline is talking about a supposed crime wave or a dubious scandal (because the algorithm pushed it), you might believe it’s a pressing truth accepted by the majority. This can skew public perceptions of reality and urgency, aligning them with the propaganda narrative. (A simplified engagement-ranking sketch appears after this list.)

  • Participatory Propagation (Users as Amplifiers): In the EEP, everyday people are not passive receivers of propaganda; they become active agents in spreading it. The design of platforms encourages sharing and remixing. Function: Propagandists rely on this participatory culture to exponentially increase their message’s reach. Users, acting as propagandists themselves, may forward content to friends, create their own posts or videos reinforcing the message, or engage influencers who inadvertently amplify it further by reacting to it. This is how a fringe idea can “go viral” without significant top-down advertising spend. Example: The COVID-19 “Plandemic” video (2020), which falsely claimed a global elite conspiracy behind the pandemic, was initially posted on niche websites but got shared by millions of users on Facebook, Twitter, and Instagram within days, each share often accompanied by personal captions of shock or concern that lent the misinformation a sense of credibility among social networks. Outcome: The peer-to-peer spread lends authenticity: people are more likely to believe information shared by a friend or family member than by an unknown source or official channel. Thus, participatory propagation has a legitimizing effect (“my neighbor whom I trust posted this”). It also helps propaganda adapt linguistically and culturally – as users remix messages (through memes, local language translation, etc.), the propaganda can penetrate communities in their own vernacular, which a single centralized message might not do. This crowdsourced amplification is both an asset for propagandists and a challenge for regulators, since content moderation must then consider countless individual shares and variations, not just an original post.

  • “Grassroots” Appearances (Astroturfing): Propagandists often coordinate covert campaigns that simulate organic grassroots movements. Termed astroturfing, these efforts might use bots, fake accounts, or paid “influencers” to kickstart a narrative, which genuine users then pick up. Function: The aim is to manufacture the perception of widespread public sentiment. For example, a handful of operatives might generate thousands of Twitter posts under different guises all praising a policy or slamming a political opponent, making it trend nationally. Real users see the trending topic and join in, believing “everyone is talking about this.” Example: The Internet Research Agency (IRA) from Russia infamously ran such campaigns during the 2016 U.S. election, creating Facebook groups and Twitter accounts that impersonated Americans (on both left and right) to sow division. They organized rallies (one IRA-controlled group would even promote a protest while another promoted a counter-protest) that drew real citizens. More recently, political consultancies have been caught employing armies of bots to boost their candidates in countries from Mexico to the Philippines. Outcome: When successful, astroturfing convinces bystanders that a given emotional narrative has broad support or urgency. Under the EEP, this means fear or anger appear communal rather than isolated. Legislators or media may take the issue more seriously, thinking there is genuine voter passion behind it. The emotional engagement thus gets another layer of validation: “so many of my fellow citizens feel this way, there must be truth to it.” Additionally, once authentic grassroots participants join, it becomes harder to unravel the campaign—by then, the line between fake initiation and real uptake is blurred. The propagandist can step back and watch the narrative sustain itself through the community’s own momentum.

  • Real-Time Feedback and Optimization: Digital analytics provide propagandists instant feedback on what messages work best. Maxwell emphasizes that modern propaganda is iterative – content creators observe which posts get the most shares, which slogans resonate, which falsehoods slip past gatekeepers, and continuously refine their strategy. Function: This is akin to A/B testing in marketing. A propagandist might try multiple emotional angles and double down on the one that gains traction. For instance, they may find that anger outranks fear in engagement for a particular audience, and thus shift emphasis to more anger-inducing content. Or they may discover a hopeful message unexpectedly goes viral in one community, prompting them to incorporate that message more broadly. Social media dashboards and analytics tools (even basic metrics like views, likes, reshares) guide this adaptive approach. Outcome: The propaganda becomes highly responsive to audience sentiment, which ironically makes it feel even more authentic to the audience (since it evolves in line with their reactions). It creates a self-reinforcing loop: propagandists push emotional content, audience reacts, propagandists refine content to amplify that reaction, and so on. Over time, this leads to escalation—content tends to grow more extreme or more tailored to the biases of sub-groups because those versions elicit the strongest response. A concrete example is how certain YouTube political channels became more radical between 2016 and 2018; by closely monitoring audience feedback (views, comment sentiments), creators learned that more incendiary takes yielded higher subscriber growth, incentivizing them to intensify their rhetoric. The net effect is that propaganda can maximize emotional impact per message more efficiently than in the past, when feedback was slower and less precise. (The sketch after this list also includes a toy version of this optimization loop.)
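
To make the first and last of these mechanics concrete, the sketch below pairs (1) a toy engagement-weighted feed ranker with (2) a crude explore/exploit loop that keeps pushing whichever emotional frame earns the most engagement. All weights, frames, and payoff numbers are hypothetical; real platform and campaign systems are far more complex, but the incentive structure is the same.

```python
import random
from dataclasses import dataclass

# (1) Engagement-weighted ranking: any scoring rule keyed to reactions tends
# to surface high-arousal content. Weights here are hypothetical.

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(p: Post) -> float:
    return 1.0 * p.likes + 3.0 * p.shares + 2.0 * p.comments

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order a feed by raw engagement, regardless of accuracy or sentiment."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("measured policy analysis", likes=40, shares=2, comments=5),
    Post("OUTRAGEOUS scandal!!!", likes=90, shares=60, comments=45),
])
print(feed[0].text)  # the high-arousal post takes the top slot

# (2) Feedback-driven frame selection (A/B-test style): try emotional frames,
# keep pushing whichever one the audience rewards most.

frames = ["fear", "anger", "pride", "hope"]
stats = {f: {"trials": 0, "engagement": 0.0} for f in frames}

def choose_frame(epsilon: float = 0.1) -> str:
    # Mostly exploit the best-performing frame so far; occasionally explore.
    if random.random() < epsilon or all(s["trials"] == 0 for s in stats.values()):
        return random.choice(frames)
    return max(frames,
               key=lambda f: stats[f]["engagement"] / max(stats[f]["trials"], 1))

def record_feedback(frame: str, engagement: float) -> None:
    stats[frame]["trials"] += 1
    stats[frame]["engagement"] += engagement

# Simulated campaign against a (hypothetical) audience that rewards anger most.
true_payoff = {"fear": 40, "anger": 70, "pride": 55, "hope": 30}
for _ in range(500):
    f = choose_frame()
    record_feedback(f, random.gauss(true_payoff[f], 10.0))

print(max(frames, key=lambda f: stats[f]["trials"]))  # typically "anger"
```

The point of the toy loop is its asymmetry: nothing in either rule asks whether a message is true, only whether it provokes a reaction, so the system drifts toward whatever frame is most arousing for that audience.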

The synergy of these amplifiers yields what Maxwell calls a “self-sustaining high-intensity information climate.” In this climate, emotional narratives don’t just spread; they often drown out competing voices. The public sphere thus can become saturated with one outrage after another, one viral claim chasing the next, in a cycle that keeps citizens in a perpetual state of arousal or tribally-split fervor. Figure 1’s flowchart would show arrows looping back from the audience and network nodes to the propaganda source, indicating how propagation and feedback complete the circuit.

To illustrate, consider the case of the viral WhatsApp-driven lynchings in India (2017–2018): False rumors about child kidnappers spread via WhatsApp groups (participatory propagation), included grainy videos as “evidence” (veneer of truth), and triggered mob violence (a fear-to-anger response). WhatsApp’s encryption and forwarding features acted as amplifiers (difficult to monitor, easy to forward to large groups). Local police struggled to stop the cycle because by the time they debunked one rumor, another had taken its place through the same network. This is an example of an emotional propaganda ecosystem (though the “propagandist” here was decentralized) running to tragic completion because the technology allowed rumor to outpace truth and group emotion to trump individual reason.

Having dissected the EEP’s components—emotional triggers, positive emotional modifiers, rationalization overlay, and digital amplifiers—we can now step back and see how they interlock to form a self-sustaining ecosystem. The next section synthesizes how these elements interact to lock audiences into a cycle of reinforcement, and we will tie in additional case study evidence to demonstrate the full ecosystem in operation.

Self-Sustaining Emotional Ecosystem: Feedback Loops and Reinforcement

When the components described above work in concert, the result is a self-reinforcing cycle of propaganda. Each element amplifies the others, creating a closed ecosystem that can be remarkably resistant to external intervention. To summarize this dynamic, we walk through the cycle step by step (as a reader might follow in Figure 1 or a flowchart); a toy numerical sketch of the loop follows the list:

  1. Emotional Activation: A propagandist injects an emotional trigger into the public (a fear-inducing claim, an anger-inciting story, or a prideful declaration). For example, a message like “Crime is up 200% because of refugees” immediately hits fear (personal safety) and anger (blame on refugees) buttons. The public’s immediate attention is captured—people start discussing, sharing, and reacting with shock or outrage. This activation is the spark that initiates the EEP cycle.

  2. Social Amplification and Group Formation: As the message spreads (via algorithms and user shares), it begins to create or reinforce a community of believers. Individuals rally around the narrative—commenting in agreement, joining groups dedicated to the issue, perhaps attending rallies or online forums. Through this, an in-group identity often solidifies (those “awake” to the issue) and out-groups are further vilified. Pride creeps in here: participants feel part of a virtuous cause defending their community or values.

  3. Rationalization and Narrative Growth: The initial emotional claim gets buttressed by selective facts or elaborated into a mini-narrative. Community members and propagandists alike contribute to this—someone shares a news clipping that seems to support the claim, others add anecdotes (“I heard about a similar incident in my town…”), perhaps a conspiracy theory develops (“it’s all organized by a hidden network”). This provides talking points and justifications that solidify the group’s conviction.

  4. Feedback to Propagandist / Message Refinement: Monitoring the reaction, propagandists (or community influencers) see what aspects of the story resonate most. They may then double down on certain angles. For instance, if the anger at refugees is high but fear of crime is somewhat secondary in discussions, subsequent messages emphasize anger and punitive solutions more. They might introduce a slogan or hashtag to unify the movement’s sentiment (e.g., “#DefendOurStreets”). The message evolves in real time to better fit the audience’s emotional pulse.

  5. Escalation and Sustained Engagement: With each loop through steps 1–4, the emotional intensity tends to escalate. Fear leads to anger, anger to deeper pride in the in-group, pride to more zealous propagation of the message. The group becomes more insular and confident, dismissing outside voices. Any external challenge (e.g., a fact-check article debunking the crime statistic) is absorbed as further proof of conspiracy or attacked as biased. This “immunity” to correction is the mark of a closed ecosystem. Engagement remains high because the group constantly finds new “content” in the form of reactions, counter-reactions, and plans (like organizing petitions or attacks). The propagandist or core message doesn’t need to push as hard now; the community is carrying the torch, and algorithms keep promoting the content as it remains high-engagement.

  6. Outcome – Entrenchment of a Narrative: Over time, the repeated emotional hits and social reinforcement entrench a narrative as a dominant lens through which followers interpret reality. In our example, people deeply convinced of the “refugee crime wave” might start interpreting any news about refugees or crime as fitting that story, and they ignore or forget data that crime overall is down or that refugees commit fewer crimes on average. The public dialogue gets polarized: adherents vs. skeptics. Cooperative problem-solving (say, rational discussion on immigration policy or crime prevention) becomes nearly impossible among the broader public because the issue is now so emotionally charged and factually polarized. The propaganda ecosystem has effectively created a parallel reality for its adherents, complete with its own “experts,” heroes, and villains. Fear, anger, and pride keep cycling, ensuring followers remain mobilized and difficult to win back with moderate arguments.
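
The toy sketch below caricatures this six-step loop as a single multiplicative process: each cycle, intensity is boosted by sharing and algorithmic amplification and only partially damped by corrections. The parameters are invented for illustration and are not drawn from the framework or any dataset; the point is simply that weak, single-channel corrections leave the loop growing, while interrupting several links at once can reverse it.

```python
# Deliberately simplified dynamical sketch of the reinforcement cycle above.
# All parameters are hypothetical; this is an illustration, not a fitted model.

def simulate(cycles: int = 10,
             amplification: float = 1.4,   # boost per cycle from sharing + algorithms
             correction: float = 0.10,     # fraction of intensity removed by corrections
             saturation: float = 100.0) -> list[float]:
    intensity, history = 1.0, []
    for _ in range(cycles):
        intensity = min(intensity * amplification * (1.0 - correction), saturation)
        history.append(round(intensity, 2))
    return history

print(simulate())                  # grows each cycle: fact-checks alone barely dent it
print(simulate(correction=0.35))   # multi-pronged dampening pushes the loop into decay
```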

This cycle can continue indefinitely, self-propelling as long as new stimuli or targets can be introduced to keep emotions from flagging. When one outrage fades, another is introduced. Indeed, many propagandists deliberately string together a sequence of crises or enemies to keep their base continuously mobilized (a technique reminiscent of Orwell’s Two Minutes Hate, but drawn out over news cycles). Maxwell’s key insight is that each element of the ecosystem compensates for weaknesses in the others, making the overall system robust. If factual rebuttals emerge (attacking the veneer of logic), the emotional commitment and group identity can override them. If emotional fatigue sets in, a hopeful or loving message can rejuvenate spirits. If organic spread slows, artificial amplification through bots can kick it into trending status again. The ecosystem is adaptive.

A vivid real-world instantiation of a near-complete EEP cycle was the QAnon conspiracy movement in the late 2010s in the United States. It began with fear (children in danger from a supposed cabal of predators), leveraged anger (against said cabal, often translating to political figures), built a community with strong pride (Q “patriots” believed they were saving the nation and children – moral superiority), developed an elaborate conspiracy narrative (texts full of symbols, codes, “research” tying disparate events together), amplified via algorithms (YouTube and Facebook groups recommended QAnon content, Twitter bots helped trend Q hashtags), and iterated (when predictions failed to happen, QAnon promoters simply adjusted timelines or targets, and followers adapted). The result was a subset of citizens who were so entrenched in that worldview that they participated in the January 6, 2021 Capitol attack, sincerely believing in a coming apocalyptic event (“The Storm”). Even after crackdowns on QAnon online, many followers rebranded their beliefs and carried on, showing how sticky and self-perpetuating the emotional ecosystem can be.

For those outside the ecosystem, the beliefs of those inside can seem bewildering or absurd. But from within, all the reinforcement mechanisms make it internally coherent and deeply meaningful. This points to the sobering truth that we are not merely dealing with isolated propaganda messages that can be debunked one by one, but with entire self-contained ecosystems of belief and emotion.

Understanding this cycle in holistic terms is crucial for formulating responses. It’s not enough to fact-check a lie (that treats only one part of the system) if the emotional and social drivers remain unaddressed. Breaking the cycle likely requires interrupting multiple links at once – e.g., diluting the reach via algorithm changes, offering alternative emotional communities (perhaps channeling fear into constructive dialogue, anger into institutional trust-building), and exposing the manipulative intent behind the messages to erode the moral veneer.

Before moving to solutions and policy interventions, we consolidate key insights from applying the EEP framework:

  • Emotional synergy: Fear, anger, and pride reinforce each other powerfully in a looping sequence. Together with hope, love, and empathy used strategically, they create an all-encompassing emotional narrative that can appeal to very broad audiences (from the cynical and frightened to the optimistic and altruistic).

  • Cognitive-affective lock-in: Once people are emotionally invested and have a rationale (no matter how flimsy) to justify it, they will reinterpret new information to fit the narrative. Traditional informational counter-measures (like posting accurate data) often fail to penetrate this affective armor.

  • Digital magnification: Technology makes the propagation of propaganda faster, farther, and more insidiously personal (peer-to-peer) than traditional mass media propaganda. The ecosystem can grow large rapidly and operate 24/7 globally.

  • Social reinforcement: The need for belonging and community is a central vulnerability that propaganda exploits. People will go along with extreme ideas if their in-group consensus leans that way, a phenomenon backed by social identity theory and seen in cult dynamics. The EEP’s emotional bonds are often rooted in the fundamental human need for connection and validation.

  • Ethical ambiguity: Emotional propaganda often operates in a gray zone between persuasion and manipulation. The very same emotional appeals can be used for prosocial mobilization (e.g., empathy to donate to disaster relief) or for malicious ends (empathy to stoke biased outrage). This ambiguity makes outright bans or simple moral condemnation of emotional appeals problematic. It calls for more nuanced oversight.

With these understandings, we turn to the question of what can be done. How can policymakers and civil society mitigate the manipulative aspects of the emotional propaganda ecosystem without stifling free expression or genuine grassroots sentiment? In the next section, we outline ethical guidelines and policy recommendations, building on the EEP’s criteria for ethical use of emotional appeals and examining interventions at the levels of technology, education, and regulation.

Ethical Oversight and Policy Interventions

The Emotional Ecosystem of Propaganda framework not only describes how propaganda operates but also implicitly points to leverage points for intervention. To devise solutions, we first clarify what distinguishes ethical persuasion from manipulative propaganda in this context. We then propose concrete measures for policymakers, tech platforms, and civic educators to dampen the harmful effects of the EEP cycle.

Defining Ethical Emotional Appeals vs. Manipulative Propaganda

Not all use of emotion in communication is unethical or harmful. Policymakers themselves often legitimately appeal to emotions—hope in a campaign speech or compassion in a public health message. The key distinctions, as Maxwell suggests, lie in intent, truthfulness, transparency, and outcomes. We can articulate four guiding criteria (following Maxwell 2025):

  • Intent: Ethical communicators use emotion to inform or inspire constructive action; manipulators use emotion to deceive or divide for power. Key question: Is the aim to empower the audience with understanding or to exploit their feelings for ulterior motives? Propaganda fails this test when it preys on fears or resentments that the speaker knows are unfounded (for example, knowingly spreading false rumors to create fear). Policy implication: Regulators and watchdogs can scrutinize campaigns for patterns of deliberate deception and malign intent (such as foreign influence operations or knowing dissemination of false content).

  • Truthfulness: Ethical appeals are grounded in accurate, verifiable information, even if dramatized; manipulative propaganda is built on lies, cherry-picked data, or conspiracy fiction. Key question: Do the core claims hold up under evidence? Are facts omitted or distorted to provoke emotion? An honest public health fear appeal, for instance, will present a real threat (e.g., that smoking causes cancer) along with truthful data and remedies. In contrast, propaganda might inflate statistics or fabricate links (e.g., claiming a certain minority group is largely responsible for crime without evidence). Policy implication: Strengthen fact-checking infrastructure and credibility labeling. For example, election ads could be required to undergo factual review, with sanctions for deliberate falsehoods. Tech platforms can down-rank content proven to be consistently misleading, thereby disrupting one fuel of the EEP.

  • Transparency: Ethical persuasion discloses its source and intent; propaganda often hides who is behind a message or uses front groups and fake accounts. Key question: Is the origin of the message clear? Are we aware who is trying to persuade us and why? Manipulative efforts operate in shadows (e.g., bots impersonating ordinary people, sponsored content not marked as such, deepfake videos with no disclaimer). Policy implication: Mandate transparency in political advertising and online influence. Proposed policies include requiring social media political ads to display sponsor information and forbidding foreign or anonymous funding of issue ads. Governments could also push for “Know Your Customer” rules for bot accounts to curb inauthentic amplification – Twitter, for example, could label accounts as “automated” when detected. Transparency also involves algorithmic transparency: platforms should be more open about how content is prioritized, allowing auditors to detect if algorithms unduly amplify certain extreme content (as was done in the Facebook and Twitter research cited above).

  • Outcome (Beneficial vs. Harmful): Ethical use of emotion seeks outcomes that serve the public interest (e.g., increased awareness, positive social change) and respects individual autonomy; propaganda yields division, violence, or undue manipulation of behavior. Key question: Does the emotional messaging improve democratic discourse and public welfare, or does it leave the public misinformed, polarized, or mobilized for harmful ends? Sometimes this is clear-cut: a campaign that results in ethnic violence or undermines faith in democratic processes through lies is plainly harmful. Other times the outcome can be debated; one useful measure is the presence of dialogue: ethical appeals invite engagement and critical thought (even if stirring), whereas propaganda discourages questioning and often results in harassment of dissenters. Policy implication: Foster norms and possibly regulations around outcome-based accountability. For instance, platforms could be encouraged (or mandated) to perform "impact assessments" of viral misinformation events (similar to how environmental assessments are done for projects) and share data on the societal impact of widely spread content. Civic organizations can also highlight case studies where propaganda led to harm (such as the WhatsApp lynchings or election violence) as cautionary tales to raise public awareness of these outcomes.
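
To make the algorithmic-transparency point above more concrete, the sketch below shows one form an amplification audit could take. It is a hypothetical illustration under assumed inputs, not any platform's actual methodology: it compares each content category's share of impressions under an engagement-ranked feed against a reverse-chronological baseline and reports an amplification ratio per category. All names and figures (`amplification_ratios`, the category labels, the counts) are illustrative assumptions.

```python
from collections import defaultdict

def amplification_ratios(algorithmic_impressions, chronological_impressions):
    """Compare each content category's share of impressions under an engagement-ranked
    feed vs. a reverse-chronological baseline (hypothetical audit sketch).

    Both arguments are lists of (category, impression_count) pairs drawn from an
    audit dataset. A ratio well above 1.0 for a category such as 'divisive_political'
    would suggest the ranking algorithm amplifies it relative to what users would
    see without engagement-based personalization.
    """
    def shares(pairs):
        totals = defaultdict(int)
        for category, count in pairs:
            totals[category] += count
        grand_total = sum(totals.values()) or 1  # avoid division by zero on empty input
        return {category: count / grand_total for category, count in totals.items()}

    algo, chrono = shares(algorithmic_impressions), shares(chronological_impressions)
    # Only report categories that appear in both conditions.
    return {c: algo[c] / chrono[c] for c in algo if chrono.get(c, 0) > 0}

# Hypothetical audit data: the ranked feed gives divisive content a larger share.
ranked = [("divisive_political", 4200), ("neutral_news", 2800), ("entertainment", 3000)]
baseline = [("divisive_political", 2000), ("neutral_news", 4000), ("entertainment", 4000)]
print(amplification_ratios(ranked, baseline))  # e.g. {'divisive_political': 2.1, ...}
```

The specific metric matters less than the principle: independent auditors with access to impression data could compute and publish such ratios, giving regulators and the public a factual basis for the transparency obligations discussed above.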

Using these criteria, policymakers can frame their interventions not as partisan actions to suppress certain viewpoints, but as neutral safeguards against deceptive and dangerous manipulation. Upholding truth and transparency, and preventing violence, are principles that can be broadly agreed upon.

Policy Recommendations

Building on the above, here are actionable strategies at various levels:

1. Platform Design and Regulation: Social media and messaging platforms are the primary theater of the EEP. Policy interventions could include:

  • Algorithmic Accountability: Encourage or require platforms to adjust algorithms that systematically reward outrage. For example, after internal research showed that Facebook's algorithms disproportionately amplified divisive content, there have been calls for oversight boards or independent audits of recommendation systems (Napolitano 2022). One idea is a "circuit breaker" algorithm: if a piece of content is going viral rapidly, especially if it carries misinformation signals, temporarily slow its spread pending a fact-check (an idea borrowed from stock-market circuit breakers); a minimal sketch of this idea appears after this list. Another approach is giving users more control, such as chronologically ordered feeds or the ability to turn off algorithmic personalization, which can mitigate echo-chamber effects.

  • Transparency Requirements: As mentioned, laws could mandate labeling of bot accounts and political ads. The European Union’s recent proposals on regulating political advertising online (2022) aim to ban microtargeting based on sensitive data (like religion or ethnicity) without consent. Similarly, requiring ads to disclose the criteria used for targeting (why am I seeing this ad?) can alert users to potential manipulation attempts (Levy and Schroders 2020). In the U.S., the Honest Ads Act (proposed) seeks to apply broadcast political ad disclosure rules to the internet. These moves help peel back the anonymity that propaganda often exploits.

  • Counter-Disinformation Policies: Governments are experimenting with units to rapidly debunk viral falsehoods (e.g., the UK's 77th Brigade and France's "Votee" initiative for elections). Platforms can assist by giving verified fact-checks an algorithmic boost and by notifying users who shared or were exposed to false content once it is debunked. Some research suggests that timely corrections, delivered in a user's feed in a compelling format, can reduce belief in false news (Vraga and Bode 2020). As we noted, though, corrections alone are not sufficient; they must be part of a larger strategy that addresses emotions as well.
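
The "circuit breaker" idea mentioned under Algorithmic Accountability can be made concrete with a minimal sketch. The code below is a hypothetical illustration, not any platform's actual system: it tracks a post's share velocity in a rolling window and, when a post is both going viral and carrying misinformation signals, pauses further amplification until a fact-check review clears it. All names and thresholds (`ViralCircuitBreaker`, `velocity_threshold`, the ten-minute window) are assumptions made for the example.

```python
import time
from collections import deque
from dataclasses import dataclass, field

@dataclass
class ViralCircuitBreaker:
    """Hypothetical 'viral circuit breaker' sketch, loosely analogous to stock-market
    trading halts: fast-spreading content with misinformation signals is temporarily
    throttled pending review, rather than removed."""
    velocity_threshold: int = 1000   # shares within the window that count as "going viral"
    window_seconds: int = 600        # rolling window length (10 minutes)
    recent_shares: dict = field(default_factory=dict)  # post_id -> deque of share timestamps
    under_review: set = field(default_factory=set)     # post_ids currently throttled

    def record_share(self, post_id: str, has_misinfo_signal: bool, now: float | None = None) -> bool:
        """Record a share event. Returns True if the share may be amplified normally,
        False if distribution should be slowed pending a fact-check."""
        now = time.time() if now is None else now
        shares = self.recent_shares.setdefault(post_id, deque())
        shares.append(now)
        # Drop share events that have fallen outside the rolling window.
        while shares and now - shares[0] > self.window_seconds:
            shares.popleft()
        if post_id in self.under_review:
            return False
        if has_misinfo_signal and len(shares) > self.velocity_threshold:
            # Trip the breaker: pause algorithmic amplification until reviewers clear the post.
            self.under_review.add(post_id)
            return False
        return True

    def clear_review(self, post_id: str) -> None:
        """Called when a fact-check review clears the post; normal distribution resumes."""
        self.under_review.discard(post_id)
```

For policy purposes the design principle matters more than the particular thresholds: the friction is temporary, is conditioned on both velocity and a misinformation signal rather than on viewpoint, and is reversible once a review is complete.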

2. Media and Digital Literacy Initiatives: To tackle the demand side of propaganda, many experts emphasize educating citizens. Traditional media literacy programs now incorporate “social media literacy” and “emotional literacy.” For example:

  • Recognizing Emotional Manipulation: Training citizens, starting in high school, to spot when a piece of content is trying to make them feel a certain way as a persuasion technique. This might involve exercises in analyzing propaganda posters or viral memes and identifying loaded language, scapegoating, or buzzwords. By naming these techniques, individuals can gain a measure of cognitive distance that enables more critical evaluation. Researchers have found that "inoculation" – exposing people to small doses of misinformation along with explanations of the manipulative tactics involved – can make them more resistant to real fake news (van der Linden et al. 2017). Initiatives like the "Bad News Game" (a digital game that lets players act as fake-news creators to learn their tricks) have shown promise in boosting resilience in this way.

  • Promoting Constructive Engagement: Teaching strategies for engaging in online discourse without amplifying propaganda. This includes simple habits like not sharing content before verifying, pausing to fact-check startling claims (perhaps encouraging the use of the platform’s built-in fact-check tools or external sites like Snopes), and not reflexively retweeting rage. It also means learning to avoid amplifying extremists by quote-tweeting or arguing in their threads, which only feeds algorithms. Instead, experts advise “truth sandwich” approaches (present truth, mention false claim, then refute and return to truth) when correcting someone, to avoid reinforcing the myth by repetition.

  • Empathy and Group Dynamics Awareness: Just as propaganda exploits the need to belong, education can re-channel that need. Intergroup dialogue programs and perspective-taking exercises can reduce susceptibility to propaganda that vilifies an out-group by humanizing that out-group. Policymakers might support community exchanges (even virtual ones) between different political or ethnic groups. When people have direct positive interactions with those they are told to fear or hate, propaganda's job gets harder. On the flip side, literacy includes recognizing when one's sense of solidarity is being artificially inflamed against a scapegoat.

Public awareness campaigns can complement these educational efforts. For example, a government or NGO campaign could publicize: “Check before you share – misinformation spreads panic” featuring examples of viral hoaxes and their real consequences (like the COVID bleach cure myth leading to poisonings). Such messaging appeals to the public’s sense of responsibility and perhaps pride in not being “duped” – turning a bit of pride back against propaganda.

3. Support Ethical Media and Dialogue: A healthier information ecosystem is the long-term antidote. Policymakers can bolster quality journalism and community forums that compete with propaganda for public attention:

  • Funding Local and Public-Interest Journalism: When local news dies out, research shows polarization and misinformation fill the void (because people rely more on social media and partisan national sources). Supporting local journalism (through grants, tax incentives, or public funding models) provides communities with more trustworthy sources that address their real concerns, which can preempt the emotional void exploited by propagandists.

  • Platforms for Civil Discourse: Government and civil society can create or endorse online platforms that foster civil discourse across divides – essentially moderated spaces for discussing contentious issues with fact-checking support on hand. While such forums may not attract those already deep in a propaganda ecosystem, they can serve the large majority who are not yet polarized and keep them from being drawn in. They also demonstrate to the tech industry that design matters; features such as moderated, fact-based grounding and community norms of respect can sustain engagement without algorithmic rage-bait.

  • Rapid Response and Mediation: In moments of high societal tension (e.g., elections, pandemics), a rapid-response team of credible cross-partisan figures who can jointly address the public to dispel rumors and appeal for calm can interrupt an escalating EEP cycle. For instance, a trusted bipartisan commission on election integrity could have quickly rebutted viral fraud claims in 2020; unfortunately, such voices were lacking or drowned out. Policymakers should invest in trust-building measures well before crises, since trust cannot be conjured up mid-crisis if it is not already there.

4. Legal Accountability for Harms: When propaganda clearly crosses into incitement of violence or other unlawful action, enforce existing laws. Hate speech that explicitly calls for harm, libelous falsehoods, and coordinated disinformation by foreign adversaries often violate laws or platform policies. Strengthening enforcement (while respecting free speech—focus on illegal behavior, not opinion) is key. For example, indictments of foreign troll farm operators (as done by the US DOJ in 2018) signal consequences. Domestically, if an individual knowingly spreads false claims that cause measurable harm (like fraud or public endangerment), tailored laws could allow for civil penalties. This is tricky—care must be taken not to criminalize legitimately contested speech. But clear cases like deliberate health disinformation for profit might be addressed (some jurisdictions are exploring penalties for anti-vaccine propaganda that is verifiably false and sold for monetary gain).

5. Ethical Frameworks and Oversight Bodies: Finally, establish frameworks that institutionalize ethical reflection on public communication. For instance, an independent Digital Public Integrity Commission could be formed to continually monitor the information ecosystem, issue guidelines, and advise policymakers on emerging threats (akin to how central banks monitor financial systems). This commission would draw on experts in tech, psychology, and law. It could help develop codes of conduct for political campaigns (e.g., pledges not to use deepfakes or not to microtarget based on fear), much like there are codes of conduct for advertising (truth-in-advertising). While compliance might be voluntary, shining a spotlight on ethical lapses can pressure actors to do better.

In sum, mitigating the manipulative emotional ecosystem requires a multi-layered approach: redesigning platform incentives, educating and empowering the public, promoting credible information, and, where needed, applying targeted regulation and enforcement. No single measure is sufficient. A theme across these recommendations is the need to address both the supply of propaganda (through transparency and friction on spread) and the demand (through societal resilience and alternative narratives).

Conclusion

In the digital age, propaganda is less a series of discrete messages and more an ecosystem of influence, an ever-churning cycle that plays on the deepest human emotions. Brian Maxwell’s Emotional Ecosystem of Propaganda framework provides a valuable lens to understand this phenomenon by illuminating how fear, anger, and pride interlock with our technologies and social networks to distort our shared reality. The case studies and mechanisms examined in this article demonstrate that we are confronting not just false statements or extreme ideologies in isolation, but self-reinforcing emotional currents that can capture communities and even nations.

Key insights from this analysis include: (1) Emotional design now often outranks ideological consistency in propaganda – many propagandists focus on provoking feelings regardless of whether the underlying claims cohere, because maintaining an emotional high ground keeps audiences engaged. (2) The integration of multiple media and platforms means propaganda is multi-node and interlocking; a narrative will be echoed by influencers, bots, news outlets, and personal contacts in tandem, making it harder to disrupt. (3) There is an inherent ethical ambiguity to emotional appeals – the same appeals can either inspire noble action or inflame hate, forcing us to look at context and intent rather than banning emotional rhetoric outright. (4) The human need for belonging is a core vulnerability; propaganda often fills a social or identity void, which is why fact-checking alone cannot break its spell without offering people a way to exit that ecosystem without shame or isolation. (5) Because participants in an emotional propaganda cycle come to dismiss counter-evidence preemptively, interventions must go beyond presenting facts – they must engage the emotional and social dimensions, whether through narrative, community leaders, or other persuasive emotional communication in return.

This analysis also underscores that the line between legitimate persuasion and harmful propaganda lies in trust and transparency. When emotional appeals are made in good faith – transparently and anchored in truth – they can bolster democratic engagement and solidarity. But when they are made cynically – hiding lies behind emotions – they erode the informational environment that democracy needs to function. Policymakers, tech companies, and civil society all have roles to play in drawing this line and enforcing it.

Ultimately, confronting the emotional ecosystem of propaganda is about strengthening the ecosystem of democracy. This means cultivating an informed public that can feel deeply and think critically – citizens who can respond to fear with courage and curiosity, channel anger into the reasoned pursuit of justice, and answer calls for love and empathy with inclusive rather than exclusionary compassion. It also means modernizing our democratic institutions (from education to media regulation) to keep pace with how technology has amplified age-old techniques of influence.

The stakes are high. As we have seen, emotional propaganda can undermine public health, provoke violence, and destabilize trust in elections and institutions. But the insights and recommendations offered here chart a path forward. By recognizing propaganda’s emotional ecosystem, we can begin to craft a more resilient civic ecosystem – one that harnesses the positive power of emotions (hope, unity, righteous anger at injustice) while inoculating against their malicious exploitation. In doing so, we honor the very emotions that make us human without allowing those emotions to be turned against our collective well-being.

References (Chicago author-date):

Brady, William J., Julian A. Wills, John T. Jost, Joshua A. Tucker, and Jay J. Van Bavel. 2017. "Emotion Shapes the Diffusion of Moralized Content in Social Networks." Proceedings of the National Academy of Sciences 114 (28): 7313–7318.

Cadwalladr, Carole, and Emma Graham-Harrison. 2018. "Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach." The Guardian, March 17, 2018.

Maxwell, Brian. 2025. The Emotional Ecosystem of Propaganda: White Paper on Modern Influence Dynamics. (Framework referenced throughout this article.)

Nyhan, Brendan, and Jason Reifler. 2010. "When Corrections Fail: The Persistence of Political Misperceptions." Political Behavior 32 (2): 303–330.

Salvi, Carola, Paola Iannello, Alice Cancer, et al. 2021. "Going Viral: How Fear, Socio-Cognitive Polarization and Problem-Solving Influence Fake News Detection and Proliferation During COVID-19 Pandemic." Frontiers in Communication 5: 562588.

Tajfel, Henri, and John C. Turner. 1979. "An Integrative Theory of Intergroup Conflict." In The Social Psychology of Intergroup Relations, edited by William G. Austin and Stephen Worchel, 33–47. Monterey, CA: Brooks/Cole.

van der Linden, Sander, et al. 2017. "Inoculating Against Misinformation." Science 358 (6367): 1141–1142.

Witte, Kim. 1992. "Putting the Fear Back into Fear Appeals: The Extended Parallel Process Model." Communication Monographs 59 (4): 329–349.

(Note: Additional sources consulted include the UT Austin news release on Salvi et al. 2021, positive-psychology work on social identity, and Maxwell's EEP white paper.)



Brian Maxwell
The Integrity Dispatch