
THE CHRYSANTHEMUM PARADOX

Japan’s first female prime minister promises history, but her ascent may only deepen the old order.

By Michael Cummins, Editor, October 4, 2025

Sanae Takaichi has become Japan’s first female prime minister—a milestone that might look like progress but carries a paradox at its core. Takaichi, sixty-four, rose not by challenging her party’s patriarchal order but by embracing it more fiercely than her male rivals. Her vow to “work as hard as a carriage horse” captured the spirit of her leadership: endurance without freedom, strength yoked to duty. In a nation where women hold less than sixteen percent of parliamentary seats and most are confined to low-paid, “non-regular” work, Takaichi’s ascension is less rupture than reinforcement. She inherits the ghost of Shinzo Abe, with whom she shared nationalist loyalties, and she confronts a fragile coalition, an aging electorate, and a looming Trump visit. Her “first” is both historic and hollow: the chrysanthemum blooms, but its shadow may reveal that Japan’s old order has merely found a new face.

Under the humming fluorescent lights of the Liberal Democratic Party’s headquarters in Tokyo, the old men in gray suits shifted in their seats. The air was thick with the stale perfume of cigarettes and the accumulated dust of seventy years in power. The moment came suddenly, almost anticlimactically: after two rounds of voting, Sanae Takaichi was named leader. The room stirred, applause pattered weakly. She stepped to the podium, bowed with a precision that was neither humble nor triumphant, and delivered the line that will echo through history: “I will work as hard as a carriage horse.”

Why that image? Why not the fox of Japanese cunning, or the crane of elegance, or the swift mare of legend? A carriage horse is strength without freedom. It pulls because it must. Its labor is endurance, not glory. In that metaphor lay the unsettling heart of the moment: Japan’s first woman prime minister announcing herself not as a breaker of chains but as the most dutiful beast of burden. Ushi mo aru kedo, hito mo aru—“Even cattle have their place, but so do people.” Here, in this paradoxical victory, the human became the horse.

In Japan, the ideal of gaman—stoic endurance in the face of suffering—is praised as virtue. The samurai ethos of bushidō elevated loyalty above will. Women, in particular, have long been praised for endurance in silence. Takaichi’s metaphor was no slip. It was a signal: not rebellion, but readiness to shoulder a system that has never bent for women, only asked them to carry it. In the West, the “first woman” often suggests liberation; in Japan, Takaichi presented herself as a woman who could wear the harness more tightly than any man.

The horse metaphor might also be personal. Takaichi was not a scion of a dynasty like her rival, Shinjiro Koizumi. Her mother served as a police officer; her father worked for a car company. Her strength was forged in the simple, demanding work of postwar Japan—the kind of tireless labor she was now vowing to revive for the nation.

For the newspapers, the word hajimete—first—was enough. But scratch the lacquer, and the wood beneath showed a different grain. The election was not of the people; it was an internal ballot, a performance of consensus by a wounded party. Less than one percent of Japan had any say. The glass ceiling had not been lifted by collective will but punctured by a carefully aimed projectile. The celebration was muted, as if everyone sensed that this “first” was also a kind of last, a gesture of desperation dressed in history’s robes.

Deru kugi wa utareru—“The nail that sticks out gets hammered down.” Takaichi did not stick out. She was chosen precisely because she could wield the hammer.

Her rise was born of collapse. The LDP, which had dominated Japanese politics as Mount Fuji dominates the horizon, was eroding, its slopes scarred by landslides. In the 2024 Lower House election alone, it lost sixty-eight seats. After another defeat in 2025, it found itself, for the first time in memory, a minority in both houses of the Diet. Populist formations shouting Nippon daiichi!—Japan First—had seized the public imagination, promising to protect shrines from outsiders and deer in Nara from the kicks of tourists. Stagnant wages, rising prices, and the heavy breath of globalization made their slogans ring like temple bells.

Faced with collapse, the LDP gambled. It rejected the fresh-faced Shinjiro Koizumi, whose cosmopolitan centrism seemed too fragile for the moment, and crowned the hard-line daughter of Nara, the protégé of Shinzo Abe. In choosing Takaichi, the LDP announced that its path back to power would not be through moderation, but through continuity.

The ghost of Abe hovers over every step she takes. His assassination in 2022 froze Japan in a perpetual twilight of mourning. His dream—constitutional revision, economic reflation, nationalist revival—remained unfinished. Takaichi walks in his shadow as if she carries his photograph tucked inside her sleeve. She echoes his Abenomics: easy money, big spending. She continues his visits to Yasukuni Shrine, where the souls of Japan’s war dead—among them convicted Class A war criminals—are enshrined. Each bow she makes is both devotion and provocation.

Hotoke no kao mo san-do—“Even a Buddha’s face only endures three times.” How many times will China and South Korea endure her visits to Yasukuni?

And yet, for all the historic fanfare, her stance on women is anything but transformative. She has opposed allowing a woman to reign as emperor, resisted reforms to let married couples keep separate surnames, and dismissed same-sex marriage. Mieko Nakabayashi at Waseda calls her bluntly “a roadblock to feminist causes.” Yet she promises to seat a cabinet of Nordic balance, half men and half women. What does equality mean if every woman chosen must genuflect to the same ideology? One can imagine the photograph: a table split evenly by gender, yet every face set in the same conservative mold.

In that official photograph, the symmetry was deceptive. Each woman had been vetted not for vision but for loyalty. One wore a pearl brooch shaped like a torii gate. Another quoted Abe in her opening remarks. Around the table, the talk was of fiscal stimulus and shrine etiquette. Not one mentioned childcare, wage gaps, or succession. The gender balance was perfect. The ideological balance was absolute.

This theater stands in stark opposition to the economic reality she governs. Japan’s gender wage gap is among the widest in the OECD; women earn barely three-quarters of men’s wages. Over half are trapped in precarious “non-regular” work, while fewer than twelve percent hold managerial posts. They are the true carriage horses of Japan—pulling without pause, disposable, unrecognized. Takaichi, having escaped this trap herself, now glorifies it as national virtue. She is the one horse that broke free—only to tell the herd to pull harder.

The global press, hungry for symbols, crowned her with headlines: “Japan Breaks the Glass Ceiling.” But the ceiling had not shattered—it had been painted over. The myth of the female strongman—disciplined, unflinching, ideologically pure—has become a trope. Conservative systems often prefer such women precisely because they prove loyalty by being harsher than the men who trained them. Takaichi did not break the mold; she was cast from it.

Other nations offer their mirrors: Thatcher, the Iron Lady who waged war on unions; Park Geun-hye, whose scandal-shattered rule rocked South Korea; Indira Gandhi, who suspended civil liberties during India’s Emergency. Each became a vessel for patriarchal power, proving strength through obedience rather than disruption. Takaichi belongs to this lineage, the chrysanthemum that blooms not in a wild meadow but in a carefully tended imperial garden.

Her campaign rhetoric made plain her instincts. She accused foreigners of kicking sacred deer in Nara, of swinging from shrine gates. The imagery was almost comic, but in Japan symbols are never trivial. The deer, protectors of Shinto shrines, bow to visitors as if performing eternal reverence. To strike them is to wound purity. The torii gates mark thresholds between profane and sacred worlds; to defile them is to profane Japan itself. By weaponizing these cultural symbols, Takaichi sought to steal the thunder of far-right groups like Sanseitō, consolidating the right-wing vote under the LDP’s battered banner.

But the nationalism that served Takaichi at home now weighs on the fragile carriage of Japan’s foreign policy. To survive, the LDP must keep its coalition with Komeito, the Buddhist-backed party rooted in Soka Gakkai’s pacifism. Already Komeito’s pacifist base grumbles. Nationalist education reform? No. Constitutional militarism? Impossible. Imagine the backroom: tatami mats creaking, voices low, one side invoking the Lotus Sutra, the other brandishing polls. Nito o ou mono wa itto o mo ezu—“He who chases two rabbits catches none.”

Over all this looms America. Donald Trump, swaggering toward a late-October Asia tour, may stop in Tokyo. Takaichi once worked in the U.S.; she speaks the language of its boardrooms. But she campaigned as a renegotiator, a fighter against tariffs. Now reality intrudes. Japan has already promised $550 billion in investment and loan guarantees to secure a reprieve from harsher duties. How she spends it will define her. To appear submissive is to anger voters; to defy Trump is to risk reprisal. Imagine the summit: Trump beaming, Takaichi bowing, their hands clasped in an awkward grip, photographers snapping.

Even her economics carry ghosts. She revives Abenomics when inflation demands restraint. But Abenomics was of another time, when Japan had fiscal breathing room. Reviving it now is less a strategy than nostalgia, an emotional tether to Abe himself.

These contradictions sharpen into paradox. She is the first woman prime minister, yet she blocks women from the throne. She promises parity, yet delivers loyalty. She vows to pull the carriage harder than any man, yet the cart itself has only three wheels.

Imagine the year 2035. A museum exhibit in Tokyo titled The Chrysanthemum Paradox: Japan’s Gendered Turn. Behind glass: her campaign poster, a porcelain deer, a seating chart from her first cabinet. A small screen plays the footage of her victory speech. Visitors lean in, hear the flat voice: “I will work as hard as a carriage horse.”

A child tugs at her mother’s sleeve. “Why is the horse sad?” she asks, pointing to the animated screen where a cartoon carriage horse trudges endlessly. The mother hesitates. “She worked very hard,” she says. “That’s what leaders do.” The child frowns. “But where was she going?”

Outside, chrysanthemums bloom in autumn, petals delicate yet precise, the imperial crest stamped on passports and coins. The carriage horse keeps pulling, hooves clattering against cobblestones, sweat darkening its flanks. Will the horse break, or the carriage? And if both break together, what then?

Issun saki wa yami—“The future is pitch-dark an inch ahead.” That is the truth of her victory. The chrysanthemum shines, but its shadow deepens. The horse pulls, but no one knows toward what horizon. The first woman has arrived, but the question lingers like incense in an empty hall: Is this history’s forward march, or merely the perfect, tragic culmination of the old order?

THIS ESSAY WAS WRITTEN AND EDITED UTILIZING AI

HOW COMEDY KILLED SATIRE

The weapon that wounded kings and emperors is now just another punchline between commercials.

By Michael Cummins, Editor, September 1, 2025

In the long arc of literary history, satire has served as a weapon—precise, ironic, and often lethal. It was the art of elegant subversion, wielded by writers who understood that ridicule could wound more deeply than rhetoric. From the comic stages of Athens to the viral feed of TikTok, satire has always been a mirror turned against power. But mirrors can be polished, fogged, or stolen. Today, satire has been absorbed into the voracious machinery of entertainment. Its sting has dulled. Its ambiguity has been flattened. It no longer provokes—it performs.

But what did it once mean to laugh dangerously? In Athens, 423 BCE, Aristophanes staged The Clouds. Socrates appeared not as a revered philosopher but as a dangling charlatan in a basket, teaching young Athenians to twist language until truth dissolved. The joke was more than a joke. It ridiculed sophistry, intellectual fads, and the erosion of civic virtue. The audience laughed, but the laughter was perilous—Socrates himself would later be tried and executed for corrupting the youth. To laugh was to risk.

Five centuries later, in Rome, Juvenal sharpened satire into civic indictment. His Satires accused senators of corruption, women of decadence, and citizens of surrendering their dignity for “bread and circuses.” The phrase endures because it captured a political truth: distraction is the oldest tool of power. Juvenal’s lines were barbed enough to threaten exile. Was he clown or conscience? In truth, he was both, armed with venom.

What happens when laughter moves from the tavern into the church? During the Renaissance, Erasmus wrote The Praise of Folly, putting words of critique into the mouth of Folly herself. Popes, princes, pedants—all were skewered by irony. Erasmus knew that Folly could say what he could not, in an age when heresy trials ended in fire. Is irony a shield, or a sword? François Rabelais answered with giants. His sprawling Gargantua and Pantagruel gorged on food, sex, and grotesque humor, mocking scholasticism and clerical hypocrisy. Laughter here was not polite—it was unruly, earthy, subversive. The Church censored, readers copied, the satire lived on.

And what of Machiavelli? Was The Prince a straight-faced manual for power, or a sly parody exposing its ruthlessness? “Better to be feared than loved” reads as either strategy or indictment. If satire is a mirror, what does it mean when the mirror shows only cold pragmatism? Perhaps the ambiguity itself was the satire.

By the seventeenth century, satire had found its most enduring disguise: the novel. Cervantes’s Don Quixote parodied the exhausted chivalric romances of Spain, sending his deluded knight tilting at windmills. Is this comedy of madness, or a lament for a lost moral world? Cervantes left the reader suspended between mockery and mourning. A century later, Alexander Pope wrote The Rape of the Lock, transforming a petty quarrel over a stolen lock of hair into mock-epic. Why inflate the trivial to Homeric scale? Because by exaggerating, Pope revealed the emptiness of aristocratic vanity, exposing its fragility through rhyme.

Then came the most grotesque satire of all: Swift’s A Modest Proposal. What kind of society forces a writer to suggest, with impeccable deadpan, that poor families sell their children as food? The horror was the point. By treating human suffering in the cold language of economics, Swift forced readers to recognize their own monstrous indifference. Do we still have the stomach for satire that makes us gag?

Voltaire certainly thought so. In Candide (1759), he set his naïve hero wandering through war, earthquake, and colonial exploitation, each scene puncturing the optimistic doctrine that “all is for the best in the best of all possible worlds.” Candide repeats the phrase until it collapses under its own absurdity. Was Voltaire laughing or grieving? The satire dismantled not only Leibnizian philosophy but the pieties of church and state. The novel spread like wildfire, banned and beloved, dangerous because it exposed the absurdity of power’s justifications.

By the nineteenth century, satire had taken on a new costume: elegance. Oscar Wilde, with The Importance of Being Earnest (1895), skewered Victorian morality, marriage, and identity through dazzling wordplay and absurd plot twists. “The truth is rarely pure and never simple,” Wilde’s characters remind us, a line as sharp as Swift’s grotesqueries but dressed in lace. Wilde’s satire was aesthetic subversion: exposing hypocrisy not with shock but with wit so light it almost floated, until one realized it was dynamite. Even comedy of manners could destabilize when written with Wilde’s smile and sting.

And still, into the modern age, satire carried power. Joseph Heller’s Catch-22 in 1961 named the absurd circularity of military bureaucracy. “Catch-22” entered our lexicon, becoming shorthand for the paradoxes of modern life. What other art form can gift us such a phrase, a permanent tool of dissent, smuggled in through laughter?

But something changed. When satire migrated from pamphlets and novels to television, radio, and eventually social media, did it lose its danger? Beyond the Fringe in 1960s London still carried the spirit of resistance, mocking empire and militarism with wit. Kurt Vonnegut wrote novels that shredded war and bureaucracy with absurdist bite. Yet once satire was packaged as broadcast entertainment, the satirist became a host, the critique a segment, the audience consumers. Can dissent survive when it must break for commercials?

There were moments—brief, electrifying—when satire still felt insurgent. Stephen Colbert’s October 2005 coinage of “truthiness” was one. “We’re not talking about truth,” he told his audience, “we’re talking about something that seems like truth—the truth we want to exist.” In a single satirical stroke, Colbert mocked political spin, media manipulation, and the epistemological fog of the post-9/11 era. “Truthiness” entered the lexicon, even became Word of the Year. When was the last time satire minted a concept so indispensable to describing the times?

Another moment came on March 4, 2009, when Jon Stewart turned his sights on CNBC during the financial crisis. Stewart aired a brutal montage of Jim Cramer, Larry Kudlow, and other personalities making laughably wrong predictions while cheerleading Wall Street. “If I had only followed CNBC’s advice,” Stewart deadpanned, “I’d have a million dollars today—provided I’d started with a hundred million dollars.” The joke landed like an indictment. Stewart wasn’t just mocking; he was exposing systemic complicity, demanding accountability from a financial press that had become entertainment. It was satire that bit, satire that drew blood.

Yet those episodes now feel like the last gasp of real satire before absorption. Stewart left his desk, Colbert shed his parody persona for a safer role as late-night host. The words they gave us—truthiness, CNBC’s complicity—live on, but the satirical force behind them has been folded into the entertainment economy.

Meanwhile, satire’s safe zones have shrunk. Political correctness, designed to protect against harm, has also made ambiguity risky. Irony is flattened into literal meaning, especially online. A satirical tweet ripped from context can end a career. Faced with this minefield, many satirists preemptively dilute their work, choosing clarity over provocation. Is it any wonder the result is content that entertains but rarely unsettles?

Corporations add another layer of constraint. Once the targets of satire, they now sponsor it—under conditions. A network late-night host may mock Wall Street, but carefully, lest advertisers revolt. Brands fund satire as long as it flatters their values. When outrage threatens revenue, funding dries up. Doesn’t this create a new paradox, where satire exists only within the boundaries of what its sponsors will allow? Performers of dissent, licensed by the very forces they lampoon.

And the erosion of satire’s political power continues apace. Politicians no longer fear satire—they embrace it. They appear on comedy shows, laugh at themselves, retweet parodies. The spectacle swallows the subversion. If Aristophanes risked exile and Swift risked scandal, today’s satirists risk nothing but a dip in ratings. Studies suggest satire still sharpens critical thinking, but when was the last time it provoked structural change?

So where does satire go from here? Perhaps it will retreat into forms that cannot be so easily consumed: encrypted narratives layered in metaphor, allegorical fiction that critiques through speculative worlds, underground performances staged outside the reach of advertisers and algorithms. Perhaps the next Voltaire will be a coder, the next Wilde a playwright in some forgotten theater, the next Swift a novelist smuggling critique into allegory. Satire may have to abandon laughter altogether to survive as critique.

Imagine The Laughing Chamber, a speculative play in which citizens are required to submit jokes to a Ministry of Cultural Dissent. Laughter becomes a loyalty test. The best submissions are broadcast in a nightly “Mock Hour,” hosted by a holographic jester. Rebellion is scripted, applause measured, dissent licensed. Isn’t our entertainment already inching toward that? When algorithms decide which jokes are safe enough to go viral, which clips are profitable, which laughter is marketable, haven’t we already built the laughing chamber around ourselves?

Satire once held a mirror to power and said, “Look what you’ve become.” Aristophanes mocked philosophers, Juvenal mocked emperors, Erasmus mocked bishops, Rabelais mocked pedants, Cervantes mocked knights, Pope mocked aristocrats, Swift mocked landlords, Voltaire mocked philosophers, Wilde mocked Victorians, Heller mocked generals, Stewart mocked the financial press, Colbert mocked the epistemology of politics. Each used laughter as a weapon sharp enough to wound authority. What does it mean when that mirror is fogged, the reflection curated, the laughter canned?

And yet, fragments of power remain. We still speak of “bread and circuses,” “tilting at windmills,” “truthiness,” “Catch-22.” We quote Wilde: “The truth is rarely pure and never simple.” We hear Voltaire’s refrain—“all is for the best”—echoing with bitter irony in a world of war and crisis. These phrases remind us that satire once reshaped language, thought, even imagination itself. The question is whether today’s satirists can once again make the powerful flinch rather than chuckle.

Until then, we live in the laughing chamber: amused, entertained, reassured. The joke is on us.

THIS ESSAY WAS WRITTEN AND EDITED UTILIZING AI

TOMORROW’S INNER VOICE

The wager has always been our way of taming uncertainty. But as AI and neural interfaces blur the line between self and market, prediction may become the very texture of consciousness.

By Michael Cummins, Editor, August 31, 2025

On a Tuesday afternoon in August 2025, Taylor Swift and Kansas City Chiefs tight end Travis Kelce announced their engagement. Within hours, it wasn’t just gossip—it was a market. On Polymarket and Kalshi, two of the fastest-growing prediction platforms, wagers stacked up like chips on a velvet table. Would they marry before year’s end? The odds hovered at seven percent. Would she release a new album first? Forty-three percent. By Thursday, more than $160,000 had been staked on the couple’s future, the most intimate of milestones transformed into a fluctuating ticker.

It seemed absurd, invasive even. But in another sense, it was deeply familiar. Humans have always sought to pin down the future by betting on it. What Polymarket offers—wrapped in crypto wallets and glossy interfaces—is not a novelty but an inheritance. From the sheep’s liver read on a Mesopotamian altar to a New York saloon stuffed with election bettors, the impulse has always been the same: to turn uncertainty into odds, chaos into numbers. Perhaps the question is not why people bet on Taylor Swift’s wedding, but why we have always bet on everything.


The earliest wagers did not look like markets. They took the form of rituals. In ancient Mesopotamia, priests slaughtered sheep and searched for meaning in the shape of livers. Clay tablets preserve diagrams of these organs, annotated like ledgers, each crease and blemish indexed to a possible fate.

Rome added theater. Before convening the Senate or marching to war, augurs stood in public squares, staffs raised to the sky, interpreting the flight of birds. Were they flying left or right, higher or lower? The ritual mattered not because birds were reliable but because the people believed in the interpretation. If the crowd accepted the omen, the decision gained legitimacy. Omens were opinion polls dressed as divine signs.

In China, emperors used lotteries to fund walls and armies. Citizens bought slips not only for the chance of reward but as gestures of allegiance. Officials monitored the volume of tickets sold as a proxy for morale. A sluggish lottery was a warning. A strong one signaled confidence in the dynasty. Already the line between chance and governance had blurred.

In Rome, the act of betting also became spectacle. Crowds at the Circus Maximus wagered on chariot teams as passionately as they fought over bread rations. Augustus himself is said to have placed bets, his imperial participation aligning him with the people’s pleasures. The wager became both entertainment and a barometer of loyalty.

In the Middle Ages, nobles bet on jousts and duels—athletic contests that doubled as political theater. Centuries later, Americans would do the same with elections.


From 1868 to 1940, betting on presidential races was so widespread in New York City that newspapers published odds daily. In some years, more money changed hands on elections than on Wall Street stocks. Political operatives studied odds to recalibrate campaigns; traders used them to hedge portfolios. Newspapers treated them as forecasts long before Gallup offered a scientific poll.

Henry David Thoreau, wry as ever, remarked in 1848 that “all voting is a sort of gaming, and betting naturally accompanies it.” Democracy, he sensed, had always carried the logic of the wager.

Speculation could even become a war barometer. During the Civil War, Northern and Southern financiers wagered on battles, their bets rippling into bond prices. Markets absorbed rumors of victory and defeat, translating them into confidence or panic. Even in war, betting doubled as intelligence.

London coffeehouses of the seventeenth century were thick with smoke and speculation. At Lloyd’s Coffee House, merchants laid odds on whether ships returning from Calcutta or Jamaica would survive storms or pirates. A captain who bet against his own voyage signaled doubt in his vessel; a merchant who wagered heavily on safe passage broadcast his confidence.

Bets were chatter, but they were also information. From that chatter grew contracts, and from contracts an institution: Lloyd’s of London, a global system for pricing risk born from gamblers’ scribbles.

The wager was always a confession disguised as a gamble.


At times, it became a confession of ideology itself. In 1890s Paris, as the Dreyfus Affair tore the country apart, the Bourse became a theater of sentiment. Rumors of Captain Alfred Dreyfus’s guilt or innocence rattled markets; speculators traded not just on stocks but on the tides of anti-Semitic hysteria and republican resolve. A bond’s fluctuation was no longer only a matter of fiscal calculation; it was a measure of conviction. The betting became a proxy for belief, ideology priced to the centime.

Speculation, once confined to arenas and exchanges, had become a shadow archive of history itself: ideology, rumor, and geopolitics priced in real time.

The pattern repeated in the spring of 2003, when oil futures spiked and collapsed in rhythm with whispers from the Pentagon about an imminent invasion of Iraq. Traders speculated on troop movements as if they were commodities, watching futures surge with every leak. Intelligence agencies themselves monitored the markets, scanning them for signs of insider chatter. What the generals concealed, the tickers betrayed.

And again, in 2020, before governments announced lockdowns or vaccines, online prediction communities like Metaculus and Polymarket hosted forecasts and wagers on timelines and death tolls. The platforms updated in real time while official agencies hesitated, turning speculation into a faster barometer of crisis. For some, this was proof that markets could outpace institutions. For others, it was a grim reminder that panic can masquerade as foresight.

Across centuries, the wager has evolved—from sacred ritual to speculative instrument, from augury to algorithm. But the impulse remains unchanged: to tame uncertainty by pricing it.


Already, corporations glance nervously at markets before moving. In a boardroom, an executive marshals internal data to argue for a product launch. A rival flips open a laptop and cites Polymarket odds. The CEO hesitates, then sides with the market. Internal expertise gives way to external consensus. It is not only stockholders who are consulted; it is the amorphous wisdom—or rumor—of the crowd.

Elsewhere, a school principal prepares to hire a teacher. Before signing, she checks a dashboard: odds of burnout in her district, odds of state funding cuts. The candidate’s résumé is strong, but the numbers nudge her hand. A human judgment filtered through speculative sentiment.

Consider, too, the private life of a woman offered a new job in publishing. She is excited, but when she checks her phone, a prediction market shows a seventy percent chance of recession in her sector within a year. She hesitates. What was once a matter of instinct and desire becomes an exercise in probability. Does she trust her ambition, or the odds that others have staked? Agency shifts from the self to the algorithmic consensus of strangers.

But screens are only the beginning. The next frontier is not what we see—but what we think.


Elon Musk and others envision brain–computer interfaces, devices that thread electrodes into the cortex to merge human and machine. At first they promise therapy: restoring speech, easing paralysis. But soon they evolve into something else—cognitive enhancement. Memory, learning, communication—augmented not by recall but by direct data exchange.

With them, prediction enters the mind. No longer consulted, but whispered. Odds not on a dashboard but in a thought. A subtle pulse tells you: forty-eight percent chance of failure if you speak now. Eighty-two percent likelihood of reconciliation if you apologize.

The intimacy is staggering, the authority absolute. Once the market lives in your head, how do you distinguish its voice from your own?

Morning begins with a calibration: you wake groggy, your neural oscillations sluggish. Cortical desynchronization detected, the AI murmurs. Odds of a productive morning: thirty-eight percent. Delay high-stakes decisions until eleven twenty. Somewhere, traders bet on whether you will complete your priority task before noon.

You attempt meditation, but your attention flickers. Theta wave instability detected. Odds of post-session clarity: twenty-two percent. Even your drifting mind is an asset class.

You prepare to call a friend. Amygdala priming indicates latent anxiety. Odds of conflict: forty-one percent. The market speculates: will the call end in laughter, tension, or ghosting?

Later, you sit to write. Prefrontal cortex activation strong. Flow state imminent. Odds of sustained focus: seventy-eight percent. Invisible wagers ride on whether you exceed your word count or spiral into distraction.

Every act is annotated. You reach for a sugary snack: sixty-four percent chance of a crash—consider protein instead. You open a philosophical novel: eighty-three percent likelihood of existential resonance. You start a new series: ninety-one percent chance of binge. You meet someone new: oxytocin spike detected, mutual attraction seventy-six percent. Traders rush to price the second date.

Even sleep is speculated upon: cortisol elevated, odds of restorative rest twenty-nine percent. When you stare out the window, lost in thought, the voice returns: neural signature suggests existential drift—sixty-seven percent chance of journaling.

Life itself becomes a portfolio of wagers, each gesture accompanied by probabilities, every desire shadowed by an odds line. The wager is no longer a confession disguised as a gamble; it is the texture of consciousness.


But what does this do to freedom? Why risk a decision when the odds already warn against it? Why trust instinct when probability has been crowdsourced, calculated, and priced?

In a world where AI prediction markets orbit us like moons—visible, gravitational, inescapable—they exert a quiet pull on every choice. The odds become not just a reflection of possibility, but a gravitational field around the will. You don’t decide—you drift. You don’t choose—you comply. The future, once a mystery to be met with courage or curiosity, becomes a spreadsheet of probabilities, each cell whispering what you’re likely to do before you’ve done it.

And yet, occasionally, someone ignores the odds. They call the friend despite the risk, take the job despite the recession forecast, fall in love despite the warning. These moments—irrational, defiant—are not errors. They are reminders that freedom, however fragile, still flickers beneath the algorithm’s gaze. The human spirit resists being priced.

It is tempting to dismiss wagers on Swift and Kelce as frivolous. But triviality has always been the apprenticeship of speculation. Gladiators prepared Romans for imperial augurs; horse races accustomed Britons to betting before elections did. Once speculation becomes habitual, it migrates into weightier domains. Already corporations lean on it, intelligence agencies monitor it, and politicians quietly consult it. Soon, perhaps, individuals themselves will hear it as an inner voice, their days narrated in probabilities.

From the sheep’s liver to the Paris Bourse, from Thoreau’s wry observation to Swift’s engagement, the continuity is unmistakable: speculation is not a vice at the margins but a recurring strategy for confronting the terror of uncertainty. What has changed is its saturation. Never before have individuals been able to wager on every event in their lives, in real time, with odds updating every second. Never before has speculation so closely resembled prophecy.

And perhaps prophecy itself is only another wager. The augur’s birds, the flickering dashboards—neither more reliable than the other. Both are confessions disguised as foresight. We call them signs, markets, probabilities, but they are all variations on the same ancient act: trying to read tomorrow in the entrails of today.

So the true wager may not be on Swift’s wedding or the next presidential election. It may be on whether we can resist letting the market of prediction consume the mystery of the future altogether. Because once the odds exist—once they orbit our lives like moons, or whisper themselves directly into our thoughts—who among us can look away?

Who among us can still believe the future is ours to shape?

THIS ESSAY WAS WRITTEN AND EDITED UTILIZING AI

SHADOW GOVERNANCE, ACCELERATED

How an asynchronous presidency exploits the gap between platform time and constitutional time to bend institutions before the law can catch up.

By Michael Cummins, Editor, August 30, 2025

On a sweltering August afternoon in Washington, the line to the federal courthouse wraps around the block like a nervous necklace. Heat shimmers off the stone; gnats drift in lazy constellations above the security checkpoint. Inside, air-conditioning works harder than dignity, and the benches fill with reporters who’ve perfected the face that precedes calamity. A clerk calls the room to order. The judge adjusts her glasses. Counsel step to the lectern as if crossing a narrow bridge over fast water. Then the question—plain, improbable—arrives: can a president’s social-media post count as legal notice to fire a governor of the Federal Reserve?

What does it mean when the forum for that answer is a courtroom and the forum for the action was a feed? The gulf is not merely spatial. One realm runs on filings, exhibits, transcripts—the slow grammar of law. The other runs on velocity and spectacle, where a single post can crowd out a dozen briefings. The presidency has always tested its borders, but this one has learned a new technique: act first in public at speed; force the law to catch up in private at length. It is power practiced asynchronously—governance that unfolds on different clocks, with different rewards.

Call it latency as strategy. Declare a cause on a platform; label the declaration due process; make the firing a fact; usher the lawyers in after to domesticate what has already happened. The point is not to win doctrine immediately. The point is to harvest the days and weeks when a decision stands as reality while the courts begin their pilgrimage toward judgment. If constitutional time is meticulous, platform time is ruthless, and the space between them is policy.

In the hearing, the administration’s lawyer stands to argue that the Federal Reserve Act says “for cause” and leaves the rest to the president’s judgment. Why, he asks, should a court pour old meanings into new words? The statutory text is lean; executive discretion is broad. On the other side, counsel for Lisa Cook speaks a language almost quaint in the rapid glare of the moment: independence, notice, a chance to be heard—dignities that exist precisely to slow the hand that wields them. The judge nods, frowns, asks what independence means for an institution the law never designed to be dragged at the pace of a trending topic. Is the statute a rail to grip, or a ribbon to stretch?

When the hearing breaks, the stream outside is already three headlines ahead. Down the hill, near the White House, a combat veteran strikes a match to the hem of a flag. Fire crawls like handwriting. Two hours earlier, the president signed an executive order urging prosecutions for acts of flag “desecration” under “content-neutral” laws—no frontal attack on the First Amendment’s protection of symbolic speech, only an invitation to ticket for the flame, not the message. Is that a clever accommodation to precedent, or a dare?

The veteran knows the history; anyone who has watched the long argument over Texas v. Johnson does. The Supreme Court has repeatedly said that burning the flag as protest, however detestable to many, is speech. Yet symbolic speech lives in real space, and real space has ordinances: no open flames without a permit, no fires on federal property, no damage to parks. The order makes a temporal bet: ticket now; litigate later. The government may lose the grand constitutional fight, but it may win smaller battles quick enough to chill an afternoon’s protest. In the gap between the moment and the merits, who blinks first?

Back at the courthouse, a reporter asks a pragmatic question: even if the president can’t fire a Fed governor for mere allegations, will any of this matter for interest rates? Not in September, the expert shrugs. The committee is larger than one vote, dissent is rare. But calendars have leverage. February—when reappointments can shift the composition of the body that sets the price of money—looms larger than any single meeting. If the decision remains in place long enough, the victory is secured by time rather than law. Isn’t that the whole design?

Administration lawyers never say it so plainly. They don’t have to. The structure does the talking. Announce “cause” in a forum that rewards proclamation; treat the announcement as notice; act; then invite the courts to reverse under emergency standards designed to be cautious. Even a win for independence later may arrive late enough to be moot. In the arithmetic of acceleration, delay is not neutral; it is bounty.

If this sounds like a single episode, it is not. The same rhythm animates the executive order on flag burning. On paper, it bows to precedent; in practice, it asks police and prosecutors to find neutral hooks fast enough to produce a headline, a citation, an arrest photo. Months later, the legal machine may say, as it must, that the burning was protected and the charge pretextual. But how many will light a match the next day, knowing the ticket will be instant and the vindication slow?

And it animates something quiet but immense: the cancellation of thousands of research grants at the National Institutes of Health because proposals with words like “diversity,” “equity,” or “gender” no longer fit the administration’s politics. A district judge calls the cuts discriminatory. On the way to appeal, the litigation splits like a river around a rock: one channel to test the legality of the policy guidance, another to ask for money in a tribunal known mostly to contractors and procurement lawyers. The Supreme Court steps in on an emergency basis and says, for now, the money shouldn’t flow. Why should taxpayers pay today for projects that might be unlawful tomorrow?

Because science does not pause on command. Because a lab is not a spreadsheet but a choreography of schedules and salaries and protocols that cannot be put on ice for a season. Because a freeze that looks tidy in a docket entry becomes layoffs and abandoned lines of research in ordinary rooms with humming incubators. The Court’s concern is neat—what if the government cannot claw back dollars later?—but the neatness ignores what time does to fragile ecosystems. What is a remedy worth when the experiment that needed it has already died?

It is tempting to divide all this along ideological lines, to tally winners and losers as if the story were primarily about whose agenda prevails. But ideology is not the tool that fits. Time is. One clock measures orders, posts, firings, cancellations—the moves that define a day’s narrative. Another measures notice, hearing, record, reason—the moves by which a republic persuades itself that force has been tamed by law. When the first clock is always fast and the second is always slow, acceleration becomes a kind of authority in itself. Isn’t that the simplest way to understand what’s happening—that speed is taking up residence where statute once did?

Consider again the hearing. The administration’s brief is lean, the statute is shorter still, and the claim is stark: “for cause” is what the president says it is. To demand more—to import the old triad of “inefficiency, neglect of duty, or malfeasance in office,” to insist on a pre-removal process—is, in this telling, to romanticize independence and hobble accountability. Yet independence is not romance. It is architecture—an effort to keep central banking from becoming another branch of daily politics. If “for cause” becomes a slogan that can be made true after the fact by the simple act of saying it early and everywhere, what remains of the cordon the law tried to draw?

The judge knows this, and also knows the constraints of her role. Emergency relief is meant to preserve the status quo, not rewrite the world. But what is the status quo when the action has already been taken? How do you freeze a river that has been diverted upstream? The presidency practices motion, and then asks the judiciary for patience. Can a court restore a person to an office as easily as a timeline restored a post? Can an injunction rewind a vote composition that turned while the case wound its way forward?

Meanwhile, in the park across from the White House, the veteran’s fire has gone out. The citations are not for speech, officials insist, but for the flame and the scarring of public property. Somewhere between these statements and the executive order that prompted them sits the puzzle of pretext. If a president announces that he seeks to stop a type of speech and urges prosecutors to deploy neutral laws to do so, isn’t the neutrality already contaminated? The doctrine can handle the distinction. But the doctrine’s victory will arrive, at best, months later, and the message lands now: the state is watching, and the nearest hook will serve.

The research world hears its own version of that message. Grants are not gifts; they are contracts, explicit commitments that enable work across years. When a government cancels them mid-stream for political reasons and the courts respond by asking litigants to queue in separate lines—legality here, money there—the signal is not subtle. A promise from the state is provisional. A project can become a pawn. If the administration can accelerate the cut, and the law can only accelerate the analysis, who chooses a life’s work inside such volatility?

There are names for this pattern that sound technocratic—“latency arbitrage,” “platform time versus constitutional time”—and they are accurate without being sufficient. The deeper truth is simpler: a republic’s most reliable tools to restrain power are exactly the tools an accelerated executive least wants to use. Notice means warning; hearing means friction; record means reasons; reason means vulnerability. If you can do without them today and answer for their absence tomorrow, why wouldn’t you?

Well, because the institutions you bend today may be the ones you need intact when the wind shifts. A central bank nudged toward loyalty ceases to be ballast in a storm and becomes a sail. A public square patrolled by pretext breeds fewer peaceful protests and more brittle ones. A research ecosystem that learns that politics can zero out the future will deliver fewer cures and more exits. Isn’t it a curious form of victory that leaves you poorer in the very capacities that make governing possible?

Which brings the story back, inevitably, to process. Process is dull in the way bridges are dull—unnoticed until they fail. The seduction of speed lies in its drama: the crispness of the order, the sting of the arrest, the satisfying finality of a cancellation spreadsheet. Process is the opposite of drama. It is the insistence that power is obliged to explain itself before it acts, to create a record that can be tested, to bear, on the front end, the time it would rather push to the back. Why does that matter now? Because the tactic on display is not merely to defeat process, but to displace it—to make its protections arrive as afterthoughts, paper bandages for facts on the ground.

There are ways to close the gap. The law can require that insulated offices come with front-loaded protections: written notice of cause, an opportunity to respond, an on-the-record hearing before removal becomes effective, and automatic temporary relief if the dispute proceeds to court. The Department of Justice can be made to certify, in writing and in real time, that any arrest touching expressive conduct was green-lighted without regard to viewpoint, and courts can be given an expedited path to vacate citations when pretext is shown—not in a season, but in a week. Mid-cycle grant cancellations can trigger bridge funding and a short status-quo injunction as the default, with the government bearing the burden to prove genuine exigency. Even the Supreme Court can add small guardrails to its emergencies: reasoned, public minutes; sunset dates that force merits briefing on an actual clock rather than letting temporary orders congeal into policy by inertia. Would any of this slow governance? Yes. That is the point.

These are technical moves to answer a political technique, temporal fixes for a temporal hack. They do not hobble the presidency; they resynchronize it with the law. More than doctrine, they aim to withdraw the dividend that acceleration now pays: the days and weeks when action rules unchallenged simply because it happened first.

The images persist. A clerk emerges from chambers carrying two cardboard boxes heavy enough to bow in the middle: motions, exhibits, transcripts—the record, dense and unglamorous, the way reality usually is. The clerk descends the marble steps carefully because there is no other way to do it without spilling the case on the stairs. Across town, another draft order blinks on a screen in a bright room. One world moves on arms and gravity; the other moves on keystrokes and publish buttons. Which will shape the country more?

It is easy to say the law can win on the merits—often, it can. It is harder to say the law can win on time. If we let the presidency define the day with a cascade of acts and then consign the republic’s answer to months of briefs and polite argument, we will continue to confuse the absence of immediate correction with consent. The choice is not between nimbleness and stodginess; it is between a politics that cashes the check before anyone can read it and a politics that pauses long enough to ask what the money is for.

And so, one more question, the kind that lingers after the cameras have left: in a government becoming fluent in acceleration, can we persuade ourselves that synchronization is not obstruction but care? The future of independence, of speech, of public knowledge may turn less on who writes the next order than on whether we are willing to match speed with proportionate process—so that when power moves fast, law is not a distant echo but a present tense. Outside the courthouse, the air is still hot. The boxes are still heavy. The steps are still steep. There is a way to carry them, and there is a way to drop them, and the difference, just now, is the measure of our self-government.

THIS ESSAY WAS WRITTEN AND EDITED UTILIZING AI

The Man Who Invented the Modern Thriller

Before Hitchcock or Highsmith, there was Pietro Aretino—Renaissance Venice’s scandalous satirist who turned gossip into cliffhangers and obscenity into art. The man who terrified popes may also have invented the modern thriller.

By Michael Cummins, Editor, August 29, 2025

Venice, 1537

The candle gutters in its brass dish, casting a crooked halo on the damp walls of a salon off the Grand Canal. Pietro Aretino leans back in his chair, one boot propped on a velvet footstool, his voice curling through the smoke like a blade. He does not write—he dictates. A scribe, young and ink-stained, hunches over parchment, trying to keep pace. The letter—addressed, perhaps, to a cardinal, perhaps to a painter—will contain more than pleasantries. It will contain a threat, veiled as an observation, wrapped in a joke.

“Princes fear me more than the plague,” Aretino murmurs, eyes half-lidded. “For I do not kill bodies—I murder reputations.”

The scribe pauses, startled. Aretino waves him on. “Write it. Let them tremble.”

Tomorrow, this page will cross the lagoon, board a courier’s horse, and ignite tremors in Rome or Paris. It may be copied, whispered, condemned. It may be burned. But it will be read.

It was Aretino’s genius to recognize that scandal was not merely gossip—it was architecture. A scaffolding of insinuation and revelation designed to leave its victim dangling. In his six volumes of Lettere (1537–1557), he sharpened that architecture to a fine point. Written to popes, kings, artists, and courtesans, the letters are part autobiography, part political commentary, and wholly performance. “I speak to the powerful as I would to a neighbor,” he crowed, “for truth makes no bow.” What terrified his recipients was not what he said but what he withheld. His words worked like cliffhangers: each letter a suspense novel in miniature.

Aretino liked to imagine himself not born in Arezzo, as the records claimed, but in his own tongue. The myth suited him: a man conjured out of ink and scandal rather than flesh and baptismal water. By the 1520s, he was notorious as the flagello dei principi—the scourge of princes. The title was not a label pinned on him by enemies; it was one he cultivated, polished, and wore like armor. “I carry more lives in my inkpot than the hangman in his noose,” he declared, and few doubted it.

His life was a play in which he cast himself as both author and protagonist. When Pope Clement VII hesitated to pay him, Aretino wrote slyly, “Your Holiness, whose charity is beyond compare, surely requires no reminder of the poverty that afflicts your devoted servant.” In another letter, he praised the Pope’s mercy while threatening to reveal “those excesses which Rome whispers but dares not record.” He lived by double edge: each compliment a prelude, each benediction a warning.

The tactic was not confined to popes. To Michelangelo he sent fulsome admiration: “Your brush moves like lightning, striking down the pride of the ancients.” To Titian he became impresario, writing to Francis I of France that no royal gallery could be complete without Titian’s brush. But the same pen could turn against friend or patron in an instant. A single phrase from Aretino could undo a reputation; a withheld rumor could ruin a night’s sleep.

His enemies often answered with violence. In Rome, in 1525, mercenaries burst into his lodgings after his pasquinades lampooned the papal court. They dragged him into the street and beat him nearly to death. Neighbors recalled him crawling, bloodied, back to his rooms. Later, when asked why he returned to writing almost immediately, he grinned through broken teeth: “Even death cannot silence a tongue as sharp as mine.” The scars became his punctuation. “My scars,” he wrote in the Lettere, “are the punctuation marks of my story.”

Aretino’s letters functioned like serialized thrillers. Each installment built tension, each cliffhanger left its audience half-terrified, half-delighted. He understood that suggestion could be more devastating than revelation, that anticipation was more dangerous than disclosure. He used ambiguity as a weapon, seeding his pages with conditional phrases: “It is said,” “One hears,” “Were I less discreet…” They were not evasions. They were traps.

One courtier compared the experience to “sitting at supper and finding the meat still bleeding.” The reader was implicated, made complicit in the scandal’s unfolding. Aretino’s genius lay in turning the audience into co-conspirators.

And Venice—city of masks, labyrinths, and whispered betrayals—was practically designed as the birthplace of the thriller. Long before the genre had a name, its ingredients were already steeping in the canals: duplicity, desire, surveillance, and the ever-present threat of exposure. Aretino didn’t write thrillers in form, but he mastered their emotional architecture. His letters were suspenseful, his dialogues scandalous, his persona a walking cliffhanger. Venice gave him the perfect mise-en-scène: a place where truth wore a disguise and reputation was currency. The city itself functioned like a thriller plot—beautiful on the surface, treacherous underneath.

And consider the mechanics: the masked ball becomes the thriller’s false identity. The gondola ride at midnight becomes the covert rendezvous. The whispered rumor in a candlelit salon becomes the inciting incident. The Contarini garden becomes the secret meeting place where alliances shift and truths unravel. It is no accident that Henry James, Daphne du Maurier, Patricia Highsmith, and Donna Leon all returned to Venice when they wanted to explore psychological tension and moral ambiguity. The city doesn’t just host thrillers—it is one.

Imagine a summer evening in 1537. The garden is fragrant with jasmine and fig. Aretino reclines beneath a pergola, flanked by Titian and a Greek scholar from Crete. A courtesan named Nanna pours wine into silver cups.

“You paint gods,” Aretino says to Titian, “but I paint men. And men are far more dangerous.”

Titian chuckles. “Gods do not pay commissions.”

The scholar leans in. “And men do not forgive.”

Nanna smirks, leaning on the marble balustrade. “And yet men pay both of you—in gold for their portraits, in secrets for his letters.”

Aretino raises his cup. “Which is why I never ask forgiveness. Only attention.”

Venice itself became a character: beautiful, deceptive, morally ambiguous. Its canals mirrored the duplicity of its citizens. Its masks—literal and figurative—echoed Aretino’s own performative identity.

But letters were only one weapon. In 1527, Aretino detonated another: the Sonetti lussuriosi, written to accompany I Modi, Marcantonio Raimondi’s engravings after Giulio Romano’s designs. The sonnets made no attempt at discretion. In one, a woman gasps mid-embrace, “Oh God, if this be sin, then let me sin forever!” In another, a lover interrupts her partner’s poetic boasting with the sharp command: “Speak less and thrust more.” The verses shocked even worldly Rome. Pope Clement VII banned the work, copies were burned, and Aretino’s name became synonymous with obscenity. Yet suppression only heightened its allure. “My verses are daggers,” he later said, “that caress before they strike.”

He followed with the Ragionamenti (1534–1536), dialogues between prostitutes and matrons that turned confession into carnival. In the Dialogo della Nanna e della Antonia, one woman scoffs, “The cardinals pray with their lips while their hands wander beneath the skirts.” In the Dialogo nel quale la Nanna insegna a la Pippa, the older courtesan instructs a young girl in survival: “A woman must learn to wield her body as men wield their swords.” These were not just bawdy jokes but philosophical inversions. They exposed hypocrisy with laughter and turned vice into discourse.

His comedies struck with equal force. In La Cortigiana (1534), a satire of Roman society, a friar assures his audience: “Do as I say, not as I do—for my sins are a privilege of office.” In Il Marescalco, a groom forced into marriage laments, “Better to wed a sword than a wife, for steel at least does not betray.” In La Talanta, he boasted with characteristic swagger: “My tongue is the scourge of princes and the trumpet of truth.” These plays were not staged fantasies but mirrors held to the world. Rome and Venice recognized themselves, and recoiled.

Even his occasional pieces carried teeth. During the sack of Rome, he penned the Frottole (1527), short verses filled with bitter humor: “The Germans loot the altars, the Spaniards strip the nuns, and Christ hides his face behind the clouds.” Earlier still, in Il Testamento dell’Elefante Hanno (1516), he composed a mock will for Pope Leo X’s pet elephant. The beast bequeathed its tusks to the cardinals and its dung to the faithful: “For the people, my eternal gift, what Rome already feeds them daily.” Juvenile, grotesque, and brilliant, it set the tone for a lifetime of satiric violence.

Was Aretino a moralist or a manipulator? The question haunts his legacy. Like Machiavelli, he understood power. Like Montaigne, he understood performance. His satire was not disinterested—it was strategic. He exposed corruption, yes, but he also profited from it. His critics accused him of blackmail, of cruelty, of vulgarity. But Aretino saw himself as a mirror. “I do not invent,” he wrote, “I reflect.” The discomfort lay not in his words, but in their accuracy.

The dilemma still feels modern. When does exposure serve truth, and when does it become spectacle? Is scandal a form of justice—or just another form of entertainment? To read Aretino is to feel that question sharpen into relevance. He knew the intoxicating pleasure of watching a hypocrite stripped bare, but he also knew the profit of keeping the knife just shy of the skin.

For centuries, Aretino was dismissed as a pornographer and blackmailer, an obscene footnote beside Petrarch and Ariosto. But scandal has a way of surviving. Nineteenth-century Romantics rediscovered him as a prophet of modernity. Today, critics trace his fingerprints across satire, reportage, and fiction. Balzac’s Parisian intrigues, Wilde’s aesthetic scandals, Patricia Highsmith’s Venetian thrillers—all echo Aretino’s mix of desire and dread.

And then there are the heirs who claimed him outright. The Marquis de Sade, that relentless anatomist of transgression, drew directly from Aretino’s playbook. Sade’s philosophical obscenities echo the structures of the Ragionamenti and the Sonetti lussuriosi: dialogues in which sexuality becomes both performance and interrogation, the bed a courtroom, the embrace a cross-examination. Like Aretino, Sade deployed eroticism not only to shock but to dismantle. Both men wielded obscenity as an intellectual weapon, stripping religion and politics of their sanctity by exposing their hypocrisies in the stark light of desire. When Sade has his libertines sneer at clerics who preach chastity while gorging on pleasure, he repeats Aretino’s barbed observation from two centuries earlier: “The cardinals pray with their lips while their hands wander beneath the skirts.”

Sade shared Aretino’s radical anti-clericalism, his love of dialogue as a tool of exposure, and his cultivation of notoriety as a literary strategy. The “Divine Marquis” may have been locked in the Bastille, but he carried in his cell Aretino’s scandalous legacy: the belief that obscenity could be philosophy, that provocation itself could be a mode of truth-telling.

More than three centuries after Aretino’s death, Guillaume Apollinaire rediscovered him with a different eye. In the early twentieth century, Apollinaire praised him as a master who combined “the obscene with the sublime.” In works like Les Onze Mille Verges (The Eleven Thousand Rods), Apollinaire blurred the line between pornography and poetry, scandal and art, just as Aretino had done in his Venetian salons. He admired Aretino’s ability to turn audacity into literature, to make provocation itself a kind of aesthetic. “There is,” Apollinaire wrote of Aretino, “a grandeur in obscenity when it reveals the soul of an age.”

Apollinaire saw in Aretino a precedent for his own experiments: erotic audacity, satirical edge, literary innovation, and a fascination with scandal as aesthetic principle. Where Aretino staged dialogues between courtesans and matrons, Apollinaire crafted delirious erotic parables; where Aretino mocked clerics in his comedies, Apollinaire mocked bourgeois morality with surreal extravagance. Both men made literature dangerous again—texts that could be banned, burned, whispered, yet still survive.

In this long genealogy, Aretino is less a Renaissance curiosity than the origin point of a scandalous tradition that threads through Sade’s prisons, Apollinaire’s Paris, and our own scandal-hungry media. Each recognized that literature need not be safe, that scandal could be structure, that provocation could outlast sermons.

Most uncanny is how current Aretino feels. “What is whispered,” he mused in the Ragionamenti, “weighs more than what is spoken.” That line could be Twitter’s motto, or the tagline of an exposé-driven news cycle. Aretino would have thrived online: the cryptic tweet, the artful insinuation, the screenshot without context. He would have understood the logic of cancel culture, the way scandal circulates as performance, the way innuendo becomes currency.

Imagine him at the end, older now, dictating one last letter. The room is quieter, the scars deeper, the city outside still murmuring with intrigue. He knows his enemies wait for him to fall silent, but he also knows the page will outlive him. The candlelight no longer dances—it trembles. His scribe, older now too, no longer rushes. They have learned the rhythm of Aretino’s menace: slow, deliberate, inevitable.

He pauses mid-sentence, gazing out toward the lagoon. The bells of San Zanipolo toll the hour. A gondola glides past, its oars whispering against the water. Somewhere in the Palazzo Contarini dal Zaffo garden, jasmine blooms in the dark.

“Write this,” he says finally. “To be feared is to be remembered. To be remembered is to be read.”

The scribe hesitates. “And to be read?”

Aretino smiles. “Is to survive.”

He signs his name with a flourish—Pietro Aretino—and sets the quill down. The letter will travel, as they always have, faster than truth and deeper than rumor. It will be copied, misquoted, condemned, and preserved. It will be read by those who hate him and those who become him.

Centuries later, in a world of digital whispers and algorithmic outrage, his voice still echoes. In every scandal that unfolds like a story, in every tweet that wounds like a dagger, in every exposé that trembles with withheld revelation—Aretino is there. Not as ghost, but as architect. He understood what we are still learning: that scandal is not the opposite of art. It is one of its oldest forms. And in the hands of a master, it becomes not just spectacle, but structure. Not just provocation, but prophecy.

The trumpet still sounds. The question is not whether we hear it. The question is whether we recognize the tune.

THIS ESSAY WAS WRITTEN AND EDITED UTILIZING AI

Möbius Dreams: A Journey Of Identity Without End

From Nietzsche’s wanderings to Brodsky’s winters in Venice, identity loops like a Möbius strip—and augmented reality may carry those returns to us all.

By Michael Cummins, Editor, August 25, 2025

It begins, as so many pilgrimages of mind and imagination do, in Italy. To step into one of its cities—Florence with its domes, Rome with its ruins, Venice with its waters—is to experience time folding over itself. Stones are worn by centuries of feet; bells still toll hours as they did five hundred years ago; water mirrors façades that have witnessed empires rise and fall. Italy resists linearity. It does not advance from one stage to another; it loops, bends, recurs. For those who enter it, identity itself begins to feel less like a straight line than like a Möbius strip—a single surface twisting back on itself, where past and present, memory and desire, fold into one another.

Friedrich Nietzsche felt that pull most keenly. His journeys through Italy in the 1870s and 1880s were more than therapeutic sojourns for his fragile health; they were laboratories for thought. He spent time in Sorrento, where the Mediterranean air and lemon groves framed his writing of Human, All Too Human. In Genoa, he walked the cliffs above the port, watching the sun rise and fall in a rhythm that struck him as recurrence itself. In Turin, under its grand porticoes, he composed letters and aphorisms before his final collapse in 1889. And in Venice, he found a strange equilibrium between the city’s music, its tides, and his own restlessness. To his confidant Peter Gast, he wrote: “When I seek another word for ‘music,’ I never find any other word than ‘Venice.’” The gondoliers’ calls, the bells of San Marco, the lapping water—all repeated endlessly, yet never the same, embodying the thought that came to define him: the eternal return.

For Nietzsche, Italy was not a backdrop but a surface on which recurrence became tangible. Each city was a half-twist in the strip of his identity: Sorrento’s clarity, Genoa’s intensity, Turin’s collapse, Venice’s rhythm. He sensed that to live authentically meant to live as though each moment must be lived again and again. Italy, with its cycles of light, water, and bells, made that philosophy palpable.

Henry James—an American expatriate author with a different temperament—also found Italy less a destination than a structure. His Italian Hours (1909) reveals both rapture and unease. “The mere use of one’s eyes in Italy is happiness enough,” he confessed, yet he described Venice as “half fairy tale, half trap.” The city delighted and unsettled him in equal measure. He wandered Rome’s ruins, Florence’s galleries, Venice’s piazzas, and found that they all embodied a peculiar temporal layering—what he called “a museum of itself.” Italy was not history frozen; it was history repeating, haunting, resurfacing.

James’s fiction reflects that same looping structure. In The Aspern Papers, an obsessive narrator circles endlessly around an old woman’s letters, desperate to claim them, caught in a cycle of desire and denial. In The Portrait of a Lady, Isabel Archer discovers that the freedom she once thought she had secured returns as entrapment; her choices loop back on her with tragic inevitability. Even James’s prose mirrors the Möbius curve: sentences curl and return, digress and double back, before pushing forward. Reading James can feel like walking Venetian alleys—you arrive, but only by detour.

Joseph Brodsky, awarded the 1987 Nobel Prize in Literature after being exiled from the Soviet Union in 1972, found in Venice a winter refuge that became ritual. Each January he returned, until his death in 1996, and from those returns came Watermark (1992), a prose meditation that circles like the canals it describes. “Every January I went to Venice, the city of water, the city of mirrors, perhaps the city of illusions,” he wrote. Fog was his companion, “the city’s most faithful ghost.” Brodsky’s Venice was not Nietzsche’s radiant summer or James’s bustling salons. It was a city of silence, damp, reflection—a mirror to exile itself.

He repeated his returns like liturgy: sitting in the Caffè Florian, notebook in hand, crossing the Piazza San Marco through fog so dense the basilica dissolved, watching the lagoon become indistinguishable from the sky. Each January was the same, and yet not. Exile ensured that Russia was always present in absence, and Venice, indifferent to his grief yet faithful in its recurrence, became his Möbius surface. Each year he looped back as both the same man and someone altered.

What unites these three figures—Nietzsche, James, Brodsky—is not their similarity of thought but their recognition of Italy as a mirror for recurrence. Lives are often narrated as linear: childhood, youth, adulthood, decline. But Italy teaches another geometry. Like a Möbius strip, it twists perspective so that to move forward is also to circle back. An old anxiety resurfaces in midlife, but it arrives altered by experience. A desire once abandoned returns, refracted into new form. Nietzsche’s eternal return, James’s recursive characters, Brodsky’s annual exiles—all reveal that identity is not a line but a fold.

Italy amplifies this lesson. Its cities are not progressions but palimpsests. In Rome, one stands before ruins layered upon ruins: the Colosseum shadowed by medieval houses, Renaissance palaces built into ancient stones. In Florence, Brunelleschi’s dome rises above medieval streets, Renaissance paintings glow under electric light. In Venice, Byzantine mosaics shimmer beside Baroque marble while tourists queue for modern ferries. Each city is a surface where centuries loop, never erased, only folded over.

Philosophers and writers have groped toward metaphors for this looping. Nietzsche’s eternal return insists that each moment recurs infinitely. Derrida’s différance plays on the way meaning is always deferred, never fixed, endlessly circling. Borges imagined labyrinths where every turn leads back to the start. Gloria Anzaldúa’s Borderlands describes identity as hybrid, cyclical, recursive. Italy stages all of these. To walk its piazzas is to feel history as Möbius surface: no beginning, no end, only continuous return.

But the Möbius journey of return is not without strain. Overcrowding in Venice has made Piazza San Marco feel at times like a funnel for cruise-ship day trippers, raising the question of whether the city can survive its admirers. The rising cost of travel—inflated flights, pricier accommodations, surcharges for access—places the dream of pilgrimage out of reach for many. The very recurrence that writers once pursued with abandon now risks becoming the privilege of the few. And so the question arises: if one cannot return physically, can another kind of return suffice?

The answer is already being tested. Consider the Notre-Dame de Paris augmented exhibition, created by the French startup Histovery. Visitors carry a HistoPad, a touchscreen tablet, and navigate 850 years of the cathedral’s history. Faux stone tiles line the floor, stained-glass projections illuminate the walls, recordings of tolling bells echo overhead. With a swipe, one moves from the cathedral’s medieval construction to Napoleon’s coronation, then to the smoke and flames of the 2019 fire, then to the scaffolds of its restoration. It is a Möbius strip of architecture, looping centuries in minutes. The exhibition has toured globally, making Notre-Dame accessible to millions who may never set foot in Paris.

Italy, with its fragile architecture and layered history, is poised for the same transformation. Imagine a virtual walk through Venice’s alleys, dry and pristine, free of floods. A reconstructed Pompeii, where one can interact with residents moments before the eruption. Florence restored to its quattrocento brilliance, free of scaffolding and tourist throngs. For those unable to travel, AR offers an uncanny loop: recurrence of experience without presence.

Yet the question lingers: if one can walk through Notre-Dame without smelling the stone, without hearing the echo of one’s own footsteps, has one truly arrived? Recurrence, after all, has always been embodied. Nietzsche needed the Venetian fog to sting his lungs. James needed to feel the cold stones of a Florentine palazzo. Brodsky needed the damp silence of January to write his Watermark. The Möbius loop of identity was sensory, mortal, physical. Can pixels alone replicate that?

Perhaps this is too stark a contrast. Italy itself has always been both ruin and renewal, both stone and scaffolding, both presence and representation. Rome is simultaneously crumbling and rebuilt. Florence is both painted canvas and postcard reproduction. Venice is both sinking and endlessly photographed. Italy has survived by layering contradictions. Augmented reality may become one more layer.

Indeed, there is hope in this possibility. Technology can democratize what travel once restricted. The Notre-Dame exhibition allows a child in Kansas to toggle between centuries in an afternoon. It lets an elder who cannot fly feel the weight of medieval Paris. Applied to Italy, AR could make the experience of recurrence more widely available. Brodsky’s fog, Nietzsche’s bells, James’s labyrinthine sentences—these could be accessed not only by the privileged traveler but by anyone with a headset. The Möbius strip of identity, always looping, would expand to include more voices, more bodies, more experiences.

And yet AR is not a replacement so much as an extension. Those who can still travel will always seek stone, water, and bells. They will walk the Rialto and feel the wood beneath their feet; they will stand in Florence and smell the paint and dust; they will sit in Rome’s piazzas and feel the warmth of stone in the evening. These are not illusions but recurrences embodied. Technology will not end this; it will supplement it, add folds to the Möbius strip rather than cutting it.

In this sense, the Möbius book of identity continues to unfold. Nietzsche’s Italian sojourns, James’s expatriate wanderings, Brodsky’s winter rituals—all are chapters inscribed on the same continuous surface. Augmented reality will not erase those chapters; it will add marginalia, footnotes, annotations accessible to millions more. The loop expands rather than contracts.

So perhaps the hopeful answer is that recurrence itself becomes more democratic. Italy will always be there for those who return, in stone and water. But AR may ensure that those who cannot return physically may still enter the loop. A student in her dormitory may don a headset and hear the same Venetian bells that Nietzsche once called music. A retiree may walk through Florence’s restored galleries without leaving her home. A child may toggle centuries in Notre-Dame and begin to understand what it means to live inside a Möbius strip of time.

Identity, like travel, has never been a straight line. It is a fold, a twist, a surface without end. Italy teaches this lesson in stone and water. Technology may now teach it in pixels and projections. The Möbius book has no last page. It folds on—Nietzsche in Turin, James in Rome, Brodsky in Venice, and now, perhaps, millions more entering the same loop through new, augmented doors.

The self is not a line but a surface, infinite and recursive. And with AR, more of us may learn to trace its folds.

THIS ESSAY WAS WRITTEN AND EDITED UTILIZING AI

The Envelope of Democracy

How a practice born on Civil War battlefields became the latest front in America’s fight over trust, law, and the vote.

By Michael Cummins, Editor, August 23, 2025

On a raw November morning in 1864, somewhere in a Union encampment in Virginia, soldiers bent over makeshift tables to mark their ballots. The war was not yet won; Grant’s men were still grinding through the trenches around Petersburg. Yet Abraham Lincoln insisted that these men, scattered across muddy fields and far from home, should not be denied the right to vote. Their ballots were gathered, sealed, and carried by courier and rail to their home states, where clerks would tally them beside those cast in person. For the first time in American history, large numbers of citizens voted from a distance—an innovation spread across 19 Union states by hasty wartime statutes and improvised procedures (National Park Service; Smithsonian).

Lincoln understood the stakes. After the votes were counted, he marveled that “a people’s government can sustain a national election, in the midst of a great civil war” (Library of Congress). To deny soldiers their ballots was to deny the Union the very legitimacy for which it fought. Then, as now, critics fretted about fraud and undue influence: Democrats accused Republicans of manufacturing ballots in the field; rumors spread of generals pressuring soldiers to vote for Lincoln. Newspapers thundered warnings about the dilution of the franchise. But the republic held. Soldiers voted, the ballots were counted, and Lincoln was re-elected.

A century and a half later, the envelope has become a battlefield again. Donald Trump has promised to “end mail-in ballots” and scrap voting machines, declaring them corrupt, even while bipartisan experts explain that nearly all U.S. ballots are already paper, with machines used only for tabulation and auditing (AP; Bipartisan Policy Center). The paradox is striking: modern tabulators are faster and more accurate than human tallies, while hand counts are prone to fatigue and error (Time).

But how did a practice with Civil War pedigree come to be portrayed as a threat to democracy itself? What, at root, do Americans fear when they fear the mailed ballot?

In a Phoenix suburb not long ago, a first-time voter—call her Teresa—dropped her ballot at a post office with pride. She liked the ritual: filling it out at her kitchen table, checking the boxes twice, signing carefully. Weeks later, she learned her ballot had been rejected for a signature mismatch with an old ID on file. She had, without knowing it, missed the deadline to “cure” her ballot. “It felt like I didn’t exist,” one young Arizonan told NPR, voicing the frustration of many. Across the country, younger and minority voters are disproportionately likely to have their mail ballots rejected for administrative reasons such as missing signatures or late arrival. If fraud by mail is vanishingly rare, disenfranchisement by process is not.

Meanwhile, on the factory floor of American vote-by-mail, the ordinary hum of democratic labor continues. Oregon has conducted its elections almost entirely by mail for a quarter century, with consistently high participation and confidence (Oregon Secretary of State). Colorado followed with its own all-mail model, paired with automatic registration, ballot tracking, and risk-limiting audits (Colorado Secretary of State). Washington and Utah have joined in similar fashion. Election officials talk about the efficiency of central counting centers, the ease of auditing paper ballots, the increased access for rural and working-class voters. One clerk described her office during election week as “a warehouse of democracy,” envelopes stacked in trays, staff bent over machines that scan and sort. In one corner, a team compares signatures with the care of art historians verifying provenance. The scene is not sinister but oddly moving: democracy reduced to thousands of small acts of faith, each envelope a declaration that one voice counts.

And yet suspicion lingers. Part of it is ritual. The image of democracy for generations has been the polling place: chalkboard schedules, folding booths, poll books fat with names. The mailed ballot decentralizes the ceremony. It moves civic action into kitchens and break rooms, onto couches and barracks bunks. For some, invisibility breeds mistrust; for others, it is the genius of the thing—citizenship woven into home life, not just performed in public.

Part of the anxiety is legal. The Constitution’s Elections Clause gives the states authority over the “Times, Places and Manner” of congressional elections but empowers Congress to “make or alter such Regulations” (Constitution Annotated). Presidents have no such power. The White House cannot ban absentee ballots by decree. Congress could attempt to standardize or limit the use of mail ballots in federal elections—though any sweeping restriction would run headlong into litigation from voters who cannot be present on Election Day, from soldiers on deployment to homebound citizens.

And we have seen how precarious counting can be when law and logistics collide. In 2000, Florida’s election—and the presidency—turned not on fraud but on ballots: “hanging chads,” the ambiguous punch-card remnants that confounded machines and humans alike. The Supreme Court’s decision in Bush v. Gore halted a chaotic recount and left many Americans convinced that the true count would forever be unknowable (Oyez). The lesson was not that ballots are fraudulent, mailed or otherwise, but that the process of counting and verifying them is fragile, and that the legitimacy of outcomes depends on rules agreed to before the tally begins.

It is tempting, in moments of panic, to look abroad for calibration. In the United Kingdom, postal ballots are an ordinary convenience governed by clear rules (UK Electoral Commission). Canadians deploy a “special ballot” system that lets voters cast by post from the Yukon to Kandahar (Elections Canada). The Swiss have made postal voting a workaday part of civic life (Swiss Confederation). Fraud exists everywhere—but serious cases are exceptional, detected, and punished.

Back home, the research is blunt. The Brennan Center for Justice finds that fraud in mail balloting is “virtually nonexistent.” A Stanford–MIT study found that universal vote-by-mail programs in California, Utah, and Washington had no partisan effect—undercutting claims that the method “rigs” outcomes rather than simply broadening access. And those claims that machines slow results? Election administrators, backed by Wisconsin Watch, explain that hand counts tend to be slower and less accurate, while scanners paired with paper ballots and audits deliver both speed and verifiability.

Still, mistrust metastasizes, not from facts but from fear. A rumor in Georgia about “suitcases of ballots,” long debunked, lingers as a meme. A Michigan voter insists he saw a neighbor mail five envelopes, unaware they were for a household of five registered voters. Conspiracy thrives in the gap between visibility and imagination.

Yet even as the mailed ballot feels embattled, the next frontier is already under debate. In recent years, pilot projects have tested whether citizens might someday cast votes on their phones or laptops, secured not by envelopes but by cryptographic ledgers. The mobile voting platform Voatz, used experimentally in West Virginia and a few municipal elections, drew headlines for its promise of accessibility but also for its flaws: researchers at MIT found vulnerabilities tied to third-party cloud storage and weak authentication, prompting urgent warnings (MIT Technology Review). GoatBytes’ 2023 review noted that blockchain frameworks like Hyperledger Sawtooth and Fabric might one day offer stronger, verifiable digital ballots, and even the U.S. Postal Service has patented a blockchain-based mobile voting system (USPTO Patent). Capitol Technology University traced this shift as the latest stage in the long evolution from paper to punch cards to optical scanners, with AI now assisting ballot tabulation (Capitol Tech University). For proponents, mobile systems are less about novelty than necessity: the disabled veteran, the soldier abroad, the homebound elder—all could vote with a tap.

But here, too, the fault lines are visible. The American Bar Association recently cautioned that while blockchain and smartphone voting might expand access, they raise thorny questions about privacy, coercion, and verification—how to ensure a vote cast on a personal device is both secret and authentic. TIME Magazine spotlighted the allure of digital voting for those long underserved by the system, even as groups like Verified Voting warned that premature adoption could expose elections to risks far graver than those posed by paper mail ballots (TIME). In this telling, technology is Janus-faced: a path to broaden democracy’s reach, and a Pandora’s box of new vulnerabilities. If the mailed envelope embodies trust carried by hand, the mobile ballot would ask citizens to entrust their franchise to lines of code. Whether Americans are ready to make that leap remains an open question.

If there is a flaw to worry about, it is not the specter of rampant fraud, but the small, fixable frictions that disenfranchise well-meaning voters: needlessly strict signature-match policies, short cure windows, postal delays for ballots requested late, confusing instructions, and uneven funding for local election offices. The remedy comes not from abolishing the envelope, but from investing in the infrastructure around it: clear statewide standards for verification and cure; robust voter education about deadlines; modernized voter registration databases; secure drop boxes; and the budget lines that let county clerks hire and train staff.

In the end, the mailed ballot is less a departure from American tradition than a continuation of it. The ritual has changed—less courthouse, more kitchen table—but the bargain is the same. When a soldier in 1864 dropped his folded ballot into a wooden box, he entrusted strangers to carry it home. When a modern voter seals an envelope in Denver or Tacoma, she entrusts a chain of clerks, scanners, and auditors. Trust, not spectacle, is the beating heart of the system.

And perhaps that is why the envelope matters so much now. To defend it is not merely to defend convenience; it is to defend a vision of democracy capacious enough to reach the absent, the disabled, the far-flung, the over-scheduled—our fellow citizens whose lives do not always bend to a Tuesday line at a nearby gym. To reject it is to narrow the franchise to those who can appear on command.

Imagine Lincoln again, weary at the White House in the fall of 1864, reading dispatches about alleged fraud in soldier ballots and still insisting the votes be counted. Imagine a first-time voter in Phoenix who lost her chance over a mismatched squiggle, and the next one who won’t because the state clarified its cure rules. Imagine the county clerk who will never trend on social media, but who builds public confidence day by day with plain procedures and paper trails.

At the end of the day, American democracy may still come down to envelopes—white, yellow, blue—carried in postal bins, stacked in counting rooms, marked by the smudges of human hands. They are fragile, yes, but they are resilient too. The Civil War ballots survived trains and rivers; today’s ballots survive disinformation and delay. The act is the same: a citizen marks a choice, seals it, and sends it forth with faith that it will be received. If democracy is government of, by, and for the people, then every envelope is its emissary.

What would we lose if we tore that emissary up? Not only the votes of those who cannot stand in line, but the habit of trust that keeps the republic breathing. Better, then, to do what we have done at our best moments—to keep counting, keep auditing, keep improving, keep faith. The mailed ballot is not a relic of pandemic panic; it is a tested tool of a sprawling republic that has always asked its citizens to speak from wherever they are.

THIS ESSAY WAS WRITTEN AND EDITED UTILIZING AI

ADVANCING TOWARDS A NEW DEFINITION OF “PROGRESS”

By Michael Cummins, Editor, August 9, 2025

The very notion of “progress” has long been a compass for humanity. Yet, what we consider an improved state is a question whose answer has shifted dramatically over time. As the Cambridge Dictionary defines it, progress is simply “movement to an improved or more developed state.” But whose state is being improved? And toward what future are we truly moving? The illusion of progress is perhaps most evident in technology, where breathtaking innovation often masks a troubling truth: the benefits are frequently unevenly shared, concentrating power and wealth while leaving many behind.

Historically, the definition of progress was a reflection of the era’s dominant ideology. The medieval period saw it as a spiritual journey toward salvation. The Enlightenment shattered this, replacing it with the ascent of humanity through reason, science, and the triumph over superstition. This optimism fueled the Industrial Revolution, where thinkers like Auguste Comte and Herbert Spencer saw progress as an unstoppable climb toward knowledge and material prosperity. But this vision was a mirage for many. The same steam engines that powered unprecedented economic growth subjected workers to brutal, dehumanizing conditions. The Gilded Age enriched railroad magnates and steel barons while workers struggled in poverty and faced violent crackdowns.

Today, a similar paradox haunts our digital age. Meet Maria, a fictional yet representative 40-year-old factory worker in Flint, Michigan. For decades, her factory job provided a steady income. But last year, the plant introduced an AI-powered assembly line, and her job, along with hundreds of others, was automated away. Maria’s story is not an isolated incident; it reflects the experience of millions of workers around the world. Technologies like the microchip and generative AI promise to solve complex problems, yet they often deepen inequality in their wake. Her story is a poignant call to arms, demanding that we re-examine our collective understanding of progress.

This essay argues for a new, more deliberate definition of progress—one that moves beyond the historical optimism rooted in automatic technological gains and instead prioritizes equity, empathy, and sustainability. We will explore the clash between techno-optimism—a blind faith in technology’s ability to solve all problems—and techno-realism—a balanced approach that seeks inclusive and ethical innovation. Drawing on the lessons of history and the urgent struggles of individuals like Maria, we will chart a course toward a progress that uplifts all, not just the powerful and the privileged.


The Myth of Automatic Progress

The allure of technology is a siren’s song, promising a frictionless world of convenience, abundance, and unlimited potential. Marc Andreessen’s 2023 “Techno-Optimist Manifesto” captured this spirit perfectly, a rallying cry for the belief that technology is the engine of all good and that any critique is a form of “demoralization.” However, this viewpoint ignores the central lesson of history: innovation is not inherently a force for equality.

The Industrial Revolution, while a monumental leap for humanity, was a masterclass in how progress can widen the chasm between the rich and the poor. Factory owners, the Andreessens of their day, amassed immense wealth, while the ancestors of today’s factory workers faced dangerous, low-wage jobs and lived in squalor. Today, the same forces are at play. A 2023 McKinsey report projected that activities accounting for up to 30% of hours worked in the U.S. could be automated by 2030, a seismic shift that will disproportionately affect low-income workers, the very demographic to which Maria belongs.

Progress, therefore, is not an automatic outcome of innovation; it is a result of conscious choices. As economists Daron Acemoglu and Simon Johnson argue in their pivotal 2023 book Power and Progress, the distribution of a technology’s benefits is not predetermined.

“The distribution of a technology’s benefits is not predetermined but rather a result of governance and societal choices.” — Daron Acemoglu and Simon Johnson, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity

Redefining progress means moving beyond the naive assumption that technology’s gains will eventually “trickle down” to everyone. It means choosing policies and systems that uplift workers like Maria, ensuring that the benefits of automation are shared broadly rather than being captured solely as corporate profits.


The Uneven Pace of Progress

Our perception of progress is often skewed by the dizzying pace of digital advancements. We see the exponential growth of computing power and the rapid development of generative AI and mistakenly believe this is the universal pace of all human progress. But as Vaclav Smil, a renowned scholar on technology and development, reminds us, this is a dangerous illusion.

“We are misled by the hype of digital advances, mistaking them for universal progress.” — Vaclav Smil, The Illusion of Progress: The Promise and Peril of Technology

A look at the data confirms Smil’s point. According to the International Energy Agency (IEA), the global share of fossil fuels in the primary energy mix only dropped from 85% to 80% between 2000 and 2022—a change so slow it’s almost imperceptible. Simultaneously, global crop yields for staples like wheat have largely plateaued since 2010, and an estimated 735 million people were undernourished in 2022, a stark reminder that our most fundamental challenges aren’t being solved by the same pace of innovation we see in Silicon Valley.

Even the very tools of the digital revolution can be a source of regression. Social media, once heralded as a democratizing force, has become a powerful engine for division and misinformation. For example, a 2023 BBC report documented how WhatsApp was used to fuel ethnic violence during the Kenyan elections. These platforms, while distracting us with their endless streams of content, often divert our attention from the deeper, more systemic issues squeezing families like Maria’s, such as stagnant wages and rising food prices. Yet, progress is possible when innovation is directed toward systemic challenges. The rise of microgrid solar systems in Bangladesh, which has provided electricity to millions of households, demonstrates how targeted technology can bridge gaps and empower communities. Redefining progress means prioritizing these systemic solutions over the next shiny gadget.


Echoes of History in Today’s World

Maria’s job loss in Flint isn’t a modern anomaly; it’s an echo of historical patterns of inequality and division. It resonates with the Gilded Age of the late 19th century, when railroad monopolies and steel magnates amassed colossal fortunes while workers faced brutal, 12-hour days in unsafe factories. The violent Homestead Strike of 1892, where workers fought against wage cuts, is a testament to the bitter class struggle of that era. Today, wealth inequality rivals that of the Gilded Age, with a recent Oxfam report showing that the world’s richest 1% have captured almost two-thirds of all new wealth created since 2020. Families like Maria’s are left to struggle with rising rents and stagnant wages, a reality far removed from the promise of prosperity.

“History shows that technological progress often concentrates wealth unless society intervenes.” — Daron Acemoglu and Simon Johnson, Power and Progress

Another powerful historical parallel is the Dust Bowl of the 1930s. Decades of poor agricultural practices and corporate greed led to an environmental catastrophe that displaced 2.5 million people. This is an eerie precursor to our current climate crisis. A recent NOAA report on California’s wildfires shows how a similar failure to prioritize long-term well-being is now displacing millions more, just as it did nearly a century ago.

In Flint, the social fabric is strained, with some residents blaming immigrants for economic woes—a classic scapegoat tactic that ignores the significant contributions of immigrants to the U.S. economy. This echoes the xenophobic sentiment of the 1920s Red Scare. Unchecked AI-driven misinformation and viral “deepfakes” are the modern equivalent of 1930s radio propaganda, amplifying fear and division.

“We shape our tools, and thereafter our tools shape us, often reviving old divisions.” — Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow

Yet, history is also a source of hope. Germany’s proactive refugee integration programs in the mid-2010s, which trained and helped integrate hundreds of thousands of migrants into the workforce, show that societies can choose inclusion over exclusion. A new definition of progress demands that we confront these cycles of inequality, fear, and division. By choosing empathy and equity, we can ensure that technology serves to bridge divides and uplift communities like Maria’s, rather than fracturing them further.


The Perils of Techno-Optimism

The belief that technology will, on its own, solve our most pressing problems is a seductive but dangerous trap. It promises a quick fix while delaying the difficult, structural changes needed to address crises like climate change and social inequality. In their analysis of climate discourse, scholars Sofia Ribeiro and Viriato Soromenho-Marques argue that techno-optimism is a distraction from necessary action.

“Techno-optimism distracts from the structural changes needed to address climate crises.” — Sofia Ribeiro and Viriato Soromenho-Marques, The Techno-Optimists of Climate Change

The Arctic’s indigenous communities, like the Inuit, face the existential threat of melting permafrost. Meanwhile, some oil companies tout expensive and unproven technologies like direct air capture to justify continued fossil fuel extraction, all while delaying the real solutions—a massive investment in renewable energy. This is not progress; it is a corporate strategy to delay accountability, echoing the tobacco industry’s denialism of the 1980s. As Nathan J. Robinson’s 2023 critique in Current Affairs notes, techno-optimism is a form of “blind faith” that ignores the need for regulation and ethical oversight, risking a repeat of catastrophes like the 2008 financial crisis.

The gig economy is a perfect microcosm of this peril. Driven by algorithmic platforms like Uber, it exemplifies how technology can optimize for profits at the expense of fairness. A recent study from UC Berkeley found that a significant portion of gig workers earn below the minimum wage, as algorithms prioritize efficiency over worker well-being. Today, unchecked AI is amplifying these harms, with a 2023 Reuters study finding that a large percentage of content on platforms like X is misleading, fueling division and distrust.

“Technology without politics is a recipe for inequality and instability.” — Evgeny Morozov, The Net Delusion: The Dark Side of Internet Freedom

Yet, rejecting blind techno-optimism is not a rejection of technology itself. It is a demand for a more responsible, regulated approach. Denmark’s wind energy strategy, which has made it a global leader in renewables, is a testament to how pragmatic government regulation and public investment can outpace the empty promises of technowashing. Redefining progress means embracing this kind of techno-realism.


Choosing a Techno-Realist Path

To forge a new definition of progress, we must embrace techno-realism—a balanced approach that harnesses innovation’s potential while grounding it in ethics, transparency, and human needs. As Margaret Gould Stewart, a prominent designer, argues, this is an approach that asks us to design technology that serves society, not just markets.

This path is not about rejecting technology, but about guiding it. Think of the nurses in rural Rwanda, where drones zip through the sky, delivering life-saving blood and vaccines to remote clinics. This is technology not as a shiny, frivolous toy, but as a lifeline, guided by a clear human need. History and current events show us that this path is possible. The Luddites of 1811 were not fighting against technology; they were fighting for fairness in the face of automation’s threat to their livelihoods. Their spirit lives on in the European Union’s landmark AI Act, which mandates transparency and safety standards to protect workers like Maria from biased algorithms. In Chile, a national program is retraining former coal miners to become renewable energy technicians, demonstrating that a just transition to a sustainable future is possible.

The heart of this vision is empathy. Finland’s national media literacy curriculum, which has been shown to be effective in combating misinformation, is a powerful model for equipping citizens to navigate the digital world. In Mexico, indigenous-led conservation projects are blending traditional knowledge with modern science to heal the land. As Nobel laureate Amartya Sen wrote, true progress is about a fundamental expansion of human freedom.

“Development is about expanding the freedoms of the disadvantaged, not just advancing technology.” — Amartya Sen, Development as Freedom

Costa Rica’s incredible achievement of powering its grid with nearly 100% renewable energy is a beacon of what is possible when a nation aligns innovation with ethics. These stories—from Rwanda’s drones to Mexico’s forests—prove that technology, when guided by history, regulation, and empathy, can serve all.


Conclusion: A Progress We Can All Shape

Maria’s story—her job lost to automation, her family struggling in a community beset by historical inequities—is not a verdict on progress but a powerful, clear-eyed challenge. It forces us to confront the fact that progress is not an inevitable, linear march toward a better future. It is a series of deliberate choices, a constant negotiation between what is technologically possible and what is ethically and socially responsible. The historical echoes of inequality, environmental neglect, and division are loud, but they are not our destiny.

Imagine Maria today, no longer a victim of technological displacement but a beneficiary of a new, more inclusive model. Picture her retrained as a solar technician, her hands wiring a community-owned energy grid that powers Flint’s homes with clean energy. Imagine her voice, once drowned out by economic hardship, now rising on social media to share stories of unity and resilience. This vision—where technology is harnessed for all, guided by ethics and empathy—is the progress we must pursue.

The path forward lies in action, not just in promises. It requires us to engage in our communities, pushing for policies that protect and empower workers. It demands that we hold our leaders accountable, advocating for a future where investments in renewable energy and green infrastructure are prioritized over short-term profits. It requires us to support initiatives that teach media literacy, allowing us to discern truth from the fog of misinformation. It is in these steps, grounded in the lessons of history, that we turn a noble vision into a tangible reality.

Progress, in its most meaningful sense, is not about the speed of a microchip or the efficiency of an algorithm. It is about the deliberate, collective movement toward a society where the benefits of innovation are shared broadly, where the most vulnerable are protected, and where our shared future is built on the foundations of empathy, community, and sustainability. It is a journey we must embark on together, a progress we can all shape.


Progress: movement to a collectively improved and more inclusively developed state, resulting in a lessening of economic, political, and legal inequality, a strengthening of community, and a furthering of environmental sustainability.


THIS ESSAY WAS WRITTEN AND EDITED UTILIZING AI

Essay: The Corporate Contamination of American Healthcare

By Michael Cummins, Editor, Intellicurean, August 1, 2025

American healthcare wasn’t always synonymous with bankruptcy, bureaucracy, and corporate betrayal. In its formative years, before mergers and market forces reshaped the landscape, the United States relied on a patchwork of community hospitals, charitable clinics, and physician-run practices. The core mission, though unevenly fulfilled, was simply healing. Institutions often arose from religious benevolence or civic generosity, guided by mottos like “Caring for the Community” or “Service Above Self.” Medicine, while never entirely immune to power or prejudice, remained tethered to the idea that suffering shouldn’t be monetized. Doctors frequently knew their patients personally, treating entire families across generations, with decisions driven primarily by clinical judgment and the patient’s best interest, not by algorithms from third-party payers.

Indeed, in the 1950s, 60s, and 70s, independent physicians took pride in their ability to manage patient care holistically. They actively strove to keep patients out of emergency rooms and hospitals through diligent preventative care and timely office-based interventions. During this era, patients generally held their physicians in high esteem, readily accepting medical recommendations and taking personal responsibility for following through on advice, fostering a collaborative model of care. This foundational ethos, though romanticized in retrospect, represented a clear distinction from the profit-driven machine it would become.

But this premise was systematically dismantled—not through a single malicious act, but via incremental policies that progressively tilted the axis from service to sale. The Health Maintenance Organization (HMO) Act of 1973, for instance, championed by the Nixon administration with the stated aim of curbing spiraling costs, became a pivotal gateway for private interests. It incentivized the creation of managed care organizations, promising efficiency through competition and integrated services. Managed care was born, and with it, the quiet, insidious assumption that competition, a force lauded in other economic sectors, would somehow produce compassion in healthcare.

It was a false promise, a Trojan horse for commercialization. This shift produced the strained patient-physician relationship we see today, a sharp contrast with earlier decades. Modern interactions are often characterized by anxiety and distrust, with the “AI-enabled patient,” frequently misinformed by online sources, questioning their doctor’s expertise and demanding expensive, potentially unnecessary treatments. “A little learning is a dangerous thing; / Drink deep, or taste not the Pierian spring,” as Alexander Pope observed in “An Essay on Criticism” (1711). Worse still, many express an unwillingness to pay for these services, often accumulating uncollectible debt that shifts the financial burden elsewhere.

Profit Motive vs. Patient Care: The Ethical Abyss Deepens

Within this recoding of medicine, ethical imperatives have been warped into financial stratagems, creating an ethical abyss that compromises the very essence of patient care. In boardrooms far removed from the sickbed, executives, often without medical training, debate the cost-benefit ratios of compassion. The pursuit of “efficiency” and “value” in these settings often translates directly into cost-cutting measures that harm patient outcomes and demoralize medical professionals. The scope of this problem is vast: total U.S. healthcare spending exceeded $4.5 trillion in 2022, representing over 17% of the nation’s GDP, far higher than in any other developed country.

“American healthcare has been able to turn acute health and medical conditions into a monetizable chronic condition.” (The editor of Intellicurean)

Insurance companies—not medical professionals—routinely determine what qualifies as “essential” medical care. Their coverage decisions are often based on complex algorithms designed to minimize payouts and maximize profits, rather than clinical efficacy. Denials are issued algorithmically, often with minimal human review. For instance, a 2023 study by the Kaiser Family Foundation revealed that private insurers deny an average of 17% of in-network claims, translating to hundreds of millions of denials annually. These aren’t minor rejections; they often involve critical surgeries, life-saving medications, or extended therapies.

Appeals become Kafkaesque rituals of delay, requiring patients, often already sick and vulnerable, to navigate labyrinthine bureaucratic processes involving endless phone calls, mountains of paperwork, and protracted legal battles. For many patients, the options are cruelly binary: accept substandard or insufficient care, or descend into crippling medical debt by paying out-of-pocket for treatments deemed “non-essential” by a corporate entity. The burden of this system is vast: a 2023 KFF report found that medical debt in the U.S. totals over $140 billion, with millions of people owing more than $5,000.

Another significant burden on the system comes from patients requiring expensive treatments that, while medically necessary, drive up costs. Insurance companies may cover these treatments, but the cost is often passed on to other enrollees through increased premiums. This creates a cross-subsidization that raises the price of healthcare for everyone, even for the healthiest individuals, further fueling the cycle of rising costs. This challenge is further complicated by the haunting specter of an aging population. While spending in the last 12 months of life accounts for an estimated 8.5% to 13% of total US medical spending, for Medicare specifically, the number can be as high as 25-30% of total spending. A significant portion of this is concentrated in the last six months, with some research suggesting nearly 40% of all end-of-life costs are expended in the final month. These costs aren’t necessarily “wasteful,” as they reflect the intense care needed for individuals with multiple chronic conditions, but they represent a massive financial burden on a system already straining under corporate pressures.

“The concentration of medical spending in the final months of life is not just a statistical anomaly; it is the ultimate moral test of a system that has been engineered for profit, not for people.” (Dr. Samuel Chen, Director of Bioethics at the National Institute for Public Health)

The ethical abyss is further widened by a monumental public health crisis: the obesity epidemic. The Centers for Disease Control and Prevention (CDC) reports that over 40% of American adults are obese, a condition directly linked to an array of chronic, expensive, and life-shortening ailments. This isn’t just a lifestyle issue; it’s a systemic burden that strains the entire healthcare infrastructure. The economic fallout is staggering, with direct medical costs for obesity-related conditions estimated to be $173 billion annually (as of 2019 data), representing over 11% of U.S. medical expenditures.

“We’ve created a perverse market where the healthier a population gets, the less profitable the system becomes. The obesity epidemic is a perfect storm for this model: a source of endless, monetizable illness.” (Dr. Eleanor Vance, an epidemiologist at the Institute for Chronic Disease Studies)

While the healthcare industry monetizes these chronic conditions, a true public health-focused system would prioritize aggressive, well-funded preventative care, nutritional education, and community wellness programs. Instead, the current system is engineered to manage symptoms rather than address root causes, turning a public health emergency into a profitable, perpetual business model. This same dynamic applies to other major public health scourges, from alcohol and substance use disorders to the widespread consumption of junk food. The treatment for these issues—whether through long-term addiction programs, liver transplants, or bariatric surgery—generates immense revenue for hospitals, clinics, and pharmaceutical companies. The combined economic cost of alcohol and drug misuse is estimated to be over $740 billion annually, according to data from the National Institutes of Health.

The food and beverage industry, in turn, heavily lobbies against public health initiatives like soda taxes or clear nutritional labeling, ensuring that the source of the problem remains profitable. The cycle is self-sustaining: corporations profit from the products that cause illness, and then the healthcare system profits from treating the resulting chronic conditions.

Claim delays and denials aren’t accidents; they’re operational strategies designed to safeguard margins. Efficiency in this ecosystem isn’t measured by patient recovery times or improved health metrics but by reduced payouts and increased administrative hurdles that deter claims. The longer a claim is delayed, the more likely a patient is to give up, or the more likely their condition is to worsen to the point where the original “essential” treatment is no longer viable, thereby absolving the insurer of payment. This creates a perverse incentive structure in which the healthier a population is, and the less care it uses, the more profitable the insurance company becomes, leading to a system fundamentally at odds with public well-being.

Hospitals, once symbols of community care, now operate under severe investor mandates, pressuring staff to increase patient throughput, shorten lengths of stay, and maximize billable services. Counseling, preventive care, and even the dignified, compassionate end-of-life discussions that are crucial to humane care are often recast as financial liabilities, as they don’t generate sufficient “revenue per minute.” Procedures are streamlined not for optimal medical necessity or patient comfort but for profitability and rapid turnover. This relentless drive for volume can compromise patient safety. The consequences are especially dire in rural communities, which often serve older, poorer populations with higher rates of chronic conditions.

Private equity acquisitions, in particular, often lead to closures, layoffs, and “consolidations” that leave entire regions underserved, forcing residents to travel vast distances for basic emergency or specialty care. According to the American Hospital Association, over 150 rural hospitals have closed since 2010, many after being acquired by private equity firms, leaving millions of Americans in “healthcare deserts.” Those same firms have invested more than $750 billion in healthcare over that period, according to PitchBook data.

“Private equity firms pile up massive debt on their investment targets and… bleed these enterprises with assorted fees and dividends for themselves.” (Laura Katz Olson, in Ethically Challenged: How Private Equity Firms Are Impacting American Health Care)

The metaphor is clinical: corporate owners are effectively bleeding dry the very institutions they were meant to sustain, extracting capital while services deteriorate. Olson further details how this model often leads to reduced nurse-to-patient ratios, cuts in essential support staff, and delays in equipment maintenance, directly compromising patient safety and quality of care. This “financial engineering” transforms a vital public service into a mere asset to be stripped for parts.

Pharmaceutical companies sharpen the blade further. Drugs like insulin—costing mere dollars to produce (estimates place the manufacturing cost for a vial of insulin at around $2-$4)—are sold for hundreds, and sometimes thousands, of dollars per vial in the U.S. These exorbitant prices are shielded by a labyrinth of evergreening patents, aggressive lobbying, and strategic maneuvers to suppress generic competition. Epinephrine auto-injectors (EpiPens), indispensable and time-sensitive for severe allergic reactions, similarly became emblematic of this greed, with prices skyrocketing by over 400% in less than a decade, from around $100 in 2009 to over $600 by 2016. Monopoly pricing isn’t just unethical—it’s lethal, forcing patients to ration life-saving medication, often with fatal consequences.

“The U.S. pays significantly more for prescription drugs than other high-income countries, largely due to a lack of government negotiation power and weaker price regulations.” (A Commonwealth Fund analysis)

This absence of negotiation power allows pharmaceutical companies to dictate prices, viewing illnesses as guaranteed revenue streams. The global pharmaceutical market is a massive enterprise, with the U.S. alone accounting for over 40% of global drug spending, highlighting the industry’s immense financial power within the country.

Meanwhile, physicians battle burnout at rates previously unimaginable, a crisis that predates but was exacerbated by recent global health challenges. But the affliction isn’t just emotional; it’s systemic.

In his 2025 book, Healthcare Is Killing Me: Burnout and Moral Injury in the Age of Corporate Medicine, Dimitrios Tsatiris examines how the healthcare system itself contributes to physician suffering and offers recommendations for improving the culture of medicine.

Tsatiris highlights how administrative burdens—such as endless electronic health record (EHR) documentation, pre-authorization requirements, and quality metrics that often feel detached from actual patient care—consume up to half of a physician’s workday. The culture, as it stands, is one of metrics, audits, and profound moral dissonance, where doctors feel increasingly alienated from their core mission of healing.

This moral dissonance is compounded by the ever-present threat of malpractice litigation. Today’s physician is often criticized for sending too many patients to the emergency room, a practice viewed as an unnecessary cost driver. The alternative, however, is fraught with peril: if they don’t send a patient to the ER and a severe outcome follows, they can be sued and held personally liable, driving up malpractice insurance premiums and fostering a culture of defensive medicine. This creates a perverse incentive to err on the side of caution—and higher costs—even when clinical judgment might suggest a less aggressive, or more localized, approach.

Doctors are punished for caring too much, for spending extra minutes with a distressed patient when those minutes aren’t billable. Nurses are punished for caring too long, forced to oversee overwhelming patient loads due to understaffing. The clinical encounter, once sacred and unhurried, has been disfigured into a race against time and billing software, reducing human interaction to a series of data entries. This systemic pressure ultimately compromises the quality of care and the well-being of those dedicated to providing it.

The Missing Half of the Equation: Patient Accountability

The critique of corporate influence, however, cannot absolve the patient of their role in this crisis. A sustainable and ethical healthcare system requires a reciprocal relationship between providers and recipients of care. While the system is engineered to profit from illness, the choices of individuals can either fuel this machine or actively work against it. This introduces a critical and often uncomfortable question: where does personal responsibility fit into a system designed to treat, not prevent, disease?

The most significant financial and physical burdens on the American healthcare system are a direct result of preventable chronic conditions. The obesity epidemic, for instance, is not merely a statistic; it is a profound failure of both a profit-driven food industry and a culture that has de-emphasized personal well-being. A system that must manage the downstream effects of sedentary lifestyles, poor nutrition, and substance abuse is inherently overstretched. While the system profits from treating these conditions, individual choices contribute to the collective cost burden through higher premiums and taxes. A true reformation of healthcare must therefore be a cultural one, where individuals are empowered and incentivized to engage in self-care as a civic duty.

Preventative care is often framed as an action taken in a doctor’s office—a check-up, a screening, a vaccination. But the most impactful preventative care happens outside of the clinic. It is in the daily choices of diet, exercise, stress management, and sleep. A reformed system could and should champion this type of self-care. It would actively promote nutritional education and community wellness programs, recognizing that these are not “extras” but essential, cost-saving interventions.

“Patients bear a moral and practical responsibility for their own health through lifestyle choices. By engaging in preventative care and healthy living, they not only improve their personal well-being but also act as a crucial partner in the stewardship of finite healthcare resources. A just system of care must therefore recognize and support this partnership by making treatment accessible through means-based financial responsibility, ensuring that necessary care is never a luxury, but rather a right earned through shared commitment to health.” (From reviews of publications like the AMA Journal of Ethics, as cited by Intellicurean)

This approach would reintroduce a sense of shared responsibility, where patients are not just passive consumers but active participants in their own health journey and the health of the community. This is not about blaming the sick; it’s about building a sustainable and equitable system where every member plays a part.

A System of Contradictions: Advanced Technology, Primitive Access

American healthcare boasts unparalleled technological triumphs: robotic surgeries, groundbreaking gene therapies, AI-driven diagnostics, and personalized medicine that seemed like science fiction just a decade ago. And yet, for all its dazzling innovation, it remains the most inaccessible system among wealthy nations. This isn’t a paradox—it’s a stark, brutal contradiction rooted in profiteering, a testament to a system that prioritizes cutting-edge procedures for a few over basic access for all.

Millions remain uninsured. Even with the Affordable Care Act (ACA), approximately 26 million Americans remained uninsured in 2023, representing 8% of the population, according to the U.S. Census Bureau. Millions more endure insurance plans so riddled with exclusions, high deductibles, and narrow networks that coverage is, at best, illusory—often referred to as “junk plans.” For these individuals, a single emergency room visit can summon financial ruin.

The Commonwealth Fund’s 2024 report, “The Burden of Health Care Costs on U.S. Families,” found that nearly half of U.S. adults (49%) reported difficulty affording healthcare costs in the past year, with 29% saying they skipped or delayed care due to cost. This isn’t the failure of medical science or individual responsibility; it’s the direct consequence of policy engineered for corporate profit, where profit margins are prioritized over public health and economic stability.

“Patients being saddled with high bills, less accessible health care.” (Center for American Progress, in its September 2024 report “5 Ways Project 2025 Puts Profits Over Patients”)

The statistics are blunt, but the human toll is brutal—families delaying crucial preventative screenings, rationing life-sustaining medications, and foregoing necessary doctor visits. This forced delay or avoidance of care exacerbates chronic conditions, leads to more severe acute episodes, and ultimately drives up overall healthcare costs as untreated conditions become emergencies.

The marketplace offers these “junk” plans—low-premium, high-deductible insurance packages that cover little and confuse much. They are marketed aggressively, wrapped in patriotic packaging and built on regulatory loopholes, yet they deliver little beyond financial instability and false security. These plans disproportionately affect lower-income individuals and communities of color, who are often steered towards them as their only “affordable” option.

For instance, Black and Hispanic adults are significantly more likely to report medical debt than their White counterparts, even when insured. A 2022 study published in JAMA Network Open found that Black adults were 50% more likely to hold medical debt than White adults, and Hispanic adults were 30% more likely. This disparity reflects deeper systemic inequities, where a profit-driven system exacerbates existing racial and economic injustices.

Core public health services—mental health, maternal care, chronic disease management, and preventative care—receive paltry funding and are consistently difficult to access unless they are highly monetizable. The economic logic is ruthless: if a service doesn’t generate significant revenue, it doesn’t merit substantial corporate investment. This creates a fragmented system where crisis intervention is prioritized over holistic well-being, leading to a mental health crisis, rising maternal mortality rates (especially among Black women, who are 2.6 times more likely to die from pregnancy-related causes than White women), and uncontrolled epidemics of chronic diseases like diabetes and heart disease.

Even public institutions like the Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration (FDA), once considered bastions of scientific authority and public trust, have seen their credibility questioned. The decline isn’t a function of conspiracy or scientific incompetence—it’s the direct consequence of their proximity to, and perceived capture by, corporate interests. Pharmaceutical lobbyists heavily influence drug approval timelines and post-market surveillance. Political appointees, often with ties to industry, dilute public health messaging or prioritize economic considerations over scientific consensus. The suspicion is earned, and it undermines the very infrastructure of collective health protection.

“Forced to devote substantial time and resources to clear insurer-imposed administrative hurdles, physicians feel powerless and wholly unable to provide patients with timely access to evidence-based care.” (Dr. Jack Resneck Jr., MD, former President of the American Medical Association (AMA))

The physician’s lament crystallizes the crisis. This reflects a profound loss of professional autonomy and moral injury among those dedicated to healing. Medicine is no longer a nuanced conversation between expert and patient—it is a transaction administered by portal, by code, by pre-authorization, stripping away the human connection that is vital to true care.

The Rising Resistance: Reclaiming the Soul of Medicine

Yet even amid this profound disillusionment and systemic capture, resistance blooms. Physicians, nurses, activists, policy architects, and millions of ordinary Americans have begun to reclaim healthcare’s moral foundation. Their campaign isn’t merely legislative or economic—it’s existential, a fight for the very soul of the nation’s commitment to its people.

Grassroots organizations like Physicians for a National Health Program (PNHP) and Public Citizen are at the forefront, vigorously arguing for a publicly funded, universally accessible system. Their premise isn’t utopian but ethical and pragmatic: health is a fundamental human right, not a commodity to be bought or a reward for economic success. They point out the immense administrative waste inherent in the current multi-payer system, where billions are spent on billing, marketing, and claims processing rather than direct patient care.

A 2020 study published in the Annals of Internal Medicine estimated that U.S. administrative healthcare costs amounted to $812 billion in 2017, representing 34% of total healthcare expenditures, significantly higher than in comparable countries with universal systems. This staggering figure represents money siphoned away from nurses’ salaries, vital equipment, and preventative programs, disappearing into the bureaucratic machinery of profit.

Nursing unions have emerged as fierce and indispensable advocates for patient safety, pushing for legally mandated staffing ratios, equitable compensation, and genuinely patient-centered care. They understand that burnout isn’t an individual failure but an institutional betrayal, a direct result of corporate decisions to cut corners and maximize profits by overloading their frontline workers. Their strikes and advocacy efforts highlight the direct link between safe staffing and patient outcomes, forcing a public conversation about the true cost of “efficiency.”

“A unified system run by health care professionals—not politicians or commercial insurers—that offers universal coverage and access.” (Gilead I. Lancaster, in his 2023 book, Building a Unified American Health Care System: A Blueprint for Comprehensive Reform)

Lancaster’s blueprint provides a detailed roadmap for a system that puts medical expertise and public health at its core, stripping away the layers of financial intermediation that currently obfuscate and obstruct care.

The Medicare for All proposal, while polarizing in mainstream political discourse, continues to gain significant traction among younger voters, disillusioned professionals, and those who have personally suffered under the current system. It promises to erase premiums, eliminate deductibles and co-pays, and expand comprehensive access to all medically necessary services for every American. Predictably, it faces ferocious and well-funded opposition from the entrenched healthcare industry—an industry that spends staggering sums annually on lobbying. According to OpenSecrets, the healthcare sector (including pharmaceuticals, health services, and insurance) spent over $675 million on federal lobbying in 2024 alone, deploying an army of lobbyists to protect their vested interests and sow doubt about single-payer alternatives.

Terms like “government takeover” and “loss of choice” pollute the public discourse, weaponized by industry-funded campaigns. But what “choice” do most Americans actually possess? The “choice” between financial ruin from an unexpected illness or delaying life-saving care isn’t liberty—it’s coercion masked as autonomy, a perverse redefinition of freedom. For the millions who face medical debt, unaffordable premiums, or simply lack access to specialists, “choice” is a cruel joke.

The resistance is deeply philosophical. Reformers seek to restore medicine as a vocation—an act of trust, empathy, and collective responsibility—rather than merely a transaction. They reference global models: Canada’s single-payer system, the UK’s National Health Service, France’s universal coverage, Germany’s multi-payer but non-profit-driven system. These systems consistently offer better health outcomes, lower per-capita costs, and vastly fewer financial surprises for their citizens. For instance, the U.S. spends roughly $13,490 per person on healthcare annually, nearly double the roughly $6,800 per person that other high-income countries spend on average (according to the OECD). This stark contrast provides irrefutable evidence that the U.S. system’s astronomical cost isn’t buying better health, but rather fueling corporate profits.

The evidence is not in dispute. The question, increasingly, is whether Americans will finally demand a different social contract, one that prioritizes health and human dignity over corporate wealth.

The Path Forward: A New Social Contract

The corporate contamination of American healthcare isn’t an organic evolution; it’s engineered—through decades of deliberate policy decisions, regulatory capture, and a dominant ideology that privileged profit over people. This system was built, brick by brick, by powerful interests who saw an opportunity for immense wealth in the vulnerabilities of the sick. And systems that are built can, with collective will and sustained effort, be dismantled and rebuilt.

But dismantling isn’t demolition; it’s reconstruction—brick by ethical brick. It requires a profound reimagining of what healthcare is meant to be in a just society. Healthcare must cease to be a battleground between capital and care. It must become a sanctuary—a fundamental social commitment embedded in the national psyche, recognized as a public good, much like education or clean water. This commitment necessitates a radical reorientation of values within the system itself.

This will require bold, transformative legislation: a fundamental redesign of funding models, payment systems, and institutional accountability. This includes moving towards a single-payer financing system, robust price controls on pharmaceuticals, stringent regulations on insurance companies, and a re-evaluation of private equity’s role in essential services.

As editor of Intellicurean, I propose an innovative approach: establishing new types of “healthcare cash accounts,” specifically designated and usable only for approved forms of preventative care. These accounts could be funded directly through a combination of sources: tax credits applied on filed tax returns; a levy on “for-profit” medical system owners and operators, health insurance companies, pharmaceutical companies, and publicly held food companies; and a 0.05% tax on billionaires, among other revenue streams.

These accounts could be administered and accounted for by approved banks or fiduciary entities, ensuring transparency and appropriate use of funds. Oversight could be further provided by an independent review board composed of diverse stakeholders, including doctors, clinicians, and patient advocates, ensuring funds are directed towards evidence-based wellness initiatives rather than profit centers.

As a concrete commitment to widespread preventative health, all approved accountholders, particularly those identified with common deficiencies, could also receive, free of charge, essential, evidence-backed supplements such as Vitamin D and, where appropriate, a combination of Folic Acid and Vitamin B-12. This initiative recognizes the low cost and profound impact of these foundational nutrients on overall well-being, neurological health, and disease prevention, demonstrating a system that truly invests in keeping people healthy rather than simply treating illness.

Americans must shed the pervasive consumerist lens through which healthcare is currently viewed. Health isn’t merely a product or a service to be purchased; it’s a shared inheritance, intrinsically linked to the air we breathe, the communities we inhabit, and the equity we extend to one another. We must affirm that our individual well-being is inextricably tethered to our neighbor’s—that human dignity isn’t distributable by income bracket or insurance plan, but is inherent to every person. This means fostering a culture of collective responsibility, where preventative care for all is understood as a collective investment, and illness anywhere is recognized as a concern for everyone.

The path forward isn’t utopian; it’s political, and above all, moral. It demands courage from policymakers to resist powerful lobbies and courage from citizens to demand a system that truly serves them. Incrementalism, in the face of such profound systemic failure, has become inertia, merely postponing the inevitable reckoning. To wait is to watch the suffering deepen, the medical debt mount, and the ethical abyss widen. To act is to restore the sacred covenant between healer and healed.

The final question is not one of abstract spirituality, but of political will. The American healthcare system, with its unparalleled resources and cutting-edge innovations, has been deliberately engineered to serve corporate interests over public health. Reclaiming it will require a sustained, collective effort to dismantle the engine of profiteering and build a new social contract—one that recognizes health as a fundamental right, not a commodity.

This is a battle that will define the character of our society: whether we choose to continue to subsidize greed or to finally invest in a future where compassion and care are the true measures of our progress.

THIS ESSAY WAS WRITTEN AND EDITED BY MICHAEL CUMMINS UTILIZING AI

Organized Religion and the Quest for Autonomy

By Sue Passacantilli

Despite the rise of science and secularism, organized religion, particularly Western and Abrahamic faiths like Christianity, Judaism, and Islam, continues to exert immense influence on individuals and societies worldwide. From shaping political discourse to dictating moral codes, its reach is undeniable. But is this influence always benign?

This essay argues that organized religion, while often presented as a source of divine truth for its adherents, is fundamentally a human construct with a complex history. It has led to significant negative consequences and poses risks that demand critical examination. We’ll explore its origins as a means of social control, analyze the harm it has inflicted throughout history, assess the dangers of its unchecked power in the modern world, and finally, consider alternative paths to spiritual fulfillment that prioritize reason, compassion, and individual autonomy.


Origins as a Tool of Social Control

The earliest organized religions didn’t emerge solely from spiritual yearning; they were deeply entwined with the rise of centralized power. In ancient civilizations such as Mesopotamia, Egypt, and the Indus Valley, religious systems were meticulously crafted to reinforce political hierarchies and legitimize authority. Gods weren’t invoked as private sources of transcendence but as public affirmations of rule. Kings and pharaohs claimed divine sanction, and priesthoods became custodians of not only spiritual knowledge but also civic obedience. In these societies, religion wasn’t merely a personal belief system—it was a powerful mechanism for maintaining order and regulating behavior through divine surveillance.

Perhaps the most emblematic example is Hammurabi’s Code, inscribed in Babylon around 1754 BCE. Hammurabi declared that these laws had been bestowed upon him by Shamash, the Babylonian god of justice, thereby framing the legal code as a divine mandate rather than a human decree. The image of Hammurabi standing before Shamash, etched into the stele itself, visually elevated the law’s legitimacy by binding it to celestial authority. The Code governed issues ranging from property and trade to family and criminal justice, and its harsh penalties—like “an eye for an eye”—weren’t simply deterrents but reflections of cosmic balance. Justice was seen as divine reciprocity, and violating the law was tantamount to offending the gods themselves.

In ancient Egypt, the concept of Maat embodied truth, order, and divine equilibrium. The pharaoh, regarded as a living god, was tasked with maintaining Maat through just governance. Legal edicts issued by the pharaoh were seen as spiritual imperatives, and judges, often priests, were instructed to uphold these standards in their decisions. The vizier Rekhmire, under Thutmose III, recorded his duty to be impartial and reflect the divine wisdom of Maat in all judgments. In local settings, Kenbet councils, composed of elders and religious figures, handled minor disputes, merging communal norms with sacred oversight. Disobedience was more than a civic offense—it was a disruption of cosmic order.

These ancient legal-religious structures made law inseparable from morality and morality inseparable from religious dogma. Religion functioned as an instrument of social engineering, institutionalizing norms that were framed as sacred, thereby discouraging dissent and ensuring conformity. Obedience wasn’t just expected; it was sanctified.

Friedrich Nietzsche’s haunting question—“Is man merely a mistake of God’s? Or God merely a mistake of man?”—forces us to reconsider the origins of divine authority and whether it reflects genuine spiritual insight or simply projections of human need. Thomas Paine, echoing this skepticism in The Age of Reason, wrote that “It is from the Bible that man has learned cruelty, rapine, and murder; for the belief of a cruel God makes a cruel man.” Paine’s indictment highlights how institutionalized texts, when shielded from critique, have historically served to justify violence and suppress alternative perspectives. When religion is codified into law, it becomes more than belief—it becomes a scaffold for society, morality, and power.


Historical Harms and Conflicts

Organized religion hasn’t only shaped societies; it has scarred them. Major historical events like the Crusades and the Inquisition weren’t merely spiritual endeavors; they were deeply political and economic campaigns cloaked in religious rhetoric. The Crusades, traditionally described as holy wars to reclaim sacred territories, were motivated by a complex blend of faith, ambition, and desire for material gain. While many participants earnestly believed they were undertaking a divine mission, this conviction was often stoked by papal promises of spiritual rewards and absolution of sins.

Beneath the spiritual fervor lay strategic political goals: European monarchs and nobles viewed the Crusades as opportunities to expand their realms, assert dominance, and gain prestige. The promise of new land, wealth, and access to lucrative trade routes added powerful economic incentives. Even the Church benefited, using the movement to unify Christendom and bolster its supremacy over secular rulers. The First Crusade culminated in a gruesome massacre during the sack of Jerusalem, while the Fourth Crusade didn’t even reach the Holy Land—it ended in the plundering of Constantinople, a Christian city, starkly exposing the secular aims masked by religious zeal.

The Inquisition, particularly in its Spanish and Papal forms, offers another chilling example of institutional religion weaponizing faith for control. At its core was a profound fear of heresy—not only as a spiritual deviation but as a direct challenge to ecclesiastical and political authority. The Church saw doctrinal purity as essential for its survival, and any deviation threatened its claim to divine legitimacy. Thus, heresy became synonymous with rebellion. The Inquisition was engineered to enforce uniform belief, employing surveillance, coercion, and torture to suppress dissent. It disproportionately targeted Jews, Muslims, and Protestants—not just for theological reasons, but also to solidify national and religious identity in post-Reconquista Spain. Social engineering played a central role: religious orthodoxy became a means of homogenizing the population under Catholic rule. Monarchs found the Inquisition a useful tool for eliminating opposition, cloaking political suppression in the sanctity of faith. Public spectacles like the auto-da-fé reinforced obedience through fear, making salvation contingent on submission.

Richard Dawkins, in The God Delusion, famously described the God of the Old Testament as “a petty, unjust, unforgiving control-freak… a capriciously malevolent bully.” Though intentionally provocative, his critique draws attention to the dangers of institutionalized belief—how sacred texts and doctrines, once embedded in systems of power, can become instruments of cruelty. Similarly, Napoleon Bonaparte’s assertion that “Religion is what keeps the poor from murdering the rich” reveals a more cynical view: religion not as a moral compass, but as a societal pacifier, preserving hierarchies and muting dissent. These historical episodes illustrate that organized religion, far from being a universal balm, has often served as a catalyst for division, violence, and authoritarian control.


Why Organized Religion Endures Despite Secularism

Despite the rise of secularism and scientific rationality, organized religion has endured—and in many regions, even flourished—due to its multifaceted role in fulfilling deeply rooted human needs. While critics rightly scrutinize its historical and political abuses, religion’s resilience is partly explained by its unparalleled capacity to offer meaning, belonging, and psychological stability. In times of uncertainty or suffering, religion provides a structured worldview that assures adherents of cosmic order and moral purpose, offering comfort in the face of death, injustice, or randomness. For many, faith communities serve as crucial social safety nets—providing charity, companionship, and guidance in ways secular institutions often struggle to replicate.

For instance, throughout history, religious institutions have often been at the forefront of social welfare, establishing the first hospitals, orphanages, and schools, and continuing to operate food banks and aid organizations today. Religious rituals, holidays, and sacred texts also create a powerful sense of continuity and identity across generations, fostering not only individual solace but strong communal cohesion.

In this light, the persistence of religion can’t be attributed solely to dogma or coercion, but to its symbolic richness and emotional resonance. The challenge, then, is not merely to reject religious institutions for their excesses, but to understand the existential vacuum they often fill. Any secular alternative aspiring to replace organized religion must grapple with these fundamental human functions—offering connection, ceremony, and a shared moral language—without reverting to authoritarian or exclusionary structures.


Modern Dangers of Institutional Power

In the contemporary world, organized religion continues to wield significant influence—often in ways that challenge democratic principles and individual freedoms. Religious institutions actively lobby for legislation that aligns with their moral doctrines, particularly on issues like reproductive rights and LGBTQ+ equality.

One of the most visible examples is the role of Evangelical Christian and Catholic organizations in shaping abortion policy in the United States. Following the Supreme Court’s decision to overturn Roe v. Wade in 2022, religious lobbying intensified across multiple states. Groups such as the Alliance Defending Freedom (ADF) and the Family Research Council (FRC) have supported laws that ban or severely restrict abortion access, often without exceptions for rape, incest, or maternal health. These organizations frame abortion as a moral and religious crisis, equating it with murder and advocating for fetal personhood amendments. In states like Texas and Mississippi, religious activists have successfully lobbied for near-total bans, and in some cases, have influenced the removal of medical exceptions, leaving women with life-threatening pregnancies without legal recourse.

Similarly, religious institutions have been central to opposition against LGBTQ+ rights, particularly through legal challenges and lobbying efforts that invoke religious liberty. In the landmark case Fulton v. City of Philadelphia, Catholic Social Services argued that their refusal to place foster children with same-sex couples was protected under the First Amendment. The Supreme Court ruled in favor of the agency, setting a precedent that allows religious organizations to bypass anti-discrimination laws in certain contexts. Other cases, such as 303 Creative v. Elenis, involved Christian business owners seeking exemptions from serving LGBTQ+ clients, claiming that doing so violated their religious beliefs.

These legal victories have emboldened religious lobbying groups to push for broader “Religious Freedom Restoration Acts” (RFRAs) at the state level. While originally intended to protect minority faiths, these laws are now often used to justify discrimination against LGBTQ+ individuals in areas like healthcare, education, and employment. For example, some states allow therapists or teachers to refuse services to LGBTQ+ youth based on religious objections, even when such refusals violate institutional nondiscrimination policies.

Bertrand Russell, a staunch advocate of rational inquiry, observed, “Religion is based, I think, primarily and mainly upon fear.” He argued that religious belief often arises not from evidence or reason, but from existential anxiety and the human desire for certainty. This fear-based foundation can lead to intolerance. When religious dogma is treated as absolute truth, it leaves little room for pluralism or dissent.

George Carlin, with characteristic wit, noted, “I’m completely in favor of the separation of Church and State. These two institutions screw us up enough on their own, so both of them together is certain death.” His humor belies a serious concern: when religious institutions gain political power, the result is often authoritarianism disguised as moral governance. In some regions, religious extremism has led to terrorism and sectarian violence. The danger lies not in belief itself, but in the institutionalization of belief as unchallengeable truth.


Toward a More Liberated Spirituality

Rejecting organized religion doesn’t mean rejecting spirituality. In fact, many individuals find deeper meaning and connection outside institutional frameworks. Secular humanism, nature-based spirituality, meditation, and philosophical inquiry offer paths to transcendence that prioritize autonomy and compassion.

Deepak Chopra distinguishes between religion and spirituality: “Religion is believing someone else’s experience, spirituality is having your own experience.” This shift—from external authority to internal exploration—marks a profound evolution in how we seek meaning.

Carl Sagan, in Pale Blue Dot, wrote, “Science is not only compatible with spirituality; it is a profound source of spirituality.” For Sagan, awe and wonder arise not from dogma but from the vastness and beauty of the cosmos—a spirituality rooted in reality.

Spirituality, when divorced from rigid doctrine, becomes a deeply personal journey. It encourages introspection, empathy, and ethical living without coercion. Practices like mindfulness, journaling, and philosophical reflection allow individuals to cultivate inner peace and moral clarity without intermediaries.

As Thomas Jefferson asserted, “Question with boldness even the existence of a god.” This call to intellectual courage invites us to examine inherited beliefs and forge our own understanding of existence.

In this liberated model, spirituality becomes inclusive rather than exclusive. It welcomes doubt, celebrates diversity, and honors the complexity of human experience. It is not a system to be obeyed, but a path to be walked—one that evolves with each step.

While secular spirituality offers personal freedom and introspective depth, critics often point out that it can lack the communal bonds and time-honored rituals that organized religion provides. Traditional religious institutions have long served as hubs of social connection, shared values, and intergenerational continuity. However, this sense of belonging isn’t exclusive to religious frameworks.

Many individuals are now finding community through secular congregations like Sunday Assembly, which mimic the structure of religious gatherings—complete with music, storytelling, and shared reflection—without invoking the divine. Others turn to meditation groups, ethical societies, or nature-based retreats, where collective practice fosters connection and shared purpose. Online platforms have also become fertile ground for spiritual communities, allowing people to engage in dialogue, rituals, and support networks across geographic boundaries.

As for tradition, new rituals are emerging—rooted in seasonal cycles, personal milestones, or collective values—that offer continuity and meaning without dogma. These evolving practices reflect a desire not to abandon tradition, but to reimagine it in ways that honor authenticity and inclusivity.


Conclusion

Organized religion, with its rituals and revelations, undeniably offers comfort and community to countless individuals. Yet, our exploration has illuminated its deeply human origins, its historical complicity in profound harms, and its continued entanglement with political power. When personal belief becomes institutionalized dogma, it risks becoming rigid, coercive, and resistant to the very human flourishing it often claims to foster.

The critical examination of these structures reveals the profound importance of distinguishing between genuine personal faith and the often-oppressive grip of institutional authority. The former can uplift and guide; the latter, as history shows, frequently seeks to control and suppress.

By embracing reason, empathy, and above all, individual autonomy, we empower ourselves to forge spiritual paths untethered from external mandates. These paths honor our inherent humanity, encourage ethical living, and allow us to reach for transcendence on our own terms. As Carl Jung wisely observed, “Your vision will become clear only when you look into your heart. Who looks outside, dreams; who looks inside, awakes.”

In awakening to our own inner truths, we reclaim the sacred from the hands of hierarchy and return it to the realm of personal meaning. That, perhaps, is the most divine act of all.

*This essay was written by Sue Passacantilli and edited by Intellicurean utilizing AI.