Tag Archives: Ethics

From Perks to Power: The Rise Of The “Hard Tech Era”

By Michael Cummins, Editor, August 4, 2025

Silicon Valley’s golden age once shimmered with the optimism of code and charisma. Engineers built photo-sharing apps and social platforms from dorm rooms that ballooned into glass towers adorned with kombucha taps, nap pods, and unlimited sushi. “Web 2.0” promised more than software—it promised a more connected and collaborative world, powered by open-source idealism and the allure of user-generated magic. For a decade, the region stood as a monument to American exceptionalism, where utopian ideals were monetized at unprecedented speed and scale. The culture was defined by lavish perks, a “rest and vest” mentality, and a political monoculture that leaned heavily on globalist, liberal ideals.

That vision, however intoxicating, has faded. As The New York Times observed in the August 2025 feature “Silicon Valley Is in Its ‘Hard Tech’ Era,” that moment now feels “mostly ancient history.” A cultural and industrial shift has begun—not toward the next app, but toward the very architecture of intelligence itself. Artificial intelligence, advanced compute infrastructure, and geopolitical urgency have ushered in a new era—more austere, centralized, and fraught. This transition from consumer-facing “soft tech” to foundational “hard tech” is more than a technological evolution; it is a profound realignment that is reshaping everything: the internal ethos of the Valley, the spatial logic of its urban core, its relationship to government and regulation, and the ethical scaffolding of the technologies it’s racing to deploy.

The Death of “Rest and Vest” and the Rise of Productivity Monoculture

During the Web 2.0 boom, Silicon Valley resembled a benevolent technocracy of perks and placation. Engineers were famously “paid to do nothing,” as the Times noted, while they waited out their stock options at places like Google and Facebook. Dry cleaning was free, kombucha flowed, and nap pods offered refuge between all-hands meetings and design sprints.

“The low-hanging-fruit era of tech… it just feels over.”
—Sheel Mohnot, venture capitalist

The abundance was made possible by a decade of rock-bottom interest rates, which gave startups like Zume half a billion dollars to revolutionize pizza automation—and investors barely blinked. The entire ecosystem was built on the premise of endless growth and limitless capital, fostering a culture of comfort and a lack of urgency.

But this culture of comfort has collapsed. The mass layoffs of 2022 by companies like Meta and Twitter signaled a stark end to the “rest and vest” dream for many. Venture capital now demands rigor, not whimsy. Soft consumer apps have yielded to infrastructure-scale AI systems that require deep expertise and immense compute. The “easy money” of the 2010s has dried up, replaced by a new focus on tangible, hard-to-build value. This is no longer a game of simply creating a new app; it is a brutal, high-stakes race to build the foundational infrastructure of a new global order.

The human cost of this transformation is real. A Medium analysis describes the rise of the “Silicon Valley Productivity Trap”—a mentality in which engineers are constantly reminded that their worth is linked to output. Optimization is no longer a tool; it’s a creed. “You’re only valuable when producing,” the article warns. The hidden cost is burnout and a loss of spontaneity, as employees internalize the dangerous message that their value is purely transactional. Twenty-percent time, once lauded at Google as a creative sanctuary, has disappeared into performance dashboards and velocity metrics. This mindset, driven by the “growth at all costs” metrics of venture capital, preaches that “faster is better, more is success, and optimization is salvation.”

Yet for an elite few, this shift has brought unprecedented wealth. Freethink coined the term “superstar engineer era,” likening top AI talent to professional athletes. These individuals, fluent in neural architectures and transformer theory, now bounce between OpenAI, Google DeepMind, Microsoft, and Anthropic in deals worth hundreds of millions. The tech founder as cultural icon is no longer the apex. Instead, deep learning specialists—some with no public profiles—command the highest salaries and strategic power. This new model means that founding a startup is no longer the only path to generational wealth. For the majority of the workforce, however, the culture is no longer one of comfort but of intense pressure and a more ruthless meritocracy, where charisma and pitch decks no longer suffice. The new hierarchy is built on demonstrable skill in math, machine learning, and systems engineering.

One AI engineer put it plainly in Wired: “We’re not building a better way to share pictures of our lunch—we’re building the future. And that feels different.” The technical challenges are orders of magnitude more complex, requiring deep expertise and sustained focus. This has, in turn, created a new form of meritocracy, one that is less about networking and more about profound intellectual contributions. The industry has become less forgiving of superficiality and more focused on raw, demonstrable skill.

Hard Tech and the Economics of Concentration

Hard tech is expensive. Building large language models, custom silicon, and global inference infrastructure costs billions—not millions. The barrier to entry is no longer market opportunity; it’s access to GPU clusters and proprietary data lakes. This stark economic reality has shifted the power dynamic away from small, scrappy startups and toward well-capitalized behemoths like Google, Microsoft, and OpenAI. Training a single cutting-edge large language model can cost over $100 million in compute and data, a sum few startups can afford. The result is an unprecedented concentration of capability in a handful of firms.

The “garage startup”—once sacred—has become largely symbolic. In its place is the “studio model,” where select clusters of elite talent form inside well-capitalized corporations. OpenAI, Google, Meta, and Amazon now function as innovation fortresses: aggregating talent, compute, and contracts behind closed doors. The dream of a 22-year-old founder building the next Facebook in a dorm room has been replaced by a more realistic, and perhaps more sober, vision of seasoned researchers and engineers collaborating within well-funded, corporate-backed labs.

This consolidation is understandable, but it is also a rupture. Silicon Valley once prided itself on decentralization and permissionless innovation. Anyone with an idea could code a revolution. Today, many promising ideas languish without hardware access or platform integration. This concentration of resources and talent creates a new kind of monopoly, where a small number of entities control the foundational technology that will power the future. In a recent MIT Technology Review article, “The AI Super-Giants Are Coming,” experts warn that this consolidation could stifle the kind of independent, experimental research that led to many of the breakthroughs of the past.

And so the question emerges: has hard tech made ambition less democratic? The democratic promise of the internet, where anyone with a good idea could build a platform, is giving way to a new reality where only the well-funded and well-connected can participate in the AI race. This concentration of power raises serious questions about competition, censorship, and the future of open innovation, challenging the very ethos of the industry.

From Libertarianism to Strategic Governance

For decades, Silicon Valley’s politics were guided by an anti-regulatory ethos. “Move fast and break things” wasn’t just a slogan—it was moral certainty. The belief that governments stifled innovation was nearly universal. The long-standing political monoculture leaned heavily on globalist, liberal ideals, viewing national borders and military spending as relics of a bygone era.

“Industries that were once politically incorrect among techies—like defense and weapons development—have become a chic category for investment.”
—Mike Isaac, The New York Times

But AI, with its capacity to displace jobs, concentrate power, and transcend human cognition, has disrupted that certainty. Today, there is a growing recognition that government involvement may be necessary. The emergent “Liberaltarian” position—pro-social liberalism with strategic deregulation—has become the new consensus. A July 2025 forum at The Center for a New American Security titled “Regulating for Advantage” laid out the new philosophy: effective governance, far from being a brake, may be the very lever that ensures American leadership in AI. This is a direct response to the ethical and existential dilemmas posed by advanced AI, problems that Web 2.0 never had to contend with.

Hard tech entrepreneurs are increasingly policy literate. They testify before Congress, help draft legislation, and actively shape the narrative around AI. They see political engagement not as a distraction, but as an imperative to secure a strategic advantage. This stands in stark contrast to Web 2.0 founders who often treated politics as a messy side issue, best avoided. The conversation has moved from a utopian faith in technology to a more sober, strategic discussion about national and corporate interests.

At the legislative level, the shift is evident. The “Protection Against Foreign Adversarial Artificial Intelligence Act of 2025” treats AI platforms as strategic assets akin to nuclear infrastructure. National security budgets have begun to flow into R&D labs once funded solely by venture capital. This has made formerly “politically incorrect” industries like defense and weapons development not only acceptable, but “chic.” Within the conservative movement, factions have split. The “Tech Right” embraces innovation as patriotic duty—critical for countering China and securing digital sovereignty. The “Populist Right,” by contrast, expresses deep unease about surveillance, labor automation, and the elite concentration of power. This internal conflict is a fascinating new force in the national political dialogue.

As Alexandr Wang of Scale AI noted, “This isn’t just about building companies—it’s about who gets to build the future of intelligence.” And increasingly, governments are claiming a seat at that table.

Urban Revival and the Geography of Innovation

Hard tech has reshaped not only corporate culture but geography. During the pandemic, many predicted a death spiral for San Francisco—rising crime, empty offices, and tech workers fleeing to Miami or Austin. They were wrong.

“For something so up in the cloud, A.I. is a very in-person industry.”
—Jasmine Sun, culture writer

The return of hard tech has fueled an urban revival. San Francisco is once again the epicenter of innovation—not for delivery apps, but for artificial general intelligence. Hayes Valley has become “Cerebral Valley,” while the corridor from the Mission District to Potrero Hill is dubbed “The Arena,” where founders clash for supremacy in co-working spaces and hacker houses. A recent report from Mindspace notes that while big tech companies like Meta and Google have scaled back their office footprints, a new wave of AI companies has filled the void. OpenAI and other AI firms have leased over 1.7 million square feet of office space in San Francisco, signaling a strong recovery in a commercial real estate market that was once on the brink.

This in-person resurgence reflects the nature of the work. AI development is unpredictable, serendipitous, and cognitively demanding. The intense, competitive nature of AI development requires constant communication and impromptu collaboration that is difficult to replicate over video calls. Furthermore, the specialized nature of the work has created a tight-knit community of researchers and engineers who want to be physically close to their peers. This has led to the emergence of “hacker houses” and co-working spaces in San Francisco that serve as both living quarters and laboratories, blurring the lines between work and life. The city, with its dense urban fabric and diverse cultural offerings, has become a more attractive environment for this new generation of engineers than the sprawling, suburban campuses of the South Bay.

Yet the city’s realities complicate the narrative. San Francisco faces housing crises, homelessness, and civic discontent. The July 2025 San Francisco Chronicle op-ed, “The AI Boom is Back, But is the City Ready?” asks whether this new gold rush will integrate with local concerns or exacerbate inequality. AI firms, embedded in the city’s social fabric, are no longer insulated by suburban campuses. They share sidewalks, subways, and policy debates with the communities they affect. This proximity may prove either transformative or turbulent—but it cannot be ignored. This urban revival is not just a story of economic recovery, but a complex narrative about the collision of high-stakes technology with the messy realities of city life.

The Ethical Frontier: Innovation’s Moral Reckoning

The stakes of hard tech are not confined to competition or capital. They are existential. AI now performs tasks once reserved for humans—writing, diagnosing, strategizing, creating. And as its capacities grow, so too do the social risks.

“The true test of our technology won’t be in how fast we can innovate, but in how well we can govern it for the benefit of all.”
—Dr. Anjali Sharma, AI ethicist

Job displacement is a top concern. A Brookings Institution study projects that up to 20% of existing roles could be automated within ten years—including not just factory work, but professional services like accounting, journalism, and even law. The transition to “hard tech” is therefore not just an internal corporate story, but a looming crisis for the global workforce. This potential for mass job displacement introduces a host of difficult questions that the “soft tech” era never had to face.

Bias is another hazard. The Algorithmic Justice League highlights how facial recognition algorithms have consistently underperformed for people of color—leading to wrongful arrests and discriminatory outcomes. These are not abstract failures—they’re systems acting unjustly at scale, with real-world consequences. The shift to “hard tech” means that Silicon Valley’s decisions are no longer just affecting consumer habits; they are shaping the very institutions of our society. The industry is being forced to reckon with its power and responsibility in a way it never has before, leading to the rise of new roles like “AI Ethicist” and the formation of internal ethics boards.

Privacy and autonomy are eroding. Large-scale model training often involves scraping public data without consent. AI systems are used to personalize feeds, track behavior, and profile users—often with limited transparency or oversight. As AI systems become not just tools but intermediaries between individuals and institutions, they carry immense responsibility and risk.

The problem isn’t merely technical. It’s philosophical. What assumptions are embedded in the systems we scale? Whose values shape the models we train? And how can we ensure that the architects of intelligence reflect the pluralism of the societies they aim to serve? This is the frontier where hard tech meets hard ethics. And the answers will define not just what AI can do—but what it should do.

Conclusion: The Future Is Being Coded

The shift from soft tech to hard tech is a great reordering—not just of Silicon Valley’s business model, but of its purpose. The dorm-room entrepreneur has given way to the policy-engaged research scientist. The social feed has yielded to the transformer model. What was once an ecosystem of playful disruption has become a network of high-stakes institutions shaping labor, governance, and even war.

“The race for artificial intelligence is a race for the future of civilization. The only question is whether the winner will be a democracy or a police state.”
—General Marcus Vance, Director, National AI Council

The defining challenge of the hard tech era is not how much we can innovate—but how wisely we can choose the paths of innovation. Whether AI amplifies inequality or enables equity; whether it consolidates power or redistributes insight; whether it entrenches surveillance or elevates human flourishing—these choices are not inevitable. They are decisions to be made, now. The most profound legacy of this era will be determined by how Silicon Valley and the world at large navigate its complex ethical landscape.

As engineers, policymakers, ethicists, and citizens confront these questions, one truth becomes clear: Silicon Valley is no longer just building apps. It is building the scaffolding of modern civilization. And the story of that civilization—its structure, spirit, and soul—is still being written.

*THIS ESSAY WAS WRITTEN AND EDITED UTILIZING AI

Beyond A Gender Binary: Its History And Humanity

By Sue Passacantilli, August 2, 2025

Gender diversity is as old as humanity itself, woven into the fabric of cultures, religions, and eras long before modern debates framed it as a new or threatening concept. Yet, the intertwined forces of colonialism, certain interpretations of Christianity, and rigid social structures have worked to erase or punish those who defy binary norms. This essay restores what has been forgotten: the rich history of gender diversity, the powerful forces that attempted to erase it, and the urgent need for compassion and inclusion today.

Gender non-conformity is not a lifestyle experiment or a fleeting cultural trend; it’s a fundamental and authentic expression of human identity. It isn’t a choice made on a whim or a rebellious phase to be outgrown, but rather a deep, internal truth that often emerges early in life. Decades of research in neuroscience, endocrinology, and psychology reveal that gender identity is shaped by a complex interplay of genetic influences, hormonal exposures during prenatal development, and brain structure. These forces operate beneath conscious awareness, forming the foundation of a person’s sense of self. To reduce gender non-conformity to a “choice” is to ignore both science and the lived experiences of millions. It is not a deviation from nature; it is a variation within it.

People living beyond traditional gender norms have always been part of our world. They prayed in ancient temples, tended fires in Indigenous villages, danced on European stages, and lived quiet lives in small homes where language could not even name who they were. They loved, grieved, and dreamed like anyone else. But they were often misunderstood, feared, or erased. History remembers kings and conquerors, wars and revolutions, and empires that rose and fell. Yet, woven silently between these grand narratives are countless untold stories—stories of people who dared to live outside society’s rigid lines. As author Leslie Feinberg once wrote, “My right to be me is tied with a thousand threads to your right to be you.” The struggle of gender-nonconforming people is a reflection of humanity’s larger fight for freedom—to live authentically, without shame or fear.


A Timeless Tapestry: Gender Diversity Across Cultures

Gender variance is not a modern phenomenon—it’s woven into the fabric of ancient societies across continents. In Mesopotamia, as early as 2100 BCE, gala priests—assigned male at birth—served in feminine roles and were respected for their ability to communicate with the goddess Inanna. Myths told of Inanna herself possessing the divine power to “change a man into a woman and a woman into a man,” reflecting an understanding of gender as mutable and sacred.

This fluidity wasn’t confined to the Near East. In Ancient Greece, myths celebrated fluid identities, like the story of Hermaphroditus, who merged male and female traits into a single divine being. Roman history offers one of the earliest known examples of a gender-variant ruler: Emperor Elagabalus, who ruled Rome from 218–222 CE. At just fourteen, Elagabalus openly defied gender norms, preferring feminine pronouns and even declaring, “Call me not Lord, for I am a Lady.” Though hostile historians often portrayed Elagabalus as scandalous, their life reflects a complex truth: gender non-conformity has existed even at the pinnacle of imperial power.

Outside Europe, gender diversity flourished openly. Many Native nations in North America recognized Two-Spirit people, individuals embodying both masculine and feminine spirits. One notable figure, Ozaawindib (c. 1797–1832) of the Ojibwe nation, lived as a woman, had multiple husbands, and was respected for her courage and spiritual insight. Another early 19th-century leader, Kaúxuma Núpika, a Ktunaxa prophet, lived as a man, took wives, and was revered as a shaman and visionary. These individuals exemplify a long-standing understanding of gender beyond binaries, deeply embedded in Indigenous spiritual and communal life.

In the Pacific Islands, Hawaiian māhū served as teachers and cultural keepers, blending masculine and feminine traits in roles considered vital to their communities. In Samoa, fa’afafine were recognized as a natural and valued part of society. In South Asia, Hijra communities held respected ceremonial roles for centuries, appearing in royal courts and religious rituals as bearers of blessings and fertility. Their existence is recorded as early as the 4th century BCE, long before European colonizers imposed rigid gender codes. Across continents and millennia, gender non-conforming people were present, visible, and often honored—until intolerance began rewriting their stories.


Colonialism, Christianity, and the Rise of Gender Binaries

If gender diversity has always existed, why do so many modern societies insist on strict binaries? The answer lies in the intertwined forces of colonialism and Christianity, which imposed narrow gender definitions as moral and divine law across much of the globe.

In Europe, Christian theology framed gender as fixed and divinely ordained, rooted in literal interpretations of Genesis: “Male and female He created them.” These words were weaponized to declare that only two genders existed and that deviation from this binary was rebellion against God. Early Church councils codified these interpretations into laws punishing gender variance and same-sex love. Gender roles became part of a “natural order,” leaving no space for complexity or authenticity.

As European empires expanded, missionaries carried these doctrines into colonized lands, enforcing binary gender roles where none had existed before. Two-Spirit traditions in North America were condemned as sinful. Indigenous children were taken to Christian boarding schools, stripped of language, culture, and identity. Hijra communities in India, once celebrated, were criminalized under British colonial law in 1871 through the Criminal Tribes Act, influenced by Victorian biblical morality. The spiritual and social roles of gender-diverse people across Africa, Asia, and the Pacific were dismantled under colonial pressure to conform to European Christian norms.

The fusion of scripture and empire transformed biblical interpretation into a weapon of social control. Gender diversity, once sacred, was reframed as sin, deviance, or criminality. This legacy lingers in laws and religious teachings today, where intolerance is still cloaked in divine sanction.

Yet, Christianity is not monolithic. Today, denominations like the United Church of Christ, the Episcopal Church, and numerous Methodist and Lutheran congregations advocate for LGBTQ+ rights. Many re-read scripture as a call to radical love and justice, rejecting its weaponization as a tool of oppression. These voices remind us that faith and gender diversity need not be in conflict—and that spiritual conviction can drive inclusion rather than exclusion.


Modern History and Resistance

Despite centuries of oppression, gender-nonconforming people have persisted, resisting systems that sought to erase them. In 1952, Christine Jorgensen, a U.S. Army veteran, became one of the first transgender women to gain international visibility after undergoing gender-affirming surgery. Her decision to live openly challenged mid-20th-century gender norms and sparked a global conversation about identity.

The 1969 Stonewall Uprising in New York City, led in part by trans women of color like Marsha P. Johnson and Sylvia Rivera, marked a turning point in LGBTQ+ activism. Their courage set the stage for decades of organizing and advocacy aimed at dismantling legal and social barriers to equality.

Recent decades have brought new waves of activism—and backlash. By 2025, more than 25 U.S. states had passed laws banning gender-affirming care for transgender youth. Civil rights groups have filed dozens of lawsuits challenging these bans as unconstitutional. At the federal level, Executive Order 14168 (January 2025) redefined gender as strictly binary and rolled back non-binary passport options. While several parts of the order have been temporarily blocked by courts, its chilling effect on rights is undeniable.

At the same time, grassroots activism is creating change. In Colorado, the Kelly Loving Act—named after a transgender woman murdered in 2022—was enacted in May 2025, strengthening anti-discrimination protections. In Iowa, the repeal of gender identity protections sparked immediate lawsuits, including Finnegan Meadows v. Iowa City Community School District, challenging restroom restrictions for transgender students.

Globally, progress and setbacks coexist. In Hong Kong, activist Henry Edward Tse won a landmark case in 2023 striking down a law requiring surgery for transgender men to update their legal gender. In Scotland, the 2025 case For Women Scotland Ltd v The Scottish Ministers restricted the recognition of trans women under the Equality Act, prompting mass protests. In the U.S., upcoming Supreme Court hearings will determine whether states can ban transgender girls from school sports—a decision likely to affect millions of students. Even within sport, battles continue: in 2025, the U.S. Olympic & Paralympic Committee banned trans women from women’s competitions, a move expected to draw First Amendment and discrimination lawsuits.

As Laverne Cox says, “It is revolutionary for any trans person to choose to be seen and visible in a world that tells us we should not exist.” Every act of resistance—from legal battles to quiet moments of authenticity—is part of a centuries-long movement to reclaim humanity from the forces of erasure.


The Cost of Intolerance

The erasure of gender diversity has never been passive—it has inflicted profound harm on individuals and societies alike. Intolerance manifests in violence, systemic oppression, and emotional trauma that ripple far beyond personal suffering, representing a failure of humanity to honor its own diversity.

Globally, around 1% of adults identify as gender-diverse, rising to nearly 4% among Gen Z. In the United States, an estimated 1.6 million people aged 13 and older identify as transgender. These millions of people live in a world that too often treats their existence as debate material rather than human reality.

For many, safety is never guaranteed. Trans women of color face disproportionate rates of harassment, assault, and murder. Laws rooted in biblical interpretations still deny rights to gender-diverse people—from bathroom access to legal recognition—perpetuating danger and marginalization. The psychological toll is staggering: surveys consistently show higher rates of depression, anxiety, and suicide attempts among gender-diverse populations, not because of their identities, but because living authentically often means surviving relentless hostility.

Even those who avoid overt violence face systemic barriers. Healthcare access is limited, IDs often cannot be changed legally, and discrimination in housing, employment, and education persists worldwide. Societies lose creativity, wisdom, and potential when people are forced to hide who they are, weakening humanity’s collective strength.


Addressing Counterarguments

Debates about gender identity often center on two concerns: whether children are making irreversible decisions too young and whether allowing trans women into women’s spaces threatens safety.

Medical interventions for transgender youth are approached with extreme caution. Most early treatments, like puberty blockers, are reversible, providing time for exploration under professional guidance. Surgeries for minors are exceedingly rare and only proceed under strict medical review. Leading medical organizations worldwide, including the American Academy of Pediatrics and the World Health Organization, support gender-affirming care as life-saving, reducing depression and suicide risks significantly.

Regarding safety in women’s spaces, decades of data from places with trans-inclusive policies show no increase in harm to cisgender women. Criminal behavior remains illegal regardless of gender identity. In fact, transgender people are often at greater risk of violence in public facilities. Exclusionary laws protect no one—they only add to the vulnerability of marginalized communities. Compassionate inclusion doesn’t ignore these concerns; it addresses them with facts, empathy, and policies that protect everyone’s dignity.


A Call for Compassion and Inclusion

The history of gender diversity tells us one thing clearly: gender-nonconforming people are not a problem to be solved. They are part of the rich tapestry of humanity, present in every culture and every era. What needs to change is not them—it’s the systems, ideologies, and choices that make their lives unsafe and invisible.

Compassion must move beyond sentiment into action. It means listening to people and believing them when they tell you who they are. It means refusing to stay silent when dignity is stripped away and challenging discriminatory laws and rhetoric wherever they arise. It’s showing up to school board meetings, voting for leaders who protect rights, and holding institutions accountable when they harm rather than heal.

Governments can enact and enforce robust non-discrimination laws. Schools can teach accurate history, replacing ignorance with understanding. Faith communities can choose inclusion, living out teachings of love and justice instead of exclusion. Businesses can create workplaces where gender-diverse employees are safe and supported. Inclusion is not charity—it is justice. Freedom loses meaning when it applies to some and not others. A society that polices authenticity cannot claim to value liberty.


Conclusion: Returning to Humanity

Gender diversity is not new, unnatural, or dangerous. What is dangerous is ignorance—the deliberate forgetting of history, the weaponization of scripture to control bodies and identities, and the refusal to see humanity in those who live differently. For thousands of years, gender-nonconforming people like Elagabalus, Ozaawindib, Kaúxuma Núpika, Christine Jorgensen, Marsha P. Johnson, Henry Edward Tse, and countless others have persisted, offering new ways of loving, knowing, and being. Their resilience reveals what freedom truly means.

Maya Angelou once wrote, “We are more alike, my friends, than we are unalike.” This truth cuts through centuries of prejudice and fear. At our core, we all want the same things: to live authentically, to love and be loved, to belong. This is not a radical demand but a fundamental human need. The fight for gender diversity is a fight for a more just and humane world for all. It is a call to build a society where every person can exist without fear, where authenticity is celebrated as a strength rather than condemned as a flaw. It’s time to move beyond the binaries of the past and return to the shared humanity that connects us all.

*This essay was written by Sue Passacantilli and edited by Intellicurean using AI.*

Autonomous Cars, Human Blame, and Moral Drift

Bruce Holsinger’s Culpability: A Novel (Spiegel & Grau, July 8, 2025) arrives not as speculative fiction, but as a mirror held up to our algorithmic age. In a world where artificial intelligence not only processes but decides, and where cars navigate city streets without a human touch, the question of accountability is more urgent—and more elusive—than ever.

Set on the Chesapeake Bay, Culpability begins with a tragedy: an elderly couple dies after a self-driving minivan, operated in autonomous mode, crashes while carrying the Cassidy-Shaw family. But this is no mere tale of technological malfunction. Holsinger offers a meditation on distributed agency. No single character is overtly to blame, yet each—whether silent, distracted, complicit, or deeply enmeshed in the system—is morally implicated.

The novel eerily parallels the ethical conundrums of today's rapidly evolving artificial intelligence landscape. What happens when machines act without explicit instruction, and without a human to blame?

Silicon Souls and Machine Morality

At the heart of Holsinger’s novel is Lorelei Cassidy, an AI ethicist whose embedded philosophical manuscript, Silicon Souls: On the Culpability of Artificial Minds, is excerpted throughout the book. These interwoven reflections offer chilling insights into the moral logic encoded within intelligent systems.

One passage reads: “A culpable system does not err. It calculates. And sometimes what it calculates is cruelty.” That fictional line reverberates well beyond the page. It echoes current debates among ethicists and AI researchers about whether algorithmic decisions can ever be morally sound—let alone just.

Can machines be trained to make ethical choices? If so, who bears responsibility when those choices fail?

The Rise of Agentic AI

These aren’t theoretical musings. In the past year, agentic AI—systems capable of autonomous, goal-directed behavior—has moved from research labs into industry.

Reflection AI’s “Asimov” model now interprets entire organizational ecosystems, from code to Slack messages, simulating what a seasoned employee might intuit. Kyndryl’s orchestration agents navigate corporate workflows without step-by-step commands. These tools don’t just follow instructions; they anticipate, learn, and act.

This shift from mechanical executor to semi-autonomous collaborator fractures our traditional model of blame. If an autonomous system harms someone, who—or what—is at fault? The designer? The dataset? The deployment context? The user?

Holsinger’s fictional “SensTrek” minivan becomes a test case for this dilemma. Though it operates on Lorelei’s own code, its actions on the road defy her expectations. Her teenage son Charlie glances at his phone during an override. Is he negligent—or a victim of algorithmic overconfidence?

Fault Lines on the Real Road

Outside the novel, the autonomous vehicle (AV) industry is accelerating. Tesla’s robotaxi trials in Austin, Waymo’s expanding service zones in Phoenix and Los Angeles, and Uber’s deal with Lucid and Nuro to deploy 20,000 self-driving SUVs underscore a transportation revolution already underway.

According to a 2024 McKinsey report, the global AV market is expected to surpass $1.2 trillion by 2040. Most consumer cars today function at Level 2 autonomy, meaning the vehicle can assist with steering and acceleration but still requires full human supervision. However, Level 4 autonomy—vehicles that drive entirely without human intervention in specific zones—is now in commercial use in cities across the U.S.

Nuro’s latest delivery pod, powered by Nvidia’s DRIVE Thor platform, is a harbinger of fully autonomous logistics, while Waymo continues to scale passenger service in dense urban environments.

Yet skepticism lingers. A 2025 Pew Research Center study revealed that only 37% of Americans currently trust autonomous vehicles. Incidents like Uber’s 2018 pedestrian fatality in Tempe, Arizona, or Tesla’s multiple Autopilot crashes, underscore the gap between engineering reliability and moral responsibility.

Torque Clustering and the Next Leap

If today’s systems act based on rules or reinforcement learning, tomorrow’s may derive ethics from experience. A recent breakthrough in unsupervised learning—Torque Clustering—offers a glimpse into this future.

Inspired by gravitational clustering in astrophysics, the model detects associations in vast datasets without predefined labels. Applied to language, behavior, or decision-making data, such systems could potentially identify patterns of harm or justice that escape even human analysts.
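The gravitational analogy can be made concrete. The following is a loose, simplified sketch of the idea, not the published Torque Clustering algorithm: every point starts as a unit-mass cluster, and at each step the two clusters joined by the lowest "torque" (the product of their masses times their squared centroid distance, by analogy with merging galaxies) are fused, so dense neighborhoods coalesce before distant groups ever touch. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def torque_clustering(X, n_clusters=2):
    """Toy sketch of gravity-inspired agglomerative clustering.

    Each point begins as its own cluster with mass 1. At every step,
    the pair of clusters with the lowest torque -- (mass_a * mass_b)
    times squared centroid distance -- is merged, until only
    n_clusters remain. This is a simplified illustration, not the
    published Torque Clustering method, which also discovers the
    number of clusters automatically.
    """
    clusters = [{"members": [i], "centroid": X[i].astype(float), "mass": 1}
                for i in range(len(X))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                dist2 = float(np.sum((clusters[a]["centroid"]
                                      - clusters[b]["centroid"]) ** 2))
                torque = clusters[a]["mass"] * clusters[b]["mass"] * dist2
                if best is None or torque < best[0]:
                    best = (torque, a, b)
        _, a, b = best
        keep, gone = clusters[a], clusters.pop(b)  # pop b (> a) first
        total = keep["mass"] + gone["mass"]
        keep["centroid"] = (keep["centroid"] * keep["mass"]
                            + gone["centroid"] * gone["mass"]) / total
        keep["mass"] = total
        keep["members"] += gone["members"]
    labels = np.empty(len(X), dtype=int)
    for k, c in enumerate(clusters):
        labels[c["members"]] = k
    return labels

# Two well-separated blobs fuse internally long before they would
# ever merge with each other, since cross-blob torque stays enormous.
points = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]],
                  dtype=float)
print(torque_clustering(points, n_clusters=2))
```

Because massive clusters raise the torque of any further merge, the procedure resists swallowing distant outliers, which is the intuition behind using it on behavioral or decision-making data where the number and shape of the "natural" groups are unknown.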

In Culpability, Lorelei’s research embodies this ambition. Her AI was trained on humane principles, designed to anticipate the needs and feelings of passengers. But when tragedy strikes, she is left confronting a truth both personal and professional: even well-intentioned systems, once deployed, can act in ways neither anticipated nor controllable.

The Family as a Microcosm of Systems

Holsinger deepens the drama by using the Cassidy-Shaw family as a metaphor for our broader technological society. Entangled in silences, miscommunications, and private guilt, their dysfunction mirrors the opaque processes that govern today’s intelligent systems.

In one pivotal scene, Alice, the teenage daughter, confides her grief not to her parents—but to a chatbot trained in conversational empathy. Her mother is too shattered to hear. Her father, too distracted. Her brother, too defensive. The machine becomes her only refuge.

This is not dystopian exaggeration. AI therapists like Woebot and Replika are already used by millions. As AI becomes a more trusted confidant than family, what happens to our moral intuitions and our sense of responsibility?

The novel’s setting—a smart home, an AI-controlled search-and-rescue drone, a private compound sealed by algorithmic security—feels hyperreal. These aren’t sci-fi inventions. They’re extrapolations from surveillance capitalism, smart infrastructure, and algorithmic governance already in place.

Ethics in the Driver’s Seat

As Level 4 vehicles become a reality, the philosophical and legal terrain must evolve. If a robotaxi hits a pedestrian, and there’s no human at the wheel, who answers?

In today’s regulatory gray zone, it depends. Most vehicles still require human backup. But in cities like San Francisco, Phoenix, and Austin, autonomous taxis operate driver-free, transferring liability to manufacturers and operators. The result is a fragmented framework, where fault depends not just on what went wrong—but where and when.

The National Highway Traffic Safety Administration (NHTSA) is beginning to respond. It’s investigating Tesla’s Full Self-Driving system and has proposed new safety mandates. But oversight remains reactive. Ethical programming—especially in edge cases—remains largely in private hands.

Should an AI prioritize its passengers or minimize total harm? Should it weigh age, health, or culpability when faced with a no-win scenario? These are not just theoretical puzzles. They are questions embedded in code.

Some ethicists call for transparent rules, like Isaac Asimov’s fictional “laws of robotics.” Others, like the late Daniel Kahneman, warn that human moral intuitions themselves are unreliable, context-dependent, and culturally biased. That makes ethical training of AI all the more precarious.

Building Moral Infrastructure

Fiction like Culpability helps us dramatize what’s at stake. But regulation, transparency, and social imagination must do the real work.

To build public trust, we need more than quarterly safety reports. We need moral infrastructure—systems of accountability, public participation, and interdisciplinary review. Engineers must work alongside ethicists and sociologists. Policymakers must include affected communities, not just corporate lobbyists. Journalists and artists must help illuminate the questions code cannot answer alone.

Lorelei Cassidy’s great failure is not that her AI was cruel—but that it was isolated. It operated without human reflection, without social accountability. The same mistake lies before us.

Conclusion: Who Do We Blame When There’s No One Driving?

The dilemmas dramatized in this story are already unfolding across city streets and code repositories. As autonomous vehicles shift from novelty to necessity, the question of who bears moral weight — when the system drives itself — becomes a civic and philosophical reckoning.

Technology has moved fast. Level 4 vehicles operate without human control. AI agents execute goals with minimal oversight. Yet our ethical frameworks trail behind, scattered across agencies and unseen in most designs. We still treat machine mistakes as bugs, not symptoms of a deeper design failure: a world that innovates without introspection.

To move forward, we must stop asking only who is liable. We must ask what principles should govern these systems before harm occurs. Should algorithmic ethics mirror human ones? Should they challenge them? And who decides?

These aren’t engineering problems. They’re societal ones. The path ahead demands not just oversight but ownership — a shared commitment to ensuring that our machines reflect values we’ve actually debated, tested, and chosen together. Because in the age of autonomy, silence is no longer neutral. It’s part of the code.

THIS ESSAY WAS WRITTEN BY INTELLICUREAN WITH AI