
Bruce Holsinger’s Culpability: A Novel (Spiegel & Grau, July 8, 2025) arrives not as speculative fiction, but as a mirror held up to our algorithmic age. In a world where artificial intelligence not only processes but decides, and where cars navigate city streets without a human touch, the question of accountability is more urgent—and more elusive—than ever.
Set on the Chesapeake Bay, Culpability begins with a tragedy: an elderly couple dies after a self-driving minivan, operated in autonomous mode, crashes while carrying the Cassidy-Shaw family. But this is no mere tale of technological malfunction. Holsinger offers a meditation on distributed agency. No single character is overtly to blame, yet each—whether silent, distracted, complicit, or deeply enmeshed in the system—is morally implicated.
The fictional scenario eerily parallels the ethical conundrums of today’s rapidly evolving artificial intelligence landscape. What happens when machines act without explicit instruction—and without a human to blame?
Silicon Souls and Machine Morality
At the heart of Holsinger’s novel is Lorelei Cassidy, an AI ethicist whose embedded philosophical manuscript, Silicon Souls: On the Culpability of Artificial Minds, is excerpted throughout the book. These interwoven reflections offer chilling insights into the moral logic encoded within intelligent systems.
One passage reads: “A culpable system does not err. It calculates. And sometimes what it calculates is cruelty.” That fictional line reverberates well beyond the page. It echoes current debates among ethicists and AI researchers about whether algorithmic decisions can ever be morally sound—let alone just.
Can machines be trained to make ethical choices? If so, who bears responsibility when those choices fail?
The Rise of Agentic AI
These aren’t theoretical musings. In the past year, agentic AI—systems capable of autonomous, goal-directed behavior—has moved from research labs into industry.
Reflection AI’s “Asimov” model now interprets entire organizational ecosystems, from code to Slack messages, simulating what a seasoned employee might intuit. Kyndryl’s orchestration agents navigate corporate workflows without step-by-step commands. These tools don’t just follow instructions; they anticipate, learn, and act.
This shift from mechanical executor to semi-autonomous collaborator fractures our traditional model of blame. If an autonomous system harms someone, who—or what—is at fault? The designer? The dataset? The deployment context? The user?
Holsinger’s fictional “SensTrek” minivan becomes a test case for this dilemma. Though it operates on Lorelei’s own code, its actions on the road defy her expectations. Her teenage son Charlie glances at his phone during an override. Is he negligent—or a victim of algorithmic overconfidence?
Fault Lines on the Real Road
Outside the novel, the autonomous vehicle (AV) industry is accelerating. Tesla’s robotaxi trials in Austin, Waymo’s expanding service zones in Phoenix and Los Angeles, and Uber’s deal with Lucid and Nuro to deploy 20,000 self-driving SUVs underscore a transportation revolution already underway.
According to a 2024 McKinsey report, the global AV market is expected to surpass $1.2 trillion by 2040. Most consumer cars today function at Level 2 autonomy, meaning the vehicle can assist with steering and acceleration but still requires full human supervision. However, Level 4 autonomy—vehicles that drive entirely without human intervention in specific zones—is now in commercial use in cities across the U.S.
Nuro’s latest delivery pod, powered by Nvidia’s DRIVE Thor platform, is a harbinger of fully autonomous logistics, while Waymo continues to scale passenger service in dense urban environments; Cruise, by contrast, wound down its robotaxi operations after General Motors pulled funding in late 2024.
Yet skepticism lingers. A 2025 Pew Research Center study revealed that only 37% of Americans currently trust autonomous vehicles. Incidents like Uber’s 2018 pedestrian fatality in Tempe, Arizona, and Tesla’s multiple Autopilot crashes underscore the gap between engineering reliability and moral responsibility.
Torque Clustering and the Next Leap
If today’s systems act based on rules or reinforcement learning, tomorrow’s may derive ethics from experience. A recent breakthrough in unsupervised learning—Torque Clustering—offers a glimpse into this future.
Inspired by gravitational clustering in astrophysics, the model detects associations in vast datasets without predefined labels. Applied to language, behavior, or decision-making data, such systems could potentially identify patterns of harm or justice that escape even human analysts.
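To make the idea concrete, here is a minimal sketch of a torque-style clustering rule: a greedy agglomerative variant that scores every candidate merge by the product of the two clusters’ masses and the squared distance between their centroids, the “torque” the method is named for. This is an illustration of the concept under simplified assumptions, not the published algorithm or its reference implementation.

```python
import numpy as np

def torque_agglomerative(points: np.ndarray, n_clusters: int) -> np.ndarray:
    """Greedy agglomerative clustering with a torque-style merge score:
    repeatedly merge the pair of clusters with the smallest
    mass_i * mass_j * squared_centroid_distance until n_clusters remain."""
    clusters = [[i] for i in range(len(points))]           # member indices
    centers = [points[i].astype(float) for i in range(len(points))]

    while len(clusters) > n_clusters:
        best = None                                        # (torque, a, b)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d2 = float(np.sum((centers[a] - centers[b]) ** 2))
                torque = len(clusters[a]) * len(clusters[b]) * d2
                if best is None or torque < best[0]:
                    best = (torque, a, b)
        _, a, b = best
        merged = clusters[a] + clusters[b]
        new_center = points[merged].mean(axis=0)
        for idx in sorted((a, b), reverse=True):           # pop b first, then a
            clusters.pop(idx)
            centers.pop(idx)
        clusters.append(merged)
        centers.append(new_center)

    labels = np.empty(len(points), dtype=int)
    for label, members in enumerate(clusters):
        labels[members] = label
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = np.vstack([
        rng.normal(0.0, 0.3, size=(30, 2)),                # one tight blob
        rng.normal(5.0, 0.3, size=(30, 2)),                # a distant second blob
    ])
    print(torque_agglomerative(data, n_clusters=2))        # two clean labels
```

Because the score grows with both mass and distance, small nearby groups merge first while large distant ones resist merging, which is the intuition borrowed from gravitational mergers: no labels, no preset number of shapes, just structure emerging from the data itself.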
In Culpability, Lorelei’s research embodies this ambition. Her AI was trained on humane principles, designed to anticipate the needs and feelings of passengers. But when tragedy strikes, she is left confronting a truth both personal and professional: even well-intentioned systems, once deployed, can act in ways neither anticipated nor controllable.
The Family as a Microcosm of Systems
Holsinger deepens the drama by using the Cassidy-Shaw family as a metaphor for our broader technological society. Entangled in silences, miscommunications, and private guilt, their dysfunction mirrors the opaque processes that govern today’s intelligent systems.
In one pivotal scene, Alice, the teenage daughter, confides her grief not to her parents—but to a chatbot trained in conversational empathy. Her mother is too shattered to hear. Her father, too distracted. Her brother, too defensive. The machine becomes her only refuge.
This is not dystopian exaggeration. AI therapists like Woebot and Replika are already used by millions. As AI becomes a more trusted confidant than family, what happens to our moral intuitions, or our sense of responsibility?
The novel’s setting—a smart home, an AI-controlled search-and-rescue drone, a private compound sealed by algorithmic security—feels hyperreal. These aren’t sci-fi inventions. They’re extrapolations from surveillance capitalism, smart infrastructure, and algorithmic governance already in place.
Ethics in the Driver’s Seat
As Level 4 vehicles become a reality, the philosophical and legal terrain must evolve. If a robotaxi hits a pedestrian, and there’s no human at the wheel, who answers?
In today’s regulatory gray zone, it depends. Most vehicles still require human backup. But in cities like San Francisco, Phoenix, and Austin, autonomous taxis operate driver-free, transferring liability to manufacturers and operators. The result is a fragmented framework, where fault depends not just on what went wrong—but where and when.
The National Highway Traffic Safety Administration (NHTSA) is beginning to respond. It’s investigating Tesla’s Full Self-Driving system and has proposed new safety mandates. But oversight remains reactive. Ethical programming—especially in edge cases—remains largely in private hands.
Should an AI prioritize its passengers or minimize total harm? Should it weigh age, health, or culpability when faced with a no-win scenario? These are not just theoretical puzzles. They are questions embedded in code.
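To see how such a question literally becomes code, consider a deliberately stark, hypothetical sketch. The Outcome fields, the harm estimates, and the passenger_weight parameter below are invented for illustration and describe no real vehicle’s planning software.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_passenger_harm: float   # 0.0 (none) to 1.0 (severe), an estimate
    expected_bystander_harm: float   # same scale

def choose_maneuver(outcomes: list[Outcome], passenger_weight: float) -> Outcome:
    """Pick the maneuver with the lowest weighted expected harm.
    passenger_weight encodes the ethical stance: 1.0 counts everyone equally,
    while larger values privilege the people inside the car."""
    def cost(o: Outcome) -> float:
        return passenger_weight * o.expected_passenger_harm + o.expected_bystander_harm
    return min(outcomes, key=cost)

options = [
    Outcome("stay in lane and brake", expected_passenger_harm=0.15, expected_bystander_harm=0.30),
    Outcome("swerve off-road",        expected_passenger_harm=0.35, expected_bystander_harm=0.05),
]

# The same scenario produces different "right answers" depending on a constant
# chosen long before any crash.
print(choose_maneuver(options, passenger_weight=1.0).description)  # -> swerve off-road
print(choose_maneuver(options, passenger_weight=3.0).description)  # -> stay in lane and brake
```

Everything morally contested lives in two places: the harm estimates and the weighting constant, both chosen by people long before the moment of impact.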
Some ethicists call for transparent rules, like Isaac Asimov’s fictional “laws of robotics.” Others, like the late Daniel Kahneman, warn that human moral intuitions themselves are unreliable, context-dependent, and culturally biased. That makes ethical training of AI all the more precarious.
Building Moral Infrastructure
Fiction like Culpability helps us dramatize what’s at stake. But regulation, transparency, and social imagination must do the real work.
To build public trust, we need more than quarterly safety reports. We need moral infrastructure—systems of accountability, public participation, and interdisciplinary review. Engineers must work alongside ethicists and sociologists. Policymakers must include affected communities, not just corporate lobbyists. Journalists and artists must help illuminate the questions code cannot answer alone.
Lorelei Cassidy’s great failure is not that her AI was cruel—but that it was isolated. It operated without human reflection, without social accountability. The same mistake lies before us.
Conclusion: Who Do We Blame When There’s No One Driving?
The dilemmas dramatized in this story are already unfolding across city streets and code repositories. As autonomous vehicles shift from novelty to necessity, the question of who bears moral weight — when the system drives itself — becomes a civic and philosophical reckoning.
Technology has moved fast. Level 4 vehicles operate without human control. AI agents execute goals with minimal oversight. Yet our ethical frameworks trail behind, scattered across agencies and unseen in most designs. We still treat machine mistakes as bugs, not symptoms of a deeper design failure: a world that innovates without introspection.
To move forward, we must stop asking only who is liable. We must ask what principles should govern these systems before harm occurs. Should algorithmic ethics mirror human ones? Should they challenge them? And who decides?
These aren’t engineering problems. They’re societal ones. The path ahead demands not just oversight but ownership — a shared commitment to ensuring that our machines reflect values we’ve actually debated, tested, and chosen together. Because in the age of autonomy, silence is no longer neutral. It’s part of the code.
THIS ESSAY WAS WRITTEN BY INTELLICUREAN WITH AI