Holding AI to a Double Standard: The Moral Costs of Inaction

AI isn’t perfect, but neither are we. In 2019, David Gunkel coined the term “asymmetric virtue” to refer to the demanding level of perfection we expect from machines, such as self-driving cars, that we would never expect from human beings.

But when does this imbalanced way of assessing technology cross the line and become unethical?

To better understand this problem, let’s look at two examples – doctors and cars.

The Case for AI in Medical Diagnostics

In a study done by the University of Virginia Health System, the median diagnostic accuracy for doctors using ChatGPT Plus was 76.3%, while physicians using conventional approaches scored 73.7%. The ChatGPT group also reached their diagnoses slightly more quickly overall, taking 519 seconds compared with 565 seconds.

The researchers were also surprised at how well ChatGPT Plus performed on its own, with a median diagnostic accuracy of more than 92%.

More accurate diagnostics would not only reduce human suffering and death but also lessen the strain on the healthcare system, freeing resources for other patients who stand to benefit.

The Case for AI in Transportation

Google’s Waymo driverless cars have traveled more than 22 million miles. Over that distance, they were about one-third as likely to be involved in an accident as a human-driven car.

Waymo has been involved in 84% fewer crashes with airbag deployment, 73% fewer injury-causing crashes, and 48% fewer police-reported crashes compared to human drivers.

Over 40% of the crashes experienced by Waymo cars occurred at less than 1 mph, and most of the serious collisions were caused by a human driver, including one involving a couple of robbery suspects making their getaway.

Driverless cars are required by regulators to report accidents, and they record every detail of a collision by design. Together, these give us a rare dataset from which we can see that this technology vastly outperforms humans on safety.

The Toll of Inaction

At what point does it become unethical to ignore these statistics? At what point do we draw a line and label humans as a hazard?

Countless papers on AI and robotics have agonized over exotic problems, such as the “trolley problem”, but these debates often focus on issues that rarely come up in everyday life.

By contrast, misdiagnosis is a major cause of death. A study by Johns Hopkins University, published in the journal BMJ Quality & Safety, estimated that roughly 371,000 deaths in the US each year are linked to diagnostic error, and that 795,000 Americans in total die or become permanently disabled annually because they were misdiagnosed.

If AI could reduce these deaths by just 20%, that would be roughly 74,000 lives saved each year, approaching the combined number of lives taken by car accidents and guns in the US annually.

Car accidents take roughly 42,000 lives each year in the US, and another 2 to 2.5 million people are injured in crashes annually. Imagine if those numbers could be cut by two-thirds.
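To make the scale of these figures concrete, here is a rough back-of-the-envelope sketch in Python using only the numbers cited above. The 20% and two-thirds reductions are the hypothetical rates from the preceding paragraphs, not measured outcomes, and the injury figure uses the midpoint of the 2 to 2.5 million range.

```python
# Back-of-the-envelope estimate using the figures cited in this article.
# The reduction rates are illustrative assumptions, not measured outcomes.

DIAGNOSTIC_ERROR_DEATHS_PER_YEAR = 371_000   # BMJ Quality & Safety estimate (US)
TRAFFIC_DEATHS_PER_YEAR = 42_000             # approximate annual US road fatalities
TRAFFIC_INJURIES_PER_YEAR = 2_250_000        # midpoint of the 2-2.5 million range

ASSUMED_DIAGNOSTIC_REDUCTION = 0.20          # hypothetical 20% fewer misdiagnosis deaths
ASSUMED_CRASH_REDUCTION = 2 / 3              # hypothetical two-thirds fewer crashes

diagnostic_lives_saved = DIAGNOSTIC_ERROR_DEATHS_PER_YEAR * ASSUMED_DIAGNOSTIC_REDUCTION
traffic_lives_saved = TRAFFIC_DEATHS_PER_YEAR * ASSUMED_CRASH_REDUCTION
injuries_avoided = TRAFFIC_INJURIES_PER_YEAR * ASSUMED_CRASH_REDUCTION

print(f"Misdiagnosis deaths avoided per year: ~{diagnostic_lives_saved:,.0f}")
print(f"Traffic deaths avoided per year:      ~{traffic_lives_saved:,.0f}")
print(f"Traffic injuries avoided per year:    ~{injuries_avoided:,.0f}")
```

Under these assumptions, the sketch yields roughly 74,000 fewer misdiagnosis deaths, 28,000 fewer traffic deaths, and around 1.5 million fewer injuries each year.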

Keep in mind that these are just two areas in which AI could be saving lives today. If AI technology were employed more widely in industry, farming, mining and other hazardous areas of the economy, it’s likely the lives saved and the injuries avoided would be much higher.

The Moral Question

Arguments can be made that AI isn’t ready because it isn’t error-free. But these arguments ring hollow when you look at the errors we tolerate from humans.

When does prudent caution become denial? 

If AI really can decrease diagnostic error by a notable margin and dramatically lower the likelihood of fatal car crashes, then not using it becomes more than a missed opportunity. It starts to look like a moral failure.

Indeed, a core tenet of ethics is the principle of “do no harm.” Although AI comes with risks—algorithmic biases, the potential for misuse, privacy concerns—human clinicians and drivers pose risks, too. 

The key is not to demand perfection, but rather to evaluate which system (human-only vs. human-plus-AI) minimizes harm. 

If consistent data shows that AI-driven solutions reduce fatalities and injuries compared to an all-human approach, we face a moral imperative to integrate them responsibly.

More to Consider

But the conversation doesn’t end at raw statistics. We must also consider:

Equity and Access: If AI diagnostics save lives, how do we ensure this technology reaches underserved areas, rural hospitals, or small clinics? Making AI widely accessible could amplify its benefits and address healthcare disparities.

Accountability: How do we ensure that when AI makes a mistake, we can trace the root cause and implement corrective measures? Transparent system design, explainability, and regulatory oversight become crucial here.

Human Oversight: While data indicates AI may outperform humans on average, we still need humans in the loop for ethical deliberation, empathy, and nuanced decision-making in edge cases AI can’t yet handle.

Societal Impact: Widespread adoption of AI in diagnostics or self-driving vehicles will likely disrupt existing economic models and labor markets. An ethical approach considers how to mitigate negative social impacts while harnessing technology’s life-saving potential.

Conclusion

When weighed against the current toll of human error, with hundreds of thousands of preventable deaths on the roads and in hospitals every year, waiting on the sidelines for perfect AI feels increasingly indefensible.

The moral question isn’t just about “when” AI will be flawless; it’s about what ethical cost we pay every day we neglect to employ a system that—even in its imperfect state—surpasses our own abilities in certain domains.

On balance, if we accept that no method is 100% faultless but that one is demonstrably safer than the other, then insisting on human exclusivity in critical sectors may indeed cross into unethical territory.

The lesson here is not that AI should replace humans outright; rather, it’s that humanity’s bar for “acceptable error” needs to be consistently applied. 

If we can save thousands—if not millions—of lives by integrating AI into human decision-making, the moral calculus shifts sharply in favor of adoption. The true debate, then, is not about whether AI is perfect, but whether continuing to rely on demonstrably less safe human-only systems can ever be justified once AI has proven to do better.

About Verlicity AI

At Verlicity we specialize in incorporating real-time data into Self-Hosted AI solutions. Our platform is designed to keep your proprietary information confidential while delivering real-time insights from domain-specific and client data.
