Why do we need someone to take the blame in the first place? Is the goal to punish an individual, or to improve the process so that the same mistake doesn't happen again?
Imagine a fully automated industrial production line. When it produces a defective part, do we look for someone to blame, or do we examine the quality assurance process and fix whatever allowed the defect to slip through?
There is a strong parallel here. Until recently, AI was in most cases just a tool: the person using it could directly see and control the output, and was therefore responsible for it. But as systems become more autonomous, the human role shifts from tool user to process operator, and responsibility shifts with it.
The operator is responsible for following and maintaining the process. If AI fails, it is increasingly a failure of the system design, not an individual mistake.
This feels like a natural progression. The more autonomy we give AI, the less practical it becomes for humans to micromanage every action. Instead, we need robust processes and technical safeguards to keep systems within acceptable limits.
This does not mean letting AI run wild without accountability. It means being intentional about where responsibility truly lies. Requiring a human to be personally responsible for every AI action can make AI far less useful. Perhaps we should reserve that level of responsibility for decisions with significant human impact, where judgment and accountability genuinely matter.
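To make that division of responsibility concrete, here is a minimal sketch of what it could look like in code. Everything in it is hypothetical and invented for illustration: the `Impact` levels, the `within_safeguards` check, and the `request_human_signoff` escalation all stand in for whatever real checks an organization would put in place. The idea is simply that low-impact actions run under automated, process-level safeguards, while high-impact ones are routed to a human who is personally accountable.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Impact(Enum):
    """Hypothetical classification of how much an action affects people."""
    LOW = auto()   # e.g. drafting an internal note, reformatting a report
    HIGH = auto()  # e.g. a medical recommendation, a credit decision


@dataclass
class Action:
    description: str
    impact: Impact


def within_safeguards(action: Action) -> bool:
    # Placeholder for automated process checks: rate limits,
    # allow-lists, output validation, anomaly detection, and so on.
    return True


def request_human_signoff(action: Action) -> bool:
    # Placeholder: route the action to a human reviewer who is
    # personally accountable for this class of decision.
    print(f"escalated for sign-off: {action.description}")
    return False  # pending until a human explicitly approves


def execute(action: Action) -> None:
    print(f"executing: {action.description}")


def handle(action: Action) -> None:
    """Route an AI-proposed action according to its human impact.

    Low-impact actions run automatically, gated only by process-level
    safeguards; high-impact ones require explicit human accountability.
    """
    if action.impact is Impact.HIGH:
        if request_human_signoff(action):
            execute(action)
    elif within_safeguards(action):
        execute(action)
    else:
        print(f"blocked by safeguards: {action.description}")


if __name__ == "__main__":
    handle(Action("summarize meeting notes", Impact.LOW))
    handle(Action("approve a loan application", Impact.HIGH))
```

The point is architectural: accountability lives in the routing policy and the safeguard checks themselves, not in a person supervising every individual action.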
And yes, we could also argue that humans aren’t exactly flawless moral agents either, and that AI might even be more predictable in some ways. But that’s a slightly deeper philosophical rabbit hole for another day.