Why do we need someone to take the blame in the first place? Is the goal to point fingers, or to improve the process so the same mistake does not happen again?
Think about a fully automated industrial production line. When it produces a faulty part, do we look for a person to blame, or do we examine the quality assurance process and fix what allowed the defect to pass through?
There is a strong parallel here. Until recently, AI was mostly a tool: the human using it could directly see and control the output, and was therefore responsible for it. But as systems become more agentic and automated, the human role shifts from tool user to process operator, and responsibility shifts with it.
The operator is responsible for following and maintaining the process. When an AI system fails, that is increasingly a failure of process and system design, not an individual mistake.
This feels like a natural evolution. The more autonomy we give AI, the less practical it becomes to have humans micromanage every action. Instead, we need robust processes and technical guardrails that keep systems within acceptable boundaries.
This does not mean letting AI run wild without accountability. It means being intentional about where responsibility truly belongs. Requiring a human to be personally responsible for every AI action can make AI far less useful. Maybe we should reserve that level of responsibility for decisions with significant human impact, where judgment and accountability genuinely matter.
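To make the idea concrete, here is a minimal sketch of what an impact-based guardrail could look like, assuming a setup where low-impact actions run automatically, high-impact ones wait for a human, and every decision is logged. The impact levels, the `requires_human_approval` threshold, and all names here are illustrative assumptions, not a reference to any particular framework.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Callable, Tuple


class Impact(Enum):
    LOW = 1     # e.g. drafting an internal summary
    MEDIUM = 2  # e.g. sending an external email
    HIGH = 3    # e.g. approving a payment or a medical recommendation


@dataclass
class ProposedAction:
    description: str
    impact: Impact


@dataclass
class AuditEntry:
    action: ProposedAction
    approved: bool
    approver: str  # "policy" for automatic approvals, otherwise a human identifier
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def requires_human_approval(action: ProposedAction) -> bool:
    """Hypothetical policy: only high-impact actions need a human in the loop."""
    return action.impact is Impact.HIGH


def run_with_guardrail(
    action: ProposedAction,
    ask_human: Callable[[ProposedAction], Tuple[bool, str]],
) -> AuditEntry:
    """Approve or block an action under the guardrail and record it for audit."""
    if requires_human_approval(action):
        approved, approver = ask_human(action)  # human judgment where impact is high
    else:
        approved, approver = True, "policy"     # routine actions proceed automatically
    # A real system would execute the action here if approved; this sketch only logs the decision.
    return AuditEntry(action=action, approved=approved, approver=approver)


# A low-impact action passes automatically; a high-impact one waits for explicit sign-off.
audit_log = [
    run_with_guardrail(
        ProposedAction("draft weekly report", Impact.LOW),
        ask_human=lambda a: (False, ""),
    ),
    run_with_guardrail(
        ProposedAction("issue refund of $5,000", Impact.HIGH),
        ask_human=lambda a: (True, "finance.lead"),
    ),
]
```

The point is not the code itself but where accountability sits: the threshold is an explicit, reviewable policy decision, and the audit trail makes a failure traceable to the process rather than to whoever happened to press the button.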
And yes, we could also argue that humans are not exactly flawless moral actors either, and that AI might even be more predictable in some ways. But that is a slightly bigger philosophical rabbit hole for another day.