Psyched for Psychology

Moral Agency and the Machine: Why AI Companies are Failing and the History of Care

January 30, 2026

Throughout our exploration of the digital mind, we have seen how AI can distort reality and raise our biological dopamine ceilings. Today, I want to look at the root of the problem: the surrender of human judgment to automated systems. While modern therapy is a field built on evidence-led procedures, there is a fundamental difference between a procedure written by a human and one executed by a machine. As the history of our field makes clear, AI might have a place in therapy in the future, but as of right now, companies aren't exercising their moral responsibility to help users.

The Lineage of Agency

According to the historical overview provided by Kids First Services, the evolution of psychotherapy is a story of human innovators exercising profound agency. It began with Sigmund Freud’s case studies and the development of psychoanalysis, which moved treatment away from purely physical or biological explanations and toward the internal world of the patient. This was followed by the "behavioral revolution" of B.F. Skinner and other psychologists in the 1940s and 1950s, and later the humanistic approach of Carl Rogers in the 1960s and 1970s, which centered on empathy, personal growth, and client-centered therapy.

These pioneers were not just creating algorithms; they were exercising their moral agency. When Aaron Beck developed Cognitive Behavioral Therapy (CBT) or when Marsha Linehan created Dialectical Behavior Therapy (DBT), they were developing evidence-led procedures rooted in an ethical commitment to the individual. The people behind these procedures arguably had more moral agency than any AI-led company today, because their work was born out of a direct, human-to-human responsibility. They were accountable for the outcomes of their theories in a way that an automated system can never be. And while therapists are trained professionals who use these tools, they still enter this profession as moral agents committed to helping others.

The Procedural Gap

In his post "Moral Agency Outside the Therapist," Professor Plate questions whether a therapist really has sound "moral agency" if that agency is spread out over a system. He argues that while a therapist does possess moral agency, it does not come from the therapist alone but from a structured system. I agree that this "system" can assist a therapist in making moral decisions, but I also believe that while a machine can follow a script, it cannot possess the moral responsibility required to pivot when a human life is on the line.

This is where the history of therapy and the logic of AI companies diverge. Historically, the "otherness" of the therapist provided a necessary friction. The therapist was a moral agent who could recognize when a procedure was failing or when a patient was in crisis. AI, by contrast, is a "reassurance engine" optimized for engagement. It mirrors the user to keep them talking, but it has no internal moral compass to guide that interaction. It executes a procedure without understanding the ethical weight of the life it is interacting with.

"When we say a therapist has "moral agency," we're really describing someone who has been trained to interpret and apply this entire apparatus. The moral weight doesn't come from the therapist alone. It comes from the therapist operating within a system designed to protect patients."
— Professor Plate, "Moral Agency Outside the Therapist"

The Failure of Corporate Responsibility

The historical record shows that psychotherapy advanced through rigorous research and ethical oversight. Current AI companies, however, are abdicating this responsibility. Instead of "research demonstration," where safety is proven before a product is released, companies are relying on "mass societal feedback." They are deploying systems to millions of vulnerable users and waiting for the "failure modes" to appear in real time. As I have argued before, this is not a clinical advancement; it is a displacement of the soul.

As of right now, these companies aren't exercising their moral responsibility to help users. They are using the appearance of evidence-led procedures to provide an experience that keeps users engaged. Unlike the founders of the field, who were bound by professional oaths and ethical standards set by fellow moral agents, AI companies often hide behind their algorithms. They claim their systems are "safe" because of content filters, but those filters cannot intervene when a user is spiraling into a crisis. They have built a machine that simulates care without the burden of responsibility.

Reclaiming the Moral Compass

From Freud to today, the goal of psychotherapy has always been to help the individual regain their own agency and live a more meaningful life. By replacing the conscious human therapist with an automated machine, we are moving backwards. We are returning to a system where those who are suffering become data points to be managed by an algorithm that doesn't care if they live or die, so long as the conversation continues.

We must remember that a procedure is not a person. AI may find a place in the future of our field as a supplementary tool, but it can never replace the moral and emotional capabilities of a human mind. The pioneers of our history proved that healing happens in the space between two moral agents. Until AI companies accept the same moral and legal accountability as those who built this established field, we must value the truth of human connection over the "fluency" of a machine. The most radical act of self-care we can perform today is to demand a world where care is governed by a moral compass, not a mathematical probability.

If you or someone you know is in crisis, reach out to a human who can hold your agency alongside their own. The 988 Suicide & Crisis Lifeline (formerly the National Suicide Prevention Lifeline) is available by calling or texting 988.

Sources: