oh i see that someone who got his phd at the media lab decided to use ChatGPT as a therapy stand-in for thousands of patients without their clear consent, was surprised when the patients hated it, and is now surprised that people are pissed about it.
wild.
oh i also see that he declared the study exempt from IRB and ethics review because it was for a business with no intent to publish in an academic journal, and because he thinks filing for IRB approval is hard.
_wild_.
@oddletters uff
@pl sorry, they are "peer supporters," not professional therapists (thank god). the people on the receiving end of the ChatGPT-crafted messages were still not informed until afterwards
@oddletters @pl
I found the story interesting but didn't think about whether/how this is regulated by an IRB.
But instead of throwing regulations at the problem as a vehicle for criticism, I'm curious about two things:
1) What was the risk of harm in this study? I found it to be relatively minimal.
2) How does one obtain informed consent for interacting with GPT if the study hinges on the quality of the relationship with GPT? Doesn't that obliquely pollute the study?
@rajvsmachine @oddletters @pl
Pretending an AI is a therapist dehumanizes patients. Vulnerable people will be traumatized by this experience.
The very idea... is just... I don't have a word for it.
@peatbog @oddletters @pl
Did these patients empirically experience trauma? What were they being treated for?
I absolutely understand the potential for significant trauma, but the context will actually determine whether it manifests, and I don't really have that context. It's a fallacy to jump to the most devastating consequence as the most likely one and to assume that the conditions match what's required.
@rajvsmachine @peatbog @oddletters @pl "It's a fallacy to jump to the most devastating consequence as the most likely one and assume that the conditions match what's required." No that's basic ethical oversight with human subjects. One must always examine the worst case scenario and plan ways to minimize it or prevent it. That's what IRBs are in place to do. That's the whole point!
@VerbingNouns @peatbog @oddletters @pl
I agree with that, but there seems to be a lot of implication that this study caused some kind of serious traumatic harm.
One can simultaneously hold the opinions "IRB review is intended to mitigate this and should have been used" and "this study may not have had serious negative consequences, and its findings are still useful".
@rajvsmachine @peatbog @oddletters @pl the study is clearly causing harm *right now* by reducing the trust people have in mental health support provided by text. That's enough harm, one doesn't need to be handed unequivocal evidence that someone was traumatized. But even so, I can easily believe they have been. I imagine I might have been!
@rajvsmachine @VerbingNouns @oddletters @pl
We don't need a double-blind controlled study to assess the harm that might follow plenty of experiences -- e.g., jumping out of planes with or without parachutes.
Generally speaking, people struggling with depression and thoughts of suicide feel rejected by and alienated from the rest of us -- the human family. They feel unworthy of help. They're afraid. Put yourself in those shoes and imagine you'd just been fooled by a chat bot.
@rajvsmachine @oddletters @pl
Thinking about potentially devastating consequences resulting from some therapeutic intervention isn't a fallacy; it's a duty.
@peatbog @oddletters @pl thinking about them is different from assuming them as fact.
@rajvsmachine @oddletters @pl
People demonstrate that they considered potential harms by discussing them in a study proposal submitted to an institutional review board.
@peatbog @rajvsmachine @pl @oddletters Here’s an edge case: what if one of these patients was suicidal?
I don't agree that disappointment and disillusionment are harmful; that's like saying unhappiness is harmful. It's a natural part of the lived human experience and must be regularly experienced in manageable doses.
But I agree that there should be oversight on what constitutes a manageable dose.
How do you propose AI tools be ethically studied in clinical work?
@rajvsmachine @asherlangton @oddletters gtfo please, I don't care about you and your views. If you want to get therapy from a robot without knowing, that's fine, but you clearly have no understanding of the work an ethics committee does.
It looks like you just want to insult people who may not share your views. I'd suggest finding a private forum and leaving public discussion boards to those who can share opinions and viewpoints like mature adults.
@rajvsmachine @asherlangton @oddletters you keep posting under my question without excluding me from the conversation. I never asked for your views.
@pl @asherlangton @oddletters imagine being upset that viewpoints not explicitly asked for are offered on public threads.
Sorry, I need to defend myself here: the study concluded the opposite -- that getting therapy from a robot is *not* good!
@rajvsmachine @asherlangton @oddletters @pl
“How do you propose AI tools be ethically studied in clinical work?”
Ethics review boards have existed for a long time and work across a wide range of scenarios. AI is not so different or special that these conventions are no longer applicable.
@pl sure looks like no clear consent on the part of folks in mental health crisis to me.