oh i see that someone who got his phd at the media lab decided to use ChatGPT as a therapy stand-in for thousands of patients without their clear consent, was surprised when the patients hated it, and is now surprised that people are pissed about it.

wild.

oh i also see that he declared the study exempt from IRB and ethics review because it was for a business with no intent to publish in an academic journal and he thinks filing for IRB approval is hard.

_wild_.

i suppose this is also the time to note that the Media Lab was founded as the pseudo independent entity it is because Negroponte hated doing grant reporting and ethics review and a substantial part of its whole _vibe_ is dropping untested technologies on populations with little supervision and nebulous consent.

it is really not surprising to me that someone who got their PhD there in 2014 (so we would have overlapped) thinks that this is an ok way to treat people

if you would like to know more about the history of the Media Lab and the way it encourages these practices in its students and spinoff companies, i highly recommend Molly Wright Steenson's ARCHITECTURAL INTELLIGENCE and Morgan Ames's THE CHARISMA MACHINE

@oddletters I think my favorite (?) part of this is that by doing that they may have turned his product into an FDA-regulated medical device.

@sellars @oddletters My favorite part was him confessing to a crime on Twitter.

@oddletters I would like to widen this criticism to MIT as a whole, it’s not like the media lab stands out.

@oddletters for some reason hadn't yet heard of THE CHARISMA MACHINE, thanks for the recommendation :)

@oddletters "According to... the MIT Technology Review, in response to the controversy of the MIT Media Lab accepting funding from Jeffrey Epstein five years after Epstein's conviction for sex trafficking minors, Negroponte told MIT staff, "If you wind back the clock, I would still say, 'Take it.'"

Negroponte was reported to have said that in the fund-raising world these types of occurrences were not out of the ordinary, and they shouldn’t be reason enough to cut off business relationships."

@oddletters @ianbetteridge My surprised face when the Epstein Lab is connected to something grotesquely unethical

@oddletters my brilliant business idea ( disrupts | refreshes | innovates ) the healthcare industry by taking the ( bold | innovative | Jesus-like ) step of ( skipping all clinical trials | replacing all nurses with unqualified gig workers following instructions on their phone | immediately detaining patients who can’t pay as indentured servants for the hospital )

@oddletters chatGPT gave similar results every time. Not sure that’s what doctors should do.

@oddletters he posted a correction that people knew about/gave consent to using GPT-3? Although I don't know if the people who received therapy consented to their data being used to train a network. Why do you say "no clear consent"?

@pl his correction indicated that the therapists gave consent, patients were not informed that ChatGPT was being used until later, and after they knew ChatGPT had been used, they were unhappy about it.

@pl sure looks like no clear consent on the part of folks in mental health crisis to me.

@pl sorry, they are "peer supporters," not professional therapists (thank god). the people on the receiving end of the ChatGPT-crafted messages were still not informed until afterwards

@oddletters @pl
I found the story interesting but didn't think about whether/how this is regulated by IRB.

But instead of throwing regulations at a problem as a vehicle for criticism, I'm curious about two things:

1) what was the risk of harm in this study? I found the risk to be relatively minimal.

2) how does one provide informed consent on interfacing with GPT if the study hinges on the quality of relationship with GPT? Doesn't it obliquely pollute the study?

@rajvsmachine @oddletters @pl
Pretending an AI is a therapist dehumanizes patients. Vulnerable people will be traumatized by this experience.

The very idea... is just... I don't have a word for it.

@peatbog @oddletters @pl
Did these patients empirically experience trauma? What were they being treated for?

I absolutely understand the potential for significant trauma, but the context will actually define whether it manifests, and I don't really have that. It's a fallacy to jump to the most devastating consequence as the most likely one and assume that the conditions match what's required.

@rajvsmachine @peatbog @oddletters @pl "It's a fallacy to jump to the most devastating consequence as the most likely one and assume that the conditions match what's required." No that's basic ethical oversight with human subjects. One must always examine the worst case scenario and plan ways to minimize it or prevent it. That's what IRBs are in place to do. That's the whole point!

@VerbingNouns @peatbog @oddletters @pl

I agree with that, but there seems to be a lot of implication that this study caused some kind of serious traumatic harm.

One can simultaneously hold the opinion "IRB is intended to mitigate this and should have been used" and "this study may not have had serious negative consequences, and its findings are still useful".

@rajvsmachine @peatbog @oddletters @pl the study is clearly causing harm *right now* by reducing the trust people have in mental health support provided by text. That's enough harm, one doesn't need to be handed unequivocal evidence that someone was traumatized. But even so, I can easily believe they have been. I imagine I might have been!

@rajvsmachine @VerbingNouns @oddletters @pl
We don't need a double-blind controlled study to assess the harm that might follow plenty of experiences --e.g., jumping out of planes with or without parachutes.

Generally speaking, people struggling with depression and thoughts of suicide feel rejected by and alienated from the rest of us --the human family. They feel unworthy of help. They're afraid. Put yourself in those shoes and imagine you'd just been fooled by a chat bot.

@rajvsmachine @oddletters @pl
Thinking about potentially devastating consequences resulting from some therapeutic intervention isn't a fallacy; it's a duty.

@rajvsmachine @oddletters @pl
People demonstrate that they considered potential harms by discussing them in a study proposal submitted to an institutional review board.

@asherlangton @oddletters @pl

I don't agree that disappointment and disillusionment are harmful; that's like saying unhappiness is harmful. It's a natural part of lived human experience and must be regularly experienced in manageable doses.

But I agree that there should be oversight on what constitutes a manageable dose.

How do you propose AI tools be ethically studied in clinical work?

@rajvsmachine @asherlangton @oddletters gtfo please, I don't care about you and your views. If you want to get therapy by a robot without knowing that's fine, but you clearly have no understanding of the work an ethics committee is doing.

@pl @asherlangton @oddletters

It looks like you just want to insult people who may not overlap with your views. I'd suggest finding a private forum and leaving public discussion boards to those who can share opinions and viewpoints like mature adults.

@rajvsmachine @asherlangton @oddletters you keep posting under my question without excluding me from the conversation. Never asked for your views

@pl @asherlangton @oddletters imagine being upset that viewpoints not explicitly asked for are offered on public threads.

@pl @asherlangton @oddletters

Sorry need to defend myself here: the study concluded the opposite - that getting therapy from a robot is *not* good!

@rajvsmachine @asherlangton @oddletters @pl
“How do you propose AI tools be ethically studied in clinical work?”

Ethics review boards have existed for a long time and work across a wide range of scenarios. AI is not so different or special that these conventions are no longer applicable.

@pl @oddletters Wait, how were they teaching the network? You don't just train an LLM by interacting with it.

@oddletters @justusthane I went poking around and there’s so much yikes to be found here:

Originally incorporated as a for profit 5 years ago https://venturebeat.com/business/koko-raises-2-5-million-to-put-human-empathy-inside-every-virtual-assistant/

People raised concerns about data use back then too https://www.reddit.com/r/selfharm/comments/5nr1gf/message_from_mods_koko_ai_on_rselfharm/

The current nonprofit entity is called Koko AI, Inc. (Though their TOS/PP says Koko, Inc.) - incorporated in Delaware in 2020, foreign registered in Cali in Feb 2022. Unclear what happened to the for-profit entity that raised the $2.5M back in 2017.

@oddletters @justusthane gotta say that the kind of replies he received makes me miss twitter. No holds barred, they called him out for harm to others, immorality and lack of ethics. I got called out here for using the entirely average word “weird” in an entirely innocuous fashion. The twitter pile-on was most appropriate

@oddletters @justusthane
Is there a way to save that Twitter thread? There is so much there worthy of discussion and I'd hate to see it deleted or lost if Twitter goes belly-up.

@oddletters Holy shit.

“It worked great until people found out they were talking to a robot, then they didn’t like it anymore”

…two minutes later…

“No no no, you guys got it all wrong, everyone was informed and consented from the beginning”

@oddletters Sounds like pretty standard Silicon Valley tech bro culture to me.

It really is a roach motel.

@oddletters
This is the second time I’ve heard of this, but I can’t find more info about it on the web. Your pardon, but what should I be searching on to find info on this?
(I already tried -chat gpt therapy- and got people talking about using it as a therapeutic sounding board, essentially)

@oddletters he does get that IRBs are... not just for journal articles, right...?

@oddletters
"Business is exempt from ethics" is a popular opinion – by business owners and their sycophants.

@oddletters I volunteered for a year at a suicide hotline (sometimes the calls came thru text), and have benefitted from therapy myself but aside from the training received for the hotline I have no educational credentials. Even so, it’s obvious to me that this is horrendous. When seeking therapy, I don’t want to interact with AI, I want a human connection, a person who has the capacity to feel emotion. The insertion of soulless code into this space is revolting, & should be criminalized.

@oddletters Thanks for opening my eyes to GPT-3 and ChatGPT. Already on a Google search - the point of everything these days.

@oddletters Yeah, that dude is pretty ignorant (and dumb to brag about it to boot). He should have the book thrown at him for this.

If for nothing else then to make sure that others who would want to experiment like this follow the proper ways and rules instead of playing fast and loose because it is expedient.
