There’s something I don’t like about using the word “hallucination” to describe the way LLMs make things up. It implies the models are doing something wrong, something they are not supposed to do. But it’s only wrong from our perspective as users: it’s not wrong from the model’s perspective. From the model’s perspective there is no difference in what it is doing when it is being factual and when it isn’t. In a sense LLMs are *always* hallucinating, but sometimes what they say happens to be true.

I wanted to say this because I think the “hallucination” problem may be harder to solve than some people think, and because I have a professional interest in using LLMs for tasks where factual accuracy really matters.


@olihawkins I think I broadly agree with the root idea here, but I don’t think this is actually at odds with our original, human-oriented definition of hallucinations.


@olihawkins That is, in general, a person who is hallucinating does not realize they are hallucinating. Just as, in general, a person dreaming does not realize they are dreaming. “In general” used advisedly – there are certainly clear exceptions in both cases.


@vruba @olihawkins But lack of self-awareness / not recognizing that one is hallucinating isn't an essential part of the definition of "hallucinating." It's tricky because the original post discusses the LLM's "perspective," which seems to suggest a consciousness that can indeed be self-aware, but that's not what the author meant. There is no real "perspective," but there is a mechanical epistemology that is always "correct" insofar as... (1/2)

@vruba @olihawkins It's technically functioning. The point is an important one — and it's one that I think a lot of people are coming to realize, by different means and in different contexts.

@emma @olihawkins Agreed that lack of insight into hallucination is not essential: thus my pains over “in general”.

I’m saying something weaker, which is that being wrong and not knowing it also happens to people sometimes, and in ways that are more LLM-like than some of the fiercer LLM critics acknowledge. (The range of valid criticisms of things people say about LLMs remains vast, to be clear.)

@vruba @emma @olihawkins I think “hallucination” bothers me because it implies a false perception of reality corrupting your model of the world. LLMs don’t perceive or model reality, only language (and thus I guess I agree with Oli).
