I have quibbles with this but they’re “why didn’t you mention this important detail” kinds of things. I think it’s admirably clear and fundamentally right-headed. https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
@paulmison For example, I think the no-output-as-input argument is correct but not as airtight as it might seem to a lay reader – one could argue that it’s more just a redundancy thing.
@paulmison Specifically, I think it’s super relevant to point out that lossy compression is often used as a backbone for lossless compression (predictor + residual models). That tells us something about all these issues. But I also see why that would be one of the first paragraphs you’d cut for space and focus.
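@paulmison To illustrate what I mean by predictor + residual (a toy delta-coding sketch of my own, not anything from the article): a crude "lossy" predictor guesses each value, you store only the prediction errors, and the reconstruction is still bit-exact.

```python
def encode(samples):
    # Predict each sample as the previous one (a deliberately crude model),
    # then store only the residuals (prediction errors).
    residuals = []
    prev = 0
    for s in samples:
        residuals.append(s - prev)
        prev = s
    return residuals

def decode(residuals):
    # Re-run the same predictor and add back each stored residual,
    # recovering the original exactly.
    samples = []
    prev = 0
    for r in residuals:
        prev = prev + r
        samples.append(prev)
    return samples

data = [10, 12, 13, 13, 15, 20]
assert decode(encode(data)) == data  # lossless round trip
```

The better the predictor, the smaller the residuals compress, but correctness never depends on the predictor being good. That's the sense in which a "blurry" model can sit inside a lossless scheme.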
@paulmison Anyway, for the word count and the presumed audience, I think this is a spectacularly good piece and will be recommending it.
@paulmison I think focusing on compression is very good (it’s how I tend to think about this stuff), but the piece’s explanation of why people don’t use LLMs for the Hutter Prize is basically wrong as stated. Probably down to editorial brevity decisions, not misunderstanding, though.