@ingrid On the off chance you're serious: if you try this at all, do it with a frontier LLM whose context window is large enough to fit the whole thing -- Claude Pro or Google Gemini. The "ChatGPT"-branded stuff probably won't cut it.
(The "LLM summarization doesn't work!" paper that was making the rounds a few days ago used Llama 2 70B -- a small model by frontier standards, with a 4k-token context window, which is something like 2000-3000 words. No surprise at all that it can't summarize books.)