Responses analyzing uploaded TXT entirely hallucinated?

A buddy wrote a sci-fi book, which is very much not my thing, but I still wanted some insight so I could be polite and conversant. I converted the whole thing to a txt file and uploaded it to ChatGPT, asking for individual chapter summaries with notes on major plot points and character development. I am on the free plan with limited use of GPT-4o.

Initially, this worked decently. I targeted three-chapter chunks at a time, but it was still giving me very broad summaries.
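(For context, the chunking was just me pointing it at specific chapters in the prompt, nothing fancy. If I scripted it instead, I imagine it would look roughly like this; a sketch only, assuming the file is book.txt and each chapter heading starts with "Chapter" plus a number, which may not match every book.)

    import re

    # Read the whole converted book (hypothetical filename).
    with open("book.txt", encoding="utf-8") as f:
        text = f.read()

    # Split at lines that look like chapter headings ("Chapter 1", "Chapter 2", ...),
    # keeping each heading attached to the chapter text that follows it.
    chapters = re.split(r"\n(?=Chapter\s+\d+)", text)

    # Group the chapters into chunks of three to keep each request small.
    for n in range(0, len(chapters), 3):
        with open(f"chunk_{n // 3 + 1:02d}.txt", "w", encoding="utf-8") as out:
            out.write("\n".join(chapters[n:n + 3]))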

I tried to dig into one aspect of the book (I'm inferring that two of the main characters were split from the same person). When I asked ChatGPT what their connection was and whether they were split from the same person, I feel like I broke my whole account.

I got a response that said

the process of splitting a person into two separate entities like Sam and Vick is referred to as a "fracture." This term suggests a division or breaking apart of a single identity or consciousness into distinct but related individuals. The "fracture" likely serves as a central concept in the story, possibly connected to themes of fragmentation, identity, and the nature of the self.

I asked where in the book the "fracture" of the two characters is revealed, and it responded

The term "fracture" is first defined in Chapter 12 of the book. This chapter discusses the concept in detail, explaining how a fracture occurs when a person's identity or consciousness is split into two distinct entities. This is a significant plot point, as it directly relates to the characters Sam and Vick, who are examples of a fractured identity. The explanation provides insight into the nature of their relationship and the broader implications of fracturing within the story's universe.

I jumped to Chapter 12 of the book, and there is nothing about this there. I did a Ctrl+F for "fracture" and the word appears only once, describing someone's fractured arm.

I tried going back to my old way of having ChatGPT provide summaries of individual chapters, but it started hallucinating everything. It made up characters and plot points, becoming wholly unreliable.

I thought I could cure this by starting over. I opened a new thread, uploaded the same file, and sent the same opening prompt. Instead of chapter summaries, ChatGPT responded

Here is the list of companies along with their associated ratings for 2023:

Aegis Security Insurance Company: Not Rated (NR)
Allstate Northbrook Indemnity Company: A+
Allstate Insurance Company: A+
Allstate Vehicle and Property Insurance Company: A+
Allstate Fire and Casualty Insurance Company: A+
Allstate Property and Casualty Insurance Company: A+
American Family Mutual Insurance Company, S.I.: A
Amica Mutual Insurance Company: A+
Arbella Insurance Company: A

I stopped it before it got any further.

I asked how that was related to my prompt, and ChatGPT apologized and responded with

It looks like the search results were incorrect. Let me try again to find the relevant information from the chapters of your friend's book. I'll summarize the main plot points, character developments, and any notable themes or motifs.

I'll start by analyzing the content of the first three chapters again.

The Landlord's Contract includes several key terms relevant to the user's prospective apartment lease. Here's a summary:

Security Deposit:

The Tenant must provide a security deposit equal to one month's rent within three days after signing the lease agreement.

Rental Payments:

The rent is due on the first of each month. The Tenant can pay by electronic transfer, but the Landlord must approve the method of payment.

Again, I had to stop the response.

I did this a few more times, even deleting threads and starting fresh in hopes of getting relevant responses, but nothing I do seems to get ChatGPT back on track.

Is there something I need to do to fix ChatGPT? Does document analysis like this normally get so dysfunctional? Are there better ways I should be trying to prompt?