It doesn’t take much to expose the limitations of ChatGPT. A simple request for book recommendations about U.S. presidents was enough to show how the tool can blend truth with fiction. While the system often produces sentences that sound authoritative, accuracy is not its true strength. Instead, ChatGPT demonstrates the fascinating way large language models operate—not as fact libraries, but as word-prediction machines.
Testing ChatGPT with Presidential Biographies
When one user asked ChatGPT for books on Abraham Lincoln, the tool returned a list that looked promising. Most titles were accurate, such as Garry Wills's Lincoln at Gettysburg. But mistakes crept in: it listed the Emancipation Proclamation, for example, as a book written by Lincoln. Still, the results were not terrible.
The experiment became more revealing when the request shifted to William Henry Harrison, a far less studied president. The list ChatGPT provided looked convincing but was riddled with errors. Only a few titles were real, while others were fabrications or misattributed to the wrong authors. Repeating the same prompt produced a slightly different list, again with only partial accuracy.
When corrected, ChatGPT adapted quickly, offering confident—yet still incorrect—assertions. At one point, it claimed that historian Paul C. Nagel wrote William Henry Harrison: His Life and Times, even though the actual author was James A. Green. Later, it falsely credited the book to columnist Gail Collins, who had written a different biography. The chatbot did not hesitate; it filled the conversation with plausible but incorrect details.
Words, Not Facts
This raises the question: where do these errors come from? ChatGPT is often mistaken for a “super Google,” a giant library, or even a reference librarian that looks up facts on command. That is not how it works.
Large language models are built to predict the next word in a sequence, based on patterns learned from huge amounts of text. They don’t “know” facts the way a human researcher does. Instead, they generate responses that sound plausible. Accuracy, in this framework, is incidental rather than guaranteed.
As OpenAI has explained, ChatGPT is trained on “vast amounts of data from the internet.” This means its answers are shaped by what it has seen in training, not by an internal database of verified truths. If patterns in its data suggest that a certain historian often appears near the phrase “William Henry Harrison biography,” the model may confidently—but incorrectly—link the two.
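That co-occurrence mechanism can be illustrated with a toy sketch. The following is emphatically not how ChatGPT works internally (real LLMs use transformer networks over subword tokens, trained on billions of documents); it is a minimal bigram model, with a made-up miniature corpus, that shows how "predict the next word from patterns" can produce a confident answer with no notion of whether that answer is true:

```python
from collections import defaultdict, Counter

# Toy next-word predictor: count, for each word, which words follow it
# in a tiny made-up corpus, then always emit the most frequent follower.
# The corpus below is invented for illustration only.
corpus = (
    "william henry harrison biography by james green . "
    "abraham lincoln biography by garry wills . "
    "william henry harrison biography by gail collins ."
).split()

# Map each word to a counter of the words seen immediately after it.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word`, or None if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("harrison"))  # "biography" — the most frequent follower
```

Asked what follows "by", this model simply picks whichever author name it has seen most often in that position; it has no way to check who actually wrote the book. Scaled up enormously, that is the sense in which a language model can "confidently but incorrectly link" a historian to a title.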
Why Accuracy Matters
The misattribution of books shows the gap between sounding right and being right. For practical fact-finding, tools like Google remain far more reliable. In fact, the only way to verify ChatGPT’s book lists was to cross-check them using search engines and published references.
The danger lies in the model’s confidence. When it produces errors, it rarely signals uncertainty. To a casual user, a fabricated book title may appear indistinguishable from a genuine one. For tasks like school projects, journalism, or legal work, relying on ChatGPT alone could spread misinformation.
At the same time, this limitation highlights an important truth: ChatGPT was not designed as a truth-verification machine. It was designed to generate text that feels natural, engaging, and contextually relevant.
The “Yes, And” Potential
So, where does ChatGPT shine? Not in retrieving facts, but in creative collaboration. In one experiment, the system was used as an improv partner for a science-fiction roleplay. While it initially produced bland dialogue, small tweaks made the responses more dynamic. The result was not a polished performance, but it did help the user practice improvisation and explore story ideas.
This reveals one of ChatGPT’s most valuable uses: breaking writer’s block. Writers often freeze in front of a blank page. Having a chatbot generate even a flawed first draft provides a foundation to build upon. The text may not be accurate, but it is useful as a springboard for creativity.
ChatGPT’s strength lies in its ability to keep a conversation going, to “yes, and” a scenario much like an improv partner. It can create fictional characters, outline dramatic scenes, or suggest imaginative twists. Accuracy is not the point—momentum is.
A Tool for the Right Job
The story of the Harrison biographies underscores an essential lesson: ChatGPT is not a replacement for research tools like encyclopedias, libraries, or databases. Instead, it is best seen as an idea generator. For factual accuracy, human verification remains essential. For creativity, brainstorming, or overcoming hesitation, ChatGPT can be remarkably effective.
Just as no one would expect a comedian’s improv partner to double as a fact-checker, users should not expect a language model to serve as a perfectly reliable historian. The key is to understand what the tool is—and what it is not.
Conclusion
ChatGPT is not a flawless source of truth. It does not “know” facts, nor was it designed to. What it does exceptionally well is generate plausible, human-like text. Used responsibly, it can help writers overcome blocks, teachers create engaging exercises, and learners spark curiosity. But when accuracy matters, its words must always be checked against reality. In the end, the value of ChatGPT lies not in what it knows, but in how it helps us think, imagine, and create.

