
Boatwright Memorial Library

On the value of traditional reference sources, the pitfalls of Wikipedia, and the absurdity of AI

In the initial stages of research, it can be helpful to get a bird's-eye view of the topic. 

Encyclopedias, summary reports, systematic reviews, and other traditional reference works provide readers with valuable insight into the literature.  They help to identify important authors, paradigmatic works, competing theories, and so on.  This sort of meta-research (i.e., research on the research) makes gathering articles, sourcing data, and synthesizing information far more efficient.

 

Wikipedia

The researcher may be tempted to turn to Wikipedia for this bird's-eye view.  After all, Wikipedia represents a relatively trusted source for popular information.  It is, however, a poor choice for scholarly information.  Academic sources must be, above all, authoritative and reliable; these attributes follow mainly from the author's position relative to her academic field.  Because Wikipedia is crowd-sourced and its entries are anonymously authored, their authority and reliability are always unknown.

We may not know who contributes to academic Wikipedia entries, but we can be confident about who does not: professional scholars.  Academics like your professor negotiate multiple, often conflicting professional obligations every day.  They are expected to publish scholarship, teach courses, mentor students, apply for grants, present material at conferences, and on and on.  In various ways, academics are incentivized to satisfy each of these responsibilities.  By contrast, they have no incentive to contribute to academic Wikipedia entries.

 

Wikimedia publishes a handbook for evaluating Wikipedia entries:

Evaluating Wikipedia brochure

 

AI

If reference sources offer the researcher valuable points of departure and Wikipedia is ill-suited to serious academic inquiry, then you may be seduced by the convenience and ostensible authority of large language model (LLM) chatbots like ChatGPT, Anthropic's Claude, and Mistral's Le Chat.  But if the authority of a book or an article follows largely from the authority of its author(s), then chatbot responses are even less authoritative than Wikipedia entries.  To the extent that chatbots merely synthesize information from across the surface web, their responses are authored simultaneously by everyone and by no one in particular.  No author = no academic authority.

Even more concerning, chatbots "hallucinate."  They frequently fabricate information and present fictitious data as fact.  Nor does this tendency to make stuff up appear to be a fly in the LLM ointment that may in time be excised.  Rather, it appears to be a fundamental and intractable characteristic of LLMs.  "Hallucinations," Kristian Hammond of Northwestern University argues, are "a feature, not a bug."  Hammond explains:

Language models are not built to be encyclopedias or databases of facts. Instead, they are designed to model the way humans use language. They encode how to structure sentences, connect words, and follow the rules of grammar. This ability comes from their exposure to vast amounts of text, allowing them to pick up on patterns and structures. But when it comes to factual accuracy, these models can only work when likelihood (the metric by which they choose the next word) and truth align. If there's a gap in their knowledge, they'll fill it in with whatever is the most likely, regardless of whether it is true.
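
Hammond's point can be made concrete with a toy sketch.  The Python snippet below is purely illustrative: the prompt and the probabilities are invented, and no real model is remotely this simple.  It shows only the core mechanism he describes: the model scores candidate next words and emits the likeliest one, whether or not it happens to be true.

    # Toy illustration only (not Hammond's code, not a real model).
    # Invented next-word probabilities for the prompt
    # "The capital of Australia is":
    next_word_probs = {
        "Sydney": 0.55,     # most common in the imagined training text, but false
        "Canberra": 0.35,   # true, but written less often
        "Melbourne": 0.10,
    }

    # Greedy decoding: emit whichever word is most probable.
    prediction = max(next_word_probs, key=next_word_probs.get)
    print(prediction)  # prints "Sydney" -- likelihood and truth have come apart

Nothing in this procedure looks anything up or checks a fact; the "answer" is simply the statistically favored word.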

Better reference sources

Ultimately, neither Wikipedia nor LLM chatbots are appropriate reference sources for academic inquiry.  Where, then, should you begin? 

Try one of these: