Encyclopedias, summary reports, systematic reviews, and other traditional reference works provide readers with valuable insight into the literature. They help to identify important authors, paradigmatic works, competing theories, and so on. This sort of meta-research (i.e., research on the research) makes gathering articles, sourcing data, and synthesizing information far more efficient.
Wikipedia
The researcher may be tempted to turn to Wikipedia for this bird's-eye view. After all, Wikipedia is a relatively trusted source for popular information. It is, however, a poor choice for scholarly information. Academic sources must be, above all, authoritative and reliable; these attributes follow mainly from the author's standing within her academic field. Because Wikipedia is crowd-sourced and its entries are anonymously authored, the authority and reliability of any given entry are unknowable.
We may not know who contributes to academic Wikipedia entries, but we can be confident who does not: professional scholars. Academics like your professor negotiate multiple, often-conflicting professional obligations every day. They are expected to publish scholarship, teach courses, mentor students, apply for grants, present material at conferences, and on and on. In various ways, academics are incentivized to satisfy each of these responsibilities. By contrast, they have no incentive to contribute to academic Wikipedia entries.
Wikimedia publishes a handbook for the evaluation of Wikipedia entries:
AI
If reference sources offer the researcher valuable points of departure and Wikipedia is ill-suited to serious academic inquiry, then you may be seduced by the convenience and ostensible authority of large language model (LLM) chatbots like OpenAI's ChatGPT, Mistral's Le Chat, and Anthropic's Claude. But if the authority of a book or an article follows largely from the authority of its author(s), then chatbot responses are even less authoritative than Wikipedia entries. To the extent chatbots merely synthesize information from across the surface web, their responses are authored simultaneously by everyone and no one in particular. No author = no academic authority.
Even more concerning, chatbots "hallucinate": they frequently fabricate information and present fictitious data as fact. Nor does this tendency to make stuff up appear to be a flaw that may in time be engineered away. Rather, it appears to be a fundamental and intractable characteristic of LLMs. "Hallucinations," Kristian Hammond of Northwestern University argues, are "a feature, not a bug." Hammond explains:
Ultimately, neither Wikipedia nor LLM chatbots are appropriate reference sources for academic inquiry. Where, then, should you begin?
Try one of these: