A key part of understanding generative AI, and specific tools like ChatGPT, is contextualizing it within the broader field of artificial intelligence. The following terms are ordered from conceptually broad to specific, and additional resources are included in each section:
Source: AI Guide – The AI Pedagogy Project. (n.d.). Retrieved December 4, 2023, from https://aipedagogy.org/guide/
What is generative AI? from Generative AI: Introduction to Large Language Models by Frederick Nwanganga
Find more information about generative AI through the following LinkedIn Learning courses. The LinkedIn Learning platform (formerly Lynda.com) is available to all UR students, staff, and faculty.
Here are terms commonly used in discussions of generative AI and their definitions:
Algorithm: a set of rules or instructions that tell a machine what to do with the data input into the system.
Deep Learning: a method of machine learning that lets computers learn in a way that mimics a human brain, by analyzing lots of information and classifying that information into categories. Deep learning relies on a neural network.
Extractive: a category of AI tools designed to identify and extract data and other information from existing resources. Extractive tools differ from generative AI in that generative tools create new content, while extractive tools find and summarize existing data.
Hallucination: a situation where an AI system produces fabricated, nonsensical, or inaccurate information. The wrong information is presented with confidence, which can make it difficult for the human user to know whether the answer is reliable.
Large Language Model (LLM): a computer program that has been trained on massive amounts of text data such as books, articles, website content, etc. An LLM is designed to understand and generate human-like text based on the patterns and information it has learned from its training. LLMs use natural language processing (NLP) techniques to learn to recognize patterns and identify relationships between words. Understanding those relationships helps LLMs generate responses that sound human—it’s the type of model that powers AI chatbots such as ChatGPT.
Machine Learning (ML): a type of artificial intelligence that uses algorithms which allow machines to learn and adapt from evidence (often historical data), without being explicitly programmed to learn that particular thing.
Natural Language Processing (NLP): the ability of machines to use algorithms to analyze large quantities of text, allowing the machines to simulate human conversation and to understand and work with human language.
Neural Network: a deep learning technique that loosely mimics the structure of a human brain. Just as the brain has interconnected neurons, a neural network has tiny interconnected nodes that work together to process information. Neural networks improve with feedback and training.
Token: the building block of text that a chatbot uses to process and generate a response. For example, the sentence "How are you today?" might be separated into the following tokens: ["How", "are", "you", "today", "?"]. Tokenization helps the chatbot understand the structure and meaning of the input.
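To make the tokenization example above concrete, here is a minimal sketch in Python that splits a sentence into word and punctuation tokens. Note that this is a simplified illustration: production chatbots such as ChatGPT use more sophisticated subword tokenizers, which can break a single word into multiple tokens.

```python
import re

def tokenize(text):
    # Match either a run of word characters (a word) or a single
    # non-word, non-space character (a punctuation mark).
    return re.findall(r"\w+|[^\w\s]", text)

tokens = tokenize("How are you today?")
# tokens == ["How", "are", "you", "today", "?"]
```

Each token in the resulting list becomes a unit the model can process, which is why the question mark appears as its own token separate from "today".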
Adapted in part from: Monahan, J. (2023, July). Artificial Intelligence, Explained. Carnegie Mellon University’s Heinz College. https://www.heinz.cmu.edu/media/2023/July/artificial-intelligence-explained