
Boatwright Memorial Library



Where to start? Defining "Generative AI"

A key part of understanding generative AI, and specific tools like ChatGPT, is contextualizing it within the broader field of artificial intelligence. The following terms are ordered from conceptually broad to specific, and additional resources are included in each section: 

 

Artificial Intelligence

 

Generative AI

  • A branch of artificial intelligence focused on a model's ability to create new content, whether text, images, music, or even human voices. This new content is based in part on the data and content used to "train" the model, and generative AI also applies machine learning to adapt and refine the generated content over time. 
  • Additional Resources

 

Large Language Models

  • A specific type of generative AI that creates written works or text-based responses to user-generated prompts. Large language models (LLMs) are trained on huge amounts of data, which they use to predict the series of words most likely to address a user's prompt, essentially acting as "an extremely powerful form of autocomplete".
  • It is important to understand the difference between LLMs and the tools built on top of them. For example, OpenAI's LLMs serve as the foundation for a wide range of generative AI tools, including ChatGPT and Jenni.ai. A user enters a prompt into ChatGPT, which in turn relies on an OpenAI LLM to generate its response.
  • Web Article - "A jargon-free explanation of how AI large language models work" (ArsTechnica; Timothy B. Lee and Sean Trott)
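As a toy illustration of the "powerful autocomplete" idea, the sketch below predicts the next word from bigram counts over a tiny made-up corpus. This is only an analogy: real LLMs use neural networks over subword tokens, not simple word counts.

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus" standing in for the huge datasets LLMs learn from.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def autocomplete(word):
    """Predict the word most often seen after `word` during training."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(autocomplete("the"))  # "cat" follows "the" most often in this corpus
```

Just as with a real LLM, the prediction reflects the training data: change the corpus and the "most likely next word" changes with it.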

 

ChatGPT

 

Source: AI Guide – The AI Pedagogy Project. (n.d.). Retrieved December 4, 2023, from https://aipedagogy.org/guide/

Practical AI for Instructors and Students Video Series

LinkedIn Learning Courses


Find more information about generative AI through the following LinkedIn Learning courses. The LinkedIn Learning platform (formerly Lynda.com) is available to all UR students, staff, and faculty. 

Books on Artificial Intelligence at the UR Library

Key Terminology

Here are terms commonly used in discussions of generative AI and their definitions:

Algorithm: a set of rules or instructions that tell a machine what to do with the data input into the system.
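As a minimal illustration of this definition, the function below is an algorithm in the plainest sense: a fixed sequence of rules applied to whatever data is input (the example numbers are made up).

```python
def mean(values):
    """An algorithm: step-by-step rules turning input data into a result."""
    total = 0
    for v in values:            # rule 1: accumulate every value
        total += v
    return total / len(values)  # rule 2: divide by how many values there were

print(mean([2, 4, 6]))  # 4.0
```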

Deep Learning: a method of machine learning that lets computers learn in a way that mimics a human brain, by analyzing lots of information and classifying that information into categories. Deep learning relies on a neural network.

Extractive: a category of AI tools designed to identify and extract data and other information from existing resources. This differs from generative AI in that generative tools create new content, while extractive tools find and summarize existing data.

Hallucination: a situation where an AI system produces fabricated, nonsensical, or inaccurate information. The wrong information is presented with confidence, which can make it difficult for the human user to know whether the answer is reliable.

Large Language Model (LLM): a computer program that has been trained on massive amounts of text data such as books, articles, website content, etc. An LLM is designed to understand and generate human-like text based on the patterns and information it has learned from its training. LLMs use natural language processing (NLP) techniques to learn to recognize patterns and identify relationships between words. Understanding those relationships helps LLMs generate responses that sound human—it’s the type of model that powers AI chatbots such as ChatGPT.

Machine Learning (ML): a type of artificial intelligence that uses algorithms which allow machines to learn and adapt from evidence (often historical data), without being explicitly programmed to learn that particular thing.
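A toy sketch of "learning from historical data without being explicitly programmed": rather than hard-coding a pricing rule, the program below estimates a coefficient from made-up historical examples using one-parameter least squares.

```python
# Hypothetical historical data: (size, price) pairs, roughly price = 3 * size.
sizes =  [1.0, 2.0, 3.0, 4.0]
prices = [3.1, 5.9, 9.2, 11.8]

# Instead of programming the rule "multiply size by 3", learn the
# coefficient from the evidence (one-parameter least squares fit).
w = sum(x * y for x, y in zip(sizes, prices)) / sum(x * x for x in sizes)

def predict(size):
    return w * size

print(w)  # learned coefficient, close to 3
```

The key point is that the rule (`w`) is derived from the data; feed the same program different historical examples and it adapts without any code changes.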

Natural Language Processing (NLP): the ability of machines to use algorithms to analyze large quantities of text, allowing the machines to simulate human conversation and to understand and work with human language.

Neural Network: a deep learning technique that loosely mimics the structure of a human brain. Just as the brain has interconnected neurons, a neural network has tiny interconnected nodes that work together to process information. Neural networks improve with feedback and training.
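A minimal sketch of the "improve with feedback and training" idea, using a single artificial neuron trained with the classic perceptron rule on a toy task (learning logical OR). Real neural networks stack many such nodes in interconnected layers.

```python
# A single artificial "neuron": weighted inputs -> threshold -> output,
# trained with simple feedback (the perceptron rule) to learn logical OR.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # connection weights, adjusted during training
b = 0.0         # bias term

def neuron(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Feedback loop: nudge the weights whenever the neuron's answer is wrong.
for _ in range(10):
    for x, target in examples:
        error = target - neuron(x)
        w[0] += 0.5 * error * x[0]
        w[1] += 0.5 * error * x[1]
        b += 0.5 * error

print([neuron(x) for x, _ in examples])  # matches the OR truth table: [0, 1, 1, 1]
```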

Token: the building block of text that a chatbot uses to process and generate a response. For example, the sentence "How are you today?" might be separated into the following tokens: ["How", "are", "you", "today", "?"]. Tokenization helps the chatbot understand the structure and meaning of the input.
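A naive Python sketch of this splitting step, separating words and punctuation with a regular expression. Real chatbots typically use subword tokenizers (such as byte-pair encoding), so their actual token boundaries differ.

```python
import re

def tokenize(text):
    """Naive tokenizer: runs of word characters, or single punctuation marks."""
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenize("How are you today?"))  # ['How', 'are', 'you', 'today', '?']
```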

 

Adapted in part from: Monahan, J. (2023, July). Artificial Intelligence, Explained. Carnegie Mellon University’s Heinz College. https://www.heinz.cmu.edu/media/2023/July/artificial-intelligence-explained

Additional Resources