Click on the statements below to learn more about each topic:
Numerous AI-detection tools claim to reliably distinguish human writing from AI-generated content by measuring signals such as perplexity (how predictable a text is to a language model) and burstiness (how much sentence length and structure vary). However, AI detection tools have been plagued by frequent false positives and false negatives. Currently, there is no AI detection tool with a reliable success rate.
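To make the perplexity idea concrete, here is a minimal sketch. It assumes hypothetical per-token probabilities (a real detector would obtain these from a language model); lower perplexity means the text was more predictable, which detectors treat as a weak signal of AI authorship:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability per token.
    Lower values mean the text was more predictable to the model."""
    avg_neg_log = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log)

# Hypothetical probabilities a model might assign to each token:
predictable = [0.9, 0.8, 0.95, 0.85]   # reads like typical model output
surprising = [0.2, 0.05, 0.4, 0.1]     # less predictable, more "human-like"

print(perplexity(predictable) < perplexity(surprising))  # True
```

The overlap between the two distributions in real text is exactly why these tools misfire: careful human writing can be highly predictable, and edited AI text can be surprising.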
Use the resources below to find additional information:
Part of the appeal of generative AI tools for some users is the assumption that such tools are unbiased in their decision-making and creative process. However, generative AI tools can exhibit bias in fundamental ways. Because these tools rely on human-created training data to generate their outputs, any biases found in the training datasets, whether "socioeconomic, racial [or] gender bias", are then reflected in AI outputs. The same holds true for the decision-making algorithms used by some AI tools: because those algorithms are created by humans, they may reflect the same human biases. Algorithmic bias can affect a wide range of AI-adjacent functions, from creating AI art to healthcare decision-making.
Use the resources below to find additional information:
To form their training data sets, many generative AI tools have scraped information from a wide range of sources on the Internet with no notice to the sources’ original owners/creators. Some of this information is protected under copyright, which may mean that generative AI companies face legal problems because of their use of such material. Two of the most pressing questions on copyright and AI are:
Does the use of copyrighted material by generative AI companies constitute copyright infringement, or is it a legal form of fair use?
Will human-prompted outputs from generative AI tools be afforded copyright protection?
In January 2025, the U.S. Copyright Office released the second part of a report on AI and copyright that provides some clarity on the second question. Content generated from human prompts alone is not considered copyrightable, but AI-generated content may receive copyright protection if "sufficient human [contributions]" can be proven, for example when a human meaningfully arranges or organizes the generated content, or makes significant changes to it after creation. However, this guidance may change as more generative AI companies face legal scrutiny.
Use the resources below to find additional information:
Generative AI tools require a significant amount of computational processing power to function, which is provided by high-performance servers housed in physical data centers located across the country. These centers require massive amounts of electricity to keep tools operational, as well as water to keep the servers cool. Many AI companies have not revealed just how much electricity and water their tools consume, or how much will be needed in the future. As such, there are significant unanswered questions about the environmental costs of keeping generative AI tools functional. Additionally, a growing number of these data centers are located in Virginia.
Use the resources below to find additional information:
In the context of generative AI tools, “hallucinations” refer to instances where a generated response appears accurate when in reality it is false or fabricated. For example, users have noticed that citations or bibliographic entries produced by LLMs often include made-up authors, articles, and journals. Hallucinations occur in part because of how generative AI tools function: they predict the most likely next word in a response rather than “thinking” through what the correct response actually is. While some tools are more prone to hallucinations than others, the problem should give users pause before trusting any generative AI output.
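The "most likely next word" mechanism can be sketched in a few lines. This toy example assumes a made-up word-count table standing in for a real model's training data; the point is that the model selects the most frequent continuation, with no notion of whether the resulting claim or source actually exists:

```python
from collections import Counter

# Hypothetical counts of which word followed "cited" in training text.
# The model only knows which continuation was most frequent; it cannot
# check whether a cited source is real.
next_word_counts = {
    "cited": Counter({"by": 40, "in": 25, "Smith": 5}),
}

def predict_next(word):
    """Return the most frequent continuation seen in 'training'."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("cited"))  # "by" — fluent and plausible, but chosen
                              # by frequency, not by factual accuracy
```

Chaining such predictions produces text that reads naturally, which is why fabricated citations look so convincing.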
Use the resources below to find additional information:
Because of generative AI's ability to produce human-like responses and complete various repetitive tasks, there are major concerns about how AI tools will impact the human labor market. For example, some writers and language translators have already been replaced by generative AI tools, highlighting the employment anxiety felt by other workers. Global attention has also focused on the economic exploitation of some AI content moderators and workers tasked with training and improving AI tools.
Use the resources below to find additional information:
Most generative AI tools require users to provide personal information, such as full names, phone numbers, and/or email addresses. Additionally, prompts, uploaded files, and other content entered into generative AI tools could be added to their training datasets and used to further develop those tools, often without users' knowledge or consent. Consider the following before using generative AI tools:
Review all privacy policies – AI companies will post and update privacy and data retention policies on their websites. Look through those policies, including any changes from previous iterations, before inputting potentially sensitive data. For tools that lack a sound privacy policy (or any policy at all), consider using a different tool.
Be cautious with what you share – Know that any sensitive or personal information may be incorporated into generative AI datasets, so exercise caution when using proprietary data, controlled information, or student information (as this may constitute a FERPA violation).
Use the following links to access relevant UR-specific data security policy documents:
Use the resources below to find additional information:
“Prompts” refer to a user’s initial request or input into a generative AI tool to receive a desired result (ex. “Come up with a list of...”). Currently, the semantic structure of those prompts is a major factor in the overall quality of the tool’s output. Below are articles that offer practical tips and advice on how to craft prompts when using generative AI tools for specific tasks. Additionally:
Many AI researchers predict that users will not need to focus as much on structuring their prompts in the future as tools continue evolving. However, that day has not yet come, so prompts remain an important feature of generative AI tools.
Using generative AI tools that have direct access to the live internet (ex. Gemini, Microsoft Copilot) and/or more advanced models (ex. ChatGPT 4o) can affect output quality as much as, or more than, prompt engineering.
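One concrete way to give a prompt "semantic structure" is to spell out a role, task, constraints, and output format rather than typing a bare request. The sketch below assumes a hypothetical template and illustrative field names (no real AI service is called, and no particular tool's syntax is implied):

```python
# Hypothetical prompt template; the field names are illustrative,
# not taken from any specific tool's documentation.
TEMPLATE = (
    "You are {role}.\n"
    "Task: {task}\n"
    "Constraints: {constraints}\n"
    "Output format: {output_format}"
)

def build_prompt(role, task, constraints, output_format):
    """Assemble a structured prompt from its parts."""
    return TEMPLATE.format(role=role, task=task,
                           constraints=constraints,
                           output_format=output_format)

prompt = build_prompt(
    role="a reference librarian",
    task="Come up with a list of five peer-reviewed articles on AI bias.",
    constraints="Only include sources you can verify; say so if unsure.",
    output_format="A numbered list with author, year, and title.",
)
print(prompt)
```

Keeping prompts in a reusable template like this also makes it easy to compare outputs when you change only one part of the request.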
Click here to see helpful tips on prompt engineering elsewhere in this guide, or use the following links for more information about prompt engineering and guides for creating prompts for specific tasks:
Since the release of ChatGPT (powered by GPT-3.5) in November 2022, generative AI tools have continued to emerge at a rapid pace. While many of these tools are open-source and freely available, others offer paid access for additional usage and functionality. As more users interact with generative AI tools, it can be difficult to both keep up with changes to existing tools and assess the quality of new ones.
Use the resources below to find additional information:
The following websites offer access to new and open-source generative AI tools:
Generative AI’s applications in research and writing have raised questions from scholars about how to cite such uses, and whether journal publishers will allow the inclusion of AI assistance in submitted papers. The major concerns include generative AI presenting incorrect information as fact (i.e. hallucinations), data privacy, and dataset bias. While there is no universal standard for citing and/or disclosing generative AI use in published papers, many scholarly journals and publishers have created guidelines for scholars to follow when submitting work for publication.
Use the resources below to find additional information:
Generative AI and Publishing Guidelines
The following are examples of generative AI guidelines from select scholarly journals and publishers. These can (and will) evolve, so please check directly with these sources and with specific publishers for the most up-to-date guidelines: