The proliferation of generative AI tools has been followed by the emergence of numerous AI-detection tools. Trained on both human-written and AI-generated content, these tools claim to reliably distinguish between the two by measuring signals such as perplexity (how predictable a text is to a language model) and burstiness (how much sentence length and structure vary). In practice, however, AI detection tools have been plagued by frequent false positives and false negatives, and no current tool has demonstrated a reliable success rate.
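To make those two signals concrete, here is a minimal sketch of how a detector might score a passage, assuming the Hugging Face transformers library and the public GPT-2 model; the heuristics and sample text are illustrative only, not taken from any real detection product.

```python
# Minimal sketch of the perplexity and burstiness signals many detectors use.
# Assumes the Hugging Face `transformers` library and the public GPT-2 model;
# the heuristics below are illustrative, not from any real detector.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprising' the text is to the model; lower = more predictable."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the
        # average cross-entropy loss over the sequence.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    """Standard deviation of sentence length; human prose tends to vary more."""
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    return math.sqrt(sum((n - mean) ** 2 for n in lengths) / len(lengths))

sample = "The cat sat on the mat. It was a sunny day. The cat was very happy."
print(f"perplexity: {perplexity(sample):.1f}, burstiness: {burstiness(sample):.2f}")
# A hypothetical detector might flag text with low perplexity AND low
# burstiness as AI-generated -- exactly the kind of brittle heuristic
# that produces the false positives and negatives described above.
```

Both signals are easy to defeat (e.g., by lightly paraphrasing AI output), which is one reason detectors misfire so often.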
Use the resources below to find additional information:
Since the wide release of ChatGPT (initially powered by GPT-3.5) in November 2022, generative AI tools have continued to emerge and evolve at a rapid pace. New tools, many of them open-source and freely available, appear seemingly every day. Popular tools keep improving their capabilities and offering free and paid tiers of access, and their success has pushed large companies like Microsoft and Google into the generative AI field. As more users interact with generative AI tools, and as the amount of available training data grows, it can be difficult both to keep up with changes to existing tools and to assess the quality of new ones.
Use the resources below to find additional information:
Open-Source AI Platforms
A “prompt” is a user’s initial request or input into a generative AI tool to receive a desired result (ex. “What are research topics related to...”). Currently, the structure and wording of a prompt is a major factor in the overall quality of the tool’s output. Below are articles that offer practical tips and advice on how to craft prompts when using generative AI tools for specific tasks; a short illustrative sketch follows the notes below. Additionally:
Many AI researchers predict that, as tools continue to evolve, users will not need to focus as much on carefully structuring their prompts.
Using generative AI tools that have direct access to the live internet (ex. Bard, Bing Copilot) and/or more advanced models (ex. GPT-4 vs. GPT-3.5) will also affect output quality.
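As a concrete illustration of how prompt structure shapes output, the sketch below sends a vague prompt and a more structured version of the same request to a chat model via the OpenAI Python SDK; the model name and prompt wording are examples only, not recommendations.

```python
# Contrast a vague prompt with a structured one using the OpenAI Python SDK.
# The model name and prompt text are illustrative; any chat-completion API
# would behave similarly.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

vague_prompt = "Tell me about climate change."

structured_prompt = (
    "You are a research librarian. Suggest five narrow, researchable topics "
    "related to climate change, each suitable for a 10-page undergraduate "
    "paper, and list two scholarly search keywords for each topic."
)

for prompt in (vague_prompt, structured_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

The structured version assigns a role, constrains the scope, and specifies the output format, which typically yields a far more usable answer than the vague version.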
Use the following links for more information about prompt engineering, and guides for creating prompts for specific tasks:
In the context of generative AI tools, “hallucinations” refer to instances where a tool responds with seemingly accurate answers and information when in reality the output is false or fabricated. For example, users have noticed that citations or bibliographic entries produced by LLMs often include made-up authors, articles, and journals. In part, hallucinations occur because of how generative AI tools function: they predict what the most likely next word in a response should be rather than “thinking” through what the correct response actually is. While some tools are more prone to hallucinations than others, the problem should make all users pause before trusting generative AI outputs.
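The toy example below illustrates that mechanism; the vocabulary and probabilities are invented for demonstration, but real models make the same kind of purely statistical choice over tens of thousands of tokens at every step.

```python
# Toy illustration of why next-word prediction can hallucinate citations.
# The vocabulary and probabilities below are invented for this example.

# Suppose the model has generated: "According to Smith et al. (2019), ..."
# and must now continue the citation. It has no notion of whether the
# citation exists -- only which continuation is statistically likely.
next_token_probs = {
    "Journal": 0.41,  # plausible-sounding, common in the training data
    "Nature": 0.22,
    "the": 0.19,
    "[admit the paper may not exist]": 0.0,  # not really in the vocabulary
}

# Greedy decoding: always take the single most likely next token.
choice = max(next_token_probs, key=next_token_probs.get)
print(f"Model continues with: {choice!r}")
# The continuation reads fluently, but nothing in the process ever checked
# whether 'Smith et al. (2019)' is a real paper -- that is the hallucination.
```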
Use the resources below to find additional information:
Most generative AI tools require users to provide personal information, such as full names, phone numbers, and/or email addresses, in order to use them. Additionally, data, user prompts, and other content entered into generative AI tools can be added to their training datasets and used to further develop those tools. Consider the following when using generative AI tools:
Review all privacy policies – AI companies post and update privacy and data retention policies on their tool websites. Look through those policies, including any changes from previous iterations, before inputting potentially sensitive data. For tools that lack a sound privacy policy (or any policy at all), consider using a different tool.
Be cautious with what you share – Any sensitive or personal information you enter may be incorporated into generative AI datasets, so exercise caution when using proprietary data, controlled information, or student information (sharing the latter may constitute a FERPA violation).
Use the resources below to find additional information:
Generative AI tools require a significant amount of computational processing power to function, which is provided by high-performance servers housed in physical data centers located across the country. These centers require massive amounts of electricity to keep tools operational, as well as water to keep the servers cool. Many AI companies have not revealed just how much electricity and water are used by their tools, or how much will be needed in the future. As such, there are significant unanswered questions about the environmental costs of keeping generative AI tools functional.
Use the resources below to find additional information:
Because many generative AI tools can produce human-like responses and complete various repetitive tasks, there are concerns about how AI tools will impact the human labor market. For example, some writers and language translators have been let go by their employers and contractors and replaced by AI tools. Attention has also focused on the economic exploitation of some AI content moderators, the workers tasked with training and improving AI tools. While these are not the only labor-related concerns, continued attention is needed to ensure fair and equitable treatment of all workers potentially impacted by AI.
Use the resources below to find additional information:
To form their training datasets, many generative AI tools have scraped information from a wide range of sources on the Internet without notice to the sources’ original owners/creators. Some of this information is protected by copyright, which may expose generative AI companies to legal liability for their use of such material. Two of the most pressing questions on copyright and AI are:
Does the use of copyrighted material by generative AI companies constitute copyright infringement, or is it a legal form of fair use?
Will human-prompted outputs from generative AI tools be afforded copyright protection?
As of March 2023, the U.S. Copyright Office has answered the second question: no, under its current guidance, outputs generated by AI from human prompts are not eligible for copyright protection. However, this may change as more generative AI companies are subjected to legal scrutiny.
Use the resources below to find additional information:
Generative AI’s applications in research and writing have raised questions from scholars about how to cite such uses and whether journal publishers will allow AI assistance in submitted papers. The major concerns include generative AI presenting incorrect information as fact (i.e., hallucinations), data privacy, and dataset bias. While there is no universal standard for citing and/or disclosing generative AI use in published papers, many scholarly journals and publishers have created guidelines for scholars to follow when submitting works for publication.
Use the resources below to find additional information:
Generative AI and Publishing Guidelines
The following are examples of generative AI guidelines from select scholarly journals and publishers. These can (and will) evolve, so please check directly with these sources and with specific publishers for the most up-to-date guidelines: