Carnegie Mellon University


August 07, 2024

Tepper School Expert Guides Instructors on AI's Impact on Student Thinking

Integrating artificial intelligence (AI) tools into courses on business ethics and negotiation may significantly enhance students' analytical reasoning and argumentation skills. But these learning opportunities, particularly when students use large language models (LLMs) such as ChatGPT and Gemini, also carry risks. In a new article, researchers propose methods for measuring the impact of these models on students' critical thinking and analytical reasoning.

The article, by researchers at Carnegie Mellon University and the University of Denver, is forthcoming in a special issue of the Journal of Management Inquiry entitled, “AI and the Student-Centered Business School.”

“Our guidelines are designed to help educators in negotiation and business ethics develop a detailed framework for using large language models to boost students’ analytical abilities while reducing potential risks,” explained Derek Leben, associate teaching professor of business ethics at Carnegie Mellon’s Tepper School of Business, who co-authored the article.

LLMs, which can comprehend and generate language, provide targeted assistance to students through chatbots. In courses on business ethics, these models assist students in analyzing the strength of arguments and evidence. In negotiation courses, they help in preparing for negotiations, developing negotiation styles, and evaluating outcomes.

Among the risks of LLMs are bias, errors, and misuse that violates academic integrity standards. Overreliance is another risk: students may come to think less critically and independently. In the article, the authors recommend that:

  • The field should set common benchmarks for instructors to evaluate the effectiveness of these models in student learning. These benchmarks could assess student skills without AI help.
  • Instructors should also perform long-term studies to compare student performance with and without AI teaching methods.
  • These studies should consider ethical and security issues related to using LLMs in education. Furthermore, these models should be trained on diverse data sets and thoroughly tested for accuracy, efficiency, fairness, bias, and safety.

“These evaluations will help teachers determine whether large language models are being used to enhance skills without causing dependence or other significant risks,” said Lily Morse, assistant professor of management at the University of Denver’s Daniels School of Business, who co-authored the article.