
Webinar Spotlights Ethics and Equity in AI-Supported Learning

Experts share how students and faculty are using AI in classrooms, with guidance on ethical use, privacy risks, and preparing for real-world practice.

The College of Professional Studies hosted a webinar titled “Trust and Fairness in AI-Supported Learning” as part of its ongoing series, AI Community Conversations.

The inspiration for the session came directly from feedback received by the hosts—April Demers, Ph.D., program chair for the M.S. Biological Sciences program, and Cody Dickson, Ph.D., director of clinical training for the Counselor Education program—and other members of The Chicago School community about a lack of clarity surrounding the use of AI in the classroom. Faculty and students alike have questioned what constitutes acceptable use. This webinar represented an effort to identify and begin to address those questions.

“The goal is to move toward shared language, shared principles, and shared confidence around AI-supported learning,” Dr. Demers says.

Initial concerns surrounding the use of AI tools in the classroom largely focused on worries that students would submit work generated by AI as their own. As the technology has evolved, two realities have emerged. First, AI has become permanently entrenched in classrooms and workplaces. Second, as AI tools become more advanced, the ethical questions surrounding their use become more complicated.

An informal poll of the webinar’s attendees revealed that all but one had incorporated AI into their lesson plans and that this instruction often focused on the benefits and risks of large language model tools. In the case of graduate professional programs, students are using AI in their practicums, and their experiences help clarify where guidelines will need to be established in the workforce.

As part of the seminar, the presenters provided a hypothetical example of possible pitfalls: A student in a clinical psychology program writes up a case study of a patient, removes details that might reveal who the patient is (a process called “de-identifying”), loads the information into an AI tool to develop a 10-week treatment plan, then submits the assignment without revealing that they have supplemented their work with AI.

During the discussion that followed, the moderators and attendees focused on two concerns with the student’s approach: first, that they did not inform the instructor, and second, that a patient’s privacy may have been compromised. These concerns led to discussion regarding how essential it is for those integrating AI technology to communicate openly about how they use it.

The webinar moderators identified five core ethical concerns in the use of AI in classroom instruction.

  1. Academic integrity and authorship. Since the introduction of AI tools to the public, this concern has been paramount, though current trends in addressing the issue discourage the assumption that students are simply trying to get away with cheating.
  2. Equity and access: differential familiarity and tool access. Some students who apply themselves to mastering AI applications may outperform those with a superior grasp of the subject matter. In addition, most companies that sell AI applications offer free versions alongside subscription tiers that can be expensive, giving an advantage to students who can afford them.
  3. Data privacy and confidentiality (especially with clinical materials). As the hypothetical example discussed during the webinar suggests, health care professionals and lawyers are going to be using AI in their practices, and protecting the identities of patients and clients will be an ongoing focus of ethical use and efforts to limit liability.
  4. Bias and cultural representation in AI outputs. Large language models are trained on the sum of publicly available information; therefore, their outputs reproduce the biases present in that data.
  5. Overreliance on AI versus development of professional judgment. Ethical considerations aside, this trade-off will be an ongoing challenge as we all gain ever-increasing access to tools that can perform tasks ranging from writing emails to completing expansive coding projects.

The key takeaway from the webinar is that there is no one “right” AI policy, but that transparency is more important than strictness. “Prohibition doesn’t automatically create ethical clarity just as permissiveness alone doesn’t equal innovation,” Dr. Demers explains. Clearly communicated expectations will help us understand the boundaries and the purpose behind them. “When communication’s clear,” she says, “it supports both learning and academic integrity.”

