A brief post for upcoming presentations.
There are several concerns regarding the use of AI in education, particularly related to data privacy, potential for bias, and the need for careful implementation.
Data Privacy and Transparency:
The collection of student data, including personal and biological information, raises concerns about privacy and security. For example, wearable technologies such as headbands that monitor students’ concentration levels, or facial recognition systems that track attendance, may expose students’ personal data to unauthorized parties if that data is not properly stored and protected.
This becomes particularly difficult when faculty want to “catch” students using AI. I have always maintained that the only intended use of work submitted by students is for the instructor to read and comment on it. Using it for any other purpose, including submitting it to an AI tool to be checked for plagiarism, should not be done without the student’s approval.
Transparency in how AI systems use data is also a concern. It’s important for educators, parents, and students to understand how AI systems work and how decisions are made based on the collected data.
Potential for Bias:
AI systems, including grading systems, can inherit human bias because they are trained on data that may reflect existing societal biases. AI-powered grading systems, in particular, learn their grading criteria from training data; if that data is biased, the system may perpetuate those biases and disadvantage certain groups of students.
Teachers should be aware of potential flaws in AI grading systems and should not implement them without customizing the grading criteria with their own data.
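To make the bias concern concrete, here is a minimal, purely illustrative sketch (in Python) of one way an instructor might audit an AI grading tool before adopting it: compare the tool’s scores against the instructor’s own scores on the same essays, broken down by student group. The data, group labels, and function name are hypothetical; the point is only that a persistent gap for one group is a warning sign, not that this is how any particular product works.

```python
# Illustrative sketch only: auditing an AI grading tool for group-level score
# disparities before trusting it in the classroom. All data here is hypothetical.

from statistics import mean

# Hypothetical records: the score the AI assigned and the score the instructor
# assigned to the same essay, plus a group label used only for the audit.
records = [
    {"group": "A", "ai_score": 4.0, "instructor_score": 4.5},
    {"group": "A", "ai_score": 3.5, "instructor_score": 4.0},
    {"group": "B", "ai_score": 4.5, "instructor_score": 4.5},
    {"group": "B", "ai_score": 4.0, "instructor_score": 4.0},
]

def score_gap_by_group(records):
    """Average (AI - instructor) score difference per group.

    A consistently negative gap for one group suggests the AI tool is
    under-scoring that group relative to the instructor's own judgment.
    """
    gaps = {}
    for r in records:
        gaps.setdefault(r["group"], []).append(r["ai_score"] - r["instructor_score"])
    return {group: mean(diffs) for group, diffs in gaps.items()}

if __name__ == "__main__":
    for group, gap in score_gap_by_group(records).items():
        print(f"group {group}: mean AI-minus-instructor gap = {gap:+.2f}")
```

Even a small hand-scored sample like this gives an instructor a concrete basis for deciding whether the tool’s criteria need to be customized with their own data.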
Equity of AI Decision-Making:
It is important to ensure that AI systems do not exacerbate existing inequalities in education. Careful consideration is needed to ensure that AI applications are designed and implemented in a way that promotes equitable outcomes for all students.
Flaws in AI Grading Systems:
While AI grading systems can relieve teachers of grading labor and reduce subjective bias, they can be problematic if not properly trained and implemented. One example of a flawed system is the automated essay scoring used for the Graduate Record Examinations (GRE), which was susceptible to the human biases present in its training data.
Impact on the Role of Teachers:
The integration of AI in education redefines the role of instructors, who become learning coaches for students rather than the sole source of knowledge.
Teachers need to develop AI literacy and AI thinking to use AI effectively and ethically in their classrooms. It is important to address these concerns to ensure that AI is used to promote equity, enhance learning, and protect student privacy.