Deep Dive into Learning Evaluation in the AI Era

Evaluating learning has never been straightforward. For adult educators, it has always been a fine balancing act between rigour, relevance, and real-world application. Today, with artificial intelligence (AI) rapidly transforming the way we teach and work, there is new potential to make evaluation faster, smarter, and more efficient.

But as international learning and evaluation expert Professor Emeritus Thomas Guskey of the University of Kentucky reminded participants at ALX 2025, IAL’s flagship adult learning conference, the most important questions remain unchanged: Is the evaluation meaningful? Is it fair? And is it truly fit for purpose?

In a keynote speech and masterclass packed with insight, Guskey challenged educators not just to embrace AI, but also to lead its use with discernment, purpose, and professional integrity.



The Promise of AI in Adult Learning Evaluation

AI tools today offer powerful support to educators. From automating tasks and analysing learner data to generating lesson plans, quizzes, and assessments, this digital assistance makes educators far more productive and efficient, freeing up their time for deeper, higher-value work such as mentoring, curriculum design, and intervention planning.

When it comes to learner evaluation, AI can be a game-changer. Platforms can automate grading, generate customised or even personalised learner surveys, and track performance metrics at scale. With the right prompts, AI can churn out full sets of evaluation tools within minutes.

For learners, AI-powered assessments can provide instant feedback tailored to individual needs, goals, and progress. For large-scale adult learning environments, such tools make evaluation more scalable and accessible than ever before.

But the Limitations Remain

Still, Professor Guskey was clear: at the end of the day, AI is a tool, not an educator. Its effectiveness depends entirely on the judgement of the humans using it.

To illustrate the limitations of AI tools, Professor Guskey conducted an experiment using five popular AI platforms. He prompted each one to generate learner surveys based on different levels of evaluation, and the results varied dramatically: some were useful and insightful, some were not.

This reveals something critical. For all its functions, AI lacks what matters most in education: context.

“Take one example,” Professor Guskey shared. “One AI gave me a great list of ten criteria to evaluate how well learners applied their learning. The list made sense on paper. But AI doesn’t know that no human educator can track ten things at once during a class session. At most, you can observe maybe three or four. You’d need to spread the rest out over several sessions to make it work.”

Beyond practicality, there is also the question of relevance. The criteria for evaluating learning depend not just on the task, but on the learner’s level, the learning objectives, and what stakeholders actually care about. AI cannot intuit these nuances. That is why educators must step in, not to override AI, but to guide it with professional judgement.

Human Leadership Matters

Leadership is crucial in ensuring that learning evaluation is relevant and meaningful. “Managers know how to do things right, but leaders know the right thing to do,” Professor Guskey reminded participants.

Good evaluation is not just a matter of technique. It is about understanding the context of learning and application, knowing who the stakeholders are and what evidence they trust, and aligning evaluation methods to specific learning goals. This calls for professionals who are not just implementers of tools, but interpreters of meaning. He emphasised, “You are going to have to make judgements about what AI produces for you. This requires a very different kind of leadership on your part.”

Leveraging AI with Purpose

AI may be disrupting many aspects of work, life, and learning. But the core principles of good evaluation remain timeless. Going back to the foundation of learning and assessment, Professor Guskey reiterated, “Always begin with the end in mind. What is the purpose of your evaluation? Why are we doing it? Who is the evaluation for? What kinds of decisions will be made based on the evaluation information?”

In a world of smart machines, it is easy to overestimate what technology can do while underestimating our own expertise. Professor Guskey’s message was both empowering and sobering: AI can enhance evaluation, but it cannot replace human insight. The responsibility for making evaluations meaningful, equitable, and fit for purpose still lies with the professional.