Announcing the Graduate Research Competition 2025

We are thrilled to announce the Graduate Research Competition 2025, a unique opportunity to showcase your innovative research ideas in real-world or work-based learning. This competition, hosted by ImBlaze and the UMN CEHD Learning Informatics Lab, offers the chance to win prizes, receive mentorship, and gain recognition while conducting important education-focused research.

What is the ImBlaze x UMN CEHD Learning Informatics Lab Research Competition?

This tiered competition offers the opportunity to explore a research question and design a study using data collected by ImBlaze. Compete for prize money, mentorship, and the chance to make a significant impact using real-world data!

What is ImBlaze?

ImBlaze is a technology platform that enables schools to manage, collect, and analyze students' out-of-school learning experiences. With over 1.7 million hours of student attendance logged, it tracks key metrics such as business professions, hours logged, and student reflections. Dynamic questions allow educators and researchers to customize data collection, providing real-time insights into student experiences and mentor relationships.

Key Dates:

  • Proposal Deadline: April 25, 2025
  • Semifinalists Announced: June 1, 2025
  • Research Period: Summer-Fall 2025
  • Winner Announced: December 15, 2025

For more information, including competition entry instructions, please click here.

Challenge your mind. Impact the world. Join the ImBlaze Graduate Research Competition today!

Eligibility: This competition is open to graduate students affiliated with the University of Minnesota. Students can apply as individuals or in teams. If a team wins, the prize money will be shared among the team members. If you are not currently a member of the Learning Informatics Lab, please reach out to Heeryung Choi (heeryung@umn.edu) for more information.

Cognitive Assessment of Large Language Models

On Monday, March 31, 2025, the Learning Informatics Lab hosted Karin de Langis from the University of Minnesota. In this talk, she discussed the current research landscape around cognition in Large Language Models (LLMs), as well as the methodological challenges involved. She also discussed her work with Minnesota NLP and highlighted several of their recent findings on LLM performance on tasks across multiple domains, including memory, executive function, and narrative comprehension.

SmartPal: Augmenting Learning Management Systems with LLM Chatbots and Gamification with Dr. De Liu

On Friday, February 14, 2025, the Learning Informatics Lab hosted Dr. De Liu from the University of Minnesota. In this talk, Dr. Liu discussed his approach to enhancing learning engagement and performance through SmartPal, a digital learning assistant that integrates with Canvas. He also discussed the design of SmartPal, shared findings from a randomized field experiment on the effects of integrating an AI chatbot and gamification, and highlighted opportunities for research enabled by the SmartPal platform.

Leveraging Social Theories to Enhance Human-AI Interaction with Dr. Harmanpreet Kaur

On Friday, December 6, 2024, the Learning Informatics Lab hosted Dr. Harmanpreet Kaur from the University of Minnesota. In this talk, Dr. Kaur discussed her research on explainable AI and why it does not work in practice. She also shared design ideas—both completed and current work—to help people with varying expertise understand AI outputs.

Fall ’24 Colloquium: Leveraging Social Theories to Enhance Human-AI Interaction

Date: Friday, December 6
Time: 4:00 – 5:00 PM (Central Time)
Location: Education Sciences Building, Room 325

Featured Speaker: Dr. Harmanpreet Kaur (She/Her)

Assistant Professor, Department of Computer Science & Engineering

Dr. Harmanpreet Kaur is a leading researcher in human-centered artificial intelligence (AI), focusing on explainability, interpretability, and hybrid intelligence systems.

Talk Overview

As human-AI partnerships become more prevalent, their effectiveness hinges on addressing critical challenges like dynamic human needs and AI’s opaque reasoning. Many current systems fail to explain AI decision-making clearly, often perpetuating biases and overlooking nuanced edge cases.

Dr. Kaur will explore why explainable AI often falls short in practical applications and discuss innovative designs—both completed and ongoing—that aim to empower users with varying levels of expertise to better understand and interact with AI outputs.