CodeAid: Evaluating a Classroom Deployment of an LLM-based Programming Assistant that Balances Student and Educator Needs

Timely, personalized feedback is essential for students learning programming. LLM-powered tools like ChatGPT offer instant support, but reveal direct answers with code, which may hinder deep conceptual engagement. We developed CodeAid, an LLM-powered programming assistant that delivers helpful, technically correct responses without revealing code solutions. CodeAid answers conceptual questions, generates pseudo-code with line-by-line explanations, and annotates students' incorrect code with fix suggestions. We deployed CodeAid in a programming class of 700 students for a 12-week semester. We performed a thematic analysis of 8,000 usages of CodeAid, further enriched by weekly surveys and 22 student interviews. We then interviewed eight programming educators to gain further insights. Our findings reveal four design considerations for future educational AI assistants: D1) exploiting AI's unique benefits; D2) simplifying query formulation while promoting cognitive engagement; D3) avoiding direct responses while encouraging motivated learning; and D4) maintaining transparency and control for students to assess and steer AI responses.