STAR Assessments for Mastery Learning
Update: Fall 2023 offering
We will offer this course in Fall 2023 in person, Mondays 1:30-3:00pm in 606 Soda. You may take it for credit even if you took it in Spring 2023.
| | If you are taking this course now (S23) | If you are not taking this course now (S23) |
| --- | --- | --- |
| Undergraduate standing in F23 | register for CS194-245 Fa23 (32728) | register for CS194-244 Fa23 (32408) |
| Graduate standing in F23 | register for CS294-245 Fa23 (32729) | register for CS294-244 Fa23 (32407) |
Spring 2023 projects & poster session (Thu 27 April, 10:00-11:30am, 5th-floor atrium, Soda Hall)
- CS61B graph algorithms exercises (Mohammad Shahnawaz, Shirley Chen, Michael Wu)
- CS61B "leetcode-like" exercises (Dhruv Kumar, Michael Khaykin)
- CS169: Scaffolding test-writing using Faded Parsons Problems (Nelson Lojo, Narisam Haynam, Logan Caraco)
- CS70 Vitamins for Stable Matching problem (Alberto Checcone)
- CS170 Graph Theory problems (Bowen Fan, Jaylem Brar)
- Practicing MongoDB queries (Rex Tabora, Brenden Inhelder, Cal State Long Beach, advised by Prof. Neal Terrell)
Undergraduates enroll in CS194-244 (CCN 33950); graduate students enroll in CS294-244 (CCN 33951).
In this special topics course, small teams (2-4) of graduate and undergraduate students will develop and rigorously evaluate rich, machine-gradable assessments that address learning goals arising in typical EECS courses. The assessments will promote mastery learning (aka proficiency learning) by following the acronym STAR:
- Specific to a learning goal in a particular domain or topic. Examples from EECS could include timing diagrams, graph labeling, connecting circuit elements, etc., but not generic-format multiple-choice/numeric-answer/fill-in-the-blank questions. Developing and evaluating novel question formats and types are the main goals.
- Tagged to specific learning outcomes, skills to be demonstrated (competencies), etc., within the context of a full or partial concept inventory for the course, so that evaluation can focus on whether the learning outcome is in fact reinforced by the assessment.
- Autogradable, with instant or near-instant automatic feedback.
- Randomized, so each exercise has a large or very large number of variants and can therefore be used for practice (formative assessment for mastery learning); a minimal sketch of the autograding and randomization mechanics appears below.
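To make the "Autogradable" and "Randomized" criteria concrete, here is a minimal sketch of what the server-side logic of such an exercise might look like in PrairieLearn (the assessment authoring system used for course projects; see Grading below). PrairieLearn questions define `generate` and `grade` hooks in a `server.py`; everything else here — the topological-ordering task, the input element name `order`, and the comma-separated answer format — is an illustrative assumption, and a real question would pair this with a `question.html` prompt and tag the question to concepts in its metadata.

```python
# server.py -- sketch of a randomized, autogradable PrairieLearn question.
# Task (hypothetical): students submit any valid topological order of a
# randomly generated DAG, so every practice attempt is a fresh variant.
import random

def generate(data):
    # Shuffle the node labels so the hidden valid order changes per variant.
    nodes = random.sample(["A", "B", "C", "D", "E"], k=5)
    # Add each "forward" edge with probability 1/2; the result is always a DAG.
    edges = [[u, v] for i, u in enumerate(nodes)
             for v in nodes[i + 1:] if random.random() < 0.5]
    data["params"]["nodes"] = nodes
    data["params"]["edges"] = edges

def grade(data):
    # Instant feedback: accept ANY valid topological order, not one fixed string.
    # "order" is a hypothetical input element name from the question.html.
    submitted = data["submitted_answers"].get("order", "")
    order = [s.strip() for s in submitted.split(",")]
    pos = {v: i for i, v in enumerate(order)}
    valid = (sorted(order) == sorted(data["params"]["nodes"]) and
             all(pos[u] < pos[v] for u, v in data["params"]["edges"]))
    data["score"] = 1.0 if valid else 0.0
```

Because the grader checks a property (any valid topological order) rather than comparing against one canonical answer string, the same question yields a very large number of practice variants — exactly the combination of the A and R criteria above.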
In addition to developing assessments, student teams will evaluate them by using the methods of HCI and education research to run either informal or formal pilot studies. We will encourage students to make any resulting artifacts available as open educational resources, and if appropriate, submit results for publication.
Format & Prereqs, How to Enroll
- Instructors: Armando Fox, Dan Garcia, possibly others (TBD), and guest speakers
- Location (Spring 2023): online only, Thursdays 10:00-11:30am; regular attendance and participation required; first meeting February 2
- Permission of instructors required to enroll, based on these conditions:
- Experience as a student, academic staff, or both in one or more EECS courses that could serve as the basis for you to think up and develop novel assessment types for those courses.
- Evidence of reasonable proficiency in Python: B+ or better in CS61A, or we might ask you to do a brief diagnostic interview
- Pre-proposal for a STAR exercise you could create based on your experience in a specific course. You’re not committing yourself to do this specific one, but it will serve as a starting point to make sure you have some good concrete ideas about what you might work on. The instructors will be available for informal consultation/discussion to help you prepare your proposal.
- Helpful but not required: basic knowledge of HTML & Bootstrap CSS (we will provide basic learning materials, and there will be a quiz to make sure you have enough background); some knowledge of JavaScript and its major libraries (jQuery, d3); and comfort working with basic data analysis tools (spreadsheets, R/Pandas, etc.)
- Meetings will be part lecture/didactic, but (especially after the first 2-3 weeks) primarily demos, team presentations, and discussion.
- Required readings from the literature; completion may be verified via periodic microquizzes.
- Grading: 3 units, letter graded; CS194 units count toward upper division EECS technical electives.
- Grading based on:
- Regular synchronous attendance, engagement (camera on), and participation in discussions. Contact instructors prior to a class meeting if it will be a problem for you to attend that meeting or meet these criteria generally. If you’re attending class with your camera off and haven’t discussed it beforehand with the instructors, you may be removed from the Zoom call.
- Project: In a small group, produce and evaluate a STAR assessment that works within the PrairieLearn assessment authoring system, tagged to concepts in a (partial) concept map for an appropriate course.
- Team presentations: Paper summaries, project proposal, project progress, user studies, final presentation
- A few “checkpoint” assignments to keep projects on track: mini-presentations/progress reports, GitHub/deployment checkpoints, etc.
- Microquizzes to cross-check that people are at least skimming the readings.
- No midterm or final. The final project presentation will be on the last day of classes (4/27), but everyone gets an automatic extension until the end of RRR week to finish final coding, make pull requests, etc., and to write a project report. The report is expected to be of high enough quality to be submittable as a short or full paper to a reputable workshop, conference, or journal.
Projects will be proposed in week 2 or 3; the instructors will provide feedback on each proposal to make sure it meets the general project criteria.
To enroll: please read the FAQ first to make sure the class is a good fit. At the bottom of that page is a link to fill out a short form to enroll.