Grading Policy

A major focus of the course is that teams learn to follow a disciplined process for creating well-tested, maintainable code that is good enough to put into production and become a permanent improvement to the app.

There are several mechanisms teams and coaches will have available to measure this:

  • Data from specific tools (test coverage in CI, story lifecycles in Tracker, maintainability score in CodeClimate, Bluejay dashboard, etc.)
  • Notes/feedback from coach meetings
  • Team peer evaluations
  • Customer feedback

Here are the categories we're looking for, with examples of what constitutes excellent results vs. improvement required. An "A" project will have substantially all of its categories in or close to the Excellent column; an A– will have only a few areas that fall short of Excellent; and so on.

While the focus is on the team project, grades are of course given to individual students. We will use all of the sources of information above to assess how well each team member attempts to follow the practices taught. We recognize that a team's performance can be significantly affected (for better or worse) by individual team members, so not all students on the same team will necessarily receive the same final grade. The extent to which each team member contributes to the overall effort and follows the practices can substantially affect their final grade.

  • Excellent: On-time, consistent attendance at, and proactive engagement in, customer meetings and coach meetings.
    Improvement required: Sporadic attendance at such meetings; lack of contribution or participation; not focused or engaged during meetings.

  • Excellent: Customer meeting results are converted into specific, SMART iteration plans in a timely way.
    Improvement required: The relationship between the proceedings of the customer meeting and the work performed in the iteration is unclear.

  • Excellent: A disciplined process is used to prioritize and "point" stories before the start of each iteration.
    Improvement required: Stories are rarely prioritized or "pointed", and/or developer effort does not respect prioritization or a rational division of work.

  • Excellent: Branch-per-feature with frequent small commits and rebases, keeping branch lifetimes short and branches small.
    Improvement required: Frequent merge conflicts; unclear correspondence between branch structure and features/stories; large, unwieldy commits or PRs that cause merge problems.

  • Excellent: Consistent use of BDD and TDD to develop code: net coverage never decreases as a result of a commit, as reported by CI tools, and total coverage is >= 90%. (One way a team could automate such a coverage check is sketched after this table.)
    Improvement required: Code is frequently checked in without sufficient test coverage, or code that breaks regression tests is merged.

  • Excellent: Attention to code quality, demonstrated by rapidly tracking and fixing significant issues reported by CodeClimate/Hound.
    Improvement required: Code quality is low or decreasing over time; significant maintainability issues reported by CodeClimate are not addressed.

  • Excellent: Recent customer-facing improvements are always available for review in an app deployed to staging.
    Improvement required: The customer can rarely "try out" new features because the app is not deployed to staging, or is not working well enough in staging for the customer to use; the team relies on "screenshare demos" instead.

  • Excellent: Consistently positive reviews from the customer regarding the effectiveness of meetings and communication; the team demonstrates clear continuity in addressing items agreed on in previous meetings.
    Improvement required: The customer generally reports meetings as poorly organized or under-prepared, or there is not enough continuity to show that items agreed on in the last meeting have been addressed.

  • Excellent: Consistent retros identify areas for team improvement, and the team demonstrates that improvement in subsequent iterations.
    Improvement required: Retros are rarely done, or areas identified for improvement in retros do not lead to specific attempts to improve.

  • Excellent: Several PRs of high-quality code are merged to the "golden repo" and/or pushed to the production app.
    Improvement required: Little or no code is merged to the golden repo because it is not of high enough quality, so the team's work never makes it into production.
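
To make the coverage expectation concrete, here is a minimal sketch of a CI coverage gate, assuming your coverage tool can be made to emit a JSON report containing a top-level "covered_percent" number and that a small baseline file is kept in the repo. The paths, file format, and helper names below (COVERAGE_REPORT, BASELINE_FILE, read_percent) are illustrative assumptions, not course-provided tooling; adapt them to whatever SimpleCov, Coverage.py, or your CI service actually produces.

    #!/usr/bin/env python3
    """Illustrative coverage gate: fail CI if coverage drops or falls below 90%.

    Assumptions (adapt to your own tooling): the coverage tool writes a JSON
    report with a top-level "covered_percent" number, and the previous baseline
    is kept in a small JSON file committed to the repo.
    """
    import json
    import sys
    from pathlib import Path

    COVERAGE_REPORT = Path("coverage/last_run.json")   # hypothetical report path
    BASELINE_FILE = Path("coverage/baseline.json")     # hypothetical baseline path
    MINIMUM_PERCENT = 90.0

    def read_percent(path: Path) -> float:
        """Read a {"covered_percent": <number>} JSON file."""
        return float(json.loads(path.read_text())["covered_percent"])

    def main() -> int:
        current = read_percent(COVERAGE_REPORT)
        baseline = read_percent(BASELINE_FILE) if BASELINE_FILE.exists() else MINIMUM_PERCENT

        if current < MINIMUM_PERCENT:
            print(f"FAIL: coverage {current:.1f}% is below the {MINIMUM_PERCENT:.0f}% floor")
            return 1
        if current < baseline:
            print(f"FAIL: coverage dropped from {baseline:.1f}% to {current:.1f}%")
            return 1

        # Ratchet the baseline upward so later commits cannot quietly regress.
        BASELINE_FILE.write_text(json.dumps({"covered_percent": current}))
        print(f"OK: coverage {current:.1f}% (baseline {baseline:.1f}%)")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

A team might run a script like this as the final step of the CI test job, after the suite has written its coverage report, so that a PR that lowers coverage fails loudly instead of quietly.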

How we are checking for the above during each iteration

Category/expectation: Effective customer meetings: requirements gathering
Sanity checks:
  • Meeting appears in the class calendar; lo-fi mockups, storyboards, etc. are in the GitHub project documents/wiki.
  • Meeting notes record who attended each meeting.
  • Customer surveys: was the meeting timely and well organized/attended? Were all team members attentive, engaged, and professional in manner?

Category/expectation: Effective customer meetings: customer review/signoff
Sanity check: Customer surveys: did you get to try features in advance of the meeting? Were open issues from previous meeting(s) addressed?

Category/expectation: IPM/team discussion post-meeting
Sanity check: "High quality" GitHub issues that have been prioritized, "pointed", and assigned on a Kanban-style GitHub Projects board.

Category/expectation: Standups (online or in person)
Sanity check: StandupBot: each person should have some engagement at each standup.

Category/expectation: Code and test development
Sanity check: Everyone should be active during every iteration: opening feature branches, committing, opening PRs, and contributing to others' PRs (code review).

Category/expectation: Repo hygiene
Sanity check: Merges to the main (dev) trunk are green; overall test coverage (as shown by the badge) never goes down; the overall CodeClimate maintainability score (as shown by the badge) never goes down; there are no "hot spots" lacking coverage or maintainability.

Category/expectation: Other team communication
Sanity check: Slack.

Category/expectation: Retro (in class, every Friday)
Sanity check: Do team self-evals and peer evals reflect improvement over time?

Category/expectation: Observe iteration boundaries
Sanity checks:
  • We generally expect issues to be closed out before the following Monday's iteration starts. (A quick way to list still-open issues is sketched after this table.)
  • Did you finish your iteration tasks? If not, why not? Did it come up in retro? Was it a failure of planning? An unforeseen obstacle? What could you do in the next iteration to avoid the same problem?
  • Don't start new stories/features until in-progress ones are closed out.
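
For the iteration-boundary check, a quick self-audit is to list the GitHub issues still open in the current iteration. The sketch below is one illustrative way to do that with GitHub's REST API and the Python requests library; the owner, repo, and milestone values are placeholders, and you can get the same answer by filtering your GitHub Projects board in the web UI.

    #!/usr/bin/env python3
    """Illustrative check: which issues in the current iteration are still open?

    The repo name and milestone number below are placeholders; set GITHUB_TOKEN
    in the environment if the repository is private or you hit rate limits.
    """
    import os
    import requests

    OWNER = "your-org"         # placeholder
    REPO = "your-team-repo"    # placeholder
    MILESTONE = "3"            # milestone number for the current iteration (placeholder)

    def open_iteration_issues():
        headers = {"Accept": "application/vnd.github+json"}
        token = os.environ.get("GITHUB_TOKEN")
        if token:
            headers["Authorization"] = f"Bearer {token}"
        resp = requests.get(
            f"https://api.github.com/repos/{OWNER}/{REPO}/issues",
            params={"state": "open", "milestone": MILESTONE, "per_page": 100},
            headers=headers,
            timeout=10,
        )
        resp.raise_for_status()
        # Pull requests also appear in the issues API; keep only real issues.
        return [item for item in resp.json() if "pull_request" not in item]

    if __name__ == "__main__":
        leftovers = open_iteration_issues()
        if leftovers:
            print(f"{len(leftovers)} issue(s) still open in this iteration:")
            for issue in leftovers:
                print(f"  #{issue['number']}: {issue['title']}")
        else:
            print("All issues in this iteration are closed.")

Anything a check like this turns up at the end of the week is a good candidate for discussion in the retro: was it a failure of planning, an unforeseen obstacle, or scope creep?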