What things do you look at when evaluating student work on open source? I had to do this recently for an RIT team working on Sugar Labs Activities, and here's what I used:

  • the product. As someone trying to be a user (rather than a developer), can I get their code and run it? Does it work for me?
  • the project. How are the wiki page and supporting documentation? Who wrote what? Does it pass the FAILmeter? What was the project plan - were milestones deliberately planned and hit (and adjusted over time), or was the work more improvisational? What kind of reflection on the project process has taken place?
  • design. Since Sugar is an education project, it matters to the culture of the Sugar Labs community that all their work be designed to support learning - what kind of pedagogical thought has gone into this project? (In other communities, pedagogical design might matter less than something else, such as UI innovation or cross-platform compatibility.)
  • developer docs. As someone who might pick up the code and try to extend it, can I quickly figure out how to do that - to go from running it to pushing up a patch?
  • the code. Who wrote which code when, and what did it do? (One quick way to gauge this is sketched just after this list.)
  • mailing list archives. How did the team coordinate?
  • meeting logs. (I had logs from lurking in-channel for the quarter, so I simply used those.) Who spoke? How did they interact? Did they bring in anyone from the broader community beyond their immediate team?
  • blog posts. How did they present their work to an external audience? How did they frame that conversation?
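
One quick gauge for the code-history question: tally commits per author. Here's a minimal Python sketch, assuming git is on your PATH and you have a local clone of the team's repository (repo_path below is a placeholder for wherever that clone lives):

    import subprocess
    from collections import Counter

    def commits_per_author(repo_path="."):
        """Count commits per author in a local git clone.

        Assumes `git` is installed; repo_path is a placeholder for
        wherever you cloned the students' repository.
        """
        log = subprocess.run(
            ["git", "-C", repo_path, "log", "--pretty=format:%an"],
            capture_output=True, text=True, check=True,
        ).stdout
        return Counter(log.splitlines())

    if __name__ == "__main__":
        # Print authors from most to least commits.
        for author, count in commits_per_author().most_common():
            print(f"{count:5d}  {author}")

From there, git log --stat --author=<name> gets at the "what did it do" half for whichever contributors look interesting.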

What am I missing? What's extraneous? What cues do you look for, and what quick tools/commands/pages do you use to get an overall gauge of where to dive in for more detail - what are your "should look at this more closely because it might be interesting" flags? (For instance, Special:Contributions makes it easy to look at one student's wiki work.)
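
On that front, here's a minimal Python sketch that pulls the same data as Special:Contributions through the MediaWiki API, so you can script it across a whole team. The api.php URL is my assumption about where the Sugar Labs wiki exposes its API - swap in your own wiki's endpoint - and the username at the bottom is a placeholder:

    import requests

    def wiki_contribs(user, api_url="https://wiki.sugarlabs.org/api.php",
                      limit=50):
        """Fetch a user's recent wiki edits via MediaWiki's usercontribs API.

        The api_url default is an assumption about the Sugar Labs wiki;
        point it at whatever MediaWiki install your students work on.
        """
        params = {
            "action": "query",
            "list": "usercontribs",
            "ucuser": user,
            "uclimit": limit,
            "format": "json",
        }
        resp = requests.get(api_url, params=params, timeout=10)
        resp.raise_for_status()
        return resp.json()["query"]["usercontribs"]

    if __name__ == "__main__":
        # "ExampleStudent" is a placeholder username.
        for edit in wiki_contribs("ExampleStudent"):
            print(edit["timestamp"], edit["title"])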

How do you grade open source work?