iGEM Judging Rubric
Judging is a complex task and can seem mysterious to iGEMers at times. We're aiming to help teams understand how they are evaluated and provide more information ahead of time. While the individual decisions judges make about teams must remain confidential until after the Jamboree, the systems they use do not.
The main mechanism through which iGEM teams are evaluated is called the rubric. The rubric is composed of three main sections:
- Medals Section
- Project Section
- Special Awards Section
Each section is called a category. Within each category there are 2 to 8 questions that we call aspects (shown below). Each aspect has 6 language choices that cover the range of how the evaluating judge should feel about the quality of the work. Unlike the aspects, these language choices will not be shown: we want iGEMers to know how they are being evaluated, but we don't want to "teach to the test" (https://en.wikipedia.org/wiki/Teaching_to_the_test). The language choices correspond roughly to:
- Amazing!
- Great
- Good
- Present
- Bad
- Absent
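
Purely as an illustration, these choices behave like a six-point scale. The mapping below is a hypothetical sketch in Python; iGEM has not published the numeric values (if any) attached to the language choices.

    # Hypothetical numeric mapping for the six language choices.
    # iGEM has not published any official values; these are
    # assumptions for illustration only.
    LANGUAGE_CHOICES = {
        "Amazing!": 6,
        "Great": 5,
        "Good": 4,
        "Present": 3,
        "Bad": 2,
        "Absent": 1,
    }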
Each section of the rubric has a separate function and correlates with different awards. The Medals section covers whether a team's work convinces the judges that it has achieved specific medal criteria. These criteria can be found on the Medals Page and won't be reiterated here.
The Project section is composed of two sub-sections: the main project category and the track-specific category. The main project category has eight aspects, while the track-specific category has only two. Combined, these ten aspects produce the scores that determine the track winners and the finalist teams. This category is arguably the most important part of the evaluation for an iGEM team.
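
As a rough illustration of how ten aspect scores might roll up into a single project score, here is a minimal Python sketch. It assumes each judge assigns one numeric value per aspect and that scores are averaged per judge and then across judges; iGEM's actual aggregation method is not published.

    # Hypothetical aggregation of Project-section scores.
    # Assumes a 1-6 numeric scale and simple averaging; the real
    # iGEM method is not public.
    from statistics import mean

    def project_score(judge_scores):
        """judge_scores: one list of ten aspect values per judge."""
        return mean(mean(aspects) for aspects in judge_scores)

    # Example: two judges, ten Project aspects each.
    print(project_score([[5, 4, 6, 5, 4, 5, 3, 4, 5, 4],
                         [4, 4, 5, 5, 3, 4, 4, 4, 5, 5]]))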
The final section of the judging rubric determines the special awards. Each award has its own category in the rubric with either four or five aspects. This part of the evaluation integrates with the Evaluated Pages system. To be eligible for an award, teams need to complete the corresponding page on the wiki and fill out a 150-word description on the judging form.
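
The eligibility rule is a simple conjunction, which the minimal sketch below makes explicit. The function name and parameters are hypothetical; the actual judging-form schema is not public.

    # Hypothetical eligibility check for a special award.
    # Field names are assumptions; only the rule itself comes from
    # the text above: a completed wiki page plus a description of
    # at most 150 words.
    def eligible_for_award(wiki_page_complete, description):
        words = description.split()
        return wiki_page_complete and 0 < len(words) <= 150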
This rubric is the result of more than four years of development, hundreds of hours of discussion, dozens and dozens of meetings, and thousands of emails between some of the most experienced advisers in iGEM. We are continuously improving and tweaking the rubric, but the system we have is extremely effective at selecting for the winning teams that best represent the values of iGEM.
No | Categories | Aspects |
1 | Project | How impressive is this project? |
2 | Project | How creative is the team's project? |
3 | Project | Did the project work? |
4 | Project | How much did the team accomplish? |
5 | Project | Is the project likely to have an impact? |
6 | Project | How well are engineering principles used? |
7 | Project | How thoughtful and thorough was the team's consideration of human practices? |
8 | Project | How much of the work did the team do themselves and how much was done by others? |
9 | Track Specific - Standard Tracks | Did the team design a project based on synthetic biology and standard parts? |
10 | Track Specific - Standard Tracks | Are the parts' functions and behaviors well documented in the Registry? |
9 | Track Specific - Special Tracks | Did the team design a project based on synthetic biology? |
10 | Track Specific - Special Tracks | Are the project components (hardware, software, art & design, etc) thoroughly documented on their wiki? |
Special Prizes
1 | Wiki | Do I understand what the team accomplished? |
2 | Wiki | Is the wiki attractive and easy to navigate? |
3 | Wiki | Does the team provide convincing evidence to support their conclusions? |
4 | Wiki | How well does the team describe what they did and what was done by others? |
5 | Wiki | Will the wiki be a compelling record of the team's project for future teams? |
1 | Presentation | Did the presentation flow well? |
2 | Presentation | How professional is the graphic design in terms of layout and composition? |
3 | Presentation | Did you find the presentation engaging? |
4 | Presentation | How competent were the team members at answering questions? |
1 | Poster | Did the poster flow well? |
2 | Poster | How professional is the graphic design in terms of layout and composition? |
3 | Poster | Did you find the poster appealing? |
4 | Poster | How competent were the team members at answering questions? |
1 | Integrated Human Practices | Was their work integrated into their project? |
2 | Integrated Human Practices | Does it serve as an inspiring example to others? |
3 | Integrated Human Practices | Is it documented in a way that others can build upon? |
4 | Integrated Human Practices | Was it thoughtfully implemented? (Did they explain the context, rationale, and prior work?) |
1 | Education & Public Engagement | Did their work establish a dialogue? |
2 | Education & Public Engagement | Does it serve as an inspiring example to others? |
3 | Education & Public Engagement | Is it documented in a way that others can build upon? |
4 | Education & Public Engagement | Was it thoughtfully implemented? (Did they explain the context, rationale, and prior work?) |
1 | Model | How impressive is the mathematical modeling? |
2 | Model | Did the model help the team understand their part or device? |
3 | Model | Did the team use measurements of the device to develop the model? |
4 | Model | Does the modeling approach provide a good example for others? |
1 | Measurement | Is the measurement potentially repeatable? |
2 | Measurement | Is the protocol well described? |
3 | Measurement | Are there web-based support materials? |
4 | Measurement | Is it useful to other projects? |
5 | Measurement | Was a standard reference sample included? |
1 | Entrepreneurship | Customer Discovery - Has the team interviewed a representative number of potential customers for the technology and clearly communicated what they learned? |
2 | Entrepreneurship | Based on their interviews, does the team have a clear hypothesis describing their customers' needs? |
3 | Entrepreneurship | Does the team present a convincing case that their product meets the customers' needs? |
4 | Entrepreneurship | Has the team demonstrated a minimum viable product (MVP)? Does the team have customers committed (LOI, etc.) to purchasing or using it? |
5 | Entrepreneurship | Does the team have a viable and understood business model/value proposition to take their company to market? |
1 | Applied Design | How well did the project address potential applications and implications of synthetic biology? |
2 | Applied Design | How creative, original, and compelling was the project? |
3 | Applied Design | How impressive was the project installation in the art & design exhibition space? |
4 | Applied Design | How well did the team engage in collaboration with people outside of their primary fields? |
1 | Software Tool | How well does the software use and support existing synthetic biology standards and platforms? |
2 | Software Tool | Was this software validated by experimental work? |
3 | Software Tool | Did the team use non-trivial algorithms or designs? |
4 | Software Tool | How easily can others embed this software in new workflows? |
5 | Software Tool | How user-friendly is the software? |
1 | Hardware | Does the hardware address a need or problem in synthetic biology? |
2 | Hardware | Did the team conduct user testing and learn from user feedback? |
3 | Hardware | Did the team demonstrate utility and functionality in their hardware proof of concept? |
4 | Hardware | Is the documentation of the hardware system sufficient to enable reproduction by other teams? |
1 | Plant Synthetic Biology | How impressive was the use of a plant chassis? |
2 | Plant Synthetic Biology | How impressive was the collection of parts made for the plant chassis? |
3 | Plant Synthetic Biology | How well did the team use the special attributes of the plant chassis? |
4 | Plant Synthetic Biology | Are the parts/tools/protocols for plants made during this project useful to other teams? |
1 | New Basic Part | How does the documentation compare to BBa_K863006 and BBa_K863001? |
2 | New Basic Part | How new/innovative is it? |
3 | New Basic Part | Did the team show the part works as expected? |
4 | New Basic Part | Is it useful to the community? |
1 | New Composite Part | How does the documentation compare to BBa_K404122 and BBa_K863005? |
2 | New Composite Part | How new/innovative is it? |
3 | New Composite Part | Did the team show the part works as expected? |
4 | New Composite Part | Is it useful to the community? |
1 | Part Collection | Is this collection a coherent group of parts meant to be used as a collection, or just a list of all the parts the team made? |
2 | Part Collection | How does the documentation compare to BBa_K747000 and BBa_K525710? |
3 | Part Collection | Did the team submit an internally complete collection, allowing it to be used without any further manipulation or parts from outside the Registry? |
4 | Part Collection | Did the team finish building a functional system using this collection? |
5 | Part Collection | Did the team create excellent documentation to allow future use of this collection? |