Team:Waterloo/Integrated Practices/Networks

Introduction:

iGEM (International Genetically Engineered Machine) is an annual academic team competition that hosts its ‘Jamboree’ in Boston, Massachusetts. 2015 was the thirteenth year of the competition, drawing over 2,700 participants on 259 teams from 40 countries. Of these 259 teams, 227 earned competition medals: 55 bronze, 57 silver, and 115 gold. On top of this, competition judges gave out 216 special awards and award nominations, honouring both exemplary projects in the 15 project tracks and particularly excellent presentation components, including best poster, best software, and best integrated human practices.

This codebook serves as a guideline and reference for the dataset used in a network analysis of the 2015 iGEM collaborations. A thorough explanation is given of all definitions and methods needed to contextualize the data.

Download it here

Definitions:

The University of Waterloo iGEM Policy and Practices team studied the collaboration network in the 2015 data and interpreted how each team’s collaborations affected its finishing result. In this codebook, the Waterloo iGEM team gives clear definitions of what a significant collaboration is, what a registered iGEM team is, and what a team wiki is.

Significant: In iGEM, the idea of significance plays a vital role in determining the validity of contributions and whether medal criteria are met. A contribution is significant if it betters a project or a team, ultimately contributing to its advancement. That said, significance is also largely subjective: what one judge sees as beneficial, another may see as too passive an effort to qualify.

Medal Criteria: The 2016 iGEM judging handbook explains the reasoning behind the medal criteria and how the criteria vary with factors such as member age and track. It states that there are different medal requirements for Standard Tracks (High School teams included) and Special Tracks. Unlike conventional competitions in which teams compete directly against one another, iGEM teams ultimately compete against themselves as they strive to meet criteria, with different criteria translating to different medals. While many medal criteria can be assessed by following static wiki page links found in the judging forms, it is also each team’s responsibility to convince the judges that the criteria have been met.

All iGEM teams are required to deliver a team wiki, poster, presentation, project attribution, registry part pages, sample submissions, safety forms, and a judging form. Additionally, teams must follow all Laboratory Safety Rules, adhere to the Responsible Conduct Committee’s Conduct Policy, be intellectually honest, treat everyone with respect throughout the competition, and contribute positively to society.

Registered Team: iGEM teams range greatly in size, origin, and member age. They consist primarily of undergraduate students from accredited postsecondary institutions, but teams can also be composed of high school students or lab members from the greater community. iGEM HQ recommends that teams be gender-balanced and have between 8 and 15 members, and it compares iGEM teams to sports teams in that all members have different roles and responsibilities. The three team sections are Undergraduate, Overgraduate, and High School.

Team Wiki: Wiki pages provide teams with a space to communicate their project to others and make a case for how they have met the medal criteria. The general requirements of a wiki are a project overview, project design, project results, and a medal criteria checklist. All iGEM teams are given a namespace on the iGEM wiki under their team name, and teams may create and edit pages only within their respective team namespace. iGEM HQ refers to wikis as the “public-facing representation” of a team’s project, stating that wikis should be understandable both by non-scientists and by judges who are unfamiliar with the project before the judging phase.

Methods:

The full method by which the data spreadsheet was assembled is explicitly described below. Each column within the collaboration network data spreadsheet was gathered from multiple locations on iGEM’s publicly available website for the 2015 competition.

First, the 2015 iGEM teams were distributed evenly among all members of the “Helping Practices of Science and Engineering” Policy and Practices subteam. Next, https://2015.igem.org/Results was visited, which provided some of the information needed to populate our spreadsheet. This webpage lists every iGEM team’s proper name, along with its medal colour and any “special awards” won. Special awards were distributed across a variety of categories, from “Best Poster” to “Best Integrated Human Practices”. In the spreadsheet, the “Donor” column was filled with the team’s name, “(Donor) Medal Colour” with the medal earned, “Special Award(s)? (Yes/No)” with a binary Yes or No response, and “Name of Special Awards” with the specific award(s) won, if any; a small illustrative sketch of this mapping follows.
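As a purely illustrative sketch of how one Results-page entry maps onto these four columns (the helper name results_row and the example team and award are hypothetical, not real 2015 data):

    # Hypothetical helper mapping one Results-page entry onto the four
    # columns described above; the example values below are placeholders.
    def results_row(team_name, medal_colour, special_awards):
        return {
            "Donor": team_name,
            "(Donor) Medal Colour": medal_colour,
            "Special Award(s)? (Yes/No)": "Yes" if special_awards else "No",
            "Name of Special Awards": "; ".join(special_awards),
        }

    print(results_row("ExampleTeam", "Gold", ["Best Poster"]))
    # -> {'Donor': 'ExampleTeam', '(Donor) Medal Colour': 'Gold',
    #     'Special Award(s)? (Yes/No)': 'Yes', 'Name of Special Awards': 'Best Poster'}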

Two links were then consulted from the Results webpage: the official team profile and the team’s wiki. The official team profile provided basic information such as the proper registered name of the team, the country and continent it was from, and the team size, including any auxiliary team members. From this page, the “(Donor) Country”, “(Donor) Continent”, “(Donor) Team size - Student Members”, and “(Donor) Team size - Auxiliary Members” columns could be recorded. Team profiles list members under multiple headings depending on how they contributed to their respective iGEM team. It was decided that the “Student Members” heading would correspond to our “(Donor) Team size - Student Members” column, and that all other headings, such as “Advisors”, “Primary PI”, “Secondary PI”, and “Instructors”, would fall under our “(Donor) Team size - Auxiliary Members” column, as in the sketch below.
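As a minimal sketch of the team-size mapping just described (the helper name team_sizes and the example counts are hypothetical; the heading names come from the profile pages):

    # Hypothetical helper splitting profile-heading counts into the two
    # team-size columns, using the mapping described above.
    AUXILIARY_HEADINGS = {"Advisors", "Primary PI", "Secondary PI", "Instructors"}

    def team_sizes(counts_by_heading):
        """Return (student_members, auxiliary_members) from heading counts."""
        students = counts_by_heading.get("Student Members", 0)
        auxiliary = sum(
            count for heading, count in counts_by_heading.items()
            if heading in AUXILIARY_HEADINGS
        )
        return students, auxiliary

    print(team_sizes({"Student Members": 12, "Advisors": 2, "Primary PI": 1}))
    # -> (12, 3)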

The next link followed was the team’s wiki page, which gave us the team’s collaborations and identified the recipient teams. All teams’ wikis were formatted differently, but most followed a familiar pattern: among the top headings of the home page, a Collaboration heading could be found on its own or under the Human Practices heading. Once on this page, we made a judgement on whether a collaboration was valid to record. Our method for determining whether a collaboration was valid is given below, with example edge cases to explain it further.

To summarize, there are 9 columns in our spreadsheet, listed here with shortened headings (the text above uses the full headings with the “(Donor)” prefix); a minimal loading sketch follows the list:

  • Donor
  • Country
  • Continent
  • Medal Colour
  • Team Size
  • Auxiliary Team Members
  • Recipient Team
  • Special Awards (Yes or no?)
  • Special Awards Awarded
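As a minimal sketch of loading and sanity-checking this spreadsheet (the filename collaborations_2015.csv is hypothetical, and the short column names above are assumed as the CSV header):

    import pandas as pd

    # Short column names from the summary list above, assumed as the CSV header.
    COLUMNS = [
        "Donor", "Country", "Continent", "Medal Colour",
        "Team Size", "Auxiliary Team Members", "Recipient Team",
        "Special Awards (Yes or no?)", "Special Awards Awarded",
    ]

    df = pd.read_csv("collaborations_2015.csv")  # hypothetical export name

    # Verify every expected column is present before any analysis.
    missing = [c for c in COLUMNS if c not in df.columns]
    if missing:
        raise ValueError(f"Spreadsheet is missing columns: {missing}")

    # Each row records one donor-recipient pair, so a donor with several
    # collaborations appears on several rows.
    print(df.head())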

What is a valid collaboration?

“Convince the judges you have helped any registered iGEM team from high school, a different track, another university, or another institution in a significant way by, for example, mentoring a new team, characterizing a part, debugging a construct, modeling/simulating their system or helping validate a software/hardware solution to a synbio problem.”
- iGEM HQ, Medal criteria

In most cases, teams reported collaborations similar to the examples outlined in the medal criteria. However, we found instances in which the “significance” of a collaboration was ambiguous. The following edge cases represent the classes of collaboration with such ambiguity.

Edge Cases:

  1. NEGEM
  New England iGEM (NEGEM) is a Team Meetup hosted by BostonU each year. At this meetup, iGEM teams share ideas, help each other, and socialize. Attendance alone, however, was not counted as a collaboration: although NEGEM may have been the starting point for a collaboration, the event itself did not allow enough time or resources for teams to significantly help each other. Collaborations that arose as a result of the meetup, where teams went on to significantly assist other teams with their projects, were counted.
  2. Ted Harrison Middle School
  Ted Harrison Middle School started an iGEM project in the 2015 iGEM season and collaborated with the University of Calgary as well as the University of Lethbridge. Even though it received aid (first-year mentorship, wet lab training) that would otherwise qualify as significant, its collaborations were not counted because it was not a registered iGEM team: it does not appear on the Team List for 2015 or any other year (https://igem.org/Team_List?year=2015). A minimal sketch of this registration check follows the list.
  3. Sheffield 2014 & Chicago 2013
  OLS Canmore, a high school in Alberta, collaborated with Team Sheffield and Team UChicago. These collaborations were an edge case because it was the 2014 Sheffield team and the 2013 UChicago team that helped the 2015 OLS Canmore team. When asked for clarification, iGEM HQ referred the Waterloo Policy and Practices team to Kim de Mora, Ph.D., Director of Development at the iGEM Foundation and coordinator of the judging program, who confirmed that as long as a collaboration is properly documented and proven valid, it counts.
  4. Macquarie_Australia
  The iGEM teams representing the University of Sydney, Linköping University, Birkbeck University, and Oxford University directly took part in an episode of the world’s first synthetic biology themed game show, “So You Think You Can Synthesise”. Given this unique and creative circumstance, it was difficult to judge whether this qualified as a human practices collaboration. By our definition of a collaboration, however, it falls into the same category as surveys: one team creates a material or activity, and the other iGEM teams merely participate. Since there was no reciprocal collaboration in which a knowledge gap was bridged between two or more teams, it was not classified as a valid collaboration.
  5. Korea_U_Seoul
  Korea University created a large survey that many teams and members of the general public eventually filled out to gather data. The questions ranged from whether respondents were aware of the concept of synthetic biology to questions about open science. Although other teams did complete the survey, providing information to help Korea University with further research, there was no exchange of ideas in lab work, mathematical modelling, or policy and practices work that helped both teams grow and learn, so these responses were not counted as collaborations.
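As referenced in the Ted Harrison Middle School case, here is a minimal sketch of the registration check (the set contents and the function name is_recordable are hypothetical; in practice the full 2015 Team List page was consulted):

    # Hypothetical subset of the official 2015 Team List
    # (https://igem.org/Team_List?year=2015); the real check used the full list.
    REGISTERED_2015 = {"Waterloo", "Calgary", "Lethbridge"}

    def is_recordable(recipient_team):
        """Record a collaboration only if the recipient was a registered team."""
        return recipient_team in REGISTERED_2015

    # Ted Harrison Middle School appears on no Team List, so its
    # collaborations with Calgary and Lethbridge were excluded.
    print(is_recordable("Ted Harrison Middle School"))  # -> False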

Simple Statistics
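As a minimal sketch of the simple statistics this dataset supports, assuming the hypothetical collaborations_2015.csv export and the short column names from the summary list (pandas and networkx are our illustrative choices here, not tools named elsewhere in this codebook):

    import networkx as nx
    import pandas as pd

    df = pd.read_csv("collaborations_2015.csv")  # hypothetical filename

    # Directed graph: an edge runs from each donor team to each recipient
    # team it significantly helped (one spreadsheet row per edge; rows with
    # no recipient are dropped).
    G = nx.from_pandas_edgelist(
        df.dropna(subset=["Recipient Team"]),
        source="Donor",
        target="Recipient Team",
        create_using=nx.DiGraph,
    )

    print(f"{G.number_of_nodes()} teams, {G.number_of_edges()} collaborations")

    # Mean number of collaborations given, grouped by the donor's medal colour.
    out_degree = pd.Series(dict(G.out_degree()), name="collaborations_given")
    medal_by_team = df.drop_duplicates("Donor").set_index("Donor")["Medal Colour"]
    print(out_degree.groupby(medal_by_team).mean())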
