Networks

Introduction:

iGEM (International Genetically Engineered Machine) is an annual academic team competition that hosts its ‘Jamboree’ in Boston, Massachusetts. 2015 was the thirteenth year of competition, drawing over 2,700 participants on 259 teams from 40 countries. Of these 259 teams, 227 earned competition medals: 55 bronze, 57 silver, and 115 gold. In addition, competition judges granted 216 special awards and award nominations, honouring both exemplary projects in the 15 project tracks and particularly excellent presentation components, including best poster, best software, and best integrated human practices.

This codebook serves as a guideline and reference for the dataset used in a network analysis of the 2015 iGEM collaborations. A thorough explanation is given of all definitions and methods needed to contextualize the data.

Download the 2015 iGEM Collaboration Dataset here!

Definitions:

The University of Waterloo iGEM Policy and Practices team researched the network of collaborations in the 2015 competition data and interpreted how each team’s collaborations affected its finishing result. In this codebook, the Waterloo iGEM team gives clear definitions of what a significant collaboration is, what a registered iGEM team is, and what a team wiki is.

Significant: In iGEM, the idea of significance plays a vital role in determining the validity of contributions and how medal criteria are met. A contribution is significant if it betters a project or a team, ultimately contributing towards its advancement. That being said, significance is also largely subjective: what one judge sees as beneficial, another may see as too passive an effort to qualify.

Medal Criteria: The 2016 iGEM judging handbook explains the reasoning behind medal criteria and the variations of criteria depending on factors such as member age and track. It states that there are different medal requirements for Standard Tracks (including High School teams) and Special Tracks. Rather than the standard competitive motivation of teams competing against each other, iGEM teams are ultimately competing against themselves as they strive to meet criteria, with different criteria translating to the attainment of different medals. While many medal criteria can be assessed by following static wiki page links found in the judging forms, it is also the team’s responsibility to convince judges that they have met the criteria.

All iGEM teams are required to deliver a team wiki, poster, presentation, project attribution, registry part pages, sample submissions, safety forms, and a judging form. Additionally, teams must follow all Laboratory Safety Rules, adhere to the Responsible Conduct Committee’s Conduct Policy, be intellectually honest, treat everyone with respect throughout the competition, and contribute positively to society.

Registered Team: iGEM teams range greatly in size, origin, and member age. They tend to consist primarily of undergraduate students from accredited postsecondary institutions, but teams can also be composed of high school students or lab members of the greater community. iGEM HQ recommends that teams be gender-balanced and have between 8 and 15 members. It also compares iGEM teams to sports teams in that all members have different roles and responsibilities. The 3 team sections are Undergraduate, Overgraduate, and High School.

Team Wiki: Wiki pages provide teams with a space to communicate their project to others and make a case for how they have met criteria. The general requirements of a wiki are project overview, project design, project results, and medal criteria checklist. All iGEM teams are given a namespace on the iGEM wiki with their team name, and teams are allowed to create and edit pages solely within their respective team namespace. iGEM HQ refers to wikis as the “public-facing representation” of a team’s project, stating that wikis should be understandable by non-scientists as well as by judges who are not familiar with projects prior to the judging phase.

Methods:

The full method for assembling the data spreadsheet is described explicitly below. Each column within the collaboration network data spreadsheet was gathered from multiple locations on iGEM’s publicly available website for the 2015 competition.

First, the 2015 iGEM teams were distributed evenly among the members of the “Helping Practices of Science and Engineering” Policy and Practices subteam. Next, the results page at https://2015.igem.org/Results was consulted, which provided much of the information needed to populate the spreadsheet. This page lists every iGEM team’s proper name, their respective medal colour, and any “special awards” won. Special awards span a variety of categories, from “Best Poster” to “Best Integrated Human Practices”. In the spreadsheet, the column “Donor” was filled with the team’s name, “(Donor) Medal Colour” with the medal earned, “Special Award(s)? (Yes/No)” with a binary Yes or No response, and “Name of Special Awards” with the specific award(s) won, if any.

Two links were then followed from the Results page: the official team profile and the team’s wiki. The official team profile provided basic information such as the proper registered name of the team, the country and continent they were from, and the team size, including any auxiliary team members. From this page, the “(Donor) Country”, “(Donor) Continent”, “(Donor) Team size - Student Members”, and “(Donor) Team size - Auxiliary Members” columns could be recorded. Team profiles list members under multiple headings depending on how they contributed to their respective iGEM team. It was decided that the “Student Members” heading would correspond to our “(Donor) Team size - Student Members” column, and that all other headings, such as “Advisors”, “Primary PI”, “Secondary PI” and “Instructors”, would fall under “(Donor) Team size - Auxiliary Members”.

The final link followed was the team’s wiki page, which identified the team’s collaborations and who the recipient teams were. All teams’ wikis were formatted differently, but most followed a familiar layout: among the top headings of the home page, a Collaborations heading could usually be found on its own or under the Human Practices heading. Once on this page, we judged whether a collaboration was valid to record. Our method for determining whether a collaboration was valid is described below, with example edge cases to explain further.

To summarize, there are 9 columns in our spreadsheet:

  • Donor
  • Country
  • Continent
  • Medal Colour
  • Team Size
  • Auxiliary Team Members
  • Recipient Team
  • Special Awards (Yes or no?)
  • Special Awards Awarded
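
For reference, below is a minimal sketch of how the spreadsheet could be loaded and inspected in Python, assuming it has been exported as a CSV named collaborations.csv with the nine columns above; the filename and exact header strings are illustrative assumptions, not the authoritative file layout.

```python
# Minimal sketch: load the collaboration spreadsheet and check its columns.
# Assumes a CSV export named "collaborations.csv" with one row per
# donor-recipient collaboration; filename and headers are illustrative.
import pandas as pd

COLUMNS = [
    "Donor", "Country", "Continent", "Medal Colour", "Team Size",
    "Auxiliary Team Members", "Recipient Team",
    "Special Awards (Yes or no?)", "Special Awards Awarded",
]

collabs = pd.read_csv("collaborations.csv")
missing = [c for c in COLUMNS if c not in collabs.columns]
if missing:
    raise ValueError(f"Spreadsheet is missing expected columns: {missing}")

print(f"{len(collabs)} collaborations loaded")            # one row per edge
print(collabs["Donor"].nunique(), "unique donor teams")   # cf. Simple Statistics
```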

What is a valid collaboration?

“Convince the judges you have helped any registered iGEM team from high school, a different track, another university, or another institution in a significant way by, for example, mentoring a new team, characterizing a part, debugging a construct, modeling/simulating their system or helping validate a software/hardware solution to a synbio problem.”

- iGEM HQ, Medal criteria

In most cases, teams reported collaborations similar to the examples outlined in the medal criteria. However, we found instances where the “significance” of a collaboration was ambiguous. The following edge cases represent the classes of collaboration with such ambiguity.

Edge Cases:

  1. NEGEM
  New England iGEM (NEGEM) is a Team Meetup hosted by BostonU each year. At this Team Meetup, iGEM teams share ideas, help each other, and socialize. However, attendance at the Team Meetup was not counted as a collaboration: although NEGEM may have been the starting point for a collaboration, the event itself was not significant enough to be one, as it did not allow enough time or resources for teams to significantly help each other. Collaborations did, however, occur as a result of the Team Meetup, in which teams significantly assisted other teams with their projects.

  2. Ted Harrison Middle School
  Ted Harrison Middle School is a middle school that started an iGEM project in the 2015 iGEM season. They collaborated with the University of Calgary as well as the University of Lethbridge. Even though they did receive aid (first-year mentorship, wet-lab training) that would qualify as significant enough to be a collaboration, their collaborations were not counted because they were not a registered iGEM team: they did not appear on the Team List for 2015 or any other year (https://igem.org/Team_List?year=2015).

  3. Sheffield 2014 & Chicago 2013
  OLS Canmore, a high school in Alberta, collaborated with Team Sheffield and Team UChicago. These collaborations were an edge case because it was the 2014 Sheffield team and the 2013 UChicago team that helped the 2015 OLS Canmore team. When we asked iGEM HQ for clarification, the Waterloo Policy and Practices team was referred to Kim de Mora, Ph.D., the Director of Development at the iGEM Foundation, who also coordinates the judging program. It was confirmed that as long as the collaboration is properly documented and proven to be valid, it counts.

  4. Macquarie_Australia
  The iGEM teams representing the University of Sydney, Linköping University, Birkbeck University and Oxford University directly took part in an episode of the world’s first synthetic-biology-themed game show, “So You Think You Can Synthesise”. Given this unique and creative circumstance, it was difficult to judge whether this qualified as a human practices collaboration. To abide by our definition of a collaboration, however, it falls under the same category as surveys: for all intents and purposes, one team created a material or activity and the other iGEM teams only participated. Since there was no reciprocal collaboration in which a knowledge gap was bridged between two or more teams, this cannot be classified as a valid collaboration.

  5. Korea_U_Seoul
  Korea University created a large survey that many teams and members of the general public eventually filled out, which was used to gather data. The questions ranged from asking whether respondents were aware of the concept of synthetic biology to questions about open science. Although other teams did complete these surveys, providing information that helped Korea University with further research, there was no exchange of ideas in lab work, mathematical modelling, or policy and practices work that helped both teams grow and learn.

Simple Statistics

Total collaborations: 583

Collaboration Actors
Role | Number of unique teams
Donor | 205
Recipient | 218
Donor Team Size
Type of Member | Mean
Student | 14
Auxiliary | 5
Donor Geography

Number of unique countries: 36

Continent | Frequency* | Percent*
Africa | 2 | 0.34
Asia | 150 | 25.73
Europe | 245 | 42.02
Latin America | 45 | 7.72
North America | 141 | 24.19

*All collaborations, before simplification by donor team

Donor Medal and Special Awards
Medal colour | Frequency* | Percent*
Bronze | 86 | 14.75
Gold | 341 | 58.49
Silver | 113 | 19.38
No medal awarded | 43 | 7.38

Collaborations whose donor team won, or was nominated for, at least one special award: 251 (43.05%)

*All collaborations, before simplification by donor team
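
As a reference for how these summary counts could be reproduced from the dataset, here is a minimal pandas sketch; the filename and column headers follow the assumed CSV layout sketched earlier and are not the authoritative format.

```python
# Minimal sketch: reproduce the summary counts above from the assumed CSV.
import pandas as pd

collabs = pd.read_csv("collaborations.csv")  # one row per collaboration

print("Total collaborations:", len(collabs))
print("Unique donors:", collabs["Donor"].nunique())
print("Unique recipients:", collabs["Recipient Team"].nunique())

# Continent and medal breakdowns over all collaborations
# (before simplification by donor team), as counts and percentages.
for col in ["Continent", "Medal Colour"]:
    freq = collabs[col].value_counts()
    pct = (100 * freq / len(collabs)).round(2)
    print(pd.DataFrame({"Frequency": freq, "Percent": pct}))
```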

Intra-iGEM Team Collaborations

Not only did we participate in inter-iGEM team collaborations, but we also deliberately promoted intra-team collaboration between our three subteams (Lab and Design, Math Modelling, and Policy and Practices). Look for the stickers in the bottom-left corner to see whether, and which, subteams collaborated to produce that part of our project!

Inter-iGEM Team Collaborations

Read about our collaborations with other iGEM teams here.

Appendix A: Linear Regression - Global Data Analysis of Team Size vs. Success Factors

Appendix B: Linear Regression - Latin America Analysis of Team Size vs. Success Factors
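
Appendices A and B present linear regressions of team size against success factors. As an illustration of how such a fit could be computed from the dataset, here is a minimal sketch; the ordinal medal encoding and the CSV layout below are our assumptions for illustration, not the encoding actually used to produce the plots.

```python
# Minimal sketch: OLS fit of an (assumed) ordinal success score on team size.
import pandas as pd
from scipy.stats import linregress

# Illustrative encoding only; not the team's documented scoring of "success".
MEDAL_SCORE = {"No medal awarded": 0, "Bronze": 1, "Silver": 2, "Gold": 3}

collabs = pd.read_csv("collaborations.csv")
teams = collabs.drop_duplicates(subset="Donor")        # one row per donor team
teams = teams.dropna(subset=["Team Size"])

x = teams["Team Size"].astype(float)
y = teams["Medal Colour"].map(MEDAL_SCORE).fillna(0)

fit = linregress(x, y)
print(f"slope = {fit.slope:.3f}, r^2 = {fit.rvalue ** 2:.3f}, p = {fit.pvalue:.3g}")
```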

Appendix C1: Latin America Data Averages

Sample size (n): 51

Team size
Mean auxiliary team size: 3.25
Mean team size: 13.61
Standard deviation of team size: 4.92 (standard error: 0.6883)

(In the tables below, the first percentage is relative to the group total in each block; the second is relative to the sample, n = 51.)

Medals | Count | % of group | % of sample
Gold won | 20 | 60.61 | 39.22
Silver won | 6 | 18.18 | 11.76
Bronze won | 7 | 21.21 | 13.73
No medal | 18 | 54.55 | 35.29
Total medals won | 33 | 100.00 | 64.71

Special awards | Count | % of group | % of sample
Special awards won | 1 | - | 1.96
With gold medal | 0 | 0.00 | 0.00
With silver medal | 0 | 0.00 | 0.00
With bronze medal | 0 | 0.00 | 0.00
With no medal | 1 | 100.00 | 1.96

Collaboration | Count | % of group | % of sample
Teams that collaborated | 47 | - | 92.16
With gold medal | 20 | 42.55 | 39.22
With silver medal | 6 | 12.77 | 11.76
With bronze medal | 4 | 8.51 | 7.84
With no medal | 17 | 36.17 | 33.33
With special award | 1 | 2.13 | 1.96

No collaboration | Count | % of group | % of sample
Teams without collaboration | 4 | - | -
With gold medal | 0 | 0.00 | 0.00
With silver medal | 0 | 0.00 | 0.00
With bronze medal | 3 | 75.00 | 5.88
With no medal | 1 | 25.00 | 1.96

Appendix C2: North America Data Averages

Sample size (n): 136

Team size
Mean auxiliary team size: 3.65
Mean team size: 10.85
Standard deviation of team size: 5.38 (standard error: 0.4617)

(Percentages as in Appendix C1: first relative to the group total in each block, second relative to the sample, n = 136.)

Medals | Count | % of group | % of sample
Gold won | 53 | 46.09 | 38.97
Silver won | 35 | 30.43 | 25.74
Bronze won | 27 | 23.48 | 19.85
No medal | 21 | 18.26 | 15.44
Total medals won | 115 | 100.00 | 84.56

Special awards | Count | % of group | % of sample
Special awards won | 40 | - | 29.41
With gold medal | 30 | 75.00 | 22.06
With silver medal | 6 | 15.00 | 4.41
With bronze medal | 4 | 10.00 | 2.94
With no medal | 0 | 0.00 | 0.00

Collaboration | Count | % of group | % of sample
Teams that collaborated | 87 | - | 63.97
With gold medal | 30 | 34.48 | 22.06
With silver medal | 28 | 32.18 | 20.59
With bronze medal | 22 | 25.29 | 16.18
With no medal | 7 | 8.05 | 5.15
With special award | 26 | 29.89 | 19.12

No collaboration | Count | % of group | % of sample
Teams without collaboration | 49 | - | -
With gold medal | 23 | 46.94 | 16.91
With silver medal | 7 | 14.29 | 5.15
With bronze medal | 5 | 10.20 | 3.68
With no medal | 14 | 28.57 | 10.29

Appendix C3: Europe Data Averages

Sample size (n): 127

Team size
Mean auxiliary team size: 5.16
Mean team size: 11.45
Standard deviation of team size: 5.03 (standard error: 0.4463)

(Percentages as in Appendix C1: first relative to the group total in each block, second relative to the sample, n = 127.)

Medals | Count | % of group | % of sample
Gold won | 86 | 69.35 | 67.72
Silver won | 21 | 16.94 | 16.54
Bronze won | 17 | 13.71 | 13.39
No medal | 3 | 2.42 | 2.36
Total medals won | 124 | 100.00 | 97.64

Special awards | Count | % of group | % of sample
Special awards won | 53 | - | 41.73
With gold medal | 50 | 94.34 | 39.37
With silver medal | 3 | 5.66 | 2.36
With bronze medal | 0 | 0.00 | 0.00
With no medal | 0 | 0.00 | 0.00

Collaboration | Count | % of group | % of sample
Teams that collaborated | 124 | - | 97.64
With gold medal | 84 | 67.74 | 66.14
With silver medal | 21 | 16.94 | 16.54
With bronze medal | 17 | 13.71 | 13.39
With no medal | 2 | 1.61 | 1.57
With special award | 53 | 42.74 | 41.73

No collaboration | Count | % of group | % of sample
Teams without collaboration | 3 | - | -
With gold medal | 2 | 66.67 | 1.57
With silver medal | 0 | 0.00 | 0.00
With bronze medal | 0 | 0.00 | 0.00
With no medal | 1 | 33.33 | 0.79

Appendix C4: Asia Data Averages

Sample size (n): 178

Team size
Mean auxiliary team size: 5.77
Mean team size: 16.37
Standard deviation of team size: 8.28 (standard error: 0.621)

(Percentages as in Appendix C1: first relative to the group total in each block, second relative to the sample, n = 178.)

Medals | Count | % of group | % of sample
Gold won | 80 | 49.08 | 44.94
Silver won | 52 | 31.90 | 29.21
Bronze won | 31 | 19.02 | 17.42
No medal | 15 | 9.20 | 8.43
Total medals won | 163 | 100.00 | 91.57

Special awards | Count | % of group | % of sample
Special awards won | 61 | - | 34.27
With gold medal | 49 | 80.33 | 27.53
With silver medal | 10 | 16.39 | 5.62
With bronze medal | 2 | 3.28 | 1.12
With no medal | 0 | 0.00 | 0.00

Collaboration | Count | % of group | % of sample
Teams that collaborated | 156 | - | 87.64
With gold medal | 73 | 46.79 | 41.01
With silver medal | 47 | 30.13 | 26.40
With bronze medal | 25 | 16.03 | 14.04
With no medal | 11 | 7.05 | 6.18
With special award | 57 | 36.54 | 32.02

No collaboration | Count | % of group | % of sample
Teams without collaboration | 22 | - | -
With gold medal | 7 | 31.82 | 3.93
With silver medal | 5 | 22.73 | 2.81
With bronze medal | 6 | 27.27 | 3.37
With no medal | 4 | 18.18 | 2.25

Appendix C5: Africa Data Averages

Sample size (n): 3

Team size
Mean auxiliary team size: 6.33
Mean team size: 20
Standard deviation of team size: 10.39 (standard error: 6)

(Percentages as in Appendix C1: first relative to the group total in each block, second relative to the sample, n = 3.)

Medals | Count | % of group | % of sample
Gold won | 0 | 0.00 | 0.00
Silver won | 0 | 0.00 | 0.00
Bronze won | 3 | 100.00 | 100.00
No medal | 0 | 0.00 | 0.00
Total medals won | 3 | 100.00 | 100.00

Special awards | Count | % of group | % of sample
Special awards won | 0 | - | 0.00
With gold medal | 0 | - | 0.00
With silver medal | 0 | - | 0.00
With bronze medal | 0 | - | 0.00
With no medal | 0 | - | 0.00

Collaboration | Count | % of group | % of sample
Teams that collaborated | 2 | - | 66.67
With gold medal | 0 | 0.00 | 0.00
With silver medal | 0 | 0.00 | 0.00
With bronze medal | 2 | 100.00 | 66.67
With no medal | 0 | 0.00 | 0.00
With special award | 0 | 0.00 | 0.00

No collaboration | Count | % of group | % of sample
Teams without collaboration | 1 | - | -
With gold medal | 0 | 0.00 | 0.00
With silver medal | 0 | 0.00 | 0.00
With bronze medal | 1 | 100.00 | 33.33
With no medal | 0 | 0.00 | 0.00

Appendix D: Collaboration Bias Hypothesis Testing

North America:

H0: π = 0.5, the donor team is not biased towards collaborating

H1: π > 0.5, the donor team is biased towards collaborating

α = .01

We have n = 137 trials; of the 137 observations, 87 were cases of collaboration.

Evaluating the cumulative binomial distribution under the null hypothesis, we obtain 0.99985 > 0.99 = 1 - α.

So, we can reject the null hypothesis and accept the alternative hypothesis that North American teams were biased towards collaboration.
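
The cumulative probabilities in this appendix can be checked with a short script. The sketch below (assuming SciPy is available, and mirroring the rejection rule used above of comparing the cumulative probability to 1 - α) takes the North America figures as input.

```python
# Minimal sketch: cumulative binomial check of collaboration bias under
# H0: pi = 0.5, mirroring the rejection rule used in this appendix.
from scipy.stats import binom

def collaboration_bias(n_trials, n_collaborating, alpha=0.01, p_null=0.5):
    """Return P(X <= k | n, p_null) and whether it exceeds 1 - alpha."""
    cdf = binom.cdf(n_collaborating, n_trials, p_null)
    return cdf, cdf > 1 - alpha

# North America figures from this appendix: 87 collaborations in 137 trials.
cdf, biased = collaboration_bias(137, 87)
print(f"cumulative binomial = {cdf:.5f}, reject H0 at alpha = 0.01: {biased}")
```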

Latin America:

H0: π = 0.5, the donor team is not biased towards collaborating

H1: π > 0.5, the donor team is biased towards collaborating

α = .01

We have n = 51 trials; of the 51 observations, 47 included collaboration.

Evaluating the cumulative binomial distribution under the null hypothesis, we obtain 1.00 > 0.99 = 1 - α.

So, we can reject the null hypothesis and accept the alternative hypothesis that Latin American teams were biased towards collaboration.

Europe:

H0: π = 0.5, the donor team is not biased towards collaborating

H1: π > 0.5, the donor team is biased towards collaborating

α = .01

We have n = 127 trials; of the 127 observations, 124 included collaboration.

Evaluating the cumulative binomial distribution under the null hypothesis, we obtain 1.00 > 0.99 = 1 - α.

So, we can reject the null hypothesis and accept the alternative hypothesis that European teams were biased towards collaboration.

Asia:

H0: π = 0.5, the donor team is not biased towards collaborating

H1: π > 0.5, the donor team is biased towards collaborating

α = .01

We have n = 178 trials; of the 178 observations, 156 included collaboration.

Evaluating the cumulative binomial distribution under the null hypothesis, we obtain 1.00 > 0.99 = 1 - α.

So, we can reject the null hypothesis and accept the alternative hypothesis that Asian teams were biased towards collaboration.

Africa:

H0: π = 0.5, the donor team is not biased towards collaborating

H1: π > 0.5, the donor team is biased towards collaborating

α = .01

We have n = 3 trials; of the 3 observations, 2 included collaboration.

Evaluating the cumulative binomial distribution under the null hypothesis, we obtain 0.875 < 0.99 = 1 - α.

So, we cannot reject the null hypothesis; there is insufficient evidence to conclude that African teams were biased towards collaboration.

*However, due to the small sample size, this test may not be truly reflective.

Appendix E: Single-Factor ANOVA Test
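
As an illustration, a single-factor ANOVA of this kind could be run as sketched below; the choice of grouping factor (continent) and response (donor team size), as well as the CSV layout, are our assumptions rather than the exact comparison reported in this appendix.

```python
# Minimal sketch: single-factor ANOVA of donor team size across continents.
# The grouping variable and response are illustrative assumptions.
import pandas as pd
from scipy.stats import f_oneway

collabs = pd.read_csv("collaborations.csv")
teams = collabs.drop_duplicates(subset="Donor")   # one row per donor team

groups = [g["Team Size"].dropna().astype(float)
          for _, g in teams.groupby("Continent")]
f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```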

Appendix F: Burt’s Constraint vs. Betweenness
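
Appendix F compares Burt’s constraint with betweenness centrality for teams in the collaboration network. A minimal sketch of how these two measures could be computed with networkx from the donor-recipient edge list follows; the CSV layout is the same assumed format used above.

```python
# Minimal sketch: build the collaboration network and compute betweenness
# centrality (Freeman 1977; Brandes 2001) and Burt's constraint (Burt 2000).
import pandas as pd
import networkx as nx

collabs = pd.read_csv("collaborations.csv")

# Directed edge from each donor team to each recipient team.
G = nx.from_pandas_edgelist(collabs, source="Donor", target="Recipient Team",
                            create_using=nx.DiGraph())

betweenness = nx.betweenness_centrality(G)
constraint = nx.constraint(G.to_undirected())   # Burt's constraint per node

comparison = pd.DataFrame({"betweenness": betweenness, "constraint": constraint})
print(comparison.sort_values("betweenness", ascending=False).head(10))
```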

References
  1. Brandes, U. (2001). A faster algorithm for betweenness centrality. Journal of Mathematical Sociology, 25, 163-177.
  2. Burt, R. S. (2000). Structural holes versus network closure as social capital. In Social Capital: Theory and Research. Retrieved October 16, 2016, from http://snap.stanford.edu/class/cs224w-readings/burt00capital.pdf
  3. Burt, R. S. (2004). Structural holes and good ideas. American Journal of Sociology, 110(2), 349-399.
  4. Coleman, J. S. (1988). Social capital in the creation of human capital. American Journal of Sociology, 94, S95-S120.
  5. Freeman, L. (1977). A set of measures of centrality based on betweenness. Sociometry, 40, 35-41.
  6. Hall, H., & Graham, D. (2004). Creation and recreation: Motivating collaboration to generate knowledge capital in online communities. International Journal of Information Management, 24, 235-246.
  7. Hunter, D. R., Handcock, M. S., Butts, C. T., Goodreau, S. M., & Morris, M. (2008). ergm: A package to fit, simulate and diagnose exponential-family models for networks. Journal of Statistical Software, 24(3).
  8. Ingham, A. G., Levinger, G., Graves, J., & Peckham, V. (1974). The Ringelmann effect: Studies of group size and group performance. Journal of Experimental Social Psychology, 10, 371-384.
  9. Johanson, J. (2001). The balance of corporate social capital. In S. M. Gabbay & R. A. J. Leenders (Eds.), Research in the Sociology of Organizations (Vol. 18, pp. 1-20). Stamford, CT: JAI Press.
  10. Kravitz, D. (1986). Ringelmann rediscovered: The original article. Journal of Personality and Social Psychology, 50(5), 936-941.
  11. Liu, P., Luo, S., & Xia, H. (2015, October 27). Evolution of scientific collaboration network driven by homophily and heterophily. Retrieved from https://arxiv.org/abs/1510.07763
  12. Liu, W. T., & Duff, R. W. (1972). The strength in weak ties. The Public Opinion Quarterly, 36, 361-366.
  13. McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27, 415-444.
  14. Robins, G. (2015). Doing Social Network Research: Network-based Research Design for Social Scientists. London, UK: SAGE Publications Ltd.
  15. Rogers, E. M., & Bhowmik, D. K. (1970). Homophily-heterophily: Relational concepts for communication research. The Public Opinion Quarterly, 34, 523-538.
  16. Zanghi, H., Ambroise, C., & Miele, V. (2008). Fast online graph clustering via Erdős–Rényi mixture. Pattern Recognition, 41(12), 3592-3599.