Difference between revisions of "Team:Toronto/Software"

{{Toronto}}
 
 
<html>
<!-- ####################################################### -->
<!-- #  This html was produced by the igemwiki generator  # -->
<!-- #  https://github.com/igemuoftATG/generator-igemwiki  # -->
<!-- ####################################################### -->
  
<div>
<!-- repo for this wiki: https://github.com/igemuoftATG/wiki2016 -->
<!-- file built: Mon Oct 17 2016 23:59:11 GMT-0400 (EDT) -->
<h5>Modelling Metabolic Activity with Flux-Balance Analysis</h5>
<p>We used Flux Balance Analysis (FBA) to model the metabolic activity of the E. coli S30 cell-free extract that our Wetlab team used as the basis for the synthetic gene network driving the Pardee lab’s paper biosensor. We began with an annotated, EcoCyc-aligned genome-scale reconstruction of the metabolic network of E. coli K-12 MG1655 (an SBML file generated by Feist et al.), removed all transport, import, and export reactions from the model, adjusted the reaction constraints to reflect conditions in LB media, and finally used COBRApy to model and optimize the metabolic activity of the cell extract.</p>
</div>
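The preprocessing step (dropping boundary reactions before optimization) can be sketched as follows. This is a stdlib stand-in, not our COBRApy code: the reaction IDs and the `strip_exchange_reactions` helper are hypothetical, and in the actual pipeline the filtering is done on the Feist et al. SBML model through COBRApy's model API.

```python
# Sketch of the preprocessing step: filtering transport/exchange reactions
# out of a model before optimization. Reaction IDs below are hypothetical;
# the real workflow operates on the Feist et al. SBML model via COBRApy.

def strip_exchange_reactions(reactions):
    """Drop reactions whose IDs mark them as exchange ('EX_' prefix) or
    transport ('_transport' suffix), mimicking cell-free conditions."""
    def is_boundary(rxn_id):
        return rxn_id.startswith("EX_") or rxn_id.endswith("_transport")
    return {rid: r for rid, r in reactions.items() if not is_boundary(rid)}

# Toy model: reaction id -> (lower bound, upper bound)
model = {
    "EX_glc__D_e": (-10.0, 1000.0),     # glucose exchange (removed)
    "GLC_transport": (0.0, 1000.0),     # transport reaction (removed)
    "PGI": (-1000.0, 1000.0),           # internal reaction (kept)
}

internal = strip_exchange_reactions(model)
print(sorted(internal))  # ['PGI']
```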
+
  
<div>
<h5>Data Mining Pipeline</h5>
<p>We constructed profile HMMs from sets of sequences with functions related to metal binding and resistance to metal toxicity, with each profile HMM built around one gene. After pulling annotated genomic information for all available bacterial species from EnsemblBacteria, we ran HMMER’s nhmmer command to locally align sections of genomic DNA against the sequence sets making up our profile HMMs. From the nhmmer results, we built a table of annotations for all species that includes the matched regions of genomic DNA (start and stop positions), annotations for every gene scoring above the default threshold, bit scores, E-values, and predicted bias.</p>
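nhmmer's tabular output is whitespace-delimited, with alignment coordinates, strand, E-value, bit score, and bias in fixed columns, which makes extracting the fields for the annotation table straightforward. A minimal parser, assuming the standard nhmmer `--tblout` column order (the sample hit line below is fabricated for illustration):

```python
# Minimal parser for one line of nhmmer --tblout output, pulling the fields
# our annotation table needs: alignment start/stop, strand, E-value,
# bit score, and bias. Assumes the documented --tblout column order.

def parse_tblout_line(line):
    f = line.split()
    return {
        "target": f[0],       # genomic sequence hit
        "query": f[2],        # profile HMM name
        "ali_from": int(f[6]),
        "ali_to": int(f[7]),
        "strand": f[11],
        "evalue": float(f[12]),
        "score": float(f[13]),
        "bias": float(f[14]),
    }

# Fabricated sample hit for illustration only
sample = ("chr1 - copA_profile - 1 210 10500 10710 10480 10730 "
          "4641652 + 1.2e-45 152.3 3.1 putative copper ATPase")
hit = parse_tblout_line(sample)
print(hit["ali_from"], hit["ali_to"], hit["score"])  # 10500 10710 152.3
```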
<p>We then stored the annotated genomic sequence files (GenBank files), sequence positions (start and stop) from the nhmmer output, the operon membership of each gene (using information from ODB3), KEGG Orthology annotations for operon function, phylogenetic profiles, and the profile HMMs described above in a PostgreSQL relational database.</p>
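A relational layout along these lines can hold the pieces listed above. This sketch uses Python's built-in sqlite3 in place of PostgreSQL so it runs anywhere; the table and column names are illustrative, not our production schema:

```python
import sqlite3

# Illustrative relational layout for the pipeline's data. The real
# database is PostgreSQL; sqlite3 is used here only so the sketch runs
# without external dependencies. All names and values are made up.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE genes (
    gene_id   TEXT PRIMARY KEY,
    species   TEXT NOT NULL,
    start_pos INTEGER,
    stop_pos  INTEGER,
    operon_id TEXT            -- membership from ODB3
);
CREATE TABLE operons (
    operon_id TEXT PRIMARY KEY,
    ko_id     TEXT            -- KEGG Orthology annotation
);
CREATE TABLE nhmmer_hits (
    gene_id  TEXT REFERENCES genes(gene_id),
    profile  TEXT,
    evalue   REAL,
    score    REAL,
    bias     REAL
);
""")
con.execute("INSERT INTO genes VALUES ('b3446', 'E. coli K-12', 1, 900, 'opn1')")
con.execute("INSERT INTO nhmmer_hits VALUES ('b3446', 'golS_profile', 1e-30, 98.5, 0.2)")
row = con.execute("""SELECT g.species, h.score FROM genes g
                     JOIN nhmmer_hits h ON h.gene_id = g.gene_id""").fetchone()
print(row)  # ('E. coli K-12', 98.5)
```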
<p>Following this, we used the information in the database to train recurrent neural networks (RNNs) and multilayer perceptrons (MLPs) to recognize operons. The first network determines whether a gene cluster entered as input is part of an operon, and the second determines whether that operon has functions related to metal binding. Together, these allow our pipeline to identify the function of unknown operons.</p>
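As an illustration of the classifier's shape (not our trained model), here is a single Elman-style recurrent pass with a sigmoid readout, using fixed toy weights and a hypothetical per-gene feature such as an intergenic-distance score:

```python
import math

# Single-pass sketch of the binary operon classifier: an Elman-style
# recurrent cell followed by a sigmoid readout. Weights are fixed toy
# values; the real networks were trained on the database described above.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rnn_classify(sequence, w_in=0.8, w_rec=0.5, w_out=1.5, bias=-1.0):
    """sequence: per-gene feature values (e.g. intergenic-distance scores)."""
    h = 0.0
    for x in sequence:
        h = math.tanh(w_in * x + w_rec * h)  # recurrent state update
    return sigmoid(w_out * h + bias)          # P(cluster is an operon)

p = rnn_classify([0.9, 0.8, 0.95])  # tightly spaced cluster
q = rnn_classify([0.05, 0.1, 0.0])  # loosely spaced cluster
print(p > q)  # True
```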
</div>
+
  
<div>
<h5>Smartphone Camera App for Colorimetric Analysis</h5>
<p>We used Apache Cordova to create a smartphone app for colorimetric analysis. The app was designed to analyze the output of the cell-free paper biosensor our Wetlab team implemented for gold detection using the lacZ colour change. However, because the app determines the base colour directly from the image, it can analyze reaction data from any one-to-one colour change. In response to a given trigger RNA, LacZ cleaves yellow chlorophenol red-β-D-galactopyranoside within the paper disc platforms, producing a purple chlorophenol red product. The intensity of this colour change is used to calculate analyte concentration.</p>
<p>When opening the app, the user is prompted to set the following configuration: the aspect ratio of the biosensor paper, the (labelled) number of rows and columns of wells, and the row-column coordinate of the well containing only yellow pigment. The app uses the aspect ratio to draw a translucent frame in the camera’s live preview so that the user can frame the paper more easily, and it includes an image colour summarizer that converts the image to LCH colourspace and reports the colour of each cell, accounting for small variations in shading and saturation. These values feed into three major steps:</p>
<ol>
<li>Image processing: selecting a framing window and cropping everything outside that window using an OpenCV method.</li>
<li>Colour analysis: applying the Huo et al. Robust Auto White-Balance algorithm to correct for distortions due to ambient lighting, creating a separate image segment for each well with the OpenCV GrabCut algorithm, and inserting each disc’s segment into a 2D array so that only the coloured wells are analyzed; the user is then prompted to mark a border around each disc with an on-screen drawing tool.</li>
<li>Approximating the relative expression of the reporter gene: analyzing each cell in the array for the ratio of purple to yellow in the substrate. The resulting concentrations are stored in a second 2D array of the same size and indicate relative expression of the reporter gene based on the amount of purple pigment, which corresponds to the amount of chlorophenol red-β-D-galactopyranoside cleaved.</li>
</ol>
<p>Overall, we have created an app that makes colorimetric analysis simple and efficient for a layperson, and that will be an invaluable tool for on-the-go testing when combined with paper-based biosensors (Pardee et al. 2014).</p>
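The last step above reduces to counting purple versus yellow pixels within each well segment. A minimal hue-based version, using Python's stdlib colorsys on hand-picked RGB values rather than the app's OpenCV image segments (the hue thresholds here are illustrative):

```python
import colorsys

# Sketch of the final analysis step: estimating the purple fraction of a
# well from its pixels. RGB values and hue thresholds are illustrative;
# the app performs this on OpenCV image segments.

def purple_fraction(pixels):
    """pixels: iterable of (r, g, b) in 0-255. Classify each pixel by hue:
    the chlorophenol red product is purple/magenta, the substrate yellow."""
    purple = yellow = 0
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        if 0.10 <= h <= 0.20:       # yellow-ish hues
            yellow += 1
        elif 0.70 <= h <= 0.95:     # purple/magenta hues
            purple += 1
    total = purple + yellow
    return purple / total if total else 0.0

well = [(230, 210, 40)] * 3 + [(150, 40, 170)] * 7   # mostly purple well
print(purple_fraction(well))  # 0.7
```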
</div>
 
<div>
<h5>Modelling Protein Folding with Rosetta</h5>
<p>We used Rosetta and PyRosetta to model and compare the gold-binding ability of GolS as a monomer and as a predicted homodimer. GolS belongs to the MerR family of transcriptional regulators, which usually function as homodimers. Based on the amino acid sequence of GolS, we generated a predicted 3D structure for a GolS monomer within Rosetta, then docked two GolS monomers together to create a homodimer. Following this, we modelled the ability of the predicted GolS homodimer to bind Gold(III) and compared the gold-binding abilities of two mutant versions of GolS created by our Wetlab team.</p>
 
</div>
 
 
 
 
</html>
 
{{Toronto/footer}}

Revision as of 04:00, 18 October 2016
