{{Toronto}}
{{Toronto/head}}
<html>
  
<div>
<!-- repo for this wiki: https://github.com/igemuoftATG/wiki2016 -->
<h5>Modelling Metabolic Activity with Flux-Balance Analysis</h5>
<p>We used Flux Balance Analysis (FBA) to model the metabolic activity of the E. coli S30 cell-free extract that our Wet Lab team used as the basis for the synthetic gene network driving the Pardee lab’s paper biosensor. We began with an annotated, EcoCyc-aligned genome-scale reconstruction of the metabolic network (an SBML file for E. coli K-12 MG1655 generated by Feist et al.), removed all transport, import, and export reactions from the model, adjusted the reaction constraints to reflect conditions in LB media, and finally used COBRApy to model and optimize the metabolic activity of the cell extract.</p>
</div>
  
<div>
<h5>Data Mining Pipeline</h5>
<p>We constructed profile HMMs from sets of sequences with functions related to metal binding and resistance to metal toxicity, building each profile HMM around one gene. After pulling annotated genomic information for all available bacterial species from EnsemblBacteria, we ran HMMER’s nhmmer command to locally align sections of genomic DNA against the sequence sets making up our profile HMMs. From the nhmmer results, we created a table of annotations for all species, including the matched regions of genomic DNA (start and stop positions), annotations for all genes that scored above the default alignment threshold, bit scores, E-values, and predicted bias.</p>
<p>We then stored annotated genomic sequence files (GenBank files), sequence positions (start and stop), nhmmer output, the operon membership of each gene (using information from ODB3), KEGG Orthology annotations for operon function, phylogenetic profiles, and the profile HMMs discussed above in a PostgreSQL relational database.</p>
<p>Following this, we used the information in the database to train recurrent neural networks (RNNs) or multilayer perceptrons (MLPs) to recognize operons. The first network determines whether a gene cluster given as input is part of an operon, and the second determines whether that operon has functions related to metal binding. Our pipeline thus allows us to identify the function of unknown operons.</p>
</div>
  
<div>
<h5>Smartphone Camera App for Colorimetric Analysis</h5>
<p>We used Apache Cordova to create a smartphone app for colorimetric analysis. The app was designed to analyze the output of the cell-free paper biosensor implemented for gold detection by our Wet Lab team using the lacZ colour change. However, because the app determines the base colour directly from the image, it has wide-ranging capabilities that make it useful for analyzing reaction data from any one-to-one colour change. In response to a given trigger RNA, LacZ cleaves yellow chlorophenol red-beta-D-galactopyranoside within the paper disc platforms, producing a purple chlorophenol red product. The intensity of this colour change can then be used to calculate the analyte concentration.</p>
<p>When opening the app, the user is prompted to designate the following configuration: the aspect ratio of the biosensor paper, the (labelled) number of rows and columns of wells, and the row-column coordinate of the well containing only yellow pigment. The app’s image processing uses the paper’s aspect ratio to draw a translucent frame in the camera’s live preview so that the user can frame the paper more easily, and an image colour summarizer converts the image to LCH colourspace and reports the colour of each cell, accounting for small variations in shading and saturation. These values feed three major steps:</p>
<ol>
<li>Image processing: selecting a framing window and cropping everything outside of that window using an OpenCV method.</li>
<li>Colour analysis: correcting distortions due to ambient lighting with the Huo et al. Robust Auto White-Balance API, creating a separate image segment for each well with the OpenCV GrabCut algorithm, and inserting the segment for each disk into a 2D array so that only the coloured wells are analyzed; the user is prompted to mark a border around a disk with an on-screen drawing tool.</li>
<li>Approximating the relative expression of the reporter gene: each cell in the array is analyzed for the ratio of purple to yellow in the substrate, and the resulting concentrations are stored in a second 2D array of the same size; they indicate the relative expression of the reporter gene based on the amount of purple pigment, which corresponds to the amount of chlorophenol red-beta-D-galactopyranoside cleavage.</li>
</ol>
<p>Overall, we have created an app that makes colourimetric analysis simple and efficient for a layperson, and that will be an invaluable tool for on-the-go testing when used in combination with paper-based biosensors (Pardee et al. 2014).</p>
 
</div>
 
<div>
<h5>Modelling Protein Folding with Rosetta</h5>
<p>We used Rosetta and PyRosetta to model and compare the gold-binding ability of GolS as a monomer and as a predicted GolS homodimer. GolS belongs to the MerR family of transcriptional regulators, which usually function as homodimers. Based on the amino acid sequence of GolS, we generated a predicted 3D structure for a GolS monomer in Rosetta and docked two GolS monomers together to create a homodimer. We then modelled the ability of the predicted homodimer to bind gold(III) and compared the gold-binding abilities of two mutant versions of GolS created by our Wet Lab team.</p>
</div>
<div class="content">
<div class="content" id="content-main"><div class="row"><div class="col col-lg-8 col-md-12"><div class="content-main"><h1 id="ios-smartphone-app-for-colorimetric-analysis">iOS Smartphone App for Colorimetric Analysis</h1>
<h3 id="introduction">Introduction</h3>
<p>Colorimetric assays, through their convenience and simplicity, have earned lasting value in both academia and industry. This year, the University of Toronto iGEM Computational Team developed an iOS application for colorimetric image analysis that can be used on the go for even greater convenience and intuitive handling.</p>
<p>Our app is compatible with colorimetric reaction data from any one-to-one colour change, including the gold detector implemented by our Wet Lab in the form of their cell-free paper-based biosensor. To illustrate: when yellow substrate (chlorophenol-red-beta-D-galactopyranoside) is converted into purple product (chlorophenol red) within the paper disks of the biosensor (Pardee et al. 2014), the intensity of the colour change corresponds directly to the concentration of analyte, so a Wet Lab member can take an iPhone photograph to automatically determine the concentration of gold present. Many of the test cases used for our app were based on this yellow-to-purple colour change.</p>
<p>Our app uses the iPhone camera and colorimetric analysis algorithms to perform quantitative comparisons between images of the control and images of the sample. The values representing differences in colour and intensity are then analyzed by a machine learning algorithm trained on image datasets of molecules susceptible to colorimetric change over a range of concentrations.</p>
<h3 id="user-experience">User Experience</h3>
<p>Upon opening the app for the first time, the user is asked to create a project, as shown in Appendix IV.A, and is prompted to provide a project name, project location, date, and any additional notes, as shown in Appendix IV.B. If the app has been used before, the user can instead select a previously created project. A sample project is shown in Appendix IV.C.</p>
<p>Each project can contain multiple tests, and for each test the user is prompted to fill in the following fields: test name, date, description, number of rows in the well plate, and number of columns in the well plate. A list of tests is shown in Appendix IV.D. Upon starting a test, the user is offered a choice between using an existing photo and taking a new one, after which the chosen photo is saved and analyzed. We advise users to place a control in the first row and first column of the wells (position 1,1), since the analysis treats this well as the reference.</p>
<p>Our analytical algorithm then uses the row and column counts to display a table of results showing the difference in colour between each sample well and the control. In addition, a table of estimated analyte concentrations is generated for the user, as shown in Appendix IV.F.</p>
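<p>For illustration, the sketch below (not the app’s production code; the element id and function name are hypothetical) shows how a 2D array of per-well colour differences could be rendered as an HTML results table:</p>
<pre>
// Hypothetical helper: render a rows x cols array of per-well
// Delta-E values (control at position 1,1) as an HTML table.
function renderResultsTable(deltaE){
    var html = '<table>';
    for (var r = 0; r < deltaE.length; r++){
        html += '<tr>';
        for (var c = 0; c < deltaE[r].length; c++){
            html += '<td>' + deltaE[r][c].toFixed(2) + '</td>';
        }
        html += '</tr>';
    }
    html += '</table>';
    document.getElementById('results').innerHTML = html; // placeholder id
}

// Example: a 2x3 plate whose control well (row 1, column 1) has Delta-E 0.
renderResultsTable([[0.00, 12.41, 25.80],
                    [6.02, 18.97, 31.45]]);
</pre>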
<p>This data hierarchy supports multiple tests for a single large location or project, so the user can keep all tests related to the same area organized in one place.</p>
<p>We also advise that images be taken under relatively even lighting, since reflections from the wells lower the accuracy of the results.</p>
<h3 id="software-development">Software Development</h3>
<p>We chose Apache Cordova, an open-source framework for developing multi-platform apps, to build our app. Apache Cordova works by wrapping a web application in native wrappers for the target devices, and it exposes native device APIs through plugins such as the camera plugin used extensively by our app. Because Apache Cordova apps are essentially web applications, our source code is written in HTML, CSS, and JavaScript, with the exceptions described below.</p>
<p>The Apache Cordova camera plugin let us capture photos in-app with the iPhone camera and make the image accessible to the app at the same time. We then used the front-end JavaScript framework AngularJS (along with its modules and controllers) and the HTML Canvas to retrieve image data and output the analyzed results on the user’s screen, seamlessly and dynamically.</p>
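<p>The following is a minimal sketch of this capture step, assuming the standard Cordova camera plugin API; the canvas id and callback names are placeholders rather than the app’s actual identifiers:</p>
<pre>
// Capture a photo with the Cordova camera plugin and draw it on a
// canvas so that pixel data can be retrieved later via getImageData.
function capturePhoto(){
    navigator.camera.getPicture(onPhoto, onFail, {
        quality: 75,
        destinationType: Camera.DestinationType.DATA_URL, // base64 string
        sourceType: Camera.PictureSourceType.CAMERA
    });
}

function onPhoto(base64Data){
    var canvas = document.getElementById('analysis-canvas'); // placeholder id
    var ctx = canvas.getContext('2d');
    var img = new Image();
    img.onload = function(){
        canvas.width = img.width;
        canvas.height = img.height;
        ctx.drawImage(img, 0, 0);
    };
    img.src = 'data:image/jpeg;base64,' + base64Data;
}

function onFail(message){
    alert('Camera failed: ' + message);
}
</pre>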
<p>All data is stored through Cordova-sqlite-Storage (Brody, 2016), which uses its own implementation of SQLite for on-device storage. Cordova-sqlite-Storage lets us store data for multiple projects and their corresponding tests as key-value tuples, allowing constant-time data retrieval.</p>
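<p>A minimal sketch of this storage layer is shown below, assuming the Cordova-sqlite-Storage plugin API; the database name and table layout are illustrative, not the app’s exact schema:</p>
<pre>
// Open (or create) the on-device database via the Cordova sqlite plugin.
var db = window.sqlitePlugin.openDatabase({ name: 'colorimetry.db', location: 'default' });

db.transaction(function(tx){
    // Illustrative schema: one row per project, one row per test.
    tx.executeSql('CREATE TABLE IF NOT EXISTS projects (id INTEGER PRIMARY KEY, name TEXT, location TEXT, date TEXT, notes TEXT)');
    tx.executeSql('CREATE TABLE IF NOT EXISTS tests (id INTEGER PRIMARY KEY, project_id INTEGER, name TEXT, date TEXT, description TEXT, rows INTEGER, cols INTEGER)');
    tx.executeSql('INSERT INTO projects (name, location, date, notes) VALUES (?,?,?,?)',
                  ['Field survey', 'Toronto', '2016-10-19', 'demo project']);
});
</pre>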
<h3 id="analytical-algorithm">Analytical Algorithm</h3>
<h4 id="image-data-retrieval">Image data retrieval</h4>
<p>The captured image is drawn on an HTML Canvas, which lets us retrieve and analyze pixel information. Before the first step of the analysis, however, the image of multiple wells must be split into single-well sections. We wrote an algorithm that retrieves image data for every well and compares it to the control well (row 1, column 1), using edge detection based on Sobel operators (locating neighbouring pixels with high contrast) (Gonzalez and Woods, 1992). The method is first applied to the control well to find the x-min, x-max, y-min, and y-max of the well’s circular outline, from which the radius of the control well in the image is calculated. Since all wells are expected to be the same size, the radius of the control well serves as a reference for all other wells. This method works particularly well with the paper-based biosensor used by our Wet Lab, as the black background of the sensor contrasts sharply with the yellow/purple wells. Our code for the edge detection algorithm is shown in Appendix I, and a sketch of how it is driven follows below.</p>
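<p>The sketch below shows one way this routine could be driven, assuming a detector object exposing the findEdges method of Appendix I; the canvas id, contrast threshold, and bounding window are placeholder values:</p>
<pre>
// Sketch: estimate the control-well radius from detected edge points.
var canvas = document.getElementById('analysis-canvas'); // placeholder id
var ctx = canvas.getContext('2d');

detector.pixelData = ctx.getImageData(0, 0, canvas.width, canvas.height);
detector.ctxDimensions = { width: canvas.width, height: canvas.height };
detector.threshold = 25; // contrast threshold, tuned empirically

// bound = [yMin, yMax, xMin, xMax]: a window around the control well.
var edges = detector.findEdges([10, 200, 10, 200]); // placeholder bounds
var xs = edges[0], ys = edges[1];

var xMin = Math.min.apply(null, xs), xMax = Math.max.apply(null, xs);
var yMin = Math.min.apply(null, ys), yMax = Math.max.apply(null, ys);

// A circle's x-extent and y-extent both equal its diameter, so the
// average of the two halves gives the radius used for every well.
var radius = ((xMax - xMin) + (yMax - yMin)) / 4;
</pre>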
<p>The next stage of pixel retrieval crops the image based on the row and column counts and divides the full image into single-well sections. To make sure this division is executed correctly, we advise the user to align the edges of the photo with the edges of the biosensor or well plate.</p>
<p>The edge detection method is applied once more to each single-well section to find its x-min edge. Given the well radius, the exact pixel locations of the remaining wells can then be determined, and the colour values within each circular well are submitted to our analysis algorithm. To ensure that the black background of the biosensor does not overlap the approximated well location and inflate the colour differences, we also shrink the circumference of each well by a small margin, as sketched below.</p>
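<p>As a sketch of the geometry involved, the following illustrative function (assuming an image already cropped to the plate, and a hypothetical 0.9 shrink factor) divides the canvas into a grid and averages the colour inside each shrunken well circle:</p>
<pre>
// Sketch: divide the cropped image into a rows x cols grid of wells and
// average the colour inside each (slightly shrunken) well circle.
function sampleWells(ctx, width, height, rows, cols, radius){
    var cellW = width / cols;
    var cellH = height / rows;
    var wellColours = [];

    for (var r = 0; r < rows; r++){
        wellColours.push([]);
        for (var c = 0; c < cols; c++){
            var cx = (c + 0.5) * cellW; // centre of this grid cell
            var cy = (r + 0.5) * cellH;
            var rad = radius * 0.9;    // shrink to avoid the black background

            // Average RGB over the pixels inside the circle.
            var box = ctx.getImageData(Math.round(cx - rad), Math.round(cy - rad),
                                       Math.round(2 * rad), Math.round(2 * rad));
            var sum = [0, 0, 0], n = 0;
            for (var y = 0; y < box.height; y++){
                for (var x = 0; x < box.width; x++){
                    var dx = x - rad, dy = y - rad;
                    if (dx*dx + dy*dy > rad*rad) continue; // outside the circle
                    var i = (y * box.width + x) * 4;
                    sum[0] += box.data[i];
                    sum[1] += box.data[i + 1];
                    sum[2] += box.data[i + 2];
                    n++;
                }
            }
            wellColours[r].push([sum[0]/n, sum[1]/n, sum[2]/n]);
        }
    }
    return wellColours; // rows x cols array of mean [r,g,b] per well
}
</pre>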
<h4 id="color-difference-detection-delta-e-calculation-">Color Difference Detection (Delta-E calculation)</h4>
<p>To quantify the difference in colour between our control and sample wells, we used the CIE76 colour difference algorithm, which takes two pixels and calculates Delta-E, the Euclidean distance between them in CIELAB colour space, i.e. the square root of the summed squared differences in L, a, and b (Sharma, 2003). The CIE76 algorithm takes CIELAB colour space as input (L = lightness, A = colour channel A, B = colour channel B), whereas smartphone images are commonly saved as RGB (red, green, and blue channels). The first step in our colour analysis was therefore writing JavaScript functions that convert RGB values to CIELAB: the RGB value is first converted to an XYZ value and then to a LAB value using existing mathematical equations (Lindbloom, 2003). Please refer to Appendix II for our colour conversion code.</p>
<p>After converting the colour channels to CIELAB, we implemented the CIE76 Delta-E calculation in JavaScript (Sharma, 2003), as shown in Appendix III.</p>
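<p>As a usage example, the functions from Appendices II and III can be combined to score a single well against the control; the RGB values below are illustrative:</p>
<pre>
// Example: Delta-E between a yellow control well and a purple sample
// well, using the conversion and distance functions of Appendices II-III.
var controlRGB = [230, 200, 40]; // illustrative mean colour of the control well
var sampleRGB  = [140, 60, 130]; // illustrative mean colour of a sample well

var dE = colorDiff(controlRGB, sampleRGB);
// dE is the CIE76 Delta-E; values above roughly 2.3 are just noticeable,
// and larger values indicate a stronger colour change.
</pre>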
<h4 id="concentration-approximation">Concentration approximation</h4>
<p>So far, our algorithms only determine the difference in colour between two images. To interpolate within concentration ranges for each colour-changing molecule, we first had to establish the relationship between colour change and concentration. Our team developed a machine learning regressor using the TensorFlow library, intended to be trained on datasets containing images of colour-changing molecules at known concentrations. Because no sufficiently large dataset was available, we could not train this algorithm on real data.</p>
<p>For the yellow-to-purple colour change seen in our cell-free paper-based biosensor, the concentration of the colour-changing molecule and the intensity of the colour change are linearly related. Given this linear relationship, we created a dummy dataset to train the regressor so that it outputs approximate concentration values. Given an appropriate dataset, the regressor could be re-trained with minimal effort. Moreover, because the regressor is independent of the particular hues present and of the form of the colour-concentration relationship (linear, quadratic, exponential), it can in principle be trained for any apparatus.</p>
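<p>As an illustrative stand-in for the TensorFlow regressor, the sketch below fits a least-squares line to dummy (Delta-E, concentration) calibration points; the numbers are invented for demonstration:</p>
<pre>
// Illustrative sketch (not the TensorFlow regressor): a least-squares
// linear fit of concentration against Delta-E, trained on dummy data
// mirroring the assumed linear relationship.
function fitLinear(deltaE, conc){ // returns [slope, intercept]
    var n = deltaE.length;
    var sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (var i = 0; i < n; i++){
        sx += deltaE[i]; sy += conc[i];
        sxx += deltaE[i] * deltaE[i];
        sxy += deltaE[i] * conc[i];
    }
    var slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    return [slope, (sy - slope * sx) / n];
}

// Dummy calibration points: Delta-E values and made-up concentrations (uM).
var fit = fitLinear([0, 10, 20, 30, 40], [0, 25, 50, 75, 100]);
var predicted = fit[0] * 25 + fit[1]; // concentration at Delta-E = 25
</pre>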
<h3 id="test-cases-and-other-possible-applications">Test Cases and Other Possible Applications</h3>
<p>Most of the test cases run by our team were produced graphically in Photoshop, where we imitated the yellow-to-purple colour change seen in our Wet Lab’s paper-based biosensor for gold detection, as shown in Appendix IV.E.</p>
<p>As explained above, this analysis is applicable to any apparatus and any one-to-one colour change, provided the regressor has been trained accordingly.</p>
<h3 id="platform-support">Platform Support</h3>
<p>Our app was developed and tested for iOS devices. We chose iOS because iPhone camera image quality is more consistent across iPhone versions than across Android devices, which are made by many manufacturers and can have widely differing camera standards.</p>
<p>That said, because the application was created with Cordova, which supports multiple platforms from a single code base, it can also be made available as an Android app and in-browser. The caveat remains that camera consistency would need to be tested on those platforms, since image quality plays a very important role in our app.</p>
<h3 id="references">References</h3>
<ol>
<li>Pardee K, Green AA, Ferrante T, Cameron DE, DaleyKeyser A, Yin P, Collins JJ. 2014. Paper-based synthetic gene networks. Cell. 159(4): 940-954.</li>
<li>Brody, C. 2016. Cordova/PhoneGap sqlite storage adapter, litehelpers/Cordova-sqlite-storage. Retrieved from <a href="https://github.com/litehelpers/Cordova-sqlite-storage">https://github.com/litehelpers/Cordova-sqlite-storage</a></li>
<li>Gonzalez, R. and Woods, R. 1992. Digital Image Processing. Addison Wesley. 414-428.</li>
<li>Lindbloom, J.B. 2003. Useful Colour Equations. Retrieved from <a href="http://www.brucelindbloom.com/index.html?Equations.html">http://www.brucelindbloom.com/index.html?Equations.html</a></li>
<li>Sharma, G. 2003. Digital Color Imaging Handbook. CRC Press.</li>
</ol>
<h3 id="appendix">Appendix</h3>
<h4 id="appendix-i-edge-detection">Appendix I: Edge Detection</h4>
<pre>
this.findEdges = function(bound){ // bound = [yMin, yMax, xMin, xMax] -> [[x],[y]]
    var x, y, index, pixel;
    var left, right, top, bot;

    var detected = false;
    var edgeDataX = [];
    var edgeDataY = [];
    var w = this.ctxDimensions.width;

    for(y = bound[0]; y <= bound[1]; y++){
      for(x = bound[2]; x <= bound[3]; x++){
          // Each pixel occupies 4 entries (RGBA) in the canvas data array.
          index = (x + y*w)*4;
          pixel = this.pixelData.data.slice(index, index+3);

          // RGB values of the four neighbouring pixels.
          left  = this.pixelData.data.slice(index-4, index-1);
          right = this.pixelData.data.slice(index+4, index+7);
          top   = this.pixelData.data.slice(index-(w*4), index-(w*4)+3);
          bot   = this.pixelData.data.slice(index+(w*4), index+(w*4)+3);

          // Mark an edge wherever the colour difference to any
          // neighbour exceeds the contrast threshold.
          detected = colorDiff(pixel, left)  > this.threshold ||
                     colorDiff(pixel, right) > this.threshold ||
                     colorDiff(pixel, top)   > this.threshold ||
                     colorDiff(pixel, bot)   > this.threshold;

          if (detected){
            edgeDataX.push(x);
            edgeDataY.push(y);
            this.plotPoint(x, y);
            detected = false;
          }
      }
    }
    return [edgeDataX, edgeDataY];
};
</pre>
<h4 id="appendix-ii-colourspace-conversion">Appendix II: Colourspace Conversion</h4>
<pre>
function RGBtoLAB(rgb){ // [r,g,b] --> [l,a,b]
    return XYZtoLAB(RGBtoXYZ(rgb));
}

function RGBtoXYZ(rgb){ // [r,g,b] --> [x,y,z]; note: modifies rgb in place
    // Undo the sRGB gamma curve, scaling to 0-100.
    for (var i = 0; i < 3; i++) {
        rgb[i] = rgb[i] / 255.0;
        if (rgb[i] > 0.04045){
            rgb[i] = 100 * Math.pow((rgb[i] + 0.055) / 1.055, 2.4);
            continue;
        }
        rgb[i] = 100 * rgb[i] / 12.92;
    }

    // Linear sRGB to XYZ (D65).
    var X = rgb[0] * 0.4124 + rgb[1] * 0.3576 + rgb[2] * 0.1805;
    var Y = rgb[0] * 0.2126 + rgb[1] * 0.7152 + rgb[2] * 0.0722;
    var Z = rgb[0] * 0.0193 + rgb[1] * 0.1192 + rgb[2] * 0.9505;
    return [X, Y, Z];
}

function XYZtoLAB(xyz){ // [x,y,z] --> [l,a,b]; note: modifies xyz in place
    var ref_xyz = [95.05, 100.0, 108.9]; // XYZ of the D65 reference white

    for (var i = 0; i < 3; i++) {
        xyz[i] = xyz[i] / ref_xyz[i];
        if (xyz[i] > 0.008856){ // 0.008856 = 216/24389
            xyz[i] = Math.pow(xyz[i], 1.0/3);
            continue;
        }
        xyz[i] = (903.3*xyz[i] + 16) / 116; // 903.3 = 24389/27
    }

    var L = 116 * xyz[1] - 16;
    var a = 500 * (xyz[0] - xyz[1]);
    var b = 200 * (xyz[1] - xyz[2]);
    return [L, a, b];
}
</pre>
<h4 id="appendix-iii-euclidean-distance-calculation">Appendix III: Euclidean Distance Calculation</h4>
<pre>
function euclidean_distance(v1, v2){
    return Math.sqrt(Math.pow(v1[0]-v2[0],2) + Math.pow(v1[1]-v2[1],2) + Math.pow(v1[2]-v2[2],2));
}

function colorDiff(p1, p2){ // p1, p2 are [r,g,b]
    // Copy into fresh arrays so RGBtoLAB's in-place edits
    // do not clobber the caller's pixel data.
    var v1 = [p1[0]+0, p1[1]+0, p1[2]+0];
    var v2 = [p2[0]+0, p2[1]+0, p2[2]+0];
    return euclidean_distance(RGBtoLAB(v1), RGBtoLAB(v2));
}

function colorsDiff(data1, data2){ // [r,g,b,a, r,g,b,a, ...]
    var v1, v2, lab1, lab2;
    var sum_diff = 0;
    for (var i = 0; i < data1.length; i += 4) {
        v1 = data1.slice(i, i+3);
        lab1 = RGBtoLAB(v1);

        v2 = data2.slice(i, i+3);
        lab2 = RGBtoLAB(v2);

        sum_diff += euclidean_distance(lab1, lab2);
    }
    // Average Delta-E over all pixels in the region.
    return sum_diff / (data1.length/4);
}
</pre>
<h4 id="appendix-iv-camera-app-images">Appendix IV: Camera App Images</h4>
<p><img src="https://static.igem.org/mediawiki/2016/c/c1/Camera_app.jpg" alt=""></p>
<h3 id="modelling-references">Modelling References</h3>
<ol>
<li>Checa, S.K., and Soncini, F.C. 2011. Bacterial gold sensing and resistance. Biometals. 24(3): 419-427.</li>
<li>Kim, D.E., Chivian, D., and Baker, D. 2004. Protein structure prediction and analysis using the Robetta server. Nucleic Acids Research. 32: W526-W531.</li>
<li>Chaudhury, S., Lyskov, S., and Gray, J.J. 2010. PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta. Bioinformatics. 26(5): 689-691.</li>
<li>Berman, H.M., Westbrook, J., Feng, Z., Gilliland, G., Bhat, T.N., Weissig, H., Shindyalov, I.N., Bourne, P.E. 2000. The Protein Data Bank. Nucleic Acids Research. 28: 235-242.</li>
</ol>
</div></div></div></div>
</div>
 
</html>
 
{{Toronto/footer}}
