iOS Smartphone App for Colorimetric Analysis
Introduction
Colorimetric assays, through their convenience and simplicity, have garnered lasting value in both academia and industry. This year, the University of Toronto iGEM Computational Team has developed an iOS application for colorimetric image analysis, usable on the go for even greater convenience and intuitive handling.
Our app is compatible with colorimetric reaction data from any one-to-one colour change, including the gold detector implemented by our Wet Lab in the form of their cell-free paper-based biosensor. To illustrate: upon the conversion of the yellow substrate (chlorophenol-red-beta-D-galactopyranoside) into the purple product (chlorophenol red) within the paper disks of the biosensor (Pardee et al., 2014), where the intensity of the colour change corresponds directly to the concentration of analyte, one of our Wet Lab members could take an iPhone photograph to automatically determine the concentration of gold present. Many of the test cases used for our app were based on this yellow-to-purple colour change.
Our app utilizes the iPhone camera and colorimetric analysis algorithms to perform quantitative comparisons between images of the control and images of the sample. The values representing differences in colour and intensity are analyzed by a machine learning algorithm trained on image datasets of molecules susceptible to colorimetric changes over various concentrations.
User Experience
Upon opening the app for the first time, the user is asked to create a project, as shown in Appendix IV below, and is prompted to provide a project name, project location, date, and any additional notes, as shown in Appendix V below. Alternatively, if the app has been used before, the user can select a previously created project. A sample project is shown in Appendix VI.
Each project can contain multiple tests; for each test, the user is prompted to fill in the following fields: test name, date, description, number of rows in the well plate, and number of columns in the well plate, as shown in Appendix VII. A list of tests is shown in Appendix VIII. Upon starting the test, the user receives a prompt offering a choice to use an existing photo or take a new photo, after which the chosen photo is saved and analyzed, as shown in Appendix IX. For expediency, we advise users to place a control in the first row and first column of the wells (position 1,1).
Following this, our analytical algorithm is used in combination with the row and column counts to display a table of results showing the differences in colour between the images of the control and the images of the sample. In addition, a table showing estimated concentrations of analyte is generated for the user, as shown in Appendix X.
This data hierarchy was created to support multiple tests for a single large location or project, so that the user can store all tests related to the same area in an organized manner.
In addition, we advise that images be taken under relatively even lighting, as reflections from the wells will lower the accuracy of the results.
Software Development
We chose Apache Cordova, an open-source framework for developing multi-platform apps, to develop our app. Apache Cordova works by wrapping a web application in native wrappers for the selected devices. Further, Apache Cordova allows native device APIs to be accessed freely through plugins, such as the camera plugin used extensively by our app. As Apache Cordova apps are essentially modified web applications, our source code is written in HTML, CSS, and JavaScript, with the exceptions described below.
The Apache Cordova camera plugin allowed us to capture photos in-app with the iPhone camera and, at the same time, make the image accessible to the app. Our team then used the front-end JavaScript framework AngularJS (along with its modules and controllers) and HTML Canvas to seamlessly and dynamically retrieve image data and output the analyzed results on the user’s screen.
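As a brief illustration of this flow (a minimal sketch assuming the standard cordova-plugin-camera API; the canvas ID and option values are illustrative, not our production code):

// Minimal sketch: capture a photo with cordova-plugin-camera and draw it
// onto an HTML Canvas for pixel access. The canvas id is illustrative.
function capturePhotoToCanvas() {
    navigator.camera.getPicture(function (base64Data) {
        var img = new Image();
        img.onload = function () {
            var canvas = document.getElementById('analysis-canvas');
            canvas.width = img.width;
            canvas.height = img.height;
            var ctx = canvas.getContext('2d');
            ctx.drawImage(img, 0, 0);
            // ctx.getImageData(...) now exposes the RGBA pixel array for analysis
        };
        img.src = 'data:image/jpeg;base64,' + base64Data;
    }, function (err) {
        console.log('Camera failed: ' + err);
    }, {
        quality: 80,
        destinationType: Camera.DestinationType.DATA_URL,
        sourceType: Camera.PictureSourceType.CAMERA
    });
}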
All data is stored through Cordova-sqlite-Storage (Brody, 2016), which uses its own implementation of sqlite3 for on-device storage. Cordova-sqlite-Storage made it possible for us to store data for multiple projects and their corresponding tests as key-value tuples, allowing constant-time data retrieval.
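A minimal sketch of this key-value pattern, assuming the standard Cordova-sqlite-Storage API (the database name, table schema, and sample record are illustrative, not the app’s actual schema):

// Minimal sketch of key-value storage with Cordova-sqlite-Storage.
document.addEventListener('deviceready', function () {
    // open (or create) the database; names here are illustrative
    var db = window.sqlitePlugin.openDatabase({ name: 'projects.db', location: 'default' });

    db.transaction(function (tx) {
        tx.executeSql('CREATE TABLE IF NOT EXISTS store (key TEXT PRIMARY KEY, value TEXT)');
        // store one test record under a composite project/test key
        tx.executeSql('INSERT OR REPLACE INTO store (key, value) VALUES (?, ?)',
            ['project-1/test-3', JSON.stringify({ name: 'Gold assay', rows: 4, cols: 6 })]);
    }, function (err) {
        console.log('Storage error: ' + err.message);
    });
});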
Analytical Algorithm
Image data retrieval
The captured image is drawn on an HTML Canvas, giving us the ability to retrieve and analyze pixel information. The first step of the analysis, however, requires splitting the image of multiple wells into sections of single wells. We implemented an algorithm that retrieves image data for every well and compares it to the image data for the control well (row 1, column 1), using Sobel operators for edge detection (locating neighbouring pixels with high contrast) (Gonzalez and Woods, 1992). The method is first applied to the control well to find the x-min, x-max, y-min, and y-max of the well’s circular circumference, after which the radius of the control well in the image is calculated. As all wells are expected to be the same size, the radius of the control well is used as a reference for all other wells. This method is particularly effective in combination with the paper-based biosensor utilized by our Wet Lab, as the black background of the sensor contrasts sharply with the yellow/purple wells. Our code for the edge detection algorithm can be viewed in Appendix I below.
The next stage of pixel retrieval involves cropping the image using the OpenCV GrabCut algorithm and, based on the numbers of rows and columns, dividing the full image into single-well sections. To ensure that this division is executed correctly, we advise the user to align the edges of the photo with the edges of the biosensor or well plate.
The edge detection method is applied once more to the single-well divisions to find the value corresponding to the x-min edge. At this point, given the well radius, the exact pixel-level locations of any of the remaining wells can be determined. In the next step, we submit the colour values found within each circular well to our analysis algorithm. To ensure that the approximated well locations do not overlap the black background of the biosensor, which would inflate the measured colour differences, we also shrink the circumference of each well by a small margin.
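As a concrete illustration of this geometry step, the sketch below derives the control well’s centre and radius from the edge-detection output of Appendix I and lays out sampling circles for the remaining wells; the uniform grid spacing and the shrink margin are assumptions for demonstration, not our exact values:

// Illustrative sketch: derive the control well's centre and radius from the
// edge data returned by findEdges (Appendix I), then locate every well on
// the plate grid. Uniform well spacing and the 10% margin are assumptions.
function locateWells(edgeDataX, edgeDataY, rows, cols) {
    var xMin = Math.min.apply(null, edgeDataX);
    var xMax = Math.max.apply(null, edgeDataX);
    var yMin = Math.min.apply(null, edgeDataY);
    var yMax = Math.max.apply(null, edgeDataY);

    var radius = (xMax - xMin) / 2;   // control well radius
    var cx = (xMin + xMax) / 2;       // control well centre (row 1, col 1)
    var cy = (yMin + yMax) / 2;
    var spacing = 2 * radius;         // assumed centre-to-centre spacing

    var wells = [];
    for (var r = 0; r < rows; r++) {
        for (var c = 0; c < cols; c++) {
            wells.push({
                x: cx + c * spacing,
                y: cy + r * spacing,
                // shrink the sampled circle so it never touches the black background
                radius: radius * 0.9
            });
        }
    }
    return wells;
}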
Color Difference Detection (Delta-E calculation)
In order to quantify the difference in colour between our control and sample wells, we used the CIE76 colour difference algorithm, which takes two pixels and calculates the Delta-E (Euclidean distance) between them [some ref]. However, the CIE76 algorithm takes the CIELAB colour space as input (L = lightness, A = colour channel A, B = colour channel B), whereas smartphone images are commonly saved as RGB (red, green, and blue channels). The first step in our colour analysis algorithm was therefore the creation of JavaScript functions for converting RGB values to the CIELAB colour space. In the conversion process, the smartphone RGB value is first converted to an XYZ value and then to a LAB value, based on existing mathematical equations (Lindbloom, 2003). Please refer to Appendix II to view our code for colour conversion.
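For reference, the CIE76 colour difference is simply the Euclidean distance between the two colours in CIELAB space:

\Delta E_{76} = \sqrt{(L_2 - L_1)^2 + (a_2 - a_1)^2 + (b_2 - b_1)^2}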
Following the conversion of the colour channels to CIELAB, we developed our JavaScript functions based on the CIE76 Delta-E algorithm (Sharma, 2003), as shown in Appendix III.
Concentration approximation
Thus far, the algorithms described are limited to determining the difference in colour between two images. To interpolate within concentration ranges for each molecule susceptible to colorimetric change, we first had to establish a correlation between the colour change and the concentration. Our team developed a machine learning regressor using the TensorFlow library, to be trained on datasets containing images of molecules susceptible to colorimetric change together with their concentrations. Because no sufficiently large dataset was available, it was not possible to train this algorithm on real data.
In the case of the yellow-to-purple colour change visible in our cell-free paper-based biosensor, we determined that the concentration of the molecule susceptible to colour change and the intensity of the colour change are linearly dependent. Given this linear relationship, we created a dummy dataset to train the regressor so that it outputs approximate values for the concentration of the molecules. Given an appropriate dataset, the regressor could be re-trained with minimal effort. Further, because the regressor operates independently of the particular hues present and of the type of relationship between colour and concentration (linear, quadratic, exponential), it can potentially be trained for any apparatus.
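To illustrate the linear case, the sketch below fits concentration against Delta-E by least squares in plain JavaScript; this is a stand-in for the TensorFlow regressor, and the dummy data points are invented for demonstration:

// Illustrative sketch: fit concentration = m * deltaE + b by least squares
// on a dummy dataset, then predict a concentration from a measured Delta-E.
function fitLinear(deltaEs, concentrations) {
    var n = deltaEs.length;
    var sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
    for (var i = 0; i < n; i++) {
        sumX += deltaEs[i];
        sumY += concentrations[i];
        sumXY += deltaEs[i] * concentrations[i];
        sumXX += deltaEs[i] * deltaEs[i];
    }
    var m = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
    var b = (sumY - m * sumX) / n;
    return function predict(deltaE) { return m * deltaE + b; };
}

// Dummy training data: Delta-E values and known concentrations (arbitrary units)
var predict = fitLinear([5, 12, 20, 28, 37], [0.1, 0.25, 0.4, 0.55, 0.75]);
console.log(predict(16)); // estimated concentration for a Delta-E of 16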
Test Cases and Other Possible Applications
Most of the test cases run by our team were produced graphically in Photoshop, in which we imitated the yellow-to-purple colour change visible in our Wet Lab’s paper-based biosensor for gold detection, as shown in Appendix VI.
As explained in the section directly above, this analysis is applicable to any apparatus and any one-to-one colour change, provided that the regressor has been trained.
Platform Support
Our app was developed and tested for iOS devices. The rationale behind this decision is that iPhone camera image quality is more consistent across different iPhone versions than across Android devices, which are manufactured by many different companies and may have greatly differing camera standards depending on the device.
However, given that the application was created using Cordova, which allows development for multiple platforms from a single code base, it can also be made available as an Android app and in-browser. The caveat remains that camera performance consistency would need to be tested, as image quality plays a very important role in our app.
References
- Pardee, K., Green, A.A., Ferrante, T., Cameron, D.E., DaleyKeyser, A., Yin, P., and Collins, J.J. 2014. Paper-based synthetic gene networks. Cell. 159(4): 940-954.
- Brody, C. 2016. Cordova/PhoneGap sqlite storage adapter, litehelpers/Cordova-sqlite-storage. Retrieved from https://github.com/litehelpers/Cordova-sqlite-storage
- Gonzalez, R. and Woods, R. 1992. Digital Image Processing. Addison Wesley. 414-428.
- Lindbloom, J.B. 2003. Useful Colour Equations. Retrieved from http://www.brucelindbloom.com/index.html?Equations.html
- Sharma, G. 2003. Digital Colour Imaging. New York, Webster: CRC Press.
Appendix
Appendix I: Edge Detection
this.findEdges = function(bound){ // bound = [top, bottom, left, right]; returns [[x coords], [y coords]]
    var x = 0;
    var y = 0;
    var index;
    var pixel;
    var left;
    var right;
    var top;
    var bot;
    var detected = false;
    var edgeDataX = [];
    var edgeDataY = [];
    for (y = bound[0]; y <= bound[1]; y++) {
        for (x = bound[2]; x <= bound[3]; x++) {
            // four bytes (RGBA) per pixel, row-major over the canvas width
            index = (x + y * this.ctxDimensions.width) * 4;
            pixel = this.pixelData.data.slice(index, index + 3);
            // RGB values of the four neighbouring pixels
            left = this.pixelData.data.slice(index - 4, index - 1);
            right = this.pixelData.data.slice(index + 4, index + 7);
            top = this.pixelData.data.slice(index - (this.ctxDimensions.width * 4), index - (this.ctxDimensions.width * 4) + 3);
            bot = this.pixelData.data.slice(index + (this.ctxDimensions.width * 4), index + (this.ctxDimensions.width * 4) + 3);
            // mark an edge wherever the colour difference to any neighbour exceeds the threshold
            if (colorDiff(pixel, left) > this.threshold ||
                    colorDiff(pixel, right) > this.threshold ||
                    colorDiff(pixel, top) > this.threshold ||
                    colorDiff(pixel, bot) > this.threshold) {
                detected = true;
            }
            if (detected) {
                edgeDataX.push(x);
                edgeDataY.push(y);
                this.plotPoint(x, y);
                detected = false;
            }
        }
    }
    return [edgeDataX, edgeDataY];
};

Appendix II: Colourspace Conversion
function RGBtoLAB(rgb){ // [r,g,b] --> [l,a,b]
    return XYZtoLAB(RGBtoXYZ(rgb));
}

function RGBtoXYZ(rgb){ // [r,g,b] --> [x,y,z]; note: modifies its input array
    for (var i = 0; i < 3; i++) {
        rgb[i] = rgb[i] / 255.0;
        // inverse sRGB companding
        if (rgb[i] > 0.04045) {
            rgb[i] = 100 * Math.pow((rgb[i] + 0.055) / 1.055, 2.4);
            continue;
        }
        rgb[i] = 100 * rgb[i] / 12.92;
    }

    // linear sRGB to XYZ (D65 reference white)
    var X = rgb[0] * 0.4124 + rgb[1] * 0.3576 + rgb[2] * 0.1805;
    var Y = rgb[0] * 0.2126 + rgb[1] * 0.7152 + rgb[2] * 0.0722;
    var Z = rgb[0] * 0.0193 + rgb[1] * 0.1192 + rgb[2] * 0.9505;
    return [X, Y, Z];
}

function XYZtoLAB(xyz){ // [x,y,z] --> [l,a,b]
    var ref_xyz = [95.05, 100.0, 108.9]; // XYZ of the D65 white point

    for (var i = 0; i < 3; i++) {
        xyz[i] = xyz[i] / ref_xyz[i];
        if (xyz[i] > 0.008856) { // 0.008856 <=> 216.0/24389
            xyz[i] = Math.pow(xyz[i], 1.0 / 3);
            continue;
        }
        xyz[i] = (903.3 * xyz[i] + 16) / 116; // 903.3 <=> 24389.0/27
    }

    var L = 116 * xyz[1] - 16;
    var a = 500 * (xyz[0] - xyz[1]);
    var b = 200 * (xyz[1] - xyz[2]);
    return [L, a, b];
}

Appendix III: Euclidean Distance Calculation
function euclidean_distance(v1, v2){
    return Math.sqrt(Math.pow(v1[0] - v2[0], 2) + Math.pow(v1[1] - v2[1], 2) + Math.pow(v1[2] - v2[2], 2));
}

function colorDiff(p1, p2){ // p1 and p2 are [r,g,b] pixels
    // copy the inputs into plain arrays, since RGBtoXYZ modifies its argument in place
    var v1 = [p1[0], p1[1], p1[2]];
    var v2 = [p2[0], p2[1], p2[2]];
    return euclidean_distance(RGBtoLAB(v1), RGBtoLAB(v2));
}

function colorsDiff(data1, data2){ // data arrays are RGBA: [r,g,b,a,r,g,b,a,...]
    var v1;
    var v2;
    var lab1;
    var lab2;
    var sum_diff = 0;
    for (var i = 0; i < data1.length; i += 4) {
        // copy into plain arrays so RGBtoXYZ's in-place math is not clamped
        // when the input is a typed array (e.g. canvas ImageData)
        v1 = [data1[i], data1[i + 1], data1[i + 2]];
        lab1 = RGBtoLAB(v1);

        v2 = [data2[i], data2[i + 1], data2[i + 2]];
        lab2 = RGBtoLAB(v2);

        sum_diff += euclidean_distance(lab1, lab2);
    }
    // average per-pixel Delta-E over the region
    return sum_diff / (data1.length / 4);
}
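As a hypothetical usage example (the canvas ID and well coordinates are illustrative), the average Delta-E between a control well and a sample well can be computed from canvas pixel data as follows:

// Hypothetical usage: compare a control region and a sample region of the
// canvas. Coordinates and sizes are illustrative.
var ctx = document.getElementById('analysis-canvas').getContext('2d');
var controlData = ctx.getImageData(10, 10, 40, 40).data;  // control well crop
var sampleData = ctx.getImageData(110, 10, 40, 40).data;  // sample well crop
var avgDeltaE = colorsDiff(controlData, sampleData);
console.log('Average Delta-E: ' + avgDeltaE);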
Appendix IV: Projects page
https://static.igem.org/mediawiki/2016/5/50/T--Toronto--2016_appendix4.png

Appendix V: Creating New Project
https://static.igem.org/mediawiki/2016/9/9d/T--Toronto--2016_appendix5.png

Appendix VI: Sample Project
https://static.igem.org/mediawiki/2016/6/68/T--Toronto--2016_appendix6.png

Appendix VII: Creating New Test
https://static.igem.org/mediawiki/2016/e/e9/T--Toronto--2016_appendix7.png

Appendix VIII: List of Tests
https://static.igem.org/mediawiki/2016/a/ad/T--Toronto--2016_appendix8.png

Appendix IX: Input Image
https://static.igem.org/mediawiki/2016/f/f6/T--Toronto--2016_appendix9.png

Appendix X: Analysis Results
https://static.igem.org/mediawiki/2016/5/58/T--Toronto--2016_appendix10.png
Modeling Protein Folding with Rosetta
Introduction
In order to model and compare the gold(III)-binding ability of a GolS homodimer and two mutant variants, we used the Robetta Full-chain Protein Structure Prediction Server, as well as the Rosetta and PyRosetta modeling software tools. GolS, a MerR family transcriptional regulator, serves as a kind of natural “gold detector” within S. enterica, but it lacks specificity for gold and is susceptible to cross-reactions with copper(I) and silver(I) (Checa and Soncini, 2011). Further, there is no extant crystal structure for GolS. In conjunction with our Wet Lab’s efforts to improve the gold selectivity and specificity of GolS, we have modeled the gold(III)-binding potential of a GolS homodimer and two mutated variants (P118A, or an alanine-to-phenylalanine point mutation at residue 118, and A113T, or an alanine-to-threonine point mutation at residue 113). In particular, these mutations were designed to shrink the metal ion binding pocket of GolS. GolS shares a helix-turn-helix metal ion binding domain and almost all major catalytic residues with another MerR transcriptional regulator, CueR, which is sensitive to copper. These catalytic residues include Thr13, Lys15, Arg18, Tyr20, Asn34, Tyr36, Arg37, and Arg54; the one exception is Ser4 in GolS, which corresponds to the functionally opposed Gly4 in CueR. Given these similarities, our Wet Lab hypothesized that the size and shape of the metal ion binding pocket of MerR family transcriptional regulators may confer binding specificity for a particular metal ion.
Modeling the GolS Homodimer with Rosetta
We modeled GolS as a homodimer for two reasons: firstly, many MerR family transcriptional regulators function as homodimers, and secondly, we predict that monomer-monomer interactions between Cys112 and Cys120 from one chain and Ser77 from the other (on each side) are necessary for chelation of the metal ion, such that the gold-binding potential of a GolS monomer could not usefully be measured. To model GolS as a homodimer, we inputted the amino acid sequence of GolS, our desired symmetry constraints, and a CueR template, and used the Robetta 3-D modeling web service to generate a 3D structure for a GolS homodimer. Robetta employs the ‘Ginzu’ method of domain prediction to screen for regions of the query sequence that are homologous to extant experimentally verified models via BLAST, PSI-BLAST, FFAS03, 3D-Jury, and multiple-sequence-based alignments. This is followed by alignment of the generated model with the template through the K*Sync method, which utilizes secondary-structure prediction and residue profile-profile comparison (Kim, Chivian, and Baker, 2004). A still image of the generated GolS homodimer is attached below in Appendix I.
Modeling Mutant Variants in PyRosetta
In order to model the mutant variants of GolS, we used PyRosetta’s “pose_from_pdb” function to generate, from the Robetta-generated Protein Data Bank (PDB) file of the GolS homodimer, a “pose” structure that could be manipulated within PyRosetta. Following “pose” generation, we used the “mutate_residue” function to create two mutant GolS variants: a P118A variant, in which the Ala118 residue in GolS was converted to Phe118, and an A113T variant, in which the Ala113 residue was converted to Thr113; PyRosetta automatically accounts for the resulting changes to rotamers (Chaudhury, Lyskov, and Gray, 2010).
Modeling Gold Binding
In the final stage of our modeling, we created a model for our ligand, gold(III), within Rosetta, modeled the ability of the GolS homodimer to bind gold(III), and compared the gold-binding abilities of the two mutant versions of GolS, P118A and A113T. First, we downloaded from the RCSB Protein Data Bank a structural data file (SDF) containing information on all documented instances of Au3+ functioning as a ligand (Berman et al., 2000). Following this, we executed Rosetta’s “molfile_to_params” function to generate a “params” file for gold(III) containing the structural and charge information that allows ligand modeling within Rosetta. Because gold(III) is a metal ion and is not expected to contain hydrogens, we were able to omit an intermediate cleaning step that is usually required prior to generation of a “params” file. Finally, we used Rosetta’s automatic ligand-docking function to compare the gold(III)-binding potential of GolS, P118A, and A113T. [insert results here once generated]
Modeling the Size of the Metal Ion Binding Pocket
[to be removed if not replaced]
References
- Checa, S.K., and Soncini, F.C. 2011. Bacterial gold sensing and resistance. BioMetals. 24(3): 419-427.
- Kim, D.E., Chivian, D., and Baker, D. 2004. Protein structure prediction and analysis using the Robetta server. Nucleic Acids Research. 32: W526-W531.
- Chaudhury, S., Lyskov, S., and Gray, J.J. 2010. PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta. Bioinformatics. 26(5): 689-691.
- Berman, H.M., Westbrook, J., Feng, Z., Gilliland, G., Bhat, T.N., Weissig, H., Shindyalov, I.N., and Bourne, P.E. 2000. The Protein Data Bank. Nucleic Acids Research. 28: 235-242.
Appendix
Appendix I: GolS Homodimer
https://static.igem.org/mediawiki/2016/e/ed/T--Toronto--2016_GolS_homodimer.png