AlgAranha Team USP_UNIFESP-Brazil
The twelfth iGEM edition hosts the third InterLab Study. This study focuses on the characterization of standard biological parts, and, for standard parts, reproducibility and repeatability of behaviour are fundamental. For instance, even well-characterized promoters in a given strain of E. coli may behave quite differently in another strain. Acknowledging this challenge, the InterLab Study gathers experiments from all around the world to provide a more unified understanding of the fundamental building blocks of Synthetic Biology. Until last year, each research team used its own strains, plasmids and protocols; this year, in an attempt to standardize the data, specific protocols and calibration samples were provided to every iGEM team taking part in the 2016 InterLab. With this approach, the community can build a rich knowledge base of standard biological parts, together with several case studies of different protocols and other experimental details. The value this has for the whole Synthetic Biology community is beyond doubt.
We performed not only the standard plate reader, cuvette-based and flow cytometry assays, but also tested for better measuring conditions (LB versus M9 media) and for alternative methods, ranging from DIY approaches (digital camera and fluorimeter-based methods) to single-cell analysis by fluorescence microscopy. We also evaluated the promoter strength of all devices in Relative Promoter Units [2] using DH5α E. coli harbouring all devices and controls. The results show interesting differences: Device 2 (J23106) shows roughly half the strength expected from the original library. Thus, we fulfilled both the InterLab study and the extra-credit requirements by searching for optimized measurement protocols and by developing cheaper, more accessible approaches for assessing promoter strength.
Test Devices and controls:
We received three Test Devices and one positive control derived from the Anderson library, a constitutive promoter library generated by single mutations that affect promoter strength in different ways. The devices combine an Anderson promoter, an RBS, a GFP reporter gene and a terminator. The negative control consists only of an inert sequence derived from the TetR operator. All devices and controls use the high-copy-number pSB1C3 plasmid as backbone. Following the iGEM transformation protocol, all plasmids were transformed into DH5α E. coli cells, which were used as samples for all the different experiments. You can find more information about the devices below and in Figure 1.
Positive control (PC) - I20270 in pSB1C3
Negative control (NC) - R0040 in pSB1C3
Test Device 1 (TD1) - J23101.B0034.E0040.B0015 in pSB1C3
Test Device 2 (TD2) - J23106.B0034.E0040.B0015 in pSB1C3
Test Device 3 (TD3) - J23117.B0034.E0040.B0015 in pSB1C3
Multi-scale combined experiments:
In order to best characterize the BioBricks, we performed experiments with different sensitivity thresholds, ranging from macroscopic analysis of photos taken with a cellphone camera to single-cell analysis in a flow cytometer. The main rationale was to compare methods focused on different scales of the same system, providing both general and specific information about the behavior of the selected promoters. Furthermore, we wanted to try new inexpensive methods, such as macroscopic analysis of regular images (taken with a cellphone, for example) and DIY-fluorimeter analysis. We present below an overview of the multi-scale approaches chosen by our team during the InterLab, from the macroscopic to the single-cell microscopic view:
(i) Macroscopic Analysis: Cellphone Photos quantitatively analyzed by GIMP and ImageJ open-source software
(ii) Population Analysis I: Cuvette-based measurements on Fluorescence Spectrometer
(iii) Population Analysis II: Plate Reader Assay and Relative Promoter Units (RPU) calculation
(iv) Population Analysis III: Do-It-Yourself Fluorimeter-based Assay (Not finished yet. However, you can see our progress with the hardware development HERE (link))
(v) Sub-Population/Single Cell Analysis: Flow Cytometry Assay and Relative Promoter Units (RPU) calculation
(vi) Microscopic Analysis/Single Cell: Fluorescence Microscopy and quantitative analysis by ImageJ
Imagine you are a Biohacker or someone very interested in Science stuff, but you have no money… How could you avoid expensive high-end equipment and yet, still obtain some data about the promoters you love?
To do so and sketch out our promoters’ strengths, we followed and updated the 2015 USP_Brazil iGEM team approach for an inexpensive and quick analysis: taking digital photos and analyzing them with open-source image-processing software (GIMP and ImageJ).
All test devices and controls were grown on both solid (LB-agar) and liquid (LB and M9) media, and photos were taken with a regular cellphone under fluorescent white or blue light lamps (the latter to excite the GFP reporter molecules). The choice to compare LB and M9 liquid media was based on numerous reports of LB autofluorescence influencing measurements; we wanted to check whether this effect would be strong enough to be detected visually.
By direct inspection under the blue light lamp, we can observe a huge difference between M9 and LB samples (Figure 2). While different degrees of GFP expression are easy to see in M9, it is almost impossible to distinguish them in LB due to its intrinsic fluorescence. Even so, on both media TD1 seems to be the strongest promoter, followed by TD2, which is similar to PC and stronger than TD3 (easier to see in M9). The last test device, TD3, seems to behave very similarly to the negative control. To sum up, at first glance, our promoter strength ranking is:
Figure 2 Comparison of Test Device fluorescence in M9 and LB. While M9 allows us to easily compare fluorescence intensities, the same is not true for LB samples due to its autofluorescence.
Figure 3 Green intensity for each Test Device measured with GIMP. All measurements in LB show similar values due to the autofluorescence effect.
In order to obtain more data about relative promoter expression by macroscopic visual analysis, we used the ImageJ 3D plugin (Interactive 3D Surface Plot) to create an intensity heatmap based on photos of devices grown in liquid culture (M9) and on LB-agar plates. As we can see in Figures 4 and 5, the same ranking observed before persists. This method, together with the GIMP image analysis, represents an easy, quick and inexpensive way of outlining promoter strength.
Figure 4 A heatmap of fluorescence intensity generated by ImageJ. Above: M9 liquid cultures with all devices were grown for 16 hours, photographed under a blue light lamp and transformed into a heatmap on ImageJ; Below: The heatmap can be graphically represented by a 3D interactive surface plot.
Figure 5 A heatmap of fluorescence intensity generated by ImageJ. Above: Single colonies for each device were streaked on an LB-Agar plate, grown for 16 hours, photographed under a blue light lamp and transformed into a heatmap by ImageJ; Below: The heatmap can be graphically represented by a 3D interactive surface plot.
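For teams that prefer scripting over a GUI, the green-channel measurement we performed in GIMP can be sketched in a few lines of Python. This is only an illustrative sketch assuming Pillow and NumPy are installed; the file name and ROI coordinates are hypothetical placeholders, not our actual images.

```python
# Illustrative sketch (not our exact analysis): mean green-channel
# intensity inside a rectangular region of interest of a cellphone photo.
# File name and ROI coordinates are hypothetical placeholders.
import numpy as np
from PIL import Image

def mean_green_intensity(photo_path, roi):
    """Return the mean green-channel value inside roi = (left, upper, right, lower)."""
    img = Image.open(photo_path).convert("RGB")
    pixels = np.asarray(img.crop(roi), dtype=float)
    return pixels[:, :, 1].mean()  # index 1 = green channel

# Compare tubes photographed under the blue light lamp (hypothetical ROIs).
rois = {"TD1": (100, 200, 300, 400),
        "TD2": (350, 200, 550, 400),
        "NC":  (600, 200, 800, 400)}
for device, roi in rois.items():
    print(device, round(mean_green_intensity("m9_tubes_blue_light.jpg", roi), 1))
```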
To sum up, we have shown that simple image analysis based on regular digital photos and common open-source software can provide an overview of promoter strength at the macro scale. We have also shown the importance of choosing the correct medium for measurements, highlighting the high background found in LB. We will now compare the fluorescence patterns found in this macroscopic, static analysis to those found in the study of dynamic GFP expression over time, along the growth curve of a bacterial population.
Analysis of GFP expression on a fluorescence spectrometer allowed us to observe the expression dynamics of our test devices at the population scale. Data acquisition was performed on a Perkin Elmer LS55 fluorescence spectrometer. Samples were grown overnight on a shaker (37 °C), diluted in cuvettes and measured for fluorescence every hour over 6 hours; our cells were in exponential growth phase after 2 hours of growth. The spectrometer was configured for 250-500 nm excitation and 480-750 nm emission, using 10/10 slits and a 1% attenuation filter. The optical density (OD) was measured on a standard spectrophotometer.
We analysed GFP expression for FITC, TD2, TD3, PC and NC. Test Device 1 was initially missing from our InterLab kit and was sent to us separately a few weeks later. As this experiment was done in a different lab (a collaboration between Brazilian universities) before TD1 arrived, data for this device are not shown. Furthermore, samples were kept on ice for approximately 10 hours between absorbance quantification and fluorescence measurement, because the fluorimeter was located in another city (yes, the team travelled many miles just to get their samples measured!). This may have led to abnormal behaviour, so the quantifications shown below are not directly comparable with the rest of our data. However, this measurement still gives a sense of the relative strengths of the Test Devices!
We used the calibration solutions provided (LUDOX for absorbance and FITC for fluorescence; Figures 6 and 7) to help normalize data gathered from different laboratories. Furthermore, all fluorescence measurements compared emission wavelengths ranging from 480 nm to 745 nm in order to find the best output wavelength (Figures 6 to 12). The absorbance data (OD600, Figure 13) were used to calculate the GFP/OD600 ratio at each time point for all test devices (Figures 14 and 15).
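To make the normalization step concrete, here is a minimal sketch of how a FITC standard curve can be used to convert arbitrary fluorescence units into FITC-equivalent values. The calibration numbers are placeholders, not our measured data, and the linear fit is only one simple choice of model.

```python
# Minimal sketch of a FITC standard-curve conversion. The concentration
# and fluorescence arrays are placeholders, not our measured values.
import numpy as np

fitc_conc = np.array([50.0, 25.0, 12.5, 6.25, 3.125, 1.5625])   # uM, serial dilution
fitc_fluo = np.array([52000, 26300, 13200, 6700, 3400, 1750])    # arbitrary units

# Fit fluo = a * conc + b through the calibration points.
a, b = np.polyfit(fitc_conc, fitc_fluo, 1)

def au_to_fitc_equivalent(fluo_au):
    """Convert an arbitrary-unit fluorescence reading to a FITC-equivalent concentration."""
    return (fluo_au - b) / a

print(round(au_to_fitc_equivalent(10000), 2))  # e.g. a test-device reading
```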
Figure 6. Fluorescence emission of FITC over the 480-700 nm range. The FITC concentrations are the same as in the standard curve; FITC1-m has the highest concentration and FITC12-m the lowest.
Figure 7. FITC standard curve
Figure 8. Fluorescence emission of NC over the 480-700 nm range.
Figure 9. Fluorescence emission of PC over the 480-700 nm range.
Figure 10. Fluorescence emission of TD2 over the 480-700 nm range.
Figure 11. Fluorescence emission of TD3 over the 480-700 nm range.
Figure 12. Fluorescence of all test devices (in MEFLs, normalized by FITC)
Figure 13. Abs600 measured for each Test Device
Figure 14 Fl/Abs600 for each device in LB medium with respective error bars. Note the aberrant error bars for devices with low or absent promoter strength.
Figure 15. GFP/OD600 values for each device. We can notice the negative values for NC and TD3.
The data show that weak promoters are more prone to noise, probably caused by the combination of long exposure of the bacterial cells to cold (more than 10 hours on ice) and background noise from the LB medium (autofluorescence). Moreover, the relationship between the Test Devices remains the same as observed in the macroscopic analysis. To sum up, we can observe:
Analysis of GFP expression in multi-well microplates allows us to observe the expression dynamics of our test devices at the population scale. It is less sensitive than flow cytometry; however, it gives a better understanding of the sample's average regulatory behavior over time. In our initial test we successfully followed the iGEM plate reader protocol (see Protocols Section); however, based on the literature and on our previous findings, we believe a few changes would make the protocol less time-consuming and its measurements more reliable:
(i) Assessing technical and biological triplicates. The layout provided on the iGEM website (see Protocols Section) only includes biological duplicates of each sample, whereas triplicates would be better suited for statistical analysis of the generated data. Moreover, technical replicates are essential in measurement experiments, and the provided layout lacks such internal controls (a minimal sketch of an alternative layout follows this list).
(ii) Setting the machine for automatic measurements. We understand that growing the bacteria on a shaker before sampling is important for comparing results between cuvette-based and microplate-based assays. However, setting the plate reader for automatic measurements over 6 hours would make the protocol less time-consuming and reduce measurement errors, since it minimizes sample handling and practices that could change cell behavior, such as storing samples on ice for too long.
(iii) Using M9 instead of LB or TB. Many research groups (and our previous results, see Figure 2) have reported that LB has high levels of autofluorescence compared to M9, which can affect the measurement of weak promoters. Some plate readers (such as ours, a VictorX3) are so sensitive that LB is not even recommended for measurements. We tested both M9 and LB in our plate reader assays and found that, for lower fluorescence levels (weak- to medium-strength promoters), the error bars are much larger in LB than in M9 (Figures 19 and 23).
Thus, in order to improve the iGEM protocol for the next generation, we ran the plate reader experiments following both the standard iGEM protocol and a tweaked one that uses M9 as growth medium, with the plate reader set for automatic measurements. We also used the calibration solutions provided (LUDOX for absorbance and FITC for fluorescence; Figure 16) in both experiments, to help normalize data gathered from different laboratories. The overnight liquid culture was diluted before the start of the experiment, and our cells remained in exponential growth phase throughout the 6-hour experiments. We present below (Figures 17 to 20 for the LB assay and Figures 21 to 24 for the M9 assay) the results for absorbance (OD600), fluorescence, and blank-corrected GFP / blank-corrected OD600 for each assay.
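As a concrete illustration of suggestion (i), a 96-well layout with three biological and three technical replicates per device could be generated as follows. The well positions are only an example, not the layout we actually used.

```python
# Example 96-well layout: 3 biological x 3 technical replicates per device,
# plus medium-only blanks. Positions are illustrative, not the layout we used.
devices = ["NC", "PC", "TD1", "TD2", "TD3"]
layout = {}
for row, device in zip("ABCDE", devices):        # one plate row per device
    for bio in range(3):                          # biological replicate (colony)
        for tech in range(3):                     # technical replicate (well)
            col = 1 + bio * 3 + tech
            layout[f"{row}{col}"] = f"{device}_bio{bio + 1}_tech{tech + 1}"
layout.update({f"H{col}": "blank_medium" for col in range(1, 4)})

for well in sorted(layout):
    print(well, layout[well])
```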
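The blank correction itself is straightforward; a minimal sketch (with placeholder numbers, not our readings) of the quantity plotted in Figures 19 and 23 is shown below.

```python
# Minimal sketch of the blank-corrected Fl/Abs600 ratio computed for each
# hourly time point of the plate reader assay. Numbers are placeholders.
import numpy as np

abs600_sample = np.array([0.06, 0.10, 0.17, 0.28, 0.45, 0.70, 1.00])  # hourly OD600
fluo_sample   = np.array([300, 620, 1300, 2700, 5600, 10800, 18500])  # hourly fluorescence (AU)
abs600_blank, fluo_blank = 0.04, 150                                   # medium-only wells

fl_per_od = (fluo_sample - fluo_blank) / (abs600_sample - abs600_blank)
print(np.round(fl_per_od, 1))   # blank-corrected GFP/OD600 per time point
```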
Figure 16 FITC Standard Curve obtained for all Plate Reader experiments in order to normalize fluorescence data
LB Assay – Standard Protocol:
Figure 17 Abs600 (blank corrected) measured for each test device on LB medium
Figure 18 Fluorescence (Blank corrected) over time of all test devices on LB medium
Figure 19 Fl/Abs600 for each device on LB medium with respective error bars
Figure 20 Fl/Abs600 normalized to NC and transformed to log2, allowing us to better observe the dynamics of GFP expression for each test device in LB medium
M9 Assay – Tweaked Protocol:
Figure 21 Abs600 (blank corrected) for all test devices on M9 medium
Figure 22 Fluorescence (blank corrected) of each test device on M9 media
Figure 23 Fl/Abs600 for each device in M9 medium with respective error bars. Note that the error bars for the positive control, Test Device 2 and Test Device 3 are smaller than in the LB experiment.
Figure 24 Fl/Abs600 normalized to NC and transformed to log2, allowing us to better observe the dynamics of GFP expression for each test device in M9 medium
Despite the similar trends observed in M9 and LB, promoters of low to medium strength are more prone to noise caused by LB autofluorescence (Figure 19), an effect not observed in the M9 assay (Figure 23). Moreover, the relationship between the Test Devices in terms of promoter strength remains the same as in the macroscopic analysis. However, the finer resolution of the plate reader allowed us to resolve some ambiguities in our ranking:
Relative Promoter Units (RPU) Analysis
In order to compare our results with the original ones from the Anderson library, we converted our data from both the LB and M9 assays to Relative Promoter Units (RPU). Below is an explanation from Kelly et al., 2009 [1], introducing this unit and its importance for the standardization of biological parts:
“We found that the absolute activity of BioBrick promoters varies across experimental conditions and measurement instruments. We choose one promoter (BBa_J23101) to serve as an in vivo reference standard for promoter activity. We demonstrated that, by measuring the activity of promoters relative to BBa_J23101, we could reduce variation in reported promoter activity due to differences in test conditions and measurement instruments by ~50%. We defined a Relative Promoter Unit (RPU) in order to report promoter characterization data in compatible units and developed a measurement kit so that researchers might more easily adopt RPU as a standard unit for reporting promoter activity.” “(…) an important consequence of considering a relative unit of measurement for reporting promoter activities is that many of the difficult-to-measure model parameters that might change with changing environmental conditions can be cancelled when calculating relative promoter activities.”
Thus, as our bacteria were in log phase during the measurements, we could use the following equation to calculate the RPU for each tested device:
Notice that this measure was evaluated with respect to Device 1; thus we were able to measure the promoter strength of Device 2 (J23106), Device 3 (J23117) and the Positive Control (a 1 bp mutant of J23114). In compliance with the Anderson Collection, we set the promoter strength of Device 1 (J23101) to 0.7. Figures 25, 26 and 27 show, respectively, the Anderson RPU values published in the Registry of Parts (http://parts.igem.org/Promoters/Catalog/Anderson), our obtained RPUs, and a graphical comparison between them.
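The original equation image is not reproduced here; a reconstruction of the simplified steady-state form from Kelly et al. [1], assuming identical GFP maturation and degradation between devices and cells in exponential growth, reads:

$$\mathrm{RPU}_{\phi} \;=\; \frac{\left(F_{\phi} - F_{\mathrm{NC}}\right)/\mathrm{Abs}_{\phi}}{\left(F_{\mathrm{J23101}} - F_{\mathrm{NC}}\right)/\mathrm{Abs}_{\mathrm{J23101}}}$$

where $F$ is the fluorescence and $\mathrm{Abs}$ the OD600 of each culture at the same time point, and the subscript $\phi$ denotes the promoter being tested.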
Figure 25 Anderson Collection Original RPU values
Figure 26 Our RPU values calculated for all Test Devices from the plate reader experiments on the steady-state exponential growth phase
Figure 27 A graphical comparison between the official RPU values from the Anderson's collection and our experimental data
As observed, the ranking is the same as in previous experiments; however, when we compare the RPUs obtained in this analysis with those provided for the Anderson library, we notice some remarkable differences:
(i) The TD2 value is lower than expected (almost half the value from the Anderson library), with a higher value in LB than in M9 (close to a two-fold difference).
(ii) TD3 is more than 10-fold lower than expected and has similar values in both LB and M9.
(iii) The PC is stronger than the J23114 promoter from which it was derived and its value is similar between LB and M9 treatments.
The differences between our samples and the Anderson standards may be caused by a combination of experimental choices such as medium, E. coli strain, plasmid type, RBS strength, reporter gene and terminator. Concerning the differences observed between the M9 and LB treatments (Figure 26), we hypothesized that growth rates may vary between media and that the 10% glycerol used as carbon source could have hindered bacterial growth. Moreover, the differences could have been overestimated by the LB noise effect (autofluorescence) in those assays.
To sum up, the plate reader experiment allowed us to observe the dynamic behaviour of the Test Devices at the population scale and to corroborate the strength ranking observed in previous assays. Furthermore, the RPU calculation allowed us to compare our data both with the characterized Anderson library and between our own treatments, revealing that even though these are constitutive promoters, their strength can be affected by a myriad of experimental design choices. We conclude that a genetic part is not just a piece of DNA isolated from everything else; it is deeply affected by its surroundings (the intracellular bacterial state, thermodynamic constraints, etc.), sometimes changing its behaviour in unexpected ways, and that is exactly what we have seen throughout our experiments.
In order to better understand our promoters' behaviour, we took one step further into the stochastic microscopic world. And stochasticity is exactly the word we were looking for in this specific assay. The main question was: are there other behaviours underlying the average population signal we saw in the plate reader assay? Are there subpopulations with different promoter strengths within a single colony? From what we know about biology, biophysics and the importance of stochastic noise for life, the answer was probably yes. So we had to test it.
We followed the iGEM protocol and grew our diluted samples (biological triplicates) in LB for 6 hours, washing them with PBS before measurement to avoid reading interference. The machine was calibrated with two different kinds of beads before the test (one of them recommended by the iGEM protocol for normalizing arbitrary fluorescence units to MEFLs, Molecules of Equivalent Fluorescein), and the population gate was set to minimize the influence of debris on the measurements. Results are shown below in Figures 28 (Negative Control), 29 (Positive Control), 30 (TD1), 31 (TD2), 32 (TD3) and 33 (histogram overlay). Figure 34 summarizes the flow cytometer data, directly comparing the promoters' strengths.
Figure 28 Negative control fluorescence after 6 hours. Above: selected area for all test device analyses on an FSC-H (y-axis) vs FSC-A (x-axis) dot plot. Below: histogram of cell count (y-axis) vs fluorescence (FITC, x-axis)
Figure 29 Positive Control fluorescence after 6 hours. Above: selected area for all test device analyses on an FSC-H (y-axis) vs FSC-A (x-axis) dot plot. Below: histogram of cell count (y-axis) vs fluorescence (FITC, x-axis)
Figure 30 Test Device 1 fluorescence after 6 hours. Above: selected area for all test device analyses on an FSC-H (y-axis) vs FSC-A (x-axis) dot plot. Below: histogram of cell count (y-axis) vs fluorescence (FITC, x-axis)
Figure 31 Test Device 2 fluorescence after 6 hours. Above: selected area for all test device analyses on an FSC-H (y-axis) vs FSC-A (x-axis) dot plot. Below: histogram of cell count (y-axis) vs fluorescence (FITC, x-axis)
Figure 32 Test Device 3 fluorescence after 6 hours. Above: selected area for all test device analyses on an FSC-H (y-axis) vs FSC-A (x-axis) dot plot. Below: histogram of cell count (y-axis) vs fluorescence (FITC, x-axis)
Figure 33 Overlay of all test devices. TD1 is the strongest promoter, followed by TD2, PC, TD3 and, finally, NC. Regardless of promoter strength, each device harbours distinct subpopulations underlying its average behaviour.
Figure 34 Comparison of the Geometric Mean of fluorescence between each test device (normalized to MEFLs units)
As we can see, all test devices show two main behaviours represented by two peaks: the "cells with no promoter activity" on the far left of the plot and the active cells with their respective fluorescence peak on the far right. We assumed that cells on the far left are not expressing GFP, as their peak co-localizes with the NC peak. Again, the relative promoter strengths remained the same; however, despite TD1 being the strongest device, it also possesses the largest subpopulation of inactive cells among all tested samples. This dual behaviour could explain why TD1 shows larger standard deviation bars in the plate reader assays than the other test devices.
In order to compare RPU values between the plate reader and flow cytometer data, we averaged the RPUs obtained from the M9 and LB plate treatments and compared them with the RPUs calculated from the flow cytometer data, following the equation below from the 2009 article [1]:
In essence, we divided the geometric mean of each device, obtained from the flow cytometer data, by the respective absorbance value from the plate reader assay (the blank-corrected OD600 at t = 6 h). This was only possible because our samples were in log phase during both experiments. As we can see in Figure 35, the RPU values of all devices are very similar between our two experiments (plate reader and flow cytometry), but not between our results and the Anderson library values. Again, this corroborates the hypothesis that experimental design factors (e.g. E. coli strain, plasmid, RBS, etc.) are the main reasons for the differences between our RPU values and the standard ones. Furthermore, we highlight the robustness of the RPU values obtained throughout our experiments and, consequently, the importance of RPUs in Synthetic Biology for the standardization of relative promoter strength.
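The original equation image is not shown here; a reconstruction consistent with the description that follows (geometric mean from the cytometer divided by the blank-corrected OD600 from the plate reader, relative to Device 1) is:

$$\mathrm{RPU}_{\phi} \;\approx\; \frac{\mathrm{GM}_{\phi}/\mathrm{Abs}_{\phi}}{\mathrm{GM}_{\mathrm{J23101}}/\mathrm{Abs}_{\mathrm{J23101}}}$$

where $\mathrm{GM}$ is the MEFL-normalized geometric mean fluorescence at $t = 6\,\mathrm{h}$ and $\mathrm{Abs}$ the blank-corrected OD600 at the same time point.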
Figure 35 Comparison between RPU values from the original Anderson’s collection and from our assays. It is important to notice that, despite the differences between our data and the one found in the Registry of Parts, the RPU values within our two independent experiments are very similar, indicating robustness of the Test Devices’ behaviour for our specific experimental design.
In order to refine the information about our test devices, we analyzed our samples by confocal fluorescence microscopy followed by quantitative analysis in ImageJ. Our goal was to check whether the samples' fluorescence phenotypes would match our previous results. To do so, we picked three colonies from each sample and inoculated them in LB for overnight growth. Cell cultures were diluted to an OD600 of 0.01 and sampled every two hours over 6 hours of growth on a shaker (37 °C / 180 RPM) for fluorescence microscopy analysis. The samples were placed on mini low-melting agarose pads, and a thin glass slide was then used to cover the pads.
Digital photos were taken directly from the Leica microscope in two channels: (i) bright-field for total cell counting; (ii) dark-field for fluorescent cell counting and analysis. Photos from both channels were then merged in order to compare the numbers of fluorescent and non-fluorescent cells in each area. As we can see in Figures 36 to 40, NC and TD3 did not present any visible fluorescent cells, while PC, TD1 and TD2 presented fluorescent cells of heterogeneous intensities.
The next step was to obtain quantitative data from our images. For this purpose, we used ImageJ, following the workflow shown in Figure 41, to measure the proportion of fluorescent cells in each area and analyze their average intensity. Figures 42 and 43 show the measured fluorescence for PC, TD1 and TD2; NC and TD3 showed no observable fluorescence in this analysis. Moreover, the promoter strength ranking outlined since our very first experiment was maintained (Figure 42), highlighting the robust behavior of the test devices.
Figure 36 Fluorescence Microscopy Images for NC. Left side: Dark Field with no fluorescent cells; Right Side: Merged Image with both Dark and Bright Field Images for total cell counting.
Figure 37 Fluorescence Microscopy Images for PC. Left side: Dark Field with fluorescent cells; Right Side: Merged Image with both Dark and Bright Field Images for total number of cells and analysis of ON/OFF behaviour regarding fluorescence.
Figure 38 Fluorescence Microscopy Images for TD1. Left side: Dark Field with fluorescent cells; Right Side: Merged Image with both Dark and Bright Field Images for total number of cells and analysis of ON/OFF behaviour regarding fluorescence
Figure 39 Fluorescence Microscopy Images for TD2. Left side: Dark Field with fluorescent cells; Right Side: Merged Image with both Dark and Bright Field Images for total number of cells and analysis of ON/OFF behaviour regarding fluorescence
Figure 40 Fluorescence Microscopy Images for TD3. Left side: Dark Field with no fluorescent cells; Right Side: Merged Image with both Dark and Bright Field Images for total number of cells
Figure 41 ImageJ workflow for counting bacterial cells and measuring their fluorescence intensities in a selected area. The first step is to use the bright-field image, containing all cells, as a mask: the image is duplicated and converted to 8-bit format. The next step is to subtract the background of the 8-bit image and then set the optimal threshold to detect as many cells as possible. The last step is to run the particle analysis, redirecting it from the current mask to the merged image. Below: an example of the described method.
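An equivalent analysis can also be scripted outside ImageJ; the sketch below, assuming scikit-image and NumPy, mirrors the masking, thresholding and per-particle measurement steps of Figure 41. File names and the ON/OFF cut-off are hypothetical placeholders, not the parameters we used.

```python
# Rough Python/scikit-image analogue of the ImageJ workflow of Figure 41:
# build a cell mask from the bright-field image, label the particles and
# measure each cell's mean fluorescence in the GFP (dark-field) channel.
# File names and the ON/OFF cut-off are hypothetical placeholders.
import numpy as np
from skimage import io, filters, measure, morphology

bright = io.imread("brightfield.tif", as_gray=True)    # all cells
gfp = io.imread("darkfield_gfp.tif", as_gray=True)     # fluorescence channel

# Cells appear darker than the background in bright field, so threshold below Otsu.
mask = bright < filters.threshold_otsu(bright)
mask = morphology.remove_small_objects(mask, min_size=20)   # drop small debris

labels = measure.label(mask)
cells = measure.regionprops(labels, intensity_image=gfp)

intensities = np.array([c.mean_intensity for c in cells])
on_cells = intensities > 1.5 * np.median(intensities)       # crude ON/OFF cut-off
print(f"{len(cells)} cells detected, {on_cells.sum()} scored as fluorescent")
print("mean intensity of ON cells:", intensities[on_cells].mean())
```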
Figure 42 Average intensity for each device that presented visible fluorescent cells. The error bars represent the intrinsic variation between cells of the same population.
Figure 43 Average intensity of individual cells within the population for each test device with visible fluorescence.
We characterized all devices in E. coli DH5α using multi-scale approaches, ranging from the macroscopic view to single-cell analysis, and observed a consistent overall behaviour across all experiments (TD1 > TD2 > PC > TD3 > NC). We also demonstrated a quick, simple and inexpensive method for outlining promoter strengths using regular cellphone cameras and open-source image-editing software. Furthermore, in order to contribute to the community, we developed a DIY protocol for building an open-source fluorimeter. Although the hardware is not fully functional yet (it shall be, soon!), you can find photos and its protocol here: LINK TO PDF.
We also improved on the current iGEM protocol by suggesting the use of technical and biological triplicates and by comparing the background noise of LB and M9 in independent fluorescence measurement experiments. We showed that M9 is the preferable medium for reducing background (autofluorescence) and increasing measurement accuracy in GFP assays.
Furthermore, we described the average behaviour of each test device at the population scale using the plate reader and distinguished sub-populations within each group by flow cytometry. These sub-groups usually followed a bimodal distribution, representing ON/OFF transcriptional states. On deeper analysis, fluorescence microscopy revealed that even what we had considered ON states was composed of cells heterogeneous in fluorescence intensity. This stochastic layer could only be perceived by combining our analyses and represents an important phenomenon underlying all transcriptional systems. Thus, in order to develop more robust and orthogonal Synthetic Biology projects, we must better understand the molecular stochasticity behind each small BioBrick, which requires the kind of multi-scale approaches described here.
As an extra analysis for the InterLab, we calculated the RPU for each test device in two independent experiments (plate reader and flow cytometry) and compared them to the original Anderson collection. While our Test Devices were overall weaker than the original library (possibly due to experimental design choices, as discussed before), the RPU values between our two experiments were similar. This result showed us the robustness of the method and corroborated the relative strengths described for our test devices.
We hope our experiments and DIY methods can improve the understanding of basic parts and highlight the importance of studying them through integrated approaches in order to build a solid foundation for Synthetic Biology.
[1] Kelly, J. R., Rubin, A. J., Davis, J. H., Ajo-Franklin, C. M., Cumbers, J., Czar, M. J., ... & Endy, D. (2009). Measuring the activity of BioBrick promoters using an in vivo reference standard. Journal of Biological Engineering, 3(1), 1.
[2] Anderson promoter collection: http://parts.igem.org/Promoters/Catalog/Anderson