Team:Manchester/Model/ParameterSelection

Revision as of 15:44, 18 October 2016

Manchester iGEM 2016

Parameter selection

THEORY

Ensemble benefits

Modelling is only as good as the data provided to it, a problem often summarised as GIGO (Garbage In, Garbage Out). This needs to be taken into account when selecting the parameters that form the starting point for ensemble modelling. Ensemble modelling handles imperfect data better than most approaches because it can account for the uncertainty in these parameters; even so, care has to be taken to improve the quality of the initial dataset.

There are three main steps to bear in mind when selecting parameters, each with its own key consideration. These steps help deal with the garbage that could otherwise ruin the model.

Collecting all relevant parameters

The first step is actually collecting the parameters, and the more the better: 100 experimentally sourced parameters from 100 different papers will be much more representative than just a handful. In an ideal world these would all have been measured under exactly the same conditions as your system; however, this is highly unlikely, and is accounted for by the second step.

Weighting

Weighting allows you to give more significance to parameters that more closely fit your system. This allows parameters measured under different conditions to be included in the initial dataset, as they are simply given a lower weighting score. The lower the weighting score, the less confident you are in that value. See the diagram below for the algorithm we used when deciding the weighting.
This process needs to be undertaken individually for each parameter sourced.

[Figure: parameter weighting flow chart (https://static.igem.org/mediawiki/2016/9/9e/T--Manchester--model_flow_charts.png)]
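
For illustration only, the sketch below shows how a weighting score of this kind could be built up from simple yes/no questions about a sourced value. The criteria and point values here are made up for the example; the questions we actually used are the ones in the flow chart above, answered through the spreadsheet described later.

    % Hypothetical MATLAB example of building a weighting score for one
    % sourced parameter value. The criteria below are illustrative only;
    % the real questions are those in the flow chart / spreadsheet.
    sameOrganism  = true;    % measured in the same organism as our system?
    similarTemp   = false;   % measured near our working temperature?
    similarPH     = true;    % measured at a comparable pH?
    directMeasure = true;    % measured directly rather than inferred?

    % Every value keeps a minimum weight of 1; each matching condition
    % increases our confidence in the value and therefore its weight.
    weight = 1 + sameOrganism + similarTemp + similarPH + directMeasure;
    fprintf('Weighting score for this value: %d\n', weight);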

Truncating Tails

Finally, the dataset should be filtered by removing points from the top and bottom of the dataset. The amount removed depends on the confidence interval chosen; typically, removing the top and bottom 2.5% of points is sufficient, as this gives a 95% confidence interval.

After following these steps the probability density function (pdf) can be calculated. See the pdf section.

Code from flow diagram

Relevant GitHub link. All files discussed here are available there for reference.



This flowchart shows the questions we asked ourselves about each parameter source. In practice this was implemented with a spreadsheet used to fill out the responses (the user simply selects the options from a drop-down); the weights are calculated in the spreadsheet. MATLAB then reads in the weights and associated values, and a dataset is made in which there are n copies of each data point (where n = weighting). From this a pdf can be created for later sampling.
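
A minimal MATLAB sketch of this step, assuming the spreadsheet has been exported to a csv with one column of parameter values and one column of integer weights (the file name and layout are assumptions for the example; the actual scripts are in the GitHub repository linked above):

    % Sketch: build the weighted dataset the pdf is created from.
    % Assumed csv layout: column 1 = parameter value, column 2 = integer weight.
    data    = csvread('parameter_weights.csv');  % exported from the spreadsheet
    values  = data(:, 1);
    weights = data(:, 2);

    % n copies of each data point, where n = weighting
    weightedSet = repelem(values, weights);

    % weightedSet is then passed on to the pdf-creation step for later sampling.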

The extra complications are handled as simply as possible; a short sketch of these steps is given after the list below.



  • Data point error

    Any documented error associated with a point is treated as 5 data points, ranging from one at -2 standard deviations through to one at +2 standard deviations. The weighting of the original data point is then redistributed to each new data point in proportion to its probability under the normal distribution it is described by.


  • Any unphysical zeros resulting from this are removed from the dataset. To extend this work, any 5-data-point set which gets a zero should have its weighting renormalised; extending to 2n + 1 data points, where n is large, would also be more representative. The first was not done because, if you are getting zeros, the quoted errors clearly do not follow a normal distribution and should not be quoted as such. The second was not done because you run into rounding issues, which can only be removed by increasing the sample size for pdf creation; it was decided that this was not worth the computer time for such a small increase in accuracy.


  • Removing tails.

    The array of parameter values used to make the pdf is sorted, and then the top and bottom 2.5% of data points (rounded down to a whole number of points) are removed. This is done to remove extreme outliers and so help the pdf creation.
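
A MATLAB sketch of the three steps above, assuming values, sigmas and weights are column vectors holding each sourced value, its quoted standard deviation (zero if no error was quoted) and its integer weighting. The variable names are assumptions for the example; the actual implementation is in the GitHub repository.

    % (1) Expand each data point with a quoted error into 5 points,
    % (2) drop unphysical non-positive values, (3) truncate the tails.
    offsets = (-2:2)';                        % -2 sd, -1 sd, 0, +1 sd, +2 sd
    relProb = exp(-offsets.^2 / 2);           % normal-distribution shape at the offsets
    relProb = relProb / sum(relProb);         % fractions of the original weight

    expanded = [];                            % weighted, expanded dataset
    for i = 1:numel(values)
        if sigmas(i) > 0
            pts = values(i) + offsets * sigmas(i);   % 5 points per data point
            wts = round(weights(i) * relProb);       % redistribute the weighting
            % (for small weights this rounding can lose points: the rounding
            % issue mentioned above)
        else
            pts = values(i);                         % no quoted error: keep as is
            wts = weights(i);
        end
        keep   = pts > 0;                            % remove unphysical zeros/negatives
        newPts = repelem(pts(keep), wts(keep));
        expanded = [expanded; newPts(:)];
    end

    % Removing tails: sort, then drop the top and bottom 2.5% (floored)
    expanded = sort(expanded);
    nCut     = floor(0.025 * numel(expanded));
    trimmed  = expanded(nCut + 1 : end - nCut);      % ready for pdf creation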