
Biota Beats - by EMW Streetbio

Sonification

How can microbes be sonified? Once inoculated, our microbiome records were imaged utilizing an open-source camera to generate 2D images. We developed several approaches to generate sound and music from these images, with a principal question in mind: how is the sound meaningful from a biological perspective?

Ultimately, we focused on three forms of data that could be extracted from the images of our records: microbe location, density, and cluster size. Given the design of our records, we could add an additional layer of information by having different segments of each record correspond to organisms swabbed from different parts of the body. With our incubator and camera, we could also take time-lapse images, adding a time parameter. Based on these inputs, we developed several algorithms to convert these data streams to sound and music.
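
As a concrete illustration of these data streams, here is a minimal sketch in Python; the field names and structure are our assumption for illustration, not the project's actual code. One frame of an imaged record might be represented like this:

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class ColonyCluster:
        center: Tuple[float, float]   # (x, y) location of the cluster in the image
        size: float                   # cluster area, e.g. in pixels

    @dataclass
    class RecordFrame:
        hours_since_inoculation: float   # time-lapse parameter
        segment: str                     # body site this record segment was swabbed from
        clusters: List[ColonyCluster]    # location and size of every detected cluster
        density: float                   # fraction of the segment covered by colonies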

Background

Sonification: the use of non-speech audio to convey information or perceptualize data.

Auditory perception has advantages in temporal, spatial, amplitude, and frequency resolution that make it a useful complement to visualisation techniques.

There is prior work sonifying algae data, star data, and microbial data by various methods. For example, scientists looked for patterns in readings from blue-green algae samples collected in the western English Channel and transformed the data into musical notes: http://www.livescience.com/23626-microbe-music-algae-songs.html. The tune "Bloom" illustrates how some algae species bloom occasionally, becoming more abundant for short periods of time. Lower microbe abundance corresponded to lower notes, and the chord progression was tied to day length, chlorophyll concentration in the water, and other physical parameters.

In another project, astronomers converted star data from NASA's Kepler telescope into reggae songs: link.

Another team converted RNA sequences and 3D protein structures into sound: link.

Making

Drawing on these prior sonification projects, the sonification software was designed around standard image-processing techniques. First, a Gaussian filter reduces the image noise, and the image is converted to a black-and-white binary image. The location of the center and the size of each cluster are stored, and from this information the density of each area on the plate is computed.
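
A minimal sketch of this pipeline in Python, assuming OpenCV and NumPy (the project's actual implementation, thresholds, and parameter values may differ):

    import cv2
    import numpy as np

    def extract_plate_features(image_path, blur_kernel=(5, 5), min_cluster_px=20):
        """Denoise, binarize, and measure colony clusters in one plate image."""
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

        # 1. Gaussian filter to reduce image noise.
        blurred = cv2.GaussianBlur(gray, blur_kernel, 0)

        # 2. Convert to a black-and-white binary image (Otsu picks the threshold;
        #    invert first if the colonies are darker than the agar).
        _, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # 3. Store the center location and size of each cluster.
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
        clusters = [{"center": tuple(centroids[i]), "size": int(stats[i, cv2.CC_STAT_AREA])}
                    for i in range(1, n)                       # label 0 is the background
                    if stats[i, cv2.CC_STAT_AREA] >= min_cluster_px]

        # 4. Density: fraction of the plate (or of one record segment) covered by colonies.
        density = float(np.count_nonzero(binary)) / binary.size
        return clusters, density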

Each image is mapped to a coordinate system, and colony locations are translated into musical notes, or pitches. The density of the colonies then drives the background music. Granular synthesis is a method by which sounds are broken into tiny grains that are redistributed and reorganized to form other sounds; here, granular synthesis is applied to the density data.
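
For illustration only, here is a NumPy sketch of the location-to-pitch step and a density-driven granular layer; the pitch range, grain length, and other parameters are assumptions, not the project's exact mapping:

    import numpy as np

    SR = 44100  # audio sample rate in Hz

    def location_to_midi(x, img_width, low=48, high=84):
        # Map a colony's horizontal position to a MIDI pitch number.
        return int(np.interp(x, [0, img_width], [low, high]))

    def midi_to_hz(m):
        return 440.0 * 2.0 ** ((m - 69) / 12.0)

    def granular_layer(source, density, seconds=5.0, grain_ms=60, max_grains=2000, seed=0):
        """Granular synthesis, roughly: scatter short enveloped grains of a source
        sound across the output; higher colony density -> more grains -> a denser
        background texture. `source` should be longer than one grain."""
        rng = np.random.default_rng(seed)
        grain_len = int(SR * grain_ms / 1000)
        out = np.zeros(int(SR * seconds))
        env = np.hanning(grain_len)
        for _ in range(int(max_grains * density)):
            src = rng.integers(0, len(source) - grain_len)
            dst = rng.integers(0, len(out) - grain_len)
            out[dst:dst + grain_len] += source[src:src + grain_len] * env
        return out / max(np.max(np.abs(out)), 1e-9)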

In the first attempt, the shape and size of the colonies were to correlate with different scale types: major, natural minor, melodic minor, pentatonic, and chromatic. However, a classifier with only five groups introduced too much ambiguity, so the approach was switched to FM (frequency modulation) synthesis. In FM synthesis, the timbre (tone quality) of a waveform is changed by modulating its frequency with a modulator frequency that is also in the audio range, resulting in a more complex waveform; the new tone can sound "gritty", with a thick, dark timbre. To synthesize harmonic sounds, the modulating signal must have a harmonic relationship to the original carrier signal.
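
The core of FM synthesis is compact enough to sketch directly (a generic NumPy illustration, not the project's code; the ratio and index values are arbitrary):

    import numpy as np

    SR = 44100

    def fm_tone(carrier_hz, ratio=2.0, index=3.0, seconds=1.0):
        """Frequency-modulation synthesis: the modulator frequency is a harmonic
        ratio of the carrier, so the resulting tone stays harmonic; a larger
        modulation index gives a thicker, 'grittier' timbre."""
        t = np.linspace(0.0, seconds, int(SR * seconds), endpoint=False)
        modulator = np.sin(2 * np.pi * (carrier_hz * ratio) * t)
        return np.sin(2 * np.pi * carrier_hz * t + index * modulator)

    # e.g. fm_tone(220.0, ratio=3.0, index=6.0) gives a darker, more complex tone
    # than a plain 220 Hz sine wave.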

With two types of culture plates, the program can be used to compare the sonification results from each. With a time-lapse video of the culture plates over 2-3 days, it is also possible to listen to the sound change as the microorganisms grow.
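
Building on the sketches above, time-lapse playback could be stitched together frame by frame (hypothetical glue code; the file layout and image width are assumptions):

    import glob
    import numpy as np

    def sonify_timelapse(frame_glob="timelapse/*.png", img_width=1024):
        """Sonify each time-lapse frame in order, so the listener hears the
        sound change as the microorganisms grow."""
        audio = []
        for path in sorted(glob.glob(frame_glob)):
            clusters, density = extract_plate_features(path)   # from the sketch above
            if clusters:
                tones = [fm_tone(midi_to_hz(location_to_midi(c["center"][0], img_width)))
                         for c in clusters]
                audio.append(np.mean(tones, axis=0))
            else:
                audio.append(np.zeros(SR))   # one second of silence for an empty frame
        return np.concatenate(audio) if audio else np.zeros(SR)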

Setup for camera module:

Creating Audio Files from the Microbiome

Mapping method from microbiome to music: five discrete areas are shown and sonified.

Squared distances from colony clusters to the center of each area: each cluster is assigned a rhythmic sound played by a different musical instrument.

Each numbered audio file corresponds to the sonification of one of the discrete areas shown above; four of the five areas are included.
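
The second caption above hints at the rhythm mapping. One possible reading (a hypothetical sketch; the wiki does not give the exact formula, and the instrument names and scaling constant are made up for illustration) is that the squared distance from each cluster to its area's center sets how often that cluster's instrument plays:

    def cluster_rhythms(clusters, area_center,
                        instruments=("kick", "snare", "hi-hat", "clave", "shaker")):
        """Assign each cluster an instrument and a hit interval derived from the
        squared distance between the cluster center and the area center."""
        cx, cy = area_center
        events = []
        for i, c in enumerate(clusters):
            x, y = c["center"]
            d2 = (x - cx) ** 2 + (y - cy) ** 2       # squared distance, in pixels^2
            every_n_beats = 1 + int(d2 // 10_000)    # farther cluster -> sparser rhythm
            events.append({"instrument": instruments[i % len(instruments)],
                           "every_n_beats": every_n_beats})
        return events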