ROBOTICS
ABSTRACT
Our main task was to develop a device that measures fluorescence and adds liquids to samples.
Our team therefore decided to build a fully automated pipetting robot that is able to locate a set of samples, detect potential light emission, and pipette a specific amount of non-natural amino acid onto the fluorescent sample.
The foundation of the robot is a 3D printer, since it already handles movements in three dimensions. By guiding these movements with an optical system, the autonomy of the robot is increased even further.
INTRODUCTION
Development of 3D Printers & Possibilities
In the 1980s, Chuck Hull invented the first standardized 3D printer, based on a procedure known as stereolithography (SLA) [1]. Moving from SLA to fused deposition modeling (FDM) techniques, the idea of 3D printing took hold in the do-it-yourself community. Since then, basic 3D printers have been available for little money and, thanks to the open-source spirit of projects like REPRAP [2], affordable for many. In last year's project, iGEM TU Darmstadt already built a fully working SLA printer capable of being fed with biologically manufactured plastics [3].
This year, the robotics team decided to build a clone of the Ultimaker 2 FDM printer [4] and replace the extruder with a camera and a pipette to create a pipetting robot. Using several open-source parts and software, the idea is to establish an easy-to-handle robot that assists the biologist's daily work.
Connection to our Team
iGEM TU Darmstadt is a young and dynamic team of interdisciplinary and motivated researchers. Our advantage is that we can bring together synthetic biology and the classic engineering sciences for which TU Darmstadt is famous. Thanks to iGEM, we have the opportunity to experiment with our own ideas and to reach for the stars. Interested in a wide variety of scientific topics, we wanted to mix different talents to create a unique project.
References:
[1] http://edition.cnn.com/2014/02/13/tech/innovation/the-night-i-invented-3d-printing-chuck-hall
[2] http://reprap.org/wiki/About
[3] https://2015.igem.org/Team:TU_Darmstadt/Project/Tech
[4] http://www.thingiverse.com/thing:811271, jasonatepaint
GOALS
The main task is to develop a machine capable of monitoring our organisms and their health in order to keep them alive. To do so, the machine has to measure the light emission of the organisms and be able to drop liquids into our containers. This has to work independently of the exact position of the container, which requires an automatic tracking system.
The idea is that one places a container somewhere in the robot's working area and clicks the run button of a program. The robot starts its routine by locating the new container and measuring the light emission of the organisms. Based on the measurement, the robot decides whether or not to feed the organisms with non-natural amino acid. After a period of time it repeats this routine, until the stop button of the program is clicked.
These are only the minimum requirements for our project's needs. We decided to go one step further and designed our robot as a multi-purpose platform that is adaptable and easy to modify. Its open-source character invites other scientists to add new features or to improve the robot and its capabilities.
For example, our liquid-handling system can be upgraded to prepare 96-well plates with samples and to monitor routines using the optical system.
Alternatively, the measuring head can be swapped back for a print head, allowing 3D printing again with just a few changes.
There is a vast range of possibilities, all based on the concept of accurately positioning a sample in 3D space.
Because we stick to widely used open-source software and standard commercial parts, our machine can easily be combined with most DIY products, making it reusable, flexible, and cheap.
For TU Darmstadt and the next generations of iGEM competitors in particular, our idea is to develop our technical equipment further from year to year and, if possible, to combine the individual devices. Our SLA printer from last year's competition has been upgraded and is nearly ready for use again, giving us the possibility to manufacture prototyping parts in our own lab. This year's project will likewise serve as a starting point for next year's technical development team. New ideas and possibilities have already been discussed, and we are looking forward to next year's competition.
SETUP OVERVIEW
FUNCTIONALITY
To fulfill its task of keeping the bacteria alive, the robot loops through a specially designed procedure. Initially, it scans the working area for samples by illuminating the underside of the sample stage with infrared LEDs and monitoring the shadows of the placed reservoirs with a camera. If the contrast is sufficiently high, it can detect the edges of these reservoirs, fit circles onto them, and compute the distance between each reservoir and the camera. It can even put an entire rack of reservoirs under observation, since it is able to locate every individual reservoir.
Directly after detection, the position information is sent to the 3D control program and the head of the robot moves towards the first reservoir. To check whether the bacteria need more non-natural amino acid, the robot uses the fluorescence of the protein mVenus that has been inserted into the bacteria: it excites the protein with high-power LEDs and detects the emitted light. To exclude reflected LED light that would disturb the measurement, a longpass filter cuts off the spectrum below the emission peak of the protein. Depending on the fluorescence signal, the robot decides whether it is necessary to pipette non-natural amino acid onto the sample. If so, it raises the samples along the z-axis just far enough that the syringe reaches the sample and can securely add the non-natural amino acid.
Eventually the robot recommences the procedure described above, skipping the scan for the individual sample positions, which are stored temporarily until all samples have been checked. As long as the robot is switched on, connected to a power supply, and the syringe pump does not run out of non-natural amino acid, it will loop through this whole process and keep the bacteria alive without any need for human interaction. Nevertheless, it is possible to check what the robot is doing via a live stream from the camera, shown on a graphical user interface, since there is no other way to look inside the robot while it is working.
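The following Python sketch outlines this monitoring loop. It is illustrative only: the robot object and its helper methods (scan_for_samples, measure_fluorescence, move_to, dispense, and so on), the threshold, and the dispensed volume are hypothetical stand-ins for the robot's actual routines, not part of our published code.

```python
import time

FLUORESCENCE_THRESHOLD = 0.2   # hypothetical normalized signal level
CYCLE_PAUSE_S = 600            # assumed pause between rounds, in seconds

def monitoring_loop(robot):
    # Scan once; positions are cached until every sample has been checked.
    positions = robot.scan_for_samples()
    while robot.is_running():
        for pos in positions:
            robot.move_to(pos)                     # approach the reservoir
            signal = robot.measure_fluorescence()  # excite mVenus, read camera
            # The comparison direction depends on the biological readout;
            # here low fluorescence is taken to mean "feed the culture".
            if signal < FLUORESCENCE_THRESHOLD:
                robot.raise_stage()                # bring sample up to syringe
                robot.dispense(volume_ul=50)       # hypothetical volume
                robot.lower_stage()
        time.sleep(CYCLE_PAUSE_S)                  # wait before the next round
```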
ACHIEVEMENTS
- Successfully redesigned a 3D printer chassis to meet our requirements
- Constructed a unique light box with integrated IR LEDs for positioning purposes
- Designed a measuring head carrying a camera with an integrated optical filter system and LEDs
- Implemented an automatic object tracking system, including a vector-based feedback system for positioning
- Constructed a syringe pump system to add liquids with microliter accuracy
- Connected a Raspberry Pi to an Arduino microcontroller via a serial connection, enabling a variety of different tasks (see the sketch after this list)
- Published the data of all CAD models designed by the TU Darmstadt technical department
- Provided a complete construction tutorial including a BOM (bill of materials incl. prices)
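As an illustration of the Raspberry Pi/Arduino link, the following Python sketch sends G-code over a serial port using the pyserial library. The device path, baud rate, and G-code lines are assumptions chosen for the example, not taken from our actual configuration; Marlin-style firmware conventionally acknowledges each command with an "ok" line.

```python
import time
import serial  # pyserial

# Open the Arduino's USB serial port (path and baud rate are assumptions).
ser = serial.Serial("/dev/ttyACM0", 115200, timeout=2)
time.sleep(2)  # the Arduino resets when the port opens; let it boot

def send_gcode(line):
    """Send one G-code line and wait for the firmware's 'ok' acknowledgement."""
    ser.write((line.strip() + "\n").encode("ascii"))
    while True:
        reply = ser.readline().decode("ascii", errors="replace").strip()
        if reply.startswith("ok"):
            return
        if reply == "":  # timeout: readline returned nothing
            raise TimeoutError("no 'ok' received for: " + line)

send_gcode("G28")               # home all axes
send_gcode("G1 X50 Y50 F3000")  # move the head to X=50 mm, Y=50 mm
```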
RESULTS
Optics
Operating Range of Wavelengths
Fluorescence Measurement and Filtering
Optical Hardware - Camera Head
Optical Hardware - Lightbox
Cooling
Software
Marlin
There, you will also find a link where you can download the original, unmodified version of the Marlin firmware.
OpenCV
OpenCV is a cross-platform image processing library, free to use under the open-source BSD license. Its development was initiated by Intel in the 1990s to demonstrate the capability of CPUs in executing complex image processing tasks. OpenCV covers everything from basic morphological image operations up to advanced machine learning algorithms. It is written in C++, which is also its primary interface. By now, most of the available features have been wrapped for other programming languages such as Python, which we use. What we are particularly looking for in OpenCV are its feature extraction algorithms, such as the Hough transformation (1), contour and edge detection (2), and image moment extraction (3). We also make our lives easier by utilizing its implemented and optimized morphological operators and image filters (4), such as the median filter and the Sobel operator.
To enhance the long-term reliability of detection, we separate the detection procedure into two steps. Since racks are predominantly used to hold the sample vessels, we first extract and detect the rectangular racks with contour and thresholding techniques. Several image shape descriptors are used to track possible positional changes. The user is then allowed to manually assign regional attributes (e.g. radii, heights) for each rack. The assignment of local radii helps the circle detection in the second step: not only does it inhibit false detections, it also allows for a proper multithreaded apportionment of work.
(1) The Hough transformation is an image processing algorithm which transforms an image into a parameter space, also called Hough space, with respect to a parameterizable geometric shape, analogous to a Fourier transform mapping a signal into Fourier space with respect to sinusoidal basis functions. The possible sets of correct parameters establish themselves in Hough space as local maxima.
(2) We use the detection of closed contours for our rectangle detection rather than the Hough transformation, because it is faster and, in combination with the light box, sufficiently reliable. The underlying edge detector uses the Canny algorithm.
(3) A line can be described by two parameters, slope and offset. These two parameters are descriptors which can be used to reconstruct the image; they contain the whole information about the line. Similarly, the same line could (at least partly) be expressed by a Fourier series. Image moments are yet another kind of descriptor. We use them in our tracking algorithm to characterize detected objects and to identify them again in later recordings.
(4) Image filters act on the image to amplify certain aspects and attenuate others. For example, the Sobel filter, which is essentially a derivative operator, amplifies regions of fast intensity change (edges). The Canny edge detection algorithm is based on this filter and returns a binary thresholded image.
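The following Python sketch illustrates this two-step idea with standard OpenCV calls. The file name, area threshold, and radius values are placeholders; our actual detection code is more involved (tracking via shape descriptors, one worker thread per rack).

```python
import cv2
import numpy as np

# Step 1: find the rectangular racks as closed contours on the backlit stage.
gray = cv2.imread("stage.png", cv2.IMREAD_GRAYSCALE)  # placeholder image
blur = cv2.medianBlur(gray, 5)                        # suppress sensor noise
edges = cv2.Canny(blur, 50, 150)                      # Canny edge map
# Note: OpenCV 3.x returns three values here; this is the 2.x/4.x form.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

racks = []
for c in contours:
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4 and cv2.contourArea(approx) > 1000:  # rectangular-ish
        racks.append(cv2.boundingRect(approx))
        # Image moments give the centroid, usable as a tracking descriptor.
        m = cv2.moments(c)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

# Step 2: search for vessel circles only inside each rack, with a local radius.
for (x, y, w, h) in racks:
    roi = blur[y:y + h, x:x + w]
    r = 20  # user-assigned local radius in pixels (placeholder)
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1, minDist=2 * r,
                               param1=100, param2=30,
                               minRadius=r - 3, maxRadius=r + 3)
    if circles is not None:
        for ux, uy, ur in np.around(circles[0]).astype(int):
            print("vessel at", (x + ux, y + uy), "radius", ur)
```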
PyQt
Qt is a software framework for developing graphical user interfaces (GUIs). It is available under both a commercial and an open-source license. As a cross-platform application framework, it runs on most operating systems, such as Unix or Windows. The underlying programming language is C++, and Qt can also be used together with other languages such as JavaScript, making it a powerful tool.
The main idea of Qt is a system of signals and slots: an easy framework for connecting displayed elements with underlying functions, which also enhances the reusability of existing code. Every graphical element, for example a button, emits its own signal when it is pressed or used. This signal can then be used to trigger an action, like closing a window. If the signal is not connected to a function, it is simply emitted with no consequences. By connecting an emitted signal with a desired action, called a slot, the program is given its specific behavior.
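A minimal sketch of this signal/slot mechanism, assuming PyQt5 (the widget names are chosen for the example):

```python
import sys
from PyQt5.QtWidgets import QApplication, QPushButton, QVBoxLayout, QWidget

class Window(QWidget):
    def __init__(self):
        super().__init__()
        button = QPushButton("Close", self)
        # Connect the button's 'clicked' signal to the window's 'close' slot:
        # pressing the button now closes the window.
        button.clicked.connect(self.close)
        layout = QVBoxLayout(self)
        layout.addWidget(button)

app = QApplication(sys.argv)
window = Window()
window.show()
sys.exit(app.exec_())
```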
Qt is widely used by organizations such as the European Space Agency (ESA) and by companies like Samsung, DreamWorks, Volvo, and many more.
To combine the possibilities of Qt with the simplicity of the Python programming language, PyQt was developed. PyQt is a Python binding that makes Qt's classes and methods available from Python syntax.
A helpful tool for getting a direct preview of the constructed GUI is Qt Designer. It enables an intuitive way of building a graphical user interface without explicitly coding it. To work with the resulting code, PyQt provides a command-line tool called pyuic followed by a version number (e.g. pyuic5), which converts the .ui file saved by Qt Designer into Python code.
After converting the code, one can open the GUI as a regular Python script and work with it as usual (see the sketch below).
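For illustration, a typical workflow could look as follows. The file name mainwindow.ui and the generated class name Ui_MainWindow are hypothetical examples following pyuic's usual naming convention; PyQt5 is assumed.

```python
# First convert the Qt Designer file on the command line:
#   pyuic5 mainwindow.ui -o ui_mainwindow.py
import sys
from PyQt5.QtWidgets import QApplication, QMainWindow
from ui_mainwindow import Ui_MainWindow  # hypothetical generated class

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.ui = Ui_MainWindow()
        self.ui.setupUi(self)  # builds the widgets described in the .ui file

app = QApplication(sys.argv)
window = MainWindow()
window.show()
sys.exit(app.exec_())
```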
FURTHER DEVELOPMENTS
Due to the tight time schedule between the start and the end of iGEM, it was not possible for us to realize all ideas and planned improvements of the robot itself.
For working with more kinds of bacterial cultures, it is absolutely indispensable to develop a system that avoids any cross-contamination between the different bacteria. One option would be an extra reservoir filled with ethanol in which the tip of the syringe can be sterilized between the checks of different samples.
Another modification that would be useful for working with individual bacterial cultures is making the power LEDs exchangeable. This is necessary if the wavelength of the LEDs does not overlap with the absorption spectrum of the fluorescent proteins, or only overlaps with a part of the spectrum that has a very low absorption efficiency.
Moreover, it may be useful to improve the syringe pump system. Instead of a syringe pump, a system with a liquid reservoir and a continuously working pump, such as a turbine, could be used; for an example see http://www.ardulink.org/automatic-lipid-dispensing/.
Another useful modification would be to rebuild the robot into its foundation, namely an Ultimaker 3D printer setup. The essential alterations would be to replace the sample stage with a heated bed and to replace the current head with a print head and hotend. Since the current head is simply clipped on, this would not be too much of a challenge. Furthermore, a change of the syringe extruder is necessary if the printer is to work with plastics.
An alternative approach is a paste 3D printer of sorts. In that case, not even the head and the syringe would need to be changed, because of the already viscous properties of the paste.
BUILDING INSTRUCTIONS
Construction Video
Bill Of Materials