Team:TU Darmstadt/Hardware


ABSTRACT
One task of our project is to monitor manipulated E. coli and keep them alive, since they are not able to survive on their own due to their dependence on the non-natural amino acid OMT. To catch the moment before the bacteria die, a fluorescent protein was implemented in such a way that it emits light as a warning signal.
Our team therefore decided to build a fully automated pipetting robot that is able to locate a set of samples, detect potential light emission and pipet a specific amount of non-natural amino acid onto the fluorescent sample.
The foundation of the robot is a 3D printer, since it already provides easy handling of movements in three dimensions. By controlling these movements with an optical system, the autonomy of the robot is increased even further.

INTRODUCTION

Development of 3D Printers & Possibilities

In the 1980s, Chuck Hull invented the first standardized 3D printer, based on a procedure known as stereolithography (SLA) [1]. With the move from SLA to fused deposition modeling (FDM), 3D printing came alive in the Do-It-Yourself community. Since then, basic 3D printers have been available for little money and, thanks to open-source projects like RepRap [2], affordable for many. In last year's project, iGEM TU Darmstadt already built a fully working SLA printer capable of being fed with biologically engineered plastics [3].
This year, the hardware branch decided to build a clone of the Ultimaker 2 FDM printer [4] and exchange the extruder for a camera and a pipette to create a pipetting robot. Using several open-source parts and software, the idea is to establish an easily manageable robot that helps with the biologist's daily work.

Lab 3.0

Following the concept of Industry 4.0, the automation of tasks by intelligent robots (the fourth stage of the industrial revolution), we introduce the Laboratory 3.0, which enables scientists to concentrate on actual science by automating simple, repetitive work. This task is fulfilled by intelligent robots that are able to detect, react and respond to the experimenter.

Connection to our Team

iGEM TU DARMSTADT is a young and dynamic team of engaged researchers. We have limited resources, namely time and money, and we have to reinvent our lab work every day to reach higher and stay focused on science. Thanks to iGEM, we have the opportunity to experiment with our own ideas and to try reaching for the sky (or the biologist's equivalent of something really cool). Interdisciplinarity opens our minds and sharpens our sense for the important things. It is really helpful that we work together, and we all benefit from each other.

References:

  1. http://edition.cnn.com/2014/02/13/tech/innovation/the-night-i-invented-3d-printing-chuck-hall
  2. http://reprap.org/wiki/About
  3. https://2015.igem.org/Team:TU_Darmstadt/Project/Tech
  4. http://www.thingiverse.com/thing:811271, jasonatepaint

GOALS

The main task is to develop a machine capable of monitoring our organisms and their health condition in order to keep them alive. To do so, the machine has to measure the light emission of the organisms and be capable of dropping liquids into our containers. This has to work independently of the exact position of the container, which requires an automatic tracking system.
The idea is that one places a container somewhere within the machine's working area and clicks the run button of a program. The machine starts its routine by tracking the new container and measuring the light emission of the organisms. Based on the measurement, the machine decides whether or not to feed the organisms with non-natural amino acid. After a period of time it repeats this routine, until the stop button of the program is clicked.
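This run/measure/feed routine can be sketched in a few lines of Python. The function names (`track_containers`, `measure_emission`, `dispense_nnaa`) and the threshold value are hypothetical placeholders, not our actual implementation:

```python
import time

def run(track_containers, measure_emission, dispense_nnaa,
        stop_requested, threshold=40.0, interval=600):
    """Monitoring loop: track containers once, then measure and feed until stopped."""
    positions = track_containers()              # locate all containers
    fed = []                                    # record of feeding events
    while not stop_requested():
        for pos in positions:
            signal = measure_emission(pos)      # fluorescence level at this position
            if signal > threshold:              # warning signal: bacteria need nnAA
                dispense_nnaa(pos)
                fed.append(pos)
        time.sleep(interval)                    # wait before the next round
    return fed
```

In use, the four callables would be bound to the tracking, measurement and pump routines, and `stop_requested` to the GUI's stop button.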
These are only the minimum requirements for our project. We decided to go one step further and designed our machine in such a way that it serves as a multi-purpose platform which is adaptable and easy to modify. Our aim is to make it possible to add new features and software, inviting other scientists to improve our platform and share their ideas with the community.
For example, our liquid system can be upgraded to prepare 96-well plates with samples and monitor routines using the optical system. Alternatively, our measuring head can be swapped back for a printer head, which allows 3D printing again with just a few changes.
The accurate positioning of a sample in 3D space alone opens up a vast room of possibilities.
Because we stick to widely used open-source software and standard commercial parts, our machine can easily be combined with most DIY products, making it reusable, flexible and cheap.
For TU Darmstadt in particular, and for the next generations of iGEM competitors, the idea is to develop our technical equipment further from year to year and, if possible, to combine the individual machines. Our SLA printer from last year's competition was upgraded and is nearly ready for use again, giving us the possibility to manufacture parts for prototyping in our lab. This year's project will likewise serve as a starting point for next year's technical development team. New ideas and possibilities have already been discussed, and we are looking forward to next year's competition.

SETUP OVERVIEW
FUNCTIONALITY
The functionality of the pipetting robot combines the three-dimensional agility of a 3D printer, the ability to pipet a specific amount of non-natural amino acid (nnAA) using a syringe pump, and intelligent visual object recognition, so that it can distinguish samples that require more nnAA from samples that still contain a sufficient amount. It is therefore capable of autonomously keeping the modified E. coli bacteria alive, given that it is activated and connected to a reliable power supply.
To fulfill this task, it loops through a specially designed procedure. Initially, the robot scans its interior for samples by illuminating the shelf space from below with infrared LEDs and monitoring the shadows of the placed reservoirs with a camera. Since the interior is isolated from outside light sources, the contrast is high enough to detect the edges of the reservoirs, fit a circle onto each of them and compute the distance between the reservoir and the camera. It is even possible to observe an entire rack of reservoirs, because the robot can also recognize rectangles.
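The backlit detection step can be illustrated with a few lines of NumPy. This is a deliberately simplified sketch of the principle (treat pixels darker than the bright background as shadow, then locate a reservoir by the centroid of its shadow); the actual implementation additionally detects edges and fits circles and rectangles:

```python
import numpy as np

def find_shadow_centroid(image, threshold=128):
    """Locate the dark reservoir shadow in a backlit grayscale image.

    Pixels darker than `threshold` count as shadow; returns the centroid
    (row, col) of all shadow pixels, or None if no shadow is present.
    """
    shadow = image < threshold              # binary mask of dark pixels
    if not shadow.any():
        return None
    rows, cols = np.nonzero(shadow)         # coordinates of shadow pixels
    return (rows.mean(), cols.mean())       # center of mass of the shadow

# Synthetic backlit frame: bright background with one dark reservoir shadow
frame = np.full((100, 100), 255, dtype=np.uint8)
frame[40:60, 70:90] = 10                    # shadow block centered at (49.5, 79.5)
center = find_shadow_centroid(frame)
```

The high contrast provided by the uniform infrared backlight is what makes such a simple threshold reliable in the first place.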
Shortly after the detection of the samples, the position data is transferred to the 3D control and the head of the robot moves towards the first reservoir. To check whether the bacteria need more nnAA, the robot uses the fluorescence of the protein mVenus that has been inserted into the bacteria. The robot excites the protein with high-power LEDs and detects the emitted photons. To exclude reflected LED light that would disturb the measurement, a longpass filter cuts off the spectrum below the emission peak of the protein. Depending on the fluorescence signal, the robot decides whether it is necessary to pipet nnAA onto the sample. If so, the robot moves the sample in the z-direction just far enough that the syringe reaches it and can securely add the nnAA.
Eventually the robot recommences the procedure described above, except for the scanning of the sample positions, which are stored temporarily until all samples have been checked. As long as the robot is activated, connected to a power supply and the syringe pump does not run out of nnAA, it will loop through this whole process and keep the bacteria alive without any need for human interaction. Nevertheless, it is possible to check what the robot is doing via a live stream of the camera shown in the graphical user interface, since there is no other way to look inside the robot while it is working.
ACHIEVEMENTS

  • Successfully redesigned a 3D printer chassis to meet our requirements
  • Constructed a unique base platform with integrated IR LEDs for measuring and positioning purposes
  • Designed a measuring probe with a camera, an integrated optical filter system and LEDs
  • Constructed a syringe pump system to add liquids down to microliter accuracy
  • Implemented a unique GUI to run our program, featuring a multi-threading system that optimizes the overall performance
  • Implemented an automatic object tracking system, including a vector-based feedback system for positioning
  • Connected a Raspberry Pi with an Arduino microcontroller via a serial connection between the two devices, allowing a variety of different tasks
  • Published the data of all CADs designed by the TU Darmstadt technical department
  • Provided complete build instructions, including a BOM (bill of materials incl. prices) and a step-by-step video tutorial
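The serial link between the Raspberry Pi and the motion controller can be sketched as follows. The G-code formatting matches the standard Marlin `G1` move command; the transmission part is a sketch assuming the common pyserial API and a hypothetical device path:

```python
def format_move(x, y, z, feedrate=3000):
    """Build a G-code G1 move command as understood by Marlin firmware."""
    return "G1 X{:.2f} Y{:.2f} Z{:.2f} F{:d}\n".format(x, y, z, feedrate)

def send_move(x, y, z, port="/dev/ttyACM0", baud=115200):
    """Send one move over the serial link (sketch; requires pyserial)."""
    import serial                            # imported lazily: only needed on the robot
    with serial.Serial(port, baud, timeout=2) as link:
        link.write(format_move(x, y, z).encode("ascii"))
        return link.readline().decode("ascii").strip()   # Marlin acknowledges with "ok"
```

The device path and baud rate depend on the board and firmware configuration and would have to be adapted.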

Mechanics
Optics

Operating Range of Wavelengths
The robot's optics consist of two main components: a camera head and a lightbox. The camera head is responsible for two tasks, the detection and localization of cuvettes and the fluorescence measurements. The lightbox illuminates the sample table uniformly from below, thereby helping the camera do the detection work reliably. Fluorescence measurement and object detection are separated in terms of their operating wavelength ranges. All detection occurs at wavelengths above 860 nm, in the near infrared. The lightbox radiates uniform infrared light, and the camera chip is capable of capturing this wavelength. There is nothing special about the camera chip: practically all commercially available CCD chips can capture near infrared, but this is usually an undesirable feature for photographic purposes, since it falsifies image colors, at least from a human point of view. This is why camera lenses are usually equipped with an infrared filter. In our case, we use our own lens system from which we removed the infrared filter. We chose to do the detection in the infrared because we are mainly dealing with GFP, which does not absorb infrared light. Otherwise, the fluorescent proteins would bleach out unnecessarily during the long periods of illumination needed for detection. Within the camera head there is also a slight spectral separation: the GFPs are stimulated at wavelengths lower than the emission wavelengths. The stimulation occurs at a central wavelength of 465 nm, while the emission takes place at a central wavelength of 525 nm. The reason for this separation is that we use continuous rather than pulsed stimulation.

Fluorescence Measurement and Filtering
Pulsed measurements would require advanced high-frequency circuitry capable of forcing the LEDs to emit pulses of picosecond length; LEDs used in a simple on/off manner have rise and fall times in the nanosecond region. For continuous measurements, only two interconnected choices have to be made: the stimulation wavelength and a corresponding optical filter. In our case it is a longpass filter with a cutoff wavelength of 515 nm. It blocks most of the stimulation light while letting the GFP emission pass. The camera captures a long-exposure image that is then analyzed further. To get rid of the residual stimulation light appearing in that image, the image is digitally color-filtered and segmented into regions of interest (ROIs). The determination of the ROIs occurs simultaneously with the detection of the cuvettes. The filtering is very strong and does indeed block some of the already weak GFP fluorescence. This is why we use an exposure time of 10 seconds for a fluorescence capture, to make sure enough signal is collected.
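The digital color filtering and ROI scoring can be sketched with NumPy. This is a simplified illustration, assuming an RGB image array and a rectangular ROI; subtracting the blue channel as a stand-in for the residual stimulation light is a made-up example, not our calibrated filter:

```python
import numpy as np

def fluorescence_score(rgb, roi):
    """Score one region of interest of a long-exposure RGB capture.

    `roi` is (top, bottom, left, right). The green channel carries the
    GFP emission; subtracting the blue channel suppresses residual
    stimulation light that passed the longpass filter.
    """
    top, bottom, left, right = roi
    patch = rgb[top:bottom, left:right].astype(float)
    green = patch[:, :, 1]                  # GFP emission ends up here
    blue = patch[:, :, 2]                   # residual 465 nm stimulation light
    return float(np.clip(green - blue, 0, None).mean())

# Synthetic capture: dark frame with one fluorescent region
img = np.zeros((50, 50, 3), dtype=np.uint8)
img[10:20, 10:20, 1] = 200                  # strong green = fluorescence
img[10:20, 10:20, 2] = 40                   # some residual blue
score = fluorescence_score(img, (10, 20, 10, 20))
```

A per-ROI score like this is what the feeding decision would ultimately be compared against a threshold.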

Optical Hardware - Camera Head
We are using the 8-megapixel PiCamera because we have access to its capture settings, such as framerate, exposure time, gains and light sensitivity, through an existing programming interface [1]. This is absolutely necessary, since detection and measurement have totally different requirements. Another benefit is that we are able to capture directly in grayscale (the Y part of YUV) for fast detection purposes, and switch to RGB when doing fluorescence measurements. We would not have these degrees of freedom with an ordinary USB camera. The drawback is that the stock camera is equipped with a very minimal lens system. Since detection and measurement occur not only at different wavelengths but also at different distances to the vessels (fluorescence images are taken of each single cuvette, with the camera head placed directly over its opening), the focus needs to be adjustable. We have therefore developed a focusing system based on a so-called voice coil, which sits in a 3D-printed adapter socket for the PiCamera. The adapter socket also harbors the optical longpass filter. The voice coil holds a suspended lens whose distance to the camera chip can be adjusted; this method is used in most smartphones. In our case, we salvaged the voice coil from an old webcam. It is fed with a PWM signal provided directly by the Raspberry Pi's hardware PWM channel. We use a simple L297 H-bridge stepper driver to amplify the PWM signal and to decouple the Raspberry Pi's precious hardware PWM pin. Different duty cycles correspond to different focal positions, and the coil current is tuned with a potentiometer. In this way we are able to focus the lens automatically by evaluating simple Sobel-filter-based sharpness measurements. Our autofocus finds the best sharpness within 2 seconds, using a robust global search algorithm. It is applied every time a new set of racks and cuvettes is placed into the robot, i.e. prior to each new session.
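The Sobel-based sharpness measure behind the autofocus can be sketched in pure NumPy (in practice one would use an optimized library filter; this minimal version shows the idea): a sharp image yields strong gradients, a defocused one weak gradients, and the focus search simply maximizes this score over PWM duty cycles.

```python
import numpy as np

def sobel_sharpness(gray):
    """Mean gradient magnitude of a grayscale image (Sobel 3x3 kernels)."""
    g = gray.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal gradient kernel
    ky = kx.T                                            # vertical gradient kernel
    h, w = g.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):                    # naive 3x3 sliding-window filtering
        for j in range(3):
            patch = g[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return float(np.hypot(gx, gy).mean())

# A high-contrast edge scores higher than a flat (defocused) image
sharp = np.zeros((20, 20)); sharp[:, 10:] = 255
flat = np.full((20, 20), 128)
```

The autofocus then amounts to evaluating this score at a series of duty cycles and keeping the maximum.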
In addition, to adjust the focus for individual fluorescence captures, the sharpness of the individual cuvette corners is considered. The camera head also harbors the stimulating LEDs, four high-power Cree XT-E, driven by a 0.9 A current source and a PWM signal delivered by the Raspberry Pi.
The lightbox is an essential part of the detection. All applied detection algorithms rely on thresholding the image or filtered versions of it. Thresholding is basically a binary selection of relevant versus irrelevant image information, and it always comes with some loss of relevant information. If there is less clutter in the image, weaker thresholds suffice, which conserves more of the relevant image information. The lightbox acts as a clean background: thanks to its uniform radiation it creates only small amounts of static noise and clutter, allowing us to use less strict thresholds. It also emphasizes the vessel corners. This greatly enhances detection reliability. The heart of the lightbox is a Plexiglas panel called "Endlighten". Light is injected laterally and reflected off systematic impurities inside the panel [2]; it then leaves the panel uniformly in all directions. Light leaving the back side of the panel is mirrored to the front side by a white reflective Plexiglas panel and additionally diffused by a diffusor plate, also made of Plexiglas. All Plexiglas panels are products of Evonik. The light is injected by flat-end infrared LEDs which are mounted on 3D-printed rails and tightly clamped to the sides of the Endlighten panel. The LEDs are driven by constant current sources to give them a long lifetime.

References
[1]: https://picamera.readthedocs.io/en/release-1.12/
[2]: https://www.plexiglas-shop.com/pdfs/en/212-15-PLEXIGLAS-LED-edge-lighting-en.pdf

Cooling
All electronic components produce a significant amount of heat, especially the motors, the power supply and the Raspberry Pi. Since the robot chassis is meant to be completely enclosed to keep light out, heat would sooner or later pile up in the upper part of the interior. To protect the samples from temperatures above room temperature, a cooling system is necessary that ensures proper air circulation without letting in ambient light. To fulfill these two requirements we decided to adopt a double-walled cooling system. Its simplest implementation exploits the fact that warm air rises naturally, and incorporates the power supply as the air intake. The power supply is placed at the bottom of the robot and sucks in fresh air. The air, warmed up by the interior, rises to the top and is extracted by a radial fan through an extractor hood made of a laser-cut MDF grid. The warm air between the MDF grid and the outer wall is directed through a 3D-printed exhaust tunnel. Thus, ambient light is kept out.
Software
Marlin
OpenCV
PyQt

Qt is a software framework for developing GUIs (graphical user interfaces). It is available under a commercial license and an open-source license. It is a cross-platform application framework, which means it runs on most operating systems, such as Unix-like systems or Windows. The underlying programming language is C++, and Qt can also make use of existing languages such as JavaScript, making it a powerful tool.
The main idea of Qt is a system of signals and slots, an easy framework for connecting displayed elements with underlying functions. It also enhances the reusability of existing code. Every graphical element, for example a button, emits its own signal when it is pressed or used. This signal can then be used to trigger an action, for example closing a window. If the signal is not connected to a function, nothing happens; the signal is simply emitted with no consequences. By connecting an emitted signal to a desired action, called a slot, the program gets its unique behavior.
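The signal/slot mechanism can be illustrated with a few lines of plain Python. This is not Qt code; it is a minimal stand-in that mimics the behavior described above, including the fact that emitting a signal with no connected slot does nothing:

```python
class Signal:
    """Minimal stand-in for a Qt signal: connect slots, then emit."""
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)            # register a callable as a slot

    def emit(self, *args):
        for slot in self._slots:            # call every connected slot
            slot(*args)

class Button:
    """A 'button' that emits a clicked signal when pressed."""
    def __init__(self):
        self.clicked = Signal()

    def press(self):
        self.clicked.emit()

button = Button()
button.press()                              # no slot connected: nothing happens
events = []
button.clicked.connect(lambda: events.append("window closed"))
button.press()                              # now the connected slot runs
```

In PyQt the pattern looks the same, e.g. connecting a widget's `clicked` signal to a handler function.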
Qt is widely used by companies such as the European Space Agency (ESA), Samsung, DreamWorks, Volvo and many more. To combine the possibilities of Qt with the simplicity of the Python programming language, PyQt was developed. PyQt is a Python binding that makes the Qt methods available within the Python syntax.
To get a direct preview of the constructed GUI, Qt Designer is a helpful tool. It is essentially a visual construction tool in which the graphical objects can be moved around and arranged in the desired manner. To work with the resulting code, PyQt provides the command-line tool pyuic, suffixed with the Qt major version number (e.g. pyuic5), which converts the Designer file into Python code.
After the conversion, one can open the GUI as a regular Python script and work with it as usual.

References:
https://riverbankcomputing.com/software/pyqt/intro
https://www.qt.io/
https://en.wikipedia.org/wiki/Qt_(software)

FURTHER DEVELOPMENTS

Due to the tight time schedule of iGEM, it was not possible for us to realize all ideas and planned developments, with respect both to improving the robot itself and to applications beyond its current, very specialized task. First of all, it should be mentioned that the current model of the robot is designed to work with only one kind of bacterial culture, owing to the unsolved problem of sterility. For a workflow with several bacterial cultures, it is absolutely indispensable to develop a system that avoids any kind of cross-contamination between the different bacteria. One option would be an extra reservoir filled with ethanol in which the tip of the syringe can be sterilized between the checks of different samples. Another modification that would be useful for working with individual bacterial cultures is making the power LEDs exchangeable. This is necessary if the wavelength of the LEDs does not overlap with the absorption spectrum of the fluorescent proteins, or only overlaps with a part of the spectrum that has a very low absorption efficiency.
Moreover, it may be useful to add a second syringe pump to the current setup so that liquid could also be removed from the samples. Of course, this only makes sense if the above-mentioned sterilization step is implemented, because the tip of the removing syringe has to be inserted into the liquid, which would otherwise cause contamination when working with different bacterial cultures.
Beyond its "normal" task of observing bacteria, further uses of the robot would be a neat extension. For example, converting the robot back into a functional 3D printer would be convenient, since its setup resembles an Ultimaker 3D printer. The essential alterations would be to replace the sample stage with a heatbed and the current head with a printhead hotend. Since the current head can simply be clipped off, this would not be too much of a challenge. Furthermore, the syringe would have to be exchanged for an extruder if the printer is to work with plastics. An alternative approach is a paste 3D printer; in that case not even the head and syringe would need to be changed, because of the already viscous properties of the paste.

BUILDING INSTRUCTIONS
Construction Video
Bill Of Materials