More stories

  •

    Engineering team develops novel miniaturized organic semiconductor

    Field-effect transistors (FETs) are the core building blocks of modern electronics such as integrated circuits, computer CPUs and display backplanes. Organic field-effect transistors (OFETs), which use an organic semiconductor as the channel for current flow, have the advantage of being flexible compared with inorganic counterparts such as silicon.
    OFETs, given their high sensitivity, mechanical flexibility, biocompatibility, property tunability and low-cost fabrication, are considered to have great potential for new applications in wearable electronics, conformal health-monitoring sensors and bendable displays. Imagine TV screens that can be rolled up; smart wearable devices and clothing worn close to the body to collect vital signs for instant biofeedback; or mini-robots made of harmless organic materials working inside the body for disease diagnosis, targeted drug delivery, minimally invasive surgery and other treatments.
    Until now, the main barrier to improving the performance and mass production of OFETs has been the difficulty of miniaturising them. OFET products currently on the market remain primitive in terms of flexibility and durability.
    An engineering team led by Dr Paddy Chan Kwok Leung at the Department of Mechanical Engineering of the University of Hong Kong (HKU) has made an important breakthrough in developing staggered-structure monolayer organic field-effect transistors, laying a major cornerstone for reducing the size of OFETs. The result has been published in the academic journal Advanced Materials, and a US patent has been filed for the innovation.
    The major problem confronting scientists in reducing the size of OFETs is that a transistor’s performance drops significantly as it shrinks, partly because of contact resistance, i.e. resistance at the interfaces that impedes current flow. The smaller the device, the more its contact resistance dominates and degrades its performance.
    The staggered-structure monolayer OFETs created by Dr Chan’s team demonstrate a record-low normalized contact resistance of 40 Ω·cm. Compared with conventional devices with a contact resistance of 1,000 Ω·cm, the new device saves 96% of the power dissipated at the contacts when run at the same current level. More importantly, beyond the energy saving, the excess heat generated in the system, a common cause of semiconductor failure, can be greatly reduced.
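    The 96% figure follows from Joule heating at a fixed drive current: the power dissipated at the contacts scales as P = I²R, so the fractional saving depends only on the ratio of the two contact resistances. A quick sanity check:

```python
# Power dissipated at the contacts at a fixed drive current: P = I^2 * R,
# so the fractional saving depends only on the resistance ratio.
R_new, R_old = 40.0, 1000.0    # normalized contact resistance, ohm-cm
saving = 1.0 - R_new / R_old   # fraction of contact power saved
print(round(saving * 100))     # -> 96
```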
    “On the basis of our achievement, we can further reduce the dimensions of OFETs and push them to a sub-micrometer scale, a level compatible with their inorganic counterparts, while they still function effectively and exhibit their unique organic properties. This is critical for meeting the requirements for commercialisation of related research,” Dr Chan said.
    “If flexible OFETs work, many traditionally rigid electronics such as display panels, computers and cell phones could become flexible and foldable. These future devices would be much lighter in weight and lower in production cost.”
    “Moreover, given their organic nature, they are more likely to be biocompatible for advanced medical applications, such as sensors for tracking brain activities or neural spike sensing, and for precision diagnosis of brain-related illnesses such as epilepsy,” Dr Chan added.
    Dr Chan’s team is currently working with researchers at the HKU Faculty of Medicine and biomedical engineering experts at CityU to integrate the miniaturised OFETs into a flexible circuit on a polymer microprobe for in-vivo neural spike detection in mouse brains under different external stimulations. They also plan to integrate the OFETs onto surgical tools such as catheter tubes, which could then be placed inside animals’ brains to sense brain activity directly and locate abnormal activation.
    “Our OFETs provide a much better signal-to-noise ratio. Therefore, we expect to pick up weak signals that could not be detected before with conventional bare electrodes.”
    “It has been our goal to connect applied research with fundamental science. Our research achievement will hopefully open a blue ocean for OFET research and applications. We believe our OFET platform is now ready for applications in large-area display backplanes and surgical tools,” Dr Chan concluded.

  •

    Study uses mathematical modeling to identify an optimal school return approach

    In a recent study, NYU Abu Dhabi Professor of Practice in Mathematics Alberto Gandolfi has developed a mathematical model to identify the number of days students could attend school for a better learning experience while mitigating COVID-19 infections.
    Published in the journal Physica D, the study shows that blended models, with almost periodic alternations of in-class and remote teaching days or weeks, would be ideal. In a prototypical example, the optimal strategy has the school open 90 days out of 200, with the number of COVID-19 cases among individuals related to the school increasing by about 66 percent, instead of the almost 250 percent increase predicted should schools fully reopen.
    The study’s model features five groups: students susceptible to infection, students exposed to infection, students displaying symptoms, asymptomatic students, and recovered students. In addition, Gandolfi’s study models other factors, including a seven-hour school day as the window for transmission and the risk of students getting infected outside of school.
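    A minimal sketch of this kind of compartment model, with purely illustrative parameters (the paper’s calibrated values are not reproduced here): each day is either an in-class day with a higher transmission rate or a remote day with a lower one, and the five groups evolve accordingly.

```python
def simulate(open_days, days=200, beta_in=0.30, beta_out=0.05,
             incubation=5.0, recovery=10.0, p_sym=0.4, e0=0.01):
    """Toy S-E-I(symptomatic)-A(asymptomatic)-R model; all parameter
    values are illustrative, not the paper's calibrated ones."""
    sigma, gamma = 1.0 / incubation, 1.0 / recovery
    s, e, i, a, r = 1.0 - e0, e0, 0.0, 0.0, 0.0
    for day in range(days):
        beta = beta_in if day in open_days else beta_out
        new_e = beta * s * (i + a)        # new exposures today
        new_inf = sigma * e               # exposed become infectious
        rec_i, rec_a = gamma * i, gamma * a
        s -= new_e
        e += new_e - new_inf
        i += p_sym * new_inf - rec_i
        a += (1.0 - p_sym) * new_inf - rec_a
        r += rec_i + rec_a
    return 1.0 - s                        # cumulative fraction ever infected

full    = simulate(set(range(200)))                        # fully open
blended = simulate({d for d in range(200) if d % 20 < 9})  # 90 of 200 days
closed  = simulate(set())                                  # fully remote
```

Searching over which 90 of the 200 days to open, subject to a cap on infections, is then an optimization over schedules like the blended one above.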
    Speaking on the development of this model, Gandolfi commented: “The research comes as over one billion students around the world are using remote learning models in the face of the global pandemic, and educators are in need of plans for the upcoming 2020 — 2021 academic year. Given that children come in very close contact within the classrooms, and that the incubation period lasts several days, the study shows that full re-opening of the classrooms is not a viable possibility in most areas. On the other hand, with the development of a vaccine still in its formative stages, studies have placed the potential impact of COVID-19 on children as losing 30 percent of usual progress in reading and 50 percent or more in math.”
    He added: “The approach aims to provide a viable solution for schools that are planning activities ahead of the 2020 — 2021 academic year. Each school, or group thereof, can adapt the study to its current situation in terms of local COVID-19 diffusion and relative importance assigned to COVID-19 containment versus in-class teaching; it can then compute an optimal opening strategy. As these are mixed solutions in most cases, other aspects of socio-economic life in the area could then be built around the schools’ calendar. This way, children can benefit as much as possible from a direct, in class experience, while ensuring that the spread of infection is kept under control.”
    Using the prevalence of active COVID-19 cases in a region as a proxy for the chance of getting infected, the study gives a first indication, for each country, of the possibilities for school reopening: schools can fully reopen in a few countries, while in most others blended solutions can be attempted, with strict physical distancing, and frequent, generalized, even if not necessarily extremely reliable, testing.

    Story Source:
    Materials provided by New York University. Note: Content may be edited for style and length.

  •

    Biochip innovation combines AI and nanoparticle printing for cancer cell analysis

    Electrical engineers, computer scientists and biomedical engineers at the University of California, Irvine have created a new lab-on-a-chip that can help study tumor heterogeneity to reduce resistance to cancer therapies.
    In a paper published today in Advanced Biosystems, the researchers describe how they combined artificial intelligence, microfluidics and nanoparticle inkjet printing in a device that enables the examination and differentiation of cancers and healthy tissues at the single-cell level.
    “Cancer cell and tumor heterogeneity can lead to increased therapeutic resistance and inconsistent outcomes for different patients,” said lead author Kushal Joshi, a former UCI graduate student in biomedical engineering. The team’s novel biochip addresses this problem by allowing precise characterization of a variety of cancer cells from a sample.
    “Single-cell analysis is essential to identify and classify cancer types and study cellular heterogeneity. It’s necessary to understand tumor initiation, progression and metastasis in order to design better cancer treatment drugs,” said co-author Rahim Esfandyarpour, UCI assistant professor of electrical engineering & computer science as well as biomedical engineering. “Most of the techniques and technologies traditionally used to study cancer are sophisticated, bulky, expensive, and require highly trained operators and long preparation times.”
    He said his group overcame these challenges by combining machine learning techniques with accessible inkjet printing and microfluidics technology to develop low-cost, miniaturized biochips that are simple to prototype and capable of classifying various cell types.
    In the apparatus, samples travel through microfluidic channels with carefully placed electrodes that monitor differences in the electrical properties of diseased versus healthy cells in a single pass. The UCI researchers’ innovation was to devise a way to prototype key parts of the biochip in about 20 minutes with an inkjet printer, allowing for easy manufacturing in diverse settings. Most of the materials involved are reusable or, if disposable, inexpensive.
    Another aspect of the invention is the incorporation of machine learning to manage the large amount of data the tiny system produces. This branch of AI accelerates the processing and analysis of large datasets, finding patterns and associations, predicting precise outcomes, and aiding in rapid and efficient decision-making.
    By including machine learning in the biochip’s workflow, the team has improved the accuracy of analysis and reduced the dependency on skilled analysts, which can also make the technology appealing to medical professionals in the developing world, Esfandyarpour said.
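    As a loose illustration of the single-cell classification step (the device’s actual features and model are not specified in this summary, so everything below is synthetic), imagine each cell yielding two electrical features, say impedance magnitude and phase shift, and a simple nearest-centroid rule separating the two populations:

```python
import random

random.seed(0)

# Entirely synthetic "electrical property" features per cell:
# (impedance magnitude, phase shift). Real device values differ.
def make_cells(n, mag, phase):
    return [(random.gauss(mag, 0.5), random.gauss(phase, 0.05))
            for _ in range(n)]

healthy = make_cells(100, mag=10.0, phase=0.20)
tumor   = make_cells(100, mag=13.0, phase=0.35)

def centroid(cells):
    return tuple(sum(c[k] for c in cells) / len(cells) for k in (0, 1))

c_h, c_t = centroid(healthy), centroid(tumor)

def classify(cell):
    dist = lambda c: sum((x - y) ** 2 for x, y in zip(cell, c))
    return "tumor" if dist(c_t) < dist(c_h) else "healthy"

# Fresh cells drawn from the tumor distribution are labelled correctly
acc = sum(classify(c) == "tumor" for c in make_cells(50, 13.0, 0.35)) / 50
```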
    “The World Health Organization says that nearly 60 percent of deaths from breast cancer happen because of a lack of early detection programs in countries with meager resources,” he said. “Our work has potential applications in single-cell studies, in tumor heterogeneity studies and, perhaps, in point-of-care cancer diagnostics — especially in developing nations where cost, constrained infrastructure and limited access to medical technologies are of the utmost importance.”

    Story Source:
    Materials provided by the University of California, Irvine.

  •

    Diamonds are a quantum scientist's best friend

    Diamonds have a firm foothold in our lexicon. Their many properties often serve as superlatives for quality, clarity and hardness. Aside from the popularity of this rare material in ornamental and decorative use, these precious stones are also highly valued in industry, where they are used to cut and polish other hard materials and to build radiation detectors.
    More than a decade ago, a new property was uncovered in diamonds into which high concentrations of boron are introduced: superconductivity. Superconductivity occurs when two electrons with opposite spin form a pair (called a Cooper pair), so that the electrical resistance of the material drops to zero. This means a large supercurrent can flow in the material, bringing with it the potential for advanced technological applications. Yet little work has been done since to investigate and characterise the nature of diamond’s superconductivity, and therefore its potential applications.
    New research led by Professor Somnath Bhattacharyya in the Nano-Scale Transport Physics Laboratory (NSTPL) in the School of Physics at the University of the Witwatersrand in Johannesburg, South Africa, details the phenomenon of what is called “triplet superconductivity” in diamond. Triplet superconductivity occurs when electrons move in a composite spin state rather than as a single pair. This is an extremely rare, yet efficient form of superconductivity that until now has only been known to occur in one or two other materials, and only theoretically in diamonds.
    “In a conventional superconducting material such as aluminium, superconductivity is destroyed by magnetic fields and magnetic impurities, however triplet superconductivity in a diamond can exist even when combined with magnetic materials. This leads to more efficient and multifunctional operation of the material,” explains Bhattacharyya.
    The team’s work has recently been published in the New Journal of Physics, in an article titled “Effects of Rashba-spin-orbit coupling on superconducting boron-doped nanocrystalline diamond films: evidence of interfacial triplet superconductivity.” The research was done in collaboration with Oxford University (UK) and Diamond Light Source (UK). Through these collaborations, the atomic arrangement of diamond crystals and their interfaces could be visualised in unprecedented detail, supporting the first claims of ‘triplet’ superconductivity.
    Practical proof of triplet superconductivity in diamonds came with much excitement for Bhattacharyya and his team. “We were even working on Christmas day, we were so excited,” says Davie Mtsuko. “This is something that has never before been claimed in diamond,” adds Christopher Coleman. Both Mtsuko and Coleman are co-authors of the paper.
    Despite diamonds’ reputation as a highly rare and expensive resource, they can be manufactured in a laboratory using a specialised piece of equipment called a vapour deposition chamber. The Wits NSTPL has developed its own plasma deposition chamber, which allows the group to grow diamonds of higher than normal quality, making them ideal for this kind of advanced research.
    This finding expands the potential uses of diamond, which is already well-regarded as a quantum material. “All conventional technology is based on semiconductors associated with electron charge. Thus far, we have a decent understanding of how they interact, and how to control them. But when we have control over quantum states such as superconductivity and entanglement, there is a lot more physics to the charge and spin of electrons, and this also comes with new properties,” says Bhattacharyya. “With the new surge of superconducting materials such as diamond, traditional silicon technology can be replaced by cost effective and low power consumption solutions.”
    The induction of triplet superconductivity in diamond is important for more than just its potential applications. It speaks to our fundamental understanding of physics. “Thus far, triplet superconductivity exists mostly in theory, and our study gives us an opportunity to test these models in a practical way,” says Bhattacharyya.

    Story Source:
    Materials provided by the University of the Witwatersrand.

  •

    Faster COVID-19 testing with simple algebraic equations

    A mathematician from Cardiff University has developed a new method for processing large volumes of COVID-19 tests which he believes could lead to significantly more tests being performed at once and results being returned much quicker.
    Dr Usama Kadri, from the University’s School of Mathematics, believes the new technique could allow many more patients to be tested using the same number of test tubes, with a lower chance of false negatives occurring.
    Dr Kadri’s technique, which has been published in the journal Health Systems, uses simple algebraic equations to identify positive samples in tests and takes advantage of a testing technique known as ‘pooling’.
    Pooling involves grouping a large number of samples from different patients into one test tube and performing a single test on that tube.
    If the tube tests negative, then everybody in that group is known to be free of the virus.
    Pooling can be applied by laboratories to test more samples in a shorter space of time, and works well when the overall infection rate in a certain population is expected to be low. If a tube is returned positive then each person within that group needs to be tested once again, this time individually, to determine who has the virus.


    In this instance, and particularly when it is known that infection rates in the population are high, the savings from the pooling technique in terms of time and cost become less significant.
    However, Dr Kadri’s new technique removes the need to perform a second round of tests once a batch is returned positive and can identify the individuals who have the virus using simple equations.
    The technique works with a fixed number of individuals and test tubes, for example 200 individuals and 10 test tubes, and starts by taking a fixed number of samples from a single individual, for example 5, and distributing these into 5 of the 10 test tubes.
    Another 5 samples are taken from the second individual and these are distributed into a different combination of 5 of the 10 tubes.
    This is then repeated for each of the 200 individuals in the group so that no individual shares the same combination of tubes.


    Each of the 10 test tubes is then sent for testing and any tube that returns negative indicates that all patients that have samples in that tube must be negative.
    If only one individual has the virus, then the combinations of the tubes that return positive, which is unique to the individual, will directly indicate that individual.
    However, if the number of positive tubes is larger than the number of samples from each individual, in this example 5, then there should be at least two individuals with the virus.
    The individuals that have all of their test tubes return positive are then selected.
    The method assumes that each individual who is positive contributes the same quantity of virus to each of their tubes, and that each positive individual has a unique viral quantity in their sample, different from all the others.
    From this, the method then assumes that there are exactly two individuals with the virus and, for every two suspected individuals, a computer is used to calculate any combination of virus quantity that would return the actual overall quantity of virus that was measured in the tests.
    If the right combination is found then the selected two individuals have to be positive and no one else. Otherwise, the procedure is repeated but with an additional suspected individual, and so on until the right combination is found.
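    The combinatorial bookkeeping here is straightforward: give each individual a distinct combination of 5 of the 10 tubes, and with C(10,5) = 252 such combinations there are enough patterns for 200 people. A minimal sketch of the assignment and of decoding the easy single-positive case (the multi-positive search over viral-load combinations is omitted):

```python
from itertools import combinations

T, K, N = 10, 5, 200                        # tubes, samples each, individuals
patterns = list(combinations(range(T), K))  # C(10, 5) = 252 distinct patterns
assignment = {person: set(patterns[person]) for person in range(N)}

def decode_single(positive_tubes):
    """If exactly one individual is infected, the set of positive tubes
    equals that individual's tube pattern, identifying them directly."""
    hits = [p for p, tubes in assignment.items() if tubes == positive_tubes]
    return hits[0] if len(hits) == 1 else None

# Example: if person 42 is the only carrier, exactly their 5 tubes
# come back positive, and decoding recovers 42.
positive = assignment[42]
```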
    “Applying the proposed method allows testing many more patients using the same number of testing tubes, where all positives are identified with no false negatives, and no need for a second round of independent testing, with the effective testing time reduced drastically,” Dr Kadri said.
    So far, the method has been assessed using simulations of testing scenarios and Dr Kadri acknowledges that lab testing will need to be carried out to increase confidence in the proposed method.
    Moreover, for clinical use, additional factors need to be considered including sample types, viral load, prevalence, and inhibitor substances. More

  •

    Applying artificial intelligence to science education

    A new review published in the Journal of Research in Science Teaching highlights the potential of machine learning — a subset of artificial intelligence — in science education. Although the authors initiated their review before the COVID-19 outbreak, the pandemic highlights the need to examine cutting-edge digital technologies as we re-think the future of teaching and learning.
    Based on a review of 47 studies, investigators developed a framework to conceptualize machine learning applications in science assessment. The article aims to examine how machine learning has revolutionized the capacity of science assessment in terms of tapping into complex constructs, improving assessment functionality, and facilitating scoring automaticity.
    Based on their investigation, the researchers identified various ways in which machine learning has transformed traditional science assessment, as well as anticipated impacts that it will likely have in the future (such as providing personalized science learning and changing the process of educational decision-making).
    “Machine learning is increasingly impacting every aspect of our lives, including education,” said lead author Xiaoming Zhai, an assistant professor in the Department of Mathematics and Science Education at the University of Georgia’s Mary Frances Early College of Education. “It is anticipated that the cutting-edge technology may be able to redefine science assessment practices and significantly change education in the future.”

    Story Source:
    Materials provided by Wiley.

  •

    Deep learning takes on synthetic biology

    DNA and RNA have been compared to “instruction manuals” containing the information needed for living “machines” to operate. But while electronic machines like computers and robots are designed from the ground up to serve a specific purpose, biological organisms are governed by a much messier, more complex set of functions that lack the predictability of binary code. Inventing new solutions to biological problems requires teasing apart seemingly intractable variables — a task that is daunting to even the most intrepid human brains.
    Two teams of scientists from the Wyss Institute at Harvard University and the Massachusetts Institute of Technology have devised pathways around this roadblock by going beyond human brains; they developed a set of machine learning algorithms that can analyze reams of RNA-based “toehold” sequences and predict which ones will be most effective at sensing and responding to a desired target sequence. As reported in two papers published concurrently today in Nature Communications, the algorithms could be generalizable to other problems in synthetic biology as well, and could accelerate the development of biotechnology tools to improve science and medicine and help save lives.
    “These achievements are exciting because they mark the starting point of our ability to ask better questions about the fundamental principles of RNA folding, which we need to know in order to achieve meaningful discoveries and build useful biological technologies,” said Luis Soenksen, Ph.D., a Postdoctoral Fellow at the Wyss Institute and Venture Builder at MIT’s Jameel Clinic who is a co-first author of the first of the two papers.
    Getting ahold of toehold switches
    The collaboration between data scientists from the Wyss Institute’s Predictive BioAnalytics Initiative and synthetic biologists in Wyss Core Faculty member Jim Collins’ lab at MIT was created to apply the computational power of machine learning, neural networks, and other algorithmic architectures to complex problems in biology that have so far defied resolution. As a proving ground for their approach, the two teams focused on a specific class of engineered RNA molecules: toehold switches, which are folded into a hairpin-like shape in their “off” state. When a complementary RNA strand binds to a “trigger” sequence trailing from one end of the hairpin, the toehold switch unfolds into its “on” state and exposes sequences that were previously hidden within the hairpin, allowing ribosomes to bind to and translate a downstream gene into protein molecules. This precise control over the expression of genes in response to the presence of a given molecule makes toehold switches very powerful components for sensing substances in the environment, detecting disease, and other purposes.
    However, many toehold switches do not work very well when tested experimentally, even though they have been engineered to produce a desired output in response to a given input based on known RNA folding rules. Recognizing this problem, the teams decided to use machine learning to analyze a large volume of toehold switch sequences and use insights from that analysis to more accurately predict which toeholds reliably perform their intended tasks, which would allow researchers to quickly identify high-quality toeholds for various experiments.


    The first hurdle they faced was that there was no dataset of toehold switch sequences large enough for deep learning techniques to analyze effectively. The authors took it upon themselves to generate a dataset that would be useful to train such models. “We designed and synthesized a massive library of toehold switches, nearly 100,000 in total, by systematically sampling short trigger regions along the entire genomes of 23 viruses and 906 human transcription factors,” said Alex Garruss, a Harvard graduate student working at the Wyss Institute who is a co-first author of the first paper. “The unprecedented scale of this dataset enables the use of advanced machine learning techniques for identifying and understanding useful switches for immediate downstream applications and future design.”
    Armed with enough data, the teams first employed tools traditionally used for analyzing synthetic RNA molecules to see if they could accurately predict the behavior of toehold switches now that there were manifold more examples available. However, none of the methods they tried — including mechanistic modeling based on thermodynamics and physical features — were able to predict with sufficient accuracy which toeholds functioned better.
    A picture is worth a thousand base pairs
    The researchers then explored various machine learning techniques to see if they could create models with better predictive abilities. The authors of the first paper decided to analyze toehold switches not as sequences of bases, but rather as two-dimensional “images” of base-pair possibilities. “We know the baseline rules for how an RNA molecule’s base pairs bond with each other, but molecules are wiggly — they never have a single perfect shape, but rather a probability of different shapes they could be in,” said Nicolaas Angenent-Mari, a MIT graduate student working at the Wyss Institute and co-first author of the first paper. “Computer vision algorithms have become very good at analyzing images, so we created a picture-like representation of all the possible folding states of each toehold switch, and trained a machine learning algorithm on those pictures so it could recognize the subtle patterns indicating whether a given picture would be a good or a bad toehold.”
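    The picture-like idea can be illustrated with a much cruder stand-in than the probability-weighted images the authors describe: a binary matrix whose entry (i, j) records whether bases i and j are complementary and could in principle pair.

```python
# Canonical and wobble RNA base pairs
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def pair_matrix(seq):
    """Binary 'image' of pairing possibilities: entry (i, j) is 1 if
    bases i and j could form a pair."""
    n = len(seq)
    return [[1 if (seq[i], seq[j]) in PAIRS else 0 for j in range(n)]
            for i in range(n)]

img = pair_matrix("GGGAAACCC")  # a hairpin-prone toy sequence
# The G...C stem lights up the off-diagonal corners, e.g.:
# img[0][8] == 1   (G can pair with C)
# img[3][4] == 0   (A cannot pair with A)
```

The published approach trains on richer images reflecting the probability of each folding state, not just this 0/1 complementarity.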
    Another benefit of their visually-based approach is that the team was able to “see” which parts of a toehold switch sequence the algorithm “paid attention” to the most when determining whether a given sequence was “good” or “bad.” They named this interpretation approach Visualizing Secondary Structure Saliency Maps, or VIS4Map, and applied it to their entire toehold switch dataset. VIS4Map successfully identified physical elements of the toehold switches that influenced their performance, and allowed the researchers to conclude that toeholds with more potentially competing internal structures were “leakier” and thus of lower quality than those with fewer such structures, providing insight into RNA folding mechanisms that had not been discovered using traditional analysis techniques.


    “Being able to understand and explain why certain tools work or don’t work has been a secondary goal within the artificial intelligence community for some time, but interpretability needs to be at the forefront of our concerns when studying biology because the underlying reasons for those systems’ behaviors often cannot be intuited,” said Jim Collins, Ph.D., the senior author of the first paper. “Meaningful discoveries and disruptions are the result of deep understanding of how nature works, and this project demonstrates that machine learning, when properly designed and applied, can greatly enhance our ability to gain important insights about biological systems.” Collins is also the Termeer Professor of Medical Engineering and Science at MIT.
    Now you’re speaking my language
    While the first team analyzed toehold switch sequences as 2D images to predict their quality, the second team created two different deep learning architectures that approached the challenge using orthogonal techniques. They then went beyond predicting toehold quality and used their models to optimize and redesign poorly performing toehold switches for different purposes, which they report in the second paper.
    The first model, based on a convolutional neural network (CNN) and multi-layer perceptron (MLP), treats toehold sequences as 1D images, or lines of nucleotide bases, and identifies patterns of bases and potential interactions between those bases to predict good and bad toeholds. The team used this model to create an optimization method called STORM (Sequence-based Toehold Optimization and Redesign Model), which allows for complete redesign of a toehold sequence from the ground up. This “blank slate” tool is optimal for generating novel toehold switches to perform a specific function as part of a synthetic genetic circuit, enabling the creation of complex biological tools.
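    “Lines of nucleotide bases” as a 1D image amounts to a one-hot encoding: each position becomes a 4-channel pixel that a convolutional layer can slide over. A sketch (the encoding details here are an assumption, not the published STORM code):

```python
BASES = "ACGU"

def one_hot(seq):
    """Encode an RNA sequence as a length-n list of 4-channel 'pixels',
    the 1D-image input a convolutional layer can scan for base patterns."""
    return [[1 if base == channel else 0 for channel in BASES] for base in seq]

x = one_hot("AUGC")
# x[0] == [1, 0, 0, 0]  (A)
# x[1] == [0, 0, 0, 1]  (U)
```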
    “The really cool part about STORM and the model underlying it is that after seeding it with input data from the first paper, we were able to fine-tune the model with only 168 samples and use the improved model to optimize toehold switches. That calls into question the prevailing assumption that you need to generate massive datasets every time you want to apply a machine learning algorithm to a new problem, and suggests that deep learning is potentially more applicable for synthetic biologists than we thought,” said co-first author Jackie Valeri, a graduate student at MIT and the Wyss Institute.
    The second model is based on natural language processing (NLP), and treats each toehold sequence as a “phrase” consisting of patterns of “words,” eventually learning how certain words are put together to make a coherent phrase. “I like to think of each toehold switch as a haiku poem: like a haiku, it’s a very specific arrangement of phrases within its parent language — in this case, RNA. We are essentially training this model to learn how to write a good haiku by feeding it lots and lots of examples,” said co-first author Pradeep Ramesh, Ph.D., a Visiting Postdoctoral Fellow at the Wyss Institute and Machine Learning Scientist at Sherlock Biosciences.
    Ramesh and his co-authors integrated this NLP-based model with the CNN-based model to create NuSpeak (Nucleic Acid Speech), an optimization approach that allowed them to redesign the last 9 nucleotides of a given toehold switch while keeping the remaining 21 nucleotides intact. This technique allows for the creation of toeholds that are designed to detect the presence of specific pathogenic RNA sequences, and could be used to develop new diagnostic tests.
    The team experimentally validated both of these platforms by optimizing toehold switches designed to sense fragments from the SARS-CoV-2 viral genome. NuSpeak improved the sensors’ performances by an average of 160%, while STORM created better versions of four “bad” SARS-CoV-2 viral RNA sensors whose performances improved by up to 28 times.
    “A real benefit of the STORM and NuSpeak platforms is that they enable you to rapidly design and optimize synthetic biology components, as we showed with the development of toehold sensors for a COVID-19 diagnostic,” said co-first author Katie Collins, an undergraduate MIT student at the Wyss Institute who worked with MIT Associate Professor Timothy Lu, M.D., Ph.D., a corresponding author of the second paper.
    “The data-driven approaches enabled by machine learning open the door to really valuable synergies between computer science and synthetic biology, and we’re just beginning to scratch the surface,” said Diogo Camacho, Ph.D., a corresponding author of the second paper who is a Senior Bioinformatics Scientist and co-lead of the Predictive BioAnalytics Initiative at the Wyss Institute. “Perhaps the most important aspect of the tools we developed in these papers is that they are generalizable to other types of RNA-based sequences such as inducible promoters and naturally occurring riboswitches, and therefore can be applied to a wide range of problems and opportunities in biotechnology and medicine.”
    Additional authors of the papers include Wyss Core Faculty member and Professor of Genetics at HMS George Church, Ph.D.; and Wyss and MIT Graduate Students Miguel Alcantar and Bianca Lepe.
    “Artificial intelligence is a wave that is just beginning to impact science and industry, and has incredible potential for helping to solve intractable problems. The breakthroughs described in these studies demonstrate the power of melding computation with synthetic biology at the bench to develop new and more powerful bioinspired technologies, in addition to leading to new insights into fundamental mechanisms of biological control,” said Don Ingber, M.D., Ph.D., the Wyss Institute’s Founding Director. Ingber is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, as well as Professor of Bioengineering at Harvard’s John A. Paulson School of Engineering and Applied Sciences.
    This work was supported by the DARPA Synergistic Discovery and Design program, the Defense Threat Reduction Agency, the Paul G. Allen Frontiers Group, the Wyss Institute for Biologically Inspired Engineering, Harvard University, the Institute for Medical Engineering and Science, the Massachusetts Institute of Technology, the National Science Foundation, the National Human Genome Research Institute, the Department of Energy, the National Institutes of Health, and a CONACyT grant. More

    This 'squidbot' jets around and takes pics of coral and fish

    Engineers at the University of California San Diego have built a squid-like robot that can swim untethered, propelling itself by generating jets of water. The robot carries its own power source inside its body. It can also carry a sensor, such as a camera, for underwater exploration.
    The researchers detail their work in a recent issue of Bioinspiration & Biomimetics.
    “Essentially, we recreated all the key features that squids use for high-speed swimming,” said Michael T. Tolley, one of the paper’s senior authors and a professor in the Department of Mechanical and Aerospace Engineering at UC San Diego. “This is the first untethered robot that can generate jet pulses for rapid locomotion like the squid and can achieve these jet pulses by changing its body shape, which improves swimming efficiency.”
    This squid robot is made mostly from soft materials such as acrylic polymer, with a few rigid, 3D printed and laser cut parts. Using soft robots in underwater exploration is important to protect fish and coral, which could be damaged by rigid robots. But soft robots tend to move slowly and have difficulty maneuvering.
    The research team, which includes roboticists and experts in computer simulations as well as experimental fluid dynamics, turned to cephalopods as a good model to solve some of these issues. Squid, for example, can reach the fastest speeds of any aquatic invertebrates thanks to a jet propulsion mechanism.
    Their robot takes a volume of water into its body while storing elastic energy in its skin and flexible ribs. It then releases this energy by compressing its body and generates a jet of water to propel itself.
    At rest, the squid robot is shaped roughly like a paper lantern, and has flexible ribs, which act like springs, along its sides. The ribs are connected to two circular plates, one at each end of the robot. One plate is connected to a nozzle that both takes in water and ejects it when the robot’s body contracts. The other can carry a waterproof camera or a different type of sensor.
    Engineers first tested the robot in a water testbed in the lab of Professor Geno Pawlak, in the UC San Diego Department of Mechanical and Aerospace Engineering. Then they took it out for a swim in one of the tanks at the UC San Diego Birch Aquarium at the Scripps Institution of Oceanography.
    They demonstrated that the robot could steer by adjusting the direction of the nozzle. As with any underwater robot, waterproofing was a key concern for electrical components such as the battery and camera. They clocked the robot’s speed at about 18 to 32 centimeters per second (roughly half a mile per hour), which is faster than most other soft robots.
    “After we were able to optimize the design of the robot so that it would swim in a tank in the lab, it was especially exciting to see that the robot was able to successfully swim in a large aquarium among coral and fish, demonstrating its feasibility for real-world applications,” said Caleb Christianson, who led the study as part of his Ph.D. work in Tolley’s research group. He is now a senior medical devices engineer at San Diego-based Dexcom.
    Researchers conducted several experiments to find the optimal size and shape for the nozzle that would propel the robot, which in turn improved the robot’s efficiency, maneuverability and speed. This was done mostly by simulating this kind of jet propulsion, work that was led by Professor Qiang Zhu and his team in the Department of Structural Engineering at UC San Diego. The team also learned more about how energy can be stored in the elastic component of the robot’s body and skin, which is later released to generate a jet.
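To see why nozzle size matters, a back-of-the-envelope estimate helps: for a quasi-steady water jet, thrust scales as F = ρ A v², where A is the nozzle cross-section and v the jet speed. The sketch below is a generic momentum-flux calculation, not the team’s simulation, and the example dimensions are illustrative rather than taken from the paper.

```python
import math

RHO_WATER = 1000.0  # density of water, kg/m^3

def jet_thrust(nozzle_diameter_m, jet_speed_m_s):
    """Quasi-steady thrust of a water jet: F = rho * A * v^2,
    i.e. the momentum flux carried out through the nozzle."""
    area = math.pi * (nozzle_diameter_m / 2.0) ** 2
    return RHO_WATER * area * jet_speed_m_s ** 2

# Illustrative numbers: a 1 cm nozzle ejecting water at 1 m/s
print(round(jet_thrust(0.01, 1.0), 4))  # ~0.0785 N
```

For a fixed ejected volume, a narrower nozzle forces a faster jet, which is one reason nozzle geometry trades off against the energy the robot can store in its elastic ribs.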
    Video: https://www.youtube.com/watch?v=v-UMDnSB8k0&feature=emb_logo

    Story Source:
    Materials provided by University of California – San Diego. Note: Content may be edited for style and length. More