More stories

  • Learning from pangolins and peacocks: Researchers explore next-gen structural materials

    From pangolin scales that can stand up to hard hits to colorful but sturdy peacock feathers, nature can do a lot with a few simple molecules.
    In a new review paper, a team of international researchers has laid out how engineers are taking inspiration from the biological world — and designing new kinds of materials that are potentially tougher, more versatile and more sustainable than what humans can make on their own.
    “Even today, nature makes things way simpler and way smarter than what we can do synthetically in the lab,” said Dhriti Nepal, first author and a research materials engineer at the Air Force Research Laboratory in Ohio.
    Nepal, along with Vladimir Tsukruk of the Georgia Institute of Technology and Hendrik Heinz of the University of Colorado Boulder, served as co-corresponding authors of the new analysis. The team published its findings Nov. 28 in the journal Nature Materials.
    The researchers, who come from three countries, delve into the promise and challenges behind “bioinspired nanocomposites.” These materials combine different kinds of proteins and other molecules at incredibly small scales to achieve properties that may not be possible with traditional metals or plastics. Researchers often design them using advanced computer simulations or models. Examples include thin films that resist wear and tear by incorporating proteins from silkworm cocoons; new kinds of laminates made from polymers and clay; carbon fibers produced using bioinspired principles; and panes of glass that resist cracking because their layered structure mimics nacre — the iridescent lining inside many mollusk shells.
    Such nature-inspired materials could one day lead to new and better solar panels, soft robots and even coatings for hypersonic jets, said Heinz, professor of chemical and biological engineering at CU Boulder. But first, researchers will need to learn how to build them from the bottom up, ensuring that every molecule is in the right place.

  • The entanglement advantage

    Researchers affiliated with the Q-NEXT quantum research center show how to create quantum-entangled networks of atomic clocks and accelerometers — and they demonstrate the setup’s superior, high-precision performance.
    What happened
    For the first time, scientists have entangled atoms for use as networked quantum sensors, specifically, atomic clocks and accelerometers.
    The research team’s experimental setup yielded ultraprecise measurements of time and acceleration. Compared to a similar setup that does not draw on quantum entanglement, their time measurements were 3.5 times more precise, and acceleration measurements exhibited 1.2 times greater precision.
    The result, published in Nature, is supported by Q-NEXT, a U.S. Department of Energy (DOE) National Quantum Information Science Research Center led by DOE’s Argonne National Laboratory. The research was conducted by scientists at Stanford University, Cornell University and DOE’s Brookhaven National Laboratory.
    “The impact of using entanglement in this configuration was that it produced better sensor network performance than would have been available if quantum entanglement were not used as a resource,” said Mark Kasevich, lead author of the paper, a member of Q-NEXT, the William R. Kenan, Jr. professor in the Stanford School of Humanities and Sciences and professor of physics and of applied physics. “For atomic clocks and accelerometers, ours is a pioneering demonstration.”
    What is quantum entanglement? How does it apply to sensors?
    Entanglement, a special property of nature at the quantum level, is a correlation between two or more objects. When two atoms are entangled, measuring one reveals the properties of both. This is true no matter how much distance — even if it’s light-years — separates the entangled atoms.
    A helpful everyday analogy: a red marble and a blue marble are placed in a box. If you draw the red marble, you know, without having to look, that the marble left in the box is blue. The colors of the marbles are correlated, or entangled. In the quantum realm, entanglement is subtler. An atom can take on multiple states (colors) at once. If our marbles were like atoms, each marble would be both red and blue at the same time while it sits in the box, and each would “decide” its color only at the moment it is revealed. Once you draw one marble of decided color, you know the color of its entangled partner.
    To take a measurement of one member of an entangled pair is effectively to take a simultaneous reading of both. Taking this further: two entangled clocks are practically equivalent to a single clock with two displays. Time measurements taken with entangled clocks can therefore be more precise than measurements from two separate, synchronized clocks.
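    To make the precision advantage concrete, here is a minimal numerical sketch, not drawn from the paper itself, of how measurement uncertainty scales with the number of atoms: independent atoms are bounded by the standard quantum limit, while entangled atoms can push toward the Heisenberg limit. The atom number N and the placement of the reported 3.5-fold gain on this scale are illustrative assumptions only.

    ```python
    import numpy as np

    # Illustrative sketch only: phase-estimation uncertainty for N independent atoms follows
    # the standard quantum limit (SQL), ~1/sqrt(N); entangled (e.g. spin-squeezed) ensembles
    # can approach the Heisenberg limit, ~1/N. N and the 3.5x gain placement are assumptions.
    N = 100                              # hypothetical number of atoms per sensor
    sql = 1.0 / np.sqrt(N)               # uncertainty with independent atoms
    heisenberg = 1.0 / N                 # best case allowed by quantum mechanics

    gain = 3.5                           # precision gain reported for the clock comparison
    entangled = sql / gain               # where such a gain would land relative to the SQL

    print(f"standard quantum limit: {sql:.4f}")
    print(f"with a 3.5x gain:       {entangled:.4f}")
    print(f"Heisenberg limit:       {heisenberg:.4f}")
    ```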

  • Nanoengineers develop a predictive database for materials

    Nanoengineers at the University of California San Diego’s Jacobs School of Engineering have developed an AI algorithm that predicts the structure and dynamic properties of any material — whether existing or new — almost instantaneously. Known as M3GNet, the algorithm was used to develop matterverse.ai, a database of more than 31 million yet-to-be-synthesized materials with properties predicted by machine learning algorithms. Matterverse.ai facilitates the discovery of new technological materials with exceptional properties.
    The team behind M3GNet, led by UC San Diego nanoengineering professor Shyue Ping Ong, uses matterverse.ai and the new capabilities of M3GNet in their search for safer and more energy-dense electrodes and electrolytes for rechargeable lithium-ion batteries. The project is explored in the Nov. 28 issue of the journal Nature Computational Science.
    The properties of a material are determined by the arrangement of its atoms. However, existing approaches to obtain that arrangement are either prohibitively expensive or ineffective for many elements.
    “Similar to proteins, we need to know the structure of a material to predict its properties,” said Ong, the associate director of the Sustainable Power and Energy Center at the Jacobs School of Engineering. “What we need is an AlphaFold for materials.”
    AlphaFold is an AI algorithm developed by Google DeepMind to predict protein structure. To build the equivalent for materials, Ong and his team combined graph neural networks with many-body interactions to build a deep learning architecture that works universally, with high accuracy, across all the elements of the periodic table.
    “Mathematical graphs are really natural representations of a collection of atoms,” said Chi Chen, a former senior project scientist in Ong’s lab and first author of the work, who is now a senior quantum architect at Microsoft Quantum. “Using graphs, we can represent the full complexity of materials without being subject to the combinatorial explosion of terms in traditional formalisms.”
    To train their model, the team used the huge database of materials energies, forces and stresses collected in the Materials Project over the past decade. The result is the M3GNet interatomic potential (IAP), which can predict the energies and forces in any collection of atoms. Matterverse.ai was generated through combinatorial elemental substitutions on more than 5,000 structural prototypes in the Inorganic Crystal Structure Database (ICSD). The M3GNet IAP was then used to obtain the equilibrium crystal structure — a process called “relaxation” — for property prediction.
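    As a rough illustration of the graph idea mentioned above (a sketch, not the M3GNet code itself): atoms become graph nodes and pairs of atoms closer than a cutoff radius become edges that carry the interatomic distance. A real crystal-structure graph would also need periodic lattice images and many-body terms, which are omitted here; the species, coordinates and cutoff below are arbitrary.

    ```python
    import numpy as np

    # Minimal sketch (not the M3GNet code): represent a collection of atoms as a graph whose
    # nodes are atoms and whose edges connect pairs closer than a cutoff radius.
    species = ["Li", "Co", "O", "O"]
    coords = np.array([[0.0, 0.0, 0.0],
                       [1.4, 1.4, 1.4],
                       [0.0, 1.4, 1.4],
                       [1.4, 0.0, 1.4]])   # angstroms, hypothetical positions
    cutoff = 2.5

    edges = []
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = np.linalg.norm(coords[i] - coords[j])
            if d < cutoff:
                edges.append((i, j, round(d, 3)))   # edge feature: bond length

    print("nodes:", species)
    print("edges (i, j, distance):", edges)
    ```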
    Of the 31 million materials in matterverse.ai today, more than a million are predicted to be potentially stable. Ong and his team intend to greatly expand not just the number of materials, but also the number of ML-predicted properties, including high-value properties with small data sizes using a multi-fidelity approach they developed earlier.
    Beyond structural relaxations, the M3GNet IAP also has broad applications in dynamic simulations of materials and in property prediction.
    “For instance, we are often interested in how fast lithium ions diffuse in a lithium-ion battery electrode or electrolyte. The faster the diffusion, the more quickly you can charge or discharge a battery,” Ong said. “We have shown that the M3GNet IAP can be used to predict the lithium conductivity of a material with good accuracy. We truly believe that the M3GNet architecture is a transformative tool that can greatly expand our ability to explore new material chemistries and structures.”
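    For context on that diffusion example, here is an assumption-laden sketch, not the authors' workflow: once an interatomic potential drives a molecular-dynamics run, a diffusion coefficient is commonly extracted from the mean squared displacement via the Einstein relation, MSD ≈ 6Dt. The trajectory below is random-walk dummy data standing in for a real simulation, and the timestep and displacement scale are invented.

    ```python
    import numpy as np

    # Sketch: estimate a diffusivity from a (dummy) particle trajectory using the Einstein
    # relation MSD ~ 6 D t in three dimensions.
    rng = np.random.default_rng(0)
    dt = 2e-15                                       # 2 fs timestep, assumed
    steps = 10000
    positions = np.cumsum(rng.normal(scale=1e-11, size=(steps, 3)), axis=0)  # meters

    msd = np.sum((positions - positions[0]) ** 2, axis=1)   # squared displacement vs time
    times = np.arange(steps) * dt
    slope = np.polyfit(times, msd, 1)[0]                    # linear fit of MSD(t)
    print(f"estimated diffusivity: {slope / 6.0:.3e} m^2/s")
    ```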
    To promote the use of M3GNet, the team has released the framework as open-source Python code on GitHub. Since posting the preprint on arXiv in February 2022, the team has received interest from researchers in both academia and industry. There are plans to integrate the M3GNet IAP as a tool in commercial materials simulation packages.
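    For readers who want to try the released code, a usage sketch follows. It assumes the open-source m3gnet package exposes a Relaxer class that accepts a pymatgen Structure and returns the relaxed geometry, as its repository describes; the exact class and key names should be checked against the current GitHub documentation, and the two-atom cell is a made-up example.

    ```python
    # Assumed API; verify names against the m3gnet GitHub repository before relying on them
    # (roughly: pip install m3gnet pymatgen).
    from pymatgen.core import Lattice, Structure
    from m3gnet.models import Relaxer

    # A made-up two-atom cubic cell as the input structure.
    structure = Structure(Lattice.cubic(5.1), ["Li", "Cl"], [[0, 0, 0], [0.5, 0.5, 0.5]])

    relaxer = Relaxer()                   # loads the pre-trained universal potential
    result = relaxer.relax(structure)     # "relaxation": find the equilibrium geometry
    print(result["final_structure"])
    ```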
    This work was authored by Chi Chen and Shyue Ping Ong at UC San Diego. The research was primarily funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under the Materials Project program. Part of the work was funded by LG Energy Solution through the Frontier Research Laboratory Program. This work used the Extreme Science and Engineering Discovery Environment (XSEDE).
    Story Source:
    Materials provided by the University of California San Diego. Original written by Emerson Dameron.

  • A life-inspired system dynamically adjusts to its environment

    Researchers have developed a synthetic system that responds to environmental changes in the same way as living organisms, using a feedback loop to maintain its internal conditions. This not only keeps the material’s conditions stable but also makes it possible to build mechanisms that react dynamically to their environment, an important trait for interactive materials and soft robotics.
    Living systems, from individual cells up to whole organisms, use feedback to maintain their conditions. For example, we sweat to cool down when we’re too warm, and a variety of systems work to keep our blood pressure and chemistry in the right range. These homeostatic systems make living organisms robust by enabling them to cope with changes in their environment. Feedback is also important in some artificial systems, such as thermostats, but those systems lack the dynamic adaptability and robustness of homeostatic living systems.
    Now, researchers at Aalto University and Tampere University have developed a system of materials that maintains its state in a manner similar to living systems. The new system consists of two side-by-side gels with different properties. Interactions between the gels make the system respond homeostatically to environmental changes, keeping its temperature within a narrow range when stimulated by a laser.
    ‘The tissues of living organisms are typically soft, elastic and deformable,’ says Hang Zhang, an Academy of Finland postdoctoral researcher at Aalto who was one of the lead authors of the study. ‘The gels used in our system are similar. They are soft polymers swollen in water, and they can provide a fascinating variety of responses upon environmental stimuli.’
    The laser shines through the first gel and then bounces off a mirror onto the second gel, where it heats suspended gold nanoparticles. The heat moves through the second gel to the first, raising its temperature. The first gel is only transparent when it is below a specific temperature; once it gets hotter, it becomes opaque. This change stops the laser from reaching the mirror and heating the second gel. The two gels then cool down until the first becomes transparent again, at which point the laser passes through and the heating process starts again.
    In other words, the arrangement of the laser, gels and mirror creates a feedback loop that keeps the gels at a specific temperature. At higher temperatures, the laser is blocked and can’t heat the gold nanoparticles; at lower temperatures, the first gel becomes transparent, so the laser shines through and heats the gold particles.
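    The loop behaves like a self-resetting optical thermostat, which a toy simulation can make explicit. The sketch below is not the published model; the threshold temperature, heating rate and cooling rate are invented numbers chosen only to show the temperature settling into a narrow oscillation around the transparency threshold.

    ```python
    # Toy simulation of the optical feedback loop described above (illustrative numbers only).
    # The laser heats the gels only while gel 1 is below its transparency threshold; above it,
    # the beam is blocked and the gels cool back toward ambient temperature.
    T_ambient, T_threshold = 25.0, 35.0      # degrees C, assumed
    heating_rate, cooling_rate = 0.8, 0.05   # per time step, assumed
    T = T_ambient
    history = []
    for step in range(500):
        laser_passes = T < T_threshold       # gel 1 transparent -> laser reaches gel 2
        if laser_passes:
            T += heating_rate                # gold nanoparticles absorb and heat the gels
        T -= cooling_rate * (T - T_ambient)  # Newtonian cooling toward ambient
        history.append(T)

    print(f"steady-state band: {min(history[100:]):.1f} to {max(history[100:]):.1f} C")
    ```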
    ‘Like a living system, our homeostatic system is dynamic. The temperature oscillates around the threshold, but the range of the oscillation is pretty small and is robust to outside disturbances. It’s a robust homeostatic system,’ says Hao Zeng, an Academy of Finland research fellow at Tampere University who was the other lead author of the study.
    The researchers then built touch-responsive triggers on top of the feedback system. To accomplish this, they added mechanical components that respond to changes in temperature. Touching the gel system in the right way pushes it out of its steady state, and the resulting change in temperature causes the mechanical component to deform. Afterwards, everything returns to its original condition.
    The team designed two systems that respond to different types of touch. In one case, a single touch triggers the response, just as a touch-me-not mimosa plant folds its leaves when stroked. The second setup only responds to repeated touches, in the same way as a Venus flytrap needs to be touched twice in 30 seconds to make it snap shut. ‘We can trigger a snapping behaviour with mechanical touches at suitable intervals, just like a Venus flytrap. Our artificial material system can discriminate between low-frequency and high-frequency touches,’ explains Professor Arri Priimägi of Tampere University.
    The researchers also showed how the homeostatic system could control a dynamic colour display or even push cargo along its body. They emphasize that these demonstrations showcase only a handful of the possibilities opened up by the new material concept.
    ‘Life-inspired materials offer a new paradigm for dynamic and adaptive materials which will likely attract researchers for years to come,’ says Professor Olli Ikkala of Aalto University. ‘Carefully designed systems that mimic some of the basic behaviours of living systems will pave the way for truly smart materials and interactive soft robotics.’
    Story Source:
    Materials provided by Aalto University.

  • The whole in a part: Synchronizing chaos through a narrow slice of spectrum

    Engineers at the Tokyo Institute of Technology (Tokyo Tech) have uncovered some intricate effects that arise when chaotic systems, which typically generate broad spectra, are coupled by conveying only a narrow range of frequencies from one to another. The synchronization of chaotic oscillators, such as electronic circuits, continues to attract considerable interest due to the richness of the complex behaviors that can emerge. Recently, applications in distributed sensing have been envisaged; however, wireless couplings are only practical over narrow frequency intervals. The new research shows that, even under such constraints, chaos synchronization can occur and give rise to phenomena that could one day be leveraged to realize useful operations over ensembles of distant nodes.
    The abstract notion that the whole can be found in each part of something has long fascinated thinkers in all walks of philosophy and experimental science: from Immanuel Kant on the essence of time to David Bohm on the notion of order, and from the self-similarity of fractal structures to the defining properties of holograms. It has, however, remained understandably extraneous to electronic engineering, which strives to develop ever more specialized and efficient circuits exchanging signals with highly controlled characteristics. By contrast, across the most diverse complex systems in nature, such as the brain, activity with features that present themselves similarly across different temporal scales, or frequencies, is a nearly ubiquitous observation.
    In a quest to explore new and unorthodox approaches to designing systems capable of solving difficult computation and control problems, physicists and engineers have, for decades, been investigating networks made up of chaotic oscillators. These are systems that can be easily realized using analog electronic, optical, and mechanical components. Their striking property is that, despite being quite simple in their structure, they can generate behaviors that are, at the same time, incredibly intricate and far from random. “Chaos entails an extreme sensitivity to initial conditions, meaning that the activity at each point in time is effectively unpredictable. However, a crucial aspect is that the geometrical arrangements of the trajectories generated by chaotic signals have well-defined properties which, alongside the distribution of frequencies, are rather stable and repeatable. Since these features can change in many ways depending on the voltage input or parameter settings like a resistor value, these circuits are interesting as a basis for realizing new forms of distributed computation, for example, based on sensor readings,” explains Dr. Ludovico Minati, lead author of the study. “In our recent work, we showed that they could be effectively used to realize the kind of physical reservoirs that can simplify neural network training,” adds Mr. Jim Bartels, doctoral student at the Nano Sensing Unit, where the study was conducted [1].
    When two or more chaotic oscillators are coupled together, the most interesting behaviors emerge as they attract and repulse each other’s activities while trying to find an equilibrium, in ways that ordinary periodic oscillators simply cannot access. “Two years ago, work done in our laboratory demonstrated that these behaviors could, at least in principle, be used as a means to gather readings from distant sensors and directly provide statistics such as the average value,” adds Dr. Ludovico Minati [2]. However, the complex nature of chaotic signals implies that they generally feature broad frequency spectra, which are very different from the narrow, neatly delineated bands typically used in modern wireless communication. “As a consequence, it becomes very difficult, if not impossible, to realize couplings over the air. That’s not only because antennas are often highly tuned for specific frequencies, but also and especially because radio regulations do not allow broadcasting except within tightly defined regions,” explains Mr. Boyan Li, master’s student and second author of the study.
    To date, there is a substantial body of literature covering the many effects that can arise in ensembles of chaotic oscillators. For example, small groups of nodes that preferentially synchronize with each other can appear, a little like groups of people coalescing at a party, together with unexpected remote interdependencies reminiscent of the binding problem in the brain. However, surprisingly, almost no studies have considered the possibility (or otherwise) of coupling chaotic oscillators via a mechanism, essentially a filter, that transfers only a narrow range of frequencies. For this reason, the researchers at Tokyo Tech decided to explore the behavior of a pair of chaotic oscillators. They coupled them through a filter that they could easily tune to let through only a narrow range of frequencies, while for the time being retaining a wired connection between them.
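    The following sketch reproduces the idea in software under stated stand-in assumptions: the study's single-transistor Minati-Frasca circuits operate at gigahertz frequencies, whereas here two textbook Rossler oscillators are used, with the master driving the slave through a narrow digital band-pass filter so that only a thin slice of the chaotic spectrum is conveyed. The coupling strength, filter band and correlation measure are arbitrary choices, and the resulting degree of interdependence depends on them, which is precisely the kind of dependence the study maps.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    # Stand-in sketch: two Rossler oscillators (not the Minati-Frasca circuits of the study),
    # with the master driving the slave only through a narrow band-pass filter, i.e. only a
    # thin slice of the broad chaotic spectrum is conveyed. All parameters are arbitrary.
    def rossler_step(state, drive, dt, a=0.2, b=0.2, c=5.7):
        x, y, z = state
        dx = -y - z + drive
        dy = x + a * y
        dz = b + z * (x - c)
        return np.array([x + dx * dt, y + dy * dt, z + dz * dt])

    dt, steps = 0.01, 60000
    master, xs_master = np.array([1.0, 0.0, 0.0]), np.empty(steps)
    for i in range(steps):
        master = rossler_step(master, 0.0, dt)
        xs_master[i] = master[0]

    # Keep only a narrow band around the oscillator's dominant frequency (~0.17 cycles per unit time).
    fs = 1.0 / dt
    sos = butter(2, [0.15, 0.20], btype="band", fs=fs, output="sos")
    drive = sosfiltfilt(sos, xs_master)
    drive /= np.max(np.abs(drive))                     # normalize the conveyed slice

    slave, xs_slave = np.array([-2.0, 3.0, 0.5]), np.empty(steps)
    k = 1.5                                            # coupling strength, arbitrary
    for i in range(steps):
        slave = rossler_step(slave, k * drive[i], dt)
        xs_slave[i] = slave[0]

    # Interdependence after the transient; its strength varies with the filter band and
    # coupling, which is the dependence the study characterizes.
    corr = np.corrcoef(xs_master[20000:], xs_slave[20000:])[0, 1]
    print(f"master-slave x correlation: {corr:.2f}")
    ```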
    “We decided to use a type of chaotic oscillator that is extraordinarily simple, involving only one transistor and a handful of passive components, and known as the Minati-Frasca oscillator. This family of oscillators was introduced about five years ago by researchers from Italy and Poland, and has many remarkable properties, as outlined in a recent book. Recently, we became interested in understanding them and their several potential applications,” explains Dr. Hiroyuki Ito, head of the Nano Sensing Unit where the study was conducted.
    Based on simulations and measurements, the research team was able to demonstrate that it is in fact possible to synchronize these oscillators without transferring the entire broad spectrum, but just a relatively narrow “slice” of it. They like to compare this to a situation where the whole is found, at least partially, in a part. When operating in the lower gigahertz region, close to where first-generation wireless devices work, the oscillators could synchronize while conveying only a few percent of the bandwidth. As expected, the synchronization was not complete, meaning that the oscillators did not completely follow each other’s activity. “This sort of incomplete, or weak, interdependence is precisely the regime where the most interesting effects can appear at the level of a network of nodes. It is quite similar between oscillators and neurons, as one of our previous works showed. These are the mechanisms that represent the next frontier for implementing distributed computation based on emergent behaviors, as many research groups worldwide are pursuing,” adds Dr. Mattia Frasca from the University of Catania in Italy, who co-discovered these circuits with Dr. Minati, later analyzed their behaviors and relationship to other systems in nature together with him, and provided several of the theoretical foundations used in the Tokyo Tech study.
    The researchers observed that while a narrow slice of the spectrum was enough to obtain some detectable synchronization, the center frequency and width of the filter had important effects. Using a multitude of analysis techniques, they could see that over some regions the activity of the slave oscillator tracked the filter setting in an evident way, whereas in others, different and rather more complex effects appeared. “This is a good example of the richness of behaviors available to these circuits, which remains not widely known in the electronic engineering community. It is quite different from the simpler responses of periodic systems, which are either locked to each other or not. There is a long way to go before we are really able to realize effective applications using these phenomena, so it must be said that this is fundamental research at the moment. However, it is very fascinating to think that in the future we may realize some aspects of sensing using these unusual approaches as well,” adds Ms. Zixuan Li, doctoral student and co-author of the study.
    After this interview, the team explained that this type of research will first need to be extended by understanding the phenomena more deeply and how they can be used to generate interesting collective activity. Then, the two main engineering challenges will be to demonstrate couplings over an actual wireless link, while meeting all radio requirements, and to substantially reduce the power consumption, drawing also on results from their previous research. “If successful solutions are found to these challenges, then one of our main goals is to demonstrate usable distributed sensing in applications that are important to society, such as monitoring the condition of land in precision agriculture,” concludes Dr. Hiroyuki Ito. The methodology and results are reported in a recent article published in the journal Chaos, Solitons and Fractals [3], and all of the experimental recordings have been made freely available for others to use in future work.

  • Making the most of quite little: Improving AI training for edge sensor time series

    Engineers at the Tokyo Institute of Technology (Tokyo Tech) have demonstrated a simple computational approach for improving the way artificial intelligence classifiers, such as neural networks, can be trained based on limited amounts of sensor data. The emerging applications of the internet of things often require edge devices that can reliably classify behaviors and situations based on time series. However, training data are difficult and expensive to acquire. The proposed approach promises to substantially increase the quality of classifier training, at almost no extra cost.
    In recent times, the prospect of having huge numbers of Internet of Things (IoT) sensors quietly and diligently monitoring countless aspects of human, natural, and machine activities has gained ground. As our society becomes more and more hungry for data, scientists, engineers, and strategists increasingly hope that the additional insight which we can derive from this pervasive monitoring will improve the quality and efficiency of many production processes, also resulting in improved sustainability.
    The world in which we live is incredibly complex, and this complexity is reflected in a huge multitude of variables that IoT sensors may be designed to monitor. Some are natural, such as the amount of sunlight, moisture, or the movement of an animal, while others are artificial, for example, the number of cars crossing an intersection or the strain applied to a suspended structure like a bridge. What these variables all have in common is that they evolve over time, creating what is known as time series, and that meaningful information is expected to be contained in their relentless changes. In many cases, researchers are interested in classifying a set of predetermined conditions or situations based on these temporal changes, as a way of reducing the amount of data and making it easier to understand. For instance, measuring how frequently a particular condition or situation arises is often taken as the basis for detecting and understanding the origin of malfunctions, pollution increases, and so on.
    Some types of sensors measure variables that in themselves change very slowly over time, such as moisture. In such cases, it is possible to transmit each individual reading over a wireless network to a cloud server, where the analysis of large amounts of aggregated data takes place. However, more and more applications require measuring variables that change rather quickly, such as the accelerations tracking the behavior of an animal or the daily activity of a person. Since many readings per second are often required, it becomes impractical or impossible to transmit the raw data wirelessly, due to limitations of available energy, data charges, and, in remote locations, bandwidth. To circumvent this issue, engineers all over the world have long been looking for clever and efficient ways to pull aspects of data analysis away from the cloud and into the sensor nodes themselves. This is often called edge artificial intelligence, or edge AI. In general terms, the idea is to send wirelessly not the raw recordings, but the results of a classification algorithm searching for particular conditions or situations of interest, resulting in a much more limited amount of data from each node.
    There are, however, many challenges to face. Some are physical and stem from the need to fit a good classifier into what is usually a rather limited amount of space and weight, and often to make it run on a very small amount of power so that long battery life can be achieved. “Good engineering solutions to these requirements are emerging every day, but the real challenge holding back many real-world solutions is actually another. Classification accuracy is often just not good enough, and society requires reliable answers to start trusting a technology,” says Dr. Hiroyuki Ito, head of the Nano Sensing Unit where the study was conducted. “Many exemplary applications of artificial intelligence, such as self-driving cars, have shown that how good or poor an artificial classifier is depends heavily on the quality of the data used to train it. But, more often than not, sensor time series data are really demanding and expensive to acquire in the field. For example, for cattle behavior monitoring, engineers need to spend time at farms, instrumenting individual cows and having experts patiently annotate their behavior based on video footage,” adds co-author Dr. Korkut Kaan Tokgoz, formerly part of the same research unit and now with Sabanci University in Turkey.
    Because training data are so precious, engineers have started looking at new ways of making the most of even the quite limited amount of data available to train edge AI devices. An important trend in this area is using techniques known as “data augmentation,” wherein some manipulations, deemed reasonable based on experience, are applied to the recorded data so as to try to mimic the variability and uncertainty that can be encountered in real applications. “For example, in our previous work, we simulated the unpredictable rotation of a collar containing an acceleration sensor around the neck of a monitored cow, and found that the additional data generated in this way could really improve the performance in behavior classification,” explains Ms. Chao Li, doctoral student and lead author of the study [1]. “However, we also realized that we needed a much more general approach to augmenting sensor time series, one that could in principle be used for any kind of data and not make specific assumptions about the measurement conditions. Moreover, in real-world situations, there are actually two issues, related but distinct. The first is that the overall amount of training data is often limited. The second is that some situations or conditions occur much more frequently than others, and this is unavoidable. For example, cows naturally spend much more time resting or ruminating than drinking. Yet, accurately measuring the less frequent behaviors is quite essential to properly judge the welfare status of an animal. A cow that does not drink will surely succumb, even though the accuracy of classifying drinking may have low impact on common training approaches due to its rarity. This is called the data imbalance problem,” she adds.
    The computational research performed by the researchers at Tokyo Tech and initially targeted at improving cattle behavior monitoring offers a possible solution to these problems, by combining two very different and complementary approaches. The first one is known as sampling, and consists of extracting “snippets” of time series corresponding to the conditions to be classified always starting from different and random instants. How many snippets are extracted is adjusted carefully, ensuring that one always ends up with approximately the same number of snippets across all the behaviors to be classified, regardless of how common or rare they are. This results in a more balanced dataset, which is decidedly preferable as a basis for training any classifier such as a neural network. Because the procedure is based on selecting subsets of actual data, it is safe in terms of avoiding the generation of the artifacts which may stem from artificially synthesizing new snippets to make up for the less represented behaviors. The second one is known as surrogate data, and involves a very robust numerical procedure to generate, from any existing time series, any number of new ones that preserve some key features, but are completely uncorrelated. “This virtuous combination turned out to be very important, because sampling may cause a lot of duplication of the same data, when certain behaviors are too rare compared to others. Surrogate data are never the same and prevent this problem, which can very negatively affect the training process. And a key aspect of this work is that the data augmentation is integrated with the training process, so, different data are always presented to the network throughout its training,” explains Mr. Jim Bartels, co-author and doctoral student at the unit.
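    A minimal sketch of the snippet-sampling idea follows; it is illustrative rather than the group's code, and the dummy accelerometer signal, behavior labels and window length are invented. The point is simply that the same number of random windows is drawn from each class, however much or little raw recording that class has.

    ```python
    import numpy as np

    # Class-balanced snippet sampling (illustrative sketch): draw the same number of random
    # windows per behavior class. A real pipeline would also check that the label stays
    # constant across each window; that check is omitted here for brevity.
    def sample_snippets(signal, labels, target_class, n_snippets, length, rng):
        """Extract n_snippets random windows whose starting sample has the target label."""
        starts = np.flatnonzero(labels[: len(labels) - length] == target_class)
        chosen = rng.choice(starts, size=n_snippets, replace=True)  # rare classes may repeat
        return np.stack([signal[s : s + length] for s in chosen])

    rng = np.random.default_rng(0)
    signal = rng.normal(size=(100000, 3))                                # dummy 3-axis data
    labels = rng.choice([0, 1, 2], p=[0.7, 0.25, 0.05], size=100000)     # imbalanced classes

    balanced = {c: sample_snippets(signal, labels, c, n_snippets=200, length=256, rng=rng)
                for c in (0, 1, 2)}
    print({c: v.shape for c, v in balanced.items()})                     # 200 snippets per class
    ```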
    Surrogate time series are generated by completely scrambling the phases of one or more signals, thus rendering them totally unrecognizable when their changes over time are considered. However, the distribution of values, the autocorrelation and, if there are multiple signals, the cross-correlation are perfectly preserved. “In another previous work, we found that many empirical operations, such as reversing and recombining time series, actually helped to improve training. As these operations change the nonlinear content of the data, we later reasoned that the sort of linear features which are retained during surrogate generation are probably key to performance, at least for the application of cow behavior recognition that I focus on,” further explains Ms. Chao Li [2]. “The method of surrogate time series originates from an entirely different field, namely the study of nonlinear dynamics in complex systems like the brain, for which such time series are used to help distinguish chaotic behavior from noise. By bringing together our different experiences, we quickly realized that they could be helpful for this application, too,” adds Dr. Ludovico Minati, second author of the study and also with the Nano Sensing Unit. “However, considerable caution is needed because no two application scenarios are ever the same, and what holds true for the time series reflecting cow behaviors may not be valid for other sensors monitoring different types of dynamics. In any case, the elegance of the proposed method is that it is quite essential, simple and generic. Therefore, it will be easy for other researchers to quickly try it out on their specific problems,” he adds.
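    Below is a sketch of the basic phase-randomization procedure behind surrogate time series: keep the Fourier amplitudes, and hence the autocorrelation, while scrambling the phases. One simplification versus the description above: preserving the exact distribution of values as well requires the amplitude-adjusted variant, which adds a rank-remapping step omitted here. The test signal is invented.

    ```python
    import numpy as np

    # Phase-randomized surrogate: same amplitude spectrum (hence autocorrelation and variance),
    # completely scrambled phases, so the new series is uncorrelated with the original.
    def phase_surrogate(x, rng):
        spectrum = np.fft.rfft(x)
        phases = rng.uniform(0, 2 * np.pi, size=spectrum.shape)
        phases[0] = 0.0                      # keep the zero-frequency (mean) term real
        if x.size % 2 == 0:
            phases[-1] = 0.0                 # keep the Nyquist term real for even lengths
        return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=x.size)

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 1000)
    x = np.sin(2 * np.pi * 1.5 * t) + 0.3 * rng.normal(size=t.size)   # dummy sensor signal
    s = phase_surrogate(x, rng)
    print("similar std:", np.isclose(x.std(), s.std(), rtol=0.05),
          "correlation:", round(np.corrcoef(x, s)[0, 1], 3))
    ```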
    After this interview, the team explained that this type of research will be applied first of all to improving the classification of cattle behaviors, for which it was initially intended and on which the unit is conducting multidisciplinary research in partnership with other universities and companies. “One of our main goals is to successfully demonstrate high accuracy on a small, inexpensive device that can monitor a cow over its entire lifetime, allowing early detection of disease and therefore really improving not only animal welfare but also the efficiency and sustainability of farming,” concludes Dr. Hiroyuki Ito. The methodology and results are reported in a recent article published in the journal IEEE Sensors [3].

  • Using math to better treat cancer

    Researchers at the University of Waterloo have identified a new method for scheduling radiation therapy that could be as much as 22 percent more effective at killing cancer cells than current standard radiation treatment regimens.
    While many mathematical studies have examined how to optimize the scheduling of radiation treatment for maximum effectiveness against cancer, most of these studies assume “intratumoral homogeneity” — that is, that all of the cancer cells are the same. In recent years, however, scientists have realized that tumours are made up of many different kinds of cells. Most importantly, they include cancer stem cells, which are more resistant to radiation than other kinds of cells.
    “The problem with any calculation involving cancer is that it’s super hard to get exact values because things vary from cancer type to cancer type, patient to patient, even within the tumour,” said Cameron Meaney, a PhD candidate in Applied Mathematics at Waterloo and the lead researcher on the study.
    This new algorithm can generalize the differing radiation resistances of stem cells and non-stem cells, allowing doctors to predict how a tumour will respond to treatment before gathering exact data on an individual’s cancer.
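    For a sense of the kind of calculation involved, here is a sketch using the standard linear-quadratic model of radiation cell kill, with a more radioresistant stem-like population assigned smaller, hypothetical sensitivity parameters. This is a textbook illustration of why heterogeneity changes the arithmetic of a schedule, not the Waterloo group's algorithm or its parameters.

    ```python
    import numpy as np

    # Linear-quadratic (LQ) model: surviving fraction after dose d is exp(-(alpha*d + beta*d^2)).
    # Stem-like cells get smaller alpha/beta values to represent radioresistance (made-up numbers).
    def surviving_fraction(doses, alpha, beta):
        return np.prod([np.exp(-(alpha * d + beta * d**2)) for d in doses])

    schedule_a = [2.0] * 30            # conventional: 30 fractions of 2 Gy (60 Gy total)
    schedule_b = [3.0] * 20            # hypofractionated: 20 fractions of 3 Gy (60 Gy total)

    for name, alpha, beta in [("bulk tumour cells", 0.30, 0.030),
                              ("stem-like cells  ", 0.15, 0.015)]:
        sf_a = surviving_fraction(schedule_a, alpha, beta)
        sf_b = surviving_fraction(schedule_b, alpha, beta)
        print(f"{name}: survival A = {sf_a:.2e}, survival B = {sf_b:.2e}")
    ```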
    The model has limitations, Meaney explained, as tumours contain far more than two kinds of cells. What it does, however, is provide clinical researchers with a better starting point for treatment research.
    “The results of the algorithm are important because they shed light on the idea that heterogeneity in tumours matters for planning treatment,” Meaney said.
    The next step the researchers hope to see is an application of their algorithm to clinical studies: will their suggested therapy schedule outperform existing scheduling practices in a lab trial?
    Story Source:
    Materials provided by the University of Waterloo.

  • AI tailors artificial DNA for future drug development

    With the help of an AI, researchers at Chalmers University of Technology, Sweden, have succeeded in designing synthetic DNA that controls the cells’ protein production. The technology can contribute to the development and production of vaccines, drugs for severe diseases, as well as alternative food proteins much faster and at significantly lower costs than today.
    How our genes are expressed is a process that is fundamental to the functionality of cells in all living organisms. Simply put, the genetic code in DNA is transcribed to the molecule messenger RNA (mRNA), which tells the cell’s factory which protein to produce and in which quantities.
    Researchers have put a lot of effort into trying to control gene expression because it can, among other things, contribute to the development of protein-based drugs. A recent example is the mRNA vaccine against Covid-19, which instructed the body’s cells to produce the same protein found on the surface of the coronavirus. The body’s immune system could then learn to form antibodies against the virus. Likewise, it is possible to teach the body’s immune system to defeat cancer cells or other complex diseases if one understands the genetic code behind the production of specific proteins.
    Most of today’s new drugs are protein-based, but the techniques for producing them are both expensive and slow, because it is difficult to control how the DNA is expressed. Last year, a research group at Chalmers, led by Aleksej Zelezniak, Associate Professor of Systems Biology, took an important step in understanding and controlling how much of a protein is made from a certain DNA sequence.
    “First it was about being able to fully ‘read’ the DNA molecule’s instructions. Now we have succeeded in designing our own DNA that contains the exact instructions to control the quantity of a specific protein,” says Aleksej Zelezniak about the research group’s latest important breakthrough.
    DNA molecules made-to-order
    The principle behind the new method is similar to when an AI generates faces that look like real people. By learning what a large selection of faces looks like, the AI can then create completely new but natural-looking faces. It is then easy to modify a face by, for example, saying that it should look older, or have a different hairstyle. On the other hand, programming a believable face from scratch, without the use of AI, would have been much more difficult and time-consuming. Similarly, the researchers’ AI has been taught the structure and regulatory code of DNA. The AI then designs synthetic DNA, where it is easy to modify its regulatory information in the desired direction of gene expression. Simply put, the AI is told how much of a gene is desired and then ‘prints’ the appropriate DNA sequence.
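    As a small, generic illustration of the kind of input such sequence models work with, and not the Chalmers architecture itself: regulatory DNA is typically one-hot encoded, one row per base, before being fed to a convolutional or generative network. The example promoter string below is made up.

    ```python
    import numpy as np

    # One-hot encode a DNA regulatory sequence (standard input representation for sequence
    # models; illustrative sketch only).
    BASES = "ACGT"

    def one_hot(seq):
        """Encode a DNA string as a (length, 4) matrix of 0s and 1s."""
        idx = np.array([BASES.index(b) for b in seq])
        out = np.zeros((len(seq), 4))
        out[np.arange(len(seq)), idx] = 1.0
        return out

    promoter = "TATAAAAGGCGCATCG"        # hypothetical short regulatory sequence
    x = one_hot(promoter)
    print(x.shape)                       # (16, 4), ready to feed a convolutional model
    ```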
    “DNA is an incredibly long and complex molecule. It is thus experimentally extremely challenging to make changes to it by iteratively reading and changing it, then reading and changing it again. This way it takes years of research to find something that works. Instead, it is much more effective to let an AI learn the principles of navigating DNA. What otherwise takes years is now shortened to weeks or days,” says first author Jan Zrimec, a research associate at the National Institute of Biology in Slovenia and past postdoc in Aleksej Zelezniak’s group.
    The researchers have developed their method in the yeast Saccharomyces cerevisiae, whose cells resemble mammalian cells. The next step is to use human cells. The researchers have hopes that their progress will have an impact on the development of new as well as existing drugs.
    “Protein-based drugs for complex diseases or alternative sustainable food proteins can take many years and can be extremely expensive to develop. Some are so expensive that it is impossible to obtain a return on investment, making them economically nonviable. With our technology, it is possible to develop and manufacture proteins much more efficiently so that they can be marketed,” says Aleksej Zelezniak.
    The authors of the study are Jan Zrimec, Xiaozhi Fu, Azam Sheikh Muhammad, Christos Skrekas, Vykintas Jauniskis, Nora K. Speicher, Christoph S. Börlin, Vilhelm Verendel, Morteza Haghir Chehreghani, Devdatt Dubhashi, Verena Siewers, Florian David, Jens Nielsen and Aleksej Zelezniak.
    The researchers are active at Chalmers University of Technology, Sweden; the National Institute of Biology, Slovenia; Biomatter Designs, Lithuania; the Institute of Biotechnology, Lithuania; the BioInnovation Institute, Denmark; and King’s College London, UK.