More stories

  • Novel robust-optimal controllers based on fuzzy descriptor system

    Nonlinear systems have applications in many diverse fields, from robotics to economics. Unlike in linear systems, the output of such systems is not proportional to the input. A classic example is the motion of a pendulum. Due to the inherent nature of nonlinear systems, their mathematical modelling and, consequently, their control is difficult. In this context, the Takagi-Sugeno (T-S) fuzzy system emerges as a highly effective tool. This system leverages fuzzy logic to map input and output values, approximating a nonlinear system as a blend of multiple linear systems, which are easier to model. Fuzzy logic is a form of mathematical logic in which, instead of requiring all statements to be true (1) or false (0), the truth values can be any value between 0 and 1. The T-S fuzzy system has thus served as the foundation for several nonlinear control methods, with the Parallel Distributed Compensation (PDC) method being the most prominent.
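    To make the blending idea concrete, here is a minimal, illustrative Python sketch (not the authors' model) of how a T-S fuzzy system combines two local linear models through membership weights that depend on a measured scheduling variable:

    ```python
    import numpy as np

    # Illustrative only: a nonlinear system is approximated by weighting two
    # local linear models A1, A2 according to fuzzy membership functions of
    # the scheduling variable z (e.g., a pendulum angle).

    A1 = np.array([[0.0, 1.0], [-1.0, -0.5]])   # local linear model near z = 0
    A2 = np.array([[0.0, 1.0], [-2.0, -0.5]])   # local linear model for large |z|

    def memberships(z, z_max=np.pi):
        """Triangular membership weights h1(z), h2(z) that sum to one."""
        h1 = max(0.0, 1.0 - abs(z) / z_max)
        return h1, 1.0 - h1

    def fuzzy_dynamics(x, z):
        """Blended dynamics: x_dot = sum_i h_i(z) * A_i @ x."""
        h1, h2 = memberships(z)
        return (h1 * A1 + h2 * A2) @ x

    x = np.array([0.1, 0.0])
    print(fuzzy_dynamics(x, z=0.3))
    ```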
    Furthermore, scientists have developed an enhanced version of this system, known as the fuzzy descriptor system (FDS). It combines the T-S fuzzy system with the powerful state-space representation, which describes a physical system in terms of state variables, input variables, and output variables. Despite extensive research, optimal control strategies in the context of T-S FDSs remain largely unexplored. Additionally, while robust control methods, which protect against disturbances, have been explored for the T-S FDS using tools like linear matrix inequalities (LMIs), these methods introduce additional complexity and optimization challenges.
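    For orientation, a T-S fuzzy descriptor system is commonly written in the generic form below, where membership functions weight local descriptor models; this is the standard textbook notation, not necessarily the paper's exact statement:

    ```latex
    % Generic T-S fuzzy descriptor form (standard notation, shown for orientation):
    % h_i and v_k are membership functions weighting the local descriptor models.
    \[
    \sum_{k=1}^{r_e} v_k\bigl(z(t)\bigr)\, E_k\, \dot{x}(t)
      = \sum_{i=1}^{r} h_i\bigl(z(t)\bigr) \bigl[ A_i x(t) + B_i u(t) \bigr],
    \qquad
    y(t) = \sum_{i=1}^{r} h_i\bigl(z(t)\bigr)\, C_i x(t).
    \]
    ```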
    To overcome these limitations, a group of researchers, led by Associate Professor Ngoc-Tam Bui from the Innovative Global Program of the College of Engineering at Shibaura Institute of Technology in Japan and including Thi-Van-Anh Nguyen, Quy-Thinh Dao, and Duc-Binh Pham, all from Hanoi University of Science and Technology, developed novel optimal and robust-optimal controllers based on the T-S fuzzy descriptor model. Their study was published in the journal Scientific Reports on March 07, 2024.
    To develop the controllers, the team first utilized the powerful Lyapunov stability theory to establish the stability conditions for the mathematical model of the FDS. However, these stability conditions cannot be used directly. As Dr. Bui explains, “The stability conditions for the FDS model make it difficult to solve using established mathematical tools. To make them more amenable, we systematically transformed them into LMI.” These reformulated conditions formed the basis for three controllers: the stability controller, which uses PDC to manage deviations; the optimal controller, which minimizes a cost function to obtain optimal control; and the robust-optimal controller, which combines the benefits of both.
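    To give a flavor of such conditions, the classical Lyapunov-based LMI test for a plain T-S model under PDC (a simplified form without the descriptor matrices, shown for intuition only) reads:

    ```latex
    % Classical Lyapunov/LMI stability test for a T-S model under PDC
    % (simplified, no descriptor matrices; not the paper's exact conditions):
    % with V(x) = x^T P x and PDC gains K_j, the closed loop is stable if
    \[
    \exists\, P = P^{\mathsf{T}} \succ 0 :\quad
    (A_i - B_i K_j)^{\mathsf{T}} P + P\,(A_i - B_i K_j) \prec 0
    \quad \text{for all } i, j \text{ with } h_i h_j \not\equiv 0 .
    \]
    ```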
    The researchers demonstrated the effectiveness of these controllers on a rotary inverted pendulum, a challenging system comprising an inverted pendulum mounted on a rotating base; the task is to keep the pendulum upright by controlling the rotation of the base. The researchers tested the controllers' performance in distinct simulation scenarios. Simulations revealed that the stability controller effectively stabilized the system when the initial displacement angle was small, whereas larger initial angles produced more oscillations and a longer settling time. The optimal controller addressed the long settling time, reducing it from 13 seconds to 2 seconds, a more than six-fold reduction, and it also reduced the maximum amplitudes during oscillations.
    The robust-optimal controller was tested using two different scenarios. In the first case, the mass of the pendulum bar was changed, while in the second, white noise was introduced into the control input. Compared to the optimal controller, it performed the same in the first scenario. However, the controller was considerably better in the second scenario, showing no oscillations while the optimal controller showed clear oscillations. Notably, the robust-optimal controller showed the lowest error values.
    These results highlight the adaptability and potential of these controllers in practical scenarios. “The research findings hold promising implications for various real-life applications where stable control in dynamic and uncertain environments is paramount. Specifically, autonomous vehicles and industrial robots can achieve enhanced performance and adaptability using the proposed controllers,” remarks Dr. Bui. “Overall, our research opens avenues for advancing control strategies in various domains, ultimately contributing to more capable autonomous systems, making transportation safer, healthcare more effective, and manufacturing more efficient.”

  • Protecting art and passwords with biochemistry

    Security experts fear Q-Day, the day when quantum computers become so powerful that they can crack today’s passwords. Some experts estimate that this day will come within the next ten years. Password checks are based on cryptographic one-way functions, which calculate an output value from an input value. This makes it possible to check the validity of a password without transmitting the password itself: the one-way function converts the password into an output value that can then be used to check its validity in, say, online banking. What makes one-way functions special is that it’s impossible to use their output value to deduce the input value — in other words, the password. At least not with today’s resources. However, future quantum computers could make this kind of inverse calculation easier.
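    As a concrete reminder of how today's digital one-way functions are used, the sketch below verifies a password against a stored salted digest; it is a generic illustration of the conventional approach, not the ETH system:

    ```python
    import hashlib, hmac, os

    def hash_password(password, salt=None):
        """One-way transform: easy to compute, infeasible to invert classically."""
        salt = salt or os.urandom(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest

    def verify_password(password, salt, stored_digest):
        """Recompute the digest and compare; the stored digest never reveals the password."""
        _, digest = hash_password(password, salt)
        return hmac.compare_digest(digest, stored_digest)

    salt, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))  # True
    print(verify_password("wrong guess", salt, digest))                   # False
    ```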
    Researchers at ETH Zurich have now presented a cryptographic one-way function that works differently from today’s and will remain secure in the future. Rather than processing data with arithmetic operations, their function stores it as a sequence of nucleotides — the chemical building blocks of DNA.
    Based on true randomness
    “Our system is based on true randomness. The input and output values are physically linked, and it’s only possible to get from the input value to the output value, not the other way round,” explains Robert Grass, a professor in the Department of Chemistry and Applied Biosciences. “Since it’s a physical system and not a digital one, it can’t be decoded by an algorithm, not even by one that runs on a quantum computer,” adds Anne Lüscher, a doctoral student in Grass’s group. She is the lead author of the paper, which was published in the journal Nature Communications.
    The researchers’ new system can serve as a counterfeit-proof way of certifying the authenticity of valuable objects such as works of art. The technology could also be used to trace raw materials and industrial products.
    How it works
    The new biochemical one-way function is based on a pool of one hundred million different DNA molecules. Each of the molecules contains two segments featuring a random sequence of nucleotides: one segment for the input value and one for the output value. There are several hundred identical copies of each of these DNA molecules in the pool, and the pool can also be divided into several pools; these are identical because they contain the same random DNA molecules. The pools can be located in different places, or they can be built into objects.

    Anyone in possession of this DNA pool holds the security system’s lock. The polymerase chain reaction (PCR) can be used to test a key, or input value, which takes the form of a short sequence of nucleotides. During the PCR, this key searches the pool of hundreds of millions of DNA molecules for the molecule with the matching input value, and the PCR then amplifies the output value located on the same molecule. DNA sequencing is used to make the output value readable.
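    A purely digital analogy of this lock-and-key lookup is sketched below; it only illustrates the input-to-output mapping, since the actual security rests on the physical randomness of the molecular pool, which no digital table can reproduce:

    ```python
    import random

    random.seed(0)  # reproducible toy example only
    BASES = "ACGT"

    def random_seq(length):
        return "".join(random.choice(BASES) for _ in range(length))

    # Toy stand-in for the DNA pool: each "molecule" pairs a random input segment
    # (the key) with a random output segment on the same molecule.
    pool = {random_seq(20): random_seq(20) for _ in range(100_000)}

    def query(pool, key):
        """Analogue of PCR + sequencing: find the molecule matching the key
        and read out the output segment located on the same molecule."""
        return pool.get(key)

    some_key = next(iter(pool))
    print(query(pool, some_key))          # output value for a valid key
    print(query(pool, random_seq(20)))    # almost surely None for a random key
    ```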
    At first glance, the principle seems complicated. “However, producing DNA molecules with built-in randomness is cheap and easy,” Grass says. The production costs for a DNA pool that can be divided up in this way are less than 1 Swiss franc. Using DNA sequencing to read out the output value is more time-consuming and expensive, but many biology laboratories already possess the necessary equipment.
    Securing valuable goods and supply chains
    ETH Zurich has applied for a patent on this new technology. The researchers now want to optimise and refine it to bring it to market. Because the method calls for specialised laboratory infrastructure, the scientists think its most likely applications for password verification are currently highly sensitive goods and access to restricted buildings. The technology won’t become an option for the broader public to check passwords until DNA sequencing in particular becomes easier.
    A little more thought has already gone into the idea of using the technology for the forgery-proof certification of works of art. For instance, if there are ten copies of a picture, the artist can mark them all with the DNA pool — perhaps by mixing the DNA into the paint, spraying it onto the picture or applying it to a specific spot.
    If several owners later wish to have the authenticity of these artworks confirmed, they can get together, agree on a key (i.e. an input value) and carry out the DNA test. All the copies for which the test produces the same output value will have been proven genuine. The new technology could also be used to link crypto-assets such as NFTs, which exist only in the digital world, to an object and thus to the physical world.
    Furthermore, it would support counterfeit-proof tracking along supply chains of industrial goods or raw materials. “The aviation industry, for example, has to be able to provide complete proof that it uses only original components. Our technology can guarantee traceability,” Grass says. In addition, the method could be used to label the authenticity of original medicines or cosmetics.

  • How scientists are accelerating chemistry discoveries with automation

    A new automated workflow developed by scientists at Lawrence Berkeley National Laboratory (Berkeley Lab) has the potential to allow researchers to analyze the products of their reaction experiments in real time, a key capability needed for future automated chemical processes.
    The developed workflow — which applies statistical analysis to process data from nuclear magnetic resonance (NMR) spectroscopy — could help speed the discovery of new pharmaceutical drugs, and accelerate the development of new chemical reactions.
    The Berkeley Lab scientists who developed the groundbreaking technique say that the workflow can quickly identify the molecular structure of products formed by chemical reactions that have never been studied before. They recently reported their findings in the Journal of Chemical Information and Modeling.
    In addition to drug discovery and chemical reaction development, the workflow could also help researchers who are developing new catalysts. Catalysts are substances that facilitate a chemical reaction in the production of useful new products like renewable fuels or biodegradable plastics.
    “What excites people the most about this technique is its potential for real-time reaction analysis, which is an integral part of automated chemistry,” said first author Maxwell C. Venetos, a former researcher in Berkeley Lab’s Materials Sciences Division and former graduate student researcher in materials sciences at UC Berkeley. He completed his doctoral studies last year. “Our workflow really allows you to start pursuing the unknown. You are no longer constrained by things that you already know the answer to.”
    The new workflow can also identify isomers, which are molecules with the same chemical formula but different atomic arrangements. This could greatly accelerate synthetic chemistry processes in pharmaceutical research, for example. “This workflow is the first of its kind where users can generate their own library, and tune it to the quality of that library, without relying on an external database,” Venetos said.
    Advancing new applications
    In the pharmaceutical industry, drug developers currently use machine-learning algorithms to virtually screen hundreds of chemical compounds to identify potential new drug candidates that are more likely to be effective against specific cancers and other diseases. These screening methods comb through online libraries or databases of known compounds (or reaction products) and match them with likely drug “targets” on cells.

    But if a drug researcher is experimenting with molecules so new that their chemical structures don’t yet exist in a database, they must typically spend days in the lab sorting out the mixture’s molecular makeup: first by running the reaction products through a purification machine, and then by using one of the most useful characterization tools in a synthetic chemist’s arsenal, an NMR spectrometer, to identify and measure the molecules in the mixture one at a time.
    “But with our new workflow, you could feasibly do all of that work within a couple of hours,” Venetos said. The time-savings come from the workflow’s ability to rapidly and accurately analyze the NMR spectra of unpurified reaction mixtures that contain multiple compounds, a task that is impossible through conventional NMR spectral analysis methods.
    “I’m very excited about this work as it applies novel data-driven methods to the age-old problem of accelerating synthesis and characterization,” said senior author Kristin Persson, a faculty senior scientist in Berkeley Lab’s Materials Sciences Division and UC Berkeley professor of materials science and engineering who also leads the Materials Project.
    Experimental results
    In addition to being much faster than benchtop purification methods, the new workflow has the potential to be just as accurate. NMR simulation experiments performed using the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab with support from the Materials Project showed that the new workflow can correctly identify compound molecules in reaction mixtures that produce isomers, and also predict the relative concentrations of those compounds.
    To ensure high statistical accuracy, the research team analyzed the NMR spectra with Hamiltonian Monte Carlo (HMC), a sophisticated Markov chain Monte Carlo sampling algorithm. They also performed advanced theoretical calculations based on a method called density-functional theory.
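    The core inference step can be pictured as decomposing the measured spectrum of a crude mixture into a weighted sum of simulated candidate spectra, with the weights giving relative concentrations. The toy sketch below does this with a simple non-negative least-squares fit on made-up peaks; the actual workflow performs the decomposition probabilistically with Hamiltonian Monte Carlo against simulated spectra:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def lorentzian(x, center, width=0.02):
        """Simple line shape used to simulate candidate NMR peaks."""
        return width**2 / ((x - center)**2 + width**2)

    # Chemical-shift axis (ppm) and simulated spectra for three hypothetical candidates
    # (peak positions are made up purely for illustration).
    x = np.linspace(0.0, 10.0, 4000)
    candidates = {
        "isomer_A": lorentzian(x, 1.2) + lorentzian(x, 3.4),
        "isomer_B": lorentzian(x, 1.2) + lorentzian(x, 4.1),
        "byproduct": lorentzian(x, 7.3),
    }

    # Synthetic "measured" spectrum of the crude mixture: 70% A, 30% byproduct, plus noise.
    rng = np.random.default_rng(1)
    measured = 0.7 * candidates["isomer_A"] + 0.3 * candidates["byproduct"]
    measured = measured + rng.normal(0.0, 0.005, size=x.size)

    # Non-negative least squares recovers the component weights (relative concentrations).
    basis = np.column_stack(list(candidates.values()))
    weights, _ = nnls(basis, measured)
    for name, w in zip(candidates, weights / weights.sum()):
        print(f"{name}: {w:.2f}")
    ```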

    Venetos designed the automated workflow as open source so that users can run it on an ordinary desktop computer. That convenience will come in handy for anyone from industry or academia.
    The technique sprouted from conversations between the Persson group and experimental collaborators Masha Elkin and Connor Delaney, former postdoctoral researchers in the John Hartwig group at UC Berkeley. Elkin is now a professor of chemistry at the Massachusetts Institute of Technology, and Delaney a professor of chemistry at the University of Texas at Dallas.
    “In chemistry reaction development, we are constantly spending time to figure out what a reaction made and in what ratio,” said John Hartwig, a senior faculty scientist in Berkeley Lab’s Chemical Sciences Division and UC Berkeley professor of chemistry. “Certain NMR spectrometry methods are precise, but if one is deciphering the contents of a crude reaction mixture containing a bunch of unknown potential products, those methods are far too slow to have as part of a high-throughput experimental or automated workflow. And that’s where this new capability to predict the NMR spectrum could help,” he said.
    Now that they’ve demonstrated the automated workflow’s potential, Persson and team hope to incorporate it into an automated laboratory that analyzes the NMR data of thousands or even millions of new chemical reactions at a time.
    Other authors on the paper include Masha Elkin, Connor Delaney, and John Hartwig at UC Berkeley.
    NERSC is a DOE Office of Science user facility at Berkeley Lab.
    The work was supported by the U.S. Department of Energy’s Office of Science, the U.S. National Science Foundation, and the National Institutes of Health.

  • Scientists release state-of-the-art spike-sorting software Kilosort4

    How do researchers make sense of the mountains of data collected from recording the simultaneous activity of hundreds of neurons? Neuroscientists all over the world rely on Kilosort, software that enables them to tease apart spikes from individual neurons to understand how the brain’s cells and circuits work together to process information.
    Now, researchers at HHMI’s Janelia Research Campus, led by Group Leader Marius Pachitariu, have released Kilosort4, an updated version of the popular spike-sorting software that has improved processing, requires less manual work, and is more accurate and easier to use than previous versions.
    “Over the past eight years, I’ve been refining the algorithm to make it more and more human-independent so people can use it out of the box,” Pachitariu says.
    Kilosort has become indispensable for many neuroscientists, but it may never have been developed if Pachitariu hadn’t decided he wanted to try something new.
    Pachitariu’s PhD work was in computational neuroscience and machine learning, but he yearned to work on more real-world applications, and he almost left academia for industry after he completed his PhD. Instead, Pachitariu opted for a postdoc in the joint lab of Kenneth Harris and Matteo Carandini at University College London where he could do more experimental neuroscience.
    The lab was then part of a consortium testing a probe called Neuropixels, developed at HHMI’s Janelia Research Campus. Pachitariu had no idea how to use the probes, which record activity from hundreds of neurons simultaneously, but he knew how to develop algorithms to keep up with the enormous amount of data his labmates were generating.
    In the first year of his postdoc, Pachitariu developed the initial version of Kilosort. The software, which was 50 times faster than previous approaches, allowed researchers to process the millions of data points generated by the Neuropixels probes. Eight years later, the probes and the software are staples in neuroscience labs worldwide, allowing researchers to identify and classify the spikes of individual neurons.

    In 2017, Pachitariu became a group leader at Janelia, where he and his team seek to understand how thousands of neurons work together to enable animals to think, decide, and act. These days, Pachitariu spends most of his time doing experiments and analyzing data, but he still finds time to work on improving Kilosort. The newly released Kilosort4 is the best in its class, outperforming other algorithms and correctly identifying even hard-to-detect neurons, according to the researchers.
    Pachitariu says it is much easier to squeeze in work on projects like Kilosort at Janelia than at other institutions where he would have to spend time writing grants and teaching.
    “Every now and then, I can put a few months into spearheading a new version and writing new code,” he says.
    Pachitariu says he also enjoys refining Kilosort, which allows him to use the core set of skills he developed during his PhD work.

  • Proof-of-principle demonstration of 3-D magnetic recording

    Research groups from NIMS, Seagate Technology, and Tohoku University have made a breakthrough in the field of hard disk drives (HDD) by demonstrating the feasibility of multi-level recording using a three-dimensional magnetic recording medium to store digital information. The research groups have shown that this technology can be used to increase the storage capacity of HDDs, which could lead to more efficient and cost-effective data storage solutions in the future.
    Data centers are increasingly storing vast amounts of data on hard disk drives (HDDs) that use perpendicular magnetic recording (PMR) to store information at areal densities of around 1.5 Tbit/in². Transitioning to higher areal densities, however, requires a high-anisotropy magnetic recording medium consisting of FePt grains combined with heat-assisted laser writing. This method, known as heat-assisted magnetic recording (HAMR), is capable of sustaining areal recording densities of up to 10 Tbit/in². Densities beyond 10 Tbit/in² are possible based on a new principle demonstrated here: storing three or four recording levels instead of the two (binary) levels used in current HDD technology.
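    The capacity argument behind multi-level recording is simply that the information stored per recording cell grows logarithmically with the number of distinguishable levels:

    ```latex
    % Bits stored per recording cell as a function of the number of distinguishable levels L:
    \[
    b = \log_2 L, \qquad
    \log_2 2 = 1 \ \text{bit (binary)}, \quad
    \log_2 3 \approx 1.58 \ \text{bits}, \quad
    \log_2 4 = 2 \ \text{bits per cell}.
    \]
    ```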
    In this study, we succeeded in arranging the FePt recording layers three-dimensionally by fabricating lattice-matched FePt/Ru/FePt multilayer films, with Ru as a spacer layer. Magnetization measurements show that the two FePt layers have different Curie temperatures, which means the layers can be addressed separately by adjusting the laser power during writing, making three-dimensional recording possible. In addition, we have demonstrated the principle of 3D recording through recording simulations, using a media model that mimics the microstructure and magnetic properties of the fabricated media.
    The three-dimensional magnetic recording method can increase recording capacity by stacking recording layers in three dimensions. This means that more digital information can be stored with fewer HDDs, leading to energy savings for data centers. In the future, we plan to develop processes to reduce the size of FePt grains, to improve the orientation and magnetic anisotropy, and to stack more FePt layers to realize a media structure suitable for practical use as a high-density HDD.

  • Heat waves cause more illness and death in U.S. cities with fewer trees

    In the United States, urban neighborhoods with primarily white residents tend to have more trees than neighborhoods whose residents are predominantly people of color. A new analysis has now linked this inequity to a disparity in heat-related illness and death, researchers report April 8 in npj Urban Sustainability. 

    Neighborhoods whose residents are predominantly people of color have 11 percent less tree cover on average than majority-white neighborhoods, and their air temperatures are about 0.2 degrees Celsius higher during summer, urban ecologist Rob McDonald of The Nature Conservancy and colleagues found. Trees already prevent 442 excess deaths and about 85,000 doctor visits annually in these neighborhoods. In majority-white neighborhoods, trees save around 200 more lives and prevent 30,000 more doctor visits.

  • Innovative sensing platform unlocks ultrahigh sensitivity in conventional sensors

    Optical sensors serve as the backbone of numerous scientific and technological endeavors, from detecting gravitational waves to imaging biological tissues for medical diagnostics. These sensors use light to detect changes in properties of the environment they’re monitoring, including chemical biomarkers and physical properties like temperature. A persistent challenge in optical sensing has been enhancing sensitivity to detect faint signals amid noise.
    New research from Lan Yang, the Edwin H. & Florence G. Skinner Professor in the Preston M. Green Department of Electrical & Systems Engineering in the McKelvey School of Engineering at Washington University in St. Louis, unlocks the power of exceptional points (EPs) for advanced optical sensing. In a study published April 5 in Science Advances, Yang and first author Wenbo Mao, a doctoral student in Yang’s lab, showed that these unique EPs — specific conditions in systems where extraordinary optical phenomena can occur — can be deployed on conventional sensors to achieve a striking sensitivity to environmental perturbations.
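    The general argument for why exceptional points boost sensitivity (a standard result, not this paper's specific derivation) is that a small perturbation of strength ε splits the degenerate eigenvalues of an order-N EP far more strongly than it shifts the response of a conventional sensor:

    ```latex
    % Standard exceptional-point sensitivity scaling: a perturbation of strength \epsilon
    % splits the eigenvalues of an order-N EP as \epsilon^{1/N}, versus the linear
    % response of a conventional sensor, so the EP response dominates for small \epsilon.
    \[
    \Delta\omega_{\mathrm{EP}} \propto \epsilon^{1/N}
    \qquad \text{vs.} \qquad
    \Delta\omega_{\mathrm{conventional}} \propto \epsilon,
    \qquad \epsilon^{1/N} \gg \epsilon \ \text{for } 0 < \epsilon \ll 1,\ N \ge 2 .
    \]
    ```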
    Yang and Mao developed an EP-enhanced sensing platform that overcomes the limitations of previous approaches. Unlike traditional methods that require modifications to the sensor itself, their innovative system features an EP control unit that can plug into physically separated external sensors. This configuration allows EPs to be tuned solely through adjustments to the control unit, allowing for ultrahigh sensitivity without the need for complex modifications to the sensor.
    “We’ve implemented a novel platform that can impart EP enhancement to conventional optical sensors,” Yang said. “This system represents a revolutionary extension of EP-enhanced sensing, significantly expanding its applicability and universality. Any phase-sensitive sensor can acquire improved sensitivity and reduced detection limit by connecting to this configuration. Simply by tuning the control unit, this EP configuration can adapt to various sensing scenarios, such as environmental detection, health monitoring and biomedical imaging.”
    By decoupling the sensing and control functions, Yang and Mao have effectively skirted the stringent physical requirements for operating sensors at EPs that have so far hindered their widespread adoption. This clears the way for EP enhancement to be applied to a wide range of conventional sensors — including ring resonators, thermal and magnetic sensors, and sensors that pick up vibrations or detect perturbations in biomarkers — vastly improving the detection limit of sensors scientists are already using. With the control unit set to an EP, the sensor can operate differently — not at an EP — and still reap the benefits of EP enhancement.
    As a proof-of-concept, Yang’s team tested a system’s detection limit, or ability to detect weak perturbations over system noise. They demonstrated a six-fold reduction in the detection limit of a sensor using their EP-enhanced configuration compared to the conventional sensor.
    “With this work, we’ve shown that we can significantly enhance our ability to detect perturbations that have weak signals,” Mao said. “We’re now focused on bringing that theory to broad applications. I’m specifically focused on medical applications, especially working to enhance magnetic sensing, which could be used to improve MRI technology. Currently, MRIs require a whole room with careful temperature control. Our EP platform could be used to enhance magnetic sensing to enable portable, bedside MRI.”

  • Can language models read the genome? This one decoded mRNA to make better vaccines

    The same class of artificial intelligence that made headlines for writing software and passing the bar exam has learned to read a different kind of text — the genetic code.
    That code contains instructions for all of life’s functions and follows rules not unlike those that govern human languages. Each sequence in a genome adheres to an intricate grammar and syntax, the structures that give rise to meaning. Just as changing a few words can radically alter the impact of a sentence, small variations in a biological sequence can make a huge difference in the forms that sequence encodes.
    Now Princeton University researchers led by machine learning expert Mengdi Wang are using language models to home in on partial genome sequences and optimize those sequences to study biology and improve medicine. That work is already underway.
    In a paper published April 5 in the journal Nature Machine Intelligence, the authors detail a language model that used its powers of semantic representation to design more effective mRNA vaccines, such as those used to protect against COVID-19.
    Found in Translation
    Scientists have a simple way to summarize the flow of genetic information. They call it the central dogma of biology. Information moves from DNA to RNA to proteins. Proteins create the structures and functions of living cells.
    Messenger RNA, or mRNA, converts the information into proteins in that final step, called translation. But mRNA is interesting. Only part of it holds the code for the protein. The rest is not translated but controls vital aspects of the translation process.

    Governing the efficiency of protein production is a key mechanism by which mRNA vaccines work. The researchers focused their language model there, on the untranslated region, to see how they could optimize efficiency and improve vaccines.
    After training the model on a small variety of species, the researchers generated hundreds of new optimized sequences and validated those results through lab experiments. The best sequences outperformed several leading benchmarks for vaccine development, including a 33% increase in the overall efficiency of protein production.
    Increasing protein production efficiency by even a small amount provides a major boost for emerging therapeutics, according to the researchers. Beyond COVID-19, mRNA vaccines promise to protect against many infectious diseases and cancers.
    Wang, a professor of electrical and computer engineering and the principal investigator in this study, said the model’s success also pointed to a more fundamental possibility. Trained on mRNA from a handful of species, it was able to decode nucleotide sequences and reveal something new about gene regulation. Scientists believe gene regulation, one of life’s most basic functions, holds the key to unlocking the origins of disease and disorder. Language models like this one could provide a new way to probe it.
    Wang’s collaborators include researchers from the biotech firm RVAC Medicines as well as the Stanford University School of Medicine.
    The Language of Disease
    The new model differs in degree, not kind, from the large language models that power today’s AI chatbots. Instead of being trained on billions of pages of text from the internet, their model was trained on a few hundred thousand sequences. The model was also trained to incorporate additional knowledge about the production of proteins, including structural and energy-related information.
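    As a rough illustration of the general approach (the sketch below is hypothetical and far smaller than the model described in the paper), an mRNA untranslated region can be tokenized into overlapping k-mers and fed to a small encoder that predicts a translation-efficiency score:

    ```python
    import torch
    import torch.nn as nn

    def kmer_tokens(seq, k=3):
        """Tokenize a nucleotide sequence into overlapping k-mers (the model's 'words')."""
        return [seq[i:i + k] for i in range(len(seq) - k + 1)]

    # Hypothetical vocabulary of all 3-mers over A, C, G, U.
    bases = "ACGU"
    vocab = {a + b + c: i for i, (a, b, c) in enumerate(
        (a, b, c) for a in bases for b in bases for c in bases)}

    class UTREfficiencyModel(nn.Module):
        """Tiny encoder mapping a 5' UTR sequence to a translation-efficiency score."""
        def __init__(self, vocab_size, d_model=32, nhead=4, num_layers=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=64,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers)
            self.head = nn.Linear(d_model, 1)   # regression head: predicted efficiency

        def forward(self, token_ids):
            h = self.encoder(self.embed(token_ids))
            return self.head(h.mean(dim=1)).squeeze(-1)

    utr = "GGGAAAUAAGAGAGAAAAGAAGAGUAAGAAG"   # example 5' UTR, chosen for illustration
    ids = torch.tensor([[vocab[t] for t in kmer_tokens(utr)]])
    model = UTREfficiencyModel(len(vocab))
    print(model(ids))   # untrained score; training would use measured efficiencies as labels
    ```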

    The research team used the trained model to create a library of 211 new sequences. Each was optimized for a desired function, primarily an increase in the efficiency of translation. The proteins those sequences help produce, like the spike protein targeted by COVID-19 vaccines, drive the immune response to infectious disease.
    Previous studies have created language models to decode various biological sequences, including proteins and DNA, but this was the first language model to focus on the untranslated region of mRNA. In addition to a boost in overall efficiency, it was also able to predict how well a sequence would perform at a variety of related tasks.
    Wang said the real challenge in creating this language model was in understanding the full context of the available data. Training a model requires not only the raw data with all its features but also the downstream consequences of those features. If a program is designed to filter spam from email, each email it trains on would be labeled “spam” or “not spam.” Along the way, the model develops semantic representations that allow it to determine what sequences of words indicate a “spam” label. Therein lies the meaning.
    Wang said looking at one narrow dataset and developing a model around it was not enough to be useful for life scientists. She needed to do something new. Because this model was working at the leading edge of biological understanding, the data she found was all over the place.
    “Part of my dataset comes from a study where there are measures for efficiency,” Wang said. “Another part of my dataset comes from another study [that] measured expression levels. We also collected unannotated data from multiple resources.” Organizing those parts into one coherent and robust whole — a multifaceted dataset that she could use to train a sophisticated language model — was a massive challenge.
    “Training a model is not only about putting together all those sequences, but also putting together sequences with the labels that have been collected so far. This had never been done before.”