More stories

  •

    Breaking through the resolution barrier with quantum-limited precision

    Researchers at Paderborn University have developed a new method of distance measurement for systems such as GPS, which achieves more precise results than ever before. Using quantum physics, the team led by Leibniz Prize winner Professor Christine Silberhorn has successfully overcome the so-called resolution limit, which causes the “noise” we may see in photos, for example. Their findings have just been published in the academic journal PRX Quantum.
    Physicist Dr Benjamin Brecht explains the problem of the resolution limit: “In laser distance measurements a detector registers two light pulses of different intensities with a time difference. The more precise the time measurement is, the more accurately the distance can be determined. Provided the time separation between the pulses is greater than the length of the pulses, this works well.” Problems arise, however, as Brecht explains, if the pulses overlap: “Then you can no longer measure the time difference using conventional methods. This is known as the ‘resolution limit’ and is a well-known effect in photos. Very small structures or textures can no longer be resolved. That’s the same problem — just with position rather than time.”
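    A minimal Python sketch makes the problem concrete (an illustration of the classical limit only, not the Paderborn measurement): two Gaussian pulses of different intensities are summed at a detector, and a simple peak count shows that once the pulses overlap strongly, conventional peak detection no longer registers two separate arrivals.

    ```python
    # Illustrative sketch: two Gaussian light pulses of different intensities
    # arriving at a detector. When their separation drops below the pulse
    # duration, the combined intensity shows a single peak and conventional
    # peak-finding can no longer recover the time difference.
    import numpy as np

    def detector_signal(t, delay, width=1.0, i1=1.0, i2=0.6):
        """Summed intensity of two Gaussian pulses separated by `delay`."""
        pulse1 = i1 * np.exp(-(t ** 2) / (2 * width ** 2))
        pulse2 = i2 * np.exp(-((t - delay) ** 2) / (2 * width ** 2))
        return pulse1 + pulse2

    t = np.linspace(-10, 10, 4001)
    for delay in (4.0, 0.1):  # well separated vs. strongly overlapping
        signal = detector_signal(t, delay)
        # A classical detector resolves two pulses only if it sees two maxima.
        is_peak = (signal[1:-1] > signal[:-2]) & (signal[1:-1] > signal[2:])
        print(f"delay = {delay} pulse widths -> {int(is_peak.sum())} peak(s) detected")
    ```

    The quantum approach described above recovers the time difference, arrival time and intensities even in the overlapping regime; the sketch only shows why classical peak detection fails there.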
    A further challenge, according to Brecht, is to determine the different intensities of the two light pulses simultaneously with their time difference and arrival time. But this is exactly what the researchers have managed to do — “with quantum-limited precision,” adds Brecht. Working with partners from the Czech Republic and Spain, the Paderborn physicists were even able to measure these values when the pulses overlapped by 90 per cent. Brecht says: “This is far beyond the resolution limit. The precision of the measurement is 10,000 times better. Using methods from quantum information theory, we can find new forms of measurement which overcome the limitations of established methods.”
    These findings could in future significantly improve the precision of applications such as LIDAR, a method of optical distance and speed measurement, and GPS. It will take some time, however, before this is ready for the market, Brecht points out.

    Story Source:
    Materials provided by Universität Paderborn. Note: Content may be edited for style and length.

  •

    Deep neural network predicts transcription factors

    A joint research team from KAIST and UCSD has developed a deep neural network named DeepTFactor that predicts transcription factors from protein sequences. DeepTFactor will serve as a useful tool for understanding the regulatory systems of organisms, accelerating the use of deep learning for solving biological problems.
    A transcription factor is a protein that specifically binds to DNA sequences to control transcription initiation. Analyzing transcriptional regulation enables the understanding of how organisms control gene expression in response to genetic or environmental changes. In this regard, identifying an organism’s transcription factors is the first step in analyzing its transcriptional regulatory system.
    Previously, transcription factors have been predicted by analyzing sequence homology with already characterized transcription factors or by data-driven approaches such as machine learning. Conventional machine learning models require a rigorous feature selection process that relies on domain expertise such as calculating the physicochemical properties of molecules or analyzing the homology of biological sequences. Meanwhile, deep learning can inherently learn latent features for the specific task.
    A joint research team composed of Ph.D. candidate Gi Bae Kim and Distinguished Professor Sang Yup Lee of the Department of Chemical and Biomolecular Engineering at KAIST, and Ye Gao and Professor Bernhard O. Palsson of the Department of Bioengineering at UCSD reported a deep learning-based tool for the prediction of transcription factors. Their research paper “DeepTFactor: A deep learning-based tool for the prediction of transcription factors” was published online in PNAS.
    Their article reports the development of DeepTFactor, a deep learning-based tool that predicts whether a given protein sequence is a transcription factor using three parallel convolutional neural networks. The joint research team predicted 332 transcription factors of Escherichia coli K-12 MG1655 using DeepTFactor and validated its performance by experimentally confirming the genome-wide binding sites of three of the predicted transcription factors (YqhC, YiaU, and YahB).
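    The press release describes the architecture only at a high level. The following PyTorch sketch shows the general shape of such a model: a one-hot-encoded protein sequence passes through three parallel convolutional branches, and their pooled features are concatenated for a binary prediction. The filter sizes, channel counts and pooling scheme here are illustrative assumptions, not the published DeepTFactor design.

    ```python
    # Hypothetical sketch of a three-branch parallel CNN for protein sequences.
    import torch
    import torch.nn as nn

    class ParallelCNNClassifier(nn.Module):
        def __init__(self, n_amino_acids=20, channels=32, kernel_sizes=(4, 8, 16)):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Sequential(
                    nn.Conv1d(n_amino_acids, channels, k, padding=k // 2),
                    nn.ReLU(),
                    nn.AdaptiveMaxPool1d(1),  # global max pool over the sequence
                )
                for k in kernel_sizes
            )
            self.classifier = nn.Linear(channels * len(kernel_sizes), 1)

        def forward(self, x):  # x: (batch, 20 amino acids, sequence length)
            features = [branch(x).squeeze(-1) for branch in self.branches]
            return torch.sigmoid(self.classifier(torch.cat(features, dim=1)))

    model = ParallelCNNClassifier()
    dummy_batch = torch.rand(2, 20, 1000)  # stand-in for one-hot sequences
    print(model(dummy_batch))  # probability of being a transcription factor
    ```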
    The joint research team further used a saliency method to understand the reasoning process of DeepTFactor. The researchers confirmed that even though information on the DNA-binding domains of transcription factors was not explicitly provided during the training process, DeepTFactor implicitly learned and used them for prediction. Unlike previous transcription factor prediction tools, which were developed only for the protein sequences of specific organisms, DeepTFactor is expected to be usable for analyzing the transcription systems of all organisms at a high level of performance.
    Distinguished Professor Sang Yup Lee said, “DeepTFactor can be used to discover unknown transcription factors from numerous protein sequences that have not yet been characterized. It is expected that DeepTFactor will serve as an important tool for analyzing the regulatory systems of organisms of interest.”

    Story Source:
    Materials provided by The Korea Advanced Institute of Science and Technology (KAIST). Note: Content may be edited for style and length.

  •

    Supercapacitors challenge batteries

    A team working with Roland Fischer, Professor of Inorganic and Metal-Organic Chemistry at the Technical University of Munich (TUM), has developed a highly efficient supercapacitor. The basis of the energy storage device is a novel, powerful and also sustainable graphene hybrid material with performance data comparable to currently utilized batteries.
    Usually, energy storage is associated with the batteries and accumulators that provide energy for electronic devices. Nowadays, however, so-called supercapacitors are increasingly being installed in laptops, cameras, cellphones and vehicles.
    Unlike batteries, they can store large amounts of energy quickly and release it just as fast. If, for instance, a train brakes when entering a station, supercapacitors store the braking energy and supply it again when the train needs a lot of energy very quickly while starting up.
    However, one problem with supercapacitors to date has been their low energy density. While lithium accumulators reach an energy density of up to 265 watt-hours per kilogram (Wh/kg), supercapacitors thus far have delivered only about a tenth of that.
    Sustainable material provides high performance
    The team working with TUM chemist Roland Fischer has now developed a novel, powerful as well as sustainable graphene hybrid material for supercapacitors. It serves as the positive electrode in the energy storage device. The researchers combined it with a proven negative electrode based on titanium and carbon.

    The new energy storage device not only attains an energy density of up to 73 Wh/kg, which is roughly equivalent to the energy density of a nickel-metal hydride battery, but also performs much better than most other supercapacitors, with a power density of 16 kW/kg. The secret of the new supercapacitor is its combination of different materials; chemists therefore refer to it as an “asymmetrical” supercapacitor.
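    Putting the quoted figures side by side, a quick back-of-envelope comparison using only the numbers from the article:

    ```python
    # Energy densities quoted in the article, in Wh/kg.
    lithium_ion = 265        # upper end for lithium accumulators
    old_supercap = 265 / 10  # "a tenth thereof" for supercapacitors to date
    new_supercap = 73        # the TUM graphene hybrid device

    print(f"vs. previous supercapacitors: {new_supercap / old_supercap:.1f}x")  # ~2.8x
    print(f"vs. lithium-ion upper end: {new_supercap / lithium_ion:.0%}")       # ~28%
    ```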
    Hybrid materials: Nature is the role model
    The researchers are betting on a new strategy to overcome the performance limits of standard materials: they utilize hybrid materials. “Nature is full of highly complex, evolutionarily optimized hybrid materials — bones and teeth are examples. Their mechanical properties, such as hardness and elasticity, were optimized by nature through the combination of various materials,” says Roland Fischer.
    The research team transferred this abstract idea of combining basic materials to supercapacitors. As the basis for the novel positive electrode of the storage unit, they used chemically modified graphene and combined it with a nano-structured metal-organic framework, a so-called MOF.
    Powerful and stable
    Decisive for the performance of graphene hybrids are, on the one hand, a large specific surface area and controllable pore sizes and, on the other, high electrical conductivity. “The high performance capability of the material is based on the combination of the microporous MOFs with the conductive graphene acid,” explains first author Jayaramulu Kolleboyina, a former guest scientist working with Roland Fischer.

    A large surface area is important for good supercapacitors: it allows a correspondingly large number of charge carriers to be stored within the material, which is the basic principle of electrical energy storage.
    Through skillful material design, the researchers achieved the feat of linking the graphene acid with the MOFs. The resulting hybrid MOFs have a very large inner surface area of up to 900 square meters per gram and deliver high performance as positive electrodes in a supercapacitor.
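    The connection between a large inner surface and stored energy follows the textbook capacitor relation E = 1/2 CV^2: more surface means more capacitance, and stored energy grows with capacitance. A short sketch with assumed, illustrative values (the device’s actual specific capacitance and cell voltage are not quoted in the article) shows how an energy density in the reported range can arise:

    ```python
    # Back-of-envelope: gravimetric energy density of a capacitor, E = 1/2 C V^2.
    def energy_density_wh_per_kg(capacitance_f_per_g, voltage_v):
        energy_j_per_g = 0.5 * capacitance_f_per_g * voltage_v ** 2
        return energy_j_per_g * 1000 / 3600  # convert J/g to Wh/kg

    # 200 F/g at 1.6 V are assumed ballpark values, not measured figures.
    print(f"{energy_density_wh_per_kg(200, 1.6):.0f} Wh/kg")  # -> 71 Wh/kg
    ```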
    Long stability
    However, that is not the only advantage of the new material. To achieve a chemically stable hybrid, one needs strong chemical bonds between the components. The bonds are apparently the same as those between amino acids in proteins, according to Fischer: “In fact, we have connected the graphene acid with a MOF-amino acid, which creates a type of peptide bond.”
    The stable connection between the nano-structured components has huge advantages in terms of long term stability: The more stable the bonds, the more charging and discharging cycles are possible without significant performance impairment.
    For comparison: A classic lithium accumulator has a useful life of around 5,000 cycles. The new cell developed by the TUM researchers retains close to 90 percent capacity even after 10,000 cycles.
    International network of experts
    Fischer emphasizes how important unrestricted international cooperation, which the researchers organized themselves, was to the development of the new supercapacitor. The team was built by Jayaramulu Kolleboyina, a guest scientist from India invited by the Alexander von Humboldt Foundation, who is now head of the chemistry department at the newly established Indian Institute of Technology in Jammu.
    “Our team also networked with electrochemistry and battery research experts in Barcelona as well as graphene derivative experts from the Czech Republic,” reports Fischer. “Furthermore, we have integrated partners from the USA and Australia. This wonderful, international cooperation promises much for the future.”
    The research was supported by the Deutsche Forschungsgemeinschaft (DFG) within the cluster of excellence e-conversion, the Alexander von Humboldt Foundation, the Indian Institute of Technology Jammu, the Queensland University of Technology and the Australian Research Council (ARC). Further funding came from the European Regional Development Fund provided by the Ministry of Education, Youth and Sports of the Czech Republic.

  •

    A robotic revolution for urban nature

    Drones, robots and autonomous systems can transform the natural world in and around cities for people and wildlife.
    International research, involving over 170 experts and led by the University of Leeds, assessed the opportunities and challenges that this cutting-edge technology could bring for urban nature and green spaces.
    The researchers highlighted opportunities to improve how we monitor nature, such as identifying emerging pests and ensuring plants are cared for, and helping people engage with and appreciate the natural world around them.
    As robotics, autonomous vehicles and drones become more widely used across cities, pollution and traffic congestion may decrease, making towns and cities more pleasant places to spend time outside.
    But the researchers also warned that advances in robotics and automation could be damaging to the environment.
    For instance, robots and drones might generate new sources of waste and pollution themselves, with potentially substantial negative implications for urban nature. Cities might have to be re-planned to provide enough room for robots and drones to operate, potentially leading to a loss of green space. And they could also increase existing social inequalities, such as unequal access to green space.

    Lead author Dr Martin Dallimer, from the School of Earth and Environment at the University of Leeds, said: “Technology, such as robotics, has the potential to change almost every aspect of our lives. As a society, it is vital that we proactively try to understand any possible side effects and risks of our growing use of robots and automated systems.
    “Although the future impacts on urban green spaces and nature are hard to predict, we need to make sure that the public, policy makers and robotics developers are aware of the potential pros and cons, so we can avoid detrimental consequences and fully realise the benefits.”
    The research, published today in Nature Ecology & Evolution, is authored by a team of 77 academics and practitioners.
    The researchers conducted an online survey of 170 experts from 35 countries, which they say provides a current best guess of what the future could hold.
    Participants gave their views on the potential opportunities and challenges for urban biodiversity and ecosystems, from the growing use of robotics and autonomous systems. These are defined as technologies that can sense, analyse, interact with and manipulate their physical environment. This includes unmanned aerial vehicles (drones), self-driving cars, robots able to repair infrastructure, and wireless sensor networks used for monitoring.

    These technologies have a large range of potential applications, such as autonomous transport, waste collection, infrastructure maintenance and repair, policing and precision agriculture.
    The research was conducted as part of Leeds’ Self Repairing Cities project, which aims to enable robots and autonomous systems to maintain urban infrastructure without causing disruption to citizens.
    First author Dr Mark Goddard conducted the work whilst at the University of Leeds and is now based at Northumbria University. He said: “Spending time in urban green spaces and interacting with nature brings a range of human health and well-being benefits, and robots are likely to transform many of the ways in which we experience and gain benefits from urban nature.
    “Understanding how robotics and autonomous systems will affect our interaction with nature is vital for ensuring that our future cities support wildlife that is accessible to all.”
    This work was funded by the Engineering and Physical Sciences Research Council (EPSRC).

  •

    A high order for a low dimension

    Spintronics refers to a suite of physical systems which may one day replace many electronic systems. To realize this generational leap, material components that confine electrons in one dimension are highly sought after. For the first time, researchers created such a material in the form of a special bismuth-based crystal known as a high-order topological insulator.
    To create spintronic devices, new materials need to be designed that take advantage of quantum behaviors not seen in everyday life. You are probably familiar with conductors and insulators, which permit and restrict the flow of electrons, respectively. Semiconductors are common but less familiar to some; these usually insulate, but conduct under certain circumstances, making them ideal miniature switches.
    For spintronic applications, a new kind of electronic material is required and it’s called a topological insulator. It differs from these other three materials by insulating throughout its bulk, but conducting only along its surface. And what it conducts is not the flow of electrons themselves, but a property of them known as their spin or angular momentum. This spin current, as it’s known, could open up a world of ultrahigh-speed and low-power devices.
    However, not all topological insulators are equal: Two kinds, so-called strong and weak, have already been created, but have some drawbacks. As they conduct spin along their entire surface, the electrons present tend to scatter, which weakens their ability to convey a spin current. But since 2017, a third kind of topological insulator called a higher-order topological insulator has been theorized. Now, for the first time, one has been created by a team at the Institute for Solid State Physics at the University of Tokyo.
    “We created a higher-order topological insulator using the element bismuth,” said Associate Professor Takeshi Kondo. “It has the novel ability to conduct a spin current along only its corner edges, essentially one-dimensional lines. As the spin current is bound to one dimension instead of two, the electrons do not scatter, so the spin current remains stable.”
    To create this three-dimensional crystal, Kondo and his team stacked two-dimensional crystal slices, each one atom thick, in a particular way. For strong or weak topological insulators, the crystal slices in the stack are all oriented the same way, like playing cards face down in a deck. But to create the higher-order topological insulator, the orientation of the slices was alternated: the metaphorical playing cards were placed face up, then face down, repeatedly throughout the stack. This subtle change in arrangement makes a huge difference in the behavior of the resulting three-dimensional crystal.
    The crystal layers in the stack are held together by a quantum mechanical force called the van der Waals force. This is one of the rare kinds of quantum phenomena that you actually do see in daily life, as it is partly responsible for the way powdered materials clump together and flow the way they do. In the crystal, it binds the layers together.
    “It was exciting to see that the topological properties appear and disappear depending only on the way the two-dimensional atomic sheets were stacked,” said Kondo. “Such a degree of freedom in material design will bring new ideas, leading toward applications including fast and efficient spintronic devices, and things we have yet to envisage.”

    Story Source:
    Materials provided by University of Tokyo. Note: Content may be edited for style and length.

  •

    Using artificial intelligence to find new uses for existing medications

    Scientists have developed a machine-learning method that crunches massive amounts of data to help determine which existing medications could improve outcomes in diseases for which they are not prescribed.
    The intent of this work is to speed up drug repurposing, which is not a new concept — think Botox injections, first approved to treat crossed eyes and now a migraine treatment and top cosmetic strategy to reduce the appearance of wrinkles.
    But getting to those new uses typically involves a mix of serendipity and time-consuming and expensive randomized clinical trials to ensure that a drug deemed effective for one disorder will be useful as a treatment for something else.
    The Ohio State University researchers created a framework that combines enormous patient care-related datasets with high-powered computation to arrive at repurposed drug candidates and the estimated effects of those existing medications on a defined set of outcomes.
    Though this study focused on proposed repurposing of drugs to prevent heart failure and stroke in patients with coronary artery disease, the framework is flexible — and could be applied to most diseases.
    “This work shows how artificial intelligence can be used to ‘test’ a drug on a patient, and speed up hypothesis generation and potentially speed up a clinical trial,” said senior author Ping Zhang, assistant professor of computer science and engineering and biomedical informatics at Ohio State. “But we will never replace the physician — drug decisions will always be made by clinicians.”
    The research is published today (Jan. 4, 2021) in Nature Machine Intelligence.

    Drug repurposing is an attractive pursuit because it could lower the risk associated with safety testing of new medications and dramatically reduce the time it takes to get a drug into the marketplace for clinical use.
    Randomized clinical trials are the gold standard for determining a drug’s effectiveness against a disease, but Zhang noted that machine learning can account for hundreds — or thousands — of human differences within a large population that could influence how medicine works in the body. These factors, or confounders, ranging from age, sex and race to disease severity and the presence of other illnesses, function as parameters in the deep learning computer algorithm on which the framework is based.
    That information comes from “real-world evidence,” which is longitudinal observational data about millions of patients captured by electronic medical records or insurance claims and prescription data.
    “Real-world data has so many confounders. This is the reason we have to introduce the deep learning algorithm, which can handle multiple parameters,” said Zhang, who leads the Artificial Intelligence in Medicine Lab and is a core faculty member in the Translational Data Analytics Institute at Ohio State. “If we have hundreds or thousands of confounders, no human being can work with that. So we have to use artificial intelligence to solve the problem.
    “We are the first team to introduce use of the deep learning algorithm to handle the real-world data, control for multiple confounders, and emulate clinical trials,” Zhang said.

    The research team used insurance claims data on nearly 1.2 million heart-disease patients, which provided information on their assigned treatment, disease outcomes and various values for potential confounders. The deep learning algorithm also has the power to take into account the passage of time in each patient’s experience — for every visit, prescription and diagnostic test. The model input for drugs is based on their active ingredients.
    Applying what is called causal inference theory, the researchers categorized, for the purposes of this analysis, the active drug and placebo patient groups that would be found in a clinical trial. The model tracked patients for two years and compared their disease status at that end point to whether or not they took medications, which drugs they took and when they started the regimen.
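    The authors’ model is a deep neural network over longitudinal claims data and is not reproduced here, but the underlying causal-inference idea can be sketched with a classical stand-in: estimate each patient’s probability of receiving treatment from the confounders (the propensity score), then reweight patients so that treated and untreated groups become comparable, as they would be after the randomization of a real trial. Everything below, including the data, is synthetic and purely illustrative.

    ```python
    # Simplified stand-in: inverse-probability-of-treatment weighting (IPTW)
    # on synthetic data, emulating a randomized trial from observational data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000

    # Confounders (age, severity, ...), treatment that depends on them,
    # and an outcome that depends on both.
    confounders = rng.normal(size=(n, 5))
    p_treated = 1 / (1 + np.exp(-confounders @ np.array([0.8, -0.5, 0.3, 0.0, 0.2])))
    treated = rng.binomial(1, p_treated)
    p_outcome = 1 / (1 + np.exp(-(0.7 * confounders[:, 0] - 0.15 * treated)))
    outcome = rng.binomial(1, p_outcome)

    # Propensity score: model treatment assignment from confounders ...
    propensity = LogisticRegression().fit(confounders, treated).predict_proba(confounders)[:, 1]
    # ... then weight each patient by the inverse probability of the
    # treatment they actually received, balancing the two groups.
    w = treated / propensity + (1 - treated) / (1 - propensity)
    effect = (np.average(outcome, weights=w * treated)
              - np.average(outcome, weights=w * (1 - treated)))
    print(f"estimated effect of treatment on outcome rate: {effect:+.3f}")
    ```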
    “With causal inference, we can address the problem of having multiple treatments. We don’t answer whether drug A or drug B works for this disease or not, but figure out which treatment will have the better performance,” Zhang said.
    Their hypothesis: that the model would identify drugs that could lower the risk for heart failure and stroke in coronary artery disease patients.
    The model yielded nine drugs considered likely to provide those therapeutic benefits, three of which are currently in use — meaning the analysis identified six candidates for drug repurposing. Among other findings, the analysis suggested that a diabetes medication, metformin, and escitalopram, used to treat depression and anxiety, could lower risk for heart failure and stroke in the model patient population. As it turns out, both of those drugs are currently being tested for their effectiveness against heart disease.
    Zhang stressed that what the team found in this case study is less important than how they got there.
    “My motivation is applying this, along with other experts, to find drugs for diseases without any current treatment. This is very flexible, and we can adjust case-by-case,” he said. “The general model could be applied to any disease if you can define the disease outcome.”
    The research was supported by the National Center for Advancing Translational Sciences, which funds the Center for Clinical and Translational Science at Ohio State.

  •

    Stretching diamond for next-generation microelectronics

    Diamond is the hardest material in nature. It also has great potential as an excellent electronic material. A research team has demonstrated for the first time the large, uniform tensile elastic straining of microfabricated diamond arrays through a nanomechanical approach. Their findings have shown the potential of strained diamonds as prime candidates for advanced functional devices in microelectronics, photonics, and quantum information technologies.