More stories

  • Machine learning aids in simulating dynamics of interacting atoms

    A revolutionary machine-learning (ML) approach to simulating the motions of atoms in materials such as aluminum is described this week in the journal Nature Communications. This automated approach to “interatomic potential development” could transform the field of computational materials discovery.
    “This approach promises to be an important building block for the study of materials damage and aging from first principles,” said project lead Justin Smith of Los Alamos National Laboratory. “Simulating the dynamics of interacting atoms is a cornerstone of understanding and developing new materials. Machine learning methods are providing computational scientists new tools to accurately and efficiently conduct these atomistic simulations. Machine learning models like this are designed to emulate the results of highly accurate quantum simulations, at a small fraction of the computational cost.”
    To maximize the general accuracy of these machine learning models, he said, it is essential to design a highly diverse dataset from which to train the model. A challenge is that it is not obvious, a priori, what training data will be most needed by the ML model. The team’s recent work presents an automated “active learning” methodology for iteratively building a training dataset.
    At each iteration, the method uses the current-best machine learning model to perform atomistic simulations; when new physical situations are encountered that are beyond the ML model’s knowledge, new reference data is collected via expensive quantum simulations, and the ML model is retrained. Through this process, the active learning procedure collects data regarding many different types of atomic configurations, including a variety of crystal structures, and a variety of defect patterns appearing within crystals.
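
    In outline, the loop works roughly as sketched below. This is a minimal illustration of the active-learning idea described above, not the authors’ code: the training, sampling, uncertainty, and quantum-labeling steps are passed in as placeholder callables, and the disagreement threshold is an arbitrary assumption.

    ```python
    # Minimal sketch of an active-learning loop for an ML interatomic
    # potential. The four callables are hypothetical placeholders, not the
    # authors' API; a real pipeline would wire in an ML-potential trainer,
    # an MD engine, an uncertainty estimator (e.g. ensemble disagreement),
    # and a quantum (DFT) code for reference labels.

    def active_learning_loop(seed_data, train, sample_md, uncertainty,
                             label_qm, n_iterations=10, threshold=0.05):
        """Iteratively grow the training set where the model is least certain."""
        dataset = list(seed_data)
        model = train(dataset)
        for _ in range(n_iterations):
            # Drive atomistic simulations with the current-best model.
            configs = sample_md(model)
            # Keep only configurations the model is unsure about, e.g. where
            # independently trained models disagree beyond the threshold.
            novel = [c for c in configs if uncertainty(model, c) > threshold]
            if not novel:
                break  # the model already covers the sampled configurations
            # Label the novel configurations with expensive quantum reference
            # calculations, then retrain on the enlarged dataset.
            dataset.extend(label_qm(c) for c in novel)
            model = train(dataset)
        return model
    ```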

    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory. Note: Content may be edited for style and length.

  • Measuring hemoglobin levels with AI microscope, microfluidic chips

    One of the most commonly performed medical diagnostic tests to ascertain the health of patients is the complete blood count, which typically includes an estimate of the hemoglobin concentration. The hemoglobin level in the blood is an important biochemical parameter that can indicate a host of medical conditions, including anemia, polycythemia, and pulmonary fibrosis.
    In AIP Advances, by AIP Publishing, researchers from SigTuple Technologies and the Indian Institute of Science describe a new AI-powered imaging-based tool to estimate hemoglobin levels. The setup was developed in conjunction with a microfluidic chip and an AI-powered automated microscope that was designed for deriving the total as well as differential counts of blood cells.
    Often, medical diagnostics equipment capable of multiparameter assessment, such as hematology analyzers, has dedicated subcompartments with separate optical detection systems. This leads to increased sample volume as well as an increase in cost of the entire equipment.
    “In this study, we demonstrate that the applicability of a system originally designed for the purposes of imaging can be extended towards the performance of biochemical tests without any additional modifications to the hardware unit, thereby restraining the cost and laboratory footprint of the original device,” said author Srinivasan Kandaswamy.
    The hemoglobin testing solution is possible thanks to the design behind the microfluidic chip, a customized biochemical reagent, optimized imaging, and an image analysis procedure specifically tailored to enable the good clinical performance of the medical diagnostic test.
    The data obtained from the microfluidic chip in combination with the automated microscope was comparable with the predictions of hematology analyzers (Pearson correlation of 0.99). A validation study showed that the method meets regulatory standards, supporting its acceptance by doctors and hospitals.
    The automated microscope, which normally uses a combination of red, green, and blue LEDs, used only the green LED in hemoglobin estimation mode, because the optimized reagent-hemoglobin (SDS-HB) complex absorbs light at green wavelengths.
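
    As a rough illustration of how such a colorimetric readout works: hemoglobin concentration can be estimated from green-channel image intensity via the Beer-Lambert law. The sketch below is not SigTuple’s algorithm, and the calibration constants are invented placeholders; a real device would be calibrated against reference hemoglobin measurements.

    ```python
    import numpy as np

    def hemoglobin_estimate(i_sample, i_blank, epsilon_eff=7.0, path_cm=0.01):
        """Estimate hemoglobin concentration from mean green-channel intensities.

        i_sample: mean green intensity imaged through the reagent-treated blood
        i_blank:  mean green intensity through a blank reference chamber
        epsilon_eff, path_cm: placeholder calibration constants (effective
        absorptivity and optical path length), fitted in practice against
        reference samples.
        """
        absorbance = np.log10(i_blank / i_sample)    # Beer-Lambert: A = log10(I0/I)
        return absorbance / (epsilon_eff * path_cm)  # c = A / (epsilon * l)

    # A darker green channel (stronger absorption) yields a higher estimate.
    print(hemoglobin_estimate(i_sample=120.0, i_blank=240.0))
    ```
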
    Chip-based microfluidic diagnostic platforms are on the verge of revolutionizing the field of health care, and colorimetric biochemical assays are among the most widely performed diagnostic tests.
    “This paper lays the foundation and will also serve as a guide to future attempts to translate conventional biochemical assays onto a chip, from the point of view of both chip design and reagent development,” said Kandaswamy.
    Besides measuring hemoglobin in the blood, a similar setup with minor modifications could be used to measure protein content, cholesterol, and glycated hemoglobin.

    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Environmental policies not always bad for business, study finds

    Critics claim environmental regulations hurt productivity and profits, but the reality is more nuanced, according to an analysis of environmental policies in China by a pair of Cornell economists.
    The analysis found that, contrary to conventional wisdom, market-based or incentive-based policies may actually benefit regulated firms in the traditional and “green” energy sectors, by spurring innovation and improvements in production processes. Policies that mandate environmental standards and technologies, on the other hand, may broadly harm output and profits.
    “The conventional wisdom is not entirely accurate,” said Shuyang Si, a doctoral student in applied economics and management. “The type of policy matters, and policy effects vary by firm, industry and sector.”
    Si is the lead author of “The Effects of Environmental Policies in China on GDP, Output, and Profits,” published in the current issue of the journal Energy Economics. C.-Y. Cynthia Lin Lawell, associate professor in the Charles H. Dyson School of Applied Economics and Management and the Robert Dyson Sesquicentennial Chair in Environmental, Energy and Resource Economics, is a co-author.
    Si mined Chinese provincial government websites and other online sources to compile a comprehensive data set of nearly 2,700 environmental laws and regulations in effect in at least one of 30 provinces between 2002 and 2013. This period came just before China declared a “war on pollution,” instituting major regulatory changes that shifted its longtime prioritization of economic growth over environmental concerns.
    “We really looked deep into the policies and carefully examined their features and provisions,” Si said.

    The researchers categorized each policy as one of four types: “command and control,” such as mandates to use a portion of electricity from renewable sources; financial incentives, including taxes, subsidies and loans; monetary awards for cutting pollution or improving efficiency and technology; and nonmonetary awards, such as public recognition.
    They assessed how each type of policy impacted China’s gross domestic product, industrial output in traditional energy industries and the profits of new energy sector companies, using publicly available data on economic indicators and publicly traded companies.
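
    The paper’s econometric specification is not reproduced here, but the general shape of such an analysis is a panel regression of an economic outcome on indicators for the policy types in force. The sketch below uses invented toy data and hypothetical column names purely for illustration.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy panel: outcome and policy-type indicators by province (invented data).
    df = pd.DataFrame({
        "log_output":      [4.2, 4.3, 4.1, 4.0, 4.6, 4.4, 4.8, 4.5],
        "command_control": [1, 0, 0, 0, 1, 0, 0, 0],
        "fin_incentive":   [0, 1, 0, 0, 0, 1, 0, 0],
        "monetary_award":  [0, 0, 1, 0, 0, 0, 1, 0],
        "province":        ["A", "A", "A", "A", "B", "B", "B", "B"],
    })

    # Province fixed effects absorb time-invariant regional differences; the
    # dummy coefficients estimate each policy type's association with output.
    model = smf.ols(
        "log_output ~ command_control + fin_incentive + monetary_award"
        " + C(province)",
        data=df,
    ).fit()
    print(model.params)
    ```
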
    Command and control policies and nonmonetary award policies had significant negative effects on GDP, output and profits, Si and Lin Lawell concluded. But a financial incentive — loans for increasing renewable energy consumption — improved industrial output in the petroleum and nuclear energy industries, and monetary awards for reducing pollution boosted new energy sector profits.
    “Environmental policies do not necessarily lead to a decrease in output or profits,” the researchers wrote.
    That finding, they said, is consistent with the “Porter hypothesis” — Harvard Business School Professor Michael Porter’s 1991 proposal that environmental policies could stimulate growth and development, by spurring technology and business innovation to reduce both pollution and costs.
    While certain policies benefited regulated firms and industries, the study found that those benefits came at a cost to other sectors and to the overall economy. Nevertheless, Si and Lin Lawell said, these costs should be weighed against the benefits of these policies to the environment and society, and to the regulated firms and industries.
    Economists generally prefer market-based or incentive-based environmental policies, Lin Lawell said, with a carbon tax or tradeable permit system representing the gold standard. The new study led by Si, she said, provides more support for those types of policies.
    “This work will make people aware, including firms that may be opposed to environmental regulation, that it’s not necessarily the case that these regulations will be harmful to their profits and productivity,” Lin Lawell said. “In fact, if policies promoting environmental protection are designed carefully, there are some that these firms might actually like.”
    Additional co-authors contributing to the study were Mingjie Lyu of Shanghai Lixin University of Accounting and Finance, and Song Chen of Tongji University. The authors acknowledged financial support from the Shanghai Science and Technology Development Fund and an ExxonMobil ITS-Davis Corporate Affiliate Fellowship.

  • Scientists use machine-learning approach to track disease-carrying mosquitoes

    You might not like mosquitoes, but they like you, says Utah State University biologist Norah Saarman. And where you lead, they will follow.
    In addition to annoying bites and buzzing, some mosquitoes carry harmful diseases. Aedes aegypti, the so-called Yellow Fever mosquito and the subject of a recent study by Saarman and colleagues, is the primary vector for transmission of viruses causing dengue fever, chikungunya and Zika, as well as yellow fever, in humans.
    “Aedes aegypti is an invasive species to North America that’s become widespread in the eastern United States,” says Saarman, assistant professor in USU’s Department of Biology and the USU Ecology Center, whose research focuses on evolutionary ecology and population genomics. “We’re examining the genetic connectivity of this species as it adapts to new landscapes and expands its range.”
    With Evlyn Pless of the University of California, Davis and Jeffrey Powell, Adalgisa Caccone and Giuseppe Amatulli of Yale University, Saarman published findings from a machine-learning approach to mapping landscape connectivity in the February 22, 2021 issue of the Proceedings of the National Academy of Sciences (PNAS).
    The team’s research was supported by the National Institutes of Health.
    “We’re excited about this approach, which uses a random forest algorithm that allows us to overcome some of the constraints of classical spatial models,” Saarman says. “Our approach combines the advantages of a machine-learning framework and an iterative optimization process that integrates genetic and environmental data.”
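
    To make the idea concrete, here is a minimal sketch (not the authors’ code) of the random-forest step: environmental differences between pairs of sampling sites are used to predict genetic distance, and feature importances hint at which landscape variables shape connectivity. The predictors and data below are invented for illustration.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n_pairs = 200  # hypothetical pairs of mosquito sampling sites

    # Invented per-pair predictors: geographic distance, difference in human
    # population density, temperature difference, and a road-connectivity score.
    X = rng.random((n_pairs, 4))
    # Synthetic genetic distance: grows with geography, shrinks where roads
    # connect sites (transport networks move mosquitoes), plus noise.
    y = 0.6 * X[:, 0] - 0.3 * X[:, 3] + 0.1 * rng.standard_normal(n_pairs)

    forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
    importances = dict(zip(
        ["geo_distance", "human_density", "temperature", "roads"],
        forest.feature_importances_.round(2),
    ))
    print(importances)  # which landscape variables best explain connectivity
    ```
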
    In its native Africa, Aedes aegypti was a forest dweller, drawing sustenance from landscapes uninhabited or scarcely populated by humans. The mosquito has since specialized to feed on humans, and it thrives in human-impacted areas, favoring trash piles, littered highways and well-irrigated gardens.

    “Using our machine-learning model and NASA-supplied satellite imagery, we can combine this spatial data with the genetic data we have already collected to drill down into very specific movements of these mosquitoes,” Saarman says. “For example, our data reveal their attraction to human transportation networks, indicating that activities such as the plant nursery trade are inadvertently transporting these insects to new areas.”
    Public officials and land managers once relied on pesticides, including DDT, to keep the pesky mosquitoes at bay.
    “As we now know, those pesticides caused environmental harm, including harm to humans,” she says. “At the same time, mosquitoes are evolving resistance to the pesticides that we have found to be safe for the environment. This creates a challenge that can only be solved by more information on where mosquitoes live and how they get around.”
    Saarman adds that the rugged survivors are not only adapting to different food sources and resisting pesticides, but also adapting to varied temperatures, which allows them to expand into colder ranges.
    Current methods to curb disease-carrying mosquitoes focus on biotechnological solutions, including cutting-edge genetic modification.
    “We hope the tools we’re developing can help managers identify effective methods of keeping mosquito populations small enough to avoid disease transmission,” Saarman says. “While native species play an important role in the food chain, invasive species, such as Aedes aegypti, pose a significant public health risk that requires our vigilant attention.”

    Story Source:
    Materials provided by Utah State University. Original written by Mary-Ann Muffoletto. Note: Content may be edited for style and length.

  • 'Beautiful marriage' of quantum enemies

    Cornell University scientists have identified a new contender when it comes to quantum materials for computing and low-temperature electronics.
    Using nitride-based materials, the researchers created a material structure that simultaneously exhibits superconductivity — in which electrical resistance vanishes completely — and the quantum Hall effect, which produces resistance with extreme precision when a magnetic field is applied.
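
    For reference, the “extreme precision” refers to the textbook quantization of the Hall resistance, a standard relation rather than a result of this study:

    ```latex
    % Hall resistance quantized at integer filling factors \nu, set only by
    % fundamental constants (h/e^2 \approx 25.8\,\mathrm{k\Omega}):
    R_{xy} = \frac{h}{\nu e^{2}}, \qquad \nu = 1, 2, 3, \dots
    ```
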
    “This is a beautiful marriage of the two things we know, at the microscale, that give electrons the most startling quantum properties,” said Debdeep Jena, the David E. Burr Professor of Engineering in the School of Electrical and Computer Engineering and Department of Materials Science and Engineering. Jena led the research, published Feb. 19 in Science Advances, with doctoral student Phillip Dang and research associate Guru Khalsa, the paper’s senior authors.
    The two physical properties are rarely seen simultaneously because magnetism is like kryptonite for superconducting materials, according to Jena.
    “Magnetic fields destroy superconductivity, but the quantum Hall effect only shows up in semiconductors at large magnetic fields, so you’re having to play with these two extremes,” Jena said. “Researchers in the past few years have been trying to identify materials that show both properties, with mixed success.”
    The research is the latest validation from the Jena-Xing Lab that nitride materials may have more to offer science than previously thought. Nitrides have traditionally been used for manufacturing LEDs and transistors for products like smartphones and home lighting, giving them a reputation as an industrial class of materials that has been overlooked for quantum computation and cryogenic electronics.

    “The material itself is not as perfect as silicon, meaning it has a lot more defects,” said co-author Huili Grace Xing, the William L. Quackenbush Professor of Electrical and Computer Engineering and of Materials Science and Engineering. “But because of its robustness, this material has thrown pleasant surprises to the research community more than once despite its extremely large irregularities in structure. There may be a path forward for us to truly integrate different modalities of quantum computing — computation, memory, communication.”
    Such integration could help to condense the size of quantum computers and other next-generation electronics, just as classical computers have shrunk from warehouse to pocket size.
    “We’re wondering what this sort of material platform can enable because we see that it’s checking off a lot of boxes,” said Jena, who added that new physical phenomena and technological applications could emerge with further research. “It has a superconductor, a semiconductor, a filter material — it has all kinds of other components, but we haven’t put them all together. We’ve just discovered they can coexist.”
    For this research, the Cornell team began by engineering epitaxial nitride heterostructures — atomically thin layers of gallium nitride and niobium nitride — and searching for a window of magnetic field and temperature in which the layers would retain their respective quantum Hall and superconducting properties.
    They eventually discovered a small window in which the properties were observed simultaneously, thanks to advances in the quality of the materials and structures produced in close collaboration with colleagues at the Naval Research Laboratory.
    “The quality of the niobium-nitride superconductor was improved enough that it can survive higher magnetic fields, and simultaneously we had to improve the quality of the gallium-nitride semiconductor enough that it could exhibit the quantum Hall effect at lower magnetic fields,” Dang said. “And that’s what will really allow for potential new physics to be seen at low temperature.”
    Potential applications for the material structure include more efficient electronics, such as data centers cooled to extremely low temperatures to eliminate heat waste. And the structure is the first to lay the groundwork for the use of nitride semiconductors and superconductors in topological quantum computing, in which the movement of electrons must be resilient to the material defects typically seen in nitrides.
    “What we’ve shown is that the ingredients you need to make this topological phase can be in the same structure,” Khalsa said, “and I think the flexibility of the nitrides really opens up new possibilities and ways to explore topological states of matter.”
    The research was funded by the Office of Naval Research and the National Science Foundation.

    Story Source:
    Materials provided by Cornell University. Original written by Syl Kacapyr. Note: Content may be edited for style and length.

  • Lack of symmetry in qubits can't fix errors in quantum computing, might explain matter/antimatter imbalance

    A team of quantum theorists seeking to cure a basic problem with quantum annealing computers — they have to run at a relatively slow pace to operate properly — found something intriguing instead. While probing how quantum annealers perform when operated faster than desired, the team unexpectedly discovered a new effect that may account for the imbalanced distribution of matter and antimatter in the universe and a novel approach to separating isotopes.
    “Although our discovery did not cure the annealing time restriction, it brought a class of new physics problems that can now be studied with quantum annealers without requiring they be too slow,” said Nikolai Sinitsyn, a theoretical physicist at Los Alamos National Laboratory. Sinitsyn is an author of the paper published Feb. 19 in Physical Review Letters, with coauthors Bin Yan and Wojciech Zurek, both also of Los Alamos, and Vladimir Chernyak of Wayne State University.
    Significantly, this finding hints at how at least two famous scientific problems may be resolved in the future. The first one is the apparent asymmetry between matter and antimatter in the universe.
    “We believe that small modifications to recent experiments with quantum annealing of interacting qubits made of ultracold atoms across phase transitions will be sufficient to demonstrate our effect,” Sinitsyn said.
    Explaining the Matter/Antimatter Discrepancy
    Both matter and antimatter resulted from the energy excitations that were produced at the birth of the universe. The symmetry between how matter and antimatter interact was broken, but only very weakly. It is still not completely clear how this subtle difference could lead to the large observed dominance of matter over antimatter at the cosmological scale.

    The newly discovered effect demonstrates that such an asymmetry is physically possible. It happens when a large quantum system passes through a phase transition, that is, a very sharp rearrangement of its quantum state. In such circumstances, strong but symmetric interactions roughly compensate each other. Then subtle, lingering differences can play the decisive role.
    Making Quantum Annealers Slow Enough
    Quantum annealing computers are built to solve complex optimization problems by associating variables with quantum states, or qubits. Unlike a classical computer’s binary bits, which can only take a value of 0 or 1, qubits can exist in a quantum superposition of both values at once. That’s where all quantum computers derive their awesome, if still largely unexploited, powers.
    In a quantum annealing computer, the qubits are initially prepared in a simple lowest energy state by applying a strong external magnetic field. This field is then slowly switched off, while the interactions between the qubits are slowly switched on.
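
    Schematically, this anneal is often written as an interpolating Hamiltonian (a generic textbook form, not tied to any particular device), with s = t/T running from 0 to 1 over the anneal time T:

    ```latex
    % s = 0: strong transverse (driver) field prepares a simple ground state.
    % s = 1: only the problem's qubit-qubit interactions remain.
    H(s) = -(1 - s) \sum_i \Gamma_i \,\sigma_i^{x}
           \;+\; s \Big( \sum_{i<j} J_{ij}\, \sigma_i^{z} \sigma_j^{z}
           + \sum_i h_i\, \sigma_i^{z} \Big)
    ```
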
    “Ideally an annealer runs slowly enough to operate with minimal errors, but because of decoherence, one has to run the annealer faster,” Yan explained. The team studied the effects that emerge when annealers are operated at higher speed, which limits them to a finite operation time.

    “According to the adiabatic theorem in quantum mechanics, if all changes are very slow, so-called adiabatically slow, then the qubits must always remain in their lowest energy state,” Sinitsyn said. “Hence, when we finally measure them, we find the desired configuration of 0s and 1s that minimizes the function of interest, which would be impossible to get with a modern classical computer.”
    Hobbled by Decoherence
    However, currently available quantum annealers, like all quantum computers so far, are hobbled by their qubits’ interactions with the surrounding environment, which causes decoherence. Those interactions restrict the purely quantum behavior of qubits to about one millionth of a second. In that timeframe, computations have to be fast — nonadiabatic — and unwanted energy excitations alter the quantum state, introducing inevitable computational mistakes.
    The Kibble-Zurek theory, co-developed by Wojciech Zurek, predicts that most errors occur when the qubits encounter a phase transition, that is, a very sharp rearrangement of their collective quantum state.
    For this paper, the team studied a known solvable model where identical qubits interact only with their neighbors along a chain; the model verifies the Kibble-Zurek theory analytically. In the theorists’ quest to cure limited operation time in quantum annealing computers, they increased the complexity of that model by assuming that the qubits could be partitioned into two groups with identical interactions within each group but slightly different interactions for qubits from the different groups.
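
    For context, the standard Kibble-Zurek prediction (a textbook result, quoted here for orientation rather than from the paper) relates the density of excitations n to the anneal duration τ_Q through the critical exponents:

    ```latex
    % General Kibble-Zurek scaling in d dimensions, with correlation-length
    % exponent \nu and dynamical exponent z:
    n \;\propto\; \tau_Q^{-\, d\nu / (1 + \nu z)}
    % For the 1D transverse-field Ising chain (d = 1, \nu = 1, z = 1):
    n \;\propto\; \tau_Q^{-1/2}
    ```
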
    In such a mixture, they discovered an unusual effect: One group still produced a large amount of energy excitations during the passage through a phase transition, but the other group remained in the energy minimum as if the system did not experience a phase transition at all.
    “The model we used is highly symmetric in order to be solvable, and we found a way to extend the model, breaking this symmetry and still solving it,” Sinitsyn explained. “Then we found that the Kibble-Zurek theory survived but with a twist — half of the qubits did not dissipate energy and behaved ‘nicely.’ In other words, they maintained their ground states.”
    Unfortunately, the other half of the qubits did produce many computational errors — thus, no cure so far for a passage through a phase transition in quantum annealing computers.
    A New Way to Separate Isotopes
    Another long-standing problem that can benefit from this effect is isotope separation. For instance, natural uranium often must be separated into enriched and depleted isotopes so that the enriched uranium can be used for nuclear power or national security purposes. The current separation process is costly and energy-intensive. The discovered effect means that, by making a mixture of interacting ultracold atoms pass dynamically through a quantum phase transition, different isotopes can be selectively excited or left unexcited, and then separated using available magnetic deflection techniques.
    The funding: This work was carried out under the support of the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, Condensed Matter Theory Program. Bin Yan also acknowledges support from the Center for Nonlinear Studies at LANL.

  • Lonely adolescents are susceptible to internet addiction

    Loneliness is a risk factor associated with adolescents being drawn into compulsive internet use. The risk of compulsive use has grown during the coronavirus pandemic: loneliness has become increasingly prevalent among adolescents, who spend longer and longer periods of time online.
    A study investigating detrimental internet use by adolescents followed a total of 1,750 Finnish subjects at three points in time: at 16, 17 and 18 years of age. The results have been published in the journal Child Development.
    Adolescents’ net use is a two-edged sword: while the consequences of moderate use are positive, the effects of compulsive use can be detrimental. Compulsive use denotes, among other things, gaming addiction or the constant monitoring of likes on social media and comparisons to others.
    “In the coronavirus period, loneliness has increased markedly among adolescents. They look for a sense of belonging from the internet. Lonely adolescents head to the internet and are at risk of becoming addicted. Internet addiction can further aggravate their malaise, such as depression,” says Professor of Education and study lead Katariina Salmela-Aro from the University of Helsinki.
    Highest risk for 16-year-old boys
    The risk of being drawn into problematic internet use was at its highest among 16-year-old adolescents, and the phenomenon was more common among boys. For some, the problem persists into adulthood, but for others it eases as they grow older. The reduction of problematic internet use is often associated with adolescent development, as self-regulation and self-control improve, the brain adapts, and educational assignments direct their attention.
    “It’s comforting to know that problematic internet use is adaptive and often changes in late adolescence and during the transition to adulthood. Consequently, attention should be paid to the matter both in school and at home. Addressing loneliness too serves as a significant channel for preventing excessive internet use,” Salmela-Aro notes.
    The study also found that the household climate and parenting matter: the children of distant parents have a higher risk of drifting into detrimental internet use. If parents are not very interested in the lives of their adolescents, the latter may have difficulty drawing the lines for their actions.
    Problematic net use and depression form a cycle
    In the study participants, compulsive internet use had a link to depression. Depression predicted problematic internet use, while problematic use further increased depressive symptoms.
    Additionally, problematic use was predictive of poorer academic success, which may be associated with the fact that internet use consumes a great deal of time and can disrupt adolescents’ sleep rhythm and recovery, consequently eating up the time available for academic effort and performance.

    Story Source:
    Materials provided by University of Helsinki. Original written by Katariina Salmela-Aro, Suvi Uotinen. Note: Content may be edited for style and length.

  • Positive vibes only: Forgo negative texts or risk being labelled a downer

    A new study from researchers at the University of Ottawa’s School of Psychology has found that using negative emojis in text messages produces a negative perception of the sender regardless of their true intent.
    Isabelle Boutet, a Full Professor in Psychology in the Faculty of Social Sciences, and her team’s findings are included in the study ‘Emojis influence emotional communication, social attributions, and information processing’ which was published in Computers in Human Behavior.
    Study background: The researchers tracked the eye movements of 38 University of Ottawa undergraduate volunteers (average age 18) as they viewed sentence-emoji pairings under 12 different conditions: sentences could be negative, positive, or neutral, and were accompanied by a negative emoji, positive emoji, neutral emoji, or no emoji. Participants were asked to rate each message in terms of the emotional state of the sender and how warm they found the sender to be.
    Dr. Boutet, whose research aims at understanding how humans analyze social cues conveyed by faces, discusses the findings.
    She said, “Emojis are consequential and have an impact on the interpretation of the sender by the receiver. If you display any form of negativity — even pairing a positive emoji with a negative message — it is going to be interpreted negatively. You are going to be perceived as a person who is cold, and you will come across as in a negative mood when using negative emojis, regardless of the tone.
    “Even if you have a positive message with a negative emoji, the receiver will interpret the sender as being in a negative mood. Any reference to negativity will drive how people interpret your emotional state when you write a text message.

    “We also found certain types of messages were more difficult to convey; people have a lot of problems interpreting messages that are meant to convey irony or sarcasm.”
    What does this tell us about texting vs. face-to-face interactions?
    “People often try to control the emotion they convey with their faces to avoid social conflict. Yet people use emojis for fun without giving it much thought when, in fact, they have a strong impact on interpersonal interactions.
    “The big question is: do emojis act as proxies? Do they engage the same mechanisms as facial expressions of emotions, which play a large role in face-to-face (FTF) interaction? With FTF interactions, we have — through evolution — developed highly specialized mechanisms that process these facial expressions of emotions. Kids use a lot of these digital interactions and they risk losing the ability to interact FTF.”
    How can the use of emojis and their meaning be improved?
    “There are a lot of emojis, and for many of them we don’t even know what they mean, so people can easily misinterpret them. We are looking at developing new emojis that convey emotions in a more consistent and accurate manner, that better mimic facial expressions of emotions, and that reduce the lexicon of emojis, which could be especially helpful to less tech-savvy older adults. Our goal is to develop new emojis and/or memojis that convey clear signals that are not as confusing.”
    “You should not think that emojis are a cute little thing that you add to a text message with no consequence for your interaction. Emojis have large consequences and a strong impact on how your text message will be interpreted and how you will be perceived.”

    Story Source:
    Materials provided by University of Ottawa. Note: Content may be edited for style and length.