More stories

  • Better batteries start with basics — and a big computer

To understand the fundamental properties of an industrial solvent, chemists at the University of Cincinnati turned to a supercomputer.
    UC chemistry professor and department head Thomas Beck and UC graduate student Andrew Eisenhart ran quantum simulations to understand glycerol carbonate, a compound used in biodiesel and as a common solvent.
They found that the simulation captured the role of hydrogen bonding in determining the structural and dynamic properties of the liquid, detail that is missing from classical models. The study was published in the Journal of Physical Chemistry B.
Glycerol carbonate could be a more environmentally friendly chemical solvent for applications such as batteries, but chemists first have to know more about what is going on in these solutions. The team studied the dissolved compounds potassium fluoride and potassium chloride.
“The study we did gives us a fundamental understanding of how small changes to a molecular structure can have larger consequences for the solvent as a whole,” Eisenhart said. “And how these small changes affect its interactions with very important things like ions, which can have an effect on things like battery performance.”
Water is a seemingly simple solvent, as anyone who has stirred sugar in their coffee can attest.

  • Solving 'barren plateaus' is the key to quantum machine learning

    Many machine learning algorithms on quantum computers suffer from the dreaded “barren plateau” of unsolvability, where they run into dead ends on optimization problems. This challenge had been relatively unstudied — until now. Rigorous theoretical work has established theorems that guarantee whether a given machine learning algorithm will work as it scales up on larger computers.
“The work solves a key problem of usability for quantum machine learning. We rigorously proved the conditions under which certain architectures of variational quantum algorithms will or will not have barren plateaus as they are scaled up,” said Marco Cerezo, lead author on the paper published in Nature Communications today by a Los Alamos National Laboratory team. Cerezo is a postdoc researching quantum information theory at Los Alamos. “With our theorems, you can guarantee that the architecture will be scalable to quantum computers with a large number of qubits.”
“Usually the approach has been to run an optimization and see if it works, and that was leading to fatigue among researchers in the field,” said Patrick Coles, a coauthor of the study. Establishing mathematical theorems and working from first principles takes the guesswork out of developing algorithms.
    The Los Alamos team used the common hybrid approach for variational quantum algorithms, training and optimizing the parameters on a classical computer and evaluating the algorithm’s cost function, or the measure of the algorithm’s success, on a quantum computer.
Machine learning algorithms translate an optimization task — say, finding the shortest route for a traveling salesperson through several cities — into a cost function, said coauthor Lukasz Cincio. That cost function is a mathematical expression to be minimized; it reaches its minimum value only if you solve the problem.
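    To make that hybrid loop concrete, here is a minimal sketch in Python. It is an illustration only, not the Los Alamos team's algorithm: a one-parameter "circuit" prepares Ry(theta)|0>, the cost is the expectation value <Z> = cos(theta), and a classical gradient-descent loop, with the parameter-shift rule standing in for a gradient measured on hardware, drives the cost to its minimum.

    ```python
    import numpy as np

    # Toy variational loop: one parameter, cost <Z> = cos(theta).
    # On real hardware this expectation value would be estimated from
    # repeated measurements on a quantum processor; here it is exact.

    def cost(theta):
        return np.cos(theta)

    def parameter_shift_gradient(theta, shift=np.pi / 2):
        # Parameter-shift rule: dC/dtheta = [C(theta+s) - C(theta-s)] / 2
        return 0.5 * (cost(theta + shift) - cost(theta - shift))

    theta = np.random.uniform(0.0, 2.0 * np.pi)  # random initialization
    for step in range(200):
        theta -= 0.1 * parameter_shift_gradient(theta)  # classical update

    print(f"theta = {theta:.3f}, cost = {cost(theta):.3f}")  # cost -> -1
    ```

    In larger circuits with many qubits and random initialization, gradients like the one above can vanish, which is exactly the barren-plateau problem the theorems address.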
Most quantum variational algorithms initiate their search randomly and evaluate the cost function globally across every qubit, which often leads to a barren plateau.

  • Researchers help keep pace with Moore's Law by exploring a new material class

Progress in the field of integrated circuits is measured by matching, exceeding, or falling behind the rate set forth by Gordon Moore, co-founder and former CEO of Intel, who said the number of electronic components, or transistors, per integrated circuit would double every year. That was more than 50 years ago, and, surprisingly, his prediction, now called Moore’s Law, came true.
In recent years, the pace is thought to have slowed; one of the biggest challenges of putting more circuits and power on a smaller chip is managing heat.
    A multidisciplinary group that includes Patrick E. Hopkins, a professor in the University of Virginia’s Department of Mechanical and Aerospace Engineering, and Will Dichtel, a professor in Northwestern University’s Department of Chemistry, is inventing a new class of material with the potential to keep chips cool as they keep shrinking in size — and to help Moore’s Law remain true. Their work was recently published in Nature Materials.
Electrical insulation materials that minimize electrical crosstalk in chips are called “low-k” dielectrics. This material type is the silent hero that makes all electronics possible by steering the current to eliminate signal erosion and interference; ideally, it can also pull damaging heat caused by electrical current away from the circuitry. The heat problem compounds as the chip gets smaller: not only are there more transistors in a given area, generating more heat in that same area, but they are also closer together, which makes it harder for the heat to dissipate.
    “Scientists have been in search of a low-k dielectric material that can handle the heat transfer and space issues inherent at much smaller scales,” Hopkins said. “Although we’ve come a long way, new breakthroughs are just not going to happen unless we combine disciplines. For this project we’ve used research and principles from several fields — mechanical engineering, chemistry, materials science, electrical engineering — to solve a really tough problem that none of us could work out on our own.”
Hopkins is one of the leaders of UVA Engineering’s Multifunctional Materials Integration initiative, which brings together researchers from multiple engineering disciplines to formulate materials with a wide array of functionalities.

  • New statistical model predicts which cities could become 'superspreaders'

    Researchers have developed a new statistical model that predicts which cities are more likely to become infectious disease hotspots, based both on interconnectivity between cities and the idea that some cities are more suitable environments for infection than others. Brandon Lieberthal and Allison Gardner of the University of Maine present these findings in the open-access journal PLOS Computational Biology.
    In an epidemic, different cities have varying risks of triggering superspreader events, which spread unusually large numbers of infected people to other cities. Previous research has explored how to identify potential “superspreader cities” based on how well each city is connected to others or on each city’s distinct suitability as an environment for infection. However, few studies have incorporated both factors at once.
    Now, Lieberthal and Gardner have developed a mathematical model that identifies potential superspreaders by incorporating both connectivity between cities and their varying suitability for infection. A city’s infection suitability depends on the specific disease being considered, but could incorporate characteristics such as climate, population density, and sanitation.
    The researchers validated their model with a simulation of epidemic spread across randomly generated networks. They found that the risk of a city becoming a superspreader increases with infection suitability only up to a certain extent, but risk increases indefinitely with increased connectivity to other cities.
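    As a rough illustration of how the two factors interact, here is a toy sketch in Python. It is an assumption-laden stand-in, not Lieberthal and Gardner's model: cities sit on a randomly generated network, each city has its own infection suitability, and a city's crude "superspreading" score is the expected number of neighboring cities it infects directly.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_cities = 50
    # Random symmetric connectivity between cities (no self-links)
    adjacency = rng.random((n_cities, n_cities)) < 0.1
    adjacency = np.triu(adjacency, 1)
    adjacency = adjacency | adjacency.T
    # Per-city infection suitability in [0, 1] (a modeling assumption)
    suitability = rng.uniform(0.0, 1.0, n_cities)

    def superspread_score(seed, trials=500):
        """Average number of neighboring cities the seed infects directly."""
        neighbors = np.flatnonzero(adjacency[seed])
        # Transmission to each neighbor depends on that city's suitability
        infected = rng.random((trials, neighbors.size)) < suitability[neighbors]
        return infected.sum(axis=1).mean()

    scores = np.array([superspread_score(c) for c in range(n_cities)])
    print("highest-risk cities:", np.argsort(scores)[::-1][:5])
    ```

    Even in this toy, the contribution of each neighbor saturates as suitability approaches its maximum, while adding connections keeps raising the total score, loosely echoing the saturating-versus-unbounded pattern reported above.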
    “Most importantly, our research produces a formula in which a disease management expert can input the properties of an infectious disease and the human mobility network and output a list of cities that are most likely to become superspreader locations,” Lieberthal says. “This could improve efforts to prevent or mitigate spread.”
The new model can be applied both to directly transmitted diseases, such as COVID-19, and to vector-borne illnesses, such as the mosquito-borne Zika virus. It could provide more in-depth guidance than traditional metrics of risk, while being much less computationally intensive than advanced simulations.
    Story Source:
Materials provided by PLOS.

  • System detects errors when medication is self-administered

    From swallowing pills to injecting insulin, patients frequently administer their own medication. But they don’t always get it right. Improper adherence to doctors’ orders is commonplace, accounting for thousands of deaths and billions of dollars in medical costs annually. MIT researchers have developed a system to reduce those numbers for some types of medications.
The new technology pairs wireless sensing with artificial intelligence to determine when a patient is using an insulin pen or inhaler, and flags potential errors in the patient’s administration method. “Some past work reports that up to 70% of patients do not take their insulin as prescribed, and many patients do not use inhalers properly,” says Dina Katabi, the Andrew and Erna Viterbi Professor at MIT, whose research group has developed the new solution. The researchers say the system, which can be installed in a home, could alert patients and caregivers to medication errors and potentially reduce unnecessary hospital visits.
The research appears in the journal Nature Medicine. The study’s lead authors are Mingmin Zhao, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and Kreshnik Hoti, a former visiting scientist at MIT and current faculty member at the University of Prishtina in Kosovo. Other co-authors include Hao Wang, a former CSAIL postdoc and current faculty member at Rutgers University, and Aniruddh Raghu, a CSAIL PhD student.
Some common drugs entail intricate delivery mechanisms. “For example, insulin pens require priming to make sure there are no air bubbles inside. And after injection, you have to hold for 10 seconds,” says Zhao. “All those little steps are necessary to properly deliver the drug to its active site.” Each step also presents an opportunity for error, especially when there’s no pharmacist present to offer corrective tips. Patients might not even realize when they make a mistake — so Zhao’s team designed an automated system that can.
    Their system can be broken down into three broad steps. First, a sensor tracks a patient’s movements within a 10-meter radius, using radio waves that reflect off their body. Next, artificial intelligence scours the reflected signals for signs of a patient self-administering an inhaler or insulin pen. Finally, the system alerts the patient or their health care provider when it detects an error in the patient’s self-administration.
    The researchers adapted their sensing method from a wireless technology they’d previously used to monitor people’s sleeping positions. It starts with a wall-mounted device that emits very low-power radio waves. When someone moves, they modulate the signal and reflect it back to the device’s sensor. Each unique movement yields a corresponding pattern of modulated radio waves that the device can decode. “One nice thing about this system is that it doesn’t require the patient to wear any sensors,” says Zhao. “It can even work through occlusions, similar to how you can access your Wi-Fi when you’re in a different room from your router.”
The new sensor sits in the background at home, like a Wi-Fi router, and uses artificial intelligence to interpret the modulated radio waves. The team developed a neural network to key in on patterns indicating the use of an inhaler or insulin pen. They trained the network on example movements, some relevant (e.g. using an inhaler) and some not (e.g. eating). Through repetition and reinforcement, the network successfully detected 96 percent of insulin pen administrations and 99 percent of inhaler uses.
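    For readers curious what such a classifier might look like, here is a minimal sketch using PyTorch. The architecture, the feature channels, and the class labels are assumptions made for illustration; they are not the MIT team's published network.

    ```python
    import torch
    import torch.nn as nn

    CLASSES = ["other", "inhaler", "insulin_pen"]  # assumed labels

    # Small 1D convolutional classifier over windows of signal features
    model = nn.Sequential(
        nn.Conv1d(in_channels=4, out_channels=16, kernel_size=7, padding=3),
        nn.ReLU(),
        nn.Conv1d(16, 32, kernel_size=7, padding=3),
        nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),  # pool over the time dimension
        nn.Flatten(),
        nn.Linear(32, len(CLASSES)),
    )

    # A batch of 8 windows: 4 feature channels x 256 time steps each
    signals = torch.randn(8, 4, 256)
    logits = model(signals)
    predictions = logits.argmax(dim=1)  # predicted class per window
    print([CLASSES[i] for i in predictions.tolist()])
    ```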
    Once it mastered the art of detection, the network also proved useful for correction. Every proper medicine administration follows a similar sequence — picking up the insulin pen, priming it, injecting, etc. So, the system can flag anomalies in any particular step. For example, the network can recognize if a patient holds down their insulin pen for five seconds instead of the prescribed 10 seconds. The system can then relay that information to the patient or directly to their doctor, so they can fix their technique.
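    The per-step check itself can be simple. The following hypothetical snippet illustrates the insulin-pen example above: compare a detected hold duration against the prescribed ten seconds and flag any shortfall.

    ```python
    PRESCRIBED_HOLD_SECONDS = 10.0  # from the example above

    def check_injection_hold(detected_seconds, tolerance=1.0):
        """Flag an insulin-pen hold that is shorter than prescribed."""
        if detected_seconds < PRESCRIBED_HOLD_SECONDS - tolerance:
            return (f"hold lasted {detected_seconds:.0f}s; "
                    f"prescribed {PRESCRIBED_HOLD_SECONDS:.0f}s")
        return None  # no anomaly

    print(check_injection_hold(5.0))   # flags the error from the example
    print(check_injection_hold(10.2))  # passes
    ```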
    “By breaking it down into these steps, we can not only see how frequently the patient is using their device, but also assess their administration technique to see how well they’re doing,” says Zhao.
The researchers say a key feature of their radio wave-based system is its noninvasiveness. “An alternative way to solve this problem is by installing cameras,” says Zhao. “But using a wireless signal is much less intrusive. It doesn’t show people’s appearance.”
He adds that their framework could be adapted to medications beyond inhalers and insulin pens — all it would take is retraining the neural network to recognize the appropriate sequence of movements. Zhao says that “with this type of sensing technology at home, we could detect issues early on, so the person can see a doctor before the problem is exacerbated.”

  • Artificial neuron device could shrink energy use and size of neural network hardware

    Training neural networks to perform tasks, such as recognizing images or navigating self-driving cars, could one day require less computing power and hardware thanks to a new artificial neuron device developed by researchers at the University of California San Diego. The device can run neural network computations using 100 to 1000 times less energy and area than existing CMOS-based hardware.
    Researchers report their work in a paper published Mar. 18 in Nature Nanotechnology.
Neural networks are a series of connected layers of artificial neurons, where the output of one layer provides the input to the next. Generating that input requires applying a mathematical calculation called a non-linear activation function. This is a critical part of running a neural network. But applying this function requires a lot of computing power and circuitry because it involves transferring data back and forth between two separate units — the memory and an external processor.
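    As a reminder of where that step sits in the computation, here is a minimal sketch, with ReLU standing in as one common non-linear activation; the particular function and layer sizes here are illustrative assumptions.

    ```python
    import numpy as np

    def relu(x):
        # Rectified linear unit, one common non-linear activation
        return np.maximum(0.0, x)

    rng = np.random.default_rng(1)
    x = rng.normal(size=8)        # output of one layer of neurons
    W = rng.normal(size=(4, 8))   # weights connecting to the next layer
    next_input = relu(W @ x)      # weighted sum, then the non-linear step
    print(next_input)
    ```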
    Now, UC San Diego researchers have developed a nanometer-sized device that can efficiently carry out the activation function.
    “Neural network computations in hardware get increasingly inefficient as the neural network models get larger and more complex,” said Duygu Kuzum, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering. “We developed a single nanoscale artificial neuron device that implements these computations in hardware in a very area- and energy-efficient way.”
The new study, led by Kuzum and her Ph.D. student Sangheon Oh, was performed in collaboration with a DOE Energy Frontier Research Center led by UC San Diego physics professor Ivan Schuller, which focuses on developing hardware implementations of energy-efficient artificial neural networks.

  • Teamwork makes light shine ever brighter

    If you’re looking for one technique to maximize photon output from plasmons, stop. It takes two to wrangle.
Rice University physicists came across a phenomenon that boosts the light from a nanoscale device to more than 1,000 times what they anticipated.
Looking at light coming from a plasmonic junction, a microscopic gap between two gold nanowires, the researchers found conditions in which applying optical or electrical energy individually prompted only a modest amount of light emission. Applying both together, however, caused a burst of light that far exceeded the output under either individual stimulus.
The researchers, led by Rice physicist Douglas Natelson with lead authors Longji Cui and Yunxuan Zhu, found the effect while following up on experiments showing that driving current through the gap increased the number of light-emitting “hot carrier” electrons in the electrodes.
    Now they know that adding energy from a laser to the same junction makes it even brighter. The effect could be employed to make nanophotonic switches for computer chips and for advanced photocatalysts.
    The details appear in the American Chemical Society journal Nano Letters.
“It’s been known for a long time that it’s possible to get light emission from these tiny structures,” Natelson said. “In our previous work, we showed that plasmons play an important role in generating very hot charge carriers, equivalent to a couple of thousand degrees.”

  • How gamblers plan their actions to maximize rewards

In their pursuit of maximum reward, people suffering from gambling disorder rely less on exploring new but potentially better strategies, and more on proven courses of action that have already led to success in the past. The neurotransmitter dopamine may play an important role in this, suspect Professor Dr Jan Peters and Dr Antonius Wiehler, who conducted the study in biological psychology at the University of Cologne’s Faculty of Human Sciences. The article ‘Attenuated directed exploration during reinforcement learning in gambling disorder’ has appeared in the latest edition of the Journal of Neuroscience, published by the Society for Neuroscience.
    Gambling disorder affects slightly less than one percent of the population — often men — and is in some ways similar to substance abuse disorders. Scientists suspect that this disorder, like other addiction disorders, is associated with changes in the dopamine system. The brain’s reward system releases the neurotransmitter dopamine during gambling. Since dopamine is important for the planning and control of actions, among other things, it could also affect strategic learning processes.
‘Gambling disorder is of scientific interest among other things because it is an addiction disorder that is not tied to a specific substance’, Professor Dr Jan Peters, one of the authors, remarked. The psychologists examined how gamblers plan their actions to maximize rewards — a learning process known as reinforcement learning. In the study, participants had to decide between already proven options and new ones in order to win as much as possible. At the same time, the scientists used functional magnetic resonance imaging to measure activity in regions of the brain that are important for processing reward stimuli and planning actions.
    Twenty-three habitual gamblers and twenty-three control subjects (all male) performed what is known as a ‘four-armed bandit task’. The name of this type of decision-making task refers to slot machines, known colloquially as ‘one-armed bandits’. In each run, the participants had to choose between four options (‘four-armed bandit’, in this case four coloured squares), whose winnings slowly changed. Different strategies can be employed here. For example, one can choose the option that yielded the highest profit last time. However, it is also possible to choose the option where the chance of winning is most uncertain — the option promising maximum information gain. The latter is also called directed (or uncertainty-based) exploration.
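    To make directed exploration concrete, here is a toy four-armed restless bandit agent in Python. The parameters and update rules are assumptions for illustration, not the authors' computational model: the agent picks the arm that maximizes estimated value plus an uncertainty bonus, and that bonus term is the directed-exploration component.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    n_arms, n_trials = 4, 200
    true_means = rng.uniform(20.0, 80.0, n_arms)  # hidden payouts

    values = np.full(n_arms, 50.0)       # estimated payout per arm
    uncertainty = np.full(n_arms, 10.0)  # estimation uncertainty per arm
    EXPLORATION_BONUS = 1.0              # weight on directed exploration
    LEARNING_RATE = 0.3

    for t in range(n_trials):
        choice = np.argmax(values + EXPLORATION_BONUS * uncertainty)
        reward = true_means[choice] + rng.normal(0.0, 4.0)
        values[choice] += LEARNING_RATE * (reward - values[choice])
        uncertainty[choice] *= 0.8   # chosen arm becomes less uncertain
        uncertainty += 0.5           # unobserved arms grow more uncertain
        true_means += rng.normal(0.0, 1.0, n_arms)  # payouts slowly change

    print("final value estimates:", values.round(1))
    ```

    In this picture, the attenuated directed exploration observed in the gamblers corresponds to a smaller weight on the uncertainty bonus.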
    Both groups won about the same amount of money and exhibited directed exploration. However, this was significantly less pronounced in the group of gamblers than in the control group. These results indicate that gamblers are less adaptive to changing environments during reinforcement learning. At the neural level, gamblers showed changes in a network of brain regions that has been associated with directed exploration in previous studies. In one previous study by the two biological psychologists, pharmacologically raising the dopamine level in healthy participants had shown a very similar effect on behaviour. ‘Although this indicates that dopamine might also play an important role in the reduction of directed exploration in gamblers, more research would have to be conducted to prove such a correlation,’ said Dr Antonius Wiehler.
    Further research also needs to clarify whether the observed changes in decision-making behaviour in gamblers are a risk factor for, or a consequence of, regular gambling.
    Story Source:
Materials provided by University of Cologne.