More stories

  •

    From bits to p-bits: One step closer to probabilistic computing

    Tohoku University scientists in Japan have developed a mathematical description of what happens within tiny magnets as they fluctuate between states when an electric current and magnetic field are applied. Their findings, published in the journal Nature Communications, could act as the foundation for engineering more advanced computers that can quantify uncertainty while interpreting complex data.
    Classical computers have gotten us this far, but there are some problems that they cannot address efficiently. Scientists have been working on addressing this by engineering computers that can utilize the laws of quantum physics to recognize patterns in complex problems. But these so-called quantum computers are still in their early stages of development and are extremely sensitive to their surroundings, requiring extremely low temperatures to function.
    Now, scientists are looking at something different: a concept called probabilistic computing. This type of computer, which could function at room temperature, would be able to infer potential answers from complex input. A simple example of this type of problem would be inferring information about a person from their purchasing behaviour. Instead of providing a single, discrete result, the computer picks out patterns and delivers a good guess of what the result might be.
    There could be several ways to build such a computer, but some scientists are investigating the use of devices called magnetic tunnel junctions. These are made from two layers of magnetic metal separated by an ultrathin insulator (Fig. 1). When these nanomagnetic devices are thermally activated under an electric current and magnetic field, electrons tunnel through the insulating layer and, depending on their spin, cause changes, or fluctuations, within the magnets. These fluctuating states, called p-bits, are the probabilistic counterpart of the deterministic 0/1 bits of classical computers and could form the basis of probabilistic computing. But to engineer probabilistic computers, scientists first need to be able to describe the physics that happens within magnetic tunnel junctions.
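    The behaviour of a p-bit can be sketched in a few lines of code. This is not the Tohoku team's model, just the stochastic update rule commonly used in the probabilistic-computing literature, with a hypothetical `bias` parameter standing in for the applied current and magnetic field:

```python
import math
import random

def sample_pbit(bias, n=100_000, seed=0):
    """Average output of a p-bit that fluctuates between +1 and -1.

    Standard p-bit update rule: output sign(tanh(bias) - r) with
    r drawn uniformly from [-1, 1], so the probability of +1 is
    (1 + tanh(bias)) / 2 and the mean output is tanh(bias).
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        r = rng.uniform(-1.0, 1.0)
        total += 1 if math.tanh(bias) - r > 0 else -1
    return total / n  # average "magnetization" of the fluctuating bit

# With zero bias the p-bit fluctuates evenly (mean near 0); a positive
# bias pins it toward +1 (mean near tanh(2) ~ 0.96).
print(sample_pbit(0.0))
print(sample_pbit(2.0))
```

Tuning `bias` continuously between strongly negative and strongly positive values is what distinguishes a p-bit from a deterministic 0/1 bit.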
    This is precisely what Shun Kanai, professor at Tohoku University’s Research Institute of Electrical Communication, and his colleagues have achieved.
    “We have experimentally clarified the ‘switching exponent’ that governs fluctuation under the perturbations caused by magnetic field and spin-transfer torque in magnetic tunnel junctions,” says Kanai. “This gives us the mathematical foundation to implement magnetic tunnel junctions into the p-bit in order to sophisticatedly design probabilistic computers. Our work has also shown that these devices can be used to investigate unexplored physics related to thermally activated phenomena.”
    Story Source:
    Materials provided by Tohoku University. Note: Content may be edited for style and length.

  •

    Mixing things up: Optimizing fluid mixing with machine learning

    Mixing of fluids is a critical component in many industrial and chemical processes. Pharmaceutical mixing and chemical reactions, for instance, may require homogeneous fluid mixing. Achieving this mixing faster and with less energy would reduce the associated costs greatly. In reality, however, most mixing processes are not mathematically optimized and instead rely on trial-and-error-based empirical methods. Turbulent mixing, which uses turbulence to mix up fluids, is an option but is problematic as it is either difficult to sustain (such as in micro-mixers) or damages the materials being mixed (such as in bioreactors and food mixers).
    Can an optimized mixing be achieved for laminar flows instead? To answer this question, a team of researchers from Japan, in a new study, turned to machine learning. In their study published in Scientific Reports, the team resorted to an approach called “reinforcement learning” (RL), in which intelligent agents take actions in an environment to maximize the cumulative reward (as opposed to an instantaneous reward).
    “Since RL maximizes the cumulative reward, which is global-in-time, it can be expected to be suitable for tackling the problem of efficient fluid mixing, which is also a global-in-time optimization problem,” explains Associate Professor Masanobu Inubushi, the corresponding author of the study. “Personally, I have a conviction that it is important to find the right algorithm for the right problem rather than blindly apply a machine learning algorithm. Luckily, in this study, we managed to connect the two fields (fluid mixing and reinforcement learning) after considering their physical and mathematical characteristics.” The work included contributions from Mr. Mikito Konishi, a graduate student, and Prof. Susumu Goto, both from Osaka University.
    One major roadblock awaited the team, however. While RL is suitable for global optimization problems, it is not particularly well-suited for systems involving high-dimensional state spaces, i.e., systems requiring a large number of variables for their description. Unfortunately, fluid mixing was just such a system.
    To address this issue, the team adopted an approach used in the formulation of another optimization problem, which enabled them to reduce the state space dimension for fluid flow to one. Put simply, the fluid motion could now be described using only a single parameter!
    The RL algorithm is usually formulated in terms of a “Markov decision process” (MDP), a mathematical framework for decision making in situations where the outcomes are part random and part controlled by the decision maker. Using this approach, the team showed that RL was effective in optimizing fluid mixing.
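    The MDP framing can be made concrete with a toy example. The sketch below is not the study's mixing environment, just a minimal five-state chain solved with value iteration (a standard dynamic-programming method for MDPs). It illustrates the point the study makes: maximizing the cumulative reward can require forgoing the instantaneous one.

```python
# Toy MDP: states 0..4 on a chain. Action 0 ("stay") pays a small
# instant reward of 1; action 1 ("move right") pays nothing until the
# goal state 4 is entered, which pays 10. The greedy-in-the-moment
# choice (stay) is not the cumulative-reward optimum (move).
GAMMA = 0.9          # discount factor
N, GOAL = 5, 4

def step(s, a):
    """Deterministic transition: returns (next_state, reward)."""
    if a == 0:
        return s, 1.0
    nxt = min(s + 1, GOAL)
    return nxt, 10.0 if nxt == GOAL else 0.0

# Value iteration: repeatedly apply the Bellman optimality update.
v = [0.0] * N
for _ in range(200):
    v = [max(r + GAMMA * v[s2]
             for s2, r in (step(s, 0), step(s, 1)))
         for s in range(N)]

# Extract the greedy policy with respect to the converged values.
policy = [max((0, 1), key=lambda a: step(s, a)[1] + GAMMA * v[step(s, a)[0]])
          for s in range(N)]
print(policy)  # → [1, 1, 1, 1, 1]: always head for the goal
```

The optimal policy moves right everywhere even though "stay" pays off immediately, which is exactly the global-in-time character of cumulative-reward optimization the authors exploit.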

  •

    The Tonga eruption may have spawned a tsunami as tall as the Statue of Liberty

    The massive Tonga eruption generated a set of planet-circling tsunamis that may have started out as a single mound of water roughly the height of the Statue of Liberty.

    What’s more, the explosive eruption triggered an immense atmospheric shock wave that spawned a second set of especially fast-moving tsunamis, a rare phenomenon that can complicate early warnings for these oft-destructive waves, researchers report in the October Ocean Engineering.

    As the Hunga Tonga–Hunga Ha’apai undersea volcano erupted in the South Pacific in January, it displaced a large volume of water upward, says Mohammad Heidarzadeh, a civil engineer at the University of Bath in England (SN: 1/21/22). The water in that colossal mound later “ran downhill,” as fluids tend to do, to generate the initial set of tsunamis.

    To estimate the original size of the mound, Heidarzadeh and his team used computer simulations, as well as data from deep-ocean instruments and coastal tide gauges within about 1,500 kilometers of the eruption, many of them in or near New Zealand. The arrival times of tsunami waves, as well as their sizes, at those locations were key pieces of data, Heidarzadeh says.

    The team analyzed nine possibilities for the initial wave, each of which was shaped like a baseball pitcher’s mound and had a distinct height and diameter. The best fit to the real-world data came from a mound of water a whopping 90 meters tall and 12 kilometers in diameter, the researchers report.
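    The fitting procedure described above can be caricatured in a few lines. Everything here is hypothetical: a toy forward model (simple geometric spreading) stands in for the team's tsunami simulations, and the "observations" are synthetic. But the structure is the same: simulate each candidate mound, score it against gauge data, keep the best fit.

```python
import math

def predicted_amplitude(height_m, distance_km):
    """Toy forward model: wave amplitude scales with mound height and
    decays geometrically with distance from the source (illustrative
    only; the real study used tsunami simulations)."""
    return height_m / math.sqrt(distance_km)

# Synthetic (distance_km, amplitude_m) "gauge readings", fabricated
# here to be consistent with a 90 m source mound.
observations = [(500, 4.02), (1000, 2.85), (1500, 2.32)]

def misfit(height_m):
    """Sum of squared differences between model and observations."""
    return sum((predicted_amplitude(height_m, d) - a) ** 2
               for d, a in observations)

candidates = [45, 60, 75, 90, 105]       # mound heights to test, in m
best = min(candidates, key=misfit)
print(best)  # → 90, the best-fitting candidate
```

Replacing the toy amplitude formula with a full hydrodynamic simulation, and the candidate list with the nine mound shapes, recovers the shape of the team's analysis.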

    That initial wave would have contained an estimated 6.6 cubic kilometers of water. “This was a really large tsunami,” Heidarzadeh says.

    Despite starting out about nine times as tall as the tsunami that devastated the Tohoku region of Japan in 2011, the Tongan tsunamis killed only five people and caused about $90 million in damage, largely because of their remote source (SN: 2/10/12).

    Another unusual aspect of the Tongan eruption is the second set of tsunamis generated by a strong atmospheric pressure wave.

    That pressure pulse resulted from a steam explosion that occurred when a large volume of seawater infiltrated the hot magma chamber beneath the erupting volcano. As the pressure wave raced across the ocean’s surface at speeds exceeding 300 meters per second, it pushed water ahead of it, creating tsunamis, Heidarzadeh explains.

    The eruption of the Hunga Tonga–Hunga Ha’apai volcano also triggered an atmospheric pressure wave that in turn generated tsunamis that traveled faster than expected. (Image: NASA Earth Observatory)

    Along many coastlines, including some in the Indian Ocean and Mediterranean Sea, these pressure wave–generated tsunamis arrived hours ahead of the gravity-driven waves spreading from the 90-meter-tall mound of water. Gravity-driven tsunami waves typically travel across the deepest parts of the ocean, far from continents, at speeds between 100 and 220 meters per second. When they reach shallow water near shore, the waves slow, water stacks up and then strikes the shore, where the destruction occurs.
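    The speeds quoted above follow from the standard long-wave (shallow-water) relation, c = sqrt(g·d), where d is the ocean depth. A quick sketch:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed(depth_m):
    """Long-wave phase speed c = sqrt(g * d), valid when the
    wavelength is much greater than the water depth."""
    return math.sqrt(G * depth_m)

# Open-ocean depths of roughly 1-5 km bracket the 100-220 m/s range
# quoted above; the Tonga pressure pulse outran all of them at 300+ m/s.
for depth in (1000, 4000, 5000):
    print(depth, round(tsunami_speed(depth)))  # → 99, 198, 221 m/s
```

The same relation explains why the waves slow near shore: as the depth d shrinks, so does c, and the wave energy piles up into greater heights.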

    Pressure wave–generated tsunamis have been reported for only one other volcanic eruption: the 1883 eruption of Krakatau in Indonesia (SN: 8/27/83).

    Those quicker-than-expected arrival times — plus the fact that the pressure-wave tsunamis for the Tongan eruption were comparable in size with the gravity-driven ones — could complicate early warnings for these tsunamis. That’s concerning, Heidarzadeh says.

    One way to address the issue would be to install instruments that measure atmospheric pressure alongside the deep-sea equipment already in place to detect tsunamis, says Hermann Fritz, a tsunami scientist at Georgia Tech in Atlanta.

    With that setup, scientists would be able to discern if a passing tsunami is associated with a pressure pulse, thus providing a clue in real time about how fast the tsunami wave might be traveling.

  •

    Getting data to do more for biodiversity

    Michigan State University ecologists have developed a mathematical framework that could help monitor and preserve biodiversity without breaking the bank.
    This framework or model takes low-cost data about relatively abundant species in a community and uses it to generate valuable insights on their harder-to-find neighbors. The journal Conservation Biology published the research as an Early View article on Aug. 25.
    “One of the biggest challenges in monitoring biodiversity is that the species you’re most concerned about tend to be lowest in abundance or they’re the hardest species to observe during data collection,” said Matthew Farr, the lead author on the new report. “This model can be really helpful for those rare and elusive species.”
    Farr, now a postdoctoral researcher at the University of Washington, helped develop the model as a doctoral student in Elise Zipkin’s Quantitative Ecology Lab in the College of Natural Science at MSU.
    “There are a lot of species in the world and many of them are data deficient,” said Zipkin, an associate professor of integrative biology and director of MSU’s Ecology, Evolution and Behavior Program, or EEB. “We’re developing approaches to more quickly estimate what’s going on with biodiversity, which species are in trouble and where, spatially, do we need to focus our conservation efforts.”
    After validating the model with an assist from forest-dwelling antelope in Africa, the researchers say it could be applied to a variety of other animals that meet certain criteria.

  •

    Silicon image sensor that computes

    As any driver knows, accidents can happen in the blink of an eye — so when it comes to the camera system in autonomous vehicles, processing time is critical. The time that it takes for the system to snap an image and deliver the data to the microprocessor for image processing could mean the difference between avoiding an obstacle or getting into a major accident.
    In-sensor image processing, in which important features are extracted from raw data by the image sensor itself instead of the separate microprocessor, can speed up the visual processing. To date, demonstrations of in-sensor processing have been limited to emerging research materials which are, at least for now, difficult to incorporate into commercial systems.
    Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed the first in-sensor processor that could be integrated into commercial silicon imaging sensor chips — known as complementary metal-oxide-semiconductor (CMOS) image sensors — that are used in nearly all commercial devices that need to capture visual information, including smartphones.
    The research is published in Nature Electronics.
    “Our work can harness the mainstream semiconductor electronics industry to rapidly bring in-sensor computing to a wide variety of real-world applications,” said Donhee Ham, the Gordon McKay Professor of Electrical Engineering and Applied Physics at SEAS and senior author of the paper.
    Ham and his team developed a silicon photodiode array. Commercially available image-sensing chips also contain a silicon photodiode array to capture images, but the team’s photodiodes are electrostatically doped, meaning that the sensitivity of individual photodiodes, or pixels, to incoming light can be tuned by voltage. An array that connects multiple voltage-tunable photodiodes together can perform an analog version of the multiplication and addition operations central to many image-processing pipelines, extracting the relevant visual information as soon as the image is captured.
    “These dynamic photodiodes can concurrently filter images as they are captured, allowing for the first stage of vision processing to be moved from the microprocessor to the sensor itself,” said Houk Jang, a postdoctoral fellow at SEAS and first author of the paper.
    The silicon photodiode array can be programmed into different image filters to remove unnecessary details or noise for various applications. An imaging system in an autonomous vehicle, for example, may call for a high-pass filter to track lane markings, while other applications may call for a filter that blurs for noise reduction.
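    The multiply-and-add filtering described above is, in digital terms, a convolution. The sketch below shows the operation the photodiode array performs in the analog domain; the 3x3 high-pass kernel is a textbook Laplacian-style example, not the specific filter from the paper.

```python
# A 3x3 high-pass kernel: each output pixel is a weighted sum of its
# neighborhood -- the same multiply-and-add that the voltage-tuned
# photodiode array carries out at the pixel, before readout.
KERNEL = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]

def convolve(image, kernel):
    """Valid-mode 2D convolution (no padding), plain Python lists."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h - 2):
        row = []
        for x in range(w - 2):
            acc = sum(kernel[j][i] * image[y + j][x + i]
                      for j in range(3) for i in range(3))
            row.append(acc)
        out.append(row)
    return out

# A flat region has no edges, so the high-pass response is zero
# everywhere: only abrupt changes (edges, lane markings) survive.
flat = [[5] * 4 for _ in range(4)]
print(convolve(flat, KERNEL))  # → [[0, 0], [0, 0]]
```

Swapping in a different kernel (e.g., an all-positive averaging kernel for blurring) corresponds to reprogramming the photodiode voltages for a different application.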
    “Looking ahead, we foresee the use of this silicon-based in-sensor processor not only in machine vision applications, but also in bio-inspired applications, wherein early information processing allows for the co-location of sensor and compute units, like in the brain,” said Henry Hinton, a graduate student at SEAS and co-first author of the paper.
    Next, the team aims to increase the density of photodiodes and integrate them with silicon integrated circuits.
    “By replacing the standard non-programmable pixels in commercial silicon image sensors with the programmable ones developed here, imaging devices can intelligently trim out unneeded data and thus be made more efficient in both energy and bandwidth to address the demands of the next generation of sensory applications,” said Jang.
    The research was co-authored by Woo-Bin Jung, Min-Hyun Lee, Changhyun Kim, Min Park, Seoung-Ki Lee and Seongjun Park. It was supported by the Samsung Advanced Institute of Technology under Contract A30216 and by the National Science Foundation Science and Technology Center for Integrated Quantum Materials under Contract DMR-1231319.

  •

    Small molecules, giant (surface) potential

    In a molecular feat akin to getting pedestrians in a scramble crosswalk to spontaneously start walking in step, researchers at Kyushu University have created a series of molecules that tend to face the same direction to form a ‘giant surface potential’ when evaporated onto a surface.
    The researchers hope to utilize the approach to generate controlled electric fields that help improve the efficiency of organic light-emitting diodes used in displays and lighting and open new routes for realizing devices that convert vibrations into electricity with organic materials.
    Based on the fantastic chemical versatility of carbon that makes living organisms possible, organic electronics are already driving a wave of vibrant — and even flexible — smartphone and television screens, with applications in solar cells, lasers, and circuits on the horizon.
    This flexibility is in part due to the disordered nature of the thin films of the materials used in the devices. Unlike common inorganic electronics based on silicon atoms tightly connected in rigid, well-organized crystals, organics usually form ‘amorphous’ layers that are not nearly as neatly organized.
    Despite the seemingly random organization of the molecules, researchers have found that some in fact tend to align in similar directions, profoundly impacting the properties of a device and creating new possibilities for controlling device performance.
    “Significant work has already been done on molecules that align in a way that the light they emit can more easily escape a device,” says Masaki Tanaka, an assistant professor at Tokyo University of Agriculture and Technology (TUAT), who began this work at Kyushu University’s Center for Organic Photonics and Electronics Research (OPERA) and continued studying molecular alignment in amorphous films after moving to TUAT.

  •

    Artificial intelligence model can detect Parkinson's from breathing patterns, researchers show

    Parkinson’s disease is notoriously difficult to diagnose as it relies primarily on the appearance of motor symptoms such as tremors, stiffness, and slowness, but these symptoms often appear several years after the disease onset. Now, Dina Katabi, the Thuan (1990) and Nicole Pham Professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT and principal investigator at MIT Jameel Clinic, and her team have developed an artificial intelligence model that can detect Parkinson’s just from reading a person’s breathing patterns.
    The tool in question is a neural network, a series of connected algorithms that mimic the way a human brain works, capable of assessing whether someone has Parkinson’s from their nocturnal breathing — i.e., breathing patterns that occur while sleeping. The neural network, which was trained by MIT PhD student Yuzhe Yang and postdoc Yuan Yuan, is also able to discern the severity of someone’s Parkinson’s disease and track the progression of their disease over time.
    Yang is first author on a new paper describing the work, published today in Nature Medicine. Katabi, who is also an affiliate of the MIT Computer Science and Artificial Intelligence Laboratory and director of the Center for Wireless Networks and Mobile Computing, is the senior author. They are joined by Yuan and 12 colleagues from Rutgers University, the University of Rochester Medical Center, the Mayo Clinic, Massachusetts General Hospital, and the Boston University College of Health and Rehabilitation.
    Over the years, researchers have investigated the potential of detecting Parkinson’s using cerebrospinal fluid and neuroimaging, but such methods are invasive, costly, and require access to specialized medical centers, making them unsuitable for frequent testing that could otherwise provide early diagnosis or continuous tracking of disease progression.
    The MIT researchers demonstrated that the artificial intelligence assessment of Parkinson’s can be done every night at home while the person is asleep and without touching their body. To do so, the team developed a device with the appearance of a home Wi-Fi router, but instead of providing internet access, the device emits radio signals, analyzes their reflections off the surrounding environment, and extracts the subject’s breathing patterns without any bodily contact. The breathing signal is then fed to the neural network to assess Parkinson’s in a passive manner, and there is zero effort needed from the patient and caregiver.
    “A relationship between Parkinson’s and breathing was noted as early as 1817, in the work of Dr. James Parkinson. This motivated us to consider the potential of detecting the disease from one’s breathing without looking at movements,” Katabi says. “Some medical studies have shown that respiratory symptoms manifest years before motor symptoms, meaning that breathing attributes could be promising for risk assessment prior to Parkinson’s diagnosis.”
    The fastest-growing neurological disease in the world, Parkinson’s is the second-most common neurological disorder, after Alzheimer’s disease. In the United States alone, it afflicts over 1 million people and has an annual economic burden of $51.9 billion. The research team’s algorithm was tested on 7,687 individuals, including 757 Parkinson’s patients.
    Katabi notes that the study has important implications for Parkinson’s drug development and clinical care. “In terms of drug development, the results can enable clinical trials with a significantly shorter duration and fewer participants, ultimately accelerating the development of new therapies. In terms of clinical care, the approach can help in the assessment of Parkinson’s patients in traditionally underserved communities, including those who live in rural areas and those with difficulty leaving home due to limited mobility or cognitive impairment,” she says.
    “We’ve had no therapeutic breakthroughs this century, suggesting that our current approaches to evaluating new treatments are suboptimal,” says Ray Dorsey, a professor of neurology at the University of Rochester and a Parkinson’s specialist who co-authored the paper. Dorsey adds that the study is likely one of the largest sleep studies ever conducted on Parkinson’s. “We have very limited information about manifestations of the disease in their natural environment and [Katabi’s] device allows you to get objective, real-world assessments of how people are doing at home. The analogy I like to draw [of current Parkinson’s assessments] is a street lamp at night, and what we see from the street lamp is a very small segment … [Katabi’s] entirely contactless sensor helps us illuminate the darkness.”
    This research was performed in collaboration with the University of Rochester, Mayo Clinic, and Massachusetts General Hospital, and is sponsored by the National Institutes of Health, with partial support by the National Science Foundation and the Michael J. Fox Foundation.
    Story Source:
    Materials provided by Massachusetts Institute of Technology. Original written by Alex Ouyang, Abdul Latif Jameel Clinic for Machine Learning in Health.

  •

    Making bike-sharing work

    They’re everywhere, from Berlin to Beijing: brightly coloured bicycles you can borrow to move around the city without a car. These systems, along with e-scooters, offer people a quick and convenient way to travel around urban areas. And at a time when cities are scrambling to find ways to meet their climate goals, they’re a welcome tool for urban planners.
    Making sure the bikes and e-scooters are on hand can be something of a challenge — but it’s also key to the success of the offer, says Steffen Bakker, a researcher at NTNU’s Department of Industrial Economics and Technology Management who studies ways to make transport greener and more efficient.
    “If a system like this is going to be successful, then we need to have user satisfaction,” Bakker said. “People want the bikes to be there when they want to use them, and they will only want to use the system if it’s a good service.”
    Bakker was a co-author on a recent paper that describes an optimization model to help cities and companies do a better job keeping their bike-sharing customers happy.
    Like shooting a moving target
    Consider the challenges of providing bikes or scooters where and when people will want them.