More stories

  • Researchers use infrared light to wirelessly transmit power over 30 meters

    Imagine walking into an airport or grocery store and your smartphone automatically starts charging. This could be a reality one day, thanks to a new wireless laser charging system that overcomes some of the challenges that have hindered previous attempts to develop safe and convenient on-the-go charging systems.
    “The ability to power devices wirelessly could eliminate the need to carry around power cables for our phones or tablets,” said research team leader Jinyong Ha from Sejong University in South Korea. “It could also power various sensors such as those in Internet of Things (IoT) devices and sensors used for monitoring processes in manufacturing plants.”
    In the Optica Publishing Group journal Optics Express, the researchers describe their new system, which uses infrared light to safely transfer high levels of power. Laboratory tests showed that it could transfer 400 mW of light power over distances of up to 30 meters. This power is sufficient for charging sensors, and with further development, it could be increased to levels necessary to charge mobile devices.
    Several techniques have been studied for long-range wireless power transfer. However, it has been difficult to safely send enough power over meter-level distances. To overcome this challenge, the researchers optimized a method called distributed laser charging, which has recently gained more attention for this application because it provides safe high-power illumination with less light loss.
    “While most other approaches require the receiving device to be in a special charging cradle or to be stationary, distributed laser charging enables self-alignment without tracking processes as long as the transmitter and receiver are in the line of sight of each other,” said Ha. “It also automatically shifts to a safe low power delivery mode if an object or a person blocks the line of sight.”
    Going the distance
    Distributed laser charging works somewhat like a traditional laser but instead of the optical components of the laser cavity being integrated into one device, they are separated into a transmitter and receiver. When the transmitter and receiver are within a line of sight, a laser cavity is formed between them over the air — or free space — which allows the system to deliver light-based power. If an obstacle cuts the transmitter-receiver line of sight, the system automatically switches to a power-safe mode, achieving hazard-free power delivery in the air. More
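    The switching behavior described above lends itself to a simple control loop. The sketch below only illustrates that logic under assumed values (the 400 mW figure comes from the lab tests; the eye-safe fallback level and the line-of-sight flag are hypothetical); it is not the authors' implementation.

      # Minimal sketch of the line-of-sight safety logic described above.
      # FULL_POWER_MW reflects the reported lab figure; SAFE_POWER_MW and the
      # line-of-sight flag are assumptions made for illustration only.
      FULL_POWER_MW = 400
      SAFE_POWER_MW = 1   # hypothetical hazard-free fallback level

      def delivered_power_mw(line_of_sight_clear: bool) -> float:
          # The over-the-air laser cavity only sustains full power while the
          # transmitter and receiver can "see" each other; any obstruction
          # breaks the cavity and the system drops to the safe mode.
          return FULL_POWER_MW if line_of_sight_clear else SAFE_POWER_MW

      # Example: a person briefly walks through the beam path.
      for clear in (True, True, False, True):
          print(delivered_power_mw(clear), "mW")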

  • ROBE Array could let small companies access popular form of AI

    A breakthrough low-memory technique by Rice University computer scientists could put one of the most resource-intensive forms of artificial intelligence — deep-learning recommendation models (DLRM) — within reach of small companies.
    DLRM recommendation systems are a popular form of AI that learns to make suggestions users will find relevant. But with top-of-the-line training models requiring more than a hundred terabytes of memory and supercomputer-scale processing, they’ve only been available to a short list of technology giants with deep pockets.
    Rice’s “random offset block embedding array,” or ROBE Array, could change that. It’s an algorithmic approach for slashing the size of DLRM memory structures called embedding tables, and it will be presented this week at the Conference on Machine Learning and Systems (MLSys 2022) in Santa Clara, California, where it earned Outstanding Paper honors.
    “Using just 100 megabytes of memory and a single GPU, we showed we could match the training times and double the inference efficiency of state-of-the-art DLRM training methods that require 100 gigabytes of memory and multiple processors,” said Anshumali Shrivastava, an associate professor of computer science at Rice who’s presenting the research at MLSys 2022 with ROBE Array co-creators Aditya Desai, a Rice graduate student in Shrivastava’s research group, and Li Chou, a former postdoctoral researcher at Rice who is now at West Texas A&M University.
    “ROBE Array sets a new baseline for DLRM compression,” Shrivastava said. “And it brings DLRM within reach of average users who do not have access to the high-end hardware or the engineering expertise one needs to train models that are hundreds of terabytes in size.”
    DLRM systems are machine learning algorithms that learn from data. For example, a recommendation system that suggests products for shoppers would be trained with data from past transactions, including the search terms users provided, which products they were offered and which, if any, they purchased. One way to improve the accuracy of recommendations is to sort training data into more categories. For example, rather than putting all shampoos in a single category, a company could create categories for men’s, women’s and children’s shampoos. More
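    As a rough illustration of how an embedding table can be squeezed into a fixed memory budget, the sketch below hashes each category into a small shared parameter array instead of storing one row per category. It is a simplified stand-in written for this article, not the published ROBE Array algorithm, and the array size, hash constants and embedding dimension are arbitrary assumptions.

      # Simplified sketch of embedding-table compression via hashing: every
      # category id is mapped to a block of weights inside one small shared
      # array, so memory no longer grows with the number of categories.
      # This is an illustrative stand-in, not the exact ROBE Array scheme.
      import numpy as np

      ARRAY_SIZE = 25_000_000          # ~100 MB of float32 parameters
      EMBED_DIM = 64                   # assumed embedding width
      shared_array = (np.random.randn(ARRAY_SIZE) * 0.01).astype(np.float32)

      def embed(category_id: int) -> np.ndarray:
          # Hypothetical hash of the id to a starting offset; a contiguous
          # block of EMBED_DIM weights (with wrap-around) is the embedding.
          offset = (category_id * 1_000_003 + 12_345) % ARRAY_SIZE
          idx = (offset + np.arange(EMBED_DIM)) % ARRAY_SIZE
          return shared_array[idx]

      vec = embed(987_654_321)   # works for arbitrarily many category ids
      print(vec.shape)           # (64,)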

  • Underwater messaging app for smartphones

    For millions of people who participate in activities such as snorkeling and scuba diving each year, hand signals are the only option for communicating safety and directional information underwater. While recreational divers may employ around 20 signals, professional divers’ vocabulary can exceed 200 signals on topics ranging from oxygen level, to the proximity of aquatic species, to the performance of cooperative tasks.
    The visual nature of these hand signals limits their effectiveness at distance and in low visibility. Two-way text messaging is a potential alternative, but one that requires expensive custom hardware that is not widely available.
    Researchers at the University of Washington have shown how to achieve underwater messaging on billions of existing smartphones and smartwatches using only software. The team developed AquaApp, the first mobile app for acoustic-based underwater communication and networking that runs on existing consumer devices.
    The researchers presented their paper describing AquaApp Aug. 25 at SIGCOMM 2022.
    “Smartphones rely on radio signals like WiFi and Bluetooth for wireless communication. Those don’t propagate well underwater, but acoustic signals do,” said co-lead author Tuochao Chen, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “With AquaApp, we demonstrate underwater messaging using the speaker and microphone widely available on smartphones and watches. Other than downloading an app to their phone, the only thing people will need is a waterproof phone case rated for the depth of their dive.”
    The AquaApp interface enables users to select from a list of 240 pre-set messages that correspond to hand signals employed by professional divers, with the 20 most common signals prominently displayed for easy access. Users can also filter messages according to eight categories, including directional indicators, environmental factors and equipment status. More
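    To make the acoustic approach concrete, the toy sketch below encodes one of the 240 preset message IDs as a short audio waveform using simple frequency-shift keying, which a nearby phone could pick up with its microphone. The tone frequencies, bit duration and encoding are assumptions for illustration; they are not the modulation scheme AquaApp actually uses.

      # Toy example of speaker/microphone acoustic signaling: encode a preset
      # message index (0-239) as frequency-shift-keyed tones. The frequencies
      # and timing are assumed values, not the AquaApp protocol.
      import numpy as np

      SAMPLE_RATE = 44_100                  # typical phone audio rate
      BIT_DURATION = 0.05                   # seconds per bit (assumed)
      FREQ_ZERO, FREQ_ONE = 2_000, 4_000    # hypothetical tone frequencies (Hz)

      def encode_message(message_id: int, n_bits: int = 8) -> np.ndarray:
          # One sine burst per bit, most significant bit first.
          t = np.arange(int(SAMPLE_RATE * BIT_DURATION)) / SAMPLE_RATE
          chunks = []
          for bit_pos in range(n_bits - 1, -1, -1):
              freq = FREQ_ONE if (message_id >> bit_pos) & 1 else FREQ_ZERO
              chunks.append(np.sin(2 * np.pi * freq * t))
          return np.concatenate(chunks)     # play this buffer through the speaker

      waveform = encode_message(17)         # e.g. preset signal number 17
      print(waveform.shape)                 # (17640,) -> 8 bits x 0.05 s each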

  • Artificial Intelligence Improves Treatment in Women with Heart Attacks

    Heart attacks are one of the leading causes of death worldwide, and women who suffer a heart attack have a higher mortality rate than men. This has been a matter of concern to cardiologists for decades and has led to controversy in the medical field about the causes and effects of possible gaps in treatment. The problem starts with the symptoms: unlike men, who usually experience chest pain with radiation to the left arm, a heart attack in women often manifests as abdominal pain radiating to the back or as nausea and vomiting. These symptoms are unfortunately often misinterpreted by the patients and healthcare personnel — with disastrous consequences.
    Risk profile and clinical picture are different in women
    An international research team led by Thomas F. Lüscher, professor at the Center for Molecular Cardiology at the University of Zurich (UZH), has now investigated the role of biological sex in heart attacks in more detail. “Indeed, there are notable differences in the disease phenotype observed in females and males. Our study shows that women and men differ significantly in their risk factor profile at hospital admission,” says Lüscher. When age differences at admission and existing risk factors such as hypertension and diabetes are disregarded, female heart-attack patients have higher mortality than male patients. “However, when these differences are taken into account statistically, women and men have similar mortality,” the cardiologist adds.
    Current risk models favor under-treatment of female patients
    In their study, published in the journal The Lancet, researchers from Switzerland and the United Kingdom analyzed data from 420,781 patients across Europe who had suffered the most common type of heart attack. “The study shows that established risk models which guide current patient management are less accurate in females and favor the undertreatment of female patients,” says first author Florian A. Wenzl of the Center for Molecular Medicine at UZH. “Using a machine learning algorithm and the largest datasets in Europe, we were able to develop a novel artificial-intelligence-based risk score which accounts for sex-related differences in the baseline risk profile and improves the prediction of mortality in both sexes,” Wenzl says.
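    As a purely illustrative sketch of the kind of modeling involved, the code below fits a gradient-boosted risk model on synthetic admission data that includes sex and common risk factors, then checks how well the score discriminates in women and men separately. The data, features and model are assumptions made for this example; they are not the published score or the registry data analyzed in The Lancet.

      # Illustrative only: synthetic data, not the published AI risk score.
      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n = 5_000
      X = np.column_stack([
          rng.normal(65, 12, n),      # age at admission
          rng.integers(0, 2, n),      # sex (0 = male, 1 = female)
          rng.integers(0, 2, n),      # hypertension
          rng.integers(0, 2, n),      # diabetes
      ])
      # Synthetic outcome: mortality risk rises with age and comorbidities.
      logit = -6 + 0.06 * X[:, 0] + 0.5 * X[:, 2] + 0.6 * X[:, 3]
      y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

      model = GradientBoostingClassifier().fit(X, y)
      score = model.predict_proba(X)[:, 1]
      for sex_code, label in [(0, "men"), (1, "women")]:
          mask = X[:, 1] == sex_code
          print(label, "AUC:", round(roc_auc_score(y[mask], score[mask]), 3))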
    AI-based risk profiling improves individualized care
    Many researchers and biotech companies agree that artificial intelligence and Big Data analytics are the next step on the road to personalized patient care. “Our study heralds the era of artificial intelligence in the treatment of heart attacks,” says Wenzl. Modern computer algorithms can learn from large data sets to make accurate predictions about the prognosis of individual patients — the key to individualized treatments.
    Thomas F. Lüscher and his team see huge potential in the application of artificial intelligence for the management of heart disease both in male and female patients. “I hope the implementation of this novel score in treatment algorithms will refine current treatment strategies, reduce sex inequalities, and eventually improve the survival of patients with heart attacks — both male and female,” says Lüscher.
    Story Source:
    Materials provided by University of Zurich. Note: Content may be edited for style and length. More

  • From bits to p-bits: One step closer to probabilistic computing

    Tohoku University scientists in Japan have developed a mathematical description of what happens within tiny magnets as they fluctuate between states when an electric current and magnetic field are applied. Their findings, published in the journal Nature Communications, could act as the foundation for engineering more advanced computers that can quantify uncertainty while interpreting complex data.
    Classical computers have gotten us this far, but there are some problems that they cannot address efficiently. Scientists have been working to address this by engineering computers that can utilize the laws of quantum physics to recognize patterns in complex problems. But these so-called quantum computers are still in the early stages of development and are highly sensitive to their surroundings, requiring extremely low temperatures to function.
    Now, scientists are looking at something different: a concept called probabilistic computing. This type of computer, which could function at room temperature, would be able to infer potential answers from complex input. A simplistic example of this type of problem would be to infer information about a person by looking at their purchasing behaviour. Instead of the computer providing a single, discrete result, it picks out patterns and delivers a good guess of what the result might be.
    There could be several ways to build such a computer, but some scientists are investigating the use of devices called magnetic tunnel junctions. These are made from two layers of magnetic metal separated by an ultrathin insulator. When these nanomagnetic devices are thermally activated under an electric current and magnetic field, electrons tunnel through the insulating layer. Depending on their spin, they can cause changes, or fluctuations, within the magnets. These fluctuations, called p-bits, are a probabilistic alternative to the on/off or 0/1 bits of classical computers and could form the basis of probabilistic computing. But to engineer probabilistic computers, scientists need to be able to describe the physics that happens within magnetic tunnel junctions.
    This is precisely what Shun Kanai, professor at Tohoku University’s Research Institute of Electrical Communication, and his colleagues have achieved.
    “We have experimentally clarified the ‘switching exponent’ that governs fluctuation under the perturbations caused by magnetic field and spin-transfer torque in magnetic tunnel junctions,” says Kanai. “This gives us the mathematical foundation to implement magnetic tunnel junctions into the p-bit in order to sophisticatedly design probabilistic computers. Our work has also shown that these devices can be used to investigate unexplored physics related to thermally activated phenomena.”
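    For readers who want a feel for what a p-bit is, the sketch below uses a commonly cited textbook-style model in which the output fluctuates randomly between -1 and +1 with a bias set by an input. It is a generic illustration of probabilistic bits, not the switching-exponent description derived by the Tohoku team.

      # Generic p-bit model: the output fluctuates between -1 and +1, biased
      # by an input (standing in for a current or field tilting the magnet).
      # Not the switching-exponent model reported in Nature Communications.
      import numpy as np

      rng = np.random.default_rng(42)

      def p_bit(input_bias: float, n_samples: int = 10_000) -> np.ndarray:
          # Zero bias gives a fair coin; a large positive (negative) bias pins
          # the output near +1 (-1).
          r = rng.uniform(-1, 1, n_samples)
          return np.sign(np.tanh(input_bias) - r)

      for bias in (0.0, 1.0, 3.0):
          print(f"bias={bias}: mean output = {p_bit(bias).mean():+.2f}")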
    Story Source:
    Materials provided by Tohoku University. Note: Content may be edited for style and length. More

  • Mixing things up: Optimizing fluid mixing with machine learning

    Mixing of fluids is a critical component in many industrial and chemical processes. Pharmaceutical mixing and chemical reactions, for instance, may require homogeneous fluid mixing. Achieving this mixing faster and with less energy would reduce the associated costs greatly. In reality, however, most mixing processes are not mathematically optimized and instead rely on trial-and-error-based empirical methods. Turbulent mixing, which uses turbulence to mix up fluids, is an option but is problematic as it is either difficult to sustain (such as in micro-mixers) or damages the materials being mixed (such as in bioreactors and food mixers).
    Can optimized mixing be achieved for laminar flows instead? To answer this question, a team of researchers from Japan turned to machine learning. In their study, published in Scientific Reports, the team adopted an approach called “reinforcement learning” (RL), in which intelligent agents take actions in an environment to maximize the cumulative reward (as opposed to an instantaneous reward).
    “Since RL maximizes the cumulative reward, which is global-in-time, it can be expected to be suitable for tackling the problem of efficient fluid mixing, which is also a global-in-time optimization problem,” explains Associate Professor Masanobu Inubushi, the corresponding author of the study. “Personally, I have a conviction that it is important to find the right algorithm for the right problem rather than blindly apply a machine learning algorithm. Luckily, in this study, we managed to connect the two fields (fluid mixing and reinforcement learning) after considering their physical and mathematical characteristics.” The work included contributions from Mr. Mikito Konishi, a graduate student, and Prof. Susumu Goto, both from Osaka University.
    One major roadblock awaited the team, however. While RL is suitable for global optimization problems, it is not particularly well-suited for systems involving high-dimensional state spaces, i.e., systems requiring a large number of variables for their description. Unfortunately, fluid mixing was just such a system.
    To address this issue, the team adopted an approach used in the formulation of another optimization problem, which enabled them to reduce the state space dimension for fluid flow to one. Put simply, the fluid motion could now be described using only a single parameter!
    The RL algorithm is usually formulated in terms of a “Markov decision process” (MDP), a mathematical framework for decision making in situations where the outcomes are part random and part controlled by the decision maker. Using this approach, the team showed that RL was effective in optimizing fluid mixing. More
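    As a minimal illustration of the MDP framing, the sketch below runs tabular Q-learning on an invented one-dimensional, discretized state with a cumulative reward for reaching a fully mixed state. The environment, reward and hyperparameters are toy assumptions and bear no relation to the actual fluid-mixing setup in the paper.

      # Toy tabular Q-learning on a 1-D discretized state; the dynamics and
      # reward below are invented for illustration, not the paper's setup.
      import numpy as np

      N_STATES, N_ACTIONS = 10, 2          # mixing-progress bins, two stirring actions
      ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # learning rate, discount, exploration
      rng = np.random.default_rng(1)
      Q = np.zeros((N_STATES, N_ACTIONS))

      def step(state, action):
          # Hypothetical dynamics: action 1 tends to push toward the mixed state.
          drift = 1 if action == 1 else -1
          nxt = int(np.clip(state + drift + rng.integers(-1, 2), 0, N_STATES - 1))
          reward = 1.0 if nxt == N_STATES - 1 else 0.0
          return nxt, reward

      for episode in range(500):
          s = 0
          for _ in range(50):
              a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(np.argmax(Q[s]))
              s2, r = step(s, a)
              Q[s, a] += ALPHA * (r + GAMMA * Q[s2].max() - Q[s, a])
              s = s2

      print("Greedy action per state:", np.argmax(Q, axis=1))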

  • The Tonga eruption may have spawned a tsunami as tall as the Statue of Liberty

    The massive Tonga eruption generated a set of planet-circling tsunamis that may have started out as a single mound of water roughly the height of the Statue of Liberty.

    What’s more, the explosive eruption triggered an immense atmospheric shock wave that spawned a second set of especially fast-moving tsunamis, a rare phenomenon that can complicate early warnings for these oft-destructive waves, researchers report in the October Ocean Engineering.

    As the Hunga Tonga–Hunga Ha’apai undersea volcano erupted in the South Pacific in January, it displaced a large volume of water upward, says Mohammad Heidarzadeh, a civil engineer at the University of Bath in England (SN: 1/21/22). The water in that colossal mound later “ran downhill,” as fluids tend to do, to generate the initial set of tsunamis.

    To estimate the original size of the mound, Heidarzadeh and his team used computer simulations, as well as data from deep-ocean instruments and coastal tide gauges within about 1,500 kilometers of the eruption, many of them in or near New Zealand. The arrival times of tsunami waves, as well as their sizes, at those locations were key pieces of data, Heidarzadeh says.

    The team analyzed nine possibilities for the initial wave, each of which was shaped like a baseball pitcher’s mound and had a distinct height and diameter. The best fit to the real-world data came from a mound of water a whopping 90 meters tall and 12 kilometers in diameter, the researchers report.

    That initial wave would have contained an estimated 6.6 cubic kilometers of water. “This was a really large tsunami,” Heidarzadeh says.
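
    A rough back-of-envelope check, assuming for simplicity that the initial displacement was a smooth paraboloid-shaped mound (the researchers may have used a different mound profile), gives a volume of the same order as the reported 6.6 cubic kilometers:

      # Volume of a paraboloid of revolution: V = (1/2) * pi * r^2 * h
      import math
      height_m = 90.0
      radius_m = 12_000.0 / 2      # 12 km diameter
      volume_km3 = 0.5 * math.pi * radius_m**2 * height_m / 1e9
      print(f"{volume_km3:.1f} km^3")   # ~5 km^3, same order as the reported 6.6 km^3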

    Despite starting out about nine times as tall as the tsunami that devastated the Tohoku region of Japan in 2011, the Tongan tsunamis killed only five people and caused about $90 million in damage, largely because of their remote source (SN: 2/10/12).

    Another unusual aspect of the Tongan eruption is the second set of tsunamis generated by a strong atmospheric pressure wave.

    That pressure pulse resulted from a steam explosion that occurred when a large volume of seawater infiltrated the hot magma chamber beneath the erupting volcano. As the pressure wave raced across the ocean’s surface at speeds exceeding 300 meters per second, it pushed water ahead of it, creating tsunamis, Heidarzadeh explains.

    The eruption of the Hunga Tonga–Hunga Ha’apai volcano also triggered an atmospheric pressure wave that in turn generated tsunamis that traveled quicker than expected. (Image: NASA Earth Observatory)

    Along many coastlines, including some in the Indian Ocean and Mediterranean Sea, these pressure wave–generated tsunamis arrived hours ahead of the gravity-driven waves spreading from the 90-meter-tall mound of water. Gravity-driven tsunami waves typically travel across the deepest parts of the ocean, far from continents, at speeds between 100 and 220 meters per second. When they reach shallow water near shore, the waves slow, the water stacks up and then strikes the coast, where destruction occurs.
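
    The speed gap quoted above follows from the standard long-wave relation for ordinary tsunamis, whose speed is roughly the square root of gravity times water depth; the depths below are illustrative values, not figures from the study:

      # Shallow-water (long-wave) tsunami speed: c = sqrt(g * depth)
      import math
      G = 9.81
      for depth_m in (1_000, 2_000, 4_000, 5_000):
          print(f"depth {depth_m:>5} m -> ~{math.sqrt(G * depth_m):.0f} m/s")
      # ~99-221 m/s across typical ocean depths, matching the 100-220 m/s range,
      # so a >300 m/s pressure-coupled wave reaches distant coasts hours earlier.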

    Pressure wave–generated tsunamis have been reported for only one other volcanic eruption: the 1883 eruption of Krakatau in Indonesia (SN: 8/27/83).

    Those quicker-than-expected arrival times — plus the fact that the pressure-wave tsunamis for the Tongan eruption were comparable in size with the gravity-driven ones — could complicate early warnings for these tsunamis. That’s concerning, Heidarzadeh says.

    One way to address the issue would be to install instruments that measure atmospheric pressure alongside the deep-sea equipment already in place to detect tsunamis, says Hermann Fritz, a tsunami scientist at Georgia Tech in Atlanta.

    With that setup, scientists would be able to discern if a passing tsunami is associated with a pressure pulse, thus providing a clue in real time about how fast the tsunami wave might be traveling. More

  • Getting data to do more for biodiversity

    Michigan State University ecologists have developed a mathematical framework that could help monitor and preserve biodiversity without breaking the bank.
    This framework or model takes low-cost data about relatively abundant species in a community and uses it to generate valuable insights on their harder-to-find neighbors. The journal Conservation Biology published the research as an Early View article on Aug. 25.
    “One of the biggest challenges in monitoring biodiversity is that the species you’re most concerned about tend to be lowest in abundance or they’re the hardest species to observe during data collection,” said Matthew Farr, the lead author on the new report. “This model can be really helpful for those rare and elusive species.”
    Farr, now a postdoctoral researcher at the University of Washington, helped develop the model as a doctoral student in Elise Zipkin’s Quantitative Ecology Lab in the College of Natural Science at MSU.
    “There are a lot of species in the world and many of them are data deficient,” said Zipkin, an associate professor of integrative biology and director of MSU’s Ecology, Evolution and Behavior Program, or EEB. “We’re developing approaches to more quickly estimate what’s going on with biodiversity, which species are in trouble and where, spatially, do we need to focus our conservation efforts.”
    After validating the model with an assist from forest-dwelling antelope in Africa, the researchers say it could be applied to a variety of other animals that meet certain criteria. More
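    The general idea of letting data-rich common species inform estimates for rare ones can be illustrated with a simple partial-pooling calculation, sketched below. The species, detection counts and pooling weight are invented, and this generic shrinkage estimator is not the model published in Conservation Biology.

      # Generic "borrowing strength" illustration: per-species detection rates
      # are shrunk toward a community mean, so sparse counts for rare species
      # are informed by abundant ones. Not the published model.
      import numpy as np

      rng = np.random.default_rng(7)
      surveys = 40
      true_rates = np.array([0.60, 0.45, 0.30, 0.05, 0.02])   # common -> rare
      detections = rng.binomial(surveys, true_rates)

      raw_rate = detections / surveys          # noisy, especially for rare species
      community_mean = raw_rate.mean()
      pool_weight = 10                         # assumed strength of pooling
      pooled_rate = (detections + pool_weight * community_mean) / (surveys + pool_weight)

      for i, (raw, pooled) in enumerate(zip(raw_rate, pooled_rate), start=1):
          print(f"species {i}: raw={raw:.2f}  pooled={pooled:.2f}")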