More stories

  • Researchers use infrared light to wirelessly transmit power over 30 meters

    Imagine walking into an airport or grocery store and your smartphone automatically starts charging. This could be a reality one day, thanks to a new wireless laser charging system that overcomes some of the challenges that have hindered previous attempts to develop safe and convenient on-the-go charging systems.
    “The ability to power devices wirelessly could eliminate the need to carry around power cables for our phones or tablets,” said research team leader Jinyong Ha from Sejong University in South Korea. “It could also power various sensors such as those in Internet of Things (IoT) devices and sensors used for monitoring processes in manufacturing plants.”
    In the Optica Publishing Group journal Optics Express, the researchers describe their new system, which uses infrared light to safely transfer high levels of power. Laboratory tests showed that it could transfer 400 mW light power over distances of up to 30 meters. This power is sufficient for charging sensors, and with further development, it could be increased to levels necessary to charge mobile devices.
    Several techniques have been studied for long-range wireless power transfer. However, it has been difficult to safely send enough power over meter-level distances. To overcome this challenge, the researchers optimized a method called distributed laser charging, which has recently gained more attention for this application because it provides safe high-power illumination with less light loss.
    “While most other approaches require the receiving device to be in a special charging cradle or to be stationary, distributed laser charging enables self-alignment without tracking processes as long as the transmitter and receiver are in the line of sight of each other,” said Ha. “It also automatically shifts to a safe low power delivery mode if an object or a person blocks the line of sight.”
    Going the distance
    Distributed laser charging works somewhat like a traditional laser but instead of the optical components of the laser cavity being integrated into one device, they are separated into a transmitter and receiver. When the transmitter and receiver are within a line of sight, a laser cavity is formed between them over the air — or free space — which allows the system to deliver light-based power. If an obstacle cuts the transmitter-receiver line of sight, the system automatically switches to a power-safe mode, achieving hazard-free power delivery in the air.
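    The safety behavior described above can be pictured as a simple control loop: deliver full power only while the receiver reports an intact optical link, and drop to the low-power mode as soon as the link is interrupted. The sketch below (in Python) illustrates only that control logic; the names, thresholds and the `read_received_power()` reading are hypothetical stand-ins, and the actual laser-cavity physics is not modeled.

    ```python
    import random
    import time

    FULL_POWER_MW = 400      # nominal delivery level reported in the study
    SAFE_POWER_MW = 1        # illustrative low-power fallback level
    LINK_THRESHOLD_MW = 50   # hypothetical cutoff for "line of sight is intact"

    def read_received_power():
        """Stand-in for a receiver-side power reading (mW).

        In a real system this would come from hardware; here we simply
        simulate occasional blockage of the beam path.
        """
        blocked = random.random() < 0.2
        return random.uniform(0, 10) if blocked else random.uniform(300, 400)

    def control_loop(cycles=10):
        for _ in range(cycles):
            received = read_received_power()
            if received < LINK_THRESHOLD_MW:
                # An object or person has cut the line of sight:
                # fall back to the hazard-free low-power mode.
                setpoint = SAFE_POWER_MW
            else:
                setpoint = FULL_POWER_MW
            print(f"received={received:6.1f} mW -> transmit setpoint={setpoint} mW")
            time.sleep(0.1)

    if __name__ == "__main__":
        control_loop()
    ```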

  • ROBE Array could let small companies access popular form of AI

    A breakthrough low-memory technique by Rice University computer scientists could put one of the most resource-intensive forms of artificial intelligence — deep-learning recommendation models (DLRM) — within reach of small companies.
    DLRM systems are a popular form of AI that learns to make suggestions users will find relevant. But with top-of-the-line training models requiring more than a hundred terabytes of memory and supercomputer-scale processing, they’ve only been available to a short list of technology giants with deep pockets.
    Rice’s “random offset block embedding array,” or ROBE Array, could change that. It’s an algorithmic approach for slashing the size of DLRM memory structures called embedding tables, and it will be presented this week at the Conference on Machine Learning and Systems (MLSys 2022) in Santa Clara, California, where it earned Outstanding Paper honors.
    “Using just 100 megabytes of memory and a single GPU, we showed we could match the training times and double the inference efficiency of state-of-the-art DLRM training methods that require 100 gigabytes of memory and multiple processors,” said Anshumali Shrivastava, an associate professor of computer science at Rice who’s presenting the research at MLSys 2022 with ROBE Array co-creators Aditya Desai, a Rice graduate student in Shrivastava’s research group, and Li Chou, a former postdoctoral researcher at Rice who is now at West Texas A&M University.
    “ROBE Array sets a new baseline for DLRM compression,” Shrivastava said. “And it brings DLRM within reach of average users who do not have access to the high-end hardware or the engineering expertise one needs to train models that are hundreds of terabytes in size.”
    DLRM systems are machine learning algorithms that learn from data. For example, a recommendation system that suggests products for shoppers would be trained with data from past transactions, including the search terms users provided, which products they were offered and which, if any, they purchased. One way to improve the accuracy of recommendations is to sort training data into more categories. For example, rather than putting all shampoos in a single category, a company could create categories for men’s, women’s and children’s shampoos.
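    The memory saving comes from not storing a separate embedding row for every category at all. As a rough illustration of the general idea of hashing category IDs into one small shared parameter array, here is a minimal NumPy sketch; the hash function, array size, sign trick and offsets are simplifications invented for this example and do not reproduce the ROBE Array algorithm exactly.

    ```python
    import numpy as np

    # One small shared parameter array replaces a huge per-category embedding table.
    ROBE_SIZE = 25_000_000   # ~100 MB of float32 parameters (illustrative size)
    EMBED_DIM = 32           # embedding dimension per categorical feature
    rng = np.random.default_rng(0)
    robe_array = rng.normal(scale=0.01, size=ROBE_SIZE).astype(np.float32)

    def _mix(x: int, seed: int) -> int:
        """Simple deterministic hash (illustrative, not the paper's hash family)."""
        return (x * 2654435761 + seed * 40503) % (2**31 - 1)

    def embedding(category_id: int) -> np.ndarray:
        """Look up a dense embedding for a (possibly enormous) category ID.

        Instead of indexing a table with billions of rows, hash the ID to an
        offset and read a contiguous block (with wrap-around) from the shared
        array; a second hash supplies a +/-1 sign so colliding IDs differ.
        """
        offset = _mix(category_id, seed=1) % ROBE_SIZE
        idx = (offset + np.arange(EMBED_DIM)) % ROBE_SIZE
        sign = 1.0 if _mix(category_id, seed=2) % 2 == 0 else -1.0
        return sign * robe_array[idx]

    # Any category ID maps into the same small pool of trainable parameters.
    print(embedding(12_345)[:4])
    print(embedding(987_654_321_000)[:4])
    ```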

  • Underwater messaging app for smartphones

    For millions of people who participate in activities such as snorkeling and scuba diving each year, hand signals are the only option for communicating safety and directional information underwater. While recreational divers may employ around 20 signals, professional divers’ vocabulary can exceed 200 signals on topics ranging from oxygen level, to the proximity of aquatic species, to the performance of cooperative tasks.
    The visual nature of these hand signals limits their effectiveness at distance and in low visibility. Two-way text messaging is a potential alternative, but one that requires expensive custom hardware that is not widely available.
    Researchers at the University of Washington show how to achieve underwater messaging on billions of existing smartphones and smartwatches using only software. The team developed AquaApp, the first mobile app for acoustic-based communication and networking underwater that can be used with existing devices such as smartphones and smartwatches.
    The researchers presented their paper describing AquaApp Aug. 25 at SIGCOMM 2022.
    “Smartphones rely on radio signals like WiFi and Bluetooth for wireless communication. Those don’t propagate well underwater, but acoustic signals do,” said co-lead author Tuochao Chen, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “With AquaApp, we demonstrate underwater messaging using the speaker and microphone widely available on smartphones and watches. Other than downloading an app to their phone, the only thing people will need is a waterproof phone case rated for the depth of their dive.”
    The AquaApp interface enables users to select from a list of 240 pre-set messages that correspond to hand signals employed by professional divers, with the 20 most common signals prominently displayed for easy access. Users can also filter messages according to eight categories, including directional indicators, environmental factors and equipment status.
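    Because phone speakers and microphones can generate and record sound, a preset message index can be carried by a short sequence of tones. The snippet below sketches that idea with simple frequency-shift keying in NumPy; the frequencies, bit rate and framing are invented for illustration and are not AquaApp's actual acoustic protocol.

    ```python
    import numpy as np

    SAMPLE_RATE = 44_100        # Hz, typical smartphone audio rate
    SYMBOL_SECONDS = 0.05       # duration of each bit's tone (illustrative)
    FREQ_ZERO = 2_000.0         # Hz tone for a 0 bit (illustrative choice)
    FREQ_ONE = 3_000.0          # Hz tone for a 1 bit (illustrative choice)

    def encode_message_id(message_id: int, n_bits: int = 8) -> np.ndarray:
        """Encode a preset-message index (0-255) as an FSK audio waveform."""
        bits = [(message_id >> i) & 1 for i in reversed(range(n_bits))]
        t = np.arange(int(SAMPLE_RATE * SYMBOL_SECONDS)) / SAMPLE_RATE
        symbols = [np.sin(2 * np.pi * (FREQ_ONE if b else FREQ_ZERO) * t) for b in bits]
        return np.concatenate(symbols).astype(np.float32)

    def decode_message_id(waveform: np.ndarray, n_bits: int = 8) -> int:
        """Recover the message index by checking which tone dominates each symbol."""
        samples_per_symbol = int(SAMPLE_RATE * SYMBOL_SECONDS)
        t = np.arange(samples_per_symbol) / SAMPLE_RATE
        ref_zero = np.sin(2 * np.pi * FREQ_ZERO * t)
        ref_one = np.sin(2 * np.pi * FREQ_ONE * t)
        message_id = 0
        for i in range(n_bits):
            chunk = waveform[i * samples_per_symbol:(i + 1) * samples_per_symbol]
            bit = int(abs(np.dot(chunk, ref_one)) > abs(np.dot(chunk, ref_zero)))
            message_id = (message_id << 1) | bit
        return message_id

    signal = encode_message_id(42)       # e.g. preset message number 42
    print(decode_message_id(signal))     # -> 42
    ```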

  • Artificial Intelligence Improves Treatment in Women with Heart Attacks

    Heart attacks are one of the leading causes of death worldwide, and women who suffer a heart attack have a higher mortality rate than men. This has been a matter of concern to cardiologists for decades and has led to controversy in the medical field about the causes and effects of possible gaps in treatment. The problem starts with the symptoms: unlike men, who usually experience chest pain with radiation to the left arm, a heart attack in women often manifests as abdominal pain radiating to the back or as nausea and vomiting. These symptoms are unfortunately often misinterpreted by the patients and healthcare personnel — with disastrous consequences.
    Risk profile and clinical picture is different in women
    An international research team led by Thomas F. Lüscher, professor at the Center for Molecular Cardiology at the University of Zurich (UZH), has now investigated the role of biological sex in heart attacks in more detail. “Indeed, there are notable differences in the disease phenotype observed in females and males. Our study shows that women and men differ significantly in their risk factor profile at hospital admission,” says Lüscher. When age differences at admission and existing risk factors such as hypertension and diabetes are disregarded, female heart-attack patients have higher mortality than male patients. “However, when these differences are taken into account statistically, women and men have similar mortality,” the cardiologist adds.
    Current risk models favor under-treatment of female patients
    In their study, published in the journal The Lancet, researchers from Switzerland and the United Kingdom analyzed data from 420,781 patients across Europe who had suffered the most common type of heart attack. “The study shows that established risk models which guide current patient management are less accurate in females and favor the undertreatment of female patients,” says first author Florian A. Wenzl of the Center for Molecular Medicine at UZH. “Using a machine learning algorithm and the largest datasets in Europe we were able to develop a novel artificial-intelligence-based risk score which accounts for sex-related differences in the baseline risk profile and improves the prediction of mortality in both sexes,” Wenzl says.
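    The Lancet study's exact pipeline is not reproduced here, but the quoted workflow can be pictured in a few lines: train a machine-learning mortality model on admission data that includes sex alongside other risk factors, then check predictive performance separately in women and men. The sketch below uses synthetic data and scikit-learn's gradient boosting purely as stand-ins; the feature set, model choice and numbers are assumptions for illustration, not the published score.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5_000  # synthetic stand-in data; the study used 420,781 real patients

    # Hypothetical admission features: age, sex (1 = female), hypertension, diabetes.
    age = rng.normal(65, 12, n)
    sex = rng.integers(0, 2, n)
    hypertension = rng.integers(0, 2, n)
    diabetes = rng.integers(0, 2, n)
    X = np.column_stack([age, sex, hypertension, diabetes])

    # Synthetic mortality outcome whose baseline risk differs by sex.
    logit = -6 + 0.06 * age + 0.4 * hypertension + 0.5 * diabetes + 0.3 * sex
    y = rng.random(n) < 1 / (1 + np.exp(-logit))

    X_tr, X_te, y_tr, y_te, sex_tr, sex_te = train_test_split(
        X, y, sex, test_size=0.3, random_state=0)

    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    pred = model.predict_proba(X_te)[:, 1]

    # Evaluate discrimination separately in female and male patients.
    for label, mask in [("female", sex_te == 1), ("male", sex_te == 0)]:
        print(label, "AUC:", round(roc_auc_score(y_te[mask], pred[mask]), 3))
    ```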
    AI-based risk profiling improves individualized care
    Many researchers and biotech companies agree that artificial intelligence and Big Data analytics are the next step on the road to personalized patient care. “Our study heralds the era of artificial intelligence in the treatment of heart attacks,” says Wenzl. Modern computer algorithms can learn from large data sets to make accurate predictions about the prognosis of individual patients — the key to individualized treatments.
    Thomas F. Lüscher and his team see huge potential in the application of artificial intelligence for the management of heart disease both in male and female patients. “I hope the implementation of this novel score in treatment algorithms will refine current treatment strategies, reduce sex inequalities, and eventually improve the survival of patients with heart attacks — both male and female,” says Lüscher.
    Story Source:
    Materials provided by University of Zurich.

  • From bits to p-bits: One step closer to probabilistic computing

    Tohoku University scientists in Japan have developed a mathematical description of what happens within tiny magnets as they fluctuate between states when an electric current and magnetic field are applied. Their findings, published in the journal Nature Communications, could act as the foundation for engineering more advanced computers that can quantify uncertainty while interpreting complex data.
    Classical computers have gotten us this far, but there are some problems that they cannot address efficiently. Scientists have been working to overcome this by engineering computers that can utilize the laws of quantum physics to recognize patterns in complex problems. But these so-called quantum computers are still in their early stages of development and are highly sensitive to their surroundings, requiring extremely low temperatures to function.
    Now, scientists are looking at something different: a concept called probabilistic computing. This type of computer, which could function at room temperature, would be able to infer potential answers from complex input. A simplistic example of this type of problem would be to infer information about a person by looking at their purchasing behaviour. Instead of the computer providing a single, discrete result, it picks out patterns and delivers a good guess of what the result might be.
    There could be several ways to build such a computer, but some scientists are investigating the use of devices called magnetic tunnel junctions. These are made from two layers of magnetic metal separated by an ultrathin insulator. When these nanomagnetic devices are thermally activated under an electric current and magnetic field, electrons tunnel through the insulating layer. Depending on their spin, they can cause changes, or fluctuations, within the magnets. These fluctuations, called p-bits, are the probabilistic counterpart to the deterministic on/off or 0/1 bits of classical computers and could form the basis of probabilistic computing. But to engineer probabilistic computers, scientists need to be able to describe the physics that happens within magnetic tunnel junctions.
    This is precisely what Shun Kanai, professor at Tohoku University’s Research Institute of Electrical Communication, and his colleagues have achieved.
    “We have experimentally clarified the ‘switching exponent’ that governs fluctuation under the perturbations caused by magnetic field and spin-transfer torque in magnetic tunnel junctions,” says Kanai. “This gives us the mathematical foundation to implement magnetic tunnel junctions into the p-bit in order to sophisticatedly design probabilistic computers. Our work has also shown that these devices can be used to investigate unexplored physics related to thermally activated phenomena.”
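    For readers who want a concrete picture of a p-bit, the snippet below implements the generic update rule often used in the probabilistic-computing literature: a binary unit whose +1/-1 output fluctuates randomly, with the bias of the fluctuation set by its input. This is only a software illustration of the concept; it is not the Tohoku group's device model and does not reproduce their switching-exponent analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def p_bit(input_current: float) -> int:
        """One stochastic update of a p-bit.

        The unit outputs +1 or -1 at random, with the bias of the coin
        controlled by its input: strongly positive input pins it near +1,
        strongly negative input near -1, and zero input gives a 50/50 flip.
        """
        return int(np.sign(np.tanh(input_current) + rng.uniform(-1, 1)))

    # Average state as a function of input bias: approaches tanh(input).
    for bias in (-2.0, 0.0, 2.0):
        samples = [p_bit(bias) for _ in range(10_000)]
        print(f"input={bias:+.1f}  mean state={np.mean(samples):+.3f}")
    ```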
    Story Source:
    Materials provided by Tohoku University.

  • Mixing things up: Optimizing fluid mixing with machine learning

    Mixing of fluids is a critical component in many industrial and chemical processes. Pharmaceutical mixing and chemical reactions, for instance, may require homogeneous fluid mixing. Achieving this mixing faster and with less energy would reduce the associated costs greatly. In reality, however, most mixing processes are not mathematically optimized and instead rely on trial-and-error-based empirical methods. Turbulent mixing, which uses turbulence to mix up fluids, is an option but is problematic as it is either difficult to sustain (such as in micro-mixers) or damages the materials being mixed (such as in bioreactors and food mixers).
    Can optimized mixing be achieved for laminar flows instead? To answer this question, a team of researchers from Japan turned to machine learning. In their study, published in Scientific Reports, the team used an approach called “reinforcement learning” (RL), in which intelligent agents take actions in an environment to maximize the cumulative reward (as opposed to an instantaneous reward).
    “Since RL maximizes the cumulative reward, which is global-in-time, it can be expected to be suitable for tackling the problem of efficient fluid mixing, which is also a global-in-time optimization problem,” explains Associate Professor Masanobu Inubushi, the corresponding author of the study. “Personally, I have a conviction that it is important to find the right algorithm for the right problem rather than blindly apply a machine learning algorithm. Luckily, in this study, we managed to connect the two fields (fluid mixing and reinforcement learning) after considering their physical and mathematical characteristics.” The work included contributions from Mr. Mikito Konishi, a graduate student, and Prof. Susumu Goto, both from Osaka University.
    One major roadblock awaited the team, however. While RL is suitable for global optimization problems, it is not particularly well-suited for systems involving high-dimensional state spaces, i.e., systems requiring a large number of variables for their description. Unfortunately, fluid mixing was just such a system.
    To address this issue, the team adopted an approach used in the formulation of another optimization problem, which enabled them to reduce the state space dimension for fluid flow to one. Put simply, the fluid motion could now be described using only a single parameter!
    The RL algorithm is usually formulated in terms of a “Markov decision process” (MDP), a mathematical framework for decision making in situations where the outcomes are partly random and partly controlled by the decision maker. Using this approach, the team showed that RL was effective in optimizing fluid mixing.
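    To make the RL and MDP framing concrete, here is a toy tabular Q-learning sketch on an invented one-parameter "mixing" problem: the state is a discretized mixedness level, the two actions stand for two hypothetical stirring protocols, and the reward is the progress toward uniformity, so maximizing the cumulative reward means mixing well over the whole episode. Everything in it (dynamics, reward, hyperparameters) is made up for illustration and is unrelated to the flow model or algorithm used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    N_STATES = 11          # mixing level 0.0 .. 1.0 discretized in steps of 0.1
    N_ACTIONS = 2          # two hypothetical stirring protocols
    GAMMA, ALPHA, EPS = 0.95, 0.1, 0.1

    def step(state: int, action: int):
        """Toy dynamics: protocol 0 mixes well at low mixedness, protocol 1 at high."""
        gain = 2 if (action == 0) == (state < 5) else 1
        gain += rng.integers(0, 2)                       # small random disturbance
        new_state = min(state + gain, N_STATES - 1)
        reward = (new_state - state) / (N_STATES - 1)    # progress toward uniformity
        return new_state, reward

    Q = np.zeros((N_STATES, N_ACTIONS))
    for episode in range(2_000):
        state = 0
        for _ in range(20):
            if rng.random() < EPS:
                action = int(rng.integers(N_ACTIONS))    # explore
            else:
                action = int(np.argmax(Q[state]))        # exploit current estimate
            new_state, reward = step(state, action)
            # Standard Q-learning update toward the discounted cumulative reward.
            Q[state, action] += ALPHA * (reward + GAMMA * Q[new_state].max() - Q[state, action])
            state = new_state

    print("Preferred protocol per mixing level:", np.argmax(Q, axis=1))
    ```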

  • Getting data to do more for biodiversity

    Michigan State University ecologists have developed a mathematical framework that could help monitor and preserve biodiversity without breaking the bank.
    This framework or model takes low-cost data about relatively abundant species in a community and uses it to generate valuable insights on their harder-to-find neighbors. The journal Conservation Biology published the research as an Early View article on Aug. 25.
    “One of the biggest challenges in monitoring biodiversity is that the species you’re most concerned about tend to be lowest in abundance or they’re the hardest species to observe during data collection,” said Matthew Farr, the lead author on the new report. “This model can be really helpful for those rare and elusive species.”
    Farr, now a postdoctoral researcher at the University of Washington, helped develop the model as a doctoral student in Elise Zipkin’s Quantitative Ecology Lab in the College of Natural Science at MSU.
    “There are a lot of species in the world and many of them are data deficient,” said Zipkin, an associate professor of integrative biology and director of MSU’s Ecology, Evolution and Behavior Program, or EEB. “We’re developing approaches to more quickly estimate what’s going on with biodiversity, which species are in trouble and where, spatially, do we need to focus our conservation efforts.”
    After validating the model with an assist from forest-dwelling antelope in Africa, the researchers say it could be applied to a variety of other animals that meet certain criteria.
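    The published framework is a hierarchical statistical model, and its details are not reproduced here. As a loose illustration of the underlying idea, using data from a well-observed species to pin down a shared observation process and then applying it to a rarely observed one, the sketch below simulates repeat site surveys, estimates the per-visit detection probability from an abundant species, and uses it to correct the naive occupancy of a rare species. All numbers and the estimator are assumptions for this toy example, not the model in the Conservation Biology paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N_SITES, N_VISITS, TRUE_P = 200, 3, 0.4   # shared per-visit detection probability

    def simulate(occupancy: float) -> np.ndarray:
        """Detection counts (0..N_VISITS) per site for one species."""
        present = rng.random(N_SITES) < occupancy
        return rng.binomial(N_VISITS, TRUE_P, N_SITES) * present

    abundant = simulate(0.80)   # easy-to-observe species: lots of detections
    rare = simulate(0.05)       # species of conservation concern: very few

    # Estimate per-visit detection probability p from the abundant species:
    # among sites with at least one detection, mean detections = N*p / (1-(1-p)^N).
    d_bar = abundant[abundant > 0].mean()
    grid = np.linspace(0.01, 0.99, 981)
    p_hat = grid[np.argmin(np.abs(N_VISITS * grid / (1 - (1 - grid) ** N_VISITS) - d_bar))]

    # Use the shared detection estimate to correct the rare species' naive occupancy.
    naive = (rare > 0).mean()                          # fraction of sites with a detection
    corrected = naive / (1 - (1 - p_hat) ** N_VISITS)  # adjust for imperfect detection

    print(f"estimated p = {p_hat:.2f} (true {TRUE_P})")
    print(f"rare species occupancy: naive = {naive:.3f}, corrected = {corrected:.3f}, true = 0.05")
    ```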

  • Silicon image sensor that computes

    As any driver knows, accidents can happen in the blink of an eye — so when it comes to the camera system in autonomous vehicles, processing time is critical. The time that it takes for the system to snap an image and deliver the data to the microprocessor for image processing could mean the difference between avoiding an obstacle or getting into a major accident.
    In-sensor image processing, in which important features are extracted from raw data by the image sensor itself instead of the separate microprocessor, can speed up the visual processing. To date, demonstrations of in-sensor processing have been limited to emerging research materials which are, at least for now, difficult to incorporate into commercial systems.
    Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed the first in-sensor processor that could be integrated into commercial silicon imaging sensor chips — known as complementary metal-oxide-semiconductor (CMOS) image sensors — that are used in nearly all commercial devices that need to capture visual information, including smartphones.
    The research is published in Nature Electronics.
    “Our work harnesses the mainstream semiconductor electronics industry to rapidly bring in-sensor computing to a wide variety of real-world applications,” said Donhee Ham, the Gordon McKay Professor of Electrical Engineering and Applied Physics at SEAS and senior author of the paper.
    Ham and his team developed a silicon photodiode array. Commercially-available image sensing chips also have a silicon photodiode array to capture images, but the team’s photodiodes are electrostatically doped, meaning that sensitivity of individual photodiodes, or pixels, to incoming light can be tuned by voltages. An array that connects multiple voltage-tunable photodiodes together can perform an analog version of multiplication and addition operations central to many image processing pipelines, extracting the relevant visual information as soon as the image is captured.
    “These dynamic photodiodes can concurrently filter images as they are captured, allowing for the first stage of vision processing to be moved from the microprocessor to the sensor itself,” said Houk Jang, a postdoctoral fellow at SEAS and first author of the paper.
    The silicon photodiode array can be programmed into different image filters to remove unnecessary details or noise for various applications. An imaging system in an autonomous vehicle, for example, may call for a high-pass filter to track lane markings, while other applications may call for a filter that blurs the image to reduce noise.
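    In digital terms, the filtering described above is a convolution: each output pixel is a weighted sum of its neighborhood, and in the device those weights are set by the photodiode bias voltages. The short sketch below reproduces the equivalent multiply-accumulate in software with a 3x3 high-pass kernel and a 3x3 box blur; the kernels and the toy image are illustrative choices, not taken from the paper.

    ```python
    import numpy as np

    # Example kernels: a Laplacian-style high-pass (emphasizes edges such as lane
    # markings) and a box blur (suppresses pixel noise). In the device, weights of
    # this kind would be set by bias voltages on the photodiode array.
    HIGH_PASS = np.array([[ 0, -1,  0],
                          [-1,  4, -1],
                          [ 0, -1,  0]], dtype=float)
    BOX_BLUR = np.full((3, 3), 1.0 / 9.0)

    def filter_at_capture(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
        """Multiply-accumulate over each 3x3 neighborhood (what the analog array
        does as the image is captured, here done digitally for illustration)."""
        h, w = image.shape
        out = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
        return out

    # Toy "road" image: dark background with one bright vertical lane marking.
    image = np.zeros((8, 8))
    image[:, 4] = 1.0

    edges = filter_at_capture(image, HIGH_PASS)   # lane marking stands out
    smooth = filter_at_capture(image, BOX_BLUR)   # noise-reducing blur
    print(edges.round(1))
    print(smooth.round(2))
    ```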
    “Looking ahead, we foresee the use of this silicon-based in-sensor processor not only in machine vision applications, but also in bio-inspired applications, wherein early information processing allows for the co-location of sensor and compute units, like in the brain,” said Henry Hinton, a graduate student at SEAS and co-first author of the paper.
    Next, the team aims to increase the density of photodiodes and integrate them with silicon integrated circuits.
    “By replacing the standard non-programmable pixels in commercial silicon image sensors with the programmable ones developed here, imaging devices can intelligently trim out unneeded data and thus become more efficient in both energy and bandwidth, meeting the demands of the next generation of sensory applications,” said Jang.
    The research was co-authored by Woo-Bin Jung, Min-Hyun Lee, Changhyun Kim, Min Park, Seoung-Ki Lee and Seongjun Park. It was supported by the Samsung Advanced Institute of Technology under Contract A30216 and by the National Science Foundation Science and Technology Center for Integrated Quantum Materials under Contract DMR-1231319.