More stories

  •

    Economist: Tens of billions of dollars in forest products are being overlooked

    Are we missing the forest for the trees? More than timber grows in forests — including products worth many tens of billions of dollars. Because these goods go unrecorded in official trade statistics, their economic value escapes our attention. As a result, clear opportunities to combat poverty are being missed, according to a University of Copenhagen economist.
    In the Roman Empire, customs taxes on spices, black pepper in particular, accounted for up to a third of the empire’s annual income. And during the late Middle Ages, European efforts to cut out middlemen and monopolise the spice trade led to colonisation in Asia. Historically, non-timber forest products have frequently played a key role in the global economy.
    Today, however, non-timber forest products are neglected when the value of forests is recorded in official trade statistics. This applies both in the EU and globally, despite the fact that these products account for a large part of many countries’ economies, from medicinal plants and edible insects to nuts, berries and herbs, to materials like bamboo and latex.
    The UN Food and Agriculture Organization (FAO) estimates that annual producer income from non-wood products is US$ 88 billion — and when the added value of processing and other links in the value chain are included, the value of these products rockets up to trillions of dollars.
    According to Professor Carsten Smith-Hall, an economist at the University of Copenhagen’s Department of Food and Resource Economics, this is a good reason to begin including forest products like ginseng, shea nuts, acai berries, baobab and acacia gum into global trade accounts.
    “We estimate that roughly 30,000 different non-timber forest products are traded internationally, but less than fifty of them currently have a commodity code. We’re talking about goods worth enormous sums of money that are not being recorded in official statistics — and are therefore invisible. This means that the countries and communities that deliver these goods do not earn enough from them, in part because there is no investment in local processing companies,” says Smith-Hall, a world-leading bioeconomy researcher. He adds:
    “Because we underestimate the role of these goods, we’re wasting clear opportunities to combat poverty. These are goods that contribute significantly to food security, health and employment in large parts of the world, especially in low- and middle-income countries.”
    Carsten Smith-Hall and James Chamberlain from the U.S. Department of Agriculture have written a commentary in the journal Forest Policy and Economics in which they argue for this great, though as yet unrealized, potential.

    Adding value
    Examples of valuable products that go unrecorded but are traded in informal markets are numerous. One is caterpillar fungus (Ophiocordyceps sinensis), a fungus that infects and then erupts from the heads of mummified moth larvae. On the Tibetan Plateau and in the Himalayas, people collect this medicinal fungus, which they call yartsa gunbu (it is also known as the “Viagra of the Himalayas”), at every opportunity.
    “Caterpillar fungus is exported to China, where it is sold as an aphrodisiac and traditional medicine. Rural gatherers can sell it for about €11,500 per kilo. It fights poverty and helps transform local communities. That is, it allows people to send their children to better schools and build new houses. But because the trade goes unrecorded, local communities aren’t getting what they could out of the product,” says Carsten Smith-Hall.
    The professor goes on to explain that the consequence of products like these not appearing in official trade accounts is that they are ignored in important contexts:
    “The products are not prioritised when funds are allocated for the development of industries and new technology. This means that many countries are missing out on the huge sums of money involved in the processing of a product in the country where a raw material is harvested. Processing is where you really see value being added to a product.”
    Another major consequence is that non-timber products are ignored when policies for managing natural resources are developed. Official records could also serve biodiversity, Smith-Hall points out:
    “Many of these products appear on various red lists because they are believed to be overexploited. In such cases, investment may be needed to develop cultivation technology, as opposed to harvesting them in the wild. But when investors and decision-makers aren’t aware of the importance of a product, the money ends up elsewhere.”

    Focus and systematize
    According to the researchers, one of the obstacles standing in the way of non-timber products being included in trade accounts today is the overwhelmingly large number of products. It is a concern for which they have advice.
    “There is a general perception among researchers and public agencies that there are simply too many products to manage. But if you list the economically important products in a country, ones that are traded in large quantities, you can shorten the list from, for example, 2,000 items to perhaps only fifteen. This lets people know which species to take an interest in and where to better focus research and technological investments. For example, in relation to developing cultivation techniques,” says Carsten Smith-Hall.
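The shortlisting idea described in the quote above can be sketched as a simple cumulative-value cutoff. This is only an illustration: the product names, trade values and the 90% threshold below are invented, not data from the study.

```python
# Hypothetical sketch: shortlist the economically important products in a
# country by keeping only those that together account for the bulk of
# recorded trade value. All figures are invented for illustration.

def shortlist(products, coverage=0.90):
    """Return the smallest set of products covering `coverage` of total value.

    `products` maps product name -> annual traded value.
    """
    total = sum(products.values())
    chosen, running = [], 0.0
    # Walk products from most to least valuable until the target is covered.
    for name, value in sorted(products.items(), key=lambda kv: -kv[1]):
        chosen.append(name)
        running += value
        if running >= coverage * total:
            break
    return chosen

trade = {
    "caterpillar fungus": 9_000_000,
    "medicinal herbs": 4_000_000,
    "shea nuts": 3_500_000,
    "wild berries": 400_000,
    "bamboo shoots": 250_000,
    "resins": 120_000,
    "edible insects": 60_000,
}
print(shortlist(trade))  # a handful of products dominate the trade value
```

With numbers like these, a list of thousands of traded products collapses to the few species worth prioritising for research and investment.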
    Furthermore, the researchers recommend establishing systematic data collection at local, national and global levels of the volumes traded and prices fetched. They point out that tools have already been developed for this and could be made more widely available.
    “We have a huge untapped potential here that can contribute to tackling extreme poverty while also conserving nature and biodiversity. But this requires us to broaden our horizons and not just maintain the traditional focus on timber as the only important forest resource,” Carsten Smith-Hall concludes.
    The importance of non-timber products
    Only a very limited number of non-timber product types appear in official trade statistics today. These include coffee, cocoa, rubber, vanilla, avocado and bananas, which are all considered agricultural crops. The researchers estimate that tens of thousands of different non-timber products are traded worldwide which are not included in the statistics. However, the number of economically significant products is much smaller. One study estimates that between 3.5 and 5.8 billion people currently use non-timber products. About half of these users live in rural areas in the Global South, while the other half live in urban areas and the Global North. In the subtropics and tropics, roughly 28% of rural household income is estimated to come from non-timber products.

    Shea nuts as safety net
    Shea nut oil is a common ingredient in body care products, but is also used in chocolate and other products. Shea nuts are an example of a non-timber forest product that plays an important role in rural West African communities.
    “Shea nuts prevent people from sinking deeper into poverty in Ghana, Burkina Faso and other places. Global demand for them has grown, contributing to local incomes and providing a safety net for people if, for example, their cattle are stolen or there is a sudden death in the family. At these times, many people go out and harvest these nuts to cover sudden income gaps,” explains Carsten Smith-Hall.
    “Many non-timber products are harvested by small-scale farmers in the countryside at certain times of the year — for example, when they are not working in the fields. At these times, they go into the forest to harvest. This makes production relatively hidden. Typically, smallholders then go to the village and sell the goods to a local trader. The trader loads the goods onto a truck, and they are transported to wholesalers, who often export them unprocessed to other countries. However, these long logistics and value chains are also largely invisible,” says Carsten Smith-Hall.

  •

    A faster, better way to prevent an AI chatbot from giving toxic responses

    A user could ask ChatGPT to write a computer program or summarize an article, and the AI chatbot would likely be able to generate useful code or write a cogent synopsis. However, someone could also ask for instructions to build a bomb, and the chatbot might be able to provide those, too.
    To prevent this and other safety issues, companies that build large language models typically safeguard them using a process called red-teaming. Teams of human testers write prompts aimed at triggering unsafe or toxic text from the model being tested. These prompts are used to teach the chatbot to avoid such responses.
    But this only works effectively if engineers know which toxic prompts to use. If human testers miss some prompts, which is likely given the number of possibilities, a chatbot regarded as safe might still be capable of generating unsafe answers.
    Researchers from Improbable AI Lab at MIT and the MIT-IBM Watson AI Lab used machine learning to improve red-teaming. They developed a technique to train a red-team large language model to automatically generate diverse prompts that trigger a wider range of undesirable responses from the chatbot being tested.
    They do this by teaching the red-team model to be curious when it writes prompts, and to focus on novel prompts that evoke toxic responses from the target model.
    The technique outperformed human testers and other machine-learning approaches by generating more distinct prompts that elicited increasingly toxic responses. Not only does their method significantly improve the coverage of inputs being tested compared to other automated methods, but it can also draw out toxic responses from a chatbot that had safeguards built into it by human experts.
    “Right now, every large language model has to undergo a very lengthy period of red-teaming to ensure its safety. That is not going to be sustainable if we want to update these models in rapidly changing environments. Our method provides a faster and more effective way to do this quality assurance,” says Zhang-Wei Hong, an electrical engineering and computer science (EECS) graduate student in the Improbable AI lab and lead author of a paper on this red-teaming approach.

    Hong’s co-authors include EECS graduate students Idan Shenfield, Tsun-Hsuan Wang, and Yung-Sung Chuang; Aldo Pareja and Akash Srivastava, research scientists at the MIT-IBM Watson AI Lab; James Glass, senior research scientist and head of the Spoken Language Systems Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Pulkit Agrawal, director of Improbable AI Lab and an assistant professor in CSAIL. The research will be presented at the International Conference on Learning Representations.
    Automated red-teaming
    Large language models, like those that power AI chatbots, are often trained by showing them enormous amounts of text from billions of public websites. So, not only can they learn to generate toxic words or describe illegal activities, but they could also leak personal information they may have picked up.
    The tedious and costly nature of human red-teaming, which is often ineffective at generating a wide enough variety of prompts to fully safeguard a model, has encouraged researchers to automate the process using machine learning.
    Such techniques often train a red-team model using reinforcement learning. This trial-and-error process rewards the red-team model for generating prompts that trigger toxic responses from the chatbot being tested.
    But due to the way reinforcement learning works, the red-team model will often keep generating a few similar prompts that are highly toxic to maximize its reward.

    For their reinforcement learning approach, the MIT researchers utilized a technique called curiosity-driven exploration. The red-team model is incentivized to be curious about the consequences of each prompt it generates, so it will try prompts with different words, sentence patterns, or meanings.
    “If the red-team model has already seen a specific prompt, then reproducing it will not generate any curiosity in the red-team model, so it will be pushed to create new prompts,” Hong says.
    During its training process, the red-team model generates a prompt and interacts with the chatbot. The chatbot responds, and a safety classifier rates the toxicity of its response, rewarding the red-team model based on that rating.
    Rewarding curiosity
    The red-team model’s objective is to maximize its reward by eliciting an even more toxic response with a novel prompt. The researchers enable curiosity in the red-team model by modifying the reward signal in the reinforcement learning setup.
    First, in addition to maximizing toxicity, they include an entropy bonus that encourages the red-team model to be more random as it explores different prompts. Second, to make the agent curious, they include two novelty rewards. One rewards the model based on the similarity of words in its prompts, and the other rewards the model based on semantic similarity. (Less similarity yields a higher reward.)
    To prevent the red-team model from generating random, nonsensical text, which can trick the classifier into awarding a high toxicity score, the researchers also added a naturalistic language bonus to the training objective.
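A minimal sketch of how such a combined reward signal might look. Everything here is a stand-in: the toxicity, entropy and naturalness scores would come from the classifier and language models described above, and the novelty term below covers only the word-level bonus (the paper also uses a semantic one); the function names and weights are invented.

```python
# Sketch of a curiosity-augmented red-teaming reward, with invented
# stand-in terms. `toxicity`, `entropy` and `naturalness` are assumed to
# be supplied by external scorers (safety classifier, policy entropy,
# language-model fluency score).

def ngram_novelty(prompt, history, n=2):
    """Reward prompts whose word n-grams overlap little with past prompts."""
    words = prompt.split()
    grams = {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    if not grams or not history:
        return 1.0
    seen = set()
    for past in history:
        pw = past.split()
        seen |= {tuple(pw[i:i + n]) for i in range(len(pw) - n + 1)}
    # Fraction of this prompt's n-grams never seen before.
    return 1.0 - len(grams & seen) / len(grams)

def red_team_reward(prompt, history, toxicity, entropy, naturalness,
                    w_ent=0.1, w_nov=0.5, w_nat=0.2):
    """Combine toxicity with curiosity bonuses; repeated prompts earn less."""
    return (toxicity                                  # target reply's toxicity score
            + w_ent * entropy                         # entropy bonus: explore
            + w_nov * ngram_novelty(prompt, history)  # word-level novelty
            + w_nat * naturalness)                    # discourage nonsense text
```

Under this shaping, regenerating an already-seen prompt earns no novelty bonus, which is what pushes the red-team model toward new prompts.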
    With these additions in place, the researchers compared the toxicity and diversity of responses their red-team model generated with other automated techniques. Their model outperformed the baselines on both metrics.
    They also used their red-team model to test a chatbot that had been fine-tuned with human feedback so it would not give toxic replies. Their curiosity-driven approach was able to quickly produce 196 prompts that elicited toxic responses from this “safe” chatbot.
    “We are seeing a surge of models, and their number is only expected to rise. Imagine thousands of models, or even more, with companies and labs pushing model updates frequently. These models are going to be an integral part of our lives, and it’s important that they are verified before being released for public consumption. Manual verification of models is simply not scalable, and our work is an attempt to reduce the human effort required to ensure a safer and more trustworthy AI future,” says Agrawal.
    In the future, the researchers want to enable the red-team model to generate prompts about a wider variety of topics. They also want to explore the use of a large language model as the toxicity classifier. In this way, a user could train the toxicity classifier using a company policy document, for instance, so a red-team model could test a chatbot for company policy violations.
    “If you are releasing a new AI model and are concerned about whether it will behave as expected, consider using curiosity-driven red-teaming,” says Agrawal.
    This research is funded, in part, by Hyundai Motor Company, Quanta Computer Inc., the MIT-IBM Watson AI Lab, an Amazon Web Services MLRA research grant, the U.S. Army Research Office, the U.S. Defense Advanced Research Projects Agency Machine Common Sense Program, the U.S. Office of Naval Research, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator.

  •

    Quantum breakthrough when light makes materials magnetic

    The potential of quantum technology is huge but is today largely limited to the extremely cold environments of laboratories. Now, researchers at Stockholm University, at the Nordic Institute for Theoretical Physics and at the Ca’ Foscari University of Venice have succeeded in demonstrating for the very first time how laser light can induce quantum behavior at room temperature — and make non-magnetic materials magnetic. The breakthrough is expected to pave the way for faster and more energy-efficient computers, information transfer and data storage.
    Within a few decades, the advancement of quantum technology is expected to revolutionize several of society’s most important areas and pave the way for completely new technological possibilities in communication and energy. Of particular interest to researchers in the field are the peculiar and bizarre properties of quantum particles — which deviate completely from the laws of classical physics and can make materials magnetic or superconducting. By increasing the understanding of exactly how and why such quantum states arise, the goal is to be able to control and manipulate materials to obtain quantum mechanical properties.
    So far, researchers have only been able to induce quantum behaviors, such as magnetism and superconductivity, at extremely cold temperatures. Therefore, the potential of quantum research is still limited to laboratory environments.
    Now, a research team from Stockholm University and the Nordic Institute for Theoretical Physics (NORDITA) in Sweden, the University of Connecticut and the SLAC National Accelerator Laboratory in the USA, the National Institute for Materials Science in Tsukuba, Japan, and Elettra-Sincrotrone Trieste, the ‘Sapienza’ University of Rome and the Ca’ Foscari University of Venice in Italy, is the first in the world to demonstrate experimentally how laser light can induce magnetism in a non-magnetic material at room temperature. In the study, published in Nature, the researchers subjected the quantum material strontium titanate to short but intense laser pulses of a particular wavelength and polarization to induce magnetism.
    “The innovation in this method lies in the concept of letting light move atoms and electrons in the material in a circular motion, so as to generate currents that make it as magnetic as a refrigerator magnet. We have been able to do so by developing a new light source in the far-infrared with a polarization that has a “corkscrew” shape. This is the first time we have been able to induce, and clearly see, a material becoming magnetic at room temperature in an experiment. Furthermore, our approach allows us to make magnetic materials out of many insulators, whereas magnets are typically made of metals. In the long run, this opens up completely new applications in society,” says research leader Stefano Bonetti of Stockholm University and the Ca’ Foscari University of Venice.
    The method is based on the theory of “dynamic multiferroicity,” which predicts that when titanium atoms are “stirred up” with circularly polarized light in an oxide based on titanium and strontium, a magnetic field will be formed. But only now has the theory been confirmed in practice. The breakthrough is expected to have broad applications in several information technologies.
    “This opens up for ultra-fast magnetic switches that can be used for faster information transfer and considerably better data storage, and for computers that are significantly faster and more energy-efficient,” says Alexander Balatsky, professor of physics at NORDITA.
    In fact, the results of the team have already been reproduced in several other labs, and a publication in the same issue of Nature demonstrates that this approach can be used to write, and hence store, magnetic information. A new chapter in designing new materials using light has been opened.

  •

    AI makes retinal imaging 100 times faster, compared to manual method

    Researchers at the National Institutes of Health applied artificial intelligence (AI) to a technique that produces high-resolution images of cells in the eye. They report that with AI, imaging is 100 times faster and improves image contrast 3.5-fold. The advance, they say, will provide researchers with a better tool to evaluate age-related macular degeneration (AMD) and other retinal diseases.
    “Artificial intelligence helps overcome a key limitation of imaging cells in the retina, which is time,” said Johnny Tam, Ph.D., who leads the Clinical and Translational Imaging Section at NIH’s National Eye Institute.
    Tam is developing a technology called adaptive optics (AO) to improve imaging devices based on optical coherence tomography (OCT). Like ultrasound, OCT is noninvasive, quick, painless, and standard equipment in most eye clinics.
    Imaging retinal pigment epithelium (RPE) cells with AO-OCT comes with new challenges, including a phenomenon called speckle. Speckle interferes with AO-OCT the way clouds interfere with aerial photography: at any given moment, parts of the image may be obscured. Managing speckle is somewhat like managing cloud cover. Researchers repeatedly image cells over a long period of time; as time passes, the speckle shifts, allowing different parts of the cells to become visible. The scientists then undertake the laborious, time-consuming task of piecing together many images to create a speckle-free image of the RPE cells.
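A toy numerical illustration (not the NIH pipeline) of why that averaging works: speckle acts like multiplicative noise that decorrelates between frames, so the mean of many captures converges on the underlying structure. The pattern and noise model below are invented for demonstration.

```python
# Toy demonstration of speckle suppression by frame averaging.
# A fixed "cell" pattern is corrupted by independent multiplicative
# speckle in each of 120 simulated captures; averaging the frames
# recovers the pattern far better than any single frame.
import numpy as np

rng = np.random.default_rng(0)
truth = np.tile([0.2, 0.8], 50)          # stand-in cellular pattern (100 px)
speckle = rng.exponential(1.0, size=(120, truth.size))  # unit-mean noise
frames = truth * speckle                 # 120 speckled captures

single_err = np.abs(frames[0] - truth).mean()          # one capture
avg_err = np.abs(frames.mean(axis=0) - truth).mean()   # average of 120
print(single_err, avg_err)               # averaging shrinks the error
```

The averaged error falls roughly with the square root of the number of frames, which is why the manual method needs so many captures and so much time.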
    Tam and his team developed a novel AI-based method called parallel discriminator generative adversarial network (P-GAN) — a deep learning algorithm. By feeding the P-GAN network nearly 6,000 manually analyzed AO-OCT-acquired images of human RPE, each paired with its corresponding speckled original, the team trained the network to identify and recover speckle-obscured cellular features.
    When tested on new images, P-GAN successfully de-speckled the RPE images, recovering cellular details. With one image capture, it generated results comparable to the manual method, which required the acquisition and averaging of 120 images. On a variety of objective performance metrics that assess things like cell shape and structure, P-GAN outperformed other AI techniques. Vineeta Das, Ph.D., a postdoctoral fellow in the Clinical and Translational Imaging Section at NEI, estimates that P-GAN reduced imaging acquisition and processing time by about 100-fold. P-GAN also yielded greater contrast, about 3.5-fold greater than before.
    “Adaptive optics takes OCT-based imaging to the next level,” said Tam. “It’s like moving from a balcony seat to a front row seat to image the retina. With AO, we can reveal 3D retinal structures at cellular-scale resolution, enabling us to zoom in on very early signs of disease.”
    While adding AO to OCT provides a much better view of cells, processing AO-OCT images after they’ve been captured takes much longer than OCT without AO.

    Tam’s latest work targets the retinal pigment epithelium (RPE), a layer of tissue behind the light-sensing retina that supports the metabolically active retinal neurons, including the photoreceptors. The retina lines the back of the eye and captures, processes, and converts the light that enters the front of the eye into signals that it then transmits through the optic nerve to the brain. Scientists are interested in the RPE because many diseases of the retina occur when the RPE breaks down.
    By integrating AI with AO-OCT, Tam believes that a major obstacle for routine clinical imaging using AO-OCT has been overcome, especially for diseases that affect the RPE, which has traditionally been difficult to image.
    “Our results suggest that AI can fundamentally change how images are captured,” said Tam. “Our P-GAN artificial intelligence will make AO imaging more accessible for routine clinical applications and for studies aimed at understanding the structure, function, and pathophysiology of blinding retinal diseases. Thinking about AI as a part of the overall imaging system, as opposed to a tool that is only applied after images have been captured, is a paradigm shift for the field of AI.”

  •

    New method of measuring qubits promises ease of scalability in a microscopic package

    Chasing ever-higher qubit counts in near-term quantum computers constantly demands new feats of engineering.
    Among the troublesome hurdles of this scaling-up race is refining how qubits are measured. Devices called parametric amplifiers are traditionally used to do these measurements. But as the name suggests, the device amplifies weak signals picked up from the qubits to conduct the readout, which causes unwanted noise and can lead to decoherence of the qubits if not protected by additional large components. More importantly, the bulky size of the amplification chain becomes technically challenging to work around as qubit counts increase in size-limited refrigerators.
    Cue the Aalto University research group Quantum Computing and Devices (QCD). They have a hefty track record of showing how thermal bolometers can be used as ultrasensitive detectors, and they just demonstrated in an April 10 Nature Electronics paper that bolometer measurements can be accurate enough for single-shot qubit readout.
    A new method of measuring
    To the chagrin of many physicists, the Heisenberg uncertainty principle dictates that one cannot simultaneously know a signal’s position and momentum, or voltage and current, with perfect accuracy. So it goes with qubit measurements conducted with parametric voltage-current amplifiers. But bolometric energy sensing is a fundamentally different kind of measurement, serving as a means of evading Heisenberg’s infamous rule. Since a bolometer measures power, or photon number, it is not bound to add quantum noise stemming from the Heisenberg uncertainty principle in the way that parametric amplifiers are.
    Unlike amplifiers, bolometers very subtly sense microwave photons emitted from the qubit via a minimally invasive detection interface. This form factor is roughly 100 times smaller than its amplifier counterpart, making it extremely attractive as a measurement device.
    ‘When thinking of a quantum-supreme future, it is easy to imagine high qubit counts in the thousands or even millions could be commonplace. A careful evaluation of the footprint of each component is absolutely necessary for this massive scale-up. We have shown in the Nature Electronics paper that our nanobolometers could seriously be considered as an alternative to conventional amplifiers. In our very first experiments, we found these bolometers accurate enough for single-shot readout, free of added quantum noise, and they consume 10,000 times less power than the typical amplifiers — all in a tiny bolometer, the temperature-sensitive part of which can fit inside of a single bacterium,’ says Aalto University Professor Mikko Möttönen, who heads the QCD research group.

    Single-shot fidelity is an important metric physicists use to determine how accurately a device can detect a qubit’s state in just one measurement as opposed to an average of multiple measurements. In the case of the QCD group’s experiments, they were able to obtain a single-shot fidelity of 61.8% with a readout duration of roughly 14 microseconds. When correcting for the qubit’s energy relaxation time, the fidelity jumps up to 92.7%.
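One common way to quantify single-shot readout accuracy is assignment fidelity: prepare the qubit in a known state many times, record the binary readout, and penalize both kinds of misassignment. This is a generic textbook definition, not necessarily the exact metric used in the paper, and the error rates in the example are invented.

```python
# Sketch of assignment fidelity for single-shot qubit readout:
#   F = 1 - (P(read e | prepared g) + P(read g | prepared e)) / 2
# The misassignment probabilities below are illustrative only.

def assignment_fidelity(p_e_given_g, p_g_given_e):
    """Average probability of correctly assigning |g> and |e> in one shot."""
    return 1.0 - 0.5 * (p_e_given_g + p_g_given_e)

# e.g. misreading |g> 20% of the time and |e> 30% of the time:
print(assignment_fidelity(0.20, 0.30))  # 0.75
```

A perfect readout (no misassignments) gives a fidelity of 1.0; correcting for known error sources, such as the qubit relaxing during the measurement, raises the reported figure, as in the 92.7% quoted above.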
    ‘With minor modifications, we could expect to see bolometers approaching the desired 99.9% single-shot fidelity in 200 nanoseconds. For example, we can swap the bolometer material from metal to graphene, which has a lower heat capacity and can detect very small changes in its energy quickly. And by removing other unnecessary components between the bolometer and the chip itself, we can not only make even greater improvements on the readout fidelity, but we can achieve a smaller and simpler measurement device that makes scaling-up to higher qubit counts more feasible,’ says András Gunyhó, the first author on the paper and a doctoral researcher in the QCD group.
    Prior to demonstrating the high single-shot readout fidelity of bolometers in their most recent paper, the QCD research group first showed that bolometers can be used for ultrasensitive, real-time microwave measurements in 2019. They then published in 2020 a paper in Nature showing how bolometers made of graphene can shorten readout times to well below a microsecond.
    The work was carried out in the Research Council of Finland Centre of Excellence for Quantum Technology (QTF), using OtaNano research infrastructure, in collaboration with VTT Technical Research Centre of Finland and IQM Quantum Computers. It was funded primarily by the European Research Council Advanced Grant ConceptQ, the Future Makers Program of the Jane and Aatos Erkko Foundation, and the Technology Industries of Finland Centennial Foundation.

  •

    Breakthrough for next-generation digital displays

    Researchers at Linköping University, Sweden, have developed a digital display screen where the LEDs themselves react to touch, light, fingerprints and the user’s pulse, among other things. Their results, published in Nature Electronics, could be the start of a whole new generation of displays for phones, computers and tablets.
    “We’ve now shown that our design principle works. Our results show that there is great potential for a new generation of digital displays where new advanced features can be created. From now on, it’s about improving the technology into a commercially viable product,” says Feng Gao, professor in optoelectronics at Linköping University (LiU).
    Digital displays have become a cornerstone of almost all personal electronics. However, the most modern LCD and OLED screens on the market can only display information. To become a multi-function display that detects touch, fingerprints or changing lighting conditions, a variety of sensors are required that are layered on top of or around the display.
    Researchers at Linköping University have now developed a completely new type of display where all sensor functions are also found in the display’s LEDs without the need of any additional sensors.
    The LEDs are made of a crystalline material called perovskite. Its excellent ability to absorb and emit light is the key that enables the newly developed screen.
    In addition to the screen reacting to touch, light, fingerprints and the user’s pulse, the device can also be charged through the screen thanks to the perovskites’ ability to also act as solar cells.
    “Here’s an example — your smartwatch screen is off most of the time. During the off-time of the screen, instead of displaying information, it can harvest light to charge your watch, significantly extending how long you can go between charges,” says Chunxiong Bao, associate professor at Nanjing University, previously a postdoc researcher at LiU and the lead author of the paper.
    For a screen to display all colours, there need to be LEDs in three colours — red, green and blue — that glow with different intensities and thus produce thousands of different colours. The researchers at Linköping University have developed screens with perovskite LEDs in all three colours, paving the way for a screen that can display all colours in the visible light spectrum.
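The arithmetic behind that colour count is simple: with three independently dimmable channels, the palette grows as the cube of the number of intensity steps per channel. The 16-step example below is illustrative, not a specification of the Linköping display.

```python
# Toy arithmetic of additive colour mixing: three independently dimmable
# channels (red, green, blue) with `levels` intensity steps each can
# produce levels**3 distinct colours.

def palette_size(levels: int) -> int:
    return levels ** 3

print(palette_size(16))  # 4096 colours from just 16 steps per channel
```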
    But there are still many challenges to be solved before the screen is in everyone’s pocket. Zhongcheng Yuan, researcher at the University of Oxford, previously postdoc at LiU and the other lead author of the paper, believes that many of the problems will be solved within ten years:
    “For instance, the service life of perovskite LEDs needs to be improved. At present, the screen only works for a few hours before the material becomes unstable, and the LEDs go out,” he says.

  •

    Waterproof ‘e-glove’ could help scuba divers communicate

    When scuba divers need to say “I’m okay” or “Shark!” to their dive partners, they use hand signals to communicate visually. But sometimes these movements are difficult to see. Now, researchers reporting in ACS Nano have constructed a waterproof “e-glove” that wirelessly transmits hand gestures made underwater to a computer that translates them into messages. The new technology could someday help divers communicate better with each other and with boat crews on the surface.
    E-gloves — gloves fitted with electronic sensors that translate hand motions into information — are already in development, including designs that allow the wearer to interact with virtual reality environments or help people recovering from a stroke regain fine motor skills. However, rendering the electronic sensors waterproof for use in a swimming pool or the ocean, while also keeping the glove flexible and comfortable to wear, is a challenge. So Fuxing Chen, Lijun Qu, Mingwei Tian and colleagues wanted to create an e-glove capable of sensing hand motions when submerged underwater.
    The researchers began by fabricating waterproof sensors that rely on flexible microscopic pillars inspired by the tube-like feet of a starfish. Using laser writing tools, they created an array of these micropillars on a thin film of polydimethylsiloxane (PDMS), a waterproof plastic commonly used in contact lenses. After coating the PDMS array with a conductive layer of silver, the researchers sandwiched two of the films together with the pillars facing inward to create a waterproof sensor. The sensor — roughly the size of a USB-C port — is responsive when flexed and can detect pressures ranging from the light touch of a dollar bill to the impact of water streaming from a garden hose. The researchers packaged 10 of these waterproof sensors within self-adhesive bandages and sewed them over the knuckles and first finger joints of their e-glove prototype.
    To create a hand-gesture vocabulary for the researchers’ demonstration, a participant wearing the e-glove made 16 gestures, including “OK” and “Exit.” The researchers recorded the specific electronic signals generated by the e-glove sensors for each corresponding gesture. They applied a machine learning technique for translating sign language into words to create a computer program that could translate the e-glove gestures into messages. When tested, the program translated hand gestures made on land and underwater with 99.8% accuracy. In the future, the team says a version of this e-glove could help scuba divers communicate with visual hand signals even when they cannot clearly see their dive partners.
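The translation step described above — mapping a new set of sensor readings to the closest recorded gesture signature — can be sketched in a few lines. This is a hypothetical nearest-centroid scheme, not the paper's actual model; the sensor values and the two gestures below are illustrative assumptions.

```python
import math

# Hypothetical reference signals: gesture name -> averaged reading from
# the glove's 10 knuckle/joint sensors (values here are made up).
centroids = {
    "OK":   [0.9, 0.8, 0.1, 0.1, 0.1, 0.9, 0.8, 0.1, 0.1, 0.1],
    "Exit": [0.1, 0.1, 0.9, 0.9, 0.9, 0.1, 0.1, 0.9, 0.9, 0.9],
}

def classify(reading):
    """Return the gesture whose reference signal is closest (Euclidean)."""
    return min(centroids, key=lambda g: math.dist(reading, centroids[g]))

# A noisy reading close to the "OK" signature is still classified as "OK".
print(classify([0.85, 0.75, 0.2, 0.1, 0.15, 0.9, 0.7, 0.1, 0.2, 0.1]))  # OK
```

The study's reported 99.8% accuracy comes from its machine learning model trained on the recorded signals, not from a simple distance rule like this one.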
    The authors acknowledge funding from the Shiyanjia Lab, National Key Research and Development Program, Taishan Scholar Program of Shandong Province in China, Shandong Province Key Research and Development Plan, Shandong Provincial Universities Youth Innovation Technology Plan Team, National Natural Science Foundation of China, Natural Science Foundation of Shandong Province of China, Shandong Province Science and Technology Small and Medium sized Enterprise Innovation Ability Enhancement Project, Natural Science Foundation of Qingdao, Qingdao Key Technology Research and Industrialization Demonstration Projects, Qingdao Shinan District Science and Technology Plan Project, and Suqian Key Research and Development Plan.

  • AI-assisted breast-cancer screening may reduce unnecessary testing

    Using artificial intelligence (AI) to supplement radiologists’ evaluations of mammograms may improve breast-cancer screening by reducing false positives without missing cases of cancer, according to a study by researchers at Washington University School of Medicine in St. Louis and a Silicon Valley-based technology startup.
    The researchers developed an algorithm that identified normal mammograms with very high sensitivity. They then ran a simulation on patient data to see what would have happened if all of the very low-risk mammograms had been taken off radiologists’ plates, freeing the doctors to concentrate on the more questionable scans. The simulation revealed that fewer people would have been called back for additional testing but that the same number of cancer cases would have been detected.
    “False positives are when you call a patient back for additional testing, and it turns out to be benign,” explained senior author Richard L. Wahl, MD, a professor of radiology at Washington University’s Mallinckrodt Institute of Radiology (MIR) and a professor of radiation oncology. “That causes a lot of unnecessary anxiety for patients and consumes medical resources. This simulation study showed that very low-risk mammograms can be reliably identified by AI to reduce false positives and improve workflows.”
    The study is published April 10 in the journal Radiology: Artificial Intelligence.
    Wahl previously collaborated with the startup on an algorithm to help radiologists judge breast density on mammograms to identify people who could benefit from additional or alternative screening. That algorithm received clearance from the Food and Drug Administration (FDA) in 2020 and is now marketed as WRDensity.
    In this study, Wahl and colleagues at the startup worked together to develop a way to rule out cancer using AI to evaluate mammograms. They trained the AI model on 123,248 2D digital mammograms (6,161 of which showed cancer) that were largely collected and read by Washington University radiologists. Then, they validated and tested the AI model on three independent sets of mammograms, two from institutions in the U.S. and one from the United Kingdom.
    First, the researchers figured out what the doctors did: how many patients were called back for secondary screening and biopsies; the results of those tests; and the final determination in each case. Then, they applied AI to the datasets to see what would have been different if AI had been used to remove negative mammograms in the initial assessments and physicians had followed standard diagnostic procedures to evaluate the rest.
    For example, consider the largest dataset, which contained 11,592 mammograms. When scaled to 10,000 mammograms (to make the math simpler for the purposes of the simulation), AI identified 34.9% as negative. If those 3,485 negative mammograms had been removed from the workload, radiologists would have made 897 callbacks for diagnostic exams, a reduction of 23.7% from the 1,159 they made in reality. At the next step, 190 people would have been called in a second time for biopsies, a reduction of 6.9% from the 200 in reality. At the end of the process, both the AI rule-out and real-world standard-of-care approaches identified the same 55 cancers. In other words, this study of AI suggests that out of 10,000 people who underwent initial mammograms, 262 could have avoided diagnostic exams, and 10 could have avoided biopsies, without any cancer cases being missed.
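The bottom-line arithmetic of this example can be restated directly from the figures above (all counts are the study's reported numbers, scaled to 10,000 mammograms):

```python
# Reported counts at each step, with and without the AI rule-out.
standard_callbacks, ai_callbacks = 1159, 897   # diagnostic-exam callbacks
standard_biopsies, ai_biopsies = 200, 190      # second-stage biopsies
cancers_found = 55                             # identical under both approaches

exams_avoided = standard_callbacks - ai_callbacks
biopsies_avoided = standard_biopsies - ai_biopsies
print(exams_avoided, biopsies_avoided)  # 262 10
```

The key point is that the subtraction happens entirely on the negative side: the 55 detected cancers are unchanged, so every avoided exam and biopsy is a spared false positive.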
    “At the end of the day, we believe in a world where the doctor is the superhero who finds cancer and helps patients navigate their journey ahead,” said co-author Jason Su, co-founder and chief technology officer at the startup. “The way AI systems can help is by being in a supporting role. By accurately assessing the negatives, it can help remove the hay from the haystack so doctors can find the needle more easily. This study demonstrates that AI can potentially be highly accurate in identifying negative exams. More importantly, the results showed that automating the detection of negatives may also lead to a tremendous benefit in the reduction of false positives without changing the cancer detection rate.”