More stories

  • Battery-free robots use origami to change shape in mid-air

    Researchers at the University of Washington have developed small robotic devices that can change how they move through the air by “snapping” into a folded position during their descent.
    When these “microfliers” are dropped from a drone, they use a Miura-ori origami fold to switch from tumbling and dispersing outward through the air to dropping straight to the ground. To spread out the fliers, the researchers control the timing of each device’s transition using a few methods: an onboard pressure sensor (estimating altitude), an onboard timer or a Bluetooth signal.
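    As a rough illustration of how such a trigger could be wired up (a minimal Python sketch under assumed thresholds and sensor readings, not the team’s actual firmware), the fold decision simply fires on whichever condition is reached first:

      # Illustrative fold-trigger logic for a microflier; not the UW firmware.
      # The pressure threshold, timeout and function names are assumptions.
      SEA_LEVEL_HPA = 1013.25

      def altitude_m(pressure_hpa: float) -> float:
          """Standard barometric formula: rough altitude estimate from pressure."""
          return 44330.0 * (1.0 - (pressure_hpa / SEA_LEVEL_HPA) ** 0.1903)

      def should_fold(pressure_hpa: float, elapsed_s: float, radio_command: bool,
                      fold_altitude_m: float = 20.0, timeout_s: float = 8.0) -> bool:
          """Snap to the folded state when any configured trigger fires:
          estimated altitude, onboard timer, or a Bluetooth command."""
          return (altitude_m(pressure_hpa) <= fold_altitude_m
                  or elapsed_s >= timeout_s
                  or radio_command)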
    Microfliers weigh about 400 milligrams — about half as heavy as a nail — and can travel the distance of a football field when dropped from 40 meters (about 131 feet) in a light breeze. Each device has an onboard battery-free actuator, a solar power-harvesting circuit and controller to trigger these shape changes in mid-air. Microfliers also have the capacity to carry onboard sensors to survey temperature, humidity and other conditions while soaring.
    The team published these results Sept. 13 in Science Robotics.
    “Using origami opens up a new design space for microfliers,” said co-senior author Vikram Iyer, UW assistant professor in the Paul G. Allen School of Computer Science & Engineering. “We combine the Miura-ori fold, which is inspired by geometric patterns found in leaves, with power harvesting and tiny actuators to allow our fliers to mimic the flight of different leaf types in mid-air. In its unfolded flat state, our origami structure tumbles chaotically in the wind, similar to an elm leaf. But switching to the folded state changes the airflow around it and enables a stable descent, similarly to how a maple leaf falls. This highly energy efficient method allows us to have battery-free control over microflier descent, which was not possible before.”
    These robotic systems overcome several design challenges. The devices:
      • are stiff enough to avoid accidentally transitioning to the folded state before the signal.
      • transition between states rapidly. The devices’ onboard actuators need only about 25 milliseconds to initiate the folding.
      • change shape while untethered from a power source. The microfliers’ power-harvesting circuit uses sunlight to provide energy to the actuator.
    The current microfliers can only transition in one direction — from the tumbling state to the falling state. This switch allows researchers to control the descent of multiple microfliers at the same time, so they disperse in different directions on their way down.
    Future devices will be able to transition in both directions, the researchers said. This added functionality will allow for more precise landings in turbulent wind conditions.
    Additional co-authors on this paper are Kyle Johnson and Vicente Arroyos, both UW doctoral students in the Allen School; Amélie Ferran, a UW doctoral student in the mechanical engineering department; Raul Villanueva, Dennis Yin and Tilboon Elberier, who completed this work as UW undergraduate students studying electrical and computer engineering; Alberto Aliseda, UW professor of mechanical engineering; Sawyer Fuller, UW assistant professor of mechanical engineering; and Shyam Gollakota, UW professor in the Allen School.
    This research was funded by a Moore Foundation fellowship, the National Science Foundation, the National GEM Consortium, the Google fellowship program, the Cadence fellowship program, the Washington NASA Space Grant fellowship program and the SPEEA ACE fellowship program.

  • AI foundation model for eye care to supercharge global efforts to prevent blindness

    Researchers at Moorfields Eye Hospital and UCL Institute of Ophthalmology have developed an artificial intelligence (AI) system that has the potential to not only identify sight-threatening eye diseases but also predict general health, including heart attacks, stroke, and Parkinson’s disease.
    RETFound, one of the first AI foundation models in healthcare, and the first in ophthalmology, was developed using millions of eye scans from the NHS. The research team are making the system open-source: freely available to use by any institution worldwide, to act as a cornerstone for global efforts to detect and treat blindness using AI. This work has been published in Nature today.
    Progress in AI continues to accelerate at a dizzying pace, with excitement being generated by the development of ‘foundation’ models such as ChatGPT. A foundation model describes a very large, complex AI system, trained on huge amounts of unlabelled data, which can be fine-tuned for a diverse range of subsequent tasks. RETFound consistently outperforms existing state-of-the-art AI systems across a range of complex clinical tasks, and even more importantly, it addresses a significant shortcoming of many current AI systems by working well in diverse populations, and in patients with rare disease.
    Senior author Professor Pearse Keane (UCL Institute of Ophthalmology and Moorfields Eye Hospital) said: “This is another big step towards using AI to reinvent the eye examination for the 21st century, both in the UK and globally. We show several exemplar conditions where RETFound can be used, but it has the potential to be developed further for hundreds of other sight-threatening eye diseases that we haven’t yet explored.
    “If the UK can combine high quality clinical data from the NHS, with top computer science expertise from its universities, it has the true potential to be a world leader in AI-enabled healthcare. We believe that our work provides a template for how this can be done.”
    AI foundation models have been called “a transformative technology” by the UK government in a report published earlier this year, and have come under the spotlight with the launch in November 2022 of ChatGPT, a foundation model trained using vast quantities of text data to develop a versatile language tool. Taking a comparable approach with eye images in a world-first, RETFound has been trained on millions of retinal scans to create a model that can be adapted for potentially limitless uses.
    One of the key challenges when developing AI models is the need for expert human labels, which are often expensive and time-consuming to acquire. As demonstrated in the paper, RETFound is able to match the performance of other AI systems whilst using as little as 10% of human labels in its dataset. This improvement in label efficiency is achieved by using an innovative self-supervised approach in which RETFound masks parts of an image, and then learns to predict the missing portions by itself.
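    The masking idea itself is simple to sketch. The toy PyTorch snippet below illustrates masked-image pretraining in general, not the RETFound code (which uses a large vision-transformer trained on millions of scans); the module and parameter names are assumptions. It hides a random subset of image patches and trains a small network to reconstruct them, so this stage needs no human labels:

      # Toy masked-image pretraining step; illustrative only, not RETFound.
      import torch
      import torch.nn as nn

      PATCH, MASK_RATIO = 16, 0.75          # assumed patch size and masking ratio

      def to_patches(images):               # images: (batch, 3, H, W)
          b, c, h, w = images.shape
          p = images.unfold(2, PATCH, PATCH).unfold(3, PATCH, PATCH)
          return p.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * PATCH * PATCH)

      class TinyMaskedAutoencoder(nn.Module):
          """Minimal encoder-decoder that reconstructs hidden patches."""
          def __init__(self, dim=256, patch_dim=3 * PATCH * PATCH):
              super().__init__()
              self.encode = nn.Sequential(nn.Linear(patch_dim, dim), nn.GELU(),
                                          nn.Linear(dim, dim))
              self.decode = nn.Linear(dim, patch_dim)

          def forward(self, patches, mask):
              visible = patches * (~mask).unsqueeze(-1)   # zero out the hidden patches
              return self.decode(self.encode(visible))

      def pretrain_step(model, optimizer, images):
          patches = to_patches(images)
          mask = torch.rand(patches.shape[:2]) < MASK_RATIO   # True = patch is hidden
          recon = model(patches, mask)
          loss = ((recon - patches) ** 2)[mask].mean()        # score only the hidden patches
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()
          return loss.item()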

  • New super-fast flood model has potentially life-saving benefits

    A new simulation model that can predict flooding during an ongoing disaster more quickly and accurately than currently possible has been developed by University of Melbourne researchers.
    In a study published in Nature Water, the researchers say the new model has major potential benefits for emergency responses, reducing flood forecasting time from hours and days to just seconds and enabling flood behaviour to be accurately predicted as an emergency unfolds.
    University of Melbourne PhD student Niels Fraehr, alongside Professor Q. J. Wang, Dr Wenyan Wu and Professor Rory Nathan, from the Faculty of Engineering and Information Technology, developed the Low-Fidelity, Spatial Analysis and Gaussian Process Learning (LSG) model to predict the impacts of flooding.
    The LSG model can produce predictions that are as accurate as the most advanced simulation models, but at speeds that are 1,000 times faster.
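    The broad recipe lends itself to a short sketch: learn a mapping from the output of a cheap low-fidelity simulation to that of an expensive high-fidelity one, then at prediction time run only the cheap model and let the learned correction supply the high-fidelity answer in seconds. The Python snippet below uses made-up data and scikit-learn’s Gaussian process regressor; the published LSG model additionally uses spatial dimension reduction and is considerably more elaborate:

      # Illustrative low-fidelity-to-high-fidelity surrogate; not the LSG code.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)
      X_lf = rng.uniform(0, 5, size=(200, 3))   # stand-in features from cheap coarse runs
      y_hf = X_lf @ np.array([1.2, -0.4, 0.7]) + 0.1 * rng.normal(size=200)  # stand-in high-fidelity depths

      gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
      gp.fit(X_lf, y_hf)                        # learn the low-to-high fidelity mapping offline

      # During an emergency: run only the fast model, then correct it almost instantly.
      x_new = rng.uniform(0, 5, size=(1, 3))
      depth_mean, depth_std = gp.predict(x_new, return_std=True)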
    Professor Nathan said the development had enormous potential as an emergency response tool.
    “Currently, our most advanced flood models can accurately simulate flood behaviour, but they’re very slow and can’t be used during a flood event as it unfolds,” said Professor Nathan, who has 40 years’ experience in engineering and environmental hydrology.
    “This new model provides results a thousand times more quickly than previous models, enabling highly accurate modelling to be used in real-time during an emergency. Being able to access up-to-date modelling during a disaster could help emergency services and communities receive much more accurate information about flooding risks and respond accordingly. It’s a game-changer.”
    When put to the test on two vastly different yet equally complex river systems in Australia, the LSG model was able to predict floods with 99 per cent accuracy compared with currently used advanced models: on the Chowilla floodplain in South Australia in 33 seconds instead of 11 hours, and on the Burnett River in Queensland in 27 seconds instead of 36 hours.

  • A linear path to efficient quantum technologies

    Researchers at the University of Stuttgart have demonstrated that a key ingredient for many quantum computation and communication schemes can be performed with an efficiency that exceeds the commonly assumed upper theoretical limit — thereby opening up new perspectives for a wide range of photonic quantum technologies.
    Quantum science has not only revolutionized our understanding of nature, but is also inspiring groundbreaking new computing, communication and sensor devices. Exploiting quantum effects in such ‘quantum technologies’ typically requires a combination of deep insight into the underlying quantum-physical principles, systematic methodological advances, and clever engineering. And it is precisely this combination that researchers in the group of Prof. Stefanie Barz at the University of Stuttgart and the Center for Integrated Quantum Science and Technology (IQST) have delivered in a recent study, in which they improved the efficiency of an essential building block of many quantum devices beyond a seemingly inherent limit.
    From philosophy to technology
    One of the protagonists in the field of quantum technologies is a property known as quantum entanglement. The first step in the development of this concept involved a passionate debate between Albert Einstein and Niels Bohr. In a nutshell, their argument was about how information can be shared across several quantum systems. Importantly, this can happen in ways that have no analogue in classical physics. The discussion that Einstein and Bohr started remained largely philosophical until the 1960s, when the physicist John Stewart Bell devised a way to resolve the disagreement experimentally. Bell’s framework was first explored in experiments with photons, the quanta of light. Three pioneers in this field — Alain Aspect, John Clauser and Anton Zeilinger — were jointly awarded last year’s Nobel Prize in Physics for their groundbreaking work towards quantum technologies.
    Bell himself died in 1990, but his name is immortalized not least in the so-called Bell states. These describe the quantum states of two particles that are as strongly entangled as is possible. There are four Bell states in all, and Bell-state measurements — which determine which of the four states a quantum system is in — are an essential tool for putting quantum entanglement to practical use. Perhaps most famously, Bell-state measurements are the central component in quantum teleportation, which in turn makes most quantum communication and quantum computation possible.
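    For reference, the four Bell states of two qubits are conventionally written (standard notation, not reproduced from the paper) as

      \[
        |\Phi^{\pm}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle \pm |11\rangle\bigr),
        \qquad
        |\Psi^{\pm}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|01\rangle \pm |10\rangle\bigr).
      \]

    A Bell-state measurement projects a two-photon (or two-qubit) system onto this basis and reports which of the four states was found.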
    But there is a problem: when experiments are performed using conventional optical elements, such as mirrors, beam splitters and waveplates, two of the four Bell states have identical experimental signatures and are therefore indistinguishable from each other. This means that the overall probability of success (and thus the success rate of, say, a quantum-teleportation experiment) is inherently limited to 50 percent if only such ‘linear’ optical components are used. Or is it?
    With all the bells and whistles
    This is where the work of the Barz group comes in. As they recently reported in the journal Science Advances, doctoral researchers Matthias Bayerbach and Simone D’Aurelio carried out Bell-state measurements in which they achieved a success rate of 57.9 percent. But how did they reach an efficiency that should have been unattainable with the tools available?

  • In the age of ChatGPT, what’s it like to be accused of cheating?

    While the public release of the artificial intelligence-driven large language model chatbot ChatGPT has created a great deal of excitement around the promise of the technology and expanded use of AI, it has also seeded a good bit of anxiety around what a program that can churn out a passable college-level essay in seconds means for the future of teaching and learning. Naturally, this consternation drove a proliferation of detection programs — of varying effectiveness — and a commensurate increase in accusations of cheating. But how are the students feeling about all of this? Recently published research by Drexel University’s Tim Gorichanaz, Ph.D., provides a first look into some of the reactions of college students who have been accused of using ChatGPT to cheat.
    The study, published in the journal Learning: Research and Practice as part of a series on generative AI, analyzed 49 Reddit posts and their related discussions from college students who had been accused of using ChatGPT on an assignment. Gorichanaz, who is an assistant teaching professor in Drexel’s College of Computing & Informatics, identified a number of themes in these conversations, most notably frustration from wrongly accused students, anxiety about the possibility of being wrongly accused and how to avoid it, and creeping doubt and cynicism about the need for higher education in the age of generative artificial intelligence.
    “As the world of higher ed collectively scrambles to understand and develop best practices and policies around the use of tools like ChatGPT, it’s vital for us to understand how the fascination, anxiety and fear that comes with adopting any new educational technology also affects the students who are going through their own process of figuring out how to use it,” Gorichanaz said.
    Of the 49 students who posted, 38 said they did not use ChatGPT, but detection programs like Turnitin or GPTZero had nonetheless flagged their assignment as being AI-generated. As a result, many of the discussions took on the tenor of a legal argument: students asked how they could present evidence to prove that they hadn’t cheated, and some commenters advised them to keep denying that they had used the program because the detectors are unreliable.
    “Many of the students expressed concern over the possibility of being wrongly accused by an AI detector,” Gorichanaz said. “Some discussions went into great detail about how students could collect evidence to prove that they had written an essay without AI, including tracking draft versions and using screen recording software. Others suggested running a detector on their own writing until it came back without being incorrectly flagged.”
    Another theme that emerged in the discussions was the perceived role of colleges and universities as “gatekeepers” to success and, as a result, the high stakes associated with being wrongly accused of cheating. This led to questions about the institutions’ preparedness for the new technology and concerns that professors would be too dependent on AI detectors — whose accuracy remains in doubt.
    “The conversations happening online evolved from specific doubts about the accuracy of AI detection and universities’ policies around the use of generative AI, to broadly questioning the role of higher education in society and suggesting that the technology will render institutions of higher education irrelevant in the near future,” Gorichanaz said.

  • Ecology and artificial intelligence: Stronger together

    Many of today’s artificial intelligence systems loosely mimic the human brain. In a new paper, researchers suggest that another branch of biology — ecology — could inspire a whole new generation of AI to be more powerful, resilient, and socially responsible.
    Published September 11 in Proceedings of the National Academy of Sciences, the paper argues for a synergy between AI and ecology that could both strengthen AI and help to solve complex global challenges, such as disease outbreaks, loss of biodiversity, and climate change impacts.
    The idea arose from the observation that AI can be shockingly good at certain tasks, but still far from useful at others — and that AI development is hitting walls that ecological principles could help it to overcome.
    “The kinds of problems that we deal with regularly in ecology are not only challenges that AI could benefit from in terms of pure innovation — they’re also the kinds of problems where if AI could help, it could mean so much for the global good,” explained Barbara Han, a disease ecologist at Cary Institute of Ecosystem Studies, who co-led the paper along with IBM Research’s Kush Varshney. “It could really benefit humankind.”
    How AI can help ecology
    Ecologists — Han included — are already using artificial intelligence to search for patterns in large data sets and to make more accurate predictions, such as whether new viruses might be capable of infecting humans, and which animals are most likely to harbor those viruses.
    However, the new paper argues that there are many more possibilities for applying AI in ecology, such as in synthesizing big data and finding missing links in complex systems.

  • Not too big: Machine learning tames huge data sets

    A machine-learning algorithm demonstrated the capability to process data that exceeds a computer’s available memory by identifying a massive data set’s key features and dividing them into manageable batches that don’t choke computer hardware. Developed at Los Alamos National Laboratory, the algorithm set a world record for factorizing huge data sets during a test run on Oak Ridge National Laboratory’s Summit, the world’s fifth-fastest supercomputer.
    Equally efficient on laptops and supercomputers, the highly scalable algorithm overcomes hardware bottlenecks that prevent the processing of information from data-rich applications in cancer research, satellite imagery, social media networks, national security science and earthquake research, to name just a few.
    “We developed an ‘out-of-memory’ implementation of the non-negative matrix factorization method that allows you to factorize larger data sets than previously possible on a given hardware,” said Ismael Boureima, a computational physicist at Los Alamos National Laboratory. Boureima is first author of the paper in The Journal of Supercomputing on the record-breaking algorithm. “Our implementation simply breaks down the big data into smaller units that can be processed with the available resources. Consequently, it’s a useful tool for keeping up with exponentially growing data sets.”
    “Traditional data analysis demands that data fit within memory constraints. Our approach challenges this notion,” said Manish Bhattarai, a machine learning scientist at Los Alamos and co-author of the paper. “We have introduced an out-of-memory solution. When the data volume exceeds the available memory, our algorithm breaks it down into smaller segments. It processes these segments one at a time, cycling them in and out of the memory. This technique equips us with the unique ability to manage and analyze extremely large data sets efficiently.”
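    A minimal out-of-core sketch of that idea in Python is shown below (the file path, block size and update rule are assumptions, and the actual Los Alamos implementation is distributed across nodes and GPU-accelerated). It streams row blocks of the matrix from disk, accumulates the statistics each non-negative matrix factorization update needs, and never holds the full matrix in memory:

      # Illustrative out-of-core NMF with multiplicative updates; not the Los Alamos code.
      import numpy as np

      def out_of_core_nmf(x_path, shape, rank=8, n_iter=50, block=1000, eps=1e-9):
          m, n = shape
          X = np.memmap(x_path, dtype=np.float32, mode="r", shape=shape)  # data stays on disk
          rng = np.random.default_rng(0)
          W = rng.random((m, rank), dtype=np.float32)
          H = rng.random((rank, n), dtype=np.float32)

          for _ in range(n_iter):
              # Update H: accumulate W^T X and W^T W one row block at a time.
              WtX = np.zeros((rank, n), dtype=np.float32)
              WtW = np.zeros((rank, rank), dtype=np.float32)
              for i in range(0, m, block):
                  Xb, Wb = np.asarray(X[i:i + block]), W[i:i + block]
                  WtX += Wb.T @ Xb
                  WtW += Wb.T @ Wb
              H *= WtX / (WtW @ H + eps)

              # Update W: each row block depends only on its own slice of X.
              HHt = H @ H.T
              for i in range(0, m, block):
                  Xb = np.asarray(X[i:i + block])
                  Wb = W[i:i + block]
                  W[i:i + block] = Wb * (Xb @ H.T) / (Wb @ HHt + eps)
          return W, H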
    The distributed algorithm for modern and heterogeneous high-performance computer systems can be useful on hardware as small as a desktop computer, or as large and complex as Chicoma, Summit or the upcoming Venado supercomputers, Boureima said.
    “The question is no longer whether it is possible to factorize a larger matrix, rather how long is the factorization going to take,” Boureima said.
    The Los Alamos implementation takes advantage of hardware features such as GPUs to accelerate computation and fast interconnects to move data efficiently between computers. At the same time, the algorithm efficiently gets multiple tasks done simultaneously.

  • When electronic health records are hard to use, patient safety may be at risk

    New research suggests that hospital electronic health records (EHRs) that are difficult to use are also less likely to catch medical errors that could harm patients.
    As clinicians navigate EHR systems, alerts, reminders, and clinical guidelines pop up to steer decision making. Yet a common complaint is that these notifications are distracting rather than helpful. These frustrations could signal that built-in safety mechanisms similarly suffer from suboptimal design, suggests the new study. Researchers found that EHR systems rated as being difficult to operate did not perform well in safety tests.
    “Poor usability of EHRs is the number one complaint of doctors, nurses, pharmacists, and most health care professionals,” says David Classen, M.D., the study’s corresponding author and a professor of internal medicine at University of Utah Health. “This correlates with poor performance in terms of safety.”
    Classen likens the situation to the software problems that led to two deadly Boeing 737 MAX airplane crashes in 2018 and 2019. In both cases, pilots struggling to use the system foretold deeper safety issues.
    “Our findings suggest that we need to improve EHR systems to make them both easier to use and safer,” Classen says. He collaborated on the study with senior author David Bates, M.D., at Brigham and Women’s Hospital and Harvard T.H. Chan School of Public Health, and scientists at University of California San Diego Health; KLAS Enterprises, LLC; and University of California, San Francisco.
    The research appears in the September 11 issue of JAMA Network Open.
    Experts estimate that as many as 400,000 people are injured each year from medical errors that occur in hospitals. Medical professionals predicted that widespread use of EHRs would mitigate the problem. But research published by Classen, Bates and colleagues in 2020 showed that EHRs failed to reliably detect medical errors that could harm patients, including dangerous drug interactions. Additional reports have indicated that poorly designed EHRs could be a contributing factor.