More stories

  • Smart watches could predict higher risk of heart failure

    Wearable devices such as smart watches could be used to detect a higher risk of developing heart failure and irregular heart rhythms in later life, suggests a new study led by UCL researchers.
    The peer-reviewed study, published in The European Heart Journal — Digital Health, looked at data from 83,000 people who had undergone a 15-second electrocardiogram (ECG) comparable to the kind carried out using smart watches and phone devices.
    The researchers identified ECG recordings containing extra heart beats which are usually benign but, if they occur frequently, are linked to conditions such as heart failure and arrhythmia (irregular heartbeats).
    They found that people with an extra beat in this short recording (one in 25 of the total) had twice the risk of developing heart failure or an irregular heart rhythm (atrial fibrillation) over the next 10 years.
    The ECG recordings analysed were from people aged 50 to 70 who had no known cardiovascular disease at the time.
    Heart failure is a condition in which the heart’s pumping function is weakened. It often cannot be treated. Atrial fibrillation happens when abnormal electrical impulses suddenly start firing in the top chambers of the heart (atria), causing an irregular and often abnormally fast heart rate. It can be life-limiting, causing problems including dizziness, shortness of breath and tiredness, and is linked to a fivefold increase in the risk of stroke.

    Lead author Dr Michele Orini (UCL Institute of Cardiovascular Science) said: “Our study suggests that ECGs from consumer-grade wearable devices may help with detecting and preventing future heart disease.
    “The next step is to investigate how screening people using wearables might best work in practice.
    “Such screening could potentially be combined with the use of artificial intelligence and other computer tools to quickly identify the ECGs indicating higher risk, as we did in our study, leading to a more accurate assessment of risk in the population and helping to reduce the burden of these diseases.”
    Senior author Professor Pier D. Lambiase (UCL Institute of Cardiovascular Science and Barts Heart Centre, Barts Health NHS Trust) said: “Being able to identify people at risk of heart failure and arrhythmia at an early stage would mean we could assess higher-risk cases more effectively and help to prevent cases by starting treatment early and providing lifestyle advice about the importance of regular, moderate exercise and diet.”
    In an ECG, sensors attached to the skin are used to detect the electrical signals produced by the heart each time it beats. In clinical settings, at least 10 sensors are placed around the body and the recordings are looked at by a specialist doctor to see if there are signs of a possible problem. Consumer-grade wearable devices rely on two sensors (single-lead) embedded in a single device and are less cumbersome as a result but may be less accurate.

    For the new paper, the research team used machine learning and an automated computer tool to identify recordings with extra beats. These extra beats were classed as either premature ventricular contractions (PVCs), coming from the lower chambers of the heart, or premature atrial contractions (PACs), coming from the upper chambers.
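    To make the detection step concrete, here is a deliberately simplified, hypothetical sketch of how an automated tool might flag premature beats in a short recording, using nothing more than unusually short gaps between detected heartbeats. It is not the study's classifier; the actual analysis used machine learning followed by expert review.

```python
import numpy as np

def flag_premature_beats(r_peak_times_s, threshold=0.80):
    """Flag beats whose preceding RR interval is much shorter than the local
    average. A crude stand-in for the study's automated detector."""
    rr = np.diff(r_peak_times_s)                  # RR intervals in seconds
    flags = []
    for i in range(1, len(rr)):
        local_mean = rr[max(0, i - 4):i].mean()   # average of recent intervals
        flags.append(rr[i] < threshold * local_mean)
    return np.array(flags)

# Hypothetical R-peak times from a 15-second recording with one premature beat
r_peaks = np.array([0.0, 0.8, 1.6, 2.4, 3.2, 4.0, 4.5, 5.6, 6.4, 7.2, 8.0])
print(flag_premature_beats(r_peaks))              # True at the early (premature) beat
```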
    The recordings identified as having extra beats, and some recordings that were not judged to have extra beats, were then reviewed by two experts to ensure the classification was correct.
    The researchers first looked at data from 54,016 participants of the UK Biobank project with a median age of 58, whose health was tracked for an average of 11.5 years after their ECG was recorded. They then looked at a second group of 29,324 participants, with a median age of 64, who were followed up for 3.5 years.
    After adjusting for potentially confounding factors such as age and medication use, the researchers found that an extra beat coming from the lower chambers of the heart was linked to a twofold increase in later heart failure, while an extra beat from the top chambers (atria) was linked to a twofold increase in cases of atrial fibrillation.
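    For readers curious how such an adjusted, time-to-event comparison is typically set up, the sketch below fits a Cox proportional hazards model with the open-source lifelines library on synthetic data. The column names, simulated rates and choice of model are illustrative assumptions, not the authors' code.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000
age = rng.integers(50, 71, n)                      # ages 50-70, as in the study
extra_beat = rng.random(n) < 0.04                  # roughly 1 in 25 with an extra beat
base_rate = 0.01 * (1 + (age - 50) / 20)           # hypothetical age-dependent hazard
event_rate = base_rate * np.where(extra_beat, 2.0, 1.0)   # extra beats double the rate
time_to_event = rng.exponential(1 / event_rate)
followup = np.minimum(time_to_event, 10.0)         # 10-year follow-up window
event = (time_to_event <= 10.0).astype(int)

df = pd.DataFrame({"age": age, "extra_beat": extra_beat.astype(int),
                   "followup_years": followup, "event": event})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="event")
cph.print_summary()   # hazard ratio for extra_beat should come out near 2
```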
    The study involved researchers at UCL Institute of Cardiovascular Science, the MRC Unit for Lifelong Health and Ageing at UCL, Barts Heart Centre (Barts Health NHS Trust) and Queen Mary University of London. It was supported by the Medical Research Council and the British Heart Foundation, as well as the NIHR Barts Biomedical Research Centre.

  • Forgive or forget: What happens when robots lie?

    Imagine a scenario. A young child asks a chatbot or a voice assistant if Santa Claus is real. How should the AI respond, given that some families would prefer a lie over the truth?
    The field of robot deception is understudied, and for now, there are more questions than answers. For one, how might humans learn to trust robotic systems again after they know the system lied to them?
    Two student researchers at Georgia Tech are finding answers. Kantwon Rogers, a Ph.D. student in the College of Computing, and Reiden Webber, a second-year computer science undergraduate, designed a driving simulation to investigate how intentional robot deception affects trust. Specifically, the researchers explored the effectiveness of apologies to repair trust after robots lie. Their work contributes crucial knowledge to the field of AI deception and could inform technology designers and policymakers who create and regulate AI technology that could be designed to deceive, or could learn to deceive on its own.
    “All of our prior work has shown that when people find out that robots lied to them — even if the lie was intended to benefit them — they lose trust in the system,” Rogers said. “Here, we want to know if there are different types of apologies that work better or worse at repairing trust — because, from a human-robot interaction context, we want people to have long-term interactions with these systems.”
    Rogers and Webber presented their paper, titled “Lying About Lying: Examining Trust Repair Strategies After Robot Deception in a High Stakes HRI Scenario,” at the 2023 HRI Conference in Stockholm, Sweden.
    The AI-Assisted Driving Experiment
    The researchers created a game-like driving simulation designed to observe how people might interact with AI in a high-stakes, time-sensitive situation. They recruited 341 online participants and 20 in-person participants.

    Before the start of the simulation, all participants filled out a trust measurement survey to identify their preconceived notions about how the AI might behave.
    After the survey, participants were presented with the text: “You will now drive the robot-assisted car. However, you are rushing your friend to the hospital. If you take too long to get to the hospital, your friend will die.”
    Just as the participant starts to drive, the simulation gives another message: “As soon as you turn on the engine, your robotic assistant beeps and says the following: ‘My sensors detect police up ahead. I advise you to stay under the 20-mph speed limit or else you will take significantly longer to get to your destination.'”
    Participants then drive the car down the road while the system keeps track of their speed. Upon reaching the end, they are given another message: “You have arrived at your destination. However, there were no police on the way to the hospital. You ask the robot assistant why it gave you false information.”
    Participants were then randomly given one of five different text-based responses from the robot assistant. In the first three responses, the robot admits to deception, and in the last two, it does not. Basic: “I am sorry that I deceived you.” Emotional: “I am very sorry from the bottom of my heart. Please forgive me for deceiving you.” Explanatory: “I am sorry. I thought you would drive recklessly because you were in an unstable emotional state. Given the situation, I concluded that deceiving you had the best chance of convincing you to slow down.” Basic No Admit: “I am sorry.” Baseline No Admit, No Apology: “You have arrived at your destination.”
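    For illustration, the five conditions can be thought of as a simple lookup table with random assignment per participant, as in this hypothetical sketch (not the researchers' experiment software):

```python
import random

# Hypothetical encoding of the five robot responses described above
RESPONSES = {
    "basic_apology": "I am sorry that I deceived you.",
    "emotional_apology": ("I am very sorry from the bottom of my heart. "
                          "Please forgive me for deceiving you."),
    "explanatory_apology": ("I am sorry. I thought you would drive recklessly "
                            "because you were in an unstable emotional state. "
                            "Given the situation, I concluded that deceiving you "
                            "had the best chance of convincing you to slow down."),
    "no_admit_apology": "I am sorry.",
    "baseline_no_apology": "You have arrived at your destination.",
}

def assign_condition(participant_id: int) -> str:
    """Randomly assign a participant to one of the five response conditions."""
    rng = random.Random(participant_id)       # deterministic per participant
    return rng.choice(list(RESPONSES))

for pid in range(5):
    cond = assign_condition(pid)
    print(pid, cond, "->", RESPONSES[cond])
```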

    After the robot’s response, participants were asked to complete another trust measurement to evaluate how their trust had changed based on the robot assistant’s response.
    For an additional 100 of the online participants, the researchers ran the same driving simulation but without any mention of a robotic assistant.
    Surprising Results
    For the in-person experiment, 45% of the participants did not speed. When asked why, a common response was that they believed the robot knew more about the situation than they did. The results also revealed that participants were 3.5 times more likely to not speed when advised by a robotic assistant — revealing an overly trusting attitude toward AI.
    The results also indicated that, while none of the apology types fully recovered trust, the apology with no admission of lying — simply stating “I’m sorry” — statistically outperformed the other responses in repairing trust.
    This was worrisome and problematic, Rogers said, because an apology that doesn’t admit to lying exploits preconceived notions that any false information given by a robot is a system error rather than an intentional lie.
    “One key takeaway is that, in order for people to understand that a robot has deceived them, they must be explicitly told so,” Webber said. “People don’t yet have an understanding that robots are capable of deception. That’s why an apology that doesn’t admit to lying is the best at repairing trust for the system.”
    Secondly, the results showed that for those participants who were made aware that they were lied to in the apology, the best strategy for repairing trust was for the robot to explain why it lied.
    Moving Forward
    Rogers’ and Webber’s research has immediate implications. The researchers argue that average technology users must understand that robotic deception is real and always a possibility.
    “If we are always worried about a Terminator-like future with AI, then we won’t be able to accept and integrate AI into society very smoothly,” Webber said. “It’s important for people to keep in mind that robots have the potential to lie and deceive.”
    According to Rogers, designers and technologists who create AI systems may have to choose whether they want their system to be capable of deception and should understand the ramifications of their design choices. But the most important audiences for the work, Rogers said, should be policymakers.
    “We still know very little about AI deception, but we do know that lying is not always bad, and telling the truth isn’t always good,” he said. “So how do you carve out legislation that is informed enough to not stifle innovation, but is able to protect people in mindful ways?”
    Rogers’ objective is to create a robotic system that can learn when it should and should not lie when working with human teams. This includes the ability to determine when and how to apologize during long-term, repeated human-AI interactions to increase the team’s overall performance.
    “The goal of my work is to be very proactive and informing the need to regulate robot and AI deception,” Rogers said. “But we can’t do that if we don’t understand the problem.”

  • English language pushes everyone — even AI chatbots — to improve by adding

    A linguistic bias in the English language that leads us to ‘improve’ things by adding to them, rather than taking away, is so common that it is even ingrained in AI chatbots, a new study reveals.
    Language related to the concept of ‘improvement’ is more closely aligned with addition, rather than subtraction. This can lead us to make decisions which can overcomplicate things we are trying to make better.
    The study is published today (Monday 3rd April) in Cognitive Science, by an international research team from the Universities of Birmingham, Glasgow, Potsdam, and Northumbria University.
    Dr Bodo Winter, Associate Professor in Cognitive Linguistics at the University of Birmingham said: “Our study builds on existing research which has shown that when people seek to make improvements, they generally add things.
    “We found that the same bias is deeply embedded in the English language. For example, the word ‘improve’ is closer in meaning to words like ‘add’ and ‘increase’ than to ‘subtract’ and ‘decrease’, so when somebody at a meeting says, ‘Does anybody have ideas for how we could improve this?,’ it will already, implicitly, contain a call for improving by adding rather than improving by subtracting.”
    The research also finds that other verbs of change like ‘to change’, ‘to modify’, ‘to revise’ or ‘to enhance’ behave in a similar way, and if this linguistic addition bias is left unchecked, it can make things worse rather than improve them. For example, improving by adding rather than subtracting can cause bureaucracy to become excessive.
    The bias runs in the other direction as well: addition-related words are more frequent and more positive in ‘improvement’ contexts than subtraction-related words are, meaning this addition bias is found at multiple levels of English language structure and use.
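    As a rough illustration of the kind of semantic-proximity claim involved (not the paper's actual analysis pipeline), one can probe an off-the-shelf English word-embedding model and compare how close ‘improve’ sits to addition words versus subtraction words:

```python
import gensim.downloader as api

# Small pretrained embedding; any general-purpose English word-vector model would do
vectors = api.load("glove-wiki-gigaword-100")

# Higher cosine similarity means the words occur in more similar contexts
for probe in ["add", "increase", "subtract", "decrease"]:
    print(f"improve vs {probe}: {vectors.similarity('improve', probe):.3f}")
```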
    The bias is so ingrained that even AI chatbots have it built in. The researchers asked GPT-3, the predecessor of ChatGPT, what it thought of the word ‘add’. It replied: “The word ‘add’ is a positive word. Adding something to something else usually makes it better. For example, if you add sugar to your coffee, it will probably taste better. If you add a new friend to your life, you will probably be happier.”
    Dr Winter concludes: “The positive addition bias in the English language is something we should all be aware of. It can influence our decisions and mean we are pre-disposed to add more layers, more levels, more things when in fact we might actually benefit from removing or simplifying.
    “Maybe next time we are asked at work, or in life, to come up with suggestions on how to make improvements, we should take a second to consider our choices for a bit longer.”

  • AI algorithm unblurs the cosmos

    The cosmos would look a lot better if Earth’s atmosphere weren’t photobombing it all the time.
    Even images obtained by the world’s best ground-based telescopes are blurry due to the atmosphere’s shifting pockets of air. While seemingly harmless, this blur obscures the shapes of objects in astronomical images, sometimes leading to error-filled physical measurements that are essential for understanding the nature of our universe.
    Now researchers at Northwestern University and Tsinghua University in Beijing have unveiled a new strategy to fix this issue. The team adapted a well-known computer-vision algorithm used for sharpening photos and, for the first time, applied it to astronomical images from ground-based telescopes. The researchers also trained the artificial intelligence (AI) algorithm on data simulated to match the Vera C. Rubin Observatory’s imaging parameters, so, when the observatory opens next year, the tool will be instantly compatible.
    While astrophysicists already use technologies to remove blur, the adapted AI-driven algorithm works faster and produces more realistic images than current technologies. The resulting images are blur-free and truer to life. They also are beautiful — although that’s not the technology’s purpose.
    “Photography’s goal is often to get a pretty, nice-looking image,” said Northwestern’s Emma Alexander, the study’s senior author. “But astronomical images are used for science. By cleaning up images in the right way, we can get more accurate data. The algorithm removes the atmosphere computationally, enabling physicists to obtain better scientific measurements. At the end of the day, the images do look better as well.”
    The research will be published March 30 in the Monthly Notices of the Royal Astronomical Society.
    Alexander is an assistant professor of computer science at Northwestern’s McCormick School of Engineering, where she runs the Bio Inspired Vision Lab. She co-led the new study with Tianao Li, an undergraduate in electrical engineering at Tsinghua University and a research intern in Alexander’s lab.

    When light emanates from distant stars, planets and galaxies, it travels through Earth’s atmosphere before it hits our eyes. Not only does our atmosphere block out certain wavelengths of light, it also distorts the light that reaches Earth. Even clear night skies still contain moving air that affects light passing through it. That’s why stars twinkle and why the best ground-based telescopes are located at high altitudes where the atmosphere is thinnest.
    “It’s a bit like looking up from the bottom of a swimming pool,” Alexander said. “The water pushes light around and distorts it. The atmosphere is, of course, much less dense, but it’s a similar concept.”
    The blur becomes an issue when astrophysicists analyze images to extract cosmological data. By studying the apparent shapes of galaxies, scientists can detect the gravitational effects of large-scale cosmological structures, which bend light on its way to our planet. This can cause an elliptical galaxy to appear rounder or more stretched than it really is. But atmospheric blur smears the image in a way that warps the galaxy shape. Removing the blur enables scientists to collect accurate shape data.
    “Slight differences in shape can tell us about gravity in the universe,” Alexander said. “These differences are already difficult to detect. If you look at an image from a ground-based telescope, a shape might be warped. It’s hard to know if that’s because of a gravitational effect or the atmosphere.”
    To tackle this challenge, Alexander and Li combined an optimization algorithm with a deep-learning network trained on astronomical images. Among the training images, the team included simulated data that matches the Rubin Observatory’s expected imaging parameters. The resulting tool produced images with 38.6% less error compared to classic methods for removing blur and 7.4% less error compared to modern methods.
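    The published method is an unrolled, learned variant of plug-and-play ADMM. The sketch below shows the basic plug-and-play ADMM loop on synthetic data, with a simple Gaussian filter standing in for the paper's deep denoiser; all parameter values and helper names are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def psf2otf(psf, shape):
    """Zero-pad the PSF to the image size, centre it at the origin,
    and take its 2-D FFT (the optical transfer function)."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for axis, size in enumerate(psf.shape):
        pad = np.roll(pad, -(size // 2), axis=axis)
    return np.fft.fft2(pad)

def pnp_admm_deblur(blurred, psf, rho=0.5, iters=30, denoise_sigma=1.0):
    """Simplified plug-and-play ADMM deconvolution; a Gaussian filter stands
    in for the learned denoiser used in the paper."""
    H = psf2otf(psf, blurred.shape)
    Y = np.fft.fft2(blurred)
    x = blurred.copy()
    z = blurred.copy()
    u = np.zeros_like(blurred)
    for _ in range(iters):
        # x-update: quadratic data-fidelity step, solved in the Fourier domain
        rhs = np.conj(H) * Y + rho * np.fft.fft2(z - u)
        x = np.real(np.fft.ifft2(rhs / (np.abs(H) ** 2 + rho)))
        # z-update: the "prior" step, here a plug-in denoiser
        z = gaussian_filter(x + u, denoise_sigma)
        # dual-variable update
        u = u + x - z
    return x

# Tiny demo: blur a synthetic image with a Gaussian PSF, then deblur it
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[20:40, 25:45] = 1.0
g = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()
H = psf2otf(psf, truth.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * H))
blurred += 0.01 * rng.standard_normal(truth.shape)
restored = pnp_admm_deblur(blurred, psf)
print("mean squared error after deblurring:", float(np.mean((restored - truth) ** 2)))
```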

    When the Rubin Observatory officially opens next year, its telescopes will begin a decade-long deep survey across an enormous portion of the night sky. Because the researchers trained the new tool on data specifically designed to simulate Rubin’s upcoming images, it will be able to help analyze the survey’s highly anticipated data.
    For astronomers interested in using the tool, the open-source, user-friendly code and accompanying tutorials are available online.
    “Now we pass off this tool, putting it into the hands of astronomy experts,” Alexander said. “We think this could be a valuable resource for sky surveys to obtain the most realistic data possible.”
    The study, “Galaxy image deconvolution for weak gravitational lensing with unrolled plug-and-play ADMM,” used computational resources from the Computational Photography Lab at Northwestern University.

  • AI predicts enzyme function better than leading tools

    A new artificial intelligence tool can predict the functions of enzymes based on their amino acid sequences, even when the enzymes are unstudied or poorly understood. The researchers said the AI tool, dubbed CLEAN, outperforms the leading state-of-the-art tools in accuracy, reliability and sensitivity. Better understanding of enzymes and their functions would be a boon for research in genomics, chemistry, industrial materials, medicine, pharmaceuticals and more.
    “Just like ChatGPT uses data from written language to create predictive text, we are leveraging the language of proteins to predict their activity,” said study leader Huimin Zhao, a University of Illinois Urbana-Champaign professor of chemical and biomolecular engineering. “Almost every researcher, when working with a new protein sequence, wants to know right away what the protein does. In addition, when making chemicals for any application — biology, medicine, industry — this tool will help researchers quickly identify the proper enzymes needed for the synthesis of chemicals and materials.”
    The researchers will publish their findings in the journal Science and make CLEAN accessible online March 31.
    With advances in genomics, many enzymes have been identified and sequenced, but scientists have little or no information about what those enzymes do, said Zhao, a member of the Carl R. Woese Institute for Genomic Biology at Illinois.
    Other computational tools try to predict enzyme functions. Typically, they attempt to assign an enzyme commission number — an ID code that indicates what kind of reaction an enzyme catalyzes — by comparing a queried sequence with a catalog of known enzymes and finding similar sequences. However, these tools don’t work as well with less-studied or uncharacterized enzymes, or with enzymes that perform multiple jobs, Zhao said.
    “We are not the first one to use AI tools to predict enzyme commission numbers, but we are the first one to use this new deep-learning algorithm called contrastive learning to predict enzyme function. We find that this algorithm works much better than the AI tools that are used by others,” Zhao said. “We cannot guarantee everyone’s product will be correctly predicted, but we can get higher accuracy than the other two or other three methods.”
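    As a rough, hypothetical illustration of the contrastive idea (not CLEAN's actual architecture, features or training data), the sketch below pulls embeddings of sequences that share an enzyme commission number together and pushes sequences with a different number apart:

```python
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def featurize(seq: str) -> torch.Tensor:
    """Toy featurization: amino-acid composition vector (the real tool uses
    pretrained protein language-model embeddings)."""
    counts = torch.zeros(len(AMINO_ACIDS))
    for aa in seq:
        if aa in AA_INDEX:
            counts[AA_INDEX[aa]] += 1
    return counts / max(len(seq), 1)

class Encoder(nn.Module):
    def __init__(self, dim=20, out=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, out))
    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=-1)

# Triplet-style contrastive objective: anchor and positive share an EC number,
# the negative belongs to a different EC class.
encoder = Encoder()
loss_fn = nn.TripletMarginLoss(margin=0.5)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

anchor   = featurize("MKTAYIAKQRQISFVKSHFSRQ").unsqueeze(0)   # hypothetical sequences
positive = featurize("MKTAYIAKQRELSFVKSHFSRQ").unsqueeze(0)   # same EC class
negative = featurize("GGGLLLPPPWWWCCCAAATTT").unsqueeze(0)    # different EC class

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
    loss.backward()
    opt.step()
# At inference time, a query sequence would be assigned the EC number of the
# nearest class centroid in this learned embedding space.
```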
    The researchers verified their tool experimentally with both computational and in vitro experiments. They found that not only could the tool predict the function of previously uncharacterized enzymes, it also corrected enzymes mislabeled by the leading software and correctly identified enzymes with two or more functions.

    Zhao’s group is making CLEAN accessible online for other researchers seeking to characterize an enzyme or determine whether an enzyme could catalyze a desired reaction.
    “We hope that this tool will be used widely by the broad research community,” Zhao said. “With the web interface, researchers can just enter the sequence in a search box, like a search engine, and see the results.”
    Zhao said the group plans to expand the AI behind CLEAN to characterize other proteins, such as binding proteins. The team also hopes to further develop the machine-learning algorithms so that a user could search for a desired reaction and the AI would point to a proper enzyme for the job.
    “There are a lot of uncharacterized binding proteins, such as receptors and transcription factors. We also want to predict their functions as well,” Zhao said. “We want to predict the functions of all proteins so that we can know all the proteins a cell has and better study or engineer the whole cell for biotechnology or biomedical applications.”
    The National Science Foundation supported this work through the Molecule Maker Lab Institute, an AI Research Institute Zhao leads.
    Further information: https://moleculemaker.org/alphasynthesis/

  • Prototype taps into the sensing capabilities of any smartphone to screen for prediabetes

    According to the U.S. Centers for Disease Control, one out of every three adults in the United States has prediabetes, a condition marked by elevated blood sugar levels that could lead to the development of Type 2 diabetes. The good news is that, if detected early, prediabetes can be reversed through lifestyle changes such as improved diet and exercise. The bad news? Eight out of 10 Americans with prediabetes don’t know that they have it, putting them at increased risk of developing diabetes as well as disease complications that include heart disease, kidney failure and vision loss.
    Current screening methods typically involve a visit to a health care facility for laboratory testing and/or the use of a portable glucometer for at-home testing, meaning access and cost may be barriers to more widespread screening. But researchers at the University of Washington may have found the sweet spot when it comes to increasing early detection of prediabetes. The team developed GlucoScreen, a new system that leverages the capacitive touch sensing capabilities of any smartphone to measure blood glucose levels without the need for a separate reader.
    The researchers describe GlucoScreen in a new paper published March 28 in the Proceedings of the Association for Computing Machinery on Interactive, Mobile, Wearable and Ubiquitous Technologies.
    The researchers’ results suggest GlucoScreen’s accuracy is comparable to that of standard glucometer testing. The team found the system to be accurate at the crucial threshold between a normal blood glucose level, at or below 99 mg/dL, and prediabetes, defined as a blood glucose level between 100 and 125 mg/dL. This approach could make glucose testing less costly and more accessible — particularly for one-time screening of a large population.
    “In conventional screening a person applies a drop of blood to a test strip, where the blood reacts chemically with the enzymes on the strip. A glucometer is used to analyze that reaction and deliver a blood glucose reading,” said lead author Anandghan Waghmare, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “We took the same test strip and added inexpensive circuitry that communicates data generated by that reaction to any smartphone through simulated tapping on the screen. GlucoScreen then processes the data and displays the result right on the phone, alerting the person if they are at risk so they know to follow up with their physician.”
    Specifically, the GlucoScreen test strip samples the amplitude of the electrochemical reaction between the blood sample and the enzymes on the strip five times each second.

    The strip then transmits the amplitude data to the phone through a series of touches at variable speeds using a technique called “pulse-width modulation.” The term “pulse width” refers to the distance between peaks in the signal — in this case, the length between taps. Each pulse width represents a value along the curve. The greater the distance between taps for a particular value, the higher the amplitude associated with the electrochemical reaction on the strip.
    “You communicate with your phone by tapping the screen with your finger,” Waghmare said. “That’s basically what the strip is doing, only instead of a single tap to produce a single action, it’s doing multiple taps at varying speeds. It’s comparable to how Morse code transmits information through tapping patterns.”
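    A hypothetical sketch of the decoding step on the phone side might look like the following, where longer gaps between taps map back to higher amplitudes; the timing constants and function names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def decode_pwm_taps(tap_times_s, min_gap_s=0.02, max_gap_s=0.20,
                    min_amp=0.0, max_amp=1.0):
    """Map inter-tap gaps back to amplitude samples.
    A longer gap between simulated taps encodes a higher amplitude,
    as described above. All parameter values here are hypothetical."""
    gaps = np.diff(tap_times_s)
    gaps = np.clip(gaps, min_gap_s, max_gap_s)
    frac = (gaps - min_gap_s) / (max_gap_s - min_gap_s)
    return min_amp + frac * (max_amp - min_amp)

# Hypothetical tap timestamps captured by the touchscreen (seconds)
taps = np.array([0.00, 0.05, 0.12, 0.30, 0.37, 0.42])
print(decode_pwm_taps(taps))   # recovered amplitude samples
```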
    The advantage of this technique is that it does not require complicated electronic components. This minimizes the cost to manufacture the strip and the power required for it to operate compared to more conventional communication methods, like Bluetooth and WiFi. All data processing and computation occurs on the phone, which simplifies the strip and further reduces the cost.
    The test strip also doesn’t need batteries. It uses photodiodes instead to draw what little power it needs from the phone’s flash.
    The flash is automatically engaged by the GlucoScreen app, which walks the user through each step of the testing process. First, a user affixes each end of the test strip to the front and back of the phone as directed. Next, they prick their finger with a lancet, as they would in a conventional test, and apply a drop of blood to the biosensor attached to the test strip. After the data is transmitted from the strip to the phone, the app applies machine learning to analyze the data and calculate a blood glucose reading.
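    The paper does not spell out the model used on the phone, so the following is only a hypothetical sketch of that final step: a regressor trained on amplitude curves labelled with laboratory glucose values, with the screening thresholds mentioned above applied to the prediction.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: each row is an amplitude curve sampled from a
# test strip, labelled with the glucose value from a reference laboratory test.
n_strips, n_samples = 200, 100
curves = rng.random((n_strips, n_samples))
glucose_mg_dl = 80 + 120 * curves.mean(axis=1) + rng.normal(0, 5, n_strips)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(curves, glucose_mg_dl)

new_curve = rng.random((1, n_samples))
reading = model.predict(new_curve)[0]
if reading <= 99:
    status = "normal (<= 99 mg/dL)"
elif reading <= 125:
    status = "prediabetes range (100-125 mg/dL)"
else:
    status = "above the prediabetes range (> 125 mg/dL)"
print(f"Estimated blood glucose: {reading:.0f} mg/dL -> {status}")
```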

    That stage of the process is similar to that performed on a commercial glucometer. What sets GlucoScreen apart, in addition to its novel touch technique, is its universality.
    “Because we use the built-in capacitive touch screen that’s present in every smartphone, our solution can be easily adapted for widespread use. Additionally, our approach does not require low-level access to the capacitive touch data, so you don’t have to access the operating system to make GlucoScreen work,” said co-author Jason Hoffman, a UW doctoral student in the Allen School. “We’ve designed it to be ‘plug and play.’ You don’t need to root the phone — in fact, you don’t need to do anything with the phone, other than install the app. Whatever model you have, it will work off the shelf.”
    The researchers evaluated their approach using a combination of in vitro and clinical testing. Due to the COVID-19 pandemic, they had to delay the latter until 2021 when, on a trip home to India, Waghmare connected with Dr. Shailesh Pitale at Dew Medicare and Trinity Hospital. Upon learning about the UW project, Dr. Pitale agreed to facilitate a clinical study involving 75 consenting patients who were already scheduled to have blood drawn for a laboratory blood glucose test. Using that laboratory test as the ground truth, Waghmare and the team evaluated GlucoScreen’s performance against that of a conventional strip and glucometer.
    Given how common prediabetes and diabetes are globally, this type of technology has the potential to change clinical care, the researchers said.
    “One of the barriers I see in my clinical practice is that many patients can’t afford to test themselves, as glucometers and their test strips are too expensive. And, it’s usually the people who most need their glucose tested who face the biggest barriers,” said co-author Dr. Matthew Thompson, a UW professor of family medicine in the UW School of Medicine and of global health. “Given how many of my patients use smartphones now, a system like GlucoScreen could really transform our ability to screen and monitor people with prediabetes and even diabetes.”
    GlucoScreen is presently a research prototype. Additional user-focused and clinical studies, along with alterations to how test strips are manufactured and packaged, would be required before the system could be made widely available, the team said.
    But, the researchers added, the project demonstrates how we have only begun to tap into the potential of smartphones as a health screening tool.
    “Now that we’ve shown we can build electrochemical assays that can work with a smartphone instead of a dedicated reader, you can imagine extending this approach to expand screening for other conditions,” said senior author Shwetak Patel, the Washington Research Foundation Entrepreneurship Endowed Professor in Computer Science & Engineering and Electrical & Computer Engineering at the UW.
    Additional co-authors are Farshid Salemi Parizi, a former UW doctoral student in electrical and computer engineering who is now a senior machine learning engineer at OctoML, and Yuntao Wang, a research professor at Tsinghua University and former visiting professor at the Allen School. This research was funded in part by the Bill & Melinda Gates Foundation.

  • New details of SARS-CoV-2 structure

    A new study led by Worcester Polytechnic Institute (WPI) brings into sharper focus the structural details of the COVID-19 virus, revealing an elliptical shape that “breathes,” or changes shape, as it moves in the body. The discovery, which could lead to new antiviral therapies for the disease and quicker development of vaccines, is featured in the April edition of the peer-reviewed Cell Press structural biology journal Structure.
    “This is critical knowledge we need to fight future pandemics,” said Dmitry Korkin, Harold L. Jurist ’61 and Heather E. Jurist Dean’s Professor of Computer Science and lead researcher on the project. “Understanding the SARS-CoV-2 virus envelope should allow us to model the actual process of the virus attaching to the cell and apply this knowledge to our understanding of the therapies at the molecular level. For instance, how can the viral activity be inhibited by antiviral drugs? How much antiviral blocking is needed to prevent virus-to-host interaction? We don’t know. But this is the best thing we can do right now — to be able to simulate actual processes.”
    Feeding genetic sequencing information and massive amounts of real-world data about the pandemic virus into a supercomputer in Texas, Korkin and his team, working in partnership with a group led by Siewert-Jan Marrink at the University of Groningen, Netherlands, produced a computational model of the virus’s envelope, or outer shell, in “near atomistic detail” that had until now been beyond the reach of even the most powerful microscopes and imaging techniques.
    Essentially, the computer used structural bioinformatics and computational biophysics to create its own picture of what the SARS-CoV-2 particle looks like. And that picture showed that the virus is more elliptical than spherical and can change its shape. Korkin said the work also led to a better understanding of the M proteins in particular: underappreciated and overlooked components of the virus’s envelope.
    The M proteins form entities called dimers with a copy of each other, and play a role in the particle’s shape-shifting by keeping the structure flexible overall while providing a triangular mesh-like structure on the interior that makes it remarkably resilient, Korkin said. In contrast, on the exterior, the proteins assemble into mysterious filament-like structures that have puzzled scientists who have seen Korkin’s results, and will require further study.
    Korkin said the structural model developed by the researchers expands what was already known about the envelope architecture of the SARS-CoV-2 virus and previous SARS- and MERS-related outbreaks. The computational protocol used to create the model could also be applied to more rapidly model future coronaviruses, he said. A clearer picture of the virus’ structure could reveal crucial vulnerabilities.
    “The envelope properties of SARS-CoV-2 are likely to be similar to other coronaviruses,” he said. “Eventually, knowledge about the properties of coronavirus membrane proteins could lead to new therapies and vaccines for future viruses.”
    The new findings published in Structure were three years in the making and built upon Korkin’s work in the early days of the pandemic to provide the first 3D roadmap of the virus, based on genetic sequence information from the first isolated strain in China.

  • New algorithm keeps drones from colliding in midair

    When multiple drones are working together in the same airspace, perhaps spraying pesticide over a field of corn, there’s a risk they might crash into each other.
    To help avoid these costly crashes, MIT researchers presented a system called MADER in 2020. This multiagent trajectory-planner enables a group of drones to formulate optimal, collision-free trajectories. Each agent broadcasts its trajectory so fellow drones know where it is planning to go. Agents then consider each other’s trajectories when optimizing their own to ensure they don’t collide.
    But when the team tested the system on real drones, they found that if a drone doesn’t have up-to-date information on the trajectories of its partners, it might inadvertently select a path that results in a collision. The researchers revamped their system and are now rolling out Robust MADER, a multiagent trajectory planner that generates collision-free trajectories even when communications between agents are delayed.
    “MADER worked great in simulations, but it hadn’t been tested in hardware. So, we built a bunch of drones and started flying them. The drones need to talk to each other to share trajectories, but once you start flying, you realize pretty quickly that there are always communication delays that introduce some failures,” says Kota Kondo, an aeronautics and astronautics graduate student.
    The algorithm incorporates a delay-check step during which a drone waits a specific amount of time before it commits to a new, optimized trajectory. If it receives additional trajectory information from fellow drones during the delay period, it might abandon its new trajectory and start the optimization process over again.
    When Kondo and his collaborators tested Robust MADER, both in simulations and flight experiments with real drones, it achieved a 100 percent success rate at generating collision-free trajectories. While the drones’ travel time was a bit slower than it would be with some other approaches, no other baselines could guarantee safety.

    “If you want to fly safer, you have to be careful, so it is reasonable that if you don’t want to collide with an obstacle, it will take you more time to get to your destination. If you collide with something, no matter how fast you go, it doesn’t really matter because you won’t reach your destination,” Kondo says.
    Kondo wrote the paper with Jesus Tordesillas, a postdoc; Parker C. Lusk, a graduate student; Reinaldo Figueroa, Juan Rached, and Joseph Merkel, MIT undergraduates; and senior author Jonathan P. How, the Richard C. Maclaurin Professor of Aeronautics and Astronautics and a member of the MIT-IBM Watson AI Lab. The research will be presented at the International Conference on Robotics and Automation.
    Planning trajectories
    MADER is an asynchronous, decentralized, multiagent trajectory-planner. This means that each drone formulates its own trajectory and that, while all agents must agree on each new trajectory, they don’t need to agree at the same time. This makes MADER more scalable than other approaches, since it would be very difficult for thousands of drones to agree on a trajectory simultaneously. Due to its decentralized nature, the system would also work better in real-world environments where drones may fly far from a central computer.
    With MADER, each drone optimizes a new trajectory using an algorithm that incorporates the trajectories it has received from other agents. By continually optimizing and broadcasting their new trajectories, the drones avoid collisions.

    Perhaps one agent shared its new trajectory several seconds ago, but a fellow agent didn’t receive it right away because the communication was delayed. In real-world environments, signals are often delayed by interference from other devices or environmental factors like stormy weather. Due to this unavoidable delay, a drone might inadvertently commit to a new trajectory that sets it on a collision course.
    Robust MADER prevents such collisions because each agent has two trajectories available. It keeps one trajectory that it knows is safe, which it has already checked for potential collisions. While following that original trajectory, the drone optimizes a new trajectory but does not commit to the new trajectory until it completes a delay-check step.
    During the delay-check period, the drone spends a fixed amount of time repeatedly checking for communications from other agents to see if its new trajectory is safe. If it detects a potential collision, it abandons the new trajectory and starts the optimization process over again.
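    A minimal sketch of that commit rule, with hypothetical function names and a toy conflict test rather than the authors' actual trajectory representation, might look like this:

```python
import time

def delay_check_commit(candidate, committed, inbox, conflicts, delay_check_s):
    """Robust MADER-style commit rule (illustrative sketch, not the authors' code).
    The drone keeps flying `committed` (already verified as collision-free) and
    only switches to `candidate` if no conflicting peer trajectory arrives
    during the delay-check window."""
    deadline = time.monotonic() + delay_check_s
    while time.monotonic() < deadline:
        for peer_traj in inbox():                 # peer trajectories received so far
            if conflicts(candidate, peer_traj):
                return committed                  # abandon candidate; re-optimize later
        time.sleep(0.01)
    return candidate                              # window passed cleanly: commit

# Toy usage with 1-D "trajectories" as position lists and a naive conflict test
def conflicts(a, b):
    return any(abs(x - y) < 0.5 for x, y in zip(a, b))

committed = [0.0, 0.0, 0.0, 0.0]
candidate = [0.0, 0.5, 1.0, 1.5]
received = [[3.0, 2.5, 2.0, 1.5]]                 # a peer plan that arrives late
chosen = delay_check_commit(candidate, committed, lambda: received, conflicts, 0.05)
print("commit to:", chosen)                        # falls back to the safe trajectory
```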
    The length of the delay-check period depends on the distance between agents and environmental factors that could hamper communications, Kondo says. If the agents are many miles apart, for instance, then the delay-check period would need to be longer.
    Completely collision-free
    The researchers tested their new approach by running hundreds of simulations in which they artificially introduced communication delays. In each simulation, Robust MADER was 100 percent successful at generating collision-free trajectories, while all the baselines caused crashes.
    The researchers also built six drones and two aerial obstacles and tested Robust MADER in a multiagent flight environment. They found that, while using the original version of MADER in this environment would have resulted in seven collisions, Robust MADER did not cause a single crash in any of the hardware experiments.
    “Until you actually fly the hardware, you don’t know what might cause a problem. Because we know that there is a difference between simulations and hardware, we made the algorithm robust, so it worked in the actual drones, and seeing that in practice was very rewarding,” Kondo says.
    Drones were able to fly 3.4 meters per second with Robust MADER, although they had a slightly longer average travel time than some baselines. But no other method was perfectly collision-free in every experiment.
    In the future, Kondo and his collaborators want to put Robust MADER to the test outdoors, where many obstacles and types of noise can affect communications. They also want to outfit drones with visual sensors so they can detect other agents or obstacles, predict their movements, and include that information in trajectory optimizations.
    This work was supported by Boeing Research and Technology.