More stories

  • Researchers explore why some people get motion sick playing VR games while others don’t

    The way our senses adjust while playing high-intensity virtual reality games plays a critical role in understanding why some people experience severe cybersickness and others don’t.
    Cybersickness is a form of motion sickness that occurs from exposure to immersive VR and augmented reality applications.
    A new study, led by researchers at the University of Waterloo, found that the subjective visual vertical — a measure of how individuals perceive the orientation of vertical lines — shifted considerably after participants played a high-intensity VR game.
    “Our findings suggest that the severity of a person’s cybersickness is affected by how our senses adjust to the conflict between reality and virtual reality,” said Michael Barnett-Cowan, a professor in the Department of Kinesiology and Health Sciences. “This knowledge could be invaluable for developers and designers of VR experiences, enabling them to create more comfortable and enjoyable environments for users.”
    The researchers collected data from 31 participants, assessing their perception of the vertical before and after they played two VR games, one high-intensity and one low-intensity.
    Those who experienced less sickness were more likely to show the largest changes in the subjective visual vertical following exposure to VR, particularly after the high-intensity game. Conversely, those who had the highest levels of cybersickness were less likely to have changed how they perceived vertical lines. There were no significant differences between males and females, nor between participants with low and high gaming experience.
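    In analysis terms, this finding corresponds to a negative relationship between the pre-to-post shift in the subjective visual vertical and a cybersickness severity score. Below is a minimal, purely illustrative sketch of that kind of calculation, using invented data and an SSQ-style symptom score as a stand-in; the study's actual measures and statistics may differ.

    ```python
    # Illustrative only: relate the magnitude of the SVV shift after VR play
    # to a cybersickness severity score (all data here are made up).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 31  # sample size reported in the Waterloo study

    svv_pre = rng.normal(0, 1.5, n)           # perceived vertical before play (degrees)
    svv_post = svv_pre + rng.normal(3, 2, n)  # perceived vertical after play (degrees)
    sickness = rng.uniform(5, 60, n)          # hypothetical SSQ-style severity score

    svv_shift = np.abs(svv_post - svv_pre)    # size of the sensory adjustment

    # The reported pattern would appear as a negative correlation:
    # the larger the SVV shift, the milder the cybersickness.
    r, p = stats.pearsonr(svv_shift, sickness)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")
    ```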
    “While the subjective visual vertical task significantly predicted the severity of cybersickness symptoms, there is still much to be explained,” said co-author William Chung, a former Waterloo doctoral student who is now a postdoctoral fellow at the Toronto Rehabilitation Institute.
    “By understanding the relationship between sensory reweighting and cybersickness susceptibility, we can potentially develop personalized cybersickness mitigation strategies and VR experiences that take into account individual differences in sensory processing and hopefully lower the occurrence of cybersickness.”
    As VR continues to revolutionize gaming, education and social interaction, addressing the pervasive issue of cybersickness — marked by symptoms such as nausea, disorientation, eye strain and fatigue — is critical for ensuring a positive user experience.

  • Structured exploration allows biological brains to learn faster than AI

    Neuroscientists have uncovered how exploratory actions enable animals to learn their spatial environment more efficiently. Their findings could help build better AI agents that can learn faster and require less experience.
    Researchers at the Sainsbury Wellcome Centre and Gatsby Computational Neuroscience Unit at UCL found the instinctual exploratory runs that animals carry out are not random. These purposeful actions allow mice to learn a map of the world efficiently. The study, published today in Neuron, describes how neuroscientists tested their hypothesis that the specific exploratory actions that animals undertake, such as darting quickly towards objects, are important in helping them learn how to navigate their environment.
    “There are a lot of theories in psychology about how performing certain actions facilitates learning. In this study, we tested whether simply observing obstacles in an environment was enough to learn about them, or if purposeful, sensory-guided actions help animals build a cognitive map of the world,” said Professor Tiago Branco, Group Leader at the Sainsbury Wellcome Centre and corresponding author on the paper.
    In previous work, scientists at SWC observed a correlation between how well animals learn to go around an obstacle and the number of times they had run to the object. In this study, Philip Shamash, SWC PhD student and first author of the paper, carried out experiments to test the impact of preventing animals from performing exploratory runs. By expressing a light-activated protein called channelrhodopsin in one part of the motor cortex, Philip was able to use optogenetic tools to prevent animals from initiating exploratory runs towards obstacles.
    The team found that even though mice had spent a lot of time observing and sniffing obstacles, if they were prevented from running towards them, they did not learn. This shows that the instinctive exploratory actions themselves are helping the animals learn a map of their environment.
    To explore the algorithms that the brain might be using to learn, the team worked with Sebastian Lee, a PhD student in Andrew Saxe’s lab at SWC, to run different models of reinforcement learning that people have developed for artificial agents, and observe which one most closely reproduces the mouse behaviour.
    There are two main classes of reinforcement learning models: model-free and model-based. The team found that under some conditions mice act in a model-free way, but under other conditions they seem to have a model of the world. The researchers therefore implemented an agent that can arbitrate between model-free and model-based strategies. This is not necessarily how the mouse brain works, but it helped them to understand what is required in a learning algorithm to explain the behaviour.
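    In code, that arbitration amounts to maintaining both a model-free value estimate and a learned world model, and trusting the model only when it has enough experience to be reliable. The toy sketch below illustrates the structure under assumed details (tabular values, a visit-count reliability rule); it is not the agent used in the Neuron paper.

    ```python
    # Toy arbitration between model-free and model-based control.
    # All parameters and the switching rule are illustrative assumptions.
    import numpy as np

    N_STATES, N_ACTIONS, GAMMA = 16, 4, 0.95
    q = np.zeros((N_STATES, N_ACTIONS))                # model-free action values
    counts = np.ones((N_STATES, N_ACTIONS, N_STATES))  # learned transition model (unit prior)
    rewards = np.zeros((N_STATES, N_ACTIONS))          # learned reward model

    def model_based_q(s, a):
        """One-step lookahead through the learned world model."""
        p = counts[s, a] / counts[s, a].sum()
        return rewards[s, a] + GAMMA * (p * q.max(axis=1)).sum()

    def act(s, visit_threshold=20):
        """Arbitrate: use the model once this state has enough experience,
        otherwise fall back on the model-free values."""
        visits = counts[s].sum() - N_ACTIONS * N_STATES  # subtract the prior counts
        if visits > visit_threshold:
            values = [model_based_q(s, a) for a in range(N_ACTIONS)]
        else:
            values = q[s]
        return int(np.argmax(values))

    def update(s, a, r, s_next, alpha=0.1):
        """Feed the same experience to both systems."""
        counts[s, a, s_next] += 1
        rewards[s, a] += alpha * (r - rewards[s, a])
        q[s, a] += alpha * (r + GAMMA * q[s_next].max() - q[s, a])
    ```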
    “One of the problems with artificial intelligence is that agents need a lot of experience in order to learn something. They have to explore the environment thousands of times, whereas a real animal can learn an environment in less than ten minutes. We think this is in part because, unlike artificial agents, animals’ exploration is not random and instead focuses on salient objects. This kind of directed exploration makes the learning more efficient and so they need less experience to learn,” explained Professor Branco.
    The next steps for the researchers are to explore the link between the execution of exploratory actions and the representation of subgoals. The team are now carrying out recordings in the brain to discover which areas are involved in representing subgoals and how the exploratory actions lead to the formation of the representations.
    This research was funded by a Wellcome Senior Research Fellowship (214352/Z/18/Z) and by the Sainsbury Wellcome Centre Core Grant from the Gatsby Charitable Foundation and Wellcome (090843/F/09/Z), the Sainsbury Wellcome Centre PhD Programme and a Sir Henry Dale Fellowship from the Wellcome Trust and Royal Society (216386/Z/19/Z).

  • Engineers ‘grow’ atomically thin transistors on top of computer chips

    Emerging AI applications, like chatbots that generate natural human language, demand denser, more powerful computer chips. But semiconductor chips are traditionally made with bulk materials, which are boxy 3D structures, so stacking multiple layers of transistors to create denser integrations is very difficult.
    However, semiconductor transistors made from ultrathin 2D materials, each only about three atoms in thickness, could be stacked up to create more powerful chips. To this end, MIT researchers have now demonstrated a novel technology that can effectively and efficiently “grow” layers of 2D transition metal dichalcogenide (TMD) materials directly on top of a fully fabricated silicon chip to enable denser integrations.
    Growing 2D materials directly onto a silicon CMOS wafer has posed a major challenge because the process usually requires temperatures of about 600 degrees Celsius, while silicon transistors and circuits could break down when heated above 400 degrees. Now, the interdisciplinary team of MIT researchers has developed a low-temperature growth process that does not damage the chip. The technology allows 2D semiconductor transistors to be directly integrated on top of standard silicon circuits.
    In the past, researchers have grown 2D materials elsewhere and then transferred them onto a chip or a wafer. This often causes imperfections that hamper the performance of the final devices and circuits. Also, transferring the material smoothly becomes extremely difficult at wafer-scale. By contrast, this new process grows a smooth, highly uniform layer across an entire 8-inch wafer.
    The new technology is also able to significantly reduce the time it takes to grow these materials. While previous approaches required more than a day to grow a single layer of 2D materials, the new approach can grow a uniform layer of TMD material in less than an hour over entire 8-inch wafers.
    Due to its rapid speed and high uniformity, the new technology enabled the researchers to successfully integrate a 2D material layer onto much larger surfaces than has been previously demonstrated. This makes their method better-suited for use in commercial applications, where wafers that are 8 inches or larger are key.

    “Using 2D materials is a powerful way to increase the density of an integrated circuit. What we are doing is like constructing a multistory building. If you have only one floor, which is the conventional case, it won’t hold many people. But with more floors, the building will hold more people that can enable amazing new things. Thanks to the heterogeneous integration we are working on, we have silicon as the first floor and then we can have many floors of 2D materials directly integrated on top,” says Jiadi Zhu, an electrical engineering and computer science graduate student and co-lead author of a paper on this new technique.
    Zhu wrote the paper with co-lead author Ji-Hoon Park, an MIT postdoc; corresponding authors Jing Kong, professor of electrical engineering and computer science (EECS) and a member of the Research Laboratory of Electronics; and Tomás Palacios, professor of EECS and director of the Microsystems Technology Laboratories (MTL); as well as others at MIT, MIT Lincoln Laboratory, Oak Ridge National Laboratory, and Ericsson Research. The paper appears today in Nature Nanotechnology.
    Slim materials with vast potential
    The 2D material the researchers focused on, molybdenum disulfide, is flexible, transparent, and exhibits powerful electronic and photonic properties that make it ideal for a semiconductor transistor. It is composed of a single layer of molybdenum atoms sandwiched between two layers of sulfur atoms.
    Growing thin films of molybdenum disulfide on a surface with good uniformity is often accomplished through a process known as metal-organic chemical vapor deposition (MOCVD). Molybdenum hexacarbonyl and diethyl sulfide, two organic compounds that contain molybdenum and sulfur atoms respectively, are vaporized and heated inside the reaction chamber, where they “decompose” into smaller molecules. These then link up through chemical reactions to form chains of molybdenum disulfide on a surface.

    But decomposing these molybdenum and sulfur compounds, which are known as precursors, requires temperatures above 550 degrees Celsius, while silicon circuits start to degrade when temperatures surpass 400 degrees.
    So, the researchers started by thinking outside the box — they designed and built an entirely new furnace for the metal-organic chemical vapor deposition process.
    The furnace consists of two chambers: a low-temperature region in the front, where the silicon wafer is placed, and a high-temperature region in the back. Vaporized molybdenum and sulfur precursors are pumped into the furnace. The molybdenum stays in the low-temperature region, where the temperature is kept below 400 degrees Celsius — hot enough to decompose the molybdenum precursor but not so hot that it damages the silicon chip.
    The sulfur precursor flows through into the high-temperature region, where it decomposes. Then it flows back into the low-temperature region, where the chemical reaction to grow molybdenum disulfide on the surface of the wafer occurs.
    “You can think about decomposition like making black pepper — you have a whole peppercorn and you grind it into a powder form. So, we smash and grind the pepper in the high-temperature region, then the powder flows back into the low-temperature region,” Zhu explains.
    Faster growth and better uniformity
    One problem with this process is that silicon circuits typically have aluminum or copper as a top layer so the chip can be connected to a package or carrier before it is mounted onto a printed circuit board. But sulfur causes these metals to sulfurize, the same way some metals rust when exposed to oxygen, which destroys their conductivity. The researchers prevented sulfurization by first depositing a very thin layer of passivation material on top of the chip. Then later they could open the passivation layer to make connections.
    They also placed the silicon wafer into the low-temperature region of the furnace vertically, rather than horizontally. By placing it vertically, neither end is too close to the high-temperature region, so no part of the wafer is damaged by the heat. Plus, the molybdenum and sulfur gas molecules swirl around as they bump into the vertical chip, rather than flowing over a horizontal surface. This circulation effect improves the growth of molybdenum disulfide and leads to better material uniformity.
    In addition to yielding a more uniform layer, their method was also much faster than other MOCVD processes. They could grow a layer in less than an hour, while typically the MOCVD growth process takes at least an entire day.
    Using the state-of-the-art MIT.Nano facilities, they were able to demonstrate high material uniformity and quality across an 8-inch silicon wafer, which is especially important for industrial applications where bigger wafers are needed.
    “By shortening the growth time, the process is much more efficient and could be more easily integrated into industrial fabrications. Plus, this is a silicon-compatible low-temperature process, which can be useful to push 2D materials further into the semiconductor industry,” Zhu says.
    In the future, the researchers want to fine-tune their technique and use it to grow many stacked layers of 2D transistors. In addition, they want to explore the use of the low-temperature growth process for flexible surfaces, like polymers, textiles, or even papers. This could enable the integration of semiconductors onto everyday objects like clothing or notebooks.
    This work is partially funded by the MIT Institute for Soldier Nanotechnologies, the National Science Foundation Center for Integrated Quantum Materials, Ericsson, MITRE, the U.S. Army Research Office, and the U.S. Department of Energy. The project also benefitted from the support of TSMC University Shuttle.

  • Study finds ChatGPT outperforms physicians in providing high-quality, empathetic advice to patient questions

    There has been widespread speculation about how advances in artificial intelligence (AI) assistants like ChatGPT could be used in medicine.
    A new study published in JAMA Internal Medicine led by Dr. John W. Ayers from the Qualcomm Institute within the University of California San Diego provides an early glimpse into the role that AI assistants could play in medicine. The study compared written responses from physicians and those from ChatGPT to real-world health questions. A panel of licensed healthcare professionals preferred ChatGPT’s responses 79% of the time and rated ChatGPT’s responses as higher quality and more empathetic.
    “The opportunities for improving healthcare with AI are massive,” said Ayers, who is also vice chief of innovation in the UC San Diego School of Medicine Division of Infectious Disease and Global Public Health. “AI-augmented care is the future of medicine.”
    Is ChatGPT Ready for Healthcare?
    In the new study, the research team set out to answer the question: Can ChatGPT respond accurately to questions patients send to their doctors? If yes, AI models could be integrated into health systems to improve physician responses to questions sent by patients and ease the ever-increasing burden on physicians.
    “ChatGPT might be able to pass a medical licensing exam,” said study co-author Dr. Davey Smith, a physician-scientist, co-director of the UC San Diego Altman Clinical and Translational Research Institute and professor at the UC San Diego School of Medicine, “but directly answering patient questions accurately and empathetically is a different ballgame.”
    “The COVID-19 pandemic accelerated virtual healthcare adoption,” added study co-author Dr. Eric Leas, a Qualcomm Institute affiliate and assistant professor in the UC San Diego Herbert Wertheim School of Public Health and Human Longevity Science. “While this made accessing care easier for patients, physicians are burdened by a barrage of electronic patient messages seeking medical advice that have contributed to record-breaking levels of physician burnout.”

    Designing a Study to Test ChatGPT in a Healthcare Setting
    To obtain a large and diverse sample of healthcare questions and physician answers that did not contain identifiable personal information, the team turned to social media where millions of patients publicly post medical questions to which doctors respond: Reddit’s AskDocs.
    r/AskDocs is a subreddit with approximately 452,000 members who post medical questions and verified healthcare professionals submit answers. While anyone can respond to a question, moderators verify healthcare professionals’ credentials and responses display the respondent’s level of credentials. The result is a large and diverse set of patient medical questions and accompanying answers from licensed medical professionals.
    While some may wonder if question-answer exchanges on social media are a fair test, team members noted that the exchanges were reflective of their clinical experience.
    The team randomly sampled 195 exchanges from AskDocs where a verified physician responded to a public question. The team provided the original question to ChatGPT and asked it to author a response. A panel of three licensed healthcare professionals assessed each question and the corresponding responses and were blinded to whether the response originated from a physician or ChatGPT. They compared responses based on information quality and empathy, noting which one they preferred.

    The panel of healthcare professional evaluators preferred ChatGPT responses to physician responses 79% of the time.
    “ChatGPT messages responded with nuanced and accurate information that often addressed more aspects of the patient’s questions than physician responses,” said Jessica Kelley, a nurse practitioner with San Diego firm Human Longevity and study co-author.
    Additionally, ChatGPT responses were rated significantly higher in quality than physician responses: the prevalence of good or very good quality responses was 3.6 times higher for ChatGPT than for physicians (physicians 22.1% versus ChatGPT 78.5%). The responses were also more empathetic: the prevalence of empathetic or very empathetic responses was 9.8 times higher for ChatGPT than for physicians (physicians 4.6% versus ChatGPT 45.1%).
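    For readers checking the arithmetic, the two ratios follow directly from the percentages quoted above:

    ```python
    # Quick sanity check of the reported quality and empathy ratios
    quality_ratio = 78.5 / 22.1   # "good or very good" responses, ChatGPT vs. physicians
    empathy_ratio = 45.1 / 4.6    # "empathetic or very empathetic" responses
    print(round(quality_ratio, 1), round(empathy_ratio, 1))  # 3.6 and 9.8
    ```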
    “I never imagined saying this,” added Dr. Aaron Goodman, an associate clinical professor at UC San Diego School of Medicine and study coauthor, “but ChatGPT is a prescription I’d like to give to my inbox. The tool will transform the way I support my patients.”
    Harnessing AI Assistants for Patient Messages
    “While our study pitted ChatGPT against physicians, the ultimate solution isn’t throwing your doctor out altogether,” said Dr. Adam Poliak, an assistant professor of Computer Science at Bryn Mawr College and study co-author. “Instead, a physician harnessing ChatGPT is the answer for better and empathetic care.”
    “Our study is among the first to show how AI assistants can potentially solve real world healthcare delivery problems,” said Dr. Christopher Longhurst, Chief Medical Officer and Chief Digital Officer at UC San Diego Health. “These results suggest that tools like ChatGPT can efficiently draft high quality, personalized medical advice for review by clinicians, and we are beginning that process at UCSD Health.”
    Dr. Mike Hogarth, a physician-bioinformatician, co-director of the Altman Clinical and Translational Research Institute at UC San Diego, professor in the UC San Diego School of Medicine and study co-author, added, “It is important that integrating AI assistants into healthcare messaging be done in the context of a randomized controlled trial to judge how the use of AI assistants impacts outcomes for both physicians and patients.”
    In addition to improving workflow, investments into AI assistant messaging could impact patient health and physician performance.
    Dr. Mark Dredze, the John C Malone Associate Professor of Computer Science at Johns Hopkins and study co-author, noted: “We could use these technologies to train doctors in patient-centered communication, eliminate health disparities suffered by minority populations who often seek healthcare via messaging, build new medical safety systems, and assist doctors by delivering higher quality and more efficient care.”

  • Highly dexterous robot hand can operate in the dark — just like us

    Think about what you do with your hands when you’re home at night pushing buttons on your TV’s remote control, or at a restaurant using all kinds of cutlery and glassware. These skills are all based on touch, while you’re watching a TV program or choosing something from the menu. Our hands and fingers are incredibly skilled mechanisms, and highly sensitive to boot.
    Robotics researchers have long been trying to create “true” dexterity in robot hands, but the goal has been frustratingly elusive. Robot grippers and suction cups can pick and place items, but more dexterous tasks such as assembly, insertion, reorientation, packaging, etc. have remained in the realm of human manipulation. However, spurred by advances in both sensing technology and machine-learning techniques to process the sensed data, the field of robotic manipulation is changing very rapidly.
    Highly dexterous robot hand even works in the dark
    Researchers at Columbia Engineering have demonstrated a highly dexterous robot hand, one that combines an advanced sense of touch with motor learning algorithms in order to achieve a high level of dexterity.
    As a demonstration of skill, the team chose a difficult manipulation task: executing an arbitrarily large rotation of an unevenly shaped grasped object in hand while always maintaining the object in a stable, secure hold. This is a very difficult task because it requires constant repositioning of a subset of fingers, while the other fingers have to keep the object stable. Not only was the hand able to perform this task, but it also did it without any visual feedback whatsoever, based solely on touch sensing.
    In addition to the new levels of dexterity, the hand worked without any external cameras, so it’s immune to lighting, occlusion, or similar issues. And the fact that the hand does not rely on vision to manipulate objects means that it can do so in very difficult lighting conditions that would confuse vision-based algorithms — it can even operate in the dark.

    “While our demonstration was on a proof-of-concept task, meant to illustrate the capabilities of the hand, we believe that this level of dexterity will open up entirely new applications for robotic manipulation in the real world,” said Matei Ciocarlie, associate professor in the Departments of Mechanical Engineering and Computer Science. “Some of the more immediate uses might be in logistics and material handling, helping ease up supply chain problems like the ones that have plagued our economy in recent years, and in advanced manufacturing and assembly in factories.”
    Leveraging optics-based tactile fingers
    In earlier work, Ciocarlie’s group collaborated with Ioannis Kymissis, professor of electrical engineering, to develop a new generation of optics-based tactile robot fingers. These were the first robot fingers to achieve contact localization with sub-millimeter precision while providing complete coverage of a complex multi-curved surface. In addition, the compact packaging and low wire count of the fingers allowed for easy integration into complete robot hands.
    Teaching the hand to perform complex tasks
    For this new work, led by Ciocarlie’s doctoral researcher, Gagan Khandate, the researchers designed and built a robot hand with five fingers and 15 independently actuated joints — each finger was equipped with the team’s touch-sensing technology. The next step was to test the ability of the tactile hand to perform complex manipulation tasks. To do this, they used new methods for motor learning, or the ability of a robot to learn new physical tasks via practice. In particular, they used a method called deep reinforcement learning, augmented with new algorithms that they developed for effective exploration of possible motor strategies.
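    In outline, that training setup is a standard deep reinforcement learning loop in which the policy observes only touch and joint (proprioceptive) signals. The sketch below is a schematic of such a loop under invented assumptions (placeholder environment, reward, and a toy novelty bonus standing in for the exploration algorithms); it is not the study's actual implementation.

    ```python
    # Schematic RL loop: learn manipulation from touch + proprioception only.
    # Environment, reward, and exploration bonus are hypothetical placeholders.
    import numpy as np

    class TactileHandEnv:
        """Stand-in for a physics simulation of a 5-finger, 15-joint hand."""
        obs_dim = 15 + 15 + 5 * 3  # joint angles, joint velocities, fingertip contact signals

        def reset(self):
            return np.zeros(self.obs_dim)

        def step(self, action):
            obs = np.random.randn(self.obs_dim)      # tactile + proprioceptive observation, no vision
            reward = float(-np.linalg.norm(action))  # placeholder objective
            return obs, reward, False

    def novelty_bonus(obs, visited):
        """Toy exploration bonus: reward observations not seen before."""
        key = tuple(np.round(obs, 1))
        bonus = 0.0 if key in visited else 0.1
        visited.add(key)
        return bonus

    env, visited = TactileHandEnv(), set()
    obs = env.reset()
    for step in range(1_000):                  # real training covers ~a year of simulated practice
        action = np.random.uniform(-1, 1, 15)  # a learned policy network would act here
        obs, reward, done = env.step(action)
        reward += novelty_bonus(obs, visited)  # encourage exploring new motor strategies
    ```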

    Robot completed approximately one year of practice in only hours of real-time
    The input to the motor learning algorithms consisted exclusively of the team’s tactile and proprioceptive data, without any vision. Using simulation as a training ground, the robot completed approximately one year of practice in only hours of real-time, thanks to modern physics simulators and highly parallel processors. The researchers then transferred this manipulation skill trained in simulation to the real robot hand, which was able to achieve the level of dexterity the team was hoping for. Ciocarlie noted that “the directional goal for the field remains assistive robotics in the home, the ultimate proving ground for real dexterity. In this study, we’ve shown that robot hands can also be highly dexterous based on touch sensing alone. Once we also add visual feedback into the mix along with touch, we hope to be able to achieve even more dexterity, and one day start approaching the replication of the human hand.”
    Ultimate goal: joining abstract intelligence with embodied intelligence
    Ultimately, Ciocarlie observed, a physical robot being useful in the real world needs both abstract, semantic intelligence (to understand conceptually how the world works), and embodied intelligence (the skill to physically interact with the world). Large language models such as OpenAI’s GPT-4 or Google’s PaLM aim to provide the former, while dexterity in manipulation as achieved in this study represents complementary advances in the latter.
    For instance, when asked how to make a sandwich, ChatGPT will type out a step-by-step plan in response, but it takes a dexterous robot to take that plan and actually make the sandwich. In the same way, researchers hope that physically skilled robots will be able to take semantic intelligence out of the purely virtual world of the Internet, and put it to good use on real-world physical tasks, perhaps even in our homes.
    The paper has been accepted for publication at the upcoming Robotics: Science and Systems Conference (Daegu, Korea, July 10-14, 2023), and is currently available as a preprint.
    VIDEO: https://youtu.be/mYlc_OWgkyI

  • Unraveling the mathematics behind wiggly worm knots

    For millennia, humans have used knots for all kinds of reasons — to tie rope, braid hair, or weave fabrics. But there are organisms that are better at tying knots and far superior — and faster — at untangling them.
    Tiny California blackworms intricately tangle themselves by the thousands to form ball-shaped blobs that allow them to execute a wide range of biological functions. But, most striking of all, while the worms tangle over a period of several minutes, they can untangle in mere milliseconds, escaping at the first sign of a threat from a predator.
    Saad Bhamla, assistant professor in the School of Chemical and Biomolecular Engineering at Georgia Tech, wanted to understand precisely how the blackworms execute their tangling and untangling movements. To investigate, Bhamla and a team of researchers at Georgia Tech linked up with mathematicians at MIT. Their research, published in Science, could influence the design of fiber-like, shapeshifting robotics that self-assemble and move in ways that are fast and reversible. The study also highlights how cross-disciplinary collaboration can answer some of the most perplexing questions in disparate fields.
    Capturing the Inside of a Worm Blob
    Fascinated by the science of ultrafast movement and collective behavior, Bhamla and Harry Tuazon, a graduate student in Bhamla’s lab, have studied California blackworms for years, observing how they use collective movement to form blobs and then disperse.
    “We wanted to understand the exact mechanics behind how the worms change their movement dynamics to achieve tangling and ultrafast untangling,” Bhamla said. “Also, these are not just typical filaments like string, ethernet cables, or spaghetti — these are living, active tangles that are out of equilibrium, which adds a fascinating layer to the question.”
    Tuazon, a co-first author of the study, collected videos of his experiments with the worms, including macro videos of the worms’ collective dispersal mechanism and microscopic videos of one, two, three, and several worms to capture their movements.

    “I was shocked when I pointed a UV light toward the worm blobs and they dispersed so explosively,” Tuazon said. “But to understand this complex and mesmerizing maneuver, I started conducting experiments with only a few worms.”
    Bhamla and Tuazon approached MIT mathematicians Jörn Dunkel and Vishal Patil (a graduate student at the time and now a postdoctoral fellow at Stanford University) about a collaboration. After seeing Tuazon’s videos, the two theorists, who specialize in knots and topology, were eager to join.
    “Knots and tangles are a fascinating area where physics and mechanics meet some very interesting math,” said Patil, co-first author on the paper. “These worms seemed like a good playground to investigate topological principles in systems made up of filaments.”
    A key moment for Patil was when he viewed Tuazon’s video of a single worm that had been provoked into the escape response. Patil noticed the worm moved in a figure-eight pattern, turning its head in clockwise and counterclockwise spirals as its body followed.
    The researchers thought this helical gait pattern might play a role in the worms’ ability to tangle and untangle. But to mathematically quantify the worm tangle structures and model how they braid around each other, Patil and Dunkel needed experimental data.

    Bhamla and Tuazon set out to find an imaging technique that would allow them to peer inside the worm blob so they could gather more data. After much trial and error, they landed on an unexpected solution: ultrasound. By placing a live worm blob in nontoxic jelly and using a commercial ultrasound machine, they were finally able to observe the inside of the intricate worm tangles.
    “Capturing the inside structure of a live worm blob was a real challenge,” Tuazon said. “We tried all sorts of imaging techniques for months, including X-rays, confocal microscopy, and tomography, but none of them gave us the real-time resolution we needed. Ultimately, ultrasound turned out to be the solution.”
    After analyzing the ultrasound videos, Tuazon and other researchers in Bhamla’s lab painstakingly tracked the movement of the worms by hand, plotting more than 46,000 data points for Patil and Dunkel to use to understand the mathematics behind the movements.
    Explaining Tangling and Untangling
    Answering the question of how the worms untangle so quickly required a combination of mechanics and topology. Patil built a mathematical model to explain how helical gaits can lead to tangling and untangling. By testing the model using a simulation framework, Patil was able to create a visualization of worms tangling.
    The model predicted that each worm formed a tangle with at least two other worms, revealing why the worm blobs were so cohesive. Patil then showed that the same class of helical gaits could explain how they untangle. The simulations were uncanny in their resemblance to real ultrasound images and showed that the worms’ alternating helical wave motions enabled the tangling and the ultrafast untangling escape mechanism.  
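    The gait underlying the model can be written down compactly: the head advances while turning in spirals whose handedness alternates between clockwise and counterclockwise. The short sketch below generates such a trajectory; the functional form and parameters are arbitrary illustrations, not the fitted model from the paper.

    ```python
    # Generate an alternating-chirality helical head trajectory (illustrative parameters).
    import numpy as np

    t = np.linspace(0, 20, 2000)               # time samples
    dt = t[1] - t[0]
    omega, speed, radius = 2.0, 0.5, 0.3       # turn rate, forward speed, spiral radius

    chirality = np.sign(np.sin(0.5 * t))       # +1 = clockwise, -1 = counterclockwise, alternating
    phase = np.cumsum(chirality * omega * dt)  # heading angle whose turning direction flips

    x = speed * t                              # net forward progress
    y = radius * np.sin(phase)
    z = radius * np.cos(phase)
    head_path = np.stack([x, y, z], axis=1)    # head trajectory the rest of the body would follow
    ```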
    “What’s striking is these tangled structures are extremely complicated. They are disordered and complex structures, but these living worm structures are able to manipulate these knots for crucial functions,” Patil said.
    While it has been known for decades that the worms move in a helical gait, no one had ever made the connection between that movement and how they escape. The researchers’ work revealed how the mechanical movements of individual worms determine their emergent collective behavior and topological dynamics. It is also the first mathematical theory of active tangling and untangling.
    “This observation may seem like a mere curiosity, but its implications are far-reaching. Active filaments are ubiquitous in biological structures, from DNA strands to entire organisms,” said Eva Kanso, program director at the National Science Foundation and professor of mechanical engineering at the University of Southern California.
    “These filaments serve myriads of functions and can provide a general motif for engineering multifunctional structures and materials that change properties on demand. Just as the worm blobs perform remarkable tangling and untangling feats, so may future bioinspired materials defy the limits of conventional structures by exploiting the interplay between mechanics, geometry, and activity.”
    The researchers’ model demonstrates the advantages of different types of tangles, which could allow for programming a wide range of behaviors into multifunctional, filament-like materials, from polymers to shapeshifting soft robotic systems. Many companies, such as 3M, already use nonwoven materials made of tangling fibers in products, including bandages and N95 masks. The worms could inspire new nonwoven materials and topological shifting matter.
    “Actively shapeshifting topological matter is currently the stuff of science fiction,” said Bhamla. “Imagine a soft, nonwoven material made of millions of stringlike filaments that can tangle and untangle on command, forming a smart adhesive bandage that shape-morphs as a wound heals, or a smart filtration material that alters pore topology to trap particles of different sizes or chemical properties. The possibilities are endless.”
    Georgia Tech researchers Emily Kaufman, Tuhin Chakrabortty, and David Qin contributed to this study.
    CITATION: Patil, et al. “Ultrafast reversible self-assembly of living tangled matter.” Science. 28 April 2023.
    DOI: https://www.science.org/doi/10.1126/science.ade7759
    Writer: Catherine Barzler, Georgia Tech
    Video: Candler Hobbs, Georgia Tech
    Original footage and photography: Georgia Tech
    Simulations: MIT

  • ChatGPT scores nearly 50 per cent on board certification practice test for ophthalmology, study shows

    A study of ChatGPT found the artificial intelligence tool answered less than half of the test questions correctly from a study resource commonly used by physicians when preparing for board certification in ophthalmology.
    The study, published in JAMA Ophthalmology and led by St. Michael’s Hospital, a site of Unity Health Toronto, found ChatGPT correctly answered 46 per cent of questions when the test was first conducted in Jan. 2023. When researchers conducted the same test one month later, ChatGPT scored more than 10 percentage points higher.
    The potential of AI in medicine and exam preparation has garnered excitement since ChatGPT became publicly available in Nov. 2022. It has also raised concerns about the potential for incorrect information and cheating in academia. ChatGPT is free, available to anyone with an internet connection, and works in a conversational manner.
    “ChatGPT may have an increasing role in medical education and clinical practice over time, however it is important to stress the responsible use of such AI systems,” said Dr. Rajeev H. Muni, principal investigator of the study and a researcher at the Li Ka Shing Knowledge Institute at St. Michael’s. “ChatGPT as used in this investigation did not answer sufficient multiple choice questions correctly for it to provide substantial assistance in preparing for board certification at this time.”
    Researchers used a dataset of practice multiple choice questions from the free trial of OphthoQuestions, a common resource for board certification exam preparation. To ensure ChatGPT’s responses were not influenced by concurrent conversations, entries or conversations with ChatGPT were cleared prior to inputting each question and a new ChatGPT account was used. Questions that used images and videos were not included because ChatGPT only accepts text input.
    Of 125 text-based multiple-choice questions, ChatGPT answered 58 questions (46 per cent) correctly when the study was first conducted in Jan. 2023. Researchers repeated the analysis on ChatGPT in Feb. 2023, and its performance improved to 58 per cent.
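    A quick check of the arithmetic behind those figures (the February result is reported only as a percentage, so the gain is approximate):

    ```python
    # Sanity check of the reported scores
    jan = 58 / 125   # 0.464 -> reported as 46 per cent
    feb = 0.58       # reported February score
    print(round(jan * 100), round((feb - jan) * 100))  # 46, and roughly a 12-point improvement
    ```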
    “ChatGPT is an artificial intelligence system that has tremendous promise in medical education. Though it provided incorrect answers to board certification questions in ophthalmology about half the time, we anticipate that ChatGPT’s body of knowledge will rapidly evolve,” said Dr. Marko Popovic, a co-author of the study and a resident physician in the Department of Ophthalmology and Vision Sciences at the University of Toronto.
    ChatGPT closely matched how trainees answer questions, and selected the same multiple-choice response as the most common answer provided by ophthalmology trainees 44 per cent of the time. ChatGPT selected the multiple-choice response that was least popular among ophthalmology trainees 11 per cent of the time, second least popular 18 per cent of the time, and second most popular 22 per cent of the time.
    “ChatGPT performed most accurately on general medicine questions, answering 79 per cent of them correctly. On the other hand, its accuracy was considerably lower on questions for ophthalmology subspecialties. For instance, the chatbot answered 20 per cent of questions correctly on oculoplastics and zero per cent correctly from the subspecialty of retina. The accuracy of ChatGPT will likely improve most in niche subspecialties in the future,” said Andrew Mihalache, lead author of the study and undergraduate student at Western University.

  • Speedy robo-gripper reflexively organizes cluttered spaces

    When manipulating an arcade claw, a player can plan all she wants. But once she presses the joystick button, it’s a game of wait-and-see. If the claw misses its target, she’ll have to start from scratch for another chance at a prize.
    The slow and deliberate approach of the arcade claw is similar to state-of-the-art pick-and-place robots, which use high-level planners to process visual images and plan out a series of moves to grab for an object. If a gripper misses its mark, it’s back to the starting point, where the controller must map out a new plan.
    Looking to give robots a more nimble, human-like touch, MIT engineers have now developed a gripper that grasps by reflex. Rather than start from scratch after a failed attempt, the team’s robot adapts in the moment to reflexively roll, palm, or pinch an object to get a better hold. It’s able to carry out these “last centimeter” adjustments (a riff on the “last mile” delivery problem) without engaging a higher-level planner, much like how a person might fumble in the dark for a bedside glass without much conscious thought.
    The new design is the first to incorporate reflexes into a robotic planning architecture. For now, the system is a proof of concept and provides a general organizational structure for embedding reflexes into a robotic system. Going forward, the researchers plan to program more complex reflexes to enable nimble, adaptable machines that can work with and among humans in ever-changing settings.
    “In environments where people live and work, there’s always going to be uncertainty,” says Andrew SaLoutos, a graduate student in MIT’s Department of Mechanical Engineering. “Someone could put something new on a desk or move something in the break room or add an extra dish to the sink. We’re hoping a robot with reflexes could adapt and work with this kind of uncertainty.”
    SaLoutos and his colleagues will present a paper on their design in May at the IEEE International Conference on Robotics and Automation (ICRA). His MIT co-authors include postdoc Hongmin Kim, graduate student Elijah Stanger-Jones, Menglong Guo SM ’22, and professor of mechanical engineering Sangbae Kim, the director of the Biomimetic Robotics Laboratory at MIT.

    High and low
    Many modern robotic grippers are designed for relatively slow and precise tasks, such as repetitively fitting together the same parts on a factory assembly line. These systems depend on visual data from onboard cameras; processing that data limits a robot’s reaction time, particularly if it needs to recover from a failed grasp.
    “There’s no way to short-circuit out and say, oh shoot, I have to do something now and react quickly,” SaLoutos says. “Their only recourse is just to start again. And that takes a lot of time computationally.”
    In their new work, Kim’s team built a more reflexive and reactive platform, using fast, responsive actuators that they originally developed for the group’s mini cheetah — a nimble, four-legged robot designed to run, leap, and quickly adapt its gait to various types of terrain.
    The team’s design includes a high-speed arm and two lightweight, multijointed fingers. In addition to a camera mounted to the base of the arm, the team incorporated custom high-bandwidth sensors at the fingertips that instantly record the force and location of any contact as well as the proximity of the finger to surrounding objects more than 200 times per second.

    The researchers designed the robotic system so that a high-level planner first processes visual data of a scene, marking the current location of the object the gripper should pick up and the location where the robot should place it down. The planner then sets a path for the arm to reach out and grasp the object. At this point, the reflexive controller takes over.
    If the gripper fails to grab hold of the object, rather than backing out and starting again as most grippers do, it runs an algorithm the team wrote that instructs the robot to quickly act out any of three grasp maneuvers, which they call “reflexes,” in response to real-time measurements at the fingertips. The three reflexes kick in within the last centimeter of the approach and enable the fingers to grab, pinch, or drag the object until it has a better hold.
    They programmed the reflexes to be carried out without having to involve the high-level planner. Instead, the reflexes are organized at a lower decision-making level, so that they can respond as if by instinct, rather than having to carefully evaluate the situation to plan an optimal fix.
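    Structurally, this is a two-level control loop: a slow, vision-based planner chooses the pick and place poses, while a fast low-level loop watches the fingertip sensors and triggers a reflex the moment a grasp starts to fail. The sketch below illustrates that split; the thresholds, sensor interface, and decision rules are invented for illustration and are not the authors' implementation.

    ```python
    # Schematic two-level grasp controller: slow planner + fast fingertip reflexes.
    # Thresholds, rates, and decision rules are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class FingertipReading:
        contact_force: float   # from the high-bandwidth fingertip sensor
        proximity: float       # distance to nearby surfaces

    def choose_reflex(readings):
        """Pick one of the three last-centimeter maneuvers from raw sensor data,
        without calling back into the high-level planner."""
        total_force = sum(r.contact_force for r in readings)
        touching = sum(r.contact_force > 0.2 for r in readings)
        if touching >= 2 and total_force < 1.0:
            return "pinch"   # contacts exist but the hold is weak: squeeze to secure it
        if touching == 1:
            return "drag"    # one finger in contact: drag the object toward the palm
        if min(r.proximity for r in readings) < 10.0:
            return "grab"    # object within the last centimeter: close around it
        return None          # nothing close enough; hand control back to the planner

    def reflex_step(readings, grasp_is_secure):
        """Fast loop, nominally running hundreds of times per second."""
        if grasp_is_secure(readings):
            return "hold"
        return choose_reflex(readings) or "replan"
    ```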
    “It’s like how, instead of having the CEO micromanage and plan every single thing in your company, you build a trust system and delegate some tasks to lower-level divisions,” Kim says. “It may not be optimal, but it helps the company react much more quickly. In many cases, waiting for the optimal solution makes the situation much worse or irrecoverable.”
    Cleaning via reflex
    The team demonstrated the gripper’s reflexes by clearing a cluttered shelf. They set a variety of household objects on a shelf, including a bowl, a cup, a can, an apple, and a bag of coffee grounds. They showed that the robot was able to quickly adapt its grasp to each object’s particular shape and, in the case of the coffee grounds, squishiness. Out of 117 attempts, the gripper quickly and successfully picked and placed objects more than 90 percent of the time, without having to back out and start over after a failed grasp.
    A second experiment showed how the robot could also react in the moment. When researchers shifted a cup’s position, the gripper, despite having no visual update of the new location, was able to readjust and essentially feel around until it sensed the cup in its grasp. Compared to a baseline grasping controller, the gripper’s reflexes increased the area of successful grasps by over 55 percent.
    Now, the engineers are working to include more complex reflexes and grasp maneuvers in the system, with a view toward building a general pick-and-place robot capable of adapting to cluttered and constantly changing spaces.
    “Picking up a cup from a clean table — that specific problem in robotics was solved 30 years ago,” Kim notes. “But a more general approach, like picking up toys in a toybox, or even a book from a library shelf, has not been solved. Now with reflexes, we think we can one day pick and place in every possible way, so that a robot could potentially clean up the house.”
    This research was supported, in part, by the Advanced Robotics Lab of LG Electronics and the Toyota Research Institute.
    Video: https://youtu.be/XxDi-HEpXn4