More stories

  •

    Scientists develop method to predict the spread of armed conflicts

    Around the world, political violence increased by 27 percent last year, affecting 1.7 billion people. The numbers come from the Armed Conflict Location & Event Data Project (ACLED), which collects real-time data on conflict events worldwide.
    Some armed conflicts occur between states, such as Russia’s invasion of Ukraine. There are, however, many more that take place within the borders of a single state. In Nigeria, violence, particularly from Boko Haram, has escalated in the past few years. In Somalia, populations remain at risk amidst conflict and attacks perpetrated by armed groups, particularly Al-Shabaab.
    To address the challenge of understanding how violent events spread, a team at the Complexity Science Hub (CSH) created a mathematical method that transforms raw data on armed conflicts into meaningful clusters by detecting causal links.
    “Our main question was: what is a conflict? How can we define it?” says CSH scientist Niraj Kushwaha, one of the coauthors of the study published in the latest issue of PNAS Nexus. “It was important for us to find a quantitative and bias-free way to see if there were any correlations between different violent events, just by looking at the data.”
    Inspiration
    “We often tell multiple narratives about a single conflict, which depend on whether we zoom in on it as an example of local tension or zoom out from it and consider it as part of a geopolitical plot; these are not necessarily incompatible,” explains coauthor Eddie Lee, a postdoctoral fellow at CSH. “Our technique allows us to titrate between them and fill out a multiscale portrait of conflict.”
    In order to investigate the many scales of political violence, the researchers turned to physics and biophysics for inspiration. The approach they developed is inspired by studies of stress propagation in collapsing materials and of neural cascades in the brain.
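    The paper's actual method infers causal links between events; as a loose illustration of the general idea of turning raw event records into clusters, here is a minimal sketch that groups synthetic conflict events whenever they fall close together in both time and space. All data and thresholds below are invented for illustration and are not ACLED records or the study's procedure:

```python
from math import hypot

# Synthetic events (day, x, y) — invented for illustration, not ACLED data.
events = [
    (0, 0.0, 0.0), (1, 0.5, 0.2), (2, 0.6, 0.1),   # a tight flare-up
    (0, 9.0, 9.0), (3, 9.2, 8.8),                  # a second, smaller one
    (10, 5.0, 5.0),                                # an isolated incident
]

MAX_DAYS, MAX_DIST = 4, 1.0   # illustrative linking thresholds

# Union-find: linked events end up in the same cluster.
parent = list(range(len(events)))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path compression
        i = parent[i]
    return i

def union(i, j):
    parent[find(i)] = find(j)

# Link every pair of events that is close in both time and space.
for i, (t1, x1, y1) in enumerate(events):
    for j, (t2, x2, y2) in enumerate(events[:i]):
        if abs(t1 - t2) <= MAX_DAYS and hypot(x1 - x2, y1 - y2) <= MAX_DIST:
            union(i, j)

clusters = {}
for i in range(len(events)):
    clusters.setdefault(find(i), []).append(i)
print(sorted(len(c) for c in clusters.values()))   # cluster sizes
```

    The real analysis links events causally rather than by mere proximity, but the clustering step has this flavor: events become nodes, links join related events, and connected components become "conflicts."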

  •

    Google and ChatGPT have mixed results in medical information queries

    When you need accurate information about a serious illness, should you go to Google or ChatGPT?
    An interdisciplinary study led by University of California, Riverside, computer scientists found that both internet information gathering services have strengths and weaknesses for people seeking information about Alzheimer’s disease and other forms of dementia. The team included clinical scientists from the University of Alabama and Florida International University.
    Google provides the most current information, but query results are skewed by service and product providers seeking customers, the researchers found. ChatGPT, meanwhile, provides more objective information, but it can be outdated and does not cite the sources of its information in its narrative responses.
    “If you pick the best features of both, you can build a better system, and I think that this is what will happen in the next couple of years,” said Vagelis Hristidis, a professor of computer science and engineering in UCR’s Bourns College of Engineering.
    In their study, Hristidis and his co-authors submitted 60 queries to both Google and ChatGPT that would be typical submissions from people living with dementia and their families.
    The researchers focused on dementia because more than 6 million Americans are impacted by Alzheimer’s disease or a related condition, said study co-author Nicole Ruggiano, a professor of social work at the University of Alabama.
    “Research also shows that caregivers of people living with dementia are among the most engaged stakeholders in pursuing health information, since they often are tasked with making decisions for their loved one’s care,” Ruggiano said.

  •

    Scientists create novel approach to control energy waves in 4D

    Everyday life involves the three dimensions, or 3D — along an X, Y and Z axis, or up and down, left and right, and forward and back. But in recent years, scientists like Guoliang Huang, the Huber and Helen Croft Chair in Engineering at the University of Missouri, have explored a “fourth dimension” (4D), or synthetic dimension, as an extension of our current physical reality.
    Now, Huang and a team of scientists in the Structured Materials and Dynamics Lab at the MU College of Engineering have successfully created a new synthetic metamaterial with 4D capabilities, including the ability to control energy waves on the surface of a solid material. These waves, called mechanical surface waves, are fundamental to how vibrations travel along the surface of solid materials.
    While the team’s discovery, at this stage, is simply a building block for other scientists to take and adapt as needed, the material also has the potential to be scaled up for larger applications related to civil engineering, micro-electromechanical systems (MEMS) and national defense uses.
    “Conventional materials are limited to only three dimensions with an X, Y and Z axis,” Huang said. “But now we are building materials in the synthetic dimension, or 4D, which allows us to manipulate the energy wave path to go exactly where we want it to go as it travels from one corner of a material to another.”
    This breakthrough discovery, called topological pumping, could one day lead to advancements in quantum mechanics and quantum computing by allowing for the development of higher dimension quantum-mechanical effects.
    “Most of the energy — 90% — from an earthquake happens along the surface of the Earth,” Huang said. “Therefore, by covering a pillow-like structure in this material and placing it on the Earth’s surface underneath a building, it could potentially help keep the structure from collapsing during an earthquake.”
    The work builds on previous research by Huang and colleagues, which demonstrated how a passive metamaterial could control the path of sound waves as they travel from one corner of a material to another.

  •

    Researchers find little evidence of cheating with online, unsupervised exams

    When Iowa State University switched from in-person to remote learning halfway through the spring semester of 2020, psychology professor Jason Chan was worried. Would unsupervised, online exams unleash rampant cheating?
    His initial reaction flipped to surprise as test results rolled in. Individual student scores were slightly higher but consistent with their results from in-person, proctored exams. Those receiving B’s before the COVID-19 lockdown were still pulling in B’s when the tests were online and unsupervised. This pattern held true for students up and down the grading scale.
    “The fact that the student rankings stayed mostly the same regardless of whether they were taking in-person or online exams indicated that cheating was either not prevalent or that it was ineffective at significantly boosting scores,” says Chan.
    To see whether this pattern held at a broader level, Chan and Dahwi Ahn, a Ph.D. candidate in psychology, analyzed test score data from nearly 2,000 students across 18 classes during the spring 2020 semester. Their sample ranged from large, lecture-style courses with high enrollment, like introduction to statistics, to advanced courses in engineering and veterinary medicine.
    Across different academic disciplines, class sizes, course levels and test styles (i.e., predominantly multiple choice or short answer), the researchers found the same results. Unsupervised, online exams produced scores very similar to in-person, proctored exams, indicating they can provide a valid and reliable assessment of student learning.
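    The rank-stability observation can be made concrete with a rank correlation. The sketch below uses made-up scores, not the study's data: it computes Spearman's rank correlation between proctored and unproctored scores, where a value near 1 means the ordering of students barely changed even if every score shifted up a little:

```python
# Toy illustration (synthetic scores, not the study's data): if cheating
# were rampant and effective, students' rank order should scramble between
# proctored and unproctored exams. Spearman's rank correlation measures
# how stable that ordering is.

def ranks(xs):
    # Rank positions, 1 = lowest score (no tie handling in this sketch).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

in_person = [62, 71, 78, 84, 90, 95]   # proctored exam scores
online    = [65, 70, 80, 85, 93, 96]   # slightly higher, same ordering

print(spearman(in_person, online))     # 1.0: identical ranking
```

    Scores that rise uniformly leave the correlation at 1.0; only reshuffled rankings pull it down, which is the signature the researchers looked for and did not find.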
    The research findings were recently published in Proceedings of the National Academy of Sciences.
    “Before conducting this research, I had doubts about online and unproctored exams, and I was quite hesitant to use them if there was an option to have them in-person. But after seeing the data, I feel more confident and hope other instructors will, as well,” says Ahn.

  •

    That’s funny — but AI models don’t get the joke

    Large neural networks, a form of artificial intelligence, can generate thousands of jokes along the lines of “Why did the chicken cross the road?” But do they understand why they’re funny?
    Using hundreds of entries from the New Yorker magazine’s Cartoon Caption Contest as a testbed, researchers challenged AI models and humans with three tasks: matching a joke to a cartoon; identifying a winning caption; and explaining why a winning caption is funny.
    In all tasks, humans performed demonstrably better than machines, even as AI advances such as ChatGPT have closed the performance gap. So are machines beginning to “understand” humor? In short, they’re making some progress, but aren’t quite there yet.
    “The way people challenge AI models for understanding is to build tests for them — multiple choice tests or other evaluations with an accuracy score,” said Jack Hessel, Ph.D. ’20, research scientist at the Allen Institute for AI (AI2). “And if a model eventually surpasses whatever humans get at this test, you think, ‘OK, does this mean it truly understands?’ It’s a defensible position to say that no machine can truly ‘understand’ because understanding is a human thing. But whether the machine understands or not, it’s still impressive how well they do on these tasks.”
    Hessel is lead author of “Do Androids Laugh at Electric Sheep? Humor ‘Understanding’ Benchmarks from The New Yorker Caption Contest,” which won a best-paper award at the 61st annual meeting of the Association for Computational Linguistics, held July 9-14 in Toronto.
    Lillian Lee ’93, the Charles Roy Davis Professor in the Cornell Ann S. Bowers College of Computing and Information Science, and Yejin Choi, Ph.D. ’10, professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, and the senior director of common-sense intelligence research at AI2, are also co-authors on the paper.
    For their study, the researchers compiled 14 years’ worth of New Yorker caption contests — more than 700 in all. Each contest included: a captionless cartoon; that week’s entries; the three finalists selected by New Yorker editors; and, for some contests, crowd quality estimates for each submission.

  •

    GPT-3 can reason about as well as a college student, psychologists report

    People solve new problems readily without any special training or practice by comparing them to familiar problems and extending the solution to the new problem. That process, known as analogical reasoning, has long been thought to be a uniquely human ability.
    But now people might have to make room for a new kid on the block.
    Research by UCLA psychologists shows that, astonishingly, the artificial intelligence language model GPT-3 performs about as well as college undergraduates when asked to solve the sort of reasoning problems that typically appear on intelligence tests and standardized tests such as the SAT. The study is published in Nature Human Behaviour.
    But the paper’s authors write that the study raises the question: Is GPT-3 mimicking human reasoning as a byproduct of its massive language training dataset, or is it using a fundamentally new kind of cognitive process?
    Without access to GPT-3’s inner workings — which are guarded by OpenAI, the company that created it — the UCLA scientists can’t say for sure how its reasoning abilities work. They also write that although GPT-3 performs far better than they expected at some reasoning tasks, the popular AI tool still fails spectacularly at others.
    “No matter how impressive our results, it’s important to emphasize that this system has major limitations,” said Taylor Webb, a UCLA postdoctoral researcher in psychology and the study’s first author. “It can do analogical reasoning, but it can’t do things that are very easy for people, such as using tools to solve a physical task. When we gave it those sorts of problems — some of which children can solve quickly — the things it suggested were nonsensical.”
    Webb and his colleagues tested GPT-3’s ability to solve a set of problems inspired by a test known as Raven’s Progressive Matrices, which asks the subject to predict the next image in a complicated arrangement of shapes. To enable GPT-3 to “see” the shapes, Webb converted the images to a text format that GPT-3 could process; that approach also guaranteed that the AI would never have encountered the questions before.
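    The exact encoding Webb used is not described here, but the general idea of serializing a visual matrix puzzle as text can be sketched as follows. The puzzle contents and the bracket format are invented for illustration:

```python
# A matrix-reasoning puzzle rendered as plain text so a language model
# can process it. The puzzle and format are made up for illustration;
# Webb's problems were derived from Raven-style matrices.

problem = [
    [1, 2, 3],
    [2, 3, 4],
    [3, 4, None],   # None marks the cell the model must predict
]

def to_prompt(matrix):
    """Serialize a grid of cells into one text line per row."""
    rows = []
    for row in matrix:
        cells = " ".join("?" if c is None else str(c) for c in row)
        rows.append("[" + cells + "]")
    return "\n".join(rows)

print(to_prompt(problem))
# [1 2 3]
# [2 3 4]
# [3 4 ?]
```

    Because the serialized puzzles were generated fresh, no identical text could have appeared in the model's training data — the property the quoted passage highlights.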

  •

    When electrons slowly vanish during cooling

    Many substances change their properties when they are cooled below a certain critical temperature. Such a phase transition occurs, for example, when water freezes. However, in certain metals there are phase transitions that do not exist in the macrocosm. They arise because of the special laws of quantum mechanics that apply in the realm of nature’s smallest building blocks. It is thought that the concept of electrons as carriers of quantized electric charge no longer applies near these exotic phase transitions. Researchers at the University of Bonn and ETH Zurich have now found a way to prove this directly. Their findings allow new insights into the exotic world of quantum physics. The publication has now been released in the journal Nature Physics.
    If you cool water below zero degrees Celsius, it solidifies into ice. In the process, it abruptly changes its properties. As ice, for example, it has a much lower density than in a liquid state — which is why icebergs float. In physics, this is referred to as a phase transition.
    But there are also phase transitions in which characteristic features of a substance change gradually. If, for example, an iron magnet is heated up to 760 degrees Celsius, it loses its attraction to other pieces of metal — it is then no longer ferromagnetic, but paramagnetic. However, this does not happen abruptly, but continuously: The iron atoms behave like tiny magnets. At low temperatures, they are oriented parallel to each other. When heated, they fluctuate more and more around this rest position until they are completely randomly aligned, and the material loses its magnetism completely. So while the metal is being heated, it can be both somewhat ferromagnetic and somewhat paramagnetic.
    Matter particles cannot be destroyed
    The phase transition thus takes place gradually, so to speak, until finally all the iron is paramagnetic. Along the way, the transition slows down more and more. This behavior is characteristic of all continuous phase transitions. “We call it ‘critical slowing down,’” explains Prof. Dr. Hans Kroha of the Bethe Center for Theoretical Physics at the University of Bonn. “The reason is that with continuous transitions, the two phases get energetically closer and closer together.” It is similar to placing a ball on a ramp: It then rolls downhill, but the smaller the difference in altitude, the more slowly it rolls. When iron is heated, the energy difference between the phases decreases more and more, in part because the magnetization disappears progressively during the transition.
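    A classical toy model illustrates this continuous loss of magnetization. The sketch below is only a cartoon of the ferromagnet-to-paramagnet picture described above, not the quantum transition in the study: a small one-dimensional Ising chain updated with the Metropolis rule, where the average alignment is near-total at low temperature and largely washed out at high temperature:

```python
import math
import random

# Toy illustration of magnetization fading with temperature: a classical
# 1D Ising chain with Metropolis updates. This is a cartoon of the
# continuous ordering described in the text, not the study's quantum
# phase transition.

def magnetization(temperature, n=200, sweeps=400, seed=1):
    rng = random.Random(seed)
    spins = [1] * n                       # start fully aligned
    for _ in range(sweeps * n):
        i = rng.randrange(n)
        # Energy cost of flipping spin i (nearest neighbours, periodic chain).
        dE = 2 * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
        if dE <= 0 or rng.random() < math.exp(-dE / temperature):
            spins[i] = -spins[i]
    return abs(sum(spins)) / n            # order parameter, 0..1

cold = magnetization(0.2)   # low temperature: alignment nearly total
hot = magnetization(5.0)    # high temperature: alignment largely lost
print(cold, hot)
```

    Between those extremes the order parameter shrinks smoothly rather than jumping, which is the hallmark of a continuous transition.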
    Such a “slowing down” is typical for phase transitions based on the excitation of bosons. Bosons are particles that “generate” interactions (on which, for example, magnetism is based). Matter, on the other hand, is not made up of bosons but of fermions. Electrons, for example, belong to the fermions.
    Phase transitions are based on the fact that particles (or also the phenomena triggered by them) disappear. This means that the magnetism in iron becomes smaller and smaller as fewer atoms are aligned in parallel. “Fermions, however, cannot be destroyed due to fundamental laws of nature and therefore cannot disappear,” Kroha explains. “That’s why normally they are never involved in phase transitions.”
    Electrons turn into quasi-particles

  •

    3D display could soon bring touch to the digital world

    Imagine an iPad that’s more than just an iPad — with a surface that can morph and deform, allowing you to draw 3D designs, create haiku that jump out from the screen and even hold your partner’s hand from an ocean away.
    That’s the vision of a team of engineers from the University of Colorado Boulder. In a new study, they’ve created a one-of-a-kind shape-shifting display that fits on a card table. The device is made from a 10-by-10 grid of soft robotic “muscles” that can sense outside pressure and pop up to create patterns. It’s precise enough to generate scrolling text and fast enough to shake a chemistry beaker filled with fluid.
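    As a loose software analogy for driving such a display — the real HASEL hardware and its control electronics are far more involved, and everything below is invented for illustration — one can model the surface as a 10-by-10 grid of on/off cells and render a character bitmap onto it:

```python
# Toy model of a 10x10 grid of pop-up actuators displaying a character,
# loosely inspired by the display's scrolling text. The bitmap, offsets,
# and on/off abstraction are made up; real HASEL control is analog.

GRID = 10

# 5x5 bitmap for the letter "H" (illustrative).
H = [
    "1...1",
    "1...1",
    "11111",
    "1...1",
    "1...1",
]

def render(bitmap, offset=2):
    """Place a bitmap on an all-flat GRID x GRID frame of 0/1 cells."""
    frame = [[0] * GRID for _ in range(GRID)]
    for r, row in enumerate(bitmap):
        for c, cell in enumerate(row):
            if cell == "1":
                frame[r + offset][c + offset] = 1   # 1 = actuator popped up
    return frame

frame = render(H)
for row in frame:
    print("".join("#" if cell else "." for cell in row))
```

    Scrolling text would then just be re-rendering with a shifting offset each frame; the sensing half of the real device — cells that also read outside pressure — has no counterpart in this sketch.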
    It may also deliver something even rarer: the sense of touch in a digital age.
    “As technology has progressed, we started with sending text over long distances, then audio and now video,” said Brian Johnson, one of two lead authors of the new study who earned his doctorate in mechanical engineering at CU Boulder in 2022. “But we’re still missing touch.”
    Johnson and his colleagues described their shape display July 31 in the journal Nature Communications.
    The group’s innovation builds off a class of soft robots pioneered by a team led by Christoph Keplinger, formerly an assistant professor of mechanical engineering at CU Boulder. They’re called Hydraulically Amplified Self-Healing ELectrostatic (HASEL) actuators. The prototype display isn’t ready for the market yet. But the researchers envision that, one day, similar technologies could lead to sensory gloves for virtual gaming or a smart conveyor belt that can undulate to sort apples from bananas.
    “You could imagine arranging these sensing and actuating cells into any number of different shapes and combinations,” said Mantas Naris, co-lead author of the paper and a doctoral student in the Paul M. Rady Department of Mechanical Engineering. “There’s really no limit to what these technologies could, ultimately, lead to.”
    Playing the accordion