More stories

  • Thermal imaging innovation allows AI to see through pitch darkness like broad daylight

    Researchers at Purdue University are advancing the world of robotics and autonomy with their patent-pending method that improves on traditional machine vision and perception.
    Zubin Jacob, the Elmore Associate Professor of Electrical and Computer Engineering in the Elmore Family School of Electrical and Computer Engineering, and research scientist Fanglin Bao have developed HADAR, or heat-assisted detection and ranging. Their research was featured on the cover of the July 26 issue of the peer-reviewed journal Nature. A video about HADAR is available on YouTube. Nature also has released a podcast episode that includes an interview with Jacob.
    Jacob said that by 2030, one in 10 vehicles is expected to be automated, and 20 million robot helpers are expected to be serving people.
    “Each of these agents will collect information about its surrounding scene through advanced sensors to make decisions without human intervention,” Jacob said. “However, simultaneous perception of the scene by numerous agents is fundamentally prohibitive.”
    Traditional active sensors such as LiDAR (light detection and ranging), radar and sonar emit signals and subsequently receive them to collect 3D information about a scene. These methods have drawbacks that increase as they are scaled up, including signal interference and risks to people’s eye safety. In comparison, video cameras that work based on sunlight or other sources of illumination are advantageous, but low-light conditions such as nighttime, fog or rain present a serious impediment.
    Traditional thermal imaging is a fully passive sensing method that collects invisible heat radiation originating from all objects in a scene. It can sense through darkness, inclement weather and solar glare. But Jacob said fundamental challenges hinder its use today.
    “Objects and their environment constantly emit and scatter thermal radiation, leading to textureless images famously known as the ‘ghosting effect,’” Bao said. “Thermal pictures of a person’s face show only contours and some temperature contrast; there are no features, making it seem like you have seen a ghost. This loss of information, texture and features is a roadblock for machine perception using heat radiation.”
    HADAR combines thermal physics, infrared imaging and machine learning to pave the way to fully passive and physics-aware machine perception. More

  • Scientists uncover a surprising connection between number theory and evolutionary genetics

    An interdisciplinary team of mathematicians, engineers, physicists, and medical scientists has uncovered an unexpected link between pure mathematics and genetics that reveals key insights into the structure of neutral mutations and the evolution of organisms.
    Number theory, the study of the properties of positive integers, is perhaps the purest form of mathematics. At first sight, it may seem far too abstract to apply to the natural world. In fact, the influential American number theorist Leonard Dickson wrote ‘Thank God that number theory is unsullied by any application.’ And yet, again and again, number theory finds unexpected applications in science and engineering, from leaf angles that (almost) universally follow the Fibonacci sequence, to modern encryption techniques that rely on the difficulty of factoring products of large primes. Now, researchers have demonstrated an unexpected link between number theory and evolutionary genetics.
    Specifically, the team of researchers (from Oxford, Harvard, Cambridge, GUST, MIT, Imperial, and the Alan Turing Institute) has discovered a deep connection between the sums-of-digits function from number theory and a key quantity in genetics, the phenotype mutational robustness. This quantity is defined as the average probability that a point mutation does not change a phenotype (a characteristic of an organism).
    The discovery may have important implications for evolutionary genetics. Many genetic mutations are neutral, meaning that they can slowly accumulate over time without affecting the viability of the phenotype. These neutral mutations cause genome sequences to change at a steady rate over time. Because this rate is known, scientists can compare the percentage difference in the sequence between two organisms and infer when their latest common ancestor lived.
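    As a rough illustration of that inference (not a calculation from the article), here is a minimal sketch in Python, assuming a hypothetical neutral substitution rate and ignoring repeat mutations at the same site:

        # Toy molecular-clock estimate: if neutral changes accumulate at a known,
        # steady rate, the fraction of differing sites hints at the divergence time.
        RATE_PER_SITE_PER_YEAR = 1e-9  # assumed illustrative rate, not from the article

        def divergence_time_years(fraction_of_sites_differing):
            # Both lineages accumulate changes after the split, hence the factor of 2.
            return fraction_of_sites_differing / (2 * RATE_PER_SITE_PER_YEAR)

        print(divergence_time_years(0.02))  # 2% difference -> 1e7 years under these assumptions
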
    But the existence of these neutral mutations posed an important question: what fraction of mutations to a sequence are neutral? This property, called the phenotype mutational robustness, is the average fraction of mutations, taken across all sequences mapping to a phenotype, that leave that phenotype unaffected.
    Professor Ard Louis from the University of Oxford, who led the study, said: ‘We have known for some time that many biological systems exhibit remarkably high phenotype robustness, without which evolution would not be possible. But we didn’t know what the absolute maximal robustness possible would be, or if there even was a maximum.’
    It is precisely this question that the team has answered. They proved that the maximum robustness is proportional to the logarithm of the fraction of all possible sequences that map to a phenotype, with a correction which is given by the sums-of-digits function s_k(n), defined as the sum of the digits of a natural number n in base k. For example, for n = 123 in base 10, the digit sum would be s_10(123) = 1 + 2 + 3 = 6.
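    The sums-of-digits function itself is straightforward to compute; here is a minimal Python sketch, with the article’s n = 123 example as a sanity check:

        def digit_sum(n, k=10):
            """s_k(n): the sum of the digits of the natural number n written in base k."""
            total = 0
            while n > 0:
                total += n % k
                n //= k
            return total

        assert digit_sum(123, 10) == 1 + 2 + 3  # the example from the text: s_10(123) = 6
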
    Another surprise was that the maximum robustness also turns out to be related to the famous Takagi function, a bizarre function that is continuous everywhere, but differentiable nowhere. This fractal function is also called the blancmange curve, because it looks like the French dessert.
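    The article does not spell out the Takagi function; assuming its standard definition as a sum of ever-finer triangle waves, a short truncated approximation looks like this:

        def takagi(x, terms=40):
            """Approximate the Takagi (blancmange) function by truncating its series:
            T(x) = sum over n >= 0 of dist(2**n * x, nearest integer) / 2**n."""
            total = 0.0
            for n in range(terms):
                y = (2 ** n) * x
                total += abs(y - round(y)) / (2 ** n)
            return total

        print(takagi(1 / 3))  # plotting takagi over [0, 1] traces out the blancmange curve
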
    First author Dr. Vaibhav Mohanty (Harvard Medical School) added: ‘What is most surprising is that we found clear evidence in the mapping from sequences to RNA secondary structures that nature in some cases achieves the exact maximum robustness bound. It’s as if biology knows about the fractal sums-of-digits function.’
    Professor Ard Louis added: ‘The beauty of number theory lies not only in the abstract relationships it uncovers between integers, but also in the deep mathematical structures it illuminates in our natural world. We believe that many intriguing new links between number theory and genetics will be found in the future.’ More

  • Scientists develop method to predict the spread of armed conflicts

    Around the world, political violence increased by 27 percent last year, affecting 1.7 billion people. The numbers come from the Armed Conflict Location & Event Data Project (ACLED), which collects real-time data on conflict events worldwide.
    Some armed conflicts occur between states, such as Russia’s invasion of Ukraine. There are, however, many more that take place within the borders of a single state. In Nigeria, violence, particularly from Boko Haram, has escalated in the past few years. In Somalia, populations remain at risk amidst conflict and attacks perpetrated by armed groups, particularly Al-Shabaab.
    To address the challenge of understanding how violent events spread, a team at the Complexity Science Hub (CSH) created a mathematical method that transforms raw data on armed conflicts into meaningful clusters by detecting causal links.
    “Our main question was: what is a conflict? How can we define it?” says CSH scientist Niraj Kushwaha, one of the coauthors of the study published in the latest issue of PNAS Nexus. “It was important for us to find a quantitative and bias-free way to see if there were any correlations between different violent events, just by looking at the data.”
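    The article does not detail the clustering procedure; as a toy illustration only (simple space-time proximity rather than the causal-link detection the CSH team uses), a raw event list might be grouped into clusters like this:

        from itertools import combinations

        # Made-up events: (id, x in km, y in km, day). Two events are linked when they
        # fall within assumed space and time windows; connected components form clusters.
        events = [(0, 0.0, 0.0, 1), (1, 5.0, 2.0, 3), (2, 400.0, 10.0, 2), (3, 402.0, 12.0, 9)]
        MAX_KM, MAX_DAYS = 20.0, 7

        parent = {e[0]: e[0] for e in events}

        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a

        for (i, xi, yi, ti), (j, xj, yj, tj) in combinations(events, 2):
            near = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5 <= MAX_KM
            recent = abs(ti - tj) <= MAX_DAYS
            if near and recent:
                parent[find(i)] = find(j)

        clusters = {}
        for e in events:
            clusters.setdefault(find(e[0]), []).append(e[0])
        print(list(clusters.values()))  # [[0, 1], [2, 3]]
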
    Inspiration
    “We often tell multiple narratives about a single conflict, which depend on whether we zoom in on it as an example of local tension or zoom out from it and consider it as part of a geopolitical plot; these are not necessarily incompatible,” explains coauthor Eddie Lee, a postdoctoral fellow at CSH. “Our technique allows us to titrate between them and fill out a multiscale portrait of conflict.”
    In order to investigate the many scales of political violence, the researchers turned to physics and biophysics for inspiration. The approach they developed is inspired by studies of stress propagation in collapsing materials and of neural cascades in the brain. More

  • Google and ChatGPT have mixed results in medical information queries

    When you need accurate information about a serious illness, should you go to Google or ChatGPT?
    An interdisciplinary study led by University of California, Riverside, computer scientists found that both internet information-gathering services have strengths and weaknesses for people seeking information about Alzheimer’s disease and other forms of dementia. The team included clinical scientists from the University of Alabama and Florida International University.
    Google provides the most current information, but query results are skewed by service and product providers seeking customers, the researchers found. ChatGPT, meanwhile, provides more objective information, but it can be outdated and it does not cite the sources of its information in its narrative responses.
    “If you pick the best features of both, you can build a better system, and I think that this is what will happen in the next couple of years,” said Vagelis Hristidis, a professor of computer science and engineering in UCR’s Bourns College of Engineering.
    In their study, Hristidis and his co-authors submitted 60 queries to both Google and ChatGPT that would be typical submissions from people living with dementia and their families.
    The researchers focused on dementia because more than 6 million Americans are impacted by Alzheimer’s disease or a related condition, said study co-author Nicole Ruggiano, a professor of social work at the University of Alabama.
    “Research also shows that caregivers of people living with dementia are among the most engaged stakeholders in pursuing health information, since they often are tasked with making decisions for their loved one’s care,” Ruggiano said. More

  • Scientists create novel approach to control energy waves in 4D

    Everyday life involves three dimensions, or 3D — along an X, Y and Z axis, or up and down, left and right, and forward and back. But in recent years, scientists like Guoliang Huang, the Huber and Helen Croft Chair in Engineering at the University of Missouri, have explored a “fourth dimension” (4D), or synthetic dimension, as an extension of our current physical reality.
    Now, Huang and a team of scientists in the Structured Materials and Dynamics Lab at the MU College of Engineering have successfully created a new synthetic metamaterial with 4D capabilities, including the ability to control energy waves on the surface of a solid material. These waves, called mechanical surface waves, are fundamental to how vibrations travel along the surface of solid materials.
    While the team’s discovery, at this stage, is simply a building block for other scientists to take and adapt as needed, the material also has the potential to be scaled up for larger applications related to civil engineering, micro-electromechanical systems (MEMS) and national defense uses.
    “Conventional materials are limited to only three dimensions with an X, Y and Z axis,” Huang said. “But now we are building materials in the synthetic dimension, or 4D, which allows us to manipulate the energy wave path to go exactly where we want it to go as it travels from one corner of a material to another.”
    This breakthrough discovery, called topological pumping, could one day lead to advancements in quantum mechanics and quantum computing by allowing for the development of higher-dimensional quantum-mechanical effects.
    “Most of the energy — 90% — from an earthquake happens along the surface of the Earth,” Huang said. “Therefore, by covering a pillow-like structure in this material and placing it on the Earth’s surface underneath a building, it could potentially help keep the structure from collapsing during an earthquake.”
    The work builds on previous research by Huang and colleagues that demonstrated how a passive metamaterial could control the path of sound waves as they travel from one corner of a material to another. More

  • Researchers find little evidence of cheating with online, unsupervised exams

    When Iowa State University switched from in-person to remote learning halfway through the spring semester of 2020, psychology professor Jason Chan was worried. Would unsupervised, online exams unleash rampant cheating?
    His initial reaction flipped to surprise as test results rolled in. Individual student scores were slightly higher but consistent with their results from in-person, proctored exams. Those receiving B’s before the COVID-19 lockdown were still pulling in B’s when the tests were online and unsupervised. This pattern held true for students up and down the grading scale.
    “The fact that the student rankings stayed mostly the same regardless of whether they were taking in-person or online exams indicated that cheating was either not prevalent or that it was ineffective at significantly boosting scores,” says Chan.
    To find out whether this pattern held more broadly, Chan and Dahwi Ahn, a Ph.D. candidate in psychology, analyzed test score data from nearly 2,000 students across 18 classes during the spring 2020 semester. Their sample ranged from large, lecture-style courses with high enrollment, like introduction to statistics, to advanced courses in engineering and veterinary medicine.
    Across different academic disciplines, class sizes, course levels and test styles (i.e., predominantly multiple choice or short answer), the researchers found the same results. Unsupervised, online exams produced scores very similar to in-person, proctored exams, indicating they can provide a valid and reliable assessment of student learning.
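    As a hedged sketch of the kind of consistency check described above (with made-up scores, not the study’s data), one could compare how the same students rank under the two exam formats:

        from scipy.stats import spearmanr

        # Illustrative scores for the same six students: proctored in-person exams
        # before the switch vs. unsupervised online exams afterwards.
        in_person = [62, 71, 78, 84, 90, 95]
        online = [65, 70, 80, 88, 91, 97]

        rho, p_value = spearmanr(in_person, online)
        print(rho)  # a rank correlation near 1.0 means the ordering of students barely changed
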
    The research findings were recently published in Proceedings of the National Academy of Sciences.
    “Before conducting this research, I had doubts about online and unproctored exams, and I was quite hesitant to use them if there was an option to have them in-person. But after seeing the data, I feel more confident and hope other instructors will, as well,” says Ahn. More

  • That’s funny — but AI models don’t get the joke

    Large neural networks, a form of artificial intelligence, can generate thousands of jokes along the lines of “Why did the chicken cross the road?” But do they understand why they’re funny?
    Using hundreds of entries from the New Yorker magazine’s Cartoon Caption Contest as a testbed, researchers challenged AI models and humans with three tasks: matching a joke to a cartoon; identifying a winning caption; and explaining why a winning caption is funny.
    In all tasks, humans performed demonstrably better than machines, even as AI advances such as ChatGPT have narrowed the performance gap. So are machines beginning to “understand” humor? In short, they’re making some progress, but aren’t quite there yet.
    “The way people challenge AI models for understanding is to build tests for them — multiple choice tests or other evaluations with an accuracy score,” said Jack Hessel, Ph.D. ’20, research scientist at the Allen Institute for AI (AI2). “And if a model eventually surpasses whatever humans get at this test, you think, ‘OK, does this mean it truly understands?’ It’s a defensible position to say that no machine can truly ‘understand’ because understanding is a human thing. But, whether the machine understands or not, it’s still impressive how well they do on these tasks.”
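    As an illustration of such an accuracy-scored test (a toy harness, not the paper’s evaluation code, with a hypothetical score_fn standing in for a real model), the caption-matching task can be framed like this:

        # For each cartoon, a model scores several candidate captions; accuracy is the
        # fraction of cartoons for which the true caption receives the highest score.
        def matching_accuracy(items, score_fn):
            correct = 0
            for cartoon_description, candidate_captions, true_index in items:
                scores = [score_fn(cartoon_description, c) for c in candidate_captions]
                if scores.index(max(scores)) == true_index:
                    correct += 1
            return correct / len(items)

        # Dummy scorer: counts words shared between the cartoon description and a caption.
        def dummy_score(cartoon, caption):
            return len(set(cartoon.lower().split()) & set(caption.lower().split()))

        items = [("a chicken stands at a crosswalk",
                  ["Why did the chicken cross the road?", "Nice weather for ducks."], 0)]
        print(matching_accuracy(items, dummy_score))  # 1.0 for this single toy example
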
    Hessel is lead author of “Do Androids Laugh at Electric Sheep? Humor ‘Understanding’ Benchmarks from The New Yorker Caption Contest,” which won a best-paper award at the 61st annual meeting of the Association for Computational Linguistics, held July 9-14 in Toronto.
    Lillian Lee ’93, the Charles Roy Davis Professor in the Cornell Ann S. Bowers College of Computing and Information Science, and Yejin Choi, Ph.D. ’10, professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, and the senior director of common-sense intelligence research at AI2, are also co-authors on the paper.
    For their study, the researchers compiled 14 years’ worth of New Yorker caption contests — more than 700 in all. Each contest included: a captionless cartoon; that week’s entries; the three finalists selected by New Yorker editors; and, for some contests, crowd quality estimates for each submission. More

  • GPT-3 can reason about as well as a college student, psychologists report

    People solve new problems readily without any special training or practice by comparing them to familiar problems and extending the solution to the new problem. That process, known as analogical reasoning, has long been thought to be a uniquely human ability.
    But now people might have to make room for a new kid on the block.
    Research by UCLA psychologists shows that, astonishingly, the artificial intelligence language model GPT-3 performs about as well as college undergraduates when asked to solve the sort of reasoning problems that typically appear on intelligence tests and standardized tests such as the SAT. The study is published in Nature Human Behaviour.
    But the paper’s authors write that the study raises the question: Is GPT-3 mimicking human reasoning as a byproduct of its massive language training dataset, or is it using a fundamentally new kind of cognitive process?
    Without access to GPT-3’s inner workings — which are guarded by OpenAI, the company that created it — the UCLA scientists can’t say for sure how its reasoning abilities work. They also write that although GPT-3 performs far better than they expected at some reasoning tasks, the popular AI tool still fails spectacularly at others.
    “No matter how impressive our results, it’s important to emphasize that this system has major limitations,” said Taylor Webb, a UCLA postdoctoral researcher in psychology and the study’s first author. “It can do analogical reasoning, but it can’t do things that are very easy for people, such as using tools to solve a physical task. When we gave it those sorts of problems — some of which children can solve quickly — the things it suggested were nonsensical.”
    Webb and his colleagues tested GPT-3’s ability to solve a set of problems inspired by a test known as Raven’s Progressive Matrices, which asks the subject to predict the next image in a complicated arrangement of shapes. To enable GPT-3 to “see” the shapes, Webb converted the images to a text format that GPT-3 could process; that approach also guaranteed that the AI would never have encountered the questions before. More
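    As a purely illustrative example (the study’s exact text format is not described here), a matrix-style problem might be rendered as plain text for a language model along these lines:

        # Encode a simple 3x3 pattern as text, leaving the last cell for the model to fill in.
        matrix = [["1", "2", "3"],
                  ["2", "3", "4"],
                  ["3", "4", "?"]]

        prompt = "Complete the pattern.\n"
        for row in matrix:
            prompt += "[ " + "  ".join(row) + " ]\n"
        prompt += "Answer:"

        print(prompt)
        # The prompt would then be submitted to the language model; the API call is
        # omitted because the study's setup is not detailed in this article.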