More stories

  • Robot preachers get less respect, fewer donations

    As artificial intelligence expands across more professions, robot preachers and AI programs offer new means of sharing religious beliefs, but they may undermine credibility and reduce donations for religious groups that rely on them, according to research published by the American Psychological Association.
    “It seems like robots take over more occupations every year, but I wouldn’t be so sure that religious leaders will ever be fully automated because religious leaders need credibility, and robots aren’t credible,” said lead researcher Joshua Conrad Jackson, PhD, an assistant professor at the University of Chicago in the Booth School of Business.
    The research was published in the Journal of Experimental Psychology: General.
    Jackson and his colleagues conducted an experiment with the Mindar humanoid robot at the Kodai-Ji Buddhist temple in Kyoto, Japan. The robot has a humanlike silicone face with moving lips and blinking eyes on a metal body. It delivers 25-minute Heart Sutra sermons on Buddhist principles with surround sound and multimedia projections.
    Mindar, which was created in 2019 by a Japanese robotics team in partnership with the temple, cost almost $1 million to develop, but it might be reducing donations to the temple, according to the study.
    The researchers surveyed 398 participants who were leaving the temple after hearing a sermon delivered either by Mindar or a human Buddhist priest. Participants viewed Mindar as less credible and gave smaller donations than those who heard a sermon from the human priest.
    In another experiment in a Taoist temple in Singapore, half of the 239 participants heard a sermon by a human priest while the other half heard the same sermon from a humanoid robot called Pepper. That experiment had similar findings — the robot was viewed as less credible and inspired smaller donations. Participants who heard the robot sermon also said they were less likely to share its message or distribute flyers to support the temple.
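    The group comparisons behind these results are standard between-subjects analyses. As an illustration only (not the authors’ actual statistical pipeline), comparing donation amounts across the two sermon conditions might look like the following, with synthetic numbers standing in for the temple data:

    ```python
    # Illustrative two-group comparison of donation amounts; all values are
    # synthetic and hypothetical, not the study's data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    human_donations = rng.gamma(shape=2.0, scale=5.0, size=200)   # hypothetical
    robot_donations = rng.gamma(shape=2.0, scale=4.0, size=198)   # hypothetical

    t, p = stats.ttest_ind(human_donations, robot_donations, equal_var=False)
    print(f"human mean = {human_donations.mean():.2f}, "
          f"robot mean = {robot_donations.mean():.2f}")
    print(f"Welch's t = {t:.2f}, p = {p:.4f}")
    ```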

  • Going the distance for better wireless charging

    A better way to wirelessly charge over long distances has been developed at Aalto University. Engineers have optimized the way antennas transmitting and receiving power interact with each other, making use of the phenomenon of “radiation suppression.” The result is a better theoretical understanding of wireless power transfer than the conventional inductive approach provides, a significant advance in the field.
    Charging over short distances, such as through induction pads, uses magnetic near fields to transfer power with high efficiency, but at longer distances the efficiency dramatically drops. New research shows that this high efficiency can be sustained over long distances by suppressing the radiation resistance of the loop antennas that are sending and receiving power. Previously, the same lab created an omnidirectional wireless charging system that allowed devices to be charged at any orientation. Now, they have extended that work with a new dynamic theory of wireless charging that looks more closely at both near (non-radiative) and far (radiative) distances and conditions. In particular, they show that high transfer efficiency, over 80 percent, can be achieved at distances approximately five times the size of the antenna, utilizing the optimal frequency within the hundred-megahertz range.
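    For a sense of the conventional baseline being improved on here, textbook resonant inductive links have a maximum efficiency set by the coil quality factors and the coupling coefficient k, which falls off roughly with the cube of distance for small coaxial loops. Below is a minimal Python sketch under those textbook assumptions; the loop radius, quality factors, and the falloff model for k are illustrative guesses, not the Aalto system:

    ```python
    # Textbook estimate of resonant inductive transfer efficiency vs. distance.
    # Loop size, Q values, and the coupling model are assumed for illustration.
    import numpy as np

    def max_efficiency(k: float, q1: float, q2: float) -> float:
        """Standard figure of merit for a resonant inductive link."""
        x = k * k * q1 * q2
        return x / (1.0 + np.sqrt(1.0 + x)) ** 2

    radius = 0.05                        # loop radius in metres (assumed)
    q1 = q2 = 500.0                      # loaded quality factors (assumed)
    for d in [0.05, 0.10, 0.25, 0.50]:   # separation in metres
        k = 0.5 * (radius / d) ** 3      # dipole-like falloff for coaxial loops
        print(f"d = {d:4.2f} m   k = {k:.2e}   "
              f"efficiency = {max_efficiency(k, q1, q2):.1%}")
    ```

    With these assumed numbers, the conventional link is already well below 80 percent at five times the loop size, which is exactly the regime the radiation-suppression approach targets.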
    “We wanted to balance effectively transferring power with the radiation loss that always happens over longer distances,” says lead author Nam Ha-Van, a postdoctoral researcher at Aalto University. “It turns out that when the currents in the loop antennas have equal amplitudes and opposite phases, we can cancel the radiation loss, thus boosting efficiency.”
    The researchers created a way to analyse any wireless power transfer system, either mathematically or experimentally. This allows for a more thorough evaluation of power transfer efficiency, at both near and far distances, which hasn’t been done before. They then tested how charging worked between two loop antennas positioned at a considerable distance relative to their sizes, establishing that radiation suppression is the mechanism that helps boost transfer efficiency.
    “This is all about figuring out the optimal setup for wireless power transfer, whether near or far,” says Ha-Van. “With our approach, we can now extend the transfer distance beyond that of conventional wireless charging systems, while maintaining high efficiency.” Wireless power transfer is not just important for phones and gadgets; biomedical implants with limited battery capacity can also benefit. The research of Ha-Van and colleagues can also account for barriers like human tissue that can impede charging.

  • Scientists develop AI-based tracking and early-warning system for viral pandemics

    Scripps Research scientists have developed a machine-learning system — a type of artificial intelligence (AI) application — that can track the detailed evolution of epidemic viruses and predict the emergence of viral variants with important new properties.
    In a paper in Cell Patterns on July 21, 2023, the scientists demonstrated the system by using data on recorded SARS-CoV-2 variants and COVID-19 mortality rates. They showed that the system could have predicted the emergence of new SARS-CoV-2 “variants of concern” (VOCs) ahead of their official designations by the World Health Organization (WHO). Their findings point to the possibility of using such a system in real time to track future viral pandemics.
    “There are rules of pandemic virus evolution that we have not understood but can be discovered, and used in an actionable sense by private and public health organizations, through this unprecedented machine-learning approach,” says study senior author William Balch, PhD, professor in the Department of Molecular Medicine at Scripps Research.
    The co-first authors of the study were Salvatore Loguercio, PhD, a staff scientist in the Balch lab at the time of the study, and currently a staff scientist at the Scripps Research Translational Institute; and Ben Calverley, PhD, a postdoctoral research associate in the Balch lab.
    The Balch lab specializes in the development of computational, often AI-based methods to illuminate how genetic variations alter the symptoms and spread of diseases. For this study, they applied their approach to the COVID-19 pandemic. They developed machine-learning software, using a strategy called Gaussian process-based spatial covariance, to relate three data sets spanning the course of the pandemic: the genetic sequences of SARS-CoV-2 variants found in infected people worldwide, the frequencies of those variants, and the global mortality rate for COVID-19.
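    The Gaussian-process machinery at the heart of this kind of analysis can be sketched briefly. The following is a minimal GP regression on a synthetic variant-frequency curve using scikit-learn; the kernel choice and the data are assumptions for illustration, not the study’s spatial-covariance model:

    ```python
    # Minimal Gaussian process regression on a synthetic variant-frequency
    # time series; illustrative only, not the Scripps spatial-covariance model.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)
    weeks = np.arange(0, 40, dtype=float).reshape(-1, 1)
    # Hypothetical logistic rise of a variant's frequency, plus sampling noise.
    freq = 1.0 / (1.0 + np.exp(-(weeks.ravel() - 20.0) / 3.0))
    observed = freq + rng.normal(0.0, 0.03, size=freq.shape)

    kernel = RBF(length_scale=5.0) + WhiteKernel(noise_level=1e-3)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(weeks, observed)

    future = np.arange(40, 48, dtype=float).reshape(-1, 1)
    mean, std = gp.predict(future, return_std=True)
    for w, m, s in zip(future.ravel(), mean, std):
        print(f"week {w:2.0f}: predicted frequency {m:.2f} +/- {s:.2f}")
    ```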
    “This computational method used data from publicly available repositories,” Loguercio says. “But it can be applied to any genetic mapping resource.”
    The software enabled the researchers to track sets of genetic changes appearing in SARS-CoV-2 variants around the world. These changes — typically trending towards increased spread rates and decreased mortality rates — signified the virus’ adaptations to lockdowns, mask wearing, vaccines, increasing natural immunity in the global population, and the relentless competition among SARS-CoV-2 variants themselves.

  • Detecting threats beyond the limits of human, sensor sight

    Remember what it’s like to twirl a sparkler on a summer night? Hold it still and the fire crackles and sparks but twirl it around and the light blurs into a line tracing each whirl and jag you make.
    A new patented software system developed at Sandia National Laboratories can pick out the curves of motion in streaming video and images from satellites, drones and far-range security cameras and turn them into signals for finding and tracking moving objects as small as one pixel. The developers say the system can enhance the performance of any remote sensing application.
    “Being able to track each pixel from a distance matters, and it is an ongoing and challenging problem,” said Tian Ma, a computer scientist and co-developer of the system. “For physical security surveillance systems, for example, the farther out you can detect a possible threat, the more time you have to prepare and respond. Often the biggest challenge is the simple fact that when objects are located far away from the sensors, their size naturally appears to be much smaller. Sensor sensitivity diminishes as the distance from the target increases.”
    Ma and Robert Anderson started working on the Multi-frame Moving Object Detection System in 2015 as a Sandia Laboratory Directed Research and Development project. A paper about MMODS was recently published in Sensors.
    Detecting one moving pixel in a sea of 10 million
    The ability to detect objects through remote sensing systems is typically limited to what can be seen in a single video frame, whereas MMODS uses a new, multiframe method to detect small objects in low visibility conditions, Ma said. At a computer station, image streams from various sensors flow in, and MMODS processes the data with an image filter frame by frame in real time. An algorithm detects movement in the video frames and converts it into target signals that can be correlated and then integrated across a sequence of video frames.
    This process improves the signal-to-noise ratio, or overall image quality, because the moving target’s signal can be correlated over time and accumulates steadily, whereas background noise like wind-driven motion is filtered out because it is random and uncorrelated.
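    The underlying signal-processing idea is classic coherent integration: a persistent target’s contribution grows linearly with the number of frames, while uncorrelated noise grows only as its square root. The sketch below shows that effect for a single dim pixel; a real system like MMODS would first align frames along each candidate motion track, a step omitted here for simplicity:

    ```python
    # Integrating frames boosts a persistent 1-pixel target's SNR by ~sqrt(N),
    # since random noise averages out. Generic sketch, not MMODS itself.
    import numpy as np

    rng = np.random.default_rng(2)
    n_frames, h, w = 64, 100, 100
    row, col, amp = 50, 50, 0.5                 # dim target, below the noise floor

    stack = rng.normal(0.0, 1.0, size=(n_frames, h, w))  # sensor noise
    stack[:, row, col] += amp                   # persistent target signal

    single = stack[0]
    integrated = stack.mean(axis=0)             # average across all frames

    def snr(img):
        return img[row, col] / img.std()

    print(f"single-frame SNR:        {snr(single):.2f}")
    print(f"integrated SNR (N={n_frames}): {snr(integrated):.2f}")
    ```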

  • Dreaming in technicolor

    A team of computer scientists and designers at the University of Waterloo has developed a tool to help people use colour better in graphic design.
    The tool, De-Stijl, uses powerful machine learning technology to suggest intuitive colour palettes for novice designers and inexperienced users. The software combines and improves on the functionalities of existing tools like Figma, Pixlr, and Coolor, allowing users to select important theme colors and quickly visualize how they’ll impact a design.
    “You put your graphical elements into the canvas,” said Jian Zhao, an assistant professor of computer science at Waterloo. “De-Stijl separates it into background, image, decoration and text, and based on these it creates a palette and then can make recommendations based on the design elements of layout, colour proximity, and proportion.”
    De-Stijl’s most exciting contribution is an innovative 2-D colour palette, developed in consultation with expert graphic designers, that not only suggests colours but also demonstrates their impact in different distributions.
    “Humans perceive colors differently based on their proportion and their placement,” said Xinyu Shi, a PhD student in computer science and the lead author on the research. “With the 2D format, users can better perceive how their current graphic designs look, focusing on the colour itself.”
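    The extraction step underneath a palette tool is often framed as colour clustering: cluster the pixels, then report each cluster centre along with the share of pixels it covers, which is precisely the proportion information a 2-D palette visualizes. Here is a minimal k-means illustration (a generic stand-in, not De-Stijl’s trained model):

    ```python
    # Extract a small colour palette and per-colour proportions via k-means.
    # The image and cluster count are synthetic; not De-Stijl's actual model.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    # Hypothetical 64x64 RGB design built from three flat colour regions + noise.
    base = np.array([[240, 240, 235], [30, 60, 130], [220, 90, 40]], dtype=float)
    regions = rng.integers(0, 3, size=64 * 64)
    pixels = base[regions] + rng.normal(0.0, 5.0, size=(64 * 64, 3))

    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
    for i, centre in enumerate(km.cluster_centers_):
        share = float(np.mean(km.labels_ == i))
        r, g, b = centre.clip(0, 255).astype(int)
        print(f"colour #{r:02x}{g:02x}{b:02x}  proportion {share:.1%}")
    ```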
    The Waterloo-led project grew out of a longstanding relationship with Adobe, the design powerhouse responsible for products like Photoshop and InDesign.
    Adobe realized that a lot of people responsible for creating branding and other marketing materials didn’t have advanced graphic design knowledge or the resources to hire expert designers. They tasked the Waterloo team with helping them find AI-powered solutions for these novice designers. 
    The De-Stijl team worked with a combination of design experts and ordinary users to build and test the software. During the testing phase, users customized marketing materials from provided templates using both De-Stijl and its competitors.

  • Future AI algorithms have potential to learn like humans

    Memories can be as tricky to hold onto for machines as they can be for humans. To help understand why artificial agents develop holes in their own cognitive processes, electrical engineers at The Ohio State University have analyzed how much a process called “continual learning” impacts their overall performance.
    Continual learning is when a computer is trained to continuously learn a sequence of tasks, using its accumulated knowledge from old tasks to better learn new tasks.
    Yet one major hurdle scientists still need to overcome before machines can reach that kind of humanlike learning is circumventing the machine learning equivalent of memory loss — a process known in AI agents as “catastrophic forgetting.” As artificial neural networks are trained on one new task after another, they tend to lose the information gained from previous tasks, an issue that could become problematic as society comes to rely on AI systems more and more, said Ness Shroff, an Ohio Eminent Scholar and professor of computer science and engineering at The Ohio State University.
    “As automated driving applications or other robotic systems are taught new things, it’s important that they don’t forget the lessons they’ve already learned for our safety and theirs,” said Shroff. “Our research delves into the complexities of continuous learning in these artificial neural networks, and what we found are insights that begin to bridge the gap between how a machine learns and how a human learns.”
    Researchers found that in the same way that people might struggle to recall contrasting facts about similar scenarios but remember inherently different situations with ease, artificial neural networks can recall information better when faced with diverse tasks in succession, instead of ones that share similar features, Shroff said.
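    A toy experiment makes catastrophic forgetting concrete: train one model on task A, then on task B, and watch the task A error climb. The sketch below uses two linear regression tasks and plain gradient descent; it is a generic illustration, not the Ohio State analysis, and unlike the study it does not model the effect of task similarity:

    ```python
    # Catastrophic forgetting in miniature: sequential training on two tasks
    # with no replay of the first. Generic sketch, not the paper's setup.
    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.normal(size=(200, 5))
    w_a, w_b = rng.normal(size=5), rng.normal(size=5)   # two unrelated tasks
    y_a, y_b = X @ w_a, X @ w_b

    def train(w, X, y, steps=500, lr=0.05):
        for _ in range(steps):
            w = w - lr * X.T @ (X @ w - y) / len(y)     # gradient step on MSE
        return w

    def loss(w, X, y):
        return float(np.mean((X @ w - y) ** 2))

    w = train(np.zeros(5), X, y_a)
    print(f"after task A: loss on A = {loss(w, X, y_a):.4f}")
    w = train(w, X, y_b)                                # task B overwrites A
    print(f"after task B: loss on A = {loss(w, X, y_a):.4f} (forgotten), "
          f"loss on B = {loss(w, X, y_b):.4f}")
    ```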
    The team, including Ohio State postdoctoral researchers Sen Lin and Peizhong Ju and professors Yingbin Liang and Shroff, will present their research this month at the 40th annual International Conference on Machine Learning in Honolulu, Hawaii, a flagship conference in machine learning.
    While it can be challenging to teach autonomous systems to exhibit this kind of dynamic, lifelong learning, possessing such capabilities would allow scientists to scale up machine learning algorithms at a faster rate as well as easily adapt them to handle evolving environments and unexpected situations. Essentially, the goal for these systems would be for them to one day mimic the learning capabilities of humans.

  • Unveiling the quantum dance: Experiments reveal nexus of vibrational and electronic dynamics

    Nearly a century ago, physicists Max Born and J. Robert Oppenheimer developed an approximation regarding how quantum mechanics plays out in molecules, which are composed of intricate systems of nuclei and electrons. The Born-Oppenheimer approximation assumes that the motions of nuclei and electrons in a molecule are independent of each other and can be treated separately.
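    In textbook form, the approximation factorizes the molecular wavefunction into an electronic part solved at fixed nuclear positions and a nuclear part that moves on the resulting energy surface (standard notation, included here for reference):

    ```latex
    % Born-Oppenheimer factorization: electrons (r) are solved with the nuclei
    % (R) clamped; the nuclei then move on the electronic energy surface E(R).
    \Psi(\mathbf{r}, \mathbf{R}) \approx
        \psi_{\mathrm{el}}(\mathbf{r}; \mathbf{R})\, \chi_{\mathrm{nuc}}(\mathbf{R}),
    \qquad
    \hat{H}_{\mathrm{el}}\, \psi_{\mathrm{el}}(\mathbf{r}; \mathbf{R})
        = E_{\mathrm{el}}(\mathbf{R})\, \psi_{\mathrm{el}}(\mathbf{r}; \mathbf{R})
    ```

    The breakdown described below occurs when nuclear motion changes too quickly for the electrons to follow adiabatically, so this factorization no longer holds.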
    This model works the vast majority of the time, but scientists are testing its limits. Recently, a team of scientists demonstrated the breakdown of this assumption on very fast time scales, revealing a close relationship between the dynamics of nuclei and electrons. The discovery could influence the design of molecules useful for solar energy conversion, energy production, quantum information science and more.
    “Understanding the interplay between the spin-vibronic effect and inter-system crossing could potentially lead to new ways to control and exploit the electronic and spin properties of molecules.” — Lin Chen, Argonne Distinguished Fellow and professor of chemistry at Northwestern University
    The team, including scientists from the U.S. Department of Energy’s (DOE) Argonne National Laboratory, Northwestern University, North Carolina State University and University of Washington, recently published their discovery in two related papers in Nature and Angewandte Chemie International Edition.
    “Our work reveals the interplay between the dynamics of electron spin and the vibrational dynamics of the nuclei in molecules on superfast time scales,” said Shahnawaz Rafiq, a research associate at Northwestern University and first author on the Nature paper. “These properties can’t be treated independently — they mix together and affect electronic dynamics in complex ways.”
    A phenomenon called the spin-vibronic effect occurs when changes in the motion of the nuclei within a molecule affect the motion of its electrons. When nuclei vibrate within a molecule — either due to their intrinsic energy or due to external stimuli, such as light — these vibrations can affect the motion of their electrons, which can in turn change the molecule’s spin, a quantum mechanical property related to magnetism.
    In a process called inter-system crossing, an excited molecule or atom changes its electronic state by flipping its electron spin orientation. Inter-system crossing plays an important role in many chemical processes, including those in photovoltaic devices, photocatalysis and even bioluminescent animals. For this crossing to be possible, it requires specific conditions and energy differences between the electronic states involved.

  • Allowing robots to explore on their own

    A research group in Carnegie Mellon University’s Robotics Institute is creating the next generation of explorers — robots.
    The Autonomous Exploration Research Team has developed a suite of robotic systems and planners enabling robots to explore more quickly, probe the darkest corners of unknown environments, and create more accurate and detailed maps. The systems allow robots to do all this autonomously, finding their way and creating a map without human intervention.
    “You can set it in any environment, like a department store or a residential building after a disaster, and off it goes,” said Ji Zhang, a systems scientist in the Robotics Institute. “It builds the map in real-time, and while it explores, it figures out where it wants to go next. You can see everything on the map. You don’t even have to step into the space. Just let the robots explore and map the environment.”
    The team has worked on exploration systems for more than three years. They’ve explored and mapped several underground mines, a parking garage, the Cohon University Center, and several other indoor and outdoor locations on the CMU campus. The system’s computers and sensors can be attached to nearly any robotic platform, transforming it into a modern-day explorer. The group uses a modified motorized wheelchair and drones for much of its testing.
    Robots can explore in three modes using the group’s systems. In one mode, a person can control the robot’s movements and direction while autonomous systems keep it from crashing into walls, ceilings or other objects. In another mode, a person can select a point on a map and the robot will navigate to that point. The third mode is pure exploration. The robot sets off on its own, investigates the entire space and creates a map.
    “This is a very flexible system to use in many applications, from delivery to search-and-rescue,” said Howie Choset, a professor in the Robotics Institute.
    The group combined a 3D scanning lidar sensor, forward-looking camera and inertial measurement unit sensors with an exploration algorithm to enable the robot to know where it is, where it has been and where it should go next. The resulting systems are substantially more efficient than previous approaches, creating more complete maps while reducing the algorithm run time by half.
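    The “where it should go next” decision is commonly framed as frontier-based exploration: find cells on the boundary between mapped free space and unknown space, then head for the most promising one. Here is a minimal occupancy-grid sketch of that strategy (a generic textbook approach, not the CMU planner):

    ```python
    # Frontier-based exploration on an occupancy grid: a frontier is a free
    # cell adjacent to unknown space. Generic sketch, not the CMU system.
    import numpy as np

    FREE, OCCUPIED, UNKNOWN = 0, 1, -1

    grid = np.full((20, 20), UNKNOWN)
    grid[5:15, 5:15] = FREE          # area the robot has already mapped
    grid[9, 9] = OCCUPIED            # an obstacle it has seen
    robot = np.array([10, 10])

    def frontiers(grid):
        """Free cells with at least one 4-connected unknown neighbour."""
        cells = []
        rows, cols = grid.shape
        for r in range(rows):
            for c in range(cols):
                if grid[r, c] != FREE:
                    continue
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                        cells.append((r, c))
                        break
        return np.array(cells)

    f = frontiers(grid)
    goal = f[np.argmin(np.abs(f - robot).sum(axis=1))]   # nearest by Manhattan distance
    print(len(f), "frontier cells; nearest goal:", tuple(int(v) for v in goal))
    ```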
    The new systems work in low-light, treacherous conditions where communication is spotty, like caves, tunnels and abandoned structures. A version of the group’s exploration system powered Team Explorer, an entry from CMU and Oregon State University in DARPA’s Subterranean Challenge. Team Explorer placed fourth in the final competition but won the Most Sectors Explored Award for mapping more of the route than any other team.
    “All of our work is open-sourced. We are not holding anything back. We want to strengthen society with the capabilities of building autonomous exploration robots,” said Chao Cao, a Ph.D. student in robotics and the lead operator for Team Explorer. “It’s a fundamental capability. Once you have it, you can do a lot more.”
    Video: https://youtu.be/pNtC3Twx_2w