More stories

  • Dreaming in technicolor

    A team of computer scientists and designers at the University of Waterloo has developed a tool to help people use colour more effectively in graphic design.
    The tool, De-Stijl, uses powerful machine learning technology to suggest intuitive colour palettes for novice designers and inexperienced users. The software combines and improves on the functionalities of existing tools like Figma, Pixlr, and Coolor, allowing users to select important theme colours and quickly visualize how they’ll impact a design.
    “You put your graphical elements into the canvas,” said Jian Zhao, an assistant professor of computer science at Waterloo. “De-Stijl separates it into background, image, decoration and text, and based on these it creates a palette and then can make recommendations based on the design elements of layout, colour proximity, and proportion.”
    De-Stijl’s most exciting contribution is an innovative 2-D colour palette, developed in consultation with expert graphic designers, that not only suggests colours but also demonstrates their impact in different distributions.
    “Humans perceive colors differently based on their proportion and their placement,” said Xinyu Shi, a PhD student in computer science and the lead author on the research. “With the 2D format, users can better perceive how their current graphic designs look, focusing on the colour itself.”
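    Palette extraction of this kind is commonly done by clustering pixel colours and tracking how much of the canvas each cluster covers. The sketch below is not De-Stijl's actual algorithm, just a minimal proportion-aware palette extractor using plain k-means on synthetic pixels:

```python
import numpy as np

def dominant_palette(pixels, k=3, iters=20):
    """Cluster RGB pixels with plain k-means and return each cluster centre
    together with the fraction of pixels it covers -- a crude stand-in for a
    proportion-aware palette like the one De-Stijl visualizes."""
    # Seed one centre in each region for a deterministic sketch
    # (real k-means would use k-means++ or random restarts).
    centres = pixels[[0, 700, 900]].copy()
    for _ in range(iters):
        # assign each pixel to its nearest centre
        d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pixels[labels == j].mean(axis=0)
    props = np.bincount(labels, minlength=k) / len(pixels)
    return centres, props

# Synthetic "design": 70% near-white background, 20% blue, 10% red accents.
rng = np.random.default_rng(1)
pixels = np.vstack([
    rng.normal([240, 240, 240], 5, (700, 3)),
    rng.normal([40, 80, 200], 5, (200, 3)),
    rng.normal([200, 40, 40], 5, (100, 3)),
])
centres, props = dominant_palette(pixels)
print(props)   # ≈ [0.7, 0.2, 0.1]
```

    Pairing each colour with its coverage fraction is what lets a 2D palette convey proportion as well as hue.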
    The Waterloo-led project grew out of a longstanding relationship with Adobe, the design powerhouse responsible for products like Photoshop and InDesign.
    Adobe realized that a lot of people responsible for creating branding and other marketing materials didn’t have advanced graphic design knowledge or the resources to hire expert designers. They tasked the Waterloo team with helping them find AI-powered solutions for these novice designers. 
    The De-Stijl team worked with a combination of design experts and ordinary users to build and test the software. During the testing phase, users customized marketing materials from provided templates using both De-Stijl and its competitors.

  • Future AI algorithms have potential to learn like humans

    Memories can be as tricky to hold onto for machines as they can be for humans. To help understand why artificial agents develop holes in their own cognitive processes, electrical engineers at The Ohio State University have analyzed how much a process called “continual learning” impacts their overall performance.
    Continual learning is when a computer is trained to continuously learn a sequence of tasks, using its accumulated knowledge from old tasks to better learn new tasks.
    Yet one major hurdle scientists still need to overcome to achieve this kind of learning is the machine learning equivalent of memory loss — a process which in AI agents is known as “catastrophic forgetting.” As artificial neural networks are trained on one new task after another, they tend to lose the information gained from previous tasks, an issue that could become problematic as society comes to rely on AI systems more and more, said Ness Shroff, an Ohio Eminent Scholar and professor of computer science and engineering at The Ohio State University.
    “As automated driving applications or other robotic systems are taught new things, it’s important that they don’t forget the lessons they’ve already learned for our safety and theirs,” said Shroff. “Our research delves into the complexities of continuous learning in these artificial neural networks, and what we found are insights that begin to bridge the gap between how a machine learns and how a human learns.”
    Researchers found that in the same way that people might struggle to recall contrasting facts about similar scenarios but remember inherently different situations with ease, artificial neural networks can recall information better when faced with diverse tasks in succession, instead of ones that share similar features, Shroff said.
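    The effect the researchers describe can be reproduced in miniature: train a linear classifier on one task, then on a maximally similar task with conflicting labels, and watch the first task's accuracy collapse. A toy numpy sketch (not the Ohio State team's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Task A: classify points by the sign of feature 0.
# Task B: the same inputs with the labels flipped -- an extreme case of
# "similar tasks that share features", which interferes maximally.
X = rng.normal(size=(200, 2))
y_a = (X[:, 0] > 0).astype(float)
y_b = 1.0 - y_a

def train(w, b, X, y, lr=0.5, epochs=200):
    """Plain logistic-regression gradient descent, continuing from (w, b)."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w = w - lr * X.T @ (p - y) / len(y)
        b = b - lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == (y > 0.5))

w, b = np.zeros(2), 0.0
w, b = train(w, b, X, y_a)            # learn task A
acc_before = accuracy(w, b, X, y_a)   # near-perfect on task A
w, b = train(w, b, X, y_b)            # then learn task B
acc_after = accuracy(w, b, X, y_a)    # task A has been overwritten
print(acc_before, acc_after)
```

    Continual-learning methods aim to blunt exactly this drop without freezing the network.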
    The team, including Ohio State postdoctoral researchers Sen Lin and Peizhong Ju and professors Yingbin Liang and Shroff, will present their research this month at the 40th annual International Conference on Machine Learning in Honolulu, Hawaii, a flagship conference in machine learning.
    While it can be challenging to teach autonomous systems this kind of dynamic, lifelong learning, such capabilities would allow scientists to scale up machine learning algorithms at a faster rate and to easily adapt them to evolving environments and unexpected situations. Essentially, the goal is for these systems to one day mimic the learning capabilities of humans.

  • Unveiling the quantum dance: Experiments reveal nexus of vibrational and electronic dynamics

    Nearly a century ago, physicists Max Born and J. Robert Oppenheimer developed an assumption regarding how quantum mechanics plays out in molecules, which are composed of intricate systems of nuclei and electrons. The Born-Oppenheimer approximation assumes that the motions of nuclei and electrons in a molecule are independent of each other and can be treated separately.
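    In standard textbook notation (not taken from the papers themselves), the approximation factorizes the molecular wavefunction into an electronic part, solved with the nuclei clamped at positions R, and a nuclear part that moves on the resulting energy surface:

```latex
% Born-Oppenheimer factorization: electrons are solved at fixed nuclear
% positions R; the nuclei then move on the energy surface E_el(R).
\Psi(\mathbf{r}, \mathbf{R}) \approx \psi_{\mathrm{el}}(\mathbf{r}; \mathbf{R})\,\chi_{\mathrm{nuc}}(\mathbf{R}),
\qquad
\hat{H}_{\mathrm{el}}\,\psi_{\mathrm{el}}(\mathbf{r}; \mathbf{R}) = E_{\mathrm{el}}(\mathbf{R})\,\psi_{\mathrm{el}}(\mathbf{r}; \mathbf{R})
```

    The breakdown reported here occurs when nuclear and electronic dynamics couple so strongly that this factorization no longer holds.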
    This model works the vast majority of the time, but scientists are testing its limits. Recently, a team of scientists demonstrated the breakdown of this assumption on very fast time scales, revealing a close relationship between the dynamics of nuclei and electrons. The discovery could influence the design of molecules useful for solar energy conversion, energy production, quantum information science and more.
    “Understanding the interplay between the spin-vibronic effect and inter-system crossing could potentially lead to new ways to control and exploit the electronic and spin properties of molecules.” — Lin Chen, Argonne Distinguished Fellow and professor of chemistry at Northwestern University
    The team, including scientists from the U.S. Department of Energy’s (DOE) Argonne National Laboratory, Northwestern University, North Carolina State University and University of Washington, recently published their discovery in two related papers in Nature and Angewandte Chemie International Edition.
    “Our work reveals the interplay between the dynamics of electron spin and the vibrational dynamics of the nuclei in molecules on superfast time scales,” said Shahnawaz Rafiq, a research associate at Northwestern University and first author on the Nature paper. “These properties can’t be treated independently — they mix together and affect electronic dynamics in complex ways.”
    A phenomenon called the spin-vibronic effect occurs when changes in the motion of the nuclei within a molecule affect the motion of its electrons. When nuclei vibrate within a molecule — whether due to their intrinsic energy or due to external stimuli, such as light — these vibrations can affect the motion of the molecule’s electrons, which can in turn change the molecule’s spin, a quantum mechanical property related to magnetism.
    In a process called inter-system crossing, an excited molecule or atom changes its electronic state by flipping its electron spin orientation. Inter-system crossing plays an important role in many chemical processes, including those in photovoltaic devices, photocatalysis and even bioluminescent animals. The crossing is only possible under specific conditions, including particular energy differences between the electronic states involved.

  • Allowing robots to explore on their own

    A research group in Carnegie Mellon University’s Robotics Institute is creating the next generation of explorers — robots.
    The Autonomous Exploration Research Team has developed a suite of robotic systems and planners enabling robots to explore more quickly, probe the darkest corners of unknown environments, and create more accurate and detailed maps. The systems allow robots to do all this autonomously, finding their way and creating a map without human intervention.
    “You can set it in any environment, like a department store or a residential building after a disaster, and off it goes,” said Ji Zhang, a systems scientist in the Robotics Institute. “It builds the map in real-time, and while it explores, it figures out where it wants to go next. You can see everything on the map. You don’t even have to step into the space. Just let the robots explore and map the environment.”
    The team has worked on exploration systems for more than three years. They’ve explored and mapped several underground mines, a parking garage, the Cohon University Center, and several other indoor and outdoor locations on the CMU campus. The system’s computers and sensors can be attached to nearly any robotic platform, transforming it into a modern-day explorer. The group uses a modified motorized wheelchair and drones for much of its testing.
    Robots can explore in three modes using the group’s systems. In one mode, a person can control the robot’s movements and direction while autonomous systems keep it from crashing into walls, ceilings or other objects. In another mode, a person can select a point on a map and the robot will navigate to that point. The third mode is pure exploration. The robot sets off on its own, investigates the entire space and creates a map.
    “This is a very flexible system to use in many applications, from delivery to search-and-rescue,” said Howie Choset, a professor in the Robotics Institute.
    The group combined a 3D scanning lidar sensor, forward-looking camera and inertial measurement unit sensors with an exploration algorithm to enable the robot to know where it is, where it has been and where it should go next. The resulting systems are substantially more efficient than previous approaches, creating more complete maps while reducing the algorithm run time by half.
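    The article does not name the planner, but a standard building block of autonomous exploration is frontier detection: finding mapped free cells that border unexplored space, which become candidate goals for the robot's next move. A minimal grid-based illustration (not the CMU team's code):

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def find_frontiers(grid):
    """Return (row, col) cells that are FREE and 4-adjacent to UNKNOWN space.

    These frontier cells are candidate 'where to go next' goals for an
    exploring robot: reaching one extends the known map.
    """
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

# Tiny map: left two columns explored and free, rest unknown, one wall cell.
grid = np.full((4, 4), UNKNOWN)
grid[:, :2] = FREE
grid[0, 1] = OCCUPIED
print(find_frontiers(grid))   # → [(1, 1), (2, 1), (3, 1)]
```

    In pure-exploration mode, the planner repeatedly picks a frontier, navigates to it, updates the map, and recomputes until no frontiers remain.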
    The new systems work in low-light, treacherous conditions where communication is spotty, like caves, tunnels and abandoned structures. A version of the group’s exploration system powered Team Explorer, an entry from CMU and Oregon State University in DARPA’s Subterranean Challenge. Team Explorer placed fourth in the final competition but won the Most Sectors Explored Award for mapping more of the route than any other team.
    “All of our work is open-sourced. We are not holding anything back. We want to strengthen society with the capabilities of building autonomous exploration robots,” said Chao Cao, a Ph.D. student in robotics and the lead operator for Team Explorer. “It’s a fundamental capability. Once you have it, you can do a lot more.”
    Video: https://youtu.be/pNtC3Twx_2w

  • Learning from superheroes and AI: Researchers study how a chatbot can teach kids supportive self-talk

    At first, some parents were wary: An audio chatbot was supposed to teach their kids to speak positively to themselves through lessons about a superhero named Zip. In a world of Siri and Alexa, many people are skeptical that the makers of such technologies are putting children’s welfare first.
    Researchers at the University of Washington created a new web app aimed at helping children develop skills like self-awareness and emotional management. In Self-Talk with Superhero Zip, a chatbot guided pairs of siblings through lessons. The UW team found that, after speaking with the app for a week, most children could explain the concept of supportive self-talk (the things people say to themselves either audibly or mentally) and apply it in their daily lives. And kids who’d engaged in negative self-talk before the study were able to turn that habit positive.
    The UW team published its findings in June at the 2023 Interaction Design and Children conference. The app is still a prototype and is not yet publicly available.
    The UW team saw a few reasons to develop an educational chatbot. Positive self-talk has been shown to offer a range of benefits for kids, from improved sports performance to increased self-esteem and a lower risk of depression. And previous studies have shown children can learn various tasks and abilities from chatbots. Yet little research explores how chatbots can help kids effectively acquire socioemotional skills.
    “There is room to design child-centric experiences with a chatbot that provide fun and educational practice opportunities without invasive data harvesting that compromises children’s privacy,” said senior author Alexis Hiniker, an associate professor in the UW Information School. “Over the last few decades, television programs like ‘Sesame Street,’ ‘Mister Rogers,’ and ‘Daniel Tiger’s Neighborhood’ have shown that it is possible for TV to help kids cultivate socioemotional skills. We asked: Can we make a space where kids can practice these skills in an interactive app? We wanted to create something useful and fun — a ‘Sesame Street’ experience for a smart speaker.”
    The UW researchers began with two prototype ideas with the goal to teach socioemotional skills broadly. After testing, they narrowed the scope, focusing on a superhero named Zip and the aim of teaching supportive self-talk. They decided to test the app on siblings, since research shows that children are more engaged when they use technology with another person.
    Ten pairs of Seattle-area siblings participated in the study. For a week, they opened the app and met an interactive narrator who told them stories about Zip and asked them to reflect on Zip’s encounters with other characters, including a supervillain. During and after the study, kids described applying positive self-talk; several mentioned using it when they were upset or angry.

  • AI-guided brain stimulation aids memory in traumatic brain injury

    Traumatic brain injury (TBI) has disabled 1 to 2% of the population, and one of the most common resulting disabilities is impaired short-term memory. Electrical stimulation has emerged as a viable tool to improve brain function in people with other neurological disorders.
    Now, a new study in the journal Brain Stimulation shows that targeted electrical stimulation in patients with traumatic brain injury led to an average 19% boost in recalling words.
    Led by University of Pennsylvania psychology professor Michael Jacob Kahana, a team of neuroscientists studied TBI patients with implanted electrodes, analyzed neural data as patients studied words, and used a machine learning algorithm to predict momentary memory lapses. Other lead authors included Wesleyan University psychology professor Youssef Ezzyat and Penn research scientist Paul Wanda.
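    The article gives no implementation details, but the closed-loop idea can be sketched as follows: a per-patient classifier scores ongoing neural activity for lapse risk, and stimulation fires only when a lapse is predicted. Everything below (features, weights, threshold) is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for neural features recorded while a patient studies words:
# each row is one word-encoding epoch, each column one (electrode, band) power.
features = rng.normal(size=(6, 4))

# Hypothetical weights of a classifier already fitted to this patient's data;
# in the study, a per-patient model predicts momentary memory lapses.
w, b = np.array([0.8, -0.5, 0.3, 0.1]), -0.2

def p_forget(x):
    """Logistic model: probability that this word will later be forgotten."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

THRESHOLD = 0.5   # hypothetical decision threshold

# Closed-loop rule: stimulate only when a lapse is predicted.
stimulate = [p_forget(x) > THRESHOLD for x in features]
print(stimulate)
```

    Gating stimulation on the brain's momentary state is what distinguishes this closed-loop approach from the open-loop stimulation of the earlier study.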
    “The last decade has seen tremendous advances in the use of brain stimulation as a therapy for several neurological and psychiatric disorders including epilepsy, Parkinson’s disease, and depression,” Kahana says. “Memory loss, however, represents a huge burden on society. We lack effective therapies for the 27 million Americans suffering.”
    Study co-author Ramon Diaz-Arrastia, director of the Traumatic Brain Injury Clinical Research Center at Penn Medicine, says the technology Kahana and his team developed delivers “the right stimulation at the right time, informed by the wiring of the individual’s brain and that individual’s successful memory retrieval.”
    He says the top causes of TBI are motor vehicle accidents, which are decreasing, and falls, which are rising because of the aging population. The next most common causes are assaults and head injuries from participation in contact sports.
    This new study builds off the previous work of Ezzyat, Kahana, and their collaborators. Publishing their findings in 2017, they showed that stimulation delivered when memory is expected to fail can improve memory, whereas stimulation administered during periods of good functioning worsens memory. The stimulation in that study was open-loop, meaning it was applied by a computer without regard to the state of the brain.

  • A faster way to teach a robot

    Imagine purchasing a robot to perform household tasks. This robot was built and trained in a factory on a certain set of tasks and has never seen the items in your home. When you ask it to pick up a mug from your kitchen table, it might not recognize your mug (perhaps because this mug is painted with an unusual image, say, of MIT’s mascot, Tim the Beaver). So, the robot fails.
    “Right now, the way we train these robots, when they fail, we don’t really know why. So you would just throw up your hands and say, ‘OK, I guess we have to start over.’ A critical component that is missing from this system is enabling the robot to demonstrate why it is failing so the user can give it feedback,” says Andi Peng, an electrical engineering and computer science (EECS) graduate student at MIT.
    Peng and her collaborators at MIT, New York University, and the University of California at Berkeley created a framework that enables humans to quickly teach a robot what they want it to do, with a minimal amount of effort.
    When a robot fails, the system uses an algorithm to generate counterfactual explanations that describe what needed to change for the robot to succeed. For instance, maybe the robot would have been able to pick up the mug if the mug were a certain color. It shows these counterfactuals to the human and asks for feedback on why the robot failed. Then the system utilizes this feedback and the counterfactual explanations to generate new data it uses to fine-tune the robot.
    Fine-tuning involves tweaking a machine-learning model that has already been trained to perform one task, so it can perform a second, similar task.
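    Put together, the loop described above looks roughly like this. All helper names are hypothetical; the article does not give the framework's actual interfaces:

```python
# Sketch of the counterfactual loop described above. All helper names are
# hypothetical stand-ins for the MIT framework's components.

def generate_counterfactual(obj, try_task, variants):
    """Return the smallest change to `obj` that makes the task succeed."""
    for changed in variants(obj):          # e.g. recolour, resize, retexture
        if try_task(changed):
            return changed                 # "it would work if the mug were X"
    return None

def attempt_grasp(obj):
    # Toy grasp model: it only recognizes plain mugs.
    return obj["pattern"] == "plain"

def variants(obj):
    # Candidate edits, ordered from smallest to largest change.
    return [{**obj, "pattern": p} for p in ("striped", "plain")]

mug = {"pattern": "beaver-print"}
cf = generate_counterfactual(mug, attempt_grasp, variants)
print(cf)   # → {'pattern': 'plain'}
# The human confirms why the robot failed; the confirmed counterfactuals
# are then used to generate new data for fine-tuning the robot's model.
```

    The counterfactual answers "what would have had to change for this to work", which is exactly the feedback a non-expert user can confirm or correct.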
    The researchers tested this technique in simulations and found that it could teach a robot more efficiently than other methods. The robots trained with this framework performed better, while the training process consumed less of a human’s time.
    This framework could help robots learn faster in new environments without requiring a user to have technical knowledge. In the long run, this could be a step toward enabling general-purpose robots to efficiently perform daily tasks for the elderly or individuals with disabilities in a variety of settings.

  • Efficient discovery of improved energy materials by a new AI-guided workflow

    Scientists at the NOMAD Laboratory at the Fritz Haber Institute of the Max Planck Society recently proposed a workflow that can dramatically accelerate the search for novel materials with improved properties. They demonstrated the power of the approach by identifying more than 50 strongly thermally insulating materials. These could help alleviate the ongoing energy crisis by enabling more efficient thermoelectric elements, i.e., devices that convert otherwise wasted heat into useful electrical voltage.
    Discovering new and reliable thermoelectric materials is paramount for making use of the more than 40% of energy given off globally as waste heat, and for mitigating the growing challenges of climate change. One way to increase a material's thermoelectric efficiency is to reduce its thermal conductivity, κ, thereby maintaining the temperature gradient needed to generate electricity. However, the cost of studying these properties has limited computational and experimental investigations of κ to only a minute subset of all possible materials. The NOMAD Laboratory team recently set out to reduce these costs by creating an AI-guided workflow that hierarchically screens materials to efficiently find new and better thermal insulators.
    The work, recently published in npj Computational Materials, proposes a new way of using artificial intelligence (AI) to guide the high-throughput search for new materials. Instead of using physical or chemical intuition to screen out materials based on general, known or suspected trends, the new procedure uses advanced AI methods to learn the conditions that lead to the desired outcome. This work has the potential to put the search for new energy materials on a quantitative footing and to increase its efficiency.
    The first step in designing these workflows is to use advanced statistical and AI methods to approximate the target property of interest, in this case κ. To this end, the sure-independence screening and sparsifying operator (SISSO) approach is used. SISSO is a machine learning method that reveals the fundamental dependencies between different materials properties from a set of billions of possible expressions. Compared to other “black-box” AI models, this approach is similarly accurate, but additionally yields analytic relationships between different material properties. This makes it possible to apply modern feature-importance metrics to shed light on which material properties matter most. In the case of κ, these are the molar volume, Vm; the high-temperature limit of the Debye temperature, θD,∞; and the anharmonicity metric, σA.
    Furthermore, the statistical analysis makes it possible to distill rules of thumb for the individual features, so that a material's potential to be a thermal insulator can be estimated a priori. Working with the three most important primary features therefore allowed the team to create AI-guided computational workflows for discovering new thermal insulators. These workflows use state-of-the-art electronic-structure programs to calculate each of the selected features. At each step, materials that are unlikely to be good insulators based on their values of Vm, θD,∞, and σA are screened out. In this way, the number of calculations needed to find thermally insulating materials can be reduced by more than two orders of magnitude. In this work, the approach was demonstrated by identifying 96 thermal insulators (κ < 10 W/(m·K)) in an initial set of 732 materials. Its reliability was further verified by calculating κ with the highest possible accuracy for four of these predictions. Besides facilitating the active search for new thermoelectric materials, the formalisms proposed by the NOMAD team can also be applied to other urgent materials-science problems.
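    The hierarchical screen works like a funnel: compute the cheapest descriptor first, discard unpromising candidates, and run the next, more expensive calculation only on the survivors. A schematic sketch with made-up thresholds and values (the paper's actual cutoffs and feature calculations are far more involved):

```python
# Hypothetical funnel: each stage computes one descriptor and drops candidates
# that are unlikely to be good thermal insulators, so the expensive later
# stages run on ever fewer materials.
def screen(materials, stages):
    survivors = list(materials)
    for compute_feature, keep in stages:     # ordered cheap -> expensive
        survivors = [m for m in survivors if keep(compute_feature(m))]
    return survivors

materials = [
    {"name": "A", "V_m": 25.0, "theta_D": 150.0, "sigma_A": 0.6},
    {"name": "B", "V_m": 8.0,  "theta_D": 600.0, "sigma_A": 0.1},
    {"name": "C", "V_m": 30.0, "theta_D": 120.0, "sigma_A": 0.8},
]

stages = [
    (lambda m: m["V_m"],     lambda v: v > 20.0),   # large molar volume
    (lambda m: m["theta_D"], lambda t: t < 300.0),  # soft lattice
    (lambda m: m["sigma_A"], lambda s: s > 0.4),    # strongly anharmonic
]

print([m["name"] for m in screen(materials, stages)])   # → ['A', 'C']
```

    Only the survivors of all three cheap filters would then get a full, expensive κ calculation.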