More stories

  • Developing next-generation superconducting cables

    Researchers at Florida State University’s Center for Advanced Power Systems (CAPS), in collaboration with Colorado-based Advanced Conductor Technologies, have demonstrated a new, ready-to-use superconducting cable system — an advance in superconductor technology that supports the development of all-electric ships and airplanes.
    In a paper published in Superconductor Science and Technology, the researchers demonstrated a system that uses helium gas for crucial cooling. Superconducting cables can move electrical current with no resistance, but they need very cold temperatures to function.
    “We want to make these cables smaller, with lower weight and lower volume,” said paper co-author Sastry Pamidi, a FAMU-FSU College of Engineering professor and CAPS associate director. “These are very efficient power cables, and this research is focused on improving efficiency and practicality needed to achieve the promise of next-generation superconductor technology.”
    Previous work showed that the body of superconducting cables could be cooled with helium gas, but the cable ends needed another medium for cooling, such as liquid nitrogen. In this paper, researchers overcame that obstacle and were able to cool an entire cable system with helium gas.
    The work gives engineers more design flexibility because helium remains a gas in a wider range of temperatures than other mediums. Liquid nitrogen, for example, isn’t a suitable cooling medium for some applications, and this research moves superconducting technology closer to practical solutions for those scenarios.
    The paper is the latest outcome of the partnership between researchers at CAPS and Advanced Conductor Technologies (ACT). Previous teamwork has led to other publications and to the development of Conductor on Round Core (CORC®) cables that were the subject of this research.
    “Removing the need for liquid nitrogen to pre-cool the current leads of the superconducting cable and instead using the same helium gas that cools the cable allowed us to make a highly compact superconducting power cable that can be operated in a continuous mode,” said Danko van der Laan, ACT’s founder. “It therefore has become an elegant system that’s small and lightweight and it allows much easier integration into electric ships and aircraft.”
    The ongoing collaboration has been funded by Small Business Innovation Research (SBIR) grants from the U.S. Navy. The grants encourage businesses to partner with universities to conduct high-level research.
    The collaboration provides benefits for all involved. Companies receive help creating new products. Students see how their classwork applies to real-life engineering problems. Taxpayers get the technical and economic benefits that come from the innovations. And faculty members receive a share of a company’s research funding and the opportunity to tackle exciting work.
    “We like challenges,” Pamidi said. “These grants come with challenges that have a clear target. The company says ‘This is what we want to develop. Can you help us with this?’ It is motivating, and it also provides students with connections. The small businesses we work with not only provide money, but they also see the skills our students are gaining.”
    CAPS researcher Chul Kim and ACT researcher Jeremy Weiss were co-authors on this work. Along with the U.S. Navy grant, this research was supported by the U.S. Department of Energy.
    Story Source:
    Materials provided by Florida State University. Original written by Bill Wellock. Note: Content may be edited for style and length.

  • Scientists use quantum computers to simulate quantum materials

    Scientists achieve important milestone in making quantum computing more effective.
    Quantum computers promise to revolutionize science by enabling computations that were once thought impossible. But for quantum computers to become an everyday reality, there is a long way to go and many challenging tests to pass.
    One of the tests involves using quantum computers to simulate the properties of materials for next-generation quantum technologies.
    In a new study from the U.S. Department of Energy’s (DOE) Argonne National Laboratory and the University of Chicago, researchers performed quantum simulations of spin defects, which are specific impurities in materials that could offer a promising basis for new quantum technologies. The study improved the accuracy of calculations on quantum computers by correcting for noise introduced by quantum hardware.
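    The noise correction at the heart of the study can be illustrated with zero-noise extrapolation, a widely used error-mitigation technique (named here only for illustration; the paper’s exact scheme may differ). In this minimal sketch, an expectation value is measured at artificially amplified noise levels and extrapolated back to zero noise; the ideal value, noise model, and numbers are all hypothetical.

```python
# Illustrative sketch of zero-noise extrapolation (ZNE), one common way
# to correct for quantum-hardware noise. This is a generic example, not
# necessarily the mitigation scheme used in the Argonne/UChicago study.
import numpy as np

def noisy_expectation(noise_scale: float) -> float:
    """Stand-in for running a circuit at an amplified noise level.
    Fakes a measurement whose error grows linearly with the noise."""
    true_value = 0.75            # hypothetical ideal expectation value
    bias = -0.20 * noise_scale   # systematic error from hardware noise
    shot_noise = np.random.normal(0.0, 0.01)
    return true_value + bias + shot_noise

# Measure at several artificially amplified noise levels (1x, 2x, 3x).
scales = np.array([1.0, 2.0, 3.0])
values = np.array([noisy_expectation(s) for s in scales])

# Fit a line to the measurements and extrapolate back to zero noise.
slope, intercept = np.polyfit(scales, values, deg=1)
print(f"raw value at 1x noise:    {values[0]:.3f}")
print(f"zero-noise extrapolation: {intercept:.3f}")  # close to 0.75
```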
    “We want to learn how to use new computational technologies that are up-and-coming. Developing robust strategies in the early days of quantum computing is an important first step in being able to understand how to use these machines efficiently in the future.” — Giulia Galli, Argonne and University of Chicago
    The research was conducted as part of the Midwest Integrated Center for Computational Materials (MICCoM), a DOE computational materials science program headquartered at Argonne, as well as Q-NEXT, a DOE National Quantum Information Science Research Center.

  • Significant energy savings using neuromorphic hardware

    For the first time, TU Graz’s Institute of Theoretical Computer Science and Intel Labs have demonstrated experimentally that a large neural network can process sequences such as sentences while consuming four to sixteen times less energy on neuromorphic hardware than on conventional hardware. The research is based on Intel Labs’ Loihi neuromorphic research chip, which draws on insights from neuroscience to build chips that function similarly to neurons in the biological brain.
    The research was funded by the Human Brain Project (HBP), one of the largest research projects in the world, with more than 500 scientists and engineers across Europe studying the human brain. The results are published in the paper “Memory for AI Applications in Spike-based Neuromorphic Hardware” (DOI: 10.1038/s42256-022-00480-w) in Nature Machine Intelligence.
    Human brain as a role model
    Smart machines and intelligent computers that can autonomously recognize and infer objects and relationships between different objects are the subjects of worldwide artificial intelligence (AI) research. Energy consumption is a major obstacle on the path to a broader application of such AI methods. It is hoped that neuromorphic technology will provide a push in the right direction. Neuromorphic technology is modelled after the human brain, which is highly efficient in using energy. To process information, its hundred billion neurons consume only about 20 watts, not much more energy than an average energy-saving light bulb.
    In the research, the group focused on algorithms that work with temporal processes. For example, the system had to answer questions about a previously told story and grasp the relationships between objects or people from the context. The hardware tested consisted of 32 Loihi chips.
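    The building block behind such hardware is the spiking neuron: unlike a unit in a conventional network, it communicates only when its membrane potential crosses a threshold, which is where much of the energy saving originates. Below is a minimal leaky integrate-and-fire sketch of that idea; the parameters are illustrative stand-ins, not Loihi’s actual values.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the kind of event-driven
# unit that neuromorphic chips such as Loihi implement in silicon.
# All parameters here are illustrative stand-ins.
import numpy as np

def simulate_lif(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Integrate the input current over time; emit a spike whenever the
    membrane potential crosses threshold, then reset. Returns spike times."""
    v = 0.0
    spikes = []
    for step, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)  # leaky integration of the input
        if v >= v_thresh:            # threshold crossing -> spike event
            spikes.append(step * dt)
            v = v_reset
    return spikes

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 0.12, size=200)  # noisy input drive
print("spike times (ms):", simulate_lif(current))
```

    Because the output is a sparse list of spike events rather than a dense activation vector, downstream units sit idle most of the time, which is the intuition behind the reported energy savings.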
    Loihi research chip: up to sixteen times more energy-efficient than non-neuromorphic hardware
    “Our system is four to sixteen times more energy-efficient than other AI models on conventional hardware,” says Philipp Plank, a doctoral student at TU Graz’s Institute of Theoretical Computer Science. Plank expects further efficiency gains as these models are migrated to the next generation of Loihi hardware, which significantly improves the performance of chip-to-chip communication.

  • Researchers develop algorithm to divvy up tasks for human-robot teams

    As robots increasingly join people on the factory floor, in warehouses and elsewhere on the job, dividing up who will do which tasks grows in complexity and importance. People are better suited for some tasks, robots for others. And in some cases, it is advantageous to spend time teaching a robot to do a task now and reap the benefits later.
    Researchers at Carnegie Mellon University’s Robotics Institute (RI) have developed an algorithmic planner that helps delegate tasks to humans and robots. The planner, “Act, Delegate or Learn” (ADL), considers a list of tasks and decides how best to assign them. The researchers asked three questions: When should a robot act to complete a task? When should a task be delegated to a human? And when should a robot learn a new task?
    “There are costs associated with the decisions made, such as the time it takes a human to complete a task or teach a robot to complete a task and the cost of a robot failing at a task,” said Shivam Vats, the lead researcher and a Ph.D. student in the RI. “Given all those costs, our system will give you the optimal division of labor.”
    The team’s work could be useful in manufacturing and assembly plants, for sorting packages, or in any environment where humans and robots collaborate to complete several tasks. The researchers tested the planner in scenarios where humans and robots had to insert blocks into a peg board and stack parts of different shapes and sizes made of Lego bricks.
    Using algorithms and software to decide how to delegate and divide labor is not new, even when robots are part of the team. However, this work is among the first to include robot learning in its reasoning.
    “Robots aren’t static anymore,” Vats said. “They can be improved and they can be taught.”
    Often in manufacturing, a person will manually manipulate a robotic arm to teach the robot how to complete a task. Teaching a robot takes time and, therefore, has a high upfront cost. But it can be beneficial in the long run if the robot can learn a new skill. Part of the complexity is deciding when it is best to teach a robot versus delegating the task to a human. This requires the robot to predict what other tasks it can complete after learning a new task.
    Given this information, the planner converts the problem into a mixed integer program — an optimization program commonly used in scheduling, production planning or designing communication networks — that can be solved efficiently by off-the-shelf software. The planner performed better than traditional models in all instances and decreased the cost of completing the tasks by 10% to 15%.
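    As a rough illustration of that formulation, the toy program below poses an act-delegate-or-learn choice as a mixed integer program using the PuLP solver. The tasks, costs, and the rule that a learned skill is reusable across tasks of the same type are hypothetical stand-ins, not the CMU planner’s actual model.

```python
# Toy "act, delegate or learn" allocation posed as a mixed integer
# program with PuLP. Tasks, costs, and skill types are hypothetical.
import pulp

tasks = {            # task -> (skill type, human cost, robot cost)
    "t1": ("insert_peg", 5.0, 2.0),
    "t2": ("insert_peg", 5.0, 2.0),
    "t3": ("stack_lego", 4.0, 3.0),
}
teach_cost = {"insert_peg": 5.0, "stack_lego": 9.0}  # one-time teaching cost

prob = pulp.LpProblem("act_delegate_or_learn", pulp.LpMinimize)
human = {t: pulp.LpVariable(f"human_{t}", cat="Binary") for t in tasks}
robot = {t: pulp.LpVariable(f"robot_{t}", cat="Binary") for t in tasks}
learn = {k: pulp.LpVariable(f"learn_{k}", cat="Binary") for k in teach_cost}

# Every task is done exactly once, by the human or by the robot.
for t in tasks:
    prob += human[t] + robot[t] == 1
# The robot may act on a task only if that skill type has been taught.
for t, (kind, _, _) in tasks.items():
    prob += robot[t] <= learn[kind]

# Minimize total cost: human labor + robot execution + one-time teaching.
prob += (
    pulp.lpSum(human[t] * tasks[t][1] + robot[t] * tasks[t][2] for t in tasks)
    + pulp.lpSum(learn[k] * teach_cost[k] for k in teach_cost)
)
prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in sorted(tasks):
    print(t, "-> robot" if robot[t].value() == 1 else "-> human")
```

    In this toy instance, teaching the peg-insertion skill pays for itself across the two peg tasks, while the Lego task remains cheaper to delegate to the human.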
    Vats presented the work, “Synergistic Scheduling of Learning and Allocation of Tasks in Human-Robot Teams,” at the International Conference on Robotics and Automation in Philadelphia, where it was nominated for the outstanding interaction paper award. The research team included Oliver Kroemer, an assistant professor in the RI, and Maxim Likhachev, an associate professor in the RI.
    The research was funded by the Office of Naval Research and the Army Research Laboratory.
    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Aaron Aupperlee. Note: Content may be edited for style and length.

  • Designers find better solutions with computer assistance, but sacrifice creative touch

    From building software to designing cars, engineers grapple with complex design situations every day. ‘Optimizing a technical system, whether it’s making it more usable or energy-efficient, is a very hard problem!’ says Antti Oulasvirta, professor of electrical engineering at Aalto University and the Finnish Center for Artificial Intelligence. Designers often rely on a mix of intuition, experience and trial and error to guide them. Besides being inefficient, this process can lead to ‘design fixation’, homing in on familiar solutions while new avenues go unexplored. A ‘manual’ approach also won’t scale to larger design problems and relies a lot on individual skill.
    Oulasvirta and colleagues tested an alternative, computer-assisted method that uses an algorithm to search through a design space, the set of possible solutions given multi-dimensional inputs and constraints for a particular design issue. They hypothesized that a guided approach could yield better designs by scanning a broader swath of solutions and balancing out human inexperience and design fixation.
    Along with collaborators from the University of Cambridge, the researchers set out to compare the traditional and assisted approaches to design, using virtual reality as their laboratory. They employed Bayesian optimization, a machine learning technique that both explores the design space and steers towards promising solutions. ‘We put a Bayesian optimizer in the loop with a human, who would try a combination of parameters. The optimizer then suggests some other values, and they proceed in a feedback loop. This is great for designing virtual reality interaction techniques,’ explains Oulasvirta. ‘What we didn’t know until now is how the user experiences this kind of optimization-driven design approach.’
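    The ask-and-tell loop Oulasvirta describes can be sketched with an off-the-shelf Bayesian optimizer such as scikit-optimize (an assumption for illustration; the study’s implementation is not specified here). The designer’s judgment is simulated by a scoring function, and the two hand-mapping parameters and their bounds are hypothetical.

```python
# Human-in-the-loop Bayesian optimization sketched with scikit-optimize's
# ask/tell interface. In the study, the score came from a designer trying
# each parameter setting in VR; here a function stands in for the human.
from skopt import Optimizer

def simulated_human_rating(params):
    """Hypothetical stand-in for measured task performance (lower is
    better) at a given gain/offset hand-mapping setting."""
    gain, offset = params
    return (gain - 1.2) ** 2 + (offset - 0.3) ** 2

# Design space: two hand-mapping parameters with plausible bounds.
opt = Optimizer(dimensions=[(0.5, 2.0), (0.0, 1.0)], base_estimator="GP")

for trial in range(15):
    candidate = opt.ask()                      # optimizer proposes a design
    score = simulated_human_rating(candidate)  # "human" tries it, scores it
    opt.tell(candidate, score)                 # feedback refines the model

best = min(range(len(opt.yi)), key=opt.yi.__getitem__)
print(f"best score {opt.yi[best]:.4f} at parameters {opt.Xi[best]}")
```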
    To find out, Oulasvirta’s team asked 40 novice designers to take part in their virtual reality experiment. The subjects had to find the best settings for mapping the location of their real hand holding a vibrating controller to the virtual hand seen in the headset. Half of these designers were free to follow their own instincts in the process, and the other half were given optimizer-selected designs to evaluate. Both groups had to choose three final designs that would best capture accuracy and speed in the 3D virtual reality interaction task. Finally, subjects reported how confident and satisfied they were with the experience and how in control they felt over the process and the final designs.
    The results were clear-cut: ‘Objectively, the optimizer helped designers find better solutions, but designers did not like being hand-held and commanded. It destroyed their creativity and sense of agency,’ reports Oulasvirta. The optimizer-led process allowed designers to explore more of the design space compared with the manual approach, leading to more diverse design solutions. The designers who worked with the optimizer also reported less mental demand and effort in the experiment. By contrast, this group also scored lower on expressiveness, agency and ownership, compared with the designers who did the experiment without a computer assistant.
    ‘There is definitely a trade-off,’ says Oulasvirta. ‘With the optimizer, designers came up with better designs and covered a more extensive set of solutions with less effort. On the other hand, their creativity and sense of ownership of the outcomes was reduced.’ These results are instructive for the development of AI that assists humans in decision-making. Oulasvirta suggests that people need to be engaged in tasks such as assisted design so they retain a sense of control, don’t get bored, and receive more insight into how a Bayesian optimizer or other AI is actually working. ‘We’ve seen that inexperienced designers especially can benefit from an AI boost when engaging in our design experiment,’ says Oulasvirta. ‘Our goal is that optimization becomes truly interactive without compromising human agency.’
    This paper was selected for an honourable mention at the ACM CHI Conference on Human Factors in Computing Systems in May 2022.
    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  • Charting a safe course through a highly uncertain environment

    An autonomous spacecraft exploring the far-flung regions of the universe descends through the atmosphere of a remote exoplanet. The vehicle, and the researchers who programmed it, don’t know much about this environment.
    With so much uncertainty, how can the spacecraft plot a trajectory that will keep it from being squashed by some randomly moving obstacle or blown off course by sudden, gale-force winds?
    MIT researchers have developed a technique that could help this spacecraft land safely. Their approach can enable an autonomous vehicle to plot a provably safe trajectory in highly uncertain situations where there are multiple uncertainties regarding environmental conditions and objects the vehicle could collide with.
    The technique could help a vehicle find a safe course around obstacles that move in random ways and change their shape over time. It plots a safe trajectory to a targeted region even when the vehicle’s starting point is not precisely known and when it is unclear exactly how the vehicle will move due to environmental disturbances like wind, ocean currents, or rough terrain.
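    One way to make “a provably safe trajectory under uncertainty” concrete is a chance constraint: require that the probability of collision stay below a small threshold. The sketch below estimates that probability by Monte Carlo sampling over an uncertain start, wind-like disturbances, and a randomly moving obstacle. Note that the MIT technique derives guarantees rather than sampling; this hypothetical example only illustrates the constraint being checked.

```python
# Monte Carlo estimate of a chance constraint: the probability that a
# candidate trajectory collides, given an uncertain starting point,
# wind-like disturbances, and a randomly moving obstacle. All dynamics
# and numbers are hypothetical; the MIT method derives provable bounds
# instead of relying on sampling.
import numpy as np

rng = np.random.default_rng(42)

def rollout_collides(n_steps=50, dt=0.1):
    pos = rng.normal([0.0, 0.0], 0.1)              # uncertain starting point
    obstacle = np.array([2.0, 0.5])
    for _ in range(n_steps):
        pos += dt * np.array([1.0, 0.2])           # nominal velocity command
        pos += rng.normal(0.0, 0.02, size=2)       # wind-like disturbance
        obstacle += rng.normal(0.0, 0.05, size=2)  # obstacle drifts randomly
        if np.linalg.norm(pos - obstacle) < 0.3:   # collision radius
            return True
    return False

samples = 10_000
p_collide = sum(rollout_collides() for _ in range(samples)) / samples
print(f"estimated collision probability: {p_collide:.3f}")
print("chance constraint (<= 1%) satisfied:", p_collide <= 0.01)
```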
    This is the first technique to address the problem of trajectory planning with many simultaneous uncertainties and complex safety constraints, says co-lead author Weiqiao Han, a graduate student in the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory (CSAIL).
    “Future robotic space missions need risk-aware autonomy to explore remote and extreme worlds for which only highly uncertain prior knowledge exists. In order to achieve this, trajectory-planning algorithms need to reason about uncertainties and deal with complex uncertain models and safety constraints,” adds co-lead author Ashkan Jasour, a former CSAIL research scientist who now works on robotics systems at the NASA Jet Propulsion Laboratory.

  • Twisted soft robots navigate mazes without human or computer guidance

    Researchers from North Carolina State University and the University of Pennsylvania have developed soft robots that are capable of navigating complex environments, such as mazes, without input from humans or computer software.
    “These soft robots demonstrate a concept called ‘physical intelligence,’ meaning that structural design and smart materials are what allow the soft robot to navigate various situations, as opposed to computational intelligence,” says Jie Yin, corresponding author of a paper on the work and an associate professor of mechanical and aerospace engineering at NC State.
    The soft robots are made of liquid crystal elastomers in the shape of a twisted ribbon, resembling translucent rotini. When you place the ribbon on a surface that is at least 55 degrees Celsius (131 degrees Fahrenheit), which is hotter than the ambient air, the portion of the ribbon touching the surface contracts, while the portion of the ribbon exposed to the air does not. This induces a rolling motion in the ribbon. And the warmer the surface, the faster it rolls. Video of the ribbon-like soft robots can be found at https://youtu.be/7q1f_JO5i60.
    “This has been done before with smooth-sided rods, but that shape has a drawback — when it encounters an object, it simply spins in place,” says Yin. “The soft robot we’ve made in a twisted ribbon shape is capable of negotiating these obstacles with no human or computer intervention whatsoever.”
    The ribbon robot does this in two ways. First, if one end of the ribbon encounters an object, the ribbon rotates slightly to get around the obstacle. Second, if the central part of the robot encounters an object, it “snaps.” The snap is a rapid release of stored deformation energy that causes the ribbon to jump slightly and reorient itself before landing. The ribbon may need to snap more than once before finding an orientation that allows it to negotiate the obstacle, but ultimately it always finds a clear path forward.
    “In this sense, it’s much like the robotic vacuums that many people use in their homes,” Yin says. “Except the soft robot we’ve created draws energy from its environment and operates without any computer programming.”
    “The two actions, rotating and snapping, that allow the robot to negotiate obstacles operate on a gradient,” says Yao Zhao, first author of the paper and a postdoctoral researcher at NC State. “The most powerful snap occurs if an object touches the center of the ribbon. But the ribbon will still snap if an object touches the ribbon away from the center, it’s just less powerful. And the further you are from the center, the less pronounced the snap, until you reach the last fifth of the ribbon’s length, which does not produce a snap at all.”
    The researchers conducted multiple experiments demonstrating that the ribbon-like soft robot is capable of navigating a variety of maze-like environments. The researchers also demonstrated that the soft robots would work well in desert environments, showing they were capable of climbing and descending slopes of loose sand.
    “This is interesting, and fun to look at, but more importantly it provides new insights into how we can design soft robots that are capable of harvesting heat energy from natural environments and autonomously negotiating complex, unstructured settings such as roads and harsh deserts,” Yin says.
    The paper, “Twisting for Soft Intelligent Autonomous Robot in Unstructured Environments,” will be published the week of May 23 in the Proceedings of the National Academy of Sciences. The paper was co-authored by NC State Ph.D. students Yinding Chi, Yaoye Hong and Yanbin Li; as well as Shu Yang, the Joseph Bordogna Professor of Materials Science and Engineering at the University of Pennsylvania.
    The work was done with support from the National Science Foundation, under grants CMMI-2010717, CMMI-2005374 and DMR-1410253.
    Story Source:
    Materials provided by North Carolina State University. Original written by Matt Shipman. Note: Content may be edited for style and length.

  • Using Artificial Intelligence to Predict Life-Threatening Bacterial Disease in Dogs

    Leptospirosis, a disease that dogs can get from drinking water contaminated with Leptospira bacteria, can cause kidney failure, liver disease and severe bleeding into the lungs. Early detection of the disease is crucial and may mean the difference between life and death.
    Veterinarians and researchers at the University of California, Davis, School of Veterinary Medicine have discovered a technique to predict leptospirosis in dogs through the use of artificial intelligence. After many months of testing various models, the team developed one that outperformed traditional testing methods and provided accurate early detection of the disease. The groundbreaking discovery was published in the Journal of Veterinary Diagnostic Investigation.
    “Traditional testing for Leptospira lacks sensitivity early in the disease process,” said lead author Krystle Reagan, a board-certified internal medicine specialist and assistant professor focusing on infectious diseases. “Detection also can take more than two weeks because of the need to demonstrate a rise in the level of antibodies in a blood sample. Our AI model eliminates those two roadblocks to a swift and accurate diagnosis.”
    The research involved historical data of patients at the UC Davis Veterinary Medical Teaching Hospital that had been tested for leptospirosis. Routinely collected blood work from these 413 dogs was used to train an AI prediction model. Over the next year, the hospital treated an additional 53 dogs with suspected leptospirosis. The model correctly identified all nine dogs that were positive for leptospirosis (100% sensitivity). The model also correctly identified approximately 90% of the 44 dogs that were ultimately leptospirosis negative.
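    The story does not specify the team’s model, but the evaluation it describes (training on routine blood-work features, then reporting sensitivity and specificity on held-out cases) can be sketched generically with scikit-learn. The data below is synthetic, and the feature count is an assumption.

```python
# Generic sketch of the evaluation described: train a classifier on
# routine blood-work features and report sensitivity and specificity on
# held-out cases. The data is synthetic; the UC Davis team's actual
# model and features are not specified in this story.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_dogs, n_features = 413, 12                 # bloodwork panel per dog
X = rng.normal(size=(n_dogs, n_features))
y = rng.random(n_dogs) < 0.2                 # ~20% positives (synthetic)
X[y] += 0.8                                  # give positives some signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
print(f"sensitivity (true-positive rate): {tp / (tp + fn):.2f}")
print(f"specificity (true-negative rate): {tn / (tn + fp):.2f}")
```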
    The goal is for the model to become an online resource where veterinarians can enter patient data and receive a timely prediction.
    “AI-based, clinical decision making is going to be the future for many aspects of veterinary medicine,” said School of Veterinary Medicine Dean Mark Stetter. “I am thrilled to see UC Davis veterinarians and scientists leading that charge. We are committed to putting resources behind AI ventures and look forward to partnering with researchers, philanthropists, and industry to advance this science.”
    Detection model may help people
    Leptospirosis is a life-threatening zoonotic disease, meaning it can transfer from animals to humans. As the disease is also difficult to diagnose in people, Reagan hopes the technology behind this groundbreaking detection model has translational ability into human medicine.
    “My hope is this technology will be able to recognize cases of leptospirosis in near real time, giving clinicians and owners important information about the disease process and prognosis,” said Reagan. “As we move forward, we hope to apply AI methods to improve our ability to quickly diagnose other types of infections.”
    Reagan is a founding member of the school’s Artificial Intelligence in Veterinary Medicine Interest Group comprising veterinarians promoting the use of AI in the profession. This research was done in collaboration with members of UC Davis’ Center for Data Science and Artificial Intelligence Research, led by professor of mathematics Thomas Strohmer. He and his students were involved in the algorithm building.
    Reagan’s group is actively pursuing AI for prediction of outcome for other types of infections, including a prediction model for antimicrobial resistant infections, which is a growing problem in veterinary and human medicine. Previously, the group developed an AI algorithm to predict Addison’s disease with an accuracy rate greater than 99%.
    Funding support comes from the National Science Foundation.
    Story Source:
    Materials provided by the University of California, Davis. Original written by Rob Warren. Note: Content may be edited for style and length.