More stories

  •

    Training robots how to learn, make decisions on the fly

    Mars rovers have teams of human experts on Earth telling them what to do. But robots on lander missions to moons orbiting Saturn or Jupiter are too far away to receive timely commands from Earth. Researchers in the Departments of Aerospace Engineering and Computer Science at the University of Illinois Urbana-Champaign developed a novel learning-based method so robots on extraterrestrial bodies can make decisions on their own about where and how to scoop up terrain samples.
    “Rather than simulating how to scoop every possible type of rock or granular material, we created a new way for autonomous landers to learn how to learn to scoop quickly on a new material it encounters,” said Pranay Thangeda, a Ph.D. student in the Department of Aerospace Engineering.
    “It also learns how to adapt to changing landscapes and their properties, such as the topology and the composition of the materials,” he said.
    Using this method, Thangeda said a robot can learn how to scoop a new material with very few attempts. “If it makes several bad attempts, it learns it shouldn’t scoop in that area and it will try somewhere else.”
    The proposed deep Gaussian process model is trained on an offline database via deep meta-learning with controlled deployment gaps: the training set is repeatedly split into mean-training and kernel-training subsets, and the kernel parameters are learned to minimize the residuals from the mean models. In deployment, the decision-maker uses the trained model and adapts it to the data acquired online.
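    In very rough outline, the residual idea can be sketched in a few lines of code. The snippet below is a hypothetical, heavily simplified stand-in — a fixed mean model plus a Gaussian-process correction on its residuals, conditioned on data gathered online — not the team's deep meta-learned model; the RBF kernel, length-scale, noise level, one-dimensional inputs, and the `observe`/`predict` helpers are all invented for the illustration.

```python
import numpy as np

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

class ResidualGP:
    """A mean model fit offline; a GP on its residuals adapts online."""

    def __init__(self, mean_fn, ls=0.5, noise=1e-4):
        self.mean_fn, self.ls, self.noise = mean_fn, ls, noise
        self.X = np.empty((0, 1))   # online inputs seen so far
        self.r = np.empty(0)        # residuals of the mean model

    def observe(self, x, y):
        # Online adaptation: condition the residual GP on a new outcome.
        self.X = np.vstack([self.X, x])
        self.r = np.append(self.r, y - self.mean_fn(x))

    def predict(self, Xs):
        # Prior mean plus the GP posterior correction from online data.
        mu0 = self.mean_fn(Xs)
        if len(self.r) == 0:
            return mu0
        K = rbf(self.X, self.X, self.ls) + self.noise * np.eye(len(self.r))
        ks = rbf(Xs, self.X, self.ls)
        return mu0 + ks @ np.linalg.solve(K, self.r)
```

In a deployment loop, such a model would score candidate scoop locations, pick the most promising one, and feed each scoop's outcome back in through `observe`, so that a few bad attempts quickly steer the lander elsewhere.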
    One of the challenges for this research is the lack of knowledge about ocean worlds like Europa.

    “Before we sent the recent rovers to Mars, orbiters gave us pretty good information about the terrain features,” Thangeda said. “But the best image we have of Europa has a resolution of 256 to 340 meters per pixel, which is not clear enough to ascertain features.”
    Thangeda’s adviser Melkior Ornik said, “All we know is that Europa’s surface is ice, but it could be big blocks of ice or much finer like snow. We also don’t know what’s underneath the ice.”
    For some trials, the team hid material under a layer of something else. The robot only sees the top material and thinks it might be good to scoop. “When it actually scoops and hits the bottom layer, it learns it is unscoopable and moves to a different area,” Thangeda said.
    NASA wants to send battery-powered rovers rather than nuclear-powered ones to Europa because, among other mission-specific considerations, it is critical to minimize the risk of contaminating ocean worlds with potentially hazardous materials.
    “Although nuclear power supplies have a lifespan of months, batteries have about a 20-day lifespan. We can’t afford to waste a few hours a day to send messages back and forth. This provides another reason why the robot’s autonomy to make decisions on its own is vital,” Thangeda said.

    This method of learning to learn is also unique because it allows the robot to use vision and very little online experience to achieve high-quality scooping actions on unfamiliar terrains — significantly outperforming non-adaptive methods and other state-of-the-art meta-learning methods.
    The team used a robot in the Department of Computer Science at Illinois. It is modeled after the arm of a lander, with sensors to collect scooping data on a variety of materials, from 1-millimeter grains of sand to 8-centimeter rocks, as well as deformable materials such as shredded cardboard and packing peanuts. From these 12 materials, and terrains made of unique compositions of one or more of them, the resulting simulation database contains 100 points of knowledge for each of 67 different terrains, or 6,700 total points.
    “To our knowledge, we are the first to open source a large-scale dataset on granular media,” Thangeda said. “We also provided code to easily access the dataset so others can start using it in their applications.”
    The model the team created will be deployed at the Ocean World Lander Autonomy Testbed at NASA’s Jet Propulsion Laboratory.
    “We’re interested in developing autonomous robotic capabilities on extraterrestrial surfaces, and in particular challenging extraterrestrial surfaces,” Ornik said. “This unique method will help inform NASA’s continuing interest in exploring ocean worlds.
    “The value of this work is in adaptability and transferability of knowledge or methods from Earth to an extraterrestrial body, because it is clear that we will not have a lot of information before the lander gets there. And because of the short battery lifespan, we won’t have a long time for the learning process. The lander might last for just a few days, then die, so learning and making decisions autonomously is extremely beneficial.”
    The open-source dataset is available at: drillaway.github.io/scooping-dataset.html.

  •

    Researcher turns one of the basic rules of construction upside down

    An Aston University researcher has turned one of the basic rules of construction on its head.
    For centuries a hanging chain has been used as an example to explain how masonry arches stand.
    Structural engineers are familiar with seventeenth-century scientist Robert Hooke’s theory that a hanging chain will mirror the shape of an upstanding rigid arch.
    However, research from Aston University’s College of Engineering and Physical Sciences shows that this commonly held belief is incorrect: despite their similarities, the hanging chain and the arch are two incompatible mechanical systems.
    Dr Haris Alexakis used the transition in science from Newtonian to Lagrangian mechanics, which led to the development of modern physics and mathematics, to prove this with mathematical rigour.
    In his paper, Vector analysis and the stationary potential energy for assessing equilibrium of curved masonry structures, he revisits the equilibrium of the hanging chain and the arch, explaining that the two systems operate in different spatial frameworks. One consequence of this is that the hanging chain requires only translational equilibrium, whereas the inverted arch must satisfy both translational and rotational equilibrium. As a result, the solutions are always different.
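    The asymmetry can be made concrete with the standard textbook force balance for a chain (a conventional derivation, not the paper's Lagrangian treatment). With weight w per unit length and constant horizontal tension H, only force equilibrium is needed, and the catenary follows:

```latex
% Translational equilibrium of a chain element of arc length ds:
% horizontal: T\cos\theta = H \ (\text{constant}); vertical: d(T\sin\theta) = w\,ds.
\frac{d^{2}y}{dx^{2}} = \frac{w}{H}\sqrt{1 + \left(\frac{dy}{dx}\right)^{2}}
\quad\Longrightarrow\quad
y(x) = \frac{H}{w}\cosh\!\left(\frac{w x}{H}\right) + C .
```

No moment balance appears anywhere in this derivation: a chain link cannot transmit a bending moment. A rigid masonry arch, by contrast, must also satisfy rotational equilibrium of each block, which is precisely the extra condition identified as making the two systems incompatible.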

    Dr Alexakis’s analysis unearthed subtle inconsistencies in the way Hooke’s analogy has been interpreted and applied over the centuries for the design and safety assessment of arches, and highlights its practical limitations.
    He said: “The analogy between inverted hanging chains and the optimal shape of masonry arches is a concept deeply rooted in our structural analysis practices.
    “Curved structures have enabled masons, engineers, and architects to carry heavy loads and cover large spans with the use of low-tensile strength materials for centuries, while creating the marvels of the world’s architectural heritage.
    “Despite the long history of these practices, finding optimal structural forms and assessing the stability and safety of curved structures remains as topical as ever. This is due to an increasing interest to preserve heritage structures and reduce material use in construction, while replacing steel and concrete with low-carbon natural materials.”
    His paper, published in the journal Mathematics and Mechanics of Solids, suggests a new structural analysis method based on the principle of stationary potential energy, which would be faster, more flexible, and able to handle more complex geometries.
    As a result, analysts won’t need to consider equilibrium of each individual block or describe geometrically the load path of thrust forces to obtain a rigorous solution.
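    A toy version of such an energy-based calculation fits in a short script. The sketch below illustrates the general idea only, not Dr Alexakis's formulation: it finds the shape of a hanging chain by nudging a discretised chain toward minimum gravitational potential energy while projection sweeps keep the link lengths fixed. No force balance on any individual link is ever written down; the node count, step size, and sweep count are arbitrary choices.

```python
import numpy as np

def hanging_chain(n=21, span=1.0, length=1.5, iters=3000, sweeps=40):
    """Drive a discretised chain toward minimum potential energy.

    Each iteration lowers the free nodes (reducing the sum of heights,
    i.e. the gravitational potential energy), then Jacobi projection
    sweeps restore the fixed link lengths. The endpoints stay pinned.
    """
    seg = length / (n - 1)
    pts = np.stack([np.linspace(0.0, span, n), np.zeros(n)], axis=1)
    for _ in range(iters):
        pts[1:-1, 1] -= 0.01                  # energy-descent (gravity) step
        for _ in range(sweeps):               # re-impose link-length constraints
            d = pts[1:] - pts[:-1]
            dist = np.linalg.norm(d, axis=1, keepdims=True)
            corr = 0.5 * (dist - seg) / dist * d
            delta = np.zeros_like(pts)
            delta[:-1] += corr
            delta[1:] -= corr
            delta[0] = delta[-1] = 0.0        # pinned supports
            pts += delta
    return pts

chain = hanging_chain()
```

The converged shape approximates a catenary, recovered purely from stationary potential energy rather than from equilibrium of each element or an explicit thrust line.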
    Dr Alexakis added: “The analysis tools discussed in the paper will enable us to assess the condition and safety of heritage structures and build more sustainable curved structures, like vaults and shells.
    “The major advantage of these structures, apart from having appealing aesthetics, is that they can have reduced volume, and can be made of economic, low-tensile-strength and low-carbon natural materials, contributing to net zero construction.”

  •

    World’s largest association of computing professionals issues Principles for Generative AI Technologies

    In response to major advances in Generative AI technologies — as well as the significant questions these technologies pose in areas including intellectual property, the future of work, and even human safety — the Association for Computing Machinery’s global Technology Policy Council (ACM TPC) has issued “Principles for the Development, Deployment, and Use of Generative AI Technologies.”
    Drawing on the deep technical expertise of computer scientists in the United States and Europe, the ACM TPC statement outlines eight principles intended to foster fair, accurate, and beneficial decision-making concerning generative and all other AI technologies. Four of the principles are specific to Generative AI, and an additional four principles are adapted from the TPC’s 2022 “Statement on Principles for Responsible Algorithmic Systems.”
    The Introduction to the new Principles advances the core argument that “the increasing power of Generative AI systems, the speed of their evolution, broad application, and potential to cause significant or even catastrophic harm, means that great care must be taken in researching, designing, developing, deploying, and using them. Existing mechanisms and modes for avoiding such harm likely will not suffice.”
    The document then sets out these eight instrumental principles, outlined here in abbreviated form:
    Generative AI-Specific Principles
    • Limits and guidance on deployment and use: In consultation with all stakeholders, law and regulation should be reviewed and applied as written or revised to limit the deployment and use of Generative AI technologies when required to minimize harm. No high-risk AI system should be allowed to operate without clear and adequate safeguards, including a “human in the loop” and clear consensus among relevant stakeholders that the system’s benefits will substantially outweigh its potential negative impacts. One approach is to define a hierarchy of risk levels, with unacceptable risk at the highest level and minimal risk at the lowest level.
    • Ownership: Inherent aspects of how Generative AI systems are structured and function are not yet adequately accounted for in intellectual property (IP) law and regulation.
    • Personal data control: Generative AI systems should allow a person to opt out of their data being used to train a system or facilitate its generation of information.
    • Correctability: Providers of Generative AI systems should create and maintain public repositories where errors made by the system can be noted and, optionally, corrections made.
    Adapted Prior Principles
    • Transparency: Any application or system that utilizes Generative AI should conspicuously disclose that it does so to the appropriate stakeholders.
    • Auditability and contestability: Providers of Generative AI systems should ensure that system models, algorithms, data, and outputs can be recorded where possible (with due consideration to privacy), so that they may be audited and/or contested in appropriate cases.
    • Limiting environmental impact: Given the large environmental impact of Generative AI models, we recommend that consensus on methodologies be developed to measure, attribute, and actively reduce such impact.
    • Heightened security and privacy: Generative AI systems are susceptible to a broad range of new security and privacy risks, including new attack vectors and malicious data leaks, among others.
    “Our field needs to tread carefully with the development of Generative AI because this is a new paradigm that goes significantly beyond previous AI technology and applications,” explained Ravi Jain, Chair of the ACM Technology Policy Council’s Working Group on Generative AI and lead author of the Principles. “Whether you celebrate Generative AI as a wonderful scientific advancement or fear it, everyone agrees that we need to develop this technology responsibly. In outlining these eight instrumental principles, we’ve tried to consider a wide range of areas where Generative AI might have an impact. These include aspects that have not been covered as much in the media, including environmental considerations and the idea of creating public repositories where errors in a system can be noted and corrected.”
    “These are guidelines, but we must also build a community of scientists, policymakers, and industry leaders who will work together in the public interest to understand the limits and risks of Generative AI as well as its benefits. ACM’s position as the world’s largest association of computing professionals makes us well-suited to foster that consensus, and we look forward to working with policymakers to craft the regulations by which Generative AI should be developed and deployed, but also controlled,” added James Hendler, Professor at Rensselaer Polytechnic Institute and Chair of ACM’s Technology Policy Council.
    “Principles for the Development, Deployment, and Use of Generative AI Technologies” was jointly produced and adopted by ACM’s US Technology Policy Committee (USTPC) and Europe Technology Policy Committee (Europe TPC).
    Lead authors of this document for USTPC were Ravi Jain, Jeanna Matthews, and Alejandro Saucedo. Important contributions were made by Harish Arunachalam, Brian Dean, Advait Deshpande, Simson Garfinkel, Andrew Grosso, Jim Hendler, Lorraine Kisselburgh, Srivatsa Kundurthy, Marc Rotenberg, Stuart Shapiro, and Ben Shneiderman. Assistance also was provided by Ricardo Baeza-Yates, Michel Beaudouin-Lafon, Vint Cerf, Charalampos Chelmis, Paul DeMarinis, Nicholas Diakopoulos, Janet Haven, Ravi Iyer, Carlos E. Jimenez-Gomez, Mark Pastin, Neeti Pokhriyal, Jason Schmitt, and Darryl Scriven.

  •

    Acoustics researchers decompose sound accurately into its three basic components

    Researchers have been looking for ways to decompose sound into its basic ingredients for over 200 years. In the 1820s, French scientist Joseph Fourier proposed that any signal, including sounds, can be built from sufficiently many sine waves. These waves sound like whistles; each has its own frequency, level, and start time, and together they are the basic building blocks of sound.
    However, some sounds, such as a flute or a breathy human voice, may require hundreds or even thousands of sine waves to exactly imitate the original waveform. This is because such sounds have a less harmonic, noisier structure in which all frequencies occur at once. One solution is to divide sound into two types of components, sines and noise: a smaller number of whistling sine waves is combined with variable noises, or hisses, to complete the imitation.
    Even this ‘complete’ two-component sound model smooths out the beginnings of sound events, such as consonants in voice or drum sounds in music. A third component, named the transient, was introduced around the year 2000 to help model the sharpness of such sounds. Transients alone sound like clicks. Since then, sound has often been divided into three components: sines, noise, and transients.
    The three-component model of sines, noise and transients has now been refined by researchers at Aalto University Acoustics Lab, using ideas from auditory perception, fuzzy logic, and perfect reconstruction.
    Decomposition mirrors the way we hear sounds
    Doctoral researcher Leonardo Fierro and professor Vesa Välimäki realized the way that people hear the different components and separate whistles, clicks, and hisses is important. If a click gets spread in time, it starts to ring and sound noisier; by contrast, focusing on very brief sounds might cause some loss of tonality.

    This insight from auditory perception was coupled with fuzzy logic: at any moment, part of the sound can belong to each of the three classes of sines, transients or noise, not just one of them. With the goal of perfect reconstruction, Fierro optimized the way sound is decomposed.
    In the enhanced method, sines and transients are treated as two opposite characteristics of sound, and a sound is not allowed to belong to both classes at the same time. However, either of the two opposite component types can still occur simultaneously with noise. Thus, the idea of fuzzy logic is present in a restricted way. The noise works as a fuzzy link between the sines and transients, describing all the nuances of the sound that are not captured by simple clicks and whistles. ‘It’s like finding the missing piece of a puzzle to connect those two parts that did not fit together before,’ says Fierro.
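    A rough flavour of such a decomposition can be given in code. The sketch below is a generic median-filtering scheme in the spirit of earlier sines–transients–noise work (ridges that persist in time are sine-like, ridges across frequency are transient-like, the leftover is noise), not the Aalto method itself; the window size, filter length, and threshold `beta` are arbitrary, and crude binary masks stand in for the paper's fuzzy ones. Because the three masks sum to one in every time–frequency cell, the components add back up to the original signal, the ‘perfect reconstruction’ property mentioned above.

```python
import numpy as np

def _median(a, k, axis):
    """Running median of odd length k along one axis, edge-padded."""
    pad = [(0, 0), (0, 0)]
    pad[axis] = (k // 2, k // 2)
    a = np.pad(a, pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(a, k, axis=axis)
    return np.median(win, axis=-1)

def decompose(x, win=512, hop=256, k=17, beta=2.0):
    """Split x into [sines, transients, noise] via spectral masking."""
    w = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(win) / win)  # periodic Hann
    starts = range(0, len(x) - win + 1, hop)
    F = np.array([np.fft.rfft(w * x[i:i + win]) for i in starts])
    S = np.abs(F)
    H = _median(S, k, axis=0)      # smooth over time  -> sine-like ridges
    P = _median(S, k, axis=1)      # smooth over freq  -> transient-like ridges
    m_sines = (H >= beta * P).astype(float)
    m_trans = (P > beta * H).astype(float)   # mutually exclusive with sines
    m_noise = 1.0 - m_sines - m_trans        # the leftover; masks sum to 1
    out = []
    for m in (m_sines, m_trans, m_noise):
        y = np.zeros(len(x))
        for j, i in enumerate(starts):       # overlap-add resynthesis
            y[i:i + win] += np.fft.irfft(F[j] * m[j], n=win)
        out.append(y)
    return out
```

With the periodic Hann window at 50% overlap, the overlapped windows sum to one, so away from the signal edges the three outputs reconstruct the input sample for sample.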
    This enhanced decomposition method was compared with previous methods in a listening test. Eleven experienced listeners were individually asked to evaluate several short music excerpts and the components extracted from them using different methods.
    The new method emerged as the best way to decompose most sounds, based on the listeners’ ratings. Only when there is strong vibrato in a musical sound, such as in a singing voice or a violin, do all decomposition methods struggle, and in these cases some previous methods are superior.
    A test use case for the new decomposition method is the time-scale modification of sound, especially slowing down of music. This was tested in a preference listening test against the lab’s own previous method, which was selected as the best academic technique in a comparative study a few years ago. Again, Fierro’s new method was a clear winner.
    ‘The new sound decomposition method opens many exciting possibilities in sound processing,’ says professor Välimäki. ‘The slowing down of sound is currently our main interest. It is striking that for example in sports news, the slow-motion videos are always silent. The reason is probably that the sound quality in current slow-down audio tools is not good enough. We have already started developing better time-scale modification methods, which use a deep neural network to help stretch some components.’
    The high-quality sound decomposition also enables novel types of music remixing techniques. One of them leads to distortion-free dynamic range compression. Namely, the transient component often contains the loudest peaks in the sound waveform, so simply reducing the level of the transient component and mixing it back with the others can limit the peak-to-peak value of audio.
    Leonardo Fierro demonstrates how the “SiTraNo” app can be used to break sound into its atoms — in this case himself rapping, in this video: https://youtu.be/nZldIAYzzOs

  •

    Capturing the immense potential of microscopic DNA for data storage

    In a world first, a ‘biological camera’ bypasses the constraints of current DNA storage methods, harnessing living cells and their inherent biological mechanisms to encode and store data. This represents a significant breakthrough in encoding and storing images directly within DNA, creating a new model for information storage reminiscent of a digital camera.
    Led by Principal Investigator Associate Professor Chueh Loo Poh from the College of Design and Engineering at the National University of Singapore, and the NUS Synthetic Biology for Clinical and Technological Innovation (SynCTI), the team’s findings, which could potentially shake up the data-storage industry, were published in Nature Communications on 3 July 2023.
    A new paradigm to address global data overload
    As the world continues to generate data at an unprecedented rate, data has come to be seen as the ‘currency’ of the 21st century. The Global Datasphere, estimated at 33 ZB in 2018, is forecast to reach 175 ZB by 2025. That has sparked a quest for a storage alternative that can transcend the confines of conventional data storage and address the environmental impact of resource-intensive data centres.
    It is only recently that the idea of using DNA to store other types of information, such as images and videos, has garnered attention. This is due to DNA’s exceptional storage capacity, stability, and long-standing relevance as a medium for information storage.
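    The basic appeal is easy to see in code: digital data maps naturally onto the four-letter DNA alphabet. The toy encoder below packs two bits per nucleotide; it is only a generic illustration of the DNA-storage idea, not the NUS team's in-cell optogenetic approach, and real synthesis pipelines add error correction and avoid problematic sequences such as long homopolymer runs.

```python
BASES = "ACGT"  # A=00, C=01, G=10, T=11

def to_dna(data: bytes) -> str:
    """Encode bytes as a DNA sequence, two bits per base (4 bases/byte)."""
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):            # most-significant bit pair first
            seq.append(BASES[(byte >> shift) & 0b11])
    return "".join(seq)

def from_dna(seq: str) -> bytes:
    """Decode a DNA sequence produced by to_dna back into bytes."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)
```

At this density a byte needs only four bases, which is the arithmetic behind the enormous per-gram capacity estimates quoted for DNA.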
    “We are facing an impending data overload. DNA, the key biomaterial of every living thing on Earth, stores genetic information that encodes for an array of proteins responsible for various life functions. To put it into perspective, a single gram of DNA can hold over 215,000 terabytes of data — equivalent to storing 45 million DVDs combined,” said Assoc Prof Poh.

    “DNA is also easy to manipulate with current molecular biology tools, can be stored in various forms at room temperature, and is so durable it can last centuries,” said Cheng Kai Lim, a graduate student working with Assoc Prof Poh.
    Despite its immense potential, current research in DNA storage focuses on synthesising DNA strands outside the cells. This process is expensive and relies on complex instruments that are prone to errors.
    To overcome this bottleneck, Assoc Prof Poh and his team turned to live cells, which contain an abundance of DNA that can act as a ‘data bank’, circumventing the need to synthesise the genetic material externally.
    Through sheer ingenuity and clever engineering, the team developed ‘BacCam’ — a novel system that merges various biological and digital techniques to emulate a digital camera’s functions using biological components.
    “Imagine the DNA within a cell as an undeveloped photographic film,” explained Assoc Prof Poh. “Using optogenetics, a technique that controls the activity of cells with light, akin to the shutter mechanism of a camera, we managed to capture ‘images’ by imprinting light signals onto the DNA ‘film’.”
    Next, using barcoding techniques akin to photo labelling, the researchers marked the captured images for unique identification. Machine-learning algorithms were employed to organise, sort, and reconstruct the stored images. These constitute the ‘biological camera’, mirroring a digital camera’s data capture, storage, and retrieval processes.
    The study showcased the camera’s ability to capture and store multiple images simultaneously using different light colours. More crucially, compared to earlier methods of DNA data storage, the team’s innovative system is easily reproducible and scalable.
    “As we push the boundaries of DNA data storage, there is an increasing interest in bridging the interface between biological and digital systems,” said Assoc Prof Poh.
    “Our method represents a major milestone in integrating biological systems with digital devices. By harnessing the power of DNA and optogenetic circuits, we have created the first ‘living digital camera,’ which offers a cost-effective and efficient approach to DNA data storage. Our work not only explores further applications of DNA data storage but also re-engineers existing data-capture technologies into a biological framework. We hope this will lay the groundwork for continued innovation in recording and storing information.”

  •

    Revolutionary self-sensing electric artificial muscles

    Researchers from Queen Mary University of London have made groundbreaking advancements in bionics with the development of a new electric variable-stiffness artificial muscle. Published in Advanced Intelligent Systems, this innovative technology possesses self-sensing capabilities and has the potential to revolutionize soft robotics and medical applications. The artificial muscle seamlessly transitions between soft and hard states, while also sensing forces and deformations. With flexibility and stretchability similar to natural muscle, it can be integrated into intricate soft robotic systems and adapt to various shapes. By adjusting voltages, the muscle rapidly changes its stiffness and can monitor its own deformation through resistance changes. The fabrication process is simple and reliable, making it ideal for a range of applications, including aiding individuals with disabilities or patients in rehabilitation training.
    In a study published recently in Advanced Intelligent Systems, researchers from Queen Mary University of London have made significant advancements in the field of bionics with the development of a new type of electric variable-stiffness artificial muscle that possesses self-sensing capabilities. This innovative technology has the potential to revolutionize soft robotics and medical applications.
    Hardening during muscle contraction is not only essential for enhancing strength but also enables rapid reactions in living organisms. Taking inspiration from nature, the team of researchers at QMUL’s School of Engineering and Materials Science has successfully created an artificial muscle that seamlessly transitions between soft and hard states while also possessing the remarkable ability to sense forces and deformations.
    Dr. Ketao Zhang, a Lecturer at Queen Mary and the lead researcher, explains the importance of variable stiffness technology in artificial muscle-like actuators. “Empowering robots, especially those made from flexible materials, with self-sensing capabilities is a pivotal step towards true bionic intelligence,” says Dr. Zhang.
    The cutting-edge artificial muscle developed by the researchers exhibits flexibility and stretchability similar to natural muscle, making it ideal for integration into intricate soft robotic systems and adapting to various geometric shapes. With the ability to withstand over 200% stretch along the length direction, this flexible actuator with a striped structure demonstrates exceptional durability.
    By applying different voltages, the artificial muscle can rapidly adjust its stiffness, achieving continuous modulation with a stiffness change exceeding 30 times. Its voltage-driven nature provides a significant advantage in terms of response speed over other types of artificial muscles. Additionally, this novel technology can monitor its deformation through resistance changes, eliminating the need for additional sensor arrangements and simplifying control mechanisms while reducing costs.
    The fabrication process for this self-sensing artificial muscle is simple and reliable. Carbon nanotubes are mixed with liquid silicone using ultrasonic dispersion technology and coated uniformly with a film applicator to create the thin layered cathode, which also serves as the sensing part of the artificial muscle. The anode is cut directly from a soft metal mesh, and the actuation layer is sandwiched between the cathode and the anode. After the liquid materials cure, a complete self-sensing variable-stiffness artificial muscle is formed.
    The potential applications of this flexible variable stiffness technology are vast, ranging from soft robotics to medical applications. The seamless integration with the human body opens up possibilities for aiding individuals with disabilities or patients in performing essential daily tasks. By integrating the self-sensing artificial muscle, wearable robotic devices can monitor a patient’s activities and provide resistance by adjusting stiffness levels, facilitating muscle function restoration during rehabilitation training.
    “While there are still challenges to be addressed before these medical robots can be deployed in clinical settings, this research represents a crucial stride towards human-machine integration,” highlights Dr. Zhang. “It provides a blueprint for the future development of soft and wearable robots.”
    The groundbreaking study conducted by researchers at Queen Mary University of London marks a significant milestone in the field of bionics. With their development of self-sensing electric artificial muscles, they have paved the way for advancements in soft robotics and medical applications.

  •

    A varied life boosts the brain’s functional networks

    That experiences leave their trace in the connectivity of the brain has been known for a while, but a pioneering study by researchers at the German Center for Neurodegenerative Diseases (DZNE) and TUD Dresden University of Technology now shows how massive these effects really are. The findings in mice provide unprecedented insights into the complexity of large-scale neural networks and brain plasticity. Moreover, they could pave the way for new brain-inspired artificial intelligence methods. The results, based on an innovative “brain-on-chip” technology, are published in the scientific journal Biosensors and Bioelectronics.
    The Dresden researchers explored the question of how an enriched experience affects the brain’s circuitry. For this, they deployed a so-called neurochip with more than 4,000 electrodes to detect the electrical activity of brain cells. This innovative platform enabled registering the “firing” of thousands of neurons simultaneously. The area examined — much smaller than the size of a human fingernail — covered an entire mouse hippocampus. This brain structure, shared by humans, plays a pivotal role in learning and memory, making it a prime target for the ravages of dementias like Alzheimer’s disease. For their study, the scientists compared brain tissue from mice, which were raised differently. While one group of rodents grew up in standard cages, which did not offer any special stimuli, the others were housed in an “enriched environment” that included rearrangeable toys and maze-like plastic tubes.
    “The results by far exceeded our expectations,” said Dr. Hayder Amin, lead scientist of the study. Amin, an expert in neuroelectronics and computational neuroscience, heads a research group at DZNE. With his team, he developed the technology and analysis tools used in this study. “Simplified, one can say that the neurons of mice from the enriched environment were much more interconnected than those raised in standard housing. No matter which parameter we looked at, a richer experience literally boosted connections in the neuronal networks. These findings suggest that leading an active and varied life shapes the brain on whole new grounds.”
    Unprecedented Insight into Brain Networks
    Prof. Gerd Kempermann, who co-led the study and has long worked on the question of how physical and cognitive activity helps the brain build resilience against aging and neurodegenerative disease, attests: “All we knew in this area so far has either been taken from studies with single electrodes or imaging techniques like magnetic resonance imaging. The spatial and temporal resolution of these techniques is much coarser than our approach. Here we can literally see the circuitry at work down to the scale of single cells. We applied advanced computational tools to extract a huge amount of detail about network dynamics in space and time from our recordings.”
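    The kind of group comparison described here can be mimicked on synthetic data. The sketch below is a deliberately crude stand-in for the study's analysis: it builds binary spike rasters for two hypothetical neuron populations, gives one of them a shared common drive, and compares mean pairwise correlation as a toy ‘functional connectivity’ score. All numbers are invented for the illustration.

```python
import numpy as np

def mean_connectivity(raster):
    """Mean off-diagonal pairwise correlation of a (neurons x bins) raster."""
    c = np.corrcoef(raster)
    n = len(c)
    return (c.sum() - n) / (n * (n - 1))

rng = np.random.default_rng(42)
n_neurons, n_bins = 50, 2000

# "Standard housing": independent random firing, so correlations stay near zero.
control = (rng.random((n_neurons, n_bins)) < 0.05).astype(float)

# "Enriched environment": the same baseline plus a shared drive that makes
# many neurons fire together, raising pairwise correlations across the group.
drive = rng.random(n_bins) < 0.05
enriched = ((rng.random((n_neurons, n_bins)) < 0.05) | drive).astype(float)

print(mean_connectivity(control), mean_connectivity(enriched))
```

A real analysis would of course use spike trains from the 4,000-electrode recordings and far richer network measures, but the comparison logic, one connectivity score per condition, is the same.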
    “We have uncovered a wealth of data that illustrates the benefits of a brain shaped by rich experience. This paves the way to understanding the role of plasticity and reserve formation in combating neurodegenerative diseases, especially with respect to novel preventive strategies,” said Prof. Kempermann, who, in addition to being a DZNE researcher, is also affiliated with the Center for Regenerative Therapies Dresden (CRTD) at TU Dresden. “Also, this will help provide insights into disease processes associated with neurodegeneration, such as dysfunctions of brain networks.”
    Potential Regarding Brain-inspired Artificial Intelligence
    “By unraveling how experiences shape the brain’s connectome and dynamics, we are not only pushing the boundaries of brain research,” states Dr. Amin. “Artificial intelligence is inspired by how the brain computes information. Thus, our tools and the insights they generate could open the way for novel machine learning algorithms.”

  •

    Canada’s Crawford Lake could mark the beginning of the Anthropocene

    McKenzie Prillaman was the Spring 2023 science writing intern at Science News. She holds a bachelor’s degree in neuroscience with a minor in bioethics from the University of Virginia and a master’s degree in science communication from the University of California, Santa Cruz.