More stories

  •

    Wafer-thin nanopaper changes from firm to soft at the touch of a button

    Materials science often takes its cue from nature, looking to the special properties of living organisms that could potentially be transferred to materials. A research team led by chemist Professor Andreas Walther of Johannes Gutenberg University Mainz (JGU) has succeeded in endowing a material with a bioinspired property: wafer-thin, stiff nanopaper instantly becomes soft and elastic at the push of a button. “We have equipped the material with a mechanism so that its strength and stiffness can be modulated via an electrical switch,” explained Walther. As soon as an electric current is applied, the nanopaper becomes soft; when the current stops flowing, it regains its strength. From an application perspective, this switchability could be interesting for damping materials, for example. The work, which also involved scientists from the University of Freiburg and the Cluster of Excellence on “Living, Adaptive, and Energy-autonomous Materials Systems” (livMatS) funded by the German Research Foundation (DFG), was published in Nature Communications.
    Inspiration from the seafloor: Mechanical switch serves a protective function
    The nature-based inspiration in this case comes from sea cucumbers. These marine creatures have a special defense mechanism: When they are attacked by predators in their habitat on the seafloor, sea cucumbers can adapt and strengthen their tissue so that their soft exterior immediately stiffens. “This is an adaptive mechanical behavior that is fundamentally difficult to replicate,” said Professor Andreas Walther. With their work now published, his team has succeeded in mimicking the basic principle in a modified form using an attractive material and an equally attractive switching mechanism.
    The scientists used cellulose nanofibrils extracted and processed from the cell walls of trees. Nanofibrils are even finer than the microfibers in standard paper and yield a completely transparent, almost glass-like paper. The material is stiff and strong, making it appealing for lightweight construction; its characteristics are even comparable to those of aluminum alloys. In their work, the research team applied electricity to these cellulose nanofibril-based nanopapers, and specially designed molecular changes then render the material flexible. The process is reversible and can be controlled by an on/off switch.
    “This is extraordinary. All the materials around us are not very changeable, they do not easily switch from stiff to elastic and vice versa. Here, with the help of electricity, we can do that in a simple and elegant way,” said Walther. The development is thus moving away from classic static materials toward materials with properties that can be adaptively adjusted. This is relevant for mechanical materials, which can thus be made more resistant to fracture, or for adaptive damping materials, which could switch from stiff to compliant when overloaded, for example.
    Targeting a material with its own energy storage for autonomous on/off switching
    At the molecular level, the process involves heating the material by applying a current and thus reversibly breaking cross-linking points. The material softens in correlation with the applied voltage, i.e., the higher the voltage, the more cross-linking points are broken and the softer the material becomes. Professor Andreas Walther’s vision for the future also starts at the point of power supply: While currently a power source is needed to start the reaction, the next goal would be to produce a material with its own energy storage system, so that the reaction is essentially triggered “internally” as soon as, for example, an overload occurs and damping becomes necessary. “Now we still have to flip the switch ourselves, but our dream would be for the material system to be able to accomplish this on its own.”
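The voltage-dependent softening described above can be caricatured with a toy model (entirely illustrative, not from the paper): Joule heating raises the material's temperature roughly with the square of the applied voltage, and the stiffness falls off as cross-links break around an assumed transition temperature. All constants below are invented for illustration.

```python
import math

def temperature(voltage, t_ambient=300.0, k_heat=2.0):
    """Steady-state temperature in K: Joule heating scales with V^2 (toy constant k_heat)."""
    return t_ambient + k_heat * voltage ** 2

def relative_stiffness(voltage, t_transition=360.0, width=10.0):
    """Relative stiffness, modeled as a sigmoid drop around an assumed transition temperature."""
    t = temperature(voltage)
    return 1.0 / (1.0 + math.exp((t - t_transition) / width))

for v in (0.0, 5.0, 10.0):
    print(f"V = {v:4.1f} -> relative stiffness {relative_stiffness(v):.3f}")
```

Raising the voltage heats the paper past the transition, so the computed stiffness drops toward zero; switching the current off lets it cool and recover, matching the reversible on/off behavior described above.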
    Story Source:
    Materials provided by Johannes Gutenberg Universitaet Mainz. Note: Content may be edited for style and length.

  •

    More than words: Using AI to map how the brain understands sentences

    Have you ever wondered how you are able to hear a sentence and understand its meaning — given that the same words in a different order would have an entirely different meaning? New research involving neuroimaging and A.I. describes the complex network within the brain that comprehends the meaning of a spoken sentence.
    “It has been unclear whether the integration of this meaning is represented in a particular site in the brain, such as the anterior temporal lobes, or reflects a more network-level operation that engages multiple brain regions,” said Andrew Anderson, Ph.D., research assistant professor at the University of Rochester Del Monte Institute for Neuroscience and lead author of the study, which was published in the Journal of Neuroscience. “The meaning of a sentence is more than the sum of its parts. Take a very simple example — ‘the car ran over the cat’ and ‘the cat ran over the car’ — each sentence has exactly the same words, but those words have a totally different meaning when reordered.”
    The study is an example of how the application of artificial neural networks, or A.I., is enabling researchers to unlock the extremely complex signaling in the brain that underlies functions such as language processing. The researchers gathered brain activity data from study participants who read sentences while undergoing fMRI. These scans showed activity spanning a network of different brain regions — the anterior and posterior temporal lobes, inferior parietal cortex, and inferior frontal cortex. Using the computational model InferSent — an A.I. model developed by Facebook and trained to produce unified semantic representations of sentences — the researchers were able to predict patterns of fMRI activity reflecting the encoding of sentence meaning across those brain regions.
    “It’s the first time that we’ve applied this model to predict brain activity within these regions, and that provides new evidence that contextualized semantic representations are encoded throughout a distributed language network, rather than at a single site in the brain.”
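The underlying "encoding model" idea (fit a linear map from sentence embeddings to voxel responses, then test how well it predicts activity) can be sketched as follows. This is a simplified illustration only: the study used InferSent embeddings and measured fMRI data, whereas here both are replaced by synthetic arrays, and the ridge fit is scored on its own training data rather than properly cross-validated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sentences, embed_dim, n_voxels = 200, 50, 30

# Stand-ins for InferSent sentence embeddings and recorded voxel activity.
embeddings = rng.normal(size=(n_sentences, embed_dim))
true_weights = rng.normal(size=(embed_dim, n_voxels))
activity = embeddings @ true_weights + 0.1 * rng.normal(size=(n_sentences, n_voxels))

# Ridge regression (closed form): W = (X^T X + alpha I)^(-1) X^T Y.
alpha = 1.0
xtx = embeddings.T @ embeddings + alpha * np.eye(embed_dim)
weights = np.linalg.solve(xtx, embeddings.T @ activity)

# Predict voxel activity and score each voxel by correlation.
predicted = embeddings @ weights
corrs = [np.corrcoef(predicted[:, v], activity[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean prediction correlation: {np.mean(corrs):.3f}")
```

In the actual study, high prediction correlations across many regions, rather than at one site, are what supported the distributed-network conclusion.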
    Anderson and his team believe the findings could be helpful in understanding clinical conditions. “We’re deploying similar methods to try to understand how language comprehension breaks down in early Alzheimer’s disease. We are also interested in moving the models forward to predict brain activity elicited as language is produced. The current study had people read sentences; in the future, we’re interested in predicting brain activity as people speak sentences.”
    Story Source:
    Materials provided by University of Rochester Medical Center. Original written by Kelsie Smith Hayduk. Note: Content may be edited for style and length.

  •

    How UK, South Africa coronavirus variants escape immunity

    All viruses mutate as they make copies of themselves to spread and thrive. SARS-CoV-2, the virus that causes COVID-19, is proving to be no different. There are currently more than 4,000 variants of SARS-CoV-2, the virus behind a pandemic that has already killed more than 2.7 million people worldwide.
    The UK variant, also known as B.1.1.7, was first detected in September 2020, and is now causing 98 percent of all COVID-19 cases in the United Kingdom. And it appears to be gaining a firm grip in about 100 other countries it has spread to in the past several months, including France, Denmark, and the United States.
    The World Health Organization says B.1.1.7 is one of several variants of concern along with others that have emerged in South Africa and Brazil.
    “The UK, South Africa, and Brazil variants are more contagious and escape immunity more easily than the original virus,” said Victor Padilla-Sanchez, a research scientist at The Catholic University of America. “We need to understand why they are more infectious and, in many cases, more deadly.”
    All three variants have undergone changes to their spike protein — the part of the virus which attaches to human cells. As a result, they are better at infecting cells and spreading.
    In a research paper published in January 2021 in Research Ideas and Outcomes, Padilla-Sanchez discusses the UK and South African variants in detail. He presents a computational analysis of the structure of the spike glycoprotein bound to the ACE2 receptor where the mutations have been introduced. His paper outlines the reason why these variants bind better to human cells.

  •

    Discovery of non-toxic semiconductors with a direct band gap in the near-infrared

    NIMS and the Tokyo Institute of Technology have jointly discovered that the chemical compound Ca3SiO is a direct transition semiconductor, making it a potentially promising infrared LED and infrared detector component. This compound — composed of calcium, silicon and oxygen — is cheap to produce and non-toxic. Many of the existing infrared semiconductors contain toxic chemical elements, such as cadmium and tellurium. Ca3SiO may be used to develop less expensive and safer near-infrared semiconductors.
    Infrared wavelengths have been used for many purposes, including optical fiber communications, photovoltaic power generation and night vision devices. Existing semiconductors capable of emitting infrared radiation (i.e., direct transition semiconductors) contain toxic chemical compounds, such as mercury cadmium telluride and gallium arsenide. Infrared semiconductors free of toxic chemical elements are generally incapable of emitting infrared radiation (i.e., indirect transition semiconductors). It is desirable to develop high-performance infrared devices using non-toxic, direct transition semiconductors with a band gap in the infrared range.
    Conventionally, the semiconductive properties of materials, such as the energy band gap, have been controlled by combining two chemical elements located to the left and right of the group IV elements, i.e., groups III and V or groups II and VI. In this conventional strategy, the energy band gap becomes narrower as heavier elements are used; consequently, the strategy has led to direct transition semiconductors composed of toxic elements, such as mercury cadmium telluride and gallium arsenide. To discover infrared semiconductors free of toxic elements, this research group took an unconventional approach: they focused on crystalline structures in which silicon atoms behave as tetravalent anions rather than in their normal tetravalent cation state. The group ultimately chose oxysilicides (e.g., Ca3SiO) and oxygermanides with an inverse perovskite crystalline structure, synthesized them, evaluated their physical properties and conducted theoretical calculations. These processes revealed that the compounds exhibit a very small direct band gap of approximately 0.9 eV, corresponding to a wavelength of about 1.4 μm, indicating their great potential to serve as direct transition semiconductors. Compounds with such a small direct band gap may be effective in absorbing, detecting and emitting near-infrared light even when processed into thin films, making them very promising near-infrared semiconductor materials for use in infrared sources (e.g., LEDs) and detectors.
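As a quick sanity check on the numbers above, the photon wavelength matching a band-gap energy follows from λ = hc/E; a 0.9 eV gap indeed lands near 1.4 μm, in the near-infrared. (This is the standard energy-wavelength conversion, not a calculation from the paper.)

```python
H_C_EV_NM = 1239.84  # h*c in eV*nm (Planck constant times speed of light)

def gap_to_wavelength_nm(band_gap_ev: float) -> float:
    """Return the photon wavelength (nm) matching a band-gap energy (eV)."""
    return H_C_EV_NM / band_gap_ev

wavelength = gap_to_wavelength_nm(0.9)
print(f"0.9 eV gap -> {wavelength:.0f} nm (~{wavelength / 1000:.1f} um)")
```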
    In future research, we plan to develop high-intensity infrared LEDs and highly sensitive infrared detectors by synthesizing these compounds in the form of large single-crystals, developing thin film growth processes and controlling their physical properties through doping and transforming them into solid solutions. If these efforts bear fruit, toxic chemical elements currently used in existing near-infrared semiconductors may be replaced with non-toxic ones.
    Story Source:
    Materials provided by National Institute for Materials Science, Japan. Note: Content may be edited for style and length.

  •

    Novel thermometer can accelerate quantum computer development

    Researchers at Chalmers University of Technology, Gothenburg, Sweden, have developed a novel type of thermometer that can simply and quickly measure temperatures during quantum calculations with extremely high accuracy. The breakthrough provides a valuable benchmarking tool for quantum computing — and opens the door to experiments in the exciting field of quantum thermodynamics.
    Key components in quantum computers are the coaxial cables and waveguides — structures which guide waveforms and act as the vital connection between the quantum processor and the classical electronics that control it. Microwave pulses travel along the waveguides to the quantum processor and are cooled down to extremely low temperatures along the way. The waveguide also attenuates and filters the pulses, enabling the extremely sensitive quantum computer to work with stable quantum states.
    In order to have maximum control over this mechanism, the researchers need to be sure that these waveguides are not carrying noise due to thermal motion of electrons on top of the pulses that they send. In other words, they have to measure the temperature of the electromagnetic fields at the cold end of the microwave waveguides, the point where the controlling pulses are delivered to the computer’s qubits. Working at the lowest possible temperature minimises the risk of introducing errors in the qubits.
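Why the cold end matters can be seen from the standard Bose-Einstein estimate of the thermal photon occupation of a microwave mode, n = 1/(exp(hf/kT) - 1). The sketch below is generic physics, not the Chalmers group's analysis, and the 5 GHz frequency is an assumed, typical qubit-control value; it shows how the thermal photon number collapses as the line is cooled toward 10 millikelvin.

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
KB = 1.380649e-23    # Boltzmann constant, J/K

def thermal_photons(freq_hz: float, temp_k: float) -> float:
    """Mean Bose-Einstein photon occupation of a mode at freq_hz and temp_k."""
    return 1.0 / math.expm1(H * freq_hz / (KB * temp_k))

f = 5e9  # assumed ~5 GHz control frequency, for illustration
for t in (0.010, 0.050, 0.100):
    print(f"T = {t * 1000:4.0f} mK -> mean thermal photons n = {thermal_photons(f, t):.2e}")
```

At 100 mK the mode still holds roughly a tenth of a thermal photon on average, while at 10 mK the occupation is negligible — which is why measuring the field temperature right at the cold end is so valuable.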
    Until now, researchers have only been able to measure this temperature indirectly, with relatively large delay. Now, with the Chalmers researchers’ novel thermometer, very low temperatures can be measured directly at the receiving end of the waveguide — very accurately and with extremely high time resolution.
    “Our thermometer is a superconducting circuit, directly connected to the end of the waveguide being measured. It is relatively simple — and probably the world’s fastest and most sensitive thermometer for this particular purpose at the millikelvin scale,” says Simone Gasparinetti, Assistant Professor at the Quantum Technology Laboratory, Chalmers University of Technology.
    Important for measuring quantum computer performance
    The researchers at the Wallenberg Centre for Quantum Technology, WACQT, have the goal of building a quantum computer — based on superconducting circuits — with at least 100 well-functioning qubits performing correct calculations by 2030. This requires a processor operating temperature close to absolute zero, ideally as low as 10 millikelvin. The new thermometer gives the researchers an important tool for measuring how good their systems are and what shortcomings exist — a necessary step toward refining the technology and achieving their goal.

  •

    Machine learning shows potential to enhance quantum information transfer

    Army-funded researchers demonstrated a machine learning approach that corrects quantum information in systems composed of photons, improving the outlook for deploying quantum sensing and quantum communications technologies on the battlefield.
    When photons are used as the carriers of quantum information to transmit data, that information is often distorted due to environment fluctuations destroying the fragile quantum states necessary to preserve it.
    Researchers from Louisiana State University exploited a type of machine learning to correct for information distortion in quantum systems composed of photons. In work published in Advanced Quantum Technologies, the team demonstrated that machine learning techniques using the self-learning and self-evolving features of artificial neural networks can help correct distorted information. These results outperformed traditional protocols that rely on conventional adaptive optics.
    “We are still in the fairly early stages of understanding the potential for machine learning techniques to play a role in quantum information science,” said Dr. Sara Gamble, program manager at the Army Research Office, an element of U.S. Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory. “The team’s result is an exciting step forward in developing this understanding, and it has the potential to ultimately enhance the Army’s sensing and communication capabilities on the battlefield.”
    For this research, the team used a type of neural network to correct for distorted spatial modes of light at the single-photon level.
    “The random phase distortion is one of the biggest challenges in using spatial modes of light in a wide variety of quantum technologies, such as quantum communication, quantum cryptography, and quantum sensing,” said Narayan Bhusal, doctoral candidate at LSU. “Our method is remarkably effective and time-efficient compared to conventional techniques. This is an exciting development for the future of free-space quantum technologies.”
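The flavor of the approach can be illustrated with a deliberately tiny stand-in (not the LSU team's network or data): a one-layer classifier trained by gradient descent learns to identify which of two synthetic "spatial modes" a noise-distorted sample carries, standing in for recognizing turbulence-distorted modes of light.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two fake 6-pixel "mode" patterns standing in for spatial modes of light.
modes = np.array([[1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1]], dtype=float)

def sample(n):
    """Draw n distorted samples: a random mode plus additive 'turbulence' noise."""
    labels = rng.integers(0, 2, n)
    x = modes[labels] + 0.4 * rng.normal(size=(n, 6))
    return x, labels

# One-layer logistic classifier trained by plain gradient descent.
w, b = np.zeros(6), 0.0
x_train, y_train = sample(500)
for _ in range(200):
    p = 1 / (1 + np.exp(-(x_train @ w + b)))     # predicted probability of mode 1
    w -= 0.5 * x_train.T @ (p - y_train) / len(y_train)
    b -= 0.5 * np.mean(p - y_train)

x_test, y_test = sample(200)
acc = np.mean(((x_test @ w + b) > 0) == y_test)
print(f"mode identification accuracy: {acc:.2f}")
```

The real system works at the single-photon level with many spatial modes and far richer networks, but the principle is the same: learn the distortion from examples instead of modeling it explicitly.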
    According to the research team, this smart quantum technology demonstrates the possibility of encoding multiple bits of information in a single photon, even in realistic communication protocols affected by atmospheric turbulence.
    “Our technique has enormous implications for optical communication and quantum cryptography,” said Omar Magaña Loaiza, assistant professor of physics at LSU. “We are now exploring paths to implement our machine learning scheme in the Louisiana Optical Network Initiative to make it smart, more secure, and quantum.”
    Story Source:
    Materials provided by U.S. Army Research Laboratory. Note: Content may be edited for style and length.

  •

    Researchers' algorithm designs soft robots that sense

    There are some tasks that traditional robots — the rigid and metallic kind — simply aren’t cut out for. Soft-bodied robots, on the other hand, may be able to interact with people more safely or slip into tight spaces with ease. But for robots to reliably complete their programmed duties, they need to know the whereabouts of all their body parts. That’s a tall task for a soft robot that can deform in a virtually infinite number of ways.
    MIT researchers have developed an algorithm to help engineers design soft robots that collect more useful information about their surroundings. The deep-learning algorithm suggests an optimized placement of sensors within the robot’s body, allowing it to better interact with its environment and complete assigned tasks. The advance is a step toward the automation of robot design. “The system not only learns a given task, but also how to best design the robot to solve that task,” says Alexander Amini. “Sensor placement is a very difficult problem to solve. So, having this solution is extremely exciting.”
    The research will be presented during April’s IEEE International Conference on Soft Robotics and will be published in the journal IEEE Robotics and Automation Letters. Co-lead authors are Amini and Andrew Spielberg, both PhD students at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Other co-authors include MIT PhD student Lillian Chin and professors Wojciech Matusik and Daniela Rus.
    Creating soft robots that complete real-world tasks has been a long-running challenge in robotics. Their rigid counterparts have a built-in advantage: a limited range of motion. Rigid robots’ finite array of joints and limbs usually makes for manageable calculations by the algorithms that control mapping and motion planning. Soft robots are not so tractable.
    Soft-bodied robots are flexible and pliant — they generally feel more like a bouncy ball than a bowling ball. “The main problem with soft robots is that they are infinitely dimensional,” says Spielberg. “Any point on a soft-bodied robot can, in theory, deform in any way possible.” That makes it tough to design a soft robot that can map the location of its body parts. Past efforts have used an external camera to chart the robot’s position and feed that information back into the robot’s control program. But the researchers wanted to create a soft robot untethered from external aid.
    “You can’t put an infinite number of sensors on the robot itself,” says Spielberg. “So, the question is: How many sensors do you have, and where do you put those sensors in order to get the most bang for your buck?” The team turned to deep learning for an answer.
    The researchers developed a novel neural network architecture that both optimizes sensor placement and learns to efficiently complete tasks. First, the researchers divided the robot’s body into regions called “particles.” Each particle’s rate of strain was provided as an input to the neural network. Through a process of trial and error, the network “learns” the most efficient sequence of movements to complete tasks, like gripping objects of different sizes. At the same time, the network keeps track of which particles are used most often, and it culls the lesser-used particles from the set of inputs for the network’s subsequent trials.
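The "track usage, cull the least-used particles" loop can be sketched in miniature. This is a hypothetical toy, not the authors' architecture: the usage scores are faked so that high-index particles (standing in for fingertip regions) tend to be used more, and each round the bottom 20 percent are dropped from subsequent trials.

```python
import random

random.seed(1)
particles = {i: 0.0 for i in range(20)}  # particle id -> accumulated usage score

def run_trial(active):
    """Fake trial: higher-index particles (the 'fingertips') tend to be used more."""
    return {p: random.random() * (p / len(active)) for p in active}

for _ in range(5):                      # a few training rounds
    usage = run_trial(particles)
    for p, u in usage.items():
        particles[p] += u
    # Cull the 20% least-used particles from the next round's inputs.
    survivors = sorted(particles, key=particles.get, reverse=True)
    n_keep = max(1, int(len(survivors) * 0.8))
    particles = {p: particles[p] for p in survivors[:n_keep]}

print("suggested sensor sites:", sorted(particles))
```

The surviving particle ids are the toy analogue of the network's suggested sensor locations: the regions whose strain inputs mattered most across trials.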
    By optimizing the most important particles, the network also suggests where sensors should be placed on the robot to ensure efficient performance. For example, in a simulated robot with a grasping hand, the algorithm might suggest that sensors be concentrated in and around the fingers, where precisely controlled interactions with the environment are vital to the robot’s ability to manipulate objects. While that may seem obvious, it turns out the algorithm vastly outperformed humans’ intuition on where to site the sensors.
    The researchers pitted their algorithm against a series of expert predictions. For three different soft robot layouts, the team asked roboticists to manually select where sensors should be placed to enable the efficient completion of tasks like grasping various objects. Then they ran simulations comparing the human-sensorized robots to the algorithm-sensorized robots. And the results weren’t close. “Our model vastly outperformed humans for each task, even though I looked at some of the robot bodies and felt very confident on where the sensors should go,” says Amini. “It turns out there are a lot more subtleties in this problem than we initially expected.”
    Spielberg says their work could help to automate the process of robot design. In addition to developing algorithms to control a robot’s movements, “we also need to think about how we’re going to sensorize these robots, and how that will interplay with other components of that system,” he says. And better sensor placement could have industrial applications, especially where robots are used for fine tasks like gripping. “That’s something where you need a very robust, well-optimized sense of touch,” says Spielberg. “So, there’s potential for immediate impact.”
    “Automating the design of sensorized soft robots is an important step toward rapidly creating intelligent tools that help people with physical tasks,” says Rus. “The sensors are an important aspect of the process, as they enable the soft robot to ‘see’ and understand the world and its relationship with the world.”
    This research was funded, in part, by the National Science Foundation and the Fannie and John Hertz Foundation.

  •

    Tunable smart materials

    Researchers developed a system of self-assembling polymer microparticles with adjustable concentrations of two types of attached residues. They found that tuning the concentration of each type allowed them to control the aggregation and resulting shape of the clusters. This work may lead to advances in ‘smart’ materials, including sensors and damage-resistant surfaces.
    Scientists from the Graduate School of Science at Osaka University created superabsorbent polymer (SAP) microparticles that self-assemble into structures that can be modified by adjusting the proportion of particle type. This research may lead to new tunable biomimetic “smart materials” that can sense and respond to specific chemicals.
    Biological molecules in living organisms have a remarkable ability to form self-assembled structures when triggered by an external molecule. This has led scientists to try to create other “smart materials” that respond to their environment. Now, a team of researchers at Osaka University has come up with a tunable system involving poly(sodium acrylate) microparticles that can have one of two types of chemical groups attached. The adjustable parameters x and y refer to the molar percent of microparticles with β-cyclodextrin (βCD) and adamantyl (Ad) residues, respectively.
    “We found that the macroscopic shape of assemblies formed by microparticles was dependent on the residue content,” co-senior author Akihito Hashidzume says. In order for assemblies to form, x needed to be at least 22.3; the shape of the assemblies, however, could be controlled by varying y. As the value of y increased, the clusters became more and more elongated. The team hypothesized that at higher values of y, small clusters could form early and stick together, leading to elongated aggregates. Conversely, when y was small, clusters would only stick together after many collisions, resulting in more spherical aggregates. This provides a way to tune the shape of the resulting clusters. The team measured the aggregates under a microscope and determined the shapes of the assemblies using statistical analysis.
    “On the basis of these findings, we hope to help reveal the origin of the diverse shape of living organisms, which are macroscopic assemblies controlled by molecular recognition,” co-senior author Akira Harada says. This research may also lead to the development of new smart sensors that can form clusters large enough to be seen with the naked eye.
    Story Source:
    Materials provided by Osaka University. Note: Content may be edited for style and length.