More stories

  • Energy harvesting to power the Internet of Things

    The wireless interconnection of everyday objects known as the Internet of Things depends on wireless sensor networks that need a low but constant supply of electrical energy. This can be provided by electromagnetic energy harvesters that generate electricity directly from the environment. Lise-Marie Lacroix from the Université de Toulouse, France, with colleagues from Toulouse, Grenoble and Atlanta, Georgia, USA, has used a mathematical technique, finite element simulation, to optimise the design of one such energy harvester so that it generates electricity as efficiently as possible. This work has now been published in the journal EPJ Special Topics.
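
    The paper’s finite element simulations are far too heavyweight to reproduce here, but a toy lumped-circuit sketch in Python (all values assumed for illustration, not taken from the study) conveys the flavor of such design optimization: the harvester’s coil behaves like a voltage source with an internal resistance, and the power delivered to a load peaks when the load resistance matches the coil’s.

```python
import numpy as np

# Toy lumped-circuit stand-in for an electromagnetic harvester (all values
# assumed): the coil is a voltage source V_emf with internal resistance R_coil,
# and the power delivered to a load is P = V_emf^2 * R_load / (R_coil + R_load)^2.
V_emf = 0.5                       # induced EMF amplitude, volts (assumed)
R_coil = 50.0                     # coil resistance, ohms (assumed)
R_load = np.linspace(1.0, 500.0, 1000)

P = V_emf**2 * R_load / (R_coil + R_load) ** 2
best = R_load[np.argmax(P)]
print(f"optimal load ≈ {best:.0f} ohms")  # ≈ R_coil: classic impedance matching
```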


  • Learning and remembering movement

    From the moment we are born, and even before that, we interact with the world through movement. We move our lips to smile or to talk. We extend our hand to touch. We move our eyes to see. We wiggle, we walk, we gesture, we dance. How does our brain remember this wide range of motions? How does it learn new ones? How does it make the calculations necessary for us to grab a glass of water, without dropping it, squashing it, or missing it?
    Technion Professor Jackie Schiller from the Ruth and Bruce Rappaport Faculty of Medicine and her team examined the brain at a single-neuron level to shed light on this mystery. They found that computation happens not just in the interaction between neurons (nerve cells), but within each individual neuron. Each of these cells, it turns out, is not a simple switch but a complicated calculating machine. This discovery, published recently in the journal Science, promises to change not only our understanding of how the brain works, but also our understanding of conditions ranging from Parkinson’s disease to autism. And if that weren’t enough, these same findings are expected to advance machine learning, offering inspiration for new architectures.
    Movement is controlled by the primary motor cortex of the brain. In this area, researchers are able to pinpoint exactly which neuron(s) fire at any given moment to produce the movement we see. Prof. Schiller’s team was the first to get even closer, examining the activity not of the whole neuron as a single unit, but of its parts.
    Every neuron has branched extensions called dendrites. These dendrites are in close contact with the terminals (called axons) of other nerve cells, allowing communication between them. A signal travels from the dendrites to the cell’s body and is then transferred onwards through the axon. The number and structure of dendrites varies greatly between nerve cells, just as the crown of one tree differs from that of another.
    The particular neurons Prof. Schiller’s team focused on were the largest pyramidal neurons of the cortex. These cells, known to be heavily involved in movement, have a large dendritic tree with many branches, sub-branches, and sub-sub-branches. What the team discovered is that these branches do not merely pass information onwards. Each sub-sub-branch performs a calculation on the information it receives and passes the result to the bigger sub-branch. The sub-branch then performs a calculation on the information received from all its subsidiaries and passes that on. Moreover, multiple dendritic branchlets can interact with one another to amplify their combined computational product. The result is a complex calculation performed within each individual neuron. For the first time, Prof. Schiller’s team showed that the neuron is compartmentalised, and that its branches perform calculations independently.
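    As an illustration only (not the team’s actual analysis), a toy two-layer model in Python captures the idea of sub-branches computing before the soma does; the sigmoidal branch nonlinearity and all weights here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def branch_nonlinearity(x):
    # sigmoidal branch activation, a common stand-in for local dendritic spikes
    return 1.0 / (1.0 + np.exp(-x))

# hypothetical tree: 3 sub-branches, each pooling 4 synaptic inputs
synaptic_input = rng.normal(size=(3, 4))   # rows are sub-branches
w_syn = rng.normal(size=(3, 4))            # synaptic weights
w_branch = rng.normal(size=3)              # branch-to-soma weights

# each sub-branch computes on its own inputs before anything reaches the soma
branch_out = branch_nonlinearity((synaptic_input * w_syn).sum(axis=1))

# the soma then integrates the branch results and applies its own threshold
soma_drive = float(w_branch @ branch_out)
print("neuron fires:", soma_drive > 0.5)
```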
    “We used to think of each neuron as a sort of whistle, which either toots, or doesn’t,” Prof. Schiller explains. “Instead, we are looking at a piano. Its keys can be struck simultaneously, or in sequence, producing an infinity of different tunes.” This complex symphony playing in our brains is what enables us to learn and perform an infinity of different, complex and precise movements.
    Multiple neurodegenerative and neurodevelopmental disorders are likely to be linked to alterations in the neuron’s ability to process data. In Parkinson’s disease, it has been observed that the dendritic tree undergoes anatomical and physiological changes. In light of the new discoveries by the Technion team, we understand that these changes reduce the neuron’s ability to perform parallel computation. In autism, it appears possible that the excitability of the dendritic branches is altered, resulting in the numerous effects associated with the condition. The novel understanding of how neurons work opens new research pathways for these and other disorders, in the hope of alleviating them.
    These same findings can also serve as an inspiration for the machine learning community. Deep neural networks, as their name suggests, attempt to create software that learns and functions somewhat similarly to a human brain. Although their advances constantly make the news, these networks are primitive compared to a living brain. A better understanding of how our brain actually works can help in designing more complex neural networks, enabling them to perform more complex tasks.
    This study was led by two of Prof. Schiller’s M.D.-Ph.D. students, Yara Otor and Shay Achvat, who contributed equally to the research. The team also included postdoctoral fellow Nate Cermak (now a neuroengineer) and Ph.D. student Hadas Benisty, as well as three collaborators: Professors Omri Barak, Yitzhak Schiller, and Alon Poleg-Polsky.
    The study was partially supported by the Israeli Science Foundation, Prince funds, the Rappaport Foundation, and the Zuckerman Postdoctoral Fellowship.

  • Quantum physics exponentially improves some types of machine learning

    Machine learning can get a boost from quantum physics.

    On certain types of machine learning tasks, quantum computers have an exponential advantage over standard computation, scientists report in the June 10 Science. The researchers proved that, according to quantum math, the advantage applies when using machine learning to understand quantum systems. And the team showed that the advantage holds up in real-world tests.

    “People are very excited about the potential of using quantum technology to improve our learning ability,” says theoretical physicist and computer scientist Hsin-Yuan Huang of Caltech. But it wasn’t entirely clear if machine learning could benefit from quantum physics in practice.

    In certain machine learning tasks, scientists attempt to glean information about a quantum system — say, a molecule or a group of particles — by performing repeated experiments and analyzing the data from those experiments.

    Huang and colleagues studied several such tasks. In one, scientists aim to discern properties of the quantum system, such as the position and momentum of particles within. Quantum data from multiple experiments could be input into a quantum computer’s memory, and the computer would process the data jointly to learn the quantum system’s characteristics.

    The researchers proved theoretically that doing the same characterization with standard, or classical, techniques would require exponentially more experiments in order to learn the same information. Unlike a classical computer, a quantum computer can exploit entanglement — ethereal quantum linkages — to better analyze the results of multiple experiments.
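
    A minimal numpy sketch illustrates why joint access to two copies of a quantum state helps. It uses the standard identity that the expectation value of the SWAP operator on two copies of a state ρ equals the purity tr(ρ²), a global property that single-copy measurement strategies need many more experiments to pin down; the dimension and the random state below are arbitrary choices:

```python
import numpy as np

def random_density_matrix(d, seed=0):
    # random full-rank density matrix: rho = A A† / tr(A A†)
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

d = 4
rho = random_density_matrix(d)

# SWAP operator on the two-copy space: SWAP |i>|j> = |j>|i>
swap = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        swap[j * d + i, i * d + j] = 1.0

# one joint observable on rho ⊗ rho recovers the purity tr(rho^2)
joint_estimate = np.trace(swap @ np.kron(rho, rho)).real
print(joint_estimate, np.trace(rho @ rho).real)  # the two values agree
```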

    But the new work goes beyond just the theoretical. “It’s crucial to understand if this is realistic, if this is something we could see in the lab or if this is just theoretical,” says Dorit Aharonov of Hebrew University in Jerusalem, who was not involved with the research.

    So the researchers tested machine learning tasks with Google’s quantum computer, Sycamore (SN: 10/23/19). Rather than measuring a real quantum system, the team used simulated quantum data, and analyzed it using either quantum or classical techniques.

    Quantum machine learning won out there, too, even though Google’s quantum computer is noisy, meaning errors can slip into calculations. Eventually, scientists plan to build quantum computers that can correct their own errors (SN: 6/22/20). But for now, even without that error correction, quantum machine learning prevailed.

  • Scientists craft living human skin for robots

    From action heroes to villainous assassins, biohybrid robots made of both living and artificial materials have been at the center of many sci-fi fantasies, inspiring today’s robotic innovations. There is still a long way to go before human-like robots walk among us in our daily lives, but scientists from Japan are bringing us one step closer by crafting living human skin on robots. The method, presented June 9 in the journal Matter, gave a robotic finger not only a skin-like texture but also water-repellent and self-healing functions.
    “The finger looks slightly ‘sweaty’ straight out of the culture medium,” says first author Shoji Takeuchi, a professor at the University of Tokyo, Japan. “Since the finger is driven by an electric motor, it is also interesting to hear the clicking sounds of the motor in harmony with a finger that looks just like a real one.”
    Looking “real” like a human is one of the top priorities for humanoid robots, which are often tasked with interacting with humans in healthcare and service industries. A human-like appearance can improve communication efficiency and evoke likability. While current silicone skin made for robots can mimic human appearance, it falls short when it comes to delicate textures like wrinkles and lacks skin-specific functions.
    “With that method, you have to have the hands of a skilled artisan who can cut and tailor the skin sheets,” says Takeuchi. “To efficiently cover surfaces with skin cells, we established a tissue molding method to directly mold skin tissue around the robot, which resulted in a seamless skin coverage on a robotic finger.”
    To craft the skin, the team first submerged the robotic finger in a cylinder filled with a solution of collagen and human dermal fibroblasts, the two main components that make up the skin’s connective tissues. Takeuchi says the study’s success lies in the natural shrinking tendency of this collagen and fibroblast mixture, which shrank and tightly conformed to the finger. Like a paint primer, this layer provided a uniform foundation for the next coat of cells — human epidermal keratinocytes — to stick to. These cells make up 90% of the outermost layer of skin, giving the robot a skin-like texture and moisture-retaining barrier properties.
    The crafted skin had enough strength and elasticity to bear the dynamic movements of the robotic finger as it curled and stretched. The outermost layer was thick enough to be lifted with tweezers and repelled water, which provides various advantages in performing specific tasks like handling electrostatically charged tiny polystyrene foam, a material often used in packaging. When wounded, the crafted skin could even self-heal like human skin with the help of a collagen bandage, which gradually morphed into the skin and withstood repeated joint movements.
    “We are surprised by how well the skin tissue conforms to the robot’s surface,” says Takeuchi. “But this work is just the first step toward creating robots covered with living skin.” The developed skin is much weaker than natural skin and can’t survive long without constant nutrient supply and waste removal. Next, Takeuchi and his team plan to address those issues and incorporate more sophisticated functional structures within the skin, such as sensory neurons, hair follicles, nails, and sweat glands.
    “I think living skin is the ultimate solution to give robots the look and touch of living creatures since it is exactly the same material that covers animal bodies,” says Takeuchi.
    This work was supported by funding from JSPS Grants-in-Aid for Scientific Research (KAKENHI) and JSPS Grant-in-Aid for Early-Career Scientists (KAKENHI).

  • Researchers demonstrate 40-channel optical communication link

    Researchers have demonstrated a silicon-based optical communication link that combines two multiplexing technologies to create 40 optical data channels that can simultaneously move data. The new chip-scale optical interconnect can transmit about 400 GB of data per second — the equivalent of about 100,000 streaming movies. This could improve data-intensive internet applications from video streaming services to high-capacity transactions for the stock market.
    “As demands to move more information across the internet continue to grow, we need new technologies to push data rates further,” said Peter Delfyett, who led the University of Central Florida College of Optics and Photonics (CREOL) research team. “Because optical interconnects can move more data than their electronic counterparts, our work could enable better and faster data processing in the data centers that form the backbone of the internet.”
    A multi-institutional group of researchers describes the new optical communication link in the Optica Publishing Group journal Optics Letters. It achieves 40 channels by combining a frequency comb light source, based on a new photonic crystal resonator developed by the National Institute of Standards and Technology (NIST), with an optimized mode-division multiplexer designed by researchers at Stanford University. Each channel can be used to carry information, much as different radio channels, or frequencies, transmit different stations.
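    As a back-of-envelope illustration, the channel count and aggregate rate multiply out as follows; the split between comb lines and spatial modes and the per-channel line rate are assumptions made here for the arithmetic, not figures reported in the paper:

```python
# Back-of-envelope channel and capacity arithmetic; the 10 x 4 split and the
# per-channel rate are illustrative assumptions, not figures from the paper.
comb_lines = 10          # wavelength channels supplied by the frequency comb
spatial_modes = 4        # channels added by mode-division multiplexing
channels = comb_lines * spatial_modes        # 40 parallel data channels

per_channel_gbps = 80                        # assumed line rate per channel
aggregate_gbps = channels * per_channel_gbps
print(channels, "channels,", aggregate_gbps / 8, "GB/s")  # 40 channels, 400.0 GB/s
```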
    “We show that these new frequency combs can be used in fully integrated optical interconnects,” said Chinmay Shirpurkar, co-first author of the paper. “All the photonic components were made from silicon-based material, which demonstrates the potential for making optical information handling devices from low-cost, easy-to-manufacture optical interconnects.”
    In addition to improving internet data transmission, the new technology could also be used to make faster optical computers that could provide the high levels of computing power needed for artificial intelligence, machine learning, large-scale emulation and other applications.
    Using multiple light dimensions
    The new work involved research teams led by Firooz Aflatouni of the University of Pennsylvania, Scott B. Papp from NIST, Jelena Vuckovic from Stanford University and Delfyett from CREOL. It is part of the DARPA Photonics in the Package for Extreme Scalability (PIPES) program, which aims to use light to vastly improve the digital connectivity of packaged integrated circuits using microcomb-based light sources.

  • Artificial intelligence reveals a never-before-described 3D structure in rotavirus spike protein

    Of the three groups of rotavirus that cause gastroenteritis in people, called groups A, B and C, groups A and C affect mostly children and are the best characterized. Group B, by contrast, causes severe diarrhea predominantly in adults, and little is known about the tip of its spike protein, the VP8* domain, which mediates infection of cells in the gut.
    “Determining the structure of VP8* in group B rotavirus is important because it will help us understand how the virus infects gastrointestinal cells and design strategies to prevent and treat this infection that causes severe diarrheal outbreaks,” said corresponding author Dr. B. V. Venkataram Prasad, professor of biochemistry and molecular biology at Baylor College of Medicine.
    The team’s first step was to determine the 3D structure of VP8* B using X-ray crystallography, a laborious and time-consuming process. However, this traditional approach was unsuccessful in this case. The researchers then turned to a recently developed artificial intelligence-based computational program called AlphaFold2.
    “AlphaFold2 predicts the 3D structure of proteins according to their genetic sequence,” said first author and co-corresponding author Dr. Liya Hu, assistant professor of biochemistry and molecular biology at Baylor. “We knew that the protein sequence of VP8* of rotavirus group B was about 10% similar to the sequences of VP8* of rotavirus A and C, so we expected differences in the 3D structure as well. But we were surprised when AlphaFold2 predicted a 3D structure for VP8* B that was not just totally different from that of the VP8* domain in rotavirus A and C, but one that had never been reported for any other protein.”
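    To make that kind of sequence comparison concrete, percent identity between two already-aligned protein sequences can be computed in a few lines of Python; the toy alignment below is hypothetical, not rotavirus data:

```python
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    # fraction of aligned (non-gap) positions where the residues match
    pairs = [(a, b) for a, b in zip(aligned_a, aligned_b) if a != '-' and b != '-']
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

# hypothetical toy alignment, '-' marks a gap
print(percent_identity("MKT-LVAG", "MRTALV-G"))  # 83.3
```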
    With this information in hand, the researchers went back to the lab bench and, using X-ray crystallography, experimentally confirmed that the structure of VP8* B predicted by AlphaFold2 indeed matched the actual structure of the protein.
    How rotavirus infects cells
    Previous research has shown that rotavirus A and C infect cells by using the VP8* domain to bind to specific sugar components on histo-blood group antigens, including the A, B, AB and O blood groups, present in many cells in the body. It has been proposed that the ability of different rotaviruses to bind to different sugars on the histo-blood group antigens might explain why some of these viruses specifically infect young children while others affect other populations. Unlike VP8* A and VP8* C, the sugar specificity of VP8* B had not been characterized until now.

  • Ancient penguin bones reveal unprecedented shrinkage in key Antarctic glaciers

    Antarctica’s Pine Island and Thwaites glaciers are losing ice more quickly than they have at any time in the last few thousand years, ancient penguin bones and limpet shells suggest.

    Scientists are worried that the glaciers, two of Antarctica’s fastest-shrinking ones, are in the process of unstable, runaway retreat. By reconstructing the history of the glaciers using the old bones and shells, researchers wanted to find out whether these glaciers have ever been smaller than they are today.

    “If the ice has been smaller in the past, and did readvance, that shows that we’re not necessarily in runaway retreat” right now, says glacial geologist Brenda Hall of the University of Maine in Orono. The new result, described June 9 in Nature Geoscience, “doesn’t give us any comfort,” Hall says. “We can’t refute the hypothesis of a runaway retreat.”

    Pine Island and Thwaites glaciers sit in a broad ocean basin shaped like a bowl, deepening toward the middle. This makes the ice vulnerable to warm currents of dense, salty water that hug the ocean floor (SN: 4/9/21). Scientists have speculated that as the glaciers retreat farther inland, they could tip into an irreversible collapse (SN: 12/13/21).  That collapse could play out over centuries and raise the sea level by roughly a meter.

    Researchers dated ancient shorelines on islands roughly 100 kilometers from Pine Island and Thwaites glaciers in Antarctica to help figure out whether the glaciers are in the process of unstable, runaway retreat. In the accompanying photo, the shorelines appear as a series of small ridges in the rocky terrain between the foreground boulders and the background snow. (Image: James Kirkham)

    To reconstruct how the glaciers have changed over thousands of years, the researchers turned to old penguin bones and shells, collected by Scott Braddock, a glacial geologist in Hall’s lab, during a research cruise in 2019 on the U.S. icebreaker Nathaniel B. Palmer.

    One afternoon, Braddock clambered from a bobbing inflatable boat onto the barren shores of Lindsey 1 — one of a dozen or more rocky islands that sit roughly 100 kilometers from where Pine Island Glacier terminates in the ocean. As he climbed the slope, his boots slipped over rocks covered in penguin guano and dotted with dingy white feathers. Then, he came upon a series of ridges — rocks and pebbles that were piled up by waves during storms thousands of years before — that marked ancient shorelines.

    Twelve thousand years ago, just as the last ice age was ending, this island would have been entirely submerged in the ocean. But as nearby glaciers shed billions of metric tons of ice, the removal of that weight allowed Earth’s crust to spring up like a bed mattress — pushing Lindsey 1 and other nearby islands out of the water, a few millimeters per year.

    As Lindsey 1 rose, a series of shorelines formed on the edges of the island — and then were lifted, one after another, out of reach of the waves. By measuring the ages and heights of those stranded shorelines, the researchers could tell how quickly the island had risen. Because the rate of uplift is determined by the amount of ice being lost from nearby glaciers, this would reveal how quickly Pine Island and Thwaites glaciers had retreated — and whether they had gotten smaller than they are today and then readvanced.
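
    A minimal sketch of that inference, with made-up shoreline ages and elevations chosen only for illustration: a least-squares line through elevation versus age gives the mean uplift rate, and metres per thousand years conveniently equal millimetres per year.

```python
import numpy as np

# hypothetical shoreline data: age (thousands of years) and elevation (m above sea level)
age_ka = np.array([5.5, 4.2, 3.1, 1.8, 0.9])
elev_m = np.array([19.3, 14.7, 10.8, 6.3, 3.1])

# least-squares slope in metres per thousand years, numerically equal to mm per year
rate, intercept = np.polyfit(age_ka, elev_m, 1)
print(f"mean uplift rate ≈ {rate:.1f} mm/yr")  # ≈ 3.5 mm/yr for these numbers
```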

    Braddock dug into the pebbly ridges, collecting ancient cone-shaped limpet shells and marble-sized fragments of penguin bones deposited when the shorelines formed. Back in Maine, he and his colleagues radiocarbon dated those objects to estimate the ages of the shorelines. Ultimately, the researchers dated nearly two dozen shorelines, spread across several islands in the region.

    These dates showed that the oldest and highest beach formed 5,500 years ago. Since that time, up until the last few decades, the islands have risen at a steady rate of about 3.5 millimeters per year. This is far slower than the 20 to 40 millimeters per year at which the land around Pine Island and Thwaites is currently rising, suggesting that the rate of ice loss from nearby glaciers has skyrocketed with the onset of rapid human-caused warming, after thousands of years of relative stability.

    “We’re going into unknown territory,” Braddock says. “We don’t have an analog to compare what’s going on today with what happened in the past.”

    Slawek Tulaczyk, a glaciologist at the University of California, Santa Cruz, sees the newly dated shorelines as “an important piece of information.” But he cautions against overinterpreting the results. While these islands are 100 kilometers from Pine Island and Thwaites, they are less than 50 kilometers from several smaller glaciers — and changes in these closer glaciers might have obscured whatever was happening at Pine Island and Thwaites long ago. He suspects that Pine Island and Thwaites could still have retreated and then readvanced a few dozen kilometers: “I don’t think this study settles it.”

  • Paving the way for faster computers, longer-lasting batteries

    University of Queensland scientists have cracked a problem that’s frustrated chemists and physicists for years, potentially leading to a new age of powerful, efficient, and environmentally friendly technologies.
    Using quantum mechanics, Professor Ben Powell from UQ’s School of Mathematics and Physics has discovered a ‘recipe’ which allows molecular switches to work at room temperature.
    “Switches are materials that can shift between two or more states, such as on and off or 0 and 1, and are the basis of all digital technologies,” Professor Powell said.
    “This discovery paves the way for smaller and more powerful and energy efficient technologies.
    “You can expect batteries will last longer and computers to run faster.”
    Until now, molecular switching has only been possible when the molecules are extremely cold — at temperatures below minus 250 degrees Celsius.
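    The quantum-mechanical recipe itself is not spelled out in this summary, but a toy two-state thermodynamic model (all numbers assumed) shows what room-temperature switching requires: the enthalpy and entropy differences between the molecule’s two states must balance near 300 K, so that neither state overwhelmingly dominates.

```python
import numpy as np

R = 8.314  # gas constant, J / (mol·K)

def on_fraction(T, dH, dS):
    # two-state Boltzmann model: fraction of molecules in the switched ("on") state
    return 1.0 / (1.0 + np.exp((dH - T * dS) / (R * T)))

# the midpoint where both states are equally populated is T_1/2 = dH / dS;
# these assumed values place it at 300 K, i.e. room-temperature switching
dH = 21_000.0   # enthalpy difference, J/mol (assumed)
dS = 70.0       # entropy difference, J/(mol·K) (assumed)

for T in (100.0, 250.0, 300.0, 350.0):
    print(f"{T:.0f} K: {on_fraction(T, dH, dS):.2f}")  # ~0 when cold, 0.5 at 300 K
```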