More stories

  • A new, positive approach could be the key to next-generation, transparent electronics

    A new study, out this week, could pave the way to revolutionary, transparent electronics.
    Such see-through devices could potentially be integrated in glass, in flexible displays and in smart contact lenses, bringing to life futuristic devices that seem like the product of science fiction.
    For several decades, researchers have sought a new class of electronics based on semiconducting oxides, whose optical transparency could enable fully transparent electronics.
    Oxide-based devices could also find use in power electronics and communication technology, reducing the carbon footprint of our utility networks.
    An RMIT-led team has now introduced ultrathin beta-tellurite to the two-dimensional (2D) semiconducting material family, providing an answer to this decades-long search for a high-mobility p-type oxide.
    “This new, high-mobility p-type oxide fills a crucial gap in the materials spectrum to enable fast, transparent circuits,” says team leader Dr Torben Daeneke, who led the collaboration across three FLEET nodes.

  • Qubits made of holes could be the trick to building faster, larger quantum computers

    A new study indicates that holes may be the solution to the operation speed/coherence trade-off, and a potential route to scaling qubits up into a mini quantum computer.
    Quantum computers are predicted to be much more powerful and functional than today’s ‘classical’ computers.
    One way to make a quantum bit is to use the ‘spin’ of an electron, which can point either up or down. To make quantum computers as fast and power-efficient as possible we would like to operate them using only electric fields, which are applied using ordinary electrodes.
    Although spin does not ordinarily ‘talk’ to electric fields, in some materials spins can interact with electric fields indirectly, and these are some of the hottest materials currently studied in quantum computing.
    The interaction that enables spins to talk to electric fields is called the spin-orbit interaction, and it can be traced all the way back to Einstein’s theory of relativity.
    The fear of quantum-computing researchers has been that when this interaction is strong, any gain in operation speed would be offset by a loss in coherence (essentially, how long we can preserve quantum information).

  • Kirigami-style fabrication may enable new 3D nanostructures

    A new technique that mimics the ancient Japanese art of kirigami may offer an easier way to fabricate complex 3D nanostructures for use in electronics, manufacturing and health care.
    Kirigami builds on the Japanese art form of origami, which involves folding paper to create 3D structural designs, by strategically incorporating cuts in the paper prior to folding. These cuts enable artists to create sophisticated three-dimensional structures more easily.
    “We used kirigami at the nanoscale to create complex 3D nanostructures,” said Daniel Lopez, Penn State Liang Professor of Electrical Engineering and Computer Science, and leader of the team that published this research in Advanced Materials. “These 3D structures are difficult to fabricate because current nanofabrication processes are based on the technology used to fabricate microelectronics, which uses only planar, or flat, films. Without kirigami techniques, complex three-dimensional structures would be much more complicated to fabricate or simply impossible to make.”
    Lopez said that if force is applied to a uniform structural film, nothing really happens other than stretching it a bit, like what happens when a piece of paper is stretched. But when cuts are introduced to the film, and forces are applied in a certain direction, a structure pops up, similar to when a kirigami artist applies force to a cut paper. The geometry of the planar pattern of cuts determines the shape of the 3D architecture.
    “We demonstrated that it is possible to use conventional planar fabrication methods to create different 3D nanostructures from the same 2D cut geometry,” Lopez said. “By introducing minimal changes to the dimensions of the cuts in the film, we can drastically change the three-dimensional shape of the pop-up architectures. We demonstrated nanoscale devices that can tilt or change their curvature just by changing the width of the cuts by a few nanometers.”
    This new field of kirigami-style nanoengineering enables the development of machines and structures that can change from one shape to another, or morph, in response to changes in the environment. One example is an electronic component that changes shape at elevated temperatures to allow more airflow within a device, keeping it from overheating.
    “This kirigami technique will allow the development of adaptive flexible electronics that can be incorporated onto surfaces with complicated topography, such as a sensor resting on the human brain,” Lopez said. “We could use these concepts to design sensors and actuators that can change shape and configuration to perform a task more efficiently. Imagine the potential of structures that can change shape with minuscule changes in temperature, illumination or chemical conditions.”
    Lopez will focus his future research on applying these kirigami techniques to materials that are one atom thick, and thin actuators made of piezoelectrics. These 2D materials open new possibilities for applications of kirigami-induced structures. Lopez said his goal is to work with other researchers at Penn State’s Materials Research Institute (MRI) to develop a new generation of miniature machines that are atomically flat and are more responsive to changes in the environment.
    “MRI is a world leader in the synthesis and characterization of 2D materials, which are the ultimate thin-films that can be used for kirigami engineering,” Lopez said. “Moreover, by incorporating ultra-thin piezo and ferroelectric materials onto kirigami structures, we will develop agile and shape-morphing structures. These shape-morphing micro-machines would be very useful for applications in harsh environments and for drug delivery and health monitoring. I am working at making Penn State and MRI the place where we develop these super-small machines for a specific variety of applications.”
    Story Source:
    Materials provided by Penn State. Original written by Jamie Oberdick. Note: Content may be edited for style and length.

  • New method uses device cameras to measure pulse, breathing rate and could help telehealth

    Telehealth has become a critical way for doctors to still provide health care while minimizing in-person contact during COVID-19. But with phone or Zoom appointments, it’s harder for doctors to get important vital signs from a patient, such as their pulse or respiration rate, in real time.
    A University of Washington-led team has developed a method that uses the camera on a person’s smartphone or computer to take their pulse and respiration signal from a real-time video of their face. The researchers presented this state-of-the-art system in December at the Neural Information Processing Systems conference.
    Now the team is proposing a better system to measure these physiological signals. This system is less likely to be tripped up by different cameras, lighting conditions or facial features, such as skin color. The researchers will present these findings April 8 at the ACM Conference on Health, Inference, and Learning.
    “Machine learning is pretty good at classifying images. If you give it a series of photos of cats and then tell it to find cats in other images, it can do it. But for machine learning to be helpful in remote health sensing, we need a system that can identify the region of interest in a video that holds the strongest source of physiological information — pulse, for example — and then measure that over time,” said lead author Xin Liu, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering.
    “Every person is different,” Liu said. “So this system needs to be able to quickly adapt to each person’s unique physiological signature, and separate this from other variations, such as what they look like and what environment they are in.”
    The team’s system is privacy preserving — it runs on the device instead of in the cloud — and uses machine learning to capture subtle changes in how light reflects off a person’s face, which is correlated with changing blood flow. Then it converts these changes into both pulse and respiration rate.
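    The UW system itself is a trained neural network, but the core idea of recovering a pulse from subtle skin-color fluctuations can be sketched with a classic signal-processing baseline. Everything below is an illustrative assumption, not the team's method: average the green channel over the face region in each frame, then pick the dominant frequency in the plausible heart-rate band.

    ```python
    import numpy as np

    def estimate_pulse_bpm(green_trace, fps):
        """Estimate pulse rate (beats per minute) from the per-frame mean
        green-channel intensity of a face region."""
        signal = green_trace - np.mean(green_trace)      # remove the DC offset
        window = np.hanning(len(signal))                 # taper to reduce spectral leakage
        spectrum = np.abs(np.fft.rfft(signal * window))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
        band = (freqs >= 0.7) & (freqs <= 4.0)           # plausible heart rates: 42-240 bpm
        return freqs[band][np.argmax(spectrum[band])] * 60.0

    # Synthetic stand-in for a real face video: a 72 bpm (1.2 Hz) pulse buried in noise.
    np.random.seed(42)
    fps, seconds = 30, 10
    t = np.arange(fps * seconds) / fps
    fake_trace = 0.05 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.02, t.size)
    print(estimate_pulse_bpm(fake_trace, fps))   # 72.0
    ```

    A real system would also need face detection, motion compensation, and the per-person adaptation Liu describes; this sketch only shows why a periodic color change maps to a heart rate.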

  • Microscopic images reveal the science and beauty of face masks

    Studying fabrics at very high magnification helps determine how some face masks filter out particles better than others. And the close-ups reveal an unseen beauty of the mundane objects that have now become an essential part of life around the world.

    As scientists continue to show how effective masks can be at slowing the spread of the new coronavirus, particularly when they fit well and are worn correctly, some have taken a microscopic approach (SN: 2/12/21).

    “Embedded in microscale textures are clues as to why materials have various properties,” says Edward Vicenzi, a microanalysis expert at the Smithsonian’s Museum Conservation Institute in Suitland, Md. “Unraveling that evidence turns out to be a fun job.”

    Before the pandemic, Vicenzi spent his days observing meteorites, stones and other museum specimens under the microscope. But in March 2020, as the COVID-19 pandemic progressed, he and colleagues from the National Institute for Standards and Technology in Gaithersburg, Md., felt a strong desire to contribute to beating back the virus. So they started studying face-covering materials instead.

    Cotton flannel: A network of cotton fibers “hovers” above a woven surface in this view of the fabric. This chaotic arrangement gives cotton flannel fibers additional opportunities to grab particles as they flow through the fabric. E.P. Vicenzi/Smithsonian’s Museum Conservation Institute and NIST

    Polyester-cotton blend: Disheveled natural cotton fibers (pale) contrast with nearly identical polyester fibers (blue) in this false-color image. Polyester fibers are highly organized, mostly straight and smooth, making them less effective than cotton fibers alone at trapping nanoscale particles. E.P. Vicenzi/Smithsonian’s Museum Conservation Institute and NIST

    Rayon: Like patterns observed on rigatoni pasta, grooves run along the length of rayon fibers. Unlike cotton flannels, rayon has no apparent weblike structures formed from raised fibers, making it easier for particles to move from one side of the synthetic fabric to the other. E.P. Vicenzi/Smithsonian’s Museum Conservation Institute and NIST

    Wool flannel: Seen in cross-section, these fibers resemble a hurricane swirl. Wool flannel can also form fiber webs that block particles, but those webs are not as effective as ones in 100-percent cotton, researchers found. E.P. Vicenzi/Smithsonian’s Museum Conservation Institute and NIST

    N95 mask: In an N95 mask (seen in false color cross-section), a thin outer layer (top) and thick inner layer (bottom) sandwich a filtration layer (purple), which traps the smallest particles. The multilayered assemblage made of plastic is melted and blown into a weblike fabric, which makes N95s filter particles better than cloth masks, even cotton ones. E.P. Vicenzi/Smithsonian’s Museum Conservation Institute and NIST

    Using a scanning electron microscope, Vicenzi and colleagues have examined dozens of materials, including coffee filters, pillowcases, surgical masks and N95 masks. In 2020, the team found that N95 respirator masks are the most effective at providing protection from aerosols like the ones in which SARS-CoV-2, the virus that causes COVID-19, travels. And the researchers reported that synthetic fabrics, like chiffon or rayon, don’t trap as many particles as tightly woven cotton flannels.

    Microscopic textures can explain each fabric’s ability to filter out aerosols. The random nature of cotton fibers — with their wrinkled texture and complex shapes such as kinks, bends and folds — probably allows cotton to trap more nanoscale particles than other fabrics, Vicenzi says. In contrast, polyester fabrics have highly organized, mostly straight and smooth fibers, which makes them less efficient as face masks.


    Cotton flannels also provide additional protection by absorbing moisture from breath, Vicenzi and colleagues report March 8 in ACS Applied Nano Materials.

    “Since cotton loves water, it swells up in humid environments, and that makes it harder for particles to make their way through a mask,” says Vicenzi. Polyester and nylon masks, on the other hand, “repel water from your breath, so there’s no added benefit.”

    Through his work, Vicenzi has explored the unseen world of face-covering materials. Some textiles remind him of food, such as rayon’s fibers that resemble the texture of rigatoni pasta. Others, like wool, remind him of atmospheric patterns such as the swirl of a hurricane.

    Vicenzi plans to keep observing face masks up close. And he hopes his research helps people decide how to best protect themselves and others during the COVID-19 pandemic. “It’s nice to use an effective material for a mask if you can,” he says. “However, wearing any mask compared to none at all makes the biggest difference in slowing the spread of pathogens.”


  • A robot that senses hidden objects

    In recent years, robots have gained artificial vision, touch, and even smell. “Researchers have been giving robots human-like perception,” says MIT Associate Professor Fadel Adib. In a new paper, Adib’s team is pushing the technology a step further. “We’re trying to give robots superhuman perception,” he says.
    The researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF-Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view. The advance could one day streamline e-commerce fulfillment in warehouses or help a machine pluck a screwdriver from a jumbled toolkit.
    The research will be presented in May at the IEEE International Conference on Robotics and Automation. The paper’s lead author is Tara Boroushaki, a research assistant in the Signal Kinetics Group at the MIT Media Lab. Her MIT co-authors include Adib, who is the director of the Signal Kinetics Group; and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. Other co-authors include Junshan Leng, a research engineer at Harvard University, and Ian Clester, a PhD student at Georgia Tech.
    As e-commerce continues to grow, warehouse work is still usually the domain of humans, not robots, despite sometimes-dangerous working conditions. That’s in part because robots struggle to locate and grasp objects in such a crowded environment. “Perception and picking are two roadblocks in the industry today,” says Rodriguez. Using optical vision alone, robots can’t perceive the presence of an item packed away in a box or hidden behind another object on the shelf — visible light waves, of course, don’t pass through walls.
    But radio waves can.
    For decades, radio frequency (RF) identification has been used to track everything from library books to pets. RF identification systems have two main components: a reader and a tag. The tag is a tiny computer chip that gets attached to — or, in the case of pets, implanted in — the item to be tracked. The reader then emits an RF signal, which gets modulated by the tag and reflected back to the reader.
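    The reader-tag exchange can be sketched as a toy simulation. Real RFID air protocols (such as EPC Gen2) use far more elaborate encoding; the on-off keying and energy-threshold decoder below are illustrative assumptions only, showing how a passive tag can impose its ID bits on a reflected carrier.

    ```python
    import numpy as np

    np.random.seed(7)
    SAMPLES_PER_BIT = 64

    def tag_backscatter(carrier, bits):
        """The tag reflects the carrier while a bit is 1 and absorbs it while 0
        (on-off keying of the backscattered signal)."""
        gate = np.repeat(bits, SAMPLES_PER_BIT)
        return carrier * gate

    def reader_decode(received):
        """The reader recovers the bits by measuring reflected energy per bit period."""
        energy = np.mean(received.reshape(-1, SAMPLES_PER_BIT) ** 2, axis=1)
        return (energy > 0.5 * energy.max()).astype(int)

    tag_id = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    t = np.arange(tag_id.size * SAMPLES_PER_BIT)
    carrier = np.sin(2 * np.pi * t / 16)            # reader's continuous carrier wave
    echo = tag_backscatter(carrier, tag_id)         # tag modulates the carrier
    echo += np.random.normal(0, 0.05, echo.size)    # channel noise on the return path
    print(reader_decode(echo))                      # [1 0 1 1 0 0 1 0]
    ```

    The key property the robot exploits is that this exchange works through cardboard and shelving, since RF passes where visible light cannot.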

  • Dynamic model of SARS-CoV-2 spike protein reveals potential new vaccine targets

    A new, detailed model of the surface of the SARS-CoV-2 spike protein reveals previously unknown vulnerabilities that could inform development of vaccines. Mateusz Sikora of the Max Planck Institute of Biophysics in Frankfurt, Germany, and colleagues present these findings in the open-access journal PLOS Computational Biology.
    SARS-CoV-2 is the virus responsible for the COVID-19 pandemic. A key feature of SARS-CoV-2 is its spike protein, which extends from its surface and enables it to target and infect human cells. Extensive research has resulted in detailed static models of the spike protein, but these models do not capture the flexibility of the spike protein itself nor the movements of protective glycans — chains of sugar molecules — that coat it.
    To support vaccine development, Sikora and colleagues aimed to identify novel potential target sites on the surface of the spike protein. To do so, they developed molecular dynamics simulations that capture the complete structure of the spike protein and its motions in a realistic environment.
    These simulations show that glycans on the spike protein act as a dynamic shield that helps the virus evade the human immune system. Similar to car windshield wipers, the glycans cover nearly the entire spike surface by flopping back and forth, even though their coverage is minimal at any given instant.
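    The windshield-wiper effect, sparse coverage at any instant accumulating into a near-complete shield over time, can be illustrated with a toy Monte Carlo. The patch counts and glycan reach below are made-up illustrative numbers; the actual study used atomistic molecular dynamics, not this kind of lattice model.

    ```python
    import numpy as np

    np.random.seed(1)
    N_PATCHES = 1000   # surface patches on a simplified, 1-D "unrolled" spike surface
    N_GLYCANS = 40     # glycan chains attached to the spike
    REACH = 5          # patches shielded on each side of a glycan's current position
    N_FRAMES = 200     # snapshots of the glycans flopping back and forth

    ever_covered = np.zeros(N_PATCHES, dtype=bool)
    instant = []
    for _ in range(N_FRAMES):
        covered = np.zeros(N_PATCHES, dtype=bool)
        for c in np.random.randint(0, N_PATCHES, N_GLYCANS):  # glycans move each frame
            covered[max(0, c - REACH):c + REACH + 1] = True
        instant.append(covered.mean())
        ever_covered |= covered

    print(f"average instantaneous coverage: {np.mean(instant):.0%}")
    print(f"coverage accumulated over all frames: {ever_covered.mean():.0%}")
    ```

    Running this shows a modest fraction covered in any single frame but essentially the whole surface covered over time, which is exactly why static structures alone under-report the shield and why the least-visited patches are the interesting vaccine targets.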
    By combining the dynamic spike protein simulations with bioinformatic analysis, the researchers identified spots on the surface of the spike proteins that are least protected by the glycan shields. Some of the detected sites have been identified in previous research, but some are novel. The vulnerability of many of these novel sites was confirmed by other research groups in subsequent lab experiments.
    “We are in a phase of the pandemic driven by the emergence of new variants of SARS-CoV-2, with mutations concentrated in particular in the spike protein,” Sikora says. “Our approach can support the design of vaccines and therapeutic antibodies, especially when established methods struggle.”
    The method developed for this study could also be applied to identify potential vulnerabilities of other viral proteins.
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • First-of-its-kind mechanical model simulates bending of mammalian whiskers

    Researchers have developed a new mechanical model that simulates how whiskers bend within a follicle in response to an external force, paving the way toward better understanding of how whiskers contribute to mammals’ sense of touch. Yifu Luo and Mitra Hartmann of Northwestern University and colleagues present these findings in the open-access journal PLOS Computational Biology.
    With the exception of some primates, most mammals use whiskers to explore their environment through the sense of touch. Whiskers have no sensors along their length, but when an external force bends a whisker, that deformation extends into the follicle at the base of the whisker, where the whisker pushes or pulls on sensor cells, triggering touch signals in the nervous system.
    Few previous studies have examined how whiskers deform within follicles in order to impinge on the sensor cells — mechanoreceptors — inside. To better understand this process, Luo and colleagues drew on data from experimental studies of whisker follicles to create the first mechanical model capable of simulating whisker deformation within follicles.
    The simulations suggest that whisker deformation within follicles most likely occurs in an “S” shape, although future experimental data may show that the deformation is “C” shaped. The researchers demonstrate that these shape estimates can be used to predict how whiskers push and pull on different kinds of mechanoreceptors located in different parts of the follicle, influencing touch signals sent to the brain.
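    The follicle model itself couples the whisker to tissue, muscle, and blood-sinus supports, which is what produces the "S" or "C" profiles; that model is not reproduced here. But the underlying mechanics of a bending whisker shaft is standard Euler-Bernoulli beam theory, which can be sketched with made-up, whisker-scale parameters:

    ```python
    import numpy as np

    def cantilever_deflection(x, F, L, E, I):
        """Euler-Bernoulli deflection of a uniform cantilever clamped at x = 0,
        loaded by a point force F at the free tip x = L."""
        return F * x**2 * (3 * L - x) / (6 * E * I)

    # Illustrative (made-up) whisker-like parameters, SI units.
    L = 0.03              # 30 mm whisker length
    E = 3e9               # ~3 GPa elastic modulus, in the range reported for keratin
    r = 50e-6             # 50 micrometre radius (real whiskers taper; this one doesn't)
    I = np.pi * r**4 / 4  # second moment of area of a circular cross-section
    F = 1e-6              # 1 micronewton contact force at the tip

    x = np.linspace(0.0, L, 100)
    y = cantilever_deflection(x, F, L, E, I)
    print(f"tip deflection: {y[-1] * 1e3:.2f} mm")   # tip deflection: 0.61 mm
    ```

    The fourth-power dependence of I on radius is why taper matters so much in real whiskers, and why the small deformations that reach into the follicle carry usable mechanical signal.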
    The new model applies to both passive touch and active “whisking,” when an animal uses muscles to move its whiskers. The simulations suggest that, during active whisking, the tactile sensitivity of the whisker system is enhanced by increased blood pressure in the follicle and by increased stiffness of follicular muscle and tissue structures.
    “It is exciting to use simulations, constrained by anatomical observations, to gain insights into biological processes that cannot be directly measured experimentally,” Hartmann says. “The work also underscores just how important mechanics are to understanding the sensory signals that the brain has evolved to process.”
    Future research will be needed to refine the model, both computationally and by incorporating new experimental data.
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.