More stories

  •

    Qubits made of holes could be the trick to building faster, larger quantum computers

    A new study indicates that hole-based qubits could resolve the trade-off between operational speed and coherence, potentially allowing qubits to be scaled up into a mini quantum computer.
    Quantum computers are predicted to be much more powerful and functional than today’s ‘classical’ computers.
    One way to make a quantum bit is to use the ‘spin’ of an electron, which can point either up or down. To make quantum computers as fast and power-efficient as possible, we would like to operate them using only electric fields, which are applied using ordinary electrodes.
    Although spin does not ordinarily ‘talk’ to electric fields, in some materials spins can interact with electric fields indirectly, and these are some of the hottest materials currently studied in quantum computing.
    The interaction that enables spins to talk to electric fields is called the spin-orbit interaction, and it can be traced all the way back to Einstein’s theory of relativity.
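    In its simplest textbook form, this coupling can be written in one line. A minimal sketch, using the standard Rashba form for a two-dimensional system (the study's own Hamiltonian is not spelled out in this summary):

```latex
% Rashba spin-orbit coupling (illustrative textbook form, not the study's Hamiltonian)
H_{\mathrm{SO}} = \alpha \left( \sigma_x k_y - \sigma_y k_x \right)
```

    Here alpha sets the coupling strength, the sigmas are the spin's Pauli matrices and k is the carrier's momentum. Because an electric field acts on the momentum (and can tune alpha itself), it can rotate the spin indirectly, which is exactly the handle an electrically operated spin qubit needs.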
    The fear of quantum-computing researchers has been that when this interaction is strong, any gain in operation speed would be offset by a loss in coherence (essentially, how long we can preserve quantum information).

  •

    Kirigami-style fabrication may enable new 3D nanostructures

    A new technique that mimics the ancient Japanese art of kirigami may offer an easier way to fabricate complex 3D nanostructures for use in electronics, manufacturing and health care.
    Kirigami enhances the Japanese art form of origami, which involves folding paper to create 3D structural designs, by strategically incorporating cuts in the paper prior to folding. The method enables artists to create sophisticated three-dimensional structures more easily.
    “We used kirigami at the nanoscale to create complex 3D nanostructures,” said Daniel Lopez, Penn State Liang Professor of Electrical Engineering and Computer Science, and leader of the team that published this research in Advanced Materials. “These 3D structures are difficult to fabricate because current nanofabrication processes are based on the technology used to fabricate microelectronics, which uses only planar, or flat, films. Without kirigami techniques, complex three-dimensional structures would be much more complicated to fabricate or simply impossible to make.”
    Lopez said that if force is applied to a uniform structural film, nothing really happens other than stretching it a bit, like what happens when a piece of paper is stretched. But when cuts are introduced to the film, and forces are applied in a certain direction, a structure pops up, similar to when a kirigami artist applies force to a cut paper. The geometry of the planar pattern of cuts determines the shape of the 3D architecture.
    “We demonstrated that it is possible to use conventional planar fabrication methods to create different 3D nanostructures from the same 2D cut geometry,” Lopez said. “By introducing minimum changes to the dimensions of the cuts in the film, we can drastically change the three-dimensional shape of the pop-up architectures. We demonstrated nanoscale devices that can tilt or change their curvature just by changing the width of the cuts a few nanometers.”
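    A back-of-the-envelope estimate shows why a few nanometers matter. The sketch below is a toy geometric model assuming each cut strip buckles into a half sine wave when its ends are pushed together (it says nothing about the richer buckling-mode changes reported in the paper); it relates the cut length and the applied in-plane displacement to the pop-up height:

```python
import math

def buckle_amplitude(cut_length_nm: float, end_shortening_nm: float) -> float:
    """Out-of-plane amplitude of a strip buckling into w(x) = A*sin(pi*x/L).

    Conserving arc length gives L + dL ~ L + (pi*A)**2 / (4*L), so
    A = (2/pi) * sqrt(L * dL). A toy estimate, not the paper's mechanics.
    """
    return (2.0 / math.pi) * math.sqrt(cut_length_nm * end_shortening_nm)

# Hypothetical cut lengths (nm) under 10 nm of in-plane shortening:
for L in (200.0, 205.0, 210.0):
    print(f"cut length {L:5.1f} nm -> pop-up height ~ {buckle_amplitude(L, 10.0):.1f} nm")
```

    Even this crude estimate ties the pop-up height directly to the cut geometry; the tilt and curvature switching Lopez describes plausibly reflects which buckling mode the cut pattern selects, which is beyond this toy model.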
    This new field of kirigami-style nanoengineering enables the development of machines and structures that can change from one shape to another, or morph, in response to changes in the environment. One example is an electronic component that changes shape at elevated temperatures to enable more air flow within a device to keep it from overheating.
    “This kirigami technique will allow the development of adaptive flexible electronics that can be incorporated onto surfaces with complicated topography, such as a sensor resting on the human brain,” Lopez said. “We could use these concepts to design sensors and actuators that can change shape and configuration to perform a task more efficiently. Imagine the potential of structures that can change shape with minuscule changes in temperature, illumination or chemical conditions.”
    Lopez will focus his future research on applying these kirigami techniques to materials that are one atom thick, and thin actuators made of piezoelectrics. These 2D materials open new possibilities for applications of kirigami-induced structures. Lopez said his goal is to work with other researchers at Penn State’s Materials Research Institute (MRI) to develop a new generation of miniature machines that are atomically flat and are more responsive to changes in the environment.
    “MRI is a world leader in the synthesis and characterization of 2D materials, which are the ultimate thin films that can be used for kirigami engineering,” Lopez said. “Moreover, by incorporating ultra-thin piezo and ferroelectric materials onto kirigami structures, we will develop agile and shape-morphing structures. These shape-morphing micro-machines would be very useful for applications in harsh environments and for drug delivery and health monitoring. I am working to make Penn State and MRI the place where we develop these super-small machines for a variety of specific applications.”
    Story Source:
    Materials provided by Penn State. Original written by Jamie Oberdick.

  •

    New method uses device cameras to measure pulse, breathing rate and could help telehealth

    Telehealth has become a critical way for doctors to still provide health care while minimizing in-person contact during COVID-19. But with phone or Zoom appointments, it’s harder for doctors to get important vital signs from a patient, such as their pulse or respiration rate, in real time.
    A University of Washington-led team has developed a method that uses the camera on a person’s smartphone or computer to take their pulse and respiration signal from a real-time video of their face. The researchers presented this state-of-the-art system in December at the Neural Information Processing Systems conference.
    Now the team is proposing a better system to measure these physiological signals. This system is less likely to be tripped up by different cameras, lighting conditions or facial features, such as skin color. The researchers will present these findings April 8 at the ACM Conference on Health, Inference, and Learning.
    “Machine learning is pretty good at classifying images. If you give it a series of photos of cats and then tell it to find cats in other images, it can do it. But for machine learning to be helpful in remote health sensing, we need a system that can identify the region of interest in a video that holds the strongest source of physiological information — pulse, for example — and then measure that over time,” said lead author Xin Liu, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering.
    “Every person is different,” Liu said. “So this system needs to be able to quickly adapt to each person’s unique physiological signature, and separate this from other variations, such as what they look like and what environment they are in.”
    The team’s system is privacy preserving — it runs on the device instead of in the cloud — and uses machine learning to capture subtle changes in how light reflects off a person’s face, which is correlated with changing blood flow. Then it converts these changes into both pulse and respiration rate.
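    For context, the classical (non-learned) baseline for this measurement fits in a few lines: average one color channel over a face region in each frame, band-pass the resulting trace around plausible heart rates, and read off the dominant frequency. A minimal sketch of that baseline (the function and parameters here are illustrative, not the UW team's learned system):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pulse_from_roi_means(green_means: np.ndarray, fps: float) -> float:
    """Estimate pulse rate (BPM) from the mean green-channel intensity of a
    face region over time -- the classical rPPG baseline, not the UW team's
    learned model, which instead learns where and how to measure.

    green_means: 1-D array, one mean-intensity sample per video frame.
    """
    x = green_means - green_means.mean()          # remove the DC offset
    lo, hi = 0.7, 4.0                             # plausible pulse band, 42-240 BPM
    b, a = butter(3, [lo / (fps / 2), hi / (fps / 2)], btype="band")
    x = filtfilt(b, a, x)                         # isolate the cardiac band
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]  # dominant frequency in BPM

# Example with a synthetic 72 BPM (1.2 Hz) signal sampled at 30 fps:
fps = 30.0
t = np.arange(0, 30, 1 / fps)
fake = 0.01 * np.sin(2 * np.pi * 1.2 * t) + 0.001 * np.random.randn(t.size)
print(f"Estimated pulse: {pulse_from_roi_means(fake, fps):.0f} BPM")
```

    The UW system replaces the fixed region and fixed filter with a machine-learned model that adapts to the person and scene, which is what makes it less sensitive to camera, lighting and skin-tone variation.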

  •

    A robot that senses hidden objects

    In recent years, robots have gained artificial vision, touch, and even smell. “Researchers have been giving robots human-like perception,” says MIT Associate Professor Fadel Adib. In a new paper, Adib’s team is pushing the technology a step further. “We’re trying to give robots superhuman perception,” he says.
    The researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF-Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view. The advance could one day streamline e-commerce fulfillment in warehouses or help a machine pluck a screwdriver from a jumbled toolkit.
    The research will be presented in May at the IEEE International Conference on Robotics and Automation. The paper’s lead author is Tara Boroushaki, a research assistant in the Signal Kinetics Group at the MIT Media Lab. Her MIT co-authors include Adib, who is the director of the Signal Kinetics Group; and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. Other co-authors include Junshan Leng, a research engineer at Harvard University, and Ian Clester, a PhD student at Georgia Tech.
    As e-commerce continues to grow, warehouse work is still usually the domain of humans, not robots, despite sometimes-dangerous working conditions. That’s in part because robots struggle to locate and grasp objects in such a crowded environment. “Perception and picking are two roadblocks in the industry today,” says Rodriguez. Using optical vision alone, robots can’t perceive the presence of an item packed away in a box or hidden behind another object on the shelf — visible light waves, of course, don’t pass through walls.
    But radio waves can.
    For decades, radio frequency (RF) identification has been used to track everything from library books to pets. RF identification systems have two main components: a reader and a tag. The tag is a tiny computer chip that gets attached to — or, in the case of pets, implanted in — the item to be tracked. The reader then emits an RF signal, which gets modulated by the tag and reflected back to the reader.
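    The modulate-and-reflect exchange is simple enough to caricature in code. A toy sketch of on-off-keyed backscatter, with the carrier shown at an artificially low frequency and without the subcarriers or line coding a real RFID air interface uses:

```python
import numpy as np

def backscatter_demo(tag_bits, carrier_hz=1e6, chip_rate=64e3, fs=8e6):
    """Toy model of RFID backscatter: the reader emits a continuous carrier,
    and the tag 'reflects' it with its antenna load switched per data bit
    (on-off keying). Carrier frequency is artificially low for illustration;
    real links use ~900 MHz carriers plus subcarriers and line codes.
    """
    samples_per_bit = int(fs / chip_rate)
    t = np.arange(len(tag_bits) * samples_per_bit) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # The tag's reflection coefficient toggles between two states per bit.
    reflect = np.repeat(np.where(np.array(tag_bits) > 0, 0.8, 0.2), samples_per_bit)
    received = reflect * carrier                       # backscatter seen at the reader
    # Reader side: envelope detection, then a threshold per bit period.
    envelope = np.abs(received).reshape(-1, samples_per_bit).mean(axis=1)
    return (envelope > envelope.mean()).astype(int)

bits = [1, 0, 1, 1, 0, 0, 1, 0]
print("decoded:", backscatter_demo(bits))  # should echo the tag's bits
```

    RF-Grasp builds on this primitive: because the tag's reply is an RF signal rather than light, the robot can pick it up through a box or an occluding object and use it as a homing cue that cameras alone cannot provide.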

  •

    Dynamic model of SARS-CoV-2 spike protein reveals potential new vaccine targets

    A new, detailed model of the surface of the SARS-CoV-2 spike protein reveals previously unknown vulnerabilities that could inform development of vaccines. Mateusz Sikora of the Max Planck Institute of Biophysics in Frankfurt, Germany, and colleagues present these findings in the open-access journal PLOS Computational Biology.
    SARS-CoV-2 is the virus responsible for the COVID-19 pandemic. A key feature of SARS-CoV-2 is its spike protein, which extends from its surface and enables it to target and infect human cells. Extensive research has resulted in detailed static models of the spike protein, but these models do not capture the flexibility of the spike protein itself or the movements of the protective glycans — chains of sugar molecules — that coat it.
    To support vaccine development, Sikora and colleagues aimed to identify novel potential target sites on the surface of the spike protein. To do so, they developed molecular dynamics simulations that capture the complete structure of the spike protein and its motions in a realistic environment.
    These simulations show that glycans on the spike protein act as a dynamic shield that helps the virus evade the human immune system. Similar to car windshield wipers, the glycans cover nearly the entire spike surface by flopping back and forth, even though their coverage is minimal at any given instant.
    By combining the dynamic spike protein simulations with bioinformatic analysis, the researchers identified spots on the surface of the spike proteins that are least protected by the glycan shields. Some of the detected sites have been identified in previous research, but some are novel. The vulnerability of many of these novel sites was confirmed by other research groups in subsequent lab experiments.
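    The scoring idea behind that kind of analysis can be sketched simply: across many simulation frames, count how often each surface site sits within reach of any glycan atom, and flag the sites that are rarely covered. A schematic version with hypothetical arrays and an assumed distance cutoff (not the authors' actual pipeline):

```python
import numpy as np

def glycan_coverage(residue_xyz, glycan_xyz, cutoff=6.0):
    """Fraction of trajectory frames in which each surface residue lies within
    `cutoff` angstroms of any glycan atom -- a schematic way to score glycan
    shielding over a dynamics simulation, not the authors' exact method.

    residue_xyz: (n_frames, n_residues, 3) residue positions per frame.
    glycan_xyz:  (n_frames, n_glycan_atoms, 3) glycan atom positions per frame.
    Returns:     (n_residues,) coverage fraction; low values flag exposed sites.
    """
    n_frames, n_res, _ = residue_xyz.shape
    covered = np.zeros((n_frames, n_res), dtype=bool)
    for f in range(n_frames):
        # Pairwise distances between residues and glycan atoms in this frame.
        d = np.linalg.norm(
            residue_xyz[f][:, None, :] - glycan_xyz[f][None, :, :], axis=-1
        )
        covered[f] = (d < cutoff).any(axis=1)
    return covered.mean(axis=0)

# Hypothetical toy data: 100 frames, 50 residues, 200 glycan atoms.
rng = np.random.default_rng(0)
res = rng.uniform(0, 50, (100, 50, 3))
gly = rng.uniform(0, 50, (100, 200, 3))
cov = glycan_coverage(res, gly)
print("least-shielded residues:", np.argsort(cov)[:5])
```

    The key point is the time average: a site that looks covered in a static snapshot may be exposed for a meaningful fraction of frames, and vice versa, which is exactly what the windshield-wiper picture implies.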
    “We are in a phase of the pandemic driven by the emergence of new variants of SARS-CoV-2, with mutations concentrated in particular in the spike protein,” Sikora says. “Our approach can support the design of vaccines and therapeutic antibodies, especially when established methods struggle.”
    The method developed for this study could also be applied to identify potential vulnerabilities of other viral proteins.
    Story Source:
    Materials provided by PLOS.

  •

    First-of-its-kind mechanical model simulates bending of mammalian whiskers

    Researchers have developed a new mechanical model that simulates how whiskers bend within a follicle in response to an external force, paving the way toward better understanding of how whiskers contribute to mammals’ sense of touch. Yifu Luo and Mitra Hartmann of Northwestern University and colleagues present these findings in the open-access journal PLOS Computational Biology.
    With the exception of some primates, most mammals use whiskers to explore their environment through the sense of touch. Whiskers have no sensors along their length, but when an external force bends a whisker, that deformation extends into the follicle at the base of the whisker, where the whisker pushes or pulls on sensor cells, triggering touch signals in the nervous system.
    Few previous studies have examined how whiskers deform within follicles in order to impinge on the sensor cells — mechanoreceptors — inside. To better understand this process, Luo and colleagues drew on data from experimental studies of whisker follicles to create the first mechanical model capable of simulating whisker deformation within follicles.
    The simulations suggest that whisker deformation within follicles most likely occurs in an “S” shape, although future experimental data may show that the deformation is “C” shaped. The researchers demonstrate that these shape estimates can be used to predict how whiskers push and pull on different kinds of mechanoreceptors located in different parts of the follicle, influencing touch signals sent to the brain.
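    A classical way to pose such a model is to treat the whisker inside the follicle as an elastic beam resting on a bed of tissue springs. The sketch below uses Hetenyi's textbook solution for a semi-infinite beam on an elastic foundation with a transverse force at its end, with invented stiffness values; it is a stand-in for the paper's model, but it shows how such a beam naturally takes on a decaying, sign-reversing ("S"-like) profile:

```python
import numpy as np

def follicle_deflection(x_mm, EI=1e-9, k=5e4, end_force=1e-4):
    """Deflection of a semi-infinite elastic beam on an elastic foundation
    loaded by a transverse force at its end (Hetenyi's classical solution):

        beta = (k / (4*EI))**0.25
        w(x) = (2*end_force*beta / k) * exp(-beta*x) * cos(beta*x)

    A stand-in for a whisker shaft held by follicle tissue; EI, k and the
    load here are invented, not taken from the paper.
    """
    x = np.asarray(x_mm) * 1e-3                 # mm -> m
    beta = (k / (4.0 * EI)) ** 0.25             # 1/m, sets the decay length
    return (2.0 * end_force * beta / k) * np.exp(-beta * x) * np.cos(beta * x)

# Evaluate along a 3 mm follicle and count zero crossings of the profile.
x = np.linspace(0.0, 3.0, 300)
w = follicle_deflection(x)
crossings = int(np.count_nonzero(np.diff(np.sign(w))))
print(f"deflection reverses sign {crossings} time(s) inside the follicle")
```

    Each sign reversal means the whisker presses on opposite sides of the follicle at different depths, which is why the "S" versus "C" question matters for predicting which mechanoreceptors get pushed and which get pulled.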
    The new model applies to both passive touch and active “whisking,” when an animal uses muscles to move its whiskers. The simulations suggest that, during active whisking, the tactile sensitivity of the whisker system is enhanced by increased blood pressure in the follicle and by increased stiffness of follicular muscle and tissue structures.
    “It is exciting to use simulations, constrained by anatomical observations, to gain insights into biological processes that cannot be directly measured experimentally,” Hartmann says. “The work also underscores just how important mechanics are to understanding the sensory signals that the brain has evolved to process.”
    Future research will be needed to refine the model, both computationally and by incorporating new experimental data.
    Story Source:
    Materials provided by PLOS.

  •

    BrainGate: High-bandwidth wireless brain-computer interface for humans

    Brain-computer interfaces (BCIs) are an emerging assistive technology, enabling people with paralysis to type on computer screens or manipulate robotic prostheses just by thinking about moving their own bodies. For years, investigational BCIs used in clinical trials have required cables to connect the sensing array in the brain to computers that decode the signals and use them to drive external devices.
    Now, for the first time, BrainGate clinical trial participants with tetraplegia have demonstrated use of an intracortical wireless BCI with an external wireless transmitter. The system is capable of transmitting brain signals at single-neuron resolution and in full broadband fidelity without physically tethering the user to a decoding system. The traditional cables are replaced by a small transmitter about 2 inches in its largest dimension and weighing a little over 1.5 ounces. The unit sits on top of a user’s head and connects to an electrode array within the brain’s motor cortex using the same port used by wired systems.
    For a study published in IEEE Transactions on Biomedical Engineering, two clinical trial participants with paralysis used the BrainGate system with a wireless transmitter to point, click and type on a standard tablet computer. The study showed that the wireless system transmitted signals with virtually the same fidelity as wired systems, and participants achieved similar point-and-click accuracy and typing speeds.
    “We’ve demonstrated that this wireless system is functionally equivalent to the wired systems that have been the gold standard in BCI performance for years,” said John Simeral, an assistant professor of engineering (research) at Brown University, a member of the BrainGate research consortium and the study’s lead author. “The signals are recorded and transmitted with appropriately similar fidelity, which means we can use the same decoding algorithms we used with wired equipment. The only difference is that people no longer need to be physically tethered to our equipment, which opens up new possibilities in terms of how the system can be used.”
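    The reusability point can be made concrete with the simplest possible stand-in for a decoder: a regularized linear map from binned firing rates to intended cursor velocity. BrainGate decoders are typically Kalman filters, so treat this ridge-regression sketch on synthetic data as illustrative only; the point is that nothing in it cares whether the rate samples arrived over a cable or a radio link:

```python
import numpy as np

def fit_velocity_decoder(rates, velocities, ridge=1.0):
    """Fit a linear map from binned neural firing rates to 2-D cursor velocity.
    A minimal stand-in for a BCI decoder (real BrainGate decoders are
    typically Kalman filters, not plain ridge regression).

    rates:      (n_bins, n_channels) spike counts per time bin.
    velocities: (n_bins, 2) intended cursor velocity per bin.
    """
    X = np.hstack([rates, np.ones((rates.shape[0], 1))])   # add a bias column
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ velocities)

def decode(rates, W):
    X = np.hstack([rates, np.ones((rates.shape[0], 1))])
    return X @ W

# Toy session: 96 channels (Utah-array-sized) with synthetic linear tuning.
rng = np.random.default_rng(1)
true_tuning = rng.normal(size=(96, 2))
vel = rng.normal(size=(1000, 2))
rates = np.clip(vel @ true_tuning.T + rng.normal(scale=0.5, size=(1000, 96)), 0, None)
W = fit_velocity_decoder(rates[:800], vel[:800])
err = np.mean((decode(rates[800:], W) - vel[800:]) ** 2)
print(f"held-out MSE: {err:.3f}")
```

    Because the decoder consumes only the recorded rates, a wireless link that preserves signal fidelity, as this study reports, leaves the whole decoding pipeline unchanged.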
    The researchers say the study represents an early but important step toward a major objective in BCI research: a fully implantable intracortical system that aids in restoring independence for people who have lost the ability to move. While wireless devices with lower bandwidth have been reported previously, this is the first device to transmit the full spectrum of signals recorded by an intracortical sensor. That high-bandwidth wireless signal enables clinical research and basic human neuroscience that is much more difficult to perform with wired BCIs.
    The new study demonstrated some of those new possibilities. The trial participants — a 35-year-old man and a 63-year-old man, both paralyzed by spinal cord injuries — were able to use the system in their homes, as opposed to the lab setting where most BCI research takes place. Unencumbered by cables, the participants were able to use the BCI continuously for up to 24 hours, giving the researchers long-duration data, including data recorded while participants slept.

  •

    Even without a brain, metal-eating robots can search for food

    When it comes to powering mobile robots, batteries present a problematic paradox: the more energy they contain, the more they weigh, and thus the more energy the robot needs to move. Energy harvesters, like solar panels, might work for some applications, but they don’t deliver power quickly or consistently enough for sustained travel.
    James Pikul, assistant professor in Penn Engineering’s Department of Mechanical Engineering and Applied Mechanics, is developing robot-powering technology that has the best of both worlds. His environmentally controlled voltage source, or ECVS, works like a battery, in that the energy is produced by repeatedly breaking and forming chemical bonds, but it escapes the weight paradox by finding those chemical bonds in the robot’s environment, like a harvester. While in contact with a metal surface, an ECVS unit catalyzes an oxidation reaction with the surrounding air, powering the robot with the freed electrons.
    Pikul’s approach was inspired by how animals power themselves through foraging for chemical bonds in the form of food. And like a simple organism, these ECVS-powered robots are now capable of searching for their own food sources despite lacking a “brain.”
    In a new study published as an Editor’s Choice article in Advanced Intelligent Systems, Pikul and lab members Min Wang and Yue Gao demonstrate a wheeled robot that can navigate its environment without a computer. By having the left and right wheels of the robot powered by different ECVS units, they show a rudimentary form of navigation and foraging, where the robot will automatically steer toward metallic surfaces it can “eat.”
    Their study also outlines more complicated behavior that can be achieved without a central processor. With different spatial and sequential arrangements of ECVS units, a robot can perform a variety of logical operations based on the presence or absence of its food source.
    “Bacteria are able to autonomously navigate toward nutrients through a process called chemotaxis, where they sense and respond to changes in chemical concentrations,” Pikul says. “Small robots have similar constraints to microorganisms, since they can’t carry big batteries or complicated computers, so we wanted to explore how our ECVS technology could replicate that kind of behavior.”
    In the researchers’ experiments, they placed their robot on aluminum surfaces capable of powering its ECVS units. By adding “hazards” that would prevent the robot from making contact with the metal, they showed how ECVS units could both get the robot moving and navigate it toward more energy-rich sources.
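    The foraging behavior can be mimicked in a toy simulation. In the sketch below, each wheel is driven only by the power picked up at the opposite front contact (classic crossed Braitenberg wiring, invented here for illustration; the real robot draws wheel power directly from its ECVS units), so the robot turns away from insulating patches and simply stalls when no contact touches metal:

```python
import numpy as np

def simulate_forager(power_map, start, heading, steps=600, gain=0.6, speed=0.15):
    """Braitenberg-style sketch of an ECVS-powered robot: each wheel is driven
    by the power picked up on the *opposite* front contact, so the robot
    steers away from insulating 'hazards' and keeps its contacts on metal.
    Geometry, gains and the crossed wiring are invented for this toy model.
    """
    pos, th = np.array(start, float), heading

    def pickup(p):  # 1.0 if the contact point rests on metal, else 0.0
        i, j = int(round(p[0])), int(round(p[1]))
        if 0 <= i < power_map.shape[0] and 0 <= j < power_map.shape[1]:
            return power_map[i, j]
        return 0.0

    for _ in range(steps):
        fwd = np.array([np.cos(th), np.sin(th)])
        perp = np.array([-fwd[1], fwd[0]])
        left, right = pos + fwd + perp, pos + fwd - perp   # front contact points
        v_left, v_right = pickup(right), pickup(left)      # crossed connections
        th += gain * (v_right - v_left)                    # differential steering
        pos += speed * (v_left + v_right) * fwd            # no power -> no motion
    return pos

# Metal arena with an insulating hazard strip along columns 18-22.
arena = np.ones((60, 40))
arena[:, 18:23] = 0.0
end = simulate_forager(arena, start=(5.0, 10.0), heading=0.35)
print("final position:", np.round(end, 1))  # advances, steered clear of the strip
```

    Stringing several such units together in different spatial and sequential arrangements is what lets the robot compute simple logical operations on the presence or absence of its food source, as the study describes, all without a processor.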