More stories

  •

    This stick-on ultrasound patch could let you watch your own heart beat

    Picture a smartwatch that doesn’t just show your heart rate, but a real-time image of your heart as it beats in your chest. Researchers may have taken the first step down that road by creating a wearable ultrasound patch — think of a Band-Aid with sonar — that provides a flexible way to see deep inside the body. 

    Ultrasound, which maps tissues and fluids by recording how sound waves bounce off them, can help doctors examine organs for damage, diagnose cancer or even track bacteria (SN: 1/3/18). But most ultrasound machines aren’t portable, and the wearable ones either struggle to spot details or can be used for only short periods. 
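    As a rough illustration of that pulse-echo principle (a textbook sketch, not code from the study; the 1,540 m/s figure is a standard value for soft tissue):

```python
# Pulse-echo principle behind ultrasound: depth is inferred from the
# round-trip time of a reflected sound pulse. 1,540 m/s is a standard
# textbook speed of sound for soft tissue, not a figure from this study.
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s

def echo_depth_mm(round_trip_seconds: float) -> float:
    """Depth of the reflecting interface, in millimeters."""
    return SPEED_OF_SOUND_TISSUE * round_trip_seconds / 2 * 1000

print(echo_depth_mm(65e-6))  # ~50 mm: an echo arriving after 65 microseconds
```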

    The new patch can work for up to 48 hours straight — even while the user is doing something active, like exercising. And the miniature device sees just as well as a more unwieldy hospital machine, researchers report in the July 29 Science. 

    “This is just the beginning,” says Xuanhe Zhao, a mechanical engineer at MIT. His team plans to make the patch wireless and able to interface with a user’s phone, which could then show the ultrasound signals as 3-D images. 

    The medical possibilities are wide-ranging. Stick a patch over a person’s heart, and the frequent images it takes could help predict heart attacks and blood clots potentially months before disaster hits, explains Aparna Singh, a biomedical engineer at Columbia University. Placed on a COVID-19 patient, the patch — which is only about the size of a quarter — could be an easy way to catch lung problems as they develop.

    “This also has a huge potential to be available for developing countries,” where limited access to hospitals can make monitoring patients difficult, Singh says. The patch costs about $100 to make. One of the researchers’ next steps will be to try to make the device cheaper.

  •

    Friendly skies? Study charts COVID-19 odds for plane flights

    What are the chances you will contract Covid-19 on a plane flight? A study led by MIT scholars offers a calculation of that for the period from June 2020 through February 2021. While the conditions that applied at that stage of the Covid-19 pandemic differ from those of today, the study offers a method that could be adapted as the pandemic evolves.
    The study estimates that from mid-2020 through early 2021, the probability of getting Covid-19 on an airplane surpassed 1 in 1,000 on a totally full flight lasting two hours at the height of the early pandemic, roughly December 2020 and January 2021. It dropped to about 1 in 6,000 on a half-full two-hour flight when the pandemic was at its least severe, in the summer of 2020. The overall risk of transmission from June 2020 through February 2021 was about 1 in 2,000, with a mean of 1 in 1,400 and a median of 1 in 2,250.
    To be clear, current conditions differ from the study’s setting. Masks are no longer required for U.S. domestic passengers; in the study’s time period, airlines were commonly leaving middle seats open, which they are no longer doing; and newer Covid-19 variants are more contagious than the virus was during the study period. While those factors may increase the current risk, most people have received Covid-19 vaccinations since February 2021, which could serve to lower today’s risk — though the precise impact of those vaccines against new variants is uncertain.
    Still, the study does provide a general estimate about air travel safety with regard to Covid-19 transmission, and a methodology that can be applied to future studies. Some U.S. carriers at the time stated that onboard transmission was “virtually nonexistent” and “nearly nonexistent,” but as the research shows, there was a discernible risk. On the other hand, passengers were not exactly facing coin-flip odds of catching the virus in flight, either.
    “The aim is to set out the facts,” says Arnold Barnett, a management professor at MIT and aviation risk expert, who is co-author of a recent paper detailing the study’s results. “Some people might say, ‘Oh, that doesn’t sound like very much.’ But if we at least tell people what the risk is, they can make judgments.”
    As Barnett also observes, a round-trip flight with a change of planes and two two-hour segments in each direction counts as four flights in this accounting, so a 1 in 1,000 probability, per flight, would lead to approximately a 1 in 250 chance for such a trip as a whole.
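    A minimal sketch of that arithmetic (illustrative only; the function and numbers are not from the study's code):

```python
# Combining independent per-flight risks: P(at least one) = 1 - (1 - p)^n,
# roughly n * p when p is small. The inputs use the figures quoted above.
def trip_risk(per_flight_prob: float, num_flights: int) -> float:
    """Probability of at least one transmission across independent segments."""
    return 1 - (1 - per_flight_prob) ** num_flights

p = 1 / 1000             # peak-period risk on a full two-hour flight
print(trip_risk(p, 4))   # ~0.00399, i.e. roughly 1 in 250 for four segments
```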

  •

    A 'nano-robot' built entirely from DNA to explore cell processes

    Constructing a tiny robot from DNA and using it to study cell processes invisible to the naked eye… You would be forgiven for thinking it is science fiction, but it is in fact the subject of serious research by scientists from Inserm, CNRS and Université de Montpellier at the Structural Biology Center in Montpellier[1]. This highly innovative “nano-robot” should enable closer study of the mechanical forces applied at microscopic levels, which are crucial for many biological and pathological processes. It is described in a new study published in Nature Communications.
    Our cells are subject to mechanical forces exerted on a microscopic scale, triggering biological signals essential to many cell processes involved in the normal functioning of our body or in the development of diseases.
    For example, the feeling of touch depends partly on the application of mechanical forces to specific cell receptors (a discovery rewarded this year by the Nobel Prize in Physiology or Medicine). Beyond touch, these force-sensitive receptors (known as mechanoreceptors) regulate other key biological processes such as blood vessel constriction, pain perception, breathing and the detection of sound waves in the ear.
    The dysfunction of this cellular mechanosensitivity is involved in many diseases — for example, cancer: cancer cells migrate within the body by probing and constantly adapting to the mechanical properties of their microenvironment. Such adaptation is only possible because specific forces are detected by mechanoreceptors that transmit the information to the cell cytoskeleton.
    At present, our knowledge of these molecular mechanisms involved in cell mechanosensitivity is still very limited. Several technologies are already available to apply controlled forces and study these mechanisms, but they have a number of limitations. In particular, they are very costly and do not allow us to study several cell receptors at a time, which makes their use very time-consuming if we want to collect a lot of data.
    DNA origami structures
    In order to propose an alternative, the research team led by Inserm researcher Gaëtan Bellot at the Structural Biology Center (Inserm/CNRS/Université de Montpellier) decided to use the DNA origami method. This enables the self-assembly of 3D nanostructures in a pre-defined form using the DNA molecule as construction material. Over the last ten years, the technique has allowed major advances in the field of nanotechnology.
    This enabled the researchers to design a “nano-robot” composed of three DNA origami structures. Because it is nanometer-scale, it is compatible with the size of a human cell. It makes it possible to apply and control a force with a resolution of 1 piconewton, namely one trillionth of a newton — with 1 newton corresponding to the force of a finger clicking on a pen. This is the first time that a human-made, self-assembled DNA-based object can apply force with this accuracy.
    The team began by coupling the robot with a molecule that recognizes a mechanoreceptor. This made it possible to direct the robot to some of our cells and specifically apply forces to targeted mechanoreceptors localized on the surface of the cells in order to activate them.
    Such a tool is very valuable for basic research, as it could be used to better understand the molecular mechanisms involved in cell mechanosensitivity and discover new cell receptors sensitive to mechanical forces. Thanks to the robot, the scientists will also be able to study more precisely at what moment, when applying force, key signaling pathways for many biological and pathological processes are activated at cell level.
    “The design of a robot enabling the in vitro and in vivo application of piconewton forces meets a growing demand in the scientific community and represents a major technological advance. However, the biocompatibility of the robot can be considered both an advantage for in vivo applications and a weakness, owing to its sensitivity to enzymes that can degrade DNA. So our next step will be to study how we can modify the surface of the robot so that it is less sensitive to the action of enzymes. We will also try to find other modes of activation of our robot using, for example, a magnetic field,” emphasizes Bellot.
    [1] Also contributed to this research: the Institute of Functional Genomics (CNRS/Inserm/Université de Montpellier), the Max Mousseron Biomolecules Institute (CNRS/Université de Montpellier/ENSCM), the Paul Pascal Research Center (CNRS/Université de Bordeaux) and the Physiology and Experimental Medicine: Heart-Muscles laboratory (CNRS/Inserm/Université de Montpellier).

  •

    AI performs as well as medical specialists in analyzing lung disease, research shows

    A Nagoya University research group has developed an AI algorithm that accurately and quickly diagnoses idiopathic pulmonary fibrosis, a lung disease. The algorithm makes its diagnosis based only on information from non-invasive examinations, including lung images and medical information collected during daily medical care.
    Doctors have waited a long time for an early means of diagnosing idiopathic pulmonary fibrosis, a potentially fatal disease that can scar a person’s lungs. Except for drugs that can delay the disease’s progression, established therapies do not exist. Since doctors face many difficulties diagnosing the disease, they often have to request a specialist diagnosis. In addition, many of the diagnostic techniques, such as lung biopsy, are highly invasive. These investigative measures may exacerbate the disease, increasing a patient’s risk of dying.
    Taiki Furukawa, an assistant professor at Nagoya University Hospital, in collaboration with RIKEN and Tosei General Hospital, has developed a new technology to diagnose idiopathic pulmonary fibrosis. Using artificial intelligence (AI), the group analyzed medical data from patients in Tosei General Hospital’s interstitial pneumonia treatment facility, collected during normal care. They found that their AI diagnosed idiopathic pulmonary fibrosis with a similar level of accuracy as a human specialist. They published their results in the journal Respirology.
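    As a hypothetical sketch of how such a comparison against specialist labels might be scored (the labels, scores, threshold and metrics below are invented for illustration and are not from the published study):

```python
# Hypothetical scoring of a diagnostic model's calls against specialist labels.
# All data here are invented; the study's actual data and model are described
# in the Respirology paper.
from sklearn.metrics import accuracy_score, roc_auc_score

specialist = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = idiopathic pulmonary fibrosis
ai_scores = [0.91, 0.12, 0.78, 0.42, 0.30, 0.08, 0.85, 0.44]
ai_calls = [int(s >= 0.5) for s in ai_scores]   # illustrative 0.5 threshold

print("agreement with specialist:", accuracy_score(specialist, ai_calls))
print("AUC:", roc_auc_score(specialist, ai_scores))
```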
    Despite finding that their AI performed just as well as experts, the team stress that they do not see it as replacing medical professionals. Instead, they hope that specialists will use AI in medical treatment to ensure that they do not miss opportunities for early treatment. Its use would also avoid invasive procedures, such as lung biopsies, which could save lives.
    “Idiopathic pulmonary fibrosis has a very poor prognosis among lung diseases,” Furukawa says. “It has been difficult to diagnose even for general respiratory physicians. The diagnostic AI developed in this study would allow any hospital to get a diagnosis equivalent to that of a specialist. For idiopathic pulmonary fibrosis, the developed diagnostic AI is useful as a screening tool and may lead to personalized medicine by collaborating with medical specialists.”
    Furukawa is excited about the possibilities: “The practical application of diagnostic AI and collaborative diagnosis with specialists may lead to a more accurate diagnosis and treatment. We expect it to revolutionize medical care.”
    This study was supported by JSPS KAKENHI, Grant/Award Number: JP19110253; The Hori Science and Arts Foundation; The Japanese Respiratory Foundation.
    Story Source:
    Materials provided by Nagoya University.

  •

    Magnetic quantum material broadens platform for probing next-gen information technologies

    Scientists at the Department of Energy’s Oak Ridge National Laboratory used neutron scattering to determine whether a specific material’s atomic structure could host a novel state of matter called a spiral spin liquid. By tracking tiny magnetic moments known as “spins” on the honeycomb lattice of a layered iron trichloride magnet, the team found the first 2D system to host a spiral spin liquid.
    The discovery provides a test bed for future studies of physics phenomena that may drive next-generation information technologies. These include fractons, or collective quantized vibrations that may prove promising in quantum computing, and skyrmions, or novel magnetic spin textures that could advance high-density data storage.
    “Materials hosting spiral spin liquids are particularly exciting due to their potential to be used to generate quantum spin liquids, spin textures and fracton excitations,” said ORNL’s Shang Gao, who led the study published in Physical Review Letters.
    A long-held theory predicted that the honeycomb lattice can host a spiral spin liquid — a novel phase of matter in which spins form fluctuating corkscrew-like structures.
    Yet, until the present study, experimental evidence of this phase in a 2D system had been lacking. A 2D system comprises a layered crystalline material in which interactions are stronger in the planar than in the stacking direction.
    Gao identified iron trichloride as a promising platform for testing the theory, which was proposed more than a decade ago. He and co-author Andrew Christianson of ORNL approached Michael McGuire, also of ORNL, who has worked extensively on growing and studying 2D materials, asking if he would synthesize and characterize a sample of iron trichloride for neutron diffraction measurements. Just as 2D graphene layers exist in bulk graphite as honeycomb lattices of pure carbon, 2D iron layers exist in bulk iron trichloride as 2D honeycomb layers. “Previous reports hinted that this interesting honeycomb material could show complex magnetic behavior at low temperatures,” McGuire said.

  •

    Researchers 3D print sensors for satellites

    MIT scientists have created the first completely digitally manufactured plasma sensors for orbiting spacecraft. These plasma sensors, also known as retarding potential analyzers (RPAs), are used by satellites to determine the chemical composition and ion energy distribution of the atmosphere.
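    As a rough sketch of the measurement principle (not the team's design): an RPA sweeps a retarding voltage and records the collected current, and the ion energy distribution is proportional to the negative slope of that current-voltage curve.

```python
# Principle of a retarding potential analyzer (RPA): as the retarding voltage
# rises, only ions energetic enough to overcome it reach the collector, so the
# ion energy distribution is proportional to -dI/dV. The smooth current model
# below is illustrative, not flight data.
import numpy as np

volts = np.linspace(0, 10, 200)                 # retarding voltage sweep (V)
current = 0.5e-6 * (1 - np.tanh(volts - 4.0))   # mock collector current (A)

energy_dist = -np.gradient(current, volts)      # ~ ion energy distribution
peak = volts[np.argmax(energy_dist)]
print(f"peak ion energy ~ {peak:.1f} eV (for singly charged ions)")
```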
    The 3D-printed and laser-cut hardware performed as well as state-of-the-art semiconductor plasma sensors, which must be manufactured in a cleanroom through an expensive process that takes weeks of intricate fabrication. By contrast, the 3D-printed sensors can be produced for tens of dollars in a matter of days.
    Due to their low cost and speedy production, the sensors are ideal for CubeSats. These inexpensive, low-power, and lightweight satellites are often used for communication and environmental monitoring in Earth’s upper atmosphere.
    The researchers developed RPAs using a glass-ceramic material that is more durable than traditional sensor materials like silicon and thin-film coatings. By using the glass-ceramic in a fabrication process that was developed for 3D printing with plastics, they were able to create sensors with complex shapes that can withstand the wide temperature swings a spacecraft would encounter in low Earth orbit.
    “Additive manufacturing can make a big difference in the future of space hardware. Some people think that when you 3D-print something, you have to concede less performance. But we’ve shown that is not always the case. Sometimes there is nothing to trade off,” says Luis Fernando Velásquez-García, a principal scientist in MIT’s Microsystems Technology Laboratories (MTL) and senior author of a paper presenting the plasma sensors.
    Joining Velásquez-García on the paper are lead author and MTL postdoc Javier Izquierdo-Reyes; graduate student Zoey Bigelow; and postdoc Nicholas K. Lubinsky. The research is published in Additive Manufacturing.

  •

    A key role for quantum entanglement

    A method known as quantum key distribution has long held the promise of communication security unattainable in conventional cryptography. An international team of scientists has now demonstrated experimentally, for the first time, an approach to quantum key distribution that is based on high-quality quantum entanglement — offering much broader security guarantees than previous schemes.
    The art of cryptography is to skillfully transform messages so that they become meaningless to everyone but the intended recipients. Modern cryptographic schemes, such as those underpinning digital commerce, prevent adversaries from illegitimately deciphering messages — say, credit-card information — by requiring them to perform mathematical operations that consume a prohibitively large amount of computational power. Starting from the 1980s, however, ingenious theoretical concepts have been introduced in which security does not depend on the eavesdropper’s finite number-crunching capabilities. Instead, basic laws of quantum physics limit how much information, if any, an adversary can ultimately intercept. In one such concept, security can be guaranteed with only a few general assumptions about the physical apparatus used. Implementations of such ‘device-independent’ schemes have long been sought after, but remained out of reach. Until now, that is. Writing in Nature, an international team of researchers from the University of Oxford, EPFL, ETH Zurich, the University of Geneva and CEA report the first demonstration of this sort of protocol — taking a decisive step towards practical devices offering such exquisite security.
    The key is a secret
    Secure communication is all about keeping information private. It might be surprising, therefore, that in real-world applications large parts of the transactions between legitimate users are played out in public. The key is that sender and receiver do not have to keep their entire communication hidden. In essence, they only have to share one ‘secret’; in practice, this secret is a string of bits, known as a cryptographic key, that enables everyone in its possession to turn coded messages into meaningful information. Once the legitimate parties have ensured for a given round of communication that they, and only they, share such a key, pretty much all the other communication can happen in plain view, for everyone to see. The question, then, is how to ensure that only the legitimate parties share a secret key. The process of accomplishing this is known as ‘key distribution’.
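    A minimal sketch of why a shared key suffices, using the simplest such scheme, a one-time pad (an illustration, not the protocol discussed here):

```python
# With a shared secret key, XOR turns a message into ciphertext and back;
# the ciphertext itself can travel in plain view. (A one-time pad, the
# simplest case; real protocols differ, but the key plays the same role.)
import secrets

message = b"credit-card information"
key = secrets.token_bytes(len(message))                  # the shared secret

ciphertext = bytes(m ^ k for m, k in zip(message, key))  # safe to send publicly
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
assert recovered == message
```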
    In the cryptographic algorithms underlying, for instance, RSA — one of the most widely used cryptographic systems — key distribution is based on the (unproven) conjecture that certain mathematical functions are easy to compute but hard to invert. More specifically, RSA relies on the fact that for today’s computers it is hard to find the prime factors of a large number, whereas it is easy for them to multiply known prime factors to obtain that number. Secrecy is therefore ensured by mathematical difficulty. But what is impossibly difficult today might be easy tomorrow. Famously, quantum computers can find prime factors significantly more efficiently than classical computers. Once quantum computers with a sufficiently large number of qubits become available, RSA encoding is destined to become penetrable.
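    A toy sketch of that asymmetry (the primes below are far too small for real security, which is why the search finishes instantly):

```python
# The asymmetry RSA rests on: multiplying two primes is one operation, while
# recovering them from the product requires search. Real RSA moduli are
# hundreds of digits long; these toy primes factor instantly.
p, q = 104729, 1299709      # the 10,000th and 100,000th primes
n = p * q                   # easy direction

def factor(m: int) -> tuple[int, int]:
    """Naive trial division; cost grows with the smaller prime factor."""
    d = 3
    while m % d:
        d += 2
    return d, m // d

print(factor(n))            # (104729, 1299709): the hard direction
```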
    But quantum theory provides the basis not only for cracking the cryptosystems at the heart of digital commerce, but also for a potential solution to the problem: a way entirely different from RSA for distributing cryptographic keys — one that has nothing to do with the hardness of performing mathematical operations, but with fundamental physical laws. Enter quantum key distribution, or QKD for short.
    Quantum-certified security
    In 1991, the Polish-British physicist Artur Ekert showed in a seminal paper that the security of the key-distribution process can be guaranteed by directly exploiting a property that is unique to quantum systems, with no equivalent in classical physics: quantum entanglement. Quantum entanglement refers to certain types of correlations in the outcomes of measurements performed on separate quantum systems. Importantly, quantum entanglement between two systems is exclusive, in that nothing else can be correlated to these systems. In the context of cryptography this means that sender and receiver can produce between them shared outcomes through entangled quantum systems, without a third party being able to secretly gain knowledge about these outcomes. Any eavesdropping leaves traces that clearly flag the intrusion. In short: the legitimate parties can interact with one another in ways that are — thanks to quantum theory — fundamentally beyond any adversary’s control. In classical cryptography, an equivalent security guarantee is provably impossible.
    Over the years, it was realized that QKD schemes based on the ideas introduced by Ekert can have a further remarkable benefit: users have to make only very general assumptions regarding the devices employed in the process. By contrast, earlier forms of QKD based on other basic principles require detailed knowledge about the inner workings of the devices used. The novel form of QKD is now generally known as ‘device-independent QKD’ (DIQKD), and an experimental implementation thereof became a major goal in the field. Hence the excitement as such a breakthrough experiment has now finally been achieved.
    Culmination of years of work
    The scale of the challenge is reflected in the breadth of the team, which combines leading experts in theory and experiment. The experiment involved two single ions — one for the sender and one for the receiver — confined in separate traps that were connected with an optical-fibre link. In this basic quantum network, entanglement between the ions was generated with record-high fidelity over millions of runs. Without such a sustained source of high-quality entanglement, the protocol could not have been run in a practically meaningful manner. Equally important was to certify that the entanglement is suitably exploited, which is done by showing that conditions known as Bell inequalities are violated. Moreover, for the analysis of the data and an efficient extraction of the cryptographic key, significant advances in the theory were needed.
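    As a sketch of what a Bell test checks, here is the CHSH form with the ideal quantum correlations plugged in (illustrative values, not the experiment's data); any result above the classical bound of 2 certifies entanglement:

```python
# CHSH form of a Bell test: S = E(a0,b0) + E(a0,b1) + E(a1,b0) - E(a1,b1).
# The correlations below are the ideal quantum values for a maximally
# entangled pair. |S| <= 2 for any classical (local hidden variable) model;
# quantum mechanics reaches 2*sqrt(2).
import math

E = {("a0", "b0"): 1 / math.sqrt(2), ("a0", "b1"): 1 / math.sqrt(2),
     ("a1", "b0"): 1 / math.sqrt(2), ("a1", "b1"): -1 / math.sqrt(2)}

S = E["a0", "b0"] + E["a0", "b1"] + E["a1", "b0"] - E["a1", "b1"]
print(S)  # 2.828..., above the classical bound of 2
```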
    In the experiment, the ‘legitimate parties’ — the ions — were located in one and the same laboratory. But there is a clear route to extending the distance between them to kilometres and beyond. With that perspective, together with further recent progress made in related experiments in Germany and China, there is now a real prospect of turning the theoretical concept of Ekert into practical technology.

  •

    Quantum cryptography: Hacking is futile

    The Internet is teeming with highly sensitive information. Sophisticated encryption techniques generally ensure that such content cannot be intercepted and read. But in the future high-performance quantum computers could crack these keys in a matter of seconds. It is just as well, then, that quantum mechanical techniques not only enable new, much faster algorithms, but also exceedingly effective cryptography.
    Quantum key distribution (QKD) — as the jargon has it — is secure against attacks on the communication channel, but not against attacks on or manipulations of the devices themselves. The devices could therefore output a key which the manufacturer had previously saved and might conceivably have forwarded to a hacker. With device-independent QKD (abbreviated to DIQKD), it is a different story. Here, the cryptographic protocol is independent of the device used. Theoretically known since the 1990s, this method has now been experimentally realized for the first time, by an international research group led by LMU physicist Harald Weinfurter and Charles Lim from the National University of Singapore (NUS).
    For exchanging quantum mechanical keys, there are different approaches available. Either light signals are sent by the transmitter to the receiver, or entangled quantum systems are used. In the present experiment, the physicists used two quantum mechanically entangled rubidium atoms, situated in two laboratories located 400 meters from each other on the LMU campus. The two locations are connected via a fiber optic cable 700 meters in length, which runs beneath Geschwister Scholl Square in front of the main building.
    To create an entanglement, first the scientists excite each of the atoms with a laser pulse. After this, the atoms spontaneously fall back into their ground state, each thereby emitting a photon. Due to the conservation of angular momentum, the spin of the atom is entangled with the polarization of its emitted photon. The two light particles travel along the fiber optic cable to a receiver station, where a joint measurement of the photons indicates an entanglement of the atomic quantum memories.
    To exchange a key, Alice and Bob — as the two parties are usually dubbed by cryptographers — measure the quantum states of their respective atom. In each case, this is done randomly in two or four directions. If the directions correspond, the measurement results are identical on account of entanglement and can be used to generate a secret key. With the other measurement results, a so-called Bell inequality can be evaluated. Physicist John Stewart Bell originally developed these inequalities to test whether nature can be described with hidden variables. “It turned out that it cannot,” says Weinfurter. In DIQKD, the test is used “specifically to ensure that there are no manipulations at the devices — that is to say, for example, that hidden measurement results have not been saved in the devices beforehand,” explains Weinfurter.
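    A toy sketch of the sifting logic described here, simplified to two measurement directions per party and noiseless, perfectly correlated outcomes (not the actual DIQKD protocol):

```python
# Toy sifting loop: rounds where the randomly chosen measurement directions
# match contribute key bits (outcomes assumed perfectly correlated here);
# mismatched rounds are reserved for the Bell-inequality check.
import random

key_alice, key_bob, bell_rounds = [], [], []
for _ in range(1000):
    basis_a, basis_b = random.choice("XZ"), random.choice("XZ")
    outcome = random.randint(0, 1)   # stand-in for the correlated measurement
    if basis_a == basis_b:
        key_alice.append(outcome)    # identical results become key bits
        key_bob.append(outcome)
    else:
        bell_rounds.append((basis_a, basis_b))

assert key_alice == key_bob
print(len(key_alice), "key bits;", len(bell_rounds), "rounds for the Bell test")
```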
    In contrast to earlier approaches, the implemented protocol, which was developed by researchers at NUS, uses two measurement settings for key generation instead of one: “By introducing the additional setting for key generation, it becomes more difficult to intercept information, and therefore the protocol can tolerate more noise and generate secret keys even for lower-quality entangled states,” says Charles Lim.
    With conventional QKD methods, by contrast, security is guaranteed only when the quantum devices used have been characterized sufficiently well. “And so, users of such protocols have to rely on the specifications furnished by the QKD providers and trust that the device will not switch into another operating mode during the key distribution,” explains Tim van Leent, one of the four lead authors of the paper alongside Wei Zhang and Kai Redeker. It has been known for at least a decade that older QKD devices could easily be hacked from outside, continues van Leent.
    “With our method, we can now generate secret keys with uncharacterized and potentially untrustworthy devices,” explains Weinfurter. In fact, he had his doubts initially whether the experiment would work. But his team proved his misgivings were unfounded and significantly improved the quality of the experiment, as he happily admits. Alongside the cooperation project between LMU and NUS, another research group from the University of Oxford also demonstrated device-independent key distribution. To do this, the researchers used a system comprising two entangled ions in the same laboratory. “These two projects lay the foundation for future quantum networks, in which absolutely secure communication is possible between far distant locations,” says Charles Lim.
    One of the next goals is to expand the system to incorporate several entangled atom pairs. “This would allow many more entanglement states to be generated, which increases the data rate and ultimately the key security,” says van Leent. In addition, the researchers would like to increase the range. In the present set-up, it was limited by the loss of around half the photons in the fiber between the laboratories. In other experiments, the researchers were able to transform the wavelength of the photons into a low-loss region suitable for telecommunications. In this way, for just a little extra noise, they managed to increase the range of the quantum network connection to 33 kilometers.