More stories

  • Scientific advance leads to a new tool in the fight against hackers

    A new form of security identification could soon see the light of day and help us protect our data from hackers and cybercriminals. Quantum mathematicians at the University of Copenhagen have solved a mathematical riddle that allows for a person’s geographical location to be used as a personal ID that is secure against even the most advanced cyber attacks.
    People have used codes and encryption to protect information from falling into the wrong hands for thousands of years. Today, encryption is widely used to protect our digital activity from hackers and cybercriminals who assume false identities and exploit the internet and our increasing number of digital devices to steal from us.
    As such, there is an ever-growing need for new security measures to detect hackers posing as our banks or other trusted institutions. Within this realm, researchers from the University of Copenhagen’s Department of Mathematical Sciences have just made a giant leap.
    “There is a constant battle in cryptography between those who want to protect information and those seeking to crack it. New security keys are being developed and later broken, and so the cycle continues. Until, that is, a completely different type of key has been found,” says Professor Matthias Christandl.
    For nearly twenty years, researchers around the world have been trying to solve the riddle of how to securely determine a person’s geographical location and use it as a secure ID. Until now, this has not been possible using conventional methods such as GPS tracking.
    “Today, there are no traditional ways, whether via internet or radio signals for example, to determine where another person is situated geographically with one hundred percent accuracy. Current methods are not unbreakable, and hackers can impersonate someone you trust even when they are far, far away. However, quantum physics opens up a few entirely different possibilities,” says Matthias Christandl. More
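    The quantum protocol itself is not detailed in this excerpt, but the classical baseline it improves on is easy to state: a verifier can bound how far away a responder is from how quickly a challenge comes back, since no signal travels faster than light. A minimal sketch of that timing argument (illustrative only, and the breakable classical idea, not the Copenhagen team’s quantum scheme):

    ```python
    # Classical timing bound used in position verification: a reply cannot
    # originate farther away than light could have travelled in half the
    # measured round-trip time. Illustrative only; this is the breakable
    # classical baseline, not the quantum protocol from the study.
    C = 299_792_458.0  # speed of light in vacuum, m/s

    def max_distance_km(round_trip_s: float, processing_s: float = 0.0) -> float:
        """Upper bound on the responder's distance from the verifier."""
        return C * (round_trip_s - processing_s) / 2 / 1000

    # A reply arriving 1 millisecond after the challenge was sent can only
    # have come from within ~150 km of the verifier.
    print(f"{max_distance_km(1e-3):.0f} km")
    ```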

  • A sharper image for proteins

    Proteins may be the most important and varied biomolecules within living systems. These strings of amino acids, assuming complex 3-dimensional forms, are essential for the growth and maintenance of tissue, the initiation of thousands of biochemical reactions, and the protection of the body from pathogens through the immune system. They play a central role in health and disease and are primary targets for pharmaceutical drugs.
    To fully understand proteins and their myriad functions, researchers have developed sophisticated means to see and study them through advanced microscopy, improving light detection, imaging software, and the integration of advanced hardware systems.
    In a new study, corresponding author Shaopeng Wang and his colleagues at Arizona State University describe a new technique that promises to revolutionize the imaging of proteins and other vital biomolecules, allowing these tiny entities to be visualized with unprecedented clarity and by simpler means than existing methods.
    “The method we report in this study uses normal cover glass instead of gold-coated cover glass, which has two advantages over our previously reported label-free single-protein imaging method,” Wang says. “It is compatible with fluorescence imaging for in-situ cross validation, and it reduces the light-induced heating effect that could harm the biological samples. Pengfei Zhang, an outstanding postdoctoral researcher in my group, is the technical lead of this project.”
    Wang has a joint faculty position in the Biodesign Center for Bioelectronics and Biosensors and the School of Biological and Health Systems Engineering. The group’s research findings appear in the current issue of the journal Nature Communications.
    The new method, known as evanescent scattering microscopy (ESM), is based on an optical property first recognized in antiquity, known as total internal reflection. This occurs when light passes from a high-refractive-index medium (like glass) into a low-refractive-index medium (like water). More
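    For a rough sense of the total internal reflection condition ESM exploits, the critical angle follows from Snell’s law: light striking the interface at incidence angles larger than arcsin(n_low / n_high), measured from the normal, is totally reflected and leaves only an evanescent field on the far side. A quick numerical check with textbook refractive indices (not parameters from the study):

    ```python
    # Critical angle for total internal reflection at a glass-water interface.
    # Textbook refractive indices; illustrative, not values from the paper.
    import math

    n_glass, n_water = 1.52, 1.33
    theta_c = math.degrees(math.asin(n_water / n_glass))
    print(f"critical angle ~ {theta_c:.1f} degrees")  # about 61 degrees
    ```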

  • From blurry to bright: AI tech helps researchers peer into the brains of mice

    Johns Hopkins biomedical engineers have developed an artificial intelligence (AI) training strategy to capture images of mouse brain cells in action. The researchers say the AI system, in concert with specialized ultra-small microscopes, makes it possible to find precisely where and when cells are activated during movement, learning and memory. The data gathered with this technology could someday allow scientists to understand how the brain functions and is affected by disease.
    The researchers’ experiments in mice were published in Nature Communications on March 22.
    “When a mouse’s head is restrained for imaging, its brain activity may not truly represent its neurological function,” says Xingde Li, Ph.D., professor of biomedical engineering at the Johns Hopkins University School of Medicine. “To map brain circuits that control daily functions in mammals, we need to see precisely what is happening among individual brain cells and their connections, while the animal is freely moving around, eating and socializing.”
    To gather this extremely detailed data, Li’s team developed ultra-small microscopes that the mice can wear on the top of their head. Measuring only a couple of millimeters in diameter, these microscopes are limited in the imaging technology they can carry on board. In comparison to benchtop models, the frame rate of the miniature microscopes is low, which makes them susceptible to interference from motion. Disturbances such as the mouse’s breathing or heart rate would affect the accuracy of the data these microscopes can capture. Researchers estimate that Li’s miniature microscope would need to exceed 20 frames per second to eliminate all the disturbances from the motion of a freely moving mouse.
    “There are two ways to increase frame rate,” says Li. “You can increase the scanning speed and you can decrease the number of points scanned.”
    In previous research, Li’s engineering team quickly found that they had hit the physical limits of the scanner at six frames per second, which maintained excellent image quality but was far below the required rate. So, the team moved on to the second strategy for increasing frame rate: decreasing the number of points scanned. However, much as reducing the number of pixels in an image lowers its resolution, this strategy would cause the microscope to capture lower-resolution data. More
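    The trade-off Li describes is simple arithmetic: with the scanner already at its speed limit, the frame rate can only rise by scanning fewer points per frame, which costs resolution. A back-of-the-envelope sketch (the 256 x 256 sampling grid is an assumed, illustrative figure, not the device’s actual specification):

    ```python
    # Frame rate = points scanned per second / points per frame.
    # The per-second point rate is fixed by the scanner's physical limit.
    def frame_rate(points_per_second: float, points_per_frame: int) -> float:
        return points_per_second / points_per_frame

    # Assume the 6 fps limit was reached while sampling a 256 x 256 grid.
    max_point_rate = 6 * 256 * 256

    print(frame_rate(max_point_rate, 256 * 256))  # 6.0 fps, full sampling
    print(frame_rate(max_point_rate, 128 * 128))  # 24.0 fps, above the 20 fps
                                                  # target, but 1/4 the pixels
    ```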

  • New brain learning mechanism calls for revision of long-held neuroscience hypothesis

    The brain is a complex network containing billions of neurons. Each of these neurons communicates simultaneously with thousands of others via their synapses (links), and collects incoming signals through several extremely long, branched “arms,” called dendritic trees.
    For the last 70 years a core hypothesis of neuroscience has been that brain learning occurs by modifying the strength of the synapses, following the relative firing activity of their connecting neurons. This hypothesis has been the basis for machine and deep learning algorithms which increasingly affect almost all aspects of our lives. But after seven decades, this long-lasting hypothesis has now been called into question.
    In an article published today in Scientific Reports, researchers from Bar-Ilan University in Israel reveal that the brain learns in a completely different way than has been assumed since the beginning of the 20th century. The new experimental observations suggest that learning is mainly performed in neuronal dendritic trees, where the trunk and branches of the tree modify their strength, as opposed to modifying solely the strength of the synapses (the dendritic “leaves”), as was previously thought. These observations also indicate that the neuron is actually a much more complex, dynamic and computational element than a binary element that can merely fire or not. A single neuron can realize deep learning algorithms that previously required a complex artificial network consisting of thousands of connected neurons and synapses.
    “We’ve shown that efficient learning on dendritic trees of a single neuron can artificially achieve success rates approaching unity for handwritten digit recognition. This finding paves the way for an efficient biologically inspired new type of AI hardware and algorithms,” said Prof. Ido Kanter, of Bar-Ilan’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research. “This simplified learning mechanism represents a step towards a plausible biological realization of backpropagation algorithms, which are currently the central technique in AI,” added Shiri Hodassman, a PhD student and one of the key contributors to this work.
    The efficient learning on dendritic trees is based on Kanter and his research team’s experimental evidence for sub-dendritic adaptation using neuronal cultures, together with other anisotropic properties of neurons, like different spike waveforms, refractory periods and maximal transmission rates.
    The brain’s clock is a billion times slower than that of existing parallel GPUs, yet it achieves comparable success rates in many perceptual tasks.
    The new demonstration of efficient learning on dendritic trees calls for new approaches in brain research, as well as for the generation of counterpart hardware aiming to implement advanced AI algorithms. If one can implement slow brain dynamics on ultrafast computers, the sky is the limit.
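    The excerpt does not spell out the group’s algorithm, but the basic contrast with synapse-only learning can be caricatured in a few lines: fixed “synaptic” weights feed a handful of nonlinear dendritic branches, and only the per-branch strengths are adapted. The toy below is purely illustrative (the task, sizes, and all names are invented, and it is emphatically not the Bar-Ilan model); it amounts to training a linear readout over the nonlinear branch responses of a single unit.

    ```python
    # Toy illustration: trainable "branch strengths" on a single unit,
    # not the authors' model or data.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy binary task: two Gaussian clusters in 16 dimensions.
    n, d, n_branches = 400, 16, 4
    X = rng.normal(size=(n, d)) + np.repeat([[1.0], [-1.0]], n // 2, axis=0)
    y = np.repeat([1.0, 0.0], n // 2)

    W = rng.normal(size=(n_branches, d)) / np.sqrt(d)  # fixed "synaptic" weights
    strength = np.zeros(n_branches)                    # trainable branch strengths
    bias = 0.0

    def forward(x):
        branch = np.tanh(W @ x)           # nonlinear branch responses
        z = strength @ branch + bias      # soma sums the weighted branches
        return 1.0 / (1.0 + np.exp(-z)), branch

    lr = 0.1
    for _ in range(30):                   # plain gradient descent on the strengths
        for x, t in zip(X, y):
            p, branch = forward(x)
            strength -= lr * (p - t) * branch
            bias -= lr * (p - t)

    acc = np.mean([(forward(x)[0] > 0.5) == t for x, t in zip(X, y)])
    print(f"toy accuracy: {acc:.2f}")
    ```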
    Story Source:
    Materials provided by Bar-Ilan University. Note: Content may be edited for style and length. More

  • Ionic liquid-based reservoir computing: The key to efficient and flexible edge computing

    Physical reservoir computing (PRC), which relies on the transient response of physical systems, is an attractive machine learning framework that can perform high-speed processing of time-series signals at low power. However, PRC systems have low tunability, limiting the range of signals they can process. Now, researchers from Japan present ionic liquids as an easily tunable physical reservoir device that can be optimized to process signals over a broad range of timescales by simply changing their viscosity.
    Artificial intelligence (AI) is fast becoming ubiquitous in modern society and will see even broader adoption in the coming years. In applications involving sensors and internet-of-things devices, the norm is often edge AI, a technology in which the computing and analyses are performed close to the user (where the data is collected) rather than far away on a centralized server. This is because edge AI has low power requirements as well as high-speed data processing capabilities, traits that are particularly desirable for processing time-series data in real time.
    In this regard, physical reservoir computing (PRC), which relies on the transient dynamics of physical systems, can greatly simplify the computing paradigm of edge AI. This is because PRC can be used to transform analog signals into a form that edge AI can efficiently work with and analyze. However, the dynamics of solid PRC systems are characterized by specific timescales that are not easily tunable and are usually too fast for most physical signals. This mismatch in timescales, and their low controllability, makes PRC largely unsuitable for real-time processing of signals in living environments.
    To address this issue, a research team from Japan involving Professor Kentaro Kinoshita and Sang-Gyu Koh, a PhD student, from the Tokyo University of Science, and senior researchers Dr. Hiroyuki Akinaga, Dr. Hisashi Shima, and Dr. Yasuhisa Naitoh from the National Institute of Advanced Industrial Science and Technology, proposed, in a new study published in Scientific Reports, the use of liquid PRC systems instead. “Replacing conventional solid reservoirs with liquid ones should lead to AI devices that can directly learn at the time scales of environmentally generated signals, such as voice and vibrations, in real time,” explains Prof. Kinoshita. “Ionic liquids are stable molten salts that are completely made up of free-roaming electrical charges. The dielectric relaxation of the ionic liquid, or how its charges rearrange in response to an electric signal, could be used as a reservoir and holds much promise for edge AI physical computing.”
    In their study, the team designed a PRC system with an ionic liquid (IL) of an organic salt, 1-alkyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide ([Rmim+][TFSI-], where R = ethyl (e), butyl (b), hexyl (h), or octyl (o)), whose cationic part (the positively charged ion) can be easily varied with the length of the chosen alkyl chain. They fabricated gold gap electrodes and filled the gaps with the IL. “We found that the timescale of the reservoir, while complex in nature, can be directly controlled by the viscosity of the IL, which depends on the length of the cationic alkyl chain. Changing the alkyl group in organic salts is easy to do, and presents us with a controllable, designable system for a range of signal lifetimes, allowing a broad range of computing applications in the future,” says Prof. Kinoshita. By adjusting the alkyl chain length between 2 and 8 units, the researchers achieved characteristic response times ranging from 1 to 20 ms, with longer alkyl sidechains leading to longer response times and tunable AI learning performance of the devices.
    The tunability of the system was demonstrated using an AI image identification task. The AI was presented with a handwritten digit image as the input, represented by rectangular voltage pulses 1 ms in width. By increasing the side-chain length, the team made the transient dynamics approach those of the target signal, with the discrimination rate improving for longer chains. This is because, compared to [emim+][TFSI-], in which the current relaxed to its steady value in about 1 ms, the ILs with longer side chains and, in turn, longer relaxation times retained the history of the time-series data better, improving identification accuracy. When the longest sidechain of 8 units was used, the discrimination rate reached a peak value of 90.2%.
    These findings are encouraging as they clearly show that the proposed PRC system based on the dielectric relaxation at an electrode-ionic liquid interface can be suitably tuned according to the input signals by simply changing the IL’s viscosity. This could pave the way for edge AI devices that can accurately learn the various signals produced in the living environment in real time.
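    In any reservoir computer, only a simple linear readout is trained; the fixed dynamical system, here the dielectric relaxation of the ionic liquid whose timescale is set by viscosity, does the nonlinear temporal mixing. The numerical caricature below uses leaky first-order nodes with a ~5 ms relaxation time (within the 1 to 20 ms range reported above, though the node model, sizes, and toy task are invented for illustration and are not taken from the paper):

    ```python
    # Minimal reservoir-computing sketch: fixed relaxing nodes, trained readout.
    import numpy as np

    rng = np.random.default_rng(1)

    def reservoir_states(u, tau, n_nodes=20, dt=1e-3):
        """Drive n_nodes leaky nodes (relaxation time tau, seconds) with input u,
        sampled every dt seconds, and return the state trajectory."""
        w_in = rng.uniform(-1.0, 1.0, n_nodes)          # fixed random input couplings
        x = np.zeros(n_nodes)
        states = np.empty((len(u), n_nodes))
        for t, u_t in enumerate(u):
            x += (dt / tau) * (np.tanh(w_in * u_t) - x)  # first-order relaxation
            states[t] = x
        return states

    # Toy task: reconstruct a delayed copy of a random pulse train.
    T = 500
    u = rng.choice([0.0, 1.0], size=T)
    target = np.roll(u, 5)

    S = reservoir_states(u, tau=5e-3)   # ~5 ms relaxation time
    # Train only the linear readout, via ridge regression.
    ridge = 1e-6
    W_out = np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ target)
    print("readout mse:", np.mean((S @ W_out - target) ** 2))
    ```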
    Story Source:
    Materials provided by Tokyo University of Science. Note: Content may be edited for style and length. More

  • In Einstein's footsteps and beyond

    In physics, as in life, it’s always good to look at things from different perspectives.
    Since the beginning of quantum physics, how light moves and interacts with the matter around it has mostly been described and understood mathematically through the lens of its energy. In 1900, Max Planck used energy to explain how light is emitted by heated objects, a seminal study in the foundation of quantum mechanics. In 1905, Albert Einstein used energy when he introduced the concept of the photon.
    But light has another, equally important quality known as momentum. And, as it turns out, when you take momentum away, light starts behaving in really interesting ways.
    An international team of physicists led by Michaël Lobet, a research associate at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), and Eric Mazur, the Balkanski Professor of Physics and Applied Physics at SEAS, is re-examining the foundations of quantum physics from the perspective of momentum and exploring what happens when the momentum of light is reduced to zero.
    The research is published in the Nature journal Light: Science & Applications.
    Any object with mass and velocity has momentum — from atoms to bullets to asteroids — and momentum can be transferred from one object to another. A gun recoils when a bullet is fired because the momentum of the bullet is transferred to the gun. At the microscopic scale, an atom recoils when it emits light because of the acquired momentum of the photon. Atomic recoil, first described by Einstein when he was writing the quantum theory of radiation, is a fundamental phenomenon which governs light emission. More
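    The quantities involved are easy to put numbers on: a photon of wavelength λ carries momentum p = h/λ, and conservation of momentum gives the emitting atom a recoil speed of p/m. A worked example with standard constants (the atom and wavelength are illustrative choices, not taken from the article):

    ```python
    # Photon momentum and atomic recoil: p = h / wavelength, v = p / m.
    h = 6.626e-34            # Planck constant, J*s
    wavelength = 780e-9      # m, the rubidium D2 line (illustrative choice)
    m_atom = 87 * 1.66e-27   # kg, approximate mass of a rubidium-87 atom

    p_photon = h / wavelength
    v_recoil = p_photon / m_atom
    print(f"photon momentum: {p_photon:.2e} kg m/s")   # ~8.5e-28 kg m/s
    print(f"recoil speed: {v_recoil * 1e3:.1f} mm/s")  # ~5.9 mm/s
    ```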

  • New device: Tallest height of any known jumper, engineered or biological

    A mechanical jumper developed by UC Santa Barbara engineering professor Elliot Hawkes and collaborators is capable of achieving the tallest height — roughly 100 feet (30 meters) — of any jumper to date, engineered or biological. The feat represents a fresh approach to the design of jumping devices and advances the understanding of jumping as a form of locomotion.
    “The motivation came from a scientific question,” said Hawkes, who as a roboticist seeks to understand the many possible methods for a machine to be able to navigate its environment. “We wanted to understand what the limits were on engineered jumpers.” While there are centuries’ worth of studies on biological jumpers (that would be us in the animal kingdom), and decades’ worth of research on mostly bio-inspired mechanical jumpers, he said, the two lines of inquiry have been kept somewhat separate.
    “There hadn’t really been a study that compares and contrasts the two and how their limits are different — whether engineered jumpers are really limited to the same laws that biological jumpers are,” Hawkes said.
    Their research is published in the journal Nature.
    Big Spring, Tiny Motor
    Biological systems have long served as the first and best models for locomotion, and that has been especially true for jumping, defined by the researchers as a “movement created by forces applied to the ground by the jumper, while maintaining a constant mass.” Many engineered jumpers have focused on duplicating the designs provided by evolution, and to great effect. More
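    Ignoring air drag and losses, the physics of such a jump reduces to an energy budget: the stored energy must be at least m·g·h, and the takeoff speed follows from v = sqrt(2·g·h). A rough sketch with an assumed 30-gram jumper (an illustrative mass, not the device’s published specification):

    ```python
    # Energy and takeoff speed needed to reach a given jump height,
    # neglecting drag and losses. The mass is an assumed illustrative value.
    g = 9.81        # m/s^2
    mass = 0.030    # kg (assumed 30-gram jumper)
    height = 30.0   # m, roughly the height reported in the article

    energy_needed = mass * g * height          # E = m * g * h
    takeoff_speed = (2 * g * height) ** 0.5    # v = sqrt(2 * g * h)
    print(f"{energy_needed:.1f} J stored, ~{takeoff_speed:.0f} m/s at takeoff")
    ```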

  • Plug-and-play organ-on-a-chip can be customized to the patient

    Engineered tissues have become a critical component for modeling diseases and testing the efficacy and safety of drugs in a human context. A major challenge for researchers has been how to model body functions and systemic diseases with multiple engineered tissues that can physiologically communicate — just like they do in the body. However, it is essential to provide each engineered tissue with its own environment so that the specific tissue phenotypes can be maintained for weeks to months, as required for biological and biomedical studies. Making the challenge even more complex is the necessity of linking the tissue modules together to facilitate their physiological communication, which is required for modeling conditions that involve more than one organ system, without sacrificing the individual engineered tissue environments.
    Novel plug-and-play multi-organ chip, customized to the patient
    Up to now, no one has been able to meet both conditions. Today, a team of researchers from Columbia Engineering and Columbia University Irving Medical Center reports that they have developed a model of human physiology in the form of a multi-organ chip consisting of engineered human heart, bone, liver, and skin tissues that are linked by vascular flow with circulating immune cells, to allow recapitulation of interdependent organ functions. The researchers have essentially created a plug-and-play multi-organ chip, which is the size of a microscope slide, that can be customized to the patient. Because disease progression and responses to treatment vary greatly from one person to another, such a chip will eventually enable personalized optimization of therapy for each patient. The study is the cover story of the April 2022 issue of Nature Biomedical Engineering.
    “This is a huge achievement for us — we’ve spent ten years running hundreds of experiments, exploring innumerable great ideas, and building many prototypes, and now at last we’ve developed this platform that successfully captures the biology of organ interactions in the body,” said the project leader Gordana Vunjak-Novakovic, University Professor and the Mikati Foundation Professor of Biomedical Engineering, Medical Sciences, and Dental Medicine.
    Inspired by the human body
    Taking inspiration from how the human body works, the team has built a human tissue-chip system in which they linked matured heart, liver, bone, and skin tissue modules by recirculating vascular flow, allowing interdependent organs to communicate just as they do in the human body. The researchers chose these tissues because they have distinctly different embryonic origins and structural and functional properties, and because they are adversely affected by cancer treatment drugs, presenting a rigorous test of the proposed approach. More