More stories

  • Bionic arm restores natural behaviors in patients with upper limb amputations

    Cleveland Clinic researchers have engineered a first-of-its-kind bionic arm for patients with upper-limb amputations that allows wearers to think, behave and function like a person without an amputation, according to new findings published in Science Robotics.
    The Cleveland Clinic-led international research team developed the bionic system, which combines three important functions: intuitive motor control, touch, and grip kinesthesia (the intuitive feeling of opening and closing the hand). Collaborators included the University of Alberta and the University of New Brunswick.
    “We modified a standard-of-care prosthetic with this complex bionic system which enables wearers to move their prosthetic arm more intuitively and feel sensations of touch and movement at the same time,” said lead investigator Paul Marasco, Ph.D., associate professor in Cleveland Clinic Lerner Research Institute’s Department of Biomedical Engineering. “These findings are an important step towards providing people with amputation with complete restoration of natural arm function.”
    The system is the first to test all three sensory and motor functions in a neural-machine interface all at once in a prosthetic arm. The neural-machine interface connects with the wearer’s limb nerves. It enables patients to send nerve impulses from their brains to the prosthetic when they want to use or move it, and to receive physical information from the environment and relay it back to their brain through their nerves.
    The artificial arm’s bi-directional feedback and control enabled study participants to perform tasks with a degree of accuracy similar to that of non-disabled people.
    “Perhaps what we were most excited to learn was that they made judgments, decisions and calculated and corrected for their mistakes like a person without an amputation,” said Dr. Marasco, who leads the Laboratory for Bionic Integration. “With the new bionic limb, people behaved like they had a natural hand. Normally, these brain behaviors are very different between people with and without upper limb prosthetics.” Dr. Marasco also holds appointments in Cleveland Clinic’s Charles Shor Epilepsy Center and at the Cleveland VA Medical Center’s Advanced Platform Technology Center.
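    A rough schematic of such a bi-directional loop is sketched below. The function names, sensor readings and thresholds are illustrative assumptions, not the Cleveland Clinic system's actual interface; the sketch only shows the shape of the loop, with motor intent flowing out to the prosthetic and touch and kinesthetic feedback flowing back.
      # Toy sketch of a bi-directional neural-machine interface loop
      # (hypothetical names and values; not the Cleveland Clinic implementation).
      import random
      import time

      def read_motor_intent():
          """Stand-in for decoding intended grip from the wearer's nerve/muscle signals."""
          return random.uniform(0.0, 1.0)               # 0 = hand fully open, 1 = fully closed

      def drive_prosthetic(grip_command):
          """Stand-in for commanding the prosthetic hand's actuators."""
          return {"grip_force": grip_command * 10.0,    # simulated fingertip force (N)
                  "hand_aperture": 1.0 - grip_command}  # simulated openness (0 to 1)

      def feed_back_to_nerves(state):
          """Stand-in for returning touch and kinesthetic sensations to the wearer."""
          touch = state["grip_force"] > 1.0             # touch percept once force builds
          kinesthesia = state["hand_aperture"]          # perceived opening/closing of the hand
          return touch, kinesthesia

      for _ in range(5):                                # a few cycles of the loop
          intent = read_motor_intent()                  # brain -> prosthetic (motor control)
          state = drive_prosthetic(intent)
          touch, kin = feed_back_to_nerves(state)       # prosthetic -> brain (touch, kinesthesia)
          print(f"intent={intent:.2f}  force={state['grip_force']:.1f} N  "
                f"touch={touch}  kinesthesia={kin:.2f}")
          time.sleep(0.01)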

  • This rainbow-making tech could help autonomous vehicles read signs

    A new study explains the science behind microscale concave interfaces (MCIs): structures that reflect light to produce beautiful and potentially useful optical phenomena.
    “It is vital to be able to explain how a technology works to someone before you attempt to adopt it. Our new paper defines how light interacts with microscale concave interfaces,” says University at Buffalo engineering researcher Qiaoqiang Gan, noting that future applications of these effects could include aiding autonomous vehicles in recognizing traffic signs.
    The research was published online on Aug. 15 in Applied Materials Today, and is featured in the journal’s September issue.
    Gan, PhD, professor of electrical engineering in the UB School of Engineering and Applied Sciences, led the collaborative study, which was conducted by a team from UB, the University of Shanghai for Science and Technology, Fudan University, Texas Tech University and Hubei University. The first authors are Jacob Rada, UB PhD student in electrical engineering, and Haifeng Hu, PhD, professor of optical-electrical and computer engineering at the University of Shanghai for Science and Technology.
    Reflections that form concentric rings of light
    The study focuses on a retroreflective material: a thin film consisting of polymer microspheres laid down on the sticky side of a transparent tape. The microspheres are partially embedded in the tape, and the parts that protrude form MCIs.
    White light shining on this film is reflected in a way that causes the light to create concentric rainbow rings, the new paper reports. Alternatively, hitting the material with a single-colored laser (red, green or blue, in the case of this study) generates a pattern of bright and dark rings. Reflections from infrared lasers also produced distinctive signals consisting of concentric rings.
    The research describes these effects in detail, and reports on experiments that used the thin film in a stop sign. The patterns formed by the material showed up clearly both on a visual camera that detects visible light and on a LIDAR (laser imaging, detection and ranging) camera that detects infrared signals, says Rada, the co-first author from UB.
    “Currently, autopilot systems face many challenges in recognizing traffic signs, especially in real-world conditions,” Gan says. “Smart traffic signs made from our material could provide more signals for future systems that use LIDAR and visible pattern recognition together to identify important traffic signs. This may be helpful to improve the traffic safety for autonomous cars.”
    “We demonstrated a new combined strategy to enhance the LIDAR signal and visible pattern recognition that are currently performed by both visible and infrared cameras,” Rada says. “Our work showed that the MCI is an ideal target for LIDAR cameras, due to the constantly strong signals that are produced.”
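    As a loose illustration of how a vision system might look for such a ring signature, the short sketch below (not the authors' pipeline) uses OpenCV's Hough circle transform to flag a frame containing several roughly concentric circles; the file name and all thresholds are assumptions for the example.
      # Minimal sketch: flag a candidate MCI-tagged sign by looking for several
      # roughly concentric circles in a visible-light frame. Hypothetical file
      # name and thresholds; not the published detection method.
      import cv2
      import numpy as np

      frame = cv2.imread("camera_frame.png")            # hypothetical camera frame
      gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)

      circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=5,
                                 param1=100, param2=40, minRadius=5, maxRadius=300)

      def is_concentric(circles, center_tol=10.0, min_rings=3):
          """True if at least `min_rings` detected circles share roughly one centre."""
          if circles is None:
              return False
          c = np.round(circles[0]).astype(int)          # rows of (x, y, radius)
          for x, y, _ in c:
              shared = np.sum(np.hypot(c[:, 0] - x, c[:, 1] - y) < center_tol)
              if shared >= min_rings:
                  return True
          return False

      print("Concentric-ring (MCI-like) signature found:", is_concentric(circles))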
    A U.S. patent for the retroreflective material has been issued, as well as a counterpart in China, with Fudan University and UB as the patent-holders. The technology is available for licensing.
    Gan says future plans include testing the film using different wavelengths of light, and different materials for the microspheres, with the goal of enhancing performance for possible applications such as traffic signs designed for future autonomous systems.
    Story Source:
    Materials provided by University at Buffalo. Original written by Charlotte Hsu.

  • Machine learning algorithm revolutionizes how scientists study behavior

    To Eric Yttri, assistant professor of biological sciences and Neuroscience Institute faculty at Carnegie Mellon University, the best way to understand the brain is to watch how organisms interact with the world.
    “Behavior drives everything we do,” Yttri said.
    As a behavioral neuroscientist, Yttri studies what happens in the brain when animals walk, eat, sniff or do any action. This kind of research could help answer questions about neurological diseases or disorders like Parkinson’s disease or stroke. But identifying and predicting animal behavior is extremely difficult.
    Now, a new unsupervised machine learning algorithm developed by Yttri and Alex Hsu, a biological sciences Ph.D. candidate in his lab, makes studying behavior much easier and more accurate. The researchers published a paper on the new tool, B-SOiD (Behavioral segmentation of open field in DeepLabCut), in Nature Communications.
    Previously, the standard method to capture animal behavior was to track very simple actions, like whether a trained mouse pressed a lever or whether an animal was eating food or not. Alternatively, the experimenter could spend hours and hours manually identifying behavior, usually frame by frame on a video, a process prone to human error and bias.
    Hsu realized he could let an unsupervised learning algorithm do the time-consuming work. B-SOiD discovers behaviors by identifying patterns in the position of an animal’s body. The algorithm works with computer vision software and can tell researchers what behavior is happening at every frame in a video.
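    The general idea (though not B-SOiD's actual pipeline) can be sketched in a few lines: turn each video frame's pose keypoints into simple features and let a clustering algorithm group the frames into putative behaviors. The synthetic pose data and the cluster count below are illustrative assumptions.
      # Minimal sketch of unsupervised behavioral segmentation from pose keypoints.
      # Synthetic data and a fixed cluster count stand in for real tracked video.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      n_frames, n_keypoints = 2000, 6
      pose = rng.normal(size=(n_frames, n_keypoints, 2))      # (frame, keypoint, x/y)

      # Per-frame features: all inter-keypoint distances plus keypoint speeds.
      dists = np.linalg.norm(pose[:, :, None, :] - pose[:, None, :, :], axis=-1)
      dists = dists.reshape(n_frames, -1)
      speeds = np.vstack([np.zeros((1, n_keypoints)),
                          np.linalg.norm(np.diff(pose, axis=0), axis=-1)])
      features = StandardScaler().fit_transform(np.hstack([dists, speeds]))

      # Unsupervised clustering: each cluster is a candidate behavior, one label per frame.
      labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
      print(labels[:20])                                      # behavior label per frame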

  • Exploring the past: Computational models shed new light on the evolution of prehistoric languages

    A new linguistic study sheds light on the nature of languages spoken before the written period, using computational modeling to reconstruct the grammar of the 6,500- to 7,000-year-old Proto-Indo-European language, the ancestor of most languages of Eurasia, including English and Hindi. The model employed makes it possible to observe evolutionary trends in language over the millennia. The article, “Reconstructing the evolution of Indo-European grammar,” authored by Gerd Carling (Lund University) and Chundra Cathcart (University of Zurich), will be published in the September 2021 issue of the scholarly journal Language.
    In the article, Carling & Cathcart use a database of features from 125 different languages of the Indo-European family, including extinct languages such as Sanskrit and Latin. Features include most of the differences that make the languages difficult to learn, such as differentiation in word order (the girl throws the stone in English or caitheann an cailín an chloch “throws the girl the stone” in Irish), gender (the apple in English or der Apfel in German), number of cases, number of forms of the verb, or whether languages have prepositions or postpositions (to the house in English but ghar ko “house-to” in Hindi). With the aid of methods adopted from computational biology, the authors use known grammars to reconstruct grammars of unknown prehistorical periods.
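    As a toy illustration of what reconstructing grammar "with methods adopted from computational biology" can mean (the study itself uses far more sophisticated statistical phylogenetic models), the sketch below runs Fitch parsimony on a made-up four-language tree for a single binary feature; the tree, the feature coding and the tip states are assumptions for the example.
      # Toy ancestral-state reconstruction with Fitch parsimony.
      # Binary feature, e.g. 1 = "has prepositions", 0 = "has postpositions".
      tip_states = {"English": {1}, "German": {1}, "Hindi": {0}, "Irish": {1}}

      # A made-up tree as nested (left, right) tuples with language names at the tips.
      tree = (("English", "German"), ("Hindi", "Irish"))

      def fitch(node):
          """Bottom-up Fitch pass: return the parsimonious state set for `node`."""
          if isinstance(node, str):                 # tip: its observed state set
              return tip_states[node]
          left, right = (fitch(child) for child in node)
          return (left & right) or (left | right)   # intersection if possible, else union

      root_states = fitch(tree)
      print("Most parsimonious root (proto-language) states:", root_states)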
    The reconstruction of Indo-European grammar has been the subject of lengthy discussion for over a century. In the 19th century, scholars held the view that the ancient written languages, such as Classical Greek, were most similar to the reconstructed Proto-Indo-European language. The discovery of the archaic but highly dissimilar Hittite language in the early 20th century shifted the focus. Instead, scholars believed that Proto-Indo-European was a language with a structure more similar to non-Indo-European languages of Eurasia such as Basque or languages of the Caucasus region.
    The study confirms that Proto-Indo-European was similar to Classical Greek and Sanskrit, supporting the theory of the 19th century scholars. However, the study also provides new insights into the mechanisms of language change. Some features of the proto-language were very stable and dominant over time. Moreover, features of higher prominence and frequency were less likely to change.
    Though this study focused on one single family (Indo-European), the methods used in the paper can be applied to many other language families to reconstruct earlier states of languages and to observe how language evolves over time. The model also forms a basis for predicting future changes in language evolution.
    Story Source:
    Materials provided by the Linguistic Society of America.

  • Revealing the hidden structure of quantum entangled states

    Quantum states that are entangled in many dimensions are key to emerging quantum technologies: more dimensions mean higher quantum bandwidth (faster communication) and better resilience to noise (greater security), both crucial for fast, secure communication and for speeding up error-free quantum computing. Now researchers at the University of the Witwatersrand in Johannesburg, South Africa, together with collaborators from Scotland, have invented a new approach to probing these “high-dimensional” quantum states, reducing the measurement time from decades to minutes.
    The study was published in the scientific journal Nature Communications on Friday, 27 August 2021. Wits PhD student Isaac Nape worked with Distinguished Professor Andrew Forbes, lead investigator on this study and Director of the Structured Light Laboratory in the School of Physics at Wits University, as well as postdoctoral fellow Dr Valeria Rodriguez-Fajardo, visiting Taiwanese researcher Dr Hsiao-Chih Huang, and Dr Jonathan Leach and Dr Feng Zhu from Heriot-Watt University in Scotland.
    In their paper, titled “Measuring dimensionality and purity of high-dimensional entangled states,” the team outlined a new approach to quantum measurement and tested it on a 100-dimensional quantum entangled state. With traditional approaches, the measurement time increases unfavourably with dimension, so that unravelling a 100-dimensional state by full ‘quantum state tomography’ would take decades. Instead, the team showed that the salient information about the quantum system (how many dimensions are entangled, and to what level of purity) could be deduced in just minutes. The new approach requires only simple ‘projections’ that could easily be performed in most laboratories with conventional tools. Using light as an example, the team used an all-digital approach to perform the measurements.
    The problem, explains Forbes, is that while high-dimensional states are easily made, particularly with entangled particles of light (photons), they are not easy to measure: the toolbox for measuring and controlling them is almost empty. You can think of a high-dimensional quantum state as the faces of a dice. A conventional dice has six faces, numbered 1 through 6, giving a six-dimensional alphabet that can be used for computing, or for transferring information in communication. To make a “high-dimensional dice” means making a dice with many more faces: 100 dimensions equals 100 faces, a rather complicated polyhedron. In our everyday world it would be easy to count the faces to know what sort of resource was available, but not so in the quantum world. In the quantum world, you can never see the whole dice, so counting the faces is very difficult. The way around this is to do a tomography, as in the medical world, building up a picture from many, many slices of the object. But the information in quantum objects can be enormous, so the time for this process is prohibitive. A faster approach is a ‘Bell measurement’, a famous test of whether what you have in front of you is entangled, like asking it “are you quantum or not?” But while this confirms quantum correlations of the dice, it doesn’t say much about the number of faces it has.
    “Our work circumvented the problem through a chance discovery: there is a set of measurements that is neither a tomography nor a Bell measurement, but that holds important information from both,” says Isaac Nape, the PhD student who carried out the research. “In technical parlance, we blended these two measurement approaches, doing multiple projections that look like a tomography but measuring the visibilities of the outcomes as if they were Bell measurements. This revealed the hidden information that could be extracted from the strength of the quantum correlations across many dimensions.” The combination of speed from the Bell-like approach and information from the tomography-like approach meant that key quantum parameters, such as the dimensionality and purity of the quantum state, could be determined quickly and quantitatively; it is the first approach to do so.
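    To make the two target quantities concrete, the short numerical sketch below (not the authors' measurement scheme) computes the purity and a standard fidelity-based lower bound on the entangled dimensionality for an assumed 10-dimensional entangled state mixed with white noise; the isotropic-noise model and the parameters d and p are illustrative assumptions.
      # Sketch of the quantities the fast probe estimates: purity and a lower bound
      # on how many dimensions are entangled, for an assumed noisy entangled state.
      import numpy as np

      d, p = 10, 0.9                                    # local dimension, "signal" fraction

      # Maximally entangled state |Phi> = (1/sqrt(d)) * sum_i |i>|i> in a d x d system.
      phi = np.zeros(d * d)
      phi[np.arange(d) * d + np.arange(d)] = 1 / np.sqrt(d)

      # Isotropic state: the entangled state mixed with white noise.
      rho = p * np.outer(phi, phi) + (1 - p) * np.eye(d * d) / (d * d)

      purity = np.trace(rho @ rho)                      # Tr(rho^2); equals 1 for a pure state
      fidelity = float(phi @ rho @ phi)                 # overlap with the ideal entangled state

      # Standard fidelity witness: fidelity > k/d certifies entanglement in more than
      # k dimensions, so the largest such k+1 is a certified dimensionality lower bound.
      certified_dim = max(k + 1 for k in range(d) if fidelity > k / d)

      print(f"purity = {purity:.3f}, fidelity = {fidelity:.3f}, "
            f"entangled dimensions >= {certified_dim}")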
    “We are not suggesting that our approach replace other techniques,” says Forbes. “Rather, we see it as a fast probe to reveal what you are dealing with, and then use this information to make an informed decision on what to do next. A case of horses-for-courses.” For example, the team see their approach as changing the game in real-world quantum communication links, where a fast measurement of how noisy that quantum state has become and what this has done to the useful dimensions is crucial.
    Story Source:
    Materials provided by the University of the Witwatersrand.

  • New artificial intelligence tech set to transform heart imaging

    A new artificial-intelligence technology for heart imaging can potentially improve care for patients, allowing doctors to examine their hearts for scar tissue while eliminating the need for contrast injections required for traditional cardiovascular magnetic resonance imaging (CMR).
    A team of researchers who developed the technology, including doctors at UVA Health, reports the success of the approach in a new article in the scientific journal Circulation. The team compared its AI approach, known as Virtual Native Enhancement (VNE), with contrast-enhanced CMR scans now used to monitor hypertrophic cardiomyopathy, the most common genetic heart condition. The researchers found that VNE produced higher-quality images and better captured evidence of scar in the heart, all without the need for injecting the standard contrast agent required for CMR.
    “This is a potentially important advance, especially if it can be expanded to other patient groups,” said researcher Christopher Kramer, MD, the chief of the Division of Cardiovascular Medicine at UVA Health, Virginia’s only Center of Excellence designated by the Hypertrophic Cardiomyopathy Association. “Being able to identify scar in the heart, an important contributor to progression to heart failure and sudden cardiac death, without contrast would be highly significant. CMR scans would be done without contrast, saving cost and any risk, albeit low, from the contrast agent.”
    Imaging Hypertrophic Cardiomyopathy
    Hypertrophic cardiomyopathy is the most common inheritable heart disease, and the most common cause of sudden cardiac death in young athletes. It causes the heart muscle to thicken and stiffen, reducing its ability to pump blood and requiring close monitoring by doctors.
    The new VNE technology will allow doctors to image the heart more often and more quickly, the researchers say. It also may help doctors detect subtle changes in the heart earlier, though more testing is needed to confirm that.
    The technology also would benefit patients who are allergic to the contrast agent injected for CMR, as well as patients with severely failing kidneys, a group that avoids the use of the agent.
    The new approach works by using artificial intelligence to enhance “T1-maps” of the heart tissue created by magnetic resonance imaging (MRI). These maps are combined with enhanced MRI “cines,” which are like movies of moving tissue, in this case the beating heart. Overlaying the two types of images creates the artificial VNE image.
    Based on these inputs, the technology can produce something virtually identical to the traditional contrast-enhanced CMR heart scans doctors are accustomed to reading — only better, the researchers conclude. “Avoiding the use of contrast and improving image quality in CMR would only help both patients and physicians down the line,” Kramer said.
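    Schematically, the mapping can be pictured as a small image-to-image network with two input channels (a T1 map and a cine frame) and one output channel (the virtual enhancement image). The sketch below only illustrates that shape, with random tensors in place of real scans; it is not the published VNE architecture, and the layer sizes and loss are assumptions.
      # Schematic two-channel image-to-image mapping (illustration only, not VNE itself).
      import torch
      import torch.nn as nn

      model = nn.Sequential(
          nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),   # inputs: T1 map + cine frame
          nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
          nn.Conv2d(16, 1, kernel_size=3, padding=1),              # output: virtual enhancement image
      )

      t1_map = torch.rand(1, 1, 128, 128)        # stand-in for a native T1 map
      cine = torch.rand(1, 1, 128, 128)          # stand-in for one cine frame
      vne_image = model(torch.cat([t1_map, cine], dim=1))

      # In training, the output would be compared against contrast-enhanced CMR images
      # (here a random target), so no contrast injection is needed at inference time.
      loss = nn.functional.l1_loss(vne_image, torch.rand_like(vne_image))
      print(vne_image.shape, float(loss))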
    While the new research examined VNE’s potential in patients with hypertrophic cardiomyopathy, the technology’s creators envision it being used for many other heart conditions as well.
    “While currently validated in the HCM population, there is a clear pathway to extend the technology to a wider range of myocardial pathologies,” they write. “VNE has enormous potential to significantly improve clinical practice, reduce scan time and costs, and expand the reach of CMR in the near future.”

  • Quantum networks in our future

    Large-scale quantum networks have been proposed, but so far, they do not exist. Some components of what would make up such networks are being studied, but the control mechanism for such a large-scale network has not been developed. In AVS Quantum Science, by AIP Publishing, investigators outline how a time-sensitive network control plane could be a key component of a workable quantum network.
    Quantum networks are similar to classical networks. Information travels through them, providing a means of communication between devices and over distances. Quantum networks move quantum bits of information, called qubits, through the network.
    These qubits are usually photons. Through the quantum phenomena of superposition and entanglement, they can transmit much more information than classical bits, which are limited to the logical states 0 and 1. Successful long-distance transmission of a qubit requires precise control and timing.
    In addition to the well-understood requirements of transmission distance and data rate, at least two other industry requirements need to be considered for quantum networks to be useful in a real-world setting.
    One is real-time network control, specifically time-sensitive networking. This control method, which takes network traffic into account, has been used successfully in other types of networks, such as Ethernet, to ensure messages are transmitted and received at precise times. This is precisely what is required to control quantum networks.
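    The core idea of time-sensitive scheduling (releasing each message only at a pre-agreed time slot, so the receiver knows exactly when to expect it) can be illustrated with a toy loop; the slot length and payloads below are assumptions, not a real time-sensitive-networking stack.
      # Toy illustration of time-slotted message release for precise timing.
      import time

      SLOT = 0.010                                    # assumed 10 ms time slots
      payloads = ["control update 1", "control update 2", "control update 3"]

      start = time.monotonic()
      for i, payload in enumerate(payloads):
          deadline = start + (i + 1) * SLOT           # slot boundary for this message
          time.sleep(max(0.0, deadline - time.monotonic()))   # wait for the slot to open
          jitter_us = (time.monotonic() - deadline) * 1e6
          print(f"slot {i + 1}: sent {payload!r} (release jitter ~{jitter_us:.0f} us)")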
    The second requirement is cost. Large-scale adoption of an industrial quantum network will only happen if costs can be significantly reduced. One way to accomplish cost reduction is with photonic integrated circuits.

  • Standards for studies using machine learning

    Researchers in the life sciences who use machine learning for their studies should adopt standards that allow other researchers to reproduce their results, according to a comment article published today in the journal Nature Methods.
    The authors explain that the standards are key to advancing knowledge and to ensuring that research findings are reproducible from one group of scientists to the next. The standards would allow other groups of scientists to focus on the next breakthrough rather than spending time re-creating work already done by the authors of the original study.
    Casey S. Greene, PhD, director of the University of Colorado School of Medicine’s Center for Health AI, is a corresponding author of the article, which he co-authored with first author Benjamin J. Heil, a member of Greene’s research team, and researchers from the United States, Canada, and Europe.
    “Ultimately all science requires trust — no scientist can reproduce the results from every paper they read,” Greene and his co-authors write. “The question, then, is how to ensure that machine-learning analyses in the life sciences can be trusted.”
    Greene and his co-authors outline standards to qualify for one of three levels of accessibility: bronze, silver, and gold. These standards each set minimum levels for sharing study materials so that other life science researchers can trust the work and, if warranted, validate the work and build on it.
    To qualify for a bronze standard, life science researchers would need to make their data, code, and models publicly available. In machine learning, computers learn from training data and having access to that data enables scientists to look for problems that can confound the process. The code tells future researchers how the computer was told to carry out the steps of the work.
    In machine learning, the resulting model is critically important. For future researchers, knowing the original research team’s model is critical for understanding how it relates to the data it is supposed to analyze. Without access to the model, other researchers cannot determine biases that might influence the work. For example, it can be difficult to determine whether an algorithm favors one group of people over another.
    “Being unable to examine a model also makes trusting it difficult,” the authors write.
    The silver standard calls for the data, models, and code provided at the bronze level, plus enough information about the computational environment in which the code was run. For the next scientists, that information makes it theoretically possible to duplicate the training process.
    To qualify for the gold standard, researchers must add an “easy button” to their work to make it possible for future researchers to reproduce the previous analysis with a single command. The original researchers must automate all steps of their analysis so that “the burden of reproducing their work is as small as possible.” For the next scientists, this information makes it practically possible to duplicate the training process and either adapt or extend it.
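    In practice, a gold-standard “easy button” might look like a single entry-point script that chains every step of the analysis; the script names, paths and seed below are placeholders for illustration, not any particular study’s pipeline.
      # Hypothetical single-command reproduction script: `python reproduce.py` re-runs
      # data download, training and evaluation end to end. All names are placeholders.
      import subprocess
      import sys

      STEPS = [
          [sys.executable, "download_data.py", "--out", "data/"],      # fetch archived training data
          [sys.executable, "train_model.py", "--data", "data/",
           "--seed", "42", "--out", "models/model.pkl"],               # deterministic retraining
          [sys.executable, "evaluate.py", "--model", "models/model.pkl",
           "--report", "results/metrics.json"],                        # regenerate reported metrics
      ]

      def main():
          for cmd in STEPS:
              print("running:", " ".join(cmd))
              subprocess.run(cmd, check=True)   # abort the reproduction if any step fails

      if __name__ == "__main__":
          main()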
    Greene and his co-authors also offer recommendations for documenting the steps and sharing them.
    The Nature Methods article is an important contribution to the continuing refinement of the use of machine learning and other data-analysis methods in health sciences and other fields where trust is particularly important. Greene is one of several leaders recently recruited by the CU School of Medicine to establish a program in developing and applying robust data science methodologies to advance biomedical research, education, and clinical care.