More stories

  •

    Individual back training machine developed

    Back pain is extremely widespread. According to figures in the most recent 2023 Health Report, issued by the German health insurer DAK, around 18 percent of cases in which employees submit sick notes involve musculoskeletal ailments, above all back complaints. After topping the table of individual diagnoses in 2022, back pain still ranks high, just behind COVID-19 and respiratory ailments. It is pleasing to note that the latest report shows a slight decline in the percentage of back-related conditions in total reported absences, from 6.5% to 5.3%.
    However: “Even young people are reporting back pain in increasing numbers. This trend didn’t just start with the Covid-19 lockdowns,” says Prof. Rainer Burgkart of TUM Klinikum rechts der Isar. In the Burden of Disease study conducted in 2020 by the Robert Koch Institute (RKI), based on data from over 5,000 patients in Germany, almost two thirds of the respondents (61.3%) reported having experienced back pain in the previous year. Lower back pain affected 55% of women and 48.6% of men, while one in three women (32.6%) and one in five men (22%) suffered from upper back pain. Some years ago the Institute for Health Economics and Management of the Ludwig-Maximilians-Universität (LMU) estimated the economic impact of these ailments at around 50 billion euros. What can be done? “Physiotherapy and targeted muscle and coordination training are highly effective and are often prescribed for frequently diagnosed, non-specific back pain,” says Dr. Burgkart, an orthopedic specialist. “However, on completing targeted treatment, most patients slip back into their old patterns of behavior and their back muscles become weaker again.” An invention by TUM and Klinikum rechts der Isar — the GyroTrainer — is designed to promote long-term, tailor-made back exercises in the future.
    GyroTrainer: Algorithm decides on intensity of training
    Prof. Burgkart from Klinikum rechts der Isar, in cooperation with the Munich Institute of Robotic and Machine Intelligence (MIRMI) at TUM, the fitness equipment manufacturer Erhard Peuker GmbH, and the hardware and software specialist B&W Embedded Solutions GmbH, developed the GyroTrainer — a back muscle training device that can be adapted to the abilities of individual users. The work was carried out in a three-year research project. The GyroTrainer is based on a round platform 50 cm in diameter. It can be tilted to the front, back and sideways, and can also rotate. It resembles a gyroscope, which is designed to remain balanced in a wide range of configurations and positions.
    Balance board as the starting point
    A similar principle is used in the GyroTrainer. Users step onto the round platform and try to keep their balance. Sensors and electric motors located below the platform register the user’s movements and can tilt and rotate the disk. The device works like a balance board, with the difference that the stiffness can be varied. The challenge is for users to keep their balance. “Preparing the device correctly is not a simple matter of adjusting it for the individual user,” says researcher Elisabeth Jensen from MIRMI. “First we have to find the right stiffness for that person.” If the user can comfortably keep their balance at a given stiffness level for a certain period of time, a learning algorithm decides on the right initial setting for the platform so that it is neither too easy nor too difficult for the person.
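The stiffness search described above can be sketched as a simple adaptive loop. Everything here (the bisection strategy, the stiffness scale, the `can_balance` probe) is a hypothetical illustration, not the actual TUM algorithm:

```python
# Minimal sketch of an adaptive difficulty search (hypothetical parameters):
# lower stiffness = harder. We look for the stiffness at which the user can
# just barely keep balance for a trial period.

def find_initial_stiffness(can_balance, s_min=0.0, s_max=1.0, tol=0.05):
    """Bisect between the hardest (s_min) and easiest (s_max) setting.

    can_balance(s) -> True if the user kept balance at stiffness s.
    Returns a stiffness that is neither too easy nor too difficult.
    """
    lo, hi = s_min, s_max
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if can_balance(mid):
            hi = mid          # user coped: try a harder (softer) setting
        else:
            lo = mid          # too hard: ease off
    return hi

# Toy user model: balances whenever stiffness >= 0.3
level = find_initial_stiffness(lambda s: s >= 0.3)
```

In a real device the `can_balance` probe would be replaced by sensor measurements over a trial period, and the search would keep running during training to track fatigue and progress.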
    Gaming concept: strengthening the back by playing a game
    Then the actual training can begin. “Our cooperation partners have developed a computer game where the control comes from the user’s movements,” explains TUM researcher Jensen. It is modelled on the Space Invaders game. The player’s spaceship automatically fires at the invaders at regular intervals while trying to evade incoming shots. “This takes skill and concentration,” explains Jensen. The less rigid the platform setting, the harder it is to maintain stability and steer the spaceship. “It is also possible to add disruption factors,” explains the orthopedics specialist Burgkart. The platform rotates suddenly to the left or right, which makes it even harder for the user to stay balanced. “At the start, the platform feels quite firm under the user’s feet, but gradually becomes more unstable. And finally, for users in very good condition, it starts giving extra pushes,” explains Burgkart. Using electromyography (EMG) sensors, the team confirmed that the system effectively activates the abdominal and back muscles that are important for spinal stability, and that the activity becomes even more challenging with the rotational movement. The less rigid the system becomes and the more frequently the sudden rotations occur, the greater the demands on the muscles. “Balancing movements are among the most effective methods,” says Burgkart. He believes that the new training device should be used mainly for preventive purposes, both for primary patients, who have “elevated risk,” and secondary patients, who have suffered from back pain in the past.
    Next steps: from the concept to the product
    After nearly three years of research, it is now clear: the GyroTrainer functions as intended and fulfils its medical purpose. “There are still a few steps to take before it can be used as a product,” says Prof. Burgkart. The most important requirement for the future: the researchers want the device — which for safety reasons still has to be operated by TUM researchers — to be suitable for use without a physiotherapist or trainer. They also want it to be capable of adjusting dynamically to the ability of the individual user. The GyroTrainer already determines the individual stiffness via approximations and can make adjustments at any time using the measured data. In the future, the artificial intelligence function of the device will work as an independent, secure logical system to set the initial rigidity and select the difficulty level of the corresponding game options. It will also be able to make adjustments based on how the user is feeling on the day, fatigue levels and personal training progress. A final important requirement for the new back trainer: it has to fit into any living room. Prof. Burgkart’s vision: “The machine has to be mobile so that people can train on a regular basis without having to go to a physiotherapist.”

  •

    New twist on AI makes the most of sparse sensor data

    An innovative approach to artificial intelligence (AI) makes it possible to reconstruct a broad field of data, such as overall ocean temperature, from a small number of field-deployable sensors using low-powered “edge” computing, with broad applications across industry, science and medicine.
    “We developed a neural network that allows us to represent a large system in a very compact way,” said Javier Santos, a Los Alamos National Laboratory researcher who applies computational science to geophysical problems. “That compactness means it requires fewer computing resources compared to state-of-the-art convolutional neural network architectures, making it well-suited to field deployment on drones, sensor arrays and other edge-computing applications that put computation closer to its end use.”
    Novel AI approach boosts computing efficiency
    Santos is first author of a paper published by a team of Los Alamos researchers in Nature Machine Intelligence on the novel AI technique, which they dubbed Senseiver. The work, which builds on an AI model called Perceiver IO developed by Google, applies the techniques of natural-language models such as ChatGPT to the problem of reconstructing information about a broad area — such as the ocean — from relatively few measurements.
    The team realized the model would have broad application because of its efficiency. “Using fewer parameters and less memory requires fewer central processing unit cycles on the computer, so it runs faster on smaller computers,” said Dan O’Malley, a coauthor of the paper and Los Alamos researcher who applies machine learning to geoscience problems.
    In a first in the published literature, Santos and his Los Alamos colleagues validated the model by demonstrating its effectiveness on real-world sets of sparse data — meaning information taken from sensors that cover only a tiny portion of the field of interest — and on complex data sets of three-dimensional fluids.
    In a demonstration of the real-world utility of the Senseiver, the team applied the model to a National Oceanic and Atmospheric Administration sea-surface-temperature dataset. The model was able to integrate a multitude of measurements taken over decades from satellites and sensors on ships. From these sparse point measurements, the model forecast temperatures across the entire body of the ocean, which provides information useful to global climate models.
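As a rough intuition for sparse-field reconstruction, the sketch below uses fixed inverse-distance weighting as a simplified stand-in; the actual Senseiver replaces this hand-written rule with a learned attention-based mapping, and all numbers here are illustrative:

```python
import numpy as np

# Estimate a 1-D "temperature" field from a handful of point sensors.
def reconstruct(sensor_x, sensor_v, query_x, p=2.0):
    """Inverse-distance-weighted estimate of field values at query_x."""
    d = np.abs(query_x[:, None] - sensor_x[None, :]) + 1e-9
    w = 1.0 / d**p                       # closer sensors count more
    return (w * sensor_v).sum(axis=1) / w.sum(axis=1)

true_field = lambda x: np.sin(x)         # hidden field we try to recover
xs = np.array([0.0, 1.5, 3.0, 4.5, 6.0]) # 5 sparse sensor positions
vs = true_field(xs)                      # sparse measurements
grid = np.linspace(0, 6, 200)            # dense query points
est = reconstruct(xs, vs, grid)
err = np.mean(np.abs(est - true_field(grid)))
```

The point of the learned approach is that, unlike this fixed rule, it can exploit structure in the data (currents, seasonal patterns) and do so compactly enough to run on edge hardware.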

    Bringing AI to drones and sensor networks
    The Senseiver is well-suited to a variety of projects and research areas of interest to Los Alamos.
    “Los Alamos has a wide range of remote sensing capabilities, but it’s not easy to use AI because models are too big and don’t fit on devices in the field, which leads us to edge computing,” said Hari Viswanathan, Los Alamos National Laboratory Fellow, environmental scientist and coauthor of the paper about the Senseiver. “Our work brings the benefits of AI to drones, networks of field-based sensors and other applications currently beyond the reach of cutting-edge AI technology.”
    The AI model will be particularly useful in the Lab’s work identifying and characterizing orphaned wells. The Lab leads the Department of Energy-funded Consortium Advancing Technology for Assessment of Lost Oil & Gas Wells (CATALOG), a federal program tasked with locating and characterizing undocumented orphaned wells and measuring their methane emissions. Viswanathan is the lead scientist of CATALOG.
    The approach offers improved capabilities for large, practical applications such as self-driving cars, remote modeling of assets in oil and gas, medical monitoring of patients, cloud gaming, content delivery and contaminant tracing.

  •

    Keep it secret: Cloud data storage security approach taps quantum physics

    Distributed cloud storage is a hot topic for security researchers around the globe pursuing secure data storage, and a team in China is now merging quantum physics with mature cryptography and storage techniques to achieve a cost-effective cloud storage solution.
    Shamir’s secret sharing is a well-known key distribution algorithm. It involves distributing private information to a group so that “the secret” can be revealed only when a sufficient number of members pool their knowledge. It’s common to combine quantum key distribution (QKD) and Shamir’s secret sharing algorithm for secure storage — at the utmost security level. But utmost-security solutions tend to bring substantial cost baggage, including significant cloud storage space requirements.
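Shamir's scheme can be sketched in a few lines: the secret becomes the constant term of a random degree-(k-1) polynomial over a prime field, shares are points on that polynomial, and any k points recover the constant term by Lagrange interpolation. This is an illustrative toy, not the paper's implementation:

```python
import random

# Minimal (k, n) Shamir secret sharing over a prime field: any k of the
# n shares reconstruct the secret; fewer than k reveal nothing.
P = 2**127 - 1  # a Mersenne prime large enough for small integer secrets

def split(secret, k, n):
    """Return n shares, any k of which recover the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, k=3, n=5)
recovered = combine(shares[:3])   # any 3 of the 5 shares suffice
```

Production systems operate on key material rather than raw integers and use vetted cryptographic libraries, but the algebra is exactly this.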
    In AIP Advances, the team presents its method that uses quantum random numbers as encryption keys, disperses the keys via Shamir’s secret sharing algorithm, applies erasure coding within ciphertext, and securely transmits the data through QKD-protected networks to distributed clouds.
    Their method not only provides quantum security to the entire system but also offers fault tolerance and efficient storage — and this may help speed the adoption of quantum technologies.
    “In essence, our solution is quantum-secure and serves as a practical application of the fusion between quantum and cryptography technologies,” said corresponding author Yong Zhao, vice president of QuantumCTek Co. Ltd., a quantum information technology company. “QKD-generated keys secure both user data uploads to servers and data transmissions to dispersed cloud storage nodes.”
    The team explored whether quantum security services could expand beyond secure data transmission to offer a richer spectrum of quantum security applications such as data storage and processing.
    They came up with a more secure and cost-effective fault-tolerant cloud storage solution. “It not only achieves quantum security but also saves storage space when compared to traditional mirroring methods or ones based on Shamir’s secret sharing, which is commonly used for distributed management of sensitive data,” said Zhao.
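The storage saving over mirroring comes from simple arithmetic. With illustrative numbers (not taken from the paper), a (k = 4, n = 6) erasure code stores 1.5x the data while still tolerating two lost nodes, versus 3x for three-way mirroring:

```python
# Back-of-the-envelope storage comparison (illustrative parameters only).

def mirrored(data_units, copies=3):
    """Full replication: every copy holds all the data."""
    return data_units * copies

def erasure_coded(data_units, k=4, n=6):
    """(k, n) erasure code: data split into k fragments, expanded to n;
    any k fragments suffice, so n - k node losses are tolerated."""
    return data_units * n / k

D = 100
m = mirrored(D)        # storage used by 3-way mirroring
e = erasure_coded(D)   # storage used by the (4, 6) code
```

Both schemes here survive two lost nodes, so for equal fault tolerance the coded layout halves the storage bill, which is the kind of saving the authors cite over mirroring.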
    When the team ran the solution through experimental tests spanning encryption/decryption, key preservation, and data storage, it proved to be effective.
    The solution is currently feasible from both technological and engineering perspectives: It meets the requirement for relevant quantum and cryptographic standards to ensure a secure storage solution capable of withstanding the challenges posed by quantum computing.
    “In the future, we plan to drive the commercial implementation of this technology to offer practical services,” said Zhao. “We’ll explore various usage models in multiuser scenarios, and we’re also considering integrating more quantum technologies, such as quantum secret sharing, into cloud storage.”

  •

    AI programs spat out known data and hardly learned specific chemical interactions when predicting drug potency

    Artificial intelligence (AI) is on the rise. Until now, AI applications have generally been “black boxes”: how AI arrives at its results remains hidden. Prof. Dr. Jürgen Bajorath, a cheminformatics scientist at the University of Bonn, and his team have developed a method that reveals how certain AI applications work in pharmaceutical research. The results are unexpected: the AI programs largely remembered known data and hardly learned specific chemical interactions when predicting drug potency. The results have now been published in Nature Machine Intelligence.
    Which drug molecule is most effective? Researchers are feverishly searching for efficient active substances to combat diseases. These compounds often dock onto proteins, which are usually enzymes or receptors that trigger a specific chain of physiological actions. In some cases, certain molecules are also intended to block undesirable reactions in the body — such as an excessive inflammatory response. Given the abundance of available chemical compounds, at first glance this research is like searching for a needle in a haystack. Drug discovery therefore attempts to use scientific models to predict which molecules will best dock to the respective target protein and bind strongly. These potential drug candidates are then investigated in more detail in experimental studies.
    Since the advance of AI, drug discovery research has also been increasingly using machine learning applications. Graph neural networks (GNNs) provide one of several options for such applications. They are adapted to predict, for example, how strongly a certain molecule binds to a target protein. To this end, GNN models are trained with graphs that represent complexes formed between proteins and chemical compounds (ligands). Graphs generally consist of nodes representing objects and edges representing relationships between nodes. In graph representations of protein-ligand complexes, edges either connect only protein or only ligand nodes, representing their respective structures, or link protein and ligand nodes, representing specific protein-ligand interactions.
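The graph structure described above can be illustrated with a toy example; the node and edge labels are hypothetical, not the study's actual featurization:

```python
# Toy graph of a protein-ligand complex: nodes are atoms/residues, and
# each edge is either intra-molecular (structure) or a protein-ligand
# contact (interaction), mirroring the two edge types in the text.
nodes = ["P1", "P2", "P3", "L1", "L2"]          # P* protein, L* ligand
edges = [
    ("P1", "P2", "protein"),      # protein structure
    ("P2", "P3", "protein"),
    ("L1", "L2", "ligand"),       # ligand structure
    ("P2", "L1", "interaction"),  # protein-ligand contact, e.g. an H-bond
]

def neighbors(node):
    """All nodes sharing an edge with `node`, regardless of edge type."""
    return [b if a == node else a for a, b, _ in edges if node in (a, b)]

interaction_edges = [e for e in edges if e[2] == "interaction"]
```

A GNN trained on such graphs is expected to exploit the interaction edges when predicting binding strength; whether it actually does so is precisely what the Bonn team set out to test.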
    “How GNNs arrive at their predictions is like a black box we can’t glimpse into,” says Prof. Dr. Jürgen Bajorath. The chemoinformatics researcher from the LIMES Institute at the University of Bonn, the Bonn-Aachen International Center for Information Technology (B-IT) and the Lamarr Institute for Machine Learning and Artificial Intelligence in Bonn, together with colleagues from Sapienza University in Rome, has analyzed in detail whether graph neural networks actually learn protein-ligand interactions to predict how strongly an active substance binds to a target protein.
    How do the AI applications work?
    The researchers analyzed a total of six different GNN architectures using their specially developed “EdgeSHAPer” method and a conceptually different methodology for comparison. These computer programs “screen” whether the GNNs learn the most important interactions between a compound and a protein and thereby predict the potency of the ligand, as intended and anticipated by researchers — or whether AI arrives at the predictions in other ways. “The GNNs are very dependent on the data they are trained with,” says the first author of the study, PhD candidate Andrea Mastropietro from Sapienza University in Rome, who conducted a part of his doctoral research in Prof. Bajorath’s group in Bonn.
    The scientists trained the six GNNs with graphs extracted from structures of protein-ligand complexes, for which the mode of action and binding strength of the compounds to their target proteins was already known from experiments. The trained GNNs were then tested on other complexes. The subsequent EdgeSHAPer analysis then made it possible to understand how the GNNs generated apparently promising predictions.
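A much simpler attribution probe in the same spirit as EdgeSHAPer (which estimates Shapley values for edges; the sketch below is only leave-one-edge-out, a deliberate simplification) asks how much each edge matters to a model's prediction:

```python
# Leave-one-edge-out importance: drop each edge, re-score the graph, and
# rank edges by how much the prediction changes. Not the actual
# EdgeSHAPer algorithm, just an illustration of edge attribution.

def edge_importance(edges, predict):
    base = predict(edges)
    return {e: abs(base - predict([x for x in edges if x != e]))
            for e in edges}

# Toy "model": the score is simply the number of interaction edges.
edges = [("P1", "P2", "protein"), ("P2", "L1", "interaction")]
scores = edge_importance(
    edges, lambda es: sum(e[2] == "interaction" for e in es))
```

If a GNN had genuinely learned chemistry, interaction edges would dominate such rankings; the study's finding was that, for the trained models, they largely did not.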

  •

    New scientific methods for analyzing criminal careers

    Researchers at the Complexity Science Hub have examined 1.2 million criminal incidents and developed an innovative method to identify patterns in criminal trajectories.
    When it comes to preventing future crimes, it is essential to understand how past criminal behavior relates to future offenses. One key question is whether criminals tend to specialize in specific types of crimes or exhibit a generalist approach by engaging in a variety of illegal activities.
    Despite the potential significance of systematically identifying patterns in criminal careers, especially in preventing recurrent offenses, there is a scarcity of comprehensive empirical studies on this subject.
    “To address this gap, we conducted an exhaustive examination of over 1.2 million criminal incidents,” elaborates Stefan Thurner of the Complexity Science Hub. This comprehensive dataset encompassed all criminal reports filed against individuals over six years in a small Central European country.
    Specialists With Certain Features
    Criminal offenders who specialize in specific types of crimes typically are older and more frequently female than individuals involved in a broader range of offenses.
    “These individuals, referred to as specialists, also tend to operate within a more confined geographic area, suggesting that they depend on local knowledge and may receive support from individuals within that specific region, as opposed to offenders with a wider focus,” says Thurner, describing one of the study’s results. Furthermore, the researchers observed that these specialists tend to collaborate in tighter-knit local networks, increasing the likelihood of recurring partnerships.
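One simple way to quantify the specialist/generalist distinction is the Shannon entropy of an offender's crime-type distribution; this is our own illustration, not necessarily the measure used in the study:

```python
import math

# Shannon entropy of an individual's crime-type counts.
# Lower entropy = more concentrated on few types = more specialized.
def specialization_entropy(counts):
    total = sum(counts.values())
    ps = [c / total for c in counts.values() if c > 0]
    return -sum(p * math.log2(p) for p in ps)

# Hypothetical offender histories (crime type -> number of incidents):
specialist = {"burglary": 9, "theft": 1}
generalist = {"burglary": 3, "theft": 3, "fraud": 2, "assault": 2}
```

An offender with all incidents of one type scores 0 bits; spreading activity evenly over more types pushes the score toward log2 of the number of types, giving a continuous scale between the two poles.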

  •

    Photo-induced superconductivity on a chip

    Researchers at the Max Planck Institute for the Structure and Dynamics of Matter (MPSD) in Hamburg, Germany, have shown that a previously demonstrated ability to turn on superconductivity with a laser beam can be integrated on a chip, opening up a route toward opto-electronic applications.
    Their work, now published in Nature Communications, also shows that the electrical response of photo-excited K3C60 is not linear, that is, the resistance of the sample depends on the applied current. This is a key feature of superconductivity, which validates some of the previous observations and provides new information and perspectives on the physics of K3C60 thin films.
    The optical manipulation of materials to produce superconductivity at high temperatures is a key research focus of the MPSD. So far, this strategy has proven successful in several quantum materials, including cuprates, k-(ET)2-X and K3C60. Enhanced electrical coherence and vanishing resistance have been observed in previous studies on the optically driven states in these materials.
    In this study, researchers from the Cavalleri group deployed on-chip non-linear THz spectroscopy to open up the realm of picosecond transport measurements (a picosecond is a trillionth of a second). They connected thin films of K3C60 to photo-conductive switches with co-planar waveguides. Using a visible laser pulse to trigger the switch, they sent a strong electrical current pulse lasting just one picosecond through the material. After travelling through the solid at around half the speed of light, the current pulse reached another switch which served as a detector to reveal important information, such as the characteristic electrical signatures of superconductivity.
    By simultaneously exposing the K3C60 films to mid-infrared light, the researchers were able to observe non-linear current changes in the optically excited material. This so-called critical current behavior and the Meissner effect are the two key features of superconductors. However, neither had previously been measured in the optically driven state, which makes this demonstration of critical current behavior in the excited solid particularly significant. Moreover, the team discovered that the optically driven state of K3C60 resembled that of a so-called granular superconductor, consisting of weakly connected superconducting islands.
    The MPSD is uniquely placed to carry out such measurements on the picosecond scale, with the on-chip set-up having been designed and built in-house. “We developed a technique platform which is perfect for probing non-linear transport phenomena away from equilibrium, like the non-linear and anomalous Hall effects, the Andreev reflection and others,” says lead author Eryin Wang, a staff scientist in the Cavalleri group. In addition, the integration of non-equilibrium superconductivity into opto-electronic platforms may lead to new devices based on this effect.
    Andrea Cavalleri, who has founded and is currently leading the research group, adds: “This work underscores the scientific and technological developments within the MPSD in Hamburg, where new experimental methods are constantly being developed to achieve new scientific understanding. We have been working on ultrafast electrical transport methods for nearly a decade and are now in a position to study so many new phenomena in non-equilibrium materials, and potentially to introduce lasting changes in technology.”
    The research underpinning these results was carried out in the laboratories of the MPSD at the Center for Free-Electron Laser Science (CFEL) in Hamburg, Germany.

  •

    AI faces look more real than actual human faces

    White faces generated by artificial intelligence (AI) now appear more real than human faces, according to new research led by experts at The Australian National University (ANU).
    In the study, more people thought AI-generated white faces were human than the faces of real people. The same wasn’t true for images of people of colour.
    The reason for the discrepancy is that AI algorithms are trained disproportionately on white faces, Dr Amy Dawel, the senior author of the paper, said.
    “If white AI faces are consistently perceived as more realistic, this technology could have serious implications for people of colour by ultimately reinforcing racial biases online,” Dr Dawel said.
    “This problem is already apparent in current AI technologies that are being used to create professional-looking headshots. When used for people of colour, the AI is altering their skin and eye colour to those of white people.”
    One of the issues with AI ‘hyper-realism’ is that people often don’t realise they’re being fooled, the researchers found.
    “Concerningly, people who thought that the AI faces were real most often were paradoxically the most confident their judgements were correct,” Elizabeth Miller, study co-author and PhD candidate at ANU, said.

    “This means people who are mistaking AI imposters for real people don’t know they are being tricked.”
    The researchers were also able to discover why AI faces are fooling people.
    “It turns out that there are still physical differences between AI and human faces, but people tend to misinterpret them. For example, white AI faces tend to be more in-proportion and people mistake this as a sign of humanness,” Dr Dawel said.
    “However, we can’t rely on these physical cues for long. AI technology is advancing so quickly that the differences between AI and human faces will probably disappear soon.”
    The researchers argue this trend could have serious implications for the proliferation of misinformation and identity theft, and that action needs to be taken.
    “AI technology can’t become sectioned off so only tech companies know what’s going on behind the scenes. There needs to be greater transparency around AI so researchers and civil society can identify issues before they become a major problem,” Dr Dawel said.
    Raising public awareness can also play a significant role in reducing the risks posed by the technology, the researchers argue.
    “Given that humans can no longer detect AI faces, society needs tools that can accurately identify AI imposters,” Dr Dawel said.
    “Educating people about the perceived realism of AI faces could help make the public appropriately sceptical about the images they’re seeing online.”

  •

    Twisted magnets make brain-inspired computing more adaptable

    A form of brain-inspired computing that exploits the intrinsic physical properties of a material to dramatically reduce energy use is now a step closer to reality, thanks to a new study led by UCL and Imperial College London researchers.
    In the new study, published in the journal Nature Materials, an international team of researchers used chiral (twisted) magnets as their computational medium and found that, by applying an external magnetic field and changing temperature, the physical properties of these materials could be adapted to suit different machine-learning tasks.
    Such an approach, known as physical reservoir computing, has until now been limited due to its lack of reconfigurability. This is because a material’s physical properties may allow it to excel at a certain subset of computing tasks but not others.
    Dr Oscar Lee (London Centre for Nanotechnology at UCL and UCL Department of Electronic & Electrical Engineering), the lead author of the paper, said: “This work brings us a step closer to realising the full potential of physical reservoirs to create computers that not only require significantly less energy, but also adapt their computational properties to perform optimally across various tasks, just like our brains.
    “The next step is to identify materials and device architectures that are commercially viable and scalable.”
    Traditional computing consumes large amounts of electricity. This is partly because it has separate units for data storage and processing, meaning information has to be shuffled constantly between the two, wasting energy and producing heat. This is particularly a problem for machine learning, which requires vast datasets for processing. Training one large AI model can generate hundreds of tonnes of carbon dioxide.