More stories

  • A prosthesis driven by the nervous system helps people with amputation walk naturally

    State-of-the-art prosthetic limbs can help people with amputations achieve a natural walking gait, but they don’t give the user full neural control over the limb. Instead, they rely on robotic sensors and controllers that move the limb using predefined gait algorithms.
    Using a new type of surgical intervention and neuroprosthetic interface, MIT researchers, in collaboration with colleagues from Brigham and Women’s Hospital, have shown that a natural walking gait is achievable using a prosthetic leg fully driven by the body’s own nervous system. The surgical amputation procedure reconnects muscles in the residual limb, which allows patients to receive “proprioceptive” feedback about where their prosthetic limb is in space.
    In a study of seven patients who had this surgery, the MIT team found that those patients were able to walk faster, avoid obstacles, and climb stairs much more naturally than people with a traditional amputation.
    “This is the first prosthetic study in history that shows a leg prosthesis under full neural modulation, where a biomimetic gait emerges. No one has been able to show this level of brain control that produces a natural gait, where the human’s nervous system is controlling the movement, not a robotic control algorithm,” says Hugh Herr, a professor of media arts and sciences, co-director of the K. Lisa Yang Center for Bionics at MIT, an associate member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.
    Patients also experienced less pain and less muscle atrophy following this surgery, which is known as the agonist-antagonist myoneural interface (AMI). So far, about 60 patients around the world have received this type of surgery, which can also be done for people with arm amputations.
    Hyungeun Song, a postdoc in MIT’s Media Lab, is the lead author of the paper, which will appear in Nature Medicine.
    Sensory feedback
    Most limb movement is controlled by pairs of muscles that take turns stretching and contracting. During a traditional below-the-knee amputation, the interactions of these paired muscles are disrupted. This makes it very difficult for the nervous system to sense the position of a muscle and how fast it’s contracting — sensory information that is critical for the brain to decide how to move the limb.

    People with this kind of amputation may have trouble controlling their prosthetic limb because they can’t accurately sense where the limb is in space. Instead, they rely on robotic controllers built into the prosthetic limb. These limbs also include sensors that can detect and adjust to slopes and obstacles.
    To try to help people achieve a natural gait under full nervous system control, Herr and his colleagues began developing the AMI surgery several years ago. Instead of severing natural agonist-antagonist muscle interactions, they connect the two ends of the muscles so that they still dynamically communicate with each other within the residual limb. This surgery can be done during a primary amputation, or the muscles can be reconnected after the initial amputation as part of a revision procedure.
    “With the AMI amputation procedure, to the greatest extent possible, we attempt to connect native agonists to native antagonists in a physiological way so that after amputation, a person can move their full phantom limb with physiologic levels of proprioception and range of movement,” Herr says.
    In a 2021 study, Herr’s lab found that patients who had this surgery were able to more precisely control the muscles of their amputated limb, and that those muscles produced electrical signals similar to those from their intact limb.
    After those encouraging results, the researchers set out to explore whether those electrical signals could generate commands for a prosthetic limb and at the same time give the user feedback about the limb’s position in space. The person wearing the prosthetic limb could then use that proprioceptive feedback to volitionally adjust their gait as needed.
    In the new Nature Medicine study, the MIT team found this sensory feedback did indeed translate into a smooth, near-natural ability to walk and navigate obstacles.

    “Because of the AMI neuroprosthetic interface, we were able to boost that neural signaling, preserving as much as we could. This was able to restore a person’s neural capability to continuously and directly control the full gait, across different walking speeds, stairs, slopes, even going over obstacles,” Song says.
    A natural gait
    For this study, the researchers compared seven people who had the AMI surgery with seven who had traditional below-the-knee amputations. All of the subjects used the same type of bionic limb: a prosthesis with a powered ankle as well as electrodes that can sense electromyography (EMG) signals from the tibialis anterior and gastrocnemius muscles. These signals are fed into a robotic controller that helps the prosthesis calculate how much to bend the ankle, how much torque to apply, and how much power to deliver.
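    As a rough illustration of that pipeline, the sketch below maps envelopes of two antagonist EMG channels to a net ankle-torque command. It is a minimal sketch, not the study's controller: the filtering, gain values and control law are invented placeholders.

    ```python
    # Illustrative only: antagonist EMG -> ankle torque command.
    # Muscle names follow the article; everything else is assumed.
    import numpy as np

    def envelope(emg: np.ndarray, alpha: float = 0.1) -> np.ndarray:
        """Crude EMG envelope: rectify, then first-order low-pass filter."""
        out = np.zeros_like(emg)
        acc = 0.0
        for i, x in enumerate(np.abs(emg)):
            acc = (1 - alpha) * acc + alpha * x
            out[i] = acc
        return out

    def ankle_torque(ta_emg: np.ndarray, gas_emg: np.ndarray,
                     k_dorsi: float = 40.0, k_plantar: float = 60.0) -> np.ndarray:
        """Net ankle torque (N·m): tibialis anterior dorsiflexes (+),
        gastrocnemius plantarflexes (-). Gains are made-up placeholders."""
        return k_dorsi * envelope(ta_emg) - k_plantar * envelope(gas_emg)

    # Toy usage: a late-stance gastrocnemius burst drives push-off.
    t = np.linspace(0, 1, 500)
    ta = 0.1 * np.abs(np.random.randn(500)) * (t < 0.3)   # early-stance dorsiflexor activity
    gas = 0.5 * np.abs(np.random.randn(500)) * (t > 0.6)  # late-stance plantarflexor burst
    tau = ankle_torque(ta, gas)
    print(f"peak plantarflexion torque: {tau.min():.1f} N·m")
    ```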
    The researchers tested the subjects in several different situations: level-ground walking across a 10-meter pathway, walking up a slope, walking down a ramp, walking up and down stairs, and walking on a level surface while avoiding obstacles.
    In all of these tasks, the people with the AMI neuroprosthetic interface were able to walk faster — at about the same rate as people without amputations — and navigate around obstacles more easily. They also showed more natural movements, such as pointing the toes of the prosthesis upward while going up stairs or stepping over an obstacle, and they were better able to coordinate the movements of their prosthetic limb and their intact limb. They were also able to push off the ground with the same amount of force as someone without an amputation.
    “With the AMI cohort, we saw natural biomimetic behaviors emerge,” Herr says. “The cohort that didn’t have the AMI, they were able to walk, but the prosthetic movements weren’t natural, and their movements were generally slower.”
    These natural behaviors emerged even though the amount of sensory feedback provided by the AMI was less than 20 percent of what would normally be received in people without an amputation.
    “One of the main findings here is that a small increase in neural feedback from your amputated limb can restore significant bionic neural controllability, to a point where you allow people to directly neurally control the speed of walking, adapt to different terrain, and avoid obstacles,” Song says.
    “This work represents yet another step in us demonstrating what is possible in terms of restoring function in patients who suffer from severe limb injury. It is through collaborative efforts such as this that we are able to make transformational progress in patient care,” says Matthew Carty, a surgeon at Brigham and Women’s Hospital and associate professor at Harvard Medical School, who is also an author of the paper.
    Enabling neural control by the person using the limb is a step toward Herr’s lab’s goal of “rebuilding human bodies,” rather than having people rely on ever more sophisticated robotic controllers and sensors — tools that are powerful but do not feel like part of the user’s body.
    “The problem with that long-term approach is that the user would never feel embodied with their prosthesis. They would never view the prosthesis as part of their body, part of self,” Herr says. “The approach we’re taking is trying to comprehensively connect the brain of the human to the electromechanics.”
    The research was funded by the MIT K. Lisa Yang Center for Bionics, the National Institute of Neurological Disorders and Stroke, a Neurosurgery Research Education Foundation Medical Research Fellowship, and the Eunice Kennedy Shriver National Institute of Child Health and Human Development.

  • New and improved camera inspired by the human eye

    A team led by University of Maryland computer scientists invented a camera mechanism that improves how robots see and react to the world around them. Inspired by how the human eye works, their innovative camera system mimics the tiny involuntary movements used by the eye to maintain clear and stable vision over time. The team’s prototyping and testing of the camera — called the Artificial Microsaccade-Enhanced Event Camera (AMI-EV) — was detailed in a paper published in the journal Science Robotics in May 2024.
    “Event cameras are a relatively new technology better at tracking moving objects than traditional cameras, but today’s event cameras struggle to capture sharp, blur-free images when there’s a lot of motion involved,” said the paper’s lead author Botao He, a computer science Ph.D. student at UMD. “It’s a big problem because robots and many other technologies — such as self-driving cars — rely on accurate and timely images to react correctly to a changing environment. So, we asked ourselves: How do humans and animals make sure their vision stays focused on a moving object?”
    For He’s team, the answer was microsaccades, small and quick eye movements that involuntarily occur when a person tries to focus their view. Through these minute yet continuous movements, the human eye can keep focus on an object and its visual textures — such as color, depth and shadowing — accurately over time.
    “We figured that just like how our eyes need those tiny movements to stay focused, a camera could use a similar principle to capture clear and accurate images without motion-caused blurring,” He said.
    The team successfully replicated microsaccades by inserting a rotating prism inside the AMI-EV to redirect light beams captured by the lens. The continuous rotational movement of the prism simulated the movements naturally occurring within a human eye, allowing the camera to stabilize the textures of a recorded object just as a human would. The team then developed software to compensate for the prism’s movement within the AMI-EV to consolidate stable images from the shifting lights.
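    The article does not detail the compensation software, but the general idea can be sketched: if the prism sweeps the image along a known circular path, each event can be mapped back to stabilized coordinates from its timestamp. The rotation rate and shift radius below are illustrative assumptions, not AMI-EV parameters.

    ```python
    # Hedged sketch of prism-motion compensation for an event camera.
    import numpy as np

    PRISM_HZ = 50.0   # assumed prism rotation rate
    RADIUS_PX = 4.0   # assumed circular image shift induced by the prism

    def stabilize(events: np.ndarray) -> np.ndarray:
        """events: rows of (t, x, y, polarity). Subtract the prism-induced
        circular translation so static scene points map to fixed pixels."""
        t, x, y, p = events.T
        phase = 2 * np.pi * PRISM_HZ * t
        x_stab = x - RADIUS_PX * np.cos(phase)
        y_stab = y - RADIUS_PX * np.sin(phase)
        return np.stack([t, x_stab, y_stab, p], axis=1)

    # Toy usage: a static point observed through the rotating prism.
    t = np.linspace(0, 0.1, 1000)
    raw = np.stack([t,
                    100 + RADIUS_PX * np.cos(2 * np.pi * PRISM_HZ * t),
                    80 + RADIUS_PX * np.sin(2 * np.pi * PRISM_HZ * t),
                    np.ones_like(t)], axis=1)
    stab = stabilize(raw)
    print(np.ptp(stab[:, 1]), np.ptp(stab[:, 2]))  # ~0: the point is stabilized
    ```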
    Study co-author Yiannis Aloimonos, a professor of computer science at UMD, views the team’s invention as a big step forward in the realm of robotic vision.
    “Our eyes take pictures of the world around us and those pictures are sent to our brain, where the images are analyzed. Perception happens through that process and that’s how we understand the world,” explained Aloimonos, who is also director of the Computer Vision Laboratory at the University of Maryland Institute for Advanced Computer Studies (UMIACS). “When you’re working with robots, replace the eyes with a camera and the brain with a computer. Better cameras mean better perception and reactions for robots.”
    The researchers also believe that their innovation could have significant implications beyond robotics and national defense. Scientists working in industries that rely on accurate image capture and shape detection are constantly looking for ways to improve their cameras — and AMI-EV could be the key solution to many of the problems they face.

    “With their unique features, event sensors and AMI-EV are poised to take center stage in the realm of smart wearables,” said research scientist Cornelia Fermüller, senior author of the paper. “They have distinct advantages over classical cameras — such as superior performance in extreme lighting conditions, low latency and low power consumption. These features are ideal for virtual reality applications, for example, where a seamless experience and the rapid computations of head and body movements are necessary.”
    In early testing, AMI-EV was able to capture and display movement accurately in a variety of contexts, including human pulse detection and rapidly moving shape identification. The researchers also found that AMI-EV could capture motion in tens of thousands of frames per second, outperforming most typically available commercial cameras, which capture 30 to 1000 frames per second on average. This smoother and more realistic depiction of motion could prove to be pivotal in anything from creating more immersive augmented reality experiences and better security monitoring to improving how astronomers capture images in space.
    “Our novel camera system can solve many specific problems, like helping a self-driving car figure out what on the road is a human and what isn’t,” Aloimonos said. “As a result, it has many applications that much of the general public already interacts with, like autonomous driving systems or even smartphone cameras. We believe that our novel camera system is paving the way for more advanced and capable systems to come.”

  • Scientists probe chilling behavior of promising solid-state cooling material

    A research team led by the Department of Energy’s Oak Ridge National Laboratory has bridged a knowledge gap in atomic-scale heat motion. This new understanding holds promise for enhancing materials to advance an emerging technology called solid-state cooling.
    An environmentally friendly innovation, solid-state cooling could efficiently chill many things in daily life from food to vehicles to electronics — without traditional refrigerant liquids and gases or moving parts. It would operate as a quiet, compact and lightweight system that allows precise temperature control.
    Although the discovery of improved materials and the invention of higher-quality devices are already helping to promote the growth of the new cooling method, a deeper understanding of material enhancements is essential. The research team used a suite of neutron-scattering instruments to examine at the atomic scale a material that scientists consider to be an optimal candidate for use in solid-state cooling.
    The material, a nickel-cobalt-manganese-indium magnetic shape-memory alloy, can be deformed and then returned to its original shape by driving it through a phase transition, either by increasing the temperature or by applying a magnetic field. When subjected to a magnetic field, the material undergoes a magnetic and structural phase transition, during which it absorbs and releases heat, a behavior known as the magnetocaloric effect. In solid-state cooling applications, the effect is harnessed to provide refrigeration. A key characteristic of the material is its nearness to disordered conditions known as ferroic glassy states, because these states present a way to enhance the material’s ability to store and release heat.
    Magnons, also known as spin waves, and phonons, or vibrations, couple in a synchronized dance in small regions distributed across the disordered arrangement of atoms that comprise the material. The researchers found that patterns of behavior in these small regions, referred to as localized hybrid magnon-phonon modes in the team’s paper detailing the research, have important implications for the thermal properties of the material.
    The scientists revealed that the modes cause the phonons to be significantly altered or shifted by the presence of a magnetic field. The modes also modify the material’s phase stability. These changes can result in fundamental alterations in the material’s properties and behavior that can be tuned and tailored.
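    The paper's model is not given in the article, but the textbook picture of two coupled oscillators shows why hybridization lets a magnetic field move phonon energies. For a magnon of frequency ω_m and a phonon of frequency ω_p coupled with strength g (all symbols illustrative):

    ```latex
    \omega_{\pm} \;=\; \frac{\omega_m + \omega_p}{2}
      \;\pm\; \sqrt{\left(\frac{\omega_m - \omega_p}{2}\right)^{2} + g^{2}}
    ```

    Near resonance (ω_m ≈ ω_p) the hybrid modes split by about 2g and mix magnetic and vibrational character, so a magnetic field that shifts ω_m also shifts the observed phonon spectrum.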
    “Neutron scattering shows that the cooling capacity of the magnetic shape-memory alloy is tripled by the heat contained within these local magnon-phonon hybrid modes that form because of the disorder in the system,” said ORNL’s Michael Manley, the leader of the study. “This finding reveals a path to make better materials for solid-state cooling applications for societal needs.”
    The magnetic shape-memory alloy that the team studied is in a phase that has nearly formed disordered conditions known as spin glass and strain glass — not the familiar glass used in windows and elsewhere but rather unconventional phases of matter that lack order. The magnetic moments, or tiny magnets, associated with the atoms in the spin glass phase are randomly oriented rather than pointing in the same direction. Comparatively, in the strain glass phase, the lattice of atoms is strained at the nanometer scale in a messy and irregular pattern. Spin glass and strain glass are referred to as frustrated conditions in a material because they arise from competing interactions or constraints that prevent the material from achieving a stable ordered state.

    “As the material approaches this frustrated state, the amount of heat being stored increases,” Manley said. “Long- and short-range interactions manifest as localized vibrations and spin waves, which means they’re getting trapped in small regions. This is important because these extra localized vibrational states store heat. Changing the magnetic field triggers another phase transition in which this heat is released.”
    Controlling the functions of the magnetic shape-memory alloy so that it can be used as a heat sponge could be one way to allow for efficient solid-state cooling without the need for traditional refrigerants or mechanical components.
    This study was supported by DOE’s Office of Science Materials Sciences and Engineering Division. A portion of the neutron scattering work for this research was performed at the High Flux Isotope Reactor and the Spallation Neutron Source, DOE Office of Science user facilities at ORNL. The National Institute of Standards and Technology of the Department of Commerce also provided neutron research facilities.

  • How researchers are using digital city-building games to shape the future

    Lancaster University researchers have come up with exciting and sophisticated new mapping technology enabling future generations to get involved in creating their own future built landscape.
    They say, in their new research, that planners are missing a real trick when it comes to encouraging and involving the public in shaping their own towns, cities and counties for the future.
    They also say that games platforms can be used to plan future cities and also help the public immerse themselves in these future worlds.
    The researchers have modified Colossal Order’s game ‘Cities: Skylines’ where players control zones, public services and transportation.
    Real-world buildings and models can be imported into the game to create realistic cities and inform planning.
    Players can manage education, police and fire services, health and even set tax policies, amongst other realistic simulations. The game dashboard even measures how happy citizens are!
    Players must add infrastructure, manage power, water and think carefully about what is needed for their community.

    Given that, according to Royal Town Planning Institute statistics, only 20% of younger people are interested in planning, the researchers say digital games enable the public to ‘play’ real-world planning policies based on a ‘real world’ place, creating a dialogue with planners.
    Dr Paul Cureton and Professor Paul Coulton, from ImaginationLancaster, Lancaster University’s design-led laboratory, share their research in an open access article ‘Game Based Worldbuilding: Planning, Models, Simulations and Digital Twins’ published in Acta Ludologica, the peer reviewed scientific journal in the field of games and digital games.
    Their research, funded by the Digital Planning Programme of the Department for Levelling Up, Housing and Communities (DLUHC), cites a general lack of public interest in planning processes and a need for ‘urgent change’.
    Gaming technology has been used in 3D planning models and in what are called City Information Models (CIMs) and Urban Digital Twins (UDTs). Urban digital twins are virtual replicas of an environment that are connected to real-world sensors, such as traffic or air quality sensors, to enhance public participation and engagement in the planning process and to generate future scenarios.
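    To make the concept concrete, here is a minimal sketch of the twin idea in code. Everything in it (class names, sensor kinds, readings) is invented for illustration; real urban digital twins integrate live municipal and IoT data feeds.

    ```python
    # Toy urban digital twin: mirror live sensor readings into a queryable model.
    from dataclasses import dataclass, field

    @dataclass
    class Sensor:
        sensor_id: str
        kind: str          # e.g. "traffic", "air_quality"
        reading: float = 0.0

    @dataclass
    class UrbanDigitalTwin:
        sensors: dict = field(default_factory=dict)

        def ingest(self, sensor_id: str, value: float) -> None:
            """Update the virtual replica with a real-world measurement."""
            self.sensors[sensor_id].reading = value

        def scenario_average(self, kind: str) -> float:
            """Query the twin, e.g. average air quality before simulating a road closure."""
            vals = [s.reading for s in self.sensors.values() if s.kind == kind]
            return sum(vals) / len(vals) if vals else float("nan")

    twin = UrbanDigitalTwin({"aq-1": Sensor("aq-1", "air_quality"),
                             "aq-2": Sensor("aq-2", "air_quality")})
    twin.ingest("aq-1", 18.2)
    twin.ingest("aq-2", 24.6)
    print(twin.scenario_average("air_quality"))  # 21.4
    ```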
    But, say the researchers, while this is a good step forward, the use of gaming technology for real-world applications is ‘one-directional and misses opportunities’ to include game design and research, such as mechanics, dynamics, flow, and public participatory ‘world-building’, for future scenarios.
    They believe the technology can be used for higher levels of ‘citizen engagement’ by making the process more enjoyable.

    The method, they add, is cost-effective and can be rolled out across the UK by any local authority.
    They have already conducted gaming workshops with Lancaster City Council, playtesting with 140 children who ‘played’ and planned Lancaster in the UK. The area in question, to be developed along with Lancashire County Council and national house builder Homes England, was previously earmarked for development as a new garden village of 5,000 homes.
    Digital games have a long tradition of providing simulations of various systems of human activity, such as politics, culture, society, environment, and war. Urban planning has been simulated through various city-building games, such as The Sumerian Game (1964), EA Games’ SimCity (1989), and Colossal Order’s Cities: Skylines (2015, 2023), amongst many others.
    While a range of future urban planning scenarios use gaming technology, they do not necessarily incorporate game design ideas such as mechanics and dynamics, levels, progress, flows and feedback as part of a game world, say the researchers.
    And, they add, this needs to be more fully understood if such systems are to yield their potential benefits in terms of citizen engagement.
    Dr Cureton and Professor Coulton created a reference tool for new planning models, and the ensuing case studies offer new insights into the opportunities for using game design and gaming technology in urban planning and digital transformation.
    The article in Acta Ludologica develops an understanding of the role of worldbuilding games in urban planning, architecture, and design, developing a playable theoretical urban game continuum to illuminate both the various nuances of a range of precedents and scaffold future applications.
    Cities and urban areas are complex systems, and games allow a player to explore the complexities of this landscape and simulate and model behaviour and realise scenarios.
    Arguably, there is a restrictive incorporation of gaming technologies for real-world planning that misses opportunities to engage players in changing the rules of the system being replicated.
    The researchers say this is much needed, as new governments will look at what urgent change is required in planning if the shortfall in housing is to be addressed and economic growth stimulated. To do this, planners will need support, skill development, and the tools to engage people.
    Gaming technologies are intended for citizen participation and access, yet fundamental challenges remain unaddressed.
    The Royal Town Planning Institute (RTPI) in the UK stated: “Response rates to a typical pre-planning consultation are around 3% of those directly made aware of it. In Local Plan consultations, this figure can fall to less than 1% of a district.”
    Professor Paul Coulton is Chair of Speculative and Game Design at ImaginationLancaster and is internationally recognised for his speculative design work.
    He says: “Whilst games and game playing are often dismissed as trivial or problematic, they can serve as powerful tools in delivering information and understanding of how systems operate, in a manner that can then lead to real-world engagement in processes which previously seemed opaque.”
    Dr Paul Cureton is a Senior Lecturer in Design at ImaginationLancaster and a member of the Data Science Institute (DSI) whose work focuses on subjects in spatial planning, 3D GIS modelling and design futures.
    He says: “Only 20% of 18-34-year-olds engage in local plans, according to the Royal Town Planning Institute (2020). So few engage in how our spaces are being transformed, so there is space for gaming in this field to help the public think like planners, play out issues and use gaming tools for modelling future spaces.”

  • Nanorobot with hidden weapon kills cancer cells

    Researchers at Karolinska Institutet in Sweden have developed nanorobots that kill cancer cells in mice. The robot’s weapon is hidden in a nanostructure and is exposed only in the tumour microenvironment, sparing healthy cells. The study is published in the journal Nature Nanotechnology.
    The research group at Karolinska Institutet has previously developed structures that can organise so-called death receptors on the surface of cells, leading to cell death. The structures exhibit six peptides (amino acid chains) assembled in a hexagonal pattern.
    “This hexagonal nanopattern of peptides becomes a lethal weapon,” explains Professor Björn Högberg at the Department of Medical Biochemistry and Biophysics, Karolinska Institutet, who led the study. “If you were to administer it as a drug, it would indiscriminately start killing cells in the body, which would not be good. To get around this problem, we have hidden the weapon inside a nanostructure built from DNA.”
    Created a ‘kill switch’
    The art of building nanoscale structures using DNA as a building material is called DNA origami and is something Björn Högberg’s research team has been working on for many years. Now they have used the technique to create a ‘kill switch’ that is activated under the right conditions.
    “We have managed to hide the weapon in such a way that it can only be exposed in the environment found in and around a solid tumour,” he says. “This means that we have created a type of nanorobot that can specifically target and kill cancer cells.”
    The key is the low pH, or acidic microenvironment that usually surrounds cancer cells, which activates the nanorobot’s weapon. In cell analyses in test tubes, the researchers were able to show that the peptide weapon is hidden inside the nanostructure at a normal pH of 7.4, but that it has a drastic cell-killing effect when the pH drops to 6.5.
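    As a back-of-the-envelope illustration of such a pH-gated switch (not the paper's DNA mechanism; the pKa and transition sharpness below are invented), a Henderson-Hasselbalch-style sigmoid reproduces the reported off-at-7.4, on-at-6.5 behaviour:

    ```python
    # Toy model of a pH-gated activation switch.
    def fraction_active(ph: float, pka: float = 6.9, hill: float = 4.0) -> float:
        """Fraction of nanostructures with the peptide weapon exposed.
        The Hill coefficient sets how sharp the transition is (assumed)."""
        return 1.0 / (1.0 + 10 ** (hill * (ph - pka)))

    for ph in (7.4, 6.9, 6.5):
        print(f"pH {ph}: {fraction_active(ph):.0%} active")
    # pH 7.4: ~1% active (healthy tissue); pH 6.5: ~98% active (tumour)
    ```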

    Reduced tumour growth
    They then tested injecting the nanorobot into mice with breast cancer tumours. This resulted in a 70 per cent reduction in tumour growth compared to mice given an inactive version of the nanorobot.
    “We now need to investigate whether this works in more advanced cancer models that more closely resemble the real human disease,” says the study’s first author Yang Wang, a researcher at the Department of Medical Biochemistry and Biophysics, Karolinska Institutet. “We also need to find out what side effects the method has before it can be tested on humans.”
    The researchers also plan to investigate whether it is possible to make the nanorobot more targeted by placing proteins or peptides on its surface that specifically bind to certain types of cancer.

  • AI model finds the cancer clues at lightning speed

    Researchers at the University of Gothenburg have developed an AI model that increases the potential for detecting cancer through sugar analyses. The AI model is faster and better at finding abnormalities than the current semi-manual method.
    Glycans, or structures of sugar molecules in our cells, can be measured by mass spectrometry. One important use is that the structures can indicate different forms of cancer in the cells.
    However, the data from the mass spectrometer measurement must be carefully analysed by humans to work out the structure from the glycan fragmentation. This process can take anywhere from hours to days for each sample and can only be carried out with high confidence by a small number of experts in the world, as it is essentially detective work learnt over many years.
    Automating the detective work
    The process is thus a bottleneck in the use of glycan analyses, for example for cancer detection, when there are many samples to be analysed.
    Researchers at the University of Gothenburg have developed an AI model to automate this detective work. The AI model, named Candycrunch, solves the task in just a few seconds per test. The results are reported in a scientific article in the journal Nature Methods.
    The AI model was trained using a database of over 500,000 examples of different fragmentations and associated structures of sugar molecules.
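    The article does not describe Candycrunch's architecture, but the task framing can be sketched: treat a binned MS/MS fragment spectrum as the input vector and the glycan structure as the label to predict. The binning resolution and peak values below are assumptions for illustration.

    ```python
    # Hedged sketch of the spectrum-to-structure task framing.
    import numpy as np

    N_BINS = 2048  # assumed m/z binning resolution

    def bin_spectrum(peaks: list, mz_max: float = 2048.0) -> np.ndarray:
        """Convert (m/z, intensity) peaks into a fixed-length, normalized vector."""
        vec = np.zeros(N_BINS)
        for mz, intensity in peaks:
            if 0 <= mz < mz_max:
                vec[int(mz / mz_max * N_BINS)] += intensity
        norm = np.linalg.norm(vec)
        return vec / norm if norm else vec

    # With ~500,000 (spectrum, structure) training pairs, any standard
    # classifier fits this interface, e.g. scikit-learn as a stand-in:
    # from sklearn.neural_network import MLPClassifier
    # model = MLPClassifier().fit(X_train, y_train)  # X: binned spectra, y: structures
    spectrum = bin_spectrum([(204.09, 1.0), (366.14, 0.6), (528.19, 0.3)])
    print(spectrum.shape)  # (2048,)
    ```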

    “The training has enabled Candycrunch to calculate the exact sugar structure in a sample in 90 per cent of cases,” says Daniel Bojar, Associate Senior Lecturer in Bioinformatics at the University of Gothenburg.
    Can find new biomarkers
    This means that the AI model could soon reach the same levels of accuracy as the sequencing of other biological sequences, such as DNA, RNA or proteins.
    Because the AI model is so fast and accurate in its answers, it can accelerate the discovery of glycan-based biomarkers for both diagnosis and prognosis of cancer.
    “We believe that glycan analyses will become a bigger part of biological and clinical research now that we have automated the biggest bottleneck,” says Daniel Bojar.
    The AI model Candycrunch is also able to identify structures that are often missed by human analyses due to their low concentrations. The model can therefore help researchers to find new glycan-based biomarkers.

  • How powdered rock could help slow climate change

    On a banana plantation in rural Australia, a second-generation farming family spreads crushed volcanic rock between rows of ripening fruit. Eight thousand kilometers away, two young men in central India dust the same type of rock powder onto their dry-season rice paddy, while across the ocean, a farmer in Kenya sprinkles the powder by hand onto his potato plants. Far to the north in foggy Scotland, a plot of potatoes gets the same treatment, as do cattle pastures on sunny slopes in southern Brazil.

    And from Michigan to Mississippi, farmers are scattering volcanic rock dust on their wheat, soy and corn fields with ag spreaders typically reserved for dispersing crushed limestone to adjust soil acidity.

  • ‘World record’ for data transmission speed

    Aston University researchers are part of a team that has sent data at a record rate of 402 terabits per second using commercially available optical fibre.
    This beats their previous record, announced in March 2024, of 301 terabits or 301,000,000 megabits per second using a single, standard optical fibre.
    Compared to Netflix’s recommended internet connection speed for watching an HD movie, of 3 Mbit/s or higher, this speed is over 100 million times faster.
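    The comparison checks out with simple arithmetic:

    ```python
    # Sanity check on the figure quoted above.
    record_bps = 402e12   # 402 terabits per second
    netflix_bps = 3e6     # Netflix's 3 Mbit/s HD recommendation
    print(record_bps / netflix_bps)  # 1.34e+08 -> over 100 million times faster
    ```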
    The speed was achieved by using a wider spectrum: six wavelength bands rather than the previous four, which increased the capacity for data transmission. Normally just one or two bands are used.
    The international research team included Professor Wladek Forysiak and Dr Ian Philips, who are members of the University’s Aston Institute of Photonic Technologies (AIPT). Led by the Photonic Network Laboratory of the National Institute of Information and Communications Technology (NICT), based in Tokyo, Japan, it also included Nokia Bell Labs of the USA.
    Together they achieved the feat by constructing the first optical transmission system covering the six wavelength bands (O, E, S, C, L and U) used in fibre optical communication. Aston University contributed specifically by building a set of Raman amplifiers for the U band, the longest-wavelength part of the combined spectrum, where conventional doped-fibre amplifiers are not presently available from commercial sources.
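    For orientation, the article names the bands but not their ranges; the figures below are the conventional approximate wavelength windows used in fibre optics, not values from the paper.

    ```python
    # Approximate conventional fibre-optic wavelength bands (nm), for context.
    BANDS_NM = {"O": (1260, 1360), "E": (1360, 1460), "S": (1460, 1530),
                "C": (1530, 1565), "L": (1565, 1625), "U": (1625, 1675)}

    total = sum(hi - lo for lo, hi in BANDS_NM.values())
    print(f"combined window: {total} nm (1260-1675 nm)")  # 415 nm of spectrum
    # The U band (1625-1675 nm) is where Aston's Raman amplifiers were needed,
    # since doped-fibre amplifiers are not commercially available there.
    ```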
    Optical fibres are small tubular strands of glass that carry information using light, unlike regular copper cables, which cannot carry data at such speeds.

    As well as increasing capacity by approximately a third, the technique uses so-called “standard fibre” that is already deployed in huge quantities worldwide, so there would be no need to install new specialist cables.
    As demand for data from businesses and individuals increases, this new discovery could help keep broadband prices stable even as capacity and speed improve.
    Aston University’s Dr Philips said: “This finding could help increase capacity on a single fibre so the world would have a higher performing system.
    “The newly developed technology is expected to make a significant contribution to expanding the communication capacity of the optical communication infrastructure as future data services rapidly increase demand.”
    His colleague Professor Wladek Forysiak added: “This is a ‘hero experiment’ made possible by a multi-national team effort and very recent technical advances in telecommunications research laboratories from across the world.”
    The results of the experiment were accepted as a post-deadline paper at the 47th International Conference on Optical Fiber Communications (OFC 2024) in the USA on 28 March.
    To help support some of its work in this area, Aston University has received funding from EPSRC (UKRI), the Royal Society (RS Exchange grant with NICT) and the EU (European Training Network).