More stories

• Evaluating cybersecurity methods

    A savvy hacker can obtain secret information, such as a password, by observing a computer program’s behavior, like how much time that program spends accessing the computer’s memory.
    Security approaches that completely block these “side-channel attacks” are so computationally expensive that they aren’t feasible for many real-world systems. Instead, engineers often apply what are known as obfuscation schemes that seek to limit, but not eliminate, an attacker’s ability to learn secret information.
    To help engineers and scientists better understand the effectiveness of different obfuscation schemes, MIT researchers created a framework to quantitatively evaluate how much information an attacker could learn from a victim program with an obfuscation scheme in place.
    Their framework, called Metior, allows the user to study how different victim programs, attacker strategies, and obfuscation scheme configurations affect the amount of sensitive information that is leaked. The framework could be used by engineers who develop microprocessors to evaluate the effectiveness of multiple security schemes and determine which architecture is most promising early in the chip design process.
    “Metior helps us recognize that we shouldn’t look at these security schemes in isolation. It is very tempting to analyze the effectiveness of an obfuscation scheme for one particular victim, but this doesn’t help us understand why these attacks work. Looking at things from a higher level gives us a more holistic picture of what is actually going on,” says Peter Deutsch, a graduate student and lead author of an open-access paper on Metior.
    Deutsch’s co-authors include Weon Taek Na, an MIT graduate student in electrical engineering and computer science; Thomas Bourgeat PhD ’23, an assistant professor at the Swiss Federal Institute of Technology (EPFL); Joel Emer, an MIT professor of the practice in computer science and electrical engineering; and senior author Mengjia Yan, the Homer A. Burnell Career Development Assistant Professor of Electrical Engineering and Computer Science (EECS) at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research was presented last week at the International Symposium on Computer Architecture.

    Illuminating obfuscation
While there are many obfuscation schemes, popular approaches typically work by adding some randomization to the victim’s behavior to make it harder for an attacker to learn secrets. For instance, perhaps an obfuscation scheme involves a program accessing additional areas of the computer memory, rather than only the area it needs to access, to confuse an attacker. Others adjust how often a victim accesses memory or another shared resource so an attacker has trouble seeing clear patterns.
    But while these approaches make it harder for an attacker to succeed, some amount of information from the victim still “leaks” out. Yan and her team want to know how much.
    They had previously developed CaSA, a tool to quantify the amount of information leaked by one particular type of obfuscation scheme. But with Metior, they had more ambitious goals. The team wanted to derive a unified model that could be used to analyze any obfuscation scheme — even schemes that haven’t been developed yet.
    To achieve that goal, they designed Metior to map the flow of information through an obfuscation scheme into random variables. For instance, the model maps the way a victim and an attacker access shared structures on a computer chip, like memory, into a mathematical formulation.

Once Metior derives that mathematical representation, the framework uses techniques from information theory to understand how the attacker can learn information from the victim. With those pieces in place, Metior can quantify how likely it is for an attacker to successfully guess the victim’s secret information.
    “We take all of the nitty-gritty elements of this microarchitectural side-channel and map it down to, essentially, a math problem. Once we do that, we can explore a lot of different strategies and better understand how making small tweaks can help you defend against information leaks,” Deutsch says.
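Metior itself is not reproduced here, but the kind of reduction Deutsch describes can be illustrated with a small sketch: simulate a toy side channel in which an obfuscation scheme randomizes some fraction of the attacker’s observations, then estimate the leakage as the mutual information between the secret and the observation. Every detail below (the secret space, the latency buckets, the noise level) is an illustrative assumption, not Metior’s actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

N_SECRETS = 4   # toy victim: one of four secret-dependent access patterns
N_OBS = 8       # discretized attacker observation (e.g., a latency bucket)
NOISE = 0.6     # obfuscation strength: chance the observation is randomized

def observe(secret: int) -> int:
    """Toy side channel: the observation leaks the secret unless the
    obfuscation scheme replaces it with a uniformly random value."""
    if rng.random() < NOISE:
        return int(rng.integers(N_OBS))       # obfuscated: carries no signal
    return int(secret * 2 + rng.integers(2))  # leaky: two buckets per secret

# Draw joint samples of (secret, observation).
secrets = rng.integers(N_SECRETS, size=100_000)
obs = np.array([observe(s) for s in secrets])

# Empirical joint distribution and mutual information I(S; O), in bits.
joint = np.zeros((N_SECRETS, N_OBS))
np.add.at(joint, (secrets, obs), 1)
joint /= joint.sum()
p_s = joint.sum(axis=1, keepdims=True)
p_o = joint.sum(axis=0, keepdims=True)
nz = joint > 0
leak_bits = (joint[nz] * np.log2(joint[nz] / (p_s @ p_o)[nz])).sum()

print(f"estimated leakage: {leak_bits:.3f} of {np.log2(N_SECRETS):.1f} bits")
```

Sweeping NOISE from 0 to 1 traces out how much of the secret survives the obfuscation, which is the kind of quantitative comparison across schemes and configurations that Metior is built to automate.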
    Surprising insights
    They applied Metior in three case studies to compare attack strategies and analyze the information leakage from state-of-the-art obfuscation schemes. Through their evaluations, they saw how Metior can identify interesting behaviors that weren’t fully understood before.
    For instance, a prior analysis determined that a certain type of side-channel attack, called probabilistic prime and probe, was successful because this sophisticated attack includes a preliminary step where it profiles a victim system to understand its defenses.
    Using Metior, they show that this advanced attack actually works no better than a simple, generic attack and that it exploits different victim behaviors than researchers previously thought.
    Moving forward, the researchers want to continue enhancing Metior so the framework can analyze even very complicated obfuscation schemes in a more efficient manner. They also want to study additional obfuscation schemes and types of victim programs, as well as conduct more detailed analyses of the most popular defenses.
    Ultimately, the researchers hope this work inspires others to study microarchitectural security evaluation methodologies that can be applied early in the chip design process.
    “Any kind of microprocessor development is extraordinarily expensive and complicated, and design resources are extremely scarce. Having a way to evaluate the value of a security feature is extremely important before a company commits to microprocessor development. This is what Metior allows them to do in a very general way,” Emer says.
This research is funded, in part, by the National Science Foundation, the Air Force Office of Scientific Research, Intel, and the MIT RSC Research Fund.

• Turning old maps into 3D digital models of lost neighborhoods

    Imagine strapping on a virtual reality headset and “walking” through a long-gone neighborhood in your city — seeing the streets and buildings as they appeared decades ago.
    That’s a very real possibility now that researchers have developed a method to create 3D digital models of historic neighborhoods using machine learning and historic Sanborn Fire Insurance maps.
    But the digital models will be more than just a novelty — they will give researchers a resource to conduct studies that would have been nearly impossible before, such as estimating the economic loss caused by the demolition of historic neighborhoods.
    “The story here is we now have the ability to unlock the wealth of data that is embedded in these Sanborn fire atlases,” said Harvey Miller, co-author of the study and professor of geography at The Ohio State University.
    “It enables a whole new approach to urban historical research that we could never have imagined before machine learning. It is a game changer.”
    The study was published today (June 28, 2023) in the journal PLOS ONE.

    This research begins with the Sanborn maps, which were created to allow fire insurance companies to assess their liability in about 12,000 cities and towns in the United States during the 19th and 20th centuries. In larger cities, they were often updated regularly, said Miller, who is director of Ohio State’s Center for Urban and Regional Analysis (CURA).
    The problem for researchers was that trying to manually collect usable data from these maps was tedious and time-consuming — at least until the maps were digitized. Digital versions are now available from the Library of Congress.
    Study co-author Yue Lin, a doctoral student in geography at Ohio State, developed machine learning tools that can extract details about individual buildings from the maps, including their locations and footprints, the number of floors, their construction materials and their primary use, such as dwelling or business.
    “We are able to get a very good idea of what the buildings look like from data we get from the Sanborn maps,” Lin said.
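The paper’s exact pipeline isn’t shown here; as a rough sketch of the extraction step, an off-the-shelf instance-segmentation model can pull candidate building footprints out of a scanned map tile. The file name is hypothetical, and the COCO-pretrained weights merely stand in for a model fine-tuned on labeled Sanborn tiles:

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

# Hypothetical input: one digitized Sanborn map tile (path is illustrative).
tile = convert_image_dtype(read_image("sanborn_tile.png")[:3], torch.float)

# A COCO-pretrained Mask R-CNN stands in for the authors' model; in practice
# it would be fine-tuned on map tiles labeled with building footprints,
# floor counts, and construction materials.
model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

with torch.no_grad():
    (pred,) = model([tile])

# Keep confident detections and treat each mask as a candidate footprint.
keep = pred["scores"] > 0.7
footprints = pred["masks"][keep]   # (N, 1, H, W) soft masks
boxes = pred["boxes"][keep]        # (N, 4) pixel bounding boxes

for box, mask in zip(boxes, footprints):
    area_px = (mask[0] > 0.5).sum().item()
    print(f"building at {box.tolist()} covers ~{area_px} px")
```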
    The researchers tested their machine learning technique on two adjacent neighborhoods on the near east side of Columbus, Ohio, that were largely destroyed in the 1960s to make way for the construction of I-70.

    One of the neighborhoods, Hanford Village, was developed in 1946 to house returning Black veterans of World War II.
“The GI Bill gave returning veterans funds to purchase homes, but they could only be used on new builds,” said study co-author Gerika Logan, outreach coordinator of CURA. “So most of the homes were lost to the highway not long after they were built.”
    The other neighborhood in the study was Driving Park, which also housed a thriving Black community until I-70 split it in two.
The researchers used 13 Sanborn maps of the two neighborhoods, produced in 1961, just before I-70 was built. Machine learning techniques were able to extract the data from the maps and create digital models.
Comparing data from the Sanborn maps to today showed that a total of 380 buildings were demolished in the two neighborhoods for the highway, including 286 houses, 86 garages, five apartments and three stores.
    Analysis of the results showed that the machine learning model was very accurate in recreating the information contained in the maps — about 90% accurate for building footprints and construction materials.
    “The accuracy was impressive. We can actually get a visual sense of what these neighborhoods looked like that wouldn’t be possible in any other way,” Miller said.
    “We want to get to the point in this project where we can give people virtual reality headsets and let them walk down the street as it was in 1960 or 1940 or perhaps even 1881.”
    Using the machine learning techniques developed for this study, researchers could develop similar 3D models for nearly any of the 12,000 cities and towns that have Sanborn maps, Miller said.
    This will allow researchers to re-create neighborhoods lost to natural disasters like floods, as well as urban renewal, depopulation and other types of change.
    Because the Sanborn maps include information on businesses that occupied specific buildings, researchers could re-create digital neighborhoods to determine the economic impact of losing them to urban renewal or other factors. Another possibility would be to study how replacing homes with highways that absorbed the sun’s heat affected the urban heat island effect.
    “There’s a lot of different types of research that can be done. This will be a tremendous resource for urban historians and a variety of other researchers,” Miller said.
“Making these 3D digital models and being able to reconstruct buildings adds so much more than what you could show in a chart, graph, table or traditional map. There’s just incredible potential here.”

• NeuWS camera answers ‘holy grail problem’ in optical imaging

    Engineers from Rice University and the University of Maryland have created full-motion video technology that could potentially be used to make cameras that peer through fog, smoke, driving rain, murky water, skin, bone and other media that reflect scattered light and obscure objects from view.
    “Imaging through scattering media is the ‘holy grail problem’ in optical imaging at this point,” said Rice’s Ashok Veeraraghavan, co-corresponding author of an open-access study published today in Science Advances. “Scattering is what makes light — which has lower wavelength, and therefore gives much better spatial resolution — unusable in many, many scenarios. If you can undo the effects of scattering, then imaging just goes so much further.”
    Veeraraghavan’s lab collaborated with the research group of Maryland co-corresponding author Christopher Metzler to create a technology they named NeuWS, which is an acronym for “neural wavefront shaping,” the technology’s core technique.
“If you ask people who are working on autonomous driving vehicles about the biggest challenges they face, they’ll say, ‘Bad weather. We can’t do good imaging in bad weather,’” Veeraraghavan said. “They are saying ‘bad weather,’ but what they mean, in technical terms, is light scattering. If you ask biologists about the biggest challenges in microscopy, they’ll say, ‘We can’t image deep tissue in vivo.’ They’re saying ‘deep tissue’ and ‘in vivo,’ but what they actually mean is that skin and other layers of tissue they want to see through are scattering light. If you ask underwater photographers about their biggest challenge, they’ll say, ‘I can only image things that are close to me.’ What they actually mean is light scatters in water, and therefore doesn’t go deep enough for them to focus on things that are far away.
    “In all of these circumstances, and others, the real technical problem is scattering,” Veeraraghavan said.
    He said NeuWS could potentially be used to overcome scattering in those scenarios and others.

    “This is a big step forward for us, in terms of solving this in a way that’s potentially practical,” he said. “There’s a lot of work to be done before we can actually build prototypes in each of those application domains, but the approach we have demonstrated could traverse them.”
Conceptually, NeuWS is based on the principle that light waves are complex mathematical quantities with two key properties that can be computed for any given location. The first, magnitude, is the amount of energy the wave carries at the location, and the second is phase, which is the wave’s state of oscillation at the location. Metzler and Veeraraghavan said measuring phase is critical for overcoming scattering, but it is impractical to measure directly because of the high frequency of optical light.
    So they instead measure incoming light as “wavefronts” — single measurements that contain both phase and intensity information — and use backend processing to rapidly decipher phase information from several hundred wavefront measurements per second.
“The technical challenge is finding a way to rapidly measure phase information,” said Metzler, an assistant professor of computer science at Maryland and “triple Owl” Rice alum who earned his Ph.D., master’s and bachelor’s degrees in electrical and computer engineering from Rice in 2019, 2014 and 2013, respectively. Metzler was at Rice University during the development of an earlier iteration of wavefront-processing technology called WISH that Veeraraghavan and colleagues published in 2020.
    “WISH tackled the same problem, but it worked under the assumption that everything was static and nice,” Veeraraghavan said. “In the real world, of course, things change all of the time.”
    With NeuWS, he said, the idea is to not only undo the effects of scattering, but to undo them fast enough so the scattering media itself doesn’t change during the measurement.

    “Instead of measuring the state of oscillation itself, you measure its correlation with known wavefronts,” Veeraraghavan said. “You take a known wavefront, you interfere that with the unknown wavefront and you measure the interference pattern produced by the two. That is the correlation between those two wavefronts.”
    Metzler used the analogy of looking at the North Star at night through a haze of clouds. “If I know what the North Star is supposed to look like, and I can tell it is blurred in a particular way, then that tells me how everything else will be blurred.”
Veeraraghavan said, “It’s not a comparison, it’s a correlation, and if you measure at least three such correlations, you can uniquely recover the unknown wavefront.”
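That three-correlation recovery is essentially classical phase-shifting interferometry, and it can be verified numerically in a few lines. The sketch below (assuming, for simplicity, a unit-amplitude plane-wave reference and a static, noise-free scene, unlike NeuWS’s dynamic setting) interferes an unknown wavefront with three phase-shifted references and recovers it exactly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Unknown wavefront u(x, y): random magnitude and phase on a small grid.
H = W = 64
u = rng.uniform(0.2, 1.0, (H, W)) * np.exp(1j * rng.uniform(-np.pi, np.pi, (H, W)))

# Three known reference wavefronts: a unit plane wave with phase shifts
# 0, pi/2 and pi -- the "at least three correlations" from the quote.
shifts = [0.0, np.pi / 2, np.pi]
I = [np.abs(u + np.exp(1j * phi)) ** 2 for phi in shifts]
# Each measurement is I_k = |u|^2 + 1 + 2*Re(u * e^{-i phi_k}).

re_u = (I[0] - I[2]) / 4               # real part from the 0 / pi pair
im_u = (I[1] - (I[0] + I[2]) / 2) / 2  # imaginary part from the pi/2 shot
u_rec = re_u + 1j * im_u

print("max reconstruction error:", np.max(np.abs(u_rec - u)))  # ~1e-16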
State-of-the-art spatial light modulators can make several hundred such measurements per second, and Veeraraghavan, Metzler and colleagues showed they could use a modulator and their computational method to capture video of moving objects that were obscured from view by intervening scattering media.
“This is the first step, the proof of principle that this technology can correct for light scattering in real time,” said Rice’s Haiyun Guo, one of the study’s lead authors and a Ph.D. student in Veeraraghavan’s research group.
In one set of experiments, for example, a microscope slide containing a printed image of an owl or a turtle was spun on a spindle and filmed by an overhead camera. Light-scattering media were placed between the camera and target slide, and the researchers measured NeuWS’s ability to correct for light scattering. Examples of scattering media included onion skin, slides coated with nail polish, slices of chicken breast tissue and light-diffusing films. For each of these, the experiments showed NeuWS could correct for light scattering and produce clear video of the spinning figures.
    “We developed algorithms that allow us to continuously estimate both the scattering and the scene,” Metzler said. “That’s what allows us to do this, and we do it with mathematical machinery called neural representation that allows it to be both efficient and fast.”
NeuWS rapidly modulates light from incoming wavefronts to create several slightly altered phase measurements. The altered phases are then fed directly into a 16,000-parameter neural network that quickly computes the necessary correlations to recover the wavefront’s original phase information.
    “The neural networks allow it to be faster by allowing us to design algorithms that require fewer measurements,” Veeraraghavan said.
    Metzler said, “That’s actually the biggest selling point. Fewer measurements, basically, means we need much less capture time. It’s what allows us to capture video rather than still frames.”
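The article gives the recovery network’s size (about 16,000 parameters) but not its architecture, so the following is only a shape-level sketch: a small multilayer perceptron of roughly that parameter count mapping a handful of modulated intensity measurements to per-pixel phase. The measurement count, patch size and layer widths are all assumptions:

```python
import torch
from torch import nn

# Illustrative sizes only: 4 modulated shots over an 8x8 pixel patch.
N_MEAS, PATCH = 4, 8

net = nn.Sequential(
    nn.Linear(N_MEAS * PATCH * PATCH, 56),
    nn.ReLU(),
    nn.Linear(56, PATCH * PATCH),  # one recovered phase value per pixel
)
# ~18,000 parameters, in the ballpark of the ~16,000 quoted in the article.
print(sum(p.numel() for p in net.parameters()))

# One training step against a known phase target (synthetic data).
meas = torch.randn(32, N_MEAS * PATCH * PATCH)
target_phase = torch.rand(32, PATCH * PATCH) * 2 * torch.pi - torch.pi
loss = nn.functional.mse_loss(net(meas), target_phase)
loss.backward()
```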
The research was supported by the Air Force Office of Scientific Research (FA9550-22-1-0208), the National Science Foundation (1652633, 1730574, 1648451) and the National Institutes of Health (DE032051), and partial funding for open access was provided by the University of Maryland Libraries’ Open Access Publishing Fund.

• GPT-3 informs and disinforms us better

    A recent study conducted by researchers at the University of Zurich delved into the capabilities of AI models, specifically focusing on OpenAI’s GPT-3, to determine their potential risks and benefits in generating and disseminating (dis)information. Led by postdoctoral researchers Giovanni Spitale and Federico Germani, alongside Nikola Biller-Andorno, director of the Institute of Biomedical Ethics and History of Medicine (IBME), University of Zurich, the study involving 697 participants sought to evaluate whether individuals could differentiate between disinformation and accurate information presented in the form of tweets. Furthermore, the researchers aimed to determine if participants could discern whether a tweet was written by a genuine Twitter user or generated by GPT-3, an advanced AI language model. The topics covered included climate change, vaccine safety, the COVID-19 pandemic, flat earth theory, and homeopathic treatments for cancer.
    AI-powered systems could generate large-scale disinformation campaigns
    On the one hand, GPT-3 demonstrated the ability to generate accurate and, compared to tweets from real Twitter users, more easily comprehensible information. However, the researchers also discovered that the AI language model had an unsettling knack for producing highly persuasive disinformation. In a concerning twist, participants were unable to reliably differentiate between tweets created by GPT-3 and those written by real Twitter users. “Our study reveals the power of AI to both inform and mislead, raising critical questions about the future of information ecosystems,” says Federico Germani.
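The study’s actual data and code are in the OSF repository linked below; as a minimal illustration of how “unable to reliably differentiate” can be tested, one can compare raters’ accuracy on an AI-versus-human labeling task against chance with a binomial test. The responses here are synthetic, not the study’s data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic stand-in for the recognition task: 1 = "AI-generated".
# truth: whether each of 200 tweets really was GPT-3 output;
# resp:  a rater pool guessing only barely above chance.
truth = rng.integers(0, 2, size=200)
resp = np.where(rng.random(200) < 0.52, truth, 1 - truth)

accuracy = (resp == truth).mean()

# Binomial test against 50%: can raters tell the sources apart?
k = int((resp == truth).sum())
p_value = stats.binomtest(k, n=200, p=0.5).pvalue

print(f"accuracy = {accuracy:.2f}, p = {p_value:.3f} vs. chance")
```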
These findings suggest that information campaigns created by GPT-3, based on well-structured prompts and evaluated by trained humans, would prove more effective, for instance, in a public health crisis that requires fast and clear communication to the public. The findings also raise significant concerns regarding the threat of AI perpetuating disinformation, particularly in the context of the rapid and widespread dissemination of misinformation and disinformation during a crisis or public health event. The study reveals that AI-powered systems could be exploited to generate large-scale disinformation campaigns on potentially any topic, jeopardizing not only public health but also the integrity of information ecosystems vital for functioning democracies.
    Proactive regulation highly recommended
    As the impact of AI on information creation and evaluation becomes increasingly pronounced, the researchers call on policymakers to respond with stringent, evidence-based and ethically informed regulations to address the potential threats posed by these disruptive technologies and ensure the responsible use of AI in shaping our collective knowledge and well-being. “The findings underscore the critical importance of proactive regulation to mitigate the potential harm caused by AI-driven disinformation campaigns,” says Nikola Biller-Andorno. “Recognizing the risks associated with AI-generated disinformation is crucial for safeguarding public health and maintaining a robust and trustworthy information ecosystem in the digital age.”
    Transparent research using open science best practice
The study adhered to open science best practices throughout the entire pipeline, from pre-registration to dissemination. Giovanni Spitale, who is also an UZH Open Science Ambassador, states: “Open science is vital for fostering transparency and accountability in research, allowing for scrutiny and replication. In the context of our study, it becomes even more crucial as it enables stakeholders to access and evaluate the data, code, and intermediate materials, enhancing the credibility of our findings and facilitating informed discussions on the risks and implications of AI-generated disinformation.” Interested parties can access these resources through the OSF repository: https://osf.io/9ntgf/.

• Geologists are using artificial intelligence to predict landslides

    A new technique developed by UCLA geologists that uses artificial intelligence to better predict where and why landslides may occur could bolster efforts to protect lives and property in some of the world’s most disaster-prone areas.
    The new method, described in a paper published in the journal Communications Earth & Environment, improves the accuracy and interpretability of AI-based machine-learning techniques, requires far less computing power and is more broadly applicable than traditional predictive models.
The approach would be particularly valuable in places like California, the researchers say, where drought, wildfires and earthquakes create the perfect recipe for landslide disasters and where the situation is expected to get worse as climate change brings stronger and wetter storms.
    Many factors influence where a landslide will occur, including the shape of the terrain, its slope and drainage areas, the material properties of soil and bedrock, and environmental conditions like climate, rainfall, hydrology and ground motion resulting from earthquakes. With so many variables, predicting when and where a chunk of earth is likely to lose its grip is as much an art as a science.
    Geologists have traditionally estimated an area’s landslide risk by incorporating these factors into physical and statistical models. With enough data, such models can achieve reasonably accurate predictions, but physical models are time- and resource-intensive and can’t be applied over broad areas, while statistical models give little insight into how they assess various risk factors to arrive at their predictions.
    Using artificial intelligence to predict landslides
    In recent years, researchers have trained AI machine-learning models known as deep neural networks, or DNNs, to predict landslides. When fed reams of landslide-related variables and historical landslide information, these large, interconnected networks of algorithms can very quickly process and “learn” from this data to make highly accurate predictions.

    Yet despite their advantages in processing time and learning power, as with statistical models, DNNs do not “show their work,” making it difficult for researchers to interpret their predictions and to know which causative factors to target in attempting to prevent possible landslides in the future.
    “DNNs will deliver a percentage likelihood of a landslide that may be accurate, but we are unable to figure out why and which specific variables were most important in causing the landslide,” said Kevin Shao, a doctoral student in Earth, planetary and space sciences and co-first author of the journal paper.
The problem, said co-first author Khalid Youssef, a former biomedical engineering student and postdoctoral researcher at UCLA, is that the various network layers of DNNs constantly feed into one another during the learning process, and untangling their analysis is impossible. The UCLA researchers’ new method aimed to address that.
    “We sought to enable a clear separation of the results from the different data inputs, which would make the analysis far more useful in determining which factors are the most important contributors to natural disasters,” he said.
    Youssef and Shao teamed with co-corresponding authors Seulgi Moon, a UCLA associate professor of Earth, planetary and space sciences, and Louis Bouchard, a UCLA professor of chemistry and bioengineering, to develop an approach that could decouple the analytic power of DNNs from their complex adaptive nature in order to deliver more actionable results.

    Their method uses a type of AI called a superposable neural network, or SNN, in which the different layers of the network run alongside each other — retaining the ability to assess the complex relationships between data inputs and output results — but only converging at the very end to yield the prediction.
The researchers fed the SNN data on 15 geospatial and climatic variables relevant to the eastern Himalaya mountains. The region was selected because the majority of human losses due to landslides occur in Asia, with a substantial portion in the Himalayas. The SNN model was able to predict landslide susceptibility for Himalayan areas with an accuracy rivaling that of DNNs, but most importantly, the researchers could tease apart the variables to see which ones played bigger roles in producing the results.
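A minimal sketch of the superposable idea (not the authors’ exact architecture): give each of the 15 input variables its own small subnetwork and sum the per-variable contributions only at the output, so the prediction can be decomposed variable by variable:

```python
import torch
from torch import nn

N_VARS = 15  # geospatial and climatic inputs, as in the study

class PerVariableNet(nn.Module):
    """One small subnetwork per input variable; the branches converge
    only at the final sum, so each variable's share of the prediction
    stays readable."""
    def __init__(self, n_vars: int, hidden: int = 16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))
            for _ in range(n_vars)
        ])

    def forward(self, x):  # x: (batch, n_vars)
        contribs = torch.cat(
            [b(x[:, i : i + 1]) for i, b in enumerate(self.branches)], dim=1
        )                                       # (batch, n_vars)
        logit = contribs.sum(dim=1)             # converge only at the end
        return torch.sigmoid(logit), contribs   # susceptibility + attribution

model = PerVariableNet(N_VARS)
x = torch.randn(4, N_VARS)          # 4 hypothetical map cells
prob, contribs = model(x)
print(prob.shape, contribs.shape)   # torch.Size([4]) torch.Size([4, 15])
```

Because the branches never mix until the final sum, reading off `contribs` directly gives each variable’s share of the susceptibility score, which is the interpretability that a standard DNN lacks.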
    “Similar to how autopsies are required to determine the cause of death, identifying the exact trigger for a landslide will always require field measurements and historical records of soil, hydrologic and climate conditions, such as rainfall amount and intensity, which can be hard to obtain in remote places like the Himalayas,” Moon said. “Nonetheless, our new AI prediction model can identify key variables and quantify their contributions to landslide susceptibility.”
    The researchers’ new AI program also requires far fewer computer resources than DNNs and can run effectively with relatively little computing power.
    “The SNN is so small it can run on an Apple Watch, as opposed to DNNs, which require powerful computer servers to train,” Bouchard said.
The team plans to extend their work to other landslide-prone regions of the world. In California, for example, where landslide risk is exacerbated by frequent wildfires and earthquakes, and in similar areas, the new system may help develop early warning systems that account for a multitude of signals and predict a range of other surface hazards, including floods.

• Fiber optic smart pants offer a low-cost way to monitor movements

    With an aging global population comes a need for new sensor technologies that can help clinicians and caregivers remotely monitor a person’s health. New smart pants based on fiber optic sensors could help by offering a nonintrusive way to track a person’s movements and issue alerts if there are signs of distress.
    “Our polymer optical fiber smart pants can be used to detect activities such as sitting, squatting, walking or kicking without inhibiting natural movements,” said research team leader Arnaldo Leal-Junior from the Federal University of Espirito Santo in Brazil. “This approach avoids the privacy issues that come with image-based systems and could be useful for monitoring aging patients at home or measuring parameters such as range of motion in rehabilitation clinics.”
    In the Optica Publishing Group journal Biomedical Optics Express, the researchers describe the new smart pants, which feature transparent optical fibers directly integrated into the textile. They also developed a portable signal acquisition unit that can be placed inside the pants pocket.
    “This research shows that it is possible to develop low-cost wearable sensing systems using optical devices,” said Leal-Junior. “We also demonstrate that new machine learning algorithms can be used to extend the sensing capabilities of smart textiles and possibly enable the measurement of new parameters.”
    Creating fiber optic pants
    The research is part of a larger project focused on the development of photonic textiles for low-cost wearable sensors. Although devices like smartwatches can tell if a person is moving, the researchers wanted to develop a technology that could detect specific types of activity without hindering movement in any way. They did this by incorporating intensity variation polymer optical fiber sensors directly into fabric that was then used to create pants.

The sensors were based on polymethyl methacrylate optical fibers that are 1 mm in diameter. The researchers created sensitive areas in the fibers by removing small sections of the outer cladding to expose the fiber core. When the fiber bends due to movement, the optical power traveling through it changes, and that change can be used to identify what type of physical modification was applied to the sensitive area of the fiber.
By creating these sensitive fiber areas in various locations, the researchers created a multiplexed sensor system with 30 measurement points on each leg. They also developed a new machine learning algorithm to classify different types of activities and to identify gait parameters based on the sensor data.
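The team’s algorithm isn’t detailed in this article; as a stand-in, the sketch below shows the general shape of the task: windowed features from the 60 measurement points feeding a standard classifier over the seven activities. The data are synthetic and the random forest is an assumption, not the authors’ method:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

ACTIVITIES = ["slow walk", "fast walk", "squat", "sit chair",
              "sit floor", "front kick", "back kick"]
N_POINTS = 60  # 30 sensitive fiber regions on each leg

# Synthetic stand-in for windowed optical-power features: per window,
# the mean and standard deviation of each measurement point's signal.
n_windows = 700
X = rng.normal(size=(n_windows, N_POINTS * 2))
y = rng.integers(len(ACTIVITIES), size=n_windows)
X[np.arange(n_windows), y] += 3.0  # inject class-dependent structure

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```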
    Classifying activities
    To test their prototype, the researchers had volunteers wear the smart pants and perform specific activities: slow walking, fast walking, squatting, sitting on a chair, sitting on the floor, front kicking and back kicking. The sensing approach achieved 100% accuracy in classifying these activities.
    “Fiber optic sensors have several advantages, including the fact that they are immune to electric or magnetic interference and can be easily integrated into different clothing accessories due to their compactness and flexibility,” said Leal-Junior. “Basing the device on a multiplexed optical power variation sensor also makes the sensing approach low-cost and highly reliable.”
    The researchers are now working to connect the signal acquisition unit to the cloud, which would enable the data to be accessed remotely. They also plan to test the smart textile in home settings.
This work was performed in LabSensors/LabTel within the scope of an assistive technologies framework funded by the Brazilian agencies FINEP and CNPq.

• ‘Electronic skin’ from bio-friendly materials can track human vital signs with ultrahigh precision

Queen Mary University of London and University of Sussex researchers have used materials inspired by molecular gastronomy to create smart wearables that surpassed similar devices in terms of strain sensitivity. They integrated graphene into seaweed to create nanocomposite microcapsules for highly tunable and sustainable epidermal electronics. When assembled into networks, the tiny capsules can record muscular, breathing, pulse, and blood pressure measurements in real time with ultrahigh precision.
Currently, much of the research on nanocomposite-based sensors involves non-sustainable materials, meaning these devices contribute to plastic waste when they are no longer in use. A new study, published on 28 June in Advanced Functional Materials, shows for the first time that it is possible to combine molecular gastronomy concepts with biodegradable materials to create devices that are not only environmentally friendly, but also have the potential to outperform their non-sustainable counterparts.
    Scientists used seaweed and salt, two very commonly used materials in the restaurant industry, to create graphene capsules made up of a solid seaweed/graphene gel layer surrounding a liquid graphene ink core. This technique is similar to how Michelin star restaurants serve capsules with a solid seaweed/raspberry jam layer surrounding a liquid jam core.
Unlike the molecular gastronomy capsules, though, the graphene capsules are very sensitive to pressure: when squeezed or compressed, their electrical properties change dramatically. This means they can be utilised as highly efficient strain sensors, facilitating the creation of smart wearable skin-on devices for high-precision, real-time biomechanical and vital-sign measurements.
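Strain sensors of this kind are typically characterized by a gauge factor relating relative resistance change to strain, dR/R0 = GF x strain. The numbers below are hypothetical, not from the paper, purely to show how a readout would be inverted:

```python
import numpy as np

GF = 40.0   # assumed gauge factor for the capsules (hypothetical)
R0 = 1.2e3  # assumed unloaded resistance, in ohms (hypothetical)

def strain_from_resistance(r_measured):
    """Invert dR/R0 = GF * strain to recover strain from a reading."""
    return (r_measured - R0) / (R0 * GF)

# Example: a pulse-like squeeze raises resistance by up to 2%.
readings = R0 * (1 + 0.02 * np.sin(np.linspace(0, np.pi, 5)) ** 2)
print(np.round(strain_from_resistance(readings) * 100, 4), "% strain")
```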
    Dr Dimitrios Papageorgiou, Lecturer in Materials Science at Queen Mary University of London, said: “By introducing a ground-breaking fusion of culinary artistry and cutting-edge nanotechnology, we harnessed the extraordinary properties of newly-created seaweed-graphene microcapsules that redefine the possibilities of wearable electronics. Our discoveries offer a powerful framework for scientists to reinvent nanocomposite wearable technologies for high precision health diagnostics, while our commitment to recyclable and biodegradable materials is fully aligned with environmentally conscious innovation.”
    This research can now be used as a blueprint by other labs to understand and manipulate the strain sensing properties of similar materials, pushing the concept of nano-based wearable technologies to new heights.
    The environmental impact of plastic waste has had a profound effect on our livelihoods and there is a need for future plastic-based epidermal electronics to trend towards more sustainable approaches. The fact that these capsules are made using recyclable and biodegradable materials could impact the way we think about wearable sensing devices and the effect of their presence.
Dr Papageorgiou said: “We are also very proud of the collaborative effort between Dr Conor Boland’s group from University of Sussex and my group from Queen Mary University of London that fuelled this ground-breaking research. This partnership exemplifies the power of scientific collaboration, bringing together diverse expertise to push the boundaries of innovation.”

• Research breakthrough could be significant for quantum computing future

    Scientists using one of the world’s most powerful quantum microscopes have made a discovery that could have significant consequences for the future of computing.
Researchers at the Macroscopic Quantum Matter Group laboratory in University College Cork (UCC) have discovered a spatially modulating superconducting state in a new and unusual superconductor, uranium ditelluride (UTe2). This new superconductor may provide a solution to one of quantum computing’s greatest challenges.
Their finding has been published in the journal Nature.
    Lead author Joe Carroll, a PhD researcher working with UCC Prof. of Quantum Physics Séamus Davis, explains the subject of the paper.
“Superconductors are amazing materials which have many strange and unusual properties. Most famously, they allow electricity to flow with zero resistance. That is, if you pass a current through them they don’t start to heat up; in fact, they don’t dissipate any energy despite carrying a huge current. They can do this because instead of individual electrons moving through the metal we have pairs of electrons which bind together. Together, these pairs of electrons form a macroscopic quantum mechanical fluid.”
“What our team found was that some of the electron pairs form a new crystal structure embedded in this background fluid. These types of states were first discovered by our group in 2016 and are now called Electron Pair-Density Waves. These Pair Density Waves are a new form of superconducting matter, the properties of which we are still discovering.”
    “What is particularly exciting for us and the wider community is that UTe2 appears to be a new type of superconductor. Physicists have been searching for a material like it for nearly 40 years. The pairs of electrons appear to have intrinsic angular momentum. If this is true, then what we have detected is the first Pair-Density Wave composed of these exotic pairs of electrons.”

When asked about the practical implications of this work, Mr. Carroll explained:
    “There are indications that UTe2 is a special type of superconductor that could have huge consequences for quantum computing.”
“Typical classical computers use bits to store and manipulate information. Quantum computers rely on quantum bits or qubits to do the same. The problem facing existing quantum computers is that each qubit must be in a superposition with two different energies — just as Schrödinger’s cat could be called both ‘dead’ and ‘alive’. This quantum state is very easily destroyed by collapsing into the lowest energy state — ‘dead’ — thereby cutting off any useful computation.
“This places huge limits on the application of quantum computers. However, since its discovery five years ago there has been a huge amount of research on UTe2, with evidence pointing to it being a superconductor which may be used as a basis for topological quantum computing. In such materials there is no limit on the lifetime of the qubit during computation, opening up many new ways for more stable and useful quantum computers. In fact, Microsoft has already invested billions of dollars into topological quantum computing, so this is a well-established theoretical science already,” he said.
    “What the community has been searching for is a relevant topological superconductor; UTe2 appears to be that.”
    “What we’ve discovered then provides another piece to the puzzle of UTe2. To make applications using materials like this we must understand their fundamental superconducting properties. All of modern science moves step by step. We are delighted to have contributed to the understanding of a material which could bring us closer to much more practical quantum computers.”
Congratulating the research team at the Macroscopic Quantum Matter Group laboratory in University College Cork, Professor John F. Cryan, Vice President for Research and Innovation, said:
“This important discovery will have significant consequences for the future of quantum computing. In the coming weeks, the University will launch UCC Futures — Future Quantum and Photonics, and research led by Professor Séamus Davis and the Macroscopic Quantum Matter Group, with the use of one of the world’s most powerful microscopes, will play a crucial role in this exciting initiative.”