More stories

  • A new type of cooling for quantum simulators

    Quantum experiments always have to deal with the same problem, regardless of whether they involve quantum computers, quantum teleportation or new types of quantum sensors: quantum effects break down very easily. They are extremely sensitive to external disturbances — for example, to fluctuations caused simply by the surrounding temperature. It is therefore important to be able to cool down quantum experiments as effectively as possible.
    At TU Wien (Vienna), it has now been shown that this type of cooling can be achieved in an interesting new way: a Bose-Einstein condensate is split into two parts, neither abruptly nor particularly slowly, but with a carefully chosen temporal dynamic that suppresses random fluctuations as effectively as possible. In this way, the relevant temperature in the already extremely cold Bose-Einstein condensate can be significantly reduced. This is important for quantum simulators, which are used at TU Wien to gain insights into quantum effects that could not be investigated using previous methods.
    Quantum simulators
    “We work with quantum simulators in our research,” says Maximilian Prüfer, who is researching new methods at TU Wien’s Atomic Institute with the help of an Esprit Grant from the FWF. “Quantum simulators are systems whose behavior is determined by quantum mechanical effects and which can be controlled and monitored particularly well. These systems can therefore be used to study fundamental phenomena of quantum physics that also occur in other quantum systems, which cannot be studied so easily.”
    This means that a physical system is used to actually learn something about other systems. This idea is not entirely new in physics: for example, you can also carry out experiments with water waves in order to learn something about sound waves — but water waves are easier to observe.
    “In quantum physics, quantum simulators have become an extremely useful and versatile tool in recent years,” says Maximilian Prüfer. “Among the most important tools for realizing interesting model systems are clouds of extremely cold atoms, such as those we study in our laboratory.” In the current paper published in Physical Review X, the scientists led by Jörg Schmiedmayer and Maximilian Prüfer investigated how quantum entanglement evolves over time and how this can be used to achieve an even colder temperature equilibrium than before. Quantum simulation is also a central topic in the recently launched QuantA Cluster of Excellence, in which various quantum systems are being investigated.
    The colder, the better
    The decisive factor that usually limits the suitability of such quantum simulators at present is their temperature: “The better we cool down the interesting degrees of freedom of the condensate, the better we can work with it and the more we can learn from it,” says Maximilian Prüfer.

    There are different ways to cool something down: for example, you can cool a gas by increasing its volume very slowly. With extremely cold Bose-Einstein condensates, other tricks are typically used: the most energetic atoms are quickly removed until only atoms with a fairly uniform, low energy remain, leaving a cooler cloud behind.
    “But we use a completely different technique,” says Tiantian Zhang, first author of the study, who investigated this topic as part of her doctoral thesis at the Doctoral College of the Vienna Center for Quantum Science and Technology. “We create a Bose-Einstein condensate and then split it into two parts by creating a barrier in the middle.” The number of particles which end up on the right side and on the left side of the barrier is undetermined. Due to the laws of quantum physics, there is a certain amount of uncertainty here. One could say that both sides are in a quantum-physical superposition of different particle number states.
    “On average, exactly 50% of the particles are on the left and 50% on the right,” says Maximilian Prüfer. “But quantum physics says that there are always certain fluctuations. The fluctuations, i.e. the deviations from the expected value, are closely related to the temperature.”
    Cooling by controlling the fluctuations
    The research team at TU Wien was able to show that neither an extremely abrupt nor an extremely slow splitting of the Bose-Einstein condensate is optimal. A compromise must be found: a cleverly tailored way of dynamically splitting the condensate that controls the quantum fluctuations as well as possible. This cannot be calculated in advance, because the problem is intractable for conventional computers. In experiments, however, the research team was able to show that suitable splitting dynamics suppress the fluctuations in the particle number, and this in turn translates into a reduction of the temperature that one wants to minimize.
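    The link between particle-number fluctuations and temperature invoked here can be made explicit in the standard two-mode (Josephson) description of a split condensate. The relation below is a generic textbook result, shown purely for orientation, with the charging energy E_c introduced here as a label; it is not the specific expression used in the TU Wien paper.

      % Thermal number fluctuations of a split two-mode condensate (illustrative only):
      % n = (N_L - N_R)/2 is the atom-number difference between the two halves,
      % E_c the charging energy, and T_eff the effective temperature of this mode.
      \langle \Delta n^2 \rangle \;\approx\; \frac{k_B \, T_\mathrm{eff}}{E_c}

    In this picture, the measured variance of the atom-number difference acts as a thermometer for one specific degree of freedom, and a splitting protocol that reduces this variance is exactly what “cooling” means in the paragraphs above.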
    “Different temperature scales exist simultaneously in this system, and we lower a very specific one of them,” explains Maximilian Prüfer. “So you can’t think of it like a mini-fridge that gets noticeably colder overall. But that’s not what we’re talking about: suppressing the fluctuations is exactly what we need to be able to use our system as a quantum simulator even better than before. We can now use it to answer questions from fundamental quantum physics that were previously inaccessible.”

  • Hidden geometry of learning: Neural networks think alike

    Penn Engineers have uncovered an unexpected pattern in how neural networks — the systems leading today’s AI revolution — learn, suggesting an answer to one of the most important unanswered questions in AI: why these methods work so well.
    Inspired by biological neurons, neural networks are computer programs that take in data and train themselves by repeatedly making small modifications to the weights or parameters that govern their output, much like neurons adjusting their connections to one another. The final result is a model that allows the network to make predictions on data it has not seen before. Neural networks are used today in essentially all fields of science and engineering, from medicine to cosmology, identifying potentially diseased cells and discovering new galaxies.
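    To make “repeatedly making small modifications to the weights” concrete, the following is a deliberately tiny, generic sketch of a training loop (plain gradient descent on a one-layer model with made-up data). It is illustrative only and is not code from, or specific to, the networks studied in the paper.

      # Generic illustration of a training loop: a tiny one-layer model repeatedly
      # nudges its weights to reduce its error on toy, randomly generated data.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 2))                 # toy inputs
      y = (X[:, 0] + X[:, 1] > 0).astype(float)     # toy binary labels

      w = np.zeros(2)                               # the model's weights
      lr = 0.1                                      # size of each small modification
      for step in range(1000):
          p = 1.0 / (1.0 + np.exp(-X @ w))          # current predicted probabilities
          grad = X.T @ (p - y) / len(y)             # direction that reduces the error
          w -= lr * grad                            # small modification to the weights

      p = 1.0 / (1.0 + np.exp(-X @ w))
      print("training accuracy:", float(((p > 0.5) == y).mean()))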
    In a new paper published in the Proceedings of the National Academy of Sciences (PNAS), Pratik Chaudhari, Assistant Professor in Electrical and Systems Engineering (ESE) and core faculty at the General Robotics, Automation, Sensing and Perception (GRASP) Lab, and co-author James Sethna, James Gilbert White Professor of Physical Sciences at Cornell University, show that neural networks, no matter their design, size or training recipe, follow the same route from ignorance to truth when presented with images to classify.
    Jialin Mao, a doctoral student in Applied Mathematics and Computational Science at the University of Pennsylvania School of Arts & Sciences, is the paper’s lead author.
    “Suppose the task is to identify pictures of cats and dogs,” says Chaudhari. “You might use the whiskers to classify them, while another person might use the shape of the ears — you would presume that different networks would use the pixels in the images in different ways, and some networks certainly achieve better results than others, but there is a very strong commonality in how they all learn. This is what makes the result so surprising.”
    The result not only illuminates the inner workings of neural networks, but gestures toward the possibility of developing hyper-efficient algorithms that could classify images in a fraction of the time, at a fraction of the cost. Indeed, one of the highest costs associated with AI is the immense computational power required to develop neural networks. “These results suggest that there may exist new ways to train them,” says Chaudhari.
    To illustrate the potential of this new method, Chaudhari suggests imagining the networks as trying to chart a course on a map. “Let us imagine two points,” he says. “Ignorance, where the network does not know anything about the correct labels, and Truth, where it can correctly classify all images. Training a network corresponds to charting a path between Ignorance and Truth in probability space — in billions of dimensions. But it turns out that different networks take the same path, and this path is more like three-, four-, or five-dimensional.”
    In other words, despite the staggering complexity of neural networks, classifying images — one of the foundational tasks for AI systems — requires only a small fraction of that complexity. “This is actually evidence that the details of the network design, size or training recipes matter less than we think,” says Chaudhari.

    To arrive at these insights, Chaudhari and Sethna borrowed tools from information geometry, a field that brings together geometry and statistics. By treating each network as a distribution of probabilities, the researchers were able to make a true apples-to-apples comparison among the networks, revealing their unexpected, underlying similarities. “Because of the peculiarities of high-dimensional spaces, all points are far away from one another,” says Chaudhari. “We developed more sophisticated tools that give us a cleaner picture of the networks’ differences.”
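    As a concrete, deliberately simplified illustration of this kind of apples-to-apples comparison, one can represent each trained network by the probabilities it assigns to the same held-out images and measure a distance between those probability assignments. The sketch below uses the Bhattacharyya distance, a standard information-geometric quantity; the function and array names are hypothetical, and this is not necessarily the exact computation reported in the paper.

      # Compare two trained classifiers as probability distributions over the same
      # held-out samples, using an averaged Bhattacharyya distance (illustrative sketch).
      import numpy as np

      def mean_bhattacharyya(p, q, eps=1e-12):
          """p, q: arrays of shape (n_samples, n_classes) of predicted probabilities."""
          bc = np.sum(np.sqrt((p + eps) * (q + eps)), axis=1)  # per-sample Bhattacharyya coefficient
          return float(np.mean(-np.log(bc)))                   # 0 means identical predictions

      # Hypothetical usage: p_model_a and p_model_b are softmax outputs of two very
      # different architectures evaluated on the same test images.
      # d = mean_bhattacharyya(p_model_a, p_model_b)

    A consistently small distance between networks of very different designs, measured along the whole course of training, is the kind of evidence that they are traversing the same low-dimensional path from “Ignorance” to “Truth”.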
    Using a wide variety of techniques, the team trained hundreds of thousands of networks, of many different varieties, including multi-layer perceptrons, convolutional and residual networks, and the transformers that are at the heart of systems like ChatGPT. “Then, this beautiful picture emerged,” says Chaudhari. “The output probabilities of these networks were neatly clustered together on these thin manifolds in gigantic spaces.” In other words, the paths that represented the networks’ learning aligned with one another, showing that they learned to classify images the same way.
    Chaudhari offers two potential explanations for this surprising phenomenon: first, neural networks are never trained on random assortments of pixels. “Imagine salt and pepper noise,” says Chaudhari. “That is clearly an image, but not a very interesting one — images of actual objects like people and animals are a tiny, tiny subset of the space of all possible images.” Put differently, asking a neural network to classify images that matter to humans is easier than it seems, because there are many possible images the network never has to consider.
    Second, the labels neural networks use are somewhat special. Humans group objects into broad categories, like dogs and cats, and do not have separate words for every particular member of every breed of animals. “If the networks had to use all the pixels to make predictions,” says Chaudhari, “then the networks would have figured out many, many different ways.” But the features that distinguish, say, cats and dogs are themselves low-dimensional. “We believe these networks are finding the same relevant features,” adds Chaudhari, likely by identifying commonalities like ears, eyes, markings and so on.
    Discovering an algorithm that will consistently find the path needed to train a neural network to classify images using just a handful of inputs is an unresolved challenge. “This is the billion-dollar question,” says Chaudhari. “Can we train neural networks cheaply? This paper gives evidence that we might be able to. We just don’t know how.”
    This study was conducted at the University of Pennsylvania School of Engineering and Applied Science and Cornell University. It was supported by grants from the National Science Foundation, National Institutes of Health, the Office of Naval Research, Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship and cloud computing credits from Amazon Web Services.
    Other co-authors include Rahul Ramesh at Penn Engineering; Rubing Yang at the University of Pennsylvania School of Arts & Sciences; Itay Griniasty and Han Kheng Teoh at Cornell University; and Mark K. Transtrum at Brigham Young University.

  • Memory self-test via smartphone can identify early signs of Alzheimer’s disease

    Dedicated memory tests on smartphones enable the detection of “mild cognitive impairment,” a condition that may indicate Alzheimer’s disease, with high accuracy. Researchers from DZNE, the Otto-von-Guericke University Magdeburg and the University of Wisconsin-Madison in the United States who collaborated with the Magdeburg-based company “neotiv” report these findings in the scientific journal npj Digital Medicine. Their study is based on data from 199 older adults. The results underline the potential of mobile apps for Alzheimer’s disease research, clinical trials and routine medical care. The app that has been evaluated is now being offered to medical doctors to support the early detection of memory problems.
    Memory problems are a key symptom of Alzheimer’s disease. Not surprisingly, their severity and progression play a central role in the diagnosis of Alzheimer’s disease and also in Alzheimer’s research. In current clinical practice, memory assessment is performed under the guidance of a medical professional. The individuals being tested have to complete standardized tasks in writing or in conversation: for example, remembering and repeating words, spontaneously naming as many terms as possible on a certain topic, or drawing geometric figures according to instructions. All these tests require professional supervision; otherwise, the results are not conclusive. They therefore cannot be completed alone, for example at home.
    Prof. Emrah Düzel, a senior neuroscientist at DZNE’s Magdeburg site and at University Magdeburg as well as entrepreneur in medical technology, advocates a new approach: “It has advantages if you can carry out such tests on your own and only have to visit the doctor’s office to evaluate the results. Just as we know it from a long-term ECG, for example. Unsupervised testing would help to detect clinically relevant memory impairment at an earlier stage and track disease progression more closely than is currently possible. In view of recent developments in Alzheimer’s therapy and new treatment options, early diagnosis is becoming increasingly important.”
    Comparison between remote at-home and supervised in-clinic testing
    In addition to his involvement in dementia research, Düzel is also “Chief Medical Officer” of “neotiv,” a Magdeburg-based start-up with which the DZNE has been cooperating for several years. The company has developed an app with which memory tests can be carried out autonomously, with no need for professional supervision. The software runs on smartphones and tablets and has been scientifically validated; it is used in Alzheimer’s disease research and is now also offered as a digital tool for medical doctors to support the detection of mild cognitive impairment (MCI). Although MCI has little impact on the daily living of affected individuals, they nevertheless have an increased risk of developing Alzheimer’s dementia within a few years.
    Dr. David Berron, research group leader at DZNE and co-founder of neotiv, explains: “As part of the validation process, we applied these novel remote and unsupervised assessments as well as an established in-clinic neuropsychological test battery. We found that the novel method is comparable to in-clinic assessments and detects mild cognitive impairment, also known as MCI, with high accuracy. This technology has enormous potential to provide clinicians with information that they cannot obtain during a patient visit to the clinic.” These findings have now been published in the scientific journal npj Digital Medicine.
    Participants from Germany and the USA
    A total of 199 women and men over the age of 60 participated in the current study. They were located either in Germany or the USA and were each enrolled in one of two long-term observational studies, both of which address Alzheimer’s, the most common form of dementia: DZNE’s DELCODE study (Longitudinal Cognitive Impairment and Dementia Study) and the WRAP (Wisconsin Registry for Alzheimer’s Prevention) study of the University of Wisconsin-Madison. The study sample reflected the varying cognitive conditions that occur in a real-world situation: it included individuals who were cognitively healthy, patients with MCI, and others with subjectively perceived but not measurable memory problems. The diagnosis was based on established assessments that included, for example, memory and language tasks. In addition, all participants completed multiple memory assessments with the neotiv app over a period of at least six weeks, using their own smartphones or tablets, wherever it was convenient for them. “We found that a majority of our WRAP participants were able to complete the unsupervised digital tasks remotely and they were satisfied with the tasks and the digital platform,” says Lindsay Clark, PhD, neuropsychologist and lead investigator of the Assessing Memory with Mobile Devices study at the University of Wisconsin-Madison.

    Remembering images and detecting differences
    “Assessments with the neotiv app are interactive and comprise three types of memory tasks. These address different areas of the brain that can be affected by Alzheimer’s disease in different disease stages. Many years of research have gone into this,” Düzel explains. Essentially, these tests involve remembering images or recognizing differences between images that are presented by the app. Using a specially developed score, the German-US research team was able to compare the results of the app with the findings of the established in-clinic assessments. “Our study shows that memory complaints can be meaningfully assessed using this digital, remote and unsupervised approach,” says Düzel. “If the results from the digital assessment indicate that there is memory impairment typical of MCI, this paves the way for further clinical examinations. If test results indicate that memory is within the age-specific normal range, individuals can be given an all-clear signal for the time being. And for Alzheimer’s disease research, this approach provides a digital cognitive assessment tool that can be used in clinical studies — as is already being done in Germany, the USA, Sweden and other countries.”
    Outlook
    Further studies are in preparation or already underway. The novel memory assessment is to be tested in even larger study groups, and the researchers also intend to investigate whether it can be used to track the progression of Alzheimer’s disease over a longer period of time. Berron: “Information about how quickly memory declines over time is important for medical doctors and patients. It is also important for clinical trials, as new treatments aim to slow the rate of cognitive decline.” The cognitive neuroscientist describes the challenges: “To advance such self-tests, a patient’s clinical data must be linked to self-tests outside the clinic, in the real world. This is no easy task, but as our current study shows, we are making progress as a field.”

  • Optimizing electronic health records: Study reveals improvements in departmental productivity

    In a study published in the Annals of Family Medicine, researchers at the Marshall University Joan C. Edwards School of Medicine identify transformative effects of electronic health record (EHR) optimization on departmental productivity. With the universal implementation of EHR systems, the study sheds light on the importance of collaborative efforts between clinicians and information technology (IT) experts in maximizing the potential of these digital tools.
    The research team, made up of health care professionals in a family medicine department, embarked on a department-wide EHR optimization initiative in collaboration with IT specialists over a four-month period. Unlike previous efforts that primarily focused on institutional-level successes, this study delved deep into the intricacies of EHR interface development and its impact on clinical workflow.
    “There has been a longstanding disconnect between EHR developers and end-users, resulting in interfaces that often fail to capture the intricacies of clinical workflows,” said Adam M. Franks, M.D., interim chair of family and community health at the Joan C. Edwards School of Medicine and lead researcher on the study. “Our study aimed to bridge this gap and demonstrate the tangible benefits of collaborative optimization efforts.”
    The methodology involved an intensive quality improvement process engaging clinicians and clinical staff at all levels. Four categories of optimizations emerged: accommodations (adjustments made by the department to fit EHR workflows); creations (novel workflows developed by IT); discoveries (previously unnoticed workflows within the EHR); and modifications (changes made by IT to existing workflows).
    Key findings showed marked improvements in departmental productivity: monthly charges increased from 0.74 to 1.28, while payments rose from 0.83 to 1.58. Monthly visit ratios also increased, from 0.65 to 0.98, although this change was not statistically significant.
    The study also revealed that a significant number of solutions to EHR usability issues were already embedded within the system, emphasizing the need for thorough exploration and understanding of existing workflows.
    Finally, accommodation optimizations underscored the necessity for better collaboration between EHR developers and end-users before implementation, highlighting the potential for more user-centric design approaches.
    “Our study not only demonstrates the efficacy of departmental collaboration with IT for EHR optimization but also underscores the importance of detailed workflow analysis in enhancing productivity,” Franks said.
    The research provides valuable insights for health care institutions aiming to maximize the potential of their EHR systems, with implications for improving patient care, efficiency and overall organizational performance.

  • Bullseye! Accurately centering quantum dots within photonic chips

    Traceable microscopy could improve the reliability of quantum information technologies, biological imaging, and more.
    Devices that capture the brilliant light from millions of quantum dots, including chip-scale lasers and optical amplifiers, have made the transition from laboratory experiments to commercial products. But newer types of quantum-dot devices have been slower to come to market because they require extraordinarily accurate alignment between individual dots and the miniature optics that extract and guide the emitted radiation.
    Researchers at the National Institute of Standards and Technology (NIST) and their colleagues have now developed standards and calibrations for optical microscopes that allow quantum dots to be aligned with the center of a photonic component to within an error of 10 to 20 nanometers (about one-thousandth the thickness of a sheet of paper). Such alignment is critical for chip-scale devices that employ the radiation emitted by quantum dots to store and transmit quantum information.
    For the first time, the NIST researchers achieved this level of accuracy across the entire image from an optical microscope, enabling them to correct the positions of many individual quantum dots. A model developed by the researchers predicts that if microscopes are calibrated using the new standards, then the number of high-performance devices could increase by as much as a hundred-fold.
    That new ability could enable quantum information technologies that are slowly emerging from research laboratories to be more reliably studied and efficiently developed into commercial products.
    In developing their method, Craig Copeland, Samuel Stavis, and their collaborators, including colleagues from the Joint Quantum Institute (JQI), a research partnership between NIST and the University of Maryland, created standards and calibrations that were traceable to the International System of Units (SI) for optical microscopes used to guide the alignment of quantum dots.
    “The seemingly simple idea of finding a quantum dot and placing a photonic component on it turns out to be a tricky measurement problem,” Copeland said.

    In a typical measurement, errors begin to accumulate as researchers use an optical microscope to find the location of individual quantum dots, which reside at random locations on the surface of a semiconductor material. If researchers ignore the shrinkage of semiconductor materials at the ultracold temperatures at which quantum dots operate, the errors grow larger. Further complicating matters, these measurement errors are compounded by inaccuracies in the fabrication process that researchers use to make their calibration standards, which also affects the placement of the photonic components.
    The NIST method, which the researchers described in an article posted online in Optica Quantum on March 18, identifies and corrects such errors, which were previously overlooked.
    The NIST team created two types of traceable standards to calibrate optical microscopes: first at room temperature to analyze the fabrication process, and then at cryogenic temperatures to measure the location of quantum dots. Building on their previous work, the researchers made the room-temperature standard from an array of nanoscale holes spaced a set distance apart in a metal film.
    The researchers then measured the actual positions of the holes with an atomic force microscope, ensuring that the positions were traceable to the SI. By comparing the apparent positions of the holes as viewed by the optical microscope with the actual positions, the researchers assessed errors from magnification calibration and image distortion of the optical microscope. The calibrated optical microscope could then be used to rapidly measure other standards that the researchers fabricated, enabling a statistical analysis of the accuracy and variability of the process.
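    A rough sketch of the kind of comparison described here, under simplifying assumptions: fit a linear (affine) imaging model between the traceable AFM positions and the apparent positions in the optical image, so that the fitted matrix captures the magnification calibration and the residuals map the field-dependent distortion. This is a generic least-squares illustration with hypothetical array names, not NIST’s actual analysis pipeline.

      # Fit apparent -> actual hole positions with an affine model; the residuals
      # reveal image distortion. Generic illustration, not NIST's code.
      import numpy as np

      def calibrate(apparent, actual):
          """apparent: (n, 2) hole positions seen by the optical microscope;
          actual: (n, 2) traceable positions measured by the AFM."""
          A = np.hstack([apparent, np.ones((len(apparent), 1))])   # rows: [x, y, 1]
          M, *_ = np.linalg.lstsq(A, actual, rcond=None)           # 3x2 affine parameters
          residuals = actual - A @ M                               # what the affine model misses
          return M, residuals

      # M[:2, :] holds the magnification/rotation block and M[2, :] the offset;
      # systematic structure in `residuals` across the field of view indicates the
      # image distortion that must be corrected before placing photonic components.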
    “Good statistics are essential to every link in a traceability chain,” said NIST researcher Adam Pintar, a coauthor of the article.
    Extending their method to low temperatures, the research team calibrated an ultracold optical microscope for imaging quantum dots. To perform this calibration, the team created a new microscopy standard — an array of pillars fabricated on a silicon wafer. The scientists worked with silicon because the shrinkage of the material at low temperatures has been accurately measured.
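    The role of the tabulated silicon contraction can be written down explicitly. The expression below is a generic thermal-expansion correction, shown only for orientation (with d denoting the pillar spacing of the standard); it is not a formula quoted from the paper.

      % Pitch of the silicon standard at operating temperature T, given its
      % room-temperature pitch d(293 K) and silicon's tabulated expansion coefficient:
      d(T) \;=\; d(293\,\mathrm{K}) \left[\, 1 + \int_{293\,\mathrm{K}}^{T} \alpha_{\mathrm{Si}}(T')\, \mathrm{d}T' \,\right]
      % Using d(T) rather than the room-temperature value keeps the cryogenic
      % magnification calibration traceable despite the material's shrinkage.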

    The researchers discovered several pitfalls in calibrating the magnification of cryogenic optical microscopes, which tend to have worse image distortion than microscopes operating at room temperature. These optical imperfections bend the images of straight lines into gnarled curves that the calibration effectively straightens out. If uncorrected, the image distortion causes large errors in determining the position of quantum dots and in aligning the dots within targets, waveguides, or other light-controlling devices.
    “These errors have likely prevented researchers from fabricating devices that perform as predicted,” said NIST researcher Marcelo Davanco, a coauthor of the article.
    The researchers developed a detailed model of the measurement and fabrication errors in integrating quantum dots with chip-scale photonic components. They studied how these errors limit the ability of quantum-dot devices to perform as designed, finding the potential for a hundred-fold improvement.
    “A researcher might be happy if one out of a hundred devices works for their first experiment, but a manufacturer might need ninety-nine out of a hundred devices to work,” Stavis noted. “Our work is a leap ahead in this lab-to-fab transition.”
    Beyond quantum-dot devices, traceable standards and calibrations under development at NIST may improve accuracy and reliability in other demanding applications of optical microscopy, such as imaging brain cells and mapping neural connections. For these endeavors, researchers also seek to determine accurate positions of the objects under study across an entire microscope image. In addition, scientists may need to coordinate position data from different instruments at different temperatures, as is true for quantum-dot devices.

  • Using ‘time travel’ to think about technology from the perspective of future generations

    The world is approaching an environmental tipping point, and our decisions now regarding energy, resources, and the environment will have profound consequences for the future. Despite this, most thinking about sustainability tends to be limited to the viewpoint of current generations.
    In a study published in Technological Forecasting and Social Change, researchers from Osaka University have revealed that adopting the perspective of “imaginary future generations” (IFGs) can yield fascinating insights into long-term social and technological trends.
    The researchers organized a series of four workshops at Osaka University, with participants drawn from the faculty and student body of the Graduate School of Engineering. The workshops discussed the state of future society and manufacturing in general, and also looked at one technology in particular: hydrothermally produced porous glass. During the workshops, the participants were asked to think about this technology from the perspective of IFGs, to imagine how this technology might be adopted in the future and to assess its future potentiality.
    “We chose hydrothermally produced porous glass for the case study because of the generational trade-offs involved,” says lead author of the study Keishiro Hara. “Porous glass is incredibly useful as either a filter for removing impurities or an insulator for buildings. Also, it can be recycled into new porous glass more or less indefinitely. The problem is that making it takes a lot of energy — both to pulverize waste glass and to heat water to very high temperatures. There’s a striking trade-off between costs now and gains in the future.”
    In the workshops, the participants first looked at issues involving society and manufacturing from the perspective of the present and were then asked to imagine themselves in the shoes of their counterparts in 2040.
    “The future the participants imagined was quite different from the future as seen from the perspective of the current generation,” explains Toshihiro Tanaka, senior author of the study. “Most groups described a future in which sustainability has become a central concern for society. Meanwhile, advances in renewable energy mean that energy is abundant, as are resources, as frontiers such as the moon and the deep ocean are opened to exploration. In this context, hydrothermally produced porous glass comes into its own as a sustainable way to recycle glass, and the energy needed to produce it is readily available.”
    The participants were surveyed between workshops and asked to rank indicators related to the future potentiality of the technology. Interestingly, these rankings looked quite different after the workshops in which the participants were asked to take on the perspective of “imaginary future generations.”
    “We noticed that when the ‘imaginary future generations’ method, which has been proven to be effective in facilitating long-term thinking, was adopted, participants perceived the feasibility of this technology differently, and their adoption scenarios changed accordingly,” says Hara.
    The study suggests that the simple act of putting ourselves in the position of future generations may provide new perspectives on issues of sustainability and technology, helping us to rethink our priorities and set new directions for research and development.

  • Micro-Lisa! Making a mark with novel nano-scale laser writing

    High-power lasers are often used to modify polymer surfaces to make high-tech biomedical products, electronics and data storage components.
    Now Flinders University researchers have discovered that a light-responsive, inexpensive sulfur-derived polymer is receptive to low-power, visible-light lasers, promising a more affordable and safer production method in nanotechnology, chemical science and the patterning of surfaces in biological applications.
    Details of the novel system have just been published in one of the highest-ranking chemistry journals, Angewandte Chemie International Edition, featuring a laser-etched version of the famous Mona Lisa painting and micro-braille printing even smaller than a pinhead.
    “This could be a way to reduce the need for expensive, specialised equipment including high-power lasers with hazardous radiation risk, while also using more sustainable materials. For instance, the key polymer is made from low-cost elemental sulfur, an industrial byproduct, and either cyclopentadiene or dicyclopentadiene,” says Matthew Flinders Professor of Chemistry Justin Chalker, from Flinders University.
    “Our study used a suite of lasers with discrete wavelengths (532, 638 and 786 nm) and powers to demonstrate a variety of surface modifications on the special polymers, including controlled swelling or etching via ablation.
    “The facile synthesis and laser modification of these photo-sensitive polymer systems were exploited in applications such as direct-write laser lithography and erasable information storage,” says Dr Chalker from the Flinders University Institute for NanoScale Science and Engineering.
    As soon as the laser light touches the surface, the polymer will swell or etch a pit to fashion lines, holes, spikes and channels instantly.

    The discovery was made by Flinders University researcher and co-author Dr Christopher Gibson during what was thought to be a routine analysis of a polymer first invented in the Chalker Lab in 2022 by PhD candidate Samuel Tonkin and Professor Chalker.
    Dr Gibson says: “The novel polymer was immediately modified by a low-power laser, an unusual response I had never observed before in any other common polymer.
    “We immediately realised that this phenomenon might be useful in a number of applications, so we built a research project around the discovery.”
    Another Flinders College of Science and Engineering PhD candidate, Ms Abigail Mann, led the next stage of the project and is the first author on the new international journal paper.
    “The outcome of these efforts is a new technology for generating precise patterns on the polymer surface,” she says.
    “It is exciting to develop and bring new microfabrication techniques to sulfur-based materials. We hope to inspire a broad range of real-world applications in our lab and beyond.”
    Potential applications include new approaches to storing data on polymers, new patterned surfaces for biomedical applications, and new ways to make micro- and nanoscale devices for electronics, sensors and microfluidics.

    With support from research associate Dr Lynn Lisboa and Samuel Tonkin, the Flinders team conducted detailed analysis of how the laser modifies the polymer and how to control the type and size of modification.
    Co-author Dr Lisboa adds: “The impact of this discovery extends far beyond the laboratory, with potential use in biomedical devices, electronics, information storage, microfluidics, and many other functional material applications.”
    Flinders spectroscopist Dr Jason Gascooke, of the Australian National Fabrication Facility (ANFF), also worked on the project. He says the latest discovery would not have been possible without the tools afforded by Federal and State Government funding for the national facilities of Microscopy Australia and the ANFF in SA, as well as Flinders Microscopy and Microanalysis.

  • Mathematical innovations enable advances in seismic activity detection

    Amidst the unique landscape of geothermal development in the Tohoku region, subtle seismic activities beneath the Earth’s surface present a fascinating challenge for researchers. While earthquake warnings may intermittently alert us to seismic events, there exist numerous smaller quakes that have long intrigued resource engineers striving to detect and understand them.
    Mathematical innovations from Tohoku University researchers are advancing detection of more types — and fainter forms — of seismic waves, paving the way for more effective earthquake monitoring and risk assessment.
    The results of their study were published in IEEE Transactions on Geoscience and Remote Sensing on January 15, 2024.
    Collection of seismic data relies on the number and positioning of sensors called seismometers. Especially where only limited deployment of seismic sensors is possible, such as in challenging environments like the planet Mars or when conducting long-term monitoring of captured and stored carbon, optimizing data extraction from each and every sensor becomes crucial. One promising method for doing so is polarization analysis, which involves studying 3-D particle motion and has garnered attention for its ability to leverage three-component data, offering more information than one-component data. This approach enables the detection and identification of various polarized seismic waveforms, including S-waves, P-waves and others.
    Polarization analysis using a spectral matrix (SPM) in particular is a technique used to analyze the way particles move in three dimensions over time and at different frequencies, in other words, in the time-frequency domain. However, in scenarios where the desired signal is weak compared to background noise (known as low signal-to-noise ratio, or SNR, events, which are typical in underground reservoirs), SPM analysis faces limitations. Due to mathematical constraints, it can only characterize linear particle motion (meaning the fast-moving, easy-to-detect P-waves), making the analysis of other waveforms, such as the later-arriving S-waves, challenging.
    “We overcame the technical challenges of conventional SPM analysis and expanded it for broader polarization realization by introducing time-delay components,” said Yusuke Mukuhira, an assistant professor at the Institute of Fluid Science of Tohoku University and lead author of the study.
    Compared to existing techniques, his team’s incorporation of time-delay components enhanced the accuracy of SPM analysis, enabling the characterization of various polarized waves, including S-waves, and the detection of low-SNR events with smaller amplitudes.

    A key innovation in the study is the introduction of a new weighting function based on the phase information of the first eigenvector — a special vector that, when multiplied by the matrix, results in a scaled version of the original vector. The purpose of the weighting function is to assign different levels of importance to different parts of signals according to their significance, thereby reducing false alarms. Synthetic waveform tests showed that this addition significantly improved the evaluation of seismic wave polarization, a crucial factor in distinguishing signal from noise.
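    For orientation, the sketch below shows the conventional spectral-matrix analysis that the team builds on: form a 3x3 cross-spectral matrix of the three-component record in the time-frequency domain and use the spread of its eigenvalues as a measure of polarization. The time-delay components and the eigenvector-phase weighting described above are the paper’s contributions and are not reproduced here; function and parameter names are illustrative only.

      # Conventional spectral-matrix (SPM) polarization analysis, sketched for orientation.
      # (The paper's time-delay extension and phase-based weighting are NOT included.)
      import numpy as np
      from scipy.signal import stft

      def spm_degree_of_polarization(z, n, e, fs, nperseg=256, smooth=5):
          """z, n, e: vertical/north/east seismograms; fs: sampling rate (Hz).
          Returns a (frequency, time) map of the degree of polarization."""
          specs = np.array([stft(x, fs=fs, nperseg=nperseg)[2] for x in (z, n, e)])  # (3, F, T)
          F, T = specs.shape[1], specs.shape[2]
          dop = np.zeros((F, T))
          for t in range(T):
              sl = slice(max(0, t - smooth // 2), min(T, t + smooth // 2 + 1))
              for f in range(F):
                  V = specs[:, f, sl]                        # 3-component spectra in a short window
                  S = V @ V.conj().T / V.shape[1]            # 3x3 spectral matrix at (f, t)
                  lam = np.sort(np.linalg.eigvalsh(S))[::-1] # eigenvalues, largest first
                  dop[f, t] = (lam[0] - lam[1]) / (lam.sum() + 1e-20)  # 1 = strongly polarized
          return dop

    Patches of high polarization in such a map are candidate seismic arrivals; the weighting function and time-delay terms described above are aimed at making this picture reliable even for low-SNR events.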
    “Technically, we have developed a signal processing technique that improves particle motion analysis in the time and frequency domain,” Mukuhira said.
    The research team validated their methodology using real-world data recorded at the Groningen gas field in the Netherlands. The results showcased superior seismic motion detection performance, bringing to light two low-SNR events that had previously gone unnoticed by conventional methods.
    These findings hold the potential for applications across various fields, including seismology and geophysics, particularly in monitoring underground conditions with limited observation points. The implications extend to earthquake monitoring, planetary exploration and resource development.