More stories

  • Soft, stretchy electrode simulates touch sensations using electrical signals

    A team of researchers led by the University of California San Diego has developed a soft, stretchy electronic device capable of simulating the feeling of pressure or vibration when worn on the skin. This device, reported in a paper published in Science Robotics, represents a step towards creating haptic technologies that can reproduce a more varied and realistic range of touch sensations.
    The device consists of a soft, stretchable electrode attached to a silicone patch. It can be worn like a sticker on either the fingertip or forearm. The electrode, in direct contact with the skin, is connected to an external power source via wires. By sending a mild electrical current through the skin, the device can produce sensations of either pressure or vibration depending on the signal’s frequency.
    “Our goal is to create a wearable system that can deliver a wide gamut of touch sensations using electrical signals — without causing pain for the wearer,” said study co-first author Rachel Blau, a nano engineering postdoctoral researcher at the UC San Diego Jacobs School of Engineering.
    Existing technologies that recreate a sense of touch through electrical stimulation often induce pain due to the use of rigid metal electrodes, which do not conform well to the skin. The air gaps between these electrodes and the skin can result in painful electrical currents.
    To address these issues, Blau and a team of researchers led by Darren Lipomi, a professor in the Aiiso Yufeng Li Family Department of Chemical and Nano Engineering at UC San Diego, developed a soft, stretchy electrode that seamlessly conforms to the skin.
    The electrode is made of a new polymer material constructed from the building blocks of two existing polymers: a conductive, rigid polymer known as PEDOT:PSS, and a soft, stretchy polymer known as PPEGMEA. “By optimizing the ratio of these [polymer building blocks], we molecularly engineered a material that is both conductive and stretchable,” said Blau.
    The polymer electrode is laser-cut into a spring-shaped, concentric design and attached to a silicone substrate. “This design enhances the electrode’s stretchability and ensures that the electrical current targets a specific location on the skin, thus providing localized stimulation to prevent any pain,” said Abdulhameed Abdal, a Ph.D. student in the Department of Mechanical and Aerospace Engineering at UC San Diego and the study’s other co-first author. Abdal and Blau worked on the synthesis and fabrication of the electrode with UC San Diego nano engineering undergraduate students Yi Qie, Anthony Navarro and Jason Chin.

    In tests, the electrode device was worn on the forearm by 10 participants. In collaboration with behavioral scientists and psychologists at the University of Amsterdam, the researchers first identified the lowest level of electrical current detectable. They then adjusted the frequency of the electrical stimulation, allowing participants to experience sensations categorized as either pressure or vibration.
    “We found that by increasing the frequency, participants felt more vibration rather than pressure,” said Abdal. “This is interesting because biophysically, it was never known exactly how current is perceived by the skin.”
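    To make the frequency-to-sensation mapping concrete, here is a minimal sketch of how a charge-balanced stimulation waveform could be swept across frequencies in software. The pulse shape, amplitude, and the specific frequencies are illustrative assumptions, not parameters reported in the Science Robotics paper.

```python
import numpy as np

def stimulus_waveform(freq_hz, amplitude_ma, duration_s=1.0, fs=20_000):
    """Charge-balanced biphasic pulse train: one positive/negative pair per period.

    The pulse shape and amplitude here are illustrative, not values from the study.
    """
    t = np.arange(0, duration_s, 1.0 / fs)
    phase = (t * freq_hz) % 1.0                              # position within each period
    wave = np.zeros_like(t)
    wave[phase < 0.25] = amplitude_ma                        # positive phase
    wave[(phase >= 0.25) & (phase < 0.5)] = -amplitude_ma    # charge-balancing negative phase
    return t, wave

# Sweep the frequency at a fixed, just-above-threshold amplitude: lower
# frequencies were reported to feel like pressure, higher ones like vibration.
for f in (5, 20, 80, 200):                # Hz, hypothetical sweep points
    t, w = stimulus_waveform(freq_hz=f, amplitude_ma=1.0)
    print(f"{f:>4} Hz: {w.size} samples, peak {w.max():.1f} mA")
```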
    The new insights could pave the way for the development of advanced haptic devices for applications such as virtual reality, medical prosthetics and wearable technology.
    This work was supported by the National Science Foundation Disability and Rehabilitation Engineering program (CBET-2223566). This work was performed in part at the San Diego Nanotechnology Infrastructure (SDNI) at UC San Diego, a member of the National Nanotechnology Coordinated Infrastructure, which is supported by the National Science Foundation (grant ECCS-1542148).

  • Can A.I. tell you if you have osteoporosis? Newly developed deep learning model shows promise

    Osteoporosis is so difficult to detect in its early stages that it’s called the “silent disease.” What if artificial intelligence could help predict a patient’s chances of having the bone-loss disease before ever stepping into a doctor’s office?
    Tulane University researchers made progress toward that vision by developing a new deep learning algorithm that outperformed existing computer-based osteoporosis risk prediction methods, potentially leading to earlier diagnoses and better outcomes for patients at risk of osteoporosis.
    Their results were recently published in Frontiers in Artificial Intelligence.
    Deep learning models have gained notice for their ability to mimic human neural networks and find trends within large datasets without being specifically programmed to do so. Researchers tested the deep neural network (DNN) model against four conventional machine learning algorithms and a traditional regression model, using data from over 8,000 participants aged 40 and older in the Louisiana Osteoporosis Study. The DNN achieved the best overall predictive performance, measured by scoring each model’s ability to identify true positives and avoid mistakes.
    “The earlier osteoporosis risk is detected, the more time a patient has for preventative measures,” said lead author Chuan Qiu, a research assistant professor at the Tulane School of Medicine Center for Biomedical Informatics and Genomics. “We were pleased to see our DNN model outperform other models in accurately predicting the risk of osteoporosis in an aging population.”
    In testing the algorithms using a large sample size of real-world health data, the researchers were also able to identify the 10 most important factors for predicting osteoporosis risk: weight, age, gender, grip strength, height, beer drinking, diastolic pressure, alcohol drinking, years of smoking, and income level.
    Notably, the simplified DNN model using these top 10 risk factors performed nearly as well as the full model which included all risk factors.
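    As a rough illustration of such a simplified model, the sketch below wires the ten reported risk factors into a small feed-forward network with scikit-learn. The column names, encodings, and network size are assumptions made for illustration; the study’s actual architecture, preprocessing, and data schema are described in the Frontiers in Artificial Intelligence paper.

```python
from sklearn.compose import ColumnTransformer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# The ten factors reported in the study; these column names and encodings are
# illustrative placeholders, not the Louisiana Osteoporosis Study schema.
numeric = ["weight", "age", "grip_strength", "height", "diastolic_pressure",
           "beer_per_week", "alcohol_per_week", "smoking_years", "income_level"]
categorical = ["gender"]

def build_model():
    preprocess = ColumnTransformer([
        ("num", StandardScaler(), numeric),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ])
    dnn = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500, random_state=0)
    return Pipeline([("pre", preprocess), ("dnn", dnn)])

# Usage (df holds participant records with a binary `osteoporosis` label):
# model = build_model().fit(df[numeric + categorical], df["osteoporosis"])
# risk_scores = model.predict_proba(new_patients)[:, 1]
```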
    While Qiu admitted that there is much more work to be done before an AI platform can be used by the public to predict an individual’s risk of osteoporosis, he said identifying the benefits of the deep learning model was a step in that direction.
    “Our final aim is to allow people to enter their information and receive highly accurate osteoporosis risk scores to empower them to seek treatment to strengthen their bones and reduce any further damage,” Qiu said.

  • Wireless receiver blocks interference for better mobile device performance

    The growing prevalence of high-speed wireless communication devices, from 5G mobile phones to sensors for autonomous vehicles, is leading to increasingly crowded airwaves. This makes the ability to block interfering signals that can hamper device performance an even more important — and more challenging — problem.
    With these and other emerging applications in mind, MIT researchers demonstrated a new millimeter-wave multiple-input-multiple-output (MIMO) wireless receiver architecture that can handle stronger spatial interference than previous designs. MIMO systems have multiple antennas, enabling them to transmit and receive signals from different directions. Their wireless receiver senses and blocks spatial interference at the earliest opportunity, before unwanted signals have been amplified, which improves performance.
    Key to this MIMO receiver architecture is a special circuit that can target and cancel out unwanted signals, known as a nonreciprocal phase shifter. By making a novel phase shifter structure that is reconfigurable, low-power, and compact, the researchers show how it can be used to cancel out interference earlier in the receiver chain.
    Their receiver can block up to four times more interference than some similar devices. In addition, the interference-blocking components can be switched on and off as needed to conserve energy.
    In a mobile phone, such a receiver could help mitigate signal quality issues that can lead to slow and choppy Zoom calling or video streaming.
    “There is already a lot of utilization happening in the frequency ranges we are trying to use for new 5G and 6G systems. So, anything new we are trying to add should already have these interference-mitigation systems installed. Here, we’ve shown that using a nonreciprocal phase shifter in this new architecture gives us better performance. This is quite significant, especially since we are using the same integrated platform as everyone else,” says Negar Reiskarimian, the X-Window Consortium Career Development Assistant Professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the Microsystems Technology Laboratories and Research Laboratory of Electronics (RLE), and the senior author of a paper on this receiver.
    Reiskarimian wrote the paper with EECS graduate students Shahabeddin Mohin, who is the lead author, Soroush Araei, and Mohammad Barzgari, an RLE postdoc. The work was recently presented at the IEEE Radio Frequency Circuits Symposium and received the Best Student Paper Award.

    Blocking interference
    Digital MIMO systems have an analog and a digital portion. The analog portion uses antennas to receive signals, which are amplified, down-converted, and passed through an analog-to-digital converter before being processed in the digital domain of the device. In this case, digital beamforming is required to retrieve the desired signal.
    But if a strong, interfering signal coming from a different direction hits the receiver at the same time as a desired signal, it can saturate the amplifier so the desired signal is drowned out. Digital MIMOs can filter out unwanted signals, but this filtering occurs later in the receiver chain. If the interference is amplified along with the desired signal, it is more difficult to filter out later.
    “The output of the initial low-noise amplifier is the first place you can do this filtering with minimal penalty, so that is exactly what we are doing with our approach,” Reiskarimian says.
    The researchers built and installed four nonreciprocal phase shifters immediately at the output of the first amplifier in each receiver chain, all connected to the same node. These phase shifters can pass signal in both directions and sense the angle of an incoming interfering signal. The devices can adjust their phase until they cancel out the interference.
    The phase of these devices can be precisely tuned, so they can sense and cancel an unwanted signal before it passes to the rest of the receiver, blocking interference before it affects any other parts of the receiver. In addition, the phase shifters can follow signals to continue blocking interference if it changes location.
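    The circuit details are beyond a press article, but the underlying idea of sensing an interferer’s direction and adjusting phases to cancel it can be sketched at baseband. The example below does generic null steering on a four-element array with assumed angles and spacing; it illustrates the concept only and is not the MIT group’s nonreciprocal phase-shifter implementation.

```python
import numpy as np

def steering(angle_deg, n_ant=4, spacing_wavelengths=0.5):
    """Array response of a uniform linear array for a signal arriving from angle_deg."""
    k = np.arange(n_ant)
    return np.exp(2j * np.pi * spacing_wavelengths * k * np.sin(np.deg2rad(angle_deg)))

a_sig = steering(10)    # desired signal direction (illustrative angle)
a_int = steering(-40)   # strong interferer from another direction

# Stay matched to the desired direction while forcing the combined response
# to be orthogonal to the interferer's array response (a spatial null).
p = a_int / np.linalg.norm(a_int)
w = a_sig - (p.conj() @ a_sig) * p

def gain_db(weights, direction):
    return 20 * np.log10(np.abs(weights.conj() @ direction) + 1e-12)

print(f"matched weights : signal {gain_db(a_sig, a_sig):6.1f} dB, interferer {gain_db(a_sig, a_int):6.1f} dB")
print(f"nulled weights  : signal {gain_db(w, a_sig):6.1f} dB, interferer {gain_db(w, a_int):6.1f} dB")
```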

    “If you start getting disconnected or your signal quality goes down, you can turn this on and mitigate that interference on the fly. Because ours is a parallel approach, you can turn it on and off with minimal effect on the performance of the receiver itself,” Reiskarimian adds.
    A compact device
    In addition to making their novel phase shifter architecture tunable, the researchers designed the phase shifters to use less space on the chip and consume less power than typical nonreciprocal phase shifters.
    Once the researchers had done the analysis to show their idea would work, their biggest challenge was translating the theory into a circuit that achieved their performance goals. At the same time, the receiver had to meet strict size restrictions and a tight power budget, or it wouldn’t be useful in real-world devices.
    In the end, the team demonstrated a compact MIMO architecture on a 3.2-square-millimeter chip that could block signals which were up to four times stronger than what other devices could handle. Simpler than typical designs, their phase shifter architecture is also more energy efficient.
    Moving forward, the researchers want to scale up their device to larger systems, as well as enable it to perform in the new frequency ranges utilized by 6G wireless devices. These frequency ranges are prone to powerful interference from satellites. In addition, they would like to adapt nonreciprocal phase shifters to other applications.
    This research was supported, in part, by the MIT Center for Integrated Circuits and Systems.

  • Study reveals why AI models that analyze medical images can be biased

    Artificial intelligence models often play a role in medical diagnoses, especially when it comes to analyzing images such as X-rays. However, studies have found that these models don’t always perform well across all demographic groups, usually faring worse on women and people of color.
    These models have also been shown to develop some surprising abilities. In 2022, MIT researchers reported that AI models can make accurate predictions about a patient’s race from their chest X-rays — something that the most skilled radiologists can’t do.
    That research team has now found that the models that are most accurate at making demographic predictions also show the biggest “fairness gaps” — that is, discrepancies in their ability to accurately diagnose images of people of different races or genders. The findings suggest that these models may be using “demographic shortcuts” when making their diagnostic evaluations, which lead to incorrect results for women, Black people, and other groups, the researchers say.
    “It’s well-established that high-capacity machine-learning models are good predictors of human demographics such as self-reported race or sex or age. This paper re-demonstrates that capacity, and then links that capacity to the lack of performance across different groups, which has never been done,” says Marzyeh Ghassemi, an MIT associate professor of electrical engineering and computer science, a member of MIT’s Institute for Medical Engineering and Science, and the senior author of the study.
    The researchers also found that they could retrain the models in a way that improves their fairness. However, their approaches to “debiasing” worked best when the models were tested on the same types of patients they were trained on, such as patients from the same hospital. When these models were applied to patients from different hospitals, the fairness gaps reappeared.
    “I think the main takeaways are, first, you should thoroughly evaluate any external models on your own data because any fairness guarantees that model developers provide on their training data may not transfer to your population. Second, whenever sufficient data is available, you should train models on your own data,” says Haoran Zhang, an MIT graduate student and one of the lead authors of the new paper. MIT graduate student Yuzhe Yang is also a lead author of the paper, which will appear in Nature Medicine. Judy Gichoya, an associate professor of radiology and imaging sciences at Emory University School of Medicine, and Dina Katabi, the Thuan and Nicole Pham Professor of Electrical Engineering and Computer Science at MIT, are also authors of the paper.
    Removing bias
    As of May 2024, the FDA has approved 882 AI-enabled medical devices, with 671 of them designed to be used in radiology. Since 2022, when Ghassemi and her colleagues showed that these diagnostic models can accurately predict race, they and other researchers have shown that such models are also very good at predicting gender and age, even though the models are not trained on those tasks.

    “Many popular machine learning models have superhuman demographic prediction capacity — radiologists cannot detect self-reported race from a chest X-ray,” Ghassemi says. “These are models that are good at predicting disease, but during training are learning to predict other things that may not be desirable.” In this study, the researchers set out to explore why these models don’t work as well for certain groups. In particular, they wanted to see if the models were using demographic shortcuts to make predictions that ended up being less accurate for some groups. These shortcuts can arise in AI models when they use demographic attributes to determine whether a medical condition is present, instead of relying on other features of the images.
    Using publicly available chest X-ray datasets from Beth Israel Deaconess Medical Center in Boston, the researchers trained models to predict whether patients had one of three different medical conditions: fluid buildup in the lungs, collapsed lung, or enlargement of the heart. Then, they tested the models on X-rays that were held out from the training data.
    Overall, the models performed well, but most of them displayed “fairness gaps” — that is, discrepancies between accuracy rates for men and women, and for white and Black patients.
    The models were also able to predict the gender, race, and age of the X-ray subjects. Additionally, there was a significant correlation between each model’s accuracy in making demographic predictions and the size of its fairness gap. This suggests that the models may be using demographic categorizations as a shortcut to make their disease predictions.
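    One way to make the reported correlation concrete is to measure a per-group performance gap for each model and relate it to how well that model predicts the demographic attribute. The sketch below uses AUC as the metric; the variable names and the choice of AUC are assumptions for illustration, not necessarily the exact measures used in the Nature Medicine paper.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

def fairness_gap(y_true, y_score, group):
    """Largest difference in diagnostic AUC across demographic subgroups."""
    aucs = [roc_auc_score(y_true[group == g], y_score[group == g])
            for g in np.unique(group)]
    return max(aucs) - min(aucs)

# For a set of trained models one could then correlate demographic-prediction
# ability with the fairness gap (hypothetical attributes on each model object):
# gaps = [fairness_gap(y, m.disease_scores, race) for m in models]
# demo_auc = [roc_auc_score(race, m.race_scores) for m in models]
# r, p = pearsonr(demo_auc, gaps)
```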
    The researchers then tried to reduce the fairness gaps using two types of strategies. For one set of models, they trained them to optimize “subgroup robustness,” meaning that the models are rewarded for having better performance on the subgroup for which they have the worst performance, and penalized if their error rate for one group is higher than the others.
    In another set of models, the researchers forced them to remove any demographic information from the images, using “group adversarial” approaches. Both of these strategies worked fairly well, the researchers found.
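    As a minimal sketch of the first family of strategies, a subgroup-robustness objective can be written as a worst-group loss in PyTorch, shown below. This is a generic group-DRO-style formulation under assumed tensor shapes, not the authors’ exact training code.

```python
import torch
import torch.nn.functional as F

def worst_group_loss(logits, labels, group_ids):
    """Subgroup robustness: optimize the subgroup with the worst current loss,
    so the model is penalized whenever one group's error runs ahead of the rest."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    group_losses = [per_sample[group_ids == g].mean() for g in torch.unique(group_ids)]
    return torch.stack(group_losses).max()

# In a training loop, the usual mean cross-entropy is simply replaced by
# worst_group_loss(model(images), diagnoses, demographic_group)
```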

    “For in-distribution data, you can use existing state-of-the-art methods to reduce fairness gaps without making significant trade-offs in overall performance,” Ghassemi says. “Subgroup robustness methods force models to be sensitive to mispredicting a specific group, and group adversarial methods try to remove group information completely.”
    Not always fairer
    However, those approaches only worked when the models were tested on data from the same types of patients that they were trained on — for example, only patients from the Beth Israel Deaconess Medical Center dataset.
    When the researchers tested the models that had been “debiased” using the BIDMC data to analyze patients from five other hospital datasets, they found that the models’ overall accuracy remained high, but some of them exhibited large fairness gaps.
    “If you debias the model in one set of patients, that fairness does not necessarily hold as you move to a new set of patients from a different hospital in a different location,” Zhang says.
    This is worrisome because in many cases, hospitals use models that have been developed on data from other hospitals, especially in cases where an off-the-shelf model is purchased, the researchers say.
    “We found that even state-of-the-art models which are optimally performant in data similar to their training sets are not optimal — that is, they do not make the best trade-off between overall and subgroup performance — in novel settings,” Ghassemi says. “Unfortunately, this is actually how a model is likely to be deployed. Most models are trained and validated with data from one hospital, or one source, and then deployed widely.”
    The researchers found that the models that were debiased using group adversarial approaches showed slightly more fairness when tested on new patient groups than those debiased with subgroup robustness methods. They now plan to develop and test additional methods to see if they can create models that do a better job of making fair predictions on new datasets.
    The findings suggest that hospitals that use these types of AI models should evaluate them on their own patient population before beginning to use them, to make sure they aren’t giving inaccurate results for certain groups.
    The research was funded by a Google Research Scholar Award, the Robert Wood Johnson Foundation Harold Amos Medical Faculty Development Program, RSNA Health Disparities, the Lacuna Fund, the Gordon and Betty Moore Foundation, the National Institute of Biomedical Imaging and Bioengineering, and the National Heart, Lung, and Blood Institute.

  • Researchers develop fastest possible flow algorithm

    In a breakthrough that brings to mind Lucky Luke — the man who shoots faster than his shadow — Rasmus Kyng and his team have developed a superfast algorithm that looks set to transform an entire field of research. The groundbreaking work by Kyng’s team involves what is known as a network flow algorithm, which tackles the question of how to achieve the maximum flow in a network while simultaneously minimising transport costs.
    Imagine you are using the European transportation network and looking for the fastest and cheapest route to move as many goods as possible from Copenhagen to Milan. Kyng’s algorithm can be applied in such cases to calculate the optimal, lowest-cost traffic flow for any kind of network — be it rail, road, water or the internet. His algorithm performs these computations so fast that it can deliver the solution at the very moment a computer reads the data that describes the network.
    Computations as fast as a network is big
    Before Kyng, no one had ever managed to do that — even though researchers have been working on this problem for some 90 years. Previously, it took significantly longer to compute the optimal flow than to process the network data. And as the network became larger and more complex, the required computing time increased much faster, comparatively speaking, than the actual size of the computing problem. This is why we also see flow problems in networks that are too large for a computer to even calculate.
    Kyng’s approach eliminates this problem: using his algorithm, computing time and network size increase at the same rate — a bit like going on a hike and constantly keeping up the same pace however steep the path gets. A glance at the raw figures shows just how far we have come: until the turn of the millennium, no algorithm managed to compute faster than m^1.5, where m stands for the number of connections in a network that the computer has to calculate, and just reading the network data once takes m time. In 2004, the computing time required to solve the problem was successfully reduced to m^1.33. Using Kyng’s algorithm, the “additional” computing time required to reach the solution after reading the network data is now negligible.
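    The exponents are easier to appreciate with numbers. Ignoring the constant factors hidden inside the O-notation, the small sketch below compares the step counts implied by m^1.5, m^1.33, and a single read of the input for two network sizes.

```python
# Step counts (constants ignored) for a network with m connections.
for m in (10**6, 10**9):
    print(f"m = {m:>13,}: read once {m:,.0f} | m^1.33 {m**1.33:,.0f} | m^1.5 {m**1.5:,.0f}")
```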
    Like a Porsche racing a horse-drawn carriage
    The ETH Zurich researchers have thus developed what is, in theory, the fastest possible network flow algorithm. Two years ago, Kyng and his team presented mathematical proof of their concept in a groundbreaking paper. Scientists refer to these novel, almost optimally fast algorithms as “almost-linear-time algorithms,” and the community of theoretical computer scientists responded to Kyng’s breakthrough with a mixture of amazement and enthusiasm.

    Kyng’s doctoral supervisor, Daniel A. Spielman, Professor of Applied Mathematics and Computer Science at Yale and himself a pioneer and doyen in this field, compared the “absurdly fast” algorithm to a Porsche overtaking horse-drawn carriages. As well as winning the 2022 Best Paper Award at the IEEE Annual Symposium on Foundations of Computer Science (FOCS), their paper was also highlighted in the computing journal Communications of the ACM, and the editors of popular science magazine Quanta named Kyng’s algorithm one of the ten biggest discoveries in computer science in 2022.
    The ETH Zurich researchers have since refined their approach and developed further almost-linear-time algorithms. For example, the first algorithm was still focused on fixed, static networks whose connections are directed, meaning they function like one-way streets in urban road networks. The algorithms published this year are now also able to compute optimal flows for networks that incrementally change over time. Lightning-fast computation is an important step in tackling highly complex and data-rich networks that change dynamically and very quickly, such as molecules or the brain in biology, or human friendships.
    Lightning-fast algorithms for changing networks
    On Thursday, Simon Meierhans — a member of Kyng’s team — presented a new almost-linear-time algorithm at the Annual ACM Symposium on Theory of Computing (STOC) in Vancouver. This algorithm solves the minimum-cost maximum-flow problem for networks that incrementally change as new connections are added. Furthermore, in a second paper accepted by the IEEE Symposium on Foundations of Computer Science (FOCS) in October, the ETH researchers have developed another algorithm that also handles connections being removed.
    Specifically, these algorithms identify the shortest routes in networks where connections are added or deleted. In real-world traffic networks, examples of such changes in Switzerland include the complete closure and then partial reopening of the Gotthard Base Tunnel in the months since summer 2023, or the recent landslide that destroyed part of the A13 motorway, which is the main alternative route to the Gotthard Road Tunnel.
    Confronted with such changes, how does a computer, an online map service or a route planner calculate the lowest-cost and fastest connection between Milan and Copenhagen? Kyng’s new algorithms also compute the optimal route for these incrementally or decrementally changing networks in almost-linear time — so quickly that the computing time for each new connection, whether added through rerouting or the creation of new routes, is again negligible.
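    The sketch below illustrates the task itself (not the new almost-linear-time method) on a toy corridor: a route is computed, an edge is closed, and the answer must be updated. networkx simply recomputes from scratch here, which is exactly the cost the incremental and decremental algorithms avoid; the edge weights are invented for illustration.

```python
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("Copenhagen", "Hamburg", 3), ("Hamburg", "Basel", 6),
    ("Basel", "Gotthard", 2), ("Gotthard", "Milan", 2),
    ("Basel", "A13", 3), ("A13", "Milan", 3),
])

print(nx.shortest_path(G, "Copenhagen", "Milan", weight="weight"))  # via Gotthard

G.remove_edge("Basel", "Gotthard")            # closure: a decremental update
print(nx.shortest_path(G, "Copenhagen", "Milan", weight="weight"))  # via A13

G.add_edge("Basel", "Gotthard", weight=2)     # reopening: an incremental update
print(nx.shortest_path(G, "Copenhagen", "Milan", weight="weight"))  # via Gotthard again
```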

    But what exactly is it that makes Kyng’s approach to computations so much faster than any other network flow algorithm? In principle, all computational methods are faced with the challenge of having to analyse the network in multiple iterations in order to find the optimal flow and the minimum-cost route. In doing so, they run through each of the different variants of which connections are open, closed or congested because they have reached their capacity limit.
    Compute the whole? Or its parts?
    Prior to Kyng, computer scientists tended to choose between two key strategies for solving this problem. One of these was modelled on the railway network and involved computing a whole section of the network with a modified flow of traffic in each iteration. The second strategy — inspired by power flows in the electricity grid — computed the entire network in each iteration but used statistical mean values for the modified flow of each section of the network in order to make the computation faster.
    Kyng’s team has now tied together the respective advantages of these two strategies in order to create a radical new combined approach. “Our approach is based on many small, efficient and low-cost computational steps, which — taken together — are much faster than a few large ones,” says Maximilian Probst Gutenberg, a lecturer and member of Kyng’s group, who played a key role in developing the almost-linear-time algorithms.
    A brief look at the history of this discipline adds an additional dimension to the significance of Kyng’s breakthrough: flow problems in networks were among the first to be solved systematically with the help of algorithms in the 1950s, and flow algorithms played an important role in establishing theoretical computer science as a field of research in its own right. The well-known algorithm developed by mathematicians Lester R. Ford Jr. and Delbert R. Fulkerson also stems from this period. Their algorithm efficiently solves the maximum-flow problem, which seeks to determine how to transport as many goods through a network as possible without exceeding the capacity of the individual routes.
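    For readers who want to see the classical problem in code, the toy example below computes a maximum flow on a small directed network with networkx. Node names and capacities are invented, and an off-the-shelf solver like this does not use the new almost-linear-time techniques.

```python
import networkx as nx

G = nx.DiGraph()
G.add_edge("Copenhagen", "Hamburg", capacity=10)
G.add_edge("Copenhagen", "Rotterdam", capacity=6)
G.add_edge("Hamburg", "Basel", capacity=7)
G.add_edge("Rotterdam", "Basel", capacity=6)
G.add_edge("Basel", "Milan", capacity=9)

flow_value, flow_per_edge = nx.maximum_flow(G, "Copenhagen", "Milan")
print("maximum throughput:", flow_value)   # limited by the Basel-Milan capacity (9)
```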
    Fast and wide-ranging
    These advances showed researchers that the maximum-flow problem, the minimum-cost problem (transshipment or transportation problem) and many other important network-flow problems can all be viewed as special cases of the general minimum-cost flow problem. Prior to Kyng’s research, most algorithms were only able to solve one of these problems efficiently, though they could not do even this particularly quickly, nor could they be extended to the broader minimum-cost flow problem. The same applies to the pioneering flow algorithms of the 1970s, for which the theoretical computer scientists John Edward Hopcroft, Richard Manning Karp and Robert Endre Tarjan each received a Turing Award, regarded as the “Nobel Prize” of computer science. Karp received his in 1985; Hopcroft and Tarjan won theirs in 1986.
    Shift in perspective from railways to electricity
    It wasn’t until 2004 that mathematicians and computer scientists Daniel Spielman and Shang-Hua Teng — and later Samuel Daitch — succeeded in writing algorithms that also provided a fast and efficient solution to the minimum-cost flow problem. It was this group that shifted the focus to power flows in the electricity grid. Their switch in perspective from railways to electricity led to a key mathematical distinction: if a train is rerouted on the railway network because a line is out of service, the next best route according to the timetable may already be occupied by a different train. In the electricity grid, it is possible for the electrons that make up a power flow to be partially diverted to a network connection through which other current is already flowing. Thus, unlike trains, the electrical current can, in mathematical terms, be “partially” moved to a new connection.
    This partial rerouting enabled Spielman and his colleagues to compute such route changes much faster and, at the same time, to recalculate the entire network after each change. “We rejected Spielman’s approach of creating the most powerful algorithms we could for the entire network,” says Kyng. “Instead, we applied his idea of partial route computation to the earlier approaches of Hopcroft and Karp.” This computation of partial routes in each iteration played a major role in speeding up the overall flow computation.
    A turning point in theoretical principles
    Much of the ETH Zurich researchers’ progress comes down to the decision to extend their work beyond the development of new algorithms. The team also uses and designs new mathematical tools that speed up their algorithms even more. In particular, they have developed a new data structure for organising network data; this makes it possible to identify any change to a network connection extremely quickly; this, in turn, helps make the algorithmic solution so amazingly fast. With so many applications lined up for the almost-linear-time algorithms and for tools such as the new data structure, the overall innovation spiral could soon be turning much faster than before.
    Yet laying the foundations for solving very large problems that couldn’t previously be computed efficiently is only one benefit of these significantly faster flow algorithms — because they also change the way in which computers calculate complex tasks in the first place. “Over the past decade, there has been a revolution in the theoretical foundations for obtaining provably fast algorithms for foundational problems in theoretical computer science,” writes an international group of researchers from the University of California, Berkeley, which includes among its members Rasmus Kyng and Deeksha Adil, a researcher at the Institute for Theoretical Studies at ETH Zurich.

  • Visual explanations of machine learning models to estimate charge states in quantum dots

    A group of researchers has successfully demonstrated automatic charge state recognition in quantum dot devices using machine learning techniques, representing a significant step towards automating the preparation and tuning of quantum bits (qubits) for quantum information processing.
    Semiconductor qubits use semiconductor materials to create quantum bits. These materials are common in traditional electronics, making them integrable with conventional semiconductor technology. This compatibility is why scientists consider them strong candidates for future qubits in the quest to realize quantum computers.
    In semiconductor spin qubits, the spin state of an electron confined in a quantum dot serves as the fundamental unit of data, or the qubit. Forming these qubit states requires tuning numerous parameters, such as gate voltages, a task currently performed by human experts.
    However, as the number of qubits grows, this tuning becomes more complex due to the sheer number of parameters, which becomes problematic when it comes to realizing large-scale quantum computers.
    “To overcome this, we developed a means of automating the estimation of charge states in double quantum dots, crucial for creating spin qubits where each quantum dot houses one electron,” points out Tomohiro Otsuka, an associate professor at Tohoku University’s Advanced Institute for Materials Research (WPI-AIMR).
    Using a charge sensor, Otsuka and his team obtained charge stability diagrams to identify optimal gate voltage combinations ensuring the presence of precisely one electron per dot. Automating this tuning process required developing an estimator capable of classifying charge states based on variations in charge transition lines within the stability diagram.
    To construct this estimator, the researchers employed a convolutional neural network (CNN) trained on data prepared using a lightweight simulation model: the Constant Interaction model (CI model). Pre-processing techniques enhanced data simplicity and noise robustness, optimizing the CNN’s ability to accurately classify charge states.
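    A minimal version of such an estimator might look like the PyTorch sketch below: a small CNN that maps a stability-diagram patch to one of a few charge-state classes. The patch size, layer widths, and four-class labeling are illustrative assumptions rather than the architecture used in the APL Machine Learning paper, and the training patches would come from a constant-interaction-model simulator as described.

```python
import torch
import torch.nn as nn

class ChargeStateCNN(nn.Module):
    """Small CNN over single-channel charge-stability-diagram patches.

    Patch size (32x32), channel widths, and the number of charge-state
    classes are illustrative assumptions."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# x: batch of simulated stability-diagram patches (constant-interaction model)
x = torch.randn(8, 1, 32, 32)
print(ChargeStateCNN()(x).shape)   # torch.Size([8, 4])
```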
    Upon testing the estimator with experimental data, initial results showed effective estimation of most charge states, though some states exhibited higher error rates. To address this, the researchers utilized Grad-CAM visualization to uncover decision-making patterns within the estimator. They identified that errors were often attributed to coincidentally connected noise being misinterpreted as charge transition lines. By adjusting the training data and refining the estimator’s structure, the researchers significantly improved accuracy for previously error-prone charge states while maintaining high performance for the others.
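    Grad-CAM itself can be reproduced with a few lines of hooks on the last convolutional layer, as in the generic sketch below, which reuses the ChargeStateCNN sketch above and is not the authors’ implementation: gradients of the target class are pooled into channel weights and used to form a heatmap over the input patch.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_class, conv_layer):
    """Generic Grad-CAM heatmap for one input patch x of shape (1, 1, H, W)."""
    acts, grads = {}, {}
    h_fwd = conv_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h_bwd = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    try:
        model(x)[0, target_class].backward()
    finally:
        h_fwd.remove()
        h_bwd.remove()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)     # pooled gradients per channel
    cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
    return F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)[0, 0]

# model = ChargeStateCNN()
# heat = grad_cam(model, x[:1], target_class=2, conv_layer=model.features[3])
```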
    “Utilizing this estimator means that parameters for semiconductor spin qubits can be automatically tuned, something necessary if we are to scale up quantum computers,” adds Otsuka. “Additionally, by visualizing the previously black-boxed decision basis, we have demonstrated that it can serve as a guideline for improving the estimator’s performance.”
    Details of the research were published in the journal APL Machine Learning on April 15, 2024.

  • Synthetic fuels and chemicals from CO2: Ten experiments in parallel

    Why do just one experiment at a time when you can do ten? Empa researchers have developed an automated system that allows them to research catalysts, electrodes, and reaction conditions for CO₂ electrolysis up to ten times faster. The system is complemented by open-source software for data analysis.
    If you mix fossil fuel with a little oxygen and add a spark, three things are produced: water, climate-warming carbon dioxide, and lots of energy. This fundamental chemical reaction takes place in every combustion engine, whether it runs on gasoline, diesel, or kerosene. In theory, this reaction can be reversed: with the addition of (renewable) energy, previously released CO2 can be converted back into a (synthetic) fuel.
    This was the key idea behind the ETH Board-funded Joint Initiative SynFuels. Researchers at Empa and the Paul Scherrer Institute (PSI) spent three years working on ways to produce synthetic fuels — known as synfuels — economically and efficiently from CO2. This reaction, however, comes with challenges: for one, CO2 electrolysis does not just yield the fuel that was previously burned. Rather, more than 20 different products can be simultaneously formed, and they are difficult to separate from each other.
    The composition of these products can be controlled in various ways, for example via the reaction conditions, the catalyst used, and the microstructure of the electrodes. The number of possible combinations is enormous and examining each one individually would take too long. How are scientists supposed to find the best one? Empa researchers have now accelerated this process by a factor of 10.
    Accelerating research
    As part of the SynFuels project, researchers led by Corsin Battaglia and Alessandro Senocrate from Empa’s Materials for Energy Conversion laboratory have developed a system that can be used to investigate up to ten different reaction conditions as well as catalyst and electrode materials simultaneously. The researchers have recently published the blueprint for the system and the accompanying software in the journal Nature Catalysis.
    The system consists of ten “reactors”: small chambers with catalysts and electrodes in which the reaction takes place. Each reactor is connected to multiple gas and liquid in- and outlets and various instruments via hundreds of meters of tubing. Numerous parameters are recorded fully automatically, such as the pressure, the temperature, gas flows, and the liquid and gaseous reaction products — all with high temporal resolution.
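    To give a feel for the kind of bookkeeping this produces, the sketch below stacks hypothetical per-reactor time series and averages them on a common time grid with pandas. The file layout and channel names are invented for illustration and are not the data format of the published Empa software.

```python
import pandas as pd

def load_reactor(path, reactor_id):
    """One CSV per reactor with timestamped channel readings (hypothetical layout,
    e.g. pressure_bar, temperature_c, co2_flow_sccm, cell_voltage_v columns)."""
    df = pd.read_csv(path, parse_dates=["timestamp"])
    df["reactor"] = reactor_id
    return df

def summarize(frames, freq="1min"):
    """Stack all reactors and report per-reactor channel averages on a common grid."""
    data = pd.concat(frames, ignore_index=True)
    return (data.set_index("timestamp")
                .groupby("reactor")
                .resample(freq)
                .mean(numeric_only=True))

# frames = [load_reactor(f"reactor_{i}.csv", reactor_id=i) for i in range(10)]
# print(summarize(frames).head())
```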

    “As far as we know, this is the first system of its kind for CO2 electrolysis,” says Empa postdoctoral researcher Alessandro Senocrate. “It yields a large number of high-quality datasets, which will help us make accelerated discoveries.” When the system was being developed, some of the necessary instruments were not even available on the market. In collaboration with the company Agilent Technologies, Empa researchers co-developed the world’s first online liquid chromatography device, which identifies and quantifies the liquid reaction products in real time during CO2 electrolysis.
    Sharing research data
    Conducting experiments ten times faster also generates ten times as much data. In order to analyze this data, the researchers have developed a software solution that they are making available to scientists at other institutions on an open-source basis. They also want to share the data itself with other researchers. “Today, research data often disappears in a drawer as soon as the results are published,” explains Corsin Battaglia, Head of Empa’s Materials for Energy Conversion laboratory. A joint research project between Empa, PSI and ETH Zurich, which bears the name PREMISE, aims to prevent this: “We want to create standardized methods for storing and sharing data,” says Battaglia. “Then other researchers can gain new insights from our data — and vice versa.”
    Open access to research data is also a priority in other research activities of the Materials for Energy Conversion laboratory. This includes the National Center of Competence in Research NCCR Catalysis, which focuses on sustainable chemistry. The new parallel CO2 electrolysis system is set to play an important role in the second phase of this large-scale national project, with both the data generated and the know-how made available to other Swiss research institutions. To this end, the Empa researchers will continue to refine both the hardware and the software in the future.

  • Light-controlled artificial maple seeds could monitor the environment even in hard-to-reach locations

    Researchers from Tampere University, Finland, and the University of Pittsburgh, USA, have developed a tiny robot replicating the aerial dance of falling maple seeds. In the future, this robot could be used for real-time environmental monitoring or delivery of small samples even in inaccessible terrain such as deserts, mountains or cliffs, or the open sea. This technology could be a game changer for fields such as search-and-rescue, endangered species studies, or infrastructure monitoring.
    At Tampere University, Professor Hao Zeng and Doctoral Researcher Jianfeng Yang work at the interface between physics, soft mechanics, and material engineering in their Light Robots research group. They have drawn inspiration from nature to design polymeric gliding structures that can be controlled using light.
    Now, Zeng and Yang, with Professor M. Ravi Shankar from the University of Pittsburgh Swanson School of Engineering, have utilized a light-activated smart material to control the gliding mode of an artificial maple seed. In nature, maple seeds disperse to new growth sites with the help of the wings of their samara, or dry fruit. The wings help the seed to rotate as it falls, allowing it to glide in a gentle breeze. The configuration of these wings defines their glide path.
    According to the researchers, the artificial maple seed can be actively controlled using light, so that its dispersal in the wind can be tuned to achieve a range of gliding trajectories. In the future, it could also be equipped with various microsensors for environmental monitoring or be used to deliver, for example, small samples of soil.
    Hi-tech robot beats natural seed in adaptability
    The researchers were inspired by the variety of gliding seeds of Finnish trees, each exhibiting a unique and mesmerizing flight pattern. Their fundamental question was whether the structure of these seeds could be recreated using artificial materials to achieve a similar airborne elegance controlled by light.
    “The tiny light-controlled robots are designed to be released into the atmosphere, utilizing passive flight to disperse widely through interactions with surrounding airflows. Equipped with GPS and various sensors, they can provide real-time monitoring of local environmental indicators like pH levels and heavy metal concentrations,” explains Yang.

    Inspired by natural maple samaras, the team created an azobenzene-based, light-deformable liquid crystal elastomer that achieves reversible photochemical deformation to finely tune its aerodynamic properties.
    “The artificial maple seeds outperform their natural counterparts in adjustable terminal velocity, rotation rate, and hovering positions, enhancing wind-assisted long-distance travel through self-rotation,” says Zeng.
    At the beginning of 2023, Zeng and Yang released their first dandelion-seed-like mini robot within the project Flying Aero-robots based on Light Responsive Materials Assembly (FAIRY). The project, funded by the Research Council of Finland, started in September 2021 and will continue until August 2026.
    “Whether it is seeds or bacteria or insects, nature provides them with clever templates to move, feed and reproduce. Often this comes via a simple, but remarkably functional, mechanical design,” Shankar explains.
    “Thanks to advances in materials that are photosensitive, we are able to dictate mechanical behavior at almost the molecular level. We now have the potential to create micro robots, drones, and probes that can not only reach inaccessible areas but also relay critical information to the user. This could be a game changer for fields such as search-and-rescue, endangered or invasive species studies, or infrastructure monitoring,” he adds.