More stories

  • Catalyst for electronically controlled C–H functionalization

    The Chirik Group at the Princeton Department of Chemistry is chipping away at one of the great challenges of metal-catalyzed C-H functionalization with a new method that uses a cobalt catalyst to differentiate between bonds in fluoroarenes, functionalizing them based on their intrinsic electronic properties.
    In a paper published this week in Science, researchers show they are able to bypass the need for steric control and directing groups to induce cobalt-catalyzed borylation that is meta-selective.
    The lab’s research showcases an innovative approach driven by deep insights into organometallic chemistry that have been at the heart of its mission for over a decade. In this case, the Chirik Lab drilled down into how transition metals break C-H bonds, uncovering a method that could have vast implications for the synthesis of medicines, natural products, and materials.
    And their method is fast — comparable in speed to those that rely on iridium.
    The research is outlined in “Kinetic and Thermodynamic Control of C(sp2)-H Activation Enables Site-Selective Borylation,” by lead author Jose Roque, a former postdoc in the Chirik Group; postdoc Alex Shimozono; P.I. Paul Chirik, the Edwards S. Sanford Professor of Chemistry; and former lab members Tyler Pabst, Gabriele Hierlmeier, and Paul Peterson.
    ‘Really fast, really selective’
    “Chemists have been saying for decades, let’s turn synthetic chemistry on its head and make the C-H bond a reactive part of the molecule. That would be incredibly important for drug discovery for the pharmaceutical industry, or for making materials,” said Chirik.

    “One of the ways we do this is called C-H borylation, in which you turn the C-H bond into something else, into a carbon-boron bond. Turning C-H to C-B is a gateway to great chemistry.”
    Benzene rings are among the most common motifs in medicinal chemistry, yet chemists still rely largely on traditional approaches to functionalize them. The Chirik Group develops new methods that open up less-explored routes.
    “Imagine you have a benzene ring and it has one substituent on it,” Chirik added. “The site next to it is called ortho, the one next to that is called meta, and the one opposite is called para. The meta C-H bond is the hardest one to do selectively. That’s what Jose has done here with a cobalt catalyst, and no one’s done it before.
    “He’s made a cobalt catalyst that is really fast and really selective.”
    Roque, now an assistant professor in Princeton’s Department of Chemistry, said rational design was at the heart of their solution.
    “We started to get a glimpse of the high activity for C-H activation early during our stoichiometric studies,” said Roque. “The catalyst was rapidly activating the C-H bonds of aromatic solvents at room temperature. In order to isolate the catalyst, we had to avoid handling the catalyst in aromatic solvents,” he added. “We designed an electronically rich but sterically accessible pincer ligand that we posited — based on some previous insights from our lab as well as some fundamental organometallic principles — would lead to a more active catalyst.

    “And it has.”
    Chirik Lab Target Since 2014
    State-of-the-art borylation uses iridium as a catalyst for sterically driven C-H functionalization. It is highly reactive, and it is fast. But if you have a molecule with many C-H bonds, iridium catalysts fail to selectively functionalize the desired bond.
    As a result, pharmaceutical companies have appealed for an alternative with more selectivity. And they’ve sought it among first-row transition metals like cobalt and iron, which are less expensive and more sustainable than iridium.
    Since their first paper on C-H borylation in 2014, the Chirik Lab has articulated the concept of electronically controlled C-H activation as one answer to this challenge. Their idea is to differentiate between C-H bonds based on electronic properties in order to functionalize them. These properties are reflected in the metal-carbon bond strength. With the catalyst designed in this research, chemists can hit the selected bond and only the selected bond by tapping into these disparate strengths.
    But they uncovered another result that makes their method advantageous: the site selectivity can be switched by exploiting the kinetic or thermodynamic preferences of C-H activation. This selectivity switch can be accomplished by choosing one reagent over another, a process that is as streamlined as it is cost-effective.
    “Site-selective meta-to-fluorine functionalization was a huge challenge. We made some great progress toward that with this research and expanded the chemistry to include other substrate classes beyond fluoroarenes,” said Roque. “But as a function of studying first-row metals, we also found out, hey, we can switch the selectivity.”
    Added Chirik: “To me, this is a huge concept in C-H functionalization. Now we can look at metal-carbon bond strengths and predict where things are going to go. This opens a whole new opportunity. We’re going to be able to do things that iridium doesn’t do.”
    Shimozono came to the project late in the game, after Roque had already discovered the pivotal catalyst. His role will deepen in the coming months as he seeks new advances in borylation.
    “Jose’s catalyst is groundbreaking. Usually, a completely different catalyst is required in order to change site-selectivity,” said Shimozono. “Counter to this dogma, Jose demonstrated that using B2Pin2 as the boron source affords meta-selective chemistry, while using HBPin as the boron source gives ortho-selective borylation with the same iPrACNCCo catalyst.
    “In general, the more methods we have to install groups in specific sites in molecules, the better. This gives pharmaceutical chemists more tools to make and discover medications more efficiently.”

  • How a failure to understand race leads to flawed health tech

    A new study focused on wearable health monitors underscores an entrenched problem in the development of new health technologies — namely, that a failure to understand race means the way these devices are developed and tested can exacerbate existing racial health inequities.
    “This is a case study that focuses on one specific health monitoring technology, but it really highlights the fact that racial bias is baked into the design of many of these technologies,” says Vanessa Volpe, co-author of the study and an associate professor of psychology at North Carolina State University.
    “The way that we understand race, and the way that we put that understanding into action when developing and using health technologies, is deeply flawed,” says Beza Merid, corresponding author of the study and an assistant professor of science, technology, innovation and racial justice at Arizona State University.
    “Basically, the design of health technologies that purport to provide equitable solutions to racial health disparities often define race as a biological trait, when it’s actually a social construct,” Merid says. “And the end result of this misunderstanding is that we have health technologies that contribute to health inequities rather than reducing them.”
    To explore how the development and testing of health tech can reinforce racism, the researchers focused specifically on photoplethysmographic (PPG) sensors, which are widely used in consumer devices such as Fitbits and Apple Watches. PPG sensors are used in wearable technologies to measure biological signals, such as heart rate, by shining light through the skin and collecting data on how that light is reflected back to the device.
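    As a rough illustration of the measurement principle (not the pipeline of any particular device), the sketch below builds a synthetic reflected-light trace and estimates a heart rate from the spacing of its pulse peaks. The sampling rate, waveform shape, and peak-detection settings are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic PPG trace: a 72-beats-per-minute pulse riding on baseline drift and noise.
# (Illustrative values only; real devices sample the photodiode and filter in firmware.)
fs = 100                       # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)   # 30 seconds of data
true_bpm = 72
pulse = ((1 + np.sin(2 * np.pi * true_bpm / 60 * t)) / 2) ** 8   # one sharp peak per beat
ppg = (pulse
       + 0.2 * np.sin(2 * np.pi * 0.1 * t)     # slow baseline wander
       + 0.05 * np.random.randn(t.size))       # sensor noise

# Estimate heart rate from the intervals between detected pulse peaks.
peaks, _ = find_peaks(ppg, distance=0.4 * fs, prominence=0.3)
intervals = np.diff(peaks) / fs                # seconds between beats
est_bpm = 60.0 / intervals.mean()
print(f"Estimated heart rate: {est_bpm:.1f} bpm")
```

    Darker skin tones absorb more of the green light these sensors emit, which is one reason the reflected signal, and any estimate derived from it, can degrade if devices are not tested across skin tones.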
    For the study, the researchers drew on data from clinical validation studies for a wearable health monitoring device that relied on PPG sensors. The researchers also used data from studies that investigated the ways in which skin tone affects the accuracy of PPG “green light” sensors in the context of health monitoring. Lastly, the researchers looked at wearable device specifications and user manuals, as well as data from a lawsuit filed against a health technology manufacturer over the accuracy of technologies that relied on PPG sensors.
    “Essentially, we synthesized and interpreted data from each of these sources to take a critical look at racial bias in the development and testing of PPG sensors and their outputs, to see if they matched guidelines for responsible innovation,” Volpe says.

    “These studies identified challenges with PPG sensors for people with darker skin tones,” says Merid. “We drew on scholarship exploring how innovative technologies can reproduce racial health inequities to dig more deeply into how and why these challenges exist. Our own expertise in responsible innovation and structural racism in technology guided our approach. If people are developing technologies with the goal of reducing harm to people’s health, how and why do these technologies end up with flaws that can exacerbate that harm?”
    The findings suggest there are significant challenges when it comes to “race correction” in health technologies.
    “Race correction” is a broad term that applies not only to technologies but also to the correction or adjustment of health risk scores used to make decisions about the relative risk of disease and the allocation of health care resources.
    “Race correction assumes that we can develop technologies or health risk scoring algorithms to first quantify and then ‘remove’ the effect of biological race from the equation,” says Merid. “But doing so assumes race is a biological difference that needs to be corrected for to achieve equitable health for all. This prevents us from treating the real thing that needs to be corrected — the system of racism itself (e.g., differential treatment and access to health care, systematic socioeconomic disenfranchisement).”
    “For example, many — if not most — health technologies that use PPG sensors claim to be designed for use by everyone,” Volpe says. “But in reality those technologies are less accurate for people with darker skin tones. We argue that the systematic exclusion and erasure of those with darker skin tones in the development and testing of wearable technologies that are supposed to democratize and improve health for all can be a less visible form of race correction. In other words, the development process itself reflects the system of racism. The end result is a technological ‘solution’ that fails to deliver equity and is instead characteristic of the very system that created the problem.
    “Race corrections assume that we have to make adjustments based on race as a biological construct,” Volpe says. “But we should be adjusting racism as a system so that the technologies developed work and are responsible and equitable for everyone — in both their development and their consequences.”
    “Innovation can introduce unintended consequences,” Merid says. “Rather than coming up with a solution, you can potentially just introduce a new suite of problems. This is a longstanding challenge for trying to develop technological solutions to social problems.
    “Hopefully, this work contributes to our understanding of the ways that race correction is problematic,” says Merid. “We also hope that this work advances the idea that assumptions about race in the health field are deeply problematic, whether we’re talking about health technology, diagnoses or access to care. Lastly, we need to be mindful about the ways in which emerging health technologies can be harmful.”

  • Bowtie resonators that build themselves bridge the gap between nanoscopic and macroscopic

    A central goal in quantum optics and photonics is to increase the strength of the interaction between light and matter to produce, e.g., better photodetectors or quantum light sources. The best way to do that is to use optical resonators that store light for a long time, making it interact more strongly with matter. If the resonator is also very small, such that light is squeezed into a tiny region of space, the interaction is enhanced even further. The ideal resonator would store light for a long time in a region the size of a single atom.
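    One common way to quantify this is the Purcell factor, which grows with the resonator's quality factor Q and shrinks with its mode volume V. A minimal sketch, assuming illustrative values for Q and V rather than figures from the paper, evaluates the textbook expression F_P = (3/4π²)(λ/n)³(Q/V):

```python
import math

def purcell_factor(wavelength_m, n, q_factor, mode_volume_m3):
    """Textbook Purcell factor: F_P = (3 / (4 pi^2)) * (lambda/n)^3 * Q / V."""
    return (3.0 / (4.0 * math.pi ** 2)) * (wavelength_m / n) ** 3 * q_factor / mode_volume_m3

wavelength = 1.55e-6     # telecom wavelength in metres
n_si = 3.48              # refractive index of silicon
q = 1e5                  # assumed quality factor

# Compare a wavelength-scale mode volume with a deeply sub-wavelength one (assumed numbers).
v_conventional = (wavelength / n_si) ** 3     # roughly one cubic wavelength in the material
v_extreme = 1e-4 * v_conventional             # assumed extreme confinement

for label, v in [("wavelength-scale cavity", v_conventional),
                 ("deep sub-wavelength cavity", v_extreme)]:
    print(f"{label}: F_P ~ {purcell_factor(wavelength, n_si, q, v):.2e}")
```

    The point of the comparison is simply that squeezing the same amount of stored light into a much smaller volume multiplies the light-matter interaction accordingly.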
    Physicists and engineers have struggled for decades with how small optical resonators can be made without making them very lossy, which is equivalent to asking how small you can make a semiconductor device. The semiconductor industry’s roadmap for the next 15 years predicts that the smallest possible width of a semiconductor structure will be no less than 8 nm, which is several tens of atoms wide.
    Associate Professor Søren Stobbe and his colleagues at DTU Electro demonstrated 8 nm cavities last year, but they now propose and demonstrate a novel approach to fabricating a self-assembling cavity with an air void at the scale of a few atoms. Their paper detailing the results, ‘Self-assembled photonic cavities with atomic-scale confinement’, is published today in Nature.
    To briefly explain the experiment: two halves of a silicon structure are suspended on springs, although in the first step the silicon device is firmly attached to a layer of glass. The devices are made by conventional semiconductor technology, and the two halves are fabricated a few tens of nanometers apart. Upon selective etching of the glass, the structure is released and is then suspended only by the springs, and because the two halves are so close to each other, they attract due to surface forces. By carefully engineering the design of the silicon structures, the result is a self-assembled resonator with bowtie-shaped gaps at the atomic scale, surrounded by silicon mirrors.
    “We are far from a circuit that builds itself completely. But we have succeeded in converging two approaches that have been travelling along parallel tracks so far. And it allowed us to build a silicon resonator with unprecedented miniaturization,” says Søren Stobbe.
    Two separate approaches
    One approach — the top-down approach — is behind the spectacular development we have seen with silicon-based semiconductor technologies. Here, crudely put, you start from a silicon block and work on making nanostructures from it. The other approach — the bottom-up approach — is where you try to have a nanotechnological system assemble itself. It aims to mimic biological systems, such as plants or animals, which are built through biological or chemical processes. These two approaches are at the very core of what defines nanotechnology. But the problem is that they have so far been disconnected: semiconductors are scalable but cannot reach the atomic scale, and while self-assembled structures have long been operating at atomic scales, they offer no architecture for the interconnects to the external world.

    “The interesting thing would be if we could produce an electronic circuit that built itself — just like what happens with humans as they grow but with inorganic semiconductor materials. That would be true hierarchical self-assembly. We use the new self-assembly concept for photonic resonators, which may be used in electronics, nanorobotics, sensors, quantum technologies, and much more. Then, we would really be able to harvest the full potential of nanotechnology. The research community is many breakthroughs away from realizing that vision, but I hope we have taken the first steps,” says Guillermo Arregui, who co-supervised the project.
    Approaches converging
    Supposing a combination of the two approaches is possible, the team at DTU Electro set out to create nanostructures that surpass the limits of conventional lithography and etching despite using nothing more than conventional lithography and etching. Their idea was to use two surface forces, namely the Casimir force for attracting the two halves and the van der Waals force for making them stick together. These two forces are rooted in the same underlying effect: quantum fluctuations (see Fact box).
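    To get a feel for why such surface forces dominate at gaps of tens of nanometres, the sketch below evaluates the ideal parallel-plate Casimir pressure, P = π²ħc/(240 d⁴), for a few assumed gap sizes. Real silicon structures deviate from this ideal-metal, parallel-plate formula, so the numbers only indicate the scaling:

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s

def casimir_pressure(gap_m):
    """Ideal parallel-plate Casimir pressure, P = pi^2 * hbar * c / (240 * d^4), in pascals."""
    return np.pi ** 2 * HBAR * C / (240.0 * gap_m ** 4)

for gap_nm in [100, 50, 20, 10]:     # illustrative gaps, not values from the paper
    p = casimir_pressure(gap_nm * 1e-9)
    print(f"gap = {gap_nm:3d} nm -> attractive pressure ~ {p:12.1f} Pa")
```

    The steep 1/d⁴ dependence is why the two halves, once released from the glass and left a few tens of nanometres apart, pull themselves together and stick.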
    The researchers made photonic cavities that confine photons to air gaps so small that determining their exact size was impossible, even with a transmission electron microscope. But the smallest they built are the size of just one to three silicon atoms.
    “Even if the self-assembly takes care of reaching these extreme dimensions, the requirements for the nanofabrication are no less extreme. For example, structural imperfections are typically on the scale of several nanometers. Still, if there are defects at this scale, the two halves will only meet and touch at the three largest defects. We are really pushing the limits here, even though we make our devices in one of the very best university cleanrooms in the world,” says Ali Nawaz Babar, a PhD student at the NanoPhoton Center of Excellence at DTU Electro and first author of the new paper.
    “The advantage of self-assembly is that you can make tiny things. You can build unique materials with amazing properties. But today, you can’t use it for anything you plug into a power outlet. You can’t connect it to the rest of the world. So, you need all the usual semiconductor technology for making the wires or waveguides to connect whatever you have self-assembled to the external world.”
    Robust and accurate self-assembly

    The paper shows a possible way to link the two nanotechnology approaches by employing a new generation of fabrication technology that combines the atomic dimensions enabled by self-assembly with the scalability of semiconductors fabricated with conventional methods.
    “We don’t have to go in and find these cavities afterwards and insert them into another chip architecture. That would also be impossible because of the tiny size. In other words, we are building something on the scale of an atom already inserted in a macroscopic circuit. We are very excited about this new line of research, and plenty of work is ahead,” says Søren Stobbe.
    Surface forces
    There are four known fundamental forces: gravitational, electromagnetic, and the strong and weak nuclear forces. Besides the forces due to static configurations, e.g., the attractive electromagnetic force between positively and negatively charged particles, there can also be forces due to fluctuations. Such fluctuations may be either thermal or quantum in origin, and they give rise to surface forces such as the van der Waals force and the Casimir force, which act at different length scales but are rooted in the same underlying physics. Other mechanisms, such as electrostatic surface charges, can add to the net surface force. For example, geckos exploit surface forces to cling to walls and ceilings.
    How it was done
    The paper details three experiments that the researchers carried out in the labs at DTU: No fewer than 2688 devices across two microchips were fabricated, each containing a platform that would either collapse onto a nearby silicon wall — or not collapse, depending upon the surface area details, spring constant, and distance between platform and wall. This allowed the researchers to make a map of which parameters would — and would not — lead to deterministic self-assembly. Only 11 devices failed due to fabrication errors or other defects, a remarkably low number for a novel self-assembly process. The researchers made self-assembled optical resonators whose optical properties were verified experimentally, and the atomic scale was confirmed by transmission electron microscopy. The self-assembled cavities were embedded in a larger architecture consisting of self-assembled waveguides, springs, and photonic couplers to make the surrounding microchip circuitry in the same process.

  • Artificial intelligence makes gripping more intuitive

    Different types of grasps and bionic design: technological developments in recent decades have already led to advanced artificial hands. They can enable amputees who have lost a hand through accident or illness to regain some movements. Some of these modern prostheses allow independent finger movements and wrist rotation. These movements can be selected via a smartphone app or by using muscle signals from the forearm, typically detected by two sensors.
    For instance, the activation of wrist flexor muscles can be used to close the fingers together to grip a pen. If the wrist extensor muscles are contracted, the fingers re-open and the hand releases the pen. The same approach makes it possible to control different finger movements that are selected with the simultaneous activation of both flexor and extensor muscle groups. “These are movements that the patient has to learn during rehabilitation,” says Cristina Piazza, a professor of rehabilitation and assistive robotics at TUM. Now, Prof. Piazza’s research team has shown that artificial intelligence can enable patients to control advanced hand prostheses more intuitively by using the “synergy principle” and with the help of 128 sensors on the forearm.
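    A minimal sketch of that two-sensor scheme, assuming made-up activation levels, thresholds, and grip names (not the control software of any actual prosthesis): flexor activity closes the hand, extensor activity opens it, and simultaneous activation of both switches the selected grip.

```python
# Toy two-channel controller: (flexor, extensor) activation levels in [0, 1].
# Thresholds and the grip list are assumptions for illustration only.
THRESHOLD = 0.5
GRIPS = ["power grip", "pinch grip", "key grip"]

def step(flexor, extensor, state):
    """Update the hand state from one pair of muscle readings."""
    if flexor > THRESHOLD and extensor > THRESHOLD:
        # Simultaneous activation of both muscle groups selects the next grip type.
        state["grip"] = (state["grip"] + 1) % len(GRIPS)
    elif flexor > THRESHOLD:
        state["closed"] = True      # flexor activity closes the fingers
    elif extensor > THRESHOLD:
        state["closed"] = False     # extensor activity re-opens the hand
    return state

state = {"closed": False, "grip": 0}
for flexor, extensor in [(0.8, 0.1), (0.1, 0.9), (0.7, 0.8), (0.9, 0.2)]:
    state = step(flexor, extensor, state)
    print(f"grip={GRIPS[state['grip']]:10s} closed={state['closed']}")
```

    The rigidity of this kind of rule-based mapping is precisely what the patient has to learn to operate, and what the synergy-based, learned control described below aims to replace.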
    The synergy principle: the brain activates a pool of muscle cells
    What is the synergy principle? “It is known from neuroscientific studies that repetitive patterns are observed in experimental sessions, both in kinematics and muscle activation,” says Prof. Piazza. These patterns can be interpreted as the way in which the human brain copes with the complexity of the biological system. That means that the brain activates a pool of muscle cells, also in the forearm. The professor adds: “When we use our hands to grasp an object, for example a ball, we move our fingers in a synchronized way and adapt to the shape of the object when contact occurs.”
    The researchers are now using this principle to design and control artificial hands by creating new learning algorithms. This is necessary for intuitive movement: When controlling an artificial hand to grasp a pen, for example, multiple steps take place. First, the patient orients the artificial hand according to the grasping location, slowly moves the fingers together, and then grabs the pen. The goal is to make these movements more and more fluid, so that it is hardly noticeable that numerous separate movements make up an overall process.
    “With the help of machine learning, we can understand the variations among subjects and improve the control adaptability over time and the learning process,” concludes Patricia Capsi Morales, the senior scientist in Prof. Piazza’s team.
    Discovering patterns from 128 signal channels
    Experiments with the new approach already indicate that conventional control methods could soon be empowered by more advanced strategies. To study what is happening at the level of the central nervous system, the researchers are working with two films: one for the inside and one for the outside of the forearm. Each contains up to 64 sensors to detect muscle activation. The method also estimates which electrical signals the spinal motor neurons have transmitted. “The more sensors we use, the better we can record information from different muscle groups and find out which muscle activations are responsible for which hand movements,” explains Prof. Piazza. Depending on whether a person intends to make a fist, grip a pen or open a jam jar, “characteristic features of muscle signals” result, according to Dr. Capsi Morales — a prerequisite for intuitive movements.
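    The article does not name the algorithm used to discover these patterns, but one standard way to extract shared activation patterns (“synergies”) from many muscle-signal channels is non-negative matrix factorization. The sketch below applies it to synthetic 128-channel data; the channel count matches the article, and everything else is an assumption.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic example: 128 sensor channels, 2000 time samples, generated from
# 4 underlying "synergies" (shared activation patterns). All numbers are assumptions.
n_channels, n_samples, n_synergies = 128, 2000, 4
true_patterns = rng.random((n_channels, n_synergies))            # channels each synergy drives
true_activations = np.abs(rng.standard_normal((n_synergies, n_samples)))
emg = true_patterns @ true_activations + 0.05 * rng.random((n_channels, n_samples))

# Non-negative matrix factorization: emg ~ W @ H, with W the per-channel synergy
# weights and H their activation over time.
model = NMF(n_components=n_synergies, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(emg)    # (128 channels x 4 synergies)
H = model.components_           # (4 synergies x 2000 samples)

reconstruction_error = np.linalg.norm(emg - W @ H) / np.linalg.norm(emg)
print(f"relative reconstruction error: {reconstruction_error:.3f}")
```

    A handful of such shared components, rather than 128 raw channels, is the kind of compact description that can then be mapped to hand and wrist movements.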
    Wrist and hand movement: Eight out of ten people prefer the intuitive way
    Current research concentrates on the movement of the wrist and the whole hand. It shows that most people (eight out of ten) prefer the intuitive way of moving wrist and hand, which is also the more efficient way. But two out of ten learn to handle the less intuitive way and, in the end, become even more precise. “Our goal is to investigate the learning effect and find the right solution for each patient,” Dr. Capsi Morales explains. “This is a step in the right direction,” says Prof. Piazza, who emphasizes that each system consists of individual mechanics and properties of the hand, special training with patients, interpretation and analysis, and machine learning.
    Current challenges of advanced control of artificial hands
    There are still some challenges to address: The learning algorithm, which is based on the information from the sensors, has to be retrained every time the film slips or is removed. In addition, the sensors must be prepared with a gel to guarantee the necessary conductivity to record the signals from the muscles precisely. “We use signal processing techniques to filter out the noise and get usable signals,” explains Dr. Capsi Morales. Every time a new patient wears the cuff with the many sensors over their forearm, the algorithm must first identify the activation patterns for each movement sequence to later detect the user’s intention and translate it into commands for the artificial hand.

  • AI accelerates problem-solving in complex scenarios

    While Santa Claus may have a magical sleigh and nine plucky reindeer to help him deliver presents, for companies like FedEx, the optimization problem of efficiently routing holiday packages is so complicated that they often employ specialized software to find a solution.
    This software, called a mixed-integer linear programming (MILP) solver, splits a massive optimization problem into smaller pieces and uses generic algorithms to try to find the best solution. However, the solver could take hours — or even days — to arrive at a solution.
    The process is so onerous that a company often must stop the software partway through, accepting a solution that is not ideal but the best that could be generated in a set amount of time.
    Researchers from MIT and ETH Zurich used machine learning to speed things up.
    They identified a key intermediate step in MILP solvers that has so many potential solutions it takes an enormous amount of time to unravel, which slows the entire process. The researchers employed a filtering technique to simplify this step, then used machine learning to find the optimal solution for a specific type of problem.
    Their data-driven approach enables a company to use its own data to tailor a general-purpose MILP solver to the problem at hand.
    This new technique sped up MILP solvers between 30 and 70 percent, without any drop in accuracy. One could use this method to obtain an optimal solution more quickly or, for especially complex problems, a better solution in a tractable amount of time.

    This approach could be used wherever MILP solvers are employed, such as by ride-hailing services, electric grid operators, vaccination distributors, or any entity faced with a thorny resource-allocation problem.
    “Sometimes, in a field like optimization, it is very common for folks to think of solutions as either purely machine learning or purely classical. I am a firm believer that we want to get the best of both worlds, and this is a really strong instantiation of that hybrid approach,” says senior author Cathy Wu, the Gilbert W. Winslow Career Development Assistant Professor in Civil and Environmental Engineering (CEE), and a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society (IDSS).
    Wu wrote the paper with co-lead authors Sirui Li, an IDSS graduate student, and Wenbin Ouyang, a CEE graduate student, as well as Max Paulus, a graduate student at ETH Zurich. The research will be presented at the Conference on Neural Information Processing Systems.
    Tough to solve
    MILP problems have an exponential number of potential solutions. For instance, say a traveling salesperson wants to find the shortest path to visit several cities and then return to their city of origin. If there are many cities which could be visited in any order, the number of potential solutions might be greater than the number of atoms in the universe.
    “These problems are called NP-hard, which means it is very unlikely there is an efficient algorithm to solve them. When the problem is big enough, we can only hope to achieve some suboptimal performance,” Wu explains.

    An MILP solver employs an array of techniques and practical tricks that can achieve reasonable solutions in a tractable amount of time.
    A typical solver uses a divide-and-conquer approach, first splitting the space of potential solutions into smaller pieces with a technique called branching. Then, the solver employs a technique called cutting to tighten up these smaller pieces so they can be searched faster.
    Cutting uses a set of rules that tighten the search space without removing any feasible solutions. These rules are generated by a few dozen algorithms, known as separators, that have been created for different kinds of MILP problems.
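    For readers unfamiliar with MILP, here is a minimal sketch of what such a problem looks like: a toy package-to-truck assignment with made-up costs, weights, and capacities, solved with SciPy's general-purpose milp routine (not the solvers or data discussed in the article). Branching and cutting happen inside the solver call.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy MILP: assign 3 packages to 2 trucks at minimum cost, respecting truck capacity.
# Costs, weights, and capacities are made-up numbers purely for illustration.
cost = np.array([[4, 6],    # cost of putting package p on truck t
                 [5, 3],
                 [7, 5]], dtype=float)
weight = np.array([2, 3, 4], dtype=float)
capacity = 7.0
n_pkg, n_trk = cost.shape
c = cost.ravel()            # decision variable x[p, t], flattened row-major

# Each package goes on exactly one truck.
assign = np.zeros((n_pkg, n_pkg * n_trk))
for p in range(n_pkg):
    assign[p, p * n_trk:(p + 1) * n_trk] = 1
assign_con = LinearConstraint(assign, lb=1, ub=1)

# Each truck's load stays within capacity.
load = np.zeros((n_trk, n_pkg * n_trk))
for t in range(n_trk):
    for p in range(n_pkg):
        load[t, p * n_trk + t] = weight[p]
load_con = LinearConstraint(load, lb=0, ub=capacity)

res = milp(c, constraints=[assign_con, load_con],
           integrality=np.ones(c.size),       # all variables integer (here 0/1)
           bounds=Bounds(0, 1))
print("optimal cost:", res.fun)
print("assignment:\n", res.x.reshape(n_pkg, n_trk).round().astype(int))
```

    Real instances like holiday package routing have millions of such variables and constraints, which is why the choice of separators during cutting matters so much for solve time.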
    Wu and her team found that the process of identifying the ideal combination of separator algorithms to use is, in itself, a problem with an exponential number of solutions.
    “Separator management is a core part of every solver, but this is an underappreciated aspect of the problem space. One of the contributions of this work is identifying the problem of separator management as a machine learning task to begin with,” she says.
    Shrinking the solution space
    She and her collaborators devised a filtering mechanism that reduces this separator search space from more than 130,000 potential combinations to around 20 options. This filtering mechanism draws on the principle of diminishing marginal returns, which says that the most benefit would come from a small set of algorithms, and adding additional algorithms won’t bring much extra improvement.
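    A schematic of that diminishing-returns idea, with a toy coverage-style benefit function standing in for the real, data-driven criterion: greedily keep separators while their marginal benefit is still large, and stop once it falls below a floor.

```python
import random

random.seed(0)

# Schematic only: rather than scoring all ~130,000 subsets of separator algorithms,
# greedily keep the few whose marginal benefit is still large. The coverage model
# below is a stand-in; the real benefit would be measured on solver runs.
N_SEPARATORS = 17                    # 2^17 = 131,072 possible subsets
FEATURES = range(100)                # imaginary problem structures a separator can exploit
covers = {s: set(random.sample(FEATURES, 25)) for s in range(N_SEPARATORS)}

def benefit(subset):
    """Coverage-style benefit: how many distinct structures the chosen separators exploit."""
    covered = set()
    for s in subset:
        covered |= covers[s]
    return len(covered)

chosen, gain_floor = [], 5           # stop once marginal gain drops below a floor (assumed)
while True:
    best_s, best_gain = None, 0
    for s in range(N_SEPARATORS):
        if s in chosen:
            continue
        gain = benefit(chosen + [s]) - benefit(chosen)
        if gain > best_gain:
            best_s, best_gain = s, gain
    if best_s is None or best_gain < gain_floor:
        break
    chosen.append(best_s)
    print(f"kept separator {best_s:2d}, marginal gain {best_gain}")
print("retained", len(chosen), "of", N_SEPARATORS, "separators")
```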
    Then they use a machine-learning model to pick the best combination of algorithms from among the 20 remaining options.
    This model is trained with a dataset specific to the user’s optimization problem, so it learns to choose algorithms that best suit the user’s particular task. Since a company like FedEx has solved routing problems many times before, using real data gleaned from past experience should lead to better solutions than starting from scratch each time.
    The model’s iterative learning process, known as contextual bandits, a form of reinforcement learning, involves picking a potential solution, getting feedback on how good it was, and then trying again to find a better solution.
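    A minimal sketch of that loop, assuming a synthetic reward signal and made-up problem classes rather than real solver runtimes: an epsilon-greedy contextual bandit learns which of the roughly 20 remaining separator configurations works best for each class of instance.

```python
import random

random.seed(1)

# Schematic epsilon-greedy contextual bandit: pick one of ~20 separator configurations
# for an incoming problem instance, observe how much it speeds up the solve, refine.
# The reward model is synthetic; in practice it would come from measured solve times.
N_CONFIGS, EPSILON = 20, 0.1
CONTEXTS = ["routing", "scheduling", "allocation"]              # toy problem classes (assumed)
best_for = {"routing": 3, "scheduling": 11, "allocation": 17}   # hidden ground truth

def observe_reward(context, config):
    """Synthetic speedup signal: the 'right' config for a class gives a higher reward."""
    base = 0.7 if config == best_for[context] else 0.3
    return base + random.gauss(0, 0.05)

value = {(c, a): 0.0 for c in CONTEXTS for a in range(N_CONFIGS)}
count = {(c, a): 0 for c in CONTEXTS for a in range(N_CONFIGS)}

for step in range(3000):
    context = random.choice(CONTEXTS)
    if random.random() < EPSILON:                               # explore
        config = random.randrange(N_CONFIGS)
    else:                                                       # exploit current estimate
        config = max(range(N_CONFIGS), key=lambda a: value[(context, a)])
    reward = observe_reward(context, config)
    count[(context, config)] += 1
    value[(context, config)] += (reward - value[(context, config)]) / count[(context, config)]

for context in CONTEXTS:
    learned = max(range(N_CONFIGS), key=lambda a: value[(context, a)])
    print(f"{context:10s} -> learned config {learned:2d} (true best {best_for[context]})")
```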
    This data-driven approach accelerated MILP solvers between 30 and 70 percent without any drop in accuracy. Moreover, the speedup was similar when they applied it to a simpler, open-source solver and a more powerful, commercial solver.
    In the future, Wu and her collaborators want to apply this approach to even more complex MILP problems, where gathering labeled data to train the model could be especially challenging. Perhaps they can train the model on a smaller dataset and then tweak it to tackle a much larger optimization problem, she says. The researchers are also interested in interpreting the learned model to better understand the effectiveness of different separator algorithms.
    This research is supported, in part, by MathWorks, the National Science Foundation (NSF), the MIT Amazon Science Hub, and MIT’s Research Support Committee.

  • Using AI to find microplastics

    An interdisciplinary research team from the University of Waterloo is using artificial intelligence (AI) to identify microplastics faster and more accurately than ever before.
    Microplastics are commonly found in food and are dangerous pollutants that cause severe environmental damage — finding them is the key to getting rid of them.
    The research team’s advanced imaging identification system could help wastewater treatment plants and food production industries make informed decisions to mitigate the potential impact of microplastics on the environment and human health.
    A comprehensive risk analysis and action plan requires quality information based on accurate identification. In search of a robust analytical tool that could enumerate, identify and describe the many microplastics that exist, project lead Dr. Wayne Parker and his team employed an advanced spectroscopy method that exposes particles to a range of wavelengths of light. Different types of plastics produce different signals in response to the light exposure. These signals are like fingerprints that can also be used to mark particles as microplastic or not.
    The challenge researchers often face is that microplastics come in wide varieties, owing to manufacturing additives and fillers that can blur the “fingerprints” in a lab setting. This often makes it difficult to distinguish microplastics from organic material and to tell the different types of microplastics apart. Human intervention is usually required to dig out subtle patterns and cues, which is slow and prone to error.
    “Microplastics are hydrophobic materials that can soak up other chemicals,” said Parker, a professor in Waterloo’s Department of Civil and Environmental Engineering. “Science is still evolving in terms of how bad the problem is, but it’s theoretically possible that microplastics are enhancing the accumulation of toxic substances in the food chain.”
    Parker approached Dr. Alexander Wong, a professor in Waterloo’s Department of Systems Design Engineering and the Canada Research Chair in Artificial Intelligence and Medical Imaging, for assistance. With his help, the team developed an AI tool called PlasticNet that enables researchers to rapidly analyze large numbers of particles, approximately 50 per cent faster than prior methods and with 20 per cent more accuracy.

    The tool is the latest sustainable technology designed by Waterloo researchers to protect our environment and engage in research that will contribute to a sustainable future.
    “We built a deep learning neural network to enhance microplastic identification from the spectroscopic signals,” said Wong. “We trained it on data from existing literature sources and our own generated images to understand the varied make-up of microplastics and spot the differences quickly and correctly — regardless of the fingerprint quality.”
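    As an illustration of the general idea (explicitly not the PlasticNet architecture, which the article does not detail), a small one-dimensional convolutional classifier for spectral fingerprints might look like the following; the spectrum length, class labels, and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

# Schematic 1D convolutional classifier for spectroscopic "fingerprints".
# Spectrum length, class list, and layer sizes are assumptions for illustration only.
N_WAVELENGTHS = 512                                            # points per spectrum (assumed)
CLASSES = ["PET", "HDPE", "PVC", "PP", "PS", "not plastic"]    # illustrative labels

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
    nn.Flatten(),
    nn.Linear(32 * (N_WAVELENGTHS // 16), 64), nn.ReLU(),
    nn.Linear(64, len(CLASSES)),
)

# One training step on synthetic data, just to show the shapes involved.
spectra = torch.randn(8, 1, N_WAVELENGTHS)          # batch of 8 spectra, 1 channel each
labels = torch.randint(0, len(CLASSES), (8,))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(model(spectra), labels)
loss.backward()
optimizer.step()
print("batch loss:", float(loss))
```

    The appeal of a learned classifier here is exactly what Wong describes: it can pick up the subtle spectral differences between plastic types even when additives and fillers degrade the fingerprint quality.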
    Parker’s former PhD student, Frank Zhu, tested the system on microplastics isolated from a local wastewater treatment plant. Results show that it can identify microplastics with unprecedented speed and accuracy. This information can empower treatment plants to implement effective measures to control and eliminate these substances.
    The next steps involve continued learning and testing, as well as feeding the PlasticNet system more data to increase the quality of its microplastics identification capabilities for application across a broad range of needs.

  • Diamonds and rust help unveil ‘impossible’ quasi-particles

    Researchers have discovered magnetic monopoles — isolated magnetic charges — in a material closely related to rust, a result that could be used to power greener and faster computing technologies.
    Researchers led by the University of Cambridge used a technique known as diamond quantum sensing to observe swirling textures and faint magnetic signals on the surface of hematite, a type of iron oxide.
    The researchers observed that magnetic monopoles in hematite emerge through the collective behaviour of many spins (the angular momentum of a particle). These monopoles glide across the swirling textures on the surface of the hematite, like tiny hockey pucks of magnetic charge. This is the first time that naturally occurring emergent monopoles have been observed experimentally.
    The research has also shown the direct connection between the previously hidden swirling textures and the magnetic charges of materials like hematite, as if there is a secret code linking them together. The results, which could be useful in enabling next-generation logic and memory applications, are reported in the journal Nature Materials.
    According to the equations of James Clerk Maxwell, a giant of Cambridge physics, magnetic objects, whether a fridge magnet or the Earth itself, must always exist as a pair of magnetic poles that cannot be isolated.
    “The magnets we use every day have two poles: north and south,” said Professor Mete Atatüre, who led the research. “In the 19th century, it was hypothesised that monopoles could exist. But in one of his foundational equations for the study of electromagnetism, James Clerk Maxwell disagreed.”
    Atatüre is Head of Cambridge’s Cavendish Laboratory, a position once held by Maxwell himself. “If monopoles did exist, and we were able to isolate them, it would be like finding a missing puzzle piece that was assumed to be lost,” he said.

    About 15 years ago, scientists suggested how monopoles could exist in a magnetic material. This theoretical result relied on the extreme separation of north and south poles so that locally each pole appeared isolated in an exotic material called spin ice.
    However, there is an alternative strategy to find monopoles, involving the concept of emergence. The idea of emergence is that the combination of many physical entities can give rise to properties that are either more than or different from the sum of their parts.
    Working with colleagues from the University of Oxford and the National University of Singapore, the Cambridge researchers used emergence to uncover monopoles spread over two-dimensional space, gliding across the swirling textures on the surface of a magnetic material.
    The swirling topological textures are found in two main types of materials: ferromagnets and antiferromagnets. Of the two, antiferromagnets are more stable than ferromagnets, but they are more difficult to study, as they don’t have a strong magnetic signature.
    To study the behaviour of antiferromagnets, Atatüre and his colleagues use an imaging technique known as diamond quantum magnetometry. This technique uses a single spin — the inherent angular momentum of an electron — in a diamond needle to precisely measure the magnetic field on the surface of a material, without affecting its behaviour.
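    As a back-of-the-envelope illustration of how such a sensor reports a field: the two spin resonances of the nitrogen-vacancy centre in the diamond shift apart by twice the gyromagnetic ratio times the field component along the NV axis, so a measured splitting converts directly into a field value. The splittings below are assumed, not measurements from the study.

```python
# A nitrogen-vacancy (NV) spin in the diamond tip reports the local magnetic field through
# the splitting of its spin resonances. This converts an assumed resonance splitting into
# a field value using the NV electron-spin gyromagnetic ratio; numbers are illustrative only.
GAMMA_NV = 28.02e9        # Hz per tesla

def field_from_splitting(splitting_hz):
    """The two NV resonances separate by 2 * gamma * B_parallel, so B = splitting / (2 * gamma)."""
    return splitting_hz / (2.0 * GAMMA_NV)

for splitting_mhz in [0.1, 1.0, 10.0]:          # assumed splittings
    b_tesla = field_from_splitting(splitting_mhz * 1e6)
    print(f"splitting = {splitting_mhz:5.1f} MHz  ->  B_parallel ~ {b_tesla * 1e6:8.2f} microtesla")
```

    Sensitivity to microtesla-scale fields at nanometre standoff is what makes it possible to map the faint stray fields of an antiferromagnet without disturbing it.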
    For the current study, the researchers used the technique to look at hematite, an antiferromagnetic iron oxide material. To their surprise, they found hidden patterns of magnetic charges within hematite, including monopoles, dipoles and quadrupoles.

    “Monopoles had been predicted theoretically, but this is the first time we’ve actually seen a two-dimensional monopole in a naturally occurring magnet,” said co-author Professor Paolo Radaelli, from the University of Oxford.
    “These monopoles are a collective state of many spins that twirl around a singularity rather than a single fixed particle, so they emerge through many-body interactions. The result is a tiny, localised stable particle with diverging magnetic field coming out of it,” said co-first author Dr Hariom Jani, from the University of Oxford.
    “We’ve shown how diamond quantum magnetometry could be used to unravel the mysterious behaviour of magnetism in two-dimensional quantum materials, which could open up new fields of study in this area,” said co-first author Dr Anthony Tan, from the Cavendish Laboratory. “The challenge has always been direct imaging of these textures in antiferromagnets due to their weaker magnetic pull, but now we’re able to do so, with a nice combination of diamonds and rust.”
    The study not only highlights the potential of diamond quantum magnetometry but also underscores its capacity to uncover and investigate hidden magnetic phenomena in quantum materials. If controlled, these swirling textures dressed in magnetic charges could power super-fast and energy-efficient computer memory logic.
    The research was supported in part by the Royal Society, the Sir Henry Royce Institute, the European Union, and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).

  • Exposure to soft robots decreases human fears about working with them

    Seeing robots made with soft, flexible parts in action appears to lower people’s anxiety about working with them or even being replaced by them.
    A Washington State University study found that watching videos of a soft robot working with a person at picking and placing tasks lowered the viewers’ safety concerns and feelings of job insecurity. This was true even when the soft robot was shown working in close proximity to the person. The finding suggests that soft robots hold a potential psychological advantage over rigid robots made of metal or other hard materials.
    “Prior research has generally found that the closer you are to a rigid robot, the more negative your reactions are, but we didn’t find those outcomes in this study of soft robots,” said lead author Tahira Probst, a WSU psychology professor.
    Currently, human and rigid robotic workers have to maintain a set distance from each other for safety reasons, but as this study indicates, working in close proximity to soft robots could be not only physically safer but also more psychologically acceptable.
    “This finding needs to be replicated, but if it holds up, that means humans could work together more closely with the soft robots,” Probst said.
    The study, published in the journal IISE Transactions on Occupational Ergonomics and Human Factors, did find that faster interactions with a soft robot tended to cause more negative responses, but when the study participants had previous experience with robots, faster speed did not bother them. In fact, they preferred the faster interactions. This reinforces the finding that greater familiarity increased overall comfort with soft robots.
    About half of all occupations are highly likely to involve some type of automation within the next couple decades, said Probst, particularly those related to production, transportation, extraction and agriculture.

    Soft robots, which are made with flexible materials like fabric and rubber, are still a relatively new technology compared to rigid robots, which are already widely used in manufacturing.
    Rigid robots have many limitations including their high cost and high safety concerns — two problems soft robots can potentially solve, said study co-author Ming Luo, an assistant professor in WSU’s School of Mechanical and Materials Engineering.
    “We make soft robots that are naturally safe, so we don’t have to focus a lot on expensive hardware and sensors to guarantee safety like has to be done with rigid robots,” said Luo.
    As an example, Luo noted that one rigid robot used for apple picking could cost around $30,000 whereas the current research and development cost for one soft robot, encompassing all components and manufacturing, is under $5,000. Also, that cost could be substantially decreased if production were scaled up.
    Luo’s team is in the process of developing soft robots for a range of functions, including fruit picking, pruning and pollinating. Soft robots also have the potential to help elderly or disabled people in home or health care settings. Much more development has to be done before this can be a reality, Luo said, but his engineering lab has partnered with Probst’s psychology team to better understand human-robot interactions early in the process.
    “It’s good to know how humans will react to the soft robots in advance and then incorporate that information into the design,” said Probst. “That’s why we’re working in tandem, where the psychology side is informing the technical development of these robots in their infancy.”
    To further test this study’s findings, the researchers are planning to bring participants into the lab to interact directly with soft robots. In addition to collecting participants’ self-reported surveys, they will also measure participants’ physical stress reactions, such as heart rate and galvanic skin responses, which are changes in the skin’s electrical resistance in reaction to emotional stress.