More stories

  • Using machine learning to monitor driver ‘workload’ could help improve road safety

    Researchers have developed an adaptable algorithm that could improve road safety by predicting when drivers are able to safely interact with in-vehicle systems or receive messages, such as traffic alerts, incoming calls or driving directions.
    The researchers, from the University of Cambridge working in partnership with Jaguar Land Rover (JLR), used a combination of on-road experiments, machine learning and Bayesian filtering techniques to measure driver ‘workload’ reliably and continuously. Driving in an unfamiliar area may translate to a high workload, while a daily commute may mean a lower workload.
    The resulting algorithm is highly adaptable and can respond in near real-time to changes in the driver’s behaviour and status, road conditions, road type, or driver characteristics.
    This information could then be incorporated into in-vehicle systems such as infotainment and navigation displays, advanced driver assistance systems (ADAS) and others. Any driver–vehicle interaction can then be customised to prioritise safety and enhance the user experience, delivering adaptive human–machine interactions. For example, drivers would only be alerted at times of low workload, so that they can keep their full concentration on the road in more stressful driving scenarios. The results are reported in the journal IEEE Transactions on Intelligent Vehicles.
    “More and more data is made available to drivers all the time. However, with increasing levels of driver demand, this can be a major risk factor for road safety,” said co-first author Dr Bashar Ahmad from Cambridge’s Department of Engineering. “There is a lot of information that a vehicle can make available to the driver, but it’s not safe or practical to do so unless you know the status of the driver.”
    A driver’s status — or workload — can change frequently. Driving in a new area, in heavy traffic or in poor road conditions, for example, is usually more demanding than a daily commute.
    “If you’re in a demanding driving situation, that would be a bad time for a message to pop up on a screen or a heads-up display,” said Ahmad. “The issue for car manufacturers is how to measure how occupied the driver is, and instigate interactions or issue messages or prompts only when the driver is happy to receive them.”
    There are algorithms for measuring levels of driver demand using eye gaze trackers and biometric data from heart rate monitors, but the Cambridge researchers wanted to develop an approach that could do the same thing using information available in any car, specifically driving performance signals such as steering, acceleration and braking data. It should also be able to consume and fuse different unsynchronised data streams that have different update rates, including from biometric sensors if available.

    To measure driver workload, the researchers first developed a modified version of the Peripheral Detection Task to collect, in an automated way, subjective workload information during driving. For the experiment, a phone showing a route on a navigation app was mounted to the car’s central air vent, next to a small LED ring light that blinked at regular intervals. Participants all followed the same route through a mix of rural, urban and main roads. They were asked to push a finger-worn button whenever the LED ring lit up red, provided they perceived themselves to be in a low-workload scenario.
    Video analysis of the experiment, paired with the data from the buttons, allowed the researchers to identify high workload situations, such as busy junctions or a vehicle in front or behind the driver behaving unusually.
    The on-road data was then used to develop and validate a supervised machine learning framework to profile drivers based on the average workload they experience, and an adaptable Bayesian filtering approach for sequentially estimating, in real-time, the driver’s instantaneous workload, using several driving performance signals including steering and braking. The framework combines macro and micro measures of workload where the former is the driver’s average workload profile and the latter is the instantaneous one.
    “For most machine learning applications like this, you would have to train it on a particular driver, but we’ve been able to adapt the models on the go using simple Bayesian filtering techniques,” said Ahmad. “It can easily adapt to different road types and conditions, or different drivers using the same car.”
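    The sequential-estimation idea described above can be illustrated with a toy scalar Bayesian (Kalman-style) filter that fuses unsynchronised driving signals into a running workload estimate. The random-walk state model, noise values and example signals below are invented for illustration; they are not the framework used in the Cambridge/JLR study.

```python
# Toy 1-D Kalman-style filter for a latent "workload" level.
# The state model, noise values and the fused signals are illustrative
# assumptions, not the parameters used in the Cambridge/JLR study.

def kalman_step(mean, var, measurement, process_var=0.05, meas_var=0.4):
    """One predict/update cycle for a scalar random-walk state."""
    # Predict: assume workload drifts as a random walk between samples.
    mean_pred, var_pred = mean, var + process_var
    # Update: blend the prediction with the new noisy observation.
    gain = var_pred / (var_pred + meas_var)
    mean_new = mean_pred + gain * (measurement - mean_pred)
    var_new = (1 - gain) * var_pred
    return mean_new, var_new

# Unsynchronised streams arrive as (timestamp, value) pairs; fusing them
# just means applying an update whenever any stream produces a sample.
steering = [(0.0, 0.2), (0.4, 0.7), (0.8, 0.9)]
braking = [(0.2, 0.1), (0.6, 0.8)]
samples = sorted(steering + braking)  # merge streams by timestamp

mean, var = 0.5, 1.0  # vague prior: moderate workload, high uncertainty
for _, value in samples:
    mean, var = kalman_step(mean, var, value)

print(round(mean, 2))  # posterior workload estimate after all samples
```

    Because the filter carries only a mean and a variance, it adapts on the go to new drivers or road types without retraining, which is the property the quote above highlights.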
    The research was conducted in collaboration with JLR, which carried out the experimental design and data collection. It was part of a project sponsored by JLR under the CAPE agreement with the University of Cambridge.
    “This research is vital in understanding the impact of our design from a user perspective, so that we can continually improve safety and curate exceptional driving experiences for our clients,” said JLR’s Senior Technical Specialist of Human Machine Interface Dr Lee Skrypchuk. “These findings will help define how we use intelligent scheduling within our vehicles to ensure drivers receive the right notifications at the most appropriate time, allowing for seamless and effortless journeys.”
    The research at Cambridge was carried out by a team of researchers from the Signal Processing and Communications Laboratory (SigProC), Department of Engineering, under the supervision of Professor Simon Godsill. It was led by Dr Bashar Ahmad and included Nermin Caber (PhD student at the time) and Dr Jiaming Liang, who all worked on the project while based at Cambridge’s Department of Engineering.

  • How ChatGPT could help first responders during natural disasters

    A little over a year since its launch, ChatGPT’s abilities are well known. The machine learning model can write a decent college-level essay and hold a conversation in an almost human-like way.
    But could its language skills also help first responders find those in distress during a natural disaster?
    A new University at Buffalo-led study trains ChatGPT to recognize locations, from home addresses to intersections, in disaster victims’ social media posts.
    Supplied with carefully constructed prompts, researchers’ “geoknowledge-guided” GPT models extracted location data from tweets sent during Hurricane Harvey at an accuracy rate 76% better than default GPT models.
    “This use of AI technology may be able to help first responders reach victims more quickly and even save more lives,” said Yingjie Hu, associate professor in the UB Department of Geography, within the College of Arts and Sciences, and lead author of the study, which was published in October in the International Journal of Geographical Information Science.
    Disaster victims have frequently turned to social media to plead for help when 911 systems become overloaded, including during Harvey’s devastation of the Houston area in 2017.
    Yet first responders often don’t have the resources to monitor social media feeds during a disaster, following the various hashtags and deciding which posts are most urgent.

    It is the hope of the UB-led research team, which also includes collaborators from the University of Georgia, Stanford University and Google, that their work could lead to AI systems that automatically process social media data for emergency services.
    “ChatGPT and other large language models have drawn controversy for their potential negative uses, whether it be academic fraud or eliminating jobs, so it is exciting to instead harness their powers for social good,” Hu says.
    “While there are a number of significant and valid concerns about the emergence of ChatGPT, our work shows that careful, interdisciplinary work can produce applications of this technology that can provide tangible benefits to society,” adds co-author Kenneth Joseph, assistant professor in the UB Department of Computer Science and Engineering, within the School of Engineering and Applied Sciences.
    Fusing ‘geoknowledge’ into ChatGPT
    Imagine a tweet with an urgent but clear message: A family, including a 90-year-old not steady on their feet, needs rescuing at 1280 Grant St., Cypress, Texas, 77249.
    A typical model, such as a named entity recognition (NER) tool, would recognize the listed address as three separate entities — Grant Street, Cypress and Texas. If this data was used to geolocate, the model would send first responders not to 1280 Grant St., but into the middle of Grant Street, or even the geographical center of Texas.

    Hu says that NER tools can be trained to recognize complete location descriptions, but it would require a large dataset of accurately labeled location descriptions specific to a given local area, a labor-intensive and time-consuming process.
    “Although there’s a lack of labeled datasets, first responders have a lot of knowledge about the way locations are described in their local area, whether it be the name of a restaurant or a popular intersection,” Hu says. “So we asked ourselves: How can we quickly and efficiently infuse this geoknowledge into a machine learning model?”
    The answer was OpenAI’s Generative Pretrained Transformers, or GPT: large language models trained on billions of webpages and able to generate human-like responses. Through simple conversation and the right prompts, Hu’s team thought GPT could quickly learn to accurately interpret location data from social media posts.
    First, researchers provided GPT with 22 real tweets from Hurricane Harvey victims, which they’d already collected and labeled in a previous study. They told GPT which words in the post described a location and what kind of location it was describing, whether it be an address, street, intersection, business or landmark.
    Researchers then tested the geoknowledge-guided GPT on another 978 Hurricane Harvey tweets, and asked it to extract the location words and guess the location category by itself.
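    The geoknowledge-guided setup amounts to few-shot prompting: labeled example tweets are prepended to each new tweet before it is sent to the model. The sketch below shows the prompt-assembly step only; the example tweets, labels, categories and wording are invented for illustration and are not the paper’s actual prompts or data.

```python
# Sketch of assembling a "geoknowledge-guided" few-shot prompt.
# The example tweets, labels and instruction wording are invented for
# illustration; the UB study's actual prompts and data differ.

EXAMPLES = [
    ("Water rising fast at 1280 Grant St., Cypress, TX, need rescue!",
     [("1280 Grant St., Cypress, TX", "address")]),
    ("Family stranded near I-10 and Beltway 8, please send a boat",
     [("I-10 and Beltway 8", "intersection")]),
]

def build_prompt(tweet):
    """Assemble a few-shot prompt: labeled examples, then the new tweet."""
    lines = ["Extract full location descriptions from the tweet and "
             "classify each as address, street, intersection, business "
             "or landmark."]
    for text, labels in EXAMPLES:
        lines.append(f"Tweet: {text}")
        for span, category in labels:
            lines.append(f"Location: {span} | Category: {category}")
    # The unlabeled tweet goes last; the model completes the pattern.
    lines.append(f"Tweet: {tweet}")
    lines.append("Location:")
    return "\n".join(lines)

prompt = build_prompt("Trapped on the roof at 4th St and Main, Houston")
print(prompt)
```

    The key point is that a complete address stays intact as one labeled span in each example, which is exactly what steers the model away from the fragmented NER-style parse described earlier.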
    The results: The geoknowledge-guided GPT models were 76% better at recognizing location descriptions than GPT models not provided with geoknowledge, as well as 40% better than NER tools. The best performers were the geoknowledge-guided GPT-3 and GPT-4, with the geoknowledge-guided ChatGPT only slightly behind.
    “GPT basically combines the vast amount of text it’s already read with the specific geoknowledge examples we provided to form its answers,” Hu says. “GPT has the ability to quickly learn and quickly adapt to a problem.”
    However, the human touch, that is, providing a good prompt, is crucial. For example, GPT may not consider a stretch of highway between two specific exits as a location unless specifically prompted to do so.
    “This emphasizes the importance of us as researchers instructing GPT as accurately and comprehensively as possible so it can deliver the results that we require,” Hu says.
    Letting first responders do what they do best
    Hu’s team began their work in early 2022 with GPT-2 and GPT-3, and later included GPT-4 and ChatGPT after those models launched in late 2022 and early 2023, respectively.
    “Our method will likely be applicable to the newer GPT models that may come out in the following years,” Hu says.
    Further research will have to be done to use GPT’s extracted location descriptions to actually geolocate victims, and perhaps figure out ways to filter out irrelevant or false posts about a disaster.
    Hu hopes their efforts can simplify the use of AI technologies so that emergency managers don’t have to become AI experts themselves in order to use them, and can focus on saving lives.
    “I think a good way for humans to collaborate with AI is to let each of us focus on what we’re really good at,” Hu says. “Let AI models help us complete those more labor-intensive tasks, while we humans focus on gaining knowledge and using such knowledge to guide AI models.”
    The work was supported by the National Science Foundation.

  • Magnetization by laser pulse

    To magnetize an iron nail, one simply has to stroke its surface several times with a bar magnet. Yet there is a much more unusual method: a team led by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) discovered some time ago that a certain iron alloy can be magnetized with ultrashort laser pulses. The researchers have now teamed up with the Laserinstitut Hochschule Mittweida (LHM) to investigate this process further. They discovered that the phenomenon also occurs in a different class of materials, which significantly broadens potential application prospects. The working group presents its findings in the scientific journal Advanced Functional Materials (DOI: 10.1002/adfm.202311951).
    The unexpected discovery was made back in 2018. When the HZDR team irradiated a thin layer of an iron-aluminum alloy with ultrashort laser pulses, the non-magnetic material suddenly became magnetic. The explanation: the laser pulses rearrange the atoms in the crystal in such a way that the iron atoms move closer together, thus forming a magnet. The researchers were then able to demagnetize the layer again with a series of weaker laser pulses. This enabled them to discover a way of creating and erasing tiny “magnetic spots” on a surface.
    However, the pilot experiment still left some questions unanswered. “It was unclear whether the effect only occurs in the iron-aluminum alloy or also in other materials,” explains HZDR physicist Dr. Rantej Bali. “We also wanted to try tracking the time progression of the process.” For further investigation, he teamed up with Dr. Theo Pflug from the LHM and colleagues from the University of Zaragoza in Spain.
    Flip book with laser pulses
    The experts focused specifically on an iron-vanadium alloy. Unlike the iron-aluminum alloy with its regular crystal lattice, the atoms in the iron-vanadium alloy are arranged more chaotically, forming an amorphous, glass-like structure.
    In order to observe what happens upon laser irradiation, the physicists used a special method: the pump-probe method. “First, we irradiate the alloy with a strong laser pulse, which magnetizes the material,” explains Theo Pflug. “Simultaneously, we use a second, weaker pulse that is reflected off the material surface.” The analysis of the reflected laser pulse provides an indication of the material’s physical properties. This process is repeated several times, with the time interval between the first “pump” pulse and the subsequent “probe” pulse continually extended. The result is a time series of reflection data, which allows the researchers to characterize the processes triggered by the laser excitation. “The whole procedure is similar to generating a flip book,” says Pflug. “It is like a series of individual images that animate when viewed in quick succession.”
    Rapid melting
    The result: although it has a different atomic structure than the iron-aluminum compound, the iron-vanadium alloy can also be magnetized via laser. “In both cases, the material melts briefly at the irradiation point,” explains Rantej Bali. “This causes the laser to erase the previous structure, so that a small magnetic area is generated in both alloys.” An encouraging result: apparently, the phenomenon is not limited to a specific material structure but can be observed in diverse atomic arrangements.
    The team is also keeping track of the temporal dynamics of the process: “At least we now know on which time scales something happens,” explains Theo Pflug. “Within femtoseconds, the laser pulse excites the electrons in the material. Several picoseconds later, the excited electrons transfer their energy to the atomic nuclei.” This energy transfer causes the rearrangement into a magnetic structure, which is stabilized by the subsequent rapid cooling.
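    The pump-probe scheme described above amounts to repeating one experiment while stepping the delay between pump and probe, building up a signal-versus-delay trace one “page” at a time. The sketch below is a toy version: the exponential relaxation model and its 5 ps time constant are invented for illustration and are not the HZDR/LHM measurement.

```python
import math

# Toy pump-probe delay scan: step the delay between the strong "pump"
# pulse and the weak "probe" pulse, recording the probe signal at each
# delay. The exponential relaxation and its time constant are invented
# assumptions, not the actual HZDR/LHM data.

TAU_PS = 5.0  # assumed relaxation time of the pump-induced change, ps

def reflectivity_change(delay_ps):
    """Pump-induced change seen by a probe arriving delay_ps later."""
    if delay_ps < 0:
        return 0.0  # probe arrives before the pump: nothing happened yet
    return math.exp(-delay_ps / TAU_PS)

# Each repetition at a new delay is one "page" of the flip book.
delays = [d * 0.5 for d in range(-2, 21)]  # -1.0 ps to 10.0 ps
trace = [(d, reflectivity_change(d)) for d in delays]

for delay, signal in trace[:6]:
    print(f"{delay:5.1f} ps -> {signal:.3f}")
```

    Played back in order, the trace is the “flip book”: a sequence of snapshots that together animate how the pump-induced change rises and relaxes.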
    In follow-up experiments, the researchers aim to observe exactly how the atoms rearrange themselves by examining the magnetization process with intense X-rays.
    Sights set on applications
    Although still in the early stages, this work already provides initial ideas for possible applications: for example, placing tiny magnets on a chip surface via laser is conceivable. “This could be useful for the production of sensitive magnetic sensors, such as those used in vehicles,” speculates Rantej Bali. “It could also find possible applications in magnetic data storage.” Additionally, the phenomenon appears relevant for a new type of electronics: spintronics. Here, magnetic signals would be used for digital computing processes instead of electrons passing through transistors as usual, offering a possible approach to the computer technology of the future.

  • Polaritons open up a new lane on the semiconductor highway

    On the highway of heat transfer, thermal energy is moved by way of quantum particles called phonons. But at the nanoscale of today’s most cutting-edge semiconductors, those phonons don’t remove enough heat. That’s why Purdue University researchers are focused on opening a new nanoscale lane on the heat transfer highway by using hybrid quasiparticles called “polaritons.”
    Thomas Beechem loves heat transfer. He talks about it loud and proud, like a preacher at a big tent revival.
    “We have several ways of describing energy,” said Beechem, associate professor of mechanical engineering. “When we talk about light, we describe it in terms of particles called ‘photons.’ Heat also carries energy in predictable ways, and we describe those waves of energy as ‘phonons.’ But sometimes depending on the material, photons and phonons will come together and make something new called a ‘polariton.’ It carries energy in its own way, distinct from both photons or phonons.”
    Like photons and phonons, polaritons aren’t physical particles you can see or capture. They are more like ways of describing energy exchange as if they were particles.
    Still fuzzy? How about another analogy. “Phonons are like internal combustion vehicles, and photons are like electric vehicles,” Beechem said. “Polaritons are a Toyota Prius. They are a hybrid of light and heat, and retain some of the properties of both. But they are their own special thing.”
    Polaritons have been used in optical applications — everything from stained glass to home health tests. But their ability to move heat has largely been ignored, because their impact becomes significant only when the size of materials becomes very small. “We know that phonons do a majority of the work of transferring heat,” said Jacob Minyard, a Ph.D. student in Beechem’s lab. “The effect of polaritons is only observable at the nanoscale. But we’ve never needed to address heat transfer at that level until now, because of semiconductors.”
    “Semiconductors have become so incredibly small and complex,” he continued. “People who design and build these chips are discovering that phonons don’t efficiently disperse heat at these very small scales. Our paper demonstrates that at those length scales, polaritons can contribute a larger share of thermal conductivity.”
    Their research on polaritons has been selected as a Featured Article in the Journal of Applied Physics.

    “We in the heat transfer community have been very material-specific in describing the effect of polaritons,” said Beechem. “Someone will observe it in this material or at that interface. It’s all very disparate. Jacob’s paper has established that this isn’t some random thing. Polaritons begin to dominate the heat transfer on any surface thinner than 10 nanometers. That’s twice as big as the transistors on an iPhone 15.”
    Now Beechem gets really fired up. “We’ve basically opened up a whole extra lane on the highway. And the smaller the scales get, the more important this extra lane becomes. As semiconductors continue to shrink, we need to think about designing the traffic flow to take advantage of both lanes: phonons and polaritons.”
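    The size-dependent trade-off can be illustrated with a toy model: boundary scattering suppresses the phonon channel as a film thins, while a surface-bound polariton channel grows with the surface-to-volume ratio. The functional forms and every constant below are invented for illustration; they are not the Purdue group’s model.

```python
# Toy model of film-thickness-dependent heat conduction. The functional
# forms and all constants are invented for illustration only; they are
# not taken from the Purdue paper.

K_BULK = 150.0          # assumed bulk phonon conductivity, W/(m*K)
MFP_NM = 100.0          # assumed phonon mean free path, nm
POLARITON_COEFF = 50.0  # assumed polariton channel strength, W*nm/(m*K)

def k_phonon(thickness_nm):
    """Phonon conductivity suppressed by boundary scattering (toy form)."""
    return K_BULK * thickness_nm / (thickness_nm + MFP_NM)

def k_polariton(thickness_nm):
    """Surface-polariton channel growing with surface-to-volume ratio."""
    return POLARITON_COEFF / thickness_nm

for d in (5, 10, 50, 200):
    ph, po = k_phonon(d), k_polariton(d)
    share = po / (ph + po)
    print(f"{d:4d} nm: phonon {ph:6.1f}, polariton {po:5.1f}, "
          f"polariton share {share:.0%}")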
    Minyard’s paper just scratches the surface of how this can happen practically. The complexity of semiconductors means that there are many opportunities to capitalize upon polariton-friendly designs. “There are many materials involved in chipmaking, from the silicon itself to the dielectrics and metals,” Minyard said. “The way forward for our research is to understand how these materials can be used to conduct heat more efficiently, recognizing that polaritons provide a whole new lane to move energy.”
    Recognizing this, Beechem and Minyard want to show chip manufacturers how to incorporate these polariton-based nanoscale heat transfer principles right into the physical design of the chip — from the physical materials involved, to the shape and thickness of the layers.
    While this work is theoretical now, physical experimentation is very much on the horizon — which is why Beechem and Minyard are happy to be at Purdue.
    “The heat transfer community here at Purdue is so robust,” Beechem said. “We can literally go upstairs and talk to Xianfan Xu, who had one of the first experimental realizations of this effect. Then we can walk over to Flex Lab and ask Xiulin Ruan about his pioneering work in phonon scattering. And we have the facilities here at Birck Nanotechnology Center to build nanoscale experiments, and use one-of-a-kind measurement tools to confirm our findings. It’s really a researcher’s dream.” More

  • Soundwaves harden 3D-printed treatments in deep tissues

    Engineers at Duke University and Harvard Medical School have developed a bio-compatible ink that solidifies into different 3D shapes and structures by absorbing ultrasound waves. Because it responds to sound waves rather than light, the ink can be used in deep tissues for biomedical purposes ranging from bone healing to heart valve repair.
    This work appears on December 7 in the journal Science.
    The uses for 3D-printing tools are ever increasing. Printers create prototypes of medical devices, design flexible, lightweight electronics, and even engineer tissues used in wound healing. But many of these printing techniques involve building the object point-by-point in a slow and arduous process that often requires a robust printing platform.
    To circumvent these issues, researchers have over the past several years developed a photo-sensitive ink that responds directly to targeted beams of light and quickly hardens into a desired structure. While this printing technique can substantially improve the speed and quality of a print, researchers can only use transparent inks, and biomedical uses are limited because light can’t reach more than a few millimeters deep into tissue.
    Now, Y. Shrike Zhang, associate bioengineer at Brigham and Women’s Hospital and associate professor at Harvard Medical School, and Junjie Yao, associate professor of biomedical engineering at Duke, have developed a new printing method called deep-penetrating acoustic volumetric printing, or DVAP, that resolves these problems. This new technique involves a specialized ink that reacts to soundwaves rather than light, enabling them to create biomedically useful structures at unprecedented tissue depths.
    “DVAP relies on the sonothermal effect, which occurs when soundwaves are absorbed and increase the temperature to harden our ink,” explained Yao, who designed the ultrasound printing technology for DVAP. “Ultrasound waves can penetrate more than 100 times deeper than light while still spatially confined, so we can reach tissues, bones and organs with high spatial precision that haven’t been reachable with light-based printing methods.”
    The first component of DVAP involves a sonicated ink, called sono-ink, that is a combination of hydrogels, microparticles and molecules designed to specifically react to ultrasound waves. Once the sono-ink is delivered into the target area, a specialized ultrasound printing probe sends focused ultrasound waves into the ink, hardening portions of it into intricate structures. These structures can range from a hexagonal scaffold that mimics the hardness of bone to a bubble of hydrogel that can be placed on an organ.
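    The sonothermal step can be pictured as a thresholding process: focused ultrasound deposits heat in a tight focal spot, and only ink close enough to the focus gets hot enough to crosslink. The Gaussian heating profile and all numbers below are invented for illustration; they are not DVAP’s actual physics or parameters.

```python
import math

# Toy picture of sonothermal printing: focused ultrasound deposits heat
# in a Gaussian focal spot, and ink "hardens" wherever the temperature
# rise crosses a threshold. The profile and numbers are invented
# assumptions, not the DVAP system's parameters.

FOCUS_MM = 1.0      # assumed focal spot width
PEAK_RISE_C = 30.0  # assumed peak temperature rise at the focus
HARDEN_AT_C = 15.0  # assumed hardening threshold for the sono-ink

def temp_rise(dist_mm):
    """Temperature rise at a given distance from the focal point."""
    return PEAK_RISE_C * math.exp(-(dist_mm / FOCUS_MM) ** 2)

# Only voxels near the focus exceed the threshold, so moving the probe
# traces out a solidified shape voxel by voxel.
distances = [d * 0.2 for d in range(10)]  # 0.0 to 1.8 mm from focus
hardened = [d for d in distances if temp_rise(d) >= HARDEN_AT_C]
print(hardened)
```

    The confinement of the hardened region around the focus is what gives the technique its spatial precision: scanning the probe moves the focal spot, and with it the solidified voxel.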

    “The ink itself is a viscous liquid, so it can be injected into a targeted area fairly easily, and as you move the ultrasound printing probe around, the materials in the ink will link together and harden,” said Zhang, who designed the sono-ink in his lab at the Brigham. “Once it’s done, you can remove any remaining ink that isn’t solidified via a syringe.”
    The different components of the sono-ink enable the researchers to adjust the formula for a wide variety of uses. For example, if they want to create a scaffold to help heal a broken bone or make up for bone loss, they can add bone mineral particles to the ink. This flexibility also allows them to engineer the hardened formula to be more durable or more degradable, depending on its use. They can even adjust the colors of their final print.
    The team conducted three tests as a proof-of-concept of their new technique. The first involved using the ink to seal off a section in a goat’s heart. When a human has nonvalvular atrial fibrillation, the heart won’t beat correctly, causing blood to pool in the organ. Traditional treatment often requires open-chest surgery to seal off the left atrial appendage to reduce the risk of blood clots and heart attack.
    Instead, the team used a catheter to deliver their sono-ink to the left atrial appendage in a goat heart that was placed in a printing chamber. The ultrasound probe then delivered focused ultrasound waves through 12 mm of tissue, hardening the ink without damaging any of the surrounding organ. Once the process was complete, the ink was safely bonded to the heart tissue and was flexible enough to withstand movements that mimicked the heart beating.
    Next, the team tested the potential for DVAP’s use for tissue reconstruction and regeneration. After creating a bone defect model using a chicken leg, the team injected the sono-ink and hardened it through 10 mm of sample skin and muscle tissue layers. The resulting material bonded seamlessly to the bone and didn’t negatively impact any of the surrounding tissues.
    Finally, Yao and Zhang showed that DVAP could also be used for therapeutic drug delivery. In their example, they added a common chemotherapy drug to their ink, which they delivered to sample liver tissue. Using their probe, they hardened the sono-ink into hydrogels that slowly release the chemotherapy and diffuse into the liver tissue.
    “We’re still far from bringing this tool into the clinic, but these tests reaffirmed the potential of this technology,” said Zhang. “We’re very excited to see where it can go from here.”
    “Because we can print through tissue, it allows for a lot of potential applications in surgery and therapy that traditionally involve very invasive and disruptive methods,” said Yao. “This work opens up an exciting new avenue in the 3D printing world, and we’re excited to explore the potential of this tool together.”

  • Catalyst for electronically controlled C–H functionalization

    The Chirik Group at the Princeton Department of Chemistry is chipping away at one of the great challenges of metal-catalyzed C-H functionalization with a new method that uses a cobalt catalyst to differentiate between bonds in fluoroarenes, functionalizing them based on their intrinsic electronic properties.
    In a paper published this week in Science, researchers show they are able to bypass the need for steric control and directing groups to induce cobalt-catalyzed borylation that is meta-selective.
    The lab’s research showcases an innovative approach driven by deep insights into organometallic chemistry that have been at the heart of its mission for over a decade. In this case, the Chirik Lab drilled down into how transition metals break C-H bonds, uncovering a method that could have vast implications for the synthesis of medicines, natural products, and materials.
    And their method is fast — comparable in speed to those that rely on iridium.
    The research is outlined in “Kinetic and Thermodynamic Control of C(sp2)-H Activation Enable Site-Selective Borylation,” by lead author Jose Roque, a former postdoc in the Chirik Group; postdoc Alex Shimozono; P.I. Paul Chirik, the Edwards S. Sanford Professor of Chemistry; and former lab members Tyler Pabst, Gabriele Hierlmeier, and Paul Peterson.
    ‘Really fast, really selective’
    “Chemists have been saying for decades, let’s turn synthetic chemistry on its head and make the C-H bond a reactive part of the molecule. That would be incredibly important for drug discovery for the pharmaceutical industry, or for making materials,” said Chirik.

    “One of the ways we do this is called C-H borylation, in which you turn the C-H bond into something else, into a carbon-boron bond. Turning C-H to C-B is a gateway to great chemistry.”
    Benzene rings are highly represented motifs in medicinal chemistry. However, chemists rely on traditional approaches to functionalize them. The Chirik Group develops new methods that access less-explored routes.
    “Imagine you have a benzene ring and it has one substituent on it,” Chirik added. “The site next to it is called ortho, the one next to that is called meta, and the one opposite is called para. The meta C-H bond is the hardest one to do selectively. That’s what Jose has done here with a cobalt catalyst, and no one’s done it before.
    “He’s made a cobalt catalyst that is really fast and really selective.”
    Roque, now an assistant professor in Princeton’s Department of Chemistry, said rational design was at the heart of their solution.
    “We started to get a glimpse of the high activity for C-H activation early during our stoichiometric studies,” said Roque. “The catalyst was rapidly activating the C-H bonds of aromatic solvents at room temperature. In order to isolate the catalyst, we had to avoid handling the catalyst in aromatic solvents,” he added. “We designed an electronically rich but sterically accessible pincer ligand that we posited — based on some previous insights from our lab as well as some fundamental organometallic principles — would lead to a more active catalyst.

    “And it has.”
    Chirik Lab Target Since 2014
    State-of-the-art borylation uses iridium as a catalyst for sterically driven C-H functionalization. It is highly reactive, and it is fast. But if you have a molecule with many C-H bonds, iridium catalysts fail to selectively functionalize the desired bond.
    As a result, pharmaceutical companies have appealed for an alternative with more selectivity. And they’ve sought it among first-row transition metals like cobalt and iron, which are less expensive and more sustainable than iridium.
    Since their first paper on C-H borylation in 2014, the Chirik Lab has articulated the concept of electronically controlled C-H activation as one answer to this challenge. Their idea is to differentiate between C-H bonds based on electronic properties in order to functionalize them. These properties are reflected in the metal-carbon bond strength. With the catalyst designed in this research, chemists can hit the selected bond and only the selected bond by tapping into these disparate strengths.
    But they uncovered another result that makes their method advantageous: the site selectivity can be switched by exploiting the kinetic or thermodynamic preferences of C-H activation. This selectivity switch can be accomplished by choosing one reagent over another, a process that is as streamlined as it is cost-effective.
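    The reagent-controlled switch can be summarized as a simple lookup: the same cobalt catalyst gives different site selectivity depending on the boron source (B2Pin2 for meta, HBPin for ortho, per the findings reported later in this story). The sketch below is only a restatement of that mapping in code, with a shorthand catalyst label; it is not a chemical model.

```python
# Toy lookup restating the reagent-controlled selectivity switch the
# article describes: the same cobalt pincer catalyst gives meta
# selectivity with B2Pin2 and ortho selectivity with HBPin. The labels
# and data structure are illustrative shorthand, not a chemistry model.

SELECTIVITY = {
    ("Co pincer catalyst", "B2Pin2"): "meta",
    ("Co pincer catalyst", "HBPin"): "ortho",
}

def predicted_site(catalyst, boron_source):
    """Return the favored borylation site, if the pairing is known."""
    return SELECTIVITY.get((catalyst, boron_source), "unknown")

print(predicted_site("Co pincer catalyst", "B2Pin2"))
print(predicted_site("Co pincer catalyst", "HBPin"))
```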
    “Site-selective meta-to-fluorine functionalization was a huge challenge. We made some great progress toward that with this research and expanded the chemistry to include other substrate classes beyond fluoroarenes,” said Roque. “But as a function of studying first-row metals, we also found out, hey, we can switch the selectivity.”
    Added Chirik: “To me, this is a huge concept in C-H functionalization. Now we can look at metal-carbon bond strengths and predict where things are going to go. This opens a whole new opportunity. We’re going to be able to do things that iridium doesn’t do.”
    Shimozono came to the project late in the game, after Roque had already discovered the pivotal catalyst. His role will deepen in the coming months as he seeks new advances in borylation.
    “Jose’s catalyst is groundbreaking. Usually, a completely different catalyst is required in order to change site-selectivity,” said Shimozono. “Counter to this dogma, Jose demonstrated that using B2Pin2 as the boron source affords meta selective chemistry, while using HBPin as the boron source gives ortho selective borylation using the same iPrACNCCo catalyst.
    “In general, the more methods we have to install groups in specific sites in molecules, the better. This gives pharmaceutical chemists more tools to make and discover medications more efficiently.”

  • in

    How a failure to understand race leads to flawed health tech

    A new study focused on wearable health monitors underscores an entrenched problem in the development of new health technologies — namely, that a failure to understand race means the way these devices are developed and tested can exacerbate existing racial health inequities.
    “This is a case study that focuses on one specific health monitoring technology, but it really highlights the fact that racial bias is baked into the design of many of these technologies,” says Vanessa Volpe, co-author of the study and an associate professor of psychology at North Carolina State University.
    “The way that we understand race, and the way that we put that understanding into action when developing and using health technologies, is deeply flawed,” says Beza Merid, corresponding author of the study and an assistant professor of science, technology, innovation and racial justice at Arizona State University.
    “Basically, the design of health technologies that purport to provide equitable solutions to racial health disparities often defines race as a biological trait, when it’s actually a social construct,” Merid says. “And the end result of this misunderstanding is that we have health technologies that contribute to health inequities rather than reducing them.”
    To explore issues related to the way the development and testing of health tech can reinforce racism, the researchers focused specifically on photoplethysmographic (PPG) sensors, which are widely used in consumer devices such as Fitbits and Apple watches. PPG sensors are used in wearable technologies to measure biological signals, such as heart rate, by sending a signal of light through the skin and collecting data from the way in which the light is reflected back to the device.
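    As a toy illustration of the signal chain involved (a sketch with invented numbers, not how any commercial device actually works), the Python snippet below estimates heart rate by counting peaks in a synthetic PPG waveform, and shows how a fixed detection threshold silently fails once the optical signal is attenuated, as happens when green light is absorbed more strongly before reaching the sensor.

```python
import math

# Toy PPG pipeline: estimate heart rate by counting peaks in a sampled
# waveform, and show how a fixed detection threshold fails when the optical
# signal is attenuated. Illustrative sketch only; all parameters are invented.

FS = 50           # sampling rate, Hz
DURATION = 10     # seconds of signal
THRESHOLD = 0.5   # fixed peak-detection threshold (the flawed assumption)

def synth_ppg(bpm, amplitude):
    """Clean synthetic pulse waveform at a given heart rate and amplitude."""
    f = bpm / 60.0
    return [amplitude * math.sin(2 * math.pi * f * i / FS)
            for i in range(FS * DURATION)]

def estimate_bpm(signal):
    """Count local maxima above the fixed threshold; convert to beats/min."""
    peaks = sum(
        1 for i in range(1, len(signal) - 1)
        if signal[i] > THRESHOLD and signal[i - 1] < signal[i] > signal[i + 1]
    )
    return peaks * 60.0 / DURATION

print(estimate_bpm(synth_ppg(72, amplitude=1.0)))  # 72.0: strong signal, correct
print(estimate_bpm(synth_ppg(72, amplitude=0.3)))  # 0.0: attenuated signal, every beat missed
```

    Real devices use far more sophisticated processing, but the sketch captures the study's core point: a design choice validated only on strong signals can fail completely for users whose signals are attenuated.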
    For the study, the researchers drew on data from clinical validation studies for a wearable health monitoring device that relied on PPG sensors. They also used data from studies investigating how skin tone affects the accuracy of PPG “green light” sensors in the context of health monitoring. Lastly, they examined wearable device specifications and user manuals, as well as data from a lawsuit filed against a health technology manufacturer over the accuracy of technologies that relied on PPG sensors.
    “Essentially, we synthesized and interpreted data from each of these sources to take a critical look at racial bias in the development and testing of PPG sensors and their outputs, to see if they matched guidelines for responsible innovation,” Volpe says.

    “These studies identified challenges with PPG sensors for people with darker skin tones,” says Merid. “We drew on scholarship exploring how innovative technologies can reproduce racial health inequities to dig more deeply into how and why these challenges exist. Our own expertise in responsible innovation and structural racism in technology guided our approach. If people are developing technologies with the goal of reducing harm to people’s health, how and why do these technologies end up with flaws that can exacerbate that harm?”
    The findings suggest there are significant challenges when it comes to “race correction” in health technologies.
    “Race correction” is a broad term that applies not only to technologies but also to the correction or adjustment of health risk scores used to make decisions about the relative risk of disease and the allocation of health care resources.
    “Race correction assumes that we can develop technologies or health risk scoring algorithms to first quantify and then ‘remove’ the effect of biological race from the equation,” says Merid. “But doing so assumes race is a biological difference that needs to be corrected for to achieve equitable health for all. This prevents us from treating the real thing that needs to be corrected — the system of racism itself (e.g., differential treatment and access to health care, systematic socioeconomic disenfranchisement).”
    “For example, many — if not most — health technologies that use PPG sensors claim to be designed for use by everyone,” Volpe says. “But in reality those technologies are less accurate for people with darker skin tones. We argue that the systematic exclusion and erasure of those with darker skin tones in the development and testing of wearable technologies that are supposed to democratize and improve health for all can be a less visible form of race correction. In other words, the development process itself reflects the system of racism. The end result is a technological ‘solution’ that fails to deliver equity and is instead characteristic of the very system that created the problem.
    “Race corrections assume that we have to make adjustments based on race as a biological construct,” Volpe says. “But we should be adjusting racism as a system so that the technologies developed work and are responsible and equitable for everyone — in both their development and their consequences.”
    “Innovation can introduce unintended consequences,” Merid says. “Rather than coming up with a solution, you can potentially just introduce a new suite of problems. This is a longstanding challenge for trying to develop technological solutions to social problems.
    “Hopefully, this work contributes to our understanding of the ways that race correction is problematic,” says Merid. “We also hope that this work advances the idea that assumptions about race in the health field are deeply problematic, whether we’re talking about health technology, diagnoses or access to care. Lastly, we need to be mindful about the ways in which emerging health technologies can be harmful.”

  • in

    Bowtie resonators that build themselves bridge the gap between nanoscopic and macroscopic

    A central goal in quantum optics and photonics is to increase the strength of the interaction between light and matter to produce, e.g., better photodetectors or quantum light sources. The best way to do that is to use optical resonators that store light for a long time, making it interact more strongly with matter. If the resonator is also very small, such that light is squeezed into a tiny region of space, the interaction is enhanced even further. The ideal resonator would store light for a long time in a region at the size of a single atom.
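    The enhancement described here is commonly quantified by the Purcell factor, which scales as Q/V: storage time (quality factor) over mode volume. A back-of-the-envelope sketch follows; the Q and mode-volume numbers are illustrative, not taken from the paper.

```python
import math

# Back-of-the-envelope Purcell factor,
#   F_P = (3 / (4 * pi^2)) * Q / (V / (lambda/n)^3),
# which quantifies how strongly a cavity enhances light-matter interaction.
# Illustrative numbers only.

def purcell_factor(q, mode_volume_norm):
    """q: quality factor; mode_volume_norm: mode volume in units of (lambda/n)^3."""
    return (3.0 / (4.0 * math.pi ** 2)) * q / mode_volume_norm

# A long-lived cavity (high Q) with light squeezed into a tiny volume
# (small V) gives a large enhancement; shrinking V toward atomic-scale
# confinement is exactly what this line of work targets.
print(purcell_factor(q=1e5, mode_volume_norm=1.0))   # ~7.6e3
print(purcell_factor(q=1e5, mode_volume_norm=0.01))  # ~7.6e5: 100x smaller V, 100x stronger
```
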
    Physicists and engineers have struggled for decades with how small optical resonators can be made without making them very lossy, which is equivalent to asking how small you can make a semiconductor device. The semiconductor industry’s roadmap for the next 15 years predicts that the smallest possible width of a semiconductor structure will be no less than 8 nm, which is several tens of atoms wide.
    Associate Professor Søren Stobbe and his colleagues at DTU Electro demonstrated 8 nm cavities last year; now they propose and demonstrate a novel approach to fabricating a self-assembled cavity with an air void at the scale of a few atoms. Their paper detailing the results, ‘Self-assembled photonic cavities with atomic-scale confinement’, is published today in Nature.
    To briefly explain the experiment: two halves of a silicon structure are suspended on springs, although in the first step the silicon device is firmly attached to a layer of glass. The devices are made with conventional semiconductor technology, so the two halves sit a few tens of nanometers apart. When the glass is selectively etched away, the structure is released and held only by the springs, and because the two halves are fabricated so close to each other, they attract each other through surface forces. Careful engineering of the silicon structures then yields a self-assembled resonator with bowtie-shaped gaps at the atomic scale, surrounded by silicon mirrors.
    “We are far from a circuit that builds itself completely. But we have succeeded in converging two approaches that have been travelling along parallel tracks so far. And it allowed us to build a silicon resonator with unprecedented miniaturization,” says Søren Stobbe.
    Two separate approaches
    One approach — the top-down approach — is behind the spectacular development we have seen in silicon-based semiconductor technologies. Here, crudely put, you start from a silicon block and carve nanostructures out of it. The other approach — the bottom-up approach — tries to make a nanotechnological system assemble itself, mimicking biological systems such as plants or animals that are built through biological or chemical processes. These two approaches are at the very core of what defines nanotechnology. The problem is that, so far, they have been disconnected: semiconductor fabrication is scalable but cannot reach the atomic scale, while self-assembled structures have long operated at atomic scales but offer no architecture for interconnects to the external world.

    “The interesting thing would be if we could produce an electronic circuit that built itself — just like what happens with humans as they grow but with inorganic semiconductor materials. That would be true hierarchical self-assembly. We use the new self-assembly concept for photonic resonators, which may be used in electronics, nanorobotics, sensors, quantum technologies, and much more. Then, we would really be able to harvest the full potential of nanotechnology. The research community is many breakthroughs away from realizing that vision, but I hope we have taken the first steps,” says Guillermo Arregui, who co-supervised the project.
    Approaches converging
    Supposing a combination of the two approaches is possible, the team at DTU Electro set out to create nanostructures that surpass the limits of conventional lithography and etching despite using nothing more than conventional lithography and etching. Their idea was to use two surface forces, namely the Casimir force for attracting the two halves and the van der Waals force for making them stick together. These two forces are rooted in the same underlying effect: quantum fluctuations (see Fact box).
    The researchers made photonic cavities that confine photons to air gaps so small that determining their exact size was impossible, even with a transmission electron microscope. The smallest they built are just 1-3 silicon atoms across.
    “Even if the self-assembly takes care of reaching these extreme dimensions, the requirements for the nanofabrication are no less extreme. For example, structural imperfections are typically on the scale of several nanometers. Still, if there are defects at this scale, the two halves will only meet and touch at the three largest defects. We are really pushing the limits here, even though we make our devices in one of the very best university cleanrooms in the world,” says Ali Nawaz Babar, a PhD student at the NanoPhoton Center of Excellence at DTU Electro and first author of the new paper.
    “The advantage of self-assembly is that you can make tiny things. You can build unique materials with amazing properties. But today, you can’t use it for anything you plug into a power outlet. You can’t connect it to the rest of the world. So, you need all the usual semiconductor technology for making the wires or waveguides to connect whatever you have self-assembled to the external world.”
    Robust and accurate self-assembly

    The paper shows a possible way to link the two nanotechnology approaches by employing a new generation of fabrication technology that combines the atomic dimensions enabled by self-assembly with the scalability of semiconductors fabricated with conventional methods.
    “We don’t have to go in and find these cavities afterwards and insert them into another chip architecture. That would also be impossible because of the tiny size. In other words, we are building something on the scale of an atom already inserted in a macroscopic circuit. We are very excited about this new line of research, and plenty of work is ahead,” says Søren Stobbe.
    Surface forces
    There are four known fundamental forces: Gravitational, electromagnetic, and strong and weak nuclear forces. Besides the forces due to static configurations, e.g., the attractive electromagnetic force between positively and negatively charged particles, there can also be forces due to fluctuations. Such fluctuations may be either thermal or quantum in origin, and they give rise to surface forces such as the van der Waals force and the Casimir force which act at different length scales but are rooted in the same underlying physics. Other mechanisms, such as electrostatic surface charges, can add to the net surface force. For example, geckos exploit surface forces to cling to walls and ceilings.
    How it was done
    The paper details three experiments that the researchers carried out in the labs at DTU. First, they fabricated no fewer than 2,688 devices across two microchips, each containing a platform that would either collapse onto a nearby silicon wall or not, depending on the facing surface area, the spring constant, and the distance between platform and wall. This allowed the researchers to map which parameters would — and would not — lead to deterministic self-assembly; only 11 devices failed due to fabrication errors or other defects, a remarkably low number for a novel self-assembly process. Second, they made self-assembled optical resonators whose optical properties were verified experimentally and whose atomic scale was confirmed by transmission electron microscopy. Third, the self-assembled cavities were embedded in a larger architecture of self-assembled waveguides, springs, and photonic couplers, producing the surrounding microchip circuitry in the same process.
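    The collapse-or-not behavior can be caricatured with the idealized parallel-plate Casimir formula. The sketch below asks whether a Hookean spring can ever balance the Casimir attraction before the platform touches the wall; all device parameters are invented for illustration, and real devices also involve van der Waals forces, electrostatics, and fabrication defects.

```python
import math

# Toy pull-in check: will a suspended platform collapse onto a nearby wall?
# Compare the idealized parallel-plate Casimir attraction,
#   F_C(d) = pi^2 * hbar * c * A / (240 * d^4),
# with the spring restoring force k * (d0 - d). If no gap d exists at which
# the spring balances the attraction, the platform collapses.
# All device parameters are invented for illustration.

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def casimir_force(area, gap):
    """Attractive Casimir force between ideal parallel plates, in newtons."""
    return math.pi ** 2 * HBAR * C * area / (240.0 * gap ** 4)

def collapses(k, d0, area, steps=10000):
    """True if the spring can never balance the attraction before contact."""
    for i in range(1, steps):
        d = d0 * i / steps  # sweep candidate gaps between 0 and d0
        if k * (d0 - d) >= casimir_force(area, d):
            return False    # a balancing gap exists: stays suspended
    return True

AREA = 1e-10  # 10 um x 10 um facing area (invented)
D0 = 30e-9    # 30 nm as-fabricated gap (invented)

print(collapses(k=0.1, d0=D0, area=AREA))    # True: soft springs, platform snaps shut
print(collapses(k=100.0, d0=D0, area=AREA))  # False: stiff springs hold it off the wall
```

    Sweeping spring constant, gap, and area in such a model mirrors, in spirit, the parameter map the researchers built from their 2,688 physical devices.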