More stories

  • North Korea and beyond: AI-powered satellite analysis reveals the unseen economic landscape of underdeveloped nations

    The United Nations reports that more than 700 million people are in extreme poverty, earning less than two dollars a day. However, an accurate assessment of poverty remains a global challenge. For example, 53 countries have not conducted agricultural surveys in the past 15 years, and 17 countries have not published a population census. To fill this data gap, new technologies are being explored to estimate poverty using alternative sources such as street views, aerial photos, and satellite images.
    The paper, published in Nature Communications, demonstrates how artificial intelligence (AI) can help analyze economic conditions from daytime satellite imagery. The technology can be applied even to the least developed countries — such as North Korea — that lack the reliable statistical data typically needed to train machine learning models.
    The researchers used publicly available Sentinel-2 satellite images from the European Space Agency (ESA). They split these images into small six-square-kilometer grid cells; at this zoom level, visual information such as buildings, roads, and greenery can be used to quantify economic indicators. As a result, the team obtained the first-ever fine-grained economic map of regions like North Korea. The same algorithm was also applied to other underdeveloped countries in Asia, including Nepal, Laos, Myanmar, Bangladesh, and Cambodia.
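    For readers who want a concrete picture of the tiling step, a minimal sketch is shown below; the tile size, array shapes, and helper function are illustrative assumptions rather than details taken from the paper (at Sentinel-2's 10 m resolution, a roughly six-square-kilometer square cell works out to about 245 x 245 pixels).

    ```python
    import numpy as np

    def tile_image(image: np.ndarray, tile_px: int):
        """Split a satellite scene of shape (H, W, bands) into square, non-overlapping tiles."""
        h, w = image.shape[:2]
        tiles, offsets = [], []
        for row in range(0, h - tile_px + 1, tile_px):
            for col in range(0, w - tile_px + 1, tile_px):
                tiles.append(image[row:row + tile_px, col:col + tile_px])
                offsets.append((row, col))  # kept so per-tile scores can be mapped back onto the map
        return np.stack(tiles), offsets

    # Example: a placeholder 3-band scene tiled into ~6 km^2 cells (245 px at 10 m resolution).
    scene = np.zeros((2450, 2450, 3), dtype=np.uint8)
    tiles, offsets = tile_image(scene, tile_px=245)
    print(tiles.shape)  # (100, 245, 245, 3)
    ```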
    The key feature of the research model is the “human-machine collaborative approach,” which lets researchers combine human input with AI predictions for areas with scarce data. In this study, ten human experts compared satellite images and judged the economic conditions of each area; the AI then learned from these human judgments and assigned an economic score to each image. The results showed that the human-AI collaborative approach outperformed machine-only learning algorithms.
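    The authors' architecture is not described in this summary, but the core idea of turning human comparisons into a scalar score can be sketched with a generic pairwise-ranking setup; the small network, margin, and dummy data below are illustrative assumptions, not the model from the paper.

    ```python
    import torch
    import torch.nn as nn

    class ScoreNet(nn.Module):
        """Assigns each image tile a scalar 'economic score'."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(32, 1)

        def forward(self, x):
            return self.head(self.features(x)).squeeze(-1)

    model = ScoreNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.MarginRankingLoss(margin=0.5)

    # Dummy batch: tile_a was judged more economically developed than tile_b by the annotators.
    tile_a, tile_b = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
    target = torch.ones(8)  # +1 means "tile_a should score higher than tile_b"

    optimizer.zero_grad()
    loss = criterion(model(tile_a), model(tile_b), target)
    loss.backward()   # pushes the judged-higher tile's score above the other by the margin
    optimizer.step()
    ```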
    The research was led by an interdisciplinary team of computer scientists, economists, and a geographer from KAIST & IBS (Donghyun Ahn, Meeyoung Cha, Jihee Kim), Sogang University (Hyunjoo Yang), HKUST (Sangyoon Park), and NUS (Jeasurk Yang). Dr Charles Axelsson, Associate Editor at Nature Communications, handled this paper during the peer review process at the journal.
    The research team found that the scores showed a strong correlation with traditional socio-economic metrics such as population density, employment, and number of businesses. This demonstrates the wide applicability and scalability of the approach, particularly in data-scarce countries. Furthermore, the model’s strength lies in its ability to detect annual changes in economic conditions at a more detailed geospatial level without using any survey data.
    This model would be especially valuable for rapidly monitoring the progress of Sustainable Development Goals such as reducing poverty and promoting more equitable and sustainable growth on an international scale. The model can also be adapted to measure various social and environmental indicators. For example, it can be trained to identify regions with high vulnerability to climate change and disasters to provide timely guidance on disaster relief efforts.

    As an example, the researchers explored how North Korea changed before and after the United Nations sanctions against the country. By applying the model to satellite images of North Korea both in 2016 and in 2019, the researchers discovered three key trends in the country’s economic development between 2016 and 2019. First, economic growth in North Korea became more concentrated in Pyongyang and major cities, exacerbating the urban-rural divide. Second, satellite imagery revealed significant changes in areas designated for tourism and economic development, such as new building construction and other meaningful alterations. Third, traditional industrial and export development zones showed relatively minor changes.
    Meeyoung Cha, a data scientist on the team, explained: “This is an important interdisciplinary effort to address global challenges like poverty. We plan to apply our AI algorithm to other international issues, such as monitoring carbon emissions, disaster damage detection, and the impact of climate change.”
    An economist on the research team, Jihee Kim, commented that this approach would enable detailed examinations of economic conditions in the developing world at a low cost, reducing data disparities between developed and developing nations. She further emphasized that this is essential because many public policies require economic measurements to achieve their goals, whether those goals concern growth, equality, or sustainability.
    The research team has made the source code publicly available via GitHub and plans to continue improving the technology, applying it to new satellite images updated annually. The results of this study, with Ph.D. candidate Donghyun Ahn at KAIST and Ph.D. candidate Jeasurk Yang at NUS as joint first authors, were published in Nature Communications under the title “A human-machine collaborative approach measures economic development using satellite imagery.”

  • New HS curriculum teaches color chemistry and AI simultaneously

    North Carolina State University researchers have developed a weeklong high school curriculum that helps students quickly grasp concepts in both color chemistry and artificial intelligence — while sparking their curiosity about science and the world around them.
    To test whether a short high school science module could effectively teach students something about both chemistry — a notoriously thorny subject — and artificial intelligence (AI), the researchers designed a relatively simple experiment involving pH levels, which reflect the acidity or alkalinity of a liquid solution.
    When testing pH with a test strip, color conversion charts provide a handy reference: strongly acidic solutions turn test strips red, shading to yellow and green as acidity weakens, while strongly alkaline liquids turn strips deep purple, shading to blue and dark green as alkalinity declines. The pH scale runs from 0 to 14, with 7 being neutral (about the level of the tap water in your home); lower numbers reflect greater acidity and higher numbers greater alkalinity.
    “We wanted to answer the question: ‘Can we use machine learning to more accurately read pH strips than visually?’” said Yang Zhang, assistant professor of textile engineering, chemistry and science and a co-corresponding author of a paper describing the work. “It turns out that the student-trained AI predictive model was about 5.5 times more precise than visual interpretations.”
    The students used their cellphone cameras to take pictures of pH test strips after wetting them in a variety of everyday liquids — beverages, pond or lake water, cosmetics and the like — and predicted their pH values visually. They also received test strips from the instructors with known pH levels taken with sophisticated instrumentation and predicted those visually.
    “We wanted students to think about the real-world implications of this type of testing, for example in underdeveloped places where drinking water might be an issue,” Zhang said. “You might not have a sophisticated instrument, but you really want to know if the pH level is less than 5 versus a 7.”
    Students entered data into Orange, a free machine learning tool that requires no coding, making it easy for novices to work with. They trained models to predict pH values from the test-strip images, with accuracy improving as the machine learning learned to associate subtle changes in test-strip color with the corresponding pH values. Students then compared their machine learning pH predictions with their visual predictions and found that the AI predictions, though not perfect, were much closer to the true pH values than their visual estimates.
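    Orange is a drag-and-drop tool, so the students never had to write code; for readers who prefer to see an equivalent workflow in code, the rough scikit-learn sketch below uses made-up data, and the features, model choice and values are assumptions rather than the study's actual pipeline.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    # Made-up training data: the mean RGB color of the wetted pad in each photo,
    # paired with the reference pH measured by laboratory instrumentation.
    rgb = np.random.randint(0, 256, size=(120, 3)).astype(float)
    ph = np.random.uniform(0, 14, size=120)

    X_train, X_test, y_train, y_test = train_test_split(rgb, ph, random_state=0)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Comparing this error with the error of the students' visual readings is the
    # kind of comparison behind the "about 5.5 times more precise" figure above.
    print("Model MAE:", mean_absolute_error(y_test, model.predict(X_test)))
    ```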

    The researchers also surveyed the students before and after the weeklong curriculum and found that they reported being more motivated to learn and more knowledgeable about both chemistry and AI.
    “Students could see the relevance of cutting-edge technology when applied to real-world problems and scientific advancements,” said Shiyan Jiang, assistant professor of learning design and technology at NC State and co-corresponding author of the paper. “This practical application not only enhances their understanding of complex science concepts but also inspires them to explore innovative solutions, fostering a deeper appreciation for the intersection of cutting-edge technology and science, in particular chemistry.”
    “On the chemistry side, there are a lot of similar color chemistry concepts we can teach this way,” Zhang said. “We can also scale this curriculum up to include more students.”
    NC State graduate students Jeanne McClure, Jiahui Chen and Yunshu Liu co-authored the paper. The work was supported by the National Science Foundation (grants CHE-2246548, DRL-1949110 and DRL-2025090) and the National Institutes of Health (grants R21GM141675 and R01GM143397).

  • Training algorithm breaks barriers to deep physical neural networks

    EPFL researchers have developed an algorithm to train an analog neural network just as accurately as a digital one, enabling the development of more efficient alternatives to power-hungry deep learning hardware.
    With their ability to process vast amounts of data through algorithmic ‘learning’ rather than traditional programming, it often seems like the potential of deep neural networks like ChatGPT is limitless. But as the scope and impact of these systems have grown, so have their size, complexity, and energy consumption — the latter of which is significant enough to raise concerns about contributions to global carbon emissions.
    And while we often think of technological advancement in terms of shifting from analog to digital, researchers are now looking for answers to this problem in physical alternatives to digital deep neural networks. One such researcher is Romain Fleury of EPFL’s Laboratory of Wave Engineering in the School of Engineering. In a paper published in Science, he and his colleagues describe an algorithm for training physical systems that shows improved speed, enhanced robustness, and reduced power consumption compared to other methods.
    “We successfully tested our training algorithm on three wave-based physical systems that use sound waves, light waves, and microwaves to carry information, rather than electrons. But our versatile approach can be used to train any physical system,” says first author and LWE researcher Ali Momeni.
    A “more biologically plausible” approach
    Neural network training refers to helping systems learn to generate optimal values of parameters for a task like image or speech recognition. It traditionally involves two steps: a forward pass, where data is sent through the network and an error function is calculated based on the output; and a backward pass (also known as backpropagation, or BP), where a gradient of the error function with respect to all network parameters is calculated.
    Over repeated iterations, the system updates itself based on these two calculations to return increasingly accurate values. The problem? In addition to being very energy-intensive, BP is poorly suited to physical systems. In fact, training physical systems usually requires a digital twin for the BP step, which is inefficient and carries the risk of a reality-simulation mismatch.
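    For reference, those two steps look like this in a conventional digital training loop; this is a generic PyTorch sketch of a forward pass plus backpropagation, unrelated to the physical systems in the paper.

    ```python
    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
    optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))

    logits = net(x)            # forward pass: data flows through the network
    loss = loss_fn(logits, y)  # error function computed on the output
    loss.backward()            # backward pass (backpropagation): gradients for every parameter
    optimizer.step()           # update the parameters from those gradients
    optimizer.zero_grad()
    ```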

    The scientists’ idea was to replace the BP step with a second forward pass through the physical system to update each network layer locally. In addition to decreasing power use and eliminating the need for a digital twin, this method better reflects human learning.
    “The structure of neural networks is inspired by the brain, but it is unlikely that the brain learns via BP,” explains Momeni. “The idea here is that if we train each physical layer locally, we can use our actual physical system instead of first building a digital model of it. We have therefore developed an approach that is more biologically plausible.”
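    PhyLL itself operates on physical wave systems, but the flavor of local, backpropagation-free, layer-by-layer training can be sketched in software; the layer-local "goodness" objective below is a simplified illustration in the spirit of forward-only methods, not the authors' implementation.

    ```python
    import torch
    import torch.nn as nn

    layers = [nn.Linear(16, 32), nn.Linear(32, 32)]
    optimizers = [torch.optim.SGD(layer.parameters(), lr=0.03) for layer in layers]

    def goodness(h):
        return h.pow(2).mean(dim=1)  # per-sample activity of a layer's output

    x_pos = torch.randn(8, 16)  # inputs paired with the correct label ("positive" data)
    x_neg = torch.randn(8, 16)  # inputs paired with a wrong label ("negative" data)

    h_pos, h_neg = x_pos, x_neg
    for layer, opt in zip(layers, optimizers):
        h_pos, h_neg = torch.relu(layer(h_pos)), torch.relu(layer(h_neg))
        # Local objective: this layer alone should give positive data higher "goodness".
        loss = torch.log1p(torch.exp(goodness(h_neg) - goodness(h_pos))).mean()
        opt.zero_grad()
        loss.backward()  # gradients stay within this layer; no global backward chain
        opt.step()
        # Detach so the next layer is trained on fixed outputs of this one.
        h_pos, h_neg = h_pos.detach(), h_neg.detach()
    ```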
    The EPFL researchers, with Philipp del Hougne of CNRS IETR and Babak Rahmani of Microsoft Research, used their physical local learning algorithm (PhyLL) to train experimental acoustic and microwave systems and a modeled optical system to classify data like vowel sounds and images. As well as showing comparable accuracy to BP-based training, the method was robust and adaptable — even in systems exposed to unpredictable external perturbations — compared to the state of the art.
    An analog future?
    While the LWE’s approach is the first BP-free training of deep physical neural networks, some digital updates of the parameters are still required. “It’s a hybrid training approach, but our aim is to decrease digital computation as much as possible,” Momeni says.
    The researchers now hope to implement their algorithm on a small-scale optical system, with the ultimate goal of increasing network scalability.
    “In our experiments, we used neural networks with up to 10 layers, but would it still work with 100 layers with billions of parameters? This is the next step, and will require overcoming technical limitations of physical systems.”

  • Using machine learning to monitor driver ‘workload’ could help improve road safety

    Researchers have developed an adaptable algorithm that could improve road safety by predicting when drivers are able to safely interact with in-vehicle systems or receive messages, such as traffic alerts, incoming calls or driving directions.
    The researchers, from the University of Cambridge, working in partnership with Jaguar Land Rover (JLR), used a combination of on-road experiments, machine learning and Bayesian filtering techniques to reliably and continuously measure driver ‘workload’. Driving in an unfamiliar area may translate to a high workload, while a daily commute may mean a lower workload.
    The resulting algorithm is highly adaptable and can respond in near real time to changes in the driver’s behaviour and status, road conditions, road type, or driver characteristics.
    This information could then be incorporated into in-vehicle systems such as infotainment, navigation, displays, advanced driver assistance systems (ADAS) and others. Any driver-vehicle interaction can then be customised to prioritise safety and enhance the user experience, delivering adaptive human-machine interactions. For example, drivers would only be alerted at times of low workload, so that they can keep their full concentration on the road in more stressful driving scenarios. The results are reported in the journal IEEE Transactions on Intelligent Vehicles.
    “More and more data is made available to drivers all the time. However, with increasing levels of driver demand, this can be a major risk factor for road safety,” said co-first author Dr Bashar Ahmad from Cambridge’s Department of Engineering. “There is a lot of information that a vehicle can make available to the driver, but it’s not safe or practical to do so unless you know the status of the driver.”
    A driver’s status — or workload — can change frequently. Driving in a new area, in heavy traffic or in poor road conditions, for example, is usually more demanding than a daily commute.
    “If you’re in a demanding driving situation, that would be a bad time for a message to pop up on a screen or a heads-up display,” said Ahmad. “The issue for car manufacturers is how to measure how occupied the driver is, and instigate interactions or issue messages or prompts only when the driver is happy to receive them.”
    There are algorithms for measuring the levels of driver demand using eye gaze trackers and biometric data from heart rate monitors, but the Cambridge researchers wanted to develop an approach that could do the same thing using information that’s available in any car, specifically driving performance signals such as steering, acceleration and braking data. It should also be able to consume and fuse different unsynchronised data streams that have different update rates, including from biometric sensors if available.

    To measure driver workload, the researchers first developed a modified version of the Peripheral Detection Task to collect, in an automated way, subjective workload information during driving. For the experiment, a phone showing a route on a navigation app was mounted to the car’s central air vent, next to a small LED ring light that would blink at regular intervals. Participants all followed the same route through a mix of rural, urban and main roads. They were asked to push a finger-worn button whenever the LED lit up red and they perceived themselves to be in a low-workload scenario.
    Video analysis of the experiment, paired with the data from the buttons, allowed the researchers to identify high-workload situations, such as busy junctions, or a vehicle in front of or behind the driver behaving unusually.
    The on-road data was then used to develop and validate a supervised machine learning framework to profile drivers based on the average workload they experience, and an adaptable Bayesian filtering approach for sequentially estimating, in real-time, the driver’s instantaneous workload, using several driving performance signals including steering and braking. The framework combines macro and micro measures of workload where the former is the driver’s average workload profile and the latter is the instantaneous one.
    “For most machine learning applications like this, you would have to train it on a particular driver, but we’ve been able to adapt the models on the go using simple Bayesian filtering techniques,” said Ahmad. “It can easily adapt to different road types and conditions, or different drivers using the same car.”
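    The paper's estimator is not reproduced here, but the flavor of sequentially fusing a noisy workload feature with a running estimate can be shown with a minimal one-dimensional Bayesian (Kalman-style) filter; the feature, noise levels and parameter values below are invented for illustration.

    ```python
    import numpy as np

    def filter_workload(observations, prior_mean=0.5, prior_var=1.0,
                        process_var=0.01, obs_var=0.2):
        """Track a latent 'instantaneous workload' from a noisy per-second feature."""
        mean, var = prior_mean, prior_var  # prior could come from the driver's average profile
        estimates = []
        for z in observations:
            var += process_var                 # predict: workload drifts slowly between steps
            gain = var / (var + obs_var)       # update: weight the new observation
            mean += gain * (z - mean)
            var *= (1.0 - gain)
            estimates.append(mean)
        return np.array(estimates)

    # Simulated feature (e.g. a mix of steering and braking activity) spiking at a busy junction.
    feature = np.concatenate([np.full(30, 0.3), np.full(10, 0.9), np.full(30, 0.3)])
    smoothed = filter_workload(feature + 0.1 * np.random.randn(feature.size))
    print(smoothed.round(2))
    ```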
    The research was conducted in collaboration with JLR, which carried out the experimental design and data collection. It was part of a project sponsored by JLR under the CAPE agreement with the University of Cambridge.
    “This research is vital in understanding the impact of our design from a user perspective, so that we can continually improve safety and curate exceptional driving experiences for our clients,” said JLR’s Senior Technical Specialist of Human Machine Interface Dr Lee Skrypchuk. “These findings will help define how we use intelligent scheduling within our vehicles to ensure drivers receive the right notifications at the most appropriate time, allowing for seamless and effortless journeys.”
    The research at Cambridge was carried out by a team of researchers from the Signal Processing and Communications Laboratory (SigProC), Department of Engineering, under the supervision of Professor Simon Godsill. It was led by Dr Bashar Ahmad and included Nermin Caber (PhD student at the time) and Dr Jiaming Liang, who all worked on the project while based at Cambridge’s Department of Engineering.

  • How ChatGPT could help first responders during natural disasters

    A little over a year since its launch, ChatGPT’s abilities are well known. The machine learning model can write a decent college-level essay and hold a conversation in an almost human-like way.
    But could its language skills also help first responders find those in distress during a natural disaster?
    A new University at Buffalo-led study trains ChatGPT to recognize locations, from home addresses to intersections, in disaster victims’ social media posts.
    Supplied with carefully constructed prompts, researchers’ “geoknowledge-guided” GPT models extracted location data from tweets sent during Hurricane Harvey at an accuracy rate 76% better than default GPT models.
    “This use of AI technology may be able to help first responders reach victims more quickly and even save more lives,” said Yingjie Hu, associate professor in the UB Department of Geography, within the College of Arts and Sciences, and lead author of the study, which was published in October in the International Journal of Geographical Information Science.
    Disaster victims have frequently turned to social media to plead for help when 911 systems become overloaded, including during Harvey’s devastation of the Houston area in 2017.
    Yet first responders often don’t have the resources to monitor social media feeds during a disaster, following the various hashtags and deciding which posts are most urgent.

    It is the hope of the UB-led research team, which also includes collaborators from the University of Georgia, Stanford University and Google, that their work could lead to AI systems that automatically process social media data for emergency services.
    “ChatGPT and other large language models have drawn controversy for their potential negative uses, whether it be academic fraud or eliminating jobs, so it is exciting to instead harness their powers for social good,” Hu says.
    “While there are a number of significant and valid concerns about the emergence of ChatGPT, our work shows that careful, interdisciplinary work can produce applications of this technology that can provide tangible benefits to society,” adds co-author Kenneth Joseph, assistant professor in the UB Department of Computer Science and Engineering, within the School of Engineering and Applied Sciences.
    Fusing ‘geoknowledge’ into ChatGPT
    Imagine a tweet with an urgent but clear message: A family, including a 90-year-old not steady on their feet, needs rescuing at 1280 Grant St., Cypress, Texas, 77249.
    A typical model, such as a named entity recognition (NER) tool, would recognize the listed address as three separate entities — Grant Street, Cypress and Texas. If this data was used to geolocate, the model would send first responders not to 1280 Grant St., but into the middle of Grant Street, or even the geographical center of Texas.
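    As an illustration of that failure mode, an off-the-shelf NER pipeline typically breaks such an address into separate entities rather than one complete location description; the sketch below uses spaCy's small English model (which must be downloaded separately), and the exact output will vary by model version.

    ```python
    import spacy

    # Assumes: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Family needs rescuing at 1280 Grant St., Cypress, Texas, 77249")

    for ent in doc.ents:
        # Typically prints pieces such as "Cypress" and "Texas" as separate entities,
        # rather than the full street address needed to send help to the right door.
        print(ent.text, ent.label_)
    ```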

    Hu says that NER tools can be trained to recognize complete location descriptions, but it would require a large dataset of accurately labeled location descriptions specific to a given local area, a labor-intensive and time-consuming process.
    “Although there’s a lack of labeled datasets, first responders have a lot of knowledge about the way locations are described in their local area, whether it be the name of a restaurant or a popular intersection,” Hu says. “So we asked ourselves: How can we quickly and efficiently infuse this geoknowledge into a machine learning model?”
    The answer was OpenAI’s Generative Pretrained Transformers, or GPT, large language models already trained on billions of webpages and able to generate human-like responses. Through simple conversation and the right prompts, Hu’s team thought GPT could quickly learn to accurately interpret location data from social media posts.
    First, researchers provided GPT with 22 real tweets from Hurricane Harvey victims, which they’d already collected and labeled in a previous study. They told GPT which words in the post described a location and what kind of location it was describing, whether it be an address, street, intersection, business or landmark.
    Researchers then tested the geoknowledge-guided GPT on another 978 Hurricane Harvey tweets, and asked it to extract the location words and guess the location category by itself.
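    The study's actual prompts and labeled tweets are not reproduced here, but the general shape of a geoknowledge-guided, few-shot prompt can be sketched as follows; the example tweets, labels, and wording are invented for illustration.

    ```python
    # A handful of labeled examples ("geoknowledge") placed ahead of a new tweet,
    # asking the model to return the full location phrase and its category.
    few_shot_examples = [
        {
            "tweet": "We are trapped near the gas station on Mason Rd, water rising fast!",
            "location": "gas station on Mason Rd",
            "category": "business / street",
        },
        {
            "tweet": "Family of 5 needs rescue at 1280 Grant St., Cypress, Texas, 77249",
            "location": "1280 Grant St., Cypress, Texas, 77249",
            "category": "address",
        },
    ]

    def build_prompt(new_tweet: str) -> str:
        lines = ["Extract the complete location description and its category from each tweet.", ""]
        for ex in few_shot_examples:
            lines.append(f"Tweet: {ex['tweet']}")
            lines.append(f"Location: {ex['location']} | Category: {ex['category']}")
            lines.append("")
        lines.append(f"Tweet: {new_tweet}")
        lines.append("Location:")
        return "\n".join(lines)

    prompt = build_prompt("Elderly couple stuck on the frontage road near Gessner, please send help")
    print(prompt)  # this string can then be sent to a GPT model through the provider's API
    ```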
    The results: The geoknowledge-guided GPT models were 76% better at recognizing location descriptions than GPT models not provided with geoknowledge, as well as 40% better than NER tools. The best performers were the geoknowledge-guided GPT-3 and GPT-4, with the geoknowledge-guided ChatGPT only slightly behind.
    “GPT basically combines the vast amount of text it’s already read with the specific geoknowledge examples we provided to form its answers,” Hu says. “GPT has the ability to quickly learn and quickly adapt to a problem.”
    However, the human touch, that is, providing a good prompt, is crucial. For example, GPT may not consider a stretch of highway between two specific exits as a location unless specifically prompted to do so.
    “This emphasizes the importance of us as researchers instructing GPT as accurately and comprehensively as possible so it can deliver the results that we require,” Hu says.
    Letting first responders do what they do best
    Hu’s team began their work in early 2022 with GPT-2 and GPT-3, and later included ChatGPT and GPT-4 after those models launched in late 2022 and early 2023, respectively.
    “Our method will likely be applicable to the newer GPT models that may come out in the following years,” Hu says.
    Further research will have to be done to use GPT’s extracted location descriptions to actually geolocate victims, and perhaps figure out ways to filter out irrelevant or false posts about a disaster.
    Hu hopes their efforts can simplify the use of AI technologies so that emergency managers don’t have to become AI experts themselves in order to use them, and can focus on saving lives.
    “I think a good way for humans to collaborate with AI is to let each of us focus on what we’re really good at,” Hu says. “Let AI models help us complete those more labor-intensive tasks, while we humans focus on gaining knowledge and using such knowledge to guide AI models.”
    The work was supported by the National Science Foundation.

  • Magnetization by laser pulse

    To magnetize an iron nail, one simply has to stroke its surface several times with a bar magnet. Yet there is a much more unusual method: a team led by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) discovered some time ago that a certain iron alloy can be magnetized with ultrashort laser pulses. The researchers have now teamed up with the Laserinstitut Hochschule Mittweida (LHM) to investigate this process further. They discovered that the phenomenon also occurs in a different class of materials, which significantly broadens the potential application prospects. The working group presents its findings in the journal Advanced Functional Materials (DOI: 10.1002/adfm.202311951).
    The unexpected discovery was made back in 2018. When the HZDR team irradiated a thin layer of an iron-aluminum alloy with ultrashort laser pulses, the non-magnetic material suddenly became magnetic. The explanation: the laser pulses rearrange the atoms in the crystal in such a way that the iron atoms move closer together, thus forming a magnet. The researchers were then able to demagnetize the layer again with a series of weaker laser pulses, giving them a way of creating and erasing tiny “magnetic spots” on a surface.
    However, the pilot experiment still left some questions unanswered. “It was unclear whether the effect only occurs in the iron-aluminum alloy or also in other materials,” explains HZDR physicist Dr. Rantej Bali. “We also wanted to try tracking the time progression of the process.” For further investigation, he teamed up with Dr. Theo Pflug from the LHM and colleagues from the University of Zaragoza in Spain.
    Flip book with laser pulses
    The experts focused specifically on an iron-vanadium alloy. Unlike the iron-aluminum alloy with its regular crystal lattice, the atoms in the iron-vanadium alloy are arranged more chaotically, forming an amorphous, glass-like structure. To observe what happens upon laser irradiation, the physicists used a special technique: the pump-probe method.
    “First, we irradiate the alloy with a strong laser pulse, which magnetizes the material,” explains Theo Pflug. “Simultaneously, we use a second, weaker pulse that is reflected off the material surface.” Analyzing the reflected laser pulse provides an indication of the material’s physical properties. The process is repeated several times, with the time interval between the first “pump” pulse and the subsequent “probe” pulse continually extended. The result is a time series of reflection data that allows the researchers to characterize the processes triggered by the laser excitation. “The whole procedure is similar to generating a flip book,” says Pflug. “Likewise, a series of individual images animates when viewed in quick succession.”
    Rapid melting
    The result: although it has a different atomic structure than the iron-aluminum compound, the iron-vanadium alloy can also be magnetized via laser. “In both cases, the material melts briefly at the irradiation point,” explains Rantej Bali. “This causes the laser to erase the previous structure so that a small magnetic area is generated in both alloys.” An encouraging result: apparently, the phenomenon is not limited to a specific material structure but can be observed in diverse atomic arrangements.
    The team is also tracking the temporal dynamics of the process. “At least we now know on which time scales something happens,” explains Theo Pflug. “Within femtoseconds, the laser pulse excites the electrons in the material. Several picoseconds later, the excited electrons transfer their energy to the atomic nuclei.” This energy transfer causes the rearrangement into a magnetic structure, which is stabilized by the subsequent rapid cooling. In follow-up experiments, the researchers aim to observe exactly how the atoms rearrange themselves by examining the magnetization process with intense X-rays.
    Sights set on applications
    Although still in the early stages, this work already provides initial ideas for possible applications: for example, placing tiny magnets on a chip surface via laser is conceivable. “This could be useful for the production of sensitive magnetic sensors, such as those used in vehicles,” speculates Rantej Bali. “It could also find possible applications in magnetic data storage.” Additionally, the phenomenon appears relevant for a new type of electronics known as spintronics, in which magnetic signals would be used for digital computing instead of electrons passing through transistors as usual, offering a possible approach to the computer technology of the future.

  • Polaritons open up a new lane on the semiconductor highway

    On the highway of heat transfer, thermal energy is moved by way of quantum particles called phonons. But at the nanoscale of today’s most cutting-edge semiconductors, those phonons don’t remove enough heat. That’s why Purdue University researchers are focused on opening a new nanoscale lane on the heat transfer highway by using hybrid quasiparticles called “polaritons.”
    Thomas Beechem loves heat transfer. He talks about it loud and proud, like a preacher at a big tent revival.
    “We have several ways of describing energy,” said Beechem, associate professor of mechanical engineering. “When we talk about light, we describe it in terms of particles called ‘photons.’ Heat also carries energy in predictable ways, and we describe those waves of energy as ‘phonons.’ But sometimes, depending on the material, photons and phonons will come together and make something new called a ‘polariton.’ It carries energy in its own way, distinct from both photons and phonons.”
    Like photons and phonons, polaritons aren’t physical particles you can see or capture. They are more like ways of describing energy exchange as if they were particles.
    Still fuzzy? How about another analogy. “Phonons are like internal combustion vehicles, and photons are like electric vehicles,” Beechem said. “Polaritons are a Toyota Prius. They are a hybrid of light and heat, and retain some of the properties of both. But they are their own special thing.”
    Polaritons have been used in optical applications — everything from stained glass to home health tests. But their ability to move heat has largely been ignored, because their impact becomes significant only when the size of materials becomes very small. “We know that phonons do a majority of the work of transferring heat,” said Jacob Minyard, a Ph.D. student in Beechem’s lab. “The effect of polaritons is only observable at the nanoscale. But we’ve never needed to address heat transfer at that level until now, because of semiconductors.”
    “Semiconductors have become so incredibly small and complex,” he continued. “People who design and build these chips are discovering that phonons don’t efficiently disperse heat at these very small scales. Our paper demonstrates that at those length scales, polaritons can contribute a larger share of thermal conductivity.”
    Their research on polaritons has been selected as a Featured Article in the Journal of Applied Physics.

    “We in the heat transfer community have been very material-specific in describing the effect of polaritons,” said Beechem. “Someone will observe it in this material or at that interface. It’s all very disparate. Jacob’s paper has established that this isn’t some random thing. Polaritons begin to dominate the heat transfer on any surface thinner than 10 nanometers. That’s twice as big as the transistors on an iPhone 15.”
    Now Beechem gets really fired up. “We’ve basically opened up a whole extra lane on the highway. And the smaller the scales get, the more important this extra lane becomes. As semiconductors continue to shrink, we need to think about designing the traffic flow to take advantage of both lanes: phonons and polaritons.”
    Minyard’s paper just scratches the surface of how this can happen practically. The complexity of semiconductors means that there are many opportunities to capitalize upon polariton-friendly designs. “There are many materials involved in chipmaking, from the silicon itself to the dielectrics and metals,” Minyard said. “The way forward for our research is to understand how these materials can be used to conduct heat more efficiently, recognizing that polaritons provide a whole new lane to move energy.”
    Recognizing this, Beechem and Minyard want to show chip manufacturers how to incorporate these polariton-based nanoscale heat transfer principles right into the physical design of the chip — from the physical materials involved, to the shape and thickness of the layers.
    While this work is theoretical now, physical experimentation is very much on the horizon — which is why Beechem and Minyard are happy to be at Purdue.
    “The heat transfer community here at Purdue is so robust,” Beechem said. “We can literally go upstairs and talk to Xianfan Xu, who had one of the first experimental realizations of this effect. Then we can walk over to Flex Lab and ask Xiulin Ruan about his pioneering work in phonon scattering. And we have the facilities here at Birck Nanotechnology Center to build nanoscale experiments, and use one-of-a-kind measurement tools to confirm our findings. It’s really a researcher’s dream.”

  • Soundwaves harden 3D-printed treatments in deep tissues

    Engineers at Duke University and Harvard Medical School have developed a bio-compatible ink that solidifies into different 3D shapes and structures by absorbing ultrasound waves. Because it responds to sound waves rather than light, the ink can be used in deep tissues for biomedical purposes ranging from bone healing to heart valve repair.
    This work appears on December 7 in the journal Science.
    The uses for 3D-printing tools are ever increasing. Printers create prototypes of medical devices, design flexible, lightweight electronics, and even engineer tissues used in wound healing. But many of these printing techniques involve building the object point-by-point in a slow and arduous process that often requires a robust printing platform.
    To circumvent these issues, researchers have spent the past several years developing photo-sensitive inks that respond directly to targeted beams of light and quickly harden into a desired structure. While this printing technique can substantially improve the speed and quality of a print, it only works with transparent inks, and its biomedical uses are limited because light can’t reach more than a few millimeters deep into tissue.
    Now, Y. Shrike Zhang, associate bioengineer at Brigham and Women’s Hospital and associate professor at Harvard Medical School, and Junjie Yao, associate professor of biomedical engineering at Duke, have developed a new printing method called deep-penetrating acoustic volumetric printing, or DVAP, that resolves these problems. This new technique involves a specialized ink that reacts to soundwaves rather than light, enabling them to create biomedically useful structures at unprecedented tissue depths.
    “DVAP relies on the sonothermal effect, which occurs when soundwaves are absorbed and increase the temperature to harden our ink,” explained Yao, who designed the ultrasound printing technology for DVAP. “Ultrasound waves can penetrate more than 100 times deeper than light while still spatially confined, so we can reach tissues, bones and organs with high spatial precision that haven’t been reachable with light-based printing methods.”
    The first component of DVAP involves a sonicated ink, called sono-ink, that is a combination of hydrogels, microparticles and molecules designed to specifically react to ultrasound waves. Once the sono-ink is delivered into the target area, a specialized ultrasound printing probe sends focused ultrasound waves into the ink, hardening portions of it into intricate structures. These structures can range from a hexagonal scaffold that mimics the hardness of bone to a bubble of hydrogel that can be placed on an organ.

    “The ink itself is a viscous liquid, so it can be injected into a targeted area fairly easily, and as you move the ultrasound printing probe around, the materials in the ink will link together and harden,” said Zhang, who designed the sono-ink in his lab at the Brigham. “Once it’s done, you can remove any remaining ink that isn’t solidified via a syringe.”
    The different components of the sono-ink enable the researchers to adjust the formula for a wide variety of uses. For example, if they want to create a scaffold to help heal a broken bone or make up for bone loss, they can add bone mineral particles to the ink. This flexibility also allows them to engineer the hardened formula to be more durable or more degradable, depending on its use. They can even adjust the colors of their final print.
    The team conducted three tests as a proof-of-concept of their new technique. The first involved using the ink to seal off a section in a goat’s heart. When a human has nonvalvular atrial fibrillation, the heart won’t beat correctly, causing blood to pool in the organ. Traditional treatment often requires open-chest surgery to seal off the left atrial appendage to reduce the risk of blood clots and heart attack.
    Instead, the team used a catheter to deliver their sono-ink to the left atrial appendage in a goat heart that was placed in a printing chamber. The ultrasound probe then delivered focused ultrasound waves through 12 mm of tissue, hardening the ink without damaging any of the surrounding organ. Once the process was complete, the ink was safely bonded to the heart tissue and was flexible enough to withstand movements that mimicked the heart beating.
    Next, the team tested the potential for DVAP’s use for tissue reconstruction and regeneration. After creating a bone defect model using a chicken leg, the team injected the sono-ink and hardened it through 10 mm of sample skin and muscle tissue layers. The resulting material bonded seamlessly to the bone and didn’t negatively impact any of the surrounding tissues.
    Finally, Yao and Zhang showed that DVAP could also be used for therapeutic drug delivery. In their example, they added a common chemotherapy drug to their ink, which they delivered to sample liver tissue. Using their probe, they hardened the sono-ink into hydrogels that slowly release the chemotherapy and diffuse into the liver tissue.
    “We’re still far from bringing this tool into the clinic, but these tests reaffirmed the potential of this technology,” said Zhang. “We’re very excited to see where it can go from here.”
    “Because we can print through tissue, it allows for a lot of potential applications in surgery and therapy that traditionally involve very invasive and disruptive methods,” said Yao. “This work opens up an exciting new avenue in the 3D printing world, and we’re excited to explore the potential of this tool together.”