More stories

  • New AI system predicts how to prevent wildfires

    Wildfires are a growing threat in a world shaped by climate change. Now, researchers at Aalto University have developed a neural network model that can accurately predict the occurrence of fires in peatlands. They used the new model to assess the effect of different strategies for managing fire risk and identified a suite of interventions that would reduce fire incidence by 50-76%.
    The study focused on the Central Kalimantan province of Borneo in Indonesia, which has the highest density of peatland fires in Southeast Asia. Drainage to support agriculture or residential expansion has made peatlands increasingly vulnerable to recurring fires. In addition to threatening lives and livelihoods, peatland fires release significant amounts of carbon dioxide. However, prevention strategies have faced difficulties because of the lack of clear, quantified links between proposed interventions and fire risk.
    The new model uses measurements taken before each fire season in 2002-2019 to predict the distribution of peatland fires. While the findings can be broadly applied to peatlands elsewhere, a new analysis would have to be done for other contexts. ‘Our methodology could be used for other contexts, but this specific model would have to be re-trained on the new data,’ says Alexander Horton, the postdoctoral researcher who carried out the study.
    The researchers used a convolutional neural network to analyse 31 variables, such as the type of land cover and pre-fire indices of vegetation and drought. Once trained, the network predicted the likelihood of a peatland fire at each spot on the map, producing an expected distribution of fires for the year.
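    The article doesn’t include the model itself, but the setup it describes can be sketched roughly as follows. This is a minimal illustration assuming an arbitrary small architecture and map size; only the idea of 31 input variables feeding a convolutional network that outputs a per-location fire probability comes from the article.

    ```python
    # Illustrative per-cell fire-risk CNN (PyTorch). Layer sizes, grid shape,
    # and names are assumptions for this sketch, not the Aalto team's model.
    import torch
    import torch.nn as nn

    class FireRiskCNN(nn.Module):
        def __init__(self, in_channels: int = 31):  # 31 pre-fire-season variables
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(64, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(32, 1, kernel_size=1),  # one logit per map cell
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return torch.sigmoid(self.net(x))  # fire probability at each location

    # Example: one fire season as a 256 x 256 grid with 31 predictor layers.
    model = FireRiskCNN()
    season_stack = torch.randn(1, 31, 256, 256)
    fire_probability_map = model(season_stack)  # shape (1, 1, 256, 256)
    ```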
    Overall, the neural network’s predictions were correct 80-95% of the time. The model was usually right when it predicted a fire, but it also missed many fires that actually occurred: about half of the observed fires weren’t predicted at all, so the model isn’t suitable as an early-warning predictive system. Larger groupings of fires tended to be predicted well, while isolated fires were often missed by the network. With further work, the researchers hope to improve the network’s performance so it can also serve as an early-warning system.
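    The pattern just described, usually right when a fire is predicted yet missing about half of the fires that occur, is the familiar gap between precision and recall. The counts in the snippet below are invented purely to illustrate that distinction; they are not figures from the study.

    ```python
    # Toy confusion-matrix counts (hypothetical, for illustration only).
    true_positives = 90    # predicted fires that actually occurred
    false_positives = 10   # predicted fires that never occurred
    false_negatives = 90   # real fires the model failed to predict

    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)

    print(f"precision: {precision:.0%}")  # 90% -- most predicted fires were real
    print(f"recall: {recall:.0%}")        # 50% -- half of the real fires were missed
    ```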
    The team took advantage of the fact that fire predictions were usually correct to test the effect of different land management strategies. By simulating different interventions, they found that the most effective plausible strategy would be to convert shrubland and scrubland into swamp forests, which would reduce fire incidence by 50%. If this were combined with blocking all of the drainage canals except the major ones, fires would decrease by 70% in total.
    However, such a strategy would have clear economic drawbacks. ‘The local community is in desperate need of long-term, stable cultivation to bolster the local economy,’ says Horton.
    An alternative strategy would be to establish more plantations, since well-managed plantations dramatically reduce the likelihood of fire. However, the plantations are among the key drivers of forest loss, and Horton points out that ‘the plantations are mostly owned by larger corporations, often based outside Borneo, which means the profits aren’t directly fed back into the local economy beyond the provision of labour for the local workforce.’
    Ultimately, fire prevention strategies have to balance risks, benefits, and costs, and this research provides the information to do that, explains Professor Matti Kummu, who led the study team. ‘We tried to quantify how the different strategies would work. It’s more about informing policy-makers than providing direct solutions.’
    Story Source:
    Materials provided by Aalto University.

  • Collaborative machine learning that preserves privacy

    Training a machine-learning model to effectively perform a task, such as image classification, involves showing the model thousands, millions, or even billions of example images. Gathering such enormous datasets can be especially challenging when privacy is a concern, such as with medical images. Researchers from MIT and the MIT-born startup DynamoFL have now taken one popular solution to this problem, known as federated learning, and made it faster and more accurate.
    Federated learning is a collaborative method for training a machine-learning model that keeps sensitive user data private. Hundreds or thousands of users each train their own model using their own data on their own device. Then users transfer their models to a central server, which combines them to come up with a better model that it sends back to all users.
    A collection of hospitals located around the world, for example, could use this method to train a machine-learning model that identifies brain tumors in medical images, while keeping patient data secure on their local servers.
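    As a rough sketch, the plain federated-averaging loop described above (the standard baseline, not the MIT team’s improved method) can be written as follows; the logistic-regression model, the simulated clients, and the uniform averaging are simplifying assumptions made for this illustration.

    ```python
    # Minimal federated-averaging sketch (NumPy): each client trains locally,
    # only the model weights travel to the server, and the server averages them.
    import numpy as np

    def local_update(weights, features, labels, lr=0.1, epochs=5):
        """One client's private training: plain logistic-regression gradient descent."""
        w = weights.copy()
        for _ in range(epochs):
            preds = 1.0 / (1.0 + np.exp(-features @ w))
            grad = features.T @ (preds - labels) / len(labels)
            w -= lr * grad
        return w

    def federated_round(global_weights, clients):
        """Server side: average the clients' updated weights; raw data never moves."""
        updates = [local_update(global_weights, X, y) for X, y in clients]
        return np.mean(updates, axis=0)

    # Example with three simulated clients holding private data.
    rng = np.random.default_rng(0)
    clients = [(rng.normal(size=(50, 8)), rng.integers(0, 2, size=50).astype(float))
               for _ in range(3)]
    weights = np.zeros(8)
    for _ in range(10):  # ten communication rounds
        weights = federated_round(weights, clients)
    ```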
    But federated learning has some drawbacks. Transferring a large machine-learning model to and from a central server involves moving a lot of data, which has high communication costs, especially since the model must be sent back and forth dozens or even hundreds of times. Plus, each user gathers their own data, so those data don’t necessarily follow the same statistical patterns, which hampers the performance of the combined model. And that combined model is made by taking an average — it is not personalized for each user.
    The researchers developed a technique that can simultaneously address these three problems of federated learning. Their method boosts the accuracy of the combined machine-learning model while significantly reducing its size, which speeds up communication between users and the central server. It also ensures that each user receives a model that is more personalized for their environment, which improves performance.
    The researchers were able to reduce the model size by nearly an order of magnitude when compared to other techniques, which led to communication costs that were between four and six times lower for individual users. Their technique was also able to increase the model’s overall accuracy by about 10 percent.

  • Modified microwave oven cooks up next-gen semiconductors

    A household microwave oven modified by a Cornell engineering professor is helping to cook up the next generation of cellphones, computers and other electronics after the invention was shown to overcome a major challenge faced by the semiconductor industry.
    The research is detailed in a paper published in Applied Physics Letters. The lead author is James Hwang, a research professor in the department of materials science and engineering.
    As microchips continue to shrink, silicon must be doped, or mixed, with higher concentrations of phosphorus to produce the desired current. Semiconductor manufacturers are now approaching a critical limit in which heating the highly doped materials using traditional methods no longer produces consistently functional semiconductors.
    The Taiwan Semiconductor Manufacturing Company (TSMC) theorized that microwaves could be used to activate the excess dopants, but just like with household microwave ovens that sometimes heat food unevenly, previous microwave annealers produced “standing waves” that prevented consistent dopant activation.
    TSMC partnered with Hwang, who modified a microwave oven to selectively control where the standing waves occur. Such precision allows for the proper activation of the dopants without excessive heating or damage to the silicon crystal.
    This discovery could be used to produce semiconductor materials and electronics appearing around the year 2025, said Hwang, who has filed two patents for the prototype.
    “A few manufacturers are currently producing semiconductor materials that are 3 nanometers,” Hwang said. “This new microwave approach can potentially enable leading manufacturers such as TSMC and Samsung to scale down to just 2 nanometers.”
    The breakthrough could change the geometry of transistors used in microchips. For more than 20 years, transistors have been made to stand up like dorsal fins so that more can be packed on each microchip, but manufacturers have recently begun to experiment with a new architecture in which transistors are stacked horizontally. The excessively doped materials enabled by microwave annealing would be key to the new architecture.
    Story Source:
    Materials provided by Cornell University. Original written by Syl Kacapyr, courtesy of the Cornell Chronicle.

  • Intelligent microscopes for detecting rare biological events

    Imagine you’re a PhD student with a fluorescent microscope and a sample of live bacteria. What’s the best way to use these resources to obtain detailed observations of bacterial division from the sample?
    You may be tempted to forgo food and rest, sitting at the microscope non-stop to acquire images when bacterial division finally starts. (It can take hours for one bacterium to divide!) It’s not as crazy as it sounds, since manual detection and acquisition control are widespread in many of the sciences.
    Alternatively, you may want to set the microscope to take images indiscriminately and as often as possible. But excessive light depletes the fluorescence from the sample faster and can prematurely destroy living samples. Plus, you’d generate many uninteresting images, since only a few would contain images of dividing bacteria.
    Another solution would be to use artificial intelligence to detect precursors to bacterial division and use these to automatically update the microscope’s control software to take more pictures of the event.
    Drum roll… yes, EPFL biophysicists have indeed found a way to automate microscope control for imaging biological events in detail while limiting stress on the sample, all with the help of artificial neural networks. Their technique works for bacterial cell division, and for mitochondrial division. The details of their intelligent microscope are described in Nature Methods.
    “An intelligent microscope is kind of like a self-driving car. It needs to process certain types of information, subtle patterns that it then responds to by changing its behavior,” explains principal investigator Suliana Manley of EPFL’s Laboratory of Experimental Biophysics. “By using a neural network, we can detect much more subtle events and use them to drive changes in acquisition speed.”
    Manley and her colleagues first worked out how to detect mitochondrial division, which is harder to spot than division in bacteria such as C. crescentus. Mitochondrial division is unpredictable: it occurs infrequently and can happen almost anywhere within the mitochondrial network at any moment. The scientists solved the problem by training the neural network to look out for mitochondrial constrictions, a change in the shape of mitochondria that leads to division, combined with observations of a protein known to be enriched at sites of division.
    When both constrictions and protein levels are high, the microscope switches into high-speed imaging to capture many images of division events in detail. When constriction and protein levels are low, the microscope then switches to low-speed imaging to avoid exposing the sample to excessive light.
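    In software terms, this amounts to an event-driven acquisition loop. The sketch below illustrates the idea only; the camera and detector objects, thresholds, and frame intervals are hypothetical placeholders, not the API of the published Micro-Manager plug-in.

    ```python
    # Event-driven acquisition sketch (placeholder interfaces, for illustration only).
    import random
    import time

    SLOW_INTERVAL_S = 10.0   # gentle imaging, spares the sample from light
    FAST_INTERVAL_S = 0.5    # burst imaging to capture a division in detail
    EVENT_THRESHOLD = 0.8    # assumed score above which a division looks imminent

    class DummyCamera:
        """Stand-in for a real camera interface."""
        def snap(self):
            return [[0.0]]  # a fake frame

    class DummyDetector:
        """Stand-in for the neural network scoring division precursors."""
        def score(self, frame):
            return random.random()  # pretend constriction / protein evidence

    def acquisition_loop(camera, detector, duration_s=60.0):
        """Switch between slow and fast imaging based on the detector's score."""
        start = time.monotonic()
        while time.monotonic() - start < duration_s:
            frame = camera.snap()
            score = detector.score(frame)
            interval = FAST_INTERVAL_S if score > EVENT_THRESHOLD else SLOW_INTERVAL_S
            time.sleep(interval)  # fast bursts around events, gentle imaging otherwise

    acquisition_loop(DummyCamera(), DummyDetector(), duration_s=5.0)
    ```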
    With this intelligent fluorescent microscope, the scientists showed that they could observe the sample for longer compared to standard fast imaging. While the sample was more stressed compared to standard slow imaging, they were able to obtain more meaningful data.
    “The potential of intelligent microscopy includes measuring what standard acquisitions would miss,” Manley explains. “We capture more events, measure smaller constrictions, and can follow each division in greater detail.”
    The scientists are making the control framework available as an open source plug-in for the open microscope software Micro-Manager, with the aim of allowing other scientists to integrate artificial intelligence into their own microscopes.

  • Gamers can have their cake and eat it too

    Parents and pundits may no longer be able to argue that gamers are indulging in brainless activities in front of their screens. And gamers may finally feel a sense of vindication.
    Kyoto University and BonBon Inc, a Kyoto-based healthcare-related IT company, have now teamed up to show that multiple cognitive abilities may be empirically measured from a complex game experience depending on the game’s design.
    “Video games can be made to engage and characterize distinct cognitive abilities while still retaining the entertainment value that popular titles offer,” says Tomihiro Ono, lead author of the joint study in Scientific Reports.
    He adds, “For example, we found that there are in-game micro-level connections such as between stealth behavior and abstract thinking, aiming and attention, and targeting and visual discrimination.”
    To make these connections between complex gameplay and interpretable cognitive characteristics, the team combined the use of data from Potion, a 3-D action video game by BonBon Inc, and WebCNP, conventional cognitive tests maintained by the University of Pennsylvania.
    Although existing literature and general beliefs about similar action video games already suggest that younger males may have an advantage over other demographic groups, the researchers did not expect their measurements to reflect such stark differences even after accounting for gaming experience.
    “The lack of a connection between cognitive abilities and video game elements in aged players came as a surprise,” Ono notes.
    To gain more scientific insight into the psyche of gamers, such as why computer games have positive influences on some players, the researchers posit that studies using games ought to avoid one-size-fits-all approaches, since demographic factors and gaming experience can be assumed to affect results.
    “We think that a granular understanding of cognitive engagement in video games has potential in benefitting such research areas as psychiatry, psychology, and education,” concludes the author.
    Story Source:
    Materials provided by Kyoto University.

  • City digital twins help train deep learning models to separate building facades

    Game engines were originally developed to build imaginary worlds for entertainment. However, these same engines can be used to build copies of real environments, that is, digital twins. Researchers from Osaka University have found a way to use the images that were automatically generated by digital city twins to train deep learning models that can efficiently analyze images of real cities and accurately separate the buildings that appear in them.
    A convolutional neural network is a deep learning neural network designed for processing structured arrays of data such as images. Advances like these in deep learning have fundamentally changed the way tasks such as architectural segmentation are performed. However, an accurate deep convolutional neural network (DCNN) model needs a large volume of labeled training data, and labeling these data can be a slow and extremely expensive manual undertaking.
    To create the synthetic digital city twin data, the investigators used a 3D city model from the PLATEAU platform, which contains 3D models of most Japanese cities at an extremely high level of detail. They loaded this model into the Unity game engine and created a camera setup on a virtual car, which drove around the city and acquired the virtual data images under various lighting and weather conditions. The Google Maps API was then used to obtain real street-level images of the same study area for the experiments.
    The researchers found that the digital city twin data leads to better results than purely virtual data with no real-world counterpart. Furthermore, adding synthetic data to a real dataset improves segmentation accuracy. However, most importantly, the investigators found that when a certain fraction of real data is included in the digital city twin synthetic dataset, the segmentation accuracy of the DCNN is boosted significantly. In fact, its performance becomes competitive with that of a DCNN trained on 100% real data. “These results reveal that our proposed synthetic dataset could potentially replace all the real images in the training set,” says Tomohiro Fukuda, the corresponding author of the paper.
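    A minimal sketch of that kind of dataset mixing is shown below; the mixing fraction, placeholder file names, and helper function are illustrative assumptions rather than the authors’ actual pipeline.

    ```python
    # Mix a chosen fraction of real labeled images into a synthetic digital-twin
    # dataset (illustrative sketch; samples here are placeholder file-name pairs).
    import random

    def build_training_set(synthetic_samples, real_samples, real_fraction=0.1, seed=0):
        """Combine all synthetic samples with a fraction of the real samples."""
        rng = random.Random(seed)
        n_real = int(real_fraction * len(real_samples))
        mixed = list(synthetic_samples) + rng.sample(real_samples, n_real)
        rng.shuffle(mixed)
        return mixed

    # Placeholder (image, facade-mask) pairs.
    synthetic = [(f"twin_{i}.png", f"twin_{i}_mask.png") for i in range(1000)]
    real = [(f"street_{i}.png", f"street_{i}_mask.png") for i in range(200)]
    training_set = build_training_set(synthetic, real, real_fraction=0.1)
    ```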
    Automatically separating out the individual building facades that appear in an image is useful for construction management and architecture design, large-scale measurements for retrofits and energy analysis, and even visualizing building facades that have been demolished. The system was tested on multiple cities, demonstrating the proposed framework’s transferability. The hybrid dataset of real and synthetic data yields promising prediction results for most modern architectural styles. This makes it a promising approach for training DCNNs for architectural segmentation tasks in the future — without the need for costly manual data annotation.
    Story Source:
    Materials provided by Osaka University.

  • Scientists see spins in a 2D magnet

    All magnets — from the simple souvenirs hanging on your refrigerator to the discs that give your computer memory to the powerful versions used in research labs — contain spinning quasiparticles called magnons. The direction one magnon spins can influence that of its neighbor, which affects the spin of its neighbor, and so on, yielding what are known as spin waves. Information can potentially be transmitted via spin waves more efficiently than with electricity, and magnons can serve as “quantum interconnects” that “glue” quantum bits together into powerful computers.
    Magnons have enormous potential, but they are often difficult to detect without bulky pieces of lab equipment. Such setups are fine for conducting experiments, but not for developing devices such as magnonic devices and so-called spintronics, said Columbia researcher Xiaoyang Zhu. Seeing magnons can be made much simpler, however, with the right material: a magnetic semiconductor called chromium sulfide bromide (CrSBr) that can be peeled into atom-thin, 2D layers and that is synthesized in Department of Chemistry professor Xavier Roy’s lab.
    In a new article in Nature, Zhu and collaborators at Columbia, the University of Washington, New York University, and Oak Ridge National Laboratory show that magnons in CrSBr can pair up with another quasiparticle called an exciton, which emits light, offering the researchers a means to “see” the spinning quasiparticle.
    As they perturbed the magnons with light, they observed oscillations from the excitons in the near-infrared range, which is nearly visible to the naked eye. “For the first time, we can see magnons with a simple optical effect,” Zhu said.
    The results may be viewed as quantum transduction, or the conversion of one “quantum” of energy to another, said first author Youn Jun (Eunice) Bae, a postdoc in Zhu’s lab. The energy of excitons is four orders of magnitude larger than that of magnons; now, because they pair together so strongly, we can easily observe tiny changes in the magnons, Bae explained. This transduction may one day enable researchers to build quantum information networks that can take information from spin-based quantum bits — which generally need to be located within millimeters of each other — and convert it to light, a form of energy that can transfer information up to hundreds of miles via optical fibers.
    The coherence time — how long the oscillations can last — was also remarkable, Zhu said, lasting much longer than the five-nanosecond limit of the experiment. The phenomenon could travel over seven micrometers and persist even when the CrSBr devices were made of just two atom-thin layers, raising the possibility of building nano-scale spintronic devices. These devices could one day be more efficient alternatives to today’s electronics. Unlike electrons in an electrical current that encounter resistance as they travel, no particles are actually moving in a spin wave.
    The work was supported by Columbia’s NSF-funded Materials Research Science and Engineering Center (MRSEC), with the material created in the DOE-funded Energy Frontier Research Center (EFRC). From here, the researchers plan to explore CrSBr’s quantum information potential, as well as other material candidates. “In the MRSEC and EFRC, we are exploring the quantum properties of several 2D materials that you can stack like papers to create all kinds of new physical phenomena,” Zhu said.
    For example, if magnon-exciton coupling can be found in other kinds of magnetic semiconductors with slightly different properties than CrSBr, they might emit light in a wider range of colors. “We’re assembling the toolbox to construct new devices with customizable properties,” Zhu said.
    Story Source:
    Materials provided by Columbia University. Original written by Ellen Neff.

  • What is the best way to group students? Math model

    Imagine you have a group of 30 children who want to play soccer. You would like to divide them into two teams, so they can practice their skills and learn from their coaches to become better players.
    But what is the most effective way for them to improve: should you group the children according to skill level, with all of the most skilled players on one team and the rest of the players on the other? Or should you divide them into two teams balanced in talent and skill?
    For a fresh approach to this age-old question in grouping theory, a researcher from the University of Rochester, along with his childhood friend, an education professor at the University of Nevada, Las Vegas, turned to math.
    “The selection and grouping of individuals for training purposes is extremely common in our society,” says Chad Heatwole, a professor of neurology at the University of Rochester Medical Center and the director of Rochester’s Center for Health + Technology (CHeT). “There is a historic and ongoing rigorous debate regarding the best way to group students for the purpose of instruction.”
    In a paper published in the journal Education Practice and Theory, the research team — which also includes Peter Wiens, an associate professor of teaching and learning at the University of Nevada, Las Vegas, and Christine Zizzi, a director at CHeT — developed, for the first time, a mathematical approach to grouping. The approach compares different grouping methods, selecting the optimal way to group individuals for teacher-led instruction. The research has broad implications in education, as well as in economics, music, medicine, and sports.
    “Our solution was to look at this through a purely mathematical lens, evaluating for the greatest good of the entire sample,” Heatwole says. “To our knowledge, this novel mathematical approach has never been described or utilized in this way.”