More stories

  • Researchers investigate the links between facial recognition and Alzheimer’s disease

    In recent years, Alzheimer’s disease has been on the rise throughout the world, and it is rarely diagnosed at an early stage, when it can still be effectively controlled. Using artificial intelligence, KTU researchers conducted a study to identify whether human-computer interfaces could be adapted for people with memory impairments to recognise a visible object in front of them.
    Rytis Maskeliūnas, a researcher at the Department of Multimedia Engineering at Kaunas University of Technology (KTU), says that classifying the information visible on a face is an everyday human function: “While communicating, the face ‘tells’ us the context of the conversation, especially from an emotional point of view, but can we identify visual stimuli based on brain signals?”
    The visual processing of the human face is complex. We perceive information such as a person’s identity or emotional state by analysing faces. The aim of the study was to analyse a person’s ability to process contextual information from a face and to detect how the person responds to it.
    The face can indicate the first symptoms of the disease
    According to Maskeliūnas, many studies demonstrate that brain diseases can potentially be analysed by examining facial muscle and eye movements since degenerative brain disorders affect not only memory and cognitive functions, but also the cranial nervous system associated with the above facial (especially eye) movements.
    Dovilė Komolovaitė, a graduate of the KTU Faculty of Mathematics and Natural Sciences who co-authored the study, said the research clarified whether a patient with Alzheimer’s disease visually processes faces in the brain in the same way as individuals without the disease.
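    The KTU pipeline itself is not reproduced in this summary. As a minimal, hypothetical illustration of the underlying task, the sketch below classifies simulated EEG-like epochs as responses to face versus non-face stimuli with scikit-learn; the epoch layout, features and labels are assumptions, not the study’s method.

    ```python
    # Minimal sketch: classify simulated EEG epochs as "face" vs. "non-face" stimuli.
    # All data here are synthetic; a real study would use recorded, preprocessed EEG.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_epochs, n_channels, n_samples = 200, 8, 128   # assumed epoch layout
    X_raw = rng.normal(size=(n_epochs, n_channels, n_samples))
    y = rng.integers(0, 2, size=n_epochs)           # 1 = face stimulus, 0 = other

    # Inject a weak channel-specific offset for "face" epochs so there is signal to learn.
    X_raw[y == 1, :4, :] += 0.3

    # Simple per-channel features: mean and variance of each epoch.
    features = np.concatenate([X_raw.mean(axis=2), X_raw.var(axis=2)], axis=1)

    X_train, X_test, y_train, y_test = train_test_split(
        features, y, test_size=0.25, random_state=0)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    ```

    A real pipeline would replace the synthetic arrays with band-pass-filtered, artifact-cleaned EEG epochs time-locked to each stimulus presentation.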

  • Multi-spin flips and a pathway to efficient Ising machines

    Combinatorial optimization problems are at the root of many industrial processes and solving them is key to a more sustainable and efficient future. Ising machines can solve certain combinatorial optimization problems, but their efficiency could be improved with multi-spin flips. Researchers have now tackled this difficult problem by developing a merge algorithm that disguises a multi-spin flip as a simpler, single-spin flip. This technology provides optimal solutions to hard computational problems in a shorter time.
    In a rapidly developing world, industries are always trying to optimize their operations and resources. Combinatorial optimization using an Ising machine helps solve certain operational problems, like mapping the most efficient route for a multi-city tour or optimizing delivery of resources. Ising machines operate by mapping the solution space to a spin configuration space and solving the associated spin problem instead. These machines have a wide range of applications in both academia and industry, tackling problems in machine learning, material design, portfolio optimization, logistics, and drug discovery. For larger problems, however, it is still difficult to obtain the optimal solution in a feasible amount of time.
    While Ising machines can be optimized by integrating multi-spin flips into their hardware, this is a challenging task, because it essentially means completely overhauling the hardware of traditional Ising machines by changing their basic operation. But a team of researchers from the Department of Computer Science and Communications Engineering at Waseda University, consisting of Assistant Professor Tatsuhiko Shirai and Professor Nozomu Togawa, has provided a novel solution to this long-standing problem.
    In their paper, published in IEEE Transactions on Computers on 27 May 2022, they engineered a feasible multi-spin flip algorithm by deforming the Hamiltonian (the energy function of the Ising model). “We have developed a hybrid algorithm that takes an infeasible multi-spin flip and expresses it in the form of a feasible single-spin flip instead. This algorithm is proposed along with our merge process, in which the original Hamiltonian of a difficult combinatorial problem is deformed into a new Hamiltonian, a problem that the hardware of a traditional Ising machine can easily solve,” explains Tatsuhiko Shirai. (A sketch of the conventional single-spin-flip update that such machines rely on appears at the end of this story.)
    The newly developed hybrid Ising processes are fully compatible with current methods and hardware, reducing the barriers to their widespread application. “We applied the hybrid merge process to several common examples of difficult combinatorial optimization problems. Our algorithm shows superior performance in all instances. It reduces residual energy and reaches more optimal results in shorter time — it really is a win-win,” states Nozomu Togawa.
    Their work will allow industries to solve new, complex optimization problems and help tackle climate-related challenges such as rising energy demand and food shortages, while supporting the realization of the sustainable development goals (SDGs). “For example, we could use this to optimize shipping and delivery planning problems in industries to increase their efficiency while reducing carbon dioxide emissions,” Tatsuhiko Shirai adds.
    This new technology directly increases the number of applications where an Ising machine can feasibly be used, allowing the method to spread further across machine learning and optimization science. The team’s technology not only improves the performance of existing Ising machines but also provides a blueprint for the development of new Ising machine architectures in the near future. With the merge algorithm driving Ising machines into uncharted territory, the future of optimization, and with it sustainability practice, looks bright.
    Story Source:
    Materials provided by Waseda University. Note: Content may be edited for style and length.
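    The merge algorithm itself is not spelled out in this summary, so the sketch below shows only the baseline it builds on: a conventional single-spin-flip update on an Ising Hamiltonian, driven by simulated annealing. The problem size, couplings and annealing schedule are illustrative assumptions.

    ```python
    # Sketch of a conventional single-spin-flip Ising solver (simulated annealing).
    # This is the baseline operation of Ising machines, not the Waseda merge algorithm.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 20                                  # number of spins (illustrative)
    J = rng.normal(size=(n, n))
    J = (J + J.T) / 2                       # symmetric couplings
    np.fill_diagonal(J, 0.0)
    h = rng.normal(size=n)                  # local fields
    s = rng.choice([-1, 1], size=n)         # random initial spin configuration

    def energy(spins):
        """Ising energy H(s) = -1/2 s^T J s - h^T s (the 1/2 avoids double counting)."""
        return -0.5 * spins @ J @ spins - h @ spins

    T, cooling, steps = 5.0, 0.995, 20000
    for _ in range(steps):
        i = rng.integers(n)                 # pick one spin: a single-spin flip
        # Energy change if spin i flips: dE = 2 * s_i * (J[i] @ s + h_i)
        dE = 2 * s[i] * (J[i] @ s + h[i])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]                    # accept the flip
        T *= cooling                        # anneal the temperature

    print("final energy:", energy(s))
    ```

    A multi-spin flip would change several spins in one move; the article above describes the merge process as re-expressing such a move as a single-spin flip on a deformed Hamiltonian, so that hardware built around updates like the one in this loop can remain unchanged.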

  • Scientists hope to mimic the most extreme hurricane conditions

    Winds howl at over 300 kilometers per hour, battering at a two-story wooden house and ripping its roof from its walls. Then comes the water. A 6-meter-tall wave engulfs the structure, knocking the house off its foundation and washing it away.

    That’s the terrifying vision of researchers planning a new state-of-the-art facility to re-create the havoc wreaked by the most powerful hurricanes on Earth. In January, the National Science Foundation awarded a $12.8 million grant to researchers to design a facility that can simulate wind speeds of at least 290 km/h — and can, at the same time, produce deadly, towering storm surges.

    No facility exists that can produce such a one-two punch of extreme wind and water. But it’s an idea whose time has come — and not a moment too soon.

    “It’s a race against time,” says disaster researcher Richard Olson, director of extreme events research at Florida International University, or FIU, in Miami.

    Hurricanes are being made worse by human-caused climate change: They’re getting bigger, wetter, stronger and slower (SN: 9/13/18; SN: 11/11/20). Scientists project that the 2022 Atlantic Ocean hurricane season, spanning June 1 to November 30, will be the seventh straight season with more storms than average. Recent seasons have been marked by an increase in rapidly intensifying hurricanes linked to warming ocean waters (SN: 12/21/20).

    Those trends are expected to continue as the Earth heats up further, researchers say. And coastal communities around the world need to know how to prepare: how to build structures — buildings, bridges, roads, water and energy systems — that are resilient to such punishing winds and waves.

    To help with those preparations, FIU researchers are leading a team of wind and structural engineers, coastal and ocean engineers, computational modelers and resilience experts from around the United States to work out how best to simulate these behemoths. Combining extreme wind and water surges into one facility is uncharted territory, says Ioannis Zisis, a wind engineer at FIU. “There is a need to push the envelope,” Zisis says. But as for how exactly to do it, “the answer is simple: We don’t know. That’s what we want to find out.”

    Prepping for “Category 6”

    It’s not that such extreme storms haven’t been seen on Earth. Just in the last few years, Hurricanes Dorian (2019) and Irma (2017) in the Atlantic Ocean and Super Typhoon Haiyan (2013) in the Pacific Ocean have packed wind speeds well over 290 km/h. Such ultraintense storms are sometimes referred to as “category 6” hurricanes, though that’s not an official designation.

    The National Oceanic and Atmospheric Administration, or NOAA, rates hurricanes in the Atlantic and eastern Pacific oceans on a scale of 1 to 5, based on their wind speeds and how much damage those winds might do. Each category spans an increment of roughly 30 km/h.  

    Category 1 hurricanes, with wind speeds of 119 to 153 km/h, produce “some damage,” bringing down some power lines, toppling trees and perhaps knocking roof shingles or vinyl siding off a house. Category 5 storms, with winds starting at 252 km/h, cause “catastrophic damage,” bulldozing buildings and potentially leaving neighborhoods uninhabitable for weeks to months. (A short sketch mapping wind speeds to these categories appears at the end of this story.)

    But 5 is as high as it gets on the official scale; after all, what could be more devastating than catastrophic damage? That means that even monster storms like 2019’s Hurricane Dorian, which flattened the Bahamas with wind speeds of up to nearly 300 km/h, are still considered category 5 (SN: 9/3/19).

    “Strictly speaking, I understand that [NOAA doesn’t] see the need for a category 6,” Olson says. But there is a difference in public perception, he says. “I see it as a different type of storm, a storm that is simply scarier.”

    And labels aside, the need to prepare for these stronger storms is clear, Olson says. “I don’t think anybody wants to be explaining 20 years from now why we didn’t do this,” he says. “We have challenged nature. Welcome to payback.”

    Superstorm simulation

    FIU already hosts the Wall of Wind, a huge hurricane simulator housed in a large hangar anchored at one end by an arc of 12 massive yellow fans. Even at low wind speeds — say, around 50 km/h — the fans generate a loud, unsettling hum. At full blast, those fans can generate wind speeds of up to 252 km/h — equivalent to a low-grade category 5 hurricane.

    Inside, researchers populate the hangar with structures mimicking skyscrapers, houses and trees, or shapes representing the bumps and dips of the ground surface. Engineers from around the world visit the facility to test out the wind resistance of their own creations, watching as the winds pummel at their structural designs.

    [Image: Twelve fans tower over one end of the Wall of Wind, a large experimental facility at Florida International University in Miami. There, winds as fast as 252 kilometers per hour let researchers re-create conditions experienced during a low-grade category 5 hurricane. Credit: NSF-NHERI Wall of Wind/FIU]

    It’s one of eight facilities in a national network of laboratories that study the potential impacts of wind, water and earthquake hazards, collectively called the U.S. Natural Hazards Engineering Research Infrastructure, or NHERI.

    The Wall of Wind is designed for full-scale wind testing of entire structures. Another wind machine, hosted at the University of Florida in Gainesville, can zoom in on the turbulent behavior of winds right at the boundary between the atmosphere and ground. Then there are the giant tsunami- and storm surge–simulating water wave tanks at Oregon State University in Corvallis.

    The new facility aims to build on the shoulders of these giants, as well as on other experimental labs around the country. The design phase is projected to take four years, as the team ponders how to ramp up wind speeds — possibly with more, or more powerful fans than the Wall of Wind’s — and how to combine those gale-force winds and massive water tanks in one experimental space.

    Existing labs that study wind and waves together, albeit on a much smaller scale, can offer some insight into that aspect of the design, says Forrest Masters, a wind engineer at the University of Florida and the head of that institution’s NHERI facility.

    This design phase will also include building a scaled-down version of the future lab as proof of concept. Building the full-scale facility will require a new round of funding and several more years.

    Past studies of the impacts of strong windstorms tend to take one of three approaches: making field observations of the aftermath of a given storm; building experimental facilities to re-create storms; and using computational simulations to visualize how those impacts might play out over large geographical regions. Each of these approaches has strengths and limitations, says Tracy Kijewski-Correa, a disaster risk engineer at the University of Notre Dame in Indiana.

    “In this facility, we want to bring together all of these methodologies,” to get as close as possible to recreating what Mother Nature can do, Kijewski-Correa says.  

    It’s a challenging engineering problem, but an exciting one. “There’s a lot of enthusiasm for this in the broader scientific community,” Masters says. “If it gets built, nothing like it will exist.”
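
    Returning to the NOAA scale described earlier in this story: the article gives the category 1 and category 5 wind-speed thresholds, while the intermediate boundaries below are taken from the published Saffir-Simpson scale. The function is only a convenience illustration, not anything the planned facility uses.

    ```python
    # Sketch: map a sustained wind speed (km/h) to a Saffir-Simpson category.
    # Category 1 and 5 thresholds come from the article; 2-4 from the published scale.
    def saffir_simpson_category(wind_kmh: float) -> str:
        if wind_kmh < 119:
            return "below hurricane strength"
        if wind_kmh <= 153:
            return "category 1"
        if wind_kmh <= 177:
            return "category 2"
        if wind_kmh <= 208:
            return "category 3"
        if wind_kmh <= 251:
            return "category 4"
        return "category 5"   # the official scale stops here; there is no category 6

    # Hurricane Dorian's roughly 300 km/h winds still land in category 5.
    print(saffir_simpson_category(300))
    ```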

  • Algorithms help to distinguish diseases at the molecular level

    In today’s medicine, doctors define and diagnose most diseases on the basis of symptoms. However, that does not necessarily mean that the illnesses of patients with similar symptoms will have identical causes or demonstrate the same molecular changes. In biomedicine, one often speaks of the molecular mechanisms of a disease. This refers to changes in the regulation of genes, proteins or metabolic pathways at the onset of illness. The goal of stratified medicine is to classify patients into various subtypes at the molecular level in order to provide more targeted treatments.
    New machine learning algorithms can help extract disease subtypes from large pools of patient data. They are designed to independently recognize patterns and correlations in extensive clinical measurements. The LipiTUM junior research group, headed by Dr. Josch Konstantin Pauling of the Chair for Experimental Bioinformatics, has developed an algorithm for this purpose.
    Complex analysis via automated web tool
    Their method combines the results of existing algorithms to obtain more precise and robust predictions of clinical subtypes. This unifies the characteristics and advantages of each algorithm and eliminates their time-consuming adjustment. “This makes it much easier to apply the analysis in clinical research,” reports Dr. Pauling. “For that reason, we have developed a web-based tool that permits online analysis of molecular clinical data by practitioners without prior knowledge of bioinformatics.”
    On the website (https://exbio.wzw.tum.de/mosbi/), researchers can submit their data for automated analysis and use the results to interpret their studies. “Another important aspect for us was the visualization of the results. Previous approaches were not capable of generating intuitive visualizations of relationships between patient groups, clinical factors and molecular signatures. This will change with the web-based visualization produced by our MoSBi tool,” says Tim Rose, a scientist at the TUM School of Life Sciences. MoSBi stands for “Molecular Signatures using Biclustering”; biclustering, the technique behind the algorithm, groups patients and molecular features simultaneously. (A toy biclustering example appears at the end of this story.)
    Application for clinically relevant questions
    With the tool, researchers can now, for example, represent data from cancer studies and simulations for various scenarios. They have already demonstrated the potential of their method in a large-scale clinical study. In a cooperative study conducted with researchers from the Max Planck Institute in Dresden, the Technical University of Dresden and the Kiel University Clinic, they studied the change in lipid metabolism in the liver of patients with non-alcoholic fatty liver disease (NAFLD).
    This widespread disease is associated with obesity and diabetes. It develops from the non-alcoholic fatty liver (NAFL), in which lipids are deposited in liver cells, to non-alcoholic steatohepatitis (NASH), in which the liver becomes further inflamed, to liver cirrhosis and the formation of tumors. Apart from dietary adjustments, no treatments have been found to date. Because the disease is characterized and diagnosed by the accumulation of various lipids in the liver, it is important to understand their molecular composition.
    Biomarkers for liver disease
    Using the MoSBi method, the researchers were able to demonstrate the heterogeneity of the livers of patients in the NAFL stage at the molecular level. “From a molecular standpoint, the liver cells of many NAFL patients were almost identical to those of NASH patients, while others were still largely similar to healthy patients. We could also confirm our predictions using clinical data,” says Dr. Pauling. “We were then able to identify two potential lipid biomarkers for disease progression.” This is important for early recognition of the disease and its progression, and for the development of targeted treatments.
    The research group is already working on further applications of its method to gain a better understanding of other diseases. “In the future, algorithms will play an even greater role in biomedical research than they already do today. They can make it significantly easier to detect complex mechanisms and find more targeted treatment approaches,” says Dr. Pauling.
    Story Source:
    Materials provided by Technical University of Munich (TUM). Note: Content may be edited for style and length.
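    MoSBi itself is accessible through the web tool linked above and its code is not reproduced here. To give a feel for what biclustering does in this setting, the sketch below runs a standard spectral biclustering algorithm from scikit-learn on a simulated patient-by-lipid matrix; the data, cluster counts and algorithm choice are illustrative assumptions, not the MoSBi method.

    ```python
    # Toy biclustering example: group patients and lipid features jointly.
    # Simulated data; MoSBi combines several biclustering algorithms, this uses one.
    import numpy as np
    from sklearn.cluster import SpectralBiclustering

    rng = np.random.default_rng(42)
    n_patients, n_lipids = 60, 40
    X = rng.gamma(2.0, 1.0, size=(n_patients, n_lipids))   # positive "abundances"
    X[:20, :10] += 2.0   # plant a block: 20 patients with 10 elevated lipids

    model = SpectralBiclustering(n_clusters=(2, 2), random_state=0)
    model.fit(X)

    patient_groups = model.row_labels_    # candidate molecular subtype per patient
    lipid_groups = model.column_labels_   # candidate signature per lipid species
    print("patients per subtype:", np.bincount(patient_groups))
    print("lipids per signature:", np.bincount(lipid_groups))
    ```

    Each row label is a candidate patient subtype and each column label a candidate lipid signature, roughly the kind of patient and signature grouping described above.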

  • A quarter of the world's Internet users rely on infrastructure that is susceptible to attacks

    About a quarter of the world’s Internet users live in countries that are more susceptible than previously thought to targeted attacks on their Internet infrastructure. Many of the at-risk countries are located in the Global South.
    That’s the conclusion of a sweeping, large-scale study conducted by computer scientists at the University of California San Diego. The researchers surveyed 75 countries.
    “We wanted to study the topology of the Internet to find weak links that, if compromised, would expose an entire nation’s traffic,” said Alexander Gamero-Garrido, the paper’s first author, who earned his Ph.D. in computer science at UC San Diego.
    Researchers presented their findings at the Passive and Active Measurement Conference 2022 online this spring.
    The structure of the Internet can differ dramatically in different parts of the world. In many developed countries, such as the United States, many Internet providers compete to serve large numbers of users. These networks are directly connected to one another and exchange content, a process known as direct peering. All of the providers can also plug directly into the world’s Internet infrastructure.
    “But a large portion of the Internet doesn’t function with peering agreements for network connectivity,” Gamero-Garrido pointed out.
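    The study’s measurement methodology is not detailed in this summary. As a rough illustration of the “weak link” idea, the sketch below builds a small, invented provider-to-customer graph with networkx and counts how many end networks sit downstream of each transit provider; the network names and topology are hypothetical.

    ```python
    # Toy illustration of Internet-topology exposure: how many customer networks
    # depend, directly or indirectly, on each transit provider. Invented topology,
    # not the UC San Diego measurement methodology.
    import networkx as nx

    # Directed edges point from provider to customer.
    provider_to_customer = [
        ("TransitA", "RegionalISP1"), ("TransitA", "RegionalISP2"),
        ("RegionalISP1", "AccessNet1"), ("RegionalISP1", "AccessNet2"),
        ("RegionalISP2", "AccessNet3"),
        ("TransitB", "AccessNet4"),     # a second, smaller transit provider
    ]
    G = nx.DiGraph(provider_to_customer)

    # Stub networks (no customers of their own) are where end users attach.
    stubs = {n for n in G if G.out_degree(n) == 0}

    for node in sorted(G):
        cone = nx.descendants(G, node) & stubs   # stub networks reachable downstream
        if cone:
            print(f"{node}: {len(cone)}/{len(stubs)} stub networks in its customer cone")
    ```

    A country whose access networks mostly fall inside one provider’s customer cone is exposed to exactly the kind of single weak link that, in Gamero-Garrido’s words above, could expose an entire nation’s traffic if compromised.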

  • AI learns coral reef 'song'

    Artificial Intelligence (AI) can track the health of coral reefs by learning the “song of the reef,” new research shows.
    Coral reefs have a complex soundscape — and even experts have to conduct painstaking analysis to measure reef health based on sound recordings.
    In the new study, University of Exeter scientists trained a computer algorithm using multiple recordings of healthy and degraded reefs, allowing the machine to learn the difference.
    The computer then analysed a host of new recordings, and successfully identified reef health 92% of the time.
    The team used this to track the progress of reef restoration projects.
    “Coral reefs are facing multiple threats including climate change, so monitoring their health and the success of conservation projects is vital,” said lead author Ben Williams.
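    The Exeter team’s recordings and trained model are not included in this summary. As a purely illustrative sketch, the snippet below trains a classifier to separate simulated “healthy” and “degraded” clips using coarse spectral features; the synthetic audio, features and classifier are assumptions, not the study’s approach.

    ```python
    # Sketch: classify simulated reef recordings as "healthy" vs. "degraded"
    # from coarse spectral features. Synthetic audio stands in for real recordings.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    sr, clip_seconds, n_clips = 8000, 2, 120            # assumed recording setup

    def band_energies(clip, n_bands=16):
        """Split the clip's power spectrum into bands and return log energy per band."""
        spectrum = np.abs(np.fft.rfft(clip)) ** 2
        return np.log([band.sum() + 1e-12 for band in np.array_split(spectrum, n_bands)])

    X, y = [], []
    for _ in range(n_clips):
        healthy = rng.random() < 0.5
        clip = rng.normal(size=sr * clip_seconds)        # broadband background noise
        if healthy:                                      # crude stand-in for reef chorus
            t = np.arange(sr * clip_seconds) / sr
            clip += 0.5 * np.sin(2 * np.pi * 900 * t)
        X.append(band_energies(clip))
        y.append(int(healthy))

    scores = cross_val_score(RandomForestClassifier(random_state=0),
                             np.array(X), np.array(y), cv=5)
    print("cross-validated accuracy:", round(scores.mean(), 2))
    ```

    In the real study the inputs are field recordings of reef soundscapes rather than synthetic noise.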

  • High altitudes may be a climate refuge for some birds, but not these hummingbirds

    Cooler, higher locales may not be very welcoming to some hummingbirds trying to escape rising temperatures and other effects of climate change.

    Anna’s hummingbirds live no higher than about 2,600 meters above sea level. If the birds attempt to expand their range to include higher altitudes, they may struggle to fly well in the thinner air, researchers report May 26 in the Journal of Experimental Biology.

    These hummingbirds have expanded their range in the past. Once only found in Southern California, the birds now live as far north as Vancouver, says Austin Spence, an ecologist at the University of California, Davis. That expansion is probably due to climate change and people using feeders to attract hummingbirds, he says.

    Spence and colleagues collected 26 Anna’s hummingbirds (Calypte anna) from different elevations in the birds’ natural range in California. The team transported the birds to an aviary about 1,200 meters above sea level and measured their metabolic rate when hovering. After relocating the hummingbirds to a field station at 3,800 meters altitude, the researchers let the birds rest for at least 12 hours and then measured that rate again.

    The rate was 37 percent lower, on average, at the higher elevation than at the aviary, even though the birds should have been working harder to stay aloft in the thinner air (SN: 2/8/18). At higher altitudes, hovering, which takes a lot of energy compared with other forms of flight, is more challenging and requires even more energy, Spence says. The decrease in metabolic rate shows that the birds’ hovering performance was suffering, he says. “Low oxygen and low air pressure may be holding them back as they try to move upslope.”

    Additional work is needed to see whether the birds might be able to better adjust if given weeks or months to acclimate to the conditions at gradually higher altitudes.

  • Agriculture tech use opens possibility of digital havoc

    Wide-ranging use of smart technologies is raising global agricultural production but international researchers warn this digital-age phenomenon could reap a crop of another kind — cybersecurity attacks.
    Complex IT and mathematical modelling at King Abdulaziz University in Saudi Arabia, Aix-Marseille University in France, and Flinders University in South Australia has highlighted the risks in a new article in the open-access journal Sensors.
    “Smart sensors and systems are used to monitor crops, plants, the environment, water, soil moisture, and diseases,” says lead author Professor Abel Alahmadi from King Abdulaziz University.
    “The transformation to digital agriculture would improve the quality and quantity of food for the ever-increasing human population, which is forecast to reach 10.9 billion by 2100.”
    This progress in production, genetic modification for drought-resistant crops, and other technologies is nonetheless vulnerable to cyber-attack, particularly if the ag-tech sector doesn’t take the kind of precautions already common in other corporate and defence sectors, researchers warn.
    Flinders University researcher Dr Saeed Rehman says the rise of internet connectivity and smart low-power devices has facilitated the shift of many labour-intensive food production jobs into the digital domain — including modern techniques for accurate irrigation, soil and crop monitoring using drone surveillance.
    “However, we should not overlook security threats and vulnerabilities to digital agriculture, in particular possible side-channel attacks specific to ag-tech applications,” says Dr Rehman, an expert in cybersecurity and networking.
    “Digital agriculture is not immune to cyber-attack, as seen by interference to a US watering system, a meatpacking firm, wool broker software and an Australian beverage company.”
    “Extraction of cryptographic or sensitive information from the operation of physical hardware is termed side-channel attack,” adds Flinders co-author Professor David Glynn.
    “These attacks could be easily carried out with physical access to devices, which the cybersecurity community has not explicitly investigated.”
    The researchers recommend investment in precautions and awareness of the vulnerabilities of digital agriculture to cyber-attack, given the potentially serious effects on the general population in terms of food supply, labour and flow-on costs.
    Story Source:
    Materials provided by Flinders University. Note: Content may be edited for style and length.
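    To make the term concrete, here is the textbook illustration of a timing side channel, unrelated to any specific ag-tech system: a secret comparison that exits at the first mismatching byte leaks, through timing alone, how many leading bytes of a guess are correct, whereas a constant-time comparison does not.

    ```python
    # Classic timing side channel: an early-exit comparison leaks how many leading
    # bytes of a guess match the secret. Generic illustration, not an ag-tech attack.
    import hmac
    import time

    SECRET = b"field-gateway-key"   # hypothetical device secret

    def naive_equal(a: bytes, b: bytes) -> bool:
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:
                return False        # early exit: running time depends on match length
            time.sleep(0.0005)      # exaggerate per-byte work so the leak is visible
        return True

    def time_guess(guess: bytes, trials: int = 20) -> float:
        start = time.perf_counter()
        for _ in range(trials):
            naive_equal(SECRET, guess)
        return time.perf_counter() - start

    wrong_prefix = b"x" * len(SECRET)
    right_prefix = b"fiel" + b"x" * (len(SECRET) - 4)
    print("no matching prefix:   ", f"{time_guess(wrong_prefix):.3f}s")
    print("4-byte matching prefix:", f"{time_guess(right_prefix):.3f}s")  # noticeably slower

    # Mitigation: compare secrets in constant time.
    print(hmac.compare_digest(SECRET, right_prefix))
    ```

    An attacker who can repeatedly time such a check can recover the secret one byte at a time, which is why constant-time comparisons such as hmac.compare_digest are the standard defence.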