More stories

  • A 'Goldilocks amount' of time spent online could be good for teenagers' wellbeing

    New research from the Department of Sociology in Trinity College Dublin has found further evidence of a relationship between online engagement and mental wellbeing in teenagers. The study, published recently in the journal ‘Computers in Human Behavior’, contributes to mounting international evidence on the dangers of high levels of digital media use.
    Additionally, the researchers found that in today’s connected world, low engagement with digital media is also associated with poor mental health outcomes: adolescents who spend less time online than their peers fare worse. This finding supports the ‘goldilocks’ hypothesis — that digital media use at moderate levels is not intrinsically harmful, and that there is a point between low and high use that is ‘just right’ for young people.
    This is the first time the ‘goldilocks’ theory has been examined in Irish teenagers/young adults. It is also the first study to attempt the integration of both time and online behaviours when examining associations between digital media and mental wellbeing.
    Professor Richard Layte, Professor of Sociology and co-author on the paper, said:
    “Evidence is mounting internationally that online engagement among adolescents may be damaging for mental well-being but the evidence is mixed. Our work provides fresh insights on the impact of digital engagement at the age of 17/18 and the results provide worrying evidence of real harms that require urgent action.”
    “There is a simple narrative out there that more is worse. It is important to emphasise that online engagement is now a normal channel of social participation and non-use has consequences. Our findings also raise the possibility that moderate use is important in today’s digital world and that low levels of online engagement carry their own risks. Now the questions for researchers are how much is too much and how little is too little?”
    The research, drawing on longitudinal data from the Growing Up in Ireland study, looked at the association between adolescents’ online engagement and mental wellbeing in over 6,000 young people, surveyed first at age 13 and again at age 17/18.
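    The article does not reproduce the study's statistical models, but the 'goldilocks' idea, an inverted U linking time online to wellbeing, is typically tested by adding a quadratic term to a regression. The sketch below is a hypothetical illustration of that generic technique; the variable names, simulated data, and the 3-hour optimum are all assumptions, not the study's code or findings.

    ```python
    # Minimal sketch of testing a 'goldilocks' (inverted-U) association:
    # regress wellbeing on hours online plus a squared term. All data
    # here is simulated, not from the Growing Up in Ireland study.
    import numpy as np

    rng = np.random.default_rng(0)
    hours = rng.uniform(0, 8, size=6000)          # hypothetical daily hours online
    wellbeing = -0.5 * (hours - 3.0) ** 2 + rng.normal(0, 1, size=6000)

    # Fit wellbeing = b0 + b1*hours + b2*hours^2 by ordinary least squares.
    X = np.column_stack([np.ones_like(hours), hours, hours ** 2])
    b0, b1, b2 = np.linalg.lstsq(X, wellbeing, rcond=None)[0]

    # A negative b2 is the statistical signature of an inverted U; the
    # fitted 'just right' point is where the curve peaks, at -b1 / (2*b2).
    print(f"quadratic coefficient b2 = {b2:.3f}")
    print(f"estimated optimum: {-b1 / (2 * b2):.2f} hours/day")
    ```

    A significantly negative quadratic coefficient, rather than a simple "more is worse" slope, is what distinguishes the goldilocks pattern from a linear dose-response story.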

  • How structural changes affect the superconducting properties of a metal oxide

    A team led by University of Minnesota Twin Cities researchers has discovered how subtle structural changes in strontium titanate, a metal oxide semiconductor, can alter the material’s electrical resistance and affect its superconducting properties.
    The research can help guide future experiments and materials design related to superconductivity and the creation of more efficient semiconductors for various electronic device applications.
    The study is published in Science Advances, a peer-reviewed, multidisciplinary scientific journal published by the American Association for the Advancement of Science.
    Strontium titanate has been on scientists’ radar for the past 60 years because it displays many interesting properties. For one, it becomes a superconductor, i.e. it conducts electricity smoothly without resistance, at low temperatures and low concentrations of electrons. It also undergoes a structural change at 110 Kelvin (-262 degrees Fahrenheit), meaning the atoms in its crystalline structure change their arrangement. However, scientists are still debating what exactly causes superconductivity in this material on the microscopic level and what happens when its structure changes.
    In this study, the University of Minnesota-led team was able to shine some light on these issues.
    Using a combination of materials synthesis, analysis, and theoretical modeling, the researchers found that the structural change within strontium titanate directly affects how electric current flows through the material. They also revealed how small changes in the concentrations of electrons in the material affect its superconductivity. These insights will ultimately inform future research on this material, including research on its unique superconducting properties.

  • Photonics: Quest for elusive monolayers just got a lot simpler

    One of the most tedious, daunting tasks for undergraduate assistants in university research labs involves looking for hours on end through a microscope at samples of material, trying to find monolayers.
    These two-dimensional materials — less than 1/100,000th the width of a human hair — are highly sought for use in electronics, photonics, and optoelectronic devices because of their unique properties.
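    A quick arithmetic check makes that comparison concrete. The hair width below is an assumed typical value (roughly 70 micrometers), not a figure from the article.

    ```python
    # 1/100,000th of a human hair's width (assuming ~70 micrometers,
    # a typical value) lands in sub-nanometer territory, i.e. roughly
    # the thickness of a single atomic layer.
    hair_width_m = 70e-6
    monolayer_scale_m = hair_width_m / 100_000
    print(f"{monolayer_scale_m * 1e9:.2f} nm")   # ~0.70 nm
    ```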
    “Research labs hire armies of undergraduates to do nothing but look for monolayers,” says Jaime Cardenas, an assistant professor of optics at the University of Rochester. “It’s very tedious, and if you get tired, you might miss some of the monolayers or you might start making misidentifications.”
    Even after all that work, the labs then must double-check the materials with expensive Raman spectroscopy or atomic force microscopy.
    Jesús Sánchez Juárez, a PhD student in the Cardenas Lab, has made life a whole lot easier for those undergraduates, their research labs, and companies that encounter similar difficulties in detecting monolayers.
    The breakthrough technology, an automated scanning device described in Optical Materials Express, can detect monolayers with 99.9 percent accuracy — surpassing any other method to date.

  • COVID-19 superspreader events originate from small number of carriers

    Among the infectious disease terms that have entered the public lexicon, ‘superspreading event’ continues to make headlines years after the first cases of the COVID-19 pandemic. How features of the SARS-CoV-2 virus lead some events to become superspreading events while leaving others relatively benign remains unresolved.
    In Physics of Fluids, by AIP Publishing, researchers in Canada and the United States created a model to connect what biologists have learned about COVID-19 superspreading with how such events have occurred in the real world. They used real-world occupancy data from more than 100,000 places where people gather across 10 U.S. cities to test several features, ranging from viral loads to the occupancy and ventilation of social contact settings.
    They found that 80% of infections occurring at superspreading events arose from only 4% of those who were carrying the virus into the event, called index cases. The top feature driving the wide variability in superspreading events was the number of viral particles found in index cases, followed by the overall occupancy in social contact settings.
    The researchers’ methods take aim at the curious observation that the variability between infection events is higher than one would expect, a situation called overdispersion.
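    The paper's fluid-dynamics model is not reproduced in this summary, but overdispersion itself is commonly illustrated in the superspreading literature with a negative binomial model of secondary infections, in which a small dispersion parameter k concentrates transmission in a few index cases. The sketch below is that generic textbook model with assumed parameter values, not the authors' simulation.

    ```python
    # Illustrative sketch of overdispersion (not the paper's model):
    # draw secondary infections per index case from a negative binomial
    # with mean R0 and small dispersion k.
    import numpy as np

    rng = np.random.default_rng(1)
    R0, k, n_cases = 2.5, 0.1, 100_000   # assumed values, not the study's

    # NumPy's negative_binomial(n, p) has mean n*(1-p)/p, so setting
    # n = k and p = k / (k + R0) yields mean R0 with dispersion k.
    secondary = rng.negative_binomial(k, k / (k + R0), size=n_cases)

    # What fraction of index cases accounts for 80% of all infections?
    counts = np.sort(secondary)[::-1]
    cumulative = np.cumsum(counts) / counts.sum()
    top_fraction = (np.searchsorted(cumulative, 0.8) + 1) / n_cases
    print(f"{top_fraction:.1%} of index cases cause 80% of infections")
    ```

    The smaller k is, the more transmission is concentrated in a handful of carriers, which is the qualitative pattern behind the 80%-from-4% finding above.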
    “It is now well known that COVID-19 is airborne, and that is probably the dominant pathway of transmission,” said author Swetaprovo Chaudhuri. “This paper connects indoor airborne transmission to the evolution of the infection distribution on a population scale and shows the physics of airborne transmission is consistent with the mathematics of overdispersion.”
    The group’s model draws on numerical simulations and research by others on viral loads and the number of virus-laden aerosols ejected by people, as well as data on the occupancy of a restaurant or area from SafeGraph, a company that generates such data from anonymized cell phone signals.
    “While there are uncertainties and unknowns, it appears it is rather hard to prevent a superspreading event if the person carrying high viral load happens to be in a crowded place,” Chaudhuri said.
    Chaudhuri said the findings not only underscore the importance of efforts to curb the spread of the virus but also help describe how integral proper planning can be for each situation.
    “To mitigate such superspreading events, vaccination, ventilation, filtration, mask wearing, reduced occupancy — all are required,” he said. “However, putting them in place is not enough, knowing what size, type, parameters can mitigate risk to certain acceptable levels is important.”
    Story Source:
    Materials provided by American Institute of Physics.

  • Scientists use AI to update vegetation maps for improved wildfire forecasts

    A new technique developed by the National Center for Atmospheric Research (NCAR) uses artificial intelligence to efficiently update the vegetation maps that are relied on by wildfire computer models to accurately predict fire behavior and spread.
    In a recent study, scientists demonstrated the method using the 2020 East Troublesome Fire in Colorado, which burned through land that was mischaracterized in fuel inventories as healthy forest. In fact, the fire, which grew explosively, scorched a landscape that had recently been ravaged by pine beetles and windstorms, leaving significant swaths of dead and downed timber.
    The research team compared simulations of the fire generated by a state-of-the-art wildfire behavior model developed at NCAR, using both the standard fuel inventory for the area and one that was updated with artificial intelligence (AI). The simulations that used the AI-updated fuels did a significantly better job of predicting the area burned by the fire, which ultimately grew to more than 190,000 acres of land on both sides of the Continental Divide.
    “One of our main challenges in wildfire modeling has been to get accurate input, including fuel data,” said NCAR scientist and lead author Amy DeCastro. “In this study, we show that the combined use of machine learning and satellite imagery provides a viable solution.”
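    The article does not detail the study's actual pipeline, but one common pattern for "machine learning plus satellite imagery" is a per-pixel classifier trained on labeled spectral bands. The sketch below uses a random forest on made-up data; the features, labels, and band choices are all hypothetical illustrations, not the study's method.

    ```python
    # Hedged sketch of refreshing a fuel map with machine learning:
    # classify each pixel's fuel type from satellite band values.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(2)

    # Hypothetical training set: per-pixel reflectance in three bands
    # (e.g., red, near-infrared, shortwave-infrared) with fuel labels
    # from ground surveys. None of this is the study's real data.
    X_train = rng.random((1000, 3))
    y_train = rng.integers(0, 3, size=1000)   # 0=healthy forest, 1=beetle-kill, 2=grass

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Apply the trained classifier to fresh imagery to update the
    # fuel inventory pixel by pixel.
    X_new = rng.random((500, 3))
    updated_fuels = model.predict(X_new)
    print(updated_fuels[:10])
    ```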
    The research was funded by the U.S. National Science Foundation, which is NCAR’s sponsor. The modeling simulations were run at the NCAR-Wyoming Supercomputing Center on the Cheyenne system.
    Using satellites to account for pine beetle damage
    For a model to accurately simulate a wildfire, it requires detailed information about the current conditions. This includes the local weather and terrain as well as the characteristics of the plant matter that provides fuel for the flames — what’s actually available to burn and what condition it’s in. Is it dead or alive? Is it moist or dry? What type of vegetation is it? How much is there? How deep is the fuel layered on the ground?

  • Researchers investigate the links between facial recognition and Alzheimer's disease

    In recent years, Alzheimer’s disease has been on the rise throughout the world, and it is rarely diagnosed at an early stage, when it can still be effectively controlled. Using artificial intelligence, KTU researchers conducted a study to identify whether human-computer interfaces could be adapted for people with memory impairments to recognise a visible object in front of them.
    Rytis Maskeliūnas, a researcher at the Department of Multimedia Engineering at Kaunas University of Technology (KTU), considers that the classification of information visible on the face is a daily human function: “While communicating, the face ‘tells’ us the context of the conversation, especially from an emotional point of view, but can we identify visual stimuli based on brain signals?”
    The visual processing of the human face is complex. We can perceive information such as a person’s identity or emotional state by analysing faces. The aim of the study was to analyse a person’s ability to process contextual information from the face and to detect how a person responds to it.
    The face can indicate the first symptoms of the disease
    According to Maskeliūnas, many studies demonstrate that brain diseases can potentially be analysed by examining facial muscle and eye movements, since degenerative brain disorders affect not only memory and cognitive functions but also the cranial nervous system associated with these facial (especially eye) movements.
    Dovilė Komolovaitė, a graduate of KTU Faculty of Mathematics and Natural Sciences, who co-authored the study, shared that the research has clarified whether a patient with Alzheimer’s disease visually processes visible faces in the brain in the same way as individuals without the disease.

  • Multi-spin flips and a pathway to efficient Ising machines

    Combinatorial optimization problems are at the root of many industrial processes and solving them is key to a more sustainable and efficient future. Ising machines can solve certain combinatorial optimization problems, but their efficiency could be improved with multi-spin flips. Researchers have now tackled this difficult problem by developing a merge algorithm that disguises a multi-spin flip as a simpler, single-spin flip. This technology provides optimal solutions to hard computational problems in a shorter time.
    In a rapidly developing world, industries are always trying to optimize their operations and resources. Combinatorial optimization using an Ising machine helps solve certain operational problems, like mapping the most efficient route for a multi-city tour or optimizing delivery of resources. Ising machines operate by mapping the solution space to a spin configuration space and solving the associated spin problem instead. These machines have a wide range of applications in both academia and industry, tackling problems in machine learning, material design, portfolio optimization, logistics, and drug discovery. For larger problems, however, it is still difficult to obtain the optimal solution in a feasible amount of time.
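    To make "spin flips" concrete: conventional Ising machines emulate dynamics in which one spin is flipped at a time. The sketch below is a generic simulated-annealing solver with arbitrary random couplings, shown only as background; it is not the paper's merge algorithm, and the problem size, couplings, and annealing schedule are all assumptions.

    ```python
    # Generic single-spin-flip simulated annealing on an Ising model,
    # the baseline dynamics a conventional Ising machine emulates.
    # Couplings and schedule are arbitrary assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 32
    J = rng.normal(0, 1, (n, n))
    J = np.triu(J, 1) + np.triu(J, 1).T       # symmetric couplings, zero diagonal
    spins = rng.choice([-1, 1], size=n)

    def energy(s):
        # Ising energy H(s) = -1/2 * s^T J s
        return -0.5 * s @ J @ s

    # Anneal: raise the inverse temperature while attempting one
    # single-spin flip per step with the Metropolis acceptance rule.
    for beta in np.linspace(0.1, 5.0, 20_000):
        i = rng.integers(n)
        dE = 2 * spins[i] * (J[i] @ spins)    # energy change if spin i flips
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i] = -spins[i]

    print("final energy:", energy(spins))
    ```

    Flipping several spins at once can escape local minima that single flips cannot, which is why the multi-spin moves discussed next are attractive but hard to support in hardware.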
    Now, while Ising machines can be optimized by integrating multi-spin flips into their hardware, this is a challenging task because it essentially means completely overhauling the software of traditional Ising machines by changing their basic operation. But a team of researchers from the Department of Computer Science and Communications Engineering, Waseda University — consisting of Assistant Professor Tatsuhiko Shirai and Professor Nozomu Togawa — has provided a novel solution to this long-standing problem.
    In their paper, which was published in IEEE Transactions on Computers on 27 May 2022, they engineered a feasible multi-spin flip algorithm by deforming the Hamiltonian (which is an energy function of the Ising model). “We have developed a hybrid algorithm that takes an infeasible multi-spin flip and expresses it in the form of a feasible single-spin flip instead. This algorithm is proposed along with our merge process, in which the original Hamiltonian of a difficult combinatorial problem is deformed into a new Hamiltonian, a problem that the hardware of a traditional Ising machine can easily solve,” explains Tatsuhiko Shirai.
    The newly-developed hybrid Ising processes are fully compatible with current methods and hardware, reducing the challenges to their widespread application. “We applied the hybrid merge process to several common examples of difficult combinatorial optimization problems. Our algorithm shows superior performance in all instances. It reduces residual energy and reaches more optimal results in shorter time — it really is a win-win,” states Nozomu Togawa.
    Their work will allow industries to solve new complex optimization problems and help tackle climate change-related issues such as increased energy demand and food shortages, while supporting the realization of sustainable development goals (SDGs). “For example, we could use this to optimize shipping and delivery planning problems in industries to increase their efficiency while reducing carbon dioxide emissions,” Tatsuhiko Shirai adds.
    This new technology directly increases the number of applications where the Ising machine can be feasibly used to produce solutions. As a result, the Ising machine method can be increasingly used across machine learning and optimization science. The team’s technology not only improves the performance of existing Ising machines, but also provides a blueprint to the development of new Ising machine architectures in the near future. With the merge algorithm driving Ising machines further into new uncharted territories, the future of optimization, and thus sustainability practices, looks bright.
    Story Source:
    Materials provided by Waseda University.

  • Scientists hope to mimic the most extreme hurricane conditions

    Winds howl at over 300 kilometers per hour, battering at a two-story wooden house and ripping its roof from its walls. Then comes the water. A 6-meter-tall wave engulfs the structure, knocking the house off its foundation and washing it away.

    That’s the terrifying vision of researchers planning a new state-of-the-art facility to re-create the havoc wreaked by the most powerful hurricanes on Earth. In January, the National Science Foundation awarded a $12.8 million grant to researchers to design a facility that can simulate wind speeds of at least 290 km/h — and can, at the same time, produce deadly, towering storm surges.

    No facility exists that can produce such a one-two punch of extreme wind and water. But it’s an idea whose time has come — and not a moment too soon.

    “It’s a race against time,” says disaster researcher Richard Olson, director of extreme events research at Florida International University, or FIU, in Miami.

    Hurricanes are being made worse by human-caused climate change: They’re getting bigger, wetter, stronger and slower (SN: 9/13/18; SN: 11/11/20). Scientists project that the 2022 Atlantic Ocean hurricane season, spanning June 1 to November 30, will be the seventh straight season with more storms than average. Recent seasons have been marked by an increase in rapidly intensifying hurricanes linked to warming ocean waters (SN: 12/21/20).

    Those trends are expected to continue as the Earth heats up further, researchers say. And coastal communities around the world need to know how to prepare: how to build structures — buildings, bridges, roads, water and energy systems — that are resilient to such punishing winds and waves.

    To help with those preparations, FIU researchers are leading a team of wind and structural engineers, coastal and ocean engineers, computational modelers and resilience experts from around the United States to work out how best to simulate these behemoths. Combining extreme wind and water surges into one facility is uncharted territory, says Ioannis Zisis, a wind engineer at FIU. “There is a need to push the envelope,” Zisis says. But as for how exactly to do it, “the answer is simple: We don’t know. That’s what we want to find out.”

    Prepping for “Category 6”

    It’s not that such extreme storms haven’t been seen on Earth. Just in the last few years, Hurricanes Dorian (2019) and Irma (2017) in the Atlantic Ocean and Super Typhoon Haiyan (2013) in the Pacific Ocean have packed wind speeds well over 290 km/h. Such ultraintense storms are sometimes referred to as “category 6” hurricanes, though that’s not an official designation.

    The National Oceanic and Atmospheric Administration, or NOAA, rates hurricanes in the Atlantic and eastern Pacific oceans on a scale of 1 to 5, based on their wind speeds and how much damage those winds might do. Each category spans an increment of roughly 30 km/h.  

    Category 1 hurricanes, with wind speeds of 119 to 153 km/h, produce “some damage,” bringing down some power lines, toppling trees and perhaps knocking roof shingles or vinyl siding off a house. Category 5 storms, with winds starting at 252 km/h, cause “catastrophic damage,” bulldozing buildings and potentially leaving neighborhoods uninhabitable for weeks to months.
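    Written as code, the scale is a simple lookup. Only the category 1 and category 5 boundaries are quoted in the text above; the intermediate cutoffs in the sketch are NOAA's standard km/h values.

    ```python
    def saffir_simpson_category(wind_kmh: float) -> int:
        """Return the Saffir-Simpson category (0 = below hurricane strength)."""
        # Category 1 and 5 cutoffs appear in the text above; the
        # intermediate bands are NOAA's standard km/h boundaries.
        for cutoff, category in [(252, 5), (209, 4), (178, 3), (154, 2), (119, 1)]:
            if wind_kmh >= cutoff:
                return category
        return 0

    print(saffir_simpson_category(300))   # Dorian-strength winds: still category 5
    ```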

    But 5 is as high as it gets on the official scale; after all, what could be more devastating than catastrophic damage? That means that even monster storms like 2019’s Hurricane Dorian, which flattened the Bahamas with wind speeds of nearly 300 km/h, are still considered category 5 (SN: 9/3/19).

    “Strictly speaking, I understand that [NOAA doesn’t] see the need for a category 6,” Olson says. But there is a difference in public perception, he says. “I see it as a different type of storm, a storm that is simply scarier.”

    And labels aside, the need to prepare for these stronger storms is clear, Olson says. “I don’t think anybody wants to be explaining 20 years from now why we didn’t do this,” he says. “We have challenged nature. Welcome to payback.”

    Superstorm simulation

    FIU already hosts the Wall of Wind, a huge hurricane simulator housed in a large hangar anchored at one end by an arc of 12 massive yellow fans. Even at low wind speeds — say, around 50 km/h — the fans generate a loud, unsettling hum. At full blast, those fans can generate wind speeds of up to 252 km/h — equivalent to a low-grade category 5 hurricane.

    Inside, researchers populate the hangar with structures mimicking skyscrapers, houses and trees, or shapes representing the bumps and dips of the ground surface. Engineers from around the world visit the facility to test out the wind resistance of their own creations, watching as the winds pummel at their structural designs.

    Twelve fans tower over one end of the Wall of Wind, a large experimental facility at Florida International University in Miami. There, winds as fast as 252 kilometers per hour let researchers re-create conditions experienced during a low-grade category 5 hurricane. (Image: NSF-NHERI Wall of Wind/FIU)

    It’s one of eight facilities in a national network of laboratories that study the potential impacts of wind, water and earthquake hazards, collectively called the U.S. Natural Hazards Engineering Research Infrastructure, or NHERI.

    The Wall of Wind is designed for full-scale wind testing of entire structures. Another wind machine, hosted at the University of Florida in Gainesville, can zoom in on the turbulent behavior of winds right at the boundary between the atmosphere and ground. Then there are the giant tsunami- and storm surge–simulating water wave tanks at Oregon State University in Corvallis.

    The new facility aims to build on the shoulders of these giants, as well as on other experimental labs around the country. The design phase is projected to take four years, as the team ponders how to ramp up wind speeds — possibly with more, or more powerful, fans than the Wall of Wind’s — and how to combine those gale-force winds and massive water tanks in one experimental space.

    Existing labs that study wind and waves together, albeit on a much smaller scale, can offer some insight into that aspect of the design, says Forrest Masters, a wind engineer at the University of Florida and the head of that institution’s NHERI facility.

    This design phase will also include building a scaled-down version of the future lab as proof of concept. Building the full-scale facility will require a new round of funding and several more years.

    Past studies of the impacts of strong windstorms tend to use one of three approaches: making field observations of the aftermath of a given storm; building experimental facilities to re-create storms; and using computational simulations to visualize how those impacts might play out over large geographical regions. Each of these approaches has strengths and limitations, says Tracy Kijewski-Correa, a disaster risk engineer at the University of Notre Dame in Indiana.

    “In this facility, we want to bring together all of these methodologies,” to get as close as possible to recreating what Mother Nature can do, Kijewski-Correa says.  

    It’s a challenging engineering problem, but an exciting one. “There’s a lot of enthusiasm for this in the broader scientific community,” Masters says. “If it gets built, nothing like it will exist.”