More stories

  • Lack of computer access linked to poorer mental health in young people during COVID-19 pandemic

    Cambridge researchers have highlighted how lack of access to a computer was linked to poorer mental health among young people and adolescents during COVID-19 lockdowns.
    The team found that young people faced the greatest difficulties at the end of 2020, and that the mental health of those without access to a computer tended to deteriorate more than that of peers who did have access.
    The COVID-19 pandemic had a significant effect on young people’s mental health, with evidence of rising levels of anxiety, depression, and psychological distress. Adolescence is a period when people are particularly vulnerable to developing mental health disorders, which can have long-lasting consequences into adulthood. In the UK, the mental health of children and adolescents was already deteriorating before the pandemic, but the proportion of people in this age group likely to be experiencing a mental health disorder increased from 11% in 2017 to 16% in July 2020.
    The pandemic led to the closure of schools and an increase in online schooling, the impacts of which were not felt equally. Those adolescents without access to a computer faced the greatest disruption: in one study 30% of school students from middle-class homes reported taking part in live or recorded school lessons daily, while only 16% of students from working-class homes reported doing so.
    In addition to school closures, lockdown often meant that young people could not meet their friends in person. During these periods, online and digital forms of interaction with peers, such as through video games and social media, are likely to have helped reduce the impact of these social disruptions.
    Tom Metherell, who at the time of the study was an undergraduate student at Fitzwilliam College, University of Cambridge, said: “Access to computers meant that many young people were still able to ‘attend’ school virtually, carry on with their education to an extent and keep up with friends. But anyone who didn’t have access to a computer would have been at a significant disadvantage, which would only risk increasing their sense of isolation.”
    To examine in detail the impact of digital exclusion on young people’s mental health, Metherell and colleagues drew on data from 1,387 10-to-15-year-olds collected as part of Understanding Society, a large UK-wide longitudinal survey. They focused on access to computers rather than smartphones, since schoolwork is largely possible only on a computer, while at this age most social interaction happens in person at school.

  • Scientists promote FAIR standards for managing artificial intelligence models

    New data standards have been created for AI models.
    Aspiring bakers are frequently called upon to adapt award-winning recipes based on differing kitchen setups. Someone might use an eggbeater instead of a stand mixer to make prize-winning chocolate chip cookies, for instance.
    Being able to reproduce a recipe in different situations and with varying setups is critical for both talented chefs and computational scientists, the latter of whom are faced with a similar problem of adapting and reproducing their own “recipes” when trying to validate and work with new AI models. These models have applications in scientific fields ranging from climate analysis to brain research.
    “When we talk about data, we have a practical understanding of the digital assets we deal with,” said Eliu Huerta, scientist and lead for Translational AI at the U.S. Department of Energy’s (DOE) Argonne National Laboratory. “With an AI model, it’s a little less clear; are we talking about data structured in a smart way, or is it computing, or software, or a mix?”
    In a new study, Huerta and his colleagues have articulated a new set of standards for managing AI models. Adapted from recent research on automated data management, these standards are called FAIR, which stands for findable, accessible, interoperable and reusable.
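    To make the four letters concrete, the sketch below shows, in Python, what a FAIR-style metadata record for a trained model might contain. It is a hypothetical illustration rather than the schema described in the study; every field name, identifier and URL is a placeholder.

```python
# Hypothetical sketch of FAIR-style metadata for a trained AI model.
# Field names, identifiers and URLs are placeholders, not the study's schema.

model_record = {
    "findable": {
        "identifier": "doi:10.xxxx/example-model",     # persistent ID (placeholder)
        "title": "Example image classifier",
        "keywords": ["AI", "FAIR", "example"],
    },
    "accessible": {
        "weights_url": "https://example.org/models/example-model.pt",
        "license": "MIT",
    },
    "interoperable": {
        "framework": "PyTorch",
        "exchange_format": "ONNX",    # a standard exchange format eases reuse across tools
    },
    "reusable": {
        "training_data": "doi:10.xxxx/example-dataset",
        "environment": "python=3.10, torch=2.1",
        "provenance": "commit 0123abc of https://example.org/repo",
    },
}

def missing_fields(record: dict) -> list[str]:
    """Return the FAIR entries that are absent or empty in a metadata record."""
    return [
        f"{aspect}.{key}"
        for aspect, entries in record.items()
        for key, value in entries.items()
        if not value
    ]

if __name__ == "__main__":
    print("Missing metadata:", missing_fields(model_record) or "none")
```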
    “By making AI models FAIR, we no longer have to build each system from the ground up each time,” said Argonne computational scientist Ben Blaiszik. “It becomes easier to reuse concepts from different groups, helping to create cross-pollination across teams.”
    According to Huerta, the fact that many AI models are currently not FAIR poses a challenge to scientific discovery. “For many studies that have been done to date, it is difficult to gain access to and reproduce the AI models that are referenced in the literature,” he said. “By creating and sharing FAIR AI models, we can reduce the amount of duplication of effort and share best practices for how to use these models to enable great science.”

  • A dual boost for optical delay scanning

    Various applications of pulsed laser sources rely on the ability to produce a series of pulse pairs with a stepwise increasing delay between them. Implementing such optical delay scanning with high precision is demanding, in particular for long delays. Addressing this challenge, ETH physicists have developed a versatile ‘dual-comb’ laser that combines a wide scanning range with high power, low noise, stable operation, and ease of use — thereby offering bright prospects for practical uses.
    Ultrafast laser technology has enabled a trove of methods for precision measurements. These include in particular a broad class of pulsed-laser experiments in which a sample is excited and, after a variable amount of time, the response is measured. In such studies, the delay between the two pulses should typically cover the range from femtoseconds to nanoseconds. In practice, scanning the delay time over a range that broad in a repeatable and precise manner is a significant challenge. A team of researchers in the group of Prof. Ursula Keller in the Department of Physics at ETH Zurich, with main contributions from Dr. Justinas Pupeikis, Dr. Benjamin Willenberg and Dr. Christopher Phillips, has now taken a major step towards a solution that has the potential to be a game changer for a wide range of practical applications. Writing in Optica, they recently introduced and demonstrated a versatile laser design that offers both outstanding specifications and a low-complexity setup that runs stably over many hours.
    The long path to long delays
    The conceptually simplest solution to scanning optical delays is based on a laser whose output is split into two pulses. While one of them takes a fixed route to the target, the optical path for the second pulse is varied with linearly displacing mirrors. The longer the path between mirrors, the later the laser pulse arrives at the target and the longer the delay relative to the first pulse. The problem, however, is that light travels at famously high speed, covering some 0.3 metres per nanosecond (in air). For mechanical delay lines this means that scanning to delays of up to several nanoseconds requires large devices with intricate and typically slow mechanical constructions.
    An elegant way to avoid complex constructions of that kind is to use a pair of ultrashort-pulse lasers that emit trains of pulses at slightly different repetition rates. If, say, the first pulses emerging from each of the lasers are perfectly synchronized, then the second pair has a delay between the pulses that corresponds to the difference in repetition times of the two lasers. The next pair of pulses has twice that delay between them, and so on. In this manner, a perfectly linear and fast scan of optical delays without moving parts is possible, at least in theory. The most refined type of laser system generating two such pulse trains is known as a dual comb, in reference to the spectral structure of the output, which consists of a pair of optical frequency combs.
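    The arithmetic behind this pulse-pair stepping is simple; the sketch below works it through for assumed numbers (a repetition rate of 80 MHz and an offset of 500 Hz, chosen for illustration rather than taken from the ETH laser's specifications).

```python
# Dual-comb delay stepping with assumed numbers (illustrative, not the
# exact parameters of the ETH laser): two pulse trains with repetition
# rates f1 and f2 = f1 + df. The n-th pulse pair is delayed by
# n * (1/f1 - 1/f2), so the delay sweeps linearly from 0 to ~1/f1 and
# the sweep repeats df times per second.

f1 = 80e6    # repetition rate of comb 1 in Hz (assumed)
df = 500.0   # repetition-rate offset in Hz (assumed)
f2 = f1 + df

step = 1 / f1 - 1 / f2       # delay increment per pulse pair
max_delay = 1 / f1           # delay range covered in one sweep
pairs_per_sweep = f1 / df    # pulse pairs before the combs realign
sweep_rate = df              # full delay scans per second

print(f"delay step per pulse pair : {step * 1e15:.1f} fs")
print(f"delay range per sweep     : {max_delay * 1e9:.2f} ns")
print(f"pulse pairs per sweep     : {pairs_per_sweep:.0f}")
print(f"sweeps per second         : {sweep_rate:.0f} Hz")
```

    With these assumed values, one sweep covers 12.5 ns of delay in roughly 78-fs steps and repeats 500 times per second, consistent with the figures quoted for the demonstration below.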
    Whereas the promise of the dual-comb approach has long been clear, progress towards applications was hindered by challenges related to designing a readily deployable laser system that provides two simultaneously operating combs of the required quality and with high relative stability. Now, Pupeikis et al. made a breakthrough towards such a practical laser, and the key is a new way to generate the two frequency combs in one and the same laser cavity.
    Two from one
    The task the researchers had at hand was to construct a laser source that emits two coherent optical pulse trains that are basically identical in all properties except for that all-important difference in repetition rate. A natural route to achieving this is to create the two combs in the same laser cavity. Various approaches for realizing such laser-cavity multiplexing have been introduced in the past, but these typically require additional components to be placed inside the cavity, which introduces losses and different dispersion characteristics for the two combs, among other issues. The ETH physicists have overcome these issues while still ensuring that the two combs share all of the components inside the cavity.
    They achieved this by inserting into the cavity a ‘biprism’, a device with two separate angles on the surface from which light is reflected. The biprism splits the cavity mode into two parts, and the researchers show that by suitable design of the optical cavity the two combs can be spatially separated on the active intracavity components while still taking a very similar path otherwise. ‘Active components’ refers here to the gain medium, where lasing is induced, and to the so-called SESAM (semiconductor saturable absorber mirror) element, which enables mode-locking and pulse generation. The spatial separation of the modes at these stages means that two combs with distinct spacing can be generated, while most other properties are essentially duplicated. In particular, the two combs have highly correlated timing noise. That is, while imperfections in the temporal comb structure are unavoidably present, they are almost the same for the two combs, making it possible to deal with such noise.
    A gate to practical applications
    An outstanding feature of the novel single-cavity architecture now introduced is that it does not require compromises in laser design. Instead, cavity architectures that are optimal for single-comb operation can be readily adapted for dual-comb use. With that, the new design also represents a major simplification relative to commercial products and opens up a path for the production and deployment of this new class of ultrafast laser sources.
    The benchmarks achieved in the first demonstrations are highly encouraging. The researchers scanned an optical delay of 12.5 ns (equivalent to a distance of 3.75 m in air) with 2-fs precision (less than a micrometre in physical distance) at rates of up to 500 Hz, and with record-high stability for a single-cavity dual-comb laser. The obtained performance, including more than 2.4 W of power per comb, pulse durations below 140 fs, and the demonstrated coupling to an optical parametric oscillator (OPO) for converting the light into a different wavelength regime, underlines the practical potential of the approach for a wide spectrum of measurements, from precision optical ranging (the optical measurement of absolute distance) to high-resolution absorption spectroscopy and nonlinear spectroscopy for sampling ultrafast phenomena.

  • Mimicking life: Breakthrough in non-living materials

    Researchers at the Eelkema Lab have discovered a new process that uses fuel to control non-living materials, much as living cells do. The reaction cycle can easily be applied to a wide range of materials and its rate can be controlled, a breakthrough in the emerging field of such reactions. The discovery is a step towards soft robotics: soft machines that can sense what is happening in their environment and respond accordingly. The chemists published their findings in Nature Communications last month.
    Chemist Rienk Eelkema and his group try to mimic nature, specifically the chemical reactions in living cells that provide the fuel to control the cell. The toolbox of reactions that drive non-living materials in the same way is limited, Eelkema explains. “Up to now, there are only about five types of reactions that are widely used by researchers. Those reactions have two major drawbacks: their rate is difficult to control and they only work on a specific set of molecules.” Eelkema and PhD candidate Benjamin Klemm, lead author of the publication, found a new type of reaction whose rate can be effectively controlled and which also works on a wide range of materials.
    Swelling gel
    “The essence of the reaction cycle is that it can switch a particle between an uncharged and a charged state by adding a chemical fuel,” Eelkema explains. “This allows us to charge materials and thus modify their structures, because like charges repel each other and opposite charges attract. The type and amount of fuel determine the reaction rate, and therefore how long a charge, and thus a given structure, exists.” The researchers used their reaction cycle to charge a hydrogel, for example, after which the charges repelled each other and the gel began to swell.
    Soft robots
    The cycle of chemical reactions could be useful for building soft robots: little devices that are physically soft, like our skin and tissues, and can perform specific functions. “Soft robots do already exist, for example microparticles controlled externally through magnetic or electric fields. But ultimately you’d want a robot to be able to control itself: to see for itself where it is and what is happening and then respond accordingly,” says Eelkema. “You can program our cycle into a particle in advance, then leave it alone, and it performs its function independently as soon as it encounters a signal to do so.”
    Eelkema’s next step is to link the process to the environment by adding signal processing to it: “For example, a polymer particle could contain some components of such a cycle. When it encounters the last part of the reaction, the cycle is completed, serving as a signal to disintegrate or swell up, for example.”
    The definition of life
    Cells of humans or other organisms need energy for a variety of functions: to move, to sense that something is happening or to divide. “This is also the reason why we humans need to eat,” Eelkema explains. “That linking of energy to function takes place through chemical reactions and is what defines living beings. It enables cells to control when and where structures are formed or processes take place, locally and for a limited time.”
    In contrast, non-living materials can exist forever and function without an energy supply. Until a decade ago, there were no processes that could use a chemical fuel to drive interactions in non-living materials. Eelkema: “We introduced that here in Delft, along with a few other places, and since then the field has exploded.”
    Story Source:
    Materials provided by Delft University of Technology.

  • En route to human-environment interaction technology with soft microfingers

    Humans have always been fascinated by scales different from their own, from giant objects such as stars, planets and galaxies to the world of the tiny: insects, bacteria, viruses and other microscopic objects. While the microscope allows us to view and observe the microscopic world, it is still difficult to interact with it directly.
    However, human-robot interaction technology might change all that. Microrobots, for instance, can interact with the environment at much smaller scales than us. Microsensors have been used for measuring forces exerted by insects during activities such as flight or walking. However, most studies so far have only focused on measuring insect behavior rather than a direct insect-microsensor interaction.
    Against this backdrop, researchers from Ritsumeikan University in Japan have now developed a soft micro-robotic finger that enables a more direct interaction with the microworld. The study, led by Professor Satoshi Konishi, was published in Scientific Reports on 10 October 2022. “A tactile microfinger is achieved by using a liquid metal flexible strain sensor. A soft pneumatic balloon actuator acts as an artificial muscle, allowing control and finger-like movement of the sensor. With a robotic glove, a human user can directly control the microfingers. This kind of system allows for a safe interaction with insects and other microscopic objects,” explains Prof. Konishi.
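    As an illustration of the kind of control loop such a system implies, here is a minimal hypothetical sketch in Python: a normalized glove flex value is mapped to an actuator pressure command, and the strain sensor's resistance change is converted to an estimated contact force. All interfaces, constants and calibration values are invented placeholders, not taken from the published setup.

```python
# Hypothetical glove-to-microfinger teleoperation step. Every constant and
# interface here is a placeholder for illustration only.

GAUGE_FACTOR = 2.0             # relative resistance change per unit strain (assumed)
STIFFNESS_N_PER_STRAIN = 0.5   # maps sensed strain to force in newtons (assumed)

def glove_to_pressure(flex_fraction: float, max_pressure_kpa: float = 40.0) -> float:
    """Map a normalized glove flex reading (0..1) to a balloon-actuator pressure command."""
    flex_fraction = min(max(flex_fraction, 0.0), 1.0)
    return flex_fraction * max_pressure_kpa

def force_from_resistance(r_measured: float, r_rest: float) -> float:
    """Estimate contact force from the strain sensor's resistance change."""
    strain = (r_measured - r_rest) / (r_rest * GAUGE_FACTOR)
    return strain * STIFFNESS_N_PER_STRAIN

if __name__ == "__main__":
    # Simulated step: the operator bends a finger halfway and the
    # microfinger's sensor resistance rises slightly on contact.
    pressure = glove_to_pressure(0.5)
    force_n = force_from_resistance(r_measured=10.2, r_rest=10.0)
    print(f"commanded pressure: {pressure:.1f} kPa")
    print(f"estimated contact force: {force_n * 1e3:.1f} mN")
```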
    Using their newly developed microrobot setup, the research team investigated the reaction force of a pill bug as a representative sample of an insect. The pill bug was fixed in place using a suction tool and the microfinger was used to apply a force and measure the reaction force of the bug’s legs.
    The reaction force measured from the legs of the pill bug was approximately 10 mN (millinewtons), which agreed with previously estimated values. While only a proof of concept on a single representative insect, this result shows great promise for realizing direct human interaction with the microworld. Moreover, it could find applications in augmented reality (AR) technology: using robotized gloves and micro-sensing tools such as the microfinger, many AR technologies involving human-environment interaction at the microscale could be realized.
    “With our strain-sensing microfinger, we were able to directly measure the pushing motion and force of the legs and torso of a pill bug — something that has been impossible to achieve previously! We anticipate that our results will lead to further technological development for microfinger-insect interactions, leading to human-environment interactions at much smaller scales,” remarks Prof. Konishi.
    Story Source:
    Materials provided by Ritsumeikan University.

  • Growing pure nanotubes is a stretch, but possible

    Like a giraffe stretching for leaves on a tall tree, making carbon nanotubes reach for food as they grow may lead to a long-sought breakthrough.
    Materials theorists Boris Yakobson and Ksenia Bets at Rice University’s George R. Brown School of Engineering show how putting constraints on growing nanotubes could facilitate a “holy grail” of growing batches with a single desired chirality.
    Their paper in Science Advances describes a strategy by which constraining the carbon feedstock in a furnace would help control the “kite” growth of nanotubes. In this method, the nanotube begins to form at the metal catalyst on a substrate, but lifts the catalyst as it grows, resembling a kite on a string.
    Carbon nanotube walls are basically graphene, its hexagonal lattice of atoms rolled into a tube. Chirality refers to how the hexagons are angled within the lattice, between 0 and 30 degrees. That determines whether the nanotubes are metallic or semiconductors. The ability to grow long nanotubes in a single chirality could, for instance, enable the manufacture of highly conductive nanotube fibers or semiconductor channels of transistors.
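    The conventional way to pin down that angle is with a pair of chiral indices (n, m); the textbook sketch below computes the chiral angle and applies the standard electronic rule that a tube is metallic when n minus m is a multiple of 3. It illustrates the terminology only and is not code from the Rice study.

```python
import math

def chiral_angle_deg(n: int, m: int) -> float:
    """Chiral angle of an (n, m) nanotube: 0 deg for zigzag (m = 0), 30 deg for armchair (n = m)."""
    return math.degrees(math.atan2(math.sqrt(3) * m, 2 * n + m))

def is_metallic(n: int, m: int) -> bool:
    """Standard rule: the tube is metallic when (n - m) is a multiple of 3."""
    return (n - m) % 3 == 0

# A few representative chiralities spanning the 0-30 degree range.
for n, m in [(10, 0), (6, 6), (8, 4), (7, 4)]:
    kind = "metallic" if is_metallic(n, m) else "semiconducting"
    print(f"({n},{m}): {chiral_angle_deg(n, m):5.1f} degrees, {kind}")
```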
    Normally, nanotubes grow in random fashion with single and multiple walls and various chiralities. That’s fine for some applications, but many need “purified” batches that require centrifugation or other costly strategies to separate the nanotubes.
    The researchers suggested that hot carbon feedstock gas fed through moving nozzles could effectively lead nanotubes to grow for as long as the catalyst remains active. Because tubes with different chiralities grow at different speeds, they could then be separated by length, and slower-growing types could be eliminated entirely.

  • Robots are taking over jobs, but not at the rate you might think

    It’s easy to believe that robots are stealing jobs from human workers and drastically disrupting the labor market; after all, you’ve likely heard that chatbots make more efficient customer service representatives and that computer programs are tracking and moving packages without the use of human hands.
    But there’s no need to panic about a pending robot takeover just yet, says a new study from BYU sociology professor Eric Dahlin. Dahlin’s research found that robots aren’t replacing humans at the rate most people think; rather, people are prone to severely exaggerate the pace of the robot takeover.
    The study, recently published in Socius: Sociological Research for a Dynamic World, found that only 14% of workers say they’ve seen their job replaced by a robot. But those who have experienced job displacement due to a robot overstate the effect of robots taking jobs from humans by about three times.
    To understand the relationship between job loss and robots, Dahlin surveyed nearly 2,000 individuals about their perceptions of jobs being replaced by robots. Respondents were first asked to estimate the percentage of employees whose employers have replaced jobs with robots. They were then asked whether their employer had ever replaced their job with a robot.
    Those who had been replaced by a robot (about 14%) estimated that 47% of all jobs have been taken over by robots. Similarly, those who hadn’t experienced job replacement still estimated that 29% of jobs have been supplanted by robots.
    “Overall, our perceptions of robots taking over is greatly exaggerated,” said Dahlin. “Those who hadn’t lost jobs overestimated by about double, and those who had lost jobs overestimated by about three times.”
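    Those factors follow directly from the reported percentages, as the quick check below illustrates; it simply divides the average estimates by the 14% of workers who actually reported being replaced.

```python
# Quick check of the overestimation factors reported above.
actual_share = 0.14        # workers reporting their job was replaced by a robot
estimate_displaced = 0.47  # average estimate among workers who were replaced
estimate_others = 0.29     # average estimate among workers who were not

print(f"displaced workers overestimate by {estimate_displaced / actual_share:.1f}x")
print(f"other workers overestimate by {estimate_others / actual_share:.1f}x")
# Prints ~3.4x and ~2.1x, matching "about three times" and "about double".
```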
    Attention-grabbing headlines predicting a dire future of employment have likely overblown the threat of robots taking over jobs, said Dahlin, who noted that humans’ fear of being replaced by automated work processes dates to the early 1800s.
    “We expect novel technologies to be adopted without considering all of the relevant contextual impediments such as cultural, economic, and government arrangements that support the manufacturing, sale, and use of the technology,” he said. “But just because a technology can be used for something does not mean that it will be implemented.”
    Dahlin says these findings are consistent with previous studies, which suggest that robots aren’t displacing workers. Rather, workplaces are integrating both employees and robots in ways that generate more value for human labor.
    “An everyday example is an autonomous, self-propelled machine roaming the aisles and cleaning floors at your local grocery store,” says Dahlin. “This robot cleans the floors while employees clean under shelves or other difficult-to-reach places.”
    Dahlin says the aviation industry is another good example of robots and humans working together. Airplane manufacturers use robots to paint airplane wings: a robot can apply a coat of paint in 24 minutes, something that would take a human painter hours to accomplish. Humans load and unload the paint while the robot does the painting.
    Story Source:
    Materials provided by Brigham Young University. Original written by Tyler Stahle.

  • Silicon nanochip could treat traumatic muscle loss

    Technology developed by researchers at the Indiana University School of Medicine that can change skin tissue into blood vessels and nerve cells has also shown promise as a treatment for traumatic muscle loss.
    Tissue nanotransfection is a minimally invasive nanochip device that can reprogram tissue function by applying a harmless electric spark to deliver specific genes in a fraction of a second.
    A new study, published in Nature Partner Journals Regenerative Medicine, tested tissue nanotransfection-based gene therapy as a treatment, with the goal of delivering a gene known to be a major driver of muscle repair and regeneration. They found that muscle function improved when tissue nanotransfection was used as a therapy for seven days following volumetric muscle loss in rats. It is the first study to report that tissue nanotransfection technology can be used to generate muscle tissue and demonstrates its benefit in addressing volumetric muscle loss.
    Volumetric muscle loss is the traumatic or surgical loss of skeletal muscle that results in compromised muscle strength and mobility. Incapable of regenerating the amount of lost tissue, the affected muscle undergoes substantial loss of function, thus compromising quality of life. A 20 percent loss in mass can result in an up to 90 percent loss in muscle function.
    Current clinical treatments for volumetric muscle loss are physical therapy or autologous tissue transfer (using a person’s own tissue), the outcomes of which are promising but call for improved treatment regimens.
    “We are encouraged that tissue nanotransfection is emerging as a versatile platform technology for gene delivery, gene editing and in vivo tissue reprogramming,” said Chandan Sen, director of the Indiana Center for Regenerative Medicine and Engineering, associate vice president for research and Distinguished Professor at the IU School of Medicine. “This work proves the potential of tissue nanotransfection in muscle tissue, opening up a new avenue of investigational pursuit that should help in addressing traumatic muscle loss. Importantly, it demonstrates the versatility of the tissue nanotransfection technology platform in regenerative medicine.”
    Sen also leads the regenerative medicine and engineering scientific pillar of the IU Precision Health Initiative and is lead author on the new publication.
    The Indiana Center for Regenerative Medicine and Engineering is home to the tissue nanotransfection technology for in vivo tissue reprogramming, gene delivery and gene editing. So far, tissue nanotransfection has also been achieved in blood vessel and nerve tissue. In addition, recent work has shown that topical tissue nanotransfection can achieve cell-specific gene editing of skin wound tissue to improve wound closure.
    Other study authors include Andrew Clark, Subhadip Ghatak, Poornachander Reddy Guda, Mohamed S. El Masry and Yi Xuan, all of IU, and Amy Y. Sato and Teresita Bellido of Purdue University.
    This work was supported by Department of Defense Discovery Award W81XWH-20-1-251. It is also supported in part by NIH grant DK128845 and Lilly Endowment INCITE (Indiana Collaborative Initiative for Talent Enrichment).
    Story Source:
    Materials provided by Indiana University.