More stories

  • Increased use of videoconferencing apps during COVID-19 pandemic led to more fatigue among workers, study finds

    Researchers at Nanyang Technological University, Singapore (NTU Singapore) have found that the increased use of videoconferencing platforms during the COVID-19 pandemic contributed to a higher level of fatigue, as reported by workers.
    Following work-from-home orders issued by governments worldwide during the pandemic, many employees attended meetings virtually using technologies such as Zoom or Microsoft Teams, instead of meeting face-to-face.
    In a survey conducted in December 2020, the NTU research team found that 46.2% of all respondents reported feelings of fatigue or being overwhelmed, tired, or drained from the use of videoconferencing applications.
    The researchers derived the results from a survey of 1,145 Singapore residents in full-time employment who had indicated that they use videoconferencing apps frequently.
    The researchers from the NTU Wee Kim Wee School of Communication and Information (WKWSCI) and its Centre for Information Integrity and the Internet (IN-cube), published their findings in the journal Computers in Human Behavior Reports in June 2022.
    Assistant Professor Benjamin Li of NTU’s WKWSCI, who led the study and is also a member of IN-cube, said: “We were motivated to conduct our study after hearing of increasing reports of fatigue from the use of videoconferencing applications during the pandemic. We found that there was a clear relation between the increased use of videoconferencing and fatigue in Singaporean workers. Our findings are even more relevant in today’s context, as the use of videoconferencing tools is here to stay, due to flexible work arrangements being a continuing trend.”

  • Using AI to diagnose birth defect in fetal ultrasound images

    In a new proof-of-concept study led by Dr. Mark Walker at the University of Ottawa’s Faculty of Medicine, researchers are pioneering the use of an artificial intelligence-based deep learning model as an assistive tool for the rapid and accurate reading of ultrasound images.
    The goal of the team’s study was to demonstrate the potential for deep-learning architecture to support early and reliable identification of cystic hygroma from first trimester ultrasound scans. Cystic hygroma is an embryonic condition that causes the lymphatic vascular system to develop abnormally. It’s a rare and potentially life-threatening disorder that leads to fluid swelling around the head and neck.
    The birth defect can typically be easily diagnosed prenatally during an ultrasound appointment, but Dr. Walker — co-founder of the OMNI Research Group (Obstetrics, Maternal and Newborn Investigations) at The Ottawa Hospital — and his research group wanted to test how well AI-driven pattern recognition could do the job.
    “What we demonstrated was in the field of ultrasound we’re able to use the same tools for image classification and identification with a high sensitivity and specificity,” says Dr. Walker, who believes their approach might be applied to other fetal anomalies generally identified by ultrasonography.
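    Sensitivity and specificity are the two numbers such a screening model lives or dies by: the fraction of true cases it catches, and the fraction of normal scans it correctly clears. As a rough illustration of what they measure (with made-up labels, not the study’s data or model), here is a minimal Python sketch:

        # Minimal sketch: sensitivity and specificity of a binary classifier
        # such as an ultrasound screening model. Labels are invented.
        def sensitivity_specificity(y_true, y_pred):
            tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
            fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
            tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
            fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
            return tp / (tp + fn), tn / (tn + fp)

        # 1 = cystic hygroma present, 0 = normal scan (made-up example labels)
        y_true = [1, 1, 1, 0, 0, 0, 0, 1]
        y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
        sens, spec = sensitivity_specificity(y_true, y_pred)
        print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")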
    Story Source:
    Materials provided by University of Ottawa. Note: Content may be edited for style and length.

  • Researchers learn to control electron spin at room temperature to make devices more efficient and faster

    As our devices become smaller, faster, more energy-efficient, and capable of holding larger amounts of data, spintronics may keep that trajectory going. Whereas electronics is based on the flow of electrons, spintronics is based on the spin of electrons.
    An electron has a spin degree of freedom, meaning that it not only holds a charge but also acts like a little magnet. In spintronics, a key task is to use an electric field to control electron spin and rotate the north pole of the magnet in any given direction.
    The spintronic field effect transistor harnesses the so-called Rashba or Dresselhaus spin-orbit coupling effect, which implies that electron spin can be controlled with an electric field. Although the method holds promise for efficient, high-speed computing, certain challenges must be overcome before the technology reaches its full potential: miniature yet powerful, and eco-friendly.
    For decades, scientists have been attempting to use electric fields to control spin at room temperature but achieving effective control has been elusive. In research recently published in Nature Photonics, a research team led by Jian Shi and Ravishankar Sundararaman of Rensselaer Polytechnic Institute and Yuan Ping of the University of California at Santa Cruz took a step forward in solving the dilemma.
    “You want the Rashba or Dresselhaus magnetic field to be large to make the electron spin precess quickly,” said Dr. Shi, associate professor of materials science and engineering. “If it’s weak, the electron spin precesses slowly and it would take too much time to turn the spin transistor on or off. However, often a larger internal magnetic field, if not arranged well, leads to poor control of electron spin.”
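    To make that trade-off concrete: in the textbook Datta-Das picture of a spin field effect transistor, the angle through which a spin precesses while crossing a channel of length L is theta = 2 m* alpha L / hbar^2, where alpha is the Rashba parameter and m* the effective mass, so a stronger Rashba field flips the spin over a shorter distance. A back-of-the-envelope sketch (the alpha value and effective mass below are generic illustrative assumptions, not figures from this study):

        import math

        HBAR = 1.054571817e-34   # reduced Planck constant, J*s
        M_E = 9.1093837015e-31   # electron rest mass, kg
        EV = 1.602176634e-19     # joules per electronvolt

        def precession_angle(alpha_ev_m, length_m, m_eff=0.05 * M_E):
            """Datta-Das spin precession angle (radians) across a channel.

            alpha_ev_m: Rashba parameter in eV*m (assumed magnitude ~1e-11)
            length_m:   channel length in meters
            m_eff:      effective mass (0.05 m_e is a typical semiconductor value)
            """
            return 2.0 * m_eff * (alpha_ev_m * EV) * length_m / HBAR**2

        alpha = 1e-11  # eV*m: an assumed, representative Rashba strength
        for L in (0.1e-6, 0.5e-6, 1.0e-6):
            theta = precession_angle(alpha, L)
            print(f"L = {L * 1e6:.1f} um -> theta = {theta / math.pi:.2f} pi")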
    The team demonstrated that a ferroelectric van der Waals layered perovskite crystal carrying unique crystal symmetry and strong spin-orbit coupling was a promising model material to understand the Rashba-Dresselhaus spin physics at room temperature. Its nonvolatile and reconfigurable spin-related room temperature optoelectronic properties may inspire the development of important design principles in enabling a room-temperature spin field effect transistor.
    Simulations revealed that this material was particularly exciting, according to Dr. Sundararaman, associate professor of materials science and engineering. “The internal magnetic field is simultaneously large and perfectly distributed in a single direction, which allows the spins to rotate predictably and in perfect concert,” he said. “This is a key requirement to use spins for reliably transmitting information.”
    “It’s a step forward toward the practical realization of a spintronic transistor,” Dr. Shi said.
    The first authors of this article include graduate student Lifu Zhang and postdoctoral associate Jie Jiang from Dr. Shi’s group, as well as graduate student Christian Multunas from Dr. Sundararaman’s group.
    This work was supported by the United States Army Research Office (Physical Properties of Materials program by Dr. Pani Varanasi), the Air Force Office of Scientific Research, and the National Science Foundation.
    Story Source:
    Materials provided by Rensselaer Polytechnic Institute. Original written by Katie Malatino. Note: Content may be edited for style and length.

  • Most complex protein knots

    Theoretical physicists at Johannes Gutenberg University Mainz have put Google’s artificial intelligence AlphaFold to the test and have found the most complex protein knots so far.
    The question of how the chemical composition of a protein, the amino acid sequence, determines its 3D structure has been one of the biggest challenges in biophysics for more than half a century. This knowledge about the so-called “folding” of proteins is in great demand, as it contributes significantly to the understanding of various diseases and their treatment, among other things. For these reasons, Google’s DeepMind research team has developed AlphaFold, an artificial intelligence that predicts 3D structures.
    A team consisting of researchers from Johannes Gutenberg University Mainz (JGU) and the University of California, Los Angeles, has now taken a closer look at these structures and examined them with respect to knots. We know knots primarily from shoelaces and cables, but they also occur on the nanoscale in our cells. Knotted proteins can not only be used to assess the quality of structure predictions but also raise important questions about folding mechanisms and the evolution of proteins.
    The most complex knots as a test for AlphaFold
    “We investigated numerically all — that is some 100,000 — predictions of AlphaFold for new protein knots,” said Maarten A. Brems, a PhD student in the group of Dr. Peter Virnau at Mainz University. The goal was to identify rare, high-quality structures containing complex and previously unknown protein knots to provide a basis for experimental verification of AlphaFold’s predictions. The study not only discovered the most complex knotted protein to date but also the first composite knots in proteins. The latter can be thought of as two separate knots on the same string. “These new discoveries also provide insight into the evolutionary mechanisms behind such rare proteins,” added Robert Runkel, a theoretical physicist also involved in the project. The results of this study were recently published in Protein Science.
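    A common first step in hunting for knots in a backbone like this is geometric chain reduction, for example the classic KMT (Koniaris-Muthukumar-Taylor) algorithm: repeatedly delete any bead whose neighboring triangle no other segment pierces. Unknotted chains collapse to almost nothing, while knotted cores resist. The sketch below is a generic textbook version of that idea (not the Mainz group’s code), and a real pipeline would follow it with a knot invariant such as the Alexander polynomial to name the knot:

        import numpy as np

        def segment_hits_triangle(p, q, a, b, c, eps=1e-12):
            """True if segment pq passes through the interior of triangle abc."""
            n = np.cross(b - a, c - a)
            denom = n.dot(q - p)
            if abs(denom) < eps:
                return False                 # segment parallel to triangle's plane
            t = n.dot(a - p) / denom
            if t <= eps or t >= 1.0 - eps:
                return False                 # plane crossing outside the segment
            x = p + t * (q - p)              # intersection with the plane
            v0, v1, v2 = c - a, b - a, x - a # barycentric containment test
            d00, d01, d02 = v0.dot(v0), v0.dot(v1), v0.dot(v2)
            d11, d12 = v1.dot(v1), v1.dot(v2)
            det = d00 * d11 - d01 * d01
            if abs(det) < eps:
                return False                 # degenerate (collinear) triangle
            u = (d11 * d02 - d01 * d12) / det
            v = (d00 * d12 - d01 * d02) / det
            return u > eps and v > eps and u + v < 1.0 - eps

        def kmt_reduce(points):
            """Delete beads whose removal cannot change the chain's topology."""
            pts = [np.asarray(p, float) for p in points]
            changed = True
            while changed and len(pts) > 3:
                changed = False
                i = 1
                while i < len(pts) - 1 and len(pts) > 3:
                    tri = (pts[i - 1], pts[i], pts[i + 1])
                    pierced = any(
                        segment_hits_triangle(pts[j], pts[j + 1], *tri)
                        for j in range(len(pts) - 1) if j not in (i - 1, i)
                    )
                    if pierced:
                        i += 1
                    else:
                        del pts[i]           # triangle is clear: bead removable
                        changed = True
            return pts

        # A knotted (trefoil) loop stays irreducible; a circle collapses.
        t = np.linspace(0, 2 * np.pi, 80, endpoint=False)
        trefoil = np.column_stack([np.sin(t) + 2 * np.sin(2 * t),
                                   np.cos(t) - 2 * np.cos(2 * t),
                                   -np.sin(3 * t)])
        circle = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
        for name, curve in [("trefoil", trefoil), ("circle", circle)]:
            closed = np.vstack([curve, curve[:1]])   # close the loop
            print(name, "reduces to", len(kmt_reduce(closed)), "beads")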
    Dr. Peter Virnau is pleased with the results: “We have already established a collaboration with our colleague Todd Yeates from UCLA to confirm these structures experimentally. This line of research will shape the biophysics community’s view of artificial intelligence — and we are fortunate to have an expert like Dr. Yeates involved.”
    Story Source:
    Materials provided by Johannes Gutenberg Universitaet Mainz. Note: Content may be edited for style and length.

  • Virtual reality app trial shown to reduce common phobias

    Results from a University of Otago, Christchurch trial offer fresh hope for the estimated one in twelve people worldwide who suffer from a fear of flying, needles, heights, spiders or dogs.
    The trial, led by Associate Professor Cameron Lacey, from the Department of Psychological Medicine, studied phobia patients using a headset and a smartphone app treatment programme — a combination of Virtual Reality (VR) 360-degree video exposure therapy and cognitive behavioural therapy (CBT).
    Participants downloaded a fully self-guided smartphone app called “oVRcome,” developed by Christchurch tech entrepreneur Adam Hutchinson, aimed at treating patients with phobia and anxiety.
    The app was paired with a headset to immerse participants in virtual environments to help treat their phobia.
    The results from the trial, just published in the Australian and New Zealand Journal of Psychiatry, showed a 75 per cent reduction in phobia symptoms after six weeks of the treatment programme.
    “The improvements they reported suggest there’s great potential for the use of VR and mobile phone apps as a means of self-guided treatment for people struggling with often-crippling phobias,” Associate Professor Lacey says.

  • Machine learning identifies gun purchasers at risk of suicide

    A new study from the Violence Prevention Research Program (VPRP) at UC Davis suggests machine learning, a type of artificial intelligence, may help identify handgun purchasers who are at high risk of suicide. It also identified individual and community characteristics that are predictive of firearm suicide. The study was published in JAMA Network Open.
    Previous research has shown the risk of suicide is particularly high immediately after purchase, suggesting that acquisition itself is an indicator of elevated suicide risk.
    Risk factors identified by the algorithm as predictive of firearm suicide included:
      • older age
      • first-time firearm purchaser
      • white race
      • living in close proximity to the gun dealer
      • purchasing a revolver
    “While limiting access to firearms among individuals at increased risk for suicide presents a critical opportunity to save lives, accurately identifying those at risk remains a key challenge. Our results suggest the potential utility of handgun records in identifying high-risk individuals to aid suicide prevention,” said Hannah S. Laqueur, an assistant professor in the Department of Emergency Medicine and lead author of the study.
    In 2020, almost 48,000 Americans died by suicide, of which more than 24,000 were firearm suicides. Firearms are by far the most lethal method of suicide. Access to firearms has been identified as a major risk factor for suicide and is a potential focus for suicide prevention.
    Methodology
    To see if an algorithm could identify gun purchasers at risk of firearm suicide, the researchers looked at data from almost five million firearm transactions from the California Dealer Record of Sale database (DROS). The records, which spanned from 1996 to 2015, represented almost two million individuals. They also looked at firearm suicide data from California death records between 1996 and 2016.
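    As a rough illustration of the kind of modelling involved, here is a generic random-forest sketch on invented tabular purchase features; the feature names, synthetic data, and model choice are assumptions for illustration, not the study’s actual DROS pipeline:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 10_000
        X = np.column_stack([
            rng.integers(21, 90, n),    # purchaser age at transaction
            rng.integers(0, 2, n),      # first-time firearm purchaser (0/1)
            rng.integers(0, 2, n),      # revolver purchased (0/1)
            rng.exponential(10.0, n),   # distance to the dealer, km
        ])
        # Synthetic outcome that loosely echoes the reported risk factors
        risk = 0.02 * (X[:, 0] > 60) + 0.02 * X[:, 1] + 0.02 * X[:, 2]
        y = rng.random(n) < (0.005 + risk)

        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.25, random_state=0)
        model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print("held-out AUC:", round(auc, 3))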

  • Researchers use quantum-inspired approach to increase lidar resolution

    Researchers have shown that a quantum-inspired technique can be used to perform lidar imaging with a much higher depth resolution than is possible with conventional approaches. Lidar, which uses laser pulses to acquire 3D information about a scene or object, is usually best suited for imaging large objects such as topographical features or built structures due to its limited depth resolution.
    “Although lidar can be used to image the overall shape of a person, it typically doesn’t capture finer details such as facial features,” said research team leader Ashley Lyons from the University of Glasgow in the United Kingdom. “By adding extra depth resolution, our approach could capture enough detail to not only see facial features but even someone’s fingerprints.”
    In the Optica Publishing Group journal Optics Express, Lyons and first author Robbie Murray describe the new technique, which they call imaging two-photon interference lidar. They show that it can distinguish reflective surfaces less than 2 millimeters apart and create high-resolution 3D images with micron-scale resolution.
    “This work could lead to much higher resolution 3D imaging than is possible now, which could be useful for facial recognition and tracking applications that involve small features,” said Lyons. “For practical use, conventional lidar could be used to get a rough idea of where an object might be and then the object could be carefully measured with our method.”
    Using classically entangled light
    The new technique uses “quantum inspired” interferometry, which extracts information from the way that two light beams interfere with each other. Entangled pairs of photons — or quantum light — are often used for this type of interferometry, but approaches based on photon entanglement tend to perform poorly in situations with high levels of light loss, which is almost always the case for lidar. To overcome this problem, the researchers applied what they’ve learned from quantum sensing to classical (non-quantum) light.
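    A toy numerical picture of why interference helps (this is an illustrative model only, not the Glasgow team’s actual signal processing): each reflective surface produces a narrow interference dip as the reference delay is scanned, and the width of that dip, rather than the laser pulse length, sets the depth resolution. Here two surfaces 2 mm apart yield two resolvable dips:

        import numpy as np

        depths = np.array([0.0, 2e-3])   # two surfaces 2 mm apart (assumption)
        dip_width = 0.5e-3               # effective dip width in depth (assumption)

        z = np.linspace(-5e-3, 8e-3, 4000)   # scanned reference depth
        signal = np.ones_like(z)             # normalized coincidence-like signal
        for d in depths:
            signal -= 0.5 * np.exp(-((z - d) ** 2) / (2 * dip_width ** 2))

        # Recover surface positions as local minima of the scan
        minima = [z[i] for i in range(1, len(z) - 1)
                  if signal[i] < signal[i - 1] and signal[i] < signal[i + 1]]
        print("recovered surfaces (mm):", [round(m * 1e3, 2) for m in minima])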

  • Researchers develop computer model to predict whether a pesticide will harm bees

    Researchers in the Oregon State University College of Engineering have harnessed the power of artificial intelligence to help protect bees from pesticides.
    Cory Simon, assistant professor of chemical engineering, and Xiaoli Fern, associate professor of computer science, led the project, which involved training a machine learning model to predict whether any proposed new herbicide, fungicide or insecticide would be toxic to honey bees based on the compound’s molecular structure.
    The findings, featured on the cover of The Journal of Chemical Physics in a special issue, “Chemical Design by Artificial Intelligence,” are important because many fruit, nut, vegetable and seed crops rely on bee pollination.
    Without bees to transfer the pollen needed for reproduction, almost 100 commercial crops in the United States would vanish. The global economic impact of bees is estimated to exceed $100 billion annually.
    “Pesticides are widely used in agriculture, which increase crop yield and provide food security, but pesticides can harm off-target species like bees,” Simon said. “And since insects, weeds, etc. eventually evolve resistance, new pesticides must continually be developed, ones that don’t harm bees.”
    Graduate students Ping Yang and Adrian Henle used honey bee toxicity data from pesticide exposure experiments, involving nearly 400 different pesticide molecules, to train an algorithm to predict if a new pesticide molecule would be toxic to honey bees.
    “The model represents pesticide molecules by the set of random walks on their molecular graphs,” Yang said.
    A random walk is a mathematical concept that describes any meandering path, such as on the complicated chemical structure of a pesticide, where each step along the path is decided by chance, as if by coin tosses.
    Imagine, Yang explains, that you’re out for an aimless stroll along a pesticide’s chemical structure, making your way from atom to atom via the bonds that hold the compound together. You travel in random directions but keep track of your route, the sequence of atoms and bonds that you visit. Then you go out on a different molecule, comparing the series of twists and turns to what you’ve done before.
    “The algorithm declares two molecules similar if they share many walks with the same sequence of atoms and bonds,” Yang said. “Our model serves as a surrogate for a bee toxicity experiment and can be used to quickly screen proposed pesticide molecules for their toxicity.”
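    In code, this idea can be phrased as the classic random-walk graph kernel: pair up like-labeled atoms of two molecules into a “direct product” graph, then count walks on it, since each walk there corresponds to a walk with the same atom-and-bond sequence in both molecules. A minimal sketch with toy molecules (an illustration of the general technique, not the authors’ implementation):

        import numpy as np

        def product_adjacency(atoms1, bonds1, atoms2, bonds2):
            """Adjacency of the labeled direct product of two molecular graphs.

            atoms: list of element symbols, e.g. ["C", "O"]
            bonds: dict mapping (i, j), i < j, to a bond label, e.g. "single"
            """
            pairs = [(i, j) for i, a in enumerate(atoms1)
                            for j, b in enumerate(atoms2) if a == b]
            index = {p: k for k, p in enumerate(pairs)}
            A = np.zeros((len(pairs), len(pairs)))
            for (i1, j1), b1 in bonds1.items():
                for (i2, j2), b2 in bonds2.items():
                    if b1 != b2:
                        continue          # bond labels must match along a walk
                    for u, v in [((i1, i2), (j1, j2)), ((i1, j2), (j1, i2))]:
                        if u in index and v in index:
                            A[index[u], index[v]] = A[index[v], index[u]] = 1
            return A

        def walk_kernel(atoms1, bonds1, atoms2, bonds2, max_len=4, decay=0.5):
            """Count label-matched shared walks up to max_len, down-weighted."""
            A = product_adjacency(atoms1, bonds1, atoms2, bonds2)
            total, power = 0.0, np.eye(len(A))
            for n in range(1, max_len + 1):
                power = power @ A
                total += decay ** n * power.sum()
            return total

        # Toy molecules: an ethanol-like C-C-O versus a methanol-like C-O
        atoms_a, bonds_a = ["C", "C", "O"], {(0, 1): "single", (1, 2): "single"}
        atoms_b, bonds_b = ["C", "O"], {(0, 1): "single"}
        print(walk_kernel(atoms_a, bonds_a, atoms_a, bonds_a))  # self-similarity
        print(walk_kernel(atoms_a, bonds_a, atoms_b, bonds_b))  # cross-similarity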
    The National Science Foundation supported this research.
    Story Source:
    Materials provided by Oregon State University. Original written by Steve Lundeberg. Note: Content may be edited for style and length.