More stories

  • Better coaching needed to prevent burnout among video gaming pros

    Early research into the growing electronic sports (esports) industry highlights a need for better coaching to prevent burnout among professional players.
    The study, conducted by researchers at the University of Waterloo, identified several areas, including player fatigue, mental stress and peak performance conditions, that require in-depth research to improve coaching and player performance.
    “They burn out because they spend long hours sitting at desks playing and training,” said Bader Sabtan, a systems design engineering PhD student who led the study. “It results in all kinds of problems, from mental health issues to back and wrist injuries.”
    A survey of professional League of Legends teams found virtually no standardized coaching approaches or techniques to guide young players.
    Instead, players work to remain competitive in the constantly changing team battle video game, one of several with lucrative fan followings around the world, by practicing 12 to 14 hours a day, six days a week.
    Professional esports fill stadiums with spectators as players, who average just 18 to 20 years of age, compete at computers while their games are shown on giant screens. One championship event in 2018 drew almost 100 million online viewers.
    The researchers focused on League of Legends, which has 47 pro teams in North America, Europe, Korea and China. Players can earn more than $400,000 a year, but rarely have careers that last beyond three or four years.
    Coaches who participated in the study unanimously agreed that methods must be developed to make practice more efficient and strategic to reduce the demands on players.
    “I was surprised to learn even top professional coaches don’t have systematic training methods,” said Shi Cao, a systems design engineering professor and a member of the Games Institute at Waterloo. “Nothing is supported by scientific evidence or research.
    “Just as physiology and kinesiology research supports traditional sports, cognitive psychology and human factors engineering can support mental work like esports,” said Cao, an esports fan and recreational player.
    Sabtan can personally relate to the relentless demands on the best players. A few years ago, he was in the top one per cent of League of Legends players and spent up to 50 hours a week practicing to stay there.
    “Right now, there is no other option,” Sabtan said. “The required sharpness, game knowledge and reaction speed are only achieved by practicing and repetition, so they just play the game. They don’t have social lives. They don’t have girlfriends or boyfriends. It’s unsustainable.”
    Story Source:
    Materials provided by University of Waterloo. Note: Content may be edited for style and length.

  • Researchers generate high-quality quantum light with modular waveguide device

    For the first time, researchers have successfully generated strongly nonclassical light using a modular waveguide-based light source. The achievement represents a crucial step toward creating faster and more practical optical quantum computers.
    “Our goal is to dramatically improve information processing by developing faster quantum computers that can perform any type of computation without errors,” said research team member Kan Takase from the University of Tokyo. “Although there are several ways to create a quantum computer, light-based approaches are promising because the information processor can operate at room temperature and the computing scale can be easily expanded.”
    In the Optica Publishing Group journal Optics Express, a multi-institutional team of researchers from Japan describe the waveguide optical parametric amplifier (OPA) module they created for quantum experiments. Combining this device with a specially designed photon detector allowed them to generate a state of light known as Schrödinger cat, which is a superposition of coherent states.
    “Our method for generating quantum light can be used to increase the computing power of quantum computers and to make the information processor more compact,” said Takase. “Our approach outperforms conventional methods, and the modular waveguide OPA is easy to operate and integrate into quantum computers.”
    Generating strongly nonclassical light
    Continuous wave squeezed light is used to generate the various quantum states necessary to perform quantum computing. For the best computing performance, the squeezed light source must exhibit very low levels of light loss and be broadband, meaning it includes a wide range of frequencies.

  • Scientists find 'knob' to control magnetic behavior in quantum material

    Magnetism, one of the oldest technologies known to humans, is at the forefront of new-age materials that could enable next-generation lossless electronics and quantum computers. Researchers led by Penn State and the University of California, San Diego have discovered a new ‘knob’ to control the magnetic behavior of one promising quantum material, and the findings could pave the way toward novel, efficient and ultra-fast devices.
    “The unique quantum mechanical make-up of this material — manganese bismuth telluride — allows it to carry lossless electrical currents, something of tremendous technological interest,” said Hari Padmanabhan, who led the research as a graduate student at Penn State. “What makes this material especially intriguing is that this behavior is deeply connected to its magnetic properties. So, a knob to control magnetism in this material could also efficiently control these lossless currents.”
    Manganese bismuth telluride, a 2D material made of atomically thin stacked layers, is an example of a topological insulator, exotic materials that simultaneously can be insulators and conductors of electricity, the scientists said. Importantly, because this material is also magnetic, the currents conducted around its edges could be lossless, meaning they do not lose energy in the form of heat. Finding a way to tune the weak magnetic bonds between the layers of the material could unlock these functions.
    Tiny vibrations of atoms, or phonons, in the material may be one way to achieve this, the scientists reported April 8 in the journal Nature Communications.
    “Phonons are tiny atomic wiggles — atoms dancing together in various patterns, present in all materials,” Padmanabhan said. “We show that these atomic wiggles can potentially function as a knob to tune the magnetic bonding between the atomic layers in manganese bismuth telluride.”
    The scientists at Penn State studied the material using a technique called magneto-optical spectroscopy — shooting a laser onto a sample of the material and measuring the color and intensity of the reflected light, which carries information on the atomic vibrations. The team observed how the vibrations changed as they altered the temperature and magnetic field.

  • AI can predict probability of COVID-19 vs flu based on symptoms

    Testing shortages, long waits for results, and an over-taxed health care system have made headlines throughout the COVID-19 pandemic. These issues can be further exacerbated in small or rural communities in the US and globally. Additionally, respiratory symptoms of COVID-19 such as fever and cough are also associated with the flu, which complicates non-lab diagnoses during certain seasons. A new study by College of Health and Human Services researchers is designed to help identify which symptoms are more likely to indicate COVID during flu season. This is the first study to take seasonality into account.
    Farrokh Alemi, principal investigator and professor of Health Administration and Policy, and other Mason researchers predict the probability that a patient has COVID-19, flu, or another respiratory illness prior to testing, depending on the season. This can help clinicians triage patients who are most suspected of having COVID-19.
    “When access to reliable COVID testing is limited or test results are delayed, clinicians, especially those who are community-based, are more likely to rely on signs and symptoms than on laboratory findings to diagnose COVID-19,” said Alemi, who observed these challenges at points throughout the pandemic. “Our algorithm can help health care providers triage patient care while they are waiting on lab testing or help prioritize testing if there are testing shortages.”
    The findings suggest that community-based health care providers should follow different signs and symptoms for diagnosing COVID depending on the time of year. Outside of flu season, fever is an even stronger predictor of COVID than during flu season. During flu season, a person with a cough is more likely to have the flu than COVID. The study showed that assuming anyone with a fever during flu season has COVID would be incorrect. The algorithm relied on different symptoms for patients in different age and gender groups. The study also showed that symptom clusters are more important in diagnosing COVID-19 than individual symptoms alone.
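    The kind of season-aware symptom reasoning described here can be illustrated with a toy naive Bayes sketch. All priors and likelihoods below are invented for illustration; they are not the values estimated in the Mason study:

```python
# Toy naive Bayes sketch of season-aware symptom screening.
# All probabilities below are made up for illustration; they are NOT the
# likelihoods estimated in the study.

def posterior(symptoms, season_priors, likelihoods):
    """Return P(disease | symptoms) for each disease under naive Bayes.

    symptoms: set of observed symptom names
    season_priors: dict disease -> prior P(disease) for the current season
    likelihoods: dict disease -> dict symptom -> P(symptom | disease)
    """
    scores = {}
    for disease, prior in season_priors.items():
        p = prior
        for s in symptoms:
            p *= likelihoods[disease].get(s, 0.01)  # small default likelihood
        scores[disease] = p
    total = sum(scores.values())
    return {d: p / total for d, p in scores.items()}

# Hypothetical numbers: during flu season the flu prior is high and cough is
# more characteristic of flu, while fever is shared by both illnesses.
flu_season_priors = {"covid": 0.2, "flu": 0.5, "other": 0.3}
likelihoods = {
    "covid": {"fever": 0.7, "cough": 0.6, "loss_of_smell": 0.4},
    "flu":   {"fever": 0.8, "cough": 0.9, "loss_of_smell": 0.01},
    "other": {"fever": 0.3, "cough": 0.5, "loss_of_smell": 0.01},
}

post = posterior({"fever", "cough"}, flu_season_priors, likelihoods)
# With these made-up numbers, flu dominates for fever + cough in flu season,
# echoing the study's point that a feverish cough is not automatically COVID.
```

    Swapping in a different set of seasonal priors is what changes which symptoms matter most, which is the core of the seasonality argument above.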
    The algorithms were created by analyzing the symptoms reported by 774 COVID patients in China and 273 COVID patients in the United States. The analysis also included 2,885 influenza cases and 884 influenza-like illness cases in US patients. “Modeling the Probability of COVID-19 Based on Symptom Screening and Prevalence of Influenza and Influenza-Like Illnesses” was published in Quality Management in Health Care’s April/June 2022 issue. The rest of the research team is also from Mason: Professor of Global Health and Epidemiology Amira Roess, Affiliate Faculty Jee Vang, and doctoral candidate Elina Guralnik.
    “Though helpful, the algorithms are too complex to expect clinicians to perform these calculations while providing care. The next step is to create an AI-powered, web-based calculator that can be used in the field. This would allow clinicians to arrive at a presumed diagnosis prior to the visit,” said Alemi. From there, clinicians can make triage decisions on how to care for the patient while waiting for official lab results.
    The study does not include any COVID-19 patients without respiratory symptoms, which includes asymptomatic people. Additionally, the study did not differentiate between the first and second week of onset of symptoms, which can vary.
    This research was a prototype of how existing data can be used to find signature symptoms of a new disease. The methodology may have relevance beyond this pandemic.
    “When there is a new outbreak, collecting data is time consuming. Rapid analysis of existing data can reduce the time to differentiate presentation of new diseases from illnesses with overlapping symptoms. The method in this paper is useful for rapid response to the next pandemic,” said Alemi.
    Story Source:
    Materials provided by George Mason University. Original written by Mary Cunningham. Note: Content may be edited for style and length.

  • New platform optimizes selection of combination cancer therapies

    Researchers at The University of Texas MD Anderson Cancer Center have developed a new bioinformatics platform that predicts optimal treatment combinations for a given group of patients based on co-occurring tumor alterations. In retrospective validation studies, the tool selected combinations that resulted in improved patient outcomes across both pre-clinical and clinical studies.
    The findings were presented today at the American Association for Cancer Research (AACR) Annual Meeting 2022 by principal investigator Anil Korkut, Ph.D., assistant professor of Bioinformatics and Computational Biology. The study results also were published today in Cancer Discovery.
    The platform, called REcurrent Features LEveraged for Combination Therapy (REFLECT), integrates machine learning and cancer informatics algorithms to analyze biological tumor features — including genetic mutations, copy number changes, gene expression and protein expression aberrations — and identify frequent co-occurring alterations that could be targeted by multiple drugs.
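    The co-occurrence idea at the heart of this analysis can be illustrated with a toy example. The sketch below merely counts pairs of alterations that appear together across a hypothetical patient cohort; it is not the REFLECT algorithm, and the gene names are illustrative only:

```python
# Toy illustration of finding frequently co-occurring tumor alterations in a
# cohort. This is NOT the REFLECT algorithm, which integrates machine learning
# and cancer informatics methods; it only shows the basic co-occurrence idea.
from itertools import combinations

# Each entry is one hypothetical patient's set of altered genes.
patients = [
    {"KRAS", "PIK3CA"},
    {"KRAS", "PIK3CA", "TP53"},
    {"KRAS", "TP53"},
    {"PIK3CA", "TP53"},
    {"KRAS", "PIK3CA"},
]

def cooccurrence_counts(patients):
    """Count how many patients carry each pair of alterations together."""
    counts = {}
    for alts in patients:
        for pair in combinations(sorted(alts), 2):
            counts[pair] = counts.get(pair, 0) + 1
    return counts

counts = cooccurrence_counts(patients)
top_pair = max(counts, key=counts.get)
# ("KRAS", "PIK3CA") co-occurs in 3 of 5 patients here, flagging a candidate
# for a two-drug combination targeting both alterations simultaneously.
```

    In the real platform, such frequent co-occurrence signatures are then matched against drugs that target each alteration, which is what motivates combination rather than single-agent therapy.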
    “Our ultimate goal is to make precision oncology more effective and create meaningful patient benefit,” Korkut said. “We believe REFLECT may be one of the tools that can help overcome some of the current challenges in the field by facilitating both the discovery and the selection of combination therapies matched to the molecular composition of tumors.”
    Targeted therapies have improved clinical outcomes for many patients with cancer, but monotherapies against a single target often lead to treatment resistance. Cancer cells frequently rely on co-occurring alterations, such as mutations in two signaling pathways, to drive tumor progression. Increasing evidence suggests that identifying and targeting both alterations simultaneously could increase durable responses, Korkut explained.
    Led by Korkut and postdoctoral fellow Xubin Li, Ph.D., the researchers built and used the REFLECT tool to develop a systematic and unbiased approach to match patients with optimal combination therapies.

  • More than 57 billion tons of soil have eroded in the U.S. Midwest

    With soils rich for cultivation, most land in the Midwestern United States has been converted from tallgrass prairie to agricultural fields. Less than 0.1 percent of the original prairie remains.

    This shift over the last 160 years has resulted in staggering — and unsustainable — soil erosion rates for the region, researchers report in the March Earth’s Future. The erosion is estimated to be double the rate that the U.S. Department of Agriculture says is sustainable. If it continues unabated, it could significantly limit future crop production, the scientists say.

    In the new study, the team focused on erosional escarpments — tiny cliffs formed through erosion — lying at boundaries between prairie and agricultural fields (SN: 1/20/96). “These rare prairie remnants that are scattered across the Midwest are sort of a preservation of the pre-European-American settlement land surface,” says Isaac Larsen, a geologist at the University of Massachusetts Amherst.

    At 20 sites in nine Midwestern states, with most sites located in Iowa, Larsen and colleagues used a specialized GPS system to survey the altitude of the prairie and farm fields. That GPS system “tells you where you are within about a centimeter on Earth’s surface,” Larsen says. This enables the researchers to detect even small differences between the height of the prairie and the farmland.

    At each site, the researchers took these measurements at 10 or more spots. The team then measured erosion by comparing the elevation differences of the farmed and prairie land. The researchers found that the agricultural fields were 0.37 meters below the prairie areas, on average.

    Geologist Isaac Larsen stands at an erosional escarpment, a meeting point of farmland and prairie, in Stinson Prairie, Iowa. Studying these escarpments shows there’s been a startling amount of erosion in the U.S. Midwest since farming started there more than 150 years ago. University of Massachusetts Amherst

    This corresponds to the loss of roughly 1.9 millimeters of soil per year from agricultural fields since the estimated start of traditional farming at these sites more than a century and a half ago, the researchers calculate. That rate is nearly double the maximum of one millimeter per year that the USDA considers sustainable for these locations.  
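
    The arithmetic behind these figures is easy to check. In the quick sketch below, the elevation offset and the USDA threshold come from the study, while the farming duration is an assumption chosen to be consistent with "more than a century and a half":

```python
# Back-of-envelope check of the erosion-rate arithmetic in the study.
# The measured elevation offset and the sustainability threshold come from
# the article; the farming duration is an assumption.

elevation_offset_m = 0.37        # average farm-field lowering vs. prairie
farming_years = 195              # assumed duration, "more than a century and a half"
usda_sustainable_mm_per_yr = 1.0

rate_mm_per_yr = elevation_offset_m * 1000 / farming_years
print(f"average erosion rate: {rate_mm_per_yr:.1f} mm/yr")
print(f"vs USDA sustainable threshold: {rate_mm_per_yr / usda_sustainable_mm_per_yr:.1f}x")
```

    With these numbers the rate comes out to roughly 1.9 mm per year, nearly double the USDA threshold, matching the figures reported above.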

    There are two main ways that the USDA currently estimates the erosion rate in the region. One way estimates the rate to be about one-third of that reported by the researchers. The other estimates the rate to be just one-eighth of the researchers’ rate. Those USDA estimates do not include tillage, a conventional farming process in which machinery is used to turn the soil and prepare it for planting. By disrupting the soil structure, tilling increases surface runoff and erosion due to soil moving downslope.

    Larsen and colleagues say that they would like to see tillage incorporated into the USDA’s erosion estimates. Then, the USDA numbers might better align with the whopping 57.6 billion metric tons of soil that the researchers estimate has been lost across the entire region in the last 160 years.

    This massive “soil loss is already causing food production to decline,” Larsen says. As soil thickness decreases, the amount of corn successfully grown in Iowa is reduced, research shows. And disruption to the food supply could continue or worsen if the estimated rate of erosion persists.

    Not everyone is convinced that the average amount of soil lost each year has remained steady since farming in the region started. Much of the erosion that the researchers measured could have been caused in the earlier histories of these sites, dating back to when farmers “began to break prairies and/or forests and clear things,” says agronomist Michael Kucera.

    Perhaps current erosion rates have slowed, says Kucera, who is the steward of the National Erosion Database at the USDA’s National Soil Survey Center in Lincoln, Neb.

    To help reduce future erosion, farmers can use no-till farming and plant cover crops, the researchers note. By planting cover crops during off-seasons, farmers reduce the amount of time the soil is bare, making it less vulnerable to wind and water erosion.

    In the United States, no-till and similar practices to help limit erosion have been implemented at least sometimes by 51 percent of corn, cotton, soybean and wheat farmers, according to the USDA. But cover crops are only used in about 5 percent of cases where they could be, says Bruno Basso, a sustainable agriculture researcher at Michigan State University in East Lansing who wasn’t involved with the study. “It costs $40 to $50 per acre to plant a cover crop,” he says. Though some government grant funding is available, “the costs of cover crops are not supported,” and there is a need for additional incentives, he says.

    To implement no-till strategies, “the farmer has to be a better manager,” says Keith Berns, a farmer who co-owns and operates Green Cover Seed, which is headquartered in Bladen, Neb. His company provides cover crop seeds and custom seed mixtures. He has also been using no-till practices for decades.

    To succeed, farmers must decide what particular cover crops are most suitable for their land, when to grow them and when to kill them. Following these regimens, which can be more complicated than traditional farming, can be “difficult to do on large scales,” Berns says.

    Cover crops can confer benefits such as helping farmers repair erosion and control weeds within the first year of planting. But it can take multiple years for the crops’ financial benefits to exceed their cost. Some farmers don’t even own the land they work, making it even less lucrative for them to invest in cover crops, Berns notes. 

    Building soil health can take half a decade, Basso says. “Agriculture is really always facing this dilemma [of] short-sighted, economically driven decisions versus longer-term sustainability of the whole enterprise.”

  • Engineering team develops new AI algorithms for high-accuracy and cost-effective medical image diagnostics

    Medical imaging is an important part of modern healthcare, enhancing the precision and reliability of diagnosis and the development of treatments for various diseases. Artificial intelligence is also widely used to further enhance the process.
    However, conventional medical image diagnosis employing AI algorithms requires large amounts of annotations as supervision signals for model training. To acquire accurate labels for the AI algorithms, radiologists prepare radiology reports for each of their patients as part of the clinical routine, and annotation staff then extract and confirm structured labels from those reports using human-defined rules and existing natural language processing (NLP) tools. The ultimate accuracy of the extracted labels hinges on the quality of the human work and the various NLP tools, and the method comes at a heavy price: it is both labour-intensive and time-consuming.
    An engineering team at the University of Hong Kong (HKU) has developed a new approach, “REFERS” (Reviewing Free-text Reports for Supervision), which can cut human annotation cost by 90% by automatically acquiring supervision signals from hundreds of thousands of radiology reports at the same time. It attains high prediction accuracy, surpassing conventional AI-based medical image diagnosis.
    The innovative approach marks a solid step towards realizing generalized medical artificial intelligence. The breakthrough was published in Nature Machine Intelligence in the paper titled “Generalized radiograph representation learning via cross-supervision between images and free-text radiology reports.”
    “AI-enabled medical image diagnosis has the potential to support medical specialists in reducing their workload and improving the diagnostic efficiency and accuracy, including but not limited to reducing the diagnosis time and detecting subtle disease patterns,” said Professor YU Yizhou, leader of the team from HKU’s Department of Computer Science under the Faculty of Engineering.
    “We believe abstract and complex logical reasoning sentences in radiology reports provide sufficient information for learning easily transferable visual features. With appropriate training, REFERS directly learns radiograph representations from free-text reports without the need to involve manpower in labelling.” Professor Yu remarked.
    For training REFERS, the research team used a public database of 370,000 X-ray images and associated radiology reports covering 14 common chest diseases, including atelectasis, cardiomegaly, pleural effusion, pneumonia and pneumothorax. The researchers built a radiograph recognition model using only 100 radiographs that attained 83% accuracy in predictions. When the number was increased to 1,000, the model reached an accuracy of 88.2%, surpassing a counterpart trained with 10,000 radiologist annotations (87.6% accuracy). With 10,000 radiographs, accuracy reached 90.1%. In general, prediction accuracy above 85% is useful in real-world clinical applications.
    REFERS achieves the goal by accomplishing two report-related tasks, i.e., report generation and radiograph-report matching. In the first task, REFERS translates radiographs into text reports by first encoding radiographs into an intermediate representation, which is then used to predict text reports via a decoder network. A cost function is defined to measure the similarity between predicted and real report texts, based on which gradient-based optimization is employed to train the neural network and update its weights.
    As for the second task, REFERS first encodes both radiographs and free-text reports into the same semantic space, where representations of each report and its associated radiographs are aligned via contrastive learning.
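    The matching task described above can be illustrated with a generic image-text contrastive loss (InfoNCE-style). This is a simplified sketch of the general technique, not the authors' implementation; the embeddings here are random placeholders standing in for encoder outputs:

```python
# Generic sketch of image-text contrastive alignment (InfoNCE-style loss),
# illustrating the idea behind radiograph-report matching. This is a
# simplified stand-in, NOT the REFERS implementation; the embeddings are
# random placeholders for the radiograph and report encoder outputs.
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 8                       # batch of 4 radiograph-report pairs, dim 8
img = rng.normal(size=(n, d))     # stand-in for radiograph encoder outputs
txt = rng.normal(size=(n, d))     # stand-in for report encoder outputs

def normalize(x):
    """L2-normalize each row so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def contrastive_loss(img, txt, temperature=0.1):
    """Pull matched image/report embeddings together, push mismatches apart."""
    sim = normalize(img) @ normalize(txt).T / temperature   # (n, n) similarities
    # Cross-entropy with the diagonal (matched pairs) as the correct class.
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))

loss = contrastive_loss(img, txt)
# Gradient-based training would lower this loss by aligning each report with
# its own radiographs in the shared semantic space.
```

    When the two sets of embeddings are already aligned (e.g. comparing a batch against itself), the loss is much smaller, which is the signal the training objective exploits.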
    “Compared to conventional methods that heavily rely on human annotations, REFERS has the ability to acquire supervision from each word in the radiology reports. We can substantially reduce the amount of data annotation by 90% and the cost to build medical artificial intelligence. It marks a significant step towards realizing generalized medical artificial intelligence,” said the paper’s first author Dr ZHOU Hong-Yu.
    Story Source:
    Materials provided by The University of Hong Kong. Note: Content may be edited for style and length.

  • The ethics of research on 'conscious' artificial brains

    One way in which scientists are studying how the human body grows and ages is by creating artificial organs in the laboratory. The most popular of these organs is currently the organoid, a miniaturized organ made from stem cells. Organoids have been used to model a variety of organs, but brain organoids are the most clouded by controversy.
    Current brain organoids differ in size and maturity from normal brains. More importantly, they do not produce any behavioral output, demonstrating that they are still a primitive model of a real brain. However, as research generates brain organoids of higher complexity, they will eventually have the ability to feel and think. In anticipation of this, Associate Professor Takuya Niikawa of Kobe University and Assistant Professor Tsutomu Sawai of Kyoto University’s Institute for the Advanced Study of Human Biology (WPI-ASHBi), in collaboration with other philosophers in Japan and Canada, have written a paper on the ethics of research using conscious brain organoids. The paper can be read in the academic journal Neuroethics.
    Working regularly with both bioethicists and neuroscientists who have created brain organoids, the team has been writing extensively about the need to construct guidelines on ethical research. In the new paper, Niikawa, Sawai and their coauthors lay out an ethical framework that assumes brain organoids already have consciousness rather than waiting for the day when we can fully confirm that they do.
    “We believe a precautionary principle should be taken,” Sawai said. “Neither science nor philosophy can agree on whether something has consciousness. Instead of arguing about whether brain organoids have consciousness, we decided they do as a precaution and for the consideration of moral implications.”
    To justify this assumption, the paper explains what brain organoids are and examines what different theories of consciousness suggest about brain organoids, inferring that some of the popular theories of consciousness permit them to possess consciousness.
    Ultimately, the framework proposed by the study recommends that research on human brain organoids follow ethical principles similar to those for animal experiments. Recommendations therefore include using the minimum number of organoids possible and doing the utmost to prevent pain and suffering, while considering the interests of the public and patients.
    “Our framework was designed to be simple and is based on valence experiences and the sophistication of those experiences,” said Niikawa.
    This, the paper explains, provides guidance on how strict the conditions for experiments should be. These conditions should be decided based upon several criteria, which include the physiological state of the organoid, the stimuli to which it responds, the neural structures it possesses, and its cognitive functions.
    Moreover, the paper argues that this framework is not exclusive to brain organoids. It can be applied to anything that is perceived to hold consciousness, such as fetuses, animals and even robots.
    “Our framework depends on the precautionary principle. Something that we believe does not have consciousness today may, through the development of consciousness studies, be found to have consciousness in the future. We can consider how we ought to treat these entities based on our ethical framework,” conclude Niikawa and Sawai.
    Story Source:
    Materials provided by Kyoto University. Note: Content may be edited for style and length.