More stories

  • Leadership online: Charisma matters most in video communication

    Managers need to make a consistent impression in order to motivate and inspire people, and that applies even more to video communication than to other digital channels. That is the result of a study by researchers at Karlsruhe Institute of Technology (KIT). They investigated the influence that charismatic leadership tactics used in text, audio and video communication channels have on employee performance. They focused on mobile work and the gig economy, in which jobs are flexibly assigned to freelancers via online platforms.
    Since the onset of the Covid-19 pandemic, more and more people have been working partly or entirely from home or in mobile work arrangements. At the same time, the so-called gig economy is growing. It involves the flexible assignment of short-term work to freelancers or part-time, low-wage staff via online platforms. Both trends are accelerating the digitalization of work. However, compared to face-to-face conversation between people in the same place, communication through digital channels offers fewer opportunities to motivate people and show charisma. This presents new challenges for managers. The impact of charismatic leadership tactics (CLTs) and the choice of communication channel (text, audio or video) on staff performance is the subject of a study by Petra Nieken, professor of human resource management at the Institute of Management at KIT. The study has been published in the journal The Leadership Quarterly.
    Charismatic Leadership Tactics Can Be Learned and Objectively Observed
    A charismatic leadership style can be learned; researchers speak of charismatic leadership tactics, which include verbal, paraverbal and non-verbal means such as metaphors, anecdotes, contrasts, rhetorical questions, pitch and tone of voice, and gestures. CLTs can be objectively observed and measured. They can be selectively changed in randomized controlled trials. “Managers can use the entire range of CLTs in face-to-face meetings. Digital communication reduces the opportunities to signal charisma,” says Nieken. “Depending on the communication channel, visual and/or acoustic cues can be missing. The question is whether people’s performance suffers as a result or if they adjust their expectations to the selected channel.”
    In the first part of her study, Nieken conducted a field test with text, audio and video communication channels in which a task description was presented neutrally in one case and with the use of as many CLTs as possible in the other. In the neutral case, video messages led to lower performance than did audio and text messages. In contrast, there were no significant differences in performance in the CLT case. “The results show a positive correlation between video communication and charismatic communication; the charismatic video led to better performance than the neutral video,” explains Nieken. “So we can conclude that it’s most important for managers to convey a consistent impression when they use the video channel.”
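    As a rough illustration of how a 3 x 2 field experiment like this (channel: text, audio, video; framing: neutral vs. charismatic) might be analyzed, here is a minimal sketch. The synthetic data, column names, and effect sizes are hypothetical and are not Nieken's dataset.

```python
# Minimal sketch of analyzing a 3x2 experiment (channel x framing).
# The DataFrame below is synthetic; it is NOT the study's data.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
channels = ["text", "audio", "video"]
framings = ["neutral", "charismatic"]

rows = []
for ch in channels:
    for fr in framings:
        base = 50.0
        if ch == "video" and fr == "neutral":
            base -= 5.0  # assumed penalty for a neutral video, for illustration only
        rows.append(pd.DataFrame({
            "channel": ch,
            "framing": fr,
            "performance": rng.normal(base, 10.0, size=50),  # 50 workers per cell
        }))
df = pd.concat(rows, ignore_index=True)

# Compare cell means, then test neutral vs. charismatic video directly.
print(df.groupby(["channel", "framing"])["performance"].mean().round(1))
video = df[df.channel == "video"]
t, p = stats.ttest_ind(
    video.loc[video.framing == "charismatic", "performance"],
    video.loc[video.framing == "neutral", "performance"],
)
print(f"charismatic vs. neutral video: t={t:.2f}, p={p:.3f}")
```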
    Traditional Charisma Questionnaires Do Not Predict Staff Performance
    In the second part of her study, Nieken had the different cases assessed with traditional questionnaires like the Multifactor Leadership Questionnaire (MLQ) and compared the results with those from the first part. Charisma noted in the questionnaires correlated with the use of CLTs but not with staff performance. “Traditional questionnaires like the MLQ are not suitable for predicting how people will perform in mobile work situations, working from home or in the gig economy,” concludes Nieken.
    Story Source:
    Materials provided by Karlsruher Institut für Technologie (KIT). Note: Content may be edited for style and length.

  • AI pilot can navigate crowded airspace

    A team of researchers at Carnegie Mellon University believes it has developed the first AI pilot that enables autonomous aircraft to navigate a crowded airspace.
    The artificial intelligence can safely avoid collisions, predict the intent of other aircraft, track nearby aircraft and coordinate with their actions, and communicate over the radio with pilots and air traffic controllers. The researchers aim to develop the AI so that the behavior of their system is indistinguishable from that of a human pilot.
    “We believe we could eventually pass the Turing Test,” said Jean Oh, an associate research professor at CMU’s Robotics Institute (RI) and a member of the AI pilot team, referring to the test of an AI’s ability to exhibit intelligent behavior equivalent to a human.
    To interact with other aircraft as a human pilot would, the AI uses both vision and natural language to communicate its intent with other aircraft, whether piloted or not. This behavior leads to safe and socially compliant navigation. Researchers achieved this implicit coordination by training the AI on data collected at the Allegheny County Airport and the Pittsburgh-Butler Regional Airport that included air traffic patterns, images of aircraft and radio transmissions.
    The AI uses six cameras and a computer vision system to detect nearby aircraft in a manner similar to that of a human pilot. Its automatic speech recognition function uses natural language processing techniques to both understand incoming radio messages and communicate with pilots and air traffic controllers using speech.
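    A highly simplified sketch of how such a sense-listen-act loop could be structured is shown below. Every component is a stub and the class, function, and maneuver names are hypothetical, not CMU's actual system.

```python
# Structural sketch of an "AI pilot" step that fuses camera-based traffic
# detection with parsed radio calls. All components are stubs.
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Track:
    callsign: Optional[str]
    bearing_deg: float
    range_m: float

def detect_traffic(camera_frames: list) -> List[Track]:
    """Stub for a multi-camera vision detector (the article mentions six cameras)."""
    return [Track(callsign=None, bearing_deg=45.0, range_m=3000.0)]

def parse_radio(transcript: str) -> Dict[str, str]:
    """Stub for speech recognition plus intent parsing of a radio call."""
    return {"callsign": "N123AB", "intent": "entering_downwind"}

def plan_avoidance(tracks: List[Track], radio_intents: List[Dict[str, str]]) -> str:
    """Combine visual tracks and stated intents into a simple maneuver choice."""
    if any(t.range_m < 1500.0 for t in tracks):
        return "extend_upwind"              # give way to nearby traffic
    if any(i["intent"] == "entering_downwind" for i in radio_intents):
        return "sequence_behind_traffic"
    return "continue_pattern"

# One decision step: sense, listen, then act.
tracks = detect_traffic(camera_frames=[])
intents = [parse_radio("Butler traffic, N123AB entering downwind runway 8")]
print(plan_avoidance(tracks, intents))
```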
    Advancement in autonomous aircraft will broaden opportunities for drones, air taxis, helicopters and other aircraft to operate — moving people and goods, inspecting infrastructure, treating fields to protect crops, and monitoring for poaching or deforestation — often without a pilot behind the controls. These aircraft will have to fly, however, in an airspace already crowded with small airplanes, medical helicopters and more.

  • Relocated beavers helped mitigate some effects of climate change

    In the upper reaches of the Skykomish River in Washington state, a pioneering team of civil engineers is keeping things cool. Relocated beavers boosted water storage and lowered stream temperatures, indicating such schemes could be an effective tool to mitigate some of the effects of climate change.

    In just one year after their arrival, the new recruits brought average water temperatures down by about 2 degrees Celsius and raised water tables by as much as 30 centimeters, researchers report in the July Ecosphere. While researchers have discussed beaver dams as a means to restore streams and bulk up groundwater, the effects of a large, targeted relocation had been relatively unknown (SN: 3/26/21).


    “That water storage is so critical during the drier periods, because that’s what can keep the ecosystem resilient to droughts and fires,” says Emily Fairfax, an ecohydrologist at California State University Channel Islands in Camarillo who was not involved with the study.

    The Skykomish River flows down the west side of Washington’s Cascade Mountains. Climate change is already transforming the region’s hydrology: The snowpack is shrinking, and snowfall is turning to rain, which drains quickly. Waters are also warming, which is bad news for salmon populations that struggle to survive in hot water.

    Beavers are known to tinker with hydrology too (SN: 7/27/18). They build dams, ponds and wetlands, deepening streams for their burrows and lodges (complete with underwater entrances). The dams slow the water, storing it upstream for longer, and cool it as it flows through the ground underneath.

    From 2014 to 2016, aquatic ecologist Benjamin Dittbrenner and colleagues relocated 69 beavers (Castor canadensis) from lowland areas of the state to 13 upstream sites in the Skykomish River basin, some with relic beaver ponds and others untouched. As beavers are family-oriented, the team moved whole clans to increase the chances that they would stay put.

    The researchers also matched singletons up with potential mates, which seemed to work well: “They were not picky at all,” says Dittbrenner, of Northeastern University in Boston. Fresh logs and wood cuttings got the beavers started in their new neighborhoods.

    At the five sites that saw long-term construction, beavers built 14 dams. Thanks to those dams, the volume of surface water — streams, ponds, wetlands — increased to about 20 times that of streams with no new beaver activity. Meanwhile, below ground, wells at three sites showed that after dam construction the amount of groundwater grew to more than twice what was stored on the surface in ponds. Stream temperatures downstream of the dams fell by 2.3 degrees C on average, while streams not subject to the beavers’ tinkering warmed by 0.8 degrees C. These changes all came within the first year after relocation.

    “We’re achieving restoration objectives almost instantly, which is really cool,” Dittbrenner says.

    Crucially, the dams lowered temperatures enough to almost completely take the streams out of the harmful range for salmon during a particularly hot summer. “These fish are also experiencing heat waves within the water system, and the beavers are protecting them from it,” Fairfax says. “That to me was huge.”

    The study also found that small, shallow abandoned beaver ponds were actually warming streams, perhaps because the cooling system had broken down over time. Targeting these ponds as potential relocation sites could be the most effective way to bring temperatures down, the researchers say. When relocated populations establish and breed, young beavers leaving their homes could seek out those abandoned spots first, Dittbrenner says, as doing so takes less energy than starting from scratch. “If they find a relic pond, it’s game on.”

  • New AI technology integrates multiple data types to predict cancer outcomes

    While it’s long been understood that predicting outcomes in patients with cancer requires considering many factors, such as patient history, genes and disease pathology, clinicians struggle with integrating this information to make decisions about patient care. A new study from researchers from the Mahmood Lab at Brigham and Women’s Hospital reveals a proof-of-concept model that uses artificial intelligence (AI) to combine multiple types of data from different sources to predict patient outcomes for 14 different types of cancer. Results are published in Cancer Cell.
    Experts depend on several sources of data, like genomic sequencing, pathology, and patient history, to diagnose and prognosticate different types of cancer. While existing technology enables them to use this information to predict outcomes, manually integrating data from different sources is challenging and experts often find themselves making subjective assessments.
    “Experts analyze many pieces of evidence to predict how well a patient may do,” said Faisal Mahmood, PhD, an assistant professor in the Division of Computational Pathology at the Brigham and associate member of the Cancer Program at the Broad Institute of Harvard and MIT. “These early examinations become the basis of making decisions about enrolling in a clinical trial or specific treatment regimens. But that means that this multimodal prediction happens at the level of the expert. We’re trying to address the problem computationally.”
    Through these new AI models, Mahmood and colleagues uncovered a means to integrate several forms of diagnostic information computationally to yield more accurate outcome predictions. The AI models demonstrate the ability to make prognostic determinations while also uncovering the predictive bases of features used to predict patient risk — a property that could be used to uncover new biomarkers.
    Researchers built the models using The Cancer Genome Atlas (TCGA), a publicly available resource containing data on many different types of cancer. They then developed a multimodal deep learning-based algorithm which is capable of learning prognostic information from multiple data sources. By first creating separate models for histology and genomic data, they could fuse the technology into one integrated entity that provides key prognostic information. Finally, they evaluated the model’s efficacy by feeding it data sets from 14 cancer types as well as patient histology and genomic data. Results demonstrated that the models yielded more accurate patient outcome predictions than those incorporating only single sources of information.
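    The late-fusion idea described above (separate encoders for histology and for genomic data whose representations are combined into a single risk prediction) can be sketched roughly as follows. The layer sizes, feature dimensions, and concatenation step are illustrative assumptions, not the published architecture.

```python
# Rough sketch of late fusion of a histology encoder and a genomics encoder
# for risk prediction. Dimensions and fusion choice are illustrative only.
import torch
import torch.nn as nn

class MultimodalRiskModel(nn.Module):
    def __init__(self, histology_dim=1024, genomic_dim=200, hidden=128):
        super().__init__()
        # Encoder for pre-extracted histology features (e.g., slide embeddings).
        self.histology_encoder = nn.Sequential(
            nn.Linear(histology_dim, hidden), nn.ReLU())
        # Encoder for genomic features (e.g., mutation or expression vectors).
        self.genomic_encoder = nn.Sequential(
            nn.Linear(genomic_dim, hidden), nn.ReLU())
        # Fusion head outputs a single risk score.
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, histology, genomics):
        h = self.histology_encoder(histology)
        g = self.genomic_encoder(genomics)
        fused = torch.cat([h, g], dim=-1)   # simple concatenation fusion
        return self.head(fused)             # higher score = higher predicted risk

model = MultimodalRiskModel()
risk = model(torch.randn(4, 1024), torch.randn(4, 200))
print(risk.shape)  # torch.Size([4, 1])
```

    Training such a model would attach a survival objective (for example, a Cox partial likelihood) to the risk output; that step is omitted in this sketch.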
    This study highlights that using AI to integrate different types of clinically informed data to predict disease outcomes is feasible. Mahmood explained that these models could allow researchers to discover biomarkers that incorporate different clinical factors and better understand what type of information they need to diagnose different types of cancer. The researchers also quantitatively studied the importance of each diagnostic modality for individual cancer types and the benefit of integrating multiple modalities.
    The AI models are also capable of elucidating pathologic and genomic features that drive prognostic predictions. The team found that the models used patient immune responses as a prognostic marker without being trained to do so, a notable finding given that previous research shows that patients whose tumors elicit stronger immune responses tend to experience better outcomes.
    While this proof-of-concept model reveals a newfound role for AI technology in cancer care, this research is only a first step in implementing these models clinically. Applying these models in the clinic requires incorporating larger data sets and validating on large independent test cohorts. Going forward, Mahmood aims to integrate even more types of patient information, such as radiology scans, family histories, and electronic medical records, and eventually bring the model to clinical trials.
    “This work sets the stage for larger health care AI studies that combine data from multiple sources,” said Mahmood. “In a broader sense, our findings emphasize a need for building computational pathology prognostic models with much larger datasets and downstream clinical trials to establish utility.”
    Story Source:
    Materials provided by Brigham and Women’s Hospital. Note: Content may be edited for style and length.

  • Artificial intelligence tools predict DNA’s regulatory role and 3D structure

    Newly developed artificial intelligence (AI) programs accurately predicted the role of DNA’s regulatory elements and three-dimensional (3D) structure based solely on its raw sequence, according to two recent studies in Nature Genetics. These tools could eventually shed new light on how genetic mutations lead to disease and could lead to new understanding of how genetic sequence influences the spatial organization and function of chromosomal DNA in the nucleus, said study author Jian Zhou, Ph.D., Assistant Professor in the Lyda Hill Department of Bioinformatics at UTSW.
    “Taken together, these two programs provide a more complete picture of how changes in DNA sequence, even in noncoding regions, can have dramatic effects on its spatial organization and function,” said Dr. Zhou, a member of the Harold C. Simmons Comprehensive Cancer Center, a Lupe Murchison Foundation Scholar in Medical Research, and a Cancer Prevention and Research Institute of Texas (CPRIT) Scholar.
    Only about 1% of human DNA encodes instructions for making proteins. Research in recent decades has shown that much of the remaining noncoding genetic material holds regulatory elements — such as promoters, enhancers, silencers, and insulators — that control how the coding DNA is expressed. How sequence controls the functions of most of these regulatory elements is not well understood, Dr. Zhou explained.
    To better understand these regulatory components, he and colleagues at Princeton University and the Flatiron Institute developed a deep learning model they named Sei, which accurately sorts these snippets of noncoding DNA into 40 “sequence classes” or jobs — for example, as an enhancer for stem cell or brain cell gene activity. These 40 sequence classes, developed using nearly 22,000 data sets from previous studies of genome regulation, cover more than 97% of the human genome. Moreover, Sei can score any sequence by its predicted activity in each of the 40 sequence classes and predict how mutations affect such activities.
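    As a rough illustration of the variant-scoring idea (score a sequence window, score the mutated window, and take the difference in predicted class activities), here is a toy sketch. The tiny convolutional model and one-hot encoding below are stand-ins, not the actual Sei architecture.

```python
# Toy sketch: predict 40 "sequence class" scores from a one-hot DNA window and
# score a variant as the change in predictions. Not the actual Sei model.
import torch
import torch.nn as nn

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq: str) -> torch.Tensor:
    x = torch.zeros(4, len(seq))
    for i, b in enumerate(seq):
        x[BASES[b], i] = 1.0
    return x.unsqueeze(0)               # shape (1, 4, length)

class ToySequenceClassScorer(nn.Module):
    def __init__(self, n_classes=40):
        super().__init__()
        self.conv = nn.Conv1d(4, 32, kernel_size=8)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        h = self.pool(torch.relu(self.conv(x))).squeeze(-1)
        return self.fc(h)               # one score per sequence class

model = ToySequenceClassScorer()
ref = "ACGT" * 25                        # 100-bp reference window
alt = ref[:50] + "A" + ref[51:]          # single-base substitution at position 50
with torch.no_grad():
    delta = model(one_hot(alt)) - model(one_hot(ref))
print(delta.shape)                       # per-class predicted effect of the variant
```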
    By applying Sei to human genetics data, the researchers were able to characterize the regulatory architecture of 47 traits and diseases recorded in the UK Biobank database and explain how mutations in regulatory elements cause specific pathologies. Such capabilities can help researchers gain a more systematic understanding of how genomic sequence changes are linked to diseases and other traits. The findings were published this month.
    In May, Dr. Zhou reported the development of a different tool, called Orca, which predicts the 3D architecture of DNA in chromosomes based on its sequence. Using existing data sets of DNA sequences and structural data derived from previous studies that revealed the molecule’s folds, twists, and turns, Dr. Zhou trained the model to make connections and evaluated the model’s ability to predict structure at various length scales.
    The findings showed that Orca predicted DNA structures both small and large based on their sequences with high accuracy, including for sequences carrying mutations associated with various health conditions including a form of leukemia and limb malformations. Orca also enabled the researchers to generate new hypotheses about how DNA sequence controls its local and large-scale 3D structure.
    Dr. Zhou said that he and his colleagues plan to use Sei and Orca, which are both publicly available on web servers and as open-source code, to further explore the role of genetic mutations in causing the molecular and physical manifestations of diseases — research that could eventually lead to new ways to treat these conditions.
    The Orca study was supported by grants from CPRIT (RR190071), the National Institutes of Health (DP2GM146336), and the UT Southwestern Endowed Scholars Program in Medical Science.
    Story Source:
    Materials provided by UT Southwestern Medical Center. Note: Content may be edited for style and length.

  • Scientists mapped dark matter around galaxies in the early universe

    Scientists have mapped out the dark matter around some of the earliest, most distant galaxies yet.

    The 1.5 million galaxies appear as they were 12 billion years ago, or less than 2 billion years after the Big Bang. Those galaxies distort the cosmic microwave background — light emitted during an even earlier era of the universe — as seen from Earth. That distortion, called gravitational lensing, reveals the distribution of dark matter around those galaxies, scientists report in the Aug. 5 Physical Review Letters.


    Understanding how dark matter collects around galaxies early in the universe’s history could tell scientists more about the mysterious substance. And in the future, this lensing technique could also help scientists unravel a mystery about how matter clumps together in the universe.

    Dark matter is an unknown, massive substance that surrounds galaxies. Scientists have never directly detected dark matter, but they can observe its gravitational effects on the cosmos (SN: 7/22/22). One of those effects is gravitational lensing: When light passes by a galaxy, its mass bends the light like a lens. How much the light bends reveals the mass of the galaxy, including its dark matter.
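    As background for how mass is inferred from the bending of light, the textbook point-mass deflection relation (a standard general-relativity result, not a formula specific to this study) is:

```latex
% Deflection angle of light passing a point mass M at impact parameter b,
% with G the gravitational constant and c the speed of light:
\hat{\alpha} = \frac{4\,G M}{c^{2} b}
% A larger measured deflection implies a larger lensing mass,
% which is how background light traces the dark matter around a galaxy.
```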

    It’s difficult to map dark matter around such distant galaxies, says cosmologist Hironao Miyatake of Nagoya University in Japan. That’s because scientists need a source of light that is farther away than the galaxy acting as the lens. Typically, scientists use even more distant galaxies as the source of that light. But when peering this deep into space, those galaxies are difficult to come by.

    So instead, Miyatake and colleagues turned to the cosmic microwave background, the oldest light in the universe. The team used measurements of lensing of the cosmic microwave background from the Planck satellite, combined with a multitude of distant galaxies observed by the Subaru Telescope in Hawaii (SN: 7/24/18). “The gravitational lensing effect is very small, so we need a lot of lens galaxies,” Miyatake says. The distribution of dark matter around the galaxies matched expectations, the researchers report.

    The researchers also estimated a quantity called sigma-8, a measure of how “clumpy” matter is in the cosmos. For years, scientists have found hints that different measurements of sigma-8 disagree with one another (SN: 8/10/20). That could be a hint that something is wrong with scientists’ theories of the universe. But the evidence isn’t conclusive.
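    For reference, sigma-8 is conventionally defined as the root-mean-square matter density fluctuation in spheres of radius 8 Mpc/h; a standard textbook form (not specific to this paper) is:

```latex
% sigma_8: rms linear matter fluctuation smoothed on spheres of R = 8 h^{-1} Mpc,
% where P(k) is the matter power spectrum and W is a spherical top-hat window.
\sigma_8^{2} = \frac{1}{2\pi^{2}} \int_{0}^{\infty} k^{2}\, P(k)\, W^{2}(kR)\, \mathrm{d}k,
\qquad
W(x) = \frac{3\left(\sin x - x\cos x\right)}{x^{3}}
```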

    “One of the most interesting things in cosmology right now is whether that tension is real or not,” says cosmologist Risa Wechsler of Stanford University, who was not involved with the study. “This is a really nice example of one of the techniques that will help shed light on that.”

    Measuring sigma-8 using early, distant galaxies could help reveal what’s going on. “You want to measure this quantity, this sigma-8, from as many perspectives as possible,” says cosmologist Hendrik Hildebrandt of Ruhr University Bochum in Germany, who was not involved with the study.

    If estimates from different eras of the universe disagree with one another, that might help physicists craft a new theory that could better explain the cosmos. While the new measurement of sigma-8 isn’t precise enough to settle the debate, future projects, such as the Rubin Observatory in Chile, could improve the estimate (SN: 1/10/20).

  • Researchers discover major roadblock in alleviating network congestion

    When users want to send data over the internet faster than the network can handle, congestion can occur — the same way traffic congestion snarls the morning commute into a big city.
    Computers and devices that transmit data over the internet break the data down into smaller packets and use a special algorithm to decide how fast to send those packets. These congestion control algorithms seek to fully discover and utilize available network capacity while sharing it fairly with other users who may be sharing the same network. These algorithms try to minimize delay caused by data waiting in queues in the network.
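    As a rough illustration of the kind of delay-aware rate control described above, here is a minimal sketch of a sender that probes for bandwidth while measured queuing delay stays low and backs off when it grows. The thresholds and update rule are illustrative assumptions, not BBR or any deployed algorithm.

```python
# Minimal sketch of a delay-aware rate controller: additively increase the
# sending rate while queuing delay stays low, back off when it grows.
# Constants are illustrative, not from any deployed algorithm.

def update_rate(rate_pkts_per_s: float,
                rtt_s: float,
                min_rtt_s: float,
                delay_threshold_s: float = 0.005,
                increase: float = 10.0,
                decrease_factor: float = 0.8) -> float:
    queuing_delay = rtt_s - min_rtt_s        # delay attributable to queues
    if queuing_delay < delay_threshold_s:
        return rate_pkts_per_s + increase    # probe for more bandwidth
    return rate_pkts_per_s * decrease_factor # ease off to drain queues

# Toy usage: RTT samples rise as the bottleneck queue builds up.
rate, min_rtt = 100.0, 0.020
for rtt in [0.021, 0.022, 0.024, 0.030, 0.028, 0.022]:
    rate = update_rate(rate, rtt, min_rtt)
    print(f"rtt={rtt*1000:.0f} ms -> rate={rate:.0f} pkts/s")
```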
    Over the past decade, researchers in industry and academia have developed several algorithms that attempt to achieve high rates while controlling delays. Some of these, such as the BBR algorithm developed by Google, are now widely used by many websites and applications.
    But a team of MIT researchers has discovered that these algorithms can be deeply unfair. In a new study, they show that there will always be a network scenario in which at least one sender receives almost no bandwidth compared to other senders; in other words, a problem known as starvation cannot be avoided.
    “What is really surprising about this paper and the results is that when you take into account the real-world complexity of network paths and all the things they can do to data packets, it is basically impossible for delay-controlling congestion control algorithms to avoid starvation using current methods,” says Mohammad Alizadeh, associate professor of electrical engineering and computer science (EECS).
    While Alizadeh and his co-authors weren’t able to find a traditional congestion control algorithm that could avoid starvation, there may be algorithms in a different class that could prevent this problem. Their analysis also suggests that changing how these algorithms work, so that they allow for larger variations in delay, could help prevent starvation in some network situations.

  • Smart microrobots learn how to swim and navigate with artificial intelligence

    Researchers from Santa Clara University, the New Jersey Institute of Technology and the University of Hong Kong have successfully taught microrobots how to swim via deep reinforcement learning, marking a substantial leap in microswimming capability.
    There has been tremendous interest in developing artificial microswimmers that can navigate the world similarly to naturally occurring swimming microorganisms, like bacteria. Such microswimmers hold promise for a vast array of future biomedical applications, such as targeted drug delivery and microsurgery. Yet most artificial microswimmers to date can only perform relatively simple maneuvers with fixed locomotory gaits.
    In their study, published in Communications Physics, the researchers reasoned that microswimmers could learn — and adapt to changing conditions — through AI. Much as humans learning to swim need reinforcement and feedback to stay afloat and propel themselves in various directions under changing conditions, so do microswimmers, though they face a unique set of challenges imposed by the physics of the microscopic world.
    “Being able to swim at the micro-scale by itself is a challenging task,” said On Shun Pak, associate professor of mechanical engineering at Santa Clara University. “When you want a microswimmer to perform more sophisticated maneuvers, the design of their locomotory gaits can quickly become intractable.”
    By combining artificial neural networks with reinforcement learning, the team successfully taught a simple microswimmer to swim and navigate toward any arbitrary direction. When the swimmer moves in certain ways, it receives feedback on how good the particular action is. The swimmer then progressively learns how to swim based on its experiences interacting with the surrounding environment.
    “Similar to a human learning how to swim, the microswimmer learns how to move its ‘body parts’ — in this case three microparticles and extensible links — to self-propel and turn,” said Alan Tsang, assistant professor of mechanical engineering at the University of Hong Kong. “It does so without relying on human knowledge but only on a machine learning algorithm.”
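    A very small sketch of the learning loop described here, using tabular Q-learning on a three-sphere swimmer whose two actions extend or contract its links, follows. The toy displacement model and reward are simplified assumptions, not the paper's neural-network setup.

```python
# Toy sketch of the learning loop: tabular Q-learning on a three-sphere swimmer
# with two extensible links. The displacement model below is a toy stand-in for
# low-Reynolds-number hydrodynamics, chosen so that reciprocal strokes give zero
# net motion while a non-reciprocal cycle moves the body.
import random

def step(state, action):
    """state = (left_extended, right_extended) as 0/1; action toggles one link."""
    left, right = state
    if action == 0:                                    # toggle the left link
        new_state = (1 - left, right)
        dx = (0.6 if right == 0 else 0.4) * (1 if left == 0 else -1)
    else:                                              # toggle the right link
        new_state = (left, 1 - right)
        dx = (0.4 if left == 0 else 0.6) * (1 if right == 0 else -1)
    return new_state, dx                               # reward = displacement

# Standard tabular Q-learning with epsilon-greedy exploration.
Q = {((l, r), a): 0.0 for l in (0, 1) for r in (0, 1) for a in (0, 1)}
alpha, gamma, eps = 0.1, 0.9, 0.1
state = (1, 1)
for _ in range(20000):
    if random.random() < eps:
        action = random.choice((0, 1))
    else:
        action = max((0, 1), key=lambda a: Q[(state, a)])
    nxt, dx = step(state, action)
    Q[(state, action)] += alpha * (dx + gamma * max(Q[(nxt, 0)], Q[(nxt, 1)])
                                   - Q[(state, action)])
    state = nxt

# Greedy rollout: the learned policy should settle into the non-reciprocal
# stroke cycle that produces net forward displacement.
state, x = (1, 1), 0.0
for _ in range(8):
    action = max((0, 1), key=lambda a: Q[(state, a)])
    state, dx = step(state, action)
    x += dx
print(f"net displacement over 8 greedy steps: {x:+.2f}")
```

    In this toy model, repeating the same action (a reciprocal stroke) yields zero net motion, so the learner is rewarded only when it discovers a non-reciprocal stroke cycle, loosely mirroring the constraint the real microswimmer faces.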
    The AI-powered swimmer is able to switch between different locomotory gaits adaptively to navigate toward any target location on its own.
    As a demonstration of the powerful ability of the swimmer, the researchers showed that it could follow a complex path without being explicitly programmed. They also demonstrated the robust performance of the swimmer in navigating under the perturbations arising from external fluid flows.
    “This is our first step in tackling the challenge of developing microswimmers that can adapt like biological cells in navigating complex environments autonomously,” said Yuan-nan Young, professor of mathematical sciences at New Jersey Institute of Technology.
    Such adaptive behaviors are crucial for future biomedical applications of artificial microswimmers in complex media with uncontrolled and unpredictable environmental factors.
    “This work is a key example of how the rapid development of artificial intelligence may be exploited to tackle unresolved challenges in locomotion problems in fluid dynamics,” said Arnold Mathijssen, an expert on microrobots and biophysics at the University of Pennsylvania, who was not involved in the research. “The integration between machine learning and microswimmers in this work will spark further connections between these two highly active research areas.”
    Story Source:
    Materials provided by New Jersey Institute of Technology. Note: Content may be edited for style and length.