More stories


    Human-machine interfaces work underwater, generate their own power

    Wearable human-machine interfaces (HMIs) can be used to control machines, computers, music players, and other systems. A challenge for conventional HMIs is the presence of sweat on human skin.
    In Applied Physics Reviews, published by AIP Publishing, scientists at UCLA describe a type of HMI that is stretchable, inexpensive, and waterproof. The device is based on a soft magnetoelastic sensor array that converts mechanical pressure from the press of a finger into an electrical signal.
    The device involves two main components. The first is a layer that translates mechanical movement into a magnetic response: a set of micromagnets in a porous silicone matrix that converts gentle fingertip pressure into a variation in the magnetic field.
    The second component is a magnetic induction layer consisting of patterned liquid metal coils. These coils respond to the magnetic field changes and generate electricity through the phenomenon of electromagnetic induction.
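    As a rough sketch of the physics involved (not a derivation from the paper itself), the coils obey Faraday's law of induction: a finger press changes the magnetic flux threading each liquid metal coil, and the induced voltage is proportional to how quickly that flux changes,

      \varepsilon = -N \, \frac{d\Phi_B}{dt},

    where N is the number of turns in the coil and \Phi_B is the magnetic flux through it. Faster or firmer presses change the flux more rapidly and therefore produce a larger electrical signal.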
    “Owing to the material’s flexibility and durability, the magnetoelastic sensor array can generate stable power under deformations, such as rolling, folding, and stretching,” said author Jun Chen, from UCLA. “Due to these compelling features, the device can be adopted for human-body powered HMI by transforming human biomechanical activities into electrical signals.”
    The power required to run the HMI comes from the wearer’s movements. This means no batteries or other external power components are required, rendering the HMI more environmentally friendly and sustainable.
    The device was tested in a variety of real-world situations, including under a water spray such as might occur in the shower, during a rainstorm, or during vigorous athletic activity. The device worked well when wet, since the magnetic field was not greatly affected by the presence of water.
    The investigators studied a range of fabrication and assembly techniques to optimize the biomechanical-to-electrical energy conversion of the device. They found they could achieve a balance between performance and flexibility by controlling the thickness of the flexible film and the concentration of the magnetic particles.
    To test their system, the investigators carried out a series of experiments in which a subject applied finger taps to turn a lamp off and on and control a music player.
    “Our magnetoelastic sensor array not only wirelessly functions as the on and off buttons of a lamp but also controls a music player’s command features, representing the actions of play, pause, next, and previous,” Chen said.
    These tests promise new applications for versatile water-resistant HMIs that can be used to control many types of smart devices.
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.


    Realistic computer models of brain cells

    Cedars-Sinai investigators have created the most bio-realistic and complex computer models of individual brain cells — in unparalleled quantity. Their research, published today in the peer-reviewed journal Cell Reports, details how these models could one day answer questions about neurological disorders — and even human intellect — that aren’t possible to explore through biological experiments.
    “These models capture the shape, timing and speed of the electrical signals that neurons fire in order to communicate with each other, which is considered the basis of brain function,” said Costas Anastassiou, PhD, a research scientist in the Department of Neurosurgery at Cedars-Sinai, and senior author of the study. “This lets us replicate brain activity at the single-cell level.”
    The models are the first to combine data sets from different types of laboratory experiments to present a complete picture of the electrical, genetic and biological activity of single neurons. The models can be used to test theories that would require dozens of experiments to examine in the lab, Anastassiou said.
    “Imagine that you wanted to investigate how 50 different genes affect a cell’s biological processes,” Anastassiou said. “You would need to create a separate experiment to ‘knock out’ each gene and see what happens. With our computational models, we will be able to change the recipes of these gene markers for as many genes as we like and predict what will happen.”
    Another advantage of the models is that they allow researchers to completely control experimental conditions. This opens the possibility of establishing that one parameter, such as a protein expressed by a neuron, causes a change in the cell or a disease condition, such as epileptic seizures, Anastassiou said. In the lab, investigators can often show an association, but it is difficult to prove a cause.
    “In laboratory experiments, the researcher doesn’t control everything,” Anastassiou said. “Biology controls a lot. But in a computational simulation, all the parameters are under the creator’s control. In a model, I can change one parameter and see how it affects another, something that is very hard to do in a biological experiment.”
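    As an illustration of the kind of single-parameter control Anastassiou describes, the sketch below simulates a generic leaky integrate-and-fire neuron in Python and varies only its leak conductance to see how the firing changes. This is a deliberately simple stand-in, not the far more detailed Cedars-Sinai models, and all parameter values are illustrative.

      def simulate_lif(g_leak_nS, i_input_pA=300.0, t_stop_ms=500.0, dt_ms=0.1):
          """Count spikes of a leaky integrate-and-fire neuron driven by a constant current."""
          c_m = 200.0               # membrane capacitance in pF (illustrative)
          e_leak = -70.0            # resting potential in mV
          v_thresh, v_reset = -50.0, -65.0
          v = e_leak
          spikes = 0
          for _ in range(int(t_stop_ms / dt_ms)):
              dv = (-g_leak_nS * (v - e_leak) + i_input_pA) / c_m * dt_ms
              v += dv
              if v >= v_thresh:
                  spikes += 1
                  v = v_reset
          return spikes

      # Change one parameter (the leak conductance) and observe its effect on the output.
      for g in (5.0, 10.0, 20.0):
          print(f"g_leak = {g:5.1f} nS -> {simulate_lif(g)} spikes in 500 ms")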
    To create their models, Anastassiou and his team from the Anastassiou Lab (members of the Departments of Neurology and Neurosurgery, the Board of Governors Regenerative Medicine Institute and the Center for Neural Science and Medicine at Cedars-Sinai) used two different sets of data on the mouse primary visual cortex, the area of the brain that processes information coming from the eyes.


    Mountain events could improve safety with ultra-high resolution weather models

    In late May of 2021, 172 runners set out to tackle a 100-kilometer (62-mile) ultramarathon in northwestern China. By midday, as the runners made their way through a rugged, high-elevation part of the course, temperatures plunged, strong winds whipped around the hillslopes and freezing rain and hail pummeled the runners. By the next day, the death toll from the sudden storm had risen to 21.
    A new study revisits the deadly event with the goal of testing how hyper-local modeling can improve forecast accuracy for mountain events. The runners ran into trouble because hourly weather forecasts for the race underestimated the storm: the steep mountain slopes had highly localized effects on wind, precipitation and temperature at a scale too fine for the event’s forecasts to resolve, according to the new study, which is published in the AGU journal JGR Atmospheres.
    Hourly forecasts for the 2021 race were based on relatively large-scale atmospheric processes, with models running at a resolution of three kilometers — sufficient for most regional predictions, but too coarse to capture the “hyper-local” weather like the storm that struck the course, says Haile Xue, a climate scientist at China’s CMA Earth System Modeling and Prediction Centre and lead author of the new study. Even though a wind and cold temperature advisory had been issued the night before, it lacked the resolution required to pinpoint the danger zones on the course.
    “An apparent temperature forecast based on a high-resolution simulation may be helpful” in addition to general regional forecasts, Xue says. Conditions like the 2021 storm are common on extremely high mountains such as Mount Everest and Denali, the paper states. Such storms are less frequent at lower elevations, but when they do occur, they can strike suddenly and lead to injuries and loss of life.
    The new study uses topographic data from the course, at tens of meters of resolution rather than kilometers, to model the hyper-local weather conditions created by the mountains. With a resolution two orders of magnitude finer than the original forecasts for that weekend, as well as detailed considerations of mountainous topography, the model accurately recreated the storm conditions from the race and even offered greater insight into what may have happened that day.
    The original forecast included a large-scale cold front, which would have led to temperature drops and stronger — but not extreme — winds, with only a low-level wind advisory issued. The new study found the apparent temperature could have dropped as low as -10 degrees Celsius (14 degrees Fahrenheit), about 3 degrees Celsius cooler than what the original models predicted.
    The model also generated an “impact forecast” built around apparent temperature, which accounts for humidity and could therefore have read lower still; ideally it would also include the effect of wet clothes or skin on body temperature. Including such measures in forecasts, Xue says, could help mitigate the risk of hypothermia.
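    For context, one widely used apparent-temperature formulation is Steadman’s (the non-radiative form adopted by the Australian Bureau of Meteorology); the study’s exact formulation is not given here, so the Python sketch below is purely illustrative of how humidity and wind pull the apparent temperature below the air temperature.

      import math

      def apparent_temperature(t_air_c, rel_humidity_pct, wind_speed_ms):
          """Steadman apparent temperature (non-radiative form), in degrees Celsius."""
          # Water vapour pressure in hPa, from air temperature and relative humidity.
          e = (rel_humidity_pct / 100.0) * 6.105 * math.exp(17.27 * t_air_c / (237.7 + t_air_c))
          return t_air_c + 0.33 * e - 0.70 * wind_speed_ms - 4.00

      # Near-freezing air, saturated after rain, with a strong wind (illustrative values).
      print(round(apparent_temperature(t_air_c=1.0, rel_humidity_pct=95.0, wind_speed_ms=15.0), 1))
      # prints approximately -11.4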
    Along with the weather, race planning and gear requirements for the runners were examined after the event. Many endurance events require ample layers for warmth and rain protection; for this race they were suggested but not required, which could have contributed to the loss of life. Both accurate weather forecasts and gear requirements are essential for an event to be safe.
    Story Source:
    Materials provided by American Geophysical Union. Note: Content may be edited for style and length.


    Microrobotics in endodontic treatment, diagnostics

    With its irregularities and anatomical complexities, the root canal system is one of the most clinically challenging spaces in the oral cavity. As a result, biofilm not fully cleared from the nooks and crannies of the canals remains a leading cause of treatment failure and persistent endodontic infections, and there are limited means to diagnose or assess the efficacy of disinfection. One day, clinicians may have a new tool to overcome these challenges in the form of microrobots.
    In a proof-of-concept study, researchers from Penn Dental Medicine and its Center for Innovation & Precision Dentistry (CiPD) have shown that microrobots can access the difficult-to-reach surfaces of the root canal with controlled precision, treating and disrupting biofilms and even retrieving samples for diagnostics, enabling a more personalized treatment plan. The Penn team shared their findings on the use of two different microrobotic platforms for endodontic therapy in the August issue of the Journal of Dental Research; the work was selected for the issue’s cover.
    “The technology could enable multimodal functionalities to achieve controlled, precision targeting of biofilms in hard-to-reach spaces, obtain microbiological samples, and perform targeted drug delivery,” says Dr. Alaa Babeer, lead author of the study and a Penn Dental Medicine Doctor of Science in Dentistry (DScD) and endodontics graduate, who is now in the lab of Dr. Michel Koo, co-director of the CiPD.
    In both platforms, the building blocks for the microrobots are iron oxide nanoparticles (NPs) that have both catalytic and magnetic activity and have been FDA approved for other uses. In the first platform, a magnetic field is used to concentrate the NPs into aggregated microswarms and steer them magnetically to the apical area of the tooth to disrupt and retrieve biofilms through a catalytic reaction. The second platform uses 3D printing to create miniaturized helix-shaped robots embedded with iron oxide NPs. These helicoids are guided by magnetic fields to move within the root canal, transporting bioactives or drugs that can be released on site.
    “This technology offers the potential to advance clinical care on a variety of levels,” says Dr. Koo, co-corresponding author of the study with Dr. Edward Steager, a senior research investigator in Penn’s School of Engineering and Applied Science.
    “One important aspect is the ability to have diagnostic as well as therapeutic applications. In the microswarm platform, we can not only remove the biofilm, but also retrieve it, enabling us to identify what microorganisms caused the infection. In addition, the ability to conform to the narrow and difficult-to-reach spaces within the root canal allows for more effective disinfection than the files and instrumentation techniques presently used.”
    A Collaborative System


    Robot helps reveal how ants pass on knowledge

    Scientists have developed a small robot to understand how ants teach one another.
    The team built the robot to mimic the behaviour of rock ants that use one-to-one tuition, in which an ant that has discovered a much better new nest can teach the route there to another individual.
    The findings, published in the Journal of Experimental Biology today, confirm that most of the important elements of teaching in these ants are now understood because the teaching ant can be replaced by a machine.
    Key to this process of teaching is tandem running, in which one ant literally leads another quite slowly along a route to the new nest. The pupil ant learns the route well enough to find its own way back home and then lead a tandem run of its own with another ant to the new nest, and so on.
    Prof Nigel Franks of Bristol’s School of Biological Sciences said: “Teaching is so important in our own lives that we spend a great deal of time either instructing others or being taught ourselves. This should cause us to wonder whether teaching actually occurs among non-human animals. And, in fact, the first case in which teaching was demonstrated rigorously in any other animal was in an ant.” The team wanted to determine what was necessary and sufficient in such teaching. If they could build a robot that successfully replaced the teacher, this should show that they largely understood all the essential elements in this process.
    The researchers built a large arena so there was an appreciable distance between the ants’ old nest, which was deliberately made to be of low quality, and a new, much better one that ants could be led to by a robot. A gantry was placed atop the arena to move back and forth with a small sliding robot attached to it, so that the scientists could direct the robot to move along either straight or wavy routes. Attractive scent glands, from a worker ant, were attached to the robot to give it the pheromones of an ant teacher.


    Leadership online: Charisma matters most in video communication

    Managers need to make a consistent impression in order to motivate and inspire people, and that applies even more to video communication than to other digital channels. That is the result of a study by researchers at Karlsruhe Institute of Technology (KIT). They investigated the influence that charismatic leadership tactics used in text, audio and video communication channels have on employee performance. They focused on mobile work and the gig economy, in which jobs are flexibly assigned to freelancers via online platforms.
    Since the onset of the Covid-19 pandemic, more and more people are working partly or entirely from home or in mobile work arrangements. At the same time, the so-called gig economy is growing. It involves the flexible assignment of short-term work to freelancers or part-time, low-wage staff via online platforms. Both trends are accelerating the digitalization of work. However, compared to face-to-face conversation between people in the same place, communication through digital channels offers fewer opportunities to motivate people and show charisma. This presents new challenges for managers. The impact of charismatic leadership tactics (CLTs) and the choice of communications channel (text, audio or video) on staff performance is the subject of a study by Petra Nieken, professor of human resource management at the Institute of Management at KIT. The study has been published in the journal The Leadership Quarterly.
    Charismatic Leadership Tactics Can Be Learned and Objectively Observed
    A charismatic leadership style can be learned; researchers speak of charismatic leadership tactics, which include verbal, paraverbal and non-verbal means such as metaphors, anecdotes, contrasts, rhetorical questions, pitch and tone of voice, and gestures. CLTs can be objectively observed and measured. They can be selectively changed in randomized controlled trials. “Managers can use the entire range of CLTs in face-to-face meetings. Digital communication reduces the opportunities to signal charisma,” says Nieken. “Depending on the communication channel, visual and/or acoustic cues can be missing. The question is whether people’s performance suffers as a result or if they adjust their expectations to the selected channel.”
    In the first part of her study, Nieken conducted a field test with text, audio and video communication channels in which a task description was presented neutrally in one case and with the use of as many CLTs as possible in the other. In the neutral case, video messages led to lower performance than did audio and text messages. In contrast, there were no significant differences in performance in the CLT case. “The results show a positive correlation between video communication and charismatic communication; the charismatic video led to better performance than the neutral video,” explains Nieken. “So we can conclude that it’s most important for managers to convey a consistent impression when they use the video channel.”
    Traditional Charisma Questionnaires Do Not Predict Staff Performance
    In the second part of her study, Nieken had the different cases assessed with traditional questionnaires like the Multifactor Leadership Questionnaire (MLQ) and compared the results with those from the first part. Charisma noted in the questionnaires correlated with the use of CLTs but not with staff performance. “Traditional questionnaires like the MLQ are not suitable for predicting how people will perform in mobile work situations, working from home or in the gig economy,” concludes Nieken.
    Story Source:
    Materials provided by Karlsruher Institut für Technologie (KIT). Note: Content may be edited for style and length.


    AI pilot can navigate crowded airspace

    A team of researchers at Carnegie Mellon University believes it has developed the first AI pilot that enables autonomous aircraft to navigate a crowded airspace.
    The artificial intelligence can safely avoid collisions, predict the intent of other aircraft, track them and coordinate its actions with theirs, and communicate over the radio with pilots and air traffic controllers. The researchers aim to develop the AI so the behaviors of their system will be indistinguishable from those of a human pilot.
    “We believe we could eventually pass the Turing Test,” said Jean Oh, an associate research professor at CMU’s Robotics Institute (RI) and a member of the AI pilot team, referring to the test of an AI’s ability to exhibit intelligent behavior equivalent to a human.
    To interact with other aircraft as a human pilot would, the AI uses both vision and natural language to communicate its intent to other aircraft, whether piloted or not. This behavior leads to safe and socially compliant navigation. Researchers achieved this implicit coordination by training the AI on data collected at the Allegheny County Airport and the Pittsburgh-Butler Regional Airport that included air traffic patterns, images of aircraft and radio transmissions.
    The AI uses six cameras and a computer vision system to detect nearby aircraft in a manner similar to that of a human pilot. Its automatic speech recognition function uses natural language processing techniques to both understand incoming radio messages and communicate with pilots and air traffic controllers using speech.
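    A highly simplified, hypothetical sketch of how such a vision-plus-radio perception loop might be organized is shown below; the detector and speech recognizer are stubbed out, and none of the names, phrases or thresholds come from CMU’s system.

      from dataclasses import dataclass
      from typing import List

      @dataclass
      class Track:
          bearing_deg: float   # direction to the detected traffic, degrees from own heading
          range_m: float       # estimated distance to the traffic, in metres

      def plan_response(tracks: List[Track], radio_text: str) -> str:
          """Toy decision rule combining visual tracks with a parsed radio call.

          In a real system the tracks would come from a camera-based detector and the
          text from automatic speech recognition; both inputs are stubbed here.
          """
          nearby = [t for t in tracks if t.range_m < 3000]
          if nearby and "final" in radio_text.lower():
              return "extend downwind and sequence behind the traffic on final"
          if nearby:
              return "maintain visual separation"
          return "continue as planned"

      # Stub inputs standing in for the vision and speech front ends.
      tracks = [Track(bearing_deg=45.0, range_m=2500.0)]
      radio = "traffic, Cessna on final for runway two-six"
      print(plan_response(tracks, radio))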
    Advances in autonomous aircraft will broaden opportunities for drones, air taxis, helicopters and other aircraft to operate — moving people and goods, inspecting infrastructure, treating fields to protect crops, and monitoring for poaching or deforestation — often without a pilot behind the controls. These aircraft will have to fly, however, in an airspace already crowded with small airplanes, medical helicopters and more.


    New AI technology integrates multiple data types to predict cancer outcomes

    While it’s long been understood that predicting outcomes in patients with cancer requires considering many factors, such as patient history, genes and disease pathology, clinicians struggle with integrating this information to make decisions about patient care. A new study by researchers from the Mahmood Lab at Brigham and Women’s Hospital reveals a proof-of-concept model that uses artificial intelligence (AI) to combine multiple types of data from different sources to predict patient outcomes for 14 different types of cancer. Results are published in Cancer Cell.
    Experts depend on several sources of data, like genomic sequencing, pathology, and patient history, to diagnose different types of cancer and predict how they will progress. While existing technology enables them to use this information to predict outcomes, manually integrating data from different sources is challenging and experts often find themselves making subjective assessments.
    “Experts analyze many pieces of evidence to predict how well a patient may do,” said Faisal Mahmood, PhD, an assistant professor in the Division of Computational Pathology at the Brigham and associate member of the Cancer Program at the Broad Institute of Harvard and MIT. “These early examinations become the basis of making decisions about enrolling in a clinical trial or specific treatment regimens. But that means that this multimodal prediction happens at the level of the expert. We’re trying to address the problem computationally.”
    Through these new AI models, Mahmood and colleagues uncovered a means to integrate several forms of diagnostic information computationally to yield more accurate outcome predictions. The AI models demonstrate the ability to make prognostic determinations while also uncovering the predictive bases of features used to predict patient risk — a property that could be used to uncover new biomarkers.
    Researchers built the models using The Cancer Genome Atlas (TCGA), a publicly available resource containing data on many different types of cancer. They then developed a multimodal deep learning-based algorithm which is capable of learning prognostic information from multiple data sources. By first creating separate models for histology and genomic data, they could fuse the technology into one integrated entity that provides key prognostic information. Finally, they evaluated the model’s efficacy by feeding it data sets from 14 cancer types as well as patient histology and genomic data. Results demonstrated that the models yielded more accurate patient outcome predictions than those incorporating only single sources of information.
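    As a minimal sketch of the general idea of fusing two modality-specific encoders into a single risk prediction (the dimensions, concatenation-based fusion, and layer sizes below are illustrative assumptions, not the published architecture):

      import torch
      import torch.nn as nn

      class LateFusionRiskModel(nn.Module):
          """Toy two-branch model: one encoder per modality, fused into a single risk score."""
          def __init__(self, histology_dim=1024, genomic_dim=200, hidden_dim=64):
              super().__init__()
              self.histology_encoder = nn.Sequential(nn.Linear(histology_dim, hidden_dim), nn.ReLU())
              self.genomic_encoder = nn.Sequential(nn.Linear(genomic_dim, hidden_dim), nn.ReLU())
              self.risk_head = nn.Linear(2 * hidden_dim, 1)   # fused features -> risk score

          def forward(self, histology_features, genomic_features):
              h = self.histology_encoder(histology_features)
              g = self.genomic_encoder(genomic_features)
              fused = torch.cat([h, g], dim=-1)               # simple concatenation fusion
              return self.risk_head(fused)

      model = LateFusionRiskModel()
      histology = torch.randn(4, 1024)   # stand-in for slide-level histology embeddings
      genomics = torch.randn(4, 200)     # stand-in for selected genomic features
      print(model(histology, genomics).shape)   # torch.Size([4, 1])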
    This study highlights that using AI to integrate different types of clinically informed data to predict disease outcomes is feasible. Mahmood explained that these models could allow researchers to discover biomarkers that incorporate different clinical factors and better understand what type of information they need to diagnose different types of cancer. The researchers also quantitatively studied the importance of each diagnostic modality for individual cancer types and the benefit of integrating multiple modalities.
    The AI models are also capable of elucidating pathologic and genomic features that drive prognostic predictions. The team found that the models used patient immune responses as a prognostic marker without being trained to do so, a notable finding given that previous research shows that patients whose tumors elicit stronger immune responses tend to experience better outcomes.
    While this proof-of-concept model reveals a newfound role for AI technology in cancer care, this research is only a first step in implementing these models clinically. Applying these models in the clinic requires incorporating larger data sets and validating on large independent test cohorts. Going forward, Mahmood aims to integrate even more types of patient information, such as radiology scans, family histories, and electronic medical records, and eventually bring the model to clinical trials.
    “This work sets the stage for larger health care AI studies that combine data from multiple sources,” said Mahmood. “In a broader sense, our findings emphasize a need for building computational pathology prognostic models with much larger datasets and downstream clinical trials to establish utility.”
    Story Source:
    Materials provided by Brigham and Women’s Hospital. Note: Content may be edited for style and length.