More stories

  • Talk with your hands? You might think with them too!

    How do we understand words? Scientists don’t fully understand what happens when a word pops into your brain. A research group led by Professor Shogo Makioka at the Graduate School of Sustainable System Sciences, Osaka Metropolitan University, wanted to test the idea of embodied cognition, which proposes that people understand the words for objects through how they interact with those objects. The researchers therefore devised a test to observe the semantic processing of words while limiting how participants could interact with the objects the words describe.
    Words are expressed in relation to other words; a “cup,” for example, can be a “container, made of glass, used for drinking.” However, you can only use a cup if you understand that to drink from a cup of water, you hold it in your hand and bring it to your mouth, or that if you drop the cup, it will smash on the floor. Without understanding this, it would be difficult to create a robot that can handle a real cup. In artificial intelligence research, this challenge of mapping symbols onto the real world is known as the symbol grounding problem.
    How do humans achieve symbol grounding? Cognitive psychology and cognitive science propose the concept of embodied cognition, where objects are given meaning through interactions with the body and the environment.
    To test embodied cognition, the researchers conducted experiments to see how the participants’ brains responded to words that describe objects that can be manipulated by hand, when the participants’ hands could move freely compared to when they were restrained.
    “It was very difficult to establish a method for measuring and analyzing brain activity. The first author, Ms. Sae Onishi, worked persistently to come up with a task, in a way that we were able to measure brain activity with sufficient accuracy,” Professor Makioka explained.
    In the experiment, two words such as “cup” and “broom” were presented to participants on a screen. They were asked to compare the relative sizes of the objects those words represented and to verbally answer which object was larger — in this case, “broom.” Comparisons were made between words describing two types of objects: hand-manipulable objects, such as “cup” or “broom,” and nonmanipulable objects, such as “building” or “lamppost.” This let the researchers observe how each type was processed.
    During the tests, the participants placed their hands on a desk, where they were either free or restrained by a transparent acrylic plate. When the two words were presented on the screen, to answer which one represented a larger object, the participants needed to think of both objects and compare their sizes, forcing them to process each word’s meaning.
    Brain activity was measured with functional near-infrared spectroscopy (fNIRS), which has the advantage of taking measurements without imposing further physical constraints. The measurements focused on the intraparietal sulcus and the inferior parietal lobule (supramarginal gyrus and angular gyrus) of the left brain, which are responsible for semantic processing related to tools. The speed of the verbal response was measured to determine how quickly each participant answered after the words appeared on the screen.
    The results showed that the activity of the left brain in response to hand-manipulable objects was significantly reduced by hand restraints. Verbal responses were also affected by hand constraints. These results indicate that constraining hand movement affects the processing of object meaning, which supports the idea of embodied cognition, and they suggest that embodied cognition could also help artificial intelligence learn the meaning of objects. The paper was published in Scientific Reports.
    Story Source:
    Materials provided by Osaka Metropolitan University. Note: Content may be edited for style and length.

  • Using artificial intelligence to improve tuberculosis treatments

    Imagine you have 20 new compounds that have shown some effectiveness in treating a disease like tuberculosis (TB), which affects 10 million people worldwide and kills 1.5 million each year. For effective treatment, patients will need to take a combination of three or four drugs for months or even years because the TB bacteria behave differently in different environments in cells — and in some cases evolve to become drug-resistant. Twenty compounds in three- and four-drug combinations offer nearly 6,000 possible combinations. How do you decide which drugs to test together?
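    That “nearly 6,000” follows directly from counting the ways to choose 3 or 4 compounds out of 20. A quick sanity check in Python:

    ```python
    from math import comb

    # Ways to form 3- and 4-drug cocktails from 20 candidate compounds.
    n = 20
    three_drug = comb(n, 3)  # 1,140 combinations
    four_drug = comb(n, 4)   # 4,845 combinations

    print(three_drug + four_drug)  # 5,985 -- "nearly 6,000"
    ```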
    In a recent study, published in the September issue of Cell Reports Medicine, researchers from Tufts University used data from large studies that contained laboratory measurements of two-drug combinations of 12 anti-tuberculosis drugs. Using mathematical models, the team discovered a set of rules that drug pairs need to satisfy to be potentially good treatments as part of three- and four-drug cocktails.
    Measuring drug pairs rather than full three- and four-drug combinations cuts down significantly on the amount of testing that needs to be done before moving a drug combination into further study.
    “Using the design rules we’ve established and tested, we can substitute one drug pair for another drug pair and know with a high degree of confidence that the drug pair should work in concert with the other drug pair to kill the TB bacteria in the rodent model,” says Bree Aldridge, associate professor of molecular biology and microbiology at Tufts University School of Medicine and of biomedical engineering at the School of Engineering, and an immunology and molecular microbiology program faculty member at the Graduate School of Biomedical Sciences. “The selection process we developed is both more streamlined and more accurate in predicting success than prior processes, which necessarily considered fewer combinations.”
    Aldridge, the paper’s corresponding author, is also associate director of Tufts’ Stuart B. Levy Center for Integrated Management of Antimicrobial Resistance. Her lab previously developed, and still uses, DiaMOND (diagonal measurement of n-way drug interactions), a method for systematically studying pairwise and higher-order drug combination interactions to identify shorter, more efficient treatment regimens for TB and potentially other bacterial infections. With the design rules established in the new study, the researchers believe they can increase the speed at which scientists determine which drug combinations will most effectively treat tuberculosis, the second leading infectious killer in the world.
    Story Source:
    Materials provided by Tufts University. Note: Content may be edited for style and length.

  • Dense liquid droplets act as cellular computers

    An emerging field explores how groups of molecules condense together inside cells, the way oil droplets assemble and separate from water in a vinaigrette.
    In human cells, “liquid-liquid phase separation” occurs because similar, large molecules glom together into dense droplets that separate from the more dilute parts of the fluid cell interior. Past work had suggested that evolution harnessed the natural formation of these “condensates” to organize cells, providing, for instance, isolated spaces for the building of cellular machines.
    Furthermore, abnormal, condensed — also called “tangled” — groups of molecules in droplets are nearly always present in the cells of patients with neurodegenerative conditions, including Alzheimer’s disease. While no one knows why such condensates form, one new theory argues that the biophysical properties of cell interiors change as people age, driven in part by “molecular crowding,” which packs more molecules into the same space and thereby alters phase separation.
    Researchers compare condensates to microprocessors, the computers built into circuits, because both recognize incoming information and calculate responses to it. Despite the suspected impact of physical changes on these liquid processors, the field has struggled to clarify the mechanisms connecting phase separation, condensate formation, and computation based on chemical signals, which occurs at a much smaller scale, researchers say. This is because natural condensates have so many functions that experiments struggle to delineate them.
    To address this challenge, researchers at NYU Grossman School of Medicine and the German Center for Neurodegenerative Diseases built an artificial system that revealed how the formation of condensates changes the action, at the molecular level, of enzymes called kinases, an example of chemical computation. Kinases are protein switches that influence cellular processes by phosphorylating target molecules, that is, by attaching a molecule called a phosphate group to them.
    The new analysis, published online September 14 in Molecular Cell, found that the formation of engineered condensates during phase separation offered more “sticky” regions where medically important kinases and their targets could interact and trigger phosphorylation signals.

  • New method for comparing neural networks exposes how artificial intelligence works

    A team at Los Alamos National Laboratory has developed a novel approach for comparing neural networks that looks within the “black box” of artificial intelligence to help researchers understand neural network behavior. Neural networks recognize patterns in datasets; they are used everywhere in society, in applications such as virtual assistants, facial recognition systems and self-driving cars.
    “The artificial intelligence research community doesn’t necessarily have a complete understanding of what neural networks are doing; they give us good results, but we don’t know how or why,” said Haydn Jones, a researcher in the Advanced Research in Cyber Systems group at Los Alamos. “Our new method does a better job of comparing neural networks, which is a crucial step toward better understanding the mathematics behind AI.”
    Jones is the lead author of the paper “If You’ve Trained One You’ve Trained Them All: Inter-Architecture Similarity Increases With Robustness,” which was presented recently at the Conference on Uncertainty in Artificial Intelligence. In addition to studying network similarity, the paper is a crucial step toward characterizing the behavior of robust neural networks.
    Neural networks are high-performing but fragile. For example, self-driving cars use neural networks to detect signs. When conditions are ideal, they do this quite well. However, the smallest aberration — such as a sticker on a stop sign — can cause the neural network to misidentify the sign, so the car may never stop.
    To improve neural networks, researchers are looking at ways to improve network robustness. One state-of-the-art approach involves “attacking” networks during their training process. Researchers intentionally introduce aberrations and train the AI to ignore them. This process is called adversarial training and essentially makes it harder to fool the networks.
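    The article does not spell out the attack used; as an illustration only, one common form of adversarial training perturbs each batch with the fast gradient sign method (FGSM) before the weight update. A minimal PyTorch-style sketch, in which the model, optimizer, and epsilon are assumed placeholders:

    ```python
    import torch
    import torch.nn.functional as F

    def adversarial_training_step(model, x, y, optimizer, epsilon=0.03):
        """One training step on FGSM-perturbed inputs (illustrative sketch)."""
        # Compute the gradient of the loss with respect to the inputs.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()

        # Nudge each input in the direction that most increases the loss.
        x_adv = (x + epsilon * x.grad.sign()).detach()

        # Train on the perturbed batch so the network learns to ignore the aberration.
        optimizer.zero_grad()
        adv_loss = F.cross_entropy(model(x_adv), y)
        adv_loss.backward()
        optimizer.step()
        return adv_loss.item()
    ```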
    Jones, Los Alamos collaborators Jacob Springer and Garrett Kenyon, and Jones’ mentor Juston Moore, applied their new metric of network similarity to adversarially trained neural networks, and found, surprisingly, that adversarial training causes neural networks in the computer vision domain to converge to very similar data representations, regardless of network architecture, as the magnitude of the attack increases.
    “We found that when we train neural networks to be robust against adversarial attacks, they begin to do the same things,” Jones said.
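    The article doesn’t describe the similarity metric itself. For a flavor of how such comparisons work, one widely used representational similarity measure is linear centered kernel alignment (CKA), which compares the activations two networks produce on the same inputs; the sketch below is illustrative, not the Los Alamos team’s metric:

    ```python
    import numpy as np

    def linear_cka(X, Y):
        """Linear CKA between activation matrices (n_samples x n_features).

        Illustrative only: a standard representational similarity measure,
        not necessarily the metric used in the Los Alamos paper.
        """
        # Center each feature over the samples.
        X = X - X.mean(axis=0, keepdims=True)
        Y = Y - Y.mean(axis=0, keepdims=True)

        # Ratio is 1 when the representations match up to rotation and scale.
        cross = np.linalg.norm(Y.T @ X, "fro") ** 2
        return cross / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

    # Example: one network's activations versus a rotated copy of them.
    rng = np.random.default_rng(0)
    acts_a = rng.normal(size=(1000, 256))
    rotation, _ = np.linalg.qr(rng.normal(size=(256, 256)))
    acts_b = acts_a @ rotation
    print(linear_cka(acts_a, acts_b))  # ~1.0: identical up to rotation
    ```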
    There has been extensive effort in industry and in the academic community searching for the “right architecture” for neural networks, but the Los Alamos team’s findings indicate that the introduction of adversarial training narrows this search space substantially. As a result, the AI research community may not need to spend as much time exploring new architectures, knowing that adversarial training causes diverse architectures to converge to similar solutions.
    “By finding that robust neural networks are similar to each other, we’re making it easier to understand how robust AI might really work. We might even be uncovering hints as to how perception occurs in humans and other animals,” Jones said.
    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory. Note: Content may be edited for style and length.

  • Intelligent cooperation to provide surveillance and epidemic services in smart cities

    Unmanned aerial vehicles (UAVs) have enormous potential to provide people with a safe environment and epidemic prevention services. Scientists at Incheon National University have harnessed this potential by designing a cooperative infrastructure for artificial intelligence-assisted aerial and ground operations that uses UAVs and mobile robots, and that can provide surveillance and epidemic prevention services to smart cities.
    There has been a lot of interest in mobile robots and unmanned aerial vehicles (UAVs) in recent years, primarily because these technologies promise immense benefits. With the rise of 5G technology, UAVs, or drones, and mobile robots are expected to efficiently and safely provide a wide range of services in smart cities, including surveillance and epidemic prevention. It is now well established that robots can be deployed in various environments to perform activities like surveillance and rescue operations. To date, however, these operations have been independent of each other, at best running in parallel. To realize the full potential of UAVs and mobile robots, we need to use the two technologies together so that they can support each other and augment each other’s functions.
    To this end, a team of researchers led by Associate Professor Hyunbum Kim from Incheon National University, South Korea, has designed an artificial intelligence (AI)-assisted cooperative infrastructure for UAVs and mobile robots. In a paper published on 13 July 2022 in IEEE Network (volume 36, issue 3), the researchers outline a structure that can use UAVs and mobile robots in public and private areas for multiple operations, such as patrolling, accident detection and rescue, and epidemic prevention. According to Dr. Kim, “It is critical to look at surveillance and unprecedented epidemic spread such as COVID-19 together. This is why we designed the next generation system to focus on aerial-ground surveillance and epidemic prevention supported by intelligent mobile robots and smart UAVs.”
    The system designed by the team is composed of two subsystems, one for public areas and one for private areas. Each subsystem is built around a Centralized Administrator Center (CAC). The CAC is connected to various Unified Rendezvous Stations (URSs) situated in public areas; the URSs are where the UAVs and mobile robots are replenished and share data. Mobile robots are also equipped with charging facilities to recharge docked UAVs. The public subsystem patrols public areas, detects accidents and calamities, provides aid, and performs epidemic prevention activities such as transporting medical equipment. The private subsystem can provide rapid medical deliveries and screening tests to homes. A rough structural sketch follows below.
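    The article stays at this high level; purely as an illustration of the described hierarchy (every class, field, and policy here is hypothetical, not taken from the paper), the pieces might be modeled like this:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Vehicle:
        # Hypothetical names; the paper does not define a software interface.
        vehicle_id: str
        kind: str                  # "uav" or "mobile_robot"
        battery_pct: float = 100.0

    @dataclass
    class RendezvousStation:
        """Unified Rendezvous Station (URS): replenishment and data-sharing point."""
        station_id: str
        docked: list = field(default_factory=list)

        def dock(self, vehicle):
            vehicle.battery_pct = 100.0  # recharge on docking
            self.docked.append(vehicle)

    @dataclass
    class AdministratorCenter:
        """Centralized Administrator Center (CAC) coordinating all URSs."""
        stations: list = field(default_factory=list)

        def dispatch(self, task):
            # Placeholder policy: send any sufficiently charged docked vehicle.
            for station in self.stations:
                for vehicle in station.docked:
                    if vehicle.battery_pct > 50.0:
                        station.docked.remove(vehicle)
                        print(f"{vehicle.vehicle_id} dispatched for {task}")
                        return vehicle
            return None
    ```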
    But what about privacy under such surveillance? Dr. Kim allays these concerns, saying, “Privacy is indeed a major concern for any surveillance mechanism. Therefore, we have created different privacy settings for different systems. For the public system, there are restricted districts where only authorized public UAVs can enter. For the private system, there are permanent private zones where no UAVs can enter except in emergencies and temporal access zones where permitted UAVs can enter with legal permission from the owners.”
    The authors are optimistic about the potential of this infrastructure to improve people’s lives. The system can provide a vast array of services, from detecting and preventing potential terror attacks in public spaces to detecting and extinguishing fires in private homes. Indeed, two are better than one, and we look forward to living in this cooperative future!
    Story Source:
    Materials provided by Incheon National University. Note: Content may be edited for style and length.

  • Tiny, caterpillar-like soft robot folds, rolls, grabs and degrades

    When you hear the term “robot,” you might think of complicated machinery working in factories or roving on other planets. But “millirobots” might change that. They’re robots about as wide as a finger that someday could deliver drugs or perform minimally invasive surgery. Now, researchers reporting in ACS Applied Polymer Materials have developed a soft, biodegradable, magnetic millirobot inspired by the walking and grabbing capabilities of insects.
    Some soft millirobots are already being developed for a variety of biomedical applications, thanks to their small size and ability to be powered externally, often by a magnetic field. Their unique structures allow them to inch or roll themselves through the bumpy tissues of our gastrointestinal tract, for example. They could someday even be coated in a drug solution and deliver the medicine exactly where it’s needed in the body. However, most millirobots are made of non-degradable materials, such as silicone, which means they’d have to be surgically removed if used in clinical applications. In addition, these materials aren’t that flexible and don’t allow for much fine-tuning of the robot’s properties, limiting their adaptability. So, Wanfeng Shang, Yajing Shen and colleagues wanted to create a millirobot out of soft, biodegradable materials that can grab, roll and climb, but then easily dissolve away after its job is done.
    As a proof of concept, the researchers created a millirobot using a gelatin solution mixed with iron oxide microparticles. Placing the material above a permanent magnet caused the microparticles in the solution to push the gel outward, forming insect-like “legs” along the lines of the magnetic field. Then, the hydrogel was placed in the cold to make it more solid. The final step was to soak the material in ammonium sulfate to cause cross-linking in the hydrogel, making it even stronger. Changing various factors, such as the composition of the ammonium sulfate solution, the thickness of the gel or the strength of the magnetic field, allowed the researchers to tune the millirobot’s properties. For example, placing the hydrogel farther away from the magnet resulted in fewer, but longer, legs.
    Because the iron oxide microparticles form magnetic chains within the gel, moving a magnet near the hydrogel caused the legs to bend and produce a claw-like grasping motion. In experiments, the material gripped a 3D-printed cylinder and a rubber band and carried each one to new locations. In addition, the researchers tested the millirobot’s ability to deliver a drug by coating it in a dye solution, then rolling it through a stomach model. Once at its destination, the robot unfurled and released the dye with the strategic use of magnets. Since it’s made using water-soluble gelatin, the millirobot easily degraded in water in two days, leaving behind only the tiny magnetic particles. The researchers say that the new millirobot could open up new possibilities for drug delivery and other biomedical applications.
    The authors acknowledge funding from the National Natural Science Foundation of China, Hong Kong RGC General Research Fund and Shenzhen Key Basic Research Project.
    Video: https://youtu.be/1va-OQvfJDg
    Story Source:
    Materials provided by American Chemical Society. Note: Content may be edited for style and length.

  • New laser-based instrument designed to boost hydrogen research

    Researchers have developed an analytical instrument that uses an ultrafast laser for precise temperature and concentration measurements of hydrogen. Their new approach could help advance the study of greener hydrogen-based fuels for use in spacecraft and airplanes.
    “This instrument will provide powerful capabilities to probe dynamical processes such as diffusion, mixing, energy transfer and chemical reactions,” said research team leader Alexis Bohlin from Luleå University of Technology in Sweden. “Understanding these processes is fundamental to developing more environmentally friendly propulsion engines.”
    In the Optica Publishing Group journal Optics Express, Bohlin and colleagues from Delft University of Technology and Vrije Universiteit Amsterdam, both in the Netherlands, describe their new coherent Raman spectroscopy instrument for studying hydrogen. It was made possible by a setup that converts broadband light from a laser with short (femtosecond) pulses into extremely short supercontinuum pulses, which contain a wide range of wavelengths.
    The researchers demonstrated that this supercontinuum generation could be performed behind the same type of thick optical window found on high-pressure chambers used to study a hydrogen-based engine. This is important because other methods for generating ultrabroadband excitation don’t work when these types of optical windows are present.
    “Hydrogen-rich fuel, when made from renewable resources, could have a huge impact on reducing emissions and make a significant contribution to alleviating anthropogenic climate change,” said Bohlin. “Our new method could be used to study these fuels under conditions that closely resemble those in rocket and aerospace engines.”
    Getting light in
    There is much interest in developing aerospace engines that run on renewable hydrogen-rich fuels. In addition to their sustainability appeal, these fuels have among the highest achievable specific impulse — a measure of how efficiently the chemical reaction in an engine creates thrust. However, it has been very challenging to make hydrogen-based chemical propulsion systems reliable. This is because the increased reactivity of hydrogen-rich fuels substantially changes the fuel mixture combustion properties, which increases the flame temperature and decreases ignition delay times. Also, combustion in rocket engines is generally very challenging to control because of the extremely high pressures and high temperatures encountered when traveling to space.
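    For reference, specific impulse is conventionally defined as thrust per unit weight flow rate of propellant:

    $$ I_{\mathrm{sp}} = \frac{F}{\dot{m}\, g_0} $$

    where $F$ is the thrust, $\dot{m}$ is the propellant mass flow rate, and $g_0$ is standard gravity; a higher $I_{\mathrm{sp}}$ means more thrust delivered per unit of propellant consumed.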
    “The advancement of technology for sustainable launch and aerospace propulsion systems relies on a coherent interplay between experiments and modeling,” said Bohlin. “However, several challenges still exist in terms of producing reliable quantitative data for validating the models.”
    One of the hurdles is that the experiments are usually run in an enclosed space, with optical signals transmitted in and out through an optical window. The window can cause the supercontinuum pulses needed for coherent Raman spectroscopy to become stretched out as they pass through the glass. To overcome this problem, the researchers developed a way to transmit the femtosecond laser pulses through a thick optical window first and then used a process called laser-induced filamentation to transform them into supercontinuum pulses that remain coherent on the other side.
    Studying a hydrogen flame
    To demonstrate the new instrument, the researchers set up a femtosecond laser beam with the ideal properties for supercontinuum generation. They then used it to perform coherent Raman spectroscopy by exciting hydrogen molecules and measuring their rotational transitions. They were able to demonstrate robust measurements of hydrogen gas over a wide range of temperatures and concentrations and also analyzed a hydrogen/air diffusion flame similar to what would be seen when a hydrogen-rich fuel is burned.
    The researchers are now using their instrument to perform a detailed analysis of a turbulent hydrogen flame, in hopes of making new discoveries about the combustion process. With the goal of adapting the method for research and testing of rocket engines, the scientists are exploring the limitations of the technique and would like to test it with hydrogen flames in an enclosed, slightly pressurized housing.
    Story Source:
    Materials provided by Optica. Note: Content may be edited for style and length.

  • AI helps detect pancreatic cancer

    An artificial intelligence (AI) tool is highly effective at detecting pancreatic cancer on CT, according to a study published in Radiology, a journal of the Radiological Society of North America (RSNA).
    Pancreatic cancer has the lowest five-year survival rate among cancers. It is projected to become the second leading cause of cancer death in the United States by 2030. Early detection is the best way to improve the dismal outlook, as prognosis worsens significantly once the tumor grows beyond 2 centimeters.
    CT is the key imaging method for detection of pancreatic cancer, but it misses about 40% of tumors under 2 centimeters. There is an urgent need for an effective tool to help radiologists in improving pancreatic cancer detection.
    Researchers in Taiwan have been studying a computer-aided detection (CAD) tool that uses a type of AI called deep learning to detect pancreatic cancer. They previously showed that the tool could accurately distinguish pancreatic cancer from noncancerous pancreas. However, that study relied on radiologists manually identifying the pancreas on imaging — a labor-intensive process known as segmentation. In the new study, the AI tool identified the pancreas automatically. This is an important advance considering that the pancreas borders multiple organs and structures and varies widely in shape and size.
    The researchers developed the tool with an internal test set consisting of 546 patients with pancreatic cancer and 733 control participants. The tool achieved 90% sensitivity and 96% specificity in the internal test set.
    Validation followed with a set of 1,473 individual CT exams from institutions throughout Taiwan. The tool achieved 90% sensitivity and 93% specificity in distinguishing pancreatic cancer from controls in that set. Sensitivity for detecting pancreatic cancers less than 2 centimeters was 75%.
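    Those percentages are simple ratios over the confusion-matrix counts. A small sketch, using hypothetical counts chosen only to reproduce the reported internal-test rates (the paper’s actual per-patient counts are not given in this summary):

    ```python
    def sensitivity(tp, fn):
        """Fraction of actual cancers the tool flags (true positive rate)."""
        return tp / (tp + fn)

    def specificity(tn, fp):
        """Fraction of controls the tool correctly clears (true negative rate)."""
        return tn / (tn + fp)

    # Hypothetical counts consistent with the internal test set described above
    # (546 cancer patients, 733 controls; 90% sensitivity, 96% specificity).
    print(round(sensitivity(tp=491, fn=55), 2))   # 0.90
    print(round(specificity(tn=704, fp=29), 2))   # 0.96
    ```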
    “The performance of the deep learning tool seemed on par with that of radiologists,” said study senior author Weichung Wang, Ph.D., professor at National Taiwan University and director of the university’s MeDA Lab. “Specifically, in this study, the sensitivity of the deep learning computer-aided detection tool for pancreatic cancer was comparable with that of radiologists in a tertiary referral center regardless of tumor size and stage.”
    The CAD tool has the potential to provide a wealth of information to assist clinicians, Dr. Wang said. It could indicate the region of suspicion to speed radiologist interpretation.
    “The CAD tool may serve as a supplement for radiologists to enhance the detection of pancreatic cancer,” said the study’s co-senior author, Wei-Chi Liao, M.D., Ph.D., from National Taiwan University and National Taiwan University Hospital.
    The researchers are planning further studies. In particular, they want to look at the tool’s performance in more diverse populations. And since the current study was retrospective, they want to see how the tool performs prospectively, in real-world clinical settings.
    Story Source:
    Materials provided by Radiological Society of North America. Note: Content may be edited for style and length.