More stories

  • Enhanced AI tracks neurons in moving animals

    Recent advances make it possible to image neurons inside freely moving animals. However, to decode circuit activity, these imaged neurons must be computationally identified and tracked. This becomes particularly challenging when the brain itself moves and deforms inside an organism’s flexible body, as it does in a worm. Until now, the scientific community has lacked the tools to address the problem.
    Now, a team of scientists from EPFL and Harvard has developed a pioneering AI method to track neurons inside moving and deforming animals. The study, now published in Nature Methods, was led by Sahand Jamal Rahi at EPFL’s School of Basic Sciences.
    The new method is based on a convolutional neural network (CNN), a type of AI trained to recognize and understand patterns in images. It relies on a process called “convolution,” which examines small parts of the picture — like edges, colors, or shapes — one region at a time and then combines that information to identify objects or patterns.
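    To make the idea concrete, here is a minimal sketch of a 2D convolution in Python using NumPy. The 3×3 kernel below is a classic edge detector chosen purely for illustration; a real CNN learns its kernels from training data, and nothing here is taken from the study itself.

    ```python
    import numpy as np

    def convolve2d(image, kernel):
        """Slide a small kernel across the image, combining each local
        patch into a single response value, position by position."""
        kh, kw = kernel.shape
        h, w = image.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                patch = image[i:i + kh, j:j + kw]   # small part of the picture
                out[i, j] = np.sum(patch * kernel)  # combine into one value
        return out

    # Illustrative 3x3 edge-detection kernel; a CNN learns such kernels.
    kernel = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=float)
    image = np.random.rand(16, 16)  # stand-in for one frame of a brain movie
    features = convolve2d(image, kernel)
    ```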
    The problem is that identifying and tracking neurons across a movie of an animal’s brain requires many images to be labeled by hand, because the animal appears very different from frame to frame as its body deforms. Given the diversity of the animal’s postures, manually generating enough annotations to train a CNN can be daunting.
    To address this, the researchers developed an enhanced CNN featuring ‘targeted augmentation’. The technique automatically synthesizes reliable reference annotations from only a limited set of manual annotations. The result is that the CNN effectively learns the internal deformations of the brain and then uses them to create annotations for new postures, drastically reducing the need for manual annotation and double-checking.
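    A rough sketch of this idea, assuming a SciPy-style elastic warp: deform an annotated frame with a smooth displacement field and warp its labels identically, yielding a synthetic training pair for a new posture. In the published method the deformations are learned from the brain’s actual movements rather than drawn at random, so everything below is purely illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def synthesize_annotation(image, labels, strength=4.0, smoothness=8.0, seed=0):
        """Warp an annotated frame with a smooth random displacement field,
        producing a synthetic (image, labels) pair for a new body posture."""
        rng = np.random.default_rng(seed)
        h, w = image.shape
        # Smooth random displacement field (a stand-in for a learned deformation).
        dx = gaussian_filter(rng.standard_normal((h, w)), smoothness) * strength
        dy = gaussian_filter(rng.standard_normal((h, w)), smoothness) * strength
        ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        coords = [ys + dy, xs + dx]
        warped_image = map_coordinates(image, coords, order=1)
        # order=0 keeps integer label IDs intact, so annotations follow the image.
        warped_labels = map_coordinates(labels, coords, order=0)
        return warped_image, warped_labels
    ```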
    The new method is versatile, being able to identify neurons whether they are represented in images as individual points or as 3D volumes. The researchers tested it on the roundworm Caenorhabditis elegans, whose 302 neurons have made it a popular model organism in neuroscience.
    Using the enhanced CNN, the scientists measured activity in some of the worm’s interneurons (neurons that relay signals between other neurons). They found that these interneurons exhibit complex behaviors, for example changing their response patterns when exposed to different stimuli, such as periodic bursts of odors.
    The team has made its CNN accessible, providing a user-friendly graphical user interface that integrates targeted augmentation and streamlines the process into a comprehensive pipeline, from manual annotation to final proofreading.
    “By significantly reducing the manual effort required for neuron segmentation and tracking, the new method increases analysis throughput three times compared to full manual annotation,” says Sahand Jamal Rahi. “The breakthrough has the potential to accelerate research in brain imaging and deepen our understanding of neural circuits and behaviors.”

  • Underwater vehicle AI model could be used in other adaptive control systems

    Unmanned Underwater Vehicles (UUVs) are used around the world to conduct difficult environmental, remote, oceanic, defence and rescue missions in often unpredictable and harsh conditions.
    A new study led by Flinders University and French researchers has used a novel bio-inspired artificial intelligence solution to improve the ability of UUVs and other adaptive control systems to operate more reliably in rough seas and other unpredictable conditions.
    This innovative approach, using the Biologically-Inspired Experience Replay (BIER) method, has been published by the Institute of Electrical and Electronics Engineers journal IEEE Access.
    Unlike conventional methods, BIER aims to overcome data inefficiency and performance degradation by leveraging incomplete but valuable recent experiences, explains first author Dr Thomas Chaffre.
    “The outcomes of the study demonstrated that BIER surpassed standard Experience Replay methods, achieving optimal performance twice as fast as the latter in the assumed UUV domain.
    “The method showed exceptional adaptability and efficiency, exhibiting its capability to stabilize the UUV in varied and challenging conditions.”
    The method incorporates two memory buffers, one focusing on recent state-action pairs and the other emphasising positive rewards.
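    A minimal sketch of such a dual-buffer replay in Python; the class name, capacity, and the 50/50 sampling mix are assumptions made for illustration, not details of the published BIER algorithm.

    ```python
    import random
    from collections import deque

    class DualBufferReplay:
        """Two replay buffers: one holding the most recent transitions,
        one reserved for transitions that earned positive rewards."""
        def __init__(self, capacity=10_000):
            self.recent = deque(maxlen=capacity)
            self.positive = deque(maxlen=capacity)

        def add(self, state, action, reward, next_state, done):
            transition = (state, action, reward, next_state, done)
            self.recent.append(transition)
            if reward > 0:  # emphasize rewarding experiences
                self.positive.append(transition)

        def sample(self, batch_size, mix=0.5):
            """Draw a batch mixing recent and positive experiences."""
            n_pos = min(int(batch_size * mix), len(self.positive))
            n_rec = min(batch_size - n_pos, len(self.recent))
            return (random.sample(list(self.positive), n_pos)
                    + random.sample(list(self.recent), n_rec))
    ```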

    To test the effectiveness of the proposed method, the researchers ran simulated scenarios in a robot operating system (ROS)-based UUV simulator, gradually increasing the scenarios’ complexity.
    These scenarios varied in target velocity values and the intensity of current disturbances.
    Senior author Flinders University Associate Professor in AI and Robotics Paulo Santos says the BIER method’s success holds promise for enhancing adaptability and performance in various fields requiring dynamic, adaptive control systems.
    UUVs’ capabilities in mapping, imaging and sensor control are rapidly improving, aided by Deep Reinforcement Learning (DRL), which is advancing the adaptive control responses UUVs can mount against underwater disturbances.
    However, the efficiency of these methods is challenged by unforeseen variations in real-world applications.
    The complex dynamics of the underwater environment limit the observability of UUV manoeuvring tasks, making it difficult for existing DRL methods to perform optimally.
    The introduction of BIER marks a significant step forward in enhancing the effectiveness of deep reinforcement learning methods in general.
    Its ability to efficiently navigate uncertain and dynamic environments signifies a promising advancement in the area of adaptive control systems, researchers conclude.
    Acknowledgements: This work was funded by Flinders University and ENSTA Bretagne with support from the Government of South Australia (Australia), the Région Bretagne (France) and Naval Group.

  • Optical data storage breakthrough

    Physicists at The City College of New York have developed a technique with the potential to enhance optical data storage capacity in diamonds. This is made possible by multiplexing the storage in the spectral domain. The research by Richard G. Monge and Tom Delord, members of the Meriles Group in CCNY’s Division of Science, is entitled “Reversible optical data storage below the diffraction limit” and appears in the journal Nature Nanotechnology.
    “It means that we can store many different images at the same place in the diamond by using a laser of a slightly different color to store different information into different atoms in the same microscopic spots,” said Delord, postdoctoral research associate at CCNY. “If this method can be applied to other materials or at room temperature, it could find its way to computing applications requiring high-capacity storage.”
    The CCNY research focused on a tiny element in diamonds and similar materials known as “color centers.” These are, essentially, atomic defects that can absorb light and serve as a platform for what are termed quantum technologies.
    “What we did was control the electrical charge of these color centers very precisely using a narrow-band laser and cryogenic conditions,” explained Delord. “This new approach allowed us to essentially write and read tiny bits of data at a much finer level than previously possible, down to a single atom.”
    Optical memory technologies have a resolution defined by what’s called the “diffraction limit,” that is, the minimum diameter that a beam can be focused to, which scales as approximately half the light beam’s wavelength (for example, green light would have a diffraction limit of 270 nm). “So, you cannot use a beam like this to write with resolution smaller than the diffraction limit, because if you displace the beam less than that, you would impact what you already wrote. So normally, optical memories increase storage capacity by making the wavelength shorter (shifting to the blue), which is why we have ‘Blu-ray’ technology,” said Delord.
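    As a quick check of the figure quoted above, taking green light at roughly 540 nm:

    ```latex
    d_{\min} \approx \frac{\lambda}{2} = \frac{540\,\text{nm}}{2} = 270\,\text{nm}
    ```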
    What differentiates the CCNY optical storage approach from others is that it circumvents the diffraction limit by exploiting the slight color (wavelength) differences between color centers separated by less than the diffraction limit. “By tuning the beam to slightly shifted wavelengths, it can be kept at the same physical location but interact with different color centers to selectively change their charges — that is, to write data with sub-diffraction resolution,” said Monge, a postdoctoral fellow at CCNY who was involved in the study as a PhD student at the Graduate Center, CUNY.
    Another unique aspect of this approach is that it’s reversible. “One can write, erase, and rewrite an infinite number of times,” Monge noted. “While there are some other optical storage technologies also able to do this, this is not the typical case, especially when it comes to high spatial resolution. A Blu-ray disc is again a good reference example — you can write a movie in it but you cannot erase it and write another one.”

  • New wearable communication system offers potential to reduce digital health divide

    Wearable devices that use sensors to monitor biological signals can play an important role in health care. These devices provide valuable information that allows providers to predict, diagnose and treat a variety of conditions while improving access to care and reducing costs.
    However, wearables currently require significant infrastructure — such as satellites or arrays of antennas that use cell signals — to transmit data, making many of those devices inaccessible to rural and under-resourced communities.
    A group of University of Arizona researchers has set out to change that with a wearable monitoring device system that can send health data up to 15 miles — much farther than Wi-Fi or Bluetooth systems can — without any significant infrastructure. Their device, they hope, will help make digital health access more equitable.
    The researchers introduced novel engineering concepts that make their system possible in an upcoming paper in the journal Proceedings of the National Academy of Sciences.
    Philipp Gutruf, an assistant professor of biomedical engineering and Craig M. Berge Faculty Fellow in the College of Engineering, directed the study in the Gutruf Lab. Co-lead authors are Tucker Stuart, a UArizona biomedical engineering doctoral alumnus, and Max Farley, an undergraduate student studying biomedical engineering.
    Designed for ease, function and future
    The COVID-19 pandemic, and the strain it placed on the global health care system, brought attention to the need for accurate, fast and robust remote patient monitoring, Gutruf said. Non-invasive wearable devices currently use the internet to connect clinicians to patient data for aggregation and investigation.

    “These internet-based communication protocols are effective and well-developed, but they require cell coverage or internet connectivity and main-line power sources,” said Gutruf, who is also a member of the UArizona BIO5 Institute. “These requirements often leave individuals in remote or resource-constrained environments underserved.”
    In contrast, the system the Gutruf Lab developed uses a low power wide area network, or LPWAN, that offers 2,400 times the distance of Wi-Fi and 533 times that of Bluetooth. The new system uses LoRa, a patented type of LPWAN technology.
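    Taking those ratios at face value, a quick back-of-the-envelope calculation recovers the implied Wi-Fi and Bluetooth baselines (the article does not state them directly):

    ```python
    MILES_TO_M = 1609.34
    lora_range_m = 15 * MILES_TO_M          # ~24,100 m, as reported
    wifi_range_m = lora_range_m / 2400      # ~10 m implied Wi-Fi baseline
    bluetooth_range_m = lora_range_m / 533  # ~45 m implied Bluetooth baseline
    print(f"Wi-Fi ~{wifi_range_m:.0f} m, Bluetooth ~{bluetooth_range_m:.0f} m")
    ```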
    “The choice of LoRa helped address previous limitations associated with power and electromagnetic constraints,” Stuart said.
    Alongside the implementation of this protocol, the lab developed circuitry and an antenna that integrate seamlessly into the soft wearable; in typical LoRa-enabled devices, these components take the form of a large box. These electromagnetic, electronic and mechanical features enable the device to send data to the receiver over a long distance. To make the device almost imperceptible to the wearer, the lab also enabled its batteries to recharge over a distance of 2 meters. The soft electronics and the device’s ability to harvest power are the keys to the performance of this first-of-its-kind monitoring system, Gutruf said.
    The Gutruf Lab calls the soft mesh wearable biosymbiotic, meaning it is custom 3D-printed to fit the user and is so unobtrusive it almost begins to feel like part of the body. The device, worn on the lower forearm, stays in place even during exercise, ensuring high-quality data collection, Gutruf said. The user wears the device at all times, and it charges without removal or effort.
    “Our device allows for continuous operation over weeks due to its wireless power transfer feature for interaction-free recharging — all realized within a small package that even includes onboard computation of health metrics,” Farley said.
    Gutruf, Farley and Stuart plan to further improve and extend communication distances with the implementation of LoRa wireless area network gateways that could serve hundreds of square miles and hundreds of device users, using only a small number of connection points.
    The wearable device and its communication system have the potential to aid remote monitoring in underserved rural communities, ensure high-fidelity recording in war zones, and monitor health in bustling cities, said Gutruf, whose long-term goal is to make the technology available to the communities with the most need.
    “This effort is not just a scientific endeavor,” he said. “It’s a step toward making digital medicine more accessible, irrespective of geographical and resource constraints.”

  • Snail-inspired robot could scoop ocean microplastics

    Inspired by a small and slow snail, scientists have developed a robot prototype that may one day scoop up microplastics from the surfaces of oceans, seas and lakes.
    The robot’s design is based on the Hawaiian apple snail (Pomacea canaliculata), a common aquarium snail that uses the undulating motion of its foot to drive water surface flow and suck in floating food particles.
    Currently, plastic collection devices mostly rely on drag nets or conveyor belts to gather and remove larger plastic debris from water, but they lack the fine scale required for retrieving microplastics. These tiny particles of plastic can be ingested and end up in the tissues of marine animals, thereby entering the food chain, where they pose a health risk and are potentially carcinogenic to humans.
    “We were inspired by how this snail collects food particles at the [water and air] interface to engineer a device that could possibly collect microplastics in the ocean or at a water body’s surface,” said Sunghwan “Sunny” Jung, professor in the department of biological and environmental engineering at Cornell University. Jung is senior author of the study, “Optimal free-surface pumping by an undulating carpet,” which was published in Nature Communications.
    The prototype, modified from an existing design, would need to be scaled up to be practical in a real-world setting. The researchers used a 3D printer to make a flexible carpet-like sheet capable of undulating. A helical structure on the underside of the sheet rotates like a corkscrew to cause the carpet to undulate and create a travelling wave on the water.
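    The undulation can be described as a traveling surface wave in the standard textbook form (generic notation, not taken from the paper):

    ```latex
    h(x, t) = A \sin(kx - \omega t), \qquad c = \frac{\omega}{k}
    ```

    Here A is the wave amplitude, k the wavenumber, ω the angular frequency, and c the speed at which crests travel along the carpet.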
    Analyzing the motion of the fluid was key to this research. “We needed to understand the fluid flow to characterize the pumping behavior,” Jung said. The fluid-pumping system based on the snail’s technique is open to the air. The researchers calculated that a similar closed system, where the pump is enclosed and uses a tube to suck in water and particles, would require high energy inputs to operate. On the other hand, the snail-like open system is far more efficient. For example, the prototype, though small, runs on only 5 volts of electricity while still effectively sucking in water, Jung said.
    Due to the weight of a battery and motor, the researchers may need to attach a floatation device to the robot to keep it from sinking, Jung said.
    Anupam Pandey, a former postdoctoral researcher in Jung’s lab, currently an assistant professor of mechanical engineering at Syracuse University, is the paper’s first author.
    The study was funded by the National Science Foundation.

  • Tiny electromagnets made of ultra-thin carbon

    Graphene, an extremely thin form of carbon, is considered a true miracle material. An international research team has now added another facet to its diverse properties with experiments at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR): The experts, led by the University of Duisburg-Essen (UDE), fired short terahertz pulses at micrometer-sized discs of graphene, which briefly turned these minuscule objects into surprisingly strong magnets. This discovery may prove useful for developing future magnetic switches and storage devices.
    Graphene consists of an ultra-thin sheet of just one layer of carbon atoms. But the material, first isolated only in 2004, displays remarkable properties. Among them is its ability to conduct electricity extremely well, and that is precisely what the international researchers from Germany, Poland, India, and the USA took advantage of.
    They applied thousands of tiny, micrometer-sized graphene discs onto a small chip using established semiconductor techniques. This chip was then exposed to a particular type of radiation situated between the microwave and infrared range: short terahertz pulses.
    To achieve the best possible conditions, the working group, led by the UDE, used a particular light source for the experiment: The FELBE free-electron laser at the HZDR can generate extremely intense terahertz pulses. The astonishing result: “The tiny graphene disks briefly turned into electromagnets,” reports HZDR physicist Dr. Stephan Winnerl.
    “We were able to generate magnetic fields in the range of 0.5 Tesla, which is roughly ten thousand times the Earth’s magnetic field.” These were short magnetic pulses, only about ten picoseconds or one-hundredth of a billionth of a second long.
    Radiation pulses stir electrons
    The prerequisite for success: The researchers had to polarize the terahertz flashes in a specific way. Specialized optics changed the direction of oscillation of the radiation so that it moved, figuratively speaking, helically through space.

    When these circularly polarized flashes hit the micrometer-sized graphene discs, the decisive effect occurred: Stimulated by the radiation, the free electrons in the discs began to circle — just like water in a bucket stirred with a wooden spoon. And because, according to the basic laws of physics, a circulating current always generates a magnetic field, the graphene discs turned into tiny electromagnets.
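    For a sense of scale, the field at the center of a circular current loop follows the textbook relation below; the micrometer radius and the resulting current are illustrative numbers, not measurements from the study:

    ```latex
    B = \frac{\mu_0 I}{2r}
    \quad\Rightarrow\quad
    I = \frac{2rB}{\mu_0}
      \approx \frac{2 \times (10^{-6}\,\text{m}) \times (0.5\,\text{T})}{4\pi \times 10^{-7}\,\text{T·m/A}}
      \approx 0.8\,\text{A}
    ```

    On this naive picture, a micrometer-sized disc would need to sustain a circulating current on the order of an ampere to reach 0.5 Tesla.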
    “The idea is actually quite simple,” says Martin Mittendorff, professor at the University of Duisburg-Essen. “In hindsight, we are surprised nobody had done it before.” Equally astonishing is the efficiency of the process: Compared to experiments irradiating gold nanoparticles with light, the experiment at the HZDR was a million times more efficient. The new phenomenon could initially be used for scientific experiments in which material samples are exposed to short but strong magnetic pulses to investigate certain material properties in more detail.
    The advantage: “With our method, the magnetic field does not reverse polarity, as is the case with many other methods,” explains Winnerl. “It, therefore, remains unipolar.” In other words, during the ten picoseconds that the magnetic pulse from the graphene discs lasts, the north pole remains a north pole and the south pole a south pole — a potential advantage for certain series of experiments.
    The dream of magnetic electronics
    In the long run, those minuscule magnets might even be useful for certain future technologies: As ultra-short radiation flashes generate them, the graphene discs could carry out extremely fast and precise magnetic switching operations. This would be interesting for magnetic storage technology, for example, but also for so-called spintronics — a form of magnetic electronics.
    Here, instead of electrical charges flowing in a processor, weak magnetic fields in the form of electron spins are passed on like tiny batons. It is hoped that this could once again speed up switching processes significantly. Graphene discs could conceivably be used as switchable electromagnets to control future spintronic chips.
    However, experts would have to invent very small, highly miniaturized terahertz sources for this purpose — certainly still a long way to go. “You cannot use a full-blown free-electron laser for this, like the one we used in our experiment,” comments Stephan Winnerl. “Nevertheless, radiation sources fitting on a laboratory table should be sufficient for future scientific experiments.” Such significantly more compact terahertz sources can already be found in some research facilities.

  • Scientists propose a model to predict personal learning performance for virtual reality-based safety training

    In Korea, occupational hazards are on the rise, particularly in the construction sector. According to a report on the ‘Occupational Safety Accident Status’ by Korea’s Ministry of Employment and Labor, the industry accounted for the highest number of accidents and fatalities among all sectors in 2021. To address this rise, the Korea Occupational Safety and Health Agency has been providing virtual reality (VR)-based construction safety content to daily workers as part of their educational training initiatives.
    Nevertheless, current VR-based training methods grapple with two limitations. Firstly, VR-based construction safety training is essentially a passive exercise, with learners following one-way instructions that fail to adapt to their judgments and decisions. Secondly, there is no objective evaluation process during VR-based safety training. To address these challenges, researchers have introduced immersive VR-based construction safety content to promote active worker engagement and have conducted post-training written tests. However, such written tests have limitations in terms of immediacy and objectivity. Furthermore, of the individual characteristics that can affect learning performance (personal, academic, social, and cognitive), the cognitive ones may themselves change during VR-based safety training.
    To address this, a team of researchers led by Associate Professor Choongwan Koo from the Division of Architecture & Urban Design at Incheon National University, Korea, has now proposed a groundbreaking machine learning approach for forecasting personal learning performance in VR-based construction safety training that uses real-time biometric responses. Their paper was made available online on October 7, 2023, and will be published in Volume 156 of the journal Automation in Construction in December 2023.
    “While traditional methods of evaluating learning outcomes that use post-written tests may lack objectivity, real-time biometric responses, collected from eye-tracking and electroencephalogram (EEG) sensors, can be used to promptly and objectively evaluate personal learning performances during VR-based safety training,” explains Dr. Koo.
    The study involved 30 construction workers undergoing VR-based construction safety training. Real-time biometric responses, collected from eye-tracking and EEG to monitor brain activity, were gathered during the training to assess the psychological responses of the participants. Combining this data with pre-training surveys and post-training written tests, the researchers developed machine-learning-based forecasting models to evaluate the overall personal learning performance of the participants during VR-based safety training.
    The team developed two models — a full forecast model (FM) that uses both demographic factors and biometric responses as independent variables and a simplified forecast model (SM) which solely relies on the identified principal features as independent variables to reduce complexity. While the FM exhibited higher accuracy in predicting personal learning performance than traditional models, it also displayed a high level of overfitting. In contrast, the SM demonstrated higher prediction accuracy than the FM due to a smaller number of variables, significantly reducing overfitting. The team thus concluded that the SM was best suited for practical use.
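    A schematic illustration of the FM-versus-SM comparison using scikit-learn on synthetic data; the feature counts, model choice, and selection method are placeholders, not the study’s setup:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.feature_selection import SelectKBest, f_regression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.standard_normal((30, 40))  # 30 trainees x 40 candidate features
    y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(30)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Full model (FM): every demographic and biometric feature.
    fm = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

    # Simplified model (SM): keep only a handful of principal features.
    selector = SelectKBest(f_regression, k=5).fit(X_tr, y_tr)
    sm = RandomForestRegressor(random_state=0).fit(selector.transform(X_tr), y_tr)

    # A large train/test gap signals overfitting; the SM should show less.
    print("FM R^2 train/test:", fm.score(X_tr, y_tr), fm.score(X_te, y_te))
    print("SM R^2 train/test:",
          sm.score(selector.transform(X_tr), y_tr),
          sm.score(selector.transform(X_te), y_te))
    ```

    A noticeably larger train/test gap for the full model than for the simplified one is the overfitting signature the team describes.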
    Explaining these results, Dr. Koo emphasizes, “This approach can have a significant impact on improving personal learning performance during VR-based construction safety training, preventing safety incidents, and fostering a safe working environment.” Further, the team also emphasizes the need for future research to consider various accident types and hazard factors in VR-based safety training.
    In conclusion, this study marks a significant stride in enhancing personalized safety in construction environments and improving the evaluation of learning performance.

  • AI networks are more vulnerable to malicious attacks than previously thought

    Artificial intelligence tools hold promise for applications ranging from autonomous vehicles to the interpretation of medical images. However, a new study finds these AI tools are more vulnerable than previously thought to targeted attacks that effectively force AI systems to make bad decisions.
    At issue are so-called “adversarial attacks,” in which someone manipulates the data being fed into an AI system in order to confuse it. For example, someone might know that putting a specific type of sticker at a specific spot on a stop sign could effectively make the stop sign invisible to an AI system. Or a hacker could install code on an X-ray machine that alters the image data in a way that causes an AI system to make inaccurate diagnoses.
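    The team’s own tool formulates ordered top-K attacks as a quadratic program (per the paper title cited below); the snippet here shows the much simpler classic fast gradient sign method (FGSM), only to make the general idea of an adversarial perturbation concrete:

    ```python
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, label, epsilon=0.03):
        """Classic FGSM: nudge every input pixel a small step in the
        direction that increases the model's loss, so a human sees the
        same image while the classifier's prediction flips."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()  # near-imperceptible perturbation
        return x_adv.clamp(0, 1).detach()    # keep pixels in a valid range
    ```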
    “For the most part, you can make all sorts of changes to a stop sign, and an AI that has been trained to identify stop signs will still know it’s a stop sign,” says Tianfu Wu, co-author of a paper on the new work and an associate professor of electrical and computer engineering at North Carolina State University. “However, if the AI has a vulnerability, and an attacker knows the vulnerability, the attacker could take advantage of the vulnerability and cause an accident.”
    The new study from Wu and his collaborators focused on determining how common these sorts of adversarial vulnerabilities are in AI deep neural networks. They found that the vulnerabilities are much more common than previously thought.
    “What’s more, we found that attackers can take advantage of these vulnerabilities to force the AI to interpret the data to be whatever they want,” Wu says. “Using the stop sign example, you could make the AI system think the stop sign is a mailbox, or a speed limit sign, or a green light, and so on, simply by using slightly different stickers — or whatever the vulnerability is.
    “This is incredibly important, because if an AI system is not robust against these sorts of attacks, you don’t want to put the system into practical use — particularly for applications that can affect human lives.”
    To test the vulnerability of deep neural networks to these adversarial attacks, the researchers developed a piece of software called QuadAttacK. The software can be used to test any deep neural network for adversarial vulnerabilities.

    “Basically, if you have a trained AI system, and you test it with clean data, the AI system will behave as predicted. QuadAttacK watches these operations and learns how the AI is making decisions related to the data. This allows QuadAttacK to determine how the data could be manipulated to fool the AI. QuadAttacK then begins sending manipulated data to the AI system to see how the AI responds. If QuadAttacK has identified a vulnerability, it can quickly make the AI see whatever QuadAttacK wants it to see,” Wu says.
    In proof-of-concept testing, the researchers used QuadAttacK to test four deep neural networks: two convolutional neural networks (ResNet-50 and DenseNet-121) and two vision transformers (ViT-B and DEiT-S). These four networks were chosen because they are in widespread use in AI systems around the world.
    “We were surprised to find that all four of these networks were very vulnerable to adversarial attacks,” Wu says. “We were particularly surprised at the extent to which we could fine-tune the attacks to make the networks see what we wanted them to see.”
    The research team has made QuadAttacK publicly available, so that the research community can use it themselves to test neural networks for vulnerabilities. The program can be found here: https://thomaspaniagua.github.io/quadattack_web/.
    “Now that we can better identify these vulnerabilities, the next step is to find ways to minimize those vulnerabilities,” Wu says. “We already have some potential solutions — but the results of that work are still forthcoming.”
    The paper, “QuadAttacK: A Quadratic Programming Approach to Learning Ordered Top-K Adversarial Attacks,” will be presented Dec. 16 at the Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023), which is being held in New Orleans, La. First author of the paper is Thomas Paniagua, a Ph.D. student at NC State. The paper was co-authored by Ryan Grainger, a Ph.D. student at NC State.
    The work was done with support from the U.S. Army Research Office, under grants W911NF1810295 and W911NF2210010; and from the National Science Foundation, under grants 1909644, 2024688 and 2013451.