More stories

  • Model shows how intelligent-like behavior can emerge from non-living agents

    From a distance, they looked like clouds of dust. Yet the swarm of microrobots in author Michael Crichton’s bestseller “Prey” was self-organized. It acted with rudimentary intelligence, learning, evolving and communicating within itself to grow more powerful.
    A new model by a team of researchers led by Penn State and inspired by Crichton’s novel describes how biological or technical systems form complex structures equipped with signal-processing capabilities that allow the systems to respond to stimuli and perform functional tasks without external guidance.
    “Basically, these little nanobots become self-organized and self-aware,” said Igor Aronson, Huck Chair Professor of Biomedical Engineering, Chemistry, and Mathematics at Penn State, explaining the plot of Crichton’s book. The novel inspired Aronson to study the emergence of collective motion among interacting, self-propelled agents. The research was recently published in Nature Communications.
    Aronson and a team of physicists from Ludwig Maximilian University of Munich (LMU) developed the model. The findings have implications for microrobotics and for any field involving functional, self-assembled materials formed from simple building blocks, Aronson said. For example, robotics engineers could create swarms of microrobots capable of performing complex tasks such as pollutant scavenging or threat detection.
    “If we look to nature, we see that many living creatures rely on communication and teamwork because it enhances their chances of survival,” Aronson said.
    The computer model conceived by the Penn State and LMU researchers predicted that communication among small, self-propelled agents leads to intelligent-like collective behavior. The study demonstrated that communication dramatically expands an individual unit’s ability to form complex functional states akin to living systems.
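    To make the idea of communicating, self-propelled agents concrete, here is a minimal, hypothetical sketch of such a simulation: a Vicsek-style alignment rule plus a distance-weighted “signal” that every agent broadcasts and its neighbors steer toward. The rules, parameter values and code are illustrative assumptions, not the model published in Nature Communications.

```python
# Minimal sketch of communicating self-propelled agents (hypothetical; not the
# published Penn State / LMU model). Each agent moves at constant speed, aligns
# its heading with nearby agents, and is additionally steered by a "signal"
# every agent broadcasts -- a crude stand-in for chemical or acoustic communication.
import numpy as np

rng = np.random.default_rng(0)
N, L, speed, dt = 200, 20.0, 0.5, 0.1          # agents, box size, speed, time step
align_radius, noise, signal_gain = 2.0, 0.1, 0.05

pos = rng.uniform(0, L, size=(N, 2))
theta = rng.uniform(-np.pi, np.pi, size=N)

for step in range(1000):
    # Pairwise displacements with periodic boundaries
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    dist = np.linalg.norm(d, axis=-1)

    # 1) Alignment: average heading of neighbours within align_radius (Vicsek rule)
    nbr = dist < align_radius
    mean_heading = np.arctan2((nbr * np.sin(theta)).sum(1), (nbr * np.cos(theta)).sum(1))

    # 2) Communication: every agent emits a signal that decays with distance;
    #    agents turn toward the local signal gradient (i.e. toward the group)
    weight = np.exp(-dist / align_radius)
    np.fill_diagonal(weight, 0.0)
    grad = (weight[:, :, None] * (-d)).sum(axis=1)      # points toward signal sources
    signal_heading = np.arctan2(grad[:, 1], grad[:, 0])

    # Blend alignment, signal response and angular noise
    turn = np.angle(np.exp(1j * (signal_heading - mean_heading)))   # wrapped difference
    theta = mean_heading + signal_gain * turn + noise * rng.standard_normal(N)

    pos = (pos + speed * dt * np.column_stack((np.cos(theta), np.sin(theta)))) % L
```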

  • Flying snakes help scientists design new robots

    Robots have been designed to move in ways that mimic animal movements, such as walking and swimming. Scientists are now considering how to design robots that mimic the gliding motion exhibited by flying snakes.
    In the journal Physics of Fluids, from AIP Publishing, researchers from the University of Virginia and Virginia Tech explored the lift-production mechanism of flying snakes, which undulate side to side as they move from the tops of trees to the ground to escape predators or to move around quickly and efficiently. The undulation allows the snakes to glide for long distances, as much as 25 meters from a 15-meter tower.
    To understand how the undulations provide lift, the investigators developed a computational model derived from data obtained through high-speed video of flying snakes. A key component of this model is the cross-sectional shape of the snake’s body, which resembles an elongated frisbee or flying disc.
    The cross-sectional shape is essential for understanding how the snake can glide so far. In a frisbee, the spinning disc creates increased air pressure below the disc and suction on its top, lifting the disc into the air. To help create the same type of pressure differential across its body, the snake undulates side to side, producing a low-pressure region above its back and a high-pressure region beneath its belly. This lifts the snake and allows it to glide through the air.
    “The snake’s horizontal undulation creates a series of major vortex structures, including leading-edge vortices (LEVs) and trailing-edge vortices (TEVs),” said author Haibo Dong of the University of Virginia. “The formation and development of the LEV on the dorsal, or back, surface of the snake body plays an important role in producing lift.”
    The LEVs form near the head and move back along the body. The investigators found that the LEVs hold for longer intervals at the curves in the snake’s body before being shed. These curves form during the undulation and are key to understanding the lift mechanism.
    The group considered several features, such as the angle of attack that the snake forms with the oncoming airflow and the frequency of its undulations, to determine which were important in producing the glide. In their natural setting, flying snakes typically undulate at a frequency of 1 to 2 times per second. Surprisingly, the researchers found that more rapid undulation decreases aerodynamic performance.
    “The general trend we see is that a frequency increase leads to an instability in the vortex structure, causing some vortex tubes to spin. The spinning vortex tubes tend to detach from the surface, leading to a decrease in lift,” said Dong.
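    To illustrate what “horizontal undulation at a given frequency” means as an input to such a computational model, here is a minimal sketch that prescribes the snake’s body centerline as a traveling lateral wave with an adjustable undulation frequency. The waveform and every parameter value are illustrative assumptions, not the authors’ actual kinematics or flow solver.

```python
# Illustrative sketch of prescribed gliding-snake kinematics (not the study's
# actual model): the body centerline undulates as a lateral traveling wave, and
# the undulation frequency f is the parameter the researchers varied (snakes
# use roughly 1-2 Hz in nature). This only generates the waveform that a CFD
# solver would couple to the surrounding air to compute pressure, LEVs and lift.
import numpy as np

body_length = 0.7      # m, illustrative
n_points = 100         # sample points along the body
amplitude = 0.08       # m, lateral wave amplitude (illustrative)
wavelength = 0.5       # m, wavelength of the body wave (illustrative)
f = 1.5                # Hz, undulation frequency

s = np.linspace(0.0, body_length, n_points)   # arc-length coordinate, head to tail

def centerline(t):
    """Lateral displacement y(s, t) of the body: a wave traveling head to tail."""
    return amplitude * np.sin(2.0 * np.pi * (s / wavelength - f * t))

# Sample one undulation cycle
for t in np.linspace(0.0, 1.0 / f, 5):
    y = centerline(t)
    print(f"t = {t:.2f} s, max lateral excursion = {y.max():.3f} m")
```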
    The scientists hope their findings will lead to increased understanding of gliding motion and to a more optimal design for gliding snake robots.
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • AI model proactively predicts if a COVID-19 test might be positive or not

    COVID-19 and its latest Omicron strains continue to cause infections across the country as well as globally. Serology (blood) tests and molecular tests are the two most commonly used methods for rapid COVID-19 testing. Because the two types of test rely on different mechanisms, their results can vary significantly: molecular tests measure the presence of viral SARS-CoV-2 RNA, while serology tests detect antibodies triggered by the SARS-CoV-2 virus.
    There is currently no study on how serology and molecular test results are correlated or on which COVID-19 symptoms play a key role in producing a positive test result. A study from Florida Atlantic University’s College of Engineering and Computer Science used machine learning to provide important new evidence on how molecular and serology tests are correlated, and on which features are most useful in distinguishing between positive and negative COVID-19 test outcomes.
    Researchers from the College of Engineering and Computer Science trained five classification algorithms to predict COVID-19 test results. They created an accurate predictive model using easy-to-obtain symptom features, such as the number of days post-symptom onset, fever and temperature, along with demographic features such as age and gender.
    The study demonstrates that machine-learning models trained on simple symptom and demographic features can help predict COVID-19 infections. The results, published in the journal Smart Health, identify the key symptom features associated with COVID-19 infection and provide a way for rapid screening and cost-effective infection detection.
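    For readers who want to see what training such a symptom-based classifier looks like in practice, here is a minimal, hypothetical sketch using synthetic data and a generic scikit-learn model. The feature list mirrors the ones named above, but the data, labels and algorithm choice are assumptions, not the FAU study’s dataset or its five classifiers.

```python
# Minimal sketch of a symptom-based COVID-19 test-result classifier
# (illustrative only; synthetic data, not the FAU study's dataset or models).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 500

# Hypothetical features: days post-symptom onset, fever (0/1), temperature (C),
# age (years), gender (0/1)
X = np.column_stack([
    rng.integers(0, 30, n),        # days post-symptom onset
    rng.integers(0, 2, n),         # fever
    rng.normal(37.2, 0.8, n),      # body temperature
    rng.integers(18, 90, n),       # age
    rng.integers(0, 2, n),         # gender
])
# Synthetic labels loosely tied to fever and temperature, just so the model has signal
y = ((X[:, 1] == 1) & (X[:, 2] > 37.5)).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```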
    Findings reveal that the number of days experiencing symptoms such as fever and difficulty breathing plays a large role in COVID-19 test results. Findings also show that molecular tests have a much narrower post-symptom-onset window (three to eight days) than serology tests (five to 38 days). As a result, the molecular test has the lowest positive rate because it measures current infection.
    Furthermore, COVID-19 tests vary significantly, partially because donors’ immune response and viral load — the target of different test methods — continuously change. Even for the same donor, it might be possible to observe different positive/negative results from the two types of tests.

  • 2D material may enable ultra-sharp cellphone photos in low light

    A new type of active pixel sensor that uses a novel two-dimensional material may both enable ultra-sharp cellphone photos and create a new class of extremely energy-efficient Internet of Things (IoT) sensors, according to a team of Penn State researchers.
    “When people are looking for a new phone, what are the specs that they are looking for?” said Saptarshi Das, associate professor of engineering science and mechanics and lead author of the study published Nov. 17 in Nature Materials. “Quite often, they are looking for a good camera, and what does a good camera mean to most people? Sharp photos with high resolution.”
    Most people just snap a photo of a friend, a family gathering or a sporting event and never think about what happens “behind the scenes” inside the phone. According to Das, quite a bit happens to enable you to see a photo right after you take it, and much of it involves image processing.
    “When you take an image, many of the cameras have some kind of processing that goes on in the phone, and in fact, this sometimes makes the photo look even better than what you are seeing with your eyes,” Das said. “These next generation of phone cameras integrate image capture with image processing to make this possible, and that was not possible with older generations of cameras.”
    However, the great photos in the newest cameras have a catch — the processing requires a lot of energy.
    “There’s an energy cost associated with taking a lot of images,” said Akhil Dodda, a graduate research assistant at Penn State at the time of the study who is now a research staff member at Western Digital, and co-first author of the study. “If you take 10,000 images, that is fine, but somebody is paying the energy costs for that. If you can bring it down by a hundredfold, then you can take 100 times more images and still spend the same amount of energy. It makes photography more sustainable so that people can take more selfies and other pictures when they are traveling. And this is exactly where innovation in materials comes into the picture.”
    The innovation outlined in the study revolves around adding in-sensor processing to active pixel sensors to reduce their energy use. To do this, the researchers turned to molybdenum disulfide, a novel 2D material; 2D materials are a class of materials only one or a few atoms thick. Molybdenum disulfide is also a semiconductor and sensitive to light, which makes it an ideal candidate to explore for low-energy, in-sensor processing of images.

  • Hummingbird flight could provide insights for biomimicry in aerial vehicles

    Hummingbirds occupy a unique place in nature: They fly like insects but have the musculoskeletal system of birds. According to Bo Cheng, the Kenneth K. and Olivia J. Kuo Early Career Associate Professor in Mechanical Engineering at Penn State, hummingbirds have extreme aerial agility and flight forms, which is why many drones and other aerial vehicles are designed to mimic hummingbird movement. Using a novel modeling method, Cheng and his team of researchers gained new insights into how hummingbirds produce wing movement, which could lead to design improvements in flying robots.
    Their results were published this week in the Proceedings of the Royal Society B.
    “We essentially reverse-engineered the inner workings of the wing musculoskeletal system — how the muscles and skeleton work in hummingbirds to flap the wings,” said first author and Penn State mechanical engineering graduate student Suyash Agrawal. “Traditional methods have mostly focused on measuring the activity of a bird or insect in natural flight or in an artificial environment where flight-like conditions are simulated. But most insects and, among birds specifically, hummingbirds are very small. The data that we can get from those measurements are limited.”
    The researchers used muscle anatomy literature, computational fluid dynamics simulation data and wing-skeletal movement information captured using micro-CT and X-ray methods to inform their model. They also used an optimization algorithm based on evolutionary strategies, known as a genetic algorithm, to calibrate the model’s parameters. According to the researchers, their approach is the first to integrate these disparate parts for biological fliers.
    “We can simulate the whole reconstructed motion of the hummingbird wing and then simulate all the flows and forces generated by the flapping wing, including all the pressure acting on the wing,” Cheng said. “From that, we are able to back-calculate the required total muscular torque that is needed to flap the wing. And that torque is something we use to calibrate our model.”
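    The calibration idea Cheng describes can be illustrated with a toy genetic-algorithm loop that tunes model parameters until a simulated torque trace matches a reference trace. The torque model, parameter count and GA settings below are illustrative assumptions, not the published hummingbird model.

```python
# Toy genetic-algorithm calibration in the spirit described above (illustrative
# only; the torque model and GA settings are assumptions, not the actual study).
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)                      # one wingbeat, normalized time
target_torque = 0.8 * np.sin(2 * np.pi * t) + 0.3 * np.sin(4 * np.pi * t)  # "measured" reference

def simulate_torque(params):
    """Hypothetical two-parameter torque model standing in for the musculoskeletal simulation."""
    a, b = params
    return a * np.sin(2 * np.pi * t) + b * np.sin(4 * np.pi * t)

def fitness(params):
    """Negative mean squared error against the reference torque (higher is better)."""
    return -np.mean((simulate_torque(params) - target_torque) ** 2)

pop = rng.uniform(-1.0, 1.0, size=(50, 2))          # initial population of parameter sets
for generation in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the 10 fittest
    children = parents[rng.integers(0, 10, size=40)] + 0.05 * rng.standard_normal((40, 2))
    pop = np.vstack([parents, children])             # elitism plus mutated offspring

best = pop[np.argmax([fitness(p) for p in pop])]
print("calibrated parameters:", best)                # should approach (0.8, 0.3)
```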
    With this model, the researchers uncovered previously unknown principles of hummingbird wing actuation.
    The first discovery, according to Cheng, was that hummingbirds’ primary muscles, that is, their flight engines, do not simply flap the wings back and forth but instead pull them in three directions: up and down, back and forth, and twisting — or pitching — of the wing. The researchers also found that hummingbirds tighten their shoulder joints in both the up-and-down direction and the pitch direction using multiple smaller muscles.
    “It’s like when we do fitness training and a trainer says to tighten your core to be more agile,” Cheng said. “We found that hummingbirds use a similar kind of mechanism. They tighten their wings in the pitch and up-down directions but keep the wing loose along the back-and-forth direction, so their wings appear to be flapping back and forth only, while their power muscles, or their flight engines, are actually pulling the wings in all three directions. In this way, the wings have very good agility in the up and down motion as well as the twist motion.”
    While Cheng emphasized that the results from the optimized model are predictions that will need validation, he said that it has implications for technological development of aerial vehicles.
    “Even though the technology is not there yet to fully mimic hummingbird flight, our work provides essential principles for informed mimicry of hummingbirds hopefully for the next generation of agile aerial systems,” he said.
    The other authors were Zafar Anwar, a doctoral student in the Penn State Department of Mechanical Engineering; Bret W. Tobalske of the Division of Biological Sciences at the University of Montana; Haoxiang Luo of the Department of Mechanical Engineering at Vanderbilt University; and Tyson L. Hedrick of the Department of Biology at the University of North Carolina.
    The Office of Naval Research funded this work.
    Story Source:
    Materials provided by Penn State. Original written by Sarah Small. Note: Content may be edited for style and length.

  • Does throwing my voice make you want to shop here?

    Virtual environments, including those for commerce, are increasingly common and are designed to give users as realistic an experience as possible. They also provide a new opportunity for researchers to conduct experiments that would not be possible in the real world. Researchers from the University of Tsukuba have done just that by exploring how shifting the position of a virtual shop assistant’s voice away from its visual position affects the shopping experience of humans in a virtual reality store.
    Humans locate sound by combining visual and auditory cues. Because the visual cues are generally less variable, they can override the audio cues, leading to the well-known ventriloquism effect, in which a human perceives the location of a sound to be different from its actual source. It is also well known that humans have personal space, which varies according to social, personal and environmental factors. Although both phenomena have long been studied individually, until the development of virtual reality it was not possible to study how the ventriloquism effect alters personal space.
    “In particular, we wanted to know how it affects the rapport between the user and shop assistant,” says Professor Zempo Keiichi, lead author of the study. Rapport, or the quality of interpersonal service, strongly affects loyalty and satisfaction, and skilled salespeople use several techniques to build rapport with customers.
    In their experiments, the researchers asked 16 people in the virtual shop environment to define their personal space and record their impressions when approached by shop assistants. Some assistants had a voice and an image at the same position; others had a voice located at various distances between the user and the assistant’s image.
    “We found that rapport was not affected when the deviation between the sound and visual positions could not be tolerated; however, when it could be tolerated, we found two distinct phenomena,” explains Professor Zempo Keiichi. The first was similar to the “uncanny valley,” which occurs when an imperfect human representation invokes feelings of uneasiness in a real human. This decreased rapport with the virtual assistant. But when the sound moved even closer to the human, the rapport increased.
    The authors call this phenomenon the “mouth-in-the-door” phenomenon because it is similar to the “foot-in-the-door” phenomenon, in which a small, unconscious consent, such as not moving away when someone starts to speak, causes a person to improve their evaluation of the other person. Without these virtual experiments, this phenomenon would have likely remained undiscovered. But now that it is known, the authors believe it can be used to improve the user experience, especially in virtual shop scenarios.
    Story Source:
    Materials provided by University of Tsukuba. Note: Content may be edited for style and length.

  • Bolstering the safety of self-driving cars with a deep learning-based object detection system

    Self-driving cars, or autonomous vehicles, have long been earmarked as the next-generation mode of transport. To enable the autonomous navigation of such vehicles in different environments, many technologies relating to signal processing, image processing, artificial intelligence, deep learning, edge computing and the Internet of Things (IoT) need to be implemented.
    One of the largest concerns around the popularization of autonomous vehicles is that of safety and reliability. In order to ensure a safe driving experience for the user, it is essential that an autonomous vehicle accurately, effectively, and efficiently monitors and distinguishes its surroundings as well as potential threats to passenger safety.
    To this end, autonomous vehicles employ high-tech sensors, such as Light Detection and Ranging (LiDAR), radar and RGB cameras, which produce large amounts of data as RGB images and 3D measurement points known as a “point cloud.” Quick and accurate processing and interpretation of this collected information is critical for identifying pedestrians and other vehicles. This can be realized by integrating advanced computing methods and IoT into these vehicles, which allows for fast, on-site data processing and more efficient navigation of various environments and obstacles.
    In a recent study published on 17 October 2022 in IEEE Transactions on Intelligent Transportation Systems, a group of international researchers led by Professor Gwanggil Jeon from Incheon National University, Korea, developed a smart, IoT-enabled, end-to-end system for real-time 3D object detection, based on deep learning and specialized for autonomous driving situations.
    “For autonomous vehicles, environment perception is critical to answer a core question, ‘What is around me?’ It is essential that an autonomous vehicle can effectively and accurately understand its surrounding conditions and environments in order to perform a responsive action,” explains Prof. Jeon. “We devised a detection model based on YOLOv3, a well-known identification algorithm. The model was first used for 2D object detection and then modified for 3D objects,” he elaborates.
    The team fed the collected RGB images and point cloud data as input to YOLOv3, which, in turn, output classification labels and bounding boxes with confidence scores. They then tested its performance with the Lyft dataset. The early results revealed that YOLOv3 achieved an extremely high accuracy of detection (>96%) for both 2D and 3D objects, outperforming other state-of-the-art detection models.
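    As a rough illustration of the detector’s input/output contract described above, here is a hypothetical sketch: a placeholder function stands in for the trained network, consuming an RGB frame and a LiDAR point cloud and returning class labels, 3D boxes and confidence scores that downstream code can filter. None of the names, shapes or numbers below come from the paper.

```python
# Hypothetical stand-in for a YOLO-style 3D detector (not the authors' network
# or the Lyft data). Shows only the input/output contract: RGB image + point
# cloud in, labeled 3D boxes with confidence scores out.
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Detection:
    label: str              # e.g. "car", "pedestrian"
    box: np.ndarray         # 3D box as (x, y, z, length, width, height, yaw)
    confidence: float       # 0..1

def detect_objects(rgb: np.ndarray, point_cloud: np.ndarray) -> List[Detection]:
    """Placeholder for the trained detector; returns fixed dummy detections."""
    return [
        Detection("car", np.array([12.0, -1.5, 0.0, 4.5, 1.9, 1.6, 0.1]), 0.97),
        Detection("pedestrian", np.array([6.0, 2.2, 0.0, 0.6, 0.6, 1.7, 0.0]), 0.58),
    ]

rgb = np.zeros((720, 1280, 3), dtype=np.uint8)          # dummy camera frame
point_cloud = np.zeros((50_000, 3), dtype=np.float32)   # dummy LiDAR points

# Keep only confident detections before handing them to planning and control
detections = [d for d in detect_objects(rgb, point_cloud) if d.confidence > 0.9]
for d in detections:
    print(d.label, d.confidence, d.box[:3])
```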
    The method can be applied to autonomous vehicles, autonomous parking, autonomous delivery and future autonomous robots, as well as in applications where object and obstacle detection, tracking and visual localization are required. “At present, autonomous driving is being performed through LiDAR-based image processing, but it is predicted that a general camera will replace the role of LiDAR in the future. As such, the technology used in autonomous vehicles is changing every moment, and we are at the forefront,” highlights Prof. Jeon. “Based on the development of element technologies, autonomous vehicles with improved safety should be available in the next 5-10 years,” he concludes optimistically.
    Story Source:
    Materials provided by Incheon National University. Note: Content may be edited for style and length.

  • A peculiar protected structure links Viking knots with quantum vortices

    Scientists have shown how three vortices can be linked in a way that prevents them from being dismantled. The structure of the links resembles a pattern used by Vikings and other ancient cultures, although this study focused on vortices in a special form of matter known as a Bose-Einstein condensate. The findings have implications for quantum computing, particle physics and other fields.
    Postdoctoral researcher Toni Annala uses strings and water vortices to explain the phenomenon: ‘If you make a link structure out of, say, three unbroken strings in a circle, you can’t unravel it because the string can’t go through another string. If, on the other hand, the same circular structure is made in water, the water vortices can collide and merge if they are not protected.’
    ‘In a Bose-Einstein condensate, the link structure is somewhere between the two,’ says Annala, who began working on this in Professor Mikko Möttönen’s research group at Aalto University before moving back to the University of British Columbia and then to the Institute for Advanced Study in Princeton. Roberto Zamora-Zamora, a postdoctoral researcher in Möttönen’s group, was also involved in the study.
    The researchers mathematically demonstrated the existence of a structure of linked vortices that cannot break apart because of their fundamental properties. ‘The new element here is that we were able to mathematically construct three different flow vortices that were linked but could not pass through each other without topological consequences. If the vortices interpenetrate each other, a cord would form at the intersection, which binds the vortices together and consumes energy. This means that the structure cannot easily break down,’ says Möttönen.
    From antiquity to cosmic strands
    The structure is conceptually similar to the Borromean rings, a pattern of three interlinked circles which has been widely used in symbolism and as a coat of arms. A Viking symbol associated with Odin has three triangles interlocked in a similar way. If one of the circles or triangles is removed, the entire pattern dissolves because the remaining two are not directly connected. Each element thus links its two partners, stabilising the structure as a whole.
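    For readers who want the topological statement behind this picture, the defining property of a Borromean-type link can be written compactly using standard knot-theory notation (pairwise linking numbers and the Milnor triple linking number); this is textbook material, not an equation taken from the study.

```latex
% Borromean link L = K_1 \cup K_2 \cup K_3 (three mutually interlocked rings):
% every two-component sublink is trivial, yet the full three-component link is not.
\[
  \operatorname{lk}(K_i, K_j) = 0 \quad \text{for all } i \neq j,
  \qquad
  \bar{\mu}_{123}(L) = \pm 1 \neq 0 ,
\]
% so deleting any one component leaves an unlinked pair, while all three
% together cannot be pulled apart: the "mutual protection" described above.
```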
    The mathematical analysis in this research shows how similarly robust structures could exist between knotted or linked vortices. Such structures might be observed in certain types of liquid crystals or condensed matter systems and could affect how those systems behave and develop.
    ‘To our surprise, these topologically protected links and knots had not been invented before. This is probably because the link structure requires vortices with three different types of flow, which is much more complex than the previously considered two-vortex systems,’ says Möttönen.
    These findings may one day help make quantum computing more accurate. In topological quantum computing, the logical operations would be carried out by braiding different types of vortices around each other in various ways. ‘In normal liquids, knots unravel, but in quantum fields there can be knots with topological protection, as we are now discovering,’ says Möttönen.
    Annala adds that ‘the same theoretical model can be used to describe structures in many different systems, such as cosmic strings in cosmology.’ The topological structures used in the study also correspond to the vacuum structures in quantum field theory. The results could therefore also have implications for particle physics.
    Next, the researchers plan to theoretically demonstrate the existence of a knot in a Bose-Einstein condensate that would be topologically protected against dissolving in an experimentally feasible scenario. ‘The existence of topologically protected knots is one of the fundamental questions of nature. After a mathematical proof, we can move on to simulations and experimental research,’ says Möttönen.
    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.