More stories


    Quantum dots at room temp, using lab-designed protein

    Nature uses 20 canonical amino acids as building blocks to make proteins, combining their sequences to create complex molecules that perform biological functions.
    But what happens with the sequences not selected by nature? And what possibilities lie in constructing entirely new sequences to make novel, or de novo, proteins bearing little resemblance to anything in nature?
    That’s the terrain Princeton University’s Hecht Lab works in. And recently, their curiosity for designing their own sequences paid off.
    They discovered the first known de novo protein that catalyzes, or drives, the synthesis of quantum dots. Quantum dots are fluorescent nanocrystals used in electronic applications from LED screens to solar panels.
    Their work opens the door to making nanomaterials in a more sustainable way by demonstrating that protein sequences not derived from nature can be used to synthesize functional materials — with pronounced benefits to the environment.
    Quantum dots are normally made in industrial settings with high temperatures and toxic, expensive solvents — a process that is neither economical nor environmentally friendly. But Hecht Lab researchers pulled off the process at the bench using water as a solvent, making a stable end-product at room temperature.


    Particles of light may create fluid flow, data-theory comparison suggests

    A new computational analysis by theorists at the U.S. Department of Energy’s Brookhaven National Laboratory and Wayne State University supports the idea that photons (a.k.a. particles of light) colliding with heavy ions can create a fluid of “strongly interacting” particles. In a paper just published in Physical Review Letters, they show that calculations describing such a system match up with data collected by the ATLAS detector at Europe’s Large Hadron Collider (LHC).
    As the paper explains, the calculations are based on the hydrodynamic particle flow seen in head-on collisions of various types of ions at both the LHC and the Relativistic Heavy Ion Collider (RHIC), a DOE Office of Science user facility for nuclear physics research at Brookhaven Lab. With only modest changes, these calculations also describe flow patterns seen in near-miss collisions, where photons that form a cloud around the speeding ions collide with the ions in the opposite beam.
    “The upshot is that, using the same framework we use to describe lead-lead and proton-lead collisions, we can describe the data of these ultra-peripheral collisions where we have a photon colliding with a lead nucleus,” said Brookhaven Lab theorist Bjoern Schenke, a coauthor of the paper. “That tells you there’s a possibility that, in these photon-ion collisions, we create a small dense strongly interacting medium that is well described by hydrodynamics — just like in the larger systems.”
    Fluid signatures
    Observations of particles flowing in characteristic ways have been key evidence that the larger collision systems (lead-lead and proton-lead collisions at the LHC; and gold-gold and proton-gold collisions at RHIC) create a nearly perfect fluid. The flow patterns were thought to stem from the enormous pressure gradients created by the large number of strongly interacting particles produced where the colliding ions overlap.
    “By smashing these high-energy nuclei together we’re creating such high energy density — compressing the kinetic energy of these guys into such a small space — that this stuff essentially behaves like a fluid,” Schenke said.


    Model shows how intelligent-like behavior can emerge from non-living agents

    From a distance, they looked like clouds of dust. Yet, the swarm of microrobots in author Michael Crichton’s bestseller “Prey” was self-organized. It acted with rudimentary intelligence, learning, evolving and communicating with itself to grow more powerful.
    A new model by a team of researchers led by Penn State and inspired by Crichton’s novel describes how biological or technical systems form complex structures equipped with signal-processing capabilities that allow the systems to respond to stimuli and perform functional tasks without external guidance.
    “Basically, these little nanobots become self-organized and self-aware,” said Igor Aronson, Huck Chair Professor of Biomedical Engineering, Chemistry, and Mathematics at Penn State, explaining the plot of Crichton’s book. The novel inspired Aronson to study the emergence of collective motion among interacting, self-propelled agents. The research was recently published in Nature Communications.
    Aronson and a team of physicists from Ludwig Maximilian University of Munich developed the model. The findings have implications for microrobotics and for any field involving functional, self-assembled materials formed from simple building blocks, Aronson said. For example, robotics engineers could create swarms of microrobots capable of performing complex tasks such as pollutant scavenging or threat detection.
    “If we look to nature, we see that many living creatures rely on communication and teamwork because it enhances their chances of survival,” Aronson said.
    The computer model conceived by researchers from Penn State and Ludwig Maximilian University of Munich predicted that communication among small, self-propelled agents leads to intelligent-like collective behavior. The study demonstrated that communication dramatically expands an individual unit’s ability to form complex functional states akin to living systems.
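The authors’ model itself is not reproduced in the article, but the core idea, that local interaction among self-propelled agents can produce ordered collective motion, can be sketched with a minimal Vicsek-style alignment model. This is a classic model of collective motion, not the study’s own; all parameter values below are arbitrary illustrative choices.

```python
import numpy as np

def vicsek_step(pos, theta, box=10.0, radius=1.0, speed=0.1, noise=0.1, rng=None):
    """One update of a minimal Vicsek model: each agent adopts the mean
    heading of its neighbors within `radius`, plus a small angular noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(pos)
    new_theta = np.empty(n)
    for i in range(n):
        d = pos - pos[i]
        d -= box * np.round(d / box)              # minimal-image periodic boundaries
        nbr = np.linalg.norm(d, axis=1) < radius  # neighbor mask (includes agent i)
        # mean heading of neighbors, via the angle of the summed unit vectors
        new_theta[i] = np.arctan2(np.sin(theta[nbr]).mean(),
                                  np.cos(theta[nbr]).mean())
    new_theta += rng.uniform(-noise, noise, n)
    vel = speed * np.column_stack([np.cos(new_theta), np.sin(new_theta)])
    return (pos + vel) % box, new_theta

rng = np.random.default_rng(42)
pos = rng.uniform(0.0, 10.0, size=(100, 2))      # random initial positions
theta = rng.uniform(-np.pi, np.pi, 100)          # random initial headings
for _ in range(200):
    pos, theta = vicsek_step(pos, theta, rng=rng)

# Polar order parameter: ~0 for random headings, 1 for a fully aligned swarm.
order = np.hypot(np.sin(theta).mean(), np.cos(theta).mean())
```

At this low noise level the order parameter typically rises well above its random-heading baseline after a few hundred steps, meaning the initially disordered swarm ends up moving as a coherent group, purely through local interactions.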


    Flying snakes help scientists design new robots

    Robots have been designed to move in ways that mimic animal movements, such as walking and swimming. Scientists are now considering how to design robots that mimic the gliding motion exhibited by flying snakes.
    In Physics of Fluids, by AIP Publishing, researchers from the University of Virginia and Virginia Tech explored the lift production mechanism of flying snakes, which undulate side-to-side as they move from the tops of trees to the ground to escape predators or to move around quickly and efficiently. The undulation allows snakes to glide for long distances, as much as 25 meters from a 15-meter tower.
    To understand how the undulations provide lift, the investigators developed a computational model derived from data obtained through high-speed video of flying snakes. A key component of this model is the cross-sectional shape of the snake’s body, which resembles an elongated frisbee or flying disc.
    The cross-sectional shape is essential for understanding how the snake can glide so far. In a frisbee, the spinning disc creates increased air pressure below the disc and suction on its top, lifting the disc into the air. To help create the same type of pressure differential across its body, the snake undulates side to side, producing a low-pressure region above its back and a high-pressure region beneath its belly. This lifts the snake and allows it to glide through the air.
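As a rough plausibility check on this pressure-differential picture, one can estimate the mean belly-to-back pressure difference needed to support a snake’s weight in a steady glide. The mass and body dimensions below are illustrative assumptions, not measurements from the study.

```python
# Back-of-envelope: mean pressure differential needed to support a gliding
# snake's weight. All numbers are illustrative assumptions, not data
# from the study.
g = 9.81      # gravitational acceleration, m/s^2
mass = 0.1    # assumed snake mass, kg
length = 0.7  # assumed body length, m
width = 0.03  # assumed flattened body width, m

area = length * width      # rough planform area, m^2
delta_p = mass * g / area  # mean belly-minus-back pressure, Pa
print(f"required mean pressure differential: {delta_p:.1f} Pa")
```

The answer is a few tens of pascals, a tiny fraction of atmospheric pressure, which is consistent with the idea that a modest flow-induced pressure asymmetry across the flattened body is enough to keep the snake aloft.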
    “The snake’s horizontal undulation creates a series of major vortex structures, including leading edge vortices, LEV, and trailing edge vortices, TEV,” said author Haibo Dong of the University of Virginia. “The formation and development of the LEV on the dorsal, or back, surface of the snake body plays an important role in producing lift.”
    The LEVs form near the head and move back along the body. The investigators found that the LEVs hold for longer intervals at the curves in the snake’s body before being shed. These curves form during the undulation and are key to understanding the lift mechanism.
    The group considered several features, such as the angle of attack that the snake forms with the oncoming airflow and the frequency of its undulations, to determine which were important for glide performance. In their natural setting, flying snakes typically undulate between one and two times per second. Surprisingly, the researchers found that more rapid undulation decreases aerodynamic performance.
    “The general trend we see is that a frequency increase leads to an instability in the vortex structure, causing some vortex tubes to spin. The spinning vortex tubes tend to detach from the surface, leading to a decrease in lift,” said Dong.
    The scientists hope their findings will lead to increased understanding of gliding motion and to a more optimal design for gliding snake robots.
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.


    AI model proactively predicts if a COVID-19 test might be positive or not

    COVID-19 and its latest Omicron strains continue to cause infections across the country as well as globally. Serology (blood) and molecular tests are the two most commonly used methods for rapid COVID-19 testing. Because the two test types use different mechanisms, their results can differ significantly: molecular tests measure the presence of viral SARS-CoV-2 RNA, while serology tests detect the presence of antibodies triggered by the SARS-CoV-2 virus.
    Until now, there has been no study of the correlation between serology and molecular tests, or of which COVID-19 symptoms play a key role in producing a positive test result. A study from Florida Atlantic University’s College of Engineering and Computer Science using machine learning provides important new evidence of how molecular and serology tests are correlated, and of which features are the most useful in distinguishing between positive and negative COVID-19 test outcomes.
    Researchers from the College of Engineering and Computer Science trained five classification algorithms to predict COVID-19 test results. They created an accurate predictive model using easy-to-obtain symptom features, such as the number of days post-symptom onset, fever and temperature, along with demographic features such as age and gender.
    The study demonstrates that machine-learning models, trained using simple symptom and demographic features, can help predict COVID-19 infections. Results, published in the journal Smart Health, identify the key symptom features associated with COVID-19 infection and provide a way for rapid screening and cost effective infection detection.
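The study’s dataset and exact models are not public here, so the sketch below trains a single logistic-regression classifier on synthetic stand-in data with the same kinds of features the article lists. The label rule is invented purely so the model has a pattern to learn; it is not the study’s finding.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-in data; the real study used clinical records and five
# different classifiers. The feature list mirrors the article's.
days_post_onset = rng.integers(0, 40, n).astype(float)
fever = rng.integers(0, 2, n).astype(float)
temperature = rng.normal(37.0, 0.8, n)
age = rng.integers(18, 90, n).astype(float)
gender = rng.integers(0, 2, n).astype(float)

X = np.column_stack([days_post_onset, fever, temperature, age, gender])
# Invented label rule so there is a pattern to learn: fever within the
# first eight post-onset days tends to mean a positive molecular test.
y = ((fever == 1) & (days_post_onset <= 8)).astype(float)

# Standardize features, then fit logistic regression by gradient descent.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))
    w -= 0.5 * (Xs.T @ (p - y)) / n
    b -= 0.5 * (p - y).mean()

p = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))
acc = ((p > 0.5) == (y > 0.5)).mean()
```

Because the synthetic labels depend only on fever and days post-onset, the fitted model ends up leaning on exactly those features, which mirrors the kind of feature-importance conclusion the study draws from real data.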
    Findings reveal that the number of days experiencing symptoms such as fever and difficulty breathing plays a large role in COVID-19 test results. Findings also show that molecular tests have a much narrower post-symptom-onset window (three to eight days) than serology tests (five to 38 days). As a result, the molecular test has the lowest positive rate because it measures only current infection.
    Furthermore, COVID-19 tests vary significantly, partially because donors’ immune response and viral load — the target of different test methods — continuously change. Even for the same donor, it might be possible to observe different positive/negative results from the two types of tests.


    2D material may enable ultra-sharp cellphone photos in low light

    A new type of active pixel sensor that uses a novel two-dimensional material may both enable ultra-sharp cellphone photos and create a new class of extremely energy-efficient Internet of Things (IoT) sensors, according to a team of Penn State researchers.
    “When people are looking for a new phone, what are the specs that they are looking for?” said Saptarshi Das, associate professor of engineering science and mechanics and lead author of the study published Nov. 17 in Nature Materials. “Quite often, they are looking for a good camera, and what does a good camera mean to most people? Sharp photos with high resolution.”
    Most people just snap a photo of a friend, a family gathering, or a sporting event, and never think about what happens “behind the scenes” inside the phone when one snaps a picture. According to Das, there is quite a bit happening to enable you to see a photo right after you take it, and this involves image processing.
    “When you take an image, many of the cameras have some kind of processing that goes on in the phone, and in fact, this sometimes makes the photo look even better than what you are seeing with your eyes,” Das said. “These next generation of phone cameras integrate image capture with image processing to make this possible, and that was not possible with older generations of cameras.”
    However, the great photos in the newest cameras have a catch — the processing requires a lot of energy.
    “There’s an energy cost associated with taking a lot of images,” said Akhil Dodda, a graduate research assistant at Penn State at the time of the study who is now a research staff member at Western Digital, and co-first author of the study. “If you take 10,000 images, that is fine, but somebody is paying the energy costs for that. If you can bring it down by a hundredfold, then you can take 100 times more images and still spend the same amount of energy. It makes photography more sustainable so that people can take more selfies and other pictures when they are traveling. And this is exactly where innovation in materials comes into the picture.”
    The innovation in materials outlined in the study revolves around adding in-sensor processing to active pixel sensors to reduce their energy use. The team turned to molybdenum disulfide, a novel 2D material (a class of materials only one or a few atoms thick). It is also a semiconductor and sensitive to light, which makes it an ideal candidate for low-energy in-sensor processing of images.


    Hummingbird flight could provide insights for biomimicry in aerial vehicles

    Hummingbirds occupy a unique place in nature: They fly like insects but have the musculoskeletal system of birds. According to Bo Cheng, the Kenneth K. and Olivia J. Kuo Early Career Associate Professor in Mechanical Engineering at Penn State, hummingbirds have extreme aerial agility and flight forms, which is why many drones and other aerial vehicles are designed to mimic hummingbird movement. Using a novel modeling method, Cheng and his team of researchers gained new insights into how hummingbirds produce wing movement, which could lead to design improvements in flying robots.
    Their results were published this week in the Proceedings of the Royal Society B.
    “We essentially reverse-engineered the inner working of the wing musculoskeletal system — how the muscles and skeleton work in hummingbirds to flap the wings,” said first author and Penn State mechanical engineering graduate student Suyash Agrawal. “The traditional methods have mostly focused on measuring activity of a bird or insect when they are in natural flight or in an artificial environment where flight-like conditions are simulated. But most insects and, among birds specifically, hummingbirds are very small. The data that we can get from those measurements are limited.”
    The researchers used muscle anatomy literature, computational fluid dynamics simulation data and wing-skeletal movement information captured using micro-CT and X-ray methods to inform their model. They also used an optimization algorithm based on evolutionary strategies, known as the genetic algorithm, to calibrate the parameters of the model. According to the researchers, their approach is the first to integrate these disparate parts for biological fliers.
    “We can simulate the whole reconstructed motion of the hummingbird wing and then simulate all the flows and forces generated by the flapping wing, including all the pressure acting on the wing,” Cheng said. “From that, we are able to back-calculate the required total muscular torque that is needed to flap the wing. And that torque is something we use to calibrate our model.”
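The article describes calibrating the model’s parameters with a genetic algorithm so the simulated torque matches the torque back-calculated from the flow simulation. The sketch below shows that general recipe on a deliberately simplified stand-in problem: the one-line torque model, parameter ranges, and population settings are all assumptions for illustration, not the authors’ setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical forward model: wing torque over one wingbeat as a function of
# two parameters (amplitude and phase). Stands in for the full simulation.
t = np.linspace(0.0, 1.0, 100)

def torque(params):
    amp, phase = params
    return amp * np.sin(2 * np.pi * t + phase)

target = torque([1.3, 0.4])  # "measured" torque curve we want to reproduce

def fitness(params):
    # Negative mean squared error against the target: higher is better.
    return -np.mean((torque(params) - target) ** 2)

# Minimal genetic-algorithm loop: keep the fittest, mutate them to breed
# the next generation (elitism + Gaussian mutation, no crossover).
pop = rng.uniform([0.1, -np.pi], [3.0, np.pi], size=(40, 2))
for _ in range(60):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-10:]]                 # 10 fittest survive
    children = elite[rng.integers(0, 10, 30)] + rng.normal(0, 0.05, (30, 2))
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(p) for p in pop])]
```

After a few dozen generations the best individual reproduces the target torque curve closely, which is the same logic the researchers use: parameters are judged purely by how well the resulting torque matches, with no gradients required.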
    With this model, the researchers uncovered previously unknown principles of hummingbird wing actuation.
    The first discovery, according to Cheng, was that hummingbirds’ primary muscles, that is, their flight engines, do not merely flap the wings back and forth, but instead pull the wings in three directions: up and down, back and forth, and twisting — or pitching — of the wing. The researchers also found that hummingbirds tighten their shoulder joints in both the up-and-down direction and the pitch direction using multiple smaller muscles.
    “It’s like when we do fitness training and a trainer says to tighten your core to be more agile,” Cheng said. “We found that hummingbirds are using similar kind of a mechanism. They tighten their wings in the pitch and up-down directions but keep the wing loose along the back-and-forth direction, so their wings appear to be flapping back and forth only while their power muscles, or their flight engines, are actually pulling the wings in all three directions. In this way, the wings have very good agility in the up and down motion as well as the twist motion.”
    While Cheng emphasized that the results from the optimized model are predictions that will need validation, he said that it has implications for technological development of aerial vehicles.
    “Even though the technology is not there yet to fully mimic hummingbird flight, our work provides essential principles for informed mimicry of hummingbirds hopefully for the next generation of agile aerial systems,” he said.
    The other authors were Zafar Anwar, a doctoral student in the Penn State Department of Mechanical Engineering; Bret W. Tobalske of the Division of Biological Sciences at the University of Montana; Haoxiang Luo of the Department of Mechanical Engineering at Vanderbilt University; and Tyson L. Hedrick of the Department of Biology at the University of North Carolina.
    The Office of Naval Research funded this work.
    Story Source:
    Materials provided by Penn State. Original written by Sarah Small.


    Does throwing my voice make you want to shop here?

    Virtual environments, including those for commerce, are increasingly common, and they aim to provide an experience for the user that is as realistic as possible. However, virtual environments also give researchers a new opportunity to conduct experiments that would not be possible in the real world. Researchers from the University of Tsukuba have done just that by exploring how shifting the position of a virtual shop assistant’s voice away from its visual position affects the shopping experience of humans in a virtual reality store.
    Humans locate sound by combining visual and auditory cues. Because the visual cues are generally less variable, they can override audio cues, leading to the well-known ventriloquism effect, in which a person perceives the location of a sound to be different from its actual source. It is also well known that humans maintain personal space, which varies according to social, personal, and environmental factors. Although both phenomena have long been studied individually, until the development of virtual reality it had not been possible to study how the ventriloquism effect alters personal space.
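The ventriloquism effect described above is usually quantified with reliability-weighted cue integration: the perceived location is an average of the visual and auditory estimates, each weighted by the inverse of its variance. This is the standard textbook account, not a model taken from this study, and the positions and variances below are illustrative numbers.

```python
# Standard reliability-weighted cue integration. Each cue's weight is the
# inverse of its variance, so the less noisy visual estimate dominates.
def fuse(x_visual, var_visual, x_audio, var_audio):
    w_v = 1.0 / var_visual
    w_a = 1.0 / var_audio
    x = (w_v * x_visual + w_a * x_audio) / (w_v + w_a)
    var = 1.0 / (w_v + w_a)  # fused estimate is more precise than either cue
    return x, var

# Vision is typically far more precise than hearing for direction, so the
# fused location sits close to the visual position (illustrative numbers).
x, var = fuse(x_visual=0.0, var_visual=1.0, x_audio=10.0, var_audio=9.0)
print(x, var)  # x = 1.0: pulled 90% of the way from the sound toward vision
```

With these numbers the fused location lands at 1.0, nine-tenths of the way from the true sound position toward the visual one, which is exactly the captured-by-vision behavior the ventriloquism effect describes.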
    “In particular, we wanted to know how it affects the rapport between the user and shop assistant,” says Professor Zempo Keiichi, lead author of the study. Rapport, or the quality of interpersonal service, strongly affects loyalty and satisfaction, and skilled salespeople use several techniques to build rapport with customers.
    In their experiments, the researchers asked 16 people in the virtual shop environment to define their personal space and record their impressions when approached by shop assistants. Some assistants had a voice and an image at the same position, while others had a voice located at varying distances between the user and the assistant.
    “We found that rapport was not affected when the deviation between the sound and visual positions could not be tolerated; however, when it could be tolerated, we found two distinct phenomena,” explains Professor Zempo Keiichi. The first was similar to the “uncanny valley,” which occurs when an imperfect human representation invokes feelings of uneasiness in a real human. This decreased rapport with the virtual assistant. But when the sound moved even closer to the human, the rapport increased.
    The authors call this phenomenon the “mouth-in-the-door” phenomenon because it is similar to the “foot-in-the-door” phenomenon, in which a small, unconscious consent, such as not moving away when someone starts to speak, causes a person to improve their evaluation of the other person. Without these virtual experiments, this phenomenon would have likely remained undiscovered. But now that it is known, the authors believe it can be used to improve the user experience, especially in virtual shop scenarios.
    Story Source:
    Materials provided by University of Tsukuba.