More stories


    Novel navigation strategies for microscopic swimmers

    Autonomous optimal navigation of microswimmers is indeed possible, as researchers from the Max Planck Institute for Dynamics and Self-Organization (MPI-DS) have shown. In contrast to the targeted navigation of boats, the motion of swimmers at the microscale is strongly disturbed by fluctuations. The researchers have now described a navigation strategy for microswimmers that does not require an external interpreter. Their findings may contribute to the understanding of transport mechanisms in the microcosm as well as to applications such as targeted drug delivery.
    Although the shortest way between two points is a straight line, it is not always the most efficient path to follow. Complex currents often affect the motion of microswimmers and make it difficult for them to reach their destination. At the same time, exploiting these currents to navigate as fast as possible confers a clear evolutionary advantage. Whereas such strategies allow biological microswimmers to better access food or escape a predator, microrobots could in this way be directed to perform specific tasks.
    The optimal path in a given current can readily be determined mathematically, yet fluctuations perturb the motion of microswimmers and drive them away from the optimal route. They therefore have to readjust their motion to account for environmental changes. This typically requires the help of an external interpreter and takes away their autonomy.
    “Thanks to evolution, some microorganisms have developed autonomous strategies that enable directed motion towards higher concentrations of nutrients or light,” explains Lorenzo Piro, first author of the study. Inspired by this idea, the researchers from the Department of Living Matter Physics at the MPI-DS designed strategies that allow microswimmers to navigate optimally in a nearly autonomous way.
    Light as a guide for autonomous navigation
    When an external interpreter defines the navigation pattern, microswimmers on average follow a well-defined path. A good approach is therefore to guide the microswimmer along that path within the current. This can be achieved autonomously via external stimuli, despite the presence of fluctuations. The principle could be applied to swimmers that respond to variations in light, such as certain algae, in which case the optimal path can simply be illuminated. Remarkably, the resulting performance is comparable to externally supervised navigation. “These new strategies can moreover conveniently be applied to more complex scenarios, such as navigation on curved surfaces or in the presence of random currents,” concludes Ramin Golestanian, director at MPI-DS.
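    As a rough illustration of the idea, the toy simulation below steers a noisy, self-propelled particle back toward an illuminated straight path. The model, parameters, and the way the "light gradient" is encoded are invented for illustration; this is not the MPI-DS scheme.

    ```python
    import math
    import random

    def simulate_swimmer(steps=2000, dt=0.01, v=1.0, noise=1.0,
                         k_align=5.0, seed=0):
        """Toy model: a self-propelled swimmer follows an illuminated
        straight path along the x-axis.  At each step it steers toward a
        heading that points forward and back toward the path (a crude
        stand-in for responding to a light-intensity gradient), while
        rotational noise perturbs its orientation."""
        rng = random.Random(seed)
        x, y, theta = 0.0, 0.5, 0.0
        for _ in range(steps):
            # desired heading: forward, corrected back toward y = 0
            target = math.atan2(-y, 1.0)
            # relax the heading toward the target, plus rotational diffusion
            dtheta = k_align * math.sin(target - theta) * dt
            dtheta += math.sqrt(2 * noise * dt) * rng.gauss(0, 1)
            theta += dtheta
            x += v * math.cos(theta) * dt
            y += v * math.sin(theta) * dt
        return x, y
    ```

    With the default parameters the swimmer makes steady forward progress while rotational noise keeps jostling it off the path; increasing `noise` or weakening `k_align` lets it wander further from the illuminated route.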
    Possible applications of the study thus range from targeted drug delivery at the microscale to the optimal design of autonomous micromachines.
    Story Source:
    Materials provided by Max Planck Institute for Dynamics and Self-Organization. Note: Content may be edited for style and length.


    Optical foundations illuminated by quantum light

    Optics, the study of light, is one of the oldest fields in physics and has never ceased to surprise researchers. Although the classical description of light as a wave phenomenon is rarely questioned, the physical origins of some optical effects are. A team of researchers at Tampere University has brought the debate around one fundamental wave effect, the anomalous behaviour of focused light waves, into the quantum domain.
    The researchers have shown that quantum waves behave significantly differently from their classical counterparts and can be used to increase the precision of distance measurements. Their findings also add to the discussion on the physical origin of the anomalous focusing behaviour. The results are published in the journal Nature Photonics.
    “Interestingly, we started with an idea based on our earlier results and set out to structure quantum light for enhanced measurement precision. However, we then realised that the underlying physics of this application also contributes to the long debate about the origins of the Gouy phase anomaly of focused light fields,” explains Robert Fickler, group leader of the Experimental Quantum Optics group at Tampere University.
    Quantum waves behave differently but point to the same origin
    Over the last decades, methods for structuring light fields down to the single-photon level have vastly matured, leading to a myriad of novel findings and a better understanding of the foundations of optics. However, the physical origin of why light behaves in such an unexpected way when passing through a focus, the so-called Gouy phase anomaly, is still often debated, despite its widespread use and importance in optical systems. The novelty of the current study is to bring the effect into the quantum domain.
    “When developing the theory to describe our experimental results, we realised (after a long debate) that the Gouy phase for quantum light is not only different from the standard one, but its origin can be linked to another quantum effect, just as was speculated in earlier work,” adds doctoral researcher Markus Hiekkamäki, lead author of the study.
    In the quantum domain, the anomalous behaviour is sped up compared to classical light. As the Gouy phase can be used to determine the distance a beam of light has propagated, the speed-up of the quantum Gouy phase could allow for more precise distance measurements.
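    For reference, the classical Gouy phase of a fundamental Gaussian beam is a standard textbook expression; the note below sketches it, with the caveat that the precise quantum generalisation is what the paper itself establishes.

    ```latex
    % Classical Gouy phase picked up by a fundamental Gaussian beam
    % passing through a focus at z = 0, with Rayleigh range z_R:
    \varphi_G(z) = \arctan\!\left(\frac{z}{z_R}\right)
    % The study's ``quantum Gouy phase'' accumulates faster than this
    % classical expression for multi-photon states, which is why it can
    % serve as a finer ruler for the propagation distance z.
    ```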
    With this new understanding at hand, the researchers are planning to develop novel techniques to enhance their measurement abilities such that it will be possible to measure more complex beams of structured photons. The team expects that this will help them push forward the application of the observed effect, and potentially bring to light more differences between quantum and classical light fields.
    Story Source:
    Materials provided by Tampere University.


    Sleep mode makes Energy Internet more energy efficient

    A group of scientists at Nagoya University, Japan, has developed a possible solution to one of the biggest problems of the Internet of Energy: energy efficiency. They did so by creating a controller that has a sleep mode and procures energy only when needed.
    Widespread generation of electricity from renewable energy has become necessary to combat the climate crisis. One solution to realize society’s electrification needs is the Internet of Energy, which would operate like the information Internet, except that it would consist of energy flows linked by smart power generation, smart power consumption, smart interconnection, and cloud sharing.
    When information is sent over the Internet, it is divided into transmittable units called ‘packets’, which are tagged with their destination. The energy Internet is based on a similar concept. Information tags are added to power pulses to create units called ‘power packets’. On the basis of requests from terminals, these are then distributed over networks to where they are needed. However, one problem is that since the packets are sent sporadically, the energy supply is intermittent. Current solutions, such as storage batteries or capacitors, complicate the system and reduce its efficiency.
    An alternative solution is what is known as ‘sparse control’, where the terminal’s actuators are active for part of the time and in sleep mode for the rest. In sleep mode, they consume no fuel or electricity, saving energy and reducing environmental and noise pollution. Although sparse control has been used with a single actuator, it does not necessarily perform well when multiple actuators are used. The problem of determining how to do this for multiple actuators is called the ‘maximum turn-off control problem’.
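    A minimal sketch of the awake/sleep idea, assuming a simple threshold rule; this greedy rule is an illustration only, not the paper's solution to the maximum turn-off control problem.

    ```python
    def schedule(errors, tolerance):
        """Toy 'sparse control' scheduler: given each actuator's current
        tracking error, wake only those whose error exceeds the tolerance
        and leave the rest in sleep mode (consuming no energy).  The real
        maximum turn-off control problem asks how to maximise sleep time
        while still guaranteeing performance; this only illustrates the
        awake/sleep trade-off."""
        return ["awake" if abs(e) > tolerance else "sleep" for e in errors]
    ```

    For example, `schedule([0.02, 0.5, -0.7, 0.01], 0.1)` wakes only the two actuators whose errors exceed the tolerance and leaves the other two asleep.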
    Now, a Nagoya University research group, led by Professor Shun-ichi Azuma and doctoral student Takumi Iwata of the Graduate School of Engineering, has developed a control scheme for multiple actuators. The scheme has an awake mode, during which it procures the power packets needed for control, and a sleep mode. The research was published in the International Journal of Robust and Nonlinear Control.
    “We can see our research being useful in the motor control of production equipment,” explains Professor Azuma. “This research provides a control system configuration method based on the assumption that the energy supply is intermittent. It has the advantage of eliminating the need for storage batteries and capacitors. It is expected to accelerate the practical application of the power packet type energy Internet.”
    This research was supported by the Japan Science and Technology Agency’s Emergent Research Support Program and by a Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology of Japan.
    Story Source:
    Materials provided by Nagoya University.


    Superconducting hardware could scale up brain-inspired computing

    Scientists have long looked to the brain as an inspiration for designing computing systems. Some researchers have recently gone even further by making computer hardware with a brainlike structure. These “neuromorphic chips” have already shown great promise, but they have used conventional digital electronics, limiting their complexity and speed. As the chips become larger and more complex, the signals between their individual components become backed up like cars on a gridlocked highway and reduce computation to a crawl.
    Now, a team at the National Institute of Standards and Technology (NIST) has demonstrated a solution to these communication challenges that may someday allow artificial neural systems to operate 100,000 times faster than the human brain.
    The human brain is a network of about 86 billion cells called neurons, each of which can have thousands of connections (known as synapses) with its neighbors. The neurons communicate with each other using short electrical pulses called spikes to create rich, time-varying activity patterns that form the basis of cognition. In neuromorphic chips, electronic components act as artificial neurons, routing spiking signals through a brainlike network.
    Doing away with conventional electronic communication infrastructure, researchers have designed networks with tiny light sources at each neuron that broadcast optical signals to thousands of connections. This scheme can be especially energy-efficient if superconducting devices are used to detect single particles of light known as photons — the smallest possible optical signal that could be used to represent a spike.
    In a new Nature Electronics paper, NIST researchers have demonstrated for the first time a circuit that behaves much like a biological synapse yet uses only single photons to transmit and receive signals. Such a feat is possible using superconducting single-photon detectors. The computation in the NIST circuit occurs where a single-photon detector meets a superconducting circuit element called a Josephson junction: a sandwich of superconducting materials separated by a thin insulating film. If the current through the sandwich exceeds a certain threshold, the Josephson junction begins to produce small voltage pulses called fluxons. Upon detecting a photon, the single-photon detector pushes the Josephson junction over this threshold, and fluxons accumulate as current in a superconducting loop. Researchers can tune the amount of current added to the loop per photon by applying a bias (an external current source powering the circuits) to one of the junctions; this per-photon increment is called the synaptic weight.
    This behavior is similar to that of biological synapses. The stored current serves as a form of short-term memory, as it provides a record of how many times the neuron produced a spike in the near past. The duration of this memory is set by the time it takes for the electric current to decay in the superconducting loops, which the NIST team demonstrated can vary from hundreds of nanoseconds to milliseconds, and likely beyond. This means the hardware could be matched to problems occurring at many different time scales — from high-speed industrial control systems to more leisurely conversations with humans. The ability to set different weights by changing the bias to the Josephson junctions permits a longer-term memory that can be used to make the networks programmable so that the same network could solve many different problems.
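    The stored-current behaviour described above can be caricatured as a leaky integrator: each photon deposits a weight-sized increment of current that then decays. The function below is such a sketch; the values and units are illustrative, not those of the NIST device.

    ```python
    import math

    def synapse_current(photon_times, weight, tau, t_eval):
        """Toy model of the superconducting synapse: each detected photon
        adds a fluxon-built current increment (scaled by the bias-set
        synaptic 'weight') to a superconducting loop, and each stored
        contribution decays exponentially with time constant tau.
        Returns the total loop current at time t_eval."""
        current = 0.0
        for t in photon_times:
            if t <= t_eval:
                current += weight * math.exp(-(t_eval - t) / tau)
        return current
    ```

    Two photons arriving in quick succession leave more stored current than one, which is exactly the short-term-memory effect the text describes; choosing a larger `tau` corresponds to the longer loop decay times the team demonstrated.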
    Synapses are a crucial computational component of the brain, so this demonstration of superconducting single-photon synapses is an important milestone on the path to realizing the team’s full vision of superconducting optoelectronic networks. Yet the pursuit is far from complete. The team’s next milestone will be to combine these synapses with on-chip sources of light to demonstrate full superconducting optoelectronic neurons.
    “We could use what we’ve demonstrated here to solve computational problems, but the scale would be limited,” NIST project leader Jeff Shainline said. “Our next goal is to combine this advance in superconducting electronics with semiconductor light sources. That will allow us to achieve communication between many more elements and solve large, consequential problems.”
    The team has already demonstrated light sources that could be used in a full system, but further work is required to integrate all the components on a single chip. The synapses themselves could be improved by using detector materials that operate at higher temperatures than the present system, and the team is also exploring techniques to implement synaptic weighting in larger-scale neuromorphic chips.
    The work was funded in part by the Defense Advanced Research Projects Agency.
    Story Source:
    Materials provided by National Institute of Standards and Technology (NIST).


    Repurposing existing drugs to fight new COVID-19 variants

    MSU researchers are using big data and AI to identify current drugs that could be applied to treat new COVID-19 variants.
    Finding new ways to treat the novel coronavirus and its ever-changing variants has been a challenge for researchers, especially when the traditional drug discovery and development process can take years. A Michigan State University researcher and his team are taking a high-tech approach to determine whether drugs already on the market can pull double duty in treating new COVID-19 variants.
    “The COVID-19 virus is a challenge because it continues to evolve,” said Bin Chen, an associate professor in the College of Human Medicine. “By using artificial intelligence and really large data sets, we can repurpose old drugs for new uses.”
    Chen built an international team of researchers with expertise on topics ranging from biology to computer science to tackle this challenge. First, Chen and his team turned to publicly available databases to mine for the unique coronavirus gene expression signatures from 1,700 host transcriptomic profiles that came from patient tissues, cell cultures and mouse models. These signatures revealed the biology shared by COVID-19 and its variants.
    With the virus’s signature and knowing which genes need to be suppressed and which genes need to be activated, the team was able to use a computer program to screen a drug library consisting of FDA-approved or investigational drugs to find candidates that could correct the expression of signature genes and further inhibit the coronavirus from replicating. Chen and his team discovered one novel candidate, IMD-0354, a drug that passed phase I clinical trials for the treatment of atopic dermatitis. A group in Korea later observed that it was 90-fold more effective against six COVID-19 variants than remdesivir, the first drug approved to treat COVID-19. The team further found that IMD-0354 inhibited the virus from copying itself by boosting the immune response pathways in the host cells. Based on the information learned, the researchers studied a prodrug of IMD-0354 called IMD-1041. A prodrug is an inactive substance that is metabolized within the body to create an active drug.
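    At its core, this kind of screen ranks drugs by how strongly they reverse the disease signature. The sketch below scores one hypothetical drug with a plain Pearson correlation; the real pipeline is far more involved, and the gene names and values here are invented.

    ```python
    def reversal_score(disease_signature, drug_profile):
        """Toy version of signature-based drug repurposing: score a drug
        by the Pearson correlation between the disease's gene-expression
        signature and the expression changes the drug induces.  A strongly
        negative score means the drug pushes signature genes in the
        opposite direction, flagging it as a repurposing candidate."""
        genes = sorted(set(disease_signature) & set(drug_profile))
        xs = [disease_signature[g] for g in genes]
        ys = [drug_profile[g] for g in genes]
        n = len(genes)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy)
    ```

    A drug profile that exactly flips the disease signature scores -1 (a perfect reverser), while one that mimics the disease scores +1; candidates like IMD-0354 would sit near the negative end of such a ranking.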
    “IMD-1041 is even more promising as it is orally available and has been investigated for chronic obstructive pulmonary disease, a group of lung diseases that block airflow and make it difficult to breathe,” Chen said. “Because the structure of IMD-1041 is undisclosed, we are developing a new artificial intelligence platform to design novel compounds that hopefully could be tested and evaluated in more advanced animal models.”
    The research was published in the journal iScience.
    This project was led by two senior postdoctoral scholars in the Chen lab: Jing Xing, who recently became a young investigator at the Chinese Academy of Sciences, and Rama Shankar, with support from researchers at Institut Pasteur Korea, Shanghai Institute of Materia Medica, University of Texas Medical Branch, Spectrum Health in Grand Rapids and Stanford University.
    Story Source:
    Materials provided by Michigan State University. Original written by Emilie Lorditch.


    Zooming in on the signals of cancer

    This year, about 240,000 people in the U.S. will discover they have lung cancer, and some 200,000 of them will be diagnosed with non-small-cell lung cancer. Cancer as a whole is the second leading cause of death in the U.S., after cardiovascular disease.
    Georgia Tech researcher Ahmet Coskun is working to improve the odds for these patients in two recently published studies that are essentially focused on understanding why and how patients respond differently to disease and treatments.
    “What we have learned is that connectivity and communication between molecules and between cells is what really controls everything, regarding whether or not patients get healthy, or how they will respond to drugs,” said Coskun, an assistant professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University.
    Published in the journals npj Precision Oncology and iScience, the studies detail the development of tools and techniques to deeply explore the tumor microenvironment at the subcellular level, utilizing the Coskun lab’s expertise in combining multiplex cellular imaging methods with artificial intelligence.
    “We are developing a better grasp of cellular signaling and decision making, and how it is coordinated in the tumor microenvironment, which can lead to better personalized, precision treatments for these patients,” said Coskun, who is keenly interested in why some patients respond to groundbreaking immunotherapy drugs, and some don’t.
    With that in mind, his team developed SpatialVizScore, a new method they describe in npj Precision Oncology, to deeply study tumor immunology in cancer tissues and help identify which patients are more likely to respond to an immunotherapy. It’s a significant upgrade to the current standard methodology used by cancer physicians and researchers, Immunoscore.


    Algorithms predict sports teams' moves with 80% accuracy

    Algorithms developed in Cornell’s Laboratory for Intelligent Systems and Controls can predict the in-game actions of volleyball players with more than 80% accuracy, and now the lab is collaborating with the Big Red hockey team to expand the research project’s applications.
    The algorithms are unique in that they take a holistic approach to action anticipation, combining visual data — for example, where an athlete is located on the court — with information that is more implicit, like an athlete’s specific role on the team.
    “Computer vision can interpret visual information such as jersey color and a player’s position or body posture,” said Silvia Ferrari, the John Brancaccio Professor of Mechanical and Aerospace Engineering, who led the research. “We still use that real-time information, but integrate hidden variables such as team strategy and player roles, things we as humans are able to infer because we’re experts at that particular context.”
    Ferrari and doctoral students Junyi Dong and Qingze Huo trained the algorithms to infer hidden variables the same way humans gain their sports knowledge — by watching games. The algorithms used machine learning to extract data from videos of volleyball games, and then used that data to help make predictions when shown a new set of games.
    The results were published Sept. 22 in the journal ACM Transactions on Intelligent Systems and Technology, and show the algorithms can infer players’ roles — for example, distinguishing a defense-passer from a blocker — with an average accuracy of nearly 85%, and can predict multiple actions over a sequence of up to 44 frames with an average accuracy of more than 80%. The actions included spiking, setting, blocking, digging, running, squatting, falling, standing and jumping.
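    To make the "hidden variable" idea concrete, the toy below infers a player's unobserved role from observed actions with Bayes' rule and then predicts the next action. The roles, actions, and probabilities are invented, and this is only the probabilistic skeleton, not the published algorithm, which also integrates visual features.

    ```python
    def predict_next_action(observed_actions, role_priors, action_probs):
        """Infer a player's hidden role from the actions seen so far
        (Bayes' rule over per-role action probabilities), then predict
        the next action from the role-weighted action distribution.
        Unseen actions get a tiny floor probability of 1e-6."""
        # posterior over roles given the observed action sequence
        posterior = dict(role_priors)
        for action in observed_actions:
            for role in posterior:
                posterior[role] *= action_probs[role].get(action, 1e-6)
        total = sum(posterior.values())
        posterior = {r: p / total for r, p in posterior.items()}
        # role-weighted prediction of the next action
        actions = {a for probs in action_probs.values() for a in probs}
        scores = {a: sum(posterior[r] * action_probs[r].get(a, 0.0)
                         for r in posterior) for a in actions}
        return max(scores, key=scores.get), posterior
    ```

    With a hypothetical "blocker" role that mostly blocks and jumps and a "defender" role that mostly digs and passes, observing a block followed by a jump pushes the posterior almost entirely onto the blocker role, and the prediction shifts accordingly.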
    Ferrari envisions teams using the algorithms to better prepare for competition by training them with existing game footage of an opponent and using their predictive abilities to practice specific plays and game scenarios.
    Ferrari has filed for a patent and is now working with the Big Red men’s hockey team to further develop the software. Using game footage provided by the team, Ferrari and her graduate students, led by Frank Kim, are designing algorithms that autonomously identify players, actions and game scenarios. One goal of the project is to help annotate game film, which is a tedious task when performed manually by team staff members.
    “Our program places a major emphasis on video analysis and data technology,” said Ben Russell, director of hockey operations for the Cornell men’s team. “We are constantly looking for ways to evolve as a coaching staff in order to better serve our players. I was very impressed with the research Professor Ferrari and her students have conducted thus far. I believe that this project has the potential to dramatically influence the way teams study and prepare for competition.”
    Beyond sports, the ability to anticipate human actions bears great potential for the future of human-machine interaction, according to Ferrari, who said improved software can help autonomous vehicles make better decisions, bring robots and humans closer together in warehouses, and can even make video games more enjoyable by enhancing the computer’s artificial intelligence.
    “Humans are not as unpredictable as the machine learning algorithms are making them out to be right now,” said Ferrari, who is also associate dean for cross-campus engineering research, “because if you actually take into account all of the content, all of the contextual clues, and you observe a group of people, you can do a lot better at predicting what they’re going to do.”
    The research was supported by the Office of Naval Research Code 311 and Code 351, and commercialization efforts are being supported by the Cornell Office of Technology Licensing.
    Story Source:
    Materials provided by Cornell University. Original written by Syl Kacapyr, courtesy of the Cornell Chronicle.


    Milestones achieved on the path to useful quantum technologies

    Tiny particles that remain interconnected even when separated by thousands of kilometres: Albert Einstein called this ‘spooky action at a distance’. Something that would be inexplicable by the laws of classical physics is a fundamental feature of quantum physics. Such entanglement can occur between multiple quantum particles, meaning that certain properties of the particles are intimately linked with each other. Entangled systems containing multiple quantum particles offer significant benefits in implementing quantum algorithms, which have the potential to be used in communications, data security or quantum computing.
    Researchers from Paderborn University have been working with colleagues from Ulm University to develop the first programmable optical quantum memory. The study was published as an ‘editor’s suggestion’ in the Physical Review Letters journal.
    Entangled light particles
    The ‘Integrated Quantum Optics’ group led by Prof. Christine Silberhorn from the Department of Physics and Institute for Photonic Quantum Systems (PhoQS) at Paderborn University is using minuscule light particles, or photons, as quantum systems. The researchers are seeking to entangle as many as possible in large states. Working together with researchers from the Institute of Theoretical Physics at Ulm University, they have now presented a new approach.
    Previously, attempts to entangle more than two particles resulted in very inefficient entanglement generation. If researchers wanted to link two particles with others, this could involve a long wait, as the interconnections that create this entanglement succeed only with limited probability rather than at the touch of a button. By the time the next suitable particle arrived, the earlier photons were often no longer part of the experiment, since storing qubit states remains a major experimental challenge.
    Gradually achieving greater entanglement
    “We have now developed a programmable, optical buffer quantum memory that can switch dynamically back and forth between different modes — storage mode, interference mode and the final release,” Silberhorn explains. In the experimental setup, a small quantum state can be stored until another state is generated, and the two can then be entangled. This enables a large entangled quantum state to ‘grow’ particle by particle. Silberhorn’s team has already used this method to entangle six particles, far more efficiently than in any previous experiment. By comparison, the largest entangled photonic state so far, produced by Chinese researchers, consisted of twelve individual particles; creating that state, however, took orders of magnitude more time.
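    A back-of-the-envelope model shows why a buffer memory changes the scaling. Assume each entangling link succeeds independently with probability p per attempt; this assumption and the numbers below are for illustration only, and the paper's actual protocol differs.

    ```python
    def expected_attempts(n_particles, p_link, buffered):
        """Toy accounting of why a buffer memory helps: with a memory,
        the entangled state grows link by link, and each of the
        (n_particles - 1) links is simply retried until it succeeds
        (a sum of geometric waits).  Without a memory, every link must
        succeed in the same run, which is exponentially unlikely."""
        links = n_particles - 1
        if buffered:
            return links / p_link
        return 1.0 / p_link ** links
    ```

    For a six-particle state with a 10% per-attempt link success, the buffered scheme needs on the order of 50 attempts, while the one-shot scheme needs on the order of 100,000; this is the qualitative gap the quantum memory closes.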
    The quantum physicist explains: “Our system allows entangled states of increasing size to be gradually built up — which is much more reliable, faster, and more efficient than any previous method. For us, this represents a milestone that puts us in striking distance of practical applications of large, entangled states for useful quantum technologies.” The new approach can be combined with all common photon-pair sources, meaning that other scientists will also be able to use the method.
    Story Source:
    Materials provided by Universität Paderborn.