More stories

  • Deep-space optical communication demonstration project forges ahead

    Researchers report new results from the NASA Deep Space Optical Communications (DSOC) technology demonstration project, which is developing and testing advanced laser sources for deep-space optical communication. The ability to perform free-space optical communication throughout the solar system would go beyond the capabilities of the radio communication systems in use today and provide the bandwidth necessary for future space missions to transmit large amounts of data, including high-definition images and video.
    The demonstration system consists of a flight laser transceiver, a ground laser transmitter and a ground laser receiver. The downlink transmitter has been installed on the Psyche spacecraft, which will travel to a unique metal asteroid also called Psyche, which orbits the Sun between Mars and Jupiter.
    Malcolm W. Wright, from the Jet Propulsion Laboratory, California Institute of Technology, will present the functional and environmental test results of the DSOC downlink flight laser transmitter assembly and ground uplink transmitter assembly at the Optica Laser Congress, 11–15 December 2022.
    Validating deep-space optical communications will allow high-definition imagery to be streamed back during robotic and manned exploration of planetary bodies while using resources comparable to state-of-the-art radio-frequency telecommunications.
    Transmitting into deep space
    Although free-space optical communications from space to ground have been demonstrated at distances as far away as the moon, extending such links to deep space ranges requires new types of laser transmitters. The downlink flight laser must have a high photon efficiency while supporting near kilowatt peak power. The uplink laser requires multi-kilowatt average powers with narrow linewidth, good beam quality and low modulation rates.
    The flight laser transmitter assembly uses a 5 W average power Er-Yb co-doped fiber-based master oscillator power amplifier laser with discrete pulse widths from 0.5 to 8 ns in a polarized output beam at 1550 nm with an extinction ratio of more than 33 dB. The laser passed verification and environmental tests before being integrated into the spacecraft. End-to-end testing of the flight laser transmitter with the ground receiver assembly also validated the optical link performance for a variety of pulse formats and verified the interface to the DSOC electronics assembly.
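    As a back-of-envelope illustration of how a 5 W average-power laser can reach near-kilowatt peaks, peak power is roughly average power divided by duty cycle. The sketch below assumes a hypothetical pulse repetition rate, which the article does not give:

      # Back-of-envelope peak-power estimate for a pulsed laser transmitter.
      # The repetition rate is an assumed, hypothetical value -- it is NOT
      # given in the article; the pulse width and average power are.
      average_power_w = 5.0    # 5 W average power (from the article)
      pulse_width_s = 0.5e-9   # shortest pulse width, 0.5 ns (from the article)
      rep_rate_hz = 10e6       # assumed repetition rate (hypothetical)

      duty_cycle = pulse_width_s * rep_rate_hz     # fraction of time the laser is on
      peak_power_w = average_power_w / duty_cycle  # energy conservation
      print(f"duty cycle = {duty_cycle:.1e}, peak power = {peak_power_w:.0f} W")
      # With these assumed numbers the peak is about 1 kW, consistent with
      # the "near kilowatt peak power" requirement described above.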
    Launching a new approach
    The ground uplink transmitter assembly can support optical links with up to 5.6 kW average power at 1064 nm. It includes ten kilowatt-class continuous-wave fiber-based laser transmitters modified to support the modulation formats. A remotely placed chiller provides thermal management for the lasers and power supplies. The uplink laser will also provide a light beacon onto which the flight transceiver can lock.
    “Using multiple individual laser sources that propagate through sub-apertures on the telescope’s primary mirror relieves the power requirement from a single source,” said Wright. “It also allows atmospheric turbulence mitigation and reduces the power density on the telescope mirrors.”
    Now that spacecraft-level testing is complete, the Psyche spacecraft — with the flight laser transceiver aboard — will be integrated into a launch vehicle. The DSOC technology demonstration will begin shortly after launch and continue for one year as the spacecraft travels away from Earth and eventually performs a flyby of Mars.
    Story Source:
    Materials provided by Optica. Note: Content may be edited for style and length.

  • Curved spacetime in the lab

    According to Einstein’s Theory of Relativity, space and time are inextricably connected. In our Universe, whose curvature is barely measurable, the structure of this spacetime is fixed. In a laboratory experiment, researchers from Heidelberg University have succeeded in realising an effective spacetime that can be manipulated. In their research on ultracold quantum gases, they were able to simulate an entire family of curved universes to investigate different cosmological scenarios and compare them with the predictions of a quantum field theoretical model. The research results were published in Nature.
    The emergence of space and time on cosmic time scales from the Big Bang to the present is the subject of current research that can only be based on the observation of our single Universe. The expansion and curvature of space are essential to cosmological models. In a flat space like our current Universe, the shortest distance between two points is always a straight line. “It is conceivable, however, that our Universe was curved in its early phase. Studying the consequences of a curved spacetime is therefore a pressing question in research,” states Prof. Dr Markus Oberthaler, a researcher at the Kirchhoff Institute for Physics at Heidelberg University. With his “Synthetic Quantum Systems” research group, he developed a quantum field simulator for this purpose.
    The quantum field simulator created in the lab consists of a cloud of potassium atoms cooled to just a few nanokelvins above absolute zero. This produces a Bose-Einstein condensate — a special quantum mechanical state of the atomic gas that is reached at very cold temperatures. Prof. Oberthaler explains that the Bose-Einstein condensate is a perfect background against which the smallest excitations, i.e. changes in the energy state of the atoms, become visible. The form of the atomic cloud determines the dimensionality and the properties of spacetime on which these excitations ride like waves. In our Universe, there are three dimensions of space as well as a fourth: time.
    In the experiment conducted by the Heidelberg physicists, the atoms are trapped in a thin layer. The excitations can therefore only propagate in two spatial directions — the space is two-dimensional. At the same time, the atomic cloud in the remaining two dimensions can be shaped in almost any way, whereby it is also possible to realise curved spacetimes. The interaction between the atoms can be precisely adjusted by a magnetic field, changing the propagation speed of the wavelike excitations on the Bose-Einstein condensate.
    “For the waves on the condensate, the propagation speed depends on the density and the interaction of the atoms. This gives us the opportunity to create conditions like those in an expanding universe,” explains Prof. Dr Stefan Flörchinger. The researcher, who previously worked at Heidelberg University and joined the University of Jena at the beginning of this year, developed the quantum field theoretical model used to quantitatively compare the experimental results.
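    In the standard Bogoliubov picture of a weakly interacting condensate, that propagation speed is c = sqrt(g·n/m), where n is the atomic density, m the atomic mass, and g an interaction strength set by the s-wave scattering length. A minimal sketch of this textbook relation, with illustrative numbers rather than the experiment’s actual parameters:

      import numpy as np

      # Textbook Bogoliubov speed of sound in a weakly interacting BEC:
      #   c = sqrt(g * n / m),  with  g = 4 * pi * hbar**2 * a / m.
      # Illustrative values only -- not the Heidelberg experiment's parameters.
      hbar = 1.054571817e-34        # reduced Planck constant, J*s
      m = 39 * 1.66053906660e-27    # mass of a potassium-39 atom, kg
      a = 100 * 5.29177210903e-11   # assumed scattering length: 100 Bohr radii
      n = 1e19                      # assumed atomic density, atoms per m^3

      g = 4 * np.pi * hbar**2 * a / m   # interaction strength
      c = np.sqrt(g * n / m)            # propagation speed of the excitations
      print(f"sound speed = {c * 1e3:.2f} mm/s")
      # Tuning the interaction (via the magnetic field the article mentions)
      # or the density changes c -- the handle used to mimic expanding space.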
    Using the quantum field simulator, cosmic phenomena, such as the production of particles based on the expansion of space, and even the spacetime curvature can be made measurable. “Cosmological problems normally take place on unimaginably large scales. To be able to specifically study them in the lab opens up entirely new possibilities in research by enabling us to experimentally test new theoretical models,” states Celia Viermann, the primary author of the “Nature” article. “Studying the interplay of curved spacetime and quantum mechanical states in the lab will occupy us for some time to come,” says Markus Oberthaler, whose research group is also part of the STRUCTURES Cluster of Excellence at Ruperto Carola.
    The work was conducted as part of Collaborative Research Centre 1225, “Isolated Quantum Systems and Universality in Extreme Conditions” (ISOQUANT), of Heidelberg University.
    Story Source:
    Materials provided by Heidelberg University. Note: Content may be edited for style and length.

  • How AI found the words to kill cancer cells

    Using new machine learning techniques, researchers at UC San Francisco (UCSF), in collaboration with a team at IBM Research, have developed a virtual molecular library of thousands of “command sentences” for cells, based on combinations of “words” that guided engineered immune cells to seek out and tirelessly kill cancer cells.
    The work, published online Dec. 8, 2022, in Science, represents the first time such sophisticated computational approaches have been applied to a field that, until now, has progressed largely through ad hoc tinkering and engineering cells with existing, rather than synthesized, molecules.
    The advance allows scientists to predict which elements — natural or synthesized — they should include in a cell to give it the precise behaviors required to respond effectively to complex diseases.
    “This is a vital shift for the field,” said Wendell Lim, PhD, the Byers Distinguished Professor of Cellular and Molecular Pharmacology, who directs the UCSF Cell Design Institute and led the study. “Only by having that power of prediction can we get to a place where we can rapidly design new cellular therapies that carry out the desired activities.”
    Meet the Molecular Words That Make Cellular Command Sentences
    Much of therapeutic cell engineering involves choosing or creating receptors that, when added to the cell, will enable it to carry out a new function. Receptors are molecules that bridge the cell membrane to sense the outside environment and provide the cell with instructions on how to respond to environmental conditions.
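    As a loose, hypothetical illustration of the combinatorial idea behind those “command sentences” (the motif names and sentence length below are invented; the study’s actual library and scoring model are not described here):

      from itertools import product

      # Loose illustration of enumerating a combinatorial library of receptor
      # "command sentences" from signaling-motif "words". The motif names and
      # three-word sentence length are hypothetical, not the study's library.
      motifs = ["ITAM", "ITIM", "PRS", "YXXM"]   # hypothetical "words"
      library = ["-".join(words) for words in product(motifs, repeat=3)]
      print(f"{len(library)} candidate sentences, e.g. {library[:3]}")
      # A machine-learning model trained on measured cell behaviors would
      # then score which sentences drive sustained tumor-cell killing.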

  • Finding simplicity within complexity

    Picture a tall, stately grandfather clock, its long pendulum swinging back and forth, over and again, keeping rhythm with the time. Scientists can describe that motion with an equation, or dynamical model, and though there are seemingly hundreds of factors contributing to the sway (the weight of the clock, the material of the pendulum, ad infinitum), only one variable is necessary to describe the motion of the pendulum and translate it into math: the angle of the swing. How long it took scientists and mathematicians to discover that is unknown. It could have taken years to test each variable in the equation to determine the single one that governs the sway.
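    To make the example concrete: the pendulum’s motion is governed by the single angle θ through the equation θ'' = -(g/L)·sin θ. A minimal simulation sketch with illustrative parameters:

      import numpy as np
      from scipy.integrate import solve_ivp

      # The pendulum's configuration reduces to one variable, the swing angle
      # theta, obeying theta'' = -(g / L) * sin(theta). (The solver also
      # carries the angular velocity omega as internal state.)
      g_over_L = 9.81 / 1.0   # gravity / pendulum length (1 m), illustrative

      def pendulum(t, y):
          theta, omega = y
          return [omega, -g_over_L * np.sin(theta)]

      sol = solve_ivp(pendulum, (0.0, 10.0), [0.3, 0.0])  # released at 0.3 rad
      print(f"angle after 10 s: {sol.y[0, -1]:+.3f} rad")
      # Clock weight, pendulum material, and so on drop out entirely: the
      # angle alone captures the dynamics, which is the point of the example.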
    Now a University of Houston researcher is reporting a method to describe these kinds of complex systems with the smallest possible number of variables, sometimes reducing millions of candidates to a handful, and on rare occasions to just one. It is an advance that can speed up science through its efficiency and its ability to understand and predict the behavior of natural systems, and it has implications for accelerating an array of simulation-driven activities, from weather forecasting to aircraft production.
    “In the example of the grandfather clock, I can take a video of the pendulum swinging back and forth and from that video, automatically discover what is the right variable. Accurate models of system dynamics enable deeper understanding of these systems, as well as the ability to predict their future behavior,” reports Daniel Floryan, Kalsi Assistant Professor of Mechanical Engineering, in the journal Nature Machine Intelligence.
    To begin building the compact-yet-accurate models, one principle is fundamental: For every action, even those seemingly complex and random, there exists an underlying pattern that enables a compact representation of the system.
    “Our method finds the very most compact description that is mathematically possible, and that’s what differentiates our method from others,” said Floryan.
    Using ideas from machine learning and smooth manifold theory, the method makes simulations extremely fast and inexpensive.
    In one application, Floryan simulated a reaction between two chemicals. The reaction produced complex behavior when they met: a repetitive rhythmic spiraling that required more than 20,000 variables to simulate. Floryan fed video of the reaction into his algorithm, which discovered that just one variable was needed to describe the action: the time the spiral took to return to where it started, like the second hand on a watch.
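    One generic way to recover such a single phase-like variable from high-dimensional oscillatory data (a common baseline, not Floryan’s actual algorithm, which the story does not detail) is to project the data onto its two leading principal components and read off the angle in that plane:

      import numpy as np

      # Generic sketch: recover one phase variable from high-dimensional
      # oscillatory data via PCA plus the angle in the leading 2D plane.
      # NOT the paper's method -- a simple, common baseline for illustration.
      rng = np.random.default_rng(0)
      t = np.linspace(0, 20, 2000)
      # Synthetic stand-in for video of a spiraling reaction: 500 "pixels",
      # all driven by one underlying oscillation plus measurement noise.
      modes = rng.normal(size=(2, 500))
      frames = np.outer(np.cos(t), modes[0]) + np.outer(np.sin(t), modes[1])
      frames += 0.05 * rng.normal(size=frames.shape)

      frames -= frames.mean(axis=0)                  # center the data
      _, _, vt = np.linalg.svd(frames, full_matrices=False)
      proj = frames @ vt[:2].T                       # two leading components
      phase = np.arctan2(proj[:, 1], proj[:, 0])     # the single variable
      print(f"recovered phase range: {phase.min():.2f} .. {phase.max():.2f} rad")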
    In weather prediction, for example, numerical models are computer simulations of the atmosphere built on complicated physics and fluid-dynamics equations.
    “For weather prediction and climate modeling, if you have something that is much faster you can better model the earth’s climate and better predict what’s going to happen,” said Floryan.
    Story Source:
    Materials provided by University of Houston. Original written by Laurie Fickman. Note: Content may be edited for style and length.

  • Improving the accuracy of markerless gait analysis

    Gait analysis systems measure certain metrics, and the results drive clinical treatment for gait correction. However, detailed gait analysis requires expensive equipment, ample space, markers, and time. Measurements from markerless, video-based gait analysis systems, on the other hand, are less accurate. To improve upon existing systems, researchers have now combined RGB camera-based pose estimation with an inertial measurement unit sensor for gait analysis, significantly reducing measurement errors.
    In people with gait disabilities (i.e., a pattern of walking — or gait — that is not normal), assessing gait speed, stride length, and joint kinematics are essential. Measurement of gait parameters over a period of time is critical to determine treatment effects, predict fall risk in elderly individuals, and plan physiotherapy treatments. In this regard, optoelectronic marker-based three-dimensional motion capture (3DMC) — a gait analysis tool — can accurately measure gait metrics. However, economic and time constraints, coupled with requirements for a large space, extensive equipment, and technical expertise make 3DMC impractical in clinical settings. Alternate methods include inertial measurement unit (IMU)-based motion capture systems and RGB camera-based methods, which can measure gait without reflective markers when equipped with depth sensors. But these have their own drawbacks. IMU-based systems require many IMU sensors to be attached to human body segments, reducing their feasibility, and compared to optoelectronic 3DMC systems, RGB camera-based methods are less accurate in their measurement of kinematic parameters such as lower limb joint angles.
    Hence, improved gait analysis systems are needed.
    To this end, a team of researchers comprising Dr. Masataka Yamamoto, Mr. Yuto Ishige, and Professor Hiroshi Takemura from the Faculty of Science and Technology, Tokyo University of Science, and Professor Koji Shimatani from the Prefectural University of Hiroshima, Japan, have developed a simple sensor-fusion method for accurate gait analysis. “We combined information from a small IMU sensor attached to the shoe with estimated information on the bones and joints of the lower limb, obtained by capturing the gait from a single RGB camera,” explains Dr. Yamamoto, the lead author of the study. In a recent article published in Volume 12 of Scientific Reports on October 21, 2022, the researchers detail this method and the results they achieved with it.
    The team used single RGB camera-based pose estimation by OpenPose (OP) and an IMU sensor on the foot to measure ankle joint kinematics under various gait conditions for sixteen healthy adult men between 21 and 23 years of age who had no limitation of physical activity. The participants’ gait parameters and lower limb joint angles during four gait conditions with varying gait speeds and foot progression angles were recorded using OP alone as well as with combined measurements from OP and the IMU, the team’s novel proposed method. Results from both techniques were compared against gait analysis using 3DMC, the current gold standard.
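    The story does not spell out the fusion algorithm itself; one standard way to combine a drift-free but noisy camera-derived joint angle with a fast but drifting gyroscope rate is a complementary filter, sketched below with toy data (hypothetical and illustrative only):

      import numpy as np

      # Hypothetical camera/IMU fusion via a complementary filter. The
      # published method may differ; this only illustrates combining a
      # noisy but drift-free angle (e.g. from OpenPose) with a fast but
      # biased gyroscope rate from a foot-mounted IMU.
      def fuse(camera_angle_deg, gyro_rate_dps, dt=0.01, alpha=0.98):
          fused = np.empty_like(camera_angle_deg)
          fused[0] = camera_angle_deg[0]
          for k in range(1, len(fused)):
              predicted = fused[k - 1] + gyro_rate_dps[k] * dt  # integrate gyro
              # Trust the gyro short-term, the camera long-term.
              fused[k] = alpha * predicted + (1 - alpha) * camera_angle_deg[k]
          return fused

      # Toy data: a sinusoidal "true" ankle angle, noisy camera, biased gyro.
      t = np.arange(0, 2, 0.01)
      true = 20 * np.sin(2 * np.pi * t)
      camera = true + np.random.default_rng(1).normal(0, 3, t.size)
      gyro = np.gradient(true, 0.01) + 0.5              # 0.5 deg/s bias
      print(f"camera-only MAE: {np.abs(camera - true).mean():.2f} deg")
      print(f"fused MAE:       {np.abs(fuse(camera, gyro) - true).mean():.2f} deg")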
    The proposed combination method could measure gait parameters and lower limb joint angles in the sagittal plane (which divides the body into right and left halves). Moreover, the mean absolute errors of peak ankle joint angles calculated by the combination method were significantly lower than those from OP alone in all four gait conditions. This is a significant development in gait analysis. “Our method has the potential to be used not only in medicine and welfare, but also to predict the decline of gait function in healthcare, for training and skill evaluation in gyms and sports facilities, and for accurate projection of human movements onto an avatar by integrating with virtual reality systems,” notes Dr. Yamamoto.
    With further research, this method can be adapted to clinical settings and a larger demographic.
    Story Source:
    Materials provided by Tokyo University of Science. Note: Content may be edited for style and length.

  • Using light to manipulate neuron excitability

    Nearly 20 years ago, scientists developed ways to stimulate or silence neurons by shining light on them. This technique, known as optogenetics, allows researchers to discover the functions of specific neurons and how they communicate with other neurons to form circuits.
    Building on that technique, MIT and Harvard University researchers have now devised a way to achieve longer-term changes in neuron activity. With their new strategy, they can use light exposure to change the electrical capacitance of the neurons’ membranes, which alters their excitability (how strongly or weakly they respond to electrical and physiological signals).
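    Why does capacitance set excitability? The membrane charges toward its firing threshold with time constant tau = R·C, so increasing capacitance slows charging and yields fewer spikes for the same input current. A textbook leaky integrate-and-fire sketch of that idea (illustrative only, not the model used in the study):

      # Textbook leaky integrate-and-fire neuron, used here only to show how
      # membrane capacitance C sets excitability. Not the study's own model;
      # all parameter values are generic, illustrative choices.
      def spike_count(C, I=1.5e-9, R=1e8, v_rest=-0.07, v_thresh=-0.05,
                      dt=1e-4, t_end=0.5):
          v, spikes = v_rest, 0
          for _ in range(int(t_end / dt)):
              # dV/dt = (-(V - V_rest)/R + I) / C : larger C charges slower.
              v += dt * (-(v - v_rest) / R + I) / C
              if v >= v_thresh:   # threshold crossing -> spike, then reset
                  spikes += 1
                  v = v_rest
          return spikes

      for C in (1e-10, 2e-10, 4e-10):   # farads; doubling C roughly halves firing
          print(f"C = {C:.0e} F -> {spike_count(C)} spikes in 0.5 s")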
    Changes in neuron excitability have been linked to many processes in the brain, including learning and aging, and have also been observed in some brain disorders, including Alzheimer’s disease.
    “This new tool is designed to tune neuron excitability up and down in a light-controllable and long-term manner, which will enable scientists to directly establish the causality between the excitability of various neuron types and animal behaviors,” says Xiao Wang, the Thomas D. and Virginia Cabot Assistant Professor of Chemistry at MIT, and a member of the Broad Institute of MIT and Harvard. “Future application of our approach in disease models will tell whether fine-tuning neuron excitability could help reset abnormal brain circuits to normal.”
    Wang and Jia Liu, an assistant professor at Harvard School of Engineering and Applied Sciences, are the senior authors of the paper, which appears today in Science Advances.
    Chanan Sessler, an MIT graduate student in the Department of Chemistry; Yiming Zhou, a postdoc at the Broad Institute; and Wenbo Wang, a graduate student at Harvard, are the lead authors of the paper.

  • Soft robot detects damage, heals itself

    Cornell University engineers have created a soft robot capable of detecting when and where it was damaged — and then healing itself on the spot.
    “Our lab is always trying to make robots more enduring and agile, so they operate longer with more capabilities,” said Rob Shepherd, associate professor of mechanical and aerospace engineering. “If you make robots operate for a long time, they’re going to accumulate damage. And so how can we allow them to repair or deal with that damage?”
    Shepherd’s Organic Robotics Lab has developed stretchable fiber-optic sensors for use in soft robots and related components — from skin to wearable technology.
    For self-healing to work, Shepherd says the key first step is that the robot must be able to identify that there is, in fact, something that needs to be fixed.
    To do this, researchers have pioneered a technique using fiber-optic sensors coupled with LED lights capable of detecting minute changes on the surface of the robot.
    These sensors are combined with a polyurethane urea elastomer that incorporates hydrogen bonds, for rapid healing, and disulfide exchanges, for strength.
    The resulting SHeaLDS — self-healing light guides for dynamic sensing — provides a damage-resistant soft robot that can self-heal from cuts at room temperature without any external intervention.
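    A hypothetical sketch of the detection step such a sensorized skin could use: compare each light guide’s intensity with its undamaged baseline and flag channels whose signal drops past a threshold (the story does not describe SHeaLDS’s actual signal processing):

      import numpy as np

      # Hypothetical damage-detection logic for a set of optical light guides.
      # Illustrative only -- the actual SHeaLDS signal processing is not
      # described in the story. A cut scatters light out of a guide, so a
      # large intensity drop relative to baseline marks the damaged channel.
      baseline = np.array([1.00, 0.98, 1.02, 0.99])   # calibrated intensities

      def find_damage(reading, threshold=0.15):
          drop = (baseline - reading) / baseline      # fractional intensity loss
          return np.flatnonzero(drop > threshold)     # indices of damaged guides

      reading = np.array([0.99, 0.55, 1.01, 0.97])    # channel 1 has been cut
      print(f"damaged sensor channels: {find_damage(reading)}")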
    To demonstrate the technology, the researchers installed the SHeaLDS in a soft robot resembling a four-legged starfish and equipped it with feedback control. They then punctured one of its legs six times, after which the robot detected the damage and self-healed each cut in about a minute. The robot could also autonomously adapt its gait based on the damage it sensed.
    While the material is sturdy, it is not indestructible.
    “They have similar properties to human flesh,” Shepherd said. “You don’t heal well from burning, or from things with acid or heat, because that will change the chemical properties. But we can do a good job of healing from cuts.”
    Shepherd plans to integrate SHeaLDS with machine learning algorithms capable of recognizing tactile events to eventually create “a very enduring robot that has a self-healing skin but uses the same skin to feel its environment to be able to do more tasks.”
    Story Source:
    Materials provided by Cornell University. Original written by David Nutt, courtesy of the Cornell Chronicle. Note: Content may be edited for style and length.

  • Coupled computer modeling can help more accurately predict coastal flooding, study demonstrates

    When Hurricane Florence hit the coast of North Carolina as a Category 1 storm in 2018, it set new records for rainfall, creating damaging 500-year flooding events along the Cape Fear River Basin.
    This is exactly the sort of weather event that Z. George Xue of the LSU Department of Oceanography and Coastal Sciences, or DOCS, believes his novel coupled computer modeling approach can predict more accurately, thereby assisting communities with disaster planning. Xue said that, as far as he knows, his lab is the only one using this technique.
    Xue, along with DOCS graduate student Daoyang Bao and the rest of their research team, recently published a study using the events of Hurricane Florence to demonstrate the validity of this new approach in the Journal of Advances in Modeling Earth Systems.
    Improving the accuracy of flooding predictions can help in hurricane preparedness, said John C. Warner of the US Geological Survey, another collaborator on the study. “More accurate forecasts can help coastal managers to better alert communities of impending storms.”
    Xue said this breakthrough coupled modeling technique could provide long-term benefits to communities as well.
    “Our model can identify which region is most vulnerable in terms of compound flooding and provide not only short-term forecasts but also scenario analysis regarding future climate and sea level conditions,” he said.