More stories

  • Terahertz-to-visible light conversion for future telecommunications

    A study carried out by a research team from the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), the Catalan Institute of Nanoscience and Nanotechnology (ICN2), the University of Exeter Centre for Graphene Science, and TU Eindhoven demonstrates that graphene-based materials can be used to efficiently convert high-frequency signals into visible light, and that this mechanism is ultrafast and tunable. The team presents its findings in Nano Letters. These outcomes open the path to exciting applications in near-future information and communication technologies.
    The ability to convert signals from one frequency regime to another is key to various technologies, in particular in telecommunications, where, for example, data processed by electronic devices are often transmitted as optical signals through glass fibers. To enable significantly higher data transmission rates, future 6G wireless communication systems will need to extend the carrier frequency above 100 gigahertz, up to the terahertz range. Terahertz waves are a part of the electromagnetic spectrum that lies between microwaves and infrared light. However, terahertz waves can only be used to transport data wirelessly over very limited distances. “Therefore, a fast and controllable mechanism to convert terahertz waves into visible or infrared light, which can be transported via optical fibers, will be required. Imaging and sensing technologies could also benefit from such a mechanism,” says Dr. Igor Ilyakov of the Institute of Radiation Physics at HZDR.
    What is missing so far is a material that is capable of upconverting photon energies by a factor of about 1000. The team has only recently identified the strong nonlinear response of so-called Dirac quantum materials, e.g. graphene and topological insulators, to terahertz light pulses. “This manifests in the highly efficient generation of high harmonics, that is, light with a multiple of the original laser frequency. These harmonics are still within the terahertz range; however, there have also been first observations of visible light emission from graphene upon infrared and terahertz excitation,” recalls Dr. Sergey Kovalev of the Institute of Radiation Physics at HZDR. “Until now, this effect has been extremely inefficient, and the underlying physical mechanism unknown.”
    The mechanism behind the conversion
    The new results provide a physical explanation for this mechanism and show how the light emission can be strongly enhanced by using highly doped graphene or a grating-graphene metamaterial — a material with a tailored structure characterized by special optical, electrical or magnetic properties. The team also observed that the conversion occurs very rapidly, on the sub-nanosecond time scale, and that it can be controlled by electrostatic gating.
    “We ascribe the light frequency conversion in graphene to a terahertz-induced thermal radiation mechanism, that is, the charge carriers absorb electromagnetic energy from the incident terahertz field. The absorbed energy rapidly distributes in the material, leading to carrier heating; and finally this leads to emission of photons in the visible spectrum, quite like light emitted by any heated object,” explains Prof. Klaas-Jan Tielrooij of ICN2’s Ultrafast Dynamics in Nanoscale Systems group and Eindhoven University of Technology.
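    To make the quoted thermal-emission picture concrete, the following is a minimal numerical sketch, assuming ideal blackbody behaviour and purely illustrative carrier temperatures (neither taken from the study). Planck's law shows how steeply emission at a visible wavelength grows as the charge carriers heat up.

    ```python
    # Minimal sketch (not the authors' model): Planck's law illustrates why
    # heating charge carriers boosts visible emission so strongly. The
    # temperatures are illustrative assumptions, not values from the study.
    import numpy as np

    H = 6.626e-34   # Planck constant (J s)
    C = 2.998e8     # speed of light (m/s)
    KB = 1.381e-23  # Boltzmann constant (J/K)

    def planck(wavelength_m, temp_k):
        """Blackbody spectral radiance at a given wavelength and temperature."""
        x = H * C / (wavelength_m * KB * temp_k)
        return (2 * H * C**2 / wavelength_m**5) / np.expm1(x)

    green = 550e-9  # a visible (green) wavelength, in meters
    for temp in (300, 1500, 3000):  # room temperature vs. hot carriers
        print(f"T = {temp:4d} K -> radiance at 550 nm: "
              f"{planck(green, temp):.3e} W/(m^2 sr m)")
    ```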
    The tunability and speed of the terahertz-to-visible light conversion achieved in graphene-based materials have great potential for application in information and communication technologies. The underlying ultrafast thermodynamic mechanism could well have an impact on terahertz-to-telecom interconnects, as well as on any technology that requires ultrafast frequency conversion of signals.

  • High-quality child care contributes to later success in science, math

    Children who receive high-quality child care as babies, toddlers and preschoolers do better in science, technology, engineering and math through high school, and that link is stronger among children from low-income backgrounds, according to research published by the American Psychological Association.
    “Our results suggest that caregiving quality in early childhood can build a strong foundation for a trajectory of STEM success,” said study author Andres S. Bustamante, PhD, of the University of California, Irvine. “Investing in quality child care and early childhood education could help remedy the underrepresentation of racially and ethnically diverse populations in STEM fields.”
    The research was published in the journal Developmental Psychology.
    Many studies have demonstrated that higher quality caregiving in early childhood is associated with better school readiness for young children from low-income families. But not as many have looked at how the effects of early child care extend into high school, and even fewer have focused specifically on STEM subjects, according to Bustamante.
    To investigate those questions, Bustamante and his colleagues examined data from 979 families who participated in the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development, from the time of the child’s birth in 1991 until 2006.
    As part of the study, trained observers visited the day cares and preschools of all the children who were enrolled for 10 or more hours per week. The observers visited when the children were 6, 15, 24, 36 and 54 months old, and rated two aspects of the child care: the extent to which the caregivers provided a warm and supportive environment and responded to children’s interests and emotions, and the amount of cognitive stimulation they provided through using rich language, asking questions to probe the children’s thinking, and providing feedback to deepen the children’s understanding of concepts.
    The researchers then looked at how the students performed in STEM subjects in elementary and high school. To measure STEM success, they examined the children’s scores on the math and reasoning portions of a standardized test in grades three to five. To measure high school achievement, the researchers looked at standardized test scores and the students’ most advanced science course completed, the most advanced math course completed, GPA in science courses and GPA in math courses.
    Overall, they found that both aspects of caregiving quality (more cognitive stimulation and better caregiver sensitivity-responsivity) predicted greater STEM achievement in late elementary school (third, fourth and fifth grade), which in turn predicted greater STEM achievement in high school at age 15. Sensitive and responsive caregiving in early childhood was a stronger predictor of high school STEM performance for children from low-income families compared with children from higher income families.
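    The two-step path logic described above can be sketched with ordinary least squares; this is a rough illustration with synthetic data and hypothetical variable names, not a reproduction of the study's longitudinal model.

    ```python
    # Hypothetical sketch of the path logic: early caregiving quality ->
    # elementary STEM -> high school STEM. Synthetic data; not the study's model.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 979  # matches the study's sample size, but the data here are invented
    care_quality = rng.normal(size=n)                    # observer ratings
    elem_stem = 0.4 * care_quality + rng.normal(size=n)  # grades 3-5 scores
    hs_stem = 0.5 * elem_stem + rng.normal(size=n)       # age-15 outcomes

    # Path a: caregiving quality predicts elementary school STEM achievement.
    path_a = sm.OLS(elem_stem, sm.add_constant(care_quality)).fit()
    # Path b: elementary STEM predicts high school STEM, controlling for quality.
    X = sm.add_constant(np.column_stack([elem_stem, care_quality]))
    path_b = sm.OLS(hs_stem, X).fit()

    print(path_a.params)  # effect of quality on elementary STEM
    print(path_b.params)  # elementary STEM carries the effect into high school
    ```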
    “Our hypothesis was that cognitive stimulation would be more strongly related to STEM outcomes because those kinds of interactions provide the foundation for exploration and inquiry, which are key in STEM learning,” Bustamante said. “However, what we saw was that the caregiver sensitivity and responsiveness was just as predictive of later STEM outcomes, highlighting the importance of children’s social emotional development and settings that support cognitive and social emotional skills.”
    Overall, Bustamante said, research and theory suggest that high-quality early care practices support a strong foundation for science learning. “Together, these results highlight caregiver cognitive stimulation and sensitivity and responsiveness in early childhood as an area for investment to strengthen the STEM pipeline, particularly for children from low-income households.”

  • Video games spark exciting new frontier in neuroscience

    University of Queensland researchers have used an algorithm from a video game to gain insights into the behaviour of molecules within live brain cells.
    Dr Tristan Wallis and Professor Frederic Meunier from UQ’s Queensland Brain Institute came up with the idea while in lockdown during the COVID-19 pandemic.
    “Combat video games use a very fast algorithm to track the trajectory of bullets, to ensure the correct target is hit on the battlefield at the right time,” Dr Wallis said.
    “The technology has been optimised to be highly accurate, so the experience feels as realistic as possible.
    “We thought a similar algorithm could be used to analyse tracked molecules moving within a brain cell.”
    Until now, technology has only been able to detect and analyse molecules in space, not how they behave in both space and time.

    “Scientists use super-resolution microscopy to look into live brain cells and record how tiny molecules within them cluster to perform specific functions,” Dr Wallis said.
    “Individual proteins bounce and move in a seemingly chaotic environment, but when you observe these molecules in space and time, you start to see order within the chaos.
    “It was an exciting idea — and it worked.”
    Dr Wallis used coding tools to build an algorithm that is now used by several labs to gather rich data about brain cell activity.
    “Rather than tracking bullets to the bad guys in video games, we applied the algorithm to observe molecules clustering together — which ones, when, where, for how long and how often,” Dr Wallis said.

    “This gives us new information about how molecules perform critical functions within brain cells and how these functions can be disrupted during ageing and disease.”
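    As a rough illustration of the underlying idea (not the published algorithm), molecule detections can be clustered jointly in space and time, for example with DBSCAN over (x, y, t) coordinates; a persistent cluster then stands out from the random background.

    ```python
    # Rough illustration, not the published UQ algorithm: treat each molecule
    # detection as a point in (x, y, t) and let DBSCAN find groups that stay
    # close in both space and time. All numbers here are synthetic.
    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(1)
    cluster = rng.normal(loc=[1.0, 1.0, 5.0], scale=[0.05, 0.05, 0.5], size=(50, 3))
    noise = rng.uniform(low=0.0, high=10.0, size=(100, 3))
    points = np.vstack([cluster, noise])  # columns: x (um), y (um), t (s)

    # Scale time relative to space so a single epsilon covers both; the
    # scaling factor and eps are illustrative choices.
    scaled = points * np.array([1.0, 1.0, 0.1])
    labels = DBSCAN(eps=0.15, min_samples=10).fit_predict(scaled)

    for lab in sorted(set(labels) - {-1}):
        members = points[labels == lab]
        print(f"cluster {lab}: {len(members)} detections, "
              f"t = {members[:, 2].min():.1f}-{members[:, 2].max():.1f} s")
    ```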
    Professor Meunier said the potential impact of the approach was exponential.
    “Our team is already using the technology to gather valuable evidence about proteins such as Syntaxin-1A, essential for communication within brain cells,” Professor Meunier said.
    “Other researchers are also applying it to different research questions.
    “And we are collaborating with UQ mathematicians and statisticians to expand how we use this technology to accelerate scientific discoveries.”
    Professor Meunier said it was gratifying to see the effect of a simple idea.
    “We used our creativity to solve a research challenge by merging two unrelated high-tech worlds, video games and super-resolution microscopy,” he said.
    “It has brought us to a new frontier in neuroscience.”
    The research was published in Nature Communications.

  • Metaverse could put a dent in global warming

    For many technology enthusiasts, the metaverse has the potential to transform almost every facet of human life, from work to education to entertainment. Now, new Cornell University research shows it could have environmental benefits, too.
    Researchers find the metaverse could lower global surface temperature by up to 0.02 degrees Celsius before the end of the century.
    The team’s paper, “Growing Metaverse Sector Can Reduce Greenhouse Gas Emissions by 10 Gt CO2e in the United States by 2050,” was published June 14 in Energy & Environmental Science.
    They used AI-based modeling to analyze data from key sectors — technology, energy, environment and business — to anticipate the growth of metaverse usage and the impact of its most promising applications: remote work, virtual traveling, distance learning, gaming and non-fungible tokens (NFTs).
    The researchers projected metaverse expansion through 2050 along three different trajectories — slow, nominal and fast — and they looked to previous technologies, such as television, the internet and the iPhone, for insight into how quickly that adoption might occur. They also factored in the amount of energy that increasing usage would consume. The modeling suggested that within 30 years, the technology would be adopted by more than 90% of the population.
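    As a rough sketch of that kind of projection, a logistic adoption curve with made-up rate parameters (not the paper's AI-based model) reproduces the qualitative slow, nominal and fast trajectories:

    ```python
    # Toy sketch of technology-adoption projection. The midpoints, rates and
    # ceiling below are invented for illustration, not taken from the paper.
    import numpy as np

    def adoption(year, midpoint, rate, ceiling=0.95):
        """Logistic share of the population that has adopted by a given year."""
        return ceiling / (1.0 + np.exp(-rate * (year - midpoint)))

    years = np.arange(2020, 2051, 10)
    for name, midpoint, rate in [("slow", 2040, 0.15),
                                 ("nominal", 2035, 0.25),
                                 ("fast", 2030, 0.40)]:
        row = ", ".join(f"{y}: {adoption(y, midpoint, rate):.0%}" for y in years)
        print(f"{name:>7}: {row}")
    ```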
    “One thing that did surprise us is that this metaverse is going to grow much quicker than what we expected,” said Fengqi You, professor in energy systems engineering and the paper’s senior author. “Look at earlier technologies — TV, for instance. It took decades to be eventually adopted by everyone. Now we are really in an age of technology explosion. Think of our smartphones. They grew very fast.”
    Currently, two of the biggest industry drivers of metaverse development are Meta and Microsoft, both of which contributed to the study. Meta has been focusing on individual experiences, such as gaming, while Microsoft specializes in business solutions, including remote conferencing and distance learning.

    Limiting business travel would generate the largest environmental benefit, according to You.
    “Think about the decarbonization of our transportation sector,” he said. “Electric vehicles work, but you can’t drive a car to London or Tokyo. Do I really have to fly to Singapore for a conference tomorrow? That will be an interesting decision-making point for some stakeholders to consider as we move forward with these technologies with human-machine interface in a 3D virtual world.”
    The paper notes that by 2050 the metaverse industry could potentially lower greenhouse gas emissions by 10 gigatons; lower atmospheric carbon dioxide concentration by 4.0 parts per million; decrease effective radiative forcing by 0.035 watts per square meter; and lower total domestic energy consumption by 92 EJ, a reduction that surpasses the annual nationwide energy consumption of all end-use sectors in previous years.
    These findings could help policymakers understand how metaverse industry growth can accelerate progress towards achieving net-zero emissions targets and spur more flexible decarbonization strategies. Metaverse-based remote working, distance learning and virtual tourism could be promoted to improve air quality. In addition to alleviating air pollutant emissions, the reduction of transportation and commercial energy usage could help transform the way energy is distributed, with more energy supply going towards the residential sector.
    “This mechanism is going to help, but in the end, it is going to help lower the global surface temperature by up to 0.02 degrees,” You said. “There are so many sectors in this economy. You cannot count on the metaverse to do everything. But it could do a little bit if we leverage it in a reasonable way.”
    The research was supported by the National Science Foundation.

  • AI helps show how the brain’s fluids flow

    A new artificial intelligence-based technique for measuring fluid flow around the brain’s blood vessels could have big implications for developing treatments for diseases such as Alzheimer’s.
    The perivascular spaces that surround cerebral blood vessels transport water-like fluids around the brain and help sweep away waste. Alterations in the fluid flow are linked to neurological conditions, including Alzheimer’s, small vessel disease, strokes, and traumatic brain injuries but are difficult to measure in vivo.
    A multidisciplinary team of mechanical engineers, neuroscientists, and computer scientists led by University of Rochester Associate Professor Douglas Kelley developed novel AI velocimetry measurements to accurately calculate brain fluid flow. The results are outlined in a study published in Proceedings of the National Academy of Sciences.
    “In this study, we combined some measurements from inside the animal models with a novel AI technique that allowed us to effectively measure things that nobody’s ever been able to measure before,” says Kelley, a faculty member in Rochester’s Department of Mechanical Engineering.
    The work builds upon years of experiments led by study coauthor Maiken Nedergaard, the codirector of Rochester’s Center for Translational Neuromedicine. The group has previously been able to conduct two-dimensional studies on the fluid flow in perivascular spaces by injecting tiny particles into the fluid and measuring their position and velocity over time. But scientists needed more complex measurements to understand the full intricacy of the system — and exploring such a vital, fluid system is a challenge.
    To address that challenge, the team collaborated with George Karniadakis from Brown University to leverage artificial intelligence. They integrated the existing 2D data with physics-informed neural networks to create unprecedented high-resolution looks at the system.
    “This is a way to reveal pressures, forces, and the three-dimensional flow rate with much more accuracy than we can otherwise do,” says Kelley. “The pressure is important because nobody knows for sure quite what pumping mechanism drives all these flows around the brain yet. This is a new field.”
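    The core idea of physics-informed neural networks can be sketched briefly; the following is a minimal illustration, assuming only an incompressibility constraint and random stand-in data, not the study's code, which enforces the full flow equations and recovers pressure fields as well.

    ```python
    # Minimal PINN sketch: fit sparse particle-tracking data while penalizing
    # violations of incompressibility (du/dx + dv/dy = 0). Not the study's
    # code; the data and the loss weight here are random placeholders.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(3, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(),
                          nn.Linear(64, 2))  # (x, y, t) -> (u, v)

    def physics_residual(xyt):
        xyt = xyt.clone().requires_grad_(True)
        uv = model(xyt)
        du = torch.autograd.grad(uv[:, 0].sum(), xyt, create_graph=True)[0]
        dv = torch.autograd.grad(uv[:, 1].sum(), xyt, create_graph=True)[0]
        divergence = du[:, 0] + dv[:, 1]  # du/dx + dv/dy
        return (divergence ** 2).mean()

    xyt_data = torch.rand(256, 3)       # hypothetical particle positions/times
    uv_data = torch.rand(256, 2)        # hypothetical measured velocities
    collocation = torch.rand(1024, 3)   # points where physics is enforced

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(200):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(xyt_data), uv_data) \
               + 0.1 * physics_residual(collocation)
        loss.backward()
        optimizer.step()
    ```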
    The scientists conducted the research with support from the Collaborative Research in Computational Neuroscience program, the National Institutes of Health Brain Initiative, and the Army Research Office’s Multidisciplinary University Research Initiatives program.

  • Metamaterials with built-in frustration have mechanical memory

    Researchers from the UvA Institute of Physics and ENS de Lyon have discovered how to design materials that necessarily have a point or line where the material doesn’t deform under stress, and that even remember how they have been poked or squeezed in the past. These results could be used in robotics and mechanical computers, while similar design principles could be used in quantum computers.
    The outcome is a breakthrough in the field of metamaterials: designer materials whose responses are determined by their structure rather than their chemical composition. To construct a metamaterial with mechanical memory, physicists Xiaofei Guo, Marcelo Guzmán, David Carpentier, Denis Bartolo and Corentin Coulais realised that its design needs to be ‘frustrated’, and that this frustration corresponds to a new type of order, which they call non-orientable order.
    Physics with a twist
    A simple example of a non-orientable object is a Möbius strip, made by taking a strip of material, adding half a twist to it and then gluing its ends together. You can try this at home with a strip of paper. Following the surface of a Möbius strip with your finger, you’ll find that when you get back to your starting point, your finger will be on the other side of the paper.
    A Möbius strip is non-orientable because there is no way to label the two sides of the strip in a consistent manner; the twist makes the entire surface one and the same. This is in contrast to a simple cylinder (a strip without any twists whose ends are glued together), which has a distinct inner and outer surface.
    Guo and her colleagues realised that this non-orientability strongly affects how an object or metamaterial responds to being pushed or squeezed. If you place a simple cylinder and a Möbius strip on a flat surface and press down on them from above, you’ll find that the sides of the cylinder will all bulge out (or in), while the sides of the Möbius strip cannot do the same. Instead, the non-orientability of the latter ensures that there is always a point along the strip where it does not deform under pressure.

    Frustration is not always a bad thing
    Excitingly, this behaviour extends far beyond Möbius strips. ‘We discovered that the behaviour of non-orientable objects such as Möbius strips allows us to describe any material that is globally frustrated. These materials naturally want to be ordered, but something in their structure forbids the order to span the whole system and forces the ordered pattern to vanish at one point or line in space. There is no way to get rid of that vanishing point without cutting the structure, so it has to be there no matter what,’ explains Coulais, who leads the Machine Materials Laboratory at the University of Amsterdam.
    The research team designed and 3D-printed their own mechanical metamaterial structures which exhibit the same frustrated and non-orientable behaviour as Möbius strips. Their designs are based on rings of squares connected by hinges at their corners. When these rings are squeezed, neighbouring squares will rotate in opposite directions so that their edges move closer together. The opposite rotation of neighbours makes the system’s response analogous to the anti-ferromagnetic ordering that occurs in certain magnetic materials.
    Rings composed of an odd number of squares are frustrated, because there is no way for all neighbouring squares to rotate in opposite directions. Squeezed odd-numbered rings therefore exhibit non-orientable order, in which the rotation angle at one point along the ring must go to zero.
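    A toy calculation (not the paper's elasticity model) makes this parity effect concrete: encode the preference for opposite neighbouring rotations as an energy on a closed ring and compare even and odd rings.

    ```python
    # Toy model of the frustrated rings: neighbouring squares prefer opposite
    # rotations, encoded by the energy E = sum_i (theta_i + theta_{i+1})^2.
    # Not the paper's calculation; it only illustrates the even/odd contrast.
    import numpy as np

    def softest_mode(n):
        # E = theta^T M theta with M = 2I + (ring adjacency matrix)
        m = 2 * np.eye(n)
        for i in range(n):
            m[i, (i + 1) % n] += 1
            m[(i + 1) % n, i] += 1
        eigenvalues, eigenvectors = np.linalg.eigh(m)
        return eigenvalues[0], eigenvectors[:, 0]

    for n in (8, 9):
        energy, mode = softest_mode(n)
        print(f"N={n}: minimum energy {energy:.3f}, smallest |rotation| "
              f"{np.min(np.abs(mode)):.2f}, largest {np.max(np.abs(mode)):.2f}")
    # The even ring reaches zero energy with a perfectly alternating pattern;
    # the odd ring cannot, and its softest mode suppresses the rotation at one
    # point on the ring, mirroring the enforced vanishing point in the text.
    ```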
    Because this behaviour is a feature of the overall shape of the material, it is a robust topological property. By connecting multiple metarings together, it is even possible to emulate the mechanics of higher-dimensional topological structures such as the Klein bottle.

    Mechanical memory
    Having an enforced point or line of zero deformation is key to endowing materials with mechanical memory. Instead of squeezing a metamaterial ring from all sides, you can press the ring at distinct points; the order in which you press different points then determines where the zero-deformation point or line ends up.
    This is a form of storing information. It can even be used to execute certain types of logic gates, the basis of any computer algorithm. A simple metamaterial ring can thus function as a mechanical computer.
    Beyond mechanics, the results of the study suggest that non-orientability could be a robust design principle for metamaterials that can effectively store information across scales, in fields as diverse as colloidal science, photonics, magnetism, and atomic physics. It could even be useful for new types of quantum computers.
    Coulais concludes: ‘Next, we want to exploit the robustness of the vanishing deformations for robotics. We believe the vanishing deformations could be used to create robotic arms and wheels with predictable bending and locomotion mechanisms.’

  • New technique in error-prone quantum computing makes classical computers sweat

    Despite steady improvements in quantum computers, they’re still noisy and error prone, which leads to questionable or wrong answers. Scientists predict that they won’t truly outcompete today’s “classical” supercomputers for at least five or 10 years, until researchers can adequately correct the errors that bedevil entangled quantum bits, or qubits.
    But a new study shows that, even lacking good error correction, there are ways to mitigate errors that could make quantum computers useful today.
    Researchers at IBM Quantum in New York and their collaborators at the University of California, Berkeley, and Lawrence Berkeley National Laboratory report today (June 14) in the journal Nature that they pitted a 127-qubit quantum computer against a state-of-the-art supercomputer and, for at least one type of calculation, bested the supercomputer.
    The calculation wasn’t chosen because it was difficult for classical computers, the researchers say, but because it’s similar to ones that physicists make all the time. Crucially, the calculation could be made increasingly complex in order to test whether today’s noisy, error-prone quantum computers can produce accurate results for certain types of common calculations.
    The fact that the quantum computer produced the verifiably correct solution as the calculation became more complex, while the supercomputer algorithm produced an incorrect answer, provides hope that quantum computing algorithms with error mitigation, instead of the more difficult error correction, could tackle cutting-edge physics problems, such as understanding the quantum properties of superconductors and novel electronic materials.
    “We’re entering the regime where the quantum computer might be able to do things that current algorithms on classical computers cannot do,” said UC Berkeley graduate student and study co-author Sajant Anand.

    “We can start to think of quantum computers as a tool for studying problems that we wouldn’t be able to study otherwise,” added Sarah Sheldon, senior manager for Quantum Theory and Capabilities at IBM Quantum.
    Conversely, the quantum computer’s trouncing of the classical computer could also spark new ideas to improve the quantum algorithms now used on classical computers, according to co-author Michael Zaletel, UC Berkeley associate professor of physics and holder of the Thomas and Alison Schneider Chair in Physics.
    “Going into it, I was pretty sure that the classical method would do better than the quantum one,” he said. “So, I had mixed emotions when IBM’s zero-noise extrapolated version did better than the classical method. But thinking about how the quantum system is working might actually help us figure out the right classical way to approach the problem. While the quantum computer did something that the standard classical algorithm couldn’t, we think it’s an inspiration for making the classical algorithm better so that the classical computer performs just as well as the quantum computer in the future.”
    Boost the noise to suppress the noise
    One key to the seeming advantage of IBM’s quantum computer is quantum error mitigation, a novel technique for dealing with the noise that accompanies a quantum computation. Paradoxically, IBM researchers controllably increased the noise in their quantum circuit to get even noisier, less accurate answers and then extrapolated backward to estimate the answer the computer would have gotten if there were no noise. This relies on having a good understanding of the noise that affects quantum circuits and predicting how it affects the output.
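    The logic can be shown schematically in a few lines, assuming a made-up exponential decay of the measured signal with noise; IBM's actual protocol amplifies noise probabilistically and is considerably more sophisticated.

    ```python
    # Schematic of zero-noise extrapolation (ZNE): measure an observable at
    # several deliberately amplified noise levels, then extrapolate the trend
    # back to zero noise. The decay model and numbers here are invented.
    import numpy as np

    ideal_value = 0.80          # the unknown noiseless expectation value
    noise_scales = np.array([1.0, 1.5, 2.0, 2.5])  # amplification factors

    rng = np.random.default_rng(7)
    measured = ideal_value * np.exp(-0.3 * noise_scales) \
               + rng.normal(scale=0.005, size=noise_scales.size)

    # Fit log(signal) linearly in the noise scale and extrapolate to zero.
    slope, intercept = np.polyfit(noise_scales, np.log(measured), 1)
    estimate = np.exp(intercept)

    print(f"raw value at scale 1: {measured[0]:.3f}")
    print(f"ZNE estimate at zero: {estimate:.3f} (true value: {ideal_value})")
    ```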

    The problem of noise comes about because IBM’s qubits are sensitive superconducting circuits that represent the zeros and ones of a binary computation. When the qubits are entangled for a calculation, unavoidable annoyances, such as heat and vibration, can alter the entanglement, introducing errors. The greater the entanglement, the worse the effects of noise.
    In addition, computations that act on one set of qubits can introduce random errors in other, uninvolved qubits. Additional computations then compound these errors. Scientists hope to use extra qubits to monitor such errors so they can be corrected — so-called fault-tolerant error correction. But achieving scalable fault-tolerance is a huge engineering challenge, and whether it will work in practice for ever greater numbers of qubits remains to be shown, Zaletel said.
    Instead, IBM engineers came up with an error mitigation strategy they called zero noise extrapolation (ZNE), which uses probabilistic methods to controllably increase the noise on the quantum device. Based on a recommendation from a former intern, IBM researchers approached Anand, postdoctoral researcher Yantao Wu and Zaletel to ask for their help in assessing the accuracy of the results obtained using this error mitigation strategy. Zaletel develops supercomputer algorithms to solve difficult calculations involving quantum systems, such as the electronic interactions in new materials. These algorithms, which employ tensor network simulations, can be directly applied to simulate interacting qubits in a quantum computer.
    Over a period of several weeks, Youngseok Kim and Andrew Eddins at IBM Quantum ran increasingly complex quantum calculations on the advanced IBM Quantum Eagle processor, and then Anand attempted the same calculations using state-of-the-art classical methods on the Cori supercomputer and Lawrencium cluster at Berkeley Lab and the Anvil supercomputer at Purdue University. When Quantum Eagle was rolled out in 2021, it had the highest number of high-quality qubits of any quantum computer, seemingly beyond the ability of classical computers to simulate.
    In fact, exactly simulating all 127 entangled qubits on a classical computer would require an astronomical amount of memory. The quantum state would need to be represented by 2 to the power of 127 separate numbers. That’s roughly a 1 followed by 38 zeros; typical computers can store around 100 billion numbers, 27 orders of magnitude too small. To simplify the problem, Anand, Wu and Zaletel used approximation techniques that allowed them to solve the problem on a classical computer in a reasonable amount of time, and at a reasonable cost. These methods are somewhat like jpeg image compression, in that they get rid of less important information and keep only what’s required to achieve accurate answers within the limits of the memory available.
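    The compression analogy can be made concrete with a small singular-value truncation, the basic move behind tensor-network approximations; the matrix below is a random stand-in for the tensors of a weakly entangled state.

    ```python
    # Illustration of the "jpeg-like" compression behind tensor-network
    # simulations: keep only the largest singular values and discard the rest.
    # The matrix is a random stand-in, built to have decaying singular values.
    import numpy as np

    rng = np.random.default_rng(3)
    u, _ = np.linalg.qr(rng.normal(size=(64, 64)))
    v, _ = np.linalg.qr(rng.normal(size=(64, 64)))
    s = np.exp(-0.5 * np.arange(64.0))  # rapidly decaying singular values
    matrix = (u * s) @ v.T

    for keep in (4, 8, 16):
        uu, ss, vv = np.linalg.svd(matrix)
        truncated = (uu[:, :keep] * ss[:keep]) @ vv[:keep, :]
        error = np.linalg.norm(matrix - truncated) / np.linalg.norm(matrix)
        print(f"keep {keep:2d} of 64 singular values -> relative error {error:.1e}")
    ```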
    Anand confirmed the accuracy of the quantum computer’s results for the less complex calculations, but as the depth of the calculations grew, the results of the quantum computer diverged from those of the classical computer. For certain specific parameters, Anand was able to simplify the problem and calculate exact solutions that verified the quantum calculations over the classical computer calculations. At the largest depths considered, exact solutions were not available, yet the quantum and classical results disagreed.
    The researchers caution that, while they can’t prove that the quantum computer’s final answers for the hardest calculations were correct, Eagle’s successes on the previous runs gave them confidence that they were.
    “The success of the quantum computer wasn’t like a fine-tuned accident. It actually worked for a whole family of circuits it was being applied to,” Zaletel said.
    Friendly competition
    While Zaletel is cautious about predicting whether this error mitigation technique will work for more qubits or calculations of greater depth, the results were nonetheless inspiring, he said.
    “It sort of spurred a feeling of friendly competition,” he said. “I have a sense that we should be able to simulate on a classical computer what they’re doing. But we need to think about it in a clever and better way — the quantum device is in a regime where it suggests we need a different approach.”
    One approach is to simulate the ZNE technique developed by IBM.
    “Now, we’re asking if we can take the same error mitigation concept and apply it to classical tensor network simulations to see if we can get better classical results,” Anand said. “This work gives us the ability to maybe use a quantum computer as a verification tool for the classical computer, which is flipping the script on what’s usually done.”
    Anand and Zaletel’s work was supported by the U.S. Department of Energy under an Early Career Award (DE-SC0022716). Wu’s work was supported by a RIKEN iTHEMS fellowship. Cori is part of the National Energy Research Scientific Computing Center (NERSC), the primary scientific computing facility for the Office of Science in the U.S. Department of Energy.

  • Hybrid AI-powered computer vision combines physics and big data

    Researchers from UCLA and the United States Army Research Laboratory have laid out a new approach to enhance artificial intelligence-powered computer vision technologies by adding physics-based awareness to data-driven techniques.
    Published in Nature Machine Intelligence, the study offered an overview of a hybrid methodology designed to improve how AI-based machines sense, interact with and respond to their environment in real time — as in how autonomous vehicles move and maneuver, or how robots use the improved technology to carry out precision actions.
    Computer vision allows AIs to see and make sense of their surroundings by decoding data and inferring properties of the physical world from images. While such images are formed through the physics of light and mechanics, traditional computer vision techniques have predominantly focused on data-based machine learning to drive performance. Physics-based research has, on a separate track, been developed to explore the various physical principles behind many computer vision challenges.
    It has been a challenge to incorporate an understanding of physics — the laws that govern mass, motion and more — into the development of neural networks, the AI models patterned after the human brain that use billions of nodes to crunch massive image data sets until they gain an understanding of what they “see.” But there are now a few promising lines of research that seek to add elements of physics-awareness into already robust data-driven networks.
    The UCLA study aims to harness the power of both the deep knowledge from data and the real-world know-how of physics to create a hybrid AI with enhanced capabilities.
    “Visual machines — cars, robots, or health instruments that use images to perceive the world — are ultimately doing tasks in our physical world,” said the study’s corresponding author Achuta Kadambi, an assistant professor of electrical and computer engineering at the UCLA Samueli School of Engineering. “Physics-aware forms of inference can enable cars to drive more safely or surgical robots to be more precise.”
    The research team outlined three ways in which physics and data are starting to be combined into computer vision artificial intelligence:
    • Incorporating physics into AI data sets: tag objects with additional information, such as how fast they can move or how much they weigh, similar to characters in video games.
    • Incorporating physics into network architectures: run data through a network filter that codes physical properties into what cameras pick up.
    • Incorporating physics into network loss functions: leverage knowledge built on physics to help AI interpret training data on what it observes (sketched in the example below).
    These three lines of investigation have already yielded encouraging results in improved computer vision. For example, the hybrid approach allows AI to track and predict an object’s motion more precisely and can produce accurate, high-resolution images from scenes obscured by inclement weather.
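    Here is a hedged sketch of that third strategy, using a hypothetical falling-object example rather than anything from the paper: a physics term in the loss penalizes predicted trajectories whose acceleration deviates from gravity.

    ```python
    # Hypothetical example of physics in the loss function (not from the
    # paper): penalize predicted heights of a falling object whose implied
    # acceleration deviates from gravitational acceleration g.
    import numpy as np

    def data_loss(predicted, observed):
        return np.mean((predicted - observed) ** 2)

    def physics_loss(predicted_heights, dt, g=9.81):
        # A second finite difference approximates acceleration; it should be -g.
        accel = np.diff(predicted_heights, n=2) / dt ** 2
        return np.mean((accel + g) ** 2)

    def total_loss(predicted_heights, observed_heights, dt, weight=0.1):
        return data_loss(predicted_heights, observed_heights) \
               + weight * physics_loss(predicted_heights, dt)

    # A trajectory consistent with free fall incurs (near-)zero physics loss.
    dt = 0.1
    t = np.arange(0.0, 1.0, dt)
    true_heights = 10.0 - 0.5 * 9.81 * t ** 2
    print(total_loss(true_heights, true_heights, dt))  # ~0.0
    ```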
    With continued progress in this dual modality approach, deep learning-based AIs may even begin to learn the laws of physics on their own, according to the researchers.
    The other authors on the paper are Army Research Laboratory computer scientist Celso de Melo and UCLA faculty members Stefano Soatto, a professor of computer science; Cho-Jui Hsieh, an associate professor of computer science; and Mani Srivastava, a professor of electrical and computer engineering and of computer science.
    The research was supported in part by a grant from the Army Research Laboratory. Kadambi is supported by grants from the National Science Foundation, the Army Young Investigator Program and the Defense Advanced Research Projects Agency. A co-founder of Vayu Robotics, Kadambi also receives funding from Intrinsic, an Alphabet company. Hsieh, Srivastava and Soatto receive support from Amazon.