More stories

  • Researchers use quantum-inspired approach to increase lidar resolution

    Researchers have shown that a quantum-inspired technique can be used to perform lidar imaging with a much higher depth resolution than is possible with conventional approaches. Lidar, which uses laser pulses to acquire 3D information about a scene or object, is usually best suited for imaging large objects such as topographical features or built structures due to its limited depth resolution.
    “Although lidar can be used to image the overall shape of a person, it typically doesn’t capture finer details such as facial features,” said research team leader Ashley Lyons from the University of Glasgow in the United Kingdom. “By adding extra depth resolution, our approach could capture enough detail to not only see facial features but even someone’s fingerprints.”
    In the Optica Publishing Group journal Optics Express, Lyons and first author Robbie Murray describe the new technique, which they call imaging two-photon interference lidar. They show that it can distinguish reflective surfaces less than 2 millimeters apart and create high-resolution 3D images with micron-scale resolution.
    “This work could lead to much higher resolution 3D imaging than is possible now, which could be useful for facial recognition and tracking applications that involve small features,” said Lyons. “For practical use, conventional lidar could be used to get a rough idea of where an object might be and then the object could be carefully measured with our method.”
    Using classically entangled light
    The new technique uses “quantum inspired” interferometry, which extracts information from the way that two light beams interfere with each other. Entangled pairs of photons — or quantum light — are often used for this type of interferometry, but approaches based on photon entanglement tend to perform poorly in situations with high levels of light loss, which is almost always the case for lidar. To overcome this problem, the researchers applied what they’ve learned from quantum sensing to classical (non-quantum) light.
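    To make the ranging idea concrete, here is a minimal numerical sketch of interferometric depth measurement: scan a reference delay, locate the interference dip, and convert the dip position into a depth. The Gaussian dip shape, the 50% visibility (the classical limit for two-photon interference), and every parameter value are illustrative assumptions, not the paper’s actual signal or numbers.

    ```python
    import numpy as np

    # Minimal sketch of interferometric depth ranging: scan a reference delay,
    # find the interference dip, convert the dip position into a depth.
    # All parameter values below are illustrative assumptions, not the paper's.
    C = 299_792_458.0        # speed of light, m/s
    true_depth = 0.75e-3     # m, surface offset to recover (assumed)
    coherence_time = 100e-15 # s, sets the dip width (assumed)

    delays = np.linspace(-20e-12, 20e-12, 4001)  # scanned reference delay, s
    offset = 2 * true_depth / C                  # round trip adds twice the depth
    signal = 1 - 0.5 * np.exp(-((delays - offset) / coherence_time) ** 2)
    signal += np.random.default_rng(0).normal(0.0, 0.01, delays.size)  # noise

    est_delay = delays[np.argmin(signal)]        # centre of the dip
    print(f"recovered depth: {est_delay * C / 2 * 1e3:.3f} mm")  # ~0.750 mm
    ```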

  • Researchers develop computer model to predict whether a pesticide will harm bees

    Researchers in the Oregon State University College of Engineering have harnessed the power of artificial intelligence to help protect bees from pesticides.
    Cory Simon, assistant professor of chemical engineering, and Xiaoli Fern, associate professor of computer science, led the project, which involved training a machine learning model to predict whether any proposed new herbicide, fungicide or insecticide would be toxic to honey bees based on the compound’s molecular structure.
    The findings, featured on the cover of The Journal of Chemical Physics in a special issue, “Chemical Design by Artificial Intelligence,” are important because many fruit, nut, vegetable and seed crops rely on bee pollination.
    Without bees to transfer the pollen needed for reproduction, almost 100 commercial crops in the United States would vanish. Bees’ global economic impact is estimated to exceed $100 billion annually.
    “Pesticides are widely used in agriculture, increasing crop yield and providing food security, but they can harm off-target species like bees,” Simon said. “And since insects, weeds, etc. eventually evolve resistance, new pesticides must continually be developed, ones that don’t harm bees.”
    Graduate students Ping Yang and Adrian Henle used honey bee toxicity data from pesticide exposure experiments, involving nearly 400 different pesticide molecules, to train an algorithm to predict if a new pesticide molecule would be toxic to honey bees.
    “The model represents pesticide molecules by the set of random walks on their molecular graphs,” Yang said.
    A random walk is a mathematical concept that describes any meandering path, such as on the complicated chemical structure of a pesticide, where each step along the path is decided by chance, as if by coin tosses.
    Imagine, Yang explains, that you’re out for an aimless stroll along a pesticide’s chemical structure, making your way from atom to atom via the bonds that hold the compound together. You travel in random directions but keep track of your route, the sequence of atoms and bonds that you visit. Then you take the same kind of stroll on a different molecule, comparing the series of twists and turns to what you’ve done before.
    “The algorithm declares two molecules similar if they share many walks with the same sequence of atoms and bonds,” Yang said. “Our model serves as a surrogate for a bee toxicity experiment and can be used to quickly screen proposed pesticide molecules for their toxicity.”
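    A minimal Python sketch of this walk-comparison idea, using tiny hand-built molecular graphs; the helper functions, the molecules, and the similarity score are invented for illustration and are not the authors’ implementation:

    ```python
    import random
    from collections import Counter

    # Toy sketch of walk-based molecular similarity. The graphs, walk length,
    # and similarity score are invented; the study's actual model is a random
    # walk graph kernel trained on ~400 pesticide molecules.
    def sample_walks(atoms, bonds, n_walks=2000, length=4, seed=0):
        """Sample random walks; return a multiset of atom/bond label sequences."""
        rng = random.Random(seed)
        walks = Counter()
        for _ in range(n_walks):
            v = rng.choice(list(atoms))
            seq = [atoms[v]]
            for _ in range(length):
                u, bond = rng.choice(bonds[v])  # random step along a bond
                seq += [bond, atoms[u]]
                v = u
            walks[tuple(seq)] += 1
        return walks

    def walk_overlap(w1, w2):
        """Fraction of sampled walks whose label sequence occurs on both graphs."""
        shared = sum(min(w1[k], w2[k]) for k in w1.keys() & w2.keys())
        return shared / min(sum(w1.values()), sum(w2.values()))

    # Ethanol-like fragment C-C-O and methanol-like fragment C-O (hydrogens omitted).
    atoms_a = {0: "C", 1: "C", 2: "O"}
    bonds_a = {0: [(1, "single")], 1: [(0, "single"), (2, "single")], 2: [(1, "single")]}
    atoms_b = {0: "C", 1: "O"}
    bonds_b = {0: [(1, "single")], 1: [(0, "single")]}

    print(walk_overlap(sample_walks(atoms_a, bonds_a), sample_walks(atoms_b, bonds_b)))
    ```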
    The National Science Foundation supported this research.
    Story Source:
    Materials provided by Oregon State University. Original written by Steve Lundeberg. Note: Content may be edited for style and length.

  • A robot learns to imagine itself

    As every athletic or fashion-conscious person knows, our body image is not always accurate or realistic, but it’s an important piece of information that determines how we function in the world. When you get dressed or play ball, your brain is constantly planning ahead so that you can move your body without bumping, tripping, or falling over.
    We humans acquire our body-model as infants, and robots are following suit. A Columbia Engineering team announced today they have created a robot that — for the first time — is able to learn a model of its entire body from scratch, without any human assistance. In a new study published in Science Robotics, the researchers demonstrate how their robot created a kinematic model of itself, and then used its self-model to plan motion, reach goals, and avoid obstacles in a variety of situations. It even automatically recognized and then compensated for damage to its body.
    Robot watches itself like an infant exploring itself in a hall of mirrors
    The researchers placed a robotic arm inside a circle of five streaming video cameras. The robot watched itself through the cameras as it undulated freely. Like an infant exploring itself for the first time in a hall of mirrors, the robot wiggled and contorted to learn how exactly its body moved in response to various motor commands. After about three hours, the robot stopped. Its internal deep neural network had finished learning the relationship between the robot’s motor actions and the volume it occupied in its environment.
    “We were really curious to see how the robot imagined itself,” said Hod Lipson, professor of mechanical engineering and director of Columbia’s Creative Machines Lab, where the work was done. “But you can’t just peek into a neural network, it’s a black box.” After the researchers struggled with various visualization techniques, the self-image gradually emerged. “It was a sort of gently flickering cloud that appeared to engulf the robot’s three-dimensional body,” said Lipson. “As the robot moved, the flickering cloud gently followed it.” The robot’s self-model was accurate to about 1% of its workspace.
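    The story doesn’t give the network’s details, but the learned object can be sketched as a query model: feed in a motor configuration and a point in space, and ask whether the body occupies that point. A minimal, untrained sketch, with the joint count, layer sizes, and architecture all assumed for illustration:

    ```python
    import torch
    import torch.nn as nn

    # Sketch of a query-style self-model: given a motor configuration and a
    # point in space, predict whether the robot's body occupies that point.
    # Joint count, layer sizes, and architecture are assumptions, not the paper's.
    class SelfModel(nn.Module):
        def __init__(self, n_joints=4, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_joints + 3, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),  # logit: is point (x, y, z) inside the body?
            )

        def forward(self, joints, xyz):
            return self.net(torch.cat([joints, xyz], dim=-1))

    model = SelfModel()
    joints = torch.zeros(1, 4)               # one motor configuration
    point = torch.tensor([[0.1, 0.0, 0.3]])  # one workspace point, metres
    print(torch.sigmoid(model(joints, point)).item())  # ~0.5 before training
    ```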
    Self-modeling robots will lead to more self-reliant autonomous systems
    The ability of robots to model themselves without being assisted by engineers is important for many reasons: Not only does it save labor, but it also allows the robot to keep up with its own wear-and-tear, and even detect and compensate for damage. The authors argue that this ability is important as we need autonomous systems to be more self-reliant. A factory robot, for instance, could detect that something isn’t moving right, and compensate or call for assistance.
    “We humans clearly have a notion of self,” explained the study’s first author Boyuan Chen, who led the work and is now an assistant professor at Duke University. “Close your eyes and try to imagine how your own body would move if you were to take some action, such as stretch your arms forward or take a step backward. Somewhere inside our brain we have a notion of self, a self-model that informs us what volume of our immediate surroundings we occupy, and how that volume changes as we move.”
    Self-awareness in robots
    The work is part of Lipson’s decades-long quest to find ways to grant robots some form of self-awareness. “Self-modeling is a primitive form of self-awareness,” he explained. “If a robot, animal, or human, has an accurate self-model, it can function better in the world, it can make better decisions, and it has an evolutionary advantage.”
    The researchers are aware of the limits, risks, and controversies surrounding granting machines greater autonomy through self-awareness. Lipson is quick to admit that the kind of self-awareness demonstrated in this study is “trivial compared to that of humans, but you have to start somewhere. We have to go slowly and carefully, so we can reap the benefits while minimizing the risks.”

  • Turning white blood cells into medicinal microrobots with light

    Medicinal microrobots could help physicians better treat and prevent diseases. But most of these devices are made with synthetic materials that trigger immune responses in vivo. Now, for the first time, researchers reporting in ACS Central Science have used lasers to precisely control neutrophils — a type of white blood cell — as a natural, biocompatible microrobot in living fish. The “neutrobots” performed multiple tasks, showing they could someday deliver drugs to precise locations in the body.
    Microrobots currently in development for medical applications would require injections or the consumption of capsules to get them inside an animal or person. But researchers have found that these microscopic objects often trigger immune reactions in small animals, resulting in the removal of microrobots from the body before they can perform their jobs. Using cells already present in the body, such as neutrophils, could be a less invasive alternative for drug delivery that wouldn’t set off the immune system. These white blood cells already naturally pick up nanoparticles and dead red blood cells and can migrate through blood vessels into adjacent tissues, so they are good candidates for becoming microrobots. Previously, researchers have guided neutrophils with lasers in lab dishes, moving them around as “neutrobots.” However, information on whether this approach will work in living animals was lacking. So, Xianchuang Zheng, Baojun Li and colleagues wanted to demonstrate the feasibility of light-driven neutrobots in animals using live zebrafish.
    The researchers manipulated and maneuvered neutrophils in zebrafish tails, using focused laser beams as remote optical tweezers. The light-driven microrobots could be moved at velocities up to 1.3 µm/s, three times faster than a neutrophil moves on its own. In their experiments, the researchers used the optical tweezers to precisely and actively control the functions that neutrophils conduct as part of the immune system. For instance, a neutrobot was moved through a blood vessel wall into the surrounding tissue. Another one picked up and transported a plastic nanoparticle, showing its potential for carrying medicine. And when a neutrobot was pushed toward red blood cell debris, it engulfed the pieces. Remarkably, at the same time, a different neutrophil that wasn’t controlled by a laser tried to remove the cellular debris on its own. Because they successfully controlled neutrobots in vivo, the researchers say this study advances the possibilities for targeted drug delivery and precise treatment of diseases.
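    As a rough plausibility check, not a calculation from the paper, Stokes’ law gives the drag force the optical tweezers must supply to pull a neutrophil-sized sphere through fluid at the reported speed; the cell radius and fluid viscosity below are assumed values:

    ```python
    import math

    # Back-of-envelope check (not from the paper): the Stokes drag an optical
    # trap must overcome to drag a neutrophil-sized sphere at the reported speed.
    eta = 1.0e-3       # Pa·s, viscosity of water (assumed for the medium)
    radius = 6.0e-6    # m, rough neutrophil radius (assumption)
    velocity = 1.3e-6  # m/s, top speed reported in the study

    force = 6 * math.pi * eta * radius * velocity  # Stokes' law: F = 6*pi*eta*r*v
    print(f"required trapping force ~ {force * 1e12:.2f} pN")  # ~0.15 pN
    ```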
    The authors acknowledge funding from the National Natural Science Foundation of China, the Basic and Applied Basic Research Foundation of Guangdong Province, and the Science and Technology Program of Guangzhou.
    Story Source:
    Materials provided by American Chemical Society. Note: Content may be edited for style and length.

  • Incidental pulmonary embolism on chest CT: AI vs. clinical reports

    According to ARRS’ American Journal of Roentgenology (AJR), an AI tool for detection of incidental pulmonary embolus (iPE) on conventional contrast-enhanced chest CT examinations had a high negative predictive value (NPV) and a moderate positive predictive value (PPV), even finding some iPEs missed by radiologists.
    “Potential applications of the AI tool include serving as a second reader to help detect additional iPEs or as a worklist triage tool to allow earlier iPE detection and intervention,” wrote lead investigator Kiran Batra from the University of Texas Southwestern Medical Center in Dallas. “Various explanations of misclassifications by the AI tool (both false positives and false negatives) were identified, to provide targets for model improvement.”
    Batra and colleagues’ retrospective study included 2,555 patients (1,340 women, 1,215 men; mean age, 53.6 years) who underwent 3,003 conventional contrast-enhanced chest CT examinations between September 2019 and February 2020 at Parkland Health in Dallas, TX. The researchers used an FDA-approved, commercially available AI tool (Aidoc, New York, NY) to detect acute iPE on the images; a vendor-supplied natural language processing algorithm (RepScheme, Tel Aviv, Israel) was then applied to the clinical reports to identify examinations interpreted as positive for iPE.
    Ultimately, the commercial AI tool had NPV of 99.8% and PPV of 86.7% for detection of iPE on conventional contrast-enhanced chest CT examinations (i.e., not using CT pulmonary angiography protocols). Of 40 iPEs present in the team’s study sample, 7 were detected only by the clinical reports, and 4 were detected only by AI.
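    For readers unfamiliar with the metrics, PPV and NPV come straight from a confusion matrix, as in the sketch below; the counts are hypothetical, picked only to be consistent with the reported figures, and are not the study’s actual numbers:

    ```python
    # PPV and NPV from a confusion matrix. The counts below are hypothetical,
    # chosen only to be consistent with the reported 86.7% PPV and 99.8% NPV
    # over 3,003 examinations; they are not the study's actual counts.
    def ppv(tp, fp):
        """Fraction of AI-positive exams that were truly positive."""
        return tp / (tp + fp)

    def npv(tn, fn):
        """Fraction of AI-negative exams that were truly negative."""
        return tn / (tn + fn)

    tp, fp, tn, fn = 26, 4, 2966, 7  # hypothetical; tp + fp + tn + fn = 3,003
    print(f"PPV = {ppv(tp, fp):.1%}, NPV = {npv(tn, fn):.1%}")
    ```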
    Noting that both the AI tool and clinical reports detected iPEs missed by the other method, “the diagnostic performance of the AI tool did not show significant variation across study subgroups,” the authors of this AJR article added.
    Story Source:
    Materials provided by American Roentgen Ray Society. Note: Content may be edited for style and length.

  • Researchers find the missing photonic link to enable an all-silicon quantum internet

    Researchers at Simon Fraser University have made a crucial breakthrough in the development of quantum technology.
    Their research, published in Nature today, describes their observations of over 150,000 silicon ‘T centre’ photon-spin qubits, an important milestone that unlocks immediate opportunities to construct massively scalable quantum computers and the quantum internet that will connect them.
    Quantum computing has enormous potential to provide computing power well beyond the capabilities of today’s supercomputers, which could enable advances in many other fields, including chemistry, materials science, medicine and cybersecurity.
    In order to make this a reality, it is necessary to produce both stable, long-lived qubits that provide processing power, as well as the communications technology that enables these qubits to link together at scale.
    Past research has indicated that silicon can produce some of the most stable and long-lived qubits in the industry. Now the research published by Daniel Higginbottom, Alex Kurkjian, and co-authors provides proof of principle that T centres, a specific luminescent defect in silicon, can serve as a ‘photonic link’ between qubits. The work comes out of the Silicon Quantum Technology Lab in SFU’s Physics Department, co-led by Stephanie Simmons, Canada Research Chair in Silicon Quantum Technologies, and Michael Thewalt, Professor Emeritus.
    “This work is the first measurement of single T centres in isolation, and actually, the first measurement of any single spin in silicon to be performed with only optical measurements,” says Stephanie Simmons.
    “An emitter like the T centre that combines high-performance spin qubits and optical photon generation is ideal to make scalable, distributed, quantum computers, because they can handle the processing and the communications together, rather than needing to interface two different quantum technologies, one for processing and one for communications,” Simmons says.
    In addition, T centres have the advantage of emitting light at the same wavelength that today’s metropolitan fibre communications and telecom networking equipment use.
    “With T centres, you can build quantum processors that inherently communicate with other processors,” Simmons says. “When your silicon qubit can communicate by emitting photons (light) in the same band used in data centres and fiber networks, you get these same benefits for connecting the millions of qubits needed for quantum computing.”
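    As a quick illustration of that claim, the T centre’s zero-phonon emission is reported in the literature at roughly 1326 nm, which falls inside the telecom O-band used by fibre transceivers; the wavelength value is an assumption drawn from outside this story:

    ```python
    # Quick check that an emission line near 1326 nm (a T-centre value from the
    # literature, not stated in this story) falls in the telecom O-band.
    C = 299_792_458.0     # speed of light, m/s
    wavelength = 1326e-9  # m, approximate T-centre zero-phonon line (assumption)
    o_band = 1260e-9 <= wavelength <= 1360e-9
    print(f"{C / wavelength / 1e12:.1f} THz, in O-band: {o_band}")  # ~226 THz, True
    ```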
    Developing quantum technology using silicon provides opportunities to rapidly scale quantum computing. The global semiconductor industry is already able to inexpensively manufacture silicon computer chips at scale, with a staggering degree of precision. This technology forms the backbone of modern computing and networking, from smartphones to the world’s most powerful supercomputers.
    “By finding a way to create quantum computing processors in silicon, you can take advantage of all of the years of development, knowledge, and infrastructure used to manufacture conventional computers, rather than creating a whole new industry for quantum manufacturing,” Simmons says. “This represents an almost insurmountable competitive advantage in the international race for a quantum computer.”
    Story Source:
    Materials provided by Simon Fraser University. Original written by Erin Brown-John. Note: Content may be edited for style and length.

  • Gender bias in search algorithms has effect on users, new study finds

    Gender-neutral internet searches nonetheless yield male-dominated results, finds a new study by a team of psychology researchers. Moreover, these search results have an effect on users by promoting gender bias and potentially influencing hiring decisions.
    The work, which appears in the journal Proceedings of the National Academy of Sciences (PNAS), is among the latest to uncover how artificial intelligence (AI) can alter our perceptions and actions.
    “There is increasing concern that algorithms used by modern AI systems produce discriminatory outputs, presumably because they are trained on data in which societal biases are embedded,” says Madalina Vlasceanu, a postdoctoral fellow in New York University’s Department of Psychology and the paper’s lead author. “As a consequence, their use by humans may result in the propagation, rather than reduction, of existing disparities.”
    “These findings call for a model of ethical AI that combines human psychology with computational and sociological approaches to illuminate the formation, operation, and mitigation of algorithmic bias,” adds author David Amodio, a professor in NYU’s Department of Psychology and the University of Amsterdam.
    “Certain 1950s ideas about gender are actually still embedded in our database systems,” Meredith Broussard, author of Artificial Unintelligence: How Computers Misunderstand the World and a professor at NYU’s Arthur L. Carter Journalism Institute, told The Markup earlier this year.

  • A proof of odd-parity superconductivity

    Superconductivity is a fascinating state of matter in which an electrical current can flow without any resistance. It usually exists in one of two forms: an “even parity” state, whose wave function is symmetric under spatial inversion and is easily destroyed by a magnetic field, and an “odd parity” state, whose wave function is antisymmetric under inversion and remains stable in magnetic fields applied along certain directions.
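    The parity distinction is easy to state numerically: under inversion (k → −k) an even-parity gap function is unchanged, while an odd-parity one changes sign. A toy check with textbook gap forms, not the actual CeRh2As2 gap:

    ```python
    import numpy as np

    # Toy parity check: under inversion (k -> -k) an even-parity gap is
    # unchanged, an odd-parity gap flips sign. These are textbook s-wave-like
    # and p-wave-like forms, not the actual CeRh2As2 gap function.
    k = np.random.default_rng(0).uniform(-np.pi, np.pi, size=(5, 3))

    gap_even = lambda k: np.cos(k[:, 0]) + np.cos(k[:, 1])  # s-wave-like
    gap_odd = lambda k: np.sin(k[:, 2])                     # p-wave-like

    print(np.allclose(gap_even(-k), gap_even(k)))  # True -> even parity
    print(np.allclose(gap_odd(-k), -gap_odd(k)))   # True -> odd parity
    ```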
    Consequently, an odd-parity state should show a characteristic angle dependence of the critical field at which superconductivity disappears. But odd-parity superconductivity is rare in nature; only a few materials support this state, and in none of them had the expected angle dependence been observed. In a new publication in PRX, the group of Elena Hassinger and collaborators shows that the angle dependence in the superconductor CeRh2As2 is exactly that expected of an odd-parity state.
    CeRh2As2 was recently found to exhibit two superconducting states: a low-field state changes into a high-field state at 4 T when a magnetic field is applied along one axis. For varying field directions, the researchers measured the specific heat, magnetic susceptibility, and magnetic torque of this material to obtain the angle dependence of the critical fields. They found that the high-field state quickly disappears when the magnetic field is turned away from the initial axis. These results are in excellent agreement with a model identifying the two states as even- and odd-parity states.
    CeRh2As2 presents an extraordinary opportunity to investigate odd-parity superconductivity further. It also allows for testing mechanisms for a transition between two superconducting states, and especially their relation to spin-orbit coupling, multiband physics, and additional ordered states occurring in this material.
    Story Source:
    Materials provided by Max Planck Institute for Chemical Physics of Solids. Note: Content may be edited for style and length.