More stories

  • AI helps bring clarity to LASIK patients facing cataract surgery

    While millions of people have undergone LASIK eye surgery since it became commercially available in 1989, patients sometimes develop cataracts later in life and require new corrective lenses to be implanted in their eyes. With an increasing number of intraocular lens options becoming available, scientists have developed computational simulations to help patients and surgeons see the best options.
    In a study in the Journal of Cataract & Refractive Surgery, researchers from the University of Rochester created computational eye models that included the corneas of post-LASIK surgery patients and studied how standard intraocular lenses and lenses designed to increase depth of focus performed in operated eyes. Susana Marcos, the David R. Williams Director of the Center for Visual Science and the Nicholas George Professor of Optics and of Ophthalmology at Rochester, says the computational models that use anatomical information of the patient’s eye provide surgeons with important guidance on the expected optical quality post-operatively.
    “Currently the only pre-operative data used to select the lens is essentially the length and curvature of the cornea,” says Marcos, a coauthor of the study. “This new technology allows us to reconstruct the eye in three dimensions, providing us the entire topography of the cornea and crystalline lens, where the intraocular lens is implanted. When you have all this three-dimensional information, you’re in a much better position to select the lens that will produce the best image at the retinal plane.”
    The future of optical coherence tomography
    Marcos and her collaborators from the Center for Visual Science, as well as Rochester’s Flaum Eye Institute and Goergen Institute for Data Science, are conducting a larger study that uses the optical coherence tomography quantification tools they have developed to quantify eye images in three dimensions and identify broader trends. They are using machine-learning algorithms to find relationships between pre- and post-operation data, providing parameters that can inform the best outcomes.
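    The relationship-finding step can be pictured as a simple supervised fit. The sketch below is a minimal stand-in, not the team's actual pipeline: the feature names and numbers are invented, and an ordinary least-squares fit stands in for their machine-learning algorithms.

```python
import numpy as np

# Hypothetical pre-op biometry for 6 eyes:
# [axial length (mm), corneal curvature (D), anterior chamber depth (mm)]
pre_op = np.array([
    [23.1, 43.5, 3.1],
    [24.0, 42.8, 3.3],
    [22.7, 44.1, 2.9],
    [25.2, 41.9, 3.6],
    [23.8, 43.0, 3.2],
    [24.5, 42.2, 3.4],
])
# Hypothetical post-op outcome: residual refractive error (D)
post_op = np.array([0.10, -0.25, 0.35, -0.60, -0.05, -0.40])

# Fit a linear model post_op ~ pre_op @ w via least squares,
# with a constant column appended for the intercept.
X = np.hstack([pre_op, np.ones((len(pre_op), 1))])
w, *_ = np.linalg.lstsq(X, post_op, rcond=None)

# Predict the expected outcome for a new eye from its pre-op measurements.
new_eye = np.array([23.5, 43.2, 3.2, 1.0])
prediction = new_eye @ w
```

    A real pipeline would fit far richer inputs (full corneal and lens topography) and validate against held-out patients, but the shape of the problem, pre-operative measurements in, predicted post-operative quality out, is the same.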
    Additionally, they have developed technology that can help patients see for themselves what different lens options will look like.
    “What we see is not strictly the image that is projected on the retina,” says Marcos. “There is all the visual processing and perception that comes in. When surgeons are planning the surgery, it is very difficult for them to convey to the patients how they are going to see. A computational, personalized eye model tells which lens is the best fit for the patient’s eye anatomy, but patients want to see for themselves.”
    With an optical bench, the researchers use technology originally developed for astronomy, such as adaptive optics mirrors and spatial light modulators, to manipulate the optics of the eye as an intraocular lens would. The approach allows Marcos and her collaborators to perform fundamental experiments and collaborate with industry partners to test new products. Marcos also helped develop a commercial headset version of the instrumentation called SimVis Gekko that allows patients to see the world around them as if they had had the surgery.
    In addition to studying techniques to help treat cataracts, the researchers are applying their methods to study other major eye conditions, including presbyopia and myopia.

  • Shh! Quiet cables set to help reveal rare physics events

    Imagine trying to tune a radio to a single station but instead encountering static noise and interfering signals from your own equipment. That is the challenge facing research teams searching for evidence of extremely rare events that could help understand the origin and nature of matter in the universe. It turns out that when you are trying to tune into some of the universe’s weakest signals, it helps to make your instruments very quiet.
    Around the world more than a dozen teams are listening for the pops and electronic sizzle that might mean they have finally tuned into the right channel. These scientists and engineers have gone to extraordinary lengths to shield their experiments from false signals created by cosmic radiation. Most such experiments are found in very inaccessible places — such as a mile underground in a nickel mine in Sudbury, Ontario, Canada, or in an abandoned gold mine in Lead, South Dakota — to shield them from naturally radioactive elements on Earth. However, one such source of fake signals comes from natural radioactivity in the very electronics that are designed to record potential signals.
    Radioactive contaminants, even at concentrations as tiny as one part-per-billion, can mimic the elusive signals that scientists are seeking. Now, a research team at the Department of Energy’s Pacific Northwest National Laboratory, working with Q-Flex Inc., a small business partner in California, has produced electronic cables with ultra-pure materials. These cables are specially designed and manufactured to have such extremely low levels of the radioactive contaminants that they will not interfere with highly sensitive neutrino and dark matter experiments. The scientists report in the journal EPJ Techniques and Instrumentation that the cables have applications not only in physics experiments, but they may also be useful to reduce the effect of ionizing radiation interfering with future quantum computers.
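    To see why part-per-billion purity matters, a back-of-envelope calculation (using standard nuclear constants, not figures from the paper) converts 1 ppb of uranium-238 in a kilogram of cable material into a decay rate:

```python
import math

# Standard constants (not from the article)
AVOGADRO = 6.022e23                    # atoms per mole
U238_MOLAR_MASS = 238.0                # g/mol
U238_HALF_LIFE_S = 4.468e9 * 3.156e7   # half-life: years -> seconds

# 1 ppb of U-238 in 1 kg of material is one microgram of uranium.
mass_u238_g = 1e3 * 1e-9
atoms = mass_u238_g / U238_MOLAR_MASS * AVOGADRO

# Activity A = ln(2) * N / t_half, in becquerels (decays per second).
activity_bq = math.log(2) * atoms / U238_HALF_LIFE_S
decays_per_day = activity_bq * 86400
```

    That single part per billion yields on the order of a thousand decays per day, which is enormous next to experiments that hope to see a handful of candidate events per year.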
    “We have pioneered a technique to produce electronic cabling with radioactive contamination levels a hundred times lower than those of current commercially available options,” said PNNL principal investigator Richard Saldanha. “This manufacturing approach and product have broad application across any field that is sensitive to the presence of even very low levels of radioactive contaminants.”
    An ultra-quiet choreographed ballet
    Small amounts of naturally occurring radioactive elements are found everywhere: in rocks, dirt and dust floating in the air. The amount of radiation that they emit is so low that they do not pose any health hazards, but it’s still enough to cause problems for next-generation neutrino and dark matter detectors.
    “We typically need to get a million or sometimes a billion times cleaner than the contamination levels you would find in just a little speck of dirt or dust,” said PNNL chemist Isaac Arnquist, who co-authored the research article and led the measurement team.

  • Topological materials open a new pathway for exploring spin Hall materials

    A group of researchers have made a significant breakthrough which could revolutionize next-generation electronics by enabling non-volatility, large-scale integration, low power consumption, high speed, and high reliability in spintronic devices.
    Details of their findings were published in the journal Physical Review B on August 25, 2023.
    Spintronic devices, represented by magnetic random access memory (MRAM), utilize the magnetization direction of ferromagnetic materials for information storage and rely on spin current, a flow of spin angular momentum, for reading and writing data.
    Conventional semiconductor electronics have faced limitations in achieving these qualities.
    The emergence of three-terminal spintronic devices, which employ separate current paths for writing and reading information, presents a solution with reduced writing errors and increased writing speed. Nevertheless, reducing the energy consumed during information writing, specifically during magnetization switching, remains a critical challenge.
    A promising method for mitigating energy consumption during information writing is the utilization of the spin Hall effect, where spin angular momentum (spin current) flows transversely to the electric current. The challenge lies in identifying materials that exhibit a significant spin Hall effect, a task that has been clouded by a lack of clear guidelines.
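    The "significant spin Hall effect" the team is screening for is conventionally quantified by the spin Hall angle, the charge-to-spin conversion efficiency (a textbook definition, not spelled out in the article):

```latex
% Spin Hall angle: ratio of the generated transverse spin current
% density J_s (expressed in charge-current units, i.e. with the
% factor 2e/\hbar absorbed) to the applied longitudinal charge
% current density J_c. Larger |\theta_{SH}| means less charge
% current, and hence less energy, is needed to switch a magnet.
\theta_{\mathrm{SH}} = \frac{J_s}{J_c}
```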
    “We turned our attention to a unique compound known as cobalt-tin-sulfur (Co3Sn2S2), which exhibits ferromagnetic properties at low temperatures below 177 K (-96 °C) and paramagnetic behavior at room temperature,” explains Yong-Chang Lau and Takeshi Seki, both from the Institute for Materials Research (IMR), Tohoku University and co-authors of the study. “Notably, Co3Sn2S2 is classified as a topological material and exhibits a remarkable anomalous Hall effect when it transitions to a ferromagnetic state due to its distinctive electronic structure.”
    Lau, Seki and colleagues employed theoretical calculations to explore the electronic states of both ferromagnetic and paramagnetic Co3Sn2S2, revealing that electron-doping enhances the spin Hall effect. To validate this theoretical prediction, thin films of Co3Sn2S2 partially substituted with nickel (Ni) and indium (In) were synthesized. These experiments demonstrated that Co3Sn2S2 exhibited the most significant anomalous Hall effect, while (Co2Ni)Sn2S2 displayed the most substantial spin Hall effect, aligning closely with the theoretical predictions.
    “We uncovered the intricate correlation between the Hall effects, providing a clear path to discovering new spin Hall materials by leveraging existing literature as a guide,” adds Seki. “This will hopefully accelerate the development of ultralow-power-consumption spintronic devices, marking a pivotal step toward the future of electronics.”

  • Shape-changing smart speaker lets users mute different areas of a room

    In virtual meetings, it’s easy to keep people from talking over each other. Someone just hits mute. But for the most part, this ability doesn’t translate easily to recording in-person gatherings. In a bustling cafe, there are no buttons to silence the table beside you.
    The ability to locate and control sound — isolating one person talking from a specific location in a crowded room, for instance — has challenged researchers, especially without visual cues from cameras.
    A team led by researchers at the University of Washington has developed a shape-changing smart speaker that uses self-deploying microphones to divide rooms into speech zones and track the positions of individual speakers. With the help of the team’s deep-learning algorithms, the system lets users mute certain areas or separate simultaneous conversations, even if two adjacent people have similar voices. The microphones, each about an inch in diameter, automatically deploy from, and then return to, a charging station, like a fleet of Roombas. This allows the system to be moved between environments and set up automatically. In a conference room meeting, for instance, such a system might be deployed instead of a central microphone, allowing better control of in-room audio.
    The team published its findings Sept. 21 in Nature Communications.
    “If I close my eyes and there are 10 people talking in a room, I have no idea who’s saying what and where they are in the room exactly. That’s extremely hard for the human brain to process. Until now, it’s also been difficult for technology,” said co-lead author Malek Itani, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “For the first time, using what we’re calling a robotic ‘acoustic swarm,’ we’re able to track the positions of multiple people talking in a room and separate their speech.”
    Previous research on robot swarms has required using overhead or on-device cameras, projectors or special surfaces. The UW team’s system is the first to accurately distribute a robot swarm using only sound.
    The team’s prototype consists of seven small robots that spread themselves across tables of various sizes. As they move from their charger, each robot emits a high-frequency sound, like a bat navigating, using this signal and other sensors to avoid obstacles and move around without falling off the table. The automatic deployment allows the robots to place themselves for maximum accuracy, permitting greater sound control than if a person set them. The robots disperse as far from each other as possible, since greater distances make it easier to differentiate and locate people speaking. Today’s consumer smart speakers have multiple microphones, but clustered on the same device, they’re too close to allow for this system’s mute and active zones.
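    Locating a talker from sound alone comes down to comparing arrival times across the spread-out microphones. The sketch below is a toy illustration with invented geometry, not the team's deep-learning system: it localizes a single source in 2D by grid-searching over time-difference-of-arrival (TDOA) measurements.

```python
import itertools
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

# Hypothetical microphone positions on a 0.6 m table (metres),
# spread apart as the robots in the article spread for accuracy.
mics = [(0.0, 0.0), (0.6, 0.0), (0.0, 0.6), (0.6, 0.6)]
true_source = (0.45, 0.15)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# What an array can actually measure: arrival-time differences
# relative to microphone 0 (absolute emission time is unknown).
arrivals = [dist(true_source, m) / SPEED_OF_SOUND for m in mics]
tdoas = [t - arrivals[0] for t in arrivals]

def tdoa_error(p):
    # Squared mismatch between predicted and measured time differences.
    pred = [dist(p, m) / SPEED_OF_SOUND for m in mics]
    pred = [t - pred[0] for t in pred]
    return sum((a - b) ** 2 for a, b in zip(pred, tdoas))

# Pick the grid point over the table that best explains the TDOAs.
grid = [(x / 100, y / 100) for x, y in
        itertools.product(range(0, 61, 3), repeat=2)]
estimate = min(grid, key=tdoa_error)
```

    Widening the array shrinks the region of near-identical time differences, which is the geometric reason the robots disperse as far apart as possible.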

  • Scientists successfully maneuver robot through living lung tissue

    Lung cancer is the leading cause of cancer-related deaths in the United States. Some tumors are extremely small and hide deep within lung tissue, making it difficult for surgeons to reach them. To address this challenge, UNC-Chapel Hill and Vanderbilt University researchers have been working on an extremely bendy but sturdy robot capable of traversing lung tissue.
    Their research has reached a new milestone. In a new paper, published in Science Robotics, Ron Alterovitz, PhD, in the UNC Department of Computer Science, and Jason Akulian, MD MPH, in the UNC Department of Medicine, have proven that their robot can autonomously go from “Point A” to “Point B” while avoiding important structures, such as tiny airways and blood vessels, in a living laboratory model.
    “This technology allows us to reach targets we can’t otherwise reach with a standard or even robotic bronchoscope,” said Dr. Akulian, co-author on the paper and Section Chief of Interventional Pulmonology and Pulmonary Oncology in the UNC Division of Pulmonary Disease and Critical Care Medicine. “It gives you that extra few centimeters, or even few millimeters, which would help immensely with pursuing small targets in the lungs.”
    The development of the autonomous steerable needle robot leveraged UNC’s highly collaborative culture by blending medicine, computer science, and engineering expertise. In addition to Alterovitz and Akulian, the development effort included Yueh Z. Lee, MD, PhD, at the UNC Department of Radiology, as well as Robert J. Webster III at Vanderbilt University and Alan Kuntz at the University of Utah.
    The robot is made of several separate components. A mechanical control provides controlled thrust of the needle to go forward and backward and the needle design allows for steering along curved paths. The needle is made from a nickel-titanium alloy and has been laser etched to increase its flexibility, allowing it to move effortlessly through tissue.
    As it moves forward, the etching on the needle allows it to steer around obstacles with ease. Other attachments, such as catheters, could be used together with the needle to perform procedures such as lung biopsies.
    To drive through tissue, the needle needs to know where it is going. The research team used CT scans of the subject’s thoracic cavity and artificial intelligence to create three-dimensional models of the lung, including the airways, blood vessels, and the chosen target. Once the needle has been positioned for launch, the AI-driven software uses this 3-D model to steer it automatically from “Point A” to “Point B” while avoiding important structures.
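    The "Point A to Point B while avoiding structures" planning step can be illustrated with a toy search over a voxel grid. This is a minimal sketch with invented geometry, not the team's planner, which must also respect the steerable needle's curvature limits and motion uncertainty.

```python
from collections import deque

# Voxelized volume: a wall of blocked voxels (standing in for
# airways and vessels) at z = 4, with a single opening at (3, 3, 4).
SIZE = 8
blocked = {(x, y, 4) for x in range(SIZE) for y in range(SIZE)
           if not (x == 3 and y == 3)}

def neighbors(v):
    # 6-connected neighbors inside the volume that are not blocked.
    x, y, z = v
    for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
        n = (x + dx, y + dy, z + dz)
        if all(0 <= c < SIZE for c in n) and n not in blocked:
            yield n

def bfs_path(start, goal):
    # Breadth-first search: shortest collision-free voxel path.
    parent = {start: None}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        if v == goal:
            path = []
            while v is not None:
                path.append(v)
                v = parent[v]
            return path[::-1]
        for n in neighbors(v):
            if n not in parent:
                parent[n] = v
                queue.append(n)
    return None  # no collision-free route exists

path = bfs_path((0, 0, 0), (7, 7, 7))
```

    Any route the search returns must thread the single opening in the wall, which mirrors how the real planner funnels the needle through safe corridors between critical anatomy.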

  • ‘Garbatrage’ spins e-waste into prototyping gold

    To Ilan Mandel, a Cornell University robotics researcher and builder, the math didn’t add up. How could a new, off-the-shelf hoverboard cost less than the parts that compose it?
    “This becomes an ambient frustration as a designer — the incredible cheapness of products that exist in the world, and the incredible expenses for prototyping or building anything from scratch,” said Mandel, a doctoral student in the field of information science, based at Cornell Tech.
    While sourcing wheels and motors from old hoverboards to build what would become a fleet of trash robots in New York City, Mandel inadvertently uncovered the subject of his newest research: “Recapturing Product as Material Supply: Hoverboards as Garbatrage,” which received an honorable mention at the Association for Computing Machinery conference on Designing Interactive Systems in July. Wendy Ju, associate professor at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion, and a member of the Department of Information Science in the Cornell Ann S. Bowers College of Computing and Information Science, co-authored the paper.
    “For the large part, we design and manufacture as if we have an infinite supply of perfectly uniform materials and components,” Ju said. “That’s a terrible assumption.”
    Building on work in human-computer interaction that aims to incorporate sustainability and reuse into the field, the Cornell pair introduces “garbatrage,” a framework for prototype builders centered around repurposing underused devices. Mandel and Ju use their repurposing of hoverboards — the hands-free, motorized scooters that rolled in and out of popularity around 2016 — as a test case to highlight the economic factors that create opportunities for garbatrage. They also encourage designers to prioritize material reuse, create more circular economies and sustainable supply chains, and, in turn, minimize electronic waste, or e-waste.
    The time is ripe for a practice like garbatrage, both for sustainability reasons and considering the global supply shortages and international trade issues of the last few years, the researchers said.
    “I think that there’s a real need to appreciate the heterogeneity of hardware that we are surrounded by all the time and look at it as a resource,” Mandel said. “What is often deemed as garbage can be full of value and can be made useful if you are willing to do some bridge work.”
    From old desktop computers, smartphones and printers to smart speakers, Internet of Things appliances, and e-vaping devices, most of today’s e-waste has workable components that can be repurposed and used in the prototypes that become tomorrow’s innovations, researchers said.
    Instead, these devices — along with their batteries, microcontrollers, accelerometers, motors and LCD displays — become part of the estimated 53 million metric tons of e-waste produced globally each year. Nearly 20% of it is properly recycled, but it’s unclear where the other 80% goes, according to a report from the UN’s Global E-waste Monitor 2020. Some ends up in developing countries, where people burn electronics in open-air pits to salvage any valuable metals, poisoning lands and putting public health at risk.
    “Designers are a kind of node of interaction between massive scales of industrialization and end users,” Mandel said. “I think that designers can take that role seriously and use it to leverage e-waste in a way that promotes sustainability, beyond just asking the consumer to reflect more on their own practices.”

  • Let it flow: Recreating water flow for virtual reality

    The physical laws of everyday water flow were established two centuries ago. However, scientists today struggle to simulate disrupted water flow virtually, e.g., when a hand or object alters its flow.
    Now, a research team from Tohoku University has harnessed the power of deep reinforcement learning to replicate the flow of water when disturbed. Replicating this agitated liquid motion, as it is known, allowed them to recreate water flow in real time based on only a small amount of data from real water. The technology opens up the possibility for virtual reality interactions involving water.
    Details of their findings were published in the journal ACM Transactions on Graphics on September 17, 2023.
    Crucial to the breakthrough was creating both a flow measurement technique and a flow reconstruction method that replicated agitated liquid motion.
    To collect flow data, the group — which comprised researchers from Tohoku University’s Research Institute of Electrical Communication (RIEC) and the Institute of Fluid Science — placed buoys embedded with special magnetic markers on water. The movement of each buoy could then be tracked using a magnetic motion capture system. Yet this was only half of the process. The crucial step involved finding an innovative solution to recovering the detailed water motion from the movement of a few buoys.
    “We overcame this by combining a fluid simulation with deep reinforcement learning to perform the recovery,” says Yoshifumi Kitamura, deputy director of RIEC.
    Reinforcement learning is the trial-and-error process through which learning takes place. A computer performs actions, receives feedback (reward or punishment) from its environment, and then adjusts its future actions to maximize its total rewards over time, much like a dog associates treats with good behavior. Deep reinforcement learning combines reinforcement learning with deep neural networks to solve complex problems.
    First, the researchers used a computer to simulate calm liquid. Then, they made each buoy act like a force that pushes the simulated liquid, making it flow like real liquid. The computer then refines the way of pushing via deep reinforcement learning.
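    The trial-and-error loop described above can be boiled down to a few lines of tabular Q-learning. This toy agent (a 5-state chain, nothing to do with fluids) shows the same act-reward-update cycle that the deep RL system runs against its fluid simulator.

```python
import random

random.seed(0)  # deterministic toy run

# A 5-cell chain: start at cell 0, reward for reaching cell 4.
N_STATES, ACTIONS = 5, (-1, +1)        # step left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise act greedily on current values.
        a = (random.choice(ACTIONS) if random.random() < EPSILON
             else max(ACTIONS, key=lambda a: q[(s, a)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else -0.01
        # Feedback adjusts the value estimate, shaping future actions.
        best_next = max(q[(s2, a2)] for a2 in ACTIONS)
        q[(s, a)] += ALPHA * (reward + GAMMA * best_next - q[(s, a)])
        s = s2

# The learned policy: best action in each non-terminal state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
```

    Deep RL replaces the lookup table `q` with a neural network, which is what lets the researchers' system handle the vastly larger state space of a simulated liquid being pushed by buoys.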
    Previous techniques had typically tracked tiny particles suspended inside the liquid with cameras. But it still remained difficult to measure 3D flow in real-time, especially when the liquid was in an opaque container or was opaque itself. Thanks to the developed magnetic motion capture and flow reconstruction technique, real-time 3D flow measurement is now possible.
    Kitamura stresses that the technology will make VR more immersive and improve online communication. “This technology will enable the creation of VR games where you can control things using water and actually feel the water in the game. We may be able to transmit the movement of water over the internet in real time so that even those far away can experience the same lifelike water motion.”

  • Artificial Intelligence tools shed light on millions of proteins

    A research team at the University of Basel and the SIB Swiss Institute of Bioinformatics uncovered a treasure trove of uncharacterised proteins. Embracing the recent deep learning revolution, they discovered hundreds of new protein families and even a novel predicted protein fold. The study has now been published in Nature.
    In the past years, AlphaFold has revolutionised protein science. This Artificial Intelligence (AI) tool was trained on protein data collected by life scientists for over 50 years, and is able to predict the 3D shape of proteins with high accuracy. Its success prompted the modelling of an astounding 215 million proteins last year, providing insights into the shapes of almost any protein. This is particularly interesting for proteins that have not been studied experimentally, a complex and time-consuming process.
    “There are now many sources of protein information, enclosing valuable insights into how proteins evolve and work,” says Joana Pereira, the leader of the study. Nevertheless, research has long been faced with a data jungle. The research team led by Professor Torsten Schwede, group leader at the Biozentrum, University of Basel, and the Swiss Institute of Bioinformatics (SIB), has now succeeded in decrypting some of the concealed information.
    A bird’s eye view reveals new protein families and folds
    The researchers constructed an interactive network of 53 million proteins with high-quality AlphaFold structures. “This network serves as a valuable source for theoretically predicting unknown protein families and their functions on a large scale,” underlines Dr. Janani Durairaj, the first author. The team was able to identify 290 new protein families and one new protein fold that resembles the shape of a flower.
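    The core idea of reading families off a similarity network can be sketched with a union-find pass over thresholded pairwise scores. The protein names and scores below are invented; the study's network is built from sequence and structure similarity across 53 million AlphaFold models.

```python
# Hypothetical pairwise similarity scores between proteins (0..1).
similarities = [
    ("protA", "protB", 0.92),
    ("protB", "protC", 0.88),
    ("protD", "protE", 0.95),
    ("protA", "protD", 0.20),  # below threshold: no edge
    ("protF", "protF", 1.00),  # singleton
]
THRESHOLD = 0.5

# Union-find over the similarity graph; connected components
# of above-threshold edges become putative families.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

for a, b, score in similarities:
    find(a), find(b)            # register both nodes
    if score >= THRESHOLD:
        union(a, b)

# Group every protein under its component root.
families = {}
for node in parent:
    families.setdefault(find(node), set()).add(node)
```

    In this toy run the edges cluster into three families: {protA, protB, protC}, {protD, protE}, and the singleton {protF}. At the study's scale the same component structure is what turns 53 million structures into a navigable map of protein space.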
    Building on the expertise of the Schwede group in developing and maintaining the leading software SWISS-MODEL, they made the network available as an interactive web resource, termed the “Protein Universe Atlas.”
    AI as a valuable tool in research
    The team has employed Deep Learning-based tools for finding novelties in this network, paving the way to innovations in life sciences, from basic to applied research. “Understanding the structure and function of proteins is typically one of the first steps to develop a new drug, or modify their functions by protein engineering, for example,” says Pereira. The work was supported by a ‘kickstarter’ grant from SIB to encourage the adoption of AI in life science resources. It underscores the transformative potential of Deep Learning and intelligent algorithms in research.
    With the Protein Universe Atlas, scientists can now learn more about proteins relevant to their research. “We hope this resource will help not only researchers and biocurators but also students and teachers by providing a new platform for learning about protein diversity, from structure, to function, to evolution,” says Janani Durairaj.