More stories

  • New data analysis tool uncovers important COVID-19 clues

    A new data analysis tool developed by Yale researchers has revealed the specific immune cell types associated with increased risk of death from COVID-19, they report Feb. 28 in the journal Nature Biotechnology.
    Immune system cells such as T cells and antibody-producing B cells are known to provide broad protection against pathogens such as SARS-CoV-2, the virus that causes COVID-19. And large-scale data analyses of millions of cells have given scientists a broad overview of the immune system response to this particular virus. However, they have also found that some immune cell responses, including those by cell types that are usually protective, can occasionally trigger runaway inflammation and death in patients.
    Other data analysis tools that allow for examination down to the level of single cells have given scientists some clues about culprits in severe COVID cases. But such focused views often lack the context of particular cell groupings that might cause better or poorer outcomes.
    Multiscale PHATE, a machine learning tool developed at Yale, allows researchers to move through all resolutions of the data, from millions of cells down to a single cell, within minutes. The technology builds on an algorithm called PHATE, created in the lab of Smita Krishnaswamy, associate professor of genetics and computer science, which overcomes many of the shortcomings of existing data visualization tools.
    “Machine learning algorithms typically focus on a single resolution view of the data, ignoring information that can be found in other more focused views,” said Manik Kuchroo, a doctoral candidate at Yale School of Medicine who helped develop the technology and is co-lead author of the paper. “For this reason, we created Multiscale PHATE which allows users to zoom in and focus on specific subsets of their data to perform more detailed analysis.”
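    A rough sense of what zooming across resolutions means in practice can be sketched with ordinary hierarchical clustering cut at a coarse and a fine level. This is only a toy illustration of multi-resolution analysis, not the Multiscale PHATE algorithm itself, and the data below are simulated placeholders.

```python
# Toy multi-resolution view of simulated "cells" (NOT Multiscale PHATE):
# build one cluster hierarchy, then cut it coarsely and finely to mimic
# zooming from broad populations down to smaller subsets.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# 300 hypothetical cells, each described by 10 marker measurements
cells = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 10))
                   for c in (0.0, 3.0, 6.0)])

tree = linkage(cells, method="ward")                  # hierarchy built once
coarse = fcluster(tree, t=3, criterion="maxclust")    # broad view: 3 groups
fine = fcluster(tree, t=12, criterion="maxclust")     # zoomed view: 12 subsets

print("coarse clusters:", np.unique(coarse).size)
print("fine clusters:  ", np.unique(fine).size)
```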
    Kuchroo, who works in Krishnaswamy’s lab, used the new tool to analyze 55 million blood cells taken from 163 patients admitted to Yale New Haven Hospital with severe cases of COVID-19. Looking broadly, the researchers found that high levels of T cells seemed to be protective against poor outcomes, while high levels of two white blood cell types known as granulocytes and monocytes were associated with higher mortality.
    However, when the researchers drilled down to a more granular level, they discovered that TH17, a type of helper T cell, was also associated with higher mortality when it appeared in clusters marked by the signaling molecules IL-17 and IFNG.
    By measuring quantities of these cells in the blood, the researchers could predict with 83% accuracy whether a patient lived or died, they report.
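    To make the idea concrete, here is a minimal, hypothetical sketch of outcome prediction from per-patient cell-type frequencies using plain logistic regression; the features, labels and classifier are stand-ins, not the analysis pipeline the Yale team used.

```python
# Hypothetical sketch: predict outcomes from immune cell-type fractions.
# All data here are simulated; this is not the authors' pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_patients = 163
# assumed features: fractions of T cells, granulocytes, monocytes, TH17 cells
X = rng.dirichlet(alpha=[5, 3, 2, 1], size=n_patients)
# simulated outcome loosely tied to granulocyte/monocyte abundance
y = (X[:, 1] + X[:, 2] + rng.normal(0, 0.1, n_patients) > 0.45).astype(int)

model = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean().round(2))
```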
    “We were able to rank order risk factors of mortality to show which are the most dangerous,” Krishnaswamy said.
    In theory, the new data analysis tool could be used to fine-tune risk assessment in a host of diseases, she said.
    Jessie Huang in the Yale Department of Computer Science and Patrick Wong in the Department of Immunobiology are co-lead authors of the paper. Akiko Iwasaki, the Waldemar Von Zedtwitz Professor of Immunobiology, is co-corresponding author.
    Story Source:
    Materials provided by Yale University. Original written by Bill Hathaway. Note: Content may be edited for style and length.

  • Computer drug simulations offer warning about promising diabetes and cancer treatment

    Using computer drug simulations, researchers have found that doctors need to be wary of prescribing a particular treatment across all types of cancer and all patients.
    The drug, called metformin, has traditionally been prescribed for diabetes but has been used in clinical settings as a cancer treatment in recent years.
    The researchers say while metformin shows great promise, it also has negative consequences for some types of cancers.
    “Metformin is a wonder drug, and we are just beginning to understand all its possible benefits,” said Mehrshad Sadria, a PhD candidate in applied mathematics at the University of Waterloo. “Doctors need to examine the value of the drug on a case-by-case basis, because for some cancers and some patient profiles, it may actually have the opposite of the intended effect by protecting tumour cells against stress.”
    The computer-simulated treatments use models that replicate both the drug and the cancerous cells in a virtual environment. Such models can give clinical trials in humans a considerable head-start and can provide insights to medical practitioners that would take much longer to be discovered in the field.
    “In clinical settings, drugs can sometimes be prescribed in a trial and error manner,” said Anita Layton, professor of applied mathematics and Canada 150 Research Chair in mathematical biology and medicine at Waterloo. “Our mathematical models help accelerate clinical trials and remove some of the guesswork. What we see with this drug is that it can do a lot of good but needs more study.”
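    The kind of in-silico experiment described here can be illustrated with a deliberately simple, hypothetical ordinary-differential-equation model of tumour burden under a drug whose kill rate depends on a patient-specific sensitivity; it is not the model published by the Waterloo group, only a sketch of the general approach.

```python
# Hypothetical ODE sketch of a simulated drug treatment (not the Waterloo model):
# tumour burden N(t) grows logistically and is reduced at a rate proportional
# to dose and a patient-specific sensitivity s.
from scipy.integrate import solve_ivp

def tumour(t, N, growth, capacity, dose, sensitivity):
    kill = dose * sensitivity * N[0]          # drug-induced death term
    return [growth * N[0] * (1 - N[0] / capacity) - kill]

for s in (0.8, 0.1):                          # responsive vs. poorly responding patient
    sol = solve_ivp(tumour, (0, 30), [1.0], args=(0.3, 100.0, 0.5, s))
    print(f"sensitivity={s}: simulated tumour burden at day 30 = {sol.y[0, -1]:.2f}")
```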
    The researchers say their work shows the importance of precision medicine when considering the use of metformin for cancer and other diseases. Precision medicine is an approach that assumes each patient requires individualized medical assessment and treatment.
    “Diseases and treatments are complicated,” Sadria said. “Everything about the patient matters, and even small differences can have a big impact on the effect of a drug, such as age, gender, genetic and epigenetic profiles. All these things are important and can affect a patient’s drug outcome. In addition, no one drug works for everyone, so doctors need to take a close look at each patient when considering treatments like metformin.”
    Sadria, Layton and co-author Deokhwa Seo’s paper was published in the journal BMC Cancer.
    Story Source:
    Materials provided by University of Waterloo. Note: Content may be edited for style and length.

  • A potential breakthrough for production of superior battery technology

    Micro supercapacitors could revolutionise the way we use batteries by increasing their lifespan and enabling extremely fast charging. Manufacturers of everything from smartphones to electric cars are therefore investing heavily in research and development of these electronic components. Now, researchers at Chalmers University of Technology, Sweden, have developed a method that represents a breakthrough in how such supercapacitors can be produced.
    “When discussing new technologies, it is easy to forget how important the manufacturing method is, so that they can actually be commercially produced and be impactful in society. Here, we have developed methods that can really work in production,” explains Agin Vyas, doctoral student at the Department of Microtechnology and Nanoscience at Chalmers University of Technology and lead author of the article.
    Supercapacitors consist of two electrical conductors separated by an insulating layer. They can store electrical energy and have many positive properties compared to a normal battery, such as much more rapid charging, more efficient energy distribution, and a much greater lifespan without loss of performance over the charge and discharge cycle. When a supercapacitor is combined with a battery in an electrically powered product, the battery life can be extended many times over, up to four times for commercial electric vehicles. And whether for personal electronic devices or industrial technologies, the benefits for the end consumer could be huge.
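    For a rough sense of scale, the energy stored in any capacitor follows E = ½CV²; the sketch below plugs in typical, assumed supercapacitor values to show why such a device is good at absorbing short, high-power pulses that would otherwise stress the battery.

```python
# Back-of-the-envelope numbers with assumed, typical values (illustrative only).
C = 100.0   # farads, an assumed supercapacitor module
V = 2.7     # volts, a typical cell voltage

energy_J = 0.5 * C * V**2                     # E = 1/2 * C * V^2
print(f"stored energy: {energy_J:.0f} J (~{energy_J / 3.6:.0f} mWh)")

peak_power_W = 500.0                          # hypothetical load pulse
print(f"covers a {peak_power_W:.0f} W pulse for ~{energy_J / peak_power_W:.2f} s")
```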
    “It would of course be very convenient to be able to quickly charge, for example, an electric car or not have to change or charge batteries as often as we currently do in our smartphones. But it would also represent a great environmental benefit and be much more sustainable, if batteries had a longer lifespan and did not need to be recycled in complicated processes,” says Agin Vyas.
    Manufacturing a big challenge
    But in practice, today’s supercapacitors are too large for many applications where they could be useful. They need to be about the same size as the battery they are connected to, which is an obstacle to integrating them in mobile phones or electric cars. Therefore, a large part of today’s research and development of supercapacitors is about making them smaller — significantly so.
    Agin Vyas and his colleagues have been working on developing ‘micro’ supercapacitors. These are so small that they can fit on the system circuits that control various functions in mobile phones, computers, electric motors and almost all the electronics we use today. This solution is also called ‘system-on-a-chip’.
    One of the most important challenges is that the minimal units need to be manufactured in such a way that they become compatible with other components in a system circuit and can easily be tailored for different areas of use. The new paper demonstrates a manufacturing process in which micro-supercapacitors are integrated with the most common way of manufacturing system circuits (known as CMOS).
    “We used a method known as spin coating, a cornerstone technique in many manufacturing processes. This allows us to choose different electrode materials. We also use alkylamine chains in reduced graphene oxide, to show how that leads to a higher charging and storage capacity,” explains Agin Vyas.
    “Our method is scalable and would involve reduced costs for the manufacturing process. It represents a great step forward in production technology and an important step towards the practical application of micro-supercapacitors in both everyday electronics and industrial applications.”
    The team has also developed a method for producing micro-supercapacitors from up to ten different materials in one unified manufacturing process, which means that their properties can be easily tailored to suit several different end applications.
    Story Source:
    Materials provided by Chalmers University of Technology. Original written by Karin Wik. Note: Content may be edited for style and length.

  • A new gravity sensor used atoms’ weird quantum behavior to peer underground

    The best way to find buried treasure may be with a quantum gravity sensor.

    In these devices, free-falling atoms reveal subtle variations in Earth’s gravitational pull at different places. Those variations reflect differences in the density of material beneath the sensor — effectively letting the instrument peer underground. In a new experiment, one of these machines teased out the tiny gravitational signature of an underground tunnel, researchers report in the Feb. 24 Nature.

    “Instruments like this would find many, many applications,” says Nicola Poli, an experimental physicist at the University of Florence, who coauthored a commentary on the study in the same issue of Nature.

    Poli imagines using quantum gravity sensors to monitor groundwater or magma beneath volcanoes, or to help archaeologists uncover hidden tombs or other artifacts without having to dig them up (SN: 11/2/17). These devices could also help farmers check soil quality or help engineers inspect potential construction sites for unstable ground.

    “There are many tools to measure gravity,” says Xuejian Wu, an atomic physicist at Rutgers University in Newark, N.J., who wasn’t involved in the study. Some devices measure how far gravity pulls down a mass hanging from a spring. Other tools use lasers to clock how fast an object tumbles down a vacuum chamber. But free-falling atoms, like those in quantum gravity sensors, are the most pristine, reliable test masses out there, Wu says. As a result, quantum sensors promise to be more accurate and stable in the long run than other gravity probes.

    Inside a quantum gravity sensor, a cloud of supercooled atoms is dropped down a chute. A pulse of light then splits each of the falling atoms into a superposition state — a quantum limbo where each atom exists in two places at once (SN: 11/7/19). Due to their slightly different positions in Earth’s gravitational field, the two versions of each atom feel a different downward tug as they fall. Another light pulse then recombines the split atoms.

    Thanks to the atoms’ wave-particle duality — a strange rule of quantum physics that says atoms can act like waves — the reunited atoms interfere with each other (SN: 1/13/22). That is, as the atom waves overlap, their crests and troughs can reinforce or cancel each other out, creating an interference pattern. That pattern reflects the slightly different downward pulls that the split versions of each atom felt as they fell — revealing the gravity field at the atom cloud’s location.
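
    The textbook relation behind such a measurement is that the interferometer phase scales as φ = k_eff · g · T², where k_eff is the effective wavevector of the light pulses and T the time between them. The numbers below are generic assumptions for a rubidium sensor, not the Birmingham instrument’s specifications.

```python
# Generic atom-gravimeter phase estimate (assumed parameters, not the
# Birmingham device): phi = k_eff * g * T^2, so small changes in g appear
# as measurable phase shifts in the interference pattern.
import numpy as np

wavelength = 780e-9                     # m, rubidium D2 light driving the pulses
k_eff = 2 * (2 * np.pi / wavelength)    # effective two-photon wavevector, rad/m
T = 0.1                                 # s, assumed time between light pulses
g = 9.81                                # m/s^2

print(f"total phase: {k_eff * g * T**2:.2e} rad")
print(f"shift from a 10-ppb change in g: {k_eff * (1e-8 * g) * T**2:.2e} rad")
```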

    Extremely precise measurements made by such atom-based devices have helped test Einstein’s theory of gravity (SN: 10/28/20) and measure fundamental constants, such as Newton’s gravitational constant (SN: 4/12/18). But atom-based gravity sensors are highly sensitive to vibrations from seismic activity, traffic and other sources.

    “Even very, very small vibrations create enough noise that you have to measure for a long time” at any location to weed out background tremors, says Michael Holynski, a physicist at the University of Birmingham in England. That has made quantum gravity sensing impractical for many uses outside the lab.  

    Holynski’s team solved that problem by building a gravity sensor with not one but two falling clouds of rubidium atoms. With one cloud suspended a meter above the other, the instrument could gauge the strength of gravity at two different heights in a single location. Comparing those measurements allowed the researchers to cancel out the effects of background noise.
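
    A tiny numerical toy shows why the differencing works: noise that is common to both atom clouds cancels in the subtraction, while a genuine vertical gradient in gravity survives. The noise scales below are assumptions for illustration.

```python
# Toy gradiometer illustration (assumed noise levels, not real instrument data).
import numpy as np

rng = np.random.default_rng(42)
n = 1000
true_gradient = -3.1e-6                      # m/s^2 per metre, rough free-air value
common = rng.normal(0, 1e-5, n)              # vibration noise felt by both clouds
independent = rng.normal(0, 1e-8, n)         # small uncorrelated noise per cloud

g_lower = 9.812 + common
g_upper = 9.812 + true_gradient * 1.0 + common + independent   # cloud 1 m higher

diff = g_upper - g_lower                     # common-mode noise cancels here
print(f"single-cloud scatter: {g_lower.std():.1e} m/s^2")
print(f"recovered gradient:   {diff.mean():.2e} +/- {diff.std():.1e} m/s^2 per m")
```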

    Holynski and colleagues tested whether their sensor — a 2-meter-tall chute on wheels tethered to a rolling cart of equipment — could detect an underground passageway on the University of Birmingham campus. The 2-by-2-meter concrete tunnel lay beneath a road between two multistory buildings. The quantum sensor measured the local gravitational field every 0.5 meters along an 8.5-meter line that crossed over the tunnel. Those readouts matched the predictions of a computer simulation, which had estimated the gravitational signal of the tunnel based on its structure and other factors that could influence the local gravitational field, such as nearby buildings.
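
    A forward model of the kind mentioned above can be sketched by treating the tunnel as an infinite horizontal line of missing mass, whose vertical gravity anomaly at horizontal offset x is g_z(x) = 2·G·λ·Z / (x² + Z²) for linear density deficit λ and depth Z to the axis. The depth and soil density below are assumptions, not the values used in the study.

```python
# Hedged forward model of a buried tunnel's gravity signature.
# Geometry/density values are illustrative assumptions, not the study's parameters.
import numpy as np

G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
area = 2.0 * 2.0                 # m^2, cross-section of the 2-by-2 m tunnel
rho_deficit = 1800.0             # kg/m^3, assumed density of the removed soil
depth = 2.0                      # m, assumed depth to the tunnel axis
lam = rho_deficit * area         # missing mass per metre of tunnel, kg/m

x = np.arange(-4.25, 4.26, 0.5)  # readings every 0.5 m along an 8.5 m line
g_anom = -2 * G * lam * depth / (x**2 + depth**2)   # negative: a mass deficit

print(f"peak anomaly: {g_anom.min() / 1e-8:.0f} microGal (1 microGal = 1e-8 m/s^2)")
```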

    Based on the machine’s sensitivity in this experiment, it could probably provide a reliable gravity measurement at each location in less than two minutes, the researchers estimate. That’s about one-tenth the time needed for other types of gravity sensors.

    The team has since built a downsized version of the gravity sensor used in the tunnel-detecting experiment. The new machine weighs about 15 kilograms, compared with the 300-kilogram beast used for the tunnel test. Other upgrades could also boost the gravity sensor’s speed.

    In the future, engineer Nicole Metje envisions building a quantum gravity sensor that could be pushed from place to place like a lawn mower. But portability isn’t the only challenge for making these tools more user-friendly, says Metje, a coauthor on the study who is also at the University of Birmingham. “At the moment, we still need someone with a physics degree to operate the sensor.”

    So hopeful beachcombers may be waiting a long time to trade in their metal detectors for quantum gravity sensors.

  • Research team makes breakthrough discovery in light interactions with nanoparticles, paving the way for advances in optical computing

    Computers are an indispensable part of our daily lives, and the need for ones that can work faster, solve complex problems more efficiently, and leave smaller environmental footprints by minimizing the required energy for computation is increasingly urgent. Recent progress in photonics has shown that it’s possible to achieve more efficient computing through optical devices that use interactions between metamaterials and light waves to apply mathematical operations of interest on the input signals, and even solve complex mathematical problems. But to date, such computers have required a large footprint and precise, large-area fabrication of the components, which, because of their size, are difficult to scale into more complex networks.
    A newly published paper in Physical Review Letters from researchers at the Advanced Science Research Center at the CUNY Graduate Center (CUNY ASRC) details a breakthrough discovery in nanomaterials and light-wave interactions that paves the way for development of small, low-energy optical computers capable of advanced computing.
    “The increasing energy demands of large data centers and inefficiencies in current computing architectures have become a real challenge for our society,” said Andrea Alù, Ph.D., the paper’s corresponding author, founding director of the CUNY ASRC’s Photonics Initiative and Einstein Professor of Physics at the Graduate Center. “Our work demonstrates that it’s possible to design a nanoscale object that can efficiently interact with light to solve complex mathematical problems with unprecedented speeds and nearly zero energy demands.”
    In their study, CUNY ASRC researchers designed a nanoscale object made of silicon so that, when interrogated with light waves carrying an arbitrary input signal, it is able to encode the corresponding solution of a complex mathematical problem into the scattered light. The solution is calculated at the speed of light, and with minimal energy consumption.
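    One way to picture the concept, as a loose analogy rather than the device physics reported in the paper, is to treat the engineered structure as a fixed linear scattering operator acting on the incoming light's amplitudes: if that operator behaves like the inverse of a matrix A, then a wave encoding b scatters into a wave encoding the solution of Ax = b.

```python
# Conceptual toy of wave-based equation solving (an analogy, not the paper's device).
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(4, 4)) + 4 * np.eye(4)   # a well-conditioned "problem" matrix
S = np.linalg.inv(A)                          # operator the structure is designed to mimic

b = rng.normal(size=4)                        # input signal carried by the light
x = S @ b                                     # "scattered" amplitudes encode the solution

print("residual |Ax - b|:", np.linalg.norm(A @ x - b))
```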
    “This finding is promising because it offers a practical pathway for creating a new generation of very energy-efficient, ultrafast, ultracompact nanoscale optical computers and other nanophotonic technologies that can be used for classical and quantum computations,” said Heedong Goh, Ph.D., the paper’s lead author and a postdoctoral research associate with Alù’s lab. “The very small size of these nanoscale optical computers is particularly appealing for scalability, because multiple nanostructures can be combined and connected together through light scattering to realize complex nanoscale computing networks.”
    Story Source:
    Materials provided by Advanced Science Research Center, GC/CUNY. Note: Content may be edited for style and length.

  • Researcher urges caution on AI in mammography

    Analyzing breast-cancer tumors with artificial intelligence has the potential to improve healthcare efficiency and outcomes. But doctors should proceed cautiously, because similar technological leaps previously led to higher rates of false-positive tests and over-treatment.
    That’s according to a new editorial in JAMA Health Forum co-written by Joann G. Elmore, MD, MPH, a researcher at the UCLA Jonsson Comprehensive Cancer Center, the Rosalinde and Arthur Gilbert Foundation Endowed Chair in Health Care Delivery and professor of medicine at the David Geffen School of Medicine at UCLA.
    “Without a more robust approach to the evaluation and implementation of AI, given the unabated adoption of emergent technology in clinical practice, we are failing to learn from our past mistakes in mammography,” the JAMA Health Forum editorial states. The piece, posted online Friday, was co-written with Christoph I. Lee, MD, MS, MBA, a professor of radiology at the University of Washington School of Medicine.
    One of those “past mistakes in mammography,” according to the authors, was adjunct computer-aided detection (CAD) tools, which grew rapidly in popularity in the field of breast cancer screening starting more than two decades ago. CAD was approved by the FDA in 1998, and by 2016 more than 92% of U.S. imaging facilities were using the technology to interpret mammograms and hunt for tumors. But the evidence showed CAD did not improve mammography accuracy. “CAD tools are associated with increased false positive rates, leading to overdiagnosis of ductal carcinoma in situ and unnecessary diagnostic testing,” the authors wrote. Medicare stopped paying for CAD in 2018, but by then the tools had racked up more than $400 million a year in unnecessary health costs.
    “The premature adoption of CAD is a premonitory symptom of the wholehearted embrace of emergent technologies prior to fully understanding their impact on patient outcomes,” Elmore and Lee wrote.
    The doctors suggest several safeguards to put in place to avoid “repeating past mistakes,” including tying Medicare reimbursement to “improved patient outcomes, not just improved technical performance in artificial settings.”
    Story Source:
    Materials provided by University of California – Los Angeles Health Sciences. Note: Content may be edited for style and length.

  • New imager microchip helps devices bring hidden objects to light

    Researchers from The University of Texas at Dallas and Oklahoma State University have developed an innovative terahertz imager microchip that can enable devices to detect and create images through obstacles that include fog, smoke, dust and snow.
    The team is working on a device for industrial applications that require imaging up to 20 meters away. The technology could also be adapted for use in cars to help drivers or autonomous vehicle systems navigate through hazardous conditions that reduce visibility. On an automotive display, for example, the technology could show pixelated outlines and shapes of objects, such as another vehicle or pedestrians.
    “The technology allows you to see in vision-impaired environments. In industrial settings, for example, devices using the microchips could help with packaging inspections for manufacturing process control, monitoring moisture content or seeing through steam. If you are a firefighter, it could help you see through smoke and fire,” said Dr. Kenneth K. O, professor of electrical and computer engineering and the Texas Instruments Distinguished University Chair in the Erik Jonsson School of Engineering and Computer Science.
    Yukun Zhu, a doctoral candidate in electrical engineering, announced the imaging technology on Feb. 21 at the virtual International Solid-State Circuits Conference, sponsored by the Institute of Electrical and Electronics Engineers (IEEE) and its Solid-State Circuits Society.
    The advance is the result of more than 15 years of work by O and his team of students, researchers and collaborators. This latest effort is supported by Texas Instruments (TI) through the TI Foundational Technology Research Program.
    “TI has been part of the journey through much of the 15 years,” said O, who is director of the Texas Analog Center of Excellence (TxACE) at UT Dallas. “The company has been a key supporter of the research.”
    The microchip emits radiation beams in the terahertz range (430 GHz) of the electromagnetic spectrum from pixels no larger than a grain of sand. The beams travel through fog, dust and other obstacles that optical light cannot penetrate and bounce off objects and back to the microchip, where the pixels pick up the signal to create images. Without the use of external lenses, the terahertz imager includes the microchip and a reflector that increases the imaging distance and quality and reduces power consumption.
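    Some quick, illustrative numbers help explain the design trade-offs: at 430 GHz the wavelength is roughly 0.7 mm, which passes through fog and dust far better than visible light, and the reflector sets the achievable angular resolution. The aperture used below is an assumption, not a specification from the paper.

```python
# Illustrative beam numbers for a 430 GHz reflection-mode imager
# (the reflector diameter is an assumed value, not from the paper).
c = 3.0e8                          # speed of light, m/s
f = 430e9                          # operating frequency, Hz
wavelength = c / f                 # ~0.7 mm

D = 0.30                           # m, assumed reflector aperture
theta = 1.22 * wavelength / D      # diffraction-limited divergence, rad
spot_at_20m = theta * 20.0         # resolvable spot size at 20 m, m

print(f"wavelength: {wavelength * 1e3:.2f} mm")
print(f"resolution: {theta * 1e3:.1f} mrad -> ~{spot_at_20m * 100:.0f} cm spot at 20 m")
```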
    The researchers designed the imager using complementary metal-oxide semiconductor (CMOS) technology. This type of integrated circuit technology is used to manufacture the bulk of consumer electronics devices, which makes the imager affordable. O’s group was one of the first to show that CMOS technology was viable for terahertz applications, and since then the team has worked to develop a variety of new applications.
    “Another breakthrough result enabled through innovations that overcame fundamental active-gain limits of CMOS is that this imaging technology consumes more than 100 times less power than the phased arrays currently being investigated for the same imaging applications. This and the use of CMOS make consumer applications of this technology possible,” said O, a fellow of the IEEE.
    TxACE is supported by the Semiconductor Research Corp., TI, the UT System and UT Dallas.
    “UT Dallas and Oklahoma State continue to discover technological innovations that will help shape the future,” said Dr. Swaminathan Sankaran, design director and Distinguished Member Technical Staff at TI Kilby Labs. “What Dr. O and his research team were able to accomplish was truly remarkable with this terahertz monostatic reflection-mode imager work. Their research paves a path for improved raw angular resolution and low-power, cost system integration, and we are excited to see what applications and use cases this terahertz imaging technology will lead to.”
    Story Source:
    Materials provided by University of Texas at Dallas. Original written by Kim Horner. Note: Content may be edited for style and length.

  • Using artificial intelligence to find anomalies hiding in massive datasets

    Identifying a malfunction in the nation’s power grid can be like trying to find a needle in an enormous haystack. Hundreds of thousands of interrelated sensors spread across the U.S. capture data on electric current, voltage, and other critical information in real time, often taking multiple recordings per second.
    Researchers at the MIT-IBM Watson AI Lab have devised a computationally efficient method that can automatically pinpoint anomalies in those data streams in real time. They demonstrated that their artificial intelligence method, which learns to model the interconnectedness of the power grid, is much better at detecting these glitches than some other popular techniques.
    Because the machine-learning model they developed does not require annotated data on power grid anomalies for training, it would be easier to apply in real-world situations where high-quality, labeled datasets are often hard to come by. The model is also flexible and can be applied to other situations where a vast number of interconnected sensors collect and report data, like traffic monitoring systems. It could, for example, identify traffic bottlenecks or reveal how traffic jams cascade.
    “In the case of a power grid, people have tried to capture the data using statistics and then define detection rules with domain knowledge to say that, for example, if the voltage surges by a certain percentage, then the grid operator should be alerted. Such rule-based systems, even empowered by statistical data analysis, require a lot of labor and expertise. We show that we can automate this process and also learn patterns from the data using advanced machine-learning techniques,” says senior author Jie Chen, a research staff member and manager of the MIT-IBM Watson AI Lab.
    The co-author is Enyan Dai, an MIT-IBM Watson AI Lab intern and graduate student at the Pennsylvania State University. This research will be presented at the International Conference on Learning Representations.
    Probing probabilities
    The researchers began by defining an anomaly as an event that has a low probability of occurring, like a sudden spike in voltage. They treat the power grid data as a probability distribution, so if they can estimate the probability densities, they can identify the low-density values in the dataset. The data points that are least likely to occur correspond to anomalies.
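    The probability-density framing lends itself to a compact illustration: fit a density to (simulated) sensor readings and flag the least likely ones. This is only a generic density-based sketch, not the graph-based model the MIT-IBM team developed.

```python
# Generic density-based anomaly detection sketch (not the authors' model).
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(3)
voltage = rng.normal(120.0, 0.5, size=(2000, 1))   # simulated normal readings
voltage[::400] += rng.choice([-6.0, 6.0], size=voltage[::400].shape)  # injected spikes

kde = KernelDensity(bandwidth=0.3).fit(voltage)    # estimate the probability density
log_density = kde.score_samples(voltage)
threshold = np.quantile(log_density, 0.01)         # the 1% least likely readings

print("flagged reading indices:", np.where(log_density < threshold)[0][:10])
```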