More stories

  • Personalized immunotherapy: Rapid screening of therapeutic combinations

An innovative testing platform that better mimics what cancer encounters in the body may allow for more precise, personalized therapies by enabling the rapid study of multiple therapeutic combinations against tumor cells. The platform, which uses a three-dimensional environment to more closely mirror a tumor microenvironment, is demonstrated in research published in Communications Biology.
    “This whole platform really gives us a way to optimize personalized immunotherapy on a rapid, high throughput scale,” said Jonathan Dordick, Institute Professor of chemical and biological engineering and member of the Center for Biotechnology and Interdisciplinary Studies (CBIS) at Rensselaer Polytechnic Institute, who led this research. “You can imagine somebody having cancer, and you quickly biopsy the tumor and then you use this biochip platform to identify very quickly — within a day or two — what specific treatment modality might be ideally suited against a particular cancer.”
Of particular interest to researchers is the behavior of a specific type of immune cell known as natural killer (NK) cells, which seek out cancerous or virus-infected cells within the body, bind to their receptors, and secrete an enzyme meant to kill the unwanted cells. The platform studied in this paper allows researchers to compare what happens when the NK cells are left to fight tumor cells on their own versus how they behave when an antibody or cancer drug, or a combination of the two, is added.
The platform is a small two-piece plastic chip about the size of a microscope slide. One side of the sandwich chip contains 330 tiny pillars upon which researchers can place an extracellular matrix, made of a gel-like substance, which mimics the mechanical environment of a tumor. When cancer cells are placed inside this gel-like structure, they are encouraged to grow into a spheroid shape, much as they would inside the body. The second piece contains 330 microwells to which NK cells can be added in suspension, much as they would flow, untethered, inside the body.
    At Rensselaer, Dordick collaborated with Seok-Joon Kwon, senior research scientist in CBIS, and Sneha Gopal, who recently received her Ph.D. based, in part, on this study. The Rensselaer team collaborated with researchers from Konyang University and Medical & Bio Decision Company Ltd. To test this platform, researchers studied two types of breast cancer cells, as well as pancreatic cancer cells, with various combinations of NK cells, two monoclonal antibodies, and an anti-cancer chemotherapy drug.
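    To make the combinatorial logic concrete, the sketch below lays out how such a screen might be enumerated in software. It is purely illustrative; the names, doses, and replicate arithmetic are our assumptions, not details from the paper.

```python
# Hypothetical layout of a combinatorial screen across 330 microwells.
# All names and counts below are illustrative assumptions.
from itertools import product

cancer_lines = ["breast_line_1", "breast_line_2", "pancreatic"]  # three cell types tested
nk_cells     = [False, True]               # without / with NK cells
antibodies   = ["none", "mAb_1", "mAb_2"]  # two monoclonal antibodies plus control
chemo        = ["none", "chemo_drug"]      # anti-cancer chemotherapy drug

conditions = list(product(cancer_lines, nk_cells, antibodies, chemo))
replicates = 330 // len(conditions)        # spread conditions over the 330 wells

print(f"{len(conditions)} conditions, about {replicates} replicate wells each")
```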
    “You can screen very quickly to determine what combinations of NK cells, antibodies, and chemotherapeutic drugs target the cancer cells within the spheroid geometry,” Dordick said. “What really is amazing is we see very significant differences between what happens in that spheroid, within the slots of the chip, versus what would happen in a more traditional two-dimensional cell culture that’s often used in the screening.”
In the spheroid design, for instance, the chemotherapy drug paclitaxel on its own had little effect on the three types of cancer cells, whereas in a traditional two-dimensional system, Dordick said, the drug may appear to perform well. In the spheroid, the drug worked dramatically better when combined with both NK cells and an antibody.
    “This platform moves researchers closer to personalized medicine,” said Deepak Vashishth, director of CBIS. “This work conducted by Professor Dordick and his research group is an excellent example of how we, at Rensselaer, are providing a new angle to human health by developing new approaches at the intersection of engineering and life sciences to enhance cures for diseases such as cancer.”
To further the potential use of this tool, Dordick said that it must be tested on a wide range of cancer types, including tumor microenvironments that consist of multiple different types of cells. In the future, he envisions that the platform could identify the combination therapies that work best against a patient’s specific cancer, enabling the delivery of truly personalized immunotherapy.

  • Antimatter from laser pincers

In the depths of space, there are celestial bodies where extreme conditions prevail: Rapidly rotating neutron stars generate super-strong magnetic fields, and black holes, with their enormous gravitational pull, can cause huge, energetic jets of matter to shoot out into space. An international physics team including researchers from the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) has now proposed a new concept that could allow some of these extreme processes to be studied in the laboratory: A special arrangement of two high-intensity laser beams could create conditions similar to those found near neutron stars. In the proposed process, an antimatter jet is generated and accelerated very efficiently. The experts present their concept in the journal Communications Physics.
The basis of the new concept is a tiny block of plastic, crisscrossed by micrometer-fine channels. It acts as a target for two lasers, which simultaneously fire ultra-strong pulses at the block, one from the right and one from the left — the block is, in effect, gripped by laser pincers. “When the laser pulses penetrate the sample, each of them accelerates a cloud of extremely fast electrons,” explains HZDR physicist Toma Toncian. “These two electron clouds then race toward each other with full force, interacting with the laser propagating in the opposite direction.” The ensuing collision is so violent that it produces an extremely large number of gamma quanta — light particles with an energy even higher than that of X-rays.
The swarm of gamma quanta is so dense that the light particles inevitably collide with each other. And then something remarkable happens: According to Einstein’s famous formula E = mc², light energy can transform into matter. In this case, mainly electron-positron pairs should be created. Positrons are the antiparticles of electrons. What makes this process special is that “very strong magnetic fields accompany it,” describes project leader Alexey Arefiev, a physicist at the University of California at San Diego. “These magnetic fields can focus the positrons into a beam and accelerate them strongly.” In numbers: Over a distance of just 50 micrometers, the particles should reach an energy of one gigaelectronvolt (GeV) — an energy that usually requires a full-grown particle accelerator.
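    As a rough, back-of-envelope check of the figures quoted above (our arithmetic, not the authors’), the pair-production threshold and the implied acceleration gradient work out as follows:

```python
# Sanity-check arithmetic for the numbers quoted in the article.
electron_rest_energy_MeV = 0.511                    # m_e * c^2

# Two gamma quanta can materialize into an electron-positron pair only if
# their combined energy exceeds 2 * m_e * c^2.
pair_threshold_MeV = 2 * electron_rest_energy_MeV   # ~1.022 MeV

# Quoted acceleration: 1 GeV gained over just 50 micrometers.
gradient_GeV_per_m = 1.0 / 50e-6                    # 20,000 GeV per meter

print(f"pair-production threshold: {pair_threshold_MeV:.3f} MeV")
print(f"implied gradient: {gradient_GeV_per_m:,.0f} GeV/m")
```

    Conventional radio-frequency accelerators reach gradients on the order of 0.1 GeV per meter, which is why an energy gain of 1 GeV normally requires a facility-scale machine.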
    Successful computer simulation
To see whether the unusual idea could work, the team tested it in an elaborate computer simulation. The results are encouraging: in principle, the concept should be feasible. “I was surprised that the positrons that were created in the end were formed into a high-energy and bundled beam in the simulation,” says a delighted Arefiev. What’s more, the new method should be much more efficient than previous ideas, in which only a single laser pulse is fired at an individual target: According to the simulation, the “laser double strike” should be able to generate up to 100,000 times more positrons than the single-pulse approach.
“Also, in our case, the lasers would not have to be quite as powerful as in other concepts,” Toncian explains. “This would probably make the idea easier to put into practice.” However, there are only a few places in the world where the method could be implemented. The most suitable would be ELI-NP (Extreme Light Infrastructure Nuclear Physics), a unique laser facility in Romania, largely funded by the European Union. It has two ultra-powerful lasers that can fire simultaneously at a target — the basic requirement for the new method.
    First tests in Hamburg
    Essential preliminary tests, however, could take place in Hamburg beforehand: The European XFEL, the most powerful X-ray laser in the world, is located there. The HZDR plays a major role in this large-scale facility: It leads a user consortium called HIBEF, which has been targeting matter in extreme states for some time. “At HIBEF, colleagues from HZDR, together with the Helmholtz Institute in Jena, are developing a platform that can be used to experimentally test whether the magnetic fields actually form as our simulations predict,” explains Toma Toncian. “This should be easy to analyze with the powerful X-ray flashes of the European XFEL.”
For astrophysics as well as nuclear physics, the new technique could be exceedingly useful. After all, some extreme processes in space are also likely to produce vast quantities of gamma quanta, which then quickly materialize again into high-energy pairs. “Such processes are likely to take place, among others, in the magnetosphere of pulsars, i.e. of rapidly rotating neutron stars,” says Alexey Arefiev. “With our new concept, such phenomena could be simulated in the laboratory, at least to some extent, which would then allow us to understand them better.”

  • Artificial intelligence models to analyze cancer images take shortcuts that introduce bias

Artificial intelligence and deep learning models are powerful tools in cancer treatment. They can be used to analyze digital images of tumor biopsy samples, helping physicians quickly classify the type of cancer, predict prognosis, and guide a course of treatment for the patient. However, unless these algorithms are properly calibrated, they can sometimes make inaccurate or biased predictions.
A new study led by researchers from the University of Chicago shows that deep learning models trained on large sets of cancer genetic and tissue histology data can easily identify the institution that submitted the images. The models, which use machine learning methods to “teach” themselves how to recognize certain cancer signatures, end up using the submitting site as a shortcut for predicting outcomes, lumping patients together with others from the same location instead of relying on each patient’s individual biology. This in turn may lead to bias and missed opportunities for treatment in patients from racial or ethnic minority groups, who may be more likely to be represented at certain medical centers and already struggle with access to care.
“We identified a glaring hole in the current methodology for deep learning model development, which makes certain regions and patient populations more susceptible to inaccurate algorithmic predictions,” said Alexander Pearson, MD, PhD, Assistant Professor of Medicine at UChicago Medicine and co-senior author. The study was published July 20 in Nature Communications.
One of the first steps in treatment for a cancer patient is taking a biopsy, a small tissue sample of a tumor. A very thin slice of the tumor is affixed to a glass slide, which is stained with multicolored dyes for review by a pathologist to make a diagnosis. Digital images can then be created for storage and remote analysis using a scanning microscope. While these steps are mostly standard across pathology labs, minor variations in the color or amount of stain, in tissue processing techniques, and in the imaging equipment can create unique signatures, like tags, on each image. These location-specific signatures aren’t visible to the naked eye, but are easily detected by powerful deep learning algorithms.
These algorithms have the potential to be a valuable tool for allowing physicians to quickly analyze a tumor and guide treatment options, but the introduction of this kind of bias means that the models aren’t always basing their analysis on the biological signatures they see in the images, but rather on the image artifacts generated by differences between submitting sites.
    Pearson and his colleagues studied the performance of deep learning models trained on data from the Cancer Genome Atlas, one of the largest repositories of cancer genetic and tissue image data. These models can predict survival rates, gene expression patterns, mutations, and more from the tissue histology, but the frequency of these patient characteristics varies widely depending on which institutions submitted the images, and the model often defaults to the “easiest” way to distinguish between samples — in this case, the submitting site.
    For example, if Hospital A serves mostly affluent patients with more resources and better access to care, the images submitted from that hospital will generally indicate better patient outcomes and survival rates. If Hospital B serves a more disadvantaged population that struggles with access to quality care, the images that site submitted will generally predict worse outcomes.
The research team found that once the models identified which institution submitted the images, they tended to use that as a stand-in for other characteristics of the image, including ancestry. In other words, if the staining or imaging techniques for a slide looked like it came from Hospital A, the models would predict better outcomes, whereas they would predict worse outcomes if it looked like an image from Hospital B. Likewise, if all patients at Hospital B had genetic characteristics indicating a worse prognosis, the algorithm would link the worse outcomes to Hospital B’s staining patterns rather than to anything it saw in the tissue.
“Algorithms are designed to find a signal to differentiate between images, and they do so lazily by identifying the site,” Pearson said. “We actually want to understand what biology within a tumor is more likely to predispose resistance to treatment or early metastatic disease, so we have to disentangle that site-specific digital histology signature from the true biological signal.”
The key to avoiding this kind of bias is to carefully consider the data used to train the models. Developers can make sure that different disease outcomes are distributed evenly across all sites in the training data, or isolate a given site while training or testing the model when the distribution of outcomes is unequal. The result will be more accurate tools that get physicians the information they need to quickly diagnose and plan treatments for cancer patients.
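    A minimal sketch of that second strategy, holding out entire submitting sites so no site appears on both sides of a split; the features and labels here are random placeholders, with scikit-learn’s GroupKFold doing the site-aware splitting:

```python
# Site-aware cross-validation: every slide from a given submitting site
# stays in the same fold, so a model cannot exploit site signatures at
# test time. Data here are random placeholders.
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
X = rng.random((120, 16))                  # slide-level features (placeholder)
y = rng.integers(0, 2, size=120)           # outcome labels (placeholder)
site = rng.choice(["site_A", "site_B", "site_C", "site_D"], size=120)

for train_idx, test_idx in GroupKFold(n_splits=4).split(X, y, groups=site):
    # No submitting site ever appears in both the train and test folds.
    assert set(site[train_idx]).isdisjoint(site[test_idx])
```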
“The promise of artificial intelligence is the ability to bring accurate and rapid precision health to more people,” Pearson said. “In order to meet the needs of the disenfranchised members of our society, however, we have to be able to develop algorithms which are competent and make relevant predictions for everyone.”

  • Scientists make X-ray vision-like camera to rapidly retrieve 3D images

It’s not exactly X-ray vision, but it’s close. In research published in the journal Optica, University of California, Irvine researchers describe a new type of camera technology that, when aimed at an object, can rapidly retrieve 3D images, displaying its chemical content down to the micrometer scale. The new tech promises to help companies inspect things like the insides of computer chips without having to pry them open — an advance the researchers say could speed up production of such goods more than a hundredfold.
    “This is a paper about a way to visualize things in 3D very fast, even at video rate,” said Dmitry Fishman — director of laser spectroscopy labs in the UCI Department of Chemistry — who, along with Eric Potma, professor of chemistry, spearheaded the work. The novel imaging tech is based on a so-called nonlinear optical effect in silicon — a semiconductor material used in visible-light cameras and detectors.
Through this nonlinear optical effect, conventional silicon detectors can sense light from the mid-infrared range of the electromagnetic spectrum. That matters, Fishman explained, because the mid-infrared spectral region carries important information about a material’s chemical makeup. “Most molecular vibrations and signatures are in the mid-infrared range,” he said.
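    As an illustration of the energy budget involved, assume the nonlinear effect lets a mid-infrared photon be absorbed together with a near-infrared pump photon (a simplification of the published mechanism, with example wavelengths chosen by us): the pair can jointly bridge silicon’s bandgap even though neither photon could alone.

```python
# Energy-budget check: can a mid-IR photon plus a pump photon together
# bridge silicon's ~1.12 eV bandgap? Wavelengths are illustrative.
HC_EV_NM = 1239.84            # photon energy in eV = 1239.84 / wavelength in nm
SI_BANDGAP_EV = 1.12          # indirect bandgap of silicon

mid_ir_nm = 3500.0            # example mid-infrared wavelength
pump_nm = 1550.0              # example near-infrared pump wavelength

e_mid = HC_EV_NM / mid_ir_nm  # ~0.35 eV, far below the bandgap on its own
e_pump = HC_EV_NM / pump_nm   # ~0.80 eV, also below the bandgap
combined = e_mid + e_pump     # ~1.15 eV

print(f"combined photon energy: {combined:.2f} eV "
      f"({'above' if combined > SI_BANDGAP_EV else 'below'} the Si bandgap)")
```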
Other technologies, he explained, are slow to retrieve images because the laser light must scan across the object, a process that takes considerable time. “A nonlinear optical ‘trick’ with short laser pulses allowed us to capture a depth-resolved image on a camera in one shot, thus providing an alternative method to what other people are doing — and the advance is that this is not just faster, but also produces 3D images with chemical contrast,” Fishman said.
And the imaging technology isn’t just for computer chips. Potma explained that the system can also image ceramics, such as the heat shield plates used on space shuttles, and reveal clues about any structural weaknesses within.
    The research follows in the wake of work by Potma and Fishman and a team of researchers published last year in Nature’s Light: Science & Applications that describes the first steps toward creating efficient mid-infrared detection technology using off-the-shelf silicon-based cameras. Back then, the technology was just beginning to take shape, but now, Fishman explained, it’s getting close to being ready for the mainstream. “This time we made it much more efficient and better,” he said.
    Funding for the work came from the National Institutes of Health and the National Science Foundation. The work was done in collaboration between UCI scientists and Yong Chen, a professor in the Epstein Department of Industrial & Systems Engineering at the University of Southern California.
    Story Source:
Materials provided by the University of California, Irvine.

  • Infrared held in a pincer

Many applications, from fiber-optic telecommunications to biomedical imaging, require substances that emit light in the near-infrared (NIR) range. A research team in Switzerland has now developed the first chromium complex that emits light in the coveted longer-wavelength NIR-II range. In the journal Angewandte Chemie, the team introduces the underlying concept: a drastic change in the electronic structure of the chromium caused by the specially tailored ligands that envelop it.
Many materials that emit NIR light are based on expensive or rare metal complexes. Cheaper alternatives that emit in the NIR-I range, between 700 and 950 nm, have been developed, but NIR-II-emitting complexes of non-precious metals remain extremely rare. Luminescence in the NIR-II range (1000 to 1700 nm) is particularly advantageous for in vivo imaging, for example, because this light penetrates very deep into tissues.
The luminescence of complexes is based on the excitation of electrons, for example through the absorption of light. When an excited electron drops back down to its ground state, part of the energy is emitted as radiation. The wavelength of this radiation depends on the energy difference between the electronic states. In complexes, this difference is determined largely by the type and arrangement of the ligands bound to the metal.
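    For a sense of scale, the NIR windows cited above translate into transition energies via E = hc/λ; this short conversion is ours, for illustration:

```python
# Convert the NIR-I and NIR-II wavelength windows into transition energies.
HC_EV_NM = 1239.84   # E [eV] = 1239.84 / wavelength [nm]

windows = {"NIR-I": (700, 950), "NIR-II": (1000, 1700)}
for name, (lo_nm, hi_nm) in windows.items():
    print(f"{name} ({lo_nm}-{hi_nm} nm): "
          f"{HC_EV_NM / hi_nm:.2f}-{HC_EV_NM / lo_nm:.2f} eV")
```

    In other words, pushing emission from NIR-I into NIR-II means shrinking the relevant energy gap from roughly 1.3-1.8 eV down to roughly 0.7-1.2 eV, which is what the ligand-induced changes described below accomplish.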
    In typical chemical (covalent) bonds, each partner brings one electron to share in a bonding pair; in many complexes both of the electrons come from the ligand. However, the line between these types of bonds is fluid: metal-ligand bonds can have partial covalent character (nephelauxetic effect). As a consequence, the energy of certain excited states is reduced, giving the emitted radiation a longer wavelength. This has been observed for polypyridine ligands, which cause the ruby red emission of trivalent chromium (Cr(III)) in complexes to shift into the NIR-I range.
To increase the covalence of the metal-ligand bond and further lengthen the emission wavelength, Narayan Sinha, working in a team led by Claude Piguet and Oliver S. Wenger at the Universities of Basel and Geneva (Switzerland), switched from classic polypyridine ligands to a newly tailored, charged, tridentate chelate ligand. The term chelate is derived from the Greek word for the pincer of a crab, and tridentate means that the ligand has three binding sites with which it grabs the central metal ion like a pincer.
    In the resulting new complex, the Cr(III) ion is surrounded on all sides by two tridentate charged chelate ligands to form an octahedral shape. This results in a drastically altered, unusual electronic structure with a high electron density on the Cr(III). In the axial direction, charge transfer takes place from the ligands to the metal, but in the equatorial plane of the octahedron, charge transfer moves from the metal to the ligands. The combined “push” and “pull” interactions likely have a strong influence on the spectroscopically relevant electrons of the Cr(III) — the key to the NIR-II emissions of the new complex.
    Story Source:
Materials provided by Wiley.

  • Team streamlines neural networks to be more adept at computing on encrypted data

This week, at the 38th International Conference on Machine Learning (ICML 2021), researchers at the NYU Center for Cyber Security at the NYU Tandon School of Engineering are revealing new insights into the basic functions that drive the ability of neural networks to make inferences on encrypted data.
In the paper, “DeepReDuce: ReLU Reduction for Fast Private Inference,” the team focuses on linear and non-linear operators, key features of neural network frameworks that, depending on the operation, exact a heavy toll in time and computational resources. When neural networks compute on encrypted data, many of these costs are incurred by the rectified linear activation function (ReLU), a non-linear operation.
Brandon Reagen, professor of computer science and engineering and of electrical and computer engineering, and a team of collaborators, including Ph.D. student Nandan Kumar Jha and Zahra Ghodsi, a former doctoral student advised by Siddharth Garg, developed a framework called DeepReDuce. It offers a solution through the rearrangement and reduction of ReLUs in neural networks.
Reagen explained that this shift requires a fundamental reassessment of where components are placed in neural network systems, and how many of them are needed.
    “What we are trying to do is rethink how neural nets are designed in the first place,” he explained. “You can skip a lot of these time- and computationally-expensive ReLU operations and still get high performing networks at 2 to 4 times faster run time.”
The team found that, compared to the state of the art for private inference, DeepReDuce improved accuracy by up to 3.5% and reduced ReLU count by up to 3.5×.
The inquiry is not merely academic. As the use of AI grows alongside concerns about the security of personal, corporate, and government data, neural networks are increasingly making computations on encrypted data. In such scenarios, where neural networks generate private inferences (PIs) on hidden data without disclosing inputs, it is the non-linear functions that exact the highest “cost” in time and power. Because these costs increase the difficulty and time it takes for learning machines to perform PI, researchers have struggled to lighten the load ReLUs exert on such computations.
The team’s work builds on an innovative technology called CryptoNAS. Described in an earlier paper whose authors include Ghodsi and a third Ph.D. student, Akshaj Veldanda, CryptoNAS optimizes the use of ReLUs much as one might rearrange rocks in a stream to optimize the flow of water: it rebalances the distribution of ReLUs in the network and removes redundant ones.
DeepReDuce expands on CryptoNAS by streamlining the process further. It comprises a set of optimizations for the judicious removal of ReLUs applied after CryptoNAS’s reorganization step. The researchers tested DeepReDuce by using it to remove ReLUs from classic networks, finding that they could significantly reduce inference latency while maintaining high accuracy.
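    As a loose illustration of the core operation (ours, not the authors’ code), the snippet below counts the ReLU layers in a small PyTorch network and swaps a subset for identity functions; DeepReDuce chooses which ReLUs to drop far more carefully, guided by accuracy.

```python
# Illustrative only: count ReLUs in a tiny network and remove some of them.
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
)

relu_idx = [i for i, m in enumerate(model) if isinstance(m, nn.ReLU)]
print(f"{len(relu_idx)} ReLU layers before reduction")

# Drop every second ReLU (a stand-in for DeepReDuce's accuracy-aware choice).
for i in relu_idx[1::2]:
    model[i] = nn.Identity()

print(sum(isinstance(m, nn.ReLU) for m in model), "ReLU layers after")
```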
Reagen, with Mihalis Maniatakos, research assistant professor of electrical and computer engineering, is also part of a collaboration with data security company Duality to design a new microchip to handle computation on fully encrypted data.
The research on ReLUs was supported by ADA and the Data Protection in Virtual Environments (DPRIVE) program at the U.S. Defense Advanced Research Projects Agency (DARPA) and the Center for Applications Driving Architectures.
    Story Source:
Materials provided by NYU Tandon School of Engineering.

  • Exoskeletons have a problem: They can strain the brain

Exoskeletons — wearable devices used by workers on assembly lines or in warehouses to alleviate stress on their lower backs — may compete for valuable resources in the brain while people work, canceling out the physical benefits of wearing them, a new study suggests.
    The study, published recently in the journal Applied Ergonomics, found that when people wore exoskeletons while performing tasks that required them to think about their actions, their brains worked overtime and their bodies competed with the exoskeletons rather than working in harmony with them. The study indicates that exoskeletons may place enough burden on the brain that potential benefits to the body are negated.
    “It’s almost like dancing with a really bad partner,” said William Marras, senior author of the study, professor of integrated systems engineering and director of The Ohio State University Spine Research Institute.
    “The exoskeleton is trying to anticipate your moves, but it’s not going well, so you fight with the exoskeleton, and that causes this change in your brain which changes the muscle recruitment — and could cause higher forces on your lower back, potentially leading to pain and possible injuries.”
For the study, researchers asked 12 people — six men and six women — to repeatedly lift a medicine ball in two 30-minute sessions. For one of the sessions, the participants wore an exoskeleton. For the other, they did not.
The exoskeleton, which is attached to the user’s chest and legs, is designed to help control posture and motion during lifting to protect the lower back and reduce the possibility of injury.

  • New simulator helps robots sharpen their cutting skills

    Researchers from the University of Southern California (USC) Department of Computer Science and NVIDIA have unveiled a new simulator for robotic cutting that can accurately reproduce the forces acting on a knife as it slices through common foodstuffs, such as fruit and vegetables. The system could also simulate cutting through human tissue, offering potential applications in surgical robotics. The paper was presented at the Robotics: Science and Systems (RSS) Conference 2021 on July 16, where it received the Best Student Paper Award.
    In the past, researchers have had trouble creating intelligent robots that replicate cutting. One challenge: in the real world, no two objects are the same, and current robotic cutting systems struggle with variation. To overcome this, the team devised a unique approach to simulate cutting by introducing springs between the two halves of the object being cut, represented by a mesh. These springs are weakened over time in proportion to the force exerted by the knife on the mesh.
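    A toy rendition of that spring-weakening rule (with made-up constants, not the paper’s model) might look like this:

```python
# Toy damage model: springs joining the two mesh halves weaken in
# proportion to the knife force; a spring at zero stiffness is severed.
damage_rate = 10.0        # assumed weakening coefficient
dt = 1e-3                 # simulation time step (s)
knife_force = 5.0         # force the knife exerts on the mesh (N)

stiffness = [100.0] * 10  # ten springs along the cut line (N/m)

for step in range(1, 5001):
    stiffness = [max(0.0, k - damage_rate * knife_force * dt) for k in stiffness]
    if all(k == 0.0 for k in stiffness):
        print(f"all springs severed after {step} steps")
        break
```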
“What makes ours a special kind of simulator is that it is ‘differentiable,’ which means that it can help us automatically tune these simulation parameters from real-world measurements,” said lead author Eric Heiden, a Ph.D. student in computer science at USC. “That’s important because closing this reality gap is a significant challenge for roboticists today. Without this, robots may never break out of simulation into the real world.”
To transfer skills from simulation to reality, the simulator must be able to model a real system. In one of the experiments, the researchers used a dataset of force profiles from a physical robot to produce highly accurate predictions of how the knife would move in real life. In addition to applications in the food processing industry, where robots could take over dangerous tasks like repetitive cutting, the simulator could improve the accuracy of haptic force feedback in surgical robots, helping to guide surgeons and prevent injury.
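    A minimal sketch of what “differentiable” buys in practice (placeholder data and a deliberately toy force model, not the authors’ simulator): fit a simulation parameter by gradient descent until the simulated knife force matches a measured profile.

```python
# Fit one simulation parameter so a toy force profile matches "measured" data.
import torch

t = torch.linspace(0.0, 1.0, steps=50)
measured = 5.0 * torch.exp(-2.0 * t)                 # stand-in measured force (N)

damage_rate = torch.tensor(0.5, requires_grad=True)  # parameter to tune
optimizer = torch.optim.Adam([damage_rate], lr=0.05)

for _ in range(300):
    optimizer.zero_grad()
    simulated = 5.0 * torch.exp(-damage_rate * t)    # toy differentiable simulator
    loss = torch.mean((simulated - measured) ** 2)
    loss.backward()
    optimizer.step()

print(f"fitted damage rate: {damage_rate.item():.2f}")   # converges toward 2.0
```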
    “Here, it is important to have an accurate model of the cutting process and to be able to realistically reproduce the forces acting on the cutting tool as different kinds of tissue are being cut,” said Heiden. “With our approach, we are able to automatically tune our simulator to match such different types of material and achieve highly accurate simulations of the force profile.” In ongoing research, the team is applying the system to real-world robots.
Co-authors are Miles Macklin, Yashraj S. Narang, Dieter Fox, Animesh Garg, and Fabio Ramos, all of NVIDIA.
    Video: https://www.youtube.com/watch?v=bN4yqHhfAfQ
    The full paper (open access) and blog post are available here: https://diff-cutting-sim.github.io/
    Story Source:
Materials provided by the University of Southern California. Original written by Caitlin Dawson.