More stories

  • Personalized medicine: Platform enables comparative research on cancerous tumors

    Researchers at the Technion’s Rappaport Faculty of Medicine have developed an innovative algorithm that detects a continuous common denominator in multidimensional data gathered from the tumors of different patients. The study, published in Cell Systems, was led by Professor Shai Shen-Orr, Dr. Yishai Ofran, and Dr. Ayelet Alpert, and was conducted as a collaboration among researchers at the Technion, the Rambam Health Care Campus, the Shaare Zedek Medical Center, and the University of Texas.
    In recent years, cancer research has undergone a series of significant revolutions, including the introduction of high-resolution single-cell characterization capabilities; more specifically, simultaneous high-throughput profiling of cancer samples using single-cell RNA sequencing and proteomic analysis. This has generated vast quantities of multidimensional data on huge numbers of cells, allowing both healthy and malignant tissues to be characterized. This wealth of data has revealed the great variability between the tumors of different patients: the cellular profile, shaped by each patient’s genetic background, is unique to that patient.
    Despite the substantial advantage of characterizing each patient so precisely, this development hinders comparison across patients: in the absence of a common denominator, the comparisons essential for identifying prognostic markers (e.g., mortality or severity of illness) become impossible.
    The tuMap algorithm developed by the Technion researchers solves this complex challenge by means of a “variance-based comparison.” The algorithm makes it possible to place numerous different tumors on a uniform scale that serves as a benchmark for comparison. In this way, the tumors of different patients can be meaningfully compared, as can tumors of the same patient over the course of the disease (for example, at diagnosis and after treatment). The resolution the algorithm provides can be leveraged for clinical applications such as predicting various clinical indices with very high accuracy, outperforming traditional tools. Although the researchers tested the algorithm on leukemia tumors, they believe it will also be relevant for other cancer types.
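    The idea of scoring heterogeneous samples against one shared axis can be illustrated with a toy sketch; the code below is not the published tuMap implementation, and the patient matrices are synthetic stand-ins, but it shows how pooling cells, fitting a common low-dimensional axis, and summarizing each tumor’s position along it yields a uniform scale for comparison.

    ```python
    # Toy sketch (not the published tuMap code): project single-cell
    # profiles from several patients onto one shared axis so that
    # otherwise-incomparable tumors can be ranked on a common scale.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)

    # Hypothetical input: one cells-by-markers matrix per patient.
    patients = {f"patient_{i}": rng.normal(i * 0.5, 1.0, size=(200, 10))
                for i in range(3)}

    # Fit the axis on the pooled cells so every patient is scored
    # against the same reference, then summarize each tumor by the
    # median position of its cells along that axis.
    pooled = np.vstack(list(patients.values()))
    axis = PCA(n_components=1).fit(pooled)

    for name, cells in patients.items():
        score = float(np.median(axis.transform(cells)))
        print(f"{name}: position on shared scale = {score:.2f}")
    ```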
    The research was sponsored by the Israel Science Foundation, the Rappaport Family Institute for Research in the Medical Sciences, and the National Institutes of Health (NIH).
    Story Source:
    Materials provided by Technion-Israel Institute of Technology. Note: Content may be edited for style and length.

  • Physics meets democracy in this modeling study

    A study in the journal Physica A leverages concepts from physics to model how campaign strategies influence the opinions of an electorate in a two-party system.
    Researchers created a numerical model that describes how external influences, modeled as a random field, shift the views of potential voters as they interact with each other in different political environments.
    The model accounts for the behavior of conformists (people whose views align with the views of the majority in a social network); contrarians (people whose views oppose the views of the majority); and inflexibles (people who will not change their opinions).
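    A minimal agent-based sketch of this kind of dynamic appears below; it is not the authors’ model, and the population shares, field strength, and noise level are invented for illustration, but it captures the three behavioral types interacting under a random external field.

    ```python
    # Minimal sketch (not the authors' model): two-party opinion
    # dynamics with conformists, contrarians, and inflexibles under a
    # random external field standing in for campaign influence.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 1000
    opinion = rng.choice([-1, 1], size=N)           # two parties: -1 / +1
    role = rng.choice(["conformist", "contrarian", "inflexible"],
                      size=N, p=[0.7, 0.2, 0.1])    # assumed shares
    field = rng.normal(0.0, 0.5, size=N)            # random campaign field

    for step in range(50):
        majority = np.sign(opinion.mean() + 1e-9)   # current majority view
        for i in rng.integers(0, N, size=N):        # asynchronous updates
            if role[i] == "inflexible":
                continue                            # never changes opinion
            pull = majority if role[i] == "conformist" else -majority
            # An agent follows its social pull unless the external field
            # plus idiosyncratic noise pushes it the other way.
            opinion[i] = np.sign(pull + field[i] + rng.normal(0.0, 0.3))

    print("final vote share for the +1 party:", np.mean(opinion == 1))
    ```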
    “The interplay between these behaviors allows us to create electorates with diverse behaviors interacting in environments with different levels of dominance by political parties,” says first author Mukesh Tiwari, PhD, associate professor at the Dhirubhai Ambani Institute of Information and Communication Technology.
    “We are able to model the behavior and conflicts of democracies, and capture different types of behavior that we see in elections,” says senior author Surajit Sen, PhD, professor of physics in the University at Buffalo College of Arts and Sciences.
    Sen and Tiwari conducted the study with Xiguang Yang, a former UB physics student. Jacob Neiheisel, PhD, associate professor of political science at UB, provided feedback to the team, but was not an author of the research. The study was published online in Physica A in July and will appear in the journal’s Nov. 15 volume.

  • A novel neural network to understand symmetry, speed materials research

    Understanding structure-property relations is a key goal of materials research, according to Joshua Agar, a faculty member in Lehigh University’s Department of Materials Science and Engineering. Yet no metric currently exists for quantifying the structure of materials, because structure is complex and multidimensional.
    Artificial neural networks, a type of machine learning, can be trained to identify similarities — and even to correlate parameters such as structure and properties — but there are two major challenges, says Agar. One is that the vast majority of data generated by materials experiments is never analyzed. This is largely because such images, produced by scientists in laboratories all over the world, are rarely stored in a usable manner and are not usually shared with other research teams. The second challenge is that neural networks are not very effective at learning symmetry and periodicity (how periodic a material’s structure is), two features of utmost importance to materials researchers.
    Now, a team led by Lehigh University has developed a novel machine learning approach that creates similarity projections, enabling researchers to search an unstructured image database and identify trends for the first time. Agar and his collaborators developed and trained a neural network model to include symmetry-aware features and then applied their method to a set of 25,133 piezoresponse force microscopy images collected on diverse materials systems over five years at the University of California, Berkeley. The results: they were able to group similar classes of material together and observe trends, forming a basis from which to start to understand structure-property relationships.
    “One of the novelties of our work is that we built a special neural network to understand symmetry, and we use that as a feature extractor to make it much better at understanding images,” says Agar, a lead author of the paper in which the work is described: “Symmetry-Aware Recursive Image Similarity Exploration for Materials Microscopy,” published today in npj Computational Materials. In addition to Agar, the authors include Tri N. M. Nguyen, Yichen Guo, Shuyu Qin, and Kylie S. Frew of Lehigh University, and Ruijuan Xu of Stanford University. Nguyen, a lead author, was an undergraduate at Lehigh University and is now pursuing a Ph.D. at Stanford.
    The team was able to arrive at projections by employing Uniform Manifold Approximation and Projection (UMAP), a non-linear dimensionality reduction technique. This approach, says Agar, allows researchers to learn “...in a fuzzy way, the topology and the higher-level structure of the data and compress it down into 2D.”
    “If you train a neural network, the result is a vector, or a set of numbers, that is a compact descriptor of the features. Those features help classify things so that some similarity is learned,” says Agar. “What’s produced is still rather large in space, though, because you might have 512 or more different features. So, then you want to compress it into a space that a human can comprehend, such as 2D, or 3D — or, maybe, 4D.”
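    The compression step Agar describes can be sketched in a few lines; this is not the team’s pipeline, and the 512-dimensional feature vectors below are random stand-ins, but it shows how UMAP (via the umap-learn package) reduces per-image descriptors to a 2D map in which similar images land near each other.

    ```python
    # Minimal sketch (not the authors' pipeline): compress high-
    # dimensional neural-network feature vectors into a 2-D map with
    # UMAP. Requires the umap-learn package.
    import numpy as np
    import umap

    rng = np.random.default_rng(0)
    features = rng.normal(size=(1000, 512))  # stand-ins for 512-D descriptors

    embedding = umap.UMAP(n_components=2).fit_transform(features)
    print(embedding.shape)                   # (1000, 2): one point per image
    ```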
    By doing this, Agar and his team were able to take the 25,000-plus images and group very similar classes of material together.

  • Elastic polymer that is both stiff and tough resolves long-standing quandary

    Polymer science has made possible rubber tires, Teflon, Kevlar, plastic water bottles, and nylon jackets, among many other ubiquitous features of daily life. Elastic polymers, known as elastomers, can be stretched and released repeatedly, and are used in applications such as gloves and heart valves, where they need to last a long time without tearing. But a conundrum has long stumped polymer scientists: elastic polymers can be stiff, or they can be tough, but they can’t be both.
    This stiffness-toughness conflict is a challenge for scientists developing polymers that could be used in applications including tissue regeneration, bioadhesives, bioprinting, wearable electronics, and soft robots.
    In a paper published today in Science, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have resolved that long-standing conflict and developed an elastomer that is both stiff and tough.
    “In addition to developing polymers for emerging applications, scientists are facing an urgent challenge: plastic pollution,” said Zhigang Suo, the Allen E. and Marilyn M. Puckett Professor of Mechanics and Materials, the senior author of the study. “The development of biodegradable polymers has once again brought us back to fundamental questions — why are some polymers tough, but others brittle? How do we make polymers resist tearing under repeated stretching?”
    Polymer chains are made by linking together monomer building blocks. To make a material elastic, the polymer chains are crosslinked by covalent bonds. The more crosslinks, the shorter the polymer chains and the stiffer the material.
    “As your polymer chains become shorter, the energy you can store in the material becomes less and the material becomes brittle,” said Junsoo Kim, a graduate student at SEAS and co-first author of the paper. “If you have only a few crosslinks, the chains are longer, and the material is tough but it’s too squishy to be useful.”
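    The tradeoff Kim describes is captured by two classical results (textbook rubber elasticity and the Lake–Thomas scaling), sketched below; these are standard scalings, not the paper’s own derivation.

    ```latex
    % Classical scalings, not the paper's derivation. Here \nu is the
    % number density of network strands, N the number of monomers per
    % strand, k_B T the thermal energy, and U_b the bond energy.
    \[
      G \sim \nu\, k_B T \propto \frac{1}{N}
      \qquad \text{(stiffness: more crosslinks, shorter strands)}
    \]
    \[
      \Gamma \propto N^{1/2}\, U_b
      \qquad \text{(Lake--Thomas fracture energy: longer strands, tougher)}
    \]
    % Raising the crosslink density lowers N, so G rises while \Gamma
    % falls: a conventional network cannot maximize both at once.
    ```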
    To develop a polymer that is both stiff and tough, the researchers looked to physical, rather than chemical, bonds to link the polymer chains. These physical bonds, called entanglements, have been known in the field for almost as long as polymer science has existed, but they have been thought to affect only stiffness, not toughness.

  • New images lead to better prediction of shear thickening

    For the first time, researchers have been able to capture images providing unprecedented detail of how particles behave in a liquid suspension when the phenomenon known as shear thickening takes place. The work makes it possible to understand the processes behind shear thickening directly, where previously they could be understood only through inference and computational modeling.
    Shear thickening is a phenomenon that can occur when particles are suspended in a low-viscosity solution. If the concentration of particles is high enough, then when stress is applied to the solution it becomes very viscous — effectively behaving like a solid. When the stress is removed or dissipates, the suspension returns to its normal fluid-like viscosity. This phenomenon can be seen in popular YouTube videos in which people are able to run across a solution of corn starch suspended in water — but sink into the solution when they stand still.
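    A standard phenomenological way to express this behavior (not the model used in the study) is the power-law fluid relation sketched below.

    ```latex
    % Power-law (Ostwald-de Waele) fluid: apparent viscosity \eta as a
    % function of shear rate \dot{\gamma}, with consistency index K and
    % flow index n.
    \[
      \eta = K\, \dot{\gamma}^{\,n-1}
    \]
    % n > 1 gives shear thickening (viscosity rises with shear rate),
    % n < 1 gives shear thinning, and n = 1 recovers a Newtonian fluid.
    ```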
    Shear thickening can be a liability or an advantage, depending on the context.
    For example, in industries from food processing to pharmaceutical manufacturing, companies often try to pump liquids with high particle concentrations to make manufacturing processes more efficient and cost-effective. And if those companies don’t properly account for shear thickening, the liquids being pumped can jam or clog — costing them valuable time and potentially damaging their equipment.
    On the other hand, the properties of shear thickening can also be used to develop force-absorbing materials for use in applications such as body armor, or as a mechanism for controlling the physical characteristics of soft robotics devices.
    For these reasons, researchers have spent years trying to understand precisely how and why shear thickening occurs. However, researchers have been forced to rely on indirect experimentation, because they were unable to capture the precise behavior of the particles in solution as shear thickening takes place. Until now.

  • Screen time linked to risk of myopia in young people

    A new study published in one of the world’s leading medical journals has revealed a link between screen time and higher risk and severity of myopia, or short-sightedness, in children and young adults.
    The open-access research, published this week in The Lancet Digital Health, was undertaken by researchers and eye health experts from Singapore, Australia, China and the UK, including Professor Rupert Bourne from Anglia Ruskin University (ARU). The authors examined more than 3,000 studies investigating smart device exposure and myopia in children and young adults aged between 3 months and 33 years.
    After analysing and statistically combining the available studies, the authors found that high levels of smart device screen time, such as looking at a mobile phone, are associated with around a 30% higher risk of myopia; when combined with excessive computer use, the risk rose to around 80%.
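    The “statistically combining” step is typically an inverse-variance meta-analysis; the sketch below illustrates the idea with invented numbers, not the study’s actual data or code.

    ```python
    # Illustrative inverse-variance pooling of odds ratios on the log
    # scale (fixed-effect meta-analysis). The study values here are
    # made up for the example.
    import math

    # Hypothetical per-study results: (odds ratio, 95% CI low, high).
    studies = [(1.4, 1.1, 1.8), (1.2, 0.9, 1.6), (1.5, 1.2, 1.9)]

    num = den = 0.0
    for odds_ratio, lo, hi in studies:
        log_or = math.log(odds_ratio)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from the CI
        weight = 1.0 / se**2                             # inverse variance
        num += weight * log_or
        den += weight

    pooled = math.exp(num / den)
    print(f"pooled odds ratio: {pooled:.2f}")  # ~1.38, i.e. ~38% higher odds
    ```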
    The research comes as millions of children around the world have spent substantial time using remote learning methods following the closure of schools due to the COVID-19 pandemic.
    Professor Bourne, Professor of Ophthalmology in the Vision and Eye Research Institute at Anglia Ruskin University (ARU), said: “Around half the global population is expected to have myopia by 2050, so it is a health concern that is escalating quickly. Our study is the most comprehensive yet on this issue and shows a potential link between screen time and myopia in young people.
    “This research comes at a time when our children have been spending more time than ever looking at screens for long periods, due to school closures, and it is clear that urgent research is needed to further understand how exposure to digital devices can affect our eyes and vision. We also know that people underestimate their own screen time, so future studies should use objective measures to capture this information.”
    Story Source:
    Materials provided by Anglia Ruskin University. Note: Content may be edited for style and length.

  • Quantum networking milestone in real-world environment

    A team from the U.S. Department of Energy’s Oak Ridge National Laboratory, Stanford University and Purdue University developed and demonstrated a novel, fully functional quantum local area network, or QLAN, that uses entangled photons traveling through optical fiber to enable real-time adjustments to information shared among geographically separated systems at ORNL.
    This network exemplifies how experts might routinely connect quantum computers and sensors at a practical scale, thereby realizing the full potential of these next-generation technologies on the path toward the highly anticipated quantum internet. The team’s results, which are published in PRX Quantum, mark the culmination of years of related research.
    Local area networks that connect classical computing devices are nothing new, and QLANs have been successfully tested in tabletop studies. Quantum key distribution has been the most common example of quantum communications in the field thus far, but this procedure is limited because it only establishes security, not entanglement, between sites.
    “We’re trying to lay a foundation upon which we can build a quantum internet by understanding critical functions, such as entanglement distribution bandwidth,” said Nicholas Peters, the Quantum Information Science section head at ORNL. “Our goal is to develop the fundamental tools and building blocks we need to demonstrate quantum networking applications so that they can be deployed in real networks to realize quantum advantages.”
    When two photons — particles of light — are paired together, or entangled, they exhibit quantum correlations that are stronger than those possible with any classical method, regardless of the physical distance between them. These interactions enable counterintuitive quantum communications protocols that can only be achieved using quantum resources.
    One such protocol, remote state preparation, harnesses entanglement and classical communications to encode information by measuring one half of an entangled photon pair and effectively converting the other half to the preferred quantum state. Peters led the first general experimental realization of remote state preparation in 2005 while earning his doctorate in physics. The team applied this technique across all the paired links in the QLAN — a feat not previously accomplished on a network — and demonstrated the scalability of entanglement-based quantum communications.
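    In textbook form (this is the standard relation, not the team’s specific implementation), remote state preparation works as sketched below.

    ```latex
    % For a shared maximally entangled photon pair
    \[
      |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr),
    \]
    % measuring the first photon in the basis
    % \{ |\psi^{*}\rangle, |\psi^{*\perp}\rangle \}
    % leaves the second photon, on a successful outcome, in the target
    % state |\psi\rangle (up to a known correction). One local
    % measurement plus classical communication thus "steers" the
    % remote qubit into the desired state without sending it directly.
    ```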

  • Physical athletes’ visual skills prove sharper than action video game players’

    Athletes still have the edge over action video gamers when it comes to dynamic visual skills, a new study from the University of Waterloo shows.
    For an athlete, having strong visual skills can be the difference between delivering a peak performance and achieving average results.
    “Athletes involved in sports with a high level of movement — like soccer, football, or baseball — often score higher on dynamic visual acuity tests than non-athletes,” said Dr. Kristine Dalton of Waterloo’s School of Optometry & Vision Science. “Our research team wanted to investigate whether action video gamers — who, like e-sport athletes, are regularly immersed in a dynamic, fast-paced 2-D video environment for long periods of time — would also show superior levels of dynamic visual acuity on par with athletes competing in physical sport.”
    While visual acuity (clarity or sharpness of vision) is most often measured under static conditions during annual check-ups with an optometrist, research shows that testing dynamic visual acuity is a more effective measure of a person’s ability to see moving objects clearly — a baseline skill necessary for success in physical and e-sports alike.
    Using a dynamic visual acuity skills-test designed and validated at the University of Waterloo, researchers discovered that while physical athletes score highly on dynamic visual acuity tests as expected, action video game players tested closer to non-athletes.
    “Ultimately, athletes showed a stronger ability to identify smaller moving targets, which suggests visual processing differences exist between them and our video game players,” said Alan Yee, a PhD candidate in vision science. All participants were matched on their level of static visual acuity and refractive error, leaving dynamic visual acuity as the distinguishing factor in their test performance.
    These findings are also important for sports vision training centres that have been exploring the idea of developing video game-based training programs to help athletes elevate their performance.
    “Our findings show there is still a benefit to training in a 3-D environment,” said Dalton. “For athletes looking to develop stronger visual skills, the broader visual field and depth perception that come with physical training may be crucial to improving their dynamic visual acuity — and ultimately, their sport performance.”
    Story Source:
    Materials provided by University of Waterloo. Note: Content may be edited for style and length.