More stories

  • Next step in simulating the universe

    Computer simulations have struggled to capture the impact of elusive particles called neutrinos on the formation and growth of the large-scale structure of the Universe. But now, a research team from Japan has developed a method that overcomes this hurdle.
    In a study published this month in The Astrophysical Journal, researchers led by the University of Tsukuba present simulations that accurately depict the role of neutrinos in the evolution of the Universe.
    Why are these simulations important? One key reason is that they can set constraints on a currently unknown quantity: the neutrino mass. If this quantity is set to a particular value in the simulations and the simulation results differ from observations, that value can be ruled out. However, the constraints can be trusted only if the simulations are accurate, which was not guaranteed in previous work. The team behind this latest research aimed to address this limitation.
    “Earlier simulations used certain approximations that might not be valid,” says lead author of the study Lecturer Kohji Yoshikawa. “In our work, we avoided these approximations by employing a technique that accurately represents the velocity distribution function of the neutrinos and follows its time evolution.”
    To do this, the research team directly solved a system of equations known as the Vlasov-Poisson equations, which describe how particles move in the Universe. They then carried out simulations for different values of the neutrino mass and systematically examined the effects of neutrinos on the large-scale structure of the Universe.
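    For reference, the Vlasov-Poisson system governs how the phase-space distribution function f(x, v, t) of collisionless particles evolves under the gravitational potential φ. Written in its schematic textbook form (the published simulations solve a comoving-coordinate, cosmological version coupled to N-body dark matter, which is not reproduced here):

```latex
% Schematic (textbook) form of the Vlasov-Poisson system; the paper's
% cosmological simulations solve a comoving-coordinate version of this.
\[
  \frac{\partial f}{\partial t}
    + \boldsymbol{v}\cdot\nabla_{\boldsymbol{x}} f
    - \nabla_{\boldsymbol{x}}\phi\cdot\nabla_{\boldsymbol{v}} f = 0,
  \qquad
  \nabla^{2}\phi = 4\pi G\,\rho,
  \qquad
  \rho_{\nu}(\boldsymbol{x},t) = m_{\nu}\!\int\! f(\boldsymbol{x},\boldsymbol{v},t)\,\mathrm{d}^{3}v .
\]
```

    Here f is the neutrino phase-space distribution function, φ is the gravitational potential, and ρ is the total matter density that sources it, with the neutrino contribution ρ_ν given by the velocity integral of f times the neutrino mass m_ν. Evolving f directly on a phase-space grid, rather than approximating it, is what the article means by accurately representing the neutrinos' velocity distribution function and following its time evolution.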
    The simulation results demonstrate, for example, that neutrinos suppress the clustering of dark matter — the ‘missing’ mass in the Universe — and in turn galaxies. They also show that neutrino-rich regions are strongly correlated with massive galaxy clusters and that the effective temperature of the neutrinos varies substantially depending on the neutrino mass.
    “Overall, our findings suggest that neutrinos considerably affect large-scale structure formation, and that our simulations provide an accurate account of the important effects of neutrinos,” explains Lecturer Yoshikawa. “It is also reassuring that our new results are consistent with those from entirely different simulation approaches.”

    Story Source:
    Materials provided by University of Tsukuba. Note: Content may be edited for style and length.

  • AI predicts which drug combinations kill cancer cells

    When healthcare professionals treat patients suffering from advanced cancers, they usually need to use a combination of different therapies. In addition to cancer surgery, the patients are often treated with radiation therapy, medication, or both.
    Medications can be combined, with different drugs acting on different cancer cells. Combinatorial drug therapies often improve the effectiveness of treatment, and they can reduce harmful side-effects if the dosage of the individual drugs can be lowered. However, experimental screening of drug combinations is very slow and expensive, and therefore often fails to uncover the full benefits of combination therapy. With the help of a new machine learning method, researchers can identify the best combinations for selectively killing cancer cells with a specific genetic or functional makeup.
    Researchers at Aalto University, University of Helsinki and the University of Turku in Finland developed a machine learning model that accurately predicts how combinations of different cancer drugs kill various types of cancer cells. The new AI model was trained with a large set of data obtained from previous studies, which had investigated the association between drugs and cancer cells. ‘The model learned by the machine is actually a polynomial function familiar from school mathematics, but a very complex one,’ says Professor Juho Rousu from Aalto University.
    The research results were published in the journal Nature Communications, demonstrating that the model found associations between drugs and cancer cells that were not observed previously. ‘The model gives very accurate results. For example, the values of the so-called correlation coefficient were more than 0.9 in our experiments, which points to excellent reliability,’ says Professor Rousu. In experimental measurements, a correlation coefficient of 0.8-0.9 is considered reliable.
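    The workflow described here (a complex polynomial function fit to drug-cell response data, judged by the correlation between predicted and measured effects) can be sketched with ordinary tools. The snippet below is purely illustrative, using synthetic data and a generic scikit-learn polynomial regression rather than the authors' actual model or the published dataset:

```python
# Minimal sketch (not the authors' model): fit a polynomial surface to
# drug-combination responses and check held-out predictions with the
# Pearson correlation coefficient. All data here are synthetic.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Toy features: dose of drug A, dose of drug B, one cell-line descriptor.
X = rng.uniform(0.0, 1.0, size=(500, 3))
# Toy "measured" inhibition with a drug-drug interaction term plus noise.
y = (0.6 * X[:, 0] + 0.4 * X[:, 1]
     + 0.8 * X[:, 0] * X[:, 1]
     + 0.2 * X[:, 2]
     + rng.normal(0.0, 0.05, size=500))

# A regularised polynomial model: degree-2 terms capture pairwise
# drug-drug and drug-cell interactions, which is the sense in which the
# learned model is "a polynomial function, but a very complex one".
model = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1.0))
model.fit(X[:400], y[:400])

pred = model.predict(X[400:])
r, _ = pearsonr(pred, y[400:])
print(f"Pearson correlation on held-out combinations: {r:.2f}")
```

    In this toy setting the degree-2 interaction terms are exactly what lets the model describe how two drugs jointly affect a given cell line; the published model is far larger, but the evaluation logic (correlation between predicted and measured responses) is the same as quoted above.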
    The model accurately predicts how a drug combination selectively inhibits particular cancer cells when the effect of the drug combination on that type of cancer has not been previously tested. ‘This will help cancer researchers to prioritize which drug combinations to choose from thousands of options for further research,’ says researcher Tero Aittokallio from the Institute for Molecular Medicine Finland (FIMM) at the University of Helsinki.
    The same machine learning approach could be used for non-cancerous diseases. In this case, the model would have to be re-trained with data related to that disease. For example, the model could be used to study how different combinations of antibiotics affect bacterial infections or how effectively different combinations of drugs kill cells that have been infected by the SARS-CoV-2 coronavirus.

    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  • Microfluidic system with cell-separating powers may unravel how novel pathogens attack

    To develop effective therapeutics against pathogens, scientists need to first uncover how they attack host cells. An efficient way to conduct these investigations on an extensive scale is through high-speed screening tests called assays.
    Researchers at Texas A&M University have invented a high-throughput cell separation method that can be used in conjunction with droplet microfluidics, a technique whereby tiny drops of fluid containing biological or other cargo can be moved precisely and at high speeds. Specifically, the researchers successfully isolated pathogens attached to host cells from those that were unattached within a single fluid droplet using an electric field.
    “Other than cell separation, most biochemical assays have been successfully converted into droplet microfluidic systems that allow high-throughput testing,” said Arum Han, professor in the Department of Electrical and Computer Engineering and principal investigator of the project. “We have addressed that gap, and now cell separation can be done in a high-throughput manner within the droplet microfluidic platform. This new system certainly simplifies studying host-pathogen interactions, but it is also very useful for environmental microbiology or drug screening applications.”
    The researchers reported their findings in the August issue of the journal Lab on a Chip.
    Microfluidic devices consist of networks of micron-sized channels or tubes that allow for controlled movement of fluids. Recently, microfluidics using water-in-oil droplets have gained popularity for a wide range of biotechnological applications. These droplets, which are picoliters (millionths of a microliter) in volume, can be used as platforms for carrying out biological reactions or transporting biological materials. Millions of droplets within a single chip facilitate high-throughput experiments, saving not just laboratory space but also the cost of chemical reagents and manual labor.
    Biological assays can involve different cell types within a single droplet, which eventually need to be separated for subsequent analyses. This task is extremely challenging in a droplet microfluidic system, Han said.
    “Getting cell separation within a tiny droplet is extremely difficult because, if you think about it, first, it’s a tiny 100-micron diameter droplet, and second, within this extremely tiny droplet, multiple cell types are all mixed together,” he said.
    To develop the technology needed for cell separation, Han and his team chose a host-pathogen model system consisting of Salmonella bacteria and human macrophages, a type of immune cell. When both of these cell types are introduced within a droplet, some of the bacteria adhere to the macrophages. The goal of the experiments was to separate the Salmonella cells that attached to the macrophages from those that did not.
    For cell separation, Han and his team constructed two pairs of electrodes that generated an oscillating electric field in close proximity to the droplet containing the two cell types. Since the bacteria and the host cells have different shapes, sizes and electrical properties, the researchers found that the electric field produced a different force on each cell type, moving one cell type at a time and separating the cells into two different locations within the droplet. To split the mother droplet into two daughter droplets, each containing one cell type, the researchers also added a downstream Y-shaped splitting junction.
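    The article does not name the separation mechanism, but a size- and property-dependent force produced by a non-uniform, oscillating field is characteristic of dielectrophoresis. For background only (this is the standard textbook expression for a small spherical particle, not necessarily the model used by the Texas A&M team, and real cells are neither spherical nor homogeneous), the time-averaged force is:

```latex
% Standard time-averaged dielectrophoretic (DEP) force on a small spherical
% particle of radius r in a medium of permittivity eps_m; given as general
% background, not as the authors' exact model.
\[
  \langle \boldsymbol{F}_{\mathrm{DEP}} \rangle
    = 2\pi\,\varepsilon_{m}\, r^{3}\,
      \operatorname{Re}\!\bigl[K(\omega)\bigr]\,
      \nabla\!\left|\boldsymbol{E}_{\mathrm{rms}}\right|^{2},
  \qquad
  K(\omega) = \frac{\varepsilon_{p}^{*}-\varepsilon_{m}^{*}}
                   {\varepsilon_{p}^{*}+2\,\varepsilon_{m}^{*}} .
\]
```

    Because the force scales with particle volume (r³) and with the Clausius-Mossotti factor K(ω), which depends on the complex permittivities of the particle and the medium, bacteria and macrophages experience different forces in the same field — the kind of contrast that can pull two cell types to different locations within a droplet.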
    Han said that although these experiments were carried out with a host and pathogen whose interaction is well established, their new microfluidic system equipped with in-drop separation is most useful when the pathogenicity of a bacterial species is unknown. He added that their technology enables quick, high-throughput screening in these situations and for other applications where cell separation is required.
    “Liquid handling robotic hands can conduct millions of assays but are extremely costly. Droplet microfluidics can do the same in millions of droplets, much faster and much cheaper,” Han said. “We have now integrated cell separation technology into droplet microfluidic systems, allowing the precise manipulation of cells in droplets in a high-throughput manner, which was not possible before.”

    Story Source:
    Materials provided by Texas A&M University. Original written by Vandana Suresh. Note: Content may be edited for style and length.

  • Report assesses promises and pitfalls of private investment in conservation

    The Ecological Society of America (ESA) today released a report entitled “Innovative Finance for Conservation: Roles for Ecologists and Practitioners” that offers guidelines for developing standardized, ethical and effective conservation finance projects.
    Public and philanthropic sources currently supply most of the funds for protecting and conserving species and ecosystems. However, the private sector is now driving demand for market-based mechanisms that support conservation projects with positive environmental, social and financial returns. Examples of projects that can support this triple bottom line include green infrastructure for stormwater management, clean transport projects and sustainable production of food and fiber products.
    “The reality is that public and philanthropic funds are insufficient to meet the challenge to conserve the world’s biodiversity,” said Garvin Professor and Senior Director of Conservation Science at Cornell University Amanda Rodewald, the report’s lead author. “Private investments represent a new path forward both because of their enormous growth potential and their ability to be flexibly adapted to a wide variety of social and ecological contexts.”
    Today’s report examines the legal, social and ethical issues associated with innovative conservation finance and offers resources and guidelines for increasing private capital commitments to conservation. It also identifies priority actions that individuals and organizations working in conservation finance will need to adopt in order to “mainstream” the field.
    One priority action is to standardize the metrics that allow practitioners to compare and evaluate projects. While the financial services and investment sectors regularly employ standardized indicators of financial risk and return, it is more difficult to apply such indicators to conservation projects. Under certain conservation financing models, for example, returns on investment are partially determined by whether the conservation project is successful — but “success” can be difficult to quantify when it is defined by complex social or environmental changes, such as whether a bird species is more or less at risk of going extinct as a result of a conservation project.
    Another priority action is to establish safeguards and ethical standards for involving local stakeholders, including Indigenous communities. In the absence of robust accountability and transparency measures, mobilizing private capital in conservation can result in unjust land grabs or in unscrupulous investments where profits flow disproportionately to wealthy or powerful figures. The report offers guidelines for ensuring that conservation financing improves the prosperity of local communities.
    According to co-author Peter Arcese, a professor at the University of British Columbia and adjunct professor at Cornell University, opportunities in conservation finance are growing for patient investors who are interested in generating modest returns while simultaneously supporting sustainable development.
    “Almost all landowners I’ve worked with in Africa and North and South America share a deep desire to maintain or enhance the environmental, cultural and aesthetic values of the ecosystems their land supports,” Arcese said. “By creating markets and stimulating investment in climate mitigation, and forest, water and biodiversity conservation projects, we can offer landowners alternative income sources and measurably slow habitat loss and degradation.”
    Rodewald sees a similar landscape of interest and opportunity. “No matter the system — be it a coffee plantation in the Andes, a timber harvest in the Pacific Northwest, or a farm in the Great Plains — I am reminded again and again that conservation is most successful when we safeguard the health and well-being of local communities. Private investments can be powerful tools to do just that,” said Rodewald.
    Report: Amanda Rodewald et al. 2020. “Innovative Finance for Conservation: Roles for Ecologists and Practitioners.”

    Story Source:
    Materials provided by Ecological Society of America. Note: Content may be edited for style and length.

  • Esports: Fit gamers challenge ‘fat’ stereotype

    Esports players are up to 21 per cent more likely to be a healthy weight than the general population, hardly smoke, and drink less too, finds a new QUT (Queensland University of Technology) study.
    The findings, published in the International Journal of Environmental Research and Public Health, were based on 1400 survey participants from 65 countries.
    Key findings:
    • This is the first study to investigate the BMI (Body Mass Index) status of a global sample of esports players (how BMI categories are commonly defined is sketched just after this list).
    • Esports players were between 9 and 21 per cent more likely to be a healthy weight than the general population.
    • Esports players drank and smoked less than the general population.
    • The top 10 per cent of esports players were significantly more physically active than lower-level players, suggesting that physical activity could influence esports expertise.
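    For context, BMI is body weight in kilograms divided by height in metres squared, and survey studies of this kind typically classify respondents using the standard WHO cut-offs. The snippet below is a minimal illustration of those textbook thresholds; the exact categories applied in the QUT analysis are not spelled out in this summary.

```python
# Illustrative only: the standard BMI formula and common WHO category
# cut-offs; the QUT study's exact classification scheme is assumed, not quoted.
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index = weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

def who_category(value: float) -> str:
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "healthy weight"
    if value < 30.0:
        return "overweight"
    if value < 40.0:
        return "obese"
    return "morbidly obese (class III obesity)"

print(who_category(bmi(70.0, 1.78)))  # -> healthy weight (BMI ~22.1)
```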
    QUT esports researcher Michael Trotter said the results were surprising considering global obesity levels.
    “The findings challenge the stereotype of the morbidly obese gamer,” he said.
    Mr Trotter said the animated satire South Park poked fun at the unfit gamer but the link between video gaming and obesity had not been strongly established.
    “When you think of esports, there are often concerns raised regarding sedentary behaviour and poor health as a result, and the study revealed some interesting and mixed results,” he said.

    “As part of their training regime, elite esports athletes spend more than an hour per day engaging in physical exercise as a strategy to enhance gameplay and manage stress,” he said.
    The World Health Organisation guidelines recommend a minimum of 150 minutes of physical activity per week.
    “Only top-level players surveyed met physical activity guidelines, with the best players exercising on average four days a week,” the PhD student said.
    However, the study also found that esports players were 4.03 per cent more likely to be morbidly obese than the global population.
    Mr Trotter said strategies should be developed to support players classed at the higher end of BMI categories.

    “Exercise and physical activity play a role in success in esports and should be a focus for players and organisations training esports players,” Mr Trotter said.
    “This will mean that in the future, young gamers will have more reason and motivation to be physically active.
    “Grassroots esports pathways, such as growing university and high school esports are likely to be the best place for young esports players to develop good health habits for gamers.”
    The research also found that esports players were 7.8 per cent more likely to abstain from drinking, and of those players who do drink, only 0.5 per cent reported drinking daily.
    The survey showed that only 3.7 per cent of esports players smoked daily, well below the global rate of 18.7 per cent.
    Future research will investigate how high-school and university esports programs can improve health outcomes and increase physical activity for gaming students.
    The study was led by QUT’s School of Exercise and Nutrition Sciences, Faculty of Health, in collaboration with the Department of Psychology at Umeå University in Sweden.

    Story Source:
    Materials provided by Queensland University of Technology. Note: Content may be edited for style and length.

  • Teaching computers the meaning of sensor names in smart home

    The aim of smart homes is to make life easier for those living in them. Applications for environment-aided daily life may have a major social impact, fostering active ageing and enabling older adults to remain independent for longer. One of the keys to smart homes is the system’s ability to deduce the human activities taking place. To this end, different types of sensors are used to detect the changes triggered by inhabitants in this environment (turning lights on and off, opening and closing doors, etc.).
    Normally, the information generated by these sensors is processed using data analysis methods, and the most successful systems are based on supervised learning techniques, in which a person labels the data and an algorithm automatically learns its meaning. Nevertheless, one of the main problems with smart homes is that a system trained in one environment is not valid in another: ‘Algorithms are usually closely linked to a specific smart environment, to the types of sensor existing in that environment and their configuration, as well as to the concrete habits of one individual. The algorithm learns all this easily, but is then unable to transfer it to a different environment,’ explains Gorka Azkune, a member of the UPV/EHU’s IXA group.
    Giving sensors names
    To date, sensors have been identified using numbers, meaning that ‘they lost any meaning they may have had,’ continues Dr Azkune. ‘We propose using sensor names instead of identifiers, to enable their meaning, their semantics, to be used to determine the activity to which they are linked. Thus, what the algorithm learns in one environment may be valid in a different one, even if the sensors are not the same, because their semantics are similar. This is why we use natural language processing techniques.’
    The researcher also explains that the techniques used are totally automatic. ‘At the end of the day, the algorithm learns the words first and then the representation that we develop using those words. There is no human intervention. This is important from the perspective of scalability, since it has been proven to overcome the aforementioned difficulty.’ Indeed, the new approach has achieved similar results to those obtained using the knowledge-based method.
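    The gist of this semantics-based transfer can be sketched in a few lines. The snippet below is purely illustrative (hand-made toy word vectors and invented sensor names, not the IXA group's actual pipeline, which uses full natural language processing models): each sensor is embedded as the average of the word vectors of the words in its name, so semantically similar sensors from different homes land close together, unlike opaque numeric identifiers.

```python
# Illustrative sketch: represent each sensor by the average word embedding
# of the words in its name, so that an activity model can transfer between
# homes whose sensors have similar names. Vectors and names are invented.
import numpy as np

# Toy word vectors; a real system would use pretrained embeddings
# (e.g. word2vec or GloVe) learned from large text corpora.
WORD_VECTORS = {
    "kitchen":  np.array([0.9, 0.1, 0.0]),
    "pantry":   np.array([0.8, 0.2, 0.1]),
    "bedroom":  np.array([0.1, 0.9, 0.0]),
    "door":     np.array([0.0, 0.1, 0.9]),
    "lamp":     np.array([0.2, 0.7, 0.3]),
}

def sensor_embedding(sensor_name: str) -> np.ndarray:
    """Average the embeddings of the words in a sensor's name."""
    words = sensor_name.lower().split("_")
    vectors = [WORD_VECTORS[w] for w in words if w in WORD_VECTORS]
    return np.mean(vectors, axis=0)

def similarity(a: str, b: str) -> float:
    va, vb = sensor_embedding(a), sensor_embedding(b)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# A model trained on "kitchen_door" events can treat "pantry_door" events
# in a new home as near-equivalent, unlike opaque numeric sensor IDs.
print(similarity("kitchen_door", "pantry_door"))   # high (~0.99)
print(similarity("kitchen_door", "bedroom_lamp"))  # low  (~0.40)
```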

    Story Source:
    Materials provided by University of the Basque Country. Note: Content may be edited for style and length.

  • Math enables custom arrangements of liquid 'nesting dolls'

    While the mesmerizing blobs in a classic lava lamp may appear magical, the colorful shapes move in response to temperature-induced changes in density and surface tension. This process, known as liquid-liquid phase separation, is critical to many functions in living cells, and plays a part in making products like medicines and cosmetics.
    Now Princeton University researchers have overcome a major challenge in studying and engineering phase separation. Their system, reported in a paper published Nov. 19 in Physical Review Letters, allows for the design and control of complex mixtures with multiple phases — such as nested structures reminiscent of Russian matryoshka dolls, which are of special interest for applications such as drug synthesis and delivery.
    Their system provides researchers a new way to examine, predict and engineer interactions between multiple liquid phases, including arrangements of mixtures with an arbitrary number of separated phases, the researchers said.
    The arrangement of phases is based on the minimization of surface energies, which capture the interaction energies between molecules at the interfaces of phases. This tends to maximize the contact area between two phases with low surface tension, and minimize or eliminate contact between phases with high surface tension.
    The new method uses the mathematical tools of graph theory to track which phases contact each other within a mixture. The method can predict the final arrangements of phases in a mixture when the surface energies are known, and can also be used to reverse-engineer mixture properties that give rise to desired structures.
    “If you tell us which phases you have and what the surface tensions are, we can tell you how phases will arrange themselves. We can also do it the other way around — if you know how you want the phases to be arranged, we can tell you what surface tensions are needed,” said senior author Andrej Košmrlj, an assistant professor of mechanical and aerospace engineering.
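    The authors' graph-theoretic method handles mixtures with any number of phases, but the underlying energy argument can be illustrated with the classical two-droplet case. In the sketch below (illustrative only, not the paper's general algorithm), spreading coefficients built from the three pairwise surface tensions decide whether the droplets nest, stay separate, or partially engulf one another, which is exactly the "maximize low-tension contact, avoid high-tension contact" logic described above.

```python
# Minimal sketch of the surface-energy argument for two droplet phases
# (1 and 2) suspended in a background phase (3), using the classical
# spreading-coefficient criterion. This is textbook wetting theory, not
# the paper's general graph-based method for arbitrary numbers of phases.
def arrangement(g12: float, g13: float, g23: float) -> str:
    """Predict the equilibrium arrangement from surface tensions g_ij."""
    s1 = g23 - (g12 + g13)   # phase 1 spreading between 2 and 3
    s2 = g13 - (g12 + g23)   # phase 2 spreading between 1 and 3
    s3 = g12 - (g13 + g23)   # background spreading between 1 and 2
    if s2 > 0:
        return "nested: droplet 1 fully engulfed by droplet 2"
    if s1 > 0:
        return "nested: droplet 2 fully engulfed by droplet 1"
    if s3 > 0:
        return "separate droplets (1 and 2 never touch)"
    return "partially engulfed compound droplet"

# A low 1-2 tension relative to the 1-3 tension favours a 'Russian doll':
print(arrangement(g12=1.0, g13=5.0, g23=2.0))  # nested: 1 inside 2
print(arrangement(g12=6.0, g13=2.0, g23=2.5))  # separate droplets
print(arrangement(g12=2.0, g13=2.5, g23=2.2))  # partially engulfed
```

    The same criterion can be read in reverse, which is the "other way around" Košmrlj describes: choosing surface tensions that make the desired spreading coefficient positive selects the target arrangement.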

    “The approach is very general, and we think it will have an impact on many different fields,” from cell biology and pharmaceuticals to 3D printing and carbon sequestration technologies, said Košmrlj.
    The work began as the junior paper of Milena Chakraverti-Wuerthwein, a physics concentrator from Princeton’s Class of 2020. She was working with Sheng Mao, then a postdoctoral research associate in Košmrlj’s group, building on previous research that explored phase-separated mixtures. That work developed a computational framework for predicting the number of separated phases and their composition, but did not systematically investigate the actual arrangements of phases.
    Chakraverti-Wuerthwein started drawing examples of multicomponent mixtures, with each phase represented by a different color. At one point, she said, she felt like she was “going in circles,” but then “took a step back and thought about the distinguishing feature that makes one of these morphologies different from another. I came up with the idea that it’s really the edges where phases are touching each other. That was the birth of the idea of using the graphs,” in which each phase is represented by a colored dot, and the lines between dots indicate which phases touch one another in a mixture.
    “That was the spark we needed, because once you can represent it in terms of graphs, then it’s very easy to enumerate all the possibilities” for different arrangements of phases, said Košmrlj.
    Chakraverti-Wuerthwein is a co-lead author of the paper along with Mao, who is now an assistant professor at Peking University in China. Coauthor Hunter Gaudio, a 2020 graduate of Villanova University, helped run simulations to produce all distinct arrangements of four phases during summer 2019 as a participant in the Princeton Center for Complex Materials’ Research Experience for Undergraduates program.

    “Normally, liquids like to make simple droplets, and not much else. With this theory, one can program droplets to spontaneously organize into chains, stacks, or nested layers, like Russian dolls,” said Eric Dufresne, a professor of soft and living materials at ETH Zürich in Switzerland, who was not involved in the research. “This could be useful for controlling a complex sequence of chemical reactions, as found in living cells. The next challenge will be to develop experimental methods to realize the interactions specified by the theory.”
    Košmrlj is part of a group of Princeton faculty members exploring various facets and applications of liquid-liquid phase separation — a major focus of an Interdisciplinary Research Group recently launched by the Princeton Center for Complex Materials with support from the National Science Foundation.
    In liquid environments, there is a tendency for small droplets to morph into larger droplets over time — a process called coarsening. However, in living cells and industrial processes it is desirable to achieve structures of specific size. Košmrlj said his team’s future work will consider how coarsening might be controlled to achieve mixtures with targeted small-scale structures. Another open question is how multicomponent mixtures form in living systems, where active biological processes and the basic physics of the materials are both contributing factors.
    Chakraverti-Wuerthwein, who will begin a Ph.D. program in biophysical sciences at the University of Chicago in 2021, said it was gratifying to see “that this kernel of an idea that I came up with ended up being something valuable that could be expanded into a more broadly applicable tool.”
    The work was supported by the U.S. National Science Foundation through the Princeton University Materials Research Science and Engineering Center, and through the Research Experience for Undergraduates program of the Princeton Center for Complex Materials.

  • AI model uses retinal scans to predict Alzheimer's disease

    A form of artificial intelligence designed to interpret a combination of retinal images was able to successfully identify a group of patients who were known to have Alzheimer’s disease, suggesting the approach could one day be used as a predictive tool, according to an interdisciplinary study from Duke University.
    The novel computer software looks at retinal structure and blood vessels on images of the inside of the eye that have been correlated with cognitive changes.
    The findings, appearing last week in the British Journal of Ophthalmology, provide proof-of-concept that machine learning analysis of certain types of retinal images has the potential to offer a non-invasive way to detect Alzheimer’s disease in symptomatic individuals.
    “Diagnosing Alzheimer’s disease often relies on symptoms and cognitive testing,” said senior author Sharon Fekrat, M.D., retina specialist at the Duke Eye Center. “Additional tests to confirm the diagnosis are invasive, expensive, and carry some risk. Having a more accessible method to identify Alzheimer’s could help patients in many ways, including improving diagnostic precision, allowing entry into clinical trials earlier in the disease course, and planning for necessary lifestyle adjustments.”
    Fekrat is part of an interdisciplinary team at Duke that also includes expertise from Duke’s departments of Neurology, Electrical and Computer Engineering, and Biostatistics and Bioinformatics. The team built on earlier work in which they identified changes in retinal blood vessel density that correlated with changes in cognition. They found decreased density of the capillary network around the center of the macula in patients with Alzheimer’s disease.
    Using that knowledge, they then trained a machine learning model, known as a convolutional neural network (CNN), using four types of retinal scans as inputs to teach a computer to discern relevant differences among images.
    Scans from 159 study participants were used to build the CNN; 123 participants were cognitively healthy, and 36 were known to have Alzheimer’s disease.
    “We tested several different approaches, but our best-performing model combined retinal images with clinical patient data,” said lead author C. Ellis Wisely, M.D., a comprehensive ophthalmologist at Duke. “Our CNN differentiated patients with symptomatic Alzheimer’s disease from cognitively healthy participants in an independent test group.”
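    As a rough illustration of what such a multimodal model can look like (a generic sketch, not the Duke team's published architecture; the input shape, layer sizes and clinical variables are assumptions, and the actual model takes several retinal imaging modalities rather than one), a CNN branch for a retinal scan can be merged with a small dense branch for clinical data before a final Alzheimer's-versus-control prediction:

```python
# Illustrative only: a small multimodal network in the spirit of the
# approach described (a CNN branch for retinal images plus a dense branch
# for clinical data, merged for a binary prediction). Shapes, layer sizes
# and variables are assumptions, not the published Duke architecture.
from tensorflow import keras
from tensorflow.keras import layers

# Branch 1: one retinal image modality (assumed 224x224 grayscale).
image_in = keras.Input(shape=(224, 224, 1), name="retinal_scan")
x = layers.Conv2D(16, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.GlobalAveragePooling2D()(x)

# Branch 2: a handful of clinical variables (e.g. age, sex, visual acuity).
clinical_in = keras.Input(shape=(4,), name="clinical_data")
c = layers.Dense(16, activation="relu")(clinical_in)

# Merge the two branches and predict Alzheimer's vs. cognitively healthy.
merged = layers.concatenate([x, c])
out = layers.Dense(1, activation="sigmoid", name="alzheimers_prob")(merged)

model = keras.Model(inputs=[image_in, clinical_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[keras.metrics.AUC()])
model.summary()
```

    The design point the quote makes is the merge step: the image features alone were not the best performer, so the clinical covariates are concatenated with the learned image representation before the final classification layer.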
    Wisely said it will be important to enroll a more diverse group of patients to build models that can predict Alzheimer’s in all racial groups as well as in those who have conditions such as glaucoma and diabetes, which can also alter retinal and vascular structures.
    “We believe additional training using images from a larger, more diverse population with known confounders will improve the model’s performance,” added co-author Dilraj S. Grewal, M.D., Duke retinal specialist.
    He said additional studies will also determine how well the AI approach compares to current methods of diagnosing Alzheimer’s disease, which often include expensive and invasive neuroimaging and cerebral spinal fluid tests.
    “Links between Alzheimer’s disease and retinal changes — coupled with non-invasive, cost-effective, and widely available retinal imaging platforms — position multimodal retinal image analysis combined with artificial intelligence as an attractive additional tool, or potentially even an alternative, for predicting the diagnosis of Alzheimer’s,” Fekrat said.

    Story Source:
    Materials provided by Duke University Medical Center. Note: Content may be edited for style and length.