More stories

  • 3D-patient tumor avatars: Maximizing their potential for next-generation precision oncology

    At any given time, most cancer patients are receiving a treatment that does not significantly benefit them while enduring physical and financial toxicity. Aiming to guide each patient to the optimal treatment, precision medicine has been expanding from genetic mutations to other drivers of clinical outcome, and there has been a concerted effort to create “avatars” of patient tumors for testing and selecting therapies before administering them to patients.
    A recently published Cancer Cell paper, which represents several National Cancer Institute consortia and includes key opinion leaders from both the research and clinical sectors in the United States and Europe, laid out the vision for next-generation, functional precision medicine by recommending measures to enable 3D patient tumor avatars (3D-PTAs) to guide treatment decisions in the clinic. According to Dr. Xiling Shen, the corresponding author of the article and chief scientific officer of the Terasaki Institute for Biomedical Innovation, the power of 3D-PTAs, which include patient-derived organoids, 3D bioprinting, and microscale models, lies in their accurate, real-life depiction of a tumor and its microenvironment, and in the speed and scalability with which they can test and predict the efficacy of prospective therapeutic drugs. To fully realize this aim and maximize clinical accuracy, however, many steps are needed to standardize methods and criteria, design clinical trials, and incorporate complete patient data for the best possible outcome in personalized care.
    The use of such tools and resources can involve a great variety of materials, methods, and data handling, however, and to ensure the accuracy and integrity of any clinical decision making, major efforts are needed to aggregate, standardize, and validate the uses of 3D-PTAs. The National Cancer Institute’s Patient-Derived Models of Cancer Consortium and other groups have initiated official protocol standardization, but much work remains to be done.
    The authors emphasize that in addition to unifying and standardizing protocols across a widespread number of research facilities, there must be quantification using validated software pipelines, and information must be codified and shared among all the research groups involved. They also recommend that more extensive and far-reaching clinical patient profiles be compiled, encompassing every facet of a patient’s history, including not only medical but also demographic information, as both are important factors in patient outcome. To achieve standardization in this regard, regulatory infrastructure provided by the National Institutes of Health and other institutes and journals must also be included to allow reliable global data sharing and access.
    Clinical trials are also a major part of the 3D-PTA effort, and to date, studies have been conducted to examine clinical trial workflows and turnaround times using 3D-PTA. The authors advise innovative clinical trial designs that can help with selecting patients for specific trials or custom treatments, especially when coupled with the patient’s clinical and demographic information.
    Combining these patient omics profiles with information in 3D-PTA functional data libraries can be facilitated by well-defined computational pipelines, and the authors advocate the use of relevant consortia, such as the NCI Patient-Derived Models of Cancer Program, PDXNet, the Tissue Engineering Collaborative, and the Cancer Systems Biology Centers, as well as European research infrastructure such as INFRAFRONTIER and EuroPDX.
    Integrating data from existing 3D-PTA initiatives, consortia, and biobanks with omics profiles can bring precision medicine to a new level, providing enhanced vehicles for making optimal choices among approved therapeutic drugs as well as investigational, alternative, and non-chemotherapeutic drugs. It can also provide solutions for patients experiencing drug resistance and expand opportunities for drug repurposing.
    “The integration of the 3D-PTA platform is a game-changing tool for oncological drug development,” said Ali Khademhosseini, Director and CEO of the Terasaki Institute for Biomedical Innovation. “We must combine it in a robust fashion with existing cancer genomics to produce the most powerful paradigm for precision oncology.”

  • Harnessing artificial intelligence technology for IVF embryo selection

    An artificial intelligence algorithm can determine non-invasively, with about 70 percent accuracy, if an in vitro fertilized embryo has a normal or abnormal number of chromosomes, according to a new study from researchers at Weill Cornell Medicine.
    Having an abnormal number of chromosomes, a condition called aneuploidy, is a major reason embryos derived from in vitro fertilization (IVF) fail to implant or result in a healthy pregnancy. One of the current methods for detecting aneuploidy involves the biopsy-like sampling and genetic testing of cells from an embryo — an approach that adds cost to the IVF process and is invasive to the embryo. The new algorithm, STORK-A, described in a paper published Dec. 19 in Lancet Digital Health, can help predict aneuploidy without the disadvantages of biopsy. It operates by analyzing microscope images of the embryo and incorporates information about maternal age and the IVF clinic’s scoring of the embryo’s appearance.
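    As a concrete illustration, a model of this kind can fuse features from the embryo image with the tabular inputs the study describes (maternal age and the clinic’s morphology grade). The sketch below is only an assumed, generic architecture in PyTorch, not the published STORK-A model:

    ```python
    # Illustrative sketch: a multimodal binary classifier combining an embryo
    # microscope image with tabular features. Layer sizes are arbitrary.
    import torch
    import torch.nn as nn

    class AneuploidyClassifier(nn.Module):
        def __init__(self, n_tabular: int = 2):
            super().__init__()
            # Tiny CNN encoder for a single-channel microscope image.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # Fuse the image embedding with maternal age and embryo grade.
            self.head = nn.Sequential(
                nn.Linear(32 + n_tabular, 64), nn.ReLU(),
                nn.Linear(64, 1),  # single logit: euploid vs. aneuploid
            )

        def forward(self, image, tabular):
            z = self.encoder(image)
            return self.head(torch.cat([z, tabular], dim=1))

    model = AneuploidyClassifier()
    logit = model(torch.randn(4, 1, 224, 224), torch.randn(4, 2))  # dummy batch
    prob = torch.sigmoid(logit)  # predicted probability of aneuploidy
    ```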
    “Our hope is that we’ll ultimately be able to predict aneuploidy in a completely non-invasive way, using artificial intelligence and computer vision techniques,” said study senior author Dr. Iman Hajirasouliha, associate professor of computational genomics and of physiology and biophysics at Weill Cornell Medicine and a member of the Englander Institute for Precision Medicine.
    The study’s first author is Josue Barnes, a doctoral student in the Weill Cornell Graduate School of Medical Sciences who studies in the Hajirasouliha Laboratory. Dr. Nikica Zaninovic, associate professor of embryology in clinical obstetrics and gynecology and director of the Embryology Laboratory at the Ronald O. Perelman and Claudia Cohen Center for Reproductive Medicine at Weill Cornell Medicine and NewYork-Presbyterian/Weill Cornell Medical Center led the embryology work for the study.
    According to the U.S. Centers for Disease Control and Prevention, there were more than 300,000 IVF cycles performed in the United States in 2020, resulting in about 80,000 live births. IVF experts are always looking for ways to boost that success rate, to achieve more successful pregnancies with fewer embryo transfers — which means developing better methods for identifying viable embryos.
    Fertility clinic staff currently use microscopy to assess embryos for large-scale abnormalities that correlate with poor viability. To obtain information about the chromosomes, clinic staff may also use a biopsy method called preimplantation genetic testing for aneuploidy (PGT-A), predominantly in women over the age of 37.

  • Strong metaphorical messages can help tackle toxic e-waste

    Consumers told that not recycling their batteries ‘risked polluting the equivalent of 140 Olympic swimming pools every year’ were more likely to participate in an electronic waste recycling scheme, a new study has found.
    The paper from the University of Portsmouth explores how to improve our sustainable disposal of electronic waste (e-waste).
    With Christmas around the corner and consumers buying the latest mobile phones, tablets, headphones and televisions, older electronic products become defunct and add to the alarming quantity of potentially toxic e-waste.
    Experts at the University of Portsmouth carried out research to test what factors encourage consumers to safely dispose of e-waste, which will be useful for managers and policy-makers implementing disposal schemes.
    Lead author of the study, Dr Diletta Acuti, from the University’s Faculty of Business and Law, said: “The world’s electronic waste is an enormous problem which needs to be addressed urgently. E-waste often contains hazardous substances, such as mercury, lead or acid, which end up in landfills without any treatment or special precautions, causing significant long-term damage to the environment and human health.
    “Adequate treatment of this waste is therefore an environmental necessity.”
    In 2019, 205,000 tons of portable batteries were sold in Europe, but only half were collected for recycling. Dr Acuti’s research looks specifically at the disposal of batteries.

  • Signal-processing algorithms mitigate turbulence in free-space optics tests

    New signal-processing algorithms have been shown to help mitigate the impact of turbulence in free-space optical experiments, potentially bringing ‘free space’ internet a step closer to reality.
    The team of researchers, from Aston University’s Aston Institute of Photonic Technologies and Glasgow University, used commercially available photonic lanterns and a commercial transponder, together with a spatial light modulator to emulate turbulence. By applying a successive interference cancellation digital signal processing algorithm, they achieved record results.
    The findings are published in the IEEE Journal of Lightwave Technology.
    Free-space optical (FSO) technology wirelessly transmits data as light through the air around us — called ‘free space’ — for use in telecoms or computer networking. Because FSO communication doesn’t require the expensive laying of fibre cables, it’s seen as an exciting development in bringing communications to places where there is limited existing infrastructure.
    But because data is sent as pulses of light, weather conditions can cause problems. A bright sunny day or thick fog can diffract or scintillate the beam of light, creating turbulence which causes data to be lost.
    The researchers simultaneously transmitted multiple data signals on differently shaped beams of light using a so-called photonic lantern. Turbulence changes the shape of the beams, and the signal is often lost if only a single, simple shape is transmitted and detected; but by detecting light with these shapes using a second lantern, more of the light is collected at the receiver, and the original data can be unscrambled. This can greatly reduce the impact of the atmosphere on the quality of the data received, in a technique known as multiple-input, multiple-output (MIMO) digital signal processing.
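    As a rough sketch of how successive interference cancellation works in a MIMO receiver of this kind, the toy NumPy model below detects the strongest spatial stream, subtracts its contribution from the received signal, and repeats. The random channel and QPSK symbols are synthetic stand-ins, not the team’s transponder DSP:

    ```python
    # Toy successive interference cancellation (SIC) over a synthetic
    # mode-mixing channel H, standing in for turbulence between two lanterns.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 4  # number of spatial modes
    H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    x = rng.choice([-1, 1], size=n) + 1j * rng.choice([-1, 1], size=n)  # QPSK symbols
    y = H @ x + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))  # mixed + noise

    x_hat = np.zeros(n, dtype=complex)
    remaining = list(range(n))
    residual = y.copy()
    while remaining:
        W = np.linalg.pinv(H[:, remaining])  # zero-forcing equalizer for undetected streams
        est = W @ residual
        k = int(np.argmin(np.linalg.norm(W, axis=1)))  # best post-equalization SNR
        s = remaining[k]
        x_hat[s] = np.sign(est[k].real) + 1j * np.sign(est[k].imag)  # hard QPSK decision
        residual = residual - H[:, s] * x_hat[s]  # cancel the detected stream
        remaining.pop(k)

    print("all modes recovered:", np.allclose(x_hat, x))
    ```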
    Professor Andrew Ellis at Aston University said: “When a single beam was transmitted, turbulence similar to that of a hot sunny day destroyed the signal 50% of the time. By transmitting multiple beams of different shapes through the same telescopes and detecting the different shapes, not only did we increase the availability to more than 99%, we increased the capacity to more than 500 Gbit/s, or more than 500 ultra-fast Pure-Fibre broadband links.”
    A project investigating the real-world applications of FSO technology is presently underway in South Africa, where researchers from Aston University and Glasgow University are working with the University of the Witwatersrand in Johannesburg to attempt to bring internet access to communities living in informal settlements and schools in underprivileged areas.
    The Fibre Before the Fibre project aims to provide the internet performance of a Pure-Fibre connection without the need to install cables. It uses a free-space optical communication system that links remote sites to nearby fibre sources in more affluent suburbs via a wireless, line-of-sight optical signal.
    Professor Ellis said: “Our role in the project is to look at the impact and educational benefit free space optics will have for the school children who will finally be able to access the internet.”
    Story Source:
    Materials provided by Aston University.

  • Virtual reality game to objectively detect ADHD

    Researchers have used virtual reality games, eye tracking and machine learning to show that differences in eye movements can be used to detect ADHD, potentially providing a tool for more precise diagnosis of attention deficits. Their approach could also be used as the basis for an ADHD therapy, and with some modifications, to assess other conditions, such as autism.
    ADHD is a common attention disorder that affects around six percent of the world’s children. Despite decades of searching for objective markers, ADHD diagnosis is still based on questionnaires, interviews and subjective observation. The results can be ambiguous, and standard behavioural tests don’t reveal how children manage everyday situations. Recently, a team consisting of researchers from Aalto University, the University of Helsinki, and Åbo Akademi University developed a virtual reality game called EPELI that can be used to assess ADHD symptoms in children by simulating situations from everyday life.
    Now, the team tracked the eye movements of children in a virtual reality game and used machine learning to look for differences in children with ADHD. The new study involved 37 children diagnosed with ADHD and 36 children in a control group. The children played EPELI and a second game, Shoot the Target, in which the player is instructed to locate objects in the environment and “shoot” them by looking at them. 
    ‘We tracked children’s natural eye movements as they performed different tasks in a virtual reality game, and this proved to be an effective way of detecting ADHD symptoms. The ADHD children’s gaze paused longer on different objects in the environment, and their gaze jumped faster and more often from one spot to another. This might indicate a delay in visual system development and poorer information processing than in other children,’ said Liya Merzon, a doctoral researcher at Aalto University.
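    In schematic terms, such gaze features can be extracted with a simple velocity-threshold rule and handed to an off-the-shelf classifier. The sketch below, using synthetic data, is an assumption about the general shape of such a pipeline, not the study’s actual method:

    ```python
    # Illustrative gaze-feature pipeline: mean fixation duration and saccade
    # rate from raw gaze samples, fed to a classifier. Synthetic data only.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def gaze_features(xy, dt=1 / 90, vel_thresh=30.0):
        """xy: (n, 2) gaze positions in degrees, sampled at 90 Hz."""
        speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) / dt  # deg/s
        fixating = speed < vel_thresh  # velocity-threshold (I-VT) rule
        edges = np.diff(fixating.astype(int))
        n_fixations = int((edges == 1).sum()) + int(fixating[0])
        n_saccades = int((edges == -1).sum())
        mean_fix = fixating.sum() * dt / max(n_fixations, 1)
        return [mean_fix, n_saccades / (len(xy) * dt)]  # [seconds, saccades/s]

    # One feature row per child; labels 1 = ADHD (random stand-ins here).
    rng = np.random.default_rng(0)
    X = np.array([gaze_features(np.cumsum(rng.normal(size=(900, 2)), axis=0))
                  for _ in range(40)])
    y = rng.integers(0, 2, 40)
    clf = RandomForestClassifier(random_state=0).fit(X, y)
    ```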
    Brushing your teeth with distractions
    Project lead Juha Salmitaival, an Academy Research Fellow at Aalto, explains that part of the game’s strength is its motivational value. ‘This isn’t just a new technology to objectively assess ADHD symptoms. Children also find the game more interesting than standard neuropsychological tests,’ he says.

  • New software based on artificial intelligence helps to interpret complex data

    Experimental data are often not only high-dimensional but also noisy and full of artefacts, which makes them difficult to interpret. Now a team at HZB has designed software that uses self-learning neural networks to compress the data in a smart way and then reconstruct a low-noise version. This makes it possible to recognise correlations that would otherwise not be discernible. The software has now been successfully used in photon diagnostics at the FLASH free-electron laser at DESY, but it is suitable for a wide range of other applications in science.
    More is not always better; sometimes it is a problem. With highly complex data, which have many dimensions owing to their numerous parameters, correlations are often no longer recognisable, especially since experimentally obtained data are additionally disturbed by noise and influences that cannot be controlled.
    Helping humans to interpret the data
    Now, new software based on artificial intelligence methods can help. It uses a special class of neural networks (NNs) that experts call a “disentangled variational autoencoder” (β-VAE). Put simply, the first NN compresses the data, while the second NN subsequently reconstructs it. “In the process, the two NNs are trained so that the compressed form can be interpreted by humans,” explains Dr Gregor Hartmann. The physicist and data scientist supervises the Joint Lab on Artificial Intelligence Methods at HZB, which is run by HZB together with the University of Kassel.
    Extracting core principles without prior knowledge
    Google DeepMind first proposed the use of β-VAEs in 2017. Many experts assumed that applying them in the real world would be challenging, as non-linear components are difficult to disentangle. “After several years of learning how the NNs learn, it finally worked,” says Hartmann. β-VAEs are able to extract the underlying core principle from data without prior knowledge.
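    In schematic terms, the β-VAE recipe looks like the minimal PyTorch sketch below: the encoder emits a mean and log-variance per latent variable, a latent sample is drawn via the reparameterization trick, and a β-weighted Kullback-Leibler penalty pushes the latents toward independent, interpretable factors. Sizes and data are placeholders, not the HZB implementation:

    ```python
    # Minimal beta-VAE: encoder compresses, decoder reconstructs, and the
    # beta-weighted KL term encourages disentangled latent variables.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BetaVAE(nn.Module):
        def __init__(self, n_in=256, n_latent=4):
            super().__init__()
            self.enc = nn.Linear(n_in, 2 * n_latent)  # mean and log-variance
            self.dec = nn.Linear(n_latent, n_in)

        def forward(self, x):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
            return self.dec(z), mu, logvar

    def beta_vae_loss(x, x_rec, mu, logvar, beta=4.0):
        rec = F.mse_loss(x_rec, x, reduction="sum")  # reconstruction error
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + beta * kl  # beta > 1 favours disentangled latents

    model = BetaVAE()
    x = torch.randn(8, 256)  # stand-in for noisy single-shot spectra
    x_rec, mu, logvar = model(x)
    beta_vae_loss(x, x_rec, mu, logvar).backward()
    ```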
    Photon energy of FLASH determined
    In the study now published, the group used the software to determine the photon energy of FLASH from single-shot photoelectron spectra. “We succeeded in extracting this information from noisy electron time-of-flight data, and did so much better than with conventional analysis methods,” says Hartmann. Even data with detector-specific artefacts can be cleaned up this way.
    A powerful tool for different problems
    “The method is really good when it comes to impaired data,” Hartmann emphasises. The programme is even able to reconstruct tiny signals that were not visible in the raw data. Such networks can help uncover unexpected physical effects or correlations in large experimental data sets. “AI-based intelligent data compression is a very powerful tool, not only in photon science,” says Hartmann.
    Now plug and play
    In total, Hartmann and his team spent three years developing the software. “But now, it is more or less plug and play. We hope that soon many colleagues will come with their data and we can support them.”
    Story Source:
    Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie.

  • New winged robot can land like a bird

    A bird landing on a branch makes the maneuver look like the easiest thing in the world, but in fact, the act of perching involves an extremely delicate balance of timing, high-impact forces, speed, and precision. It’s a move so complex that no flapping-wing robot (ornithopter) has been able to master it, until now.
    Raphael Zufferey, a postdoctoral fellow in the Laboratory of Intelligent Systems (LIS) and Biorobotics Laboratory (BioRob) in the School of Engineering, is the first author of a recent Nature Communications paper describing the unique landing gear that makes such perching possible. He built and tested it in collaboration with colleagues at the University of Seville, Spain, where the 700-gram ornithopter itself was developed as part of the European project GRIFFIN.
    “This is the first phase of a larger project. Once an ornithopter can master landing autonomously on a tree branch, then it has the potential to carry out specific tasks, such as unobtrusively collecting biological samples or measurements from a tree. Eventually, it could even land on artificial structures, which could open up further areas of application,” Zufferey says.
    He adds that the ability to land on a perch could provide a more efficient way for ornithopters – which, like many unmanned aerial vehicles (UAVs), have limited battery life – to recharge using solar energy, potentially making them ideal for long-range missions.
    “This is a big step toward using flapping-wing robots, which as of now can really only do free flights, for manipulation tasks and other real-world applications,” he says.
    Maximizing strength and precision; minimizing weight and speed
    The engineering problems involved in landing an ornithopter on a perch without any external commands required managing many factors that nature has already so perfectly balanced. The ornithopter had to be able to slow down significantly as it perched, while still maintaining flight. The claw needed to be strong enough to grasp the perch and support the weight of the robot, without being so heavy that it could not be held aloft. “That’s one reason we went with a single claw rather than two,” Zufferey notes. Finally, the robot needed to be able to perceive its environment and the perch in front of it in relation to its own position, speed, and trajectory.

  • Lucky find! How the science behind epidemics helped physicists develop state-of-the-art conductive paint

    In new research published in Nature Communications, University of Sussex scientists demonstrate how a highly conductive paint coating that they have developed mimics the network spread of a virus through a process called ‘explosive percolation’ — a mathematical process that can also be applied to population growth, financial systems and computer networks, but which had not been seen before in materials systems. The finding was a serendipitous development as well as a scientific first for the researchers.
    The process of percolation — the statistical connectivity in a system, such as when water flows through soil or through coffee grounds — is an important component in the development of liquid technology. And it was that process which researchers in the University of Sussex Material Physics group were expecting to see when they added graphene oxide to polymer latex spheres, such as those used in emulsion paint, to make a polymer composite.
    But when they gently heated the graphene oxide to make it electrically conductive, the scientists kick-started a process that saw this conductive system grow exponentially, to the extent that the newly created material consumed the network, similar to the way a new strain of a virus can become dominant. This emergent material behaviour led to a new highly conductive paint solution that, because graphene oxide is a cheap and easy-to-mass-produce nanomaterial, is both one of the most affordable and one of the most conductive low-loading composites reported. Before now, it was accepted that such paints or coatings were necessarily one or the other.
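    The abrupt takeover that characterises explosive percolation is easy to reproduce in a toy model. The sketch below grows a random network under the Achlioptas ‘product rule’ (of two candidate links, keep the one joining the smaller clusters); the largest cluster stays tiny and then jumps almost at once. It illustrates the mathematics only, not the composite itself:

    ```python
    # Toy explosive percolation via the Achlioptas product rule,
    # using union-find to track cluster sizes.
    import random

    n = 20_000
    parent = list(range(n))
    size = [1] * n
    largest = 1

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(a, b):
        global largest
        ra, rb = find(a), find(b)
        if ra != rb:
            size[ra] += size[rb]
            parent[rb] = ra
            largest = max(largest, size[ra])

    random.seed(0)
    for step in range(1, n + 1):
        # Of two candidate links, keep the one with the smaller product of
        # endpoint-cluster sizes: growth stays suppressed, then explodes.
        (a1, b1), (a2, b2) = [(random.randrange(n), random.randrange(n)) for _ in range(2)]
        if size[find(a1)] * size[find(b1)] <= size[find(a2)] * size[find(b2)]:
            union(a1, b1)
        else:
            union(a2, b2)
        if step % 4000 == 0:
            print(f"links={step:>6}  largest cluster = {largest / n:.1%}")
    ```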
    Electrically conductive paints and inks have a range of useful applications in new printed technologies, for example in anti-static coatings and in coatings that block electromagnetic interference (EMI), and they are vital in the development of wearable health monitors.
    Alan Dalton, Professor of Experimental Physics, who heads up the Materials Physics Group at the University of Sussex, explains the potential of this serendipitous finding: “My research team and I have been working on developing conductive paints and inks for the last ten years, and it was to both my surprise and delight that we have discovered the key to revolutionising this work is a mathematical process that we normally associate with population growth and virus transmission.
    “By enabling us to create highly-conductive polymer composites that are also affordable, thanks to the cheap and scalable nature of graphene oxide, this development opens up the doors to a range of applications that we’ve not even been able to fully consider yet, but which could greatly enhance the sustainability of Electric Vehicle materials — including batteries — as well as having the potential to add conductive coatings to materials, such as ceramics, that aren’t inherently so. We can’t wait to get going on exploring the possibilities.”
    Dr Sean Ogilvie, a research fellow in Professor Dalton’s Materials Physics Group at the University of Sussex, who worked on this development, adds: “The most exciting aspect of these nanocomposites is that we are using a very simple process, similar to applying emulsion paint and drying with a heat gun, which then kickstarts a process creating chemical bridges between the graphene sheets, producing electrical paths that are more conductive than if they were made entirely from graphene.”