More stories

  • LIONESS redefines brain tissue imaging

    Brain tissue is arguably one of the most intricate specimens scientists have ever dealt with. Packed with a currently immeasurable amount of information, the human brain, with its network of around 86 billion neurons, is the most sophisticated computational device known. Understanding such complexity is a difficult task, and making progress therefore requires technologies that can unravel the tiny, complex interactions taking place in the brain at microscopic scales. Imaging is thus an enabling tool in neuroscience.
    The new imaging and virtual reconstruction technology developed by Johann Danzl’s group at ISTA is a big leap in imaging brain activity and is aptly named LIONESS — Live Information Optimized Nanoscopy Enabling Saturated Segmentation. LIONESS is a pipeline to image, reconstruct, and analyze live brain tissue with a comprehensiveness and spatial resolution not possible until now.
    “With LIONESS, for the first time, it is possible to get a comprehensive, dense reconstruction of living brain tissue. By imaging the tissue multiple times, LIONESS allows us to observe and measure the dynamic cellular biology in the brain take its course,” says first author Philipp Velicky. “The output is a reconstructed image of the cellular arrangements in three dimensions, with time making up the fourth dimension, as the sample can be imaged over minutes, hours, or days,” he adds.
    With LIONESS, neuroscientists can image living brain tissue and obtain high-resolution 3D imagery without damaging the sample.
    Collaboration and AI are the key
    The strength of LIONESS lies in refined optics and in the two levels of deep learning — a method of Artificial Intelligence — that make up its core: the first enhances the image quality and the second identifies the different cellular structures in the dense neuronal environment.

    The pipeline is a result of a collaboration between the Danzl group, Bickel group, Jonas group, Novarino group, and ISTA’s Scientific Service Units, as well as other international collaborators. “Our approach was to assemble a dynamic group of scientists with unique combined expertise across disciplinary boundaries, who work together to close a technology gap in the analysis of brain tissue,” Johann Danzl of ISTA says.
    Surpassing hurdles
    Previously, it was possible to obtain reconstructions of brain tissue using Electron Microscopy, which images the sample based on its interactions with electrons. Despite its ability to capture images at a resolution of a few nanometers — a nanometer being a millionth of a millimeter — Electron Microscopy requires the sample to be fixed in one biological state and physically sectioned to obtain 3D information. Hence, no dynamic information can be obtained.
    Light Microscopy, another established technique, allows observation of living systems and can record intact tissue volumes by slicing them “optically” rather than physically. However, Light Microscopy is severely hampered in its resolving power by the very properties of the light waves it uses to generate an image. Its best-case resolution is a few hundred nanometers, much too coarse-grained to capture important cellular details in brain tissue.
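    This few-hundred-nanometer floor follows from the Abbe diffraction limit, which relates a light microscope's lateral resolution to the wavelength of light and the numerical aperture (NA) of the objective. A worked example, assuming green light (λ ≈ 500 nm) and a high-end oil-immersion objective (NA = 1.4):

    \[ d = \frac{\lambda}{2\,\mathrm{NA}} \approx \frac{500\ \mathrm{nm}}{2 \times 1.4} \approx 180\ \mathrm{nm} \]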
    Using Super-resolution Light Microscopy, scientists can break this resolution barrier. Recent work in this field, dubbed SUSHI (Super-resolution Shadow Imaging), showed that adding dye molecules to the spaces around cells and applying the Nobel Prize-winning super-resolution technique STED (Stimulated Emission Depletion) microscopy reveals super-resolved ‘shadows’ of all the cellular structures and thus visualizes them in the tissue. Nevertheless, it had so far been impossible to image entire volumes of brain tissue with resolution enhancement matching the tissue’s complex 3D architecture, because increasing resolution also entails a high load of imaging light on the sample, which may damage or ‘fry’ the delicate, living tissue.

    Herein lies the strength of LIONESS: it was developed for what the authors call “fast and mild” imaging conditions, keeping the sample alive. At the same time, it provides isotropic super-resolution — meaning it is equally good in all three spatial dimensions — allowing visualization of the tissue’s cellular components in 3D at nanoscale detail.
    LIONESS collects only as much information from the sample as needed during the imaging step. A first deep-learning step then fills in additional information on the brain tissue’s structure, in a process called Image Restoration. In this way, LIONESS achieves a resolution of around 130 nanometers while remaining gentle enough to image living brain tissue in real time. Together, these steps enable a second deep-learning step, this time to make sense of the extremely complex imaging data and identify the neuronal structures in an automated manner.
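    To make the two-stage structure concrete, the sketch below chains a restoration model and a segmentation model, mirroring the pipeline described above. It is a minimal illustration using tiny stand-in networks; the layer sizes, class count, and volume shape are hypothetical placeholders, not the published LIONESS architecture.

    ```python
    # Minimal sketch of a two-stage imaging pipeline: stage 1 restores a
    # gently acquired, low-signal 3D volume; stage 2 segments cellular
    # structures in the restored volume. Stand-in networks, not LIONESS's.
    import torch
    import torch.nn as nn

    class RestorationNet(nn.Module):
        """Stand-in for the trained image-restoration network (stage 1)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv3d(16, 1, 3, padding=1),   # outputs a restored volume
            )
        def forward(self, x):
            return self.net(x)

    class SegmentationNet(nn.Module):
        """Stand-in for the trained segmentation network (stage 2)."""
        def __init__(self, n_classes=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv3d(16, n_classes, 1),      # per-voxel class logits
            )
        def forward(self, x):
            return self.net(x)

    # A low-light acquisition: batch x channel x depth x height x width.
    noisy_volume = torch.rand(1, 1, 16, 64, 64)

    restored = RestorationNet()(noisy_volume)           # stage 1: fill in structure
    labels = SegmentationNet()(restored).argmax(dim=1)  # stage 2: label structures
    print(labels.shape)  # torch.Size([1, 16, 64, 64])
    ```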
    Homing In
    “The interdisciplinary approach allowed us to break the intertwined limitations in resolving power and light exposure to the living system, to make sense of the complex 3D data, and to couple the tissue’s cellular architecture with molecular and functional measurements,” says Danzl.
    For virtual reconstruction, Danzl and Velicky teamed up with visual computing experts: the Bickel group at ISTA and the group led by Hanspeter Pfister at Harvard University, who contributed their expertise in automated segmentation — the process of automatically recognizing the cellular structures in the tissue — and in visualization, with further support from ISTA’s image analysis staff scientist Christoph Sommer. For sophisticated labeling strategies, neuroscientists and chemists from Edinburgh, Berlin, and ISTA contributed. Consequently, it was possible to bridge functional measurements, i.e., to read out the cellular structures together with biological signaling activity in the same living neuronal circuit. This was done by imaging calcium ion fluxes into cells and measuring the cellular electrical activity in collaboration with the Jonas group at ISTA. The Novarino group contributed human cerebral organoids — often nicknamed “mini-brains” — which mimic human brain development. The authors underline that all of this was facilitated through expert support from ISTA’s top-notch scientific service units.
    Brain structure and activity are highly dynamic: the brain’s structures evolve as it performs and learns new tasks, an aspect often referred to as “plasticity.” Hence, observing changes in the brain’s tissue architecture is essential to unlocking the secrets behind this plasticity. The new tool developed at ISTA shows potential for understanding the functional architecture of brain tissue — and potentially of other organs — by revealing subcellular structures and capturing how these might change over time.

  • Taking a lesson from spiders: Researchers create an innovative method to produce soft, recyclable fibres for smart textiles

    Smart textiles offer many potential wearable technology applications, from therapeutics to sensing to communication. For such intelligent textiles to function effectively, they need to be strong, stretchable, and electrically conductive. However, fabricating fibres that possess these three properties is challenging and requires complex conditions and systems.
    Drawing inspiration from how spiders spin silk to make webs, a team of researchers led by Assistant Professor Swee-Ching Tan from the Department of Materials Science and Engineering under the National University of Singapore’s College of Design and Engineering, together with international collaborators, has developed an innovative method of producing soft fibres that possess these three key properties and, at the same time, can be easily reused to produce new fibres. The fabrication process can be carried out at room temperature and pressure and uses less solvent as well as less energy, making it an attractive option for producing functional soft fibres for various smart applications.
    “Technologies for fabricating soft fibres should be simple, efficient and sustainable to meet the high demand for smart textile electronics. Soft fibres created using our spider-inspired method of spinning have been demonstrated to be versatile for various smart technology applications — for example, these functional fibres can be incorporated into a strain-sensing glove for gaming purposes, and a smart face mask to monitor breathing status for conditions such as obstructive sleep apnea. These are just some of the many possibilities,” said Asst Prof Tan.
    The innovation was demonstrated and outlined in a paper published in the scientific journal Nature Electronics on 27 April 2023.
    Spinning a web of soft fibres
    Conventional artificial spinning methods to fabricate synthetic fibres require high pressure, high energy input, large volumes of chemicals, and specialised equipment. Moreover, the resulting fibres typically have limited functions.

    In contrast, the spider silk spinning process is highly efficient and can form strong and versatile fibres at room temperature and pressure. To address the current technological challenges, the NUS team decided to emulate this natural spinning process to create one-dimensional (1D) functional soft fibres that are strong, stretchable, and electrically conductive. They identified two unique steps in spider silk formation that they could mimic.
    Spider silk formation involves the change of a highly concentrated protein solution, known as a silk dope, into a strand of fibre. The researchers first identified that the protein concentration and interactions in the silk dope increase from dope synthesis to spinning. The second step identified was that the arrangement of proteins within the dope changes when triggered by external factors to help separate the liquid portion from the silk dope, leaving the solid part — the spider silk fibres. This second step is known as liquid-solid phase separation.
    The team recreated the two steps and developed a new spinning process known as the phase separation-enabled ambient (PSEA) spinning approach.
    The soft fibres were spun from a viscous gel solution composed of polyacrylonitrile (PAN) and silver ions — referred to as PANSion — dissolved in dimethylformamide (DMF), a common solvent. This gel solution is known as the spinning dope, which forms into a strand of soft fibre through the spinning process when the gel is pulled and spun under ambient conditions.
    Once the PANSion gel is pulled and exposed to air, water molecules in the air act as a trigger, causing the liquid portion of the gel to separate, in the form of droplets, from the solid portion; this phenomenon is known as the nonsolvent vapour-induced phase separation effect. Once separated from the solid fibre, the droplets of the liquid portion are removed by holding the fibre vertically or at an angle and letting gravity do its work.

    “Fabrication of 1D soft fibres with seamless integration of all-round functionalities is much more difficult to achieve and requires complicated fabrication or multiple post-treatment processes. This innovative method fulfils an unmet need to create a simple yet efficient spinning approach to produce functional 1D soft fibres that simultaneously possess unified mechanical and electrical functionalities,” said Asst Prof Tan.
    Three properties, one method
    The biomimetic spinning process, combined with the unique formulation of the gel solution, allowed the researchers to fabricate soft fibres imbued with the three key properties: strength, stretchability, and electrical conductivity.
    The researchers tested the mechanical properties of the PANSion gel — its strength and elasticity — through a series of stress tests and demonstrated that it possesses excellent strength and elasticity. These tests also allowed the researchers to deduce that the formation of strong chemical networks between metal-based complexes within the gel is responsible for its mechanical properties.
    Further analysis of the PANSion soft fibres at the molecular level confirmed their electrical conductivity and showed that the silver ions present in the PANSion gel contributed to the electrical conductivity of the soft fibres.
    The team then concluded that PANSion soft fibres fulfil all the properties that would allow them to be versatile and potentially be used in a wide range of smart technology applications.
    Potential applications and next steps
    The team demonstrated the capabilities of the PANSion soft fibres in a number of applications, such as communication and temperature sensing. PANSion fibres were sewn into an interactive glove serving as a smart gaming glove. When connected to a computer interface, the glove could successfully detect human hand gestures and enable a user to play simple games.
    PANSion fibres could also detect changes in electrical signals that could be used as a form of communication, like Morse code. In addition, these fibres could sense temperature changes, a property that can potentially be capitalised on to protect robots in environments with extreme temperatures. The researchers also sewed PANSion fibres into a smart face mask for monitoring the breathing activities of the wearer.
    On top of the wide range of potential applications of PANSion soft fibres, the innovation also earns points for sustainability. PANSion fibres can be recycled by dissolving them in DMF, converting them back into a gel solution for spinning new fibres. A comparison with other current fibre-spinning methods revealed that this new spider-inspired method consumes significantly less energy and requires a lower volume of chemicals.
    Building on this discovery, the research team will continue to work on improving the sustainability of the PANSion soft fibres throughout their production cycle, from the raw materials to recycling the final product.

  • AI nursing ethics: Viability of robots and artificial intelligence in nursing practice

    The recent progress in the field of robotics and artificial intelligence (AI) promises a future where these technologies will play a more prominent role in society. Current developments, such as the introduction of autonomous vehicles, the ability to generate original artwork, and the creation of chatbots capable of engaging in human-like conversations, highlight the immense possibilities held by these technologies. While these advancements offer numerous benefits, they also pose some fundamental questions. Characteristics such as creativity, communication, critical thinking, and learning — once considered unique to humans — are now being replicated by AI. So, can intelligent machines be considered ‘human’?
    In a step toward answering this question, Associate Professor Tomohide Ibuki from Tokyo University of Science, in collaboration with medical ethics researcher Dr. Eisuke Nakazawa from The University of Tokyo and nursing researcher Dr. Ai Ibuki from Kyoritsu Women’s University, recently explored whether robots and AI can be entrusted with nursing, a highly humane practice. Their work was published online in the journal Nursing Ethics on 12 June 2023.
    “This study in applied ethics examines whether robotics, human engineering, and artificial intelligence technologies can and should replace humans in nursing tasks,” says Dr. Ibuki.
    Nurses demonstrate empathy and establish meaningful connections with their patients. This human touch is essential in fostering a sense of understanding, trust, and emotional support. The researchers examined whether the current advancements in robotics and AI can implement these human qualities by replicating the ethical concepts attributed to human nurses, including advocacy, accountability, cooperation, and caring.
    Advocacy in nursing involves speaking on behalf of patients to ensure that they receive the best possible medical care. This encompasses safeguarding patients from medical errors, providing treatment information, acknowledging a patient’s preferences, and acting as a mediator between the hospital and the patient. In this regard, the researchers noted that while AI can inform patients about medical errors and present treatment options, they questioned whether it can truly understand and empathize with patients’ values and effectively navigate human relationships as a mediator.
    The researchers also expressed concerns about holding robots accountable for their actions. They suggested the development of explainable AI, which would provide insights into the decision-making process of AI systems, improving accountability.
    The study further highlights that nurses are required to collaborate effectively with their colleagues and other healthcare professionals to ensure the best possible care for patients. As humans rely on visual cues to build trust and establish relationships, unfamiliarity with robots might lead to suboptimal interactions. Recognizing this issue, the researchers emphasized the importance of conducting further investigations to determine the appropriate appearance of robots for facilitating efficient cooperation with human medical staff.
    Lastly, while robots and AI have the potential to understand a patient’s emotions and provide appropriate care, the patient must also be willing to accept robots as care providers.
    Having considered the above four ethical concepts in nursing, the researchers acknowledge that while robots may not fully replace human nurses anytime soon, they do not dismiss the possibility. While robots and AI can potentially reduce the shortage of nurses and improve treatment outcomes for patients, their deployment requires careful weighing of the ethical implications and impact on nursing practice.
    “While the present analysis does not preclude the possibility of implementing the ethical concepts of nursing in robots and AI in the future, it points out that there are several ethical questions. Further research could not only help solve them but also lead to new discoveries in ethics,” concludes Dr. Ibuki.
    Here’s hoping for such novel applications of robotics and AI to emerge soon!

  • Solving rare disease mysteries … and protecting privacy

    Macquarie University researchers have demonstrated a new way of linking personal records and protecting privacy. The first application is in identifying cases of rare genetic disorders. There are many other potential applications across society.
    The research will be presented at the 18th ACM ASIA Conference on Computer and Communications Security (ACM ASIACCS 2023) in Melbourne on 12 July.
    A five-year-old boy in the US has a mutation in a gene called GPX4, which he shares with just 10 other children in the world. The condition causes skeletal and central nervous system abnormalities. There are likely to be other children with the disorder recorded in hundreds of health and diagnostic databases worldwide, but we do not know of them, because their privacy is guarded for legal and commercial reasons.
    But what if records linked to the condition could be found and counted while still preserving privacy? Researchers from the Macquarie University Cyber Security Hub have developed a technique to achieve exactly that. The team includes Dr Dinusha Vatsalan and Professor Dali Kaafar of the University’s School of Computing and the boy’s father, software engineer Mr Sanath Kumar Ramesh, who is CEO of the OpenTreatments Foundation in Seattle, Washington.
    “I am very excited about this work,” says Mr Ramesh, whose foundation initiated and supported the project. “Knowing how many people have a condition underpins economic assumptions. If a condition was previously thought to have 15 patients and now we know, having pulled in data from diagnostic testing companies, that there are 100 patients, that increases market-size hugely.
    “It would have a significant economic impact. The valuation of a company working on the condition would go up. Product costing would go down. How insurance companies account for medical costs would change. Diagnostic companies would target [the condition] more. And you can start to do epidemiology more precisely.”
    Linking and counting data records might seem simple but, in reality, it involves many issues, says Professor Kaafar. First, because we are dealing with a rare disease, there is no centralised database, and the records are sprinkled across the world. “In this case in hundreds of databases,” he says. “And from a business perspective, data is precious, and the companies holding it are not necessarily interested in sharing.”

    Then, there are technical issues of matching data that is recorded, encoded, and stored in different ways, and accounting for individuals who are double-counted in and between different databases. And, on top of all that, are the privacy considerations. “We are dealing with very, very sensitive health data,” Professor Kaafar says.
    This personal data isn’t needed for a simple estimate of the number of patients and for epidemiological purposes. But, until now, it was needed to ensure that records are unique and can be linked.
    Dr Vatsalan and her colleagues used a technique known as Bloom filter encoding with differential privacy. They devised a suite of algorithms that deliberately introduce enough noise into the data to blur precise details, to the point where they cannot be extracted from individual records, while still allowing records of the same disease condition to be matched and clustered.
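    The core idea can be sketched in a few lines of Python: each record is encoded as a Bloom filter (a fixed-length bit array filled by hashing the record’s substrings), privacy noise flips a small fraction of bits, and the noisy filters are compared by bitwise similarity. The parameters below (filter length, hash count, flip probability) are illustrative choices, not the settings used in the paper.

    ```python
    # Sketch of Bloom-filter encoding with differential-privacy-style noise
    # for privacy-preserving record matching. Illustrative parameters only.
    import hashlib
    import random

    L, K = 64, 2       # filter length in bits, hashes per q-gram (assumed)
    FLIP_P = 0.02      # probability of flipping each bit (privacy noise)

    def qgrams(text, q=2):
        text = f"_{text.lower()}_"
        return {text[i:i + q] for i in range(len(text) - q + 1)}

    def bloom_encode(text):
        bits = [0] * L
        for gram in qgrams(text):
            for seed in range(K):
                h = hashlib.sha256(f"{seed}:{gram}".encode()).digest()
                bits[int.from_bytes(h, "big") % L] = 1
        return bits

    def add_noise(bits):
        # Randomized response: flip bits with small probability, blurring
        # individual records while preserving aggregate similarity patterns.
        return [b ^ (random.random() < FLIP_P) for b in bits]

    def dice(a, b):
        overlap = sum(x & y for x, y in zip(a, b))
        return 2 * overlap / (sum(a) + sum(b))

    rec1 = add_noise(bloom_encode("GPX4 deficiency"))
    rec2 = add_noise(bloom_encode("GPX4 deficency"))   # typo in one database
    print(f"similarity: {dice(rec1, rec2):.2f}")       # high despite the noise
    ```

    Records whose noisy encodings score above a similarity threshold can then be clustered and counted, without any database revealing its raw entries.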
    The accuracy of the technique was then evaluated using North Carolina voter registration data. The results showed that the method produced a negligible error rate with a guarantee of a very high level of privacy, even on highly corrupted datasets, significantly outperforming existing methods.
    In addition to detecting and counting rare diseases, the research has many other applications; for determining awareness of a new product in marketing, for instance, or in cybersecurity for tracking the number of unique views of particular social media posts.

    But it is the application to rare diseases about which the Macquarie University researchers are passionate. “There is no better feeling for a researcher than seeing the technology they’ve been developing having a real impact and making the world a better place,” says Professor Kaafar. “In this case, it is so real and so important.”
    The OpenTreatments Foundation partly funded the research.
    “The Foundation wanted to make this project completely open source from the very beginning,” Dr Vatsalan adds. “So the algorithm we implemented is being published openly.”

  • Bees make decisions better and faster than we do, for the things that matter to them

    Honey bees have to balance effort, risk and reward, making rapid and accurate assessments of which flowers are most likely to offer food for their hive. Research published in the journal eLife today reveals how millions of years of evolution have engineered honey bees to make fast decisions and reduce risk.
    The study enhances our understanding of insect brains, how our own brains evolved, and how to design better robots.
    The paper presents a model of decision-making in bees and outlines the paths in their brains that enable fast decision-making. The study was led by Professor Andrew Barron from Macquarie University in Sydney, and Dr HaDi MaBouDi, Neville Dearden and Professor James Marshall from the University of Sheffield.
    “Decision-making is at the core of cognition,” says Professor Barron. “It’s the result of an evaluation of possible outcomes, and animal lives are full of decisions. A honey bee has a brain smaller than a sesame seed. And yet she can make decisions faster and more accurately than we can. A robot programmed to do a bee’s job would need the backup of a supercomputer.
    “Today’s autonomous robots largely work with the support of remote computing,” Professor Barron continues. “Drones are relatively brainless; they have to be in wireless communication with a data centre. This technology path will never allow a drone to truly explore Mars solo — NASA’s amazing rovers on Mars have travelled about 75 kilometres in years of exploration.”
    Bees need to work quickly and efficiently, finding nectar and returning it to the hive, while avoiding predators. They need to make decisions. Which flower will have nectar? While they’re flying, they’re only prone to aerial attack. When they land to feed, they’re vulnerable to spiders and other predators, some of which use camouflage to look like flowers.

    “We trained 20 bees to recognise five different coloured ‘flower disks’. Blue flowers always had sugar syrup,” says Dr MaBouDi. “Green flowers always had quinine [tonic water] with a bitter taste for bees. Other colours sometimes had glucose.”
    “Then we introduced each bee to a ‘garden’ where the ‘flowers’ just had distilled water. We filmed each bee, then watched more than 40 hours of video, tracking the path of the bees and timing how long it took them to make a decision.
    “If the bees were confident that a flower would have food, they quickly decided to land on it, taking an average of 0.6 seconds,” says Dr MaBouDi. “If they were confident that a flower would not have food, they made a decision just as quickly.”
    If they were unsure, then they took much more time — on average 1.4 seconds — and the time reflected the probability that a flower had food.
    The team then built a computer model from first principles aiming to replicate the bees’ decision-making process. They found the structure of their computer model looked very similar to the physical layout of a bee brain.
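    The general principle the study illustrates — accumulating noisy evidence toward a decision threshold, so that ambiguous options take longer to resolve — can be sketched with a simple drift-diffusion process. Everything below (parameters, time step, thresholds) is an illustrative assumption, not the authors’ published bee-brain model.

    ```python
    # Drift-diffusion sketch of confidence-dependent decision times: evidence
    # drifts toward "land" or "avoid" and a choice is made at a threshold.
    import random

    def decide(quality, threshold=1.0, gain=2.0, noise=0.5, dt=0.01):
        """Return (choice, decision time) for a flower of given quality.

        Quality near 1 or 0 gives a strong drift and a fast decision;
        quality near 0.5 gives a weak drift and a slow decision.
        """
        evidence, t = 0.0, 0.0
        drift = gain * (quality - 0.5)   # signed pull toward accept/reject
        while abs(evidence) < threshold:
            evidence += drift * dt + random.gauss(0, noise) * dt ** 0.5
            t += dt
        return ("land" if evidence > 0 else "avoid"), t

    for q in (0.95, 0.05, 0.55):   # confident yes, confident no, unsure
        times = [decide(q)[1] for _ in range(500)]
        print(f"quality={q:.2f}  mean decision time={sum(times) / len(times):.2f}s")
    ```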
    “Our study has demonstrated complex autonomous decision-making with minimal neural circuitry,” says Professor Marshall. “Now we know how bees make such smart decisions, we are studying how they are so fast at gathering and sampling information. We think bees are using their flight movements to enhance their visual system to make them better at detecting the best flowers.”
    AI researchers can learn much from insects and other ‘simple’ animals. Millions of years of evolution have led to incredibly efficient brains with very low power requirements. The future of AI in industry will be inspired by biology, says Professor Marshall, who co-founded Opteran, a company that reverse-engineers insect brain algorithms to enable machines to move autonomously, like nature.

  • Unraveling the humanity in metacognitive ability: Distinguishing human metalearning from AI

    Monitoring and controlling one’s own learning process objectively is essential for improving one’s learning abilities. This ability, often referred to as “learning to learn” or “metacognition,” has been studied in educational psychology. Owing to the tight coupling between the higher meta-level and the lower object-level cognitive systems, a conventional reductionist approach has difficulty uncovering the neural basis of metacognition. To overcome this limitation, the researchers employed a novel approach: comparing the metacognition of artificial intelligence (AI) to that of humans.
    First, they demonstrated that the metacognitive system of an AI, which aims to maximize rewards and minimize punishments, can effectively regulate learning speed and memory retention in response to the environment and task. Second, they showed metacognitive behavior in human motor learning: providing monetary feedback as a function of memory can either promote or suppress motor learning and memory retention. This constitutes the first empirical demonstration of the bi-directional regulation of implicit motor learning abilities by economic factors. Notably, while the AI exhibited equal metacognitive abilities for reward and punishment, humans exhibited an asymmetric response to monetary gain and loss: humans adjust their memory retention in response to gain and their learning speed in response to loss. This asymmetric property may provide valuable insights into the neural mechanisms underlying human metacognition.
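    One common way to formalize this is the classic state-space model of motor adaptation, in which memory x is updated as x ← a·x + b·e, with retention factor a and learning rate b; a meta-level then tunes a and b. The sketch below uses that generic model with illustrative parameter values to show the reported asymmetry — gain modulating retention, loss modulating learning speed. It is a sketch under these assumptions, not the authors’ implementation.

    ```python
    # State-space motor adaptation, x[t+1] = a*x[t] + b*e[t]:
    # 'a' is memory retention, 'b' is learning speed. Illustrative values.

    def adapt(retention, learning_rate, n_trials=30, washout=20, target=1.0):
        x, history = 0.0, []
        for t in range(n_trials + washout):
            error = (target - x) if t < n_trials else 0.0  # perturbation off
            x = retention * x + learning_rate * error
            history.append(x)
        return history

    baseline = adapt(retention=0.95, learning_rate=0.20)
    gain     = adapt(retention=0.99, learning_rate=0.20)  # reward: retain more
    loss     = adapt(retention=0.95, learning_rate=0.35)  # loss: learn faster

    print(f"learned after 30 trials: baseline={baseline[29]:.2f}, loss={loss[29]:.2f}")
    print(f"memory after washout:    baseline={baseline[-1]:.2f}, gain={gain[-1]:.2f}")
    ```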
    The researchers anticipate that these findings could be effectively applied to enhance the learning abilities of individuals engaging in new sports or motor-related activities, such as post-stroke rehabilitation training.
    This work was supported by the Japan Society for the Promotion of Science KAKENHI (JP19H04977, JP19H05729, and JP22H00498). TS was supported by a JSPS Research Fellowship for Young Scientists and KAKENHI (JP19J20366). NS was supported by NIH R21 NS120274.

  • Organic electronics: Sustainability during the entire lifecycle

    Organic electronics can make a decisive contribution to decarbonization and, at the same time, help to cut the consumption of rare and valuable raw materials. To do so, it is not only necessary to further develop manufacturing processes, but also to devise technical solutions for recycling as early as the laboratory phase. Materials scientists from Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) are now promoting this circular strategy in conjunction with researchers from the UK and USA.
    Organic electronic components, such as solar modules, have several exceptional features. They can be applied in extremely thin layers on flexible carrier materials and therefore have a wider range of applications than crystalline materials. Since their photoactive substances are carbon based, they also contribute to cutting the consumption of rare, expensive and sometimes toxic materials such as iridium, platinum and silver.
    Organic electronic components are experiencing major growth in the field of OLED technologies in particular, above all for television and computer screens. “On the one hand, this is progress, but on the other, it causes some problems,” says Prof. Dr. Christoph Brabec, Chair of Materials Science (Materials in Electronics and Energy Technology) at FAU and Director of the Helmholtz Institute Erlangen-Nürnberg for Renewable Energy (HI ERN). As a materials scientist, Brabec sees the danger of permanently incorporating environmentally friendly technology into a device architecture that is not sustainable as a whole. This affects not only electronic devices, but also organic sensors in textiles, which have an extremely short operating life. Brabec: “Applied research in particular must now set the course to ensure that electronic components and all their individual parts leave an ecological footprint that is as small as possible during their entire lifecycle.”
    More efficient synthesis and more robust materials
    The further development of organic electronics itself is fundamental here, since new materials and more efficient manufacturing processes reduce the outlay and energy required during production. “Compared with simple polymers, the manufacturing process for the photoactive layer requires significantly higher amounts of energy, as it is deposited in a vacuum at high temperatures,” explains Brabec. The researchers therefore propose cheaper and more environmentally friendly processes, such as deposition from water-based solutions and printing using inkjet processes. Brabec: “One major challenge is developing functional materials that can be processed without toxic solvents that are harmful to the environment.” In the case of OLED screens, inkjet printing also offers the possibility of replacing precious metals such as iridium and platinum with organic materials.
    In addition to efficiency, the operating stability of the materials is decisive. Complex encapsulation is required to protect the vacuum-deposited carbon layers of organic solar modules; this encapsulation can account for up to two thirds of a module’s overall weight. More robust combinations of materials could therefore yield significant savings in materials, weight, and energy.
    Planning the recycling process in the laboratory
    To make a realistic evaluation of the environmental footprint of organic electronics, the entire product lifecycle has to be considered. In terms of output, organic photovoltaic systems are still lagging behind conventional silicon modules, but 30% less CO2 is emitted during the manufacturing process. Aiming for maximum efficiency levels is not everything, says Brabec: “18 percent could make more sense environmentally than 20, if it’s possible to manufacture the photoactive material in five steps instead of eight.”
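    Brabec’s arithmetic comes down to a lifecycle division: emissions per kilowatt-hour are manufacturing emissions divided by lifetime energy yield, so a simpler synthesis can beat a higher efficiency. The numbers in the sketch below are purely illustrative assumptions chosen to show the shape of the argument, not measured values.

    ```python
    # Back-of-the-envelope lifecycle comparison. All constants are assumed.
    CO2_PER_STEP = 10.0   # kg CO2 per synthesis step per module (assumed)
    KWH_PER_PCT = 50.0    # lifetime kWh per efficiency point (assumed)

    def co2_per_kwh(efficiency_pct, synthesis_steps):
        return (synthesis_steps * CO2_PER_STEP) / (efficiency_pct * KWH_PER_PCT)

    print(f"20% module, 8 steps: {co2_per_kwh(20, 8):.3f} kg CO2/kWh")  # 0.080
    print(f"18% module, 5 steps: {co2_per_kwh(18, 5):.3f} kg CO2/kWh")  # 0.056
    ```

    Under these assumed numbers, the 18-percent module with the simpler synthesis ends up with the smaller footprint per kilowatt-hour, which is exactly the tradeoff Brabec describes.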
    In addition, the shorter operating life of organic modules is relative if you look more closely. Although photovoltaic modules based on silicon last longer, they are very difficult to recycle. “Biocompatibility and biodegradability will increasingly become important criteria, both for product development and for packaging design,” says Christoph Brabec. “We really must start taking recycling into consideration in the laboratory.” This means, for example, using substrates that can either be easily recycled or that are as biodegradable as the active substances. Using so-called multilayer designs as early as the product design phase could ensure that the various materials can easily be separated and recycled at the end of the product lifecycle. Brabec: “This cradle-to-cradle approach will be a decisive prerequisite for establishing organic electronics as an important component in the transition to renewable energy.”

  • AI tool decodes brain cancer’s genome during surgery

    Scientists have designed an AI tool that can rapidly decode a brain tumor’s DNA to determine its molecular identity during surgery — critical information that can take a few days, and up to a few weeks, to obtain under the current approach.
    Knowing a tumor’s molecular type enables neurosurgeons to make decisions such as how much brain tissue to remove and whether to place tumor-killing drugs directly into the brain — while the patient is still on the operating table.
    A report on the work, led by Harvard Medical School researchers, is published July 7 in the journal Med.
    Accurate molecular diagnosis — which details DNA alterations in a cell — during surgery can help a neurosurgeon decide how much brain tissue to remove. Removing too much when the tumor is less aggressive can affect a patient’s neurologic and cognitive function. Likewise, removing too little when the tumor is highly aggressive may leave behind malignant tissue that can grow and spread quickly.
    “Right now, even state-of-the-art clinical practice cannot profile tumors molecularly during surgery. Our tool overcomes this challenge by extracting thus-far untapped biomedical signals from frozen pathology slides,” said study senior author Kun-Hsing Yu, assistant professor of biomedical informatics in the Blavatnik Institute at HMS.
    Knowing a tumor’s molecular identity during surgery is also valuable because certain tumors benefit from on-the-spot treatment with drug-coated wafers placed directly into the brain at the time of the operation, Yu said.

    “The ability to determine intraoperative molecular diagnosis in real time, during surgery, can propel the development of real-time precision oncology,” Yu added.
    The standard intraoperative diagnostic approach used now involves taking brain tissue, freezing it, and examining it under a microscope. A major drawback is that freezing the tissue tends to alter the appearance of cells under a microscope and can interfere with the accuracy of clinical evaluation. Furthermore, the human eye, even when using potent microscopes, cannot reliably detect subtle genomic variations on a slide.
    The new AI approach overcomes these challenges.
    The tool, called CHARM (Cryosection Histopathology Assessment and Review Machine), is freely available to other researchers. It still has to be clinically validated through testing in real-world settings and cleared by the FDA before deployment in hospitals, the research team said.
    Cracking cancer’s molecular code
    Recent advances in genomics have allowed pathologists to differentiate the molecular signatures — and the behaviors that such signatures portend — across various types of brain cancer as well as within specific types of brain cancer. For example, glioma — the most aggressive brain tumor and the most common form of brain cancer — has three main subvariants that carry different molecular markers and have different propensities for growth and spread.

    The new tool’s ability to expedite molecular diagnosis could be particularly valuable in areas with limited access to technology to perform rapid cancer genetic sequencing.
    Beyond the decisions made during surgery, knowledge of a tumor’s molecular type provides clues about its aggressiveness, behavior, and likely response to various treatments. Such knowledge can inform post-operative decisions.
    Furthermore, the new tool enables during-surgery diagnoses aligned with the World Health Organization’s recently updated classification system for diagnosing and grading the severity of gliomas, which calls for such diagnoses to be made based on a tumor’s genomic profile.
    Training CHARM
    CHARM was developed using 2,334 brain tumor samples from 1,524 people with glioma from three different patient populations. When tested on a never-before-seen set of brain samples, the tool distinguished tumors with specific molecular mutations with 93 percent accuracy and successfully classified three major types of gliomas with distinct molecular features that carry different prognoses and respond differently to treatments.
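    As a generic illustration of how such slide-level calls are typically made, the sketch below tiles a frozen-section image, classifies each tile with a stand-in network, and averages the tile probabilities into one slide-level prediction. The tiny network and the class labels are hypothetical placeholders, not CHARM’s published architecture.

    ```python
    # Generic tile-and-aggregate inference for a whole-slide image.
    # Stand-in classifier and placeholder class names, not CHARM itself.
    import torch
    import torch.nn as nn

    CLASSES = ["IDH-mutant", "IDH-wildtype", "other"]   # placeholder labels

    tile_classifier = nn.Sequential(    # stand-in for a trained CNN
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(8, len(CLASSES)),
    )

    def classify_slide(slide, tile=224):
        """slide: 3 x H x W tensor; returns slide-level class probabilities."""
        logits = []
        _, h, w = slide.shape
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                patch = slide[:, y:y + tile, x:x + tile].unsqueeze(0)
                logits.append(tile_classifier(patch))
        return torch.cat(logits).softmax(dim=1).mean(dim=0)  # average votes

    probs = classify_slide(torch.rand(3, 896, 896))
    print(dict(zip(CLASSES, probs.tolist())))
    ```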
    Going a step further, the tool successfully captured visual characteristics of the tissue surrounding the malignant cells. It was capable of spotting telltale areas with greater cellular density and more cell death within samples, both of which signal more aggressive glioma types.
    The tool was also able to pinpoint clinically important molecular alterations in a subset of low-grade gliomas, a subtype of glioma that is less aggressive and therefore less likely to invade surrounding tissue. Each of these alterations signals a different propensity for growth, spread, and response to treatment.
    The tool further connected the appearance of the cells — the shape of their nuclei, the presence of edema around the cells — with the molecular profile of the tumor. This means that the algorithm can pinpoint how a cell’s appearance relates to the molecular type of a tumor.
    This ability to assess the broader context around the image renders the model more accurate and closer to how a human pathologist would visually assess a tumor sample, Yu said.
    The researchers say that while the model was trained and tested on glioma samples, it could be successfully retrained to identify other brain cancer subtypes.
    Scientists have already designed AI models to profile other types of cancer — colon, lung, breast — but gliomas have remained particularly challenging due to their molecular complexity and huge variation in tumor cells’ shape and appearance.
    The CHARM tool would have to be retrained periodically to reflect new disease classifications as they emerge from new knowledge, Yu said.
    “Just like human clinicians who must engage in ongoing education and training, AI tools must keep up with the latest knowledge to remain at peak performance.”
    Authorship, funding, disclosures
    Coinvestigators included MacLean P. Nasrallah, Junhan Zhao, Cheng Che Tsai, David Meredith, Eliana Marostica, Keith L. Ligon, and Jeffrey A. Golden.
    This work was supported in part by the National Institute of General Medical Sciences grant R35GM142879, the Google Research Scholar Award, the Blavatnik Center for Computational Biomedicine Award, the Partners Innovation Discovery Grant, and the Schlager Family Award for Early-Stage Digital Health Innovations.