More stories

  • Pairing imaging, AI may improve colon cancer screening, diagnosis

    A research team from the lab of Quing Zhu, the Edwin H. Murty Professor of Engineering in the Department of Biomedical Engineering at the McKelvey School of Engineering at Washington University in St. Louis, has combined optical coherence tomography (OCT) and machine learning to develop a colorectal cancer imaging tool that may one day improve the traditional endoscopy currently used by doctors.
    The results were published in the June issue of the Journal of Biophotonics.
    Screening for colon cancer now relies on human visual inspection of tissue during a colonoscopy procedure. This technique, however, cannot detect or diagnose subsurface lesions.
    An endoscopic OCT essentially shines a light in the colon to help a clinician see deeper to visualize and diagnose abnormalities. By collaborating with physicians at Washington University School of Medicine and with Chao Zhou, associate professor of biomedical engineering, the team developed a small OCT catheter, which uses a longer wavelength of light, to penetrate 1-2 mm into the tissue samples.
    Hongbo Luo, a PhD student in Zhu’s lab, led the work.
    The technique provided more information about an abnormality than surface-level, white-light images currently used by physicians. Shuying Li, a biomedical engineering PhD student, used the imaging data to train a machine learning algorithm to differentiate between “normal” and “cancerous” tissue. The combined system allowed them to detect and classify cancerous tissue samples with a 93% diagnostic accuracy.
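The pipeline sketched in this story — extract features from OCT images, then train a classifier to separate normal from cancerous tissue — can be illustrated with a deliberately tiny example. Everything below (the two features, the synthetic data, the logistic-regression model) is invented for illustration; the paper's actual algorithm and image features are not specified here.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp the logit to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

def make_sample(cancerous):
    # Hypothetical OCT-derived features, e.g. subsurface scattering
    # strength and texture variance; values are invented.
    base = (0.7, 0.8) if cancerous else (0.3, 0.2)
    return [b + random.gauss(0, 0.1) for b in base], 1 if cancerous else 0

data = [make_sample(i % 2 == 0) for i in range(200)]

# Train a logistic-regression classifier with plain gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(200):
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        g = p - y                  # gradient of the log-loss w.r.t. the logit
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

accuracy = sum(
    (sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5) == (y == 1) for x, y in data
) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

On this cleanly separated synthetic data the toy model classifies nearly everything correctly; the study's reported 93% accuracy came from real tissue samples and a presumably far richer model.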
    Zhu also is a professor of radiology at the School of Medicine. Her team worked with Vladimir Kushnir and Vladimir Lamm at the School of Medicine. Zhu’s team of PhD students, including Tiger Nie, started a trial in patients in July 2022.
    Story Source:
    Materials provided by Washington University in St. Louis. Original written by Brandie Jefferson. Note: Content may be edited for style and length.

  • Proteins and natural language: Artificial intelligence enables the design of novel proteins

    Artificial intelligence (AI) has created new possibilities for designing tailor-made proteins to solve everything from medical to ecological problems. A research team at the University of Bayreuth led by Prof. Dr. Birte Höcker has now successfully applied a computer-based natural language processing model to protein research. Completely independently, the ProtGPT2 model designs new proteins that are capable of stable folding and could take over defined functions in larger molecular contexts. The model and its potential are detailed scientifically in Nature Communications.
    Natural languages and proteins are actually similar in structure. Amino acids arrange themselves in a multitude of combinations to form structures that have specific functions in the living organism — similar to the way words form sentences in different combinations that express certain facts. In recent years, numerous approaches have therefore been developed to apply the principles and processes of computer-assisted natural language processing to protein research. “Natural language processing has made extraordinary progress thanks to new AI technologies. Today, models of language processing enable machines not only to understand meaningful sentences but also to generate them themselves. Such a model was the starting point of our research. With information on about 50 million sequences of natural proteins, my colleague Noelia Ferruz trained the model and enabled it to generate protein sequences on its own. It now understands the language of proteins and can use it creatively. We have found that these creative designs follow the basic principles of natural proteins,” says Prof. Dr. Birte Höcker, Head of the Protein Design Group at the University of Bayreuth.
    The language processing model transferred to protein evolution is called “ProtGPT2.” It can now be used to design proteins that adopt stable structures through folding and are permanently functional in this state. In addition, the Bayreuth biochemists have found out, through complex investigations, that the model can even create proteins that do not occur in nature and have possibly never existed in the history of evolution. These findings shed light on the immeasurable world of possible proteins and open a door to designing them in novel and unexplored ways. There is a further advantage: Most proteins that have been designed de novo so far have idealised structures. Before such structures can have a potential application, they usually must pass through an elaborate functionalization process — for example by inserting extensions and cavities — so that they can interact with their environment and take on precisely defined functions in larger system contexts. ProtGPT2, on the other hand, generates proteins that have such differentiated structures innately, and are thus already operational in their respective environments.
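As a toy illustration of the “proteins as language” idea — learning the statistics of amino-acid sequences and sampling new ones — here is a character-level bigram model. The real ProtGPT2 is a large transformer trained on roughly 50 million natural sequences; the short training sequences and the model below are invented and vastly simpler.

```python
import random
from collections import defaultdict

random.seed(1)

# Hypothetical training sequences (not real proteins).
training = ["MKTAYIAKQR", "MKLVINGKTL", "MKTIIALSYI", "MKVLAAGIAL"]

# Count bigram transitions over amino-acid letters, with "^" as a start token.
counts = defaultdict(lambda: defaultdict(int))
for seq in training:
    prev = "^"
    for aa in seq:
        counts[prev][aa] += 1
        prev = aa

def sample(length=10):
    """Generate a sequence by walking the bigram transition table."""
    prev, out = "^", []
    for _ in range(length):
        nxt = counts[prev]
        if not nxt:                # dead end: restart from the start token
            nxt = counts["^"]
        letters = list(nxt)
        weights = [nxt[a] for a in letters]
        prev = random.choices(letters, weights=weights)[0]
        out.append(prev)
    return "".join(out)

novel = sample()
print(novel)  # a new 10-letter sequence following the training statistics
```

The generated string obeys the local statistics of the training set without copying any one sequence — a crude analogue of how a language model “writes” sequences it has never seen.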
    “Our new model is another impressive demonstration of the systemic affinity of protein design and natural language processing. Artificial intelligence opens up highly interesting and promising possibilities to use methods of language processing for the production of customised proteins. At the University of Bayreuth, we hope to contribute in this way to developing innovative solutions for biomedical, pharmaceutical, and ecological problems,” says Prof. Dr. Birte Höcker.
    Story Source:
    Materials provided by Universität Bayreuth. Note: Content may be edited for style and length.

  • Gesture-based communication techniques may ease video meeting challenges

    Researchers have developed and demonstrated the potential benefit of a simple set of physical gestures that participants in online group video meetings can use to improve their meeting experience. Paul D. Hills and colleagues from University College London and the University of Exeter, U.K., present the technique, which they call Video Meeting Signals (VMS™), in the open-access journal PLOS ONE on August 3, 2022.
    During the COVID-19 pandemic, online video conferencing has been a useful tool for industry, education, and social interactions. However, it has also been associated with poor mental well-being, poor communication, and fatigue.
    To help overcome the challenges of online video meetings, Hills developed VMS, a set of simple physical gestures that can be used alongside verbal communication during a video meeting. The gestures — including two thumbs up to signal agreement or a hand over the heart to show sympathy — are meant to improve experiences by serving a similar function as subtle face-to-face signals, such as raised eyebrows, while being more visible in a small video setting.
    To investigate the potential of VMS, Hills and colleagues first tested it among more than 100 undergraduate students. After half were trained on the technique, the students participated in two video-based seminars in groups of about 10 students each, before answering a survey about their experience.
    Analysis of the survey results showed that, compared to students without VMS training, those with VMS training reported a better personal experience, better feelings about their seminar group, and better learning outcomes. Analysis of seminar transcripts also suggested that students with VMS training were more likely to use positive language.
    Similar results were seen in a follow-up experiment with participants who were not students. This experiment also suggested that participants trained to use emojis instead of VMS gestures did not report the same improvements as those with VMS training.
    These findings suggest that VMS may be an effective technique to help overcome the challenges of video conferencing. In the future, the researchers plan to continue to study VMS, for instance by investigating the mechanisms that may underlie its effects and how to apply it for maximum benefit.
    Paul D. Hills adds: “Our research indicates that there’s something about the use of gestures specifically which appears to help online interactions and help people connect and engage with each other. This can improve team performance, make meetings more inclusive and help with psychological wellbeing.”
    Daniel C. Richardson adds: “Because you can’t make eye contact or pick up on subtle nods, gestures and murmurs of agreement or dissent in video conferences, it can be hard to know if people are engaged with what you’re saying. We found strong evidence that encouraging people to use more natural hand gestures had a much better effect on their experience.”
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • Machine learning enables optimal design of anti-biofouling polymer brush films

    Polymer brush films consist of monomer chains grown in close proximity on a substrate. The monomers, which look like “bristles” at the nanoscale, form a highly functional and versatile coating that can selectively adsorb or repel a variety of chemicals or biological molecules. For instance, polymer brush films have been used as a scaffold to grow biological cells and as protective anti-biofouling coatings that repel unwanted biological organisms.
    As anti-biofouling coatings, polymer brushes have been designed based primarily on the interaction between monomers and water molecules. While this makes for simple design, quantitative prediction of the adsorption of biomolecules, such as proteins, onto monomers has proved challenging owing to the complex interactions involved.
    Now, in a recent study published in ACS Biomaterials Science & Engineering, a research group led by Associate Professor Tomohiro Hayashi from Tokyo Institute of Technology (Tokyo Tech), Japan, has used machine learning to predict these interactions and identify the film characteristics that have a significant impact on protein adsorption.
    In their study, the team fabricated 51 different polymer brush films of different thicknesses and densities with five different monomers to train the machine learning algorithm. They then tested several of these algorithms to see how well their predictions matched up against the measured protein adsorption. “We tested several supervised regression algorithms, namely gradient boosting regression, support vector regression, linear regression, and random forest regression, to select the most reliable and suitable model in terms of the prediction accuracy,” says Dr. Hayashi.
    Out of these models, the random forest (RF) regression model showed the best agreement with the measured protein adsorption values. Accordingly, the researchers used the RF model to correlate the physical and chemical properties of the polymer brush with its ability to adsorb serum protein and allow for cell adhesion.
    “Our analyses showed that the hydrophobicity index, or the relative hydrophobicity, was the most critical parameter. Next in line were the thickness and density of the polymer brush films, the number of C-H bonds, and the net charge on the monomer. Monomer molecular weight and the number of O-H bonds, on the other hand, were ranked low in importance,” highlights Dr. Hayashi.
    Given the highly varied nature of polymer brush films and the multiple factors that affect the monomer-protein interactions, adoption of machine learning as a way to optimize polymer brush film properties can provide a good starting point for the efficient design of anti-biofouling materials and functional biomaterials.
    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  • Augmented reality could be the future of paper books, according to new research

    Augmented reality might allow printed books to make a comeback against the e-book trend, according to researchers from the University of Surrey.
    Surrey has introduced the third generation (3G) version of its Next Generation Paper (NGP) project, allowing the reader to consume information on the printed paper and screen side by side.
    Dr Radu Sporea, senior lecturer at the Advanced Technology Institute (ATI), comments:
    “The way we consume literature has changed over time with so many more options than just paper books. Multiple electronic solutions currently exist, including e-readers and smart devices, but no hybrid solution which is sustainable on a commercial scale.
    “Augmented books, or a-books, can be the future of many book genres, from travel and tourism to education. This technology exists to assist the reader in a deeper understanding of the written topic and get more through digital means without ruining the experience of reading a paper book.”
    Power efficiency and pre-printed conductive paper are some of the new features which allow Surrey’s augmented books to now be manufactured on a semi-industrial scale. With no wiring visible to the reader, Surrey’s augmented reality books allow users to trigger digital content with a simple gesture (such as a swipe of a finger or turn of a page), which will then be displayed on a nearby device.
    George Bairaktaris, Postgraduate researcher at the University of Surrey and part of the Next Generation Paper project team, said:
    “The original research was carried out to enrich travel experiences by creating augmented travel guides. This upgraded 3G model allows for the possibility of using augmented books for different areas such as education. In addition, the new model disturbs the reader less by automatically recognising the open page and triggering the multimedia content.”
    “What started as an augmented book project, evolved further into scalable user interfaces. The techniques and knowledge from the project led us into exploring organic materials and printing techniques to fabricate scalable sensors for interfaces beyond the a-book.”
    Story Source:
    Materials provided by University of Surrey. Note: Content may be edited for style and length.

  • Smart lighting system based on quantum dots more accurately reproduces daylight

    Researchers have designed smart, colour-controllable white light devices from quantum dots — tiny semiconductors just a few billionths of a metre in size — which are more efficient and have better colour saturation than standard LEDs, and can dynamically reproduce daylight conditions in a single light.
    The researchers, from the University of Cambridge, designed the next-generation smart lighting system using a combination of nanotechnology, colour science, advanced computational methods, electronics and a unique fabrication process.
    The team found that by using more than the three primary lighting colours used in typical LEDs, they were able to reproduce daylight more accurately. Early tests of the new design showed excellent colour rendering, a wider operating range than current smart lighting technology, and wider spectrum of white light customisation. The results are reported in the journal Nature Communications.
    As the availability and characteristics of ambient light are connected with wellbeing, the widespread availability of smart lighting systems can have a positive effect on human health since these systems can respond to individual mood. Smart lighting can also respond to circadian rhythms, which regulate the daily sleep-wake cycle, so that light is reddish-white in the morning and evening, and bluish-white during the day.
    When a room has sufficient natural or artificial light, good glare control, and views of the outdoors, it is said to have good levels of visual comfort. In indoor environments under artificial light, visual comfort depends on how accurately colours are rendered. Since the colour of objects is determined by illumination, smart white lighting needs to be able to accurately express the colour of surrounding objects. Current technology achieves this by using three different colours of light simultaneously.
    Quantum dots have been studied and developed as light sources since the 1990s, due to their high colour tunability and colour purity. Due to their unique optoelectronic properties, they show excellent colour performance, combining wide colour controllability with high colour rendering capability.
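The benefit of mixing more than three primaries can be illustrated as a spectral-fitting problem: choose non-negative channel weights so the summed emitter spectra approximate a target spectrum. The Gaussian "emitter" spectra, the stand-in daylight curve, and the projected-gradient optimiser below are all invented for illustration, not the researchers' actual method.

```python
import math

N = 32  # wavelength samples across 400-700 nm

def gaussian(center, width):
    """Toy emitter spectrum: a Gaussian peak sampled across 400-700 nm."""
    return [math.exp(-((400 + 300 * i / (N - 1)) - center) ** 2 / (2 * width ** 2))
            for i in range(N)]

# Stand-in "daylight" target curve (invented).
target = [0.6 + 0.4 * math.sin(i / 5) for i in range(N)]

def fit(primaries, steps=3000, lr=0.1):
    """Fit non-negative channel weights by projected gradient descent;
    return the mean squared residual of the best mix found."""
    w = [0.5] * len(primaries)
    for _ in range(steps):
        mix = [sum(w[j] * p[i] for j, p in enumerate(primaries)) for i in range(N)]
        err = [m - t for m, t in zip(mix, target)]
        for j, p in enumerate(primaries):
            grad = sum(2 * e * p[i] for i, e in enumerate(err)) / N
            w[j] = max(0.0, w[j] - lr * grad)   # keep weights non-negative
    mix = [sum(w[j] * p[i] for j, p in enumerate(primaries)) for i in range(N)]
    return sum((m - t) ** 2 for m, t in zip(mix, target)) / N

three = [gaussian(c, 30) for c in (450, 550, 650)]          # RGB-like primaries
five = three + [gaussian(c, 30) for c in (500, 600)]        # two extra channels

err3, err5 = fit(three), fit(five)
print(f"3-primary residual: {err3:.4f}, 5-primary residual: {err5:.4f}")
```

The extra channels fill the spectral gaps between the three broad primaries, so the five-channel mix matches the target curve with a smaller residual — the same intuition behind using more than three colours to render daylight faithfully.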

  • Using artificial intelligence to control digital manufacturing

    Scientists and engineers are constantly developing new materials with unique properties that can be used for 3D printing, but figuring out how to print with these materials can be a complex, costly conundrum.
    Often, an expert operator must use manual trial-and-error — possibly making thousands of prints — to determine ideal parameters that consistently print a new material effectively. These parameters include printing speed and how much material the printer deposits.
    MIT researchers have now used artificial intelligence to streamline this procedure. They developed a machine-learning system that uses computer vision to watch the manufacturing process and then correct errors in how it handles the material in real time.
    They used simulations to teach a neural network how to adjust printing parameters to minimize error, and then applied that controller to a real 3D printer. Their system printed objects more accurately than all the other 3D printing controllers they compared it to.
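The observe-and-correct loop described above can be sketched in miniature. Here a simple proportional controller stands in for the learned neural-network policy, and a one-line simulator stands in for the printer and its vision system; all numbers are invented.

```python
import random

random.seed(3)

TARGET = 1.0                     # desired layer height (arbitrary units)

def simulate_layer(flow_rate, drift):
    """Toy printer: deposited height scales with flow rate, plus disturbances."""
    return flow_rate * (1.0 + drift) + random.gauss(0, 0.01)

flow = 1.5                       # deliberately mis-tuned start, as with a new material
drift = 0.0
errors = []
for layer in range(50):
    drift += 0.005               # material behaviour slowly changes during the print
    height = simulate_layer(flow, drift)   # stands in for the vision measurement
    error = height - TARGET
    errors.append(abs(error))
    flow -= 0.5 * error          # proportional correction of the printing parameter

print(f"mean |error|, first 10 layers: {sum(errors[:10]) / 10:.3f}")
print(f"mean |error|, last 10 layers:  {sum(errors[-10:]) / 10:.3f}")
```

The controller pulls the mis-tuned parameter toward the right value within a few layers and then tracks the slow drift, which is the behaviour the MIT system learns (with a far more capable policy) from simulation.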
    The work avoids the prohibitively expensive process of printing thousands or millions of real objects to train the neural network. And it could enable engineers to more easily incorporate novel materials into their prints, which could help them develop objects with special electrical or chemical properties. It could also help technicians make adjustments to the printing process on-the-fly if material or environmental conditions change unexpectedly.
    “This project is really the first demonstration of building a manufacturing system that uses machine learning to learn a complex control policy,” says senior author Wojciech Matusik, professor of electrical engineering and computer science at MIT who leads the Computational Design and Fabrication Group (CDFG) within the Computer Science and Artificial Intelligence Laboratory (CSAIL). “If you have manufacturing machines that are more intelligent, they can adapt to the changing environment in the workplace in real-time, to improve the yields or the accuracy of the system. You can squeeze more out of the machine.”
    The co-lead authors are Mike Foshey, a mechanical engineer and project manager in the CDFG, and Michal Piovarci, a postdoc at the Institute of Science and Technology in Austria. MIT co-authors include Jie Xu, a graduate student in electrical engineering and computer science, and Timothy Erps, a former technical associate with the CDFG. The research will be presented at the Association for Computing Machinery’s SIGGRAPH conference.

  • Computer modelling aims to inform restoration, conservation of coral reefs

    A UBC Okanagan research team has created a computer modelling program to help scientists predict the effect of climate damage and eventual restoration plans on coral reefs around the globe.
    This is a critical objective, says Dr. Bruno Carturan, because climate change is killing many coral species and can lead to the collapse of entire coral reef ecosystems. But, because they are so complex, it’s logistically challenging to study the impact of devastation and regeneration of coral reefs.
    Real-world experiments are impractical, as researchers would need to manipulate and disrupt large areas of reefs, along with coral colonies and herbivore populations, and then monitor the changes in structure and diversity over many years.
    “Needless to say, conducting experiments that will disturb natural coral reefs is unethical and should be avoided, while using big aquariums is simply unfeasible,” says Dr. Carturan, who recently completed his doctoral studies with the Irving K. Barber Faculty of Science. “For these reasons, no such experiments have ever been conducted, which has hindered our capacity to predict coral diversity and the associated resilience of the reefs.”
    For his latest research, published recently in Frontiers in Ecology and Evolution, Dr. Carturan used models to create 245 coral communities, each with a unique set of nine species and each occupying a surface of 25 square metres. The model represents coral colonies and different species of algae that grow, compete and reproduce together while also being impacted by climate.
    Crucially, he notes, all the key components of the model, including species’ traits such as competitive abilities and growth rates, are informed by pre-existing, real-world data from 800 species.
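The kind of spatial simulation described — colonies that grow and compete for space at trait-determined rates — can be sketched as a minimal grid model. The two species, grid size, and growth probabilities below are invented; the actual model is far richer, tracking 245 nine-species communities parameterised with trait data from 800 real species.

```python
import random

random.seed(4)

SIZE = 20
EMPTY, CORAL, ALGAE = 0, 1, 2
GROWTH = {CORAL: 0.3, ALGAE: 0.2}   # hypothetical per-step spread probabilities

# Start with a half-empty reef patch shared by coral and algae.
grid = [[random.choice([EMPTY, EMPTY, CORAL, ALGAE]) for _ in range(SIZE)]
        for _ in range(SIZE)]

def neighbours(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE:
            yield r + dr, c + dc

def step(grid):
    """One time step: each occupied cell may colonise adjacent empty cells."""
    new = [row[:] for row in grid]
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] != EMPTY:
                continue
            for nr, nc in neighbours(r, c):
                occupant = grid[nr][nc]
                if occupant != EMPTY and random.random() < GROWTH[occupant]:
                    new[r][c] = occupant
                    break
    return new

for _ in range(25):
    grid = step(grid)

cover = sum(cell == CORAL for row in grid for cell in row) / SIZE ** 2
print(f"coral cover after 25 steps: {cover:.0%}")
```

Running many such communities with different species sets and trait values — as the real model does at much higher fidelity — lets researchers compare diversity and cover outcomes without disturbing a single living reef.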