More stories

  • Scientists code ChatGPT to design new medicine

    Generative artificial intelligence platforms, from ChatGPT to Midjourney, grabbed headlines in 2023. But GenAI can do more than create collaged images and help write emails — it can also design new drugs to treat disease.
    Today, scientists use advanced technology to design new synthetic drug compounds with the right properties and characteristics, also known as “de novo drug design.” However, current methods can be labor-, time-, and cost-intensive.
    Inspired by ChatGPT’s popularity, and wondering whether a similar approach could speed up drug design, scientists in the Schmid College of Science and Technology at Chapman University in Orange, California, decided to create their own genAI model, detailed in the new paper “De Novo Drug Design using Transformer-based Machine Translation and Reinforcement Learning of Adaptive Monte-Carlo Tree Search,” to be published in the journal Pharmaceuticals. Dony Ang, Cyril Rakovski, and Hagop Atamian built a model that learns from a massive dataset of known chemicals: how they bind to target proteins, and the rules and syntax of chemical structure and properties writ large.
    The end result can generate countless unique molecular structures that follow essential chemical and biological constraints and effectively bind to their targets — promising to vastly accelerate the process of identifying viable drug candidates for a wide range of diseases, at a fraction of the cost.
    To create the model, the researchers integrated two cutting-edge AI techniques for the first time in the fields of bioinformatics and cheminformatics: the well-known “Encoder-Decoder Transformer architecture” and “Reinforcement Learning via Monte Carlo Tree Search” (RL-MCTS). The platform, fittingly named “drugAI,” allows users to input a target protein sequence (for instance, a protein typically involved in cancer progression). DrugAI, trained on data from the comprehensive public database BindingDB, generates unique molecular structures from scratch and then iteratively refines the candidates, ensuring that finalists exhibit strong binding affinities to their drug targets — crucial for the efficacy of potential drugs. For a given target, the model identifies 50-100 new molecules likely to inhibit that particular protein.
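    The paper’s full pipeline is not reproduced here, but the core RL-MCTS idea, a tree search that grows candidate molecules token by token under the guidance of a learned policy and a reward signal, can be sketched in Python. Everything below (the fragment vocabulary, `policy`, and `reward`) is a toy stand-in for drugAI’s trained transformer and binding-affinity scoring, not the published implementation:

    ```python
    # Toy Monte Carlo Tree Search over SMILES fragments (illustrative only).
    import math, random

    VOCAB = ["C", "N", "O", "c1ccccc1", "=O", "<end>"]  # toy SMILES fragments

    def policy(prefix):
        """Stand-in for the transformer decoder: prior probability per token."""
        w = [random.random() for _ in VOCAB]
        s = sum(w)
        return [x / s for x in w]

    def reward(sequence):
        """Stand-in for validity / binding-affinity scoring of a candidate."""
        return random.random()

    class Node:
        def __init__(self, prefix):
            self.prefix = prefix
            self.children = {}   # token -> Node
            self.visits = 0
            self.value = 0.0

    def ucb(parent, child, prior, c=1.4):
        # PUCT-style score: exploit average value, explore rarely-visited nodes
        if child.visits == 0:
            return float("inf")
        return child.value / child.visits + c * prior * math.sqrt(parent.visits) / (1 + child.visits)

    def mcts(root, n_simulations=200, max_len=8):
        for _ in range(n_simulations):
            node, path = root, [root]
            # selection / expansion down the tree of partial molecules
            while len(node.prefix) < max_len and node.prefix[-1:] != ["<end>"]:
                priors = policy(node.prefix)
                for tok in VOCAB:
                    node.children.setdefault(tok, Node(node.prefix + [tok]))
                tok = max(VOCAB, key=lambda t: ucb(node, node.children[t], priors[VOCAB.index(t)]))
                node = node.children[tok]
                path.append(node)
                if node.visits == 0:
                    break
            # evaluate the (possibly partial) candidate and backpropagate
            r = reward(node.prefix)
            for n in path:
                n.visits += 1
                n.value += r
        best = max(root.children.values(), key=lambda n: n.visits)
        return best.prefix

    print("best first extension:", mcts(Node(["C"])))
    ```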
    “This approach allows us to generate a potential drug that has never been conceived of,” Dr. Atamian said. “It’s been tested and validated. Now, we’re seeing magnificent results.”
    Researchers assessed the molecules drugAI generated along several criteria, and found drugAI’s results were of similar quality to those from two other common methods, and in some cases, better. DrugAI’s candidate drugs had a validity rate of 100%, meaning every structure it generated was a chemically valid molecule; moreover, none of the generated molecules were present in the training set. DrugAI’s candidates were also measured for drug-likeness, or the similarity of a compound’s properties to those of oral drugs, and their drug-likeness scores were 42% and 75% higher than those of the two comparison models, respectively. Plus, all drugAI-generated molecules exhibited strong binding affinities to their respective targets, comparable to those identified via traditional virtual screening approaches.
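    For a concrete sense of how such metrics are computed, here is a minimal sketch using RDKit, a common open-source cheminformatics toolkit (the study’s own tooling is not specified here, and these SMILES strings are illustrative, not drugAI outputs):

    ```python
    # Validity and drug-likeness checks on candidate molecules (illustrative).
    from rdkit import Chem
    from rdkit.Chem import QED

    candidates = ["CCO", "c1ccccc1C(=O)O", "not_a_molecule"]  # toy SMILES list

    # a candidate is "valid" if RDKit can parse it into a real molecule
    valid = [s for s in candidates if Chem.MolFromSmiles(s) is not None]
    print(f"validity: {len(valid)}/{len(candidates)}")

    for smiles in valid:
        mol = Chem.MolFromSmiles(smiles)
        # QED scores drug-likeness on a 0-1 scale, from properties of oral drugs
        print(smiles, round(QED.qed(mol), 2))
    ```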
    Ang, Rakovski and Atamian also wanted to see how drugAI’s results for a specific disease compared to existing known drugs for that disease. In a separate experiment, screening methods generated a list of natural products that inhibit a COVID-19 protein; drugAI generated a list of novel molecules targeting the same protein so their characteristics could be compared. The researchers compared drug-likeness and binding affinity between the natural molecules and drugAI’s candidates, and found similar measurements in both — but drugAI was able to identify its candidates much more quickly and inexpensively.
    Plus, the scientists designed the algorithm to have a flexible structure that allows future researchers to add new functions. “That means you’re going to end up with more refined drug candidates with an even higher probability of ending up as a real drug,” said Dr. Atamian. “We’re excited for the possibilities moving forward.”

  • How teachers make ethical judgments when using AI in the classroom

    A teacher’s gender and comfort with technology factor into whether artificial intelligence is adopted in the classroom, as shown in a new report from the USC Center for Generative AI and Society.
    The study, “AI in K-12 Classrooms: Ethical Considerations and Lessons Learned,” explores how teachers make ethical judgments about using AI in their classrooms. The paper — authored by Stephen Aguilar, associate director of the center and assistant professor of education at the USC Rossier School of Education — details differences in ethical evaluations of generative AI, as well as rule-based and outcome-based views regarding AI.
    “The way we teach critical thinking will change with AI,” said Aguilar. “Students will need to judge when, how and for what purpose they will use generative AI. Their ethical perspectives will drive those decisions.”
    The study is part of a larger report from the USC Center for Generative AI and Society titled “Critical Thinking and Ethics in the Age of Generative AI in Education.” In addition to the study, the report introduces the center’s inaugural AI Fellows Program to support critical thinking and writing in the age of AI for undergraduate students, and looks ahead to building the next generation of generative AI tools. The center advances the USC Frontiers of Computing initiative, a $1 billion-plus investment to promote and expand advanced computing research and education across the university in a strategic, thoughtful way.
    Ethical ramifications a key factor in adoption of AI in the classroom
    As AI technologies become more prevalent in the classroom, it is essential for educators to consider the ethical implications and foster critical thinking skills among students. Taking a thoughtful approach, educators will need to guide students in evaluating AI-generated content and encourage them to question the ethical considerations surrounding the use of AI.
    The study’s goal was to understand the teachers’ perspectives on ethics around AI. Teachers were asked to rate how much they agreed with different ethical ideas and to rate their willingness to use generative AI, like ChatGPT, in their classrooms.

    The study included 248 K-12 educators from public, charter and private schools, who had an average of 11 years of teaching experience. Of those who participated, 43% taught at elementary school, 16% taught middle school and 40% taught high school students. Over half of participants identified as women; educators from 41 states in the United States participated.
    The published results suggest gender-based nuances. “What we found was that women teachers in our study were more likely to rate their deontological approaches higher,” said Aguilar. “Male teachers cared more about the consequences of AI.” In other words, female teachers endorsed rule-based (deontological) perspectives more strongly than their male counterparts did.
    The sample also suggests that self-efficacy (confidence in using technology) and anxiety (worry about using technology) were important in both rule-based and outcome-based views regarding AI use. “Teachers who had more self-advocacy with using [AI] felt more confident using technologies or had less anxiety,” said Aguilar. “Both of those were important in terms of the sorts of judgments that they’re making.”
    Aguilar found the philosophical thought experiment known as the “trolley problem” applicable to his research. The trolley problem is a moral dilemma that asks whether it is morally acceptable to sacrifice one person to save a greater number. The classroom analogue: is a teacher a rule-follower (the “deontological” perspective) or an outcome-seeker (the “consequentialist” perspective) when deciding when, where and how students can use generative AI?
    In the study, Aguilar concluded that teachers are “active participants, grappling with the moral challenges posed by AI.” Educators are also asking deeper questions about AI system values and student fairness. While teachers have different points of view on AI, there is a consensus for the need to adopt an ethical framework for AI in education.
    Generative AI holds ‘great promise’ as educational tool
    The report is the first from the USC Center for Generative AI and Society. Announced in March 2023, the center was created to explore the transformative impact of AI on culture, education, media and society. The center is led by co-directors William Swartout, chief science officer for the Institute for Creative Technologies at the USC Viterbi School of Engineering, who leads the education effort, and Holly Willis, a professor and chair of the media arts + practice division at the USC School of Cinematic Arts, who is researching AI’s intersection with media and culture.

    “Rather than banning generative AI from the classroom, we need to rethink the educational process and consider how generative AI might be used to improve education, much like we did years ago for mathematics education when cheap calculators became available,” said Swartout, whose analysis “Generative AI and Education: Deny and Detect or Embrace and Enhance?” appears in the overall report. “For example, by asking students to look at texts produced by generative AI and consider whether the facts are right and the arguments make sense, we could help improve their critical thinking skills.”
    Swartout said generative AI could be used to help a student brainstorm a topic before they begin writing, posing questions like “Are there alternative points of view on this topic?” or “What would be a counterargument to what I’m proposing?” Generative AI can also be used to critique an essay, pointing out ways it could be improved, he added. Fears about using these tools to cheat could be alleviated with a process-based approach to evaluating a student’s work.
    “To reduce the risk of cheating, we need to record and evaluate the process that a student goes through in creating an essay, rather than just grading the artifact at the end,” he said.
    “Incorporating generative AI into the classroom — if done right — holds great promise as an educational tool.”
    The report also includes research from Gale Sinatra and Changzhao Wang of USC Rossier, undergraduate Eric Bui of the USC Dornsife College of Letters, Arts and Sciences and Benjamin Nye of the USC Institute for Creative Technologies, who also serves as an associate director for the Center for Generative AI and Society.
    “We must ensure that such technologies are employed to augment human capabilities, not to replace them, to preserve the inherently relational and emotional aspects of teaching and learning,” said USC Rossier Dean Pedro Noguera. “The USC Center for Generative AI and Society’s new report is an invitation to educators, policymakers, technologists and learners to examine how generative AI can contribute to the future of education.”

  • A machine learning framework that encodes images like a retina

    A major challenge to developing better neural prostheses is sensory encoding: transforming information captured from the environment by sensors into neural signals that can be interpreted by the nervous system. But because the number of electrodes in a prosthesis is limited, this environmental input must be reduced in some way, while still preserving the quality of the data that is transmitted to the brain.
    Demetri Psaltis (Optics Lab) and Christophe Moser (Laboratory of Applied Photonics Devices) collaborated with Diego Ghezzi of the Hôpital ophtalmique Jules-Gonin — Fondation Asile des Aveugles (previously Medtronic Chair in Neuroengineering at EPFL) to apply machine learning to the problem of compressing image data with multiple dimensions, such as color and contrast. In their case, the compression goal was downsampling: reducing the number of pixels of an image so it can be transmitted via a retinal prosthesis.
    “Downsampling for retinal implants is currently done by pixel averaging, which is essentially what graphics software does when you want to reduce a file size. But at the end of the day, this is a mathematical process; there is no learning involved,” Ghezzi explains.
    “We found that if we applied a learning-based approach, we got improved results in terms of optimized sensory encoding. But more surprising was that when we used an unconstrained neural network, it learned to mimic aspects of retinal processing on its own.”
    Specifically, the researchers’ machine learning approach, called an actor-model framework, was especially good at finding a “sweet spot” for image contrast. Ghezzi uses Photoshop as an example. “If you move the contrast slider too far in one or the other direction, the image becomes harder to see. Our network evolved filters to reproduce some of the characteristics of retinal processing.”
    The results have recently been published in Nature Communications.
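    For reference, the learning-free baseline Ghezzi describes, pixel averaging, amounts to taking a block mean over the image. A minimal NumPy sketch (the sizes are illustrative):

    ```python
    # Downsampling by pixel averaging: each output pixel is the mean of a block.
    import numpy as np

    img = np.random.rand(128, 128)  # stand-in for a high-resolution camera frame
    block = 4                       # 4x4 blocks: 128x128 -> 32x32

    small = img.reshape(128 // block, block, 128 // block, block).mean(axis=(1, 3))
    print(small.shape)  # (32, 32)
    ```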
    Validation both in-silico and ex-vivo
    In the actor-model framework, two neural networks work in a complementary fashion. The model portion, or forward model, acts as a digital twin of the retina: it is first trained to receive a high-resolution image and output a binary neural code as similar as possible to the code generated by a biological retina. The actor network is then trained to produce a downsampled image that elicits a neural code from the forward model as close as possible to the code the biological retina produces in response to the original high-resolution image.
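    A highly simplified PyTorch sketch of this two-network setup, with a placeholder retina_code standing in for recorded biological responses and both networks shrunk to toy convnets (every layer size and name here is an assumption for illustration, not the published architecture):

    ```python
    # Actor-model sketch: a "digital twin" of the retina plus a learned downsampler.
    import torch
    import torch.nn as nn

    class ForwardModel(nn.Module):
        """Digital twin: high-resolution image -> binary-like retinal code."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
        def forward(self, x):
            return self.net(x)

    class Actor(nn.Module):
        """Learned downsampler: 128x128 image -> 32x32 'prosthesis' image."""
        def __init__(self, factor=4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                nn.Conv2d(8, 1, factor, stride=factor))
        def forward(self, x):
            return self.net(x)

    def retina_code(img):
        # placeholder for measured biological responses to the original image
        return (img > img.mean()).float()

    twin, actor = ForwardModel(), Actor()
    upsample = nn.Upsample(scale_factor=4)  # how the retina "sees" the low-res image
    img = torch.rand(1, 1, 128, 128)

    # Step 1: fit the twin so its code matches the biological code (one step shown).
    opt_twin = torch.optim.Adam(twin.parameters())
    nn.functional.binary_cross_entropy(twin(img), retina_code(img)).backward()
    opt_twin.step()

    # Step 2: freeze the twin, then train the actor so the twin's response to the
    # downsampled image matches the biological response to the original image.
    for p in twin.parameters():
        p.requires_grad_(False)
    opt_actor = torch.optim.Adam(actor.parameters())
    code = twin(upsample(actor(img)))
    nn.functional.binary_cross_entropy(code, retina_code(img)).backward()
    opt_actor.step()
    ```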

    Using this framework, the researchers tested downsampled images on both the retinal digital twin and on mouse retinas that had been removed (explanted) and placed in a culture medium. Both experiments revealed that the actor-model approach produced images that elicited a neuronal response more akin to the original-image response than images generated by a learning-free computational approach such as pixel averaging.
    Despite the methodological and ethical challenges involved in using explanted mouse retinas, Ghezzi says that it was this ex-vivo validation of their model that makes their study a true innovation in the field.
    “We cannot only trust the digital, or in-silico, model. This is why we did these experiments — to validate our approach.”
    Other sensory horizons
    Given the team’s past experience working on retinal prostheses, vision restoration was a natural first application of the actor-model framework for sensory encoding, but Ghezzi sees potential to expand its applications within and beyond that realm. He adds that it will be important to determine how much of the model, which was validated using mouse retinas, is applicable to humans.
    “The obvious next step is to see how can we compress an image more broadly, beyond pixel reduction, so that the framework can play with multiple visual dimensions at the same time. Another possibility is to transpose this retinal model to outputs from other regions of the brain. It could even potentially be linked to other devices, like auditory or limb prostheses,” Ghezzi says.

  • Could artificial intelligence help or hurt scientific research articles?

    Since its introduction to the public in November 2022, ChatGPT, an artificial intelligence system, has substantially grown in use, creating written stories, graphics, art and more with just a short prompt from the user. But when it comes to scientific, peer-reviewed research, could the tool be useful?
    “Right now, many journals do not want people to use ChatGPT to write their articles, but a lot of people are still trying to use it,” said Melissa Kacena, PhD, vice chair of research and a professor of orthopaedic surgery at the Indiana University School of Medicine. “We wanted to study whether ChatGPT is able to write a scientific article and what are the different ways you could successfully use it.”
    The researchers took three different topics — fractures and the nervous system, Alzheimer’s disease and bone health, and COVID-19 and bone health — and prompted the subscription version of ChatGPT ($20/month) to create scientific articles about them. They took three different approaches to the original draft of each article: all human, all ChatGPT, or a combination. The study is published as a compilation of 12 articles in a new, special edition of Current Osteoporosis Reports.
    “The standard way of writing a review article is to do a literature search, write an outline, start writing, and then faculty members revise and edit the draft,” Kacena said. “We collected data about how much time it takes for this human method and how much time it takes for ChatGPT to write and then for faculty to edit the different articles.”
    In the articles written only by ChatGPT, up to 70% of the references were wrong. The AI-assisted approach with more human involvement instead produced more plagiarism, especially when the tool was given more references up front. Overall, using AI decreased the time spent writing, but the articles required more extensive fact-checking.
    Another concern is with the writing style used by ChatGPT. Even though the tool was prompted to use a higher level of scientific writing, the words and phrases were not necessarily written at the level someone would expect to see from a researcher.
    “It was repetitive writing and even if it was structured the way you learn to write in school, it was scary to know there were maybe incorrect references or wrong information,” said Lilian Plotkin, PhD, professor of anatomy, cell biology and physiology at the IU School of Medicine and coauthor on five of the papers.

    Jill Fehrenbacher, PhD, associate professor of pharmacology and toxicology at the school and coauthor on nine of the papers, said she believes even though many scientific journals do not want authors to use ChatGPT, many people still will — especially non-native English speakers.
    “People may still write everything themselves, but then put it into ChatGPT to fix their grammar or help with their writing, so I think we need to look at how do we shepherd people in using it appropriately and even helping them?” Fehrenbacher said. “We hope to provide a guide for the scientific community so that if people are going to use it, here are some tips and advice.”
    “I think it’s here to stay, but we need to understand how we can use it in an appropriate manner that won’t compromise someone’s reputation or spread misinformation,” Kacena said.
    Faculty and students from several departments and centers across the IU School of Medicine were involved, including orthopaedic surgery; anatomy, cell biology and physiology; pharmacology and toxicology; radiology and imaging sciences; anesthesia; the Stark Neuroscience Research Institute; the Indiana Center for Musculoskeletal Health; and the IU School of Dentistry. Authors are also affiliated with the Richard L. Roudebush Veterans Affairs Medical Center in Indianapolis, Eastern Virginia Medical School in Norfolk, Virginia, and Mount Holyoke College in South Hadley, Massachusetts.

  • Doctors have more difficulty diagnosing disease when looking at images of darker skin

    When diagnosing skin diseases based solely on images of a patient’s skin, doctors do not perform as well when the patient has darker skin, according to a new study from MIT researchers.
    The study, which included more than 1,000 dermatologists and general practitioners, found that dermatologists accurately characterized about 38 percent of the images they saw, but only 34 percent of those that showed darker skin. General practitioners, who were less accurate overall, showed a similar decrease in accuracy with darker skin.
    The research team also found that assistance from an artificial intelligence algorithm could improve doctors’ accuracy, although those improvements were greater when diagnosing patients with lighter skin.
    While this is the first study to demonstrate physician diagnostic disparities across skin tone, other studies have found that the images used in dermatology textbooks and training materials predominantly feature lighter skin tones. That may be one factor contributing to the discrepancy, the MIT team says, along with the possibility that some doctors may have less experience in treating patients with darker skin.
    “Probably no doctor is intending to do worse on any type of person, but it might be the fact that you don’t have all the knowledge and the experience, and therefore on certain groups of people, you might do worse,” says Matt Groh PhD ’23, an assistant professor at the Northwestern University Kellogg School of Management. “This is one of those situations where you need empirical evidence to help people figure out how you might want to change policies around dermatology education.”
    Groh is the lead author of the study, which appears today in Nature Medicine. Rosalind Picard, an MIT professor of media arts and sciences, is the senior author of the paper.
    Diagnostic discrepancies
    Several years ago, an MIT study led by Joy Buolamwini PhD ’22 found that facial-analysis programs had much higher error rates when predicting the gender of darker-skinned people. That finding inspired Groh, who studies human-AI collaboration, to look into whether AI models, and possibly doctors themselves, might have difficulty diagnosing skin diseases on darker shades of skin — and whether those diagnostic abilities could be improved.

    “This seemed like a great opportunity to identify whether there’s a social problem going on and how we might want to fix that, and also identify how to best build AI assistance into medical decision-making,” Groh says. “I’m very interested in how we can apply machine learning to real-world problems, specifically around how to help experts be better at their jobs. Medicine is a space where people are making really important decisions, and if we could improve their decision-making, we could improve patient outcomes.”
    To assess doctors’ diagnostic accuracy, the researchers compiled an array of 364 images from dermatology textbooks and other sources, representing 46 skin diseases across many shades of skin.
    Most of these images depicted one of eight inflammatory skin diseases, including atopic dermatitis, Lyme disease, and secondary syphilis, as well as a rare form of cancer called cutaneous T-cell lymphoma (CTCL), which can appear similar to an inflammatory skin condition. Many of these diseases, including Lyme disease, can present differently on dark and light skin.
    The research team recruited subjects for the study through Sermo, a social networking site for doctors. The total study group included 389 board-certified dermatologists, 116 dermatology residents, 459 general practitioners, and 154 other types of doctors.
    Each of the study participants was shown 10 of the images and asked for their top three predictions for what disease each image might represent. They were also asked if they would refer the patient for a biopsy. In addition, the general practitioners were asked if they would refer the patient to a dermatologist.
    “This is not as comprehensive as in-person triage, where the doctor can examine the skin from different angles and control the lighting,” Picard says. “However, skin images are more scalable for online triage, and they are easy to input into a machine-learning algorithm, which can estimate likely diagnoses speedily.”
    The researchers found that, not surprisingly, specialists in dermatology had higher accuracy rates: They classified 38 percent of the images correctly, compared to 19 percent for general practitioners.

    Both of these groups lost about four percentage points in accuracy when trying to diagnose skin conditions based on images of darker skin — a statistically significant drop. Dermatologists were also less likely to refer darker skin images of CTCL for biopsy, but more likely to refer them for biopsy for noncancerous skin conditions.
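    To make the scoring concrete, here is a small pandas sketch of the kind of top-three accuracy comparison the study reports, on a hypothetical response table (the column names and rows are invented for illustration, not study data):

    ```python
    # Top-3 diagnostic accuracy broken down by specialty and skin tone.
    import pandas as pd

    df = pd.DataFrame({
        "specialty": ["dermatologist", "dermatologist", "gp", "gp"],
        "skin_tone": ["light", "dark", "light", "dark"],
        "truth":     ["psoriasis", "ctcl", "lyme", "syphilis"],
        "guess_1":   ["psoriasis", "eczema", "lyme", "eczema"],
        "guess_2":   ["eczema", "psoriasis", "eczema", "psoriasis"],
        "guess_3":   ["lyme", "lyme", "ctcl", "lyme"],
    })

    # a response counts as correct if any of the top-3 guesses matches the truth
    df["correct"] = df.apply(
        lambda r: r["truth"] in (r["guess_1"], r["guess_2"], r["guess_3"]), axis=1)

    # accuracy per group, as in the study's specialty / skin-tone comparison
    print(df.groupby(["specialty", "skin_tone"])["correct"].mean())
    ```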
    A boost from AI
    After evaluating how doctors performed on their own, the researchers also gave them additional images to analyze with assistance from an AI algorithm the researchers had developed. The researchers trained this algorithm on about 30,000 images, asking it to classify the images as one of the eight diseases that most of the images represented, plus a ninth category of “other.”
    This algorithm had an accuracy rate of about 47 percent. The researchers also created another version of the algorithm with an artificially inflated success rate of 84 percent, allowing them to evaluate whether the accuracy of the model would influence doctors’ likelihood to take its recommendations.
    “This allows us to evaluate AI assistance with models that are currently the best we can do, and with AI assistance that could be more accurate, maybe five years from now, with better data and models,” Groh says.
    Both of these classifiers were equally accurate on light and dark skin. The researchers found that using either of the AI algorithms improved accuracy for both dermatologists (up to 60 percent) and general practitioners (up to 47 percent).
    They also found that doctors were more likely to take suggestions from the higher-accuracy algorithm after it provided a few correct answers, but they rarely incorporated AI suggestions that were incorrect. This suggests that the doctors are highly skilled at ruling out diseases and won’t take AI suggestions for a disease they have already ruled out, Groh says.
    “They’re pretty good at not taking AI advice when the AI is wrong and the physicians are right. That’s something that is useful to know,” he says.
    While dermatologists using AI assistance showed similar increases in accuracy when looking at images of light or dark skin, general practitioners showed greater improvement on images of lighter skin than darker skin.
    “This study allows us to see not only how AI assistance influences decisions, but how it influences them across levels of expertise,” Groh says. “What might be going on there is that the PCPs don’t have as much experience, so they don’t know if they should rule a disease out or not because they aren’t as deep into the details of how different skin diseases might look on different shades of skin.”
    The researchers hope that their findings will help stimulate medical schools and textbooks to incorporate more training on patients with darker skin. The findings could also help to guide the deployment of AI assistance programs for dermatology, which many companies are now developing.
    The research was funded by the MIT Media Lab Consortium and the Harold Horowitz Student Research Fund.

  • One person can supervise ‘swarm’ of 100 unmanned autonomous vehicles

    Research involving Oregon State University has shown that a “swarm” of more than 100 autonomous ground and aerial robots can be supervised by one person without subjecting the individual to an undue workload.
    The findings represent a big step toward efficiently and economically using swarms in a range of roles from wildland firefighting to package delivery to disaster response in urban environments.
    “We don’t see a lot of delivery drones yet in the United States, but there are companies that have been deploying them in other countries,” said Julie A. Adams of the OSU College of Engineering. “It makes business sense to deploy delivery drones at a scale, but it will require a single person be responsible for very large numbers of these drones. I’m not saying our work is a final solution that shows everything is OK, but it is the first step toward getting additional data that would facilitate that kind of a system.”
    The results, published in Field Robotics, stem from the Defense Advanced Research Projects Agency program known as OFFSET, short for Offensive Swarm-Enabled Tactics. Adams was part of a group that received an OFFSET grant in 2017.
    During the course of the four-year project, researchers deployed swarms of up to 250 autonomous vehicles — multi-rotor aerial drones and ground rovers — able to gather information in “concrete canyon” urban surroundings where line-of-sight, satellite-based communication is impaired by buildings. The information the swarms collect during missions at military urban training sites has the potential to help keep U.S. troops and civilians safer.
    Adams was a co-principal investigator on one of two swarm system integrator teams that developed the system infrastructure and integrated the work of other teams focused on swarm tactics, swarm autonomy, human-swarm teaming, physical experimentation and virtual environments.
    “The project required taking off-the-shelf technologies and building the autonomy needed for them to be deployed by a single human called the swarm commander,” said Adams, the associate director for deployed systems and policy at OSU’s Collaborative Robotics and Intelligent Systems Institute. “That work also required developing not just the needed systems and the software, but also the user interface for that swarm commander to allow a single human to deploy these ground and aerial systems.”
    Collaborators with Smart Information Flow Technologies developed a virtual reality interface called I3 that lets the commander control the swarm with high-level directions.

    “The commanders weren’t physically driving each individual vehicle, because if you’re deploying that many vehicles, they can’t — a single human can’t do that,” Adams said. “The idea is that the swarm commander can select a play to be executed and can make minor adjustments to it, like a quarterback would in the NFL. The objective data from the trained swarm commanders demonstrated that a single human can deploy these systems in built environments, which has very broad implications beyond this project.”
    Testing took place at multiple Department of Defense Combined Armed Collective Training Facilities. Each multiday field exercise introduced additional vehicles, and every 10 minutes swarm commanders provided information about their workload and how stressed or fatigued they were.
    During the final field exercise, featuring more than 100 vehicles, the commanders’ workload levels were also assessed through physiological sensors that fed information into an algorithm that estimates someone’s sensory channel workload levels and their overall workload.
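    The project’s workload algorithm itself is not detailed in this article, but the general idea, combining per-channel estimates from physiological sensors into an overall score that is checked against an overload threshold, can be sketched as follows (the channel names, weights, and threshold are invented for illustration):

    ```python
    # Toy multi-channel workload monitor with an overload threshold.
    import random

    CHANNELS = {"visual": 0.3, "auditory": 0.15, "cognitive": 0.35, "motor": 0.2}
    OVERLOAD_THRESHOLD = 0.7  # hypothetical normalized workload limit

    def read_sensors():
        """Stand-in for physiological sensor readings, normalized to [0, 1]."""
        return {ch: random.random() for ch in CHANNELS}

    def overall_workload(readings):
        # weighted sum of per-channel workload estimates
        return sum(CHANNELS[ch] * readings[ch] for ch in CHANNELS)

    for minute in range(0, 60, 10):  # commanders reported every 10 minutes
        w = overall_workload(read_sensors())
        flag = "OVERLOAD" if w > OVERLOAD_THRESHOLD else "ok"
        print(f"t={minute:2d} min  workload={w:.2f}  {flag}")
    ```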
    “The swarm commanders’ workload estimate did cross the overload threshold frequently, but just for a few minutes at a time, and the commander was able to successfully complete the missions, often under challenging temperature and wind conditions,” Adams said.

  • Computer-engineered DNA to study cell identities

    A new computer program allows scientists to design synthetic DNA segments that indicate, in real time, the state of cells. Reported by the Gargiulo lab in Nature Communications, it will be used to screen for drugs against cancers or viral infections, and to improve gene- and cell-based immunotherapies.
    All the cells in our body have the same genetic code, and yet they can differ in their identities, functions and disease states. Telling one cell apart from another in a simple manner, in real time, would prove invaluable for scientists trying to understand inflammation, infections or cancers. Now, scientists at the Max Delbrück Center have created an algorithm that designs such tools: segments of DNA called “synthetic locus control regions” (sLCRs) that reveal the identity and state of cells, and that can be used in a variety of biological systems. The findings, from the lab of Dr Gaetano Gargiulo, head of the Molecular Oncology Lab, are reported in Nature Communications.
    “This algorithm enables us to create precise DNA tools for marking and studying cells, offering new insights into cellular behaviors,” says Gargiulo, senior author of the study. “We hope this research opens doors to a more straightforward and scalable way of understanding and manipulating cells.”
    This effort began when Dr Carlos Company, a former graduate student at the Gargiulo lab and co-first author of the study, started to invest energy into making the design of the DNA tools automated and accessible to other scientists. He coded an algorithm that can generate tools to understand basic cellular processes as well as disease processes such as cancers, inflammation and infections.
    “This tool allows researchers to examine the way cells transform from one type to another. It is particularly innovative because it compiles all the crucial instructions that direct these changes into a simple synthetic DNA sequence. In turn, this simplifies studying complex cellular behaviors in important areas like cancer research and human development,” says Company.
    Algorithm to make a tailored DNA tool
    The computer program is named “logical design of synthetic cis-regulatory DNA” (LSD). The researchers input the known genes and transcription factors associated with the specific cell states they want to study, and the program uses this information to identify DNA segments (promoters and enhancers) that control gene activity in the cells of interest. This information is sufficient to discover functional sequences: scientists do not have to know the precise genetic or molecular reason behind a cell’s behavior; they just have to construct the sLCR.

    The program looks within human or mouse genomes to find places where transcription factors are highly likely to bind, says Yuliia Dramaretska, a graduate student at the Gargiulo lab and co-first author. It outputs a list of relevant 150-base-pair sequences, which likely act as the active promoters and enhancers for the condition being studied.
    “It’s not giving a random list of those regions, obviously,” she says. “The algorithm is actually ranking them and finding the segments that will most efficiently represent the phenotype you want to study.”
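    A toy Python sketch of this kind of scan-and-rank step: score every 150-base-pair window of a sequence by matches to transcription-factor motifs and keep the top-ranked windows. The sequence and motifs below are fabricated for illustration, and LSD’s actual scoring is far more sophisticated than simple substring counting:

    ```python
    # Score and rank 150-bp windows by transcription-factor motif matches (toy).
    import random

    random.seed(0)
    genome = "".join(random.choice("ACGT") for _ in range(5000))
    genome = genome[:1000] + "TGACTCA" + genome[1000:]  # plant one site for the demo
    motifs = ["TGACTCA", "GGGACTTTCC"]  # e.g. AP-1- and NF-kB-like consensus sites

    def window_score(window):
        """Count motif occurrences in a candidate cis-regulatory window."""
        return sum(window.count(m) for m in motifs)

    WINDOW = 150
    scored = [(window_score(genome[i:i + WINDOW]), i)
              for i in range(0, len(genome) - WINDOW)]
    scored.sort(reverse=True)

    # top-ranked 150-bp candidates to stitch into a synthetic locus control region
    for score, start in scored[:3]:
        print(f"pos {start:4d}  score {score}")
    ```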
    Like a lamp inside the cells
    Scientists can then make a tool, called a “synthetic locus control region” (sLCR), which includes the generated sequence followed by a DNA segment encoding a fluorescent protein. “The sLCRs are like an automated lamp that you can put inside of the cells. This lamp switches on only under the conditions you want to study,” says Dr Michela Serresi, a researcher at the Gargiulo lab and co-first author. The color of the “lamp” can be varied to match different states of interest, so that scientists can look under a fluorescence microscope and immediately know the state of each cell from its color. “We can follow with our eyes the color in a petri dish when we give a treatment,” Serresi says.
    The scientists have validated the utility of the computer program by using it to screen for drugs in SARS-CoV-2-infected cells, as published last year in Science Advances. They also used it to find mechanisms implicated in brain cancers called glioblastomas, where no single treatment works. “In order to find treatment combinations that work for specific cell states in glioblastomas, you not only need to understand what defines these cell states, but you also need to see them as they arise,” says Dr Matthias Jürgen Schmitt, a researcher at the Gargiulo lab and co-first author, who used the tools in the lab to showcase their value.
    Now, imagine immune cells engineered in the lab as a gene therapy to kill a type of cancer. When infused into the patient, not all of these cells will work as intended: some will be potent, while others may be in a dysfunctional state. Funded by a European Research Council grant, the Gargiulo lab will be using this system to study the behavior of these delicate anti-cancer cell-based therapeutics during manufacturing. “With the right collaborations, this method holds potential for advancing treatments in areas like cancer, viral infections, and immunotherapies,” Gargiulo says.

  • Direct view of tantalum oxidation that impedes qubit coherence

    Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory and DOE’s Pacific Northwest National Laboratory (PNNL) have used a combination of scanning transmission electron microscopy (STEM) and computational modeling to get a closer look and deeper understanding of tantalum oxide. When this amorphous oxide layer forms on the surface of tantalum — a superconductor that shows great promise for making the “qubit” building blocks of a quantum computer — it can impede the material’s ability to retain quantum information. Learning how the oxide forms may offer clues as to why this happens — and potentially point to ways to prevent quantum coherence loss. The research was recently published in the journal ACS Nano.
    The paper builds on earlier research by a team at Brookhaven’s Center for Functional Nanomaterials (CFN), Brookhaven’s National Synchrotron Light Source II (NSLS-II), and Princeton University that was conducted as part of the Co-design Center for Quantum Advantage (C2QA), a Brookhaven-led national quantum information science research center in which Princeton is a key partner.
    “In that work, we used X-ray photoemission spectroscopy at NSLS-II to infer details about the type of oxide that forms on the surface of tantalum when it is exposed to oxygen in the air,” said Mingzhao Liu, a CFN scientist and one of the lead authors on the study. “But we wanted to understand more about the chemistry of this very thin layer of oxide by making direct measurements,” he explained.
    So, in the new study, the team partnered with scientists in Brookhaven’s Condensed Matter Physics & Materials Science (CMPMS) Department to use advanced STEM techniques that enabled them to study the ultrathin oxide layer directly. They also worked with theorists at PNNL who performed computational modeling that revealed the most likely arrangements and interactions of atoms in the material as they underwent oxidation. Together, these methods helped the team build an atomic-level understanding of the ordered crystalline lattice of tantalum metal, the amorphous oxide that forms on its surface, and intriguing new details about the interface between these layers.
    “The key is to understand the interface between the surface oxide layer and the tantalum film because this interface can profoundly impact qubit performance,” said study co-author Yimei Zhu, a physicist from CMPMS, echoing the wisdom of Nobel laureate Herbert Kroemer, who famously asserted, “The interface is the device.”
    Emphasizing that “quantitatively probing a mere one-to-two-atomic-layer-thick interface poses a formidable challenge,” Zhu noted, “we were able to directly measure the atomic structures and bonding states of the oxide layer and tantalum film as well as identify those of the interface using the advanced electron microscopy techniques developed at Brookhaven.”
    “The measurements reveal that the interface consists of a ‘suboxide’ layer nestled between the periodically ordered tantalum atoms and the fully disordered amorphous tantalum oxide. Within this suboxide layer, only a few oxygen atoms are integrated into the tantalum crystal lattice,” Zhu said.

    The combined structural and chemical measurements offer a crucially detailed perspective on the material. Density functional theory calculations then helped the scientists validate and gain deeper insight into these observations.
    “We simulated the effect of gradual surface oxidation by gradually increasing the number of oxygen species at the surface and in the subsurface region,” said Peter Sushko, one of the PNNL theorists.
    By assessing the thermodynamic stability, structure, and electronic property changes of the tantalum films during oxidation, the scientists concluded that while the fully oxidized amorphous layer acts as an insulator, the suboxide layer retains features of a metal.
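    Schematically, that stability assessment compares formation energies per added oxygen atom. A toy calculation with made-up numbers standing in for the DFT total energies:

    ```python
    # Formation energy per oxygen atom for increasingly oxidized slabs (toy values).
    E_SLAB = -1000.0   # hypothetical total energy of the clean Ta slab (eV)
    E_O2 = -9.8        # hypothetical total energy of an O2 molecule (eV)

    # hypothetical slab energies with n oxygen atoms at surface/subsurface sites
    E_OXIDIZED = {1: -1006.1, 2: -1012.0, 4: -1023.4, 8: -1045.0}

    for n, e in E_OXIDIZED.items():
        # E_f = [E(slab + nO) - E(slab) - (n/2) * E(O2)] / n, per oxygen atom;
        # more negative means that oxidation step is more favorable
        e_f = (e - E_SLAB - n * E_O2 / 2) / n
        print(f"n={n}: formation energy {e_f:+.2f} eV/O")
    ```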
    “We always thought if the tantalum is oxidized, it becomes completely amorphous, with no crystalline order at all,” said Liu. “But in the suboxide layer, the tantalum sites are still quite ordered.”
    With both fully oxidized tantalum and a suboxide layer present, the scientists wanted to understand which part is most responsible for the loss of coherence in qubits made of this superconducting material.
    “It’s likely the oxide has multiple roles,” Liu said.

    First, he noted, the fully oxidized amorphous layer contains many lattice defects. That is, the locations of the atoms are not well defined. Some atoms can shift around to different configurations, each with a different energy level. Though these shifts are small, each one consumes a tiny bit of electrical energy, which contributes to loss of energy from the qubit.
    “This so-called two-level system loss in an amorphous material brings parasitic and irreversible loss to the quantum coherence — the ability of the material to hold onto quantum information,” Liu said.
    But because the suboxide layer is still crystalline, “it may not be as bad as people were thinking,” Liu said. Maybe the more-fixed atomic arrangements in this layer will minimize two-level system loss.
    Then again, he noted, because the suboxide layer has some metallic characteristics, it could cause other problems.
    “When you put a normal metal next to a superconductor, that could contribute to breaking up the pairs of electrons that move through the material with no resistance,” he noted. “If the pair breaks into two electrons again, then you will have loss of superconductivity and coherence. And that is not what you want.”
    Future studies may reveal more details and strategies for preventing loss of superconductivity and quantum coherence in tantalum.
    This research was funded by the DOE Office of Science (BES). In addition to the experimental facilities described above, this research used computational resources at CFN and at the National Energy Research Scientific Computing Center (NERSC) at DOE’s Lawrence Berkeley National Laboratory. CFN, NSLS-II, and NERSC are DOE Office of Science user facilities.