More stories

  • Recording thousands of nerve cell impulses at high resolution

    For over 15 years, ETH Professor Andreas Hierlemann and his group have been developing microelectrode-array chips that can be used to precisely excite nerve cells in cell cultures and to measure electrical cell activity. These developments make it possible to grow nerve cells in cell-culture dishes and, using chips located at the bottom of the dish, to examine each individual cell in a connected nerve tissue in detail. Alternative methods for conducting such measurements have clear limitations: they are either very time-consuming, because contact with each cell has to be established individually, or they require the use of fluorescent dyes, which influence the behaviour of the cells and hence the outcome of the experiments.
    Now, researchers from Hierlemann’s group at the Department of Biosystems Science and Engineering of ETH Zurich in Basel, together with Urs Frey and his colleagues from the ETH spin-off MaxWell Biosystems, have developed a new generation of microelectrode-array chips. These chips enable detailed recordings from considerably more electrodes than previous systems, which opens up new applications.
    Stronger signal required
    As with previous chip generations, the new chips have around 20,000 microelectrodes in an area measuring 2 by 4 millimetres. To ensure that these electrodes pick up the relatively weak nerve impulses, the signals need to be amplified. Examples of weak signals that the scientists want to detect include those of nerve cells derived from human induced pluripotent stem cells (iPS cells), which are currently used in many cell-culture disease models. Strong amplification is also needed when the researchers want to track nerve impulses in axons, the very fine, fibrous extensions of a nerve cell.
    However, high-performance amplification electronics take up space, which is why the previous chip was able to simultaneously amplify and read out signals from only 1,000 of the 20,000 electrodes. Although the 1,000 electrodes could be arbitrarily selected, they had to be determined prior to every measurement. This meant that it was possible to make detailed recordings over only a fraction of the chip area during a measurement.
    Background noise reduced
    In the new chip, the amplifiers are smaller, permitting the signals of all 20,000 electrodes to be amplified and measured at the same time. However, the smaller amplifiers have higher noise levels. So, to make sure they capture even the weakest nerve impulses, the researchers included some of the larger, more powerful amplifiers in the new chips and employ a nifty trick: they use these powerful amplifiers to identify the time points at which nerve impulses occur in the cell-culture dish. At these time points, they can then search for signals on the other electrodes, and by averaging several successive signals they can reduce the background noise. This procedure yields a clear image of the signal activity over the entire area being measured.
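    As a rough sketch of that averaging step (the array names and window length are illustrative assumptions, not the group’s actual processing pipeline), the idea can be written in a few lines of Python:

    ```python
    import numpy as np

    def spike_triggered_average(trace, spike_times, window=60):
        """Average snippets of a noisy recording around externally detected spike times.

        trace       : 1-D array of samples from one of the small, noisier amplifiers
        spike_times : sample indices at which the low-noise amplifiers detected a spike
        window      : number of samples kept on each side of every spike

        Averaging N aligned snippets suppresses uncorrelated background noise by
        roughly a factor of sqrt(N), so weak but repeated waveforms become visible.
        """
        snippets = [
            trace[t - window:t + window]
            for t in spike_times
            if t - window >= 0 and t + window <= len(trace)
        ]
        return np.mean(snippets, axis=0)

    # Hypothetical usage: spike times come from the powerful amplifiers,
    # the trace comes from one of the 20,000 small-amplifier channels.
    # mean_waveform = spike_triggered_average(small_channel_trace, detected_spike_indices)
    ```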
    In initial experiments, published in the journal Nature Communications, the researchers demonstrated their method on human iPS-derived neuronal cells as well as on brain sections, retina pieces, cardiac cells and neuronal spheroids.
    Application in drug development
    With the new chip, the scientists can produce electrical images of not only the cells but also the extension of their axons, and they can determine how fast a nerve impulse is transmitted to the farthest reaches of the axons. “The previous generations of microelectrode array chips let us measure up to 50 nerve cells. With the new chip, we can perform detailed measurements of more than 1,000 cells in a culture all at once,” Hierlemann says.
    Such comprehensive measurements are suitable for testing the effects of drugs, meaning that scientists can now conduct research and experiments with human cell cultures instead of relying on lab animals. The technology thus also helps to reduce the number of animal experiments.
    The ETH spin-off MaxWell Biosystems is already marketing the existing microelectrode technology, which is now in use around the world by over a hundred research groups at universities and in industry. At present, the company is looking into a potential commercialisation of the new chip.

    Story Source:
    Materials provided by ETH Zurich. Original written by Fabio Bergamin. Note: Content may be edited for style and length.

  • Avoiding environmental losses in quantum information systems

    New research published in EPJ D has revealed how robust initial states can be prepared in quantum information systems, minimising any unwanted transitions which lead to losses in quantum information.
    Through new techniques for generating ‘exceptional points’ in quantum information systems, researchers have minimised the transitions through which these systems lose information to their surrounding environments.
    Recently, researchers have begun to exploit the effects of quantum mechanics to process information in some fascinating new ways. One of the main challenges faced by these efforts is that systems can easily lose their quantum information as they interact with particles in their surrounding environments. To understand this behaviour, researchers in the past have used advanced models to observe how systems can spontaneously evolve into different states over time, losing their quantum information in the process. Through new research published in EPJ D, M. Reboiro and colleagues at the University of La Plata in Argentina have discovered how robust initial states can be prepared in quantum information systems, avoiding any unwanted transitions over extended periods of time.
    The team’s findings could provide valuable insights for the rapidly advancing field of quantum computing, potentially enabling more complex operations to be carried out using cutting-edge devices. Their study considered a ‘hybrid’ quantum information system based around a specialised loop of superconducting metal, which interacted with an ensemble of imperfections within the atomic lattice of diamond. Within this system, the researchers aimed to generate sets of ‘exceptional points.’ When these are present, information states don’t decay in the usual way: instead, any gains and losses of quantum information can be perfectly balanced between states.
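    As a minimal illustration of what an exceptional point is (a textbook two-level example with balanced gain and loss, not the specific superconducting-loop and diamond-defect model analysed in the paper), consider the non-Hermitian Hamiltonian

    $$
    H = \begin{pmatrix} \epsilon + i\gamma & g \\ g & \epsilon - i\gamma \end{pmatrix},
    \qquad
    E_\pm = \epsilon \pm \sqrt{g^2 - \gamma^2}.
    $$

    Here $+i\gamma$ and $-i\gamma$ represent balanced gain and loss, and $g$ is the coupling between the two states. At $g = \gamma$ the two eigenvalues and their eigenvectors coalesce; this is the exceptional point, the regime in which gains and losses can offset one another between the two states rather than producing ordinary decay.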
    By accounting for quantum effects, Reboiro and colleagues modelled how the dynamics of the ensemble of imperfections were affected by the surrounding environment. From these results, they combined information states which displayed large transition probabilities over long time intervals, allowing them to generate exceptional points. Since this considerably increased the survival probability of a state, the team could finally prepare initial states that were robust against the effects of their environments. Their techniques could soon be used to build quantum information systems which retain their information for far longer than was previously possible.

    Story Source:
    Materials provided by Springer. Note: Content may be edited for style and length.

  • To kill a quasiparticle: A quantum whodunit

    In large systems of interacting particles in quantum mechanics, an intriguing phenomenon often emerges: groups of particles begin to behave like single particles. Physicists refer to such groups of particles as quasiparticles.
    Understanding the properties of quasiparticles may be key to comprehending, and eventually controlling, technologically important quantum effects like superconductivity and superfluidity.
    Unfortunately, quasiparticles are only useful while they live. It is thus particularly unfortunate that many quasiparticles die young, lasting far, far less than a second.
    The authors of a new Monash University-led study published today in Physical Review Letters investigate the crucial question: how do quasiparticles die?
    Beyond the usual suspect — quasiparticle decay into lower energy states — the authors identify a new culprit: many-body dephasing.

    Many-body dephasing
    Many-body dephasing is the disordering of the constituent particles in the quasiparticle that occurs naturally over time.
    As the disorder increases, the quasiparticle’s resemblance to a single particle fades. Eventually, the inescapable effect of many-body dephasing kills the quasiparticle.
    Far from a negligible effect, the authors demonstrate that many-body dephasing can even dominate over other forms of quasiparticle death.
    This is shown through investigations of a particularly ‘clean’ quasiparticle — an impurity in an ultracold atomic gas — where the authors find strong evidence of many-body dephasing in past experimental results.
    The authors focus on the case where the ultracold atomic gas is a Fermi sea. An impurity in a Fermi sea gives rise to a quasiparticle known as the repulsive Fermi polaron.
    The repulsive Fermi polaron is a highly complicated quasiparticle and has a history of eluding both experimental and theoretical studies.
    Through extensive simulations and new theory, the authors show that an established experimental protocol — Rabi oscillations between impurity spin states — exhibits the effects of many-body dephasing in the repulsive Fermi polaron.
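    One common schematic way to picture this effect (a generic damped-oscillation form offered only as an illustration, not the paper’s detailed theory) is a Rabi oscillation whose contrast decays over time,

    $$
    P_\uparrow(t) \approx \tfrac{1}{2}\left[\,1 + e^{-\Gamma t}\cos(\Omega t)\,\right],
    $$

    where $\Omega$ is the Rabi frequency between the impurity spin states and $\Gamma$ is the damping rate; damping in excess of what simple polaron decay predicts is the kind of signature attributed to many-body dephasing.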
    These previously unrecognised results provide strong evidence that many-body dephasing is fundamental to the nature of quasiparticles.

  • Study reveals design flaws of chatbot-based symptom-checker apps

    Millions of people turn to their mobile devices when seeking medical advice. They’re able to share their symptoms and receive potential diagnoses through chatbot-based symptom-checker (CSC) apps.
    But how do these apps compare to a trip to the doctor’s office?
    Not well, according to a new study. Researchers from Penn State’s College of Information Sciences and Technology have found that existing CSC apps lack the functions to support the full diagnostic process of a traditional visit to a medical facility. Rather, they said, the apps can only support five processes of an actual exam: establishing a patient history, evaluating symptoms, giving an initial diagnosis, ordering further diagnostic tests, and providing referrals or other follow-up treatments.
    “These apps do not support conducting physical exams, providing a final diagnosis, and performing and analyzing test results, because these three processes are difficult to realize using mobile apps,” said Yue You, a graduate student in the College of Information Sciences and Technology and lead author on the study.
    In the study, the researchers investigated the functionalities of popular CSC apps through a feature review, then examined user experiences by analyzing user reviews and conducting user interviews. Through their user experience analysis, You and her team also found that users perceive CSC apps to lack support for a comprehensive medical history, flexible symptom input, comprehensible questions, and diverse diseases and user groups.
    The findings could inform functional and conversational design updates for health care chatbots, such as improving the functions that enable users to input their symptoms or using comprehensible language and providing explanations during conversations.
    “Especially in health and medicine, [another question is] is there something else we should consider in the chatbot design, such as how should we let users describe their symptoms when interacting with the chatbot?” said You.
    Additionally, the findings could help individuals understand the influence of AI technology, such as how AI could influence or change traditional medical visits.
    “In the past, people generally trusted doctors,” You said. “But now with the emergence of AI symptom checkers and the internet, people have more sources of information. How would this information challenge doctors? Do people trust this information and why? I think this work is a starting point to think about the influence of AI symptom checkers.”
    The findings will be presented at the American Medical Informatics Association (AMIA) Virtual Annual Symposium in November.
    You’s work serves as a preliminary study for future in-depth exploration. Currently, she is working to investigate how to design a better, explainable COVID-19 symptom checker with College of Information Sciences and Technology faculty members Xinning Gui, assistant professor; Jack Carroll, Distinguished Professor of Information Sciences and Technology; Yubo Kou, assistant professor; and Chun-Hua Tsai, assistant research professor.

    Story Source:
    Materials provided by Penn State. Original written by Jessica Hallman. Note: Content may be edited for style and length.

  • Physicists develop a method to improve gravitational wave detector sensitivity

    Gravitational wave detectors have opened a new window to the universe by measuring the ripples in spacetime produced by colliding black holes and neutron stars, but they are ultimately limited by quantum fluctuations induced by light reflecting off of mirrors. LSU Ph.D. physics alumnus Jonathan Cripe and his team of LSU researchers have conducted a new experiment with scientists from Caltech and Thorlabs to explore a way to cancel this quantum backaction and improve detector sensitivity.
    In a new paper in Physical Review X, the investigators present a method for removing quantum backaction in a simplified system using a mirror the size of a human hair and show the motion of the mirror is reduced in agreement with theoretical predictions. The research was supported by the National Science Foundation.
    Although gravitational wave detectors use 40-kilogram mirrors to sense passing gravitational waves, quantum fluctuations of the light disturb the position of those mirrors whenever the light is reflected. As gravitational wave detectors continue to grow more sensitive with incremental upgrades, this quantum backaction will become a fundamental limit to the detectors’ sensitivity, hampering their ability to extract astrophysical information from gravitational waves.
    “We present an experimental testbed for studying and eliminating quantum backaction,” Cripe said. “We perform two measurements of the position of a macroscopic object whose motion is dominated by quantum backaction and show that by making a simple change in the measurement scheme, we can remove the quantum effects from the displacement measurement. By exploiting correlations between the phase and intensity of an optical field, quantum backaction is eliminated.”
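    In the standard textbook picture of this kind of backaction cancellation (a hedged sketch in terms of quadrature operators; the exact scheme of the experiment is described in the paper), the amplitude quadrature $a_1$ of the light drives the mirror and its effect reappears in the output phase quadrature:

    $$
    b_1 = a_1, \qquad b_2 = a_2 + \mathcal{K}\,a_1 + (\text{signal}),
    $$

    so a homodyne detector measuring the rotated quadrature $b_\zeta = b_1\cos\zeta + b_2\sin\zeta$ sees a backaction term proportional to $(\cos\zeta + \mathcal{K}\sin\zeta)\,a_1$, which vanishes when $\cot\zeta = -\mathcal{K}$. Choosing the readout angle this way is one way to exploit the phase-intensity correlations Cripe describes.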
    Garrett Cole, technology manager at Thorlabs Crystalline Solutions (Crystalline Mirror Solutions was acquired by Thorlabs Inc. last year), and his team constructed the micromechanical mirrors from an epitaxial multilayer consisting of alternating GaAs and AlGaAs. An outside foundry, IQE North Carolina, grew the crystal structure while Cole and his team, including process engineers Paula Heu and David Follman, manufactured the devices at the University of California Santa Barbara nanofabrication facility.
    “By performing this measurement on a mirror visible to the naked eye — at room temperature and at frequencies audible to the human ear — we bring the subtle effects of quantum mechanics closer to the realm of human experience,” said LSU Ph.D. candidate Torrey Cullen. “By quieting the quantum whisper, we can now listen to the more subtle notes of the cosmic symphony.”
    “This research is especially timely because the Laser Interferometer Gravitational-wave Observatory, or LIGO, just announced last month in Nature that they have seen the effects of quantum radiation pressure noise at the LIGO Livingston observatory,” said Thomas Corbitt, associate professor in the LSU Department of Physics & Astronomy.
    The effort behind that paper, “Quantum correlations between light and the kilogram-mass mirrors of LIGO,” has been led by Nergis Mavalvala, dean of the MIT School of Science, as well as postdoctoral scholar Haocun Yu and research scientist Lee McCuller, both at the MIT Kavli Institute for Astrophysics and Space Research.
    “Quantum radiation pressure noise is already poking out of the noise floor in Advanced LIGO, and before long, it will be a limiting noise source in GW detectors,” Mavalvala said. “Deeper astrophysical observations will only be possible if we can reduce it, and this beautiful result from the Corbitt group at LSU demonstrates a technique for doing just that.”

    Story Source:
    Materials provided by Louisiana State University. Note: Content may be edited for style and length.

  • 3D camera quickly merges depth, spectral data

    Stripes are in fashion this season at a Rice University lab, where researchers use them to make images that plain cameras could never capture.
    Their compact Hyperspectral Stripe Projector (HSP) is a step toward a new method to collect the spatial and spectral information required for self-driving cars, machine vision, crop monitoring, surface wear and corrosion detection and other applications.
    “I can envision this technology in the hands of a farmer, or on a drone, to look at a field and see not only the nutrients and water content of plants but also, because of the 3D aspect, the height of the crops,” said Kevin Kelly, an associate professor of electrical and computer engineering at Rice’s Brown School of Engineering. “Or perhaps it can look at a painting and see the surface colors and texture in detail, but with near-infrared also see underneath to the canvas.”
    Kelly’s lab could enable 3D spectroscopy on the fly with a system that combines the HSP, a monochrome sensor array and sophisticated programming to give users a more complete picture of an object’s shape and composition.
    “We’re getting four-dimensional information from an image, three spatial and one spectral, in real time,” Kelly said. “Other people use multiple modulators and thus require bright light sources to accomplish this, but we found we could do it with a light source of normal brightness and some clever optics.”
    The work by Kelly, lead author and Rice alumna Yibo Xu and graduate student Anthony Giljum is detailed in an open-access paper in Optics Express.

    HSP takes a cue from portable 3D imaging techniques that are already in consumers’ hands — think of face ID systems in smartphones and body trackers in gaming systems — and adds a way to pull broad spectral data from every pixel captured. This compressed data is reconstructed into a 3D map with spectral information that can incorporate hundreds of colors and be used to reveal not only the shape of an object but also its material composition.
    “Regular RGB (red, green, blue) cameras basically give you only three spectral channels,” Xu said. “But a hyperspectral camera gives us spectra in many, many channels. We can capture red at around 700 nanometers and blue at around 400 nanometers, but we can also have bandwidths at every few nanometers or less between. That gives us fine spectral resolution and a fuller understanding of the scene.
    “HSP simultaneously encodes the depth and hyperspectral measurements in a very simple and efficient way, allowing the use of a monochrome camera instead of an expensive hyperspectral camera as typically used in similar systems,” said Xu, who earned her doctorate at Rice in 2019 and is now a machine learning and computer vision research engineer at Samsung Research America Inc. She developed both the hardware and reconstruction software as part of her thesis in Kelly’s lab.
    HSP uses an off-the-shelf digital micromirror device (DMD) to project patterned stripes that look something like colorful bar codes onto a surface. Sending the white-light projection through a diffraction grating separates the overlapping patterns into colors.
    Each color is reflected back to the monochrome camera, which assigns a numerical grey level to that pixel.

    Each pixel can have multiple levels, one for every color stripe it reflects. These are recombined into an overall spectral value for that part of the object.
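    A hedged sketch of that recombination step (the function and variable names are illustrative assumptions, not the HSP reconstruction software): if the weight of each spectral band in each projected pattern is known, the per-pixel spectrum can be estimated from the sequence of grey levels the monochrome camera records.

    ```python
    import numpy as np

    def recover_spectrum(grey_levels, coding_matrix):
        """Estimate one pixel's spectrum from multiplexed stripe measurements.

        grey_levels   : (n_patterns,) grey value the pixel recorded for each projected pattern
        coding_matrix : (n_patterns, n_bands) known weight of each spectral band in each pattern

        Solves grey_levels ~= coding_matrix @ spectrum in the least-squares sense;
        a real compressive reconstruction would add priors such as non-negativity or sparsity.
        """
        spectrum, *_ = np.linalg.lstsq(coding_matrix, grey_levels, rcond=None)
        return spectrum

    # Hypothetical usage for a single pixel:
    # A = np.random.rand(32, 64)        # 32 patterns encoding 64 spectral bands
    # y = A @ true_spectrum             # grey levels the camera would record
    # estimate = recover_spectrum(y, A)
    ```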
    “We use a single DMD and a single grating in HSP,” Xu said. “The novel optical design of folding the light path back to the same diffraction grating and lens is what makes it really compact. The single DMD allows us to keep the light we want and throw away the rest.”
    These finely tuned spectra can reach beyond visible light, and the multiplexed fine-band spectra reflected back to the sensor can be used to identify a material’s chemical composition.
    At the same time, distortions in the pattern are reconstructed into 3D point clouds, essentially a picture of the target, but with a lot more data than a plain snapshot could provide.
    Kelly envisions HSP built into car headlights that can see the difference between an object and a person. “It could never get confused between a green dress and a green plant, because everything has its own spectral signature,” he said.
    Kelly believes the lab will eventually incorporate ideas from Rice’s groundbreaking single-pixel camera to further reduce the size of the device and adapt it for compressive video capture as well.
    The National Science Foundation funded the research.

  • Simpler models may be better for determining some climate risk

    Computer models of climate typically become more and more complex as researchers strive to capture more details of the Earth system. But according to a team of Penn State researchers, less complex models, which can sample uncertainties more thoroughly, may be the better choice for assessing risk.
    “There is a downside to the very detailed, very complex models we often strive for,” said Casey Helgeson, assistant research professor, Earth and Environmental Systems Institute. “Sometimes the complexity of scientific tools constrains what we can learn through science. The choke point isn’t necessarily at the knowledge going into a model, but at the processing.”
    Climate risks are important to planners, builders, government officials and businesses. The probability of a potential event combined with the severity of the event can determine things like whether it makes sense to build in a given location.
    The researchers report online in Philosophy of Science that “there is a trade-off between a model’s capacity to realistically represent the system and its capacity to tell us how confident it is in its predictions.”
    Complex Earth systems models need a lot of supercomputer time to run. However, when looking at risk, uncertainty is an important element and researchers can only discover uncertainty through multiple runs of a computer model. Computer time is expensive.
    “We need complex models to simulate the interactions between Earth system processes,” said Vivek Srikrishnan, assistant research professor, Earth and Environmental Systems Institute. “We need simple models to quantify risks.”
    According to Klaus Keller, professor of geosciences, multiple model runs are important because many events of concern such as floods are, fortunately, the exception, not what is expected. They happen in the tails of the distribution of possible outcomes. Learning about these tails requires many model runs.

    Simple models, while not returning the detailed information of the latest complex model with all the bells and whistles, can be run many times quickly to provide a better estimate of the probability of rare events.
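    As a toy illustration of the point (a made-up stochastic model, not the Penn State group’s code), a simple model can be run tens of thousands of times in seconds, which is what makes the tails of the outcome distribution visible:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simple_annual_maximum_model(n_years=100, loc=1.0, scale=0.4):
        """Toy model: the largest flood-like value seen over n_years of Gumbel-distributed annual maxima."""
        return rng.gumbel(loc=loc, scale=scale, size=n_years).max()

    # Many cheap runs sample the tail of the distribution of outcomes.
    outcomes = np.array([simple_annual_maximum_model() for _ in range(20_000)])
    threshold = 3.5  # hypothetical damage threshold
    print("Estimated exceedance probability:", (outcomes > threshold).mean())
    ```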
    “One of the things we focus on are values embedded in the models and whether the knowledge being produced by those models provides decision makers with the knowledge they need to make the decisions that matter to them,” said Nancy Tuana, DuPont/Class of 1949 Professor of Philosophy and Women’s, Gender, and Sexuality Studies.
    Determining an appropriate model that can address the question and is still transparent is important.
    “We want to obtain fundamental and useful insights,” said Keller. “Using a simple model that allows us to better quantify risks can be more useful for decision-makers than using a complex model that makes it difficult to sample decision-relevant outcomes.”
    Srikrishnan added, “We need to make sure there is an alignment between what researchers are producing and what is required for real-world decision making.”
    The researchers understand that they need to make both the producers and users happy, but sometimes the questions being asked do not match the tools being used because of uncertainties and bottlenecks.
    “We need to ask ‘what do we need to know and how do we go about satisfying the needs of stakeholders and decision makers?'” said Tuana.
    The National Science Foundation through the Network for Sustainable Climate Risk Management supported this work.

    Story Source:
    Materials provided by Penn State. Original written by A’ndrea Elyse Messer. Note: Content may be edited for style and length.

  • Machine learning takes on synthetic biology: algorithms can bioengineer cells for you

    If you’ve eaten vegan burgers that taste like meat or used synthetic collagen in your beauty routine — both products that are “grown” in the lab — then you’ve benefited from synthetic biology. It’s a field rife with potential, as it allows scientists to design biological systems to specification, such as engineering a microbe to produce a cancer-fighting agent. Yet conventional methods of bioengineering are slow and laborious, with trial and error being the main approach.
    Now scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a new tool that adapts machine learning algorithms to the needs of synthetic biology to guide development systematically. The innovation means scientists will not have to spend years developing a meticulous understanding of each part of a cell and what it does in order to manipulate it; instead, with a limited set of training data, the algorithms are able to predict how changes in a cell’s DNA or biochemistry will affect its behavior, then make recommendations for the next engineering cycle along with probabilistic predictions for attaining the desired goal.
    “The possibilities are revolutionary,” said Hector Garcia Martin, a researcher in Berkeley Lab’s Biological Systems and Engineering (BSE) Division who led the research. “Right now, bioengineering is a very slow process. It took 150 person-years to create the anti-malarial drug, artemisinin. If you’re able to create new cells to specification in a couple weeks or months instead of years, you could really revolutionize what you can do with bioengineering.”
    Working with BSE data scientist Tijana Radivojevic and an international group of researchers, the team developed and demonstrated a patent-pending algorithm called the Automated Recommendation Tool (ART), described in a pair of papers recently published in the journal Nature Communications. Machine learning allows computers to make predictions after “learning” from substantial amounts of available “training” data.
    In “ART: A machine learning Automated Recommendation Tool for synthetic biology,” led by Radivojevic, the researchers presented the algorithm, which is tailored to the particularities of the synthetic biology field: small training data sets, the need to quantify uncertainty, and recursive cycles. The tool’s capabilities were demonstrated with simulated and historical data from previous metabolic engineering projects, such as improving the production of renewable biofuels.
    In “Combining mechanistic and machine learning models for predictive engineering and optimization of tryptophan metabolism,” the team used ART to guide the metabolic engineering process to increase the production of tryptophan, an amino acid with various uses, by a species of yeast called Saccharomyces cerevisiae, or baker’s yeast. The project was led by Jie Zhang and Soren Petersen of the Novo Nordisk Foundation Center for Biosustainability at the Technical University of Denmark, in collaboration with scientists at Berkeley Lab and Teselagen, a San Francisco-based startup company.

    To conduct the experiment, they selected five genes, each controlled by different gene promoters and other mechanisms within the cell and representing, in total, nearly 8,000 potential combinations of biological pathways. The researchers in Denmark then obtained experimental data on 250 of those pathways, representing just 3% of all possible combinations, and those data were used to train the algorithm. In other words, ART learned what output (amino acid production) is associated with what input (gene expression).
    Then, using statistical inference, the tool was able to extrapolate how each of the remaining 7,000-plus combinations would affect tryptophan production. The design it ultimately recommended increased tryptophan production by 106% over the state-of-the-art reference strain and by 17% over the best designs used for training the model.
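    ART itself is a patent-pending Bayesian ensemble, so the sketch below is only a generic stand-in for the workflow described above (hypothetical design encoding and measurements, with a Gaussian-process surrogate from scikit-learn in place of ART): train on a small fraction of designs, predict the rest with uncertainty, and recommend the most promising candidates for the next engineering cycle.

    ```python
    import itertools

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(1)

    # Hypothetical encoding: 5 genes with 6 expression levels each, 7,776 candidate designs.
    all_designs = np.array(list(itertools.product(range(6), repeat=5)), dtype=float)

    # Pretend production was measured for 250 designs (roughly 3% of the space).
    train_idx = rng.choice(len(all_designs), size=250, replace=False)
    X_train = all_designs[train_idx]
    y_train = rng.normal(loc=X_train.sum(axis=1), scale=0.5)  # stand-in measurements

    # Fit a probabilistic surrogate, then predict mean and uncertainty for every design.
    model = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)
    mean, std = model.predict(all_designs, return_std=True)

    # Recommend the highest predicted producers for the next build-test cycle.
    print("Designs to test next:", np.argsort(mean)[::-1][:10])
    ```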
    “This is a clear demonstration that bioengineering led by machine learning is feasible, and disruptive if scalable. We did it for five genes, but we believe it could be done for the full genome,” said Garcia Martin, who is a member of the Agile BioFoundry and also the Director of the Quantitative Metabolic Modeling team at the Joint BioEnergy Institute (JBEI), a DOE Bioenergy Research Center; both supported a portion of this work. “This is just the beginning. With this, we’ve shown that there’s an alternative way of doing metabolic engineering. Algorithms can automatically perform the routine parts of research while you devote your time to the more creative parts of the scientific endeavor: deciding on the important questions, designing the experiments, and consolidating the obtained knowledge.”
    More data needed
    The researchers say they were surprised by how little data was needed to obtain results. Yet to truly realize synthetic biology’s potential, they say the algorithms will need to be trained with much more data. Garcia Martin describes synthetic biology as being only in its infancy — the equivalent of where the Industrial Revolution was in the 1790s. “It’s only by investing in automation and high-throughput technologies that you’ll be able to leverage the data needed to really revolutionize bioengineering,” he said.

    Radivojevic added: “We provided the methodology and a demonstration on a small dataset; potential applications might be revolutionary given access to large amounts of data.”
    The unique capabilities of national labs
    Besides the dearth of experimental data, Garcia Martin says the other limitation is human capital — or machine learning experts. Given the explosion of data in our world today, many fields and companies are competing for a limited number of experts in machine learning and artificial intelligence.
    Garcia Martin notes that knowledge of biology is not an absolute prerequisite for researchers embedded in the kind of team environment the national labs provide. Radivojevic, for example, has a doctorate in applied mathematics and no background in biology. “In two years here, she was able to productively collaborate with our multidisciplinary team of biologists, engineers, and computer scientists and make a difference in the synthetic biology field,” he said. “In the traditional ways of doing metabolic engineering, she would have had to spend five or six years just learning the needed biological knowledge before even starting her own independent experiments.”
    “The national labs provide the environment where specialization and standardization can prosper and combine in the large multidisciplinary teams that are their hallmark,” Garcia Martin said.
    Synthetic biology has the potential to make significant impacts in almost every sector: food, medicine, agriculture, climate, energy, and materials. The global synthetic biology market is currently estimated at around $4 billion and has been forecast to grow to more than $20 billion by 2025, according to various market reports.
    “If we could automate metabolic engineering, we could strive for more audacious goals. We could engineer microbiomes for therapeutic or bioremediation purposes. We could engineer microbiomes in our gut to produce drugs to treat autism, for example, or microbiomes in the environment that convert waste to biofuels,” Garcia Martin said. “The combination of machine learning and CRISPR-based gene editing enables much more efficient convergence to desired specifications.”