More stories

  • Quantum quirk yields giant magnetic effect, where none should exist

    In a twist befitting the strange nature of quantum mechanics, physicists have discovered the Hall effect — a characteristic change in the way electricity is conducted in the presence of a magnetic field — in a nonmagnetic quantum material to which no magnetic field was applied.
    The discovery by researchers from Rice University, Austria’s Vienna University of Technology (TU Wien), Switzerland’s Paul Scherrer Institute and Canada’s McMaster University is detailed in a paper in the Proceedings of the National Academy of Sciences. Of interest are both the origins of the effect, which is typically associated with magnetism, and its gigantic magnitude — more than 1,000 times larger than one might observe in simple semiconductors.
    Rice study co-author Qimiao Si, a theoretical physicist who has investigated quantum materials for nearly three decades, said, “It’s really topology at work,” referring to the patterns of quantum entanglement that give rise to the unorthodox state.
    The material, an exotic semimetal of cerium, bismuth and palladium, was created and measured at TU Wien by Silke Bühler-Paschen, a longtime collaborator of Si’s. In late 2017, Si, Bühler-Paschen and colleagues discovered a new type of quantum material they dubbed a “Weyl-Kondo semimetal.” The research laid the groundwork for empirical investigations, but Si said the experiments were challenging, in part because it wasn’t clear “which physical quantity would pick up the effect.”
    In April 2018, Bühler-Paschen and TU Wien graduate student Sami Dzsaber, the study’s first author, dropped by Si’s office while attending a workshop at the Rice Center for Quantum Materials (RCQM). When Si saw Dzsaber’s data, he was dubious.
    “Upon seeing this, everybody’s first reaction is that it is not possible,” he said.

    To appreciate why, it helps to understand both the nature of the Hall effect and its 1879 discovery by Edwin Hall, a doctoral student who found that applying a magnetic field at a 90-degree angle to a conducting wire produced a voltage difference across the wire, in the direction perpendicular to both the current and the magnetic field. Physicists eventually discovered the source of the Hall effect: The magnetic field deflects the motion of passing electrons, pulling them toward one side of the wire. The Hall effect is a standard tool in physics labs, and devices that make use of it are found in products as diverse as rocket engines and paintball guns. Studies related to the quantum nature of the Hall effect captured Nobel Prizes in 1985 and 1998.
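    For scale, the conventional Hall effect in a simple conductor is usually summarized by the textbook relation (quoted here for context; it is not taken from the new paper):

        V_H = \frac{I B}{n q t}

    where V_H is the transverse Hall voltage, I the current, B the applied magnetic field, n the carrier density, q the carrier charge and t the sample thickness. The striking point in the new material is that a Hall-like voltage appears with B = 0, which this conventional relation cannot produce.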
    Dzsaber’s experimental data clearly showed a characteristic Hall signal, even though no magnetic field was applied.
    “If you don’t apply a magnetic field, the electron is not supposed to bend,” Si said. “So, how could you ever get a voltage drop along the perpendicular direction? That’s why no one believed this at first.”
    Experiments at the Paul Scherrer Institute ruled out the presence of a tiny magnetic field that could only be detected on a microscopic scale. So the question remained: What caused the effect?
    “In the end, all of us had to accept that this was connected to topology,” Si said.

    In topological materials, patterns of quantum entanglement produce “protected” states, universal features that cannot be erased. The immutable nature of topological states is of increasing interest for quantum computing. Weyl semimetals, which manifest a quasiparticle known as the Weyl fermion, are topological materials.
    So are the Weyl-Kondo semimetals Si, Bühler-Paschen and colleagues discovered in 2018. Those feature both Weyl fermions and the Kondo effect, an interaction between the magnetic moments of electrons attached to atoms inside the metal and the spins of passing conduction electrons.
    “The Kondo effect is the quintessential form of strong correlations in quantum materials,” Si said in reference to the correlated, collective behavior of billions upon billions of quantum entangled particles. “It qualifies the Weyl-Kondo semimetal as one of the rare examples of a topological state that’s driven by strong correlations.
    “Topology is a defining characteristic of the Weyl-Kondo semimetal, and the discovery of this spontaneous giant Hall effect is really the first detection of topology that’s associated with this kind of Weyl fermion,” Si said.
    Experiments showed that the effect arose at the characteristic temperature associated with the Kondo effect, indicating the two are likely connected, Si said.
    “This kind of spontaneous Hall effect was also observed in contemporaneous experiments in some layered semiconductors, but our effect is more than 1,000 times larger,” he said. “We were able to show that the observed giant effect is, in fact, natural when the topological state develops out of strong correlations.”
    Si said the new observation is likely “a tip of the iceberg” of extreme responses that result from the interplay between strong correlations and topology.
    He said the size of the topologically generated Hall effect is also likely to spur investigations into potential uses of the technology for quantum computation.
    “This large magnitude, and its robust, bulk nature presents intriguing possibilities for exploitation in topological quantum devices,” Si said.
    Si is the Harry C. and Olga K. Wiess Professor in Rice’s Department of Physics and Astronomy and director of RCQM. Bühler-Paschen is a professor at TU Wien’s Institute for Solid State Physics.

  • Artificial microswimmers slow down and accumulate in low-fuel regions

    A Mason Engineering researcher has discovered that artificial microswimmers accumulate where their speed is minimized, an idea that could have implications for improving the efficacy of targeted cancer therapy.
    Jeff Moran, an assistant professor of mechanical engineering in the Volgenau School of Engineering, and colleagues from the University of Washington in Seattle studied self-propelled half-platinum/half-gold rods that “swim” in water using hydrogen peroxide as a fuel. The more peroxide there is, the faster the swimming; without peroxide in pure water, the rods don’t swim.
    In this work, they set out to understand what happens when these artificial microswimmers are placed in a fluid reservoir containing a gradient of hydrogen peroxide: lots of peroxide on one side, not much on the other.
    They found that, predictably, the microswimmers swam faster in regions with high peroxide concentration, says Moran, whose research was published in the new issue of Scientific Reports.
    As others had observed, the direction of swimming varied randomly in time as the swimmers explored their surroundings. In the low-concentration regions, however, the rods slowed down and accumulated over the course of a few minutes.
    The results suggest a simple strategy to make microswimmers passively accumulate in specific regions, an idea that might have useful, practical applications, he says.

    Swimming at the microscopic scale is a ubiquitous phenomenon in biology, Moran says. “Lots of cells and microorganisms, such as bacteria, can autonomously swim toward higher or lower concentrations of chemicals that benefit or harm the cell, respectively.”
    This behavior is called chemotaxis, and it’s both common and important, he says. “For example, your immune cells use chemotaxis to detect and swim toward sites of injury, so they can initiate tissue repair.”
    Moran and colleagues, like others in the field, have long been curious whether artificial microswimmers can mimic cells by performing chemotaxis, continuously swimming toward higher chemical concentrations. Some had claimed that the platinum/gold rods, in particular, could swim autonomously toward peroxide-rich regions.
    “We were skeptical of these claims since the rods aren’t alive, and therefore they don’t have the sensing and response capabilities that are necessary for cells to execute this behavior,” he says.
    “Instead, we found the opposite: the rods built up in the lower concentration regions. This is the opposite of what one would expect from chemotaxis,” Moran says.

    The researchers conducted computer simulations that predicted this and validated them with experiments, he says.
    “We propose a simple explanation for this behavior: Wherever they are, the rods move in randomly varying directions, exploring their surroundings. When they get to a low-fuel region, they can’t explore as vigorously. In a sense, they get trapped in their comfort zones,” Moran says.
    “Conversely, in the high-peroxide regions, they move at higher speeds and, because their direction is constantly changing, escape from these regions more often. Over time, the net result is that rods accumulate in low-concentration regions,” he says. “They don’t have any intelligence. They end up where their mobility is the lowest.”
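    The accumulation mechanism Moran describes can be illustrated with a toy random-walk simulation (a minimal sketch for intuition only, not the group’s actual model): particles step in random directions with a step size set by the local fuel level, and over time they pile up where they move most slowly.

        import numpy as np

        rng = np.random.default_rng(0)

        n_particles, n_steps = 5000, 20000
        x = rng.uniform(0.0, 1.0, n_particles)   # positions in a 1-D channel [0, 1]

        def step_size(pos):
            # Toy fuel gradient: fast near the peroxide-rich side (x = 1),
            # slow near the depleted side (x = 0).
            return 0.001 + 0.02 * pos

        for _ in range(n_steps):
            direction = rng.choice([-1.0, 1.0], n_particles)  # random reorientation
            x = np.clip(x + direction * step_size(x), 0.0, 1.0)

        # Most particles end up in the slow, low-peroxide half of the channel.
        print("fraction in low-fuel half:", np.mean(x < 0.5))

    In a model like this the long-run density is highest where the steps are smallest, which is the sense in which the rods “end up where their mobility is the lowest.”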
    Moran says this research is promising from a technical standpoint because it suggests a new strategy to make chemicals accumulate in a highly acidic area.
    “Due to their abnormal metabolic processes, cancer cells cause their immediate surroundings to become acidic. These are the cells that need the most drugs because the acidic environment is known to promote metastasis and confer resistance to drugs. Thus, the cells in these regions are a major target of many cancer therapies.”
    Moran and colleagues are now designing microswimmers that move slowly in acidic regions and fast in neutral or basic regions. Through the mechanism they discovered here, they hypothesize that acid-dependent swimmers will accumulate and release their cargo preferentially where their speeds are minimized, namely the most acidic and hypoxic regions of the tumor, where the most problematic cells reside.
    There is much more research to be conducted, but “these rods may have the ability to deliver chemotherapy drugs to the cancer cells that need them the most,” Moran says.
    “To be clear, our study doesn’t prove that chemotaxis is impossible in artificial microswimmers, period; just that these particular microswimmers don’t undergo chemotaxis.
    “Instead, we’ve identified an elegantly simple method of causing unguided microswimmers to accumulate and deliver drugs to the most problematic cancer cells, which could have implications for the treatment of many cancers, as well as other diseases like fibrosis. We’re excited to see where this goes.”

  • Can a robot operate effectively underwater?

    If you’ve ever watched Planet Earth, you know the ocean is a wild place to live. The water is teeming with different ecosystems and organisms varying in complexity from an erudite octopus to a sea star. Unexpectedly, it is the sea star, a simple organism characterized by a decentralized nervous system, that offers insights into advanced adaptation to hydrodynamic forces — the forces created by water pressure and flow.
    Researchers from the USC Viterbi School of Engineering found that sea stars effectively stay attached to surfaces under extreme hydrodynamic loads by altering their shape. The researchers, including the Henry Salvatori Early Career Chair in Aerospace and Mechanical Engineering Mitul Luhar and doctoral student Mark Hermes, found sea stars create a “downforce” due to their shape. This means that instead of being lifted by the flow forces, the sea stars are pushed downward toward the rock or floor surface they are on.
    “Sea stars are incredibly adaptive,” said Luhar, assistant professor in the USC Viterbi Department of Aerospace and Mechanical Engineering. “When there is high wave activity and high water forces, sea stars will grow skinnier and take on a lower profile. When the sea star is transported to a sheltered environment with lower hydrodynamic forces, they pop up a bit and their cross sections get bigger.”
    Understanding such shape shifting could help design underwater robots that can similarly adapt to extreme hydrodynamic environments, Luhar said.
    Interaction between Shape and Force
    The researchers tested this understanding of sea star shape and its impact on force in the water with both computational and 3-D printed models. “Right away what we noticed,” Luhar said, “is that instead of the sea stars being pulled away from the surfaces they were on, they were being pushed down — simply because of their shape.”
    Luhar said the researchers saw this downforce effect as key to how the sea star — and in the future, an underwater robot — could stay attached to a sea bed or a rock as opposed to being lifted up away from it, even in the most extreme conditions.

    The researchers tested other shapes, as well. With a cone or a dome, Luhar said, the water flows up and then down, following the contours of the shape reasonably well. With the flow ultimately pushing downward, an equal and opposite force is created, resulting in an overall lifting effect. With the sea star shape — which is similar to a triangular wedge — the water flows upward, with the angles on each side acting like a ramp that pushes water away from its surface.
    “As the sea star pushes the flow away, the flow creates an equal and opposite force that pushes down on the sea star,” Luhar said. “A cone or sphere does not create that same ‘ramp effect,’ and thus does not create a similar downforce.”
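    The “equal and opposite force” argument can be put in rough terms with a simple momentum balance (a back-of-envelope sketch, not the authors’ analysis): if the wedge-like body deflects the oncoming stream upward, giving the water an upward velocity change Δw, the reaction on the body points downward with magnitude roughly

        F_{down} \approx \rho \, A \, U \, \Delta w

    where ρ is the water density, A the frontal area intercepting the flow and U the flow speed. A dome that ultimately turns the flow back downward reverses the sign of Δw, which is consistent with the lifting effect described above.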
    To get a full three-dimensional understanding of what the force fields look like, Luhar said they used computational models to further illuminate what they witnessed with the 3-D printed shapes. Of the shapes they considered, Luhar said the spherical dome performed the worst in terms of lift versus downforce, meaning it performed poorest at staying attached to the bottom surface or ground.
    Soft Robotics
    The next step is studying a soft structure that can morph in real time, Luhar said. Hermes is currently working on developing this structure. Key to its design is allowing it to be responsive in the water channel, Luhar said, thus giving it the ability to adapt its shape as needed to stay adherent to a rock or sea bed, or alternately, to allow it to lift up with the water flow.
    “Let’s say the water changes speed,” Luhar said. “We can determine what shape would be best and the robot could shift its form accordingly.”
    Ultimately, Luhar said, the idea is to understand how to develop a robot that will work with the flow instead of fighting through it.
    “If we can take advantage of the surrounding environment instead of battling it, we can also create more efficiency and performance gains,” Luhar said.

  • Early-warning for seizures could be a game-changer for epilepsy patients

    Epilepsy is one of the most common neurological conditions, affecting more than 65 million people worldwide. For those dealing with epilepsy, the prospect of a seizure can feel like a ticking time bomb: it could happen at any time or any place, potentially posing a fatal risk when a seizure strikes during a risky situation, such as while driving.
    A research team at USC Viterbi School of Engineering and Keck Medicine of USC is tackling this dangerous problem with a powerful new seizure-predicting mathematical model that will give epilepsy patients an accurate warning five minutes to one hour before they are likely to experience a seizure, offering enhanced freedom for the patient and cutting the need for medical intervention.
    The research, published in the Journal of Neural Engineering, is led by corresponding authors Dong Song, research associate professor of biomedical engineering at USC Viterbi School of Engineering, and Pen-Ning Yu, former PhD researcher in Song’s lab, in collaboration with Charles Liu, professor of clinical neurological surgery and director of the USC Neurorestoration Center. The other authors are Ted Berger, David Packard Chair in Engineering and professor of biomedical engineering, and Christianne Heck, medical director of the USC Comprehensive Epilepsy Program at the Keck Medical Center.
    The mathematical model works by learning from large amounts of brain signal data collected from an electrical implant in the patient. Liu and his team have already been working with epilepsy patients with implantable devices, which are able to offer ongoing real-time monitoring of the brain’s electrical signals in the same way that an electroencephalogram (EEG) uses external electrodes to measure signals. The new mathematical model can take this data and learn each patient’s unique brain signals, looking out for precursors, or patterns of brain activity that show a “pre-ictal” state, in which a patient is at risk of seizure onset.
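    A generic version of that workflow (a minimal sketch, not the authors’ model; the feature choices, window lengths and labels below are illustrative assumptions) is to cut the recorded signal into windows, compute features for each window, and train a per-patient classifier to separate pre-ictal from inter-ictal windows:

        import numpy as np
        from scipy.signal import welch
        from sklearn.linear_model import LogisticRegression

        FS = 256  # sampling rate in Hz (illustrative)

        def band_power(window, lo, hi):
            # Average spectral power of one channel in a frequency band.
            freqs, psd = welch(window, fs=FS, nperseg=min(len(window), 2 * FS))
            return psd[(freqs >= lo) & (freqs <= hi)].mean()

        def features(window):
            # A few simple window descriptors (illustrative, not the paper's features).
            return [
                band_power(window, 1, 4),             # delta-band power
                band_power(window, 13, 30),           # beta-band power
                np.mean(np.abs(np.diff(window))),     # "line length" of the trace
            ]

        # Placeholder data: in practice, windows and pre-ictal/inter-ictal labels
        # come from each patient's own annotated recordings.
        rng = np.random.default_rng(1)
        windows = rng.standard_normal((200, 30 * FS))   # 200 windows of 30 s
        labels = rng.integers(0, 2, 200)                # 1 = pre-ictal, 0 = inter-ictal

        X = np.array([features(w) for w in windows])
        clf = LogisticRegression().fit(X, labels)       # patient-specific classifier
        print("pre-ictal probability of first window:", clf.predict_proba(X[:1])[0, 1])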
    Song said the new model is able to accurately predict whether a seizure may happen within one hour, allowing the patient to take necessary precautions.
    “For example, it could be as simple as just alerting the patient their seizure is coming in the next hour, so they shouldn’t drive their car right now, or they should take their medicine, or they should go and sit down,” Song said. “Or ideally, in the future, we can detect seizure signals and then send electrical stimulation through an implantable device to the brain to prevent the seizure from happening.”
    Liu said that the discovery would have major positive implications for public health, given epilepsy treatment had been severely impacted in the past year by the pandemic.

    “This is, hopefully, going to change the way we deal with epilepsy going forward, and it’s driven by the needs that have been in place for a long time, but have been highlighted and accelerated by COVID,” Liu said.
    He said that currently, patients with medically intractable epilepsy (epilepsy that cannot be controlled with medication) are admitted electively to the hospital for video EEG monitoring. With the advent of COVID, these elective admissions stopped completely and epilepsy programs across the country ground to a halt over the past year. Liu said this highlights the need for a new workflow by which EEG recordings from scalp or intradural electrodes can be acquired at home and analyzed computationally.
    “So we need to create a new workflow by which, instead of bringing patients to the ICU, we take the recordings from their home and use the computation models to do everything they would have done in the hospital,” Liu said. “Not only can you manage patients using physical distancing, you can also scale in a way that only technology allows. Computation can analyze thousands of pages of data at once, whereas a single neurologist cannot.”
    How the Seizure Prediction Model Works
    Song said the new model was different to previous seizure prediction models in that it extracts both linear and non-linear information from the patient’s brain signals.

    “Linear is the simple feature. If you understand the parts, you can understand the whole,” Song said. “Whereas the non-linear feature means that even if you understand the parts, when you scale up it has some emergent properties that cannot be explained.”
    “For some patients, linear features are more important and for other patients, non-linear features are more important,” Song said.
    Song said that while other models predict brain activity over a short time scale, a matter of milliseconds, his team’s model examined an extended time scale.
    “The brain is a multi-temporal scale device so we need to understand what happens not just in the short term, but many more steps in the future,” Song said.
    He said that the model is also unique in that it is patient-specific: it extracts the information that is significant for each individual patient, because every brain is very different in terms of the signals that indicate a “pre-ictal” state.
    “Patients are all different from each other, so in order to accurately predict seizures, we need to record signals, we need to look at a lot of different features and we need to have an algorithm to select the most important feature for prediction,” Song said.
    “I can’t tell you how exciting this is. At USC we’ve been very interested in trying to create tools that enhance the public health dimension of these diseases that we’re treating, and it’s really difficult,” Liu said.
    “Epileptologists are still relatively few in number in many parts of our country and world. While they can identify many subtle features on EEG, the kinds of models that Song can create can identify additional features at a massive scale necessary to help the millions of patients affected by epilepsy in our region and worldwide,” Liu said.
    Heck, who is also co-director for the USC Neurorestoration Center, said that there are two important issues to the clinical relevance of this technology.
    “One is that a majority of patients who suffer from epilepsy live with fear and anxiety about their next seizure, which may strike like lightning at the most inopportune moment, perhaps while driving, or just walking in public. An ample warning provides a critical ‘get safe’ opportunity,” Heck said. “The second relevant issue clinically is that we have brain implants, smart devices, that this engineered technology can enhance, giving greater hope for efficacy of our existing therapies.”

  • Social media use driven by search for reward, akin to animals seeking food

    Our use of social media, specifically our efforts to maximize “likes,” follows a pattern of “reward learning,” concludes a new study by an international team of scientists. Its findings, which appear in the journal Nature Communications, reveal parallels with the behavior of animals, such as rats, in seeking food rewards.
    “These results establish that social media engagement follows basic, cross-species principles of reward learning,” explains David Amodio, a professor at New York University and the University of Amsterdam and one of the paper’s authors. “These findings may help us understand why social media comes to dominate daily life for many people and provide clues, borrowed from research on reward learning and addiction, to how troubling online engagement may be addressed.”
    In 2020, more than four billion people spent several hours per day, on average, on platforms such as Instagram, Facebook, Twitter, and other more specialized forums. This widespread social media engagement has been likened by many to an addiction, in which people are driven to pursue positive online social feedback, such as “likes,” over direct social interaction and even basic needs like eating and drinking.
    While social media usage has been studied extensively, what actually drives people to engage, sometimes obsessively, with others on social media is less clear.
    To examine these motivations, the Nature Communications study, which also included scientists from Boston University, the University of Zurich, and Sweden’s Karolinska Institute, directly tested, for the first time, whether social media use can be explained by the way our minds process and learn from rewards.
    To do so, the authors analyzed more than one million social media posts from over 4,000 users on Instagram and other sites. They found that people space their posts in a way that maximizes how many “likes” they receive on average: they post more frequently in response to a high rate of likes and less frequently when they receive fewer likes.
    The researchers then used computational models to reveal that this pattern conforms closely to known mechanisms of reward learning, a long-established psychological concept that posits behavior may be driven and reinforced by rewards.
    More specifically, their analysis suggested that social media engagement is driven by similar principles that lead non-human animals, such as rats, to maximize their food rewards in a Skinner Box — a commonly used experimental tool in which animal subjects, placed in a compartment, access food by taking certain actions (e.g., pressing a particular lever).
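    The flavor of such a reward-learning account can be sketched in a few lines (a generic illustration under simple assumptions, not the specific model fitted in the paper): an agent keeps a running estimate of how many likes a post earns and waits less between posts when that estimate is high.

        import numpy as np

        rng = np.random.default_rng(2)

        expected_likes = 0.5   # running estimate of reward per post (arbitrary start)
        alpha = 0.1            # learning rate
        gaps = []

        for _ in range(200):
            # Illustrative policy: post sooner when the expected reward is higher.
            gap = 10.0 / (1.0 + expected_likes)
            gaps.append(gap)
            likes = rng.poisson(3.0)                            # feedback from the platform
            expected_likes += alpha * (likes - expected_likes)  # delta-rule update

        print("mean gap, first 20 posts:", round(np.mean(gaps[:20]), 2))
        print("mean gap, last 20 posts:", round(np.mean(gaps[-20:]), 2))

    As the estimate climbs toward the platform’s actual like rate, the gap between posts shrinks, mirroring the pattern of more frequent posting under higher like rates seen in the million-post dataset.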
    The researchers then corroborated these results with an online experiment, in which human participants could post funny images with phrases, or “memes,” and receive likes as feedback on an Instagram-like platform. Consistent with the study’s quantitative analysis, the results showed that people posted more often when they received more likes — on average.
    “Our findings can help lead to a better understanding of why social media dominates so many people’s daily lives and can also provide leads for ways of tackling excessive online behavior,” says the University of Amsterdam’s Björn Lindström, the paper’s lead author.

    Story Source:
    Materials provided by New York University.

  • First complete coronavirus model shows cooperation

    The virus that causes COVID-19 holds some mysteries. Scientists remain in the dark on aspects of how it fuses with and enters the host cell; how it assembles itself; and how it buds off the host cell.
    Computational modeling combined with experimental data provides insights into these behaviors. But modeling over meaningful timescales of the pandemic-causing SARS-CoV-2 virus has so far been limited to just its pieces like the spike protein, a target for the current round of vaccines.
    A new multiscale coarse-grained model of the complete SARS-CoV-2 virion, its core genetic material and virion shell, has been developed for the first time using supercomputers. The model offers scientists the potential for new ways to exploit the virus’s vulnerabilities.
    “We wanted to understand how SARS-CoV-2 works holistically as a whole particle,” said Gregory Voth, the Haig P. Papazian Distinguished Service Professor at the University of Chicago. Voth is the corresponding author of the study that developed the first whole virus model, published November 2020 in the Biophysical Journal.
    “We developed a bottom-up coarse-grained model,” said Voth, “where we took information from atomistic-level molecular dynamics simulations and from experiments.” He explained that a coarse-grained model resolves only groups of atoms, versus all-atom simulations, where every single atomic interaction is resolved. “If you do that well, which is always a challenge, you maintain the physics in the model.”
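    To give a flavor of what coarse-graining means in practice, here is a minimal sketch under the simplest possible mapping (consecutive atoms grouped into center-of-mass beads; the group’s actual mapping and fitted bead interactions are far more careful):

        import numpy as np

        def coarse_grain(positions, masses, atoms_per_bead=10):
            # Replace each consecutive group of atoms with one bead at its
            # center of mass, shrinking the particle count by atoms_per_bead.
            n_beads = len(positions) // atoms_per_bead
            beads = np.empty((n_beads, 3))
            for i in range(n_beads):
                sl = slice(i * atoms_per_bead, (i + 1) * atoms_per_bead)
                m = masses[sl]
                beads[i] = (positions[sl] * m[:, None]).sum(axis=0) / m.sum()
            return beads

        # Example at the scale quoted for the spike simulations: 1.7 million atoms
        # become 170,000 beads with a 10-to-1 mapping.
        rng = np.random.default_rng(3)
        atoms = rng.standard_normal((1_700_000, 3))
        masses = rng.uniform(1.0, 16.0, 1_700_000)
        print(coarse_grain(atoms, masses).shape)   # (170000, 3)

    In bottom-up coarse-graining, the effective interactions between such beads are then fit so that the reduced model reproduces statistics of the all-atom simulations, which is the sense in which the physics is “maintained” in the model.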
    The early results of the study show how the spike proteins on the surface of the virus move cooperatively.

    “They don’t move independently like a bunch of random, uncorrelated motions,” Voth said. “They work together.”
    This cooperative motion of the spike proteins is informative of how the coronavirus explores and detects the ACE2 receptors of a potential host cell.
    “The paper we published shows the beginnings of how the modes of motion in the spike proteins are correlated,” Voth said. He added that the spikes are coupled to each other: when one protein moves, another also moves in response.
    “The ultimate goal of the model would be, as a first step, to study the initial virion attractions and interactions with ACE2 receptors on cells and to understand the origins of that attraction and how those proteins work together to go on to the virus fusion process,” Voth said.
    Voth and his group have been developing coarse-grained modeling methods on viruses such as HIV and influenza for more than 20 years. They ‘coarsen’ the data to make it simpler and more computationally tractable, while staying true to the dynamics of the system.

    “The benefit of the coarse-grained model is that it can be hundreds to thousands of times more computationally efficient than the all-atom model,” Voth explained. The computational savings allowed the team to build a much larger model of the coronavirus than ever before, at longer time-scales than what has been done with all-atom models.
    “What you’re left with are the much slower, collective motions. The effects of the higher frequency, all-atom motions are folded into those interactions if you do it well. That’s the idea of systematic coarse-graining.”
    The holistic model developed by Voth started with atomic models of the four main structural elements of the SARS-CoV-2 virion: the spike, membrane, nucleocapsid, and envelope proteins. These atomic models were then simulated and simplified to generate the complete coarse-grained model.
    The all-atom molecular dynamics simulations of the spike protein component of the virion system, about 1.7 million atoms, were generated by study co-author Rommie Amaro, a professor of chemistry and biochemistry at the University of California, San Diego.
    “Their model basically ingests our data, and it can learn from the data that we have at these more detailed scales and then go beyond where we went,” Amaro said. “This method that Voth has developed will allow us and others to simulate over the longer time scales that are needed to actually simulate the virus infecting a cell.”
    Amaro elaborated on the behavior observed from the coarse-grained simulations of the spike proteins.
    “What he saw very clearly was the beginning of the dissociation of the S1 subunit of the spike. The whole top part of the spike peels off during fusion,” Amaro said.
    This dissociation, which follows the spike’s binding to the ACE2 receptor of the host cell, is one of the first steps of viral fusion.
    “The larger S1 opening movements that they saw with this coarse-grained model was something we hadn’t seen yet in the all-atom molecular dynamics, and in fact it would be very difficult for us to see,” Amaro said. “It’s a critical part of the function of this protein and the infection process with the host cell. That was an interesting finding.”
    Voth and his team used the all-atom dynamical information on the open and closed states of the spike protein generated by the Amaro Lab on the Frontera supercomputer, as well as other data. The National Science Foundation (NSF)-funded Frontera system is operated by the Texas Advanced Computing Center (TACC) at The University of Texas at Austin.
    “Frontera has shown how important it is for these studies of the virus, at multiple scales. It was critical at the atomic level to understand the underlying dynamics of the spike with all of its atoms. There’s still a lot to learn there. But now this information can be used a second time to develop new methods that allow us to go out longer and farther, like the coarse-graining method,” Amaro said.
    “Frontera has been especially useful in providing the molecular dynamics data at the atomistic level for feeding into this model. It’s very valuable,” Voth said.
    The Voth Group initially used the Midway2 computing cluster at the University of Chicago Research Computing Center to develop the coarse-grained model.
    The membrane and envelope protein all-atom simulations were generated on the Anton 2 system. Operated by the Pittsburgh Supercomputing Center (PSC) with support from National Institutes of Health, Anton 2 is a special-purpose supercomputer for molecular dynamics simulations developed and provided without cost by D. E. Shaw Research.
    “Frontera and Anton 2 provided the key molecular level input data into this model,” Voth said.
    “A really fantastic thing about Frontera and these types of methods is that we can give people much more accurate views of how these viruses are moving and carrying about their work,” Amaro said.
    “There are parts of the virus that are invisible even to experiment,” she continued. “And through these types of methods that we use on Frontera, we can give scientists the first and important views into what these systems really look like with all of their complexity and how they’re interacting with antibodies or drugs or with parts of the host cell.”
    The type of information that Frontera is giving researchers helps to understand the basic mechanisms of viral infection. It is also useful for the design of safer and better medicines to treat the disease and to prevent it, Amaro added.
    Said Voth: “One thing that we’re concerned about right now is the UK and the South African SARS-CoV-2 variants. Presumably, with a computational platform like we have developed here, we can rapidly assess those variants, which are changes of the amino acids. We can hopefully rather quickly understand the changes these mutations cause to the virus and then hopefully help in the design of new modified vaccines going forward.”
    The study, “A multiscale coarse-grained model of the SARS-CoV-2 virion,” was published on November 27, 2020 in the Biophysical Journal. The study co-authors are Alvin Yu, Alexander J. Pak, Peng He, Viviana Monje-Galvan, Gregory A. Voth of the University of Chicago; and Lorenzo Casalino, Zied Gaieb, Abigail C. Dommer, and Rommie E. Amaro of the University of California, San Diego. Funding was provided by the NSF through NSF RAPID grant CHE-2029092, NSF RAPID MCB-2032054, the National Institute of General Medical Sciences of the National Institutes of Health through grant R01 GM063796, National Institutes of Health GM132826, and a UC San Diego Moores Cancer Center 2020 SARS-COV-2 seed grant. Computational resources were provided by the Research Computing Center at the University of Chicago, Frontera at the Texas Advanced Computing Center funded by the NSF grant (OAC-1818253), and the Pittsburgh Supercomputing Center (PSC) through the Anton 2 machine. Anton 2 computer time was allocated by the COVID-19 HPC Consortium and provided by the PSC through Grant R01GM116961 from the National Institutes of Health. The Anton 2 machine at PSC was generously made available by D. E. Shaw Research.

  • Smartphones could help to prevent glaucoma blindness

    Smartphones could be used to scan people’s eyes for early-warning signs of glaucoma — helping to prevent severe ocular diseases and blindness, a new study reveals.
    Some of the most common eye-related diseases are avoidable and display strong risk factors before onset, but it is much harder to pinpoint a group of people at risk from glaucoma.
    Glaucoma is associated with elevated levels of intraocular pressure (IOP) and an accurate, non-invasive way of monitoring an individual’s IOP over an extended period would help to significantly increase their chances of maintaining their vision.
    Soundwaves used as a mobile measurement method would detect increasing values of IOP, prompting early diagnosis and treatment.
    Scientists at the University of Birmingham have successfully carried out experiments using soundwaves and an eye model, publishing their findings in Engineering Reports.
    Co-author Dr. Khamis Essa, Director of the Advanced Manufacturing Group at the University of Birmingham, commented: “We discovered a relationship between the internal pressure of an object and its acoustic reflection coefficient. With further investigation into eye geometry and how this affects the interaction with soundwaves, it will be possible to use a smartphone to accurately measure IOP from the comfort of the user’s home.”
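    For context, the acoustic reflection coefficient mentioned here has a standard textbook definition for a wave hitting the boundary between two media (the formula below is that general expression; its dependence on IOP for the eye is the study’s finding):

        R = \frac{Z_2 - Z_1}{Z_2 + Z_1}

    where Z_1 and Z_2 are the acoustic impedances on either side of the interface. The Birmingham experiments indicate that, for their eye model, this measurable reflection varies systematically with internal pressure, which is what would let a soundwave generated and recorded by a smartphone act as a pressure probe.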
    Risk factors for other eye diseases are easier to assess — for example, in the case of diabetic retinopathy, individuals with diabetes are specifically at risk and are constantly monitored for tiny bulges that develop in the blood vessels of the eye.

    The current ‘gold standard’ method of measuring IOP is applanation tonometry, where numbing drops followed by non-toxic dye are applied to the patient’s eyes. There are problems and measurement errors associated with this method.
    An independent risk factor of glaucoma is having a thin central corneal thickness (CCT) — either by natural occurrence or a common procedure like laser eye surgery. A thin CCT causes artificially low readings of IOP when using applanation tonometry.
    The only way to verify the reading is by a full eye examination — not possible in a mobile situation. Also, the equipment is too expensive for most people to purchase for long-term home monitoring.
    IOP is a vital measurement of healthy vision, defined as pressure created by continued renewal of eye fluids.
    Ocular hypertension is caused by an imbalance in production and drainage of aqueous fluid — most common in older adults. Risk increases with age, in turn increasing the likelihood of an individual developing glaucoma.
    Glaucoma is a disease of the optic nerve which is estimated to affect 79.6 million people world-wide and, if left untreated, causes irreversible damage. In most cases, blindness can be prevented with appropriate control and treatment.

    Story Source:
    Materials provided by University of Birmingham.

  • Laser system generates random numbers at ultrafast speeds

    An international team of scientists has developed a system that can generate random numbers over a hundred times faster than current technologies, paving the way towards faster, cheaper, and more secure data encryption in today’s digitally connected world.
    The random generator system was jointly developed by researchers from Nanyang Technological University, Singapore (NTU Singapore), Yale University, and Trinity College Dublin, and made in NTU.
    Random numbers are used for a variety of purposes, such as generating data encryption keys and one-time passwords (OTPs) in everyday processes such as online banking and e-commerce, to shore up their security.
    The system uses a laser with a special hourglass-shaped cavity to generate random patterns, which are formed by light rays reflecting and interacting with each other within the cavity. By reading the patterns, the system generates many series of random numbers at the same time.
    The researchers found that like snowflakes, no two number sequences generated using the system were the same, due to the unpredictable nature of how the light rays reflect and interact with each other in the cavity.
    The laser used in the system is about one millimeter long, smaller than most other lasers. It is also energy efficient and can be operated with any household power socket, as it only requires a one-ampere (1A) current.

    In their study, published on 26 February 2021 in Science, one of the world’s leading scientific journals, the researchers verified the effectiveness of their random number generator using two tests, including one published by the US National Institute of Standards and Technology.
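    The simplest check in the NIST statistical test suite, the frequency (monobit) test, gives a feel for what such verification involves (a sketch of that single test only, not the full battery used in the study):

        import math
        import random

        def monobit_p_value(bits):
            # NIST SP 800-22 frequency (monobit) test: convert bits to +/-1,
            # sum them, and ask whether the imbalance is consistent with a
            # fair coin. A p-value of at least 0.01 is the usual pass mark.
            n = len(bits)
            s = sum(1 if b else -1 for b in bits)
            return math.erfc(abs(s) / math.sqrt(n) / math.sqrt(2))

        # Python's own generator stands in here for the laser output.
        bits = [random.getrandbits(1) for _ in range(1_000_000)]
        print("monobit p-value:", monobit_p_value(bits))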
    The research team has shown that the NTU-made random number generator, which is faster and more secure than existing comparable technologies, could help safeguard users’ data in a world that relies increasingly on Internet transactions.
    Professor Wang Qijie from NTU’s School of Electrical and Electronic Engineering & School of Physical and Mathematical Science, as well as The Photonics Institute, who led the NTU team involved in the international research, said, “Current random number generators run by computers are cheap and effective. However, they are vulnerable to attacks, as hackers could predict future number sequences if they discover the algorithm used to generate the numbers. Our system is safer as it uses an unpredictable method to generate numbers, making it impossible for even those with the same device to replicate.”
    Dr Zeng Yongquan, a Research Fellow from NTU’s School of Physical and Mathematical Sciences, who co-designed the laser system, said: “Our system surpasses current random number generators, as the method can simultaneously generate many more random sequences of information at an even faster rate.”
    The team’s laser system can also generate about 250 terabytes of random bits per second — more than a hundred times faster than current computer-based random number generators.
    At its speed, the system would only take about 12 seconds to generate a body of random numbers equivalent to the size of information in the largest library in the world — the US Library of Congress.
    Elaborating on the future of the system, the team is working on making the technology ready for practical use, by incorporating the laser into a compact chip that enables the random numbers generated to be fed directly into a computer.

    Story Source:
    Materials provided by Nanyang Technological University.