More stories

  •

    Early-warning for seizures could be a game-changer for epilepsy patients

    Epilepsy is one of the most common neurological conditions, affecting more than 65 million people worldwide. For those living with epilepsy, the next seizure can feel like a ticking time bomb: it can strike at any time and any place, and can be fatal if it occurs during a hazardous activity such as driving.
    A research team at USC Viterbi School of Engineering and Keck Medicine of USC is tackling this dangerous problem with a powerful new seizure-predicting mathematical model that will give epilepsy patients an accurate warning five minutes to one hour before they are likely to experience a seizure, offering enhanced freedom for the patient and reducing the need for medical intervention.
    The research, published in the Journal of Neural Engineering, is led by corresponding authors Dong Song, research associate professor of biomedical engineering at USC Viterbi School of Engineering, and Pen-Ning Yu, a former PhD researcher in Song’s lab, in collaboration with Charles Liu, professor of clinical neurological surgery and director of the USC Neurorestoration Center. The other authors are Ted Berger, David Packard Chair in Engineering and professor of biomedical engineering, and Christianne Heck, medical director of the USC Comprehensive Epilepsy Program at the Keck Medical Center.
    The mathematical model works by learning from large amounts of brain signal data collected from an electrical implant in the patient. Liu and his team have already been working with epilepsy patients with implantable devices, which are able to offer ongoing real-time monitoring of the brain’s electrical signals in the same way that an electroencephalogram (EEG) uses external electrodes to measure signals. The new mathematical model can take this data and learn each patient’s unique brain signals, looking out for precursors, or patterns of brain activity that show a “pre-ictal” state, in which a patient is at risk of seizure onset.
    Song said the new model is able to accurately predict whether a seizure may happen within one hour, allowing the patient to take the necessary precautions.
    “For example, it could be as simple as just alerting the patient that their seizure is coming in the next hour, so they shouldn’t drive their car right now, or they should take their medicine, or they should go and sit down,” Song said. “Or, ideally, in the future we can detect seizure signals and then send electrical stimulation through an implantable device to the brain to prevent the seizure from happening.”
    Liu said that the discovery would have major positive implications for public health, given that epilepsy treatment has been severely impacted by the pandemic over the past year.

    “This is, hopefully, going to change the way we deal with epilepsy going forward, and it’s driven by needs that have been in place for a long time but have been highlighted and accelerated by COVID,” Liu said.
    He said that currently, patients with medically intractable epilepsy (epilepsy that cannot be controlled with medication) are admitted electively to the hospital for video EEG monitoring. With the advent of COVID, these elective admissions halted and epilepsy programs across the country ground to a halt over the past year. Liu said this highlights the need for a new workflow in which EEG recordings from scalp or intradural electrodes can be acquired at home and analyzed computationally.
    “So we need to create a new workflow by which, instead of bringing patients to the ICU, we take the recordings from their home and use the computation models to do everything they would have done in the hospital,” Liu said. “Not only can you manage patients using physical distancing, you can also scale in a way that only technology allows. Computation can analyze thousands of pages of data at once, whereas a single neurologist cannot.”
    How the Seizure Prediction Model Works
    Song said the new model differs from previous seizure prediction models in that it extracts both linear and non-linear information from the patient’s brain signals.

    “Linear is the simple feature. If you understand the parts, you can understand the whole,” Song said. “Whereas the non-linear feature means that even if you understand the parts, when you scale up it has some emergent properties that cannot be explained.”
    “For some patients, linear features are more important and for other patients, non-linear features are more important,” Song said.
    Song said that while other models predict brain activity over a short time scale, a matter of milliseconds, his team’s model examined an extended time scale.
    “The brain is a multi-temporal scale device so we need to understand what happens not just in the short term, but many more steps in the future,” Song said.
    He said that the model is also unique in that it is patient-specific: it extracts the information that is significant for each individual patient, because every brain differs in the signals that indicate a “pre-ictal” state.
    “Patients are all different from each other, so in order to accurately predict seizures, we need to record signals, we need to look at a lot of different features and we need to have an algorithm to select the most important feature for prediction,” Song said.
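    The article doesn’t spell out the model’s actual feature set, so the sketch below is only illustrative: band power stands in for a “linear” feature and sample entropy for a “non-linear” one, computed on a short window of simulated EEG.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Linear feature: power in a frequency band, from the periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return psd[(freqs >= lo) & (freqs < hi)].sum()

def sample_entropy(signal, m=2, r=0.2):
    """Non-linear feature: sample entropy, a measure of irregularity."""
    x = (signal - signal.mean()) / signal.std()

    def matches(length):
        t = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
        return (d < r).sum() - len(t)  # drop self-matches

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
fs = 256                              # Hz; a typical EEG sampling rate
window = rng.standard_normal(fs * 2)  # 2 s of toy "brain signal"
features = {
    "theta_power": band_power(window, fs, 4, 8),    # linear
    "gamma_power": band_power(window, fs, 30, 80),  # linear
    "sampen": sample_entropy(window),               # non-linear
}
```

    In a real pipeline, feature vectors like this, computed per window, would feed a per-patient classifier that selects whichever features best separate pre-ictal from normal activity.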
    “I can’t tell you how exciting this is. At USC we’ve been very interested in trying to create tools that enhance the public health dimension of these diseases that we’re treating, and it’s really difficult,” Liu said.
    “Epileptologists are still relatively few in number in many parts of our country and world. While they can identify many subtle features on EEG, the kinds of models that Song can create can identify additional features at a massive scale necessary to help the millions of patients affected by epilepsy in our region and worldwide,” Liu said.
    Heck, who is also co-director for the USC Neurorestoration Center, said that there are two important issues to the clinical relevance of this technology.
    “One is that a majority of patients who suffer from epilepsy live with fear and anxiety about their next seizure, which may strike like lightning at the most inopportune moment, perhaps while driving or just walking in public. An ample warning provides a critical ‘get safe’ opportunity,” Heck said. “The second clinically relevant issue is that we have brain implants, smart devices, that this engineered technology can enhance, giving greater hope for efficacy of our existing therapies.”

  •

    Social media use driven by search for reward, akin to animals seeking food

    Our use of social media, specifically our efforts to maximize “likes,” follows a pattern of “reward learning,” concludes a new study by an international team of scientists. Its findings, which appear in the journal Nature Communications, reveal parallels with the behavior of animals, such as rats, in seeking food rewards.
    “These results establish that social media engagement follows basic, cross-species principles of reward learning,” explains David Amodio, a professor at New York University and the University of Amsterdam and one of the paper’s authors. “These findings may help us understand why social media comes to dominate daily life for many people and provide clues, borrowed from research on reward learning and addiction, to how troubling online engagement may be addressed.”
    In 2020, more than four billion people spent several hours per day, on average, on platforms such as Instagram, Facebook, Twitter, and other more specialized forums. This widespread social media engagement has been likened by many to an addiction, in which people are driven to pursue positive online social feedback, such as “likes,” over direct social interaction and even basic needs like eating and drinking.
    While social media usage has been studied extensively, what actually drives people to engage, sometimes obsessively, with others on social media is less clear.
    To examine these motivations, the Nature Communications study, which also included scientists from Boston University, the University of Zurich, and Sweden’s Karolinska Institute, directly tested, for the first time, whether social media use can be explained by the way our minds process and learn from rewards.
    To do so, the authors analyzed more than one million social media posts from over 4,000 users on Instagram and other sites. They found that people space their posts in a way that maximizes how many “likes” they receive on average: they post more frequently in response to a high rate of likes and less frequently when they receive fewer likes.
    The researchers then used computational models to reveal that this pattern conforms closely to known mechanisms of reward learning, a long-established psychological concept that posits behavior may be driven and reinforced by rewards.
    More specifically, their analysis suggested that social media engagement is driven by similar principles that lead non-human animals, such as rats, to maximize their food rewards in a Skinner Box — a commonly used experimental tool in which animal subjects, placed in a compartment, access food by taking certain actions (e.g., pressing a particular lever).
    The researchers then corroborated these results with an online experiment, in which human participants could post funny images with phrases, or “memes,” and receive likes as feedback on an Instagram-like platform. Consistent with the study’s quantitative analysis, the results showed that people posted more often when they received more likes — on average.
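    A toy reward-learning agent (not the authors’ actual computational model, and with parameters invented here) reproduces the qualitative pattern the study describes: when each post earns more likes on average, the agent learns to post more often.

```python
import math
import random

def simulate_poster(like_rate, steps=2000, alpha=0.1, seed=0):
    """Toy reward learner: a delta-rule estimate of likes-per-post
    drives the probability of posting on each time step."""
    rng = random.Random(seed)
    expected = 0.0   # running estimate of likes earned per post
    post_prob = 0.5  # initial tendency to post
    posts = 0
    for _ in range(steps):
        if rng.random() < post_prob:
            posts += 1
            # each post is seen by 10 toy "followers"
            likes = sum(rng.random() < like_rate for _ in range(10))
            expected += alpha * (likes - expected)  # delta rule
            # more expected reward -> higher posting probability
            post_prob = 1.0 / (1.0 + math.exp(-(expected - 2.0)))
    return posts

# an agent in a richer "like" environment ends up posting far more often
assert simulate_poster(like_rate=0.8) > simulate_poster(like_rate=0.1)
```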
    “Our findings can help lead to a better understanding of why social media dominates so many people’s daily lives and can also provide leads for ways of tackling excessive online behavior,” says the University of Amsterdam’s Björn Lindström, the paper’s lead author.

    Story Source:
    Materials provided by New York University. Note: Content may be edited for style and length.

  •

    First complete coronavirus model shows cooperation

    The SARS-CoV-2 virus, which causes COVID-19, holds some mysteries. Scientists remain in the dark on aspects of how it fuses with and enters the host cell; how it assembles itself; and how it buds off the host cell.
    Computational modeling combined with experimental data provides insights into these behaviors. But modeling over meaningful timescales of the pandemic-causing SARS-CoV-2 virus has so far been limited to just its pieces like the spike protein, a target for the current round of vaccines.
    A new multiscale coarse-grained model of the complete SARS-CoV-2 virion, its core genetic material and virion shell, has been developed for the first time using supercomputers. The model offers scientists the potential for new ways to exploit the virus’s vulnerabilities.
    “We wanted to understand how SARS-CoV-2 works holistically as a whole particle,” said Gregory Voth, the Haig P. Papazian Distinguished Service Professor at the University of Chicago. Voth is the corresponding author of the study that developed the first whole virus model, published November 2020 in the Biophysical Journal.
    “We developed a bottom-up coarse-grained model,” said Voth, “where we took information from atomistic-level molecular dynamics simulations and from experiments.” He explained that a coarse-grained model resolves only groups of atoms, versus all-atom simulations, where every single atomic interaction is resolved. “If you do that well, which is always a challenge, you maintain the physics in the model.”
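    As a minimal illustration of that bottom-up step (far simpler than the published model), grouping atoms into beads at their centers of mass looks like this:

```python
import numpy as np

def coarse_grain(positions, masses, groups):
    """Replace each group of atoms with one bead at the group's
    center of mass -- the basic move of bottom-up coarse-graining."""
    beads = []
    for idx in groups:
        m = masses[idx]
        beads.append((m[:, None] * positions[idx]).sum(axis=0) / m.sum())
    return np.array(beads)

# toy example: six equal-mass "atoms" collapsed into two beads
pos = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0],
                [10, 0, 0], [11, 0, 0], [12, 0, 0]])
cg = coarse_grain(pos, np.ones(6), groups=[[0, 1, 2], [3, 4, 5]])
# -> two beads, at x = 1.0 and x = 11.0
```

    Effective interactions between the beads are then fit so that the coarse-grained model reproduces the statistics of the all-atom simulations.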
    The early results of the study show how the spike proteins on the surface of the virus move cooperatively.

    “They don’t move independently like a bunch of random, uncorrelated motions,” Voth said. “They work together.”
    This cooperative motion of the spike proteins is informative of how the coronavirus explores and detects the ACE2 receptors of a potential host cell.
    “The paper we published shows the beginnings of how the modes of motion in the spike proteins are correlated,” Voth said. He added that the spikes are coupled to each other. When one protein moves another one also moves in response.
    “The ultimate goal of the model would be, as a first step, to study the initial virion attractions and interactions with ACE2 receptors on cells and to understand the origins of that attraction and how those proteins work together to go on to the virus fusion process,” Voth said.
    Voth and his group have been developing coarse-grained modeling methods on viruses such as HIV and influenza for more than 20 years. They ‘coarsen’ the data to make it simpler and more computationally tractable, while staying true to the dynamics of the system.

    “The benefit of the coarse-grained model is that it can be hundreds to thousands of times more computationally efficient than the all-atom model,” Voth explained. The computational savings allowed the team to build a much larger model of the coronavirus than ever before, at longer time-scales than what has been done with all-atom models.
    “What you’re left with are the much slower, collective motions. The effects of the higher frequency, all-atom motions are folded into those interactions if you do it well. That’s the idea of systematic coarse-graining.”
    The holistic model developed by Voth started with atomic models of the four main structural elements of the SARS-CoV-2 virion: the spike, membrane, nucleocapsid, and envelope proteins. These atomic models were then simulated and simplified to generate the complete coarse-grained model.
    The all-atom molecular dynamics simulations of the spike protein component of the virion system, about 1.7 million atoms, were generated by study co-author Rommie Amaro, a professor of chemistry and biochemistry at the University of California, San Diego.
    “Their model basically ingests our data, and it can learn from the data that we have at these more detailed scales and then go beyond where we went,” Amaro said. “This method that Voth has developed will allow us and others to simulate over the longer time scales that are needed to actually simulate the virus infecting a cell.”
    Amaro elaborated on the behavior observed from the coarse-grained simulations of the spike proteins.
    “What he saw very clearly was the beginning of the dissociation of the S1 subunit of the spike. The whole top part of the spike peels off during fusion,” Amaro said.
    This dissociation, which follows the spike’s binding to the ACE2 receptor of the host cell, is one of the first steps of viral fusion.
    “The larger S1 opening movements that they saw with this coarse-grained model was something we hadn’t seen yet in the all-atom molecular dynamics, and in fact it would be very difficult for us to see,” Amaro said. “It’s a critical part of the function of this protein and the infection process with the host cell. That was an interesting finding.”
    Voth and his team used the all-atom dynamical information on the open and closed states of the spike protein generated by the Amaro Lab on the Frontera supercomputer, as well as other data. The National Science Foundation (NSF)-funded Frontera system is operated by the Texas Advanced Computing Center (TACC) at The University of Texas at Austin.
    “Frontera has shown how important it is for these studies of the virus, at multiple scales. It was critical at the atomic level to understand the underlying dynamics of the spike with all of its atoms. There’s still a lot to learn there. But now this information can be used a second time to develop new methods that allow us to go out longer and farther, like the coarse-graining method,” Amaro said.
    “Frontera has been especially useful in providing the molecular dynamics data at the atomistic level for feeding into this model. It’s very valuable,” Voth said.
    The Voth Group initially used the Midway2 computing cluster at the University of Chicago Research Computing Center to develop the coarse-grained model.
    The membrane and envelope protein all-atom simulations were generated on the Anton 2 system. Operated by the Pittsburgh Supercomputing Center (PSC) with support from National Institutes of Health, Anton 2 is a special-purpose supercomputer for molecular dynamics simulations developed and provided without cost by D. E. Shaw Research.
    “Frontera and Anton 2 provided the key molecular level input data into this model,” Voth said.
    “A really fantastic thing about Frontera and these types of methods is that we can give people much more accurate views of how these viruses are moving and carrying about their work,” Amaro said.
    “There are parts of the virus that are invisible even to experiment,” she continued. “And through these types of methods that we use on Frontera, we can give scientists the first and important views into what these systems really look like with all of their complexity and how they’re interacting with antibodies or drugs or with parts of the host cell.”
    The type of information that Frontera is giving researchers helps to understand the basic mechanisms of viral infection. It is also useful for the design of safer and better medicines to treat the disease and to prevent it, Amaro added.
    Said Voth: “One thing that we’re concerned about right now are the UK and the South African SARS-CoV-2 variants. Presumably, with a computational platform like we have developed here, we can rapidly assess those variants, which are changes of the amino acids. We can hopefully rather quickly understand the changes these mutations cause to the virus and then hopefully help in the design of new modified vaccines going forward.”
    The study, “A multiscale coarse-grained model of the SARS-CoV-2 virion,” was published on November 27, 2020 in the Biophysical Journal. The study co-authors are Alvin Yu, Alexander J. Pak, Peng He, Viviana Monje-Galvan, and Gregory A. Voth of the University of Chicago; and Lorenzo Casalino, Zied Gaieb, Abigail C. Dommer, and Rommie E. Amaro of the University of California, San Diego. Funding was provided by the NSF through NSF RAPID grant CHE-2029092, NSF RAPID grant MCB-2032054, the National Institute of General Medical Sciences of the National Institutes of Health through grant R01 GM063796, National Institutes of Health grant GM132826, and a UC San Diego Moores Cancer Center 2020 SARS-CoV-2 seed grant. Computational resources were provided by the Research Computing Center at the University of Chicago, Frontera at the Texas Advanced Computing Center funded by NSF grant OAC-1818253, and the Pittsburgh Supercomputing Center (PSC) through the Anton 2 machine. Anton 2 computer time was allocated by the COVID-19 HPC Consortium and provided by the PSC through Grant R01GM116961 from the National Institutes of Health. The Anton 2 machine at PSC was generously made available by D. E. Shaw Research.

  •

    Smartphones could help to prevent glaucoma blindness

    Smartphones could be used to scan people’s eyes for early-warning signs of glaucoma — helping to prevent severe ocular diseases and blindness, a new study reveals.
    Some of the most common eye-related diseases are avoidable and display strong risk factors before onset, but it is much harder to pinpoint a group of people at risk from glaucoma.
    Glaucoma is associated with elevated levels of intraocular pressure (IOP) and an accurate, non-invasive way of monitoring an individual’s IOP over an extended period would help to significantly increase their chances of maintaining their vision.
    Soundwaves used as a mobile measurement method would detect increasing values of IOP, prompting early diagnosis and treatment.
    Scientists at the University of Birmingham have successfully carried out experiments using soundwaves and an eye model, publishing their findings in Engineering Reports.
    Co-author Dr. Khamis Essa, Director of the Advanced Manufacturing Group at the University of Birmingham, commented: “We discovered a relationship between the internal pressure of an object and its acoustic reflection coefficient. With further investigation into eye geometry and how this affects the interaction with soundwaves, it is possible to use a smartphone to accurately measure IOP from the comfort of the user’s home.”
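    The article gives no numbers for the pressure-reflection relationship, so the calibration data below are invented purely for illustration; the general scheme, inverting a monotone calibration curve to estimate IOP from a measured reflection coefficient, might look like this:

```python
import numpy as np

# Hypothetical calibration: acoustic reflection coefficient measured
# on an eye model at known internal pressures (mmHg). These numbers
# are made up for illustration, not taken from the study.
calib_pressure = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
calib_reflect = np.array([0.42, 0.47, 0.53, 0.60, 0.68, 0.77])

def estimate_iop(reflection_coeff):
    """Invert the (assumed monotone) calibration curve by linear
    interpolation to estimate intraocular pressure."""
    return float(np.interp(reflection_coeff, calib_reflect, calib_pressure))

iop = estimate_iop(0.565)  # a reading between the 20 and 25 mmHg points
```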
    Risk factors for other eye diseases are easier to assess — for example, in the case of diabetic retinopathy, individuals with diabetes are specifically at risk and are constantly monitored for tiny bulges that develop in the blood vessels of the eye.

    The current ‘gold standard’ method of measuring IOP is applanation tonometry, where numbing drops followed by non-toxic dye are applied to the patient’s eyes. There are problems and measurement errors associated with this method.
    An independent risk factor for glaucoma is a thin central corneal thickness (CCT), whether occurring naturally or resulting from a common procedure such as laser eye surgery. A thin CCT causes artificially low readings of IOP when using applanation tonometry.
    The only way to verify the reading is by a full eye examination — not possible in a mobile situation. Also, the equipment is too expensive for most people to purchase for long-term home monitoring.
    IOP, the pressure created by the continual renewal of the eye’s fluids, is a vital measurement of healthy vision.
    Ocular hypertension is caused by an imbalance in production and drainage of aqueous fluid — most common in older adults. Risk increases with age, in turn increasing the likelihood of an individual developing glaucoma.
    Glaucoma is a disease of the optic nerve which is estimated to affect 79.6 million people world-wide and, if left untreated, causes irreversible damage. In most cases, blindness can be prevented with appropriate control and treatment.

    Story Source:
    Materials provided by University of Birmingham.

  •

    Laser system generates random numbers at ultrafast speeds

    An international team of scientists has developed a system that can generate random numbers over a hundred times faster than current technologies, paving the way towards faster, cheaper, and more secure data encryption in today’s digitally connected world.
    The random generator system was jointly developed by researchers from Nanyang Technological University, Singapore (NTU Singapore), Yale University, and Trinity College Dublin, and made in NTU.
    Random numbers are used for a variety of purposes, such as generating data encryption keys and one-time passwords (OTPs) in everyday processes such as online banking and e-commerce, to shore up their security.
    The system uses a laser with a special hourglass-shaped cavity to generate random patterns, which are formed by light rays reflecting and interacting with each other within the cavity. By reading the patterns, the system generates many series of random numbers at the same time.
    The researchers found that like snowflakes, no two number sequences generated using the system were the same, due to the unpredictable nature of how the light rays reflect and interact with each other in the cavity.
    The laser used in the system is about one millimeter long, smaller than most other lasers. It is also energy efficient and can be operated with any household power socket, as it only requires a one-ampere (1A) current.

    In their study, published in the journal Science on 26 February 2021, the researchers verified the effectiveness of their random number generator using two tests, including one published by the US National Institute of Standards and Technology.
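    The simplest member of the NIST SP 800-22 suite is the frequency (“monobit”) test, which checks only that ones and zeros occur in roughly equal numbers; a minimal Python version:

```python
import math

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test. Converts bits to +/-1,
    sums them, and returns a p-value; the stream passes if p >= 0.01."""
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))

balanced = [0, 1] * 50_000  # exactly half ones: p-value of 1.0
biased = [1] * 100_000      # all ones: p-value near 0, a clear failure
```

    Passing monobit alone says little; the full suite applies many further tests (runs, spectral, entropy-based) to the same bit stream.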
    The research team has proven that the NTU-made random number generator, which is faster and more secure than existing comparable technologies, could help safeguard users’ data in a world that is steadily relying more on Internet transactions (see Image 2).
    Professor Wang Qijie from NTU’s School of Electrical and Electronic Engineering & School of Physical and Mathematical Sciences, as well as The Photonics Institute, who led the NTU team involved in the international research, said, “Current random number generators run by computers are cheap and effective. However, they are vulnerable to attacks, as hackers could predict future number sequences if they discover the algorithm used to generate the numbers. Our system is safer as it uses an unpredictable method to generate numbers, making it impossible for even those with the same device to replicate.”
    Dr Zeng Yongquan, a Research Fellow from NTU’s School of Physical and Mathematical Sciences, who co-designed the laser system, said: “Our system surpasses current random number generators, as the method can simultaneously generate many more random sequences of information at an even faster rate.”
    The team’s laser system can also generate about 250 terabytes of random bits per second — more than a hundred times faster than current computer-based random number generators.
    At its speed, the system would only take about 12 seconds to generate a body of random numbers equivalent to the size of information in the largest library in the world — the US Library of Congress.
    Elaborating on the future of the system, the team is working on making the technology ready for practical use, by incorporating the laser into a compact chip that enables the random numbers generated to be fed directly into a computer.

    Story Source:
    Materials provided by Nanyang Technological University.

  •

    Scientists induce artificial 'magnetic texture' in graphene

    Graphene is incredibly strong, lightweight, conductive … the list of its superlative properties goes on.
    It is not, however, magnetic — a shortcoming that has stunted its usefulness in spintronics, an emerging field that scientists say could eventually rewrite the rules of electronics, leading to more powerful semiconductors, computers and other devices.
    Now, an international research team led by the University at Buffalo is reporting an advancement that could help overcome this obstacle.
    In a study published today in the journal Physical Review Letters, researchers describe how they paired a magnet with graphene, and induced what they describe as “artificial magnetic texture” in the nonmagnetic wonder material.
    “Independent of each other, graphene and spintronics each possess incredible potential to fundamentally change many aspects of business and society. But if you can blend the two together, the synergistic effects are likely to be something this world hasn’t yet seen,” says lead author Nargess Arabchigavkani, who performed the research as a PhD candidate at UB and is now a postdoctoral research associate at SUNY Polytechnic Institute.
    Additional authors represent UB, King Mongkut’s Institute of Technology Ladkrabang in Thailand, Chiba University in Japan, University of Science and Technology of China, University of Nebraska Omaha, University of Nebraska Lincoln, and Uppsala University in Sweden.

    For their experiments, researchers placed a 20-nanometer-thick magnet in direct contact with a sheet of graphene, which is a single layer of carbon atoms arranged in a two-dimensional honeycomb lattice that is less than 1 nanometer thick.
    “To give you a sense of the size difference, it’s a bit like putting a brick on a sheet of paper,” says the study’s senior author Jonathan Bird, PhD, professor and chair of electrical engineering at the UB School of Engineering and Applied Sciences.
    Researchers then placed eight electrodes in different spots around the graphene and magnet to measure their conductivity.
    The electrodes revealed a surprise — the magnet induced an artificial magnetic texture in the graphene that persisted even in areas of the graphene away from the magnet. Put simply, the intimate contact between the two objects caused the normally nonmagnetic carbon to behave differently, exhibiting magnetic properties similar to common magnetic materials like iron or cobalt.
    Moreover, it was found that these properties could completely overwhelm the natural properties of the graphene, even several microns away from the contact point between the graphene and the magnet. This distance (a micron is a millionth of a meter), while incredibly small, is relatively large on the microscopic scale.
    The findings raise important questions relating to the microscopic origins of the magnetic texture in the graphene.
    Most important, Bird says, is the extent to which the induced magnetic behavior arises from the influence of spin polarization and/or spin-orbit coupling, which are phenomena known to be intimately connected to the magnetic properties of materials and to the emerging technology of spintronics.
    Rather than utilizing the electrical charge carried by electrons (as in traditional electronics), spintronic devices seek to exploit the unique quantum property of electrons known as spin (which is analogous to the earth spinning on its own axis). Spin offers the potential to pack more data into smaller devices, thereby increasing the power of semiconductors, quantum computers, mass storage devices and other digital electronics.
    The work was supported by funding from the U.S. Department of Energy. Additional support came from the U.S. National Science Foundation; nCORE, a wholly owned subsidiary of the Semiconductor Research Corporation; the Swedish Research Council; and the Japan Society for the Promotion of Science.

    Story Source:
    Materials provided by University at Buffalo. Original written by Cory Nealon.

  •

    Light unbound: Data limits could vanish with new optical antennas

    Researchers at the University of California, Berkeley, have found a new way to harness properties of light waves that can radically increase the amount of data they carry. They demonstrated the emission of discrete twisting laser beams from antennas made up of concentric rings, together roughly the diameter of a human hair, small enough to be placed on computer chips.
    The new work, reported in a paper published Thursday, Feb. 25, in the journal Nature Physics, throws wide open the amount of information that can be multiplexed, or simultaneously transmitted, by a coherent light source. A common example of multiplexing is the transmission of multiple telephone calls over a single wire, but there had been fundamental limits to the number of coherent twisted lightwaves that could be directly multiplexed.
    “It’s the first time that lasers producing twisted light have been directly multiplexed,” said study principal investigator Boubacar Kanté, the Chenming Hu Associate Professor at UC Berkeley’s Department of Electrical Engineering and Computer Sciences. “We’ve been experiencing an explosion of data in our world, and the communication channels we have now will soon be insufficient for what we need. The technology we are reporting overcomes current data capacity limits through a characteristic of light called the orbital angular momentum. It is a game-changer with applications in biological imaging, quantum cryptography, high-capacity communications and sensors.”
    Kanté, who is also a faculty scientist in the Materials Sciences Division at Lawrence Berkeley National Laboratory (Berkeley Lab), has been continuing this work at UC Berkeley after having started the research at UC San Diego. The first author of the study is Babak Bahari, a former Ph.D. student in Kanté’s lab.
    Kanté said that current methods of transmitting signals through electromagnetic waves are reaching their limit. Frequency, for example, has become saturated, which is why there are only so many stations one can tune into on the radio. Polarization, where lightwaves are separated into two values — horizontal or vertical — can double the amount of information transmitted. Filmmakers take advantage of this when creating 3D movies, allowing viewers with specialized glasses to receive two sets of signals — one for each eye — to create a stereoscopic effect and the illusion of depth.
    Harnessing the potential in a vortex
    But beyond frequency and polarization is orbital angular momentum, or OAM, a property of light that has garnered attention from scientists because it offers exponentially greater capacity for data transmission. One way to think about OAM is to compare it to the vortex of a tornado.

    “The vortex in light, with its infinite degrees of freedom, can, in principle, support an unbounded quantity of data,” said Kanté. “The challenge has been finding a way to reliably produce the infinite number of OAM beams. No one has ever produced OAM beams of such high charges in such a compact device before.”
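The "vortex" picture has a compact mathematical form. As standard background (not specific to this paper), an OAM beam's field carries a helical, azimuthally winding phase:

```latex
E_\ell(r, \varphi, z) \propto A(r, z)\, e^{i \ell \varphi}, \qquad \ell \in \mathbb{Z}
```

Here $\varphi$ is the azimuthal angle around the beam axis and the integer winding number $\ell$ is the topological charge, or quantum number; each photon in such a beam carries orbital angular momentum $L_z = \ell \hbar$. Because $\ell$ can be any integer, OAM offers, in principle, an unbounded set of mutually orthogonal channels.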
    The researchers started with an antenna, one of the most important components in electromagnetism and, they noted, central to ongoing 5G and upcoming 6G technologies. The antennas in this study are topological, which means that their essential properties are retained even when the device is twisted or bent.
    Creating rings of light
    To make the topological antenna, the researchers used electron-beam lithography to etch a grid pattern onto indium gallium arsenide phosphide, a semiconductor material, and then bonded the structure onto a surface made of yttrium iron garnet. The researchers designed the grid to form quantum wells in a pattern of three concentric circles — the largest about 50 microns in diameter — to trap photons. The design created conditions to support a phenomenon known as the photonic quantum Hall effect, which describes the movement of photons when a magnetic field is applied, forcing light to travel in only one direction in the rings.
    “People thought the quantum Hall effect with a magnetic field could be used in electronics but not in optics because of the weak magnetism of existing materials at optical frequencies,” said Kanté. “We are the first to show that the quantum Hall effect does work for light.”
    By applying a magnetic field perpendicular to their two-dimensional microstructure, the researchers successfully generated three OAM laser beams traveling in circular orbits above the surface. The study further showed that the laser beams had quantum numbers as large as 276, referring to the number of times light twists around its axis in one wavelength.
    “Having a larger quantum number is like having more letters to use in the alphabet,” said Kanté. “We’re allowing light to expand its vocabulary. In our study, we demonstrated this capability at telecommunication wavelengths, but in principle, it can be adapted to other frequency bands. Even though we created three lasers, multiplying the data rate by three, there is no limit to the possible number of beams and data capacity.”
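The "more letters in the alphabet" idea rests on the orthogonality of OAM modes: beams with different winding numbers can share one channel and still be separated cleanly. The following toy numerical sketch (an illustration of the principle, not the experiment's signal processing) multiplexes three symbols onto helical phases with ℓ = 1, 2, 3, then recovers each by projecting onto the matching phase:

```python
import cmath

# Toy OAM multiplexing sketch: superpose three data symbols on helical
# phases exp(i*l*phi) with l = 1, 2, 3, then demultiplex by projection.
# Orthogonality of the phase factors over the circle is what keeps the
# channels independent.

N = 360  # azimuthal sample points around the beam axis
modes = [1, 2, 3]              # OAM topological charges, one channel each
symbols = [1 + 0j, -1 + 0j, 0 + 1j]  # example data symbols, one per mode

# Multiplex: superpose the three helically phased channels
field = [sum(s * cmath.exp(1j * l * (2 * cmath.pi * k / N))
             for l, s in zip(modes, symbols))
         for k in range(N)]

def demux(field, l):
    """Project the combined field onto exp(-i*l*phi), averaged over the circle."""
    return sum(f * cmath.exp(-1j * l * (2 * cmath.pi * k / N))
               for k, f in enumerate(field)) / len(field)

# Each channel is recovered intact; an unused mode carries no signal.
recovered = [demux(field, l) for l in modes]
for l, s, r in zip(modes, symbols, recovered):
    print(f"l={l}: sent {s}, recovered {r.real:+.3f}{r.imag:+.3f}j")
```

Adding a fourth or fifth mode works the same way, which is the sense in which the number of beams, and hence the data capacity, is not fundamentally limited.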
    Kanté said the next step in his lab is to make quantum Hall rings that use electricity as power sources.


    Computer training to reduce trauma symptoms

    Computer training applied in addition to psychotherapy can potentially help reduce the symptoms of post-traumatic stress disorder (PTSD). These are the results found by researchers from Ruhr-Universität Bochum and their collaborating partners in a randomised controlled clinical trial with 80 patients with PTSD. With the computerised training, the patients learned to appraise recurring and distressing trauma symptoms in a less negative light and instead to interpret them as a normal and understandable part of processing the trauma. The results are described by a team headed by Dr. Marcella Woud and Dr. Simon Blackwell from the Department of Clinical Psychology and Psychotherapy, together with the group led by Professor Henrik Kessler from the Clinic for Psychosomatic Medicine and Psychotherapy at the LWL University Hospital Bochum in the journal Psychotherapy and Psychosomatics, published online on 23 February 2021.
    Intrusions are a core symptom of post-traumatic stress disorder. Images of the traumatic experience suddenly and uncontrollably re-enter consciousness, often accompanied by strong sensory impressions such as the sounds or certain smells at the scene of the trauma, sometimes even making patients feel as if they are reliving the trauma. “Patients appraise the fact that they are experiencing these intrusions very negatively; they are often afraid that it is a sign that they are losing their mind,” explains Marcella Woud. “The feeling of having no control over the memories and experiencing the wide variety of intense negative emotions that often accompany intrusions make them even more distressing, which in turn reinforces negative appraisals.”
    A sentence completion task could help patients to reappraise symptoms
    Consequently, trauma therapies specifically address negative appraisals of symptoms such as intrusions. The Bochum-based team set out to establish whether a computerised training targeting these appraisals could also reduce symptoms and, at the same time, help to understand more about the underlying mechanisms of negative appraisals in PTSD. During the training, the patients are shown trauma-relevant sentences on the computer, which they have to complete. For example: “Since the incident, I sometimes react more anxiously than usual. This reaction is under_tand_ble.” Or: “I often think that I myself am to blame for the trauma. Such thoughts are un_ound_d.” The patients’ task is to fill in the first missing letter of each word fragment and, in doing so, to systematically appraise the statements in a more positive way. The aim is thus to learn that their symptoms are normal reactions and part of the processing of what they have experienced.
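The mechanics of a single training trial can be sketched in a few lines. This is a hypothetical illustration (the article does not describe the actual training software at code level): a word in the appraisal sentence is shown with letters blanked out, and the trial is solved by supplying the first missing letter.

```python
# Hypothetical sketch of one Cognitive Bias Modification-Appraisal trial,
# not the study's actual software: blank out letters of the target word
# and accept the first missing letter as the completion.

def make_fragment(word, blanked_positions):
    """Blank the given letter positions, e.g. 'understandable' -> 'under_tand_ble'."""
    return "".join("_" if i in blanked_positions else c
                   for i, c in enumerate(word))

def check_completion(word, blanked_positions, typed_letter):
    """The trial is solved by typing the first missing letter."""
    first_blank = min(blanked_positions)
    return typed_letter.lower() == word[first_blank].lower()

word = "understandable"
fragment = make_fragment(word, {5, 10})
print(fragment)                               # under_tand_ble
print(check_completion(word, {5, 10}, "s"))   # True
```

The point of the design is that the blanked word always encodes the positive appraisal, so completing it commits the patient to that interpretation on every trial.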
    Approximately half of the study participants underwent this “Cognitive Bias Modification-Appraisal” training, while the other half received a placebo control training — a visual concentration training — which was not designed to change negative appraisals. Both trainings took place during the first two weeks of the patients’ treatment in the clinic, with four sessions each week. One session lasted about 20 minutes. During and after the inpatient treatment, various measurements were collected to record any changes to the symptoms.
    Fewer trauma symptoms
    Patients who had participated in the appraisal training subsequently rated their symptoms such as intrusions and trauma-relevant thoughts less negatively than patients in the control group, and they also showed fewer other trauma-relevant symptoms after the training. “This leads us to conclude that the training appears to work — at least in the short-term,” says Marcella Woud. “Our study was not designed to examine long-term effects, which is something we will have to do in future studies on top of studying the training’s mechanisms in more detail.”

    Story Source:
    Materials provided by Ruhr-University Bochum. Note: Content may be edited for style and length.