More stories

  • First complete coronavirus model shows cooperation

    SARS-CoV-2, the virus that causes COVID-19, still holds some mysteries. Scientists remain in the dark on aspects of how it fuses with and enters the host cell, how it assembles itself, and how it buds off the host cell.
    Computational modeling combined with experimental data provides insights into these behaviors. But modeling the pandemic-causing SARS-CoV-2 virus over meaningful timescales has so far been limited to individual components, such as the spike protein, a target of the current round of vaccines.
    A new multiscale coarse-grained model of the complete SARS-CoV-2 virion, its core genetic material and virion shell, has been developed for the first time using supercomputers. The model offers scientists the potential for new ways to exploit the virus’s vulnerabilities.
    “We wanted to understand how SARS-CoV-2 works holistically as a whole particle,” said Gregory Voth, the Haig P. Papazian Distinguished Service Professor at the University of Chicago. Voth is the corresponding author of the study that developed the first whole virus model, published November 2020 in the Biophysical Journal.
    “We developed a bottom-up coarse-grained model,” said Voth, “where we took information from atomistic-level molecular dynamics simulations and from experiments.” He explained that a coarse-grained model resolves only groups of atoms, versus all-atom simulations, where every single atomic interaction is resolved. “If you do that well, which is always a challenge, you maintain the physics in the model.”
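    The core idea of bottom-up coarse-graining can be illustrated with a minimal sketch: groups of atoms are replaced by single beads, here placed at each group's center of mass. This is only a toy mapping rule for illustration, not the authors' code; the actual model also derives effective interactions between beads from atomistic simulations.

```python
import numpy as np

def coarse_grain(positions, masses, groups):
    """Map atomistic coordinates onto coarse-grained beads.

    positions : (n_atoms, 3) array of atomic coordinates
    masses    : (n_atoms,) array of atomic masses
    groups    : list of index arrays, one per bead

    Each bead is placed at the mass-weighted center of its atom group,
    so large-scale structure is kept while atomic detail is discarded.
    """
    beads = []
    for idx in groups:
        m = masses[idx]
        com = (positions[idx] * m[:, None]).sum(axis=0) / m.sum()
        beads.append(com)
    return np.array(beads)

# Four atoms collapsed into two beads of two atoms each.
pos = np.array([[0.0, 0, 0], [1.0, 0, 0], [4.0, 0, 0], [6.0, 0, 0]])
mass = np.array([1.0, 1.0, 1.0, 3.0])
print(coarse_grain(pos, mass, [np.array([0, 1]), np.array([2, 3])]))
```

    The reduction in the number of interacting particles is what yields the computational savings described below.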
    The early results of the study show how the spike proteins on the surface of the virus move cooperatively.

    “They don’t move independently like a bunch of random, uncorrelated motions,” Voth said. “They work together.”
    This cooperative motion of the spike proteins is informative of how the coronavirus explores and detects the ACE2 receptors of a potential host cell.
    “The paper we published shows the beginnings of how the modes of motion in the spike proteins are correlated,” Voth said. He added that the spikes are coupled to each other: when one protein moves, another moves in response.
    “The ultimate goal of the model would be, as a first step, to study the initial virion attractions and interactions with ACE2 receptors on cells and to understand the origins of that attraction and how those proteins work together to go on to the virus fusion process,” Voth said.
    Voth and his group have been developing coarse-grained modeling methods for viruses such as HIV and influenza for more than 20 years. They ‘coarsen’ the data to make it simpler and more computationally tractable while staying true to the dynamics of the system.

    “The benefit of the coarse-grained model is that it can be hundreds to thousands of times more computationally efficient than the all-atom model,” Voth explained. The computational savings allowed the team to build a much larger model of the coronavirus than ever before, at longer time-scales than what has been done with all-atom models.
    “What you’re left with are the much slower, collective motions. The effects of the higher frequency, all-atom motions are folded into those interactions if you do it well. That’s the idea of systematic coarse-graining.”
    The holistic model developed by Voth started with atomic models of the four main structural elements of the SARS-CoV-2 virion: the spike, membrane, nucleocapsid, and envelope proteins. These atomic models were then simulated and simplified to generate the complete coarse-grained model.
    The all-atom molecular dynamics simulations of the spike protein component of the virion system, about 1.7 million atoms, were generated by study co-author Rommie Amaro, a professor of chemistry and biochemistry at the University of California, San Diego.
    “Their model basically ingests our data, and it can learn from the data that we have at these more detailed scales and then go beyond where we went,” Amaro said. “This method that Voth has developed will allow us and others to simulate over the longer time scales that are needed to actually simulate the virus infecting a cell.”
    Amaro elaborated on the behavior observed from the coarse-grained simulations of the spike proteins.
    “What he saw very clearly was the beginning of the dissociation of the S1 subunit of the spike. The whole top part of the spike peels off during fusion,” Amaro said.
    This dissociation is one of the first steps of viral fusion with the host cell, occurring after the spike binds to the cell’s ACE2 receptor.
    “The larger S1 opening movements that they saw with this coarse-grained model were something we hadn’t seen yet in the all-atom molecular dynamics, and in fact it would be very difficult for us to see,” Amaro said. “It’s a critical part of the function of this protein and the infection process with the host cell. That was an interesting finding.”
    Voth and his team used the all-atom dynamical information on the open and closed states of the spike protein generated by the Amaro Lab on the Frontera supercomputer, as well as other data. The National Science Foundation (NSF)-funded Frontera system is operated by the Texas Advanced Computing Center (TACC) at The University of Texas at Austin.
    “Frontera has shown how important it is for these studies of the virus, at multiple scales. It was critical at the atomic level to understand the underlying dynamics of the spike with all of its atoms. There’s still a lot to learn there. But now this information can be used a second time to develop new methods that allow us to go out longer and farther, like the coarse-graining method,” Amaro said.
    “Frontera has been especially useful in providing the molecular dynamics data at the atomistic level for feeding into this model. It’s very valuable,” Voth said.
    The Voth Group initially used the Midway2 computing cluster at the University of Chicago Research Computing Center to develop the coarse-grained model.
    The membrane and envelope protein all-atom simulations were generated on the Anton 2 system. Operated by the Pittsburgh Supercomputing Center (PSC) with support from National Institutes of Health, Anton 2 is a special-purpose supercomputer for molecular dynamics simulations developed and provided without cost by D. E. Shaw Research.
    “Frontera and Anton 2 provided the key molecular level input data into this model,” Voth said.
    “A really fantastic thing about Frontera and these types of methods is that we can give people much more accurate views of how these viruses are moving and carrying about their work,” Amaro said.
    “There are parts of the virus that are invisible even to experiment,” she continued. “And through these types of methods that we use on Frontera, we can give scientists the first and important views into what these systems really look like with all of their complexity and how they’re interacting with antibodies or drugs or with parts of the host cell.”
    The type of information that Frontera is giving researchers helps to understand the basic mechanisms of viral infection. It is also useful for the design of safer and better medicines to treat the disease and to prevent it, Amaro added.
    Said Voth: “One thing that we’re concerned about right now is the UK and the South African SARS-CoV-2 variants. Presumably, with a computational platform like the one we have developed here, we can rapidly assess those variants, which are changes to the amino acids. We can hopefully rather quickly understand the changes these mutations cause to the virus and then help in the design of new, modified vaccines going forward.”
    The study, “A multiscale coarse-grained model of the SARS-CoV-2 virion,” was published on November 27, 2020 in the Biophysical Journal. The study co-authors are Alvin Yu, Alexander J. Pak, Peng He, Viviana Monje-Galvan, and Gregory A. Voth of the University of Chicago; and Lorenzo Casalino, Zied Gaieb, Abigail C. Dommer, and Rommie E. Amaro of the University of California, San Diego. Funding was provided by the NSF through NSF RAPID grant CHE-2029092, NSF RAPID MCB-2032054, the National Institute of General Medical Sciences of the National Institutes of Health through grant R01 GM063796, National Institutes of Health grant GM132826, and a UC San Diego Moores Cancer Center 2020 SARS-CoV-2 seed grant. Computational resources were provided by the Research Computing Center at the University of Chicago; Frontera at the Texas Advanced Computing Center, funded by NSF grant OAC-1818253; and the Pittsburgh Supercomputing Center (PSC) through the Anton 2 machine. Anton 2 computer time was allocated by the COVID-19 HPC Consortium and provided by the PSC through Grant R01GM116961 from the National Institutes of Health. The Anton 2 machine at PSC was generously made available by D. E. Shaw Research.

  • Smartphones could help to prevent glaucoma blindness

    Smartphones could be used to scan people’s eyes for early-warning signs of glaucoma — helping to prevent severe ocular diseases and blindness, a new study reveals.
    Some of the most common eye-related diseases are avoidable and display strong risk factors before onset, but it is much harder to pinpoint a group of people at risk from glaucoma.
    Glaucoma is associated with elevated levels of intraocular pressure (IOP), and an accurate, non-invasive way of monitoring an individual’s IOP over an extended period would significantly increase their chances of maintaining their vision.
    Soundwaves used as a mobile measurement method would detect increasing values of IOP, prompting early diagnosis and treatment.
    Scientists at the University of Birmingham have successfully carried out experiments using soundwaves and an eye model, publishing their findings in Engineering Reports.
    Co-author Dr. Khamis Essa, Director of the Advanced Manufacturing Group at the University of Birmingham, commented: “We discovered a relationship between the internal pressure of an object and its acoustic reflection coefficient. With further investigation into eye geometry and how this affects the interaction with soundwaves, it is possible to use a smartphone to accurately measure IOP from the comfort of the user’s home.”
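    The quantity Dr. Essa refers to has a standard textbook form: at normal incidence, the acoustic reflection coefficient between two media depends only on their characteristic impedances. The sketch below computes that textbook quantity; the calibration that maps a change in IOP to a change in effective impedance for a real eye is the study's contribution and is not reproduced here.

```python
def reflection_coefficient(z1, z2):
    """Normal-incidence pressure reflection coefficient:
    R = (Z2 - Z1) / (Z2 + Z1), for characteristic acoustic
    impedances Z1 (incident medium) and Z2 (second medium).
    """
    return (z2 - z1) / (z2 + z1)

# Example: air (~415 rayl) to water-like tissue (~1.48e6 rayl).
# R is very close to 1 (nearly total reflection), which is why
# small impedance shifts, e.g. from a pressure change, demand a
# precise measurement.
print(round(reflection_coefficient(415.0, 1.48e6), 5))
```

    Matched impedances (Z1 = Z2) give R = 0, i.e. no reflection at all.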
    Risk factors for other eye diseases are easier to assess — for example, in the case of diabetic retinopathy, individuals with diabetes are specifically at risk and are constantly monitored for tiny bulges that develop in the blood vessels of the eye.

    The current ‘gold standard’ method of measuring IOP is applanation tonometry, where numbing drops followed by non-toxic dye are applied to the patient’s eyes. There are problems and measurement errors associated with this method.
    An independent risk factor of glaucoma is having a thin central corneal thickness (CCT) — either by natural occurrence or a common procedure like laser eye surgery. A thin CCT causes artificially low readings of IOP when using applanation tonometry.
    The only way to verify the reading is by a full eye examination — not possible in a mobile situation. Also, the equipment is too expensive for most people to purchase for long-term home monitoring.
    IOP is a vital measure of healthy vision, defined as the pressure created by the continual renewal of eye fluids.
    Ocular hypertension is caused by an imbalance in production and drainage of aqueous fluid — most common in older adults. Risk increases with age, in turn increasing the likelihood of an individual developing glaucoma.
    Glaucoma is a disease of the optic nerve that is estimated to affect 79.6 million people worldwide and, if left untreated, causes irreversible damage. In most cases, blindness can be prevented with appropriate control and treatment.

    Story Source:
    Materials provided by University of Birmingham. Note: Content may be edited for style and length.

  • Laser system generates random numbers at ultrafast speeds

    An international team of scientists has developed a system that can generate random numbers over a hundred times faster than current technologies, paving the way towards faster, cheaper, and more secure data encryption in today’s digitally connected world.
    The random generator system was jointly developed by researchers from Nanyang Technological University, Singapore (NTU Singapore), Yale University, and Trinity College Dublin, and made in NTU.
    Random numbers are used for a variety of purposes, such as generating data encryption keys and one-time passwords (OTPs), in everyday processes such as online banking and e-commerce, to shore up their security.
    The system uses a laser with a special hourglass-shaped cavity to generate random patterns, which are formed by light rays reflecting and interacting with each other within the cavity. By reading the patterns, the system generates many series of random numbers at the same time.
    The researchers found that like snowflakes, no two number sequences generated using the system were the same, due to the unpredictable nature of how the light rays reflect and interact with each other in the cavity.
    The laser used in the system is about one millimeter long, smaller than most other lasers. It is also energy efficient and can be operated with any household power socket, as it only requires a one-ampere (1A) current.

    In their study, published in the journal Science on 26 February 2021, the researchers verified the effectiveness of their random number generator using two tests, including one published by the US National Institute of Standards and Technology.
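    To give a flavor of what such validation involves, here is a minimal version of the NIST SP 800-22 "frequency (monobit)" test, the simplest check in the NIST statistical suite. It asks whether the numbers of ones and zeros in a bit sequence are close enough to what a truly random source would produce; this sketch is illustrative and is not the full suite the researchers would have run.

```python
import math

def monobit_p_value(bits):
    """P-value for the hypothesis that `bits` came from a uniform
    random source (NIST SP 800-22 frequency/monobit test).

    bits : sequence of 0s and 1s
    """
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # +1 per one, -1 per zero
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))  # large p => looks random

# A perfectly balanced sequence passes easily; a constant sequence
# fails decisively (p far below the usual 0.01 threshold).
print(monobit_p_value([0, 1] * 500))        # 1.0
print(monobit_p_value([1] * 1000) < 0.01)   # True
```

    Passing this test is necessary but far from sufficient; the full NIST suite adds many further checks (runs, spectral, entropy-based, and others).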
    The research team has shown that the NTU-made random number generator, which is faster and more secure than existing comparable technologies, could help safeguard users’ data in a world that relies ever more heavily on Internet transactions (see Image 2).
    Professor Wang Qijie from NTU’s School of Electrical and Electronic Engineering & School of Physical and Mathematical Science, as well as The Photonics Institute, who led the NTU team involved in the international research, said, “Current random number generators run by computers are cheap and effective. However, they are vulnerable to attacks, as hackers could predict future number sequences if they discover the algorithm used to generate the numbers. Our system is safer as it uses an unpredictable method to generate numbers, making it impossible for even those with the same device to replicate.”
    Dr Zeng Yongquan, a Research Fellow from NTU’s School of Physical and Mathematical Sciences, who co-designed the laser system, said: “Our system surpasses current random number generators, as the method can simultaneously generate many more random sequences of information at an even faster rate.”
    The team’s laser system can also generate about 250 terabytes of random bits per second — more than a hundred times faster than current computer-based random number generators.
    At its speed, the system would only take about 12 seconds to generate a body of random numbers equivalent to the size of information in the largest library in the world — the US Library of Congress.
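    The two figures above are consistent with each other under a common back-of-envelope assumption that the Library of Congress holds on the order of a few petabytes of information:

```python
# Back-of-envelope check of the quoted figures: at 250 terabytes of
# random bits per second, how much data accumulates in 12 seconds?
rate_tb_per_s = 250   # stated generation rate, terabytes per second
seconds = 12          # stated time to match the Library of Congress

total_pb = rate_tb_per_s * seconds / 1000   # terabytes -> petabytes
print(total_pb)  # 3.0 petabytes generated in 12 seconds
```

    Three petabytes matches popular estimates of the library's digitized size, so the 12-second claim follows directly from the stated rate.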
    Elaborating on the future of the system, the team is working on making the technology ready for practical use, by incorporating the laser into a compact chip that enables the random numbers generated to be fed directly into a computer.

    Story Source:
    Materials provided by Nanyang Technological University.

  • Scientists induce artificial 'magnetic texture' in graphene

    Graphene is incredibly strong, lightweight, conductive … the list of its superlative properties goes on.
    It is not, however, magnetic — a shortcoming that has stunted its usefulness in spintronics, an emerging field that scientists say could eventually rewrite the rules of electronics, leading to more powerful semiconductors, computers and other devices.
    Now, an international research team led by the University at Buffalo is reporting an advancement that could help overcome this obstacle.
    In a study published today in the journal Physical Review Letters, researchers describe how they paired a magnet with graphene, and induced what they describe as “artificial magnetic texture” in the nonmagnetic wonder material.
    “Independent of each other, graphene and spintronics each possess incredible potential to fundamentally change many aspects of business and society. But if you can blend the two together, the synergistic effects are likely to be something this world hasn’t yet seen,” says lead author Nargess Arabchigavkani, who performed the research as a PhD candidate at UB and is now a postdoctoral research associate at SUNY Polytechnic Institute.
    Additional authors represent UB, King Mongkut’s Institute of Technology Ladkrabang in Thailand, Chiba University in Japan, University of Science and Technology of China, University of Nebraska Omaha, University of Nebraska Lincoln, and Uppsala University in Sweden.

    For their experiments, researchers placed a 20-nanometer-thick magnet in direct contact with a sheet of graphene, which is a single layer of carbon atoms arranged in a two-dimensional honeycomb lattice that is less than 1 nanometer thick.
    “To give you a sense of the size difference, it’s a bit like putting a brick on a sheet of paper,” says the study’s senior author Jonathan Bird, PhD, professor and chair of electrical engineering at the UB School of Engineering and Applied Sciences.
    Researchers then placed eight electrodes in different spots around the graphene and magnet to measure their conductivity.
    The electrodes revealed a surprise — the magnet induced an artificial magnetic texture in the graphene that persisted even in areas of the graphene away from the magnet. Put simply, the intimate contact between the two objects caused the normally nonmagnetic carbon to behave differently, exhibiting magnetic properties similar to common magnetic materials like iron or cobalt.
    Moreover, the researchers found that these properties could completely overwhelm the natural properties of the graphene, even several microns away from the point of contact between the graphene and the magnet. This distance (a micron is a millionth of a meter), while incredibly small, is relatively large at the microscopic scale.
    The findings raise important questions relating to the microscopic origins of the magnetic texture in the graphene.
    Most important, Bird says, is the extent to which the induced magnetic behavior arises from the influence of spin polarization and/or spin-orbit coupling, phenomena known to be intimately connected to the magnetic properties of materials and to the emerging technology of spintronics.
    Rather than utilizing the electrical charge carried by electrons (as in traditional electronics), spintronic devices seek to exploit the unique quantum property of electrons known as spin (which is analogous to the earth spinning on its own axis). Spin offers the potential to pack more data into smaller devices, thereby increasing the power of semiconductors, quantum computers, mass storage devices and other digital electronics.
    The work was supported by funding from the U.S. Department of Energy. Additional support came from the U.S. National Science Foundation; nCORE, a wholly owned subsidiary of the Semiconductor Research Corporation; the Swedish Research Council; and the Japan Society for the Promotion of Science.

    Story Source:
    Materials provided by University at Buffalo. Original written by Cory Nealon.

  • Light unbound: Data limits could vanish with new optical antennas

    Researchers at the University of California, Berkeley, have found a new way to harness properties of light waves that can radically increase the amount of data they carry. They demonstrated the emission of discrete twisting laser beams from antennas made up of concentric rings with an overall diameter roughly that of a human hair, small enough to be placed on computer chips.
    The new work, reported in a paper published Thursday, Feb. 25, in the journal Nature Physics, throws wide open the amount of information that can be multiplexed, or simultaneously transmitted, by a coherent light source. A common example of multiplexing is the transmission of multiple telephone calls over a single wire, but there had been fundamental limits to the number of coherent twisted lightwaves that could be directly multiplexed.
    “It’s the first time that lasers producing twisted light have been directly multiplexed,” said study principal investigator Boubacar Kanté, the Chenming Hu Associate Professor at UC Berkeley’s Department of Electrical Engineering and Computer Sciences. “We’ve been experiencing an explosion of data in our world, and the communication channels we have now will soon be insufficient for what we need. The technology we are reporting overcomes current data capacity limits through a characteristic of light called the orbital angular momentum. It is a game-changer with applications in biological imaging, quantum cryptography, high-capacity communications and sensors.”
    Kanté, who is also a faculty scientist in the Materials Sciences Division at Lawrence Berkeley National Laboratory (Berkeley Lab), has been continuing this work at UC Berkeley after having started the research at UC San Diego. The first author of the study is Babak Bahari, a former Ph.D. student in Kanté’s lab.
    Kanté said that current methods of transmitting signals through electromagnetic waves are reaching their limit. Frequency, for example, has become saturated, which is why there are only so many stations one can tune into on the radio. Polarization, where lightwaves are separated into two values — horizontal or vertical — can double the amount of information transmitted. Filmmakers take advantage of this when creating 3D movies, allowing viewers with specialized glasses to receive two sets of signals — one for each eye — to create a stereoscopic effect and the illusion of depth.
    Harnessing the potential in a vortex
    But beyond frequency and polarization is orbital angular momentum, or OAM, a property of light that has garnered attention from scientists because it offers exponentially greater capacity for data transmission. One way to think about OAM is to compare it to the vortex of a tornado.

    “The vortex in light, with its infinite degrees of freedom, can, in principle, support an unbounded quantity of data,” said Kanté. “The challenge has been finding a way to reliably produce the infinite number of OAM beams. No one has ever produced OAM beams of such high charges in such a compact device before.”
    The researchers started with an antenna, one of the most important components in electromagnetism and, they noted, central to ongoing 5G and upcoming 6G technologies. The antennas in this study are topological, which means that their essential properties are retained even when the device is twisted or bent.
    Creating rings of light
    To make the topological antenna, the researchers used electron-beam lithography to etch a grid pattern onto indium gallium arsenide phosphide, a semiconductor material, and then bonded the structure onto a surface made of yttrium iron garnet. The researchers designed the grid to form quantum wells in a pattern of three concentric circles — the largest about 50 microns in diameter — to trap photons. The design created conditions to support a phenomenon known as the photonic quantum Hall effect, which describes the movement of photons when a magnetic field is applied, forcing light to travel in only one direction in the rings.
    “People thought the quantum Hall effect with a magnetic field could be used in electronics but not in optics because of the weak magnetism of existing materials at optical frequencies,” said Kanté. “We are the first to show that the quantum Hall effect does work for light.”
    By applying a magnetic field perpendicular to their two-dimensional microstructure, the researchers successfully generated three OAM laser beams traveling in circular orbits above the surface. The study further showed that the laser beams had quantum numbers as large as 276, referring to the number of times light twists around its axis in one wavelength.
    “Having a larger quantum number is like having more letters to use in the alphabet,” said Kanté. “We’re allowing light to expand its vocabulary. In our study, we demonstrated this capability at telecommunication wavelengths, but in principle, it can be adapted to other frequency bands. Even though we created three lasers, multiplying the data rate by three, there is no limit to the possible number of beams and data capacity.”
    Kanté said the next step in his lab is to make quantum Hall rings that use electricity as power sources.

  • Computer training to reduce trauma symptoms

    Computer training applied in addition to psychotherapy can potentially help reduce the symptoms of post-traumatic stress disorder (PTSD). These are the results found by researchers from Ruhr-Universität Bochum and their collaborating partners in a randomised controlled clinical trial with 80 patients with PTSD. With the computerised training, the patients learned to appraise recurring and distressing trauma symptoms in a less negative light and instead to interpret them as a normal and understandable part of processing the trauma. The results are described by a team headed by Dr. Marcella Woud and Dr. Simon Blackwell from the Department of Clinical Psychology and Psychotherapy, together with the group led by Professor Henrik Kessler from the Clinic for Psychosomatic Medicine and Psychotherapy at the LWL University Hospital Bochum in the journal Psychotherapy and Psychosomatics, published online on 23 February 2021.
    Intrusions are a core symptom of post-traumatic stress disorder. Images of the traumatic experience suddenly and uncontrollably re-enter consciousness, often accompanied by strong sensory impressions such as the sounds or certain smells at the scene of the trauma, sometimes even making patients feel as if they are reliving the trauma. “Patients appraise the fact that they are experiencing these intrusions very negatively; they are often afraid that it is a sign that they are losing their mind,” explains Marcella Woud. “The feeling of having no control over the memories and experiencing the wide variety of intense negative emotions that often accompany intrusions make them even more distressing, which in turn reinforces negative appraisals.”
    A sentence completion task could help patients to reappraise symptoms
    Consequently, trauma therapies specifically address negative appraisals of symptoms such as intrusions. The Bochum-based team set out to establish whether a computerised training targeting these appraisals could also reduce symptoms and, at the same time, help to understand more about the underlying mechanisms of negative appraisals in PTSD. During the training, the patients are shown trauma-relevant sentences on the computer, which they have to complete. For example: “Since the incident, I sometimes react more anxiously than usual. This reaction is under_tand_ble.” Or: “I often think that I myself am to blame for the trauma. Such thoughts are un_ound_d.” The patients’ task is to fill in the word fragment’s first missing letter and by doing so to systematically appraise the statements in a more positive way. The aim is thus to learn that their symptoms are normal reactions and part of the processing of what they have experienced.
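    The mechanics of the fragment task can be sketched in a few lines. This is an illustrative reconstruction, not the clinical software: '_' marks a missing letter, and the patient's task is to supply the first missing one, committing them to the (positive) completed word.

```python
def matches(fragment, word):
    """True if `word` fits `fragment`, where '_' matches any letter."""
    return len(fragment) == len(word) and all(
        f == "_" or f == w for f, w in zip(fragment, word)
    )

def first_missing_index(fragment):
    """Position of the first blank the patient has to fill in."""
    return fragment.index("_")

frag = "under_tand_ble"
word = "understandable"
print(matches(frag, word))              # True
print(word[first_missing_index(frag)])  # 's', the letter to fill in
```

    Completing the fragment forces the patient to actively generate the positive appraisal ("understandable") rather than merely read it, which is the intended training mechanism.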
    Approximately half of the study participants underwent this “Cognitive Bias Modification-Appraisal” training, while the other half received a placebo control training — a visual concentration training — which was not designed to change negative appraisals. Both trainings took place during the first two weeks of the patients’ treatment in the clinic, with four sessions each week. One session lasted about 20 minutes. During and after the inpatient treatment, various measurements were collected to record any changes to the symptoms.
    Fewer trauma symptoms
    Patients who had participated in the appraisal training subsequently rated their symptoms such as intrusions and trauma-relevant thoughts less negatively than patients in the control group, and they also showed fewer other trauma-relevant symptoms after the training. “This leads us to conclude that the training appears to work — at least in the short-term,” says Marcella Woud. “Our study was not designed to examine long-term effects, which is something we will have to do in future studies on top of studying the training’s mechanisms in more detail.”

    Story Source:
    Materials provided by Ruhr-University Bochum.

  • AI identifies social bias trends in Bollywood, Hollywood movies

    Babies whose births were depicted in Bollywood films from the 1950s and 60s were more often than not boys; in today’s films, boy and girl newborns are about evenly split. In the 50s and 60s, dowries were socially acceptable; today, not so much. And Bollywood’s conception of beauty has remained consistent through the years: beautiful women have fair skin.
    Fans and critics of Bollywood — the popular name for a $2.1 billion film industry centered in Mumbai, India — might have some inkling of all this, particularly as movies often reflect changes in the culture. But these insights came via an automated computer analysis designed by Carnegie Mellon University computer scientists.
    The researchers, led by Kunal Khadilkar and Ashiqur R. KhudaBukhsh of CMU’s Language Technologies Institute (LTI), gathered 100 Bollywood movies from each of the past seven decades, along with 100 of the top-grossing Hollywood movies from the same periods. They then used statistical language models to analyze the subtitles of those 1,400 films for gender and social biases, looking for factors such as which words are closely associated with each other.
    “Most cultural studies of movies might consider five or 10 movies,” said Khadilkar, a master’s student in LTI. “Our method can look at 2,000 movies in a matter of days.”
    It’s a method that enables people to study cultural issues with much more precision, said Tom Mitchell, Founders University Professor in the School of Computer Science and a co-author of the study.
    “We’re talking about statistical, automated analysis of movies at scale and across time,” Mitchell said. “It gives us a finer probe for understanding the cultural themes implicit in these films.” The same natural language processing tools might be used to rapidly analyze hundreds or thousands of books, magazine articles, radio transcripts or social media posts, he added.
    For instance, the researchers assessed beauty conventions in movies by using a so-called cloze test. Essentially, it’s a fill-in-the-blank exercise: “A beautiful woman should have BLANK skin.” A language model normally would predict “soft” as the answer, they noted. But when the model was trained with the Bollywood subtitles, the consistent prediction became “fair.” The same thing happened when Hollywood subtitles were used, though the bias was less pronounced.
    To assess the prevalence of male characters, the researchers used a metric called the Male Pronoun Ratio (MPR), which compares the occurrences of male pronouns such as “he” and “him” with the total occurrences of male and female pronouns. From 1950 through today, the MPR for Bollywood and Hollywood movies has ranged from roughly 60 to 65. By contrast, the MPR for a selection of Google Books dropped from near 75 in the 1950s to parity, about 50, in the 2020s.
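    The MPR metric as described is simple to compute. In this sketch the exact pronoun lists are an assumption (the article names only “he” and “him” explicitly); the ratio is scaled to 0-100 to match the figures quoted above.

```python
import re

# Assumed pronoun lists; the paper's exact lists may differ.
MALE = {"he", "him", "his"}
FEMALE = {"she", "her", "hers"}

def male_pronoun_ratio(text):
    """Male pronoun occurrences as a percentage of all gendered
    pronoun occurrences (50 = parity, 100 = male-only)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    m = sum(t in MALE for t in tokens)
    f = sum(t in FEMALE for t in tokens)
    return 100 * m / (m + f)

line = "He said she gave him his ticket, and he thanked her."
print(round(male_pronoun_ratio(line)))  # 67: 4 male vs 2 female pronouns
```

    Run over a film's full subtitle track, this yields one number per film, which is what lets the trend be plotted across decades.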
    Dowries — monetary or property gifts from a bride’s family to the groom’s — were common in India before they were outlawed in the early 1960s. Looking at words associated with dowry over the years, the researchers found such words as “loan,” “debt” and “jewelry” in Bollywood films of the 1950s, which suggested compliance. By the 1970s, other words, such as “consent” and “responsibility,” began to appear. Finally, in the 2000s, the words most closely associated with dowry — including “trouble,” “divorce” and “refused” — indicated noncompliance or its consequences.
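    One simple way to surface the words most associated with a term like “dowry” is pointwise mutual information (PMI) over a co-occurrence window. The sketch below is illustrative (function name, window size and sample lines are all assumptions); the study itself relied on language models trained on the full subtitle corpora.

    ```python
    from collections import Counter
    from math import log
    import re

    def pmi_neighbors(lines, target, window=3):
        """Rank words by PMI with `target`, counting co-occurrences
        within a +/-`window` token span around each target occurrence."""
        word_counts, pair_counts, total = Counter(), Counter(), 0
        for line in lines:
            tokens = re.findall(r"[a-z]+", line.lower())
            total += len(tokens)
            word_counts.update(tokens)
            for i, tok in enumerate(tokens):
                if tok != target:
                    continue
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i and tokens[j] != target:
                        pair_counts[tokens[j]] += 1
        n_pairs = sum(pair_counts.values())
        # PMI compares P(word | near target) with the word's base rate.
        scores = {
            w: log((c / n_pairs) / (word_counts[w] / total))
            for w, c in pair_counts.items()
        }
        return sorted(scores, key=scores.get, reverse=True)

    # Hypothetical 1950s-style lines: "debt" should rank near the top.
    lines = ["the dowry debt ruined them", "they paid the jewelry loan"]
    print(pmi_neighbors(lines, "dowry"))
    ```

    Running the same ranking separately on each decade’s subtitles is what makes the shift from “loan” and “debt” toward “refused” and “divorce” visible as numbers rather than impressions.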
    “All of these things we kind of knew,” said KhudaBukhsh, an LTI project scientist, “but now we have numbers to quantify them. And we can also see the progress over the last 70 years as these biases have been reduced.”
    A research paper by Khadilkar, KhudaBukhsh and Mitchell was presented at the Association for the Advancement of Artificial Intelligence virtual conference earlier this month.

    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Byron Spice. Note: Content may be edited for style and length.

  • in

    Molecular bridges power up printed electronics

    The exfoliation of graphite into graphene layers inspired the investigation of thousands of layered materials: amongst them transition metal dichalcogenides (TMDs). These semiconductors can be used to make conductive inks to manufacture printed electronic and optoelectronic devices. However, defects in their structure may hinder their performance. Now, Graphene Flagship researchers have overcome these hurdles by introducing ‘molecular bridges’: small molecules that interconnect the TMD flakes, thereby boosting the conductivity and overall performance.
    The results, published in Nature Nanotechnology, come from a multidisciplinary collaboration between Graphene Flagship partners the University of Strasbourg and CNRS, France; AMBER and Trinity College Dublin, Ireland; and the Cambridge Graphene Centre, University of Cambridge, UK. The molecular bridges increase the carrier mobility, a physical parameter related to electrical conductivity, tenfold.
    TMD inks are used in many fields, from electronics and sensors to catalysis and biomedicine. They are usually manufactured using liquid-phase exfoliation, a technique developed by the Graphene Flagship that allows for the mass production of graphene and layered materials. But, although this technology yields high volumes of product, it has some limitations. The exfoliation process may create defects that affect the layered material’s performance, particularly when it comes to conducting electricity.
    Inspired by organic electronics — the field behind successful technologies such as organic light-emitting diodes (OLEDs) and low-cost solar cells — the Graphene Flagship team found a solution: molecular bridges. With these chemical structures, the researchers managed to kill two birds with one stone. First, they connected TMD flakes to one another, creating a network that facilitates the charge transport and conductivity. The molecular bridges double up as walls, healing the chemical defects at the edges of the flakes and eliminating electrical vacancies that would otherwise promote energy loss.
    Furthermore, molecular bridges provide researchers with a new tool to tailor the conductivity of TMD inks on demand. If the bridge is a conjugated molecule — a structure with double bonds or aromatic rings — the carrier mobility is higher than when using saturated molecules, such as hydrocarbons. “The structure of the molecular bridge plays a key role,” explains Paolo Samorì, from Graphene Flagship partner the University of Strasbourg, France, who led the study. “We use molecules called di-thiols, which you can readily buy from any chemical supplier’s catalogue,” he adds. Their available structural diversity opens a world of possibilities to regulate the conductivity, adapting it to each specific application. “Molecular bridges will help us integrate many new functions in TMD-based devices,” continues Samorì. “These inks can be printed on any surface, like plastic, fabric or paper, enabling a whole variety of new circuitry and sensors for flexible electronics and wearables.”
    Maria Smolander, Graphene Flagship Work Package Leader for Flexible Electronics, adds: “This work is of high importance as a crucial step towards the full exploitation of solution-based fabrication methods like printing in flexible electronics. The use of the covalently bound bridges improves both the structural and electrical properties of the thin layers based on TMD flakes.”
    Andrea C. Ferrari, Science and Technology Officer of the Graphene Flagship and Chair of its Management Panel, adds: “The Graphene Flagship pioneered both liquid phase exfoliation and inkjet printing of graphene and layered materials. These techniques can produce and handle large volumes of materials. This paper is a key step to make semiconducting layered materials available for printed, flexible and wearable electronics, and yet again pushes forward the state of the art.”

    Story Source:
    Materials provided by Graphene Flagship. Original written by Fernando Gomollón-Bel. Note: Content may be edited for style and length.