More stories

  •

    New metalens shifts focus without tilting or moving

    Polished glass lenses have been at the center of imaging systems for centuries. Their precise curvature enables them to focus light and produce sharp images, whether the object in view is a single cell, the page of a book, or a far-off galaxy.
    Changing focus to see clearly at all these scales typically requires physically tilting, sliding, or otherwise shifting a lens, usually with the help of mechanical parts that add to the bulk of microscopes and telescopes.
    Now MIT engineers have fabricated a tunable “metalens” that can focus on objects at multiple depths, without changes to its physical position or shape. The lens is made not of solid glass but of a transparent “phase-changing” material that, after heating, can rearrange its atomic structure and thereby change the way the material interacts with light.
    The researchers etched the material’s surface with tiny, precisely patterned structures that work together as a “metasurface” to refract or reflect light in unique ways. As the material’s properties change, the optical function of the metasurface varies accordingly. In this case, when the material is at room temperature, the metasurface focuses light to generate a sharp image of an object a certain distance away. After the material is heated, its atomic structure changes, and in response the metasurface redirects light to focus on a more distant object.
    In this way, the new active “metalens” can tune its focus without the need for bulky mechanical elements. The novel design, which currently images within the infrared band, may enable more nimble optical devices, such as miniature heat scopes for drones, ultracompact thermal cameras for cellphones, and low-profile night-vision goggles.
    “Our result shows that our ultrathin tunable lens, without moving parts, can achieve aberration-free imaging of overlapping objects positioned at different depths, rivaling traditional, bulky optical systems,” says Tian Gu, a research scientist in MIT’s Materials Research Laboratory.

    Gu and his colleagues have published their results today in the journal Nature Communications. His co-authors include Juejun Hu, Mikhail Shalaginov, Yifei Zhang, Fan Yang, Peter Su, Carlos Rios, Qingyang Du, and Anuradha Agarwal at MIT; Vladimir Liberman, Jeffrey Chou, and Christopher Roberts of MIT Lincoln Laboratory; and collaborators at the University of Massachusetts at Lowell, the University of Central Florida, and Lockheed Martin Corporation.
    A material tweak
    The new lens is made of a phase-changing material that the team fabricated by tweaking a material commonly used in rewritable CDs and DVDs. Called GST, it comprises germanium, antimony, and tellurium, and its internal structure changes when heated with laser pulses. This allows the material to switch between transparent and opaque states — the mechanism that enables data stored in CDs to be written, wiped away, and rewritten.
    Earlier this year, the researchers reported adding another element, selenium, to GST to make a new phase-changing material: GSST. When they heated the new material, its atomic structure shifted from an amorphous, random tangle of atoms to a more ordered, crystalline structure. This phase shift also changed the way infrared light traveled through the material, altering its refractive power while having minimal impact on transparency.
    The team wondered whether GSST’s switching ability could be tailored to direct and focus light at specific points depending on its phase. The material then could serve as an active lens, without the need for mechanical parts to shift its focus.

    “In general when one makes an optical device, it’s very challenging to tune its characteristics postfabrication,” Shalaginov says. “That’s why having this kind of platform is like a holy grail for optical engineers, that allows [the metalens] to switch focus efficiently and over a large range.”
    In the hot seat
    In conventional lenses, glass is precisely curved so that an incoming beam of light refracts at various angles, converging at a point a certain distance away, known as the lens’ focal length. The lens can then produce a sharp image of any object at that particular distance. To image objects at a different depth, the lens must physically be moved.
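    For reference, this constraint is captured by the textbook thin-lens relation (standard optics, not specific to the new work): an object at distance s_o from a lens of focal length f forms a sharp image at distance s_i, where

    \[ \frac{1}{s_o} + \frac{1}{s_i} = \frac{1}{f}. \]

    Once f is fixed by the curvature of the glass, changing the object distance s_o forces the image distance s_i to change as well, which is why a conventional system must physically move the lens (or the sensor) to refocus.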
    Rather than relying on a material’s fixed curvature to direct light, the researchers set out to design a GSST-based metalens whose focal length changes with the material’s phase.
    In their new study, they fabricated a 1-micron-thick layer of GSST and created a “metasurface” by etching the GSST layer into microscopic structures of various shapes that refract light in different ways.
    “It’s a sophisticated process to build the metasurface that switches between different functionalities, and requires judicious engineering of what kind of shapes and patterns to use,” Gu says. “By knowing how the material will behave, we can design a specific pattern which will focus at one point in the amorphous state, and change to another point in the crystalline phase.”
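    To make the design target concrete, here is a minimal sketch (illustrative Python, not the authors’ design code) of the ideal phase profile a flat lens must imprint to focus light at a given distance, evaluated for the two material states. The wavelength and focal lengths below are assumed values for illustration only:

    ```python
    import numpy as np

    def target_phase(r, f, lam):
        """Ideal hyperbolic lens profile: phase (radians) required at
        radius r so that light of wavelength lam focuses at distance f."""
        return (2 * np.pi / lam) * (f - np.sqrt(r**2 + f**2))

    lam = 5.2e-6                    # mid-infrared wavelength in metres (assumed)
    r = np.linspace(0, 0.5e-3, 6)   # radial positions across a 1 mm aperture

    phi_amorphous   = target_phase(r, f=1.0e-2, lam=lam)  # focus at 1.0 cm
    phi_crystalline = target_phase(r, f=1.5e-2, lam=lam)  # focus at 1.5 cm

    # A phase-change metasurface must satisfy both profiles with a single
    # fixed pattern: each etched element is shaped so its phase response
    # matches phi_amorphous before the transition and phi_crystalline after.
    print(np.round(phi_amorphous, 2))
    print(np.round(phi_crystalline, 2))
    ```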
    They tested the new metalens by placing it on a stage and illuminating it with a laser beam tuned to the infrared band of light. At certain distances in front of the lens, they placed transparent objects composed of double-sided patterns of horizontal and vertical bars, known as resolution charts, that are typically used to test optical systems.
    The lens, in its initial, amorphous state, produced a sharp image of the first pattern. The team then heated the lens to transform the material to a crystalline phase. After the transition, and with the heating source removed, the lens produced an equally sharp image, this time of the second, farther set of bars.
    “We demonstrate imaging at two different depths, without any mechanical movement,” Shalaginov says.
    The experiments show that a metalens can actively change focus without any mechanical motion. The researchers say a metalens could potentially be fabricated with integrated microheaters that heat the material with short, millisecond pulses. By varying the heating conditions, they could also tune the material to intermediate states between amorphous and crystalline, enabling continuous focal tuning.
    “It’s like cooking a steak — one starts from a raw steak, and can go up to well done, or could do medium rare, and anything else in between,” Shalaginov says. “In the future this unique platform will allow us to arbitrarily control the focal length of the metalens.”

  •

    Silver and gold nanowires open the way to better electrochromic devices

    A team led by Professor Dongling Ma of the Institut national de la recherche scientifique (INRS) has developed a new approach for making solid, foldable electrochromic devices.
    Solid, flexible electrochromic (EC) devices, found in smart windows, wearable electronics, foldable displays, and smartphones, are of great research interest because of their unique property: the colour or opacity of the material changes when a voltage is applied.
    Traditionally, electrochromic devices use indium tin oxide (ITO) electrodes. However, the metal oxide is brittle, making it incompatible with flexible substrates, and leakage from liquid electrolytes degrades the performance and lifetime of EC devices.
    Furthermore, there are concerns about the scarcity and cost of indium, a rare element, raising questions about its long-term sustainability, and the fabrication process for the highest-quality ITO electrodes is expensive. “With all these limitations, the need for ITO-free optoelectronic devices is considerably high. We were able to achieve such a goal,” says Dongling Ma, who led the study recently published in the journal Advanced Functional Materials.
    A new approach
    Indeed, the team has developed a new approach with cost-effective, simple electrode fabrication that is completely ITO-free. “We reached high stability and flexibility of transparent conductive electrodes (TCEs), even in a harsh environment, such as an oxidizing solution of H2O2,” she adds. They are the first to apply stable nanowire-based TCEs in flexible EC devices, using silver nanowires coated with a compact gold shell.
    Now that they have a proof of concept, the researchers want to scale up the synthesis of the TCEs and make the nanowire fabrication process even more cost-effective, while maintaining high device performance.

    Story Source:
    Materials provided by the Institut national de la recherche scientifique (INRS). Original written by Audrey-Maude Vézina.

  •

    Speedier network analysis for a range of computer hardware

    Graphs — data structures that show the relationship among objects — are highly versatile. It’s easy to imagine a graph depicting a social media network’s web of connections. But graphs are also used in programs as diverse as content recommendation (what to watch next on Netflix?) and navigation (what’s the quickest route to the beach?). As Ajay Brahmakshatriya summarizes: “graphs are basically everywhere.”
    Brahmakshatriya has developed software to more efficiently run graph applications on a wider range of computer hardware. The software extends GraphIt, a state-of-the-art graph programming language, to run on graphics processing units (GPUs), hardware that processes many data streams in parallel. The advance could accelerate graph analysis, especially for applications that benefit from a GPU’s parallelism, such as recommendation algorithms.
    Brahmakshatriya, a PhD student in MIT’s Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, will present the work at this month’s International Symposium on Code Generation and Optimization. Co-authors include Brahmakshatriya’s advisor, Professor Saman Amarasinghe, as well as Douglas T. Ross Career Development Assistant Professor of Software Technology Julian Shun, postdoc Changwan Hong, Yunming Zhang PhD ’20 (now with Google), and Adobe Research’s Shoaib Kamil.
    When programmers write code, they don’t talk directly to the computer hardware. The hardware itself operates in binary — 1s and 0s — while the coder writes in a structured, “high-level” language made up of words and symbols. Translating that high-level language into hardware-readable binary requires programs called compilers. “A compiler converts the code to a format that can run on the hardware,” says Brahmakshatriya. One such compiler, specially designed for graph analysis, is GraphIt.
    The researchers developed GraphIt in 2018 to optimize the performance of graph-based algorithms regardless of the size and shape of the graph. GraphIt allows the user not only to input an algorithm, but also to schedule how that algorithm runs on the hardware. “The user can provide different options for the scheduling, until they figure out what works best for them,” says Brahmakshatriya. “GraphIt generates very specialized code tailored for each application to run as efficiently as possible.”
    A number of startups and established tech firms alike have adopted GraphIt to aid their development of graph applications. But Brahmakshatriya says the first iteration of GraphIt had a shortcoming: it ran only on central processing units, or CPUs, the type of processor in a typical laptop.
    “Some algorithms are massively parallel,” says Brahmakshatriya, “meaning they can better utilize hardware like a GPU that has 10,000 cores for execution.” He notes that some types of graph analysis, including recommendation algorithms, require a high degree of parallelism. So Brahmakshatriya extended GraphIt to enable graph analysis to flourish on GPUs.
    Brahmakshatriya’s team preserved the way GraphIt users input algorithms but adapted the scheduling component for a wider array of hardware. “Our main design decision in extending GraphIt to GPUs was to keep the algorithm representation exactly the same,” says Brahmakshatriya. “Instead, we added a new scheduling language. So, the user can keep the same algorithms they had written before [for CPUs], and just change the scheduling input to get the GPU code.”
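    The division of labor is easiest to see schematically. The sketch below is conceptual Python pseudocode, not GraphIt syntax: the algorithm is written once, while a separate schedule steers how the compiler maps it onto hardware.

    ```python
    # Conceptual sketch of GraphIt's algorithm/schedule split (a hypothetical
    # Python rendering; GraphIt has its own language and scheduling syntax).

    def pagerank_step(graph, rank):
        """The algorithm: hardware-agnostic and written only once."""
        return {v: sum(rank[u] / graph.out_degree(u)
                       for u in graph.in_neighbors(v))
                for v in graph.vertices}

    # The schedule: swapped out per backend without touching the algorithm.
    cpu_schedule = {"backend": "cpu", "parallelism": "multicore",
                    "traversal": "pull"}
    gpu_schedule = {"backend": "gpu", "load_balance": "edge_based",
                    "kernel_fusion": True}
    # In GraphIt, choosing gpu_schedule over cpu_schedule regenerates the
    # low-level code for the new hardware; pagerank_step never changes.
    ```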
    This new, optimized scheduling for GPUs gives a boost to graph algorithms that require high parallelism — including recommendation algorithms or internet search functions that sift through millions of websites simultaneously. To confirm the efficacy of GraphIt’s new extension, the team ran 90 experiments pitting GraphIt’s runtime against other state-of-the-art graph compilers on GPUs. The experiments included a range of algorithms and graph types, from road networks to social networks. GraphIt ran fastest in 65 of the 90 cases and was close behind the leading algorithm in the rest of the trials, demonstrating both its speed and versatility.
    Brahmakshatriya says the new GraphIt extension provides a meaningful advance in graph analysis, enabling users to move between CPUs and GPUs with state-of-the-art performance and ease. “The field these days is tooth-and-nail competition. There are new frameworks coming out every day,” he says. But he emphasizes that the payoff for even slight optimization is worth it. “Companies are spending millions of dollars each day to run graph algorithms. Even if you make it run just 5 percent faster, you’re saving many thousands of dollars.”
    This research was funded, in part, by the National Science Foundation, the U.S. Department of Energy, the Applications Driving Architectures Center, and the Defense Advanced Research Projects Agency.

  •

    A speed limit also applies in the quantum world

    Even in the world of the smallest particles with their own special rules, things cannot proceed infinitely fast. Physicists at the University of Bonn have now shown what the speed limit is for complex quantum operations. The study also involved scientists from MIT, the universities of Hamburg, Cologne and Padua, and the Jülich Research Center. The results are important for the realization of quantum computers, among other things. 
    Suppose you observe a waiter (the lockdown is already history) who on New Year’s Eve has to serve an entire tray of champagne glasses just a few minutes before midnight. He rushes from guest to guest at top speed. Thanks to his technique, perfected over many years of work, he nevertheless manages not to spill even a single drop of the precious liquid.
    A little trick helps him to do this: While the waiter accelerates his steps, he tilts the tray a bit so that the champagne does not spill out of the glasses. Halfway to the table, he tilts it in the opposite direction and slows down. Only when he has come to a complete stop does he hold it upright again.
    Atoms are in some ways similar to champagne. They can be described as waves of matter, which behave not like a billiard ball but more like a liquid. Anyone who wants to transport atoms from one place to another as quickly as possible must therefore be as skillful as the waiter on New Year’s Eve. “And even then, there is a speed limit that this transport cannot exceed,” explains Dr. Andrea Alberti, who led this study at the Institute of Applied Physics of the University of Bonn.
    Cesium atom as a champagne substitute
    In their study, the researchers experimentally investigated exactly where this limit lies. They used a cesium atom as a champagne substitute and, as the tray, two perfectly superimposed laser beams directed against each other. This superposition, called interference by physicists, creates a standing wave of light: a sequence of mountains and valleys that initially do not move. “We loaded the atom into one of these valleys, and then set the standing wave in motion — this displaced the position of the valley itself,” says Alberti. “Our goal was to get the atom to the target location in the shortest possible time without it spilling out of the valley, so to speak.”
    The fact that there is a speed limit in the microcosm was theoretically demonstrated by two Soviet physicists, Leonid Mandelstam and Igor Tamm, more than 60 years ago. They showed that the maximum speed of a quantum process depends on the energy uncertainty, i.e., how “free” the manipulated particle is with respect to its possible energy states: the more energetic freedom it has, the faster it is. In the case of the transport of an atom, for example, the deeper the valley in which the cesium atom is trapped, the more spread out the energies of the quantum states in the valley are, and ultimately the faster the atom can be transported. Something similar can be seen in the example of the waiter: if he only fills the glasses half full (to the chagrin of the guests), he runs less risk that the champagne spills over as he accelerates and decelerates. However, the energetic freedom of a particle cannot be increased arbitrarily. “We can’t make our valley infinitely deep — it would cost us too much energy,” stresses Alberti.
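    In symbols, the Mandelstam-Tamm bound states that a quantum state needs at least a time

    \[ \tau \ge \frac{\pi \hbar}{2\,\Delta E} \]

    to evolve into a fully distinguishable state, where \Delta E is the energy uncertainty. A deeper valley spreads the energies of the trapped states, enlarging \Delta E and permitting faster transport.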
    Beam me up, Scotty!
    The speed limit of Mandelstam and Tamm is a fundamental limit. However, one can only reach it under certain circumstances, namely in systems with only two quantum states. “In our case, for example, this happens when the point of origin and destination are very close to each other,” the physicist explains. “Then the matter waves of the atom at both locations overlap, and the atom could be transported directly to its destination in one go, that is, without any stops in between — almost like teleportation aboard the Starship Enterprise in Star Trek.”
    However, the situation is different when the distance grows to several dozens of matter wave widths as in the Bonn experiment. For these distances, direct teleportation is impossible. Instead, the particle must go through several intermediate states to reach its final destination: The two-level system becomes a multi-level system. The study shows that a lower speed limit applies to such processes than that predicted by the two Soviet physicists: It is determined not only by the energy uncertainty, but also by the number of intermediate states. In this way, the work improves the theoretical understanding of complex quantum processes and their constraints.
    The physicists’ findings are important not least for quantum computing. The computations possible with quantum computers are mostly based on the manipulation of multi-level systems. Quantum states are very fragile, though: they last only a short span of time, which physicists call the coherence time. It is therefore important to pack as many computational operations as possible into this time. “Our study reveals the maximum number of operations we can perform in the coherence time,” Alberti explains. “This makes it possible to make optimal use of it.”
    The study was funded by the German Research Foundation (DFG) as part of the Collaborative Research Center SFB/TR 185 OSCAR. Funding was also provided by the Reinhard Frank Foundation in collaboration with the German Technion Society, and by the German Academic Exchange Service.

  •

    Atomic nuclei in the quantum swing

    From atomic clocks to secure communication to quantum computers: these developments rely on ever-better control of the quantum behaviour of electrons in atomic shells with the help of laser light. Now, for the first time, physicists at the Max Planck Institute for Nuclear Physics in Heidelberg have succeeded in precisely controlling quantum jumps in atomic nuclei using X-ray light. Compared with electron systems, nuclear quantum jumps are extreme, with energies up to millions of times higher and processes lasting mere zeptoseconds. A zeptosecond is one trillionth of a billionth of a second. The rewards include profound insight into the quantum world, ultra-precise nuclear clocks, and nuclear batteries with enormous storage capacity. The experiment required a sophisticated X-ray flash facility developed by a Heidelberg group led by Jörg Evers as part of an international collaboration.
    One of the great successes of modern physics is the increasingly precise control of dynamic quantum processes. It enables a deeper understanding of the quantum world with all its oddities and is also a driving force of new quantum technologies. But from the perspective of the atoms, “coherent control” has so far remained superficial: it is the quantum jumps of the electrons in the outer shell of the atom that have become increasingly controllable by lasers. Yet as Christoph Keitel explains, atomic nuclei are themselves quantum systems, in which the nuclear building blocks can make quantum jumps between different quantum states.
    Energy-rich quantum jumps for nuclear batteries
    “In addition to this analogy to electron shells, there are huge differences,” explains the Director at the Max Planck Institute for Nuclear Physics in Heidelberg: “They’ve got us so excited!” Quantum jumps between different quantum states are in effect jumps on a kind of energy ladder. “And the energies of these quantum jumps are often six orders of magnitude greater than in the electron shell,” says Keitel. A single quantum jump can thus pump up to a million times more energy into a nucleus, or release it again. This has given rise to the idea of nuclear batteries with an unprecedented storage capacity.
    Such technical applications are still visions of the future. At the moment, research entails addressing and controlling these quantum jumps in a targeted manner. This requires precisely controlled, high-energy X-ray light. The Heidelberg team has been working on such an experimental technique for over 10 years. It has now been used for the first time.
    Accurate frequencies enable ultra-precise atomic clocks
    The quantum states of atomic nuclei offer another important advantage over electron states. Compared with the electronic quantum jumps, they are much more sharply defined. Because this translates directly into more accurate frequencies according to the laws of physics, they can, in principle, be used for extremely precise measurements. For example, this could enable the development of ultra-precise nuclear clocks that would make today’s atomic clocks look like antiquated pendulum clocks. In addition to technical applications of such clocks (e.g. in navigation), they could be used to examine the fundamentals of today’s physics much more precisely. This includes the fundamental question of whether the constants of nature really are constant. However, such precision techniques require the control of quantum transitions in the nuclei.

    Coordinated light flashes enhance or reduce the excitation
    The principle of the Heidelberg experimental technique sounds quite simple at first. It uses pulses (i.e. flashes) of high-energy X-ray light, currently provided by the European Synchrotron Radiation Facility (ESRF) in Grenoble. The experiment splits each X-ray pulse at a first sample in such a way that a second pulse follows the remainder of the first with a time delay. One after the other, both strike a second sample, the actual object of investigation.
    The first pulse is very brief and contains a broad mix of frequencies. Like a shotgun blast, it stimulates a quantum jump in the nuclei; in the first experiment, this was a special quantum state in the nuclei of iron atoms. The second pulse is much longer and has an energy precisely tuned to the quantum jump. In this way, it can specifically manipulate the quantum dynamics triggered by the first pulse. The time span between the two pulses can be adjusted, allowing the team to control whether the second pulse enhances or suppresses the excited quantum state.
    The Heidelberg physicists compare this control mechanism to a swing. With the first pulse, you push it. Depending on the phase of its oscillation at which you give it a second push, the swing either gains amplitude or is slowed down.
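    The swing picture has a compact form in a simplified two-pulse model (a standard coherent-control caricature, not the paper’s full analysis). If a transition of angular frequency \omega is driven by two weak pulses of amplitudes a_1 and a_2 separated by a delay \tau, the excitation amplitude is

    \[ A(\tau) \propto \left| a_1 + a_2\, e^{i\omega\tau} \right|, \]

    which is maximal when \omega\tau is a multiple of 2\pi (the second push arrives in phase with the swing) and minimal near odd multiples of \pi (the push works against the motion).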
    Pulse control accurate to a few zeptoseconds
    But what sounds simple is a technical challenge that required years of research. A controlled change in the quantum dynamics of an atomic nucleus requires that the delay of the second pulse be stable on the unimaginably short time scale of a few zeptoseconds, because only then do the two pulses act together in a controlled way.

    A zeptosecond is one trillionth of a billionth of a second — or a decimal point followed by 20 zeroes and a 1. In one zeptosecond, light does not even manage to cross one per cent of a medium-sized atom. How can you imagine this in relation to our world? “If you imagine that an atom were as big as the Earth, that [distance] would be about 50 km,” says Jörg Evers, who initiated the project.
    The sample is shifted by 45 trillionths of a metre
    The second X-ray pulse is delayed by a tiny displacement of the first sample, also containing iron nuclei with the appropriate quantum transition. “The nuclei selectively store energy from the first X-ray pulse for a short period of time during which the sample is rapidly shifted by about half a wavelength of X-ray light,” explains Thomas Pfeifer, Director at the Max Planck Institute for Nuclear Physics in Heidelberg. This corresponds to about 45 trillionths of a metre. After this tiny movement, the sample emits the second pulse.
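    These numbers are consistent with the 14.4 keV Mössbauer resonance of iron-57, the transition typically used in nuclear quantum optics (an assumption here; the text says only “iron nuclei”):

    \[ \lambda = \frac{hc}{E} \approx \frac{1240\ \text{eV nm}}{14\,400\ \text{eV}} \approx 0.086\ \text{nm}, \qquad \frac{\lambda}{2} \approx 43\ \text{pm}, \]

    i.e., on the order of the quoted 45 trillionths of a metre.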
    The physicists compare their experiment to two tuning forks at different distances from a firecracker. The bang first strikes the closer tuning fork, making it vibrate, and then moves on to the second. In the meantime, the first tuning fork, now excited, emits sound waves itself, which arrive at the second fork with a delay. Depending on the delay time, this sound either amplifies or dampens the vibrations of the second fork, just like the second push on the swing, and just as for the excited nuclei.
    With this experiment, Jörg Evers, Christoph Keitel, and Thomas Pfeifer and their team from the Max Planck Institute for Nuclear Physics, in cooperation with researchers from DESY in Hamburg and the Helmholtz Institute/Friedrich Schiller University in Jena, succeeded for the first time in demonstrating coherent control of nuclear excitations. In addition to synchrotron facilities such as the ESRF, free-electron lasers (FELs) such as the European XFEL at DESY have recently begun to provide powerful sources of X-ray radiation, even in laser-like quality. This opens up a dynamic future for the emerging field of nuclear quantum optics.

  •

    COVID-19: Future targets for treatments rapidly identified with new computer simulations

    Researchers have detailed a mechanism in the distinctive corona of the Covid-19 virus that could help scientists rapidly find new treatments, and quickly test whether existing treatments are likely to work against mutated versions as they develop.
    The team, led by the University of Warwick as part of the EUTOPIA community of European universities, have simulated movements in nearly 300 protein structures of the Covid-19 virus using computational modelling techniques, in an effort to help identify promising drug targets for the virus.
    In a new paper published today (19 February) in the journal Scientific Reports, the team of physicists and life scientists detail the methods they used to model the flexibility and dynamics of all 287 protein structures for the Covid-19 virus, also known as SARS-CoV-2, identified so far. Just like organisms, viruses are composed of proteins, large biomolecules that perform a variety of functions. The scientists believe that one method for treating the virus could be interfering with the mobility of those proteins.
    They have made their data, movies and structural information, detailing how the proteins move and how they deform, for all 287 protein structures for Covid-19 that were available at the time of the study, publicly accessible to allow others to investigate potential avenues for treatments.
    The researchers focused particular efforts on a part of the virus known as the spike protein, specifically its ectodomain structure, which forms the extended corona that gives coronaviruses their name. This spike is what allows the virus to attach itself to the ACE2 enzyme on human cell membranes, the route through which it infects cells and causes the symptoms of Covid-19.
    The spike protein is in fact a homotrimer, or three of the same type of protein combined. By modelling the movements of the proteins in the spike, the researchers identified a ‘hinge’ mechanism that allows the spike to hook onto a cell, and also opens up a tunnel in the virus that is a likely means of delivering the infection to the hooked cell. The scientists suggest that by finding a suitable molecule to block the mechanism — literally, by inserting a suitably sized and shaped molecule — pharmaceutical scientists will be able to quickly identify existing drugs that could be effective against the virus.

    Lead author Professor Rudolf Roemer from the Department of Physics at the University of Warwick, who conducted the work while on a sabbatical at CY Cergy-Paris Université, said: “Knowing how this mechanism works is one way in which you can stop the virus, and in our study we are the first to see the detailed movement of opening. Now that you know what the range of this movement is, you can figure out what can block it.
    “All those people who are interested in checking whether the protein structures in the virus could be drug targets should be able to examine this and see if the dynamics that we compute are useful to them.
    “We couldn’t look closely at all the 287 proteins though in the time available. People should use the motion that we observe as a starting point for their own development of drug targets. If you find an interesting motion for a particular protein structure in our data, you can use that as the basis for further modelling or experimental studies.”
    To investigate the proteins’ movements, the scientists used a protein flexibility modelling approach. This involves recreating the protein structure as a computer model then simulating how that structure would move by treating the protein as a material consisting of solid and elastic subunits, with possible movement of these subunits defined by chemical bonds. The method has been shown to be particularly efficient and accurate when applied to large proteins such as the coronavirus’s spike protein. This can allow scientists to swiftly identify promising targets for drugs for further investigation.
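    The description matches the family of elastic network models; here is a minimal sketch of that general idea (an illustrative assumption, not the group’s actual pipeline, which treats rigid clusters and chemical-bond constraints more carefully):

    ```python
    import numpy as np

    def elastic_network_modes(coords, cutoff=10.0, k=1.0):
        """Anisotropic network model: residues are nodes joined by springs
        when closer than `cutoff`; low-frequency eigenvectors of the Hessian
        approximate the protein's large-scale collective motions."""
        n = len(coords)
        hess = np.zeros((3 * n, 3 * n))
        for i in range(n):
            for j in range(i + 1, n):
                d = coords[j] - coords[i]
                r2 = d @ d
                if r2 > cutoff**2:
                    continue
                block = -k * np.outer(d, d) / r2        # 3x3 spring element
                hess[3*i:3*i+3, 3*j:3*j+3] = block
                hess[3*j:3*j+3, 3*i:3*i+3] = block
                hess[3*i:3*i+3, 3*i:3*i+3] -= block
                hess[3*j:3*j+3, 3*j:3*j+3] -= block
        vals, vecs = np.linalg.eigh(hess)
        # Skip the six zero modes (rigid translations/rotations), assuming
        # the spring network is connected.
        return vals[6:], vecs[:, 6:]

    # Toy input: 20 random "residues"; real input would be C-alpha
    # coordinates parsed from a Protein Data Bank entry.
    rng = np.random.default_rng(0)
    freqs, modes = elastic_network_modes(rng.uniform(0, 15, size=(20, 3)))
    print(freqs[:3])   # the softest, most flexible collective motions
    ```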
    The protein structures that the researchers based their modelling on are all contained in the Protein Data Bank. Anyone who publishes a biological structure has to submit it to the Protein Data Bank so that it is freely available in a standard format for others to download and study further. Since the start of the Covid-19 pandemic, scientists all over the world have already submitted thousands of structures of Covid-19-related proteins to the Protein Data Bank.
    Professor Roemer adds: “The gold standard in modelling protein dynamics computationally is a method called molecular dynamics. Unfortunately, this method can become very time consuming particularly for large proteins such as the Covid-19 spike, which has nearly 3000 residues — the basic building blocks of all proteins. Our method is much quicker, but naturally we have to make more stringent simplifying assumptions. Nevertheless, we can rapidly simulate structures that are much larger than what alternative methods can do.
    “At the moment, no-one has published experiments that identify protein crystal structures for the new variants of Covid-19. If new structures come out for the mutations in the virus, then scientists could quickly test existing treatments and see if the new mechanics have an impact on their effectiveness using our method.”

  •

    Boys who play video games have lower depression risk

    Boys who regularly play video games at age 11 are less likely to develop depressive symptoms three years later, finds a new study led by a UCL researcher.
    The study, published in Psychological Medicine, also found that girls who spend more time on social media appear to develop more depressive symptoms.
    Taken together, the findings demonstrate how different types of screen time can positively or negatively influence young people’s mental health, and may also impact boys and girls differently.
    Lead author, PhD student Aaron Kandola (UCL Psychiatry) said: “Screens allow us to engage in a wide range of activities. Guidelines and recommendations about screen time should be based on our understanding of how these different activities might influence mental health and whether that influence is meaningful.
    “While we cannot confirm whether playing video games actually improves mental health, it didn’t appear harmful in our study and may have some benefits. Particularly during the pandemic, video games have been an important social platform for young people.
    “We need to reduce how much time children — and adults — spend sitting down, for their physical and mental health, but that doesn’t mean that screen use is inherently harmful.”
    Kandola has previously led studies finding that sedentary behaviour (sitting still) appeared to increase the risk of depression and anxiety in adolescents. To gain more insight into what drives that relationship, he and colleagues chose to investigate screen time as it is responsible for much of sedentary behaviour in adolescents. Other studies have found mixed results, and many did not differentiate between different types of screen time, compare between genders, or follow such a large group of young people over multiple years.

    The research team from UCL, Karolinska Institutet (Sweden) and the Baker Heart and Diabetes Institute (Australia) reviewed data from 11,341 adolescents who are part of the Millennium Cohort Study, a nationally representative sample of young people who have been involved in research since they were born in the UK in 2000-2002.
    The study participants had all answered questions about their time spent on social media, playing video games, or using the internet, at age 11, and also answered questions about depressive symptoms, such as low mood, loss of pleasure and poor concentration, at age 14. The clinical questionnaire measures depressive symptoms and their severity on a spectrum, rather than providing a clinical diagnosis.
    In the analysis, the research team accounted for other factors that might have explained the results, such as socioeconomic status, physical activity levels, reports of bullying, and prior emotional symptoms.
    The researchers found that boys who played video games most days had 24% fewer depressive symptoms, three years later, than boys who played video games less than once a month, although this effect was only significant among boys with low physical activity levels, and was not found among girls. The researchers say this might suggest that less active boys could derive more enjoyment and social interaction from video games.
    While their study cannot confirm if the relationship is causal, the researchers say there are some positive aspects of video games which could support mental health, such as problem-solving, and social, cooperative and engaging elements.

    There may also be other explanations for the link between video games and depression, such as differences in social contact or parenting styles, which the researchers did not have data for. They also did not have data on hours of screen time per day, so they cannot confirm whether multiple hours of screen time each day could impact depression risks.
    The researchers found that girls (but not boys) who used social media most days at age 11 had 13% more depressive symptoms three years later than those who used social media less than once a month, although they did not find an association for more moderate use of social media. Other studies have previously found similar trends, and researchers have suggested that frequent social media use could increase feelings of social isolation.
    Screen use patterns between boys and girls may have influenced the findings, as boys in the study played video games more often than girls and used social media less frequently.
    The researchers did not find clear associations between general internet use and depressive symptoms in either gender.
    Senior author Dr Mats Hallgren (Karolinska Institutet) has conducted other studies in adults finding that mentally active types of screen time, such as playing video games or working at a computer, might not affect depression risk in the way that more passive forms of screen time appear to do.
    He said: “The relationship between screen time and mental health is complex, and we still need more research to help understand it. Any initiatives to reduce young people’s screen time should be targeted and nuanced. Our research points to possible benefits of screen time; however, we should still encourage young people to be physically active and to break up extended periods of sitting with light physical activity.”

  •

    Explainable AI for decoding genome biology

    Researchers at the Stowers Institute for Medical Research, in collaboration with colleagues at Stanford University and Technical University of Munich have developed advanced explainable artificial intelligence (AI) in a technical tour de force to decipher regulatory instructions encoded in DNA. In a report published online February 18, 2021, in Nature Genetics, the team found that a neural network trained on high-resolution maps of protein-DNA interactions can uncover subtle DNA sequence patterns throughout the genome and provide a deeper understanding of how these sequences are organized to regulate genes.
    Neural networks are powerful AI models that can learn complex patterns from diverse types of data, such as images, speech signals, or text, and predict associated properties with impressively high accuracy. However, many see these models as uninterpretable, since the learned predictive patterns are hard to extract from the model. This black-box nature has hindered the wide application of neural networks to biology, where interpretation of predictive patterns is paramount.
    One of the big unsolved problems in biology is the genome’s second code — its regulatory code. DNA bases (commonly represented by letters A, C, G, and T) encode not only the instructions for how to build proteins, but also when and where to make these proteins in an organism. The regulatory code is read by proteins called transcription factors that bind to short stretches of DNA called motifs. However, how particular combinations and arrangements of motifs specify regulatory activity is an extremely complex problem that has been hard to pin down.
    Now, an interdisciplinary team of biologists and computational researchers led by Stowers Investigator Julia Zeitlinger, PhD, and Anshul Kundaje, PhD, of Stanford University, has designed a neural network named BPNet (for Base Pair Network) that can be interpreted to reveal regulatory code by predicting transcription factor binding from DNA sequences with unprecedented accuracy. The key was to perform transcription factor-DNA binding experiments and computational modeling at the highest possible resolution, down to the level of individual DNA bases. This increased resolution allowed them to develop new interpretation tools to extract key elemental sequence patterns, such as transcription factor binding motifs, and the combinatorial rules by which motifs function together as a regulatory code.
    “This was extremely satisfying,” says Zeitlinger, “as the results fit beautifully with existing experimental results, and also revealed novel insights that surprised us.”
    For example, the neural network models enabled the researchers to discover a striking rule that governs binding of the well-studied transcription factor called Nanog. They found that Nanog binds cooperatively to DNA when multiples of its motif are present in a periodic fashion such that they appear on the same side of the spiraling DNA helix.

    “There has been a long trail of experimental evidence that such motif periodicity sometimes exists in the regulatory code,” Zeitlinger says. “However, the exact circumstances were elusive, and Nanog had not been a suspect. Discovering that Nanog has such a pattern, and seeing additional details of its interactions, was surprising because we did not specifically search for this pattern.”
    “This is the key advantage of using neural networks for this task,” says Žiga Avsec, PhD, first author of the paper. Avsec and Kundaje created the first version of the model when Avsec visited Stanford during his doctoral studies in the lab of Julien Gagneur, PhD, at the Technical University of Munich, Germany.
    “More traditional bioinformatics approaches model data using pre-defined rigid rules that are based on existing knowledge. However, biology is extremely rich and complicated,” says Avsec. “By using neural networks, we can train much more flexible and nuanced models that learn complex patterns from scratch without previous knowledge, thereby allowing novel discoveries.”
    BPNet’s network architecture is similar to that of neural networks used for facial recognition in images. For instance, such a network first detects edges in the pixels, then learns how edges form facial elements like the eye, nose, or mouth, and finally detects how facial elements together form a face. Instead of learning from pixels, BPNet learns from raw DNA sequence: it learns to detect sequence motifs and, eventually, the higher-order rules by which they predict the base-resolution binding data.
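    As a rough illustration of that architecture, here is a deliberately tiny PyTorch sketch. It is a simplified, assumption-laden stand-in, not the published BPNet (which, among other differences, adds a separate read-count head and multi-task outputs):

    ```python
    import torch
    import torch.nn as nn

    class TinyBPNet(nn.Module):
        """Simplified BPNet-style model: one-hot DNA (batch, 4, length)
        in, per-base binding profile (batch, length) out."""
        def __init__(self, channels=64, n_dilated=5):
            super().__init__()
            # A wide first layer scans for motif-like patterns.
            self.stem = nn.Conv1d(4, channels, kernel_size=25, padding=12)
            # Stacked dilated convolutions widen the receptive field so the
            # model can capture motif combinations and their spacing.
            self.dilated = nn.ModuleList(
                nn.Conv1d(channels, channels, kernel_size=3,
                          padding=2**i, dilation=2**i)
                for i in range(1, n_dilated + 1))
            self.profile_head = nn.Conv1d(channels, 1, kernel_size=75,
                                          padding=37)

        def forward(self, x):
            h = torch.relu(self.stem(x))
            for conv in self.dilated:
                h = h + torch.relu(conv(h))   # residual connections
            return self.profile_head(h).squeeze(1)

    seq = torch.zeros(1, 4, 1000)             # a 1 kb one-hot DNA sequence
    seq[0, torch.randint(0, 4, (1000,)), torch.arange(1000)] = 1.0
    print(TinyBPNet()(seq).shape)             # torch.Size([1, 1000])
    ```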
    Once the model is trained to be highly accurate, the learned patterns are extracted with interpretation tools. The output signal is traced back to the input sequences to reveal sequence motifs. The final step is to use the model as an oracle and systematically query it with specific DNA sequence designs, similar to what one would do to test hypotheses experimentally, to reveal the rules by which sequence motifs function in a combinatorial manner.
    “The beauty is that the model can predict way more sequence designs than we could test experimentally,” Zeitlinger says. “Furthermore, by predicting the outcome of experimental perturbations, we can identify the experiments that are most informative to validate the model.” Indeed, with the help of CRISPR gene editing techniques, the researchers confirmed experimentally that the model’s predictions were highly accurate.
    Since the approach is flexible and applicable to a variety of different data types and cell types, it promises to lead to a rapidly growing understanding of the regulatory code and how genetic variation impacts gene regulation. Both the Zeitlinger Lab and the Kundaje Lab are already using BPNet to reliably identify binding motifs for other cell types, relate motifs to biophysical parameters, and learn other structural features in the genome such as those associated with DNA packaging. To enable other scientists to use BPNet and adapt it for their own needs, the researchers have made the entire software framework available with documentation and tutorials.