More stories

  •

    Big data tells story of diversity, migration of math's elite

    Math’s top prize, the Fields Medal, has succeeded in making mathematics more inclusive but still rewards elitism, according to a Dartmouth study.
    Published in Nature’s Humanities and Social Sciences Communications, the study analyzed how effective the Fields Medal has been at making math at its highest level more representative across nations and identities. The result is a visual, data-driven history of international migration and social networks among math elites, particularly since World War II.
    “With so much recent discussion on equality in academia, we came to this study recognizing that math has a reputation of being egalitarian,” says Herbert Chang, a research affiliate in Dartmouth’s Fu Lab and lead author of the paper. “Our results provide a complex and rich story about the world of math especially since the establishment of the Fields Medal.”
    The Fields Medal, widely considered the Nobel Prize of mathematics, is awarded every four years to mathematicians under the age of 40. It was first presented in 1936 to honor young mathematicians from groups that were typically underrepresented in top math circles.
    According to the Dartmouth mathematicians, the prize has received criticism over its history for rewarding existing power structures rather than making math more inclusive and equitable at the elite level. Against this criticism, the study set out to explore how well the award has lived up to its original promise.
    The analysis shows that the Fields Medal has elevated mathematicians of marginalized nationalities, but that there is also “self-reinforcing behavior,” mostly through mentoring relationships among math elites. More

  •

    New early warning system for self-driving cars

    A team of researchers at the Technical University of Munich (TUM) has developed a new early warning system for vehicles that uses artificial intelligence to learn from thousands of real traffic situations. A study of the system was carried out in cooperation with the BMW Group. The results show that, if used in today’s self-driving vehicles, it can warn of potentially critical situations that the cars cannot handle alone seven seconds in advance — with over 85% accuracy.
    To make self-driving cars safe in the future, development efforts often rely on sophisticated models aimed at giving cars the ability to analyze the behavior of all traffic participants. But what happens if the models are not yet capable of handling some complex or unforeseen situations?
    A team working with Prof. Eckehard Steinbach, who holds the Chair of Media Technology and is a member of the Board of Directors of the Munich School of Robotics and Machine Intelligence (MSRM) at TUM, is taking a new approach. Thanks to artificial intelligence (AI), their system can learn from past situations where self-driving test vehicles were pushed to their limits in real-world road traffic. Those are situations where a human driver takes over — either because the car signals the need for intervention or because the driver decides to intervene for safety reasons.
    Pattern recognition through RNN
    The technology uses sensors and cameras to capture surrounding conditions and records status data for the vehicle, such as the steering wheel angle, road conditions, weather, visibility and speed. The AI system, based on a recurrent neural network (RNN), learns to recognize patterns from these data. If the system spots a pattern in a new driving situation that the control system was unable to handle in the past, the driver is warned in advance of a possible critical situation.
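    The article does not include code; the snippet below is a minimal, illustrative sketch of the kind of recurrent model described here, with hypothetical feature names, window length, sampling rate and warning threshold (none of these are taken from the TUM/BMW study).

```python
# Illustrative sketch only: a recurrent classifier over windows of vehicle
# status data. Features, dimensions and the threshold are assumptions,
# not details of the TUM/BMW system.
import torch
import torch.nn as nn

class DisengagementWarner(nn.Module):
    """Predicts whether a driver takeover is likely in the next few seconds."""
    def __init__(self, n_features: int = 8, hidden: int = 64):
        super().__init__()
        # e.g. steering angle, speed, visibility, road/weather codes (assumed)
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features), a sliding window of recent status data
        _, h = self.rnn(x)
        return torch.sigmoid(self.head(h[-1]))  # probability of a critical situation

model = DisengagementWarner()
window = torch.randn(1, 70, 8)  # e.g. 7 s of data at an assumed 10 Hz sampling rate
if model(window).item() > 0.5:  # hypothetical warning threshold
    print("Warning: potentially critical situation ahead")
```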
    “To make vehicles more autonomous, many existing methods study what the cars now understand about traffic and then try to improve the models used by them. The big advantage of our technology: we completely ignore what the car thinks. Instead we limit ourselves to the data based on what actually happens and look for patterns,” says Steinbach. “In this way, the AI discovers potentially critical situations that models may not be capable of recognizing, or have yet to discover. Our system therefore offers a safety function that knows when and where the cars have weaknesses.”
    Warnings up to seven seconds in advance
    The team of researchers tested the technology with the BMW Group and its autonomous development vehicles on public roads and analyzed around 2500 situations where the driver had to intervene. The study showed that the AI is already capable of predicting potentially critical situations with better than 85 percent accuracy — up to seven seconds before they occur.
    Collecting data with no extra effort
    For the technology to function, large quantities of data are needed. After all, the AI can only recognize and predict experiences at the limits of the system if it has seen such situations before. With the large number of development vehicles on the road, the data practically generated itself, says Christopher Kuhn, one of the authors of the study: “Every time a potentially critical situation comes up on a test drive, we end up with a new training example.” The central storage of the data makes it possible for every vehicle to learn from all of the data recorded across the entire fleet.
    Story Source:
    Materials provided by Technical University of Munich (TUM). Note: Content may be edited for style and length. More

  •

    Scientists develop ultra-thin terahertz source

    Physicists from the University of Sussex have developed an extremely thin, large-area semiconductor surface source of terahertz, composed of just a few atomic layers and compatible with existing electronic platforms.
    Terahertz sources emit brief light pulses oscillating trillions of times per second. At this scale, they are too fast to be handled by standard electronics and, until recently, too slow to be handled by optical technologies. This has great significance for the evolution of ultra-fast communication devices above the 300 GHz limit — such as those required for 6G mobile phone technology — something that is still fundamentally beyond the reach of current electronics.
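    To put those numbers in perspective (a standard unit conversion, not a figure from the Sussex release):

\[
1\ \mathrm{THz} = 10^{12}\ \mathrm{Hz}, \qquad T = \frac{1}{f} = 1\ \mathrm{ps},
\]

    so a 1 THz wave completes one oscillation per picosecond, and the 300 GHz limit mentioned above corresponds to 0.3 THz, at the lower edge of the terahertz band.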
    Researchers in the Emergent Photonics (EPic) Lab at Sussex, led by its director, Professor Marco Peccianti, are leaders in surface terahertz emission technology, having achieved the brightest and thinnest surface semiconductor sources demonstrated so far. The emission region of their new development, a semiconductor source of terahertz radiation, is 10 times thinner than previously achieved, with comparable or even better performance.
    The thin layers can be placed on top of existing objects and devices, meaning a terahertz source can be put in places that would have been inconceivable otherwise, including everyday objects such as a teapot or even a work of art — opening up huge potential for anti-counterfeiting and ‘the internet of things’ — as well as previously incompatible electronics, such as a next-generation mobile phone.
    Dr Juan S. Totero Gongora, Leverhulme Early Career Fellow at the University of Sussex, said:
    “From a physics perspective, our results provide a long-sought answer that dates back to the first demonstration of terahertz sources based on two-colour lasers. More

  •

    Discovery of a mechanism for making superconductors more resistant to magnetic fields

    Superconductivity is known to be easily destroyed by strong magnetic fields. NIMS, Osaka University and Hokkaido University have jointly discovered that a superconductor with atomic-scale thickness can retain its superconductivity even when a strong magnetic field is applied to it. The team has also identified a new mechanism behind this phenomenon. These results may facilitate the development of superconducting materials resistant to magnetic fields and topological superconductors composed of superconducting and magnetic materials.
    Superconductivity has been used in various technologies, such as magnetic resonance imaging (MRI) and highly sensitive magnetic sensors. Topological superconductors, a special type of superconductor, have been attracting great attention in recent years. They are able to retain quantum information for a long time and can be used in combination with magnetic materials to form qubits that may enable quantum computers to perform very complex calculations. However, superconductivity is easily destroyed by strong magnetic fields or magnetic materials in close proximity. It is therefore desirable to develop a topological superconducting material resistant to magnetic fields.
    The research team recently fabricated crystalline films of indium, a common superconducting material, with atomic-scale thickness. The team then discovered a new mechanism that prevents the superconductivity of these films from being destroyed by a strong magnetic field. When a magnetic field is applied to a superconducting material, it interacts with the electron spins, changing the material’s electronic energy and destroying its superconductivity. However, when a superconducting material is thinned to a two-dimensional atomic layer, the spin and the momentum of the electrons in the layer are coupled, causing the electron spins to rotate frequently. This offsets the effect of the changes in electronic energy induced by the magnetic field and thus preserves superconductivity. This mechanism can enhance the critical magnetic field — the maximum magnetic field strength above which superconductivity disappears — up to 16-20 tesla, approximately triple the generally accepted theoretical value. It is expected to have a wide range of applications, as it was observed for an ordinary superconducting material and requires neither special crystalline structures nor strong electronic correlations.
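    The “generally accepted theoretical value” referred to here is presumably the Pauli (Clogston-Chandrasekhar) paramagnetic limit; in the standard weak-coupling estimate (a textbook relation, not a figure from the press release),

\[
B_{\mathrm{P}} \approx 1.84\ \mathrm{T\,K^{-1}} \times T_c ,
\]

    so for a superconductor with a critical temperature of a few kelvin, such as indium (bulk \(T_c \approx 3.4\ \mathrm{K}\)), this works out to roughly 6 T, and the 16-20 T reported above is indeed about three times that estimate.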
    Based on these results, the researchers plan to develop superconducting thin films capable of resisting even stronger magnetic fields. They also intend to create a hybrid device composed of superconducting and magnetic materials that is needed for the development of topological superconductors: a vital component in next-generation quantum computers.
    Story Source:
    Materials provided by National Institute for Materials Science, Japan. Note: Content may be edited for style and length. More

  •

    New statistical method eases data reproducibility crisis

    A reproducibility crisis is ongoing in scientific research, where many studies may be difficult or impossible to replicate and thereby validate, especially when the study involves a very large sample size. For example, to evaluate the validity of a high-throughput genetic study’s findings scientists must be able to replicate the study and achieve the same results. Now researchers at Penn State and the University of Minnesota have developed a statistical tool that can accurately estimate the replicability of a study, thus eliminating the need to duplicate the work and effectively mitigating the reproducibility crisis.
    The team used its new method, which they describe in a paper publishing today (March 30) in Nature Communications, to confirm the findings of a 2019 study on the genetic factors that contribute to smoking and drinking addiction but noted that it also can be applied to other genome-wide association studies — or studies that investigate the genetic underpinnings for diseases.
    “While we applied the method to study smoking and drinking addiction-related outcomes, it could benefit other similar large-scale consortia studies, including current studies on the host genetic contribution to COVID-19 symptoms,” said Dajiang Liu, associate professor of public health sciences and biochemistry and molecular biology, Penn State.
    According to Liu, to detect patterns in genome-wide association studies it is important to obtain data from a large number of individuals. Scientists often acquire these data by combining many existing similarly designed studies, which is what Liu and his colleagues did for the 2019 smoking and drinking addiction study that ultimately comprised 1.2 million individuals.
    “We worked really hard to collect all of the patient samples that we could manage,” said Liu, noting that the data came from biobanks, epidemiology studies and direct-to-consumer genetic testing companies, such as 23andMe. However, he added, since the team used all of the available studies in its analysis, there were none left over to use as comparisons for validation. “Our statistical method allows researchers to assess the replicability of genetic association signals without a replication dataset,” he said. “It helps to maximize the power of genetic studies as no samples need to be reserved for replication; instead, all samples can be used for discoveries.”
    The team’s method, which they call MAMBA (Meta-Analysis Model-Based Assessment of replicability), evaluates the strength and consistency of the associations between atypical bits of DNA, called single nucleotide polymorphisms (SNPs), and disease traits such as addiction. Specifically, MAMBA calculates the probability that, if the experiment were repeated with a different set of individuals, the relationships between the SNPs and those individuals’ traits would be the same as or similar to those in the first experiment. More
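    The published model is more sophisticated, but the idea of scoring replicability without a held-out replication dataset can be illustrated with a deliberately simplified two-component mixture over per-study effect estimates; all priors, variances and example numbers below are arbitrary assumptions, and this is not the published MAMBA implementation.

```python
# Deliberately simplified toy, not the published MAMBA model: score the
# replicability of one SNP from per-study effect estimates by comparing a
# "real shared effect" component against a "no shared effect" component.
import numpy as np
from scipy.stats import norm

def replicability_posterior(betas, ses, prior_rep=0.1, tau2=0.05):
    """Posterior probability that a SNP's association would replicate.

    betas, ses : per-study effect estimates and their standard errors
    prior_rep  : assumed prior probability of a real, shared effect
    tau2       : assumed variance of real effect sizes across SNPs
    """
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    # Inverse-variance-weighted meta-analysis estimate and its standard error.
    w = 1.0 / ses**2
    beta_hat = np.sum(w * betas) / np.sum(w)
    se_hat = np.sqrt(1.0 / np.sum(w))
    # Replicable component: shared effect ~ N(0, tau2) plus sampling noise.
    lik_rep = norm.pdf(beta_hat, loc=0.0, scale=np.sqrt(tau2 + se_hat**2))
    # Non-replicable component: no shared effect, sampling noise only.
    lik_null = norm.pdf(beta_hat, loc=0.0, scale=se_hat)
    return prior_rep * lik_rep / (prior_rep * lik_rep + (1 - prior_rep) * lik_null)

# Consistent effects across five studies give a posterior close to 1.
print(replicability_posterior(betas=[0.12, 0.10, 0.15, 0.11, 0.13],
                              ses=[0.03, 0.04, 0.03, 0.05, 0.03]))
```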

  •

    Topological protection of entangled two-photon light in photonic topological insulators

    In a joint effort, researchers from the Humboldt-Universität (Berlin), the Max Born Institute (Berlin) and the University of Central Florida (USA) have revealed the necessary conditions for the robust transport of entangled states of two-photon light in photonic topological insulators, paving the way towards noise-resistant transport of quantum information. The results have appeared in Nature Communications.
    Originally discovered in condensed matter systems, topological insulators are two-dimensional materials that support scattering-free (uni-directional) transport along their edges, even in the presence of defects and disorder. In essence, topological insulators are finite lattice systems where, given a suitable termination of the underlying infinite lattice, edge states are formed that lie in a well-defined energy gap associated with the bulk states, i.e. these edge states are energetically separated from the bulk states.
    Importantly, single-particle edge states in such systems are topologically protected from scattering: they cannot scatter into the bulk due to their energy lying in the gap, and they cannot scatter backwards because backward propagating edge states are either absent or not coupled to the forward propagating edge states.
    The feasibility of engineering complex Hamiltonians using integrated photonic lattices, combined with the availability of entangled photons, raises the intriguing possibility of employing topologically protected entangled states in optical quantum computing and information processing (Science 362, 568 (2018); Optica 6, 955 (2019)).
    Achieving this goal, however, is highly nontrivial as topological protection does not straightforwardly extend to multi-particle (back-)scattering. At first, this fact appears to be counterintuitive because, individually, each particle is protected by topology whilst, jointly, entangled (correlated) particles become highly susceptible to perturbations of the ideal lattice. The underlying physical principle behind this apparent ‘discrepancy’ is that, quantum-mechanically, identical particles are described by states that satisfy an exchange symmetry principle.
    In their work, the researchers make several fundamental advances towards understanding and controlling topological protection in the context of multi-particle states. First, they identify physical mechanisms that induce a vulnerability of entangled states in topological photonic lattices and present clear guidelines for maximizing entanglement without sacrificing topological protection. Second, they establish and demonstrate a threshold-like behavior of entanglement vulnerability and identify conditions for robust protection of highly entangled two-photon states. To be precise, they explore the impact of disorder on a range of two-photon states that extend from the fully correlated to the fully anti-correlated limits, thereby also covering a completely separable state. For their analysis, they consider two topological lattices, one periodic and one aperiodic. In the periodic case they consider the Haldane model, and for the aperiodic case they study a square lattice whose single-particle dynamics corresponds to the quantum Hall effect.
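    For reference, the Haldane model mentioned above is the standard honeycomb-lattice tight-binding model with a complex next-nearest-neighbour hopping; in its textbook form (quoted here for orientation, not reproduced from the paper) it reads

\[
H = t_1 \sum_{\langle i,j \rangle} c_i^{\dagger} c_j
  + t_2 \sum_{\langle\langle i,j \rangle\rangle} e^{i\phi_{ij}}\, c_i^{\dagger} c_j
  + M \sum_i \xi_i\, c_i^{\dagger} c_i ,
\]

    where \(t_1\) is the nearest-neighbour hopping, the complex next-nearest-neighbour hopping \(t_2 e^{i\phi_{ij}}\) (with \(\phi_{ij} = \pm\phi\)) breaks time-reversal symmetry, and \(M\) is a staggered sublattice potential with \(\xi_i = \pm 1\). The topologically protected edge states discussed above appear in the Chern-insulating phase, \(|M| < 3\sqrt{3}\,|t_2 \sin\phi|\).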
    The results offer a clear roadmap for generating robust wave packets tailored to the particular disorder at hand. Specifically, they establish limits on the stability of entangled states up to relatively high degrees of entanglement that offer practical guidelines for generating useful entangled states in topological photonic systems. Further, these findings demonstrate that in order to maximize entanglement without sacrificing topological protection, the joint spectral correlation map of two-photon states must fit inside a well-defined topological window of protection. More

  •

    Unique AI method for generating proteins will speed up drug development

    Artificial Intelligence is now capable of generating novel, functionally active proteins, thanks to recently published work by researchers from Chalmers University of Technology, Sweden.
    “What we are now able to demonstrate offers fantastic potential for a number of future applications, such as faster and more cost-efficient development of protein-based drugs,” says Aleksej Zelezniak, Associate Professor at the Department of Biology and Biological Engineering at Chalmers.
    Proteins are large, complex molecules that play a crucial role in all living cells, building, modifying, and breaking down other molecules naturally inside our cells. They are also widely used in industrial processes and products, and in our daily lives.
    Protein-based drugs are very common — the diabetes drug insulin is one of the most prescribed. Some of the most expensive and effective cancer medicines are also protein-based, as well as the antibody formulas currently being used to treat COVID-19.
    From computer design to working proteins in just a few weeks
    Current methods used for protein engineering rely on introducing random mutations to protein sequences. However, with each additional random mutation introduced, the protein activity declines. More
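    As a toy illustration of that baseline approach (the example sequence and mutation count are arbitrary, and this is not the Chalmers method), random point mutagenesis of an amino-acid sequence might look like this:

```python
# Toy illustration of the random-mutagenesis baseline described above:
# introduce a few random point mutations into an amino-acid sequence.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mutate(sequence: str, n_mutations: int = 3, seed: int = 0) -> str:
    rng = random.Random(seed)
    seq = list(sequence)
    for pos in rng.sample(range(len(seq)), n_mutations):
        # Replace the residue at this position with any other amino acid.
        seq[pos] = rng.choice(AMINO_ACIDS.replace(seq[pos], ""))
    return "".join(seq)

print(mutate("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))  # arbitrary example sequence
```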

  •

    Mathematical modeling used to analyze dynamics of CAR T-cell therapy

    Chimeric antigen receptor T-cell therapy, or CAR T, is a relatively new type of therapy approved to treat several types of aggressive B cell leukemias and lymphomas. Many patients have strong responses to CAR T; however, some have only a short response and develop disease progression quickly. Unfortunately, it is not completely understood why these patients have progression. In an article published in Proceedings of the Royal Society B, Moffitt Cancer Center researchers use mathematical modeling to help explain why CAR T cells work in some patients and not in others.
    CAR T is a type of personalized immunotherapy that uses a patient’s own T cells to target cancer cells. T cells are harvested from a patient and genetically modified in a laboratory to add a specific receptor that targets cancer cells. The patient then undergoes lymphodepletion with chemotherapy to lower some of their existing normal immune cells to help with expansion of the CAR T cells that are infused back into the patient, where they can get to work and attack the tumor.
    Mathematical modeling has been used to help predict how CAR T cells will behave after being infused back into patients; however, no studies have yet considered how interactions between the normal T cells and CAR T cells impact the dynamics of the therapy, in particular how the nonlinear T cell kinetics factor into the chances of therapy success. Moffitt researchers integrated clinical data with mathematical and statistical modeling to address these unknown factors.
    The researchers demonstrate that CAR T cells are effective because they rapidly expand after being infused back into the patient; however, the modified T cells are shown to compete with existing normal T cells, which can limit their ability to expand.
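    As a rough sketch of the kind of competition dynamics described above (a generic Lotka-Volterra-style system with made-up parameters, not the Moffitt model):

```python
# Rough sketch only: CAR T cells (C) and normal T cells (N) competing for a
# shared niche while CAR T cells kill tumor cells (B). All parameters and
# initial values are illustrative assumptions.
from scipy.integrate import solve_ivp

def dynamics(t, y, rC=0.9, rN=0.1, K=1e9, kill=1e-9, rB=0.05):
    C, N, B = y
    shared = (C + N) / K              # competition for a common carrying capacity
    dC = rC * C * (1 - shared)        # CAR T expansion, limited by total T-cell load
    dN = rN * N * (1 - shared)        # normal T-cell recovery after lymphodepletion
    dB = rB * B - kill * C * B        # tumor growth minus CAR T-mediated killing
    return [dC, dN, dB]

# Strong lymphodepletion (few normal T cells at infusion) leaves room for
# the CAR T cells to expand, which drives the tumor term negative.
y0 = [1e6, 1e7, 1e8]                  # initial CAR T, normal T, tumor cell counts
sol = solve_ivp(dynamics, t_span=(0, 120), y0=y0)
print("tumor burden at day 120:", sol.y[2, -1])
```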
    “Treatment success critically depends on the ability of the CAR T cells to multiply in the patient, and this is directly dependent upon the effectiveness of lymphodepletion that reduces the normal T cells before CAR T infusion,” said Frederick Locke, M.D., co-lead study author and vice chair of the Blood and Marrow Transplant and Cellular Immunotherapy Department at Moffitt.
    In their model, the researchers discovered that tumor eradication is a random, yet potentially highly probable, event. Despite this randomness of cure, the authors demonstrated that differences in the timing and probability of cures are determined largely by variability among patient and disease factors. The model confirmed that cures tend to happen early, within 20 to 80 days, before CAR T cells decline in number, while disease progression tends to happen over a wider time range, between 200 and 500 days after treatment.
    The researchers’ model could also be used to test new treatments or propose refined clinical trial designs. For example, the researchers used their model to demonstrate that another round of CAR T-cell therapy would require a second chemotherapy lymphodepletion to improve patient outcomes.
    “Our model confirms the hypothesis that sufficient lymphodepletion is an important factor in determining durable response. Improving the adaptation of CAR T cells to expand more and survive longer in vivo could result in increased likelihood and duration of response,” explained Philipp Altrock, Ph.D., lead study author and assistant member of the Integrated Mathematical Oncology Department at Moffitt.
    Story Source:
    Materials provided by H. Lee Moffitt Cancer Center & Research Institute. Note: Content may be edited for style and length. More