More stories

  •

    'Magnetic graphene' forms a new kind of magnetism

    Researchers have identified a new form of magnetism in so-called magnetic graphene, which could point the way toward understanding superconductivity in this unusual type of material.
    The researchers, led by the University of Cambridge, were able to control the conductivity and magnetism of iron thiophosphate (FePS3), a two-dimensional material which undergoes a transition from an insulator to a metal when compressed. This class of magnetic materials offers new routes to understanding the physics of new magnetic states and superconductivity.
    Using new high-pressure techniques, the researchers have shown what happens to magnetic graphene during the transition from insulator to conductor and into its unconventional metallic state, realised only under ultra-high pressure conditions. When the material becomes metallic, it remains magnetic, which is contrary to previous results and provides clues as to how the electrical conduction in the metallic phase works. The newly discovered high-pressure magnetic phase likely forms a precursor to superconductivity, so understanding its mechanisms is vital.
    Their results, published in the journal Physical Review X, also suggest a way that new materials could be engineered to have combined conduction and magnetic properties, which could be useful in the development of new technologies such as spintronics, which could transform the way in which computers process information.
    Properties of matter can alter dramatically with changing dimensionality. For example, graphene, carbon nanotubes, graphite and diamond are all made of carbon atoms, but have very different properties due to their different structure and dimensionality.
    “But imagine if you were also able to change all of these properties by adding magnetism,” said first author Dr Matthew Coak, who is jointly based at Cambridge’s Cavendish Laboratory and the University of Warwick. “A material which could be mechanically flexible and form a new kind of circuit to store information and perform computation. This is why these materials are so interesting, and because they drastically change their properties when put under pressure so we can control their behaviour.”
    In a previous study by Sebastian Haines of Cambridge’s Cavendish Laboratory and the Department of Earth Sciences, researchers established that the material becomes a metal at high pressure, and outlined how the crystal structure and arrangement of atoms in the layers of this 2D material change through the transition.

    “The missing piece has remained, however: the magnetism,” said Coak. “With no experimental techniques able to probe the signatures of magnetism in this material at pressures this high, our international team had to develop and test our own new techniques to make it possible.”
    The researchers used new techniques to measure the magnetic structure up to record-breaking high pressures, using specially designed diamond anvils and neutrons to act as the probe of magnetism. They were then able to follow the evolution of the magnetism into the metallic state.
    “To our surprise, we found that the magnetism survives and is in some ways strengthened,” said co-author Dr Siddharth Saxena, group leader at the Cavendish Laboratory. “This is unexpected, as the newly-freely-roaming electrons in a newly conducting material can no longer be locked to their parent iron atoms, generating magnetic moments there — unless the conduction is coming from an unexpected source.”
    In their previous paper, the researchers showed these electrons were ‘frozen’ in a sense. But when they made them flow or move, they started interacting more and more. The magnetism survives, but gets modified into new forms, giving rise to new quantum properties in a new type of magnetic metal.
    How a material behaves, whether conductor or insulator, is mostly based on how the electrons, or charge, move around. However, the ‘spin’ of the electrons has been shown to be the source of magnetism. Spin makes electrons behave a bit like tiny bar magnets and point a certain way. Magnetism from the arrangement of electron spins is used in most memory devices: harnessing and controlling it is important for developing new technologies such as spintronics, which could transform the way in which computers process information.
    “The combination of the two, the charge and the spin, is key to how this material behaves,” said co-author Dr David Jarvis from the Institut Laue-Langevin, France, who carried out this work as the basis of his PhD studies at the Cavendish Laboratory. “Finding this sort of quantum multi-functionality is another leap forward in the study of these materials.”
    “We don’t know exactly what’s happening at the quantum level, but at the same time, we can manipulate it,” said Saxena. “It’s like those famous ‘unknown unknowns’: we’ve opened up a new door to properties of quantum information, but we don’t yet know what those properties might be.”
    There are more potential chemical compounds to synthesise than could ever be fully explored and characterised. But by carefully selecting and tuning materials with special properties, it is possible to point the way towards the creation of such compounds and systems without having to apply huge amounts of pressure.
    Additionally, gaining fundamental understanding of phenomena such as low-dimensional magnetism and superconductivity allows researchers to make the next leaps in materials science and engineering, with particular potential in energy efficiency, generation and storage.
    As for the case of magnetic graphene, the researchers next plan to continue the search for superconductivity within this unique material. “Now that we have some idea what happens to this material at high pressure, we can make some predictions about what might happen if we try to tune its properties through adding free electrons by compressing it further,” said Coak.
    “The thing we’re chasing is superconductivity,” said Saxena. “If we can find a type of superconductivity that’s related to magnetism in a two-dimensional material, it could give us a shot at solving a problem that’s gone back decades.”

  •

    'Multiplying' light could be key to ultra-powerful optical computers

    An important class of challenging computational problems, with applications in graph theory, neural networks, artificial intelligence and error-correcting codes, can be solved by multiplying light signals, according to researchers from the University of Cambridge and Skolkovo Institute of Science and Technology in Russia.
    In a paper published in the journal Physical Review Letters, they propose a new type of computation that could revolutionise analogue computing by dramatically reducing the number of light signals needed while simplifying the search for the best mathematical solutions, allowing for ultra-fast optical computers.
    Optical or photonic computing uses photons produced by lasers or diodes for computation, as opposed to classical computers which use electrons. Since photons are essentially without mass and can travel faster than electrons, an optical computer would be superfast, energy-efficient and able to process information simultaneously through multiple temporal or spatial optical channels.
    The computing element in an optical computer — an alternative to the ones and zeroes of a digital computer — is represented by the continuous phase of the light signal, and the computation is normally achieved by adding two light waves coming from two different sources and then projecting the result onto ‘0’ or ‘1’ states.
    However, real life presents highly nonlinear problems, where multiple unknowns simultaneously change the values of other unknowns while interacting multiplicatively. In this case, the traditional approach to optical computing that combines light waves in a linear manner fails.
    Now, Professor Natalia Berloff from Cambridge’s Department of Applied Mathematics and Theoretical Physics and PhD student Nikita Stroev from Skolkovo Institute of Science and Technology have found that optical systems can combine light by multiplying the wave functions describing the light waves instead of adding them, which may represent a different type of connection between the light waves.
    They illustrated this phenomenon with quasi-particles called polaritons — which are half-light and half-matter — while extending the idea to a larger class of optical systems such as light pulses in a fibre. Tiny pulses or blobs of coherent, superfast-moving polaritons can be created in space and overlap with one another in a nonlinear way, due to the matter component of polaritons.
    “We found the key ingredient is how you couple the pulses with each other,” said Stroev. “If you get the coupling and light intensity right, the light multiplies, affecting the phases of the individual pulses, giving away the answer to the problem. This makes it possible to use light to solve nonlinear problems.”
    The multiplication of the wave functions to determine the phase of the light signal in each element of these optical systems comes from the nonlinearity that occurs naturally or is externally introduced into the system.
    “What came as a surprise is that there is no need to project the continuous light phases onto ‘0’ and ‘1’ states necessary for solving problems in binary variables,” said Stroev. “Instead, the system tends to bring about these states at the end of its search for the minimum energy configuration. This is the property that comes from multiplying the light signals. By contrast, previous optical machines require resonant excitation that fixes the phases to binary values externally.”
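    To make the distinction concrete, here is a minimal numerical sketch in Python (purely illustrative; it is not the authors’ polariton model). A binary variable s = ±1 is encoded in the phase of a light signal ψ = exp(iθ) with θ in {0, π}. Multiplying two signals adds their phases, which reproduces the product s1·s2 that higher-order binary optimisation terms require, whereas simply adding the signals can cancel them out.

```python
# Toy illustration (not the authors' polariton model): binary variables
# s = +1/-1 encoded in the phase of a light signal, psi = exp(i*theta),
# theta in {0, pi}.  Multiplying signals adds phases, i.e. multiplies spins.
import numpy as np

def encode(spin):
    """Map s = +1 to phase 0 and s = -1 to phase pi."""
    return np.exp(1j * np.pi * (spin < 0))

def decode(field):
    """Project the phase of a complex field back onto +1 / -1."""
    return 1 if np.cos(np.angle(field)) > 0 else -1

for s1 in (+1, -1):
    for s2 in (+1, -1):
        product_field = encode(s1) * encode(s2)   # multiplying adds the phases
        sum_field = encode(s1) + encode(s2)       # linear superposition
        print(f"s1={s1:+d} s2={s2:+d}  "
              f"decode(psi1*psi2)={decode(product_field):+d}  "
              f"|psi1+psi2|={abs(sum_field):.1f}")
```

    The loop prints decode(ψ1·ψ2) = s1·s2 for all four spin combinations, while |ψ1+ψ2| collapses to zero whenever the spins disagree, which is why a purely additive, linear scheme cannot express these multiplicative couplings.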
    The authors have also suggested and implemented a way to guide the system trajectories towards the solution by temporarily changing the coupling strengths of the signals.
    “We should start identifying different classes of problems that can be solved directly by a dedicated physical processor,” said Berloff. “Higher-order binary optimisation problems are one such class, and optical systems can be made very efficient in solving them.”
    There are still many challenges to be met before optical computing can demonstrate its superiority in solving hard problems in comparison with modern electronic computers: noise reduction, error correction, improved scalability and guiding the system to the true best solution are among them.
    “Changing our framework to directly address different types of problems may bring optical computing machines closer to solving real-world problems that cannot be solved by classical computers,” said Berloff.

  •

    Sophisticated lung-on-chip created

    The lung is a complex organ whose main function is to exchange gases. It is one of the largest organs in the human body and plays a key role in the oxygenation of all the organs. Due to its structure, cellular composition and dynamic microenvironment, it is difficult to mimic in vitro.
    A specialized laboratory of the ARTORG Center for Biomedical Engineering Research, University of Bern, headed by Olivier Guenat, has been developing a new generation of in-vitro models, called organs-on-chip, for over 10 years, focusing on modeling the lung and its diseases. After a first successful lung-on-chip system exhibiting essential features of the lung, the Organs-on-Chip (OOC) Technologies laboratory has now developed a purely biological next-generation lung-on-chip in collaboration with the Helmholtz Centre for Infection Research in Germany and the Thoracic Surgery and Pneumology Departments at Inselspital.
    A fully biodegradable life-sized air-blood-barrier
    Pauline Zamprogno, who developed the new model for her PhD thesis at the OOC, summarizes its characteristics: “The new lung-on-chip reproduces an array of alveoli with in vivo like dimensions. It is based on a thin, stretchable membrane, made with molecules naturally found in the lung: collagen and elastin. The membrane is stable, can be cultured on both sides for weeks, is biodegradable and its elastic properties allow mimicking respiratory motions by mechanically stretching the cells.”
    In contrast to the first generation, which was also built by the team around Olivier Guenat, the new system reproduces key aspects of the lung extracellular matrix (ECM): its composition (a cell support made of ECM proteins), its structure (an array of alveoli with dimensions similar to those found in vivo, plus a fibrous structure) and its properties (biodegradability, a key aspect for investigating barrier remodeling during lung diseases such as IPF or COPD). Additionally, the fabrication process is simple and less cumbersome than that of the polydimethylsiloxane stretchable porous membrane of the first-generation lung-on-chip.
    Broad potential clinical applications
    Cells to be cultured on the new chip for research are currently obtained from cancer patients undergoing lung resections at the Inselspital Department of Thoracic Surgery. Department Head Ralph Schmid sees a double advantage in the system: “The second generation lung-on-chip can be seeded with either healthy or diseased lung alveolar cells. This provides clinicians with both a better understanding of the lung’s physiology and a predictive tool for drug screening and potentially also for precision medicine, identifying the specific therapy with the best potential of helping a particular patient.”
    “The applications for such membranes are broad, from basic science investigations into lung functionalities and pathologies, to identifying new pathways, and to a more efficient discovery of potential new therapies,” says Thomas Geiser, Head of the Department of Pneumology at the Inselspital and Director of Teaching and Research of the Insel Gruppe.
    Powerful alternative to animal models in research
    As an additional plus, the new lung-on-chip can reduce the need for pneumological research based on animal models. “Many promising drug candidates successfully tested in preclinical models on rodents have failed when tested in humans due to differences between the species and in the expression of a lung disease,” explains Olivier Guenat. “This is why, in the long term, we aim to reduce animal testing and provide more patient-relevant systems for drug screening with the possibility of tailoring models to specific patients (by seeding organs-on-chip with their own cells).”
    Within the framework of a research project funded by the Swiss 3R Competence Center (3RCC), the new biological lung-on-chip will be further developed by Pauline Zamprogno and her colleagues from the OOC Technologies group to mimic a lung with idiopathic pulmonary fibrosis (IPF), a chronic disease leading to progressive scarring of the lung tissue. “My new project consists of developing an IPF-on-chip model based on the biological membrane. So far, we have developed a healthy air-blood barrier. Now it’s time to use it to investigate a real biological question,” says Zamprogno.
    Research group Organs-On-Chip Technologies of the ARTORG Center
    This specialized group of the ARTORG Center for Biomedical Engineering Research develops organs-on-chip, focusing on the lung and its diseases, in collaboration with the Departments of Pulmonary Medicine and Thoracic Surgery of the Inselspital. The group combines engineering, in particular microfluidics and microfabrication, cell biology and tissue engineering methods, material sciences and medicine.
    Their first breathing lung-on-chip is being further developed in collaboration with the start-up AlveoliX, with the aim of revolutionizing preclinical research. Recently the group has developed an entirely biological second-generation lung-on-chip focusing on recreating the air-blood barrier of the lung. A second research direction aims at developing a functional lung microvasculature. Here, lung endothelial cells are seeded in a micro-engineered environment, where they self-assemble to build a network of perfusable and contractile microvessels of only a few tens of micrometers in diameter.
    Next to pharmaceutical applications, organs-on-chip are seen as having the potential to be used in precision medicine to test the patient’s own cells in order to tailor the best therapy. Furthermore, such systems have the significant potential to reduce animal testing in medical and life-science research. The OOC group operates the Organs-on-Chip Facility, providing scientists from the University of Bern, the University Hospital of Bern and beyond with the infrastructure and equipment to produce microfluidic devices and test organs-on-chip.

  •

    Online searches can help foreshadow future COVID-19 surges and declines, new study shows

    Online searches for mobile and isolated activities can help to predict later surges and declines in COVID-19 cases, a team of researchers has found. Its findings, based on a four-month analysis of online searches, offer a potential means to anticipate the pathways of the pandemic — before new infections are reported.
    “This is a first step towards building a tool that can help predict COVID case surges by capturing higher-risk activities and intended mobility, which searches for gyms and in-person dining can illuminate,” says Anasse Bari, a clinical assistant professor in computer science at New York University’s Courant Institute of Mathematical Sciences and one of the authors of the paper, which appears in the journal Social Network Analysis and Mining. “Using such ‘alternative data’ is nothing new and has been applied for other purposes — for instance, alternative data has been used in finance to generate data-driven investments, such as studying satellite images of cars in parking lots to predict business earnings.”
    “Our research shows the same techniques could be applied to combatting a pandemic by spotting, ahead of time, where outbreaks are likely to occur,” adds Megan Coffee, a clinical assistant professor in the Division of Infectious Disease & Immunology at NYU Grossman School of Medicine. “Developing a barometer of behavior would, with further work and validation, allow policymakers and epidemiologists to track the impact of social interventions and brace for rising surges.”
    The research also showed an association between the lifting of lockdown restrictions and searches for intended activities outside the home, pointing to how the effects of policy decisions can be measured using alternative data.
    Since the onset of the pandemic, governments have restricted activities, often based on surges of COVID-19 cases, then loosened these restrictions after declines. However, these actions respond to infections that have already been reported, even though they are designed to limit future spread.
    In the Social Network Analysis and Mining study, the researchers sought to determine if there were ways to spot behaviors known to be risky during the pandemic (e.g., visits to barbershops and nail salons) ahead of local and regional outbreaks — and conversely, identify behaviors known to be less risky (e.g., exercising at home) prior to declines in coronavirus cases.
    “Our goal was to capture the underlying social dynamics of an unprecedented pandemic using alternative data sources that are new to infectious disease epidemiology,” explains Bari. “When someone searches the closing time of a local bar or looks up directions to a local gym, they give some insight into what future risks they may have.”
    To examine this, they studied online searches from March through June in 2020 in all 50 states. Here, they divided searches into two categories — or “tracks”: a mobility index track, which categorized searches linked to interactions with others outside the home (e.g., “theaters near me,” “flight tickets”), and an isolation index track, which categorized searches linked to activities done at home (“food delivery,” “at-home yoga”).
    The team’s choice of search keywords was informed by a recent Democracy Fund + UCLA Nationscape survey that tracked activities individuals reported they would prioritize attending if “restrictions were lifted on the advice of public health officials regarding activities.” The most popular results included “going to a stadium/concert,” “going to the movies,” and “attending a sports event.”
    Using Google Trends data, the researchers tracked search trends related to mobility and isolation to develop mobility and isolation indexes. They complemented these with a “net movement index,” which was the difference between the mobility index and the isolation index.
    The researchers then looked at COVID-19 case growth 10 to 14 days later — the expected lag between exposure and symptoms — at the state level by examining data from state and local health agencies.
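    As a rough sketch of that pipeline (illustrative only: the file names, keyword lists and exact index definitions below are placeholders rather than the authors’ code), the indexes and the lagged comparison can be reproduced with a few lines of pandas once weekly search-interest and case data for a state are in hand.

```python
# Illustrative sketch only (not the study's code): build mobility, isolation
# and net movement indexes from weekly search-interest data for one state and
# correlate the net movement index with cases reported roughly 2 weeks later.
import pandas as pd

mobility_terms = ["theaters near me", "flight tickets", "gyms near me"]
isolation_terms = ["food delivery", "at-home yoga"]

# Hypothetical input files: weekly Google Trends interest per term, and
# weekly new-case counts, both indexed by date.
trends = pd.read_csv("trends_new_york.csv", parse_dates=["date"], index_col="date")
cases = pd.read_csv("cases_new_york.csv", parse_dates=["date"], index_col="date")

df = trends.join(cases, how="inner")                 # align the weekly dates
df["mobility_index"] = df[mobility_terms].mean(axis=1)
df["isolation_index"] = df[isolation_terms].mean(axis=1)
df["net_movement"] = df["mobility_index"] - df["isolation_index"]

# Compare the index against cases reported two weekly periods later,
# matching the 10-14 day lag between exposure and symptoms.
df["future_cases"] = df["new_cases"].shift(-2)
print("correlation:", df["net_movement"].corr(df["future_cases"]).round(2))
```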
    Overall, they found that the net movement index correlated with new COVID-19 cases — reported weekly — in 42 of 50 states over the studied period (March-June 2020).
    The researchers also looked more closely at five states (Arizona, California, Florida, New York, and Texas) to determine the impact of the ending of stay-at-home orders on searches. In all of these states, the mobility index, which decreased during the initial lockdown phase, increased as re-openings began. Subsequently, COVID-19 cases rose again nationwide in June 2020 and surged in Arizona, California, Florida, and Texas.
    By contrast, an earlier sharp decline in mobility indices was followed by a sharp decline in the case growth data in these same five states.
    “From this work, we hope to build a knowledge base on human behavior change from alternative data during the life cycle of the pandemic in order to allow machine learning to predict behavior in future epidemics,” says Aashish Khubchandani, an NYU undergraduate and one of the paper’s authors.
    The researchers recognize that search-based methods to predict infection outbreaks raise privacy concerns. However, they emphasize that their tool uses large volumes of search queries, not individual ones, and relies on anonymized data in order to offer health-related projections.
    The paper’s other authors were Courant researchers Matthias Heymann and Junzhang Wang, who are part of the Predictive Analytics and AI research lab at the Courant Institute.
    The study was supported, in part, by an Amazon AI research grant and an NYU COVID-19 Catalyst Research grant.

  •

    Breakthrough in quantum photonics promises a new era in optical circuits

    The modern world is powered by electrical circuitry on a “chip” — the semiconductor chip underpinning computers, cell phones, the internet, and other applications. In the year 2025, humans are expected to be creating 175 zettabytes (175 trillion gigabytes) of new data. How can we ensure the security of sensitive data at such a high volume? And how can we address grand-challenge-like problems, from privacy and security to climate change, leveraging this data, especially given the limited capability of current computers?
    A promising alternative lies in emerging quantum communication and computation technologies. For this to happen, however, powerful new quantum optical circuits will need to be developed on a wide scale: circuits capable of securely processing the massive amounts of information we generate every day. Researchers in USC’s Mork Family Department of Chemical Engineering and Materials Science have made a breakthrough to help enable this technology.
    While a traditional electrical circuit is a pathway along which electrons from an electric charge flow, a quantum optical circuit uses light sources that generate individual light particles, or photons, on demand, one at a time, acting as information-carrying bits (quantum bits or qubits). These light sources are nano-sized semiconductor “quantum dots”: tiny manufactured collections of tens of thousands to a million atoms, packed within a volume whose linear size is less than a thousandth of the thickness of a typical human hair, buried in a matrix of another suitable semiconductor.
    These quantum dots have so far proven to be the most versatile on-demand single-photon generators. The optical circuit requires these single-photon sources to be arranged on a semiconductor chip in a regular pattern. Photons with nearly identical wavelength from the sources must then be released in a guided direction. This allows them to be manipulated to form interactions with other photons and particles to transmit and process information.
    Until now, there has been a significant barrier to the development of such circuits. For example, in current manufacturing techniques, quantum dots have different sizes and shapes and assemble on the chip in random locations. The fact that the dots have different sizes and shapes means that the photons they release do not have uniform wavelengths. This and the lack of positional order make them unsuitable for use in the development of optical circuits.
    In recently published work, researchers at USC have shown that single photons can indeed be emitted in a uniform way from quantum dots arranged in a precise pattern. It should be noted that the method of aligning quantum dots was first developed at USC by the lead PI, Professor Anupam Madhukar, and his team nearly thirty years ago, well before the current explosive research activity in quantum information and interest in on-chip single-photon sources. In this latest work, the USC team has used such methods to create single quantum dots, with their remarkable single-photon emission characteristics. It is expected that the ability to precisely align uniformly-emitting quantum dots will enable the production of optical circuits, potentially leading to novel advancements in quantum computing and communications technologies.
    The work, published in APL Photonics, was led by Jiefei Zhang, currently a research assistant professor in the Mork Family Department of Chemical Engineering and Materials Science, with corresponding author Anupam Madhukar, Kenneth T. Norris Professor in Engineering and Professor of Chemical Engineering, Electrical Engineering, Materials Science, and Physics.
    “The breakthrough paves the way to the next steps required to move from lab demonstration of single photon physics to chip-scale fabrication of quantum photonic circuits,” Zhang said. “This has potential applications in quantum (secure) communication, imaging, sensing and quantum simulations and computation.”
    Madhukar said that it is essential that quantum dots be ordered in a precise way so that photons released from any two or more dots can be manipulated to connect with each other on the chip. This will form the basic building unit for quantum optical circuits.
    “If the source where the photons come from is randomly located, this can’t be made to happen,” Madhukar said.
    “The current technology that is allowing us to communicate online, for instance using a technological platform such as Zoom, is based on the silicon integrated electronic chip. If the transistors on that chip are not placed in exact designed locations, there would be no integrated electrical circuit,” Madhukar said. “It is the same requirement for photon sources such as quantum dots to create quantum optical circuits.”
    The research is supported by the Air Force Office of Scientific Research (AFOSR) and the U.S. Army Research Office (ARO).
    “This advance is an important example of how solving fundamental materials science challenges, like how to create quantum dots with precise position and composition, can have big downstream implications for technologies like quantum computing,” said Evan Runnerstrom, program manager, Army Research Office, an element of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. “This shows how ARO’s targeted investments in basic research support the Army’s enduring modernization efforts in areas like networking.”
    To create the precise layout of quantum dots for the circuits, the team used a method called SESRE (substrate-encoded size-reducing epitaxy) developed in the Madhukar group in the early 1990s. In the current work, the team fabricated regular arrays of nanometer-sized mesas with a defined edge orientation, shape (sidewalls) and depth on a flat semiconductor substrate, composed of gallium arsenide (GaAs). Quantum dots are then created on top of the mesas by adding appropriate atoms using the following technique.
    First, incoming gallium (Ga) atoms gather on top of the nanoscale mesas, attracted by surface energy forces, where they deposit GaAs. Then, the incoming flux is switched to indium (In) atoms to deposit indium arsenide (InAs), followed again by Ga atoms to form GaAs and hence create the desired individual quantum dots that end up releasing single photons. To be useful for creating optical circuits, the space between the pyramid-shaped nano-mesas needs to be filled by material that flattens the surface. In the final chip, the quantum dots lie buried under a GaAs overlayer (depicted as translucent in the accompanying image).
    “This work also sets a new world record for ordered and scalable quantum dots in terms of the simultaneous purity of single-photon emission greater than 99.5%, and in terms of the uniformity of the wavelength of the emitted photons, which can be as narrow as 1.8 nm, which is a factor of 20 to 40 better than typical quantum dots,” Zhang said.
    Zhang said that with this uniformity, it becomes feasible to apply established methods such as local heating or electric fields to fine-tune the photon wavelengths of the quantum dots to exactly match each other, which is necessary for creating the required interconnections between different quantum dots for circuits.
    This means that for the first time researchers can create scalable quantum photonic chips using well-established semiconductor processing techniques. In addition, the team’s efforts are now focused on establishing how identical the emitted photons are from the same and/or from different quantum dots. The degree of indistinguishability is central to the quantum effects of interference and entanglement, which underpin quantum information processing: communication, sensing, imaging, or computing.
    Zhang concluded: “We now have an approach and a material platform to provide scalable and ordered sources generating potentially indistinguishable single photons for quantum information applications. The approach is general and can be used for other suitable material combinations to create quantum dots emitting over a wide range of wavelengths preferred for different applications, for example fiber-based optical communication or the mid-infrared regime, suited for environmental monitoring and medical diagnostics.”
    Gernot S. Pomrenke, AFOSR Program Officer, Optoelectronics and Photonics said that reliable arrays of on-demand single photon sources on-chip were a major step forward.
    “This impressive growth and material science work stretches over three decades of dedicated effort before research activities in quantum information were in the mainstream,” Pomrenke said. “Initial AFOSR funding and resources from other DoD agencies have been critical in realizing the challenging work and vision by Madhukar, his students, and collaborators. There is a great likelihood that the work will revolutionize the capabilities of data centers, medical diagnostics, defense and related technologies.”

  •

    New way to power up nanomaterials for electronic applications

    UCLA materials scientists and colleagues have discovered that perovskites, a class of promising materials that could be used for low-cost, high-performance solar cells and LEDs, have a previously unutilized molecular component that can further tune the electronic properties of perovskites.
    Named after Russian mineralogist Lev Perovski, perovskite materials have a crystal-lattice structure of inorganic molecules like that of ceramics, along with organic molecules that are interlaced throughout. Up to now, these organic molecules appeared to only serve a structural function and could not directly contribute to perovskites’ electronic performance.
    A new UCLA-led study shows that when the organic molecules are designed properly, they not only maintain the crystal lattice structure but also contribute to the materials’ electronic properties. This discovery opens up new possibilities to improve the design of materials that will lead to better solar cells and LEDs. The study detailing the research was recently published in Science.
    “This is like finding an old dog that can play new tricks,” said Yang Yang, the Carol and Lawrence E. Tannas Jr. Professor of Engineering at the UCLA Samueli School of Engineering, who is the principal investigator on the research. “In materials science, we look all the way down to the atomic structure of a material for efficient performance. Our postdocs and graduate students didn’t take anything for granted and dug deeper to find a new pathway.”
    In order to make a better-performing perovskite material, the researchers incorporated a specially designed organic molecule, a pyrene-containing organic ammonium. On its exterior, the positively charged ammonium molecule connected to molecules of pyrene — a quadruple ring of carbon atoms. This molecular design offered additional electronic tunability of perovskites.
    “The unique property of perovskites is that they have the advantage of high-performance inorganic semiconductors, as well as easy and low-cost processability of polymers,” said study co-lead author Rui Wang, a UCLA postdoctoral scholar in materials science and engineering. “This newly enhanced perovskite material now offers opportunities for improved design concepts with better efficiency.”
    To demonstrate perovskites’ added effectiveness, the team built a photovoltaic (PV) cell prototype with the materials, and then tested it under continuous light for 2,000 hours. The new cell continued to convert light to energy at 85% of its original efficiency. This contrasts with a PV cell made of the same materials, but without the added altered organic molecule, which retained only 60% of its original efficiency.

    Story Source:
    Materials provided by the University of California, Los Angeles.

  •

    AI can make accurate assessment of whether a person will die from COVID-19, study finds

    Using patient data, artificial intelligence can make a 90 percent accurate assessment of whether a person will die from COVID-19 or not, according to new research at the University of Copenhagen. Body mass index (BMI), gender and high blood pressure are among the most heavily weighted factors. The research can be used to predict how many patients will be in hospitals, who will need a respirator, and who ought to be first in line for a vaccination.
    Artificial intelligence is able to predict who is most likely to die from the coronavirus. In doing so, it can also help decide who should be at the front of the line for the precious vaccines now being administered across Denmark.
    The result is from a newly published study by researchers at the University of Copenhagen’s Department of Computer Science. Since the COVID pandemic’s first wave, researchers have been working to develop computer models that can predict, based on disease history and health data, how badly people will be affected by COVID-19.
    Based on patient data from the Capital Region of Denmark and Region Zealand, the results of the study demonstrate that artificial intelligence can, with up to 90 percent certainty, determine whether a person who is not yet infected will die of COVID-19 if they are unfortunate enough to become infected. Once admitted to the hospital with COVID-19, the computer can predict with 80 percent accuracy whether the person will need a respirator.
    “We began working on the models to assist hospitals, as during the first wave, they feared that they did not have enough respirators for intensive care patients. Our new findings could also be used to carefully identify who needs a vaccine,” explains Professor Mads Nielsen of the University of Copenhagen’s Department of Computer Science.
    Older men with high blood pressure are at highest risk
    The researchers fed a computer program with health data from 3,944 Danish COVID-19 patients. This trained the computer to recognize patterns and correlations in both patients’ prior illnesses and in their bouts against COVID-19.
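    A minimal sketch of how such a risk model can be trained on tabular patient data is shown below (illustrative only: the file name, column names and the choice of a random-forest classifier are assumptions, not the Copenhagen group’s published pipeline).

```python
# Illustrative sketch (not the University of Copenhagen pipeline): train a
# mortality-risk classifier on tabular patient features like those named in
# the article and report discrimination as an AUC score.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("covid_patients.csv")       # hypothetical dataset
features = ["age", "bmi", "male", "hypertension", "neurological_disease",
            "copd", "asthma", "diabetes", "heart_disease"]
X, y = df[features], df["died"]              # 1 = died of COVID-19

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Feature importances give a rough ranking of which parameters weigh most
# heavily, analogous to the ranking (BMI, age, blood pressure, ...) reported.
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```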
    “Our results demonstrate, unsurprisingly, that age and BMI are the most decisive parameters for how severely a person will be affected by COVID-19. But the likelihood of dying or ending up on a respirator is also heightened if you are male, have high blood pressure or a neurological disease,” explains Mads Nielsen.
    The diseases and health factors that, according to the study, have the most influence on whether a patient ends up on a respirator after being infected with COVID-19 are in order of priority: BMI, age, high blood pressure, being male, neurological diseases, COPD, asthma, diabetes and heart disease.
    “For those affected by one or more of these parameters, we have found that it may make sense to move them up in the vaccine queue, to avoid any risk of them becoming infected and eventually ending up on a respirator,” says Nielsen.
    Predicting respirator needs is a must
    Researchers are currently working with the Capital Region of Denmark to take advantage of this fresh batch of results in practice. They hope that artificial intelligence will soon be able to help the country’s hospitals by continuously predicting the need for respirators.
    “We are working towards a goal that we should be able to predict the need for respirators five days ahead by giving the computer access to health data on all COVID positives in the region,” says Mads Nielsen, adding:
    “The computer will never be able to replace a doctor’s assessment, but it can help doctors and hospitals see many COVID-19 infected patients at once and set ongoing priorities.”
    However, technical work is still pending to make health data from the region available for the computer and thereafter to calculate the risk to the infected patients. The research was carried out in collaboration with Rigshospitalet and Bispebjerg and Frederiksberg Hospital.

  •

    The Ramanujan Machine: Researchers develop 'conjecture generator'

    Using AI and computer automation, Technion researchers have developed a “conjecture generator” that creates mathematical conjectures, which are considered to be the starting point for developing mathematical theorems. They have already used it to generate a number of previously unknown formulas. The study, which was published in the journal Nature, was carried out by undergraduates from different faculties under the tutelage of Assistant Professor Ido Kaminer of the Andrew and Erna Viterbi Faculty of Electrical Engineering at the Technion.
    The project deals with one of the most fundamental elements of mathematics — mathematical constants. A mathematical constant is a number with a fixed value that emerges naturally from different mathematical calculations and mathematical structures in different fields. Many mathematical constants are of great importance in mathematics, but also in disciplines that are external to mathematics, including biology, physics, and ecology. The golden ratio and Euler’s number are examples of such fundamental constants. Perhaps the most famous constant is pi, which was studied in ancient times in the context of the circumference of a circle. Today, pi appears in numerous formulas in all branches of science, with many math aficionados competing over who can recall more digits after the decimal point: 3.14159…
    The Technion researchers proposed and examined a new idea: The use of computer algorithms to automatically generate mathematical conjectures that appear in the form of formulas for mathematical constants.
    A conjecture is a mathematical conclusion or proposition that has not been proved; once the conjecture is proved, it becomes a theorem. Discovery of a mathematical conjecture on fundamental constants is relatively rare, and its source often lies in mathematical genius and exceptional human intuition. Newton, Riemann, Goldbach, Gauss, Euler, and Ramanujan are examples of such genius, and the new approach presented in the paper is named after Srinivasa Ramanujan.
    Ramanujan, an Indian mathematician born in 1887, grew up in a poor family, yet managed to arrive in Cambridge at the age of 26 at the initiative of British mathematicians Godfrey Hardy and John Littlewood. Within a few years he fell ill and returned to India, where he died at the age of 32. During his brief life he accomplished great achievements in the world of mathematics. One of Ramanujan’s rare capabilities was the intuitive formulation of unproven mathematical formulas. The Technion research team therefore decided to name their algorithm “the Ramanujan Machine,” as it generates conjectures without proving them, by “imitating” intuition using AI and considerable computer automation.
    According to Prof. Kaminer, “Our results are impressive because the computer doesn’t care if proving the formula is easy or difficult, and doesn’t base the new results on any prior mathematical knowledge, but only on the numbers in mathematical constants. To a large degree, our algorithms work in the same way as Ramanujan himself, who presented results without proof. It’s important to point out that the algorithm itself is incapable of proving the conjectures it found — at this point, the task is left to be resolved by human mathematicians.”
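    The numerical way of working that Kaminer describes can be illustrated with a small sketch (a toy example, not the Technion group’s published algorithm): evaluate a candidate continued fraction to many digits and check whether it agrees with a known constant. Here the classical continued-fraction pattern of e, [2; 1, 2, 1, 1, 4, 1, 1, 6, ...], plays the role of the candidate formula.

```python
# Toy illustration (not the Ramanujan Machine itself): the core move is
# numerical matching: evaluate a candidate continued fraction to many
# digits and check it against known constants.
from mpmath import mp, mpf, e, pi

mp.dps = 50  # work with 50 decimal digits

def cf_value(terms):
    """Evaluate a simple continued fraction a0 + 1/(a1 + 1/(a2 + ...))."""
    value = mpf(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

def e_pattern(n_terms):
    """Partial quotients of e: 2, then repeating blocks 1, 2k, 1 for k = 1, 2, ..."""
    terms, k = [2], 1
    while len(terms) < n_terms:
        terms += [1, 2 * k, 1]
        k += 1
    return terms[:n_terms]

candidate = cf_value(e_pattern(60))
for name, const in (("e", +e), ("pi", +pi)):
    if abs(candidate - const) < mpf(10) ** -40:
        print(f"numerical match: the continued fraction agrees with {name}")
```

    The Ramanujan Machine automates this kind of digit matching over vast families of candidate formulas and then hands the surviving matches to mathematicians as conjectures to be proved.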
    The conjectures generated by the Technion’s Ramanujan Machine have delivered new formulas for well-known mathematical constants such as pi, Euler’s number (e), Apéry’s constant (which is related to the Riemann zeta function), and the Catalan constant. Surprisingly, the algorithms developed by the Technion researchers succeeded not only in creating known formulas for these famous constants, but in discovering several conjectures that were heretofore unknown. The researchers estimate this algorithm will be able to significantly expedite the generation of mathematical conjectures on fundamental constants and help to identify new relationships between these constants.
    As mentioned, until now, these conjectures were based on rare genius. This is why in hundreds of years of research, only a few dozen formulas were found. It took the Technion’s Ramanujan Machine just a few hours to discover all the formulas for pi discovered by Gauss, the “Prince of Mathematics,” during a lifetime of work, along with dozens of new formulas that were unknown to Gauss.
    According to the researchers, “Similar ideas can in the future lead to the development of mathematical conjectures in all areas of mathematics, and in this way provide a meaningful tool for mathematical research.”
    The research team has launched a website, RamanujanMachine.com, which is intended to inspire the public to be more involved in the advancement of mathematical research by providing algorithmic tools that will be available to mathematicians and the public at large. Even before the article was published, hundreds of students, experts, and amateur mathematicians had signed up to the website.
    The research study started out as an undergraduate project in the Rothschild Scholars Technion Program for Excellence with the participation of Gal Raayoni and George Pisha, and continued as part of the research projects conducted in the Andrew and Erna Viterbi Faculty of Electrical Engineering with the participation of Shahar Gottlieb, Yoav Harris, and Doron Haviv. This is also where the most significant breakthrough was made — by an algorithm developed by Shahar Gottlieb — which led to the article’s publication in Nature.
    Prof. Kaminer adds that the most interesting mathematical discovery made by the Ramanujan Machine’s algorithms to date relates to a new algebraic structure concealed within the Catalan constant. The structure was discovered by high school student Yahel Manor, who participated in the project as part of the Alpha Program for science-oriented youth.
    Prof. Kaminer added that, “Industry colleagues Uri Mendlovic and Yaron Hadad also participated in the study, and contributed greatly to the mathematical and algorithmic concepts that form the foundation for the Ramanujan Machine. It is important to emphasize that the entire project was executed on a voluntary basis, received no funding, and participants joined the team out of pure scientific curiosity.”
    Prof. Ido Kaminer is the head of the Robert and Ruth Magid Electron Beam Quantum Dynamics Laboratory. He is a faculty member in the Andrew and Erna Viterbi Faculty of Electrical Engineering and the Solid State Institute. Kaminer is affiliated with the Helen Diller Quantum Center and the Russell Berrie Nanotechnology Institute.