More stories

  • Spin keeps electrons in line in iron-based superconductor

    Researchers from PSI’s Spectroscopy of Quantum Materials group, together with scientists from Beijing Normal University, have solved a puzzle at the forefront of research into iron-based superconductors: the origin of FeSe’s electronic nematicity. Using resonant inelastic X-ray scattering (RIXS) at the Swiss Light Source (SLS), they discovered that, surprisingly, this electronic phenomenon is primarily spin driven. Electronic nematicity is believed to be an important ingredient in high-temperature superconductivity, but whether it helps or hinders it is still unknown. Their findings are published in Nature Physics.
    Near PSI, where the Swiss forest is ever present in people’s lives, you often see log piles: incredibly neat log piles. Wedge-shaped logs for firewood are stacked carefully lengthways but with little thought to their rotation. When particles in a material spontaneously line up, like the logs in these log piles, such that they break rotational symmetry but preserve translational symmetry, the material is said to be in a nematic state. In a liquid crystal, this means that the rod-shaped molecules are able to flow like a liquid in the direction of their alignment, but not in other directions. Electronic nematicity occurs when the electron orbitals in a material align in this way. Typically, it manifests as anisotropic electronic properties: for example, resistivity or conductivity exhibiting vastly different magnitudes when measured along different axes.
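    A common way to condense such anisotropic resistivity into a single number, used here purely as an illustration and not taken from the article, is the normalized in-plane anisotropy:

```python
def nematic_anisotropy(rho_a, rho_b):
    """Normalized in-plane resistivity anisotropy, often used as a proxy for
    the strength of electronic nematic order (an illustrative convention,
    not a quantity defined in the article)."""
    return (rho_a - rho_b) / (rho_a + rho_b)
```

    In the symmetric (non-nematic) state the two axes are equivalent and the measure vanishes; once rotational symmetry is broken, it becomes nonzero.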
    Since their discovery in 2008, iron-based superconductors have attracted enormous interest. Alongside the well-studied cuprate superconductors, these materials exhibit the mysterious phenomenon of high-temperature superconductivity. The electronic nematic state is a ubiquitous feature of iron-based superconductors. Yet, until now, the physical origin of this electronic nematicity has remained a puzzle; arguably one of the most important puzzles in the study of iron-based superconductors.
    But why is the electronic nematicity so interesting? The answer lies in an enduring conundrum: understanding how electrons pair up and achieve superconductivity at high temperatures. The stories of electronic nematicity and superconductivity are inextricably linked, but exactly how, and indeed whether they compete or cooperate, is a hotly debated issue.
    The drive to understand electronic nematicity has led researchers to turn their attention to one particular iron-based superconductor, iron selenide (FeSe). FeSe is something of an enigma, simultaneously possessing the simplest crystal structure of all the iron-based superconductors and the most baffling electronic properties.
    FeSe enters its superconducting phase below a critical temperature (Tc) of 9 K but tantalisingly boasts a tunable Tc, meaning that this temperature can be raised by applying pressure to or doping the material. The quasi-2D layered material possesses an extended electronic nematic phase, which appears below approximately 90 K. Curiously, this electronic nematicity appears without the long-range magnetic order that it would typically go hand in hand with, leading to lively debate surrounding its origins: namely, whether they are driven by orbital or spin degrees of freedom. The absence of long-range magnetic order in FeSe offers a clearer view of the electronic nematicity and its interplay with superconductivity. As a result, many researchers feel that FeSe may hold the key to understanding the puzzle of electronic nematicity across the family of iron-based superconductors.

  • Researchers magnify hidden biological structures with MAGNIFIERS

    A research team from Carnegie Mellon University and Columbia University has combined two emerging imaging technologies to better view a wide range of biomolecules, including proteins, lipids and DNA, at the nanoscale. Their technique, which brings together expansion microscopy and stimulated Raman scattering microscopy, is detailed in Advanced Science.
    Biomolecules are traditionally imaged using fluorescence microscopy, but that technique has its limitations. Fluorescence microscopy relies on fluorophore-carrying tags that bind to and label molecules of interest. Because these tags emit fluorescent light over a broad range of wavelengths, researchers can use only three to four fluorescent colors in the visible spectrum at a time to label molecules of interest.
    Unlike fluorescence microscopy, stimulated Raman scattering microscopy (SRS) visualizes the chemical bonds of biomolecules by capturing their vibrational fingerprints. In this sense, SRS doesn’t need labels to see the different types of biomolecules, or even different isotopes, within a sample. In addition, a rainbow of dyes with unique vibrational spectra can be used to image multiple targets. However, SRS has a diffraction limit of about 300 nanometers, making it unable to visualize many of the crucial nanoscale structures found in cells and tissue.
    “Each type of molecule has its own vibrational fingerprint. SRS allows us to see the type of molecule we want by tuning in to the characteristic frequency of its vibrations. It’s something like switching between radio stations,” said Carnegie Mellon Eberly Family Associate Professor of Biological Sciences Yongxin (Leon) Zhao.
    Zhao’s lab has been developing new imaging tools based on expansion microscopy — a technique that addresses the problem of diffraction limits in a wide range of biological imaging. Expansion microscopy takes biological samples and transforms them into water-soluble hydrogels. The hydrogels can then be treated and made to expand to more than 100 times their original volume. The expanded samples can then be imaged using standard techniques.
    “Just as SRS was able to surmount the limitations of fluorescence microscopy, expansion microscopy surmounts the limitations of SRS,” said Zhao.
    The Carnegie Mellon and Columbia researchers combined SRS and expansion microscopy to create Molecule Anchorable Gel-enabled Nanoscale Imaging of Fluorescence and stimulated Raman Scattering microscopy (MAGNIFIERS). Zhao’s expansion microscopy technique was able to expand samples up to 7.2-fold, allowing the team to use SRS to image smaller molecules and structures than would otherwise be possible.
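    A back-of-envelope calculation shows the resolution gain from combining the two methods, assuming the 7.2-fold figure refers to linear expansion:

```python
# Rough effective resolution of expansion + SRS imaging.
srs_limit_nm = 300        # approximate SRS diffraction limit quoted above
expansion_factor = 7.2    # expansion reported in the study (assumed linear)

effective_resolution_nm = srs_limit_nm / expansion_factor
print(f"{effective_resolution_nm:.1f} nm")  # about 41.7 nm
```

    An expanded sample spreads nanoscale features across a larger physical area, so the same 300 nm optical limit resolves correspondingly finer original structure.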
    In the recently published study, the research team showed that MAGNIFIERS could be used for high-resolution metabolic imaging of protein aggregates, such as those formed in conditions like Huntington’s disease. They also showed that MAGNIFIERS could map the nanoscale locations of eight different markers in brain tissue at one time.
    The researchers plan to continue to develop the MAGNIFIERS technique to achieve higher resolution and higher throughput imaging for understanding the pathology of complex diseases, such as cancer and brain disorders.
    Additional study co-authors include Alexsandra Klimas, Brendan Gallagher, Zhangu Cheng, Feifei Fu, Piyumi Wijesekara and Xi Ren from Carnegie Mellon; and Yupeng Miao, Lixue Shi and Wei Min from Columbia.
    This research was funded by the National Institutes of Health (DP2 OD025926-01, R01 GM128214, R01 GM132860, and R01 EB029523), Carnegie Mellon University, the DSF Charitable Foundation and U.S. Department of Defense (VR190139).
    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Jocelyn Duffy. Note: Content may be edited for style and length.

  • Accelerating the pace of machine learning

    Machine learning happens a lot like erosion.
    Data is hurled at a mathematical model like grains of sand skittering across a rocky landscape. Some of those grains simply sail along with little or no impact. But some of them make their mark: testing, hardening, and ultimately reshaping the landscape according to inherent patterns and fluctuations that emerge over time.
    Effective? Yes. Efficient? Not so much.
    Rick Blum, the Robert W. Wieseman Professor of Electrical and Computer Engineering at Lehigh University, seeks to bring efficiency to distributed learning techniques emerging as crucial to modern artificial intelligence (AI) and machine learning (ML). In essence, his goal is to hurl far fewer grains of data without degrading the overall impact.
    In the paper “Distributed Learning With Sparsified Gradient Differences,” published in a special ML-focused issue of the IEEE Journal of Selected Topics in Signal Processing, Blum and collaborators propose the use of “Gradient Descent method with Sparsification and Error Correction,” or GD-SEC, to improve the communications efficiency of machine learning conducted in a “worker-server” wireless architecture. The issue was published May 17, 2022.
    “Problems in distributed optimization appear in various scenarios that typically rely on wireless communications,” he says. “Latency, scalability, and privacy are fundamental challenges.”
    “Various distributed optimization algorithms have been developed to solve this problem,” he continues, “and one primary method is to employ classical GD in a worker-server architecture. In this environment, the central server updates the model’s parameters after aggregating data received from all workers, and then broadcasts the updated parameters back to the workers. But the overall performance is limited by the fact that each worker must transmit all of its data all of the time. When training a deep neural network, this can be on the order of 200 MB from each worker device at each iteration. This communication step can easily become a significant bottleneck on overall performance, especially in federated learning and edge AI systems.”
    Through the use of GD-SEC, Blum explains, communication requirements are significantly reduced. The technique employs a data compression approach where each worker sets small magnitude gradient components to zero — the signal-processing equivalent of not sweating the small stuff. The worker then only transmits to the server the remaining non-zero components. In other words, meaningful, usable data are the only packets launched at the model.
    “Current methods create a situation where each worker has an expensive computational cost; GD-SEC is relatively cheap, as only one GD step is needed at each round,” says Blum.
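    The worker-side pattern Blum describes, transmitting only the gradient components that matter while carrying the suppressed remainder forward, can be sketched as follows. This is a minimal illustration of sparsification with error correction, not the paper’s actual GD-SEC algorithm; the threshold value and all names are invented for the example:

```python
import numpy as np

def sparsified_worker_step(grad, last_sent, residual, threshold=1e-3):
    """One worker-side communication round (illustrative sketch only).

    Transmit only the components of the gradient difference whose magnitude
    exceeds `threshold`; accumulate everything suppressed in `residual` so
    the error is corrected in later rounds rather than lost.
    """
    diff = grad - last_sent + residual           # change since last transmission
    mask = np.abs(diff) >= threshold             # keep significant components only
    sparse_update = np.where(mask, diff, 0.0)    # what actually goes on the wire
    new_last_sent = last_sent + sparse_update    # server's view of this worker
    new_residual = diff - sparse_update          # suppressed error, kept locally
    return sparse_update, new_last_sent, new_residual
```

    The invariant `new_last_sent + new_residual == grad + residual` means nothing is lost, only deferred, which is what lets the compressed scheme keep converging.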
    Professor Blum’s collaborators on this project include his former student Yicheng Chen ’19G ’21PhD, now a software engineer with LinkedIn; Martin Takác, an associate professor at the Mohamed bin Zayed University of Artificial Intelligence; and Brian M. Sadler, a Life Fellow of the IEEE, U.S. Army Senior Scientist for Intelligent Systems, and Fellow of the Army Research Laboratory.
    Story Source:
    Materials provided by Lehigh University.

  • Component for brain-inspired computing

    Researchers from ETH Zurich, the University of Zurich and Empa have developed a new material for an electronic component that can be used in a wider range of applications than its predecessors. Such components will help create electronic circuits that emulate the human brain and that are more efficient at performing machine-learning tasks.
    Compared with computers, the human brain is incredibly energy efficient. Scientists are therefore drawing on how the brain and its interconnected neurons function for inspiration in designing innovative computing technologies. They foresee that these brain-inspired computing systems will be more energy efficient than conventional ones, as well as better at performing machine-learning tasks.
    Much like neurons, which are responsible for both data storage and data processing in the brain, scientists want to combine storage and processing in a single electronic component type, known as a memristor. Their hope is that this will help to achieve greater efficiency, because moving data between the processor and the storage, as conventional computers do, is the main reason for the high energy consumption in machine learning applications.
    Researchers at ETH Zurich, the University of Zurich and Empa have now developed an innovative concept for a memristor that can be used in a far wider range of applications than existing memristors. “There are different operation modes for memristors, and it is advantageous to be able to use all these modes depending on an artificial neural network’s architecture,” explains ETH postdoc Rohit John. “But previous conventional memristors had to be configured for one of these modes in advance.” The new memristors from the researchers in Zurich can now easily switch between two operation modes while in use: a mode in which the signal grows weaker over time and dies (volatile mode), and one in which the signal remains constant (non-volatile mode).
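    The difference between the two modes can be illustrated with a toy state model. This sketch is purely schematic, it does not model the Zurich device’s physics, and the decay constant is an invented parameter:

```python
class ToyMemristor:
    """Toy two-mode memristive state (schematic only, not the real device).

    volatile mode:     the stored signal weakens at every time step and
                       eventually dies away.
    non-volatile mode: the stored signal persists unchanged.
    """

    def __init__(self, volatile=True, decay=0.5):
        self.volatile = volatile
        self.decay = decay   # fraction of state lost per step (assumed value)
        self.state = 0.0

    def pulse(self, amount=1.0):
        """Apply a write stimulus, loosely analogous to a synaptic event."""
        self.state += amount

    def step(self):
        """Advance one time step; only the volatile mode loses signal."""
        if self.volatile:
            self.state *= 1.0 - self.decay
```

    After a write pulse, a volatile instance loses its state over successive time steps while a non-volatile one holds it indefinitely.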
    Just like in the brain
    “These two operation modes are also found in the human brain,” John says. On the one hand, stimuli at the synapses are transmitted from neuron to neuron with biochemical neurotransmitters. These stimuli start out strong and then gradually become weaker. On the other hand, new synaptic connections to other neurons form in the brain while we learn. These connections are longer-lasting.

  • Technique protects privacy when making online recommendations

    Algorithms recommend products while we shop online or suggest songs we might like as we listen to music on streaming apps.
    These algorithms work by using personal information like our past purchases and browsing history to generate tailored recommendations. The sensitive nature of such data makes preserving privacy extremely important, but existing methods for solving this problem rely on heavy cryptographic tools requiring enormous amounts of computation and bandwidth.
    MIT researchers may have a better solution. They developed a privacy-preserving protocol that is so efficient it can run on a smartphone over a very slow network. Their technique safeguards personal data while ensuring recommendation results are accurate.
    In addition to safeguarding user privacy, their protocol minimizes the unauthorized transfer of information from the database, known as leakage, even if a malicious agent tries to trick the database into revealing secret information.
    The new protocol could be especially useful in situations where data leaks could violate user privacy laws, like when a health care provider uses a patient’s medical history to search a database for other patients who had similar symptoms or when a company serves targeted advertisements to users under European privacy regulations.
    “This is a really hard problem. We relied on a whole string of cryptographic and algorithmic tricks to arrive at our protocol,” says Sacha Servan-Schreiber, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of the paper that presents this new protocol.

  • Teaching physics to AI makes the student a master

    Researchers at Duke University have demonstrated that incorporating known physics into machine learning algorithms can help the inscrutable black boxes attain new levels of transparency and insight into material properties.
    In one of the first projects of its kind, researchers constructed a modern machine learning algorithm to determine the properties of a class of engineered materials known as metamaterials and to predict how they interact with electromagnetic fields.
    Because it first had to consider the metamaterial’s known physical constraints, the program was essentially forced to show its work. Not only did the approach allow the algorithm to accurately predict the metamaterial’s properties, but it also did so more efficiently than previous methods while providing new insights.
    The results appear online the week of May 9 in the journal Advanced Optical Materials.
    “By incorporating known physics directly into the machine learning, the algorithm can find solutions with less training data and in less time,” said Willie Padilla, professor of electrical and computer engineering at Duke. “While this study was mainly a demonstration showing that the approach could recreate known solutions, it also revealed some insights into the inner workings of non-metallic metamaterials that nobody knew before.”
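    The general pattern of folding known physics into training, common to physics-informed learning broadly and not specific to the Duke model, is a composite objective: a data-mismatch term plus a penalty for violating a known physical constraint. In this sketch the weighting `lam` and the residual function are placeholders:

```python
import numpy as np

def physics_informed_loss(pred, target, physics_residual, lam=1.0):
    """Composite training loss: data mismatch plus a penalty for violating a
    known physical constraint. `physics_residual` returns zero wherever the
    prediction satisfies the constraint exactly; `lam` weights the two terms."""
    data_loss = np.mean((pred - target) ** 2)
    constraint_loss = np.mean(physics_residual(pred) ** 2)
    return data_loss + lam * constraint_loss
```

    Predictions that violate the physics are penalized even where no training data exist, which is why such models can get by with less data and less training time.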
    Metamaterials are synthetic materials composed of many individual engineered features which, through their structure rather than their chemistry, together produce properties not found in nature. In this case, the metamaterial consists of a large grid of silicon cylinders that resemble a Lego baseplate.

  • Researchers create photonic materials for powerful, efficient light-based computing

    University of Central Florida researchers are developing new photonic materials that could one day help enable low power, ultra-fast, light-based computing.
    The unique materials, known as topological insulators, are like wires that have been turned inside out, where the current runs along the outside and the interior is insulated.
    Topological insulators are important because they could be used in circuit designs that allow for more processing power to be crammed into a single space without generating heat, thus avoiding the overheating problem today’s smaller and smaller circuits face.
    In their latest work, published in the journal Nature Materials, the researchers demonstrated a new approach to creating the materials that uses a novel chained honeycomb lattice design.
    The researchers laser-etched the chained honeycomb design onto a sample of silica, the material commonly used to make photonic circuits.
    Nodes in the design allow the researchers to modulate the current without bending or stretching the photonic wires, an essential feature for controlling the flow of light, and thus information, in a circuit.

  • New model could improve matches between students and schools

    For the majority of students in the U.S., residential addresses determine which public elementary, middle, or high school they attend. But with an influx of charter schools and state-funded voucher programs for private schools, as well as a growing number of cities that let students apply to public schools across the district (regardless of zip code), the admissions process can turn into a messy game of matchmaking.
    Simultaneous applications for competitive spots and a lack of coordination among school authorities often result in some students being matched with multiple schools while others are unassigned. It can lead to unfilled seats at the start of the semester and extra stress for students and parents, as well as teachers and administrators.
    Assistant Professor of Economics Bertan Turhan at Iowa State University and his co-authors outline a way to make better, more efficient matches between students and schools in their new study published in Games and Economic Behavior. Turhan says their goal was to create a fairer process that works within realistic parameters.
    “There are a lot of success stories in major U.S. cities where economists and policymakers worked together to improve school choice,” said Turhan. “The algorithm we introduced builds on that and could give school groups some degree of coordination and significantly increase overall student welfare in situations where there’s a lot of competition to get into certain schools.”
    A new matchmaking model
    Using the researchers’ model, each student or family submits one rank-ordered list of public schools to the public school district and another rank-ordered list of private schools to the voucher program. Each school also submits a ranking of students to either the public school district or voucher program.
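    Matching mechanisms of this kind typically build on the classical student-proposing deferred acceptance (Gale-Shapley) algorithm. The sketch below shows only that core procedure; the paper’s contribution, coordination between the public school district and the voucher program, is not modeled here, and all names are invented for the example:

```python
def deferred_acceptance(student_prefs, school_prefs, capacity):
    """Student-proposing deferred acceptance (classical Gale-Shapley).

    Assumes every school ranks every student. Each free student proposes to
    their next-ranked school; a school tentatively holds its best applicants
    up to capacity and bumps the worst-ranked when oversubscribed.
    """
    # rank[s][st] = position of student st in school s's preference list
    rank = {s: {st: i for i, st in enumerate(p)} for s, p in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}   # index of next school to try
    tentative = {s: [] for s in school_prefs}       # current tentative holds
    free = list(student_prefs)
    while free:
        st = free.pop()
        prefs = student_prefs[st]
        if next_choice[st] >= len(prefs):
            continue  # student exhausted their list; remains unmatched
        s = prefs[next_choice[st]]
        next_choice[st] += 1
        tentative[s].append(st)
        tentative[s].sort(key=lambda x: rank[s][x])  # best-ranked first
        if len(tentative[s]) > capacity[s]:
            free.append(tentative[s].pop())          # bump worst-ranked applicant
    return tentative
```

    No student is permanently assigned until the process ends, which is what eliminates the "matched to multiple schools while others are unassigned" outcome described above and yields a stable assignment.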