More stories

  • Researchers magnify hidden biological structures with MAGNIFIERS

    A research team from Carnegie Mellon University and Columbia University has combined two emerging imaging technologies to better view a wide range of biomolecules, including proteins, lipids and DNA, at the nanoscale. Their technique, which brings together expansion microscopy and stimulated Raman scattering microscopy, is detailed in Advanced Science.
    Biomolecules are traditionally imaged using fluorescence microscopy, but that technique has its limitations. It relies on fluorophore-carrying tags that bind to and label molecules of interest. Because these tags emit fluorescent light across a broad range of wavelengths, researchers can use only three to four fluorescent colors in the visible spectrum at a time.
    Unlike fluorescence microscopy, stimulated Raman scattering microscopy (SRS) visualizes the chemical bonds of biomolecules by capturing their vibrational fingerprints. In this sense, SRS doesn’t need labels to see the different types of biomolecules, or even different isotopes, within a sample. In addition, a rainbow of dyes with unique vibrational spectra can be used to image multiple targets. However, SRS has a diffraction limit of about 300 nanometers, making it unable to visualize many of the crucial nanoscale structures found in cells and tissue.
    “Each type of molecule has its own vibrational fingerprint. SRS allows us to see the type of molecule we want by tuning in to the characteristic frequency of its vibrations, something like switching between the radio stations,” said Carnegie Mellon Eberly Family Associate Professor of Biological Sciences Yongxin (Leon) Zhao.
    Zhao’s lab has been developing new imaging tools based on expansion microscopy — a technique that addresses the problem of diffraction limits in a wide range of biological imaging. Expansion microscopy takes biological samples and transforms them into water-soluble hydrogels. The hydrogels can then be treated and made to expand to more than 100 times their original volume. The expanded samples can then be imaged using standard techniques.
    “Just as SRS was able to surmount the limitations of fluorescence microscopy, expansion microscopy surmounts the limitations of SRS,” said Zhao.
    The Carnegie Mellon and Columbia researchers combined SRS and expansion microscopy to create Molecule Anchorable Gel-enabled Nanoscale Imaging of Fluorescence and stimulated Raman Scattering microscopy (MAGNIFIERS). Zhao’s expansion microscopy technique was able to expand samples up to 7.2-fold, allowing the team to use SRS to image smaller molecules and structures than would be possible without expansion.
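    As a rough back-of-the-envelope illustration (ours, not a calculation from the paper), linear expansion improves the effective resolution proportionally, so a 7.2-fold expansion brings SRS’s roughly 300-nanometer diffraction limit down to around 42 nanometers:

      # Illustrative arithmetic only; the two inputs are the approximate
      # figures quoted in the story.
      diffraction_limit_nm = 300.0   # approximate SRS diffraction limit
      linear_expansion = 7.2         # expansion factor reported for MAGNIFIERS

      effective_resolution_nm = diffraction_limit_nm / linear_expansion
      volume_expansion = linear_expansion ** 3

      print(f"effective resolution: ~{effective_resolution_nm:.0f} nm")  # ~42 nm
      print(f"volume expansion:     ~{volume_expansion:.0f}x")           # ~373x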
    In the recently published study, the research team showed that MAGNIFIERS could be used for high-resolution metabolic imaging of protein aggregates, such as those formed in conditions like Huntington’s disease. They also showed that MAGNIFIERS could map the nanoscale locations of eight different markers in brain tissue at one time.
    The researchers plan to continue to develop the MAGNIFIERS technique to achieve higher resolution and higher throughput imaging for understanding the pathology of complex diseases, such as cancer and brain disorders.
    Additional study co-authors include Alexsandra Klimas, Brendan Gallagher, Zhangu Cheng, Feifei Fu, Piyumi Wijesekara and Xi Ren from Carnegie Mellon; and Yupeng Miao, Lixue Shi and Wei Min from Columbia.
    This research was funded by the National Institutes of Health (DP2 OD025926-01, R01 GM128214, R01 GM132860, and R01 EB029523), Carnegie Mellon University, the DSF Charitable Foundation and the U.S. Department of Defense (VR190139).
    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Jocelyn Duffy.

  • Accelerating the pace of machine learning

    Machine learning happens a lot like erosion.
    Data is hurled at a mathematical model like grains of sand skittering across a rocky landscape. Some of those grains simply sail along with little or no impact. But some of them make their mark: testing, hardening, and ultimately reshaping the landscape according to inherent patterns and fluctuations that emerge over time.
    Effective? Yes. Efficient? Not so much.
    Rick Blum, the Robert W. Wieseman Professor of Electrical and Computer Engineering at Lehigh University, seeks to bring efficiency to distributed learning techniques emerging as crucial to modern artificial intelligence (AI) and machine learning (ML). In essence, his goal is to hurl far fewer grains of data without degrading the overall impact.
    In the paper “Distributed Learning With Sparsified Gradient Differences,” published in a special ML-focused issue of the IEEE Journal of Selected Topics in Signal Processing, Blum and collaborators propose the use of “Gradient Descent method with Sparsification and Error Correction,” or GD-SEC, to improve the communications efficiency of machine learning conducted in a “worker-server” wireless architecture. The issue was published May 17, 2022.
    “Problems in distributed optimization appear in various scenarios that typically rely on wireless communications,” he says. “Latency, scalability, and privacy are fundamental challenges.”
    “Various distributed optimization algorithms have been developed to solve this problem,” he continues, “and one primary method is to employ classical GD in a worker-server architecture. In this environment, the central server updates the model’s parameters after aggregating data received from all workers, and then broadcasts the updated parameters back to the workers. But the overall performance is limited by the fact that each worker must transmit all of its data all of the time. When training a deep neural network, this can be on the order of 200 MB from each worker device at each iteration. This communication step can easily become a significant bottleneck on overall performance, especially in federated learning and edge AI systems.”
    Through the use of GD-SEC, Blum explains, communication requirements are significantly reduced. The technique employs a data compression approach where each worker sets small magnitude gradient components to zero — the signal-processing equivalent of not sweating the small stuff. The worker then only transmits to the server the remaining non-zero components. In other words, meaningful, usable data are the only packets launched at the model.
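    As a minimal sketch of the worker-side idea (our illustration: the function name and fixed threshold are hypothetical, standing in for the paper’s adaptive rule), each worker transmits only the large-magnitude components of its gradient difference and stashes the suppressed remainder for later correction:

      import numpy as np

      def worker_message(grad, state, threshold=1e-3):
          """One round of sparsified, error-corrected communication (illustrative)."""
          residual = state.get("residual", np.zeros_like(grad))
          last_sent = state.get("last_sent", np.zeros_like(grad))

          # Communicate the *difference* from what the server already knows,
          # corrected by the residual suppressed in earlier rounds.
          candidate = grad - last_sent + residual

          # Sparsification: zero out small-magnitude components.
          message = np.where(np.abs(candidate) >= threshold, candidate, 0.0)

          # Error correction: stash the suppressed mass for future rounds.
          state["residual"] = candidate - message
          state["last_sent"] = last_sent + message
          return message  # only the non-zero entries need to be transmitted

    The server then aggregates these sparse updates from all workers and broadcasts the new parameters, exactly as in the classical worker-server loop described above.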
    “Current methods create a situation where each worker has an expensive computational cost; GD-SEC is relatively cheap, as only one GD step is needed at each round,” says Blum.
    Professor Blum’s collaborators on this project include his former student Yicheng Chen ’19G ’21PhD, now a software engineer with LinkedIn; Martin Takác, an associate professor at the Mohamed bin Zayed University of Artificial Intelligence; and Brian M. Sadler, a Life Fellow of the IEEE, U.S. Army Senior Scientist for Intelligent Systems, and Fellow of the Army Research Laboratory.
    Story Source:
    Materials provided by Lehigh University.

  • Component for brain-inspired computing

    Researchers from ETH Zurich, the University of Zurich and Empa have developed a new material for an electronic component that can be used in a wider range of applications than its predecessors. Such components will help create electronic circuits that emulate the human brain and that are more efficient at performing machine-learning tasks.
    Compared with computers, the human brain is incredibly energy efficient. Scientists are therefore drawing on how the brain and its interconnected neurons function for inspiration in designing innovative computing technologies. They foresee that these brain-inspired computing systems will be more energy efficient than conventional ones, as well as better at performing machine-learning tasks.
    Much as neurons handle both data storage and data processing in the brain, scientists want to combine storage and processing in a single type of electronic component, known as a memristor. Their hope is that this will help to achieve greater efficiency, because moving data between the processor and the storage, as conventional computers do, is the main reason for the high energy consumption in machine learning applications.
    Researchers at ETH Zurich, the University of Zurich and Empa have now developed an innovative concept for a memristor that can be used in a far wider range of applications than existing memristors. “There are different operation modes for memristors, and it is advantageous to be able to use all these modes depending on an artificial neural network’s architecture,” explains ETH postdoc Rohit John. “But previous conventional memristors had to be configured for one of these modes in advance.” The new memristors from the researchers in Zurich can now easily switch between two operation modes while in use: a mode in which the signal grows weaker over time and dies (volatile mode), and one in which the signal remains constant (non-volatile mode).
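    A toy model makes the distinction concrete (purely illustrative; it does not describe the actual device physics): in volatile mode the stored signal decays away, while in non-volatile mode it holds until rewritten.

      import math

      def stored_signal(t, mode, w0=1.0, tau=5.0):
          """Toy picture of a memristor's stored signal at time t (arbitrary units)."""
          if mode == "volatile":
              return w0 * math.exp(-t / tau)   # grows weaker over time and dies
          if mode == "non-volatile":
              return w0                        # remains constant until rewritten
          raise ValueError(f"unknown mode: {mode}")

      for t in (0, 5, 10, 15):
          print(t, round(stored_signal(t, "volatile"), 3), stored_signal(t, "non-volatile"))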
    Just like in the brain
    “These two operation modes are also found in the human brain,” John says. On the one hand, stimuli at the synapses are transmitted from neuron to neuron with biochemical neurotransmitters. These stimuli start out strong and then gradually become weaker. On the other hand, new synaptic connections to other neurons form in the brain while we learn. These connections are longer-lasting.

  • Technique protects privacy when making online recommendations

    Algorithms recommend products while we shop online or suggest songs we might like as we listen to music on streaming apps.
    These algorithms work by using personal information like our past purchases and browsing history to generate tailored recommendations. The sensitive nature of such data makes preserving privacy extremely important, but existing methods for solving this problem rely on heavy cryptographic tools requiring enormous amounts of computation and bandwidth.
    MIT researchers may have a better solution. They developed a privacy-preserving protocol that is so efficient it can run on a smartphone over a very slow network. Their technique safeguards personal data while ensuring recommendation results are accurate.
    In addition to safeguarding user privacy, their protocol minimizes the unauthorized transfer of information from the database, known as leakage, even if a malicious agent tries to trick the database into revealing secret information.
    The new protocol could be especially useful in situations where data leaks could violate user privacy laws, like when a health care provider uses a patient’s medical history to search a database for other patients who had similar symptoms or when a company serves targeted advertisements to users under European privacy regulations.
    “This is a really hard problem. We relied on a whole string of cryptographic and algorithmic tricks to arrive at our protocol,” says Sacha Servan-Schreiber, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of the paper that presents this new protocol.

  • Teaching physics to AI makes the student a master

    Researchers at Duke University have demonstrated that incorporating known physics into machine learning algorithms can help the inscrutable black boxes attain new levels of transparency and insight into material properties.
    In one of the first projects of its kind, researchers constructed a modern machine learning algorithm to determine the properties of a class of engineered materials known as metamaterials and to predict how they interact with electromagnetic fields.
    Because it first had to consider the metamaterial’s known physical constraints, the program was essentially forced to show its work. Not only did the approach allow the algorithm to accurately predict the metamaterial’s properties, it did so more efficiently than previous methods while providing new insights.
    The results appear online the week of May 9 in the journal Advanced Optical Materials.
    “By incorporating known physics directly into the machine learning, the algorithm can find solutions with less training data and in less time,” said Willie Padilla, professor of electrical and computer engineering at Duke. “While this study was mainly a demonstration showing that the approach could recreate known solutions, it also revealed some insights into the inner workings of non-metallic metamaterials that nobody knew before.”
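    The article does not spell out the authors’ exact formulation, but the general recipe in physics-informed learning is to penalize violations of known physical constraints alongside the usual data-fitting error, so the model cannot ignore the physics. A minimal sketch with hypothetical names:

      import numpy as np

      def physics_informed_loss(pred, target, physics_residual, lam=0.1):
          """Generic physics-informed objective (illustrative, not the paper's loss).

          physics_residual: how badly the prediction violates a known constraint,
                            e.g. a governing equation evaluated on the prediction.
          lam:              weight trading off data fit against physics consistency.
          """
          data_loss = np.mean((pred - target) ** 2)
          physics_loss = np.mean(physics_residual ** 2)
          return data_loss + lam * physics_loss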
    Metamaterials are synthetic materials composed of many individual engineered features, which together produce, through their structure rather than their chemistry, properties not found in nature. In this case, the metamaterial consists of a large grid of silicon cylinders that resemble a Lego baseplate.

  • Researchers create photonic materials for powerful, efficient light-based computing

    University of Central Florida researchers are developing new photonic materials that could one day help enable low power, ultra-fast, light-based computing.
    The unique materials, known as topological insulators, are like wires that have been turned inside out, where the current runs along the outside and the interior is insulated.
    Topological insulators are important because they could be used in circuit designs that allow for more processing power to be crammed into a single space without generating heat, thus avoiding the overheating problem today’s smaller and smaller circuits face.
    In their latest work, published in the journal Nature Materials, the researchers demonstrated a new approach to create the materials that uses a novel, chained, honeycomb lattice design.
    The researchers laser-etched the chained honeycomb design onto a sample of silica, the material commonly used to make photonic circuits.
    Nodes in the design allow the researchers to modulate the current without bending or stretching the photonic wires, an essential feature for controlling the flow of light, and thus information, in a circuit.

  • New model could improve matches between students and schools

    For the majority of students in the U.S., residential addresses determine which public elementary, middle, or high school they attend. But with an influx of charter schools and state-funded voucher programs for private schools, as well as a growing number of cities that let students apply to public schools across the district (regardless of zip code), the admissions process can turn into a messy game of matchmaking.
    Simultaneous applications for competitive spots and a lack of coordination among school authorities often result in some students being matched with multiple schools while others go unassigned. This can lead to unfilled seats at the start of the semester and extra stress for students and parents, as well as teachers and administrators.
    Assistant Professor of Economics Bertan Turhan at Iowa State University and his co-authors outline a way to make better, more efficient matches between students and schools in their new study published in Games and Economic Behavior. Turhan says their goal was to create a fairer process that works within realistic parameters.
    “There are a lot of success stories in major U.S. cities where economists and policymakers worked together to improve school choice,” said Turhan. “The algorithm we introduced builds on that and could give school groups some degree of coordination and significantly increase overall student welfare in situations where there’s a lot of competition to get into certain schools.”
    A new matchmaking model
    Using the researchers’ model, each student or family submits one rank-ordered list of public schools to the public school district and another rank-ordered list of private schools to the voucher program. Each school also submits a ranking of students to either the public school district or the voucher program.
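    For context, many of the school-choice success stories Turhan mentions build on the classic student-proposing deferred-acceptance algorithm of Gale and Shapley. The sketch below shows that textbook baseline for a single authority; it is background only, not the coordination algorithm the paper introduces:

      def deferred_acceptance(student_prefs, school_prefs, capacities):
          """Student-proposing deferred acceptance (Gale-Shapley) for one authority."""
          rank = {s: {st: i for i, st in enumerate(prefs)}
                  for s, prefs in school_prefs.items()}
          next_choice = {st: 0 for st in student_prefs}
          held = {s: [] for s in school_prefs}       # tentatively admitted students
          unmatched = list(student_prefs)

          while unmatched:
              st = unmatched.pop()
              prefs = student_prefs[st]
              if next_choice[st] >= len(prefs):
                  continue                           # list exhausted: stays unassigned
              school = prefs[next_choice[st]]
              next_choice[st] += 1
              held[school].append(st)
              # Keep the highest-priority students up to capacity; bump the rest.
              held[school].sort(key=lambda x: rank[school].get(x, float("inf")))
              while len(held[school]) > capacities[school]:
                  unmatched.append(held[school].pop())

          assignment = {st: None for st in student_prefs}
          for school, admitted in held.items():
              for st in admitted:
                  assignment[st] = school
          return assignment

      # Tiny example: three students competing for two seats across two schools.
      students = {"ana": ["north", "south"], "ben": ["north", "south"], "caz": ["north"]}
      schools = {"north": ["caz", "ana", "ben"], "south": ["ana", "ben", "caz"]}
      print(deferred_acceptance(students, schools, {"north": 1, "south": 1}))
      # -> {'ana': 'south', 'ben': None, 'caz': 'north'}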

  • Energy-efficient AI hardware technology via a brain-inspired stashing system?

    Researchers have proposed a novel system inspired by the neuromodulation of the brain, referred to as a ‘stashing system,’ that consumes less energy. The research group led by Professor Kyung Min Kim from the Department of Materials Science and Engineering at KAIST has developed a technology that can efficiently handle mathematical operations for artificial intelligence by imitating how the topology of a neural network continuously changes according to the situation. The human brain changes its neural topology in real time, learning to store or recall memories as needed. The research group presented a new artificial intelligence learning method that directly implements these neural coordination circuit configurations.
    Research on artificial intelligence is becoming very active, and the development of artificial intelligence-based electronic devices and product releases is accelerating, especially in the era of the Fourth Industrial Revolution. To implement artificial intelligence in electronic devices, customized hardware development must also be supported. However, most electronic devices for artificial intelligence require high power consumption and highly integrated memory arrays for large-scale tasks. It has been challenging to solve these power consumption and integration limitations, and efforts have been made to find out how the human brain solves problems.
    To prove the efficiency of the developed technology, the research group created artificial neural network hardware equipped with a self-rectifying synaptic array and ran the ‘stashing system’ algorithm it developed for artificial intelligence learning. As a result, the stashing system reduced energy consumption by 37% without any accuracy degradation. This result demonstrates that emulating human neuromodulation is possible.
    Professor Kim said, “In this study, we implemented the learning method of the human brain with only a simple circuit composition and through this we were able to reduce the energy needed by nearly 40 percent.”
    This neuromodulation-inspired stashing system that mimics the brain’s neural activity is compatible with existing electronic devices and commercialized semiconductor hardware. It is expected to be used in the design of next-generation semiconductor chips for artificial intelligence.
    This study was published in Advanced Functional Materials in March 2022 and supported by KAIST, the National Research Foundation of Korea, the National NanoFab Center, and SK Hynix.
    Story Source:
    Materials provided by The Korea Advanced Institute of Science and Technology (KAIST).