More stories

  •

    Boosting memory performance by strong ion bombardment

    Recently, new technology has emerged that dramatically improves the performance of flash memory through a strong ion bombardment process. This memory platform can reliably store multiple data levels in a single device, increasing memory capacity and making it suitable for future neuromorphic computing.
    POSTECH professor Yoonyoung Chung (Department of Electrical Engineering and Department of Semiconductor Engineering) and Ph.D. candidate Seongmin Park (Department of Electrical Engineering), in joint research with Samsung Electronics, have developed a flash memory with increased data storage by intentionally generating defects.
    As artificial intelligence technology advances, novel semiconductor devices optimized for neural networks and capable of storing multilevel data are required. New materials and devices have been developed for neuromorphic computing, but they fall short of flash memory, long the standard storage device for a wide range of applications, in durability, scalability, and storage capacity.
    To overcome these issues, the research team implemented a strong plasma bombardment process during the deposition of the data-storage layer to generate artificial defect sites in a flash memory device. The researchers confirmed that more electrons can be stored in generated defects, dramatically increasing the amount of data storage compared to conventional flash memory.
    Multilevel memory is achieved by gradually filling electrons into the defect-rich data-storage layer. The multilevel flash memory developed in this study can reliably distinguish eight data levels.
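    As a back-of-the-envelope illustration (a sketch, not the authors' device model), eight distinguishable levels correspond to three bits per cell; the voltage window and level spacing below are assumptions chosen purely for illustration:

    ```python
    # Illustrative sketch only: maps 3-bit values to eight hypothetical
    # threshold-voltage levels, as a multilevel cell might encode them.
    # The voltage range and spacing are assumptions, not device data.
    import math

    NUM_LEVELS = 8                               # eight distinguishable data levels
    BITS_PER_CELL = int(math.log2(NUM_LEVELS))   # -> 3 bits per cell

    V_MIN, V_MAX = 0.0, 3.5                      # assumed threshold-voltage window (V)
    STEP = (V_MAX - V_MIN) / (NUM_LEVELS - 1)

    def encode(value: int) -> float:
        """Map a 3-bit value (0-7) to a nominal threshold voltage."""
        if not 0 <= value < NUM_LEVELS:
            raise ValueError("value must fit in 3 bits")
        return V_MIN + value * STEP

    def decode(voltage: float) -> int:
        """Map a read-back voltage to the nearest stored level."""
        return min(range(NUM_LEVELS), key=lambda lvl: abs(encode(lvl) - voltage))

    print(BITS_PER_CELL)   # 3
    print(encode(0b101))   # 2.5
    print(decode(2.4))     # 5
    ```

    Real devices distinguish levels by sensing threshold-voltage windows with guard bands; the point of the sketch is only the level-to-bits bookkeeping.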
    The findings are significant in that they avoid the risks of developing an entirely new semiconductor material or structure while significantly advancing flash memory, with excellent performance and scalability for AI applications. When applied to neuromorphic systems, inference accuracy and reliability are expected to improve dramatically compared with conventional devices.
    Recently published in Materials Today Nano, a renowned international academic journal in the field of nanotechnology, this study was supported by Samsung Electronics and the Next-generation Intelligence-Type Semiconductor Development Program.
    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.

  •

    Ant colonies behave like neural networks when making decisions

    Temperatures are rising, and one colony of ants will soon have to make a collective decision. Each ant feels the rising heat beneath its feet but carries on as usual until, suddenly, the ants reverse course. The whole group rushes out as one: a decision to evacuate has been made. It is almost as if the colony of ants has a greater, collective mind.
    A new study suggests that ants as a group do indeed behave similarly to networks of neurons in a brain.
    Rockefeller’s Daniel Kronauer and postdoctoral associate Asaf Gal developed a new experimental setup to meticulously analyze decision-making in ant colonies. As reported in the Proceedings of the National Academy of Sciences, they found that when a colony evacuates due to rising temperatures, its decision is a function of both the magnitude of the heat increase and the size of the ant group.
    The findings suggest that ants combine sensory information with the parameters of their group to arrive at a group response — a process similar to neural computations giving rise to decisions.
    “We pioneered an approach to understand the ant colony as a cognitive-like system that perceives inputs and then translates them into behavioral outputs,” says Kronauer, head of the Laboratory of Social Evolution and Behavior. “This is one of the first steps toward really understanding how insect societies engage in collective computation.”
    A new paradigm
    At its most basic level, decision-making boils down to a series of computations meant to maximize benefits and minimize costs. For instance, in a common type of decision-making called sensory response thresholding, an animal must detect a sensory input, such as heat, above a certain level before it produces a costly behavior, like moving away. If the rise in temperature isn’t big enough, responding won’t be worth it.
    Kronauer and Gal wanted to investigate how this type of information processing occurs at the collective level, where group dynamics come into play. They developed a system in which they could precisely perturb an ant colony with controlled temperature increases. To track the behavioral responses of individual ants and the entire colony, they marked each insect with different colored dots and followed their movements with a tracking camera.
    As the researchers expected, colonies of a set size of 36 workers and 18 larvae dependably evacuated their nest when the temperature hit around 34 degrees Celsius. This finding makes intuitive sense, Kronauer says, because “if you become too uncomfortable, you leave.”
    However, the researchers were surprised to find that the ants were not merely responding to temperature itself. When they increased the size of the colony from 10 to 200 individuals, the temperature necessary to trigger the decision to vacate increased. Colonies of 200 individuals, for example, held out until temperatures soared past 36 degrees. “It seems that the threshold isn’t fixed. Rather, it’s an emergent property that changes depending on the group size,” Kronauer says.
    Individual ants are unaware of the size of their colony, so how can their decision depend on it? Kronauer and Gal suspect that the explanation has to do with the way pheromones, the invisible messengers that pass information between ants, scale their effect when more ants are present. They used a mathematical model to show that such a mechanism is indeed plausible. But they do not know why larger colonies would require higher temperatures to pack up shop. Kronauer ventures that it could simply be that the larger the colony, the more onerous it is to relocate, pushing up the critical temperature at which relocation happens.
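    As a purely illustrative toy (not the authors' pheromone-based model), one can interpolate between the two group sizes reported above to see how an emergent, size-dependent evacuation threshold might look:

    ```python
    # Toy illustration only: a crude interpolation of the reported numbers
    # (colonies of ~36 workers evacuated near 34 degrees C, colonies of 200
    # held out past 36 degrees C). This is NOT the authors' pheromone model;
    # the linear form in log group size is an assumption made for illustration.
    import math

    # (group size, approximate evacuation temperature in degrees C) from the article
    REPORTED = [(36, 34.0), (200, 36.0)]

    def evacuation_threshold(group_size: int) -> float:
        """Interpolate an emergent evacuation threshold from the two reported points."""
        (n1, t1), (n2, t2) = REPORTED
        slope = (t2 - t1) / (math.log(n2) - math.log(n1))
        return t1 + slope * (math.log(group_size) - math.log(n1))

    for n in (36, 100, 200):
        print(n, round(evacuation_threshold(n), 1))
    # 36 -> 34.0, 100 -> ~35.2, 200 -> 36.0
    ```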
    In future studies, Kronauer and Gal hope to refine their theoretical model of the decision-making process in the ant colony by interfering with more parameters and seeing how the insects respond. For example, they can tamper with the level of pheromones in the ants’ enclosure or create genetically altered ants with different abilities to detect temperature changes. “What we’ve been able to do so far is to perturb the system and measure the output precisely,” Kronauer says. “In the long term, the idea is to reverse engineer the system to deduce its inner workings in more and more detail.”
    Story Source:
    Materials provided by Rockefeller University. Note: Content may be edited for style and length.

  •

    New method can improve explosion detection

    Computers can be trained to better detect distant nuclear detonations, chemical blasts and volcano eruptions by learning from artificial explosion signals, according to a new method devised by a University of Alaska Fairbanks scientist.
    The work, led by UAF Geophysical Institute postdoctoral researcher Alex Witsil, was published recently in the journal Geophysical Research Letters.
    Witsil, at the Geophysical Institute’s Wilson Alaska Technical Center, and colleagues created a library of synthetic infrasound explosion signals to train computers to recognize the source of an infrasound signal. Infrasound is sound at frequencies too low to be heard by humans, and it travels farther than higher-frequency, audible waves.
    “We used modeling software to generate 28,000 synthetic infrasound signals, which, though generated in a computer, could hypothetically be recorded by infrasound microphones deployed hundreds of kilometers from a large explosion,” Witsil said.
    The artificial signals reflect variations in atmospheric conditions, which can alter an explosion’s signal regionally or globally as the sound waves propagate. Those changes can make it difficult to detect an explosion’s origin and type from a great distance.
    Why create artificial sounds of explosions rather than use real-world examples? Because explosions haven’t occurred at every location on the planet and the atmosphere constantly changes, there aren’t enough real-world examples to train generalized machine-learning detection algorithms.
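    As a rough sketch of that idea (an illustration under assumed features and labels, not the authors' pipeline), synthetic labeled signals can be used to train an off-the-shelf classifier that is then applied to new recordings:

    ```python
    # Illustrative sketch only: trains a generic classifier on synthetic,
    # labeled infrasound examples and applies it to a new signal. The feature
    # extraction and class labels here are assumptions for illustration,
    # not the features or models used in the published study.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    def extract_features(waveform: np.ndarray) -> np.ndarray:
        """Crude spectral features: overall amplitude plus a few FFT band powers."""
        spectrum = np.abs(np.fft.rfft(waveform))
        bands = np.array_split(spectrum, 8)
        return np.array([waveform.std()] + [band.mean() for band in bands])

    def synth(label: int, n: int = 1024) -> np.ndarray:
        """Stand-in for a synthetic signal library: noise with a label-dependent spectral tilt."""
        noise = rng.normal(size=n)
        return noise * np.linspace(1.0, 0.2 if label else 1.0, n)

    labels = rng.integers(0, 2, size=500)          # 1 = "explosion-like", 0 = "background"
    X = np.stack([extract_features(synth(y)) for y in labels])

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
    print(model.predict(extract_features(synth(1)).reshape(1, -1)))
    ```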

  •

    The best semiconductor of them all?

    Silicon is one of the most abundant elements on Earth, and in its pure form the material has become the foundation of much of modern technology, from solar cells to computer chips. But silicon’s properties as a semiconductor are far from ideal.
    For one thing, although silicon lets electrons whizz through its structure easily, it is much less accommodating to “holes” — electrons’ positively charged counterparts — and harnessing both is important for some kinds of chips. What’s more, silicon is not very good at conducting heat, which is why overheating issues and expensive cooling systems are common in computers.
    Now, a team of researchers at MIT, the University of Houston, and other institutions has carried out experiments showing that a material known as cubic boron arsenide overcomes both of these limitations. It provides high mobility to both electrons and holes, and has excellent thermal conductivity. It is, the researchers say, the best semiconductor material ever found, and maybe the best possible one.
    So far, cubic boron arsenide has only been made and tested in small, lab-scale batches that are not uniform. The researchers had to use special methods originally developed by former MIT postdoc Bai Song to test small regions within the material. More work will be needed to determine whether cubic boron arsenide can be made in a practical, economical form, much less replace the ubiquitous silicon. But even in the near future, the material could find some uses where its unique properties would make a significant difference, the researchers say.
    The findings are reported in the journal Science, in a paper by MIT postdoc Jungwoo Shin and MIT professor of mechanical engineering Gang Chen; Zhifeng Ren at the University of Houston; and 14 others at MIT, the University of Houston, the University of Texas at Austin, and Boston College.
    Earlier research, including work by David Broido, who is a co-author of the new paper, had theoretically predicted that the material would have high thermal conductivity; subsequent work proved that prediction experimentally. This latest work completes the analysis by confirming experimentally a prediction made by Chen’s group back in 2018: that cubic boron arsenide would also have very high mobility for both electrons and holes, “which makes this material really unique,” says Chen.

  •

    Buckyballs on gold are less exotic than graphene

    Graphene consists of carbon atoms that crosslink in a plane to form a flat honeycomb structure. In addition to surprisingly high mechanical stability, the material has exciting electronic properties: the electrons behave like massless particles, which can be clearly demonstrated in spectrometric experiments. Measurements reveal a linear dependence of energy on momentum, the so-called Dirac cones: two lines that cross without a band gap, i.e. without an energy difference between electrons in the conduction band and those in the valence band.
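    For orientation, the standard textbook dispersion relations behind these statements (not equations taken from the study) contrast the linear, gapless Dirac cone with the parabolic relationship of ordinary free-electron-like states discussed later in the article:

    ```latex
    % Standard textbook forms (for orientation only, not from the study):
    % a Dirac cone is linear and gapless, an ordinary band is parabolic.
    \[
      E_{\mathrm{Dirac}}(k) = \pm\, \hbar v_{F}\, \lvert k \rvert ,
      \qquad
      E_{\mathrm{parabolic}}(k) = \frac{\hbar^{2} k^{2}}{2 m^{*}}
    \]
    ```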
    Variants in graphene architecture
    Artificial variants of the graphene architecture are a hot topic in materials research right now. Instead of carbon atoms, quantum dots of silicon have been placed in a honeycomb pattern, ultracold atoms have been trapped in a honeycomb lattice with strong laser fields, or carbon monoxide molecules have been pushed into place piece by piece on a copper surface with a scanning tunneling microscope, where they could impart the characteristic graphene properties to the electrons of the copper.
    Artificial graphene with buckyballs?
    A recent study suggested that it is far easier to make artificial graphene using C60 molecules known as buckyballs. Only a uniform layer of these needs to be vapor-deposited onto gold for the gold electrons to take on the special graphene properties. Measurements of photoemission spectra appeared to show a kind of Dirac cone.
    Analysis of band structures at BESSY II
    “That would be really quite amazing,” says Dr. Andrei Varykhalov, of HZB, who heads a photoemission and scanning tunneling microscopy group. “Because the C60 molecule is absolutely nonpolar, it was hard for us to imagine how such molecules would exert a strong influence on the electrons in the gold.” So Varykhalov and his team launched a series of measurements to test this hypothesis.
    In painstaking, detailed analyses, the Berlin team was able to study C60 layers on gold over a much larger energy range and for different measurement parameters. They used angle-resolved photoemission spectroscopy (ARPES) at BESSY II, which enables particularly precise measurements, and also analysed the electron spin for some measurements.
    Normal behavior
    “We see a parabolic relationship between momentum and energy in our measured data, so it’s a very normal behavior. These signals come from electrons deep in the substrate (gold or copper) and not from the surface layer that could be affected by the buckyballs,” explains Dr. Maxim Krivenkov, lead author of the study. The team was also able to explain the linear measurement curves from the previous study. “These measurement curves merely mimic the Dirac cones; they are an artifact, so to speak, of a deflection of the photoelectrons as they leave the gold and pass through the C60 layer,” Varykhalov explains. Therefore, the buckyball layer on gold cannot be considered artificial graphene.
    Story Source:
    Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie. Note: Content may be edited for style and length.

  •

    Software program allows simultaneous viewing of tissue images through dimensionality reduction

    Imaging of tissue specimens is an important aspect of translational research that bridges the gap between basic laboratory science and clinical science to improve the understanding of cancer and aid in the development of new therapies. To analyze images to their fullest potential, scientists ideally need an application that enables multiple images to be viewed simultaneously. In an article published in the journal Patterns, Moffitt Cancer Center researchers describe a new open-source software program they developed that allows users to view many multiplexed images simultaneously.
    There have been significant improvements in the approaches to study cancer over the past decade, including new techniques to study tissue samples. For example, machines can now be programmed to stain hundreds of slides simultaneously, or alternatively, up to 1,000 different tissue sample cores can be placed on a single slide and stained for biomarkers at the same time. With the advent of these approaches comes a wealth of possibilities to generate new data and information. Due to the magnitude of this information and the complex nature of cancer itself, computational modeling and software are needed to view and study the cancer biomarkers, tissue architecture, and cellular interactions among these samples.
    As researchers in Moffitt’s Integrated Mathematical Oncology Department (IMO) were working on a project, they realized that the currently available software for image viewing was not amenable to their needs.
    “We were interested in understanding the underlying spatial patterns between tumor and immune cells and how the tumors were organized. This required us to compare multiple images simultaneously and we realized there was no software, free or commercial, enabling this,” said Sandhya Prabhakaran, Ph.D., lead author and applied research scientist at Moffitt.
    The IMO team decided to create a software program that would enable them to view multiple images at the same time and extract data through additional analyses for a variety of purposes, including identifying biomarkers and understanding tissue architecture and the spatial organization of different cell types. Their program, called Mistic, takes information from multidimensional images and uses a dimensionality reduction method called t-distributed stochastic neighbor embedding (t-SNE) to abstract each image to a point in reduced space. Mistic is open-source software that can be used with images from Vectra, CyCIF, t-CyCIF and CODEX.
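    As a rough illustration of that idea (a scikit-learn sketch, not Mistic's actual code; the per-channel summary features are an assumption), each multiplexed image can be reduced to a feature vector and then embedded as a single point with t-SNE:

    ```python
    # Illustrative sketch only: summarize each multiplexed image as a feature
    # vector (simple per-channel statistics) and embed every image as a single
    # 2D point with t-SNE. This is not Mistic's code; the feature choices are
    # assumptions for illustration.
    import numpy as np
    from sklearn.manifold import TSNE

    rng = np.random.default_rng(0)

    # Stand-in for 92 multiplexed images, each 64x64 pixels with 6 marker channels.
    images = rng.random((92, 64, 64, 6))

    def summarize(image: np.ndarray) -> np.ndarray:
        """Per-channel mean and standard deviation as a crude image descriptor."""
        return np.concatenate([image.mean(axis=(0, 1)), image.std(axis=(0, 1))])

    features = np.stack([summarize(img) for img in images])   # shape (92, 12)

    # Embed each image to a point in 2D; nearby points suggest similar images.
    embedding = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(features)
    print(embedding.shape)   # (92, 2)
    ```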
    In their publication, the researchers describe the creation of Mistic and some of the applications that it could be used for. For example, they demonstrated that the software could be used to view 92 images from patients with non-small cell lung cancer and deduce how biomarkers cluster across patients with different responses to treatment. In another example, the researchers used Mistic combined with statistical analysis to assess the spatial colocalization and coexpression of immune cell markers in 210 endometrial cancer samples.
    The team is excited about the potential applications for Mistic and have plans to improve the software.
    “We will enhance Mistic to use biologically meaningful regions of interest from the multiplexed image to render the overall image t-SNE. We also have plans to augment Mistic with other visualization software and build a cross-platform viewer plugin to improve the adoption, usability and functionality of Mistic in the biomedical research community,” said Sandy Anderson, Ph.D., author and chair of Moffitt’s IMO Department.
    In addition to the Mistic study, Patterns featured the IMO team in a People of Data article titled “Developing tools for analyzing and viewing multiplexed images,” in which the team introduces themselves and discusses their research passions and the challenges and opportunities relevant to imaging in mathematical oncology.

  •

    AI speeds sepsis detection to prevent hundreds of deaths

    Patients are 20% less likely to die of sepsis because a new AI system developed at Johns Hopkins University catches symptoms hours earlier than traditional methods, an extensive hospital study demonstrates. The system, created by a Johns Hopkins researcher whose young nephew died from sepsis, scours medical records and clinical notes to identify patients at risk of life-threatening complications. The work, which could significantly cut patient mortality from one of the top causes of hospital deaths worldwide, is published today in Nature Medicine and Nature Digital Medicine.
    “It is the first instance where AI is implemented at the bedside, used by thousands of providers, and where we’re seeing lives saved,” said Suchi Saria, founding research director of the Malone Center for Engineering in Healthcare at Johns Hopkins and lead author of the studies, which evaluated more than a half million patients over two years. “This is an extraordinary leap that will save thousands of sepsis patients annually. And the approach is now being applied to improve outcomes in other important problem areas beyond sepsis.” Sepsis occurs when an infection triggers a chain reaction throughout the body. Inflammation can lead to blood clots and leaking blood vessels, and ultimately can cause organ damage or organ failure. About 1.7 million adults develop sepsis every year in the United States and more than 250,000 of them die.
    Sepsis is easy to miss since symptoms such as fever and confusion are common in other conditions, Saria said. The faster it’s caught, the better a patient’s chances for survival. “One of the most effective ways of improving outcomes is early detection and giving the right treatments in a timely way, but historically this has been a difficult challenge due to lack of systems for accurate early identification,” said Saria, who directs the Machine Learning and Healthcare Lab at Johns Hopkins.
    To address the problem, Saria and other Johns Hopkins doctors and researchers developed the Targeted Real-Time Early Warning System. Combining a patient’s medical history with current symptoms and lab results, the machine-learning system shows clinicians when someone is at risk for sepsis and suggests treatment protocols, such as starting antibiotics. The AI tracks patients from when they arrive in the hospital through discharge, ensuring that critical information isn’t overlooked even if staff changes or a patient moves to a different department. During the study, more than 4,000 clinicians from five hospitals used the AI in treating 590,000 patients. The system also reviewed 173,931 previous patient cases. The AI detected 82% of sepsis cases and was accurate nearly 40% of the time.
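    As a deliberately simplified illustration of the general idea (a toy score, not the TREWS model; all features, weights, and thresholds below are assumptions), an early-warning rule can combine vitals, labs, and history into a single risk flag:

    ```python
    # Illustrative sketch only: a toy risk flag combining a few vitals and lab
    # values into a score and alerting above a threshold. This is NOT the TREWS
    # model; the features, weights, and threshold are assumptions chosen purely
    # for illustration.
    from dataclasses import dataclass

    @dataclass
    class Observation:
        heart_rate: float          # beats per minute
        temperature_c: float       # degrees Celsius
        respiratory_rate: float    # breaths per minute
        lactate_mmol_l: float      # serum lactate
        prior_infection: bool      # from the medical history

    def toy_risk_score(obs: Observation) -> float:
        """Weighted sum of deviations from rough normal values (illustrative)."""
        score = 0.0
        score += max(0.0, obs.heart_rate - 90) * 0.02
        score += max(0.0, obs.temperature_c - 38.0) * 0.5
        score += max(0.0, obs.respiratory_rate - 20) * 0.05
        score += max(0.0, obs.lactate_mmol_l - 2.0) * 0.6
        score += 0.5 if obs.prior_infection else 0.0
        return score

    def should_alert(obs: Observation, threshold: float = 1.0) -> bool:
        return toy_risk_score(obs) >= threshold

    patient = Observation(112, 38.9, 24, 3.1, True)
    print(round(toy_risk_score(patient), 2), should_alert(patient))   # 2.25 True
    ```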
    Previous attempts to use electronic tools to detect sepsis caught less than half that many cases and were accurate 2% to 5% of the time. All sepsis cases are eventually caught, but with the current standard of care, the condition kills 30% of the people who develop it. In the most severe cases, where an hour’s delay can be the difference between life and death, the AI detected sepsis an average of nearly six hours earlier than traditional methods. “This is a breakthrough in many ways,” said co-author Albert Wu, an internist and director of the Johns Hopkins Center for Health Services and Outcomes Research.
    “Up to this point, most of these types of systems have guessed wrong much more often than they get it right. Those false alarms undermine confidence.” Unlike conventional approaches, the system allows doctors to see why the tool is making specific recommendations. The work is extremely personal to Saria, who lost her nephew as a young adult to sepsis. “Sepsis develops very quickly and this is what happened in my nephew’s case,” she said. “When doctors detected it, he was already in septic shock.” Bayesian Health, a company spun off from Johns Hopkins, led and managed the deployment across all testing sites. The team also partnered with the two largest electronic health record system providers, Epic and Cerner, to ensure that the tool can be implemented at other hospitals. The team has adapted the technology to identify patients at risk for pressure injuries, commonly known as bed sores, and those at risk for sudden deterioration caused by bleeding, acute respiratory failure, and cardiac arrest.
    “The approach used here is foundationally different,” Saria said. “It’s adaptive and takes into consideration the diversity of the patient population, the unique ways in which doctors and nurses deliver care across different sites, and the unique characteristics of each health system, allowing it to be significantly more accurate and to gain provider trust and adoption.”
    Co-authors of the three studies in Nature Medicine and Nature Digital Medicine include Katharine Henry, Roy Adams, Cassandra Parent, David Hager, Edward Chen, Mustapha Saheed, and Albert Wu of Johns Hopkins University; Hossein Soleimani of University of California, San Francisco; Anirudh Sridharan of Howard County General Hospital; Lauren Johnson, Maureen Henley, Sheila Miranda, Katrina Houston, and Anushree Ahluwalia of The Johns Hopkins Hospital; Sara Cosgrove and Eili Klein of Johns Hopkins University School of Medicine; Andrew Markowski of Suburban Hospital; and Robert Linton of Howard County General Hospital.
    The work was funded by the Gordon and Betty Moore Foundation (No. 3926 and 3186.01), the National Science Foundation Future of Work at the Human-technology Frontier (No. 1840088), and the Alfred P. Sloan Foundation research fellowship (2018).
    Story Source:
    Materials provided by Johns Hopkins University. Original written by Laura Cech. Note: Content may be edited for style and length.

  •

    Quantum digits unlock more computational power with fewer quantum particles

    For decades, computers have been synonymous with binary information: zeros and ones. Now a team at the University of Innsbruck, Austria, has realized a quantum computer that breaks out of this paradigm and unlocks additional computational resources hidden in almost all of today’s quantum devices.
    We all learn from early on that computers work with zeros and ones, also known as binary information. This approach has been so successful that computers now power everything from coffee machines to self-driving cars, and it is hard to imagine life without them.
    Building on this success, today’s quantum computers are also designed with binary information processing in mind. “The building blocks of quantum computers, however, are more than just zeros and ones,” explains Martin Ringbauer, an experimental physicist from Innsbruck, Austria. “Restricting them to binary systems prevents these devices from living up to their true potential.”
    The team led by Thomas Monz at the Department of Experimental Physics of the University of Innsbruck has now succeeded in developing a quantum computer that can perform arbitrary calculations with so-called quantum digits (qudits), thereby unlocking more computational power with fewer quantum particles.
    Quantum systems are different
    Although storing information in zeros and ones is not the most efficient way of doing calculations, it is the simplest way. Simple often also means reliable and robust to errors and so binary information has become the unchallenged standard for classical computers.
    In the quantum world, the situation is quite different. In the Innsbruck quantum computer, for example, information is stored in individual trapped calcium atoms. Each of these atoms naturally has eight different states, of which typically only two are used to store information. Indeed, almost all existing quantum computers have access to more quantum states than they use for computation.
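    To put rough numbers on that (a back-of-the-envelope illustration, not a result from the paper), a register of n particles with d levels each spans d^n states, so using all eight levels of a calcium atom rather than two carries three times as many bits per particle:

    ```python
    # Back-of-the-envelope illustration only (not from the paper): compare the
    # state-space size of qubit and qudit registers of the same length.
    import math

    def num_states(levels_per_particle: int, num_particles: int) -> int:
        """A register of n particles with d levels each spans d**n basis states."""
        return levels_per_particle ** num_particles

    n = 10
    qubit_states = num_states(2, n)     # 2**10 = 1024
    qudit_states = num_states(8, n)     # 8**10 = 1073741824

    # Equivalent number of qubits needed to match the 8-level qudit register:
    print(math.log2(qudit_states))      # 30.0 -> each 8-level qudit carries 3 bits
    print(qubit_states, qudit_states)
    ```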
    A natural approach for hardware and software
    The physicists from Innsbruck now developed a quantum computer that can make use of the full potential of these atoms, by computing with qudits. Contrary to the classical case, using more states does not make the computer less reliable. “Quantum systems naturally have more than just two states and we showed that we can control them all equally well,” says Thomas Monz.
    On the flipside, many of the tasks that need quantum computers, such as problems in physics, chemistry, or material science, are also naturally expressed in the qudit language. Rewriting them for qubits can often make them too complicated for today’s quantum computers. “Working with more than zeros and ones is very natural, not only for the quantum computer but also for its applications, allowing us to unlock the true potential of quantum systems,” explains Martin Ringbauer.
    Story Source:
    Materials provided by University of Innsbruck. Note: Content may be edited for style and length.