More stories

  • Magnetic memory milestone

    Computers and smartphones have different kinds of memory, which vary in speed and power efficiency depending on where they are used in the system. Typically, larger computers, especially those in data centers, use many magnetic hard drives, which are now less common in consumer systems. The magnetic technology they are based on provides very high capacity but lacks the speed of solid-state system memory. Devices based on upcoming spintronic technology may be able to bridge that gap and radically improve upon even the theoretical performance of classical electronic devices.
    Professor Satoru Nakatsuji and Project Associate Professor Tomoya Higo from the Department of Physics at the University of Tokyo, together with their team, explore the world of spintronics and other related areas of solid state physics — broadly speaking, the physics of things that function without moving. Over the years, they have studied special kinds of magnetic materials, some of which have very unusual properties. You’ll be familiar with ferromagnets, as these are the kinds that exist in many everyday applications like computer hard drives and electric motors — you probably even have some stuck to your refrigerator. However, of greater interest to the team are more obscure magnetic materials called antiferromagnets.
    “Like ferromagnets, antiferromagnets’ magnetic properties arise from the collective behavior of their component particles, in particular the spins of their electrons, something analogous to angular momentum,” said Nakatsuji. “Both materials can be used to encode information by changing localized groups of constituent particles. However, antiferromagnets have a distinct advantage in the high speed at which these changes to the information-storing spin states can be made, at the cost of increased complexity.”
    “Some spintronic memory devices already exist. MRAM (magnetoresistive random access memory) has been commercialized and can replace electronic memory in some situations, but it is based on ferromagnetic switching,” said Higo. “After considerable trial and error, I believe we are the first to report the successful switching of spin states in antiferromagnetic material Mn3Sn by using the same method as that used for ferromagnets in the MRAM, meaning we have coaxed the antiferromagnetic substance into acting as a simple memory device.”
    This method of switching is called spin-orbit torque (SOT) switching, and it’s cause for excitement in the technology sector. It uses a fraction of the power that conventional electronic memory needs to change the state of a bit (1 or 0), and although the researchers’ experiments involved switching their Mn3Sn sample in as little as a few milliseconds (thousandths of a second), they are confident that SOT switching could occur on the picosecond (trillionth of a second) scale, which would be orders of magnitude faster than the switching speed of current state-of-the-art electronic computer chips.
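    As a rough illustration of that gap (a back-of-the-envelope sketch using only the timescales quoted above, plus an assumed 3 GHz processor clock for comparison), picosecond switching would be about a billion times faster than the millisecond switching demonstrated in the experiments:

        # Rough comparison of the switching timescales quoted in the article (illustrative only).
        demonstrated = 1e-3     # a few milliseconds, as in the reported experiments
        projected = 1e-12       # the picosecond scale the researchers consider feasible
        clock_period = 1 / 3e9  # ~0.33 ns per cycle for an assumed 3 GHz processor
        print(f"projected vs. demonstrated switching: {demonstrated / projected:.0e}x faster")
        print(f"one 3 GHz clock cycle lasts about {clock_period / 1e-12:.0f} ps")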
    “We achieved this due to the unique material Mn3Sn,” said Nakatsuji. “It proved far easier to work with in this way than other antiferromagnetic materials may have been.”
    “There is no rule book on how to fabricate this material. We aim to create a pure, flat crystal lattice of Mn3Sn from manganese and tin using a process called molecular beam epitaxy,” said Higo. “There are many parameters to this process that have to be fine-tuned, and we are still refining the process to see how it might be scaled up if it’s to become an industrial method one day.”
    Story Source:
    Materials provided by University of Tokyo. Note: Content may be edited for style and length.

  • Melanoma thickness equally hard for algorithms and dermatologists to judge

    Assessing the thickness of melanoma is difficult, whether done by an experienced dermatologist or a well-trained machine-learning algorithm. A study from the University of Gothenburg shows that the algorithm and the dermatologists had an equal success rate in interpreting dermoscopic images.
    In diagnosing melanoma, dermatologists evaluate whether it is an aggressive form (“invasive melanoma”), in which the cancer cells grow down into the dermis and there is a risk of spreading to other parts of the body, or a milder form (“melanoma in situ,” MIS) that develops only in the outer skin layer, the epidermis. Invasive melanomas that grow deeper than one millimeter into the skin are considered thick and, as such, more aggressive.
    Importance of thickness
    Melanomas are assessed by investigation with a dermatoscope — a type of magnifying glass fitted with a bright light. Diagnosing melanoma is often relatively simple, but estimating its thickness is a much greater challenge.
    “As well as providing valuable prognostic information, the thickness may affect the choice of surgical margins for the first operation and how promptly it needs to be performed,” says Sam Polesie, associate professor (docent) of dermatology and venereology at Sahlgrenska Academy, University of Gothenburg. Polesie is also a dermatologist at Sahlgrenska University Hospital and the study’s first author.
    Tie between man and machine
    Using a web platform, 438 international dermatologists assessed nearly 1,500 melanoma images captured with a dermatoscope. The dermatologists’ results were then compared with those from a machine-learning algorithm trained in classifying melanoma depth.

  • 'Pulling back the curtain' to reveal a molecular key to The Wizard of Oz

    Many people and companies worry about sensitive data getting hacked, so encrypting files with digital keys has become more commonplace. Now, researchers reporting in ACS Central Science have developed a durable molecular encryption key from sequence-defined polymers that are built and deconstructed in a sequential way. They hid their molecular key in the ink of a letter, which was mailed and then used to decrypt a file with text from The Wonderful Wizard of Oz.
    Securely sharing data relies on encryption algorithms that jumble up the information and only reveal it when the correct code or digital encryption key is used. Researchers have been developing molecular strategies, including DNA chains and polymers, to durably store and transport encryption keys. Currently, nucleic acids store more information than polymers. The challenge with polymers is that when they get too long, storing more data with each additional monomer becomes less efficient, and figuring out the information they’re hiding with analytical instruments becomes extremely difficult. Recently, Eric Anslyn and colleagues developed a method to deconstruct polymers in a sequential way, allowing their structures to be determined more easily with liquid chromatography-mass spectrometry (LC/MS). So, Anslyn, James Ruether and others wanted to test the method on a mixture of unique polymers hidden in ink to see if the approach could be used to reveal a complex molecular encryption key.
    First, the researchers generated a 256-character-long binary key that could encrypt and decrypt text files when entered into an algorithm. Next, they encoded the key into polymer sequences of eight 10-monomer-long oligourethanes. Only the middle eight monomers held the key, and the two ends acted as placeholders for synthesis and decoding. The decoding placeholder was a unique, isotopically labeled “fingerprint” monomer in each sequence, indicating where each polymer’s encoded information fit in the order of the final digital key. Then the researchers mixed the eight polymers together and used a sequential depolymerization method and LC/MS to determine the original structures and the digital key. Finally, one group of the researchers combined the polymers with isopropanol, glycerol and soot to make an ink, which they used to write a letter that they mailed to other colleagues, who didn’t know the encoded information. These scientists extracted the ink from the paper and followed the same sequential analysis to successfully reconstruct the binary key. They entered the encryption key into the algorithm, revealing a plain text file of The Wonderful Wizard of Oz. The researchers say that their results demonstrate that molecular information encryption with sequence-defined polymer mixtures is durable enough for real-world applications, such as hiding secret messages in letters and plastic objects.
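    To make the bookkeeping concrete, here is a minimal sketch of the scheme described above, assuming a hypothetical 16-letter monomer alphabet that carries 4 bits per monomer; the study’s actual monomer chemistry, synthesis, and readout differ. The 256-character binary key is split across eight sequences of eight information-bearing monomers, each tagged with a fingerprint index so the key can be reassembled from an unordered mixture.

        # Hypothetical sketch of the encoding: a 256-bit key split across eight tagged
        # 8-monomer sequences. The 16-monomer alphabet (4 bits per monomer) is an
        # assumption for illustration, not the chemistry reported in the paper.
        import secrets

        ALPHABET = "ABCDEFGHIJKLMNOP"  # 16 hypothetical monomers, 4 bits each

        def encode(key_bits):
            """Split a 256-character binary key into 8 fingerprint-tagged sequences."""
            assert len(key_bits) == 256
            polymers = []
            for i in range(8):  # 8 polymers x 32 bits each
                chunk = key_bits[32 * i:32 * (i + 1)]
                monomers = "".join(ALPHABET[int(chunk[4 * j:4 * j + 4], 2)]
                                   for j in range(8))  # 8 monomers x 4 bits
                polymers.append((i, monomers))          # i acts as the fingerprint tag
            return polymers

        def decode(polymers):
            """Reassemble the key from an unordered mixture using the fingerprint tags."""
            return "".join(format(ALPHABET.index(m), "04b")
                           for _, monomers in sorted(polymers) for m in monomers)

        key = format(secrets.randbits(256), "0256b")
        assert decode(encode(key)) == key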
    The authors acknowledge funding from the Army Research Office, the Howard Hughes Medical Institute, the Keck Foundation and the Welch Regents Chair.
    Story Source:
    Materials provided by American Chemical Society. Note: Content may be edited for style and length.

  • Learning to fight infection

    Scientific advancements have often been held back by the need for high volumes of data, which can be costly, time-consuming, and sometimes difficult to collect. But there may be a solution to this problem when investigating how our bodies fight illness: a new machine learning method called “MotifBoost.” This approach can help interpret data from T-cell receptors (TCRs) to identify past infections by specific pathogens. By focusing on a collection of short sequences of amino acids in the TCRs, a research team achieved more accurate results with smaller datasets. This work may shed light on the way the human immune system recognizes germs, which may lead to improved health outcomes.
    The recent pandemic has highlighted the vital importance of the human body’s ability to fight back against novel threats. The adaptive immune system uses specialized cells, including T-cells, which prepare an array of diverse receptors that can recognize antigens specific to invading germs even before they arrive for the first time. Therefore, the diversity of the receptors is an important topic of investigation. However, the correspondence between receptors and the antigens they recognize is often difficult to determine experimentally, and current computational methods often fail if not provided with enough data.
    Now, scientists from the Institute of Industrial Science at The University of Tokyo have developed a new machine learning method that can predict the infection of a donor based on limited data of TCRs. “MotifBoost” focuses on very short segments, called k-mers, in each receptor. Although the protein motifs considered by scientists are usually much longer, the team found that extracting the frequency of each combination of three consecutive amino acids was highly effective. “Our machine learning methods trained on small-scale datasets can supplement conventional classification methods which only work on very large datasets,” first author Yotaro Katayama says. MotifBoost was inspired by the fact that different people usually produce similar TCRs when exposed to the same pathogen.
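    A minimal sketch of that k-mer featurization, using toy sequences and plain-Python counting rather than the authors’ MotifBoost implementation: each donor’s repertoire is reduced to a vector of frequencies over all 8,000 possible runs of three consecutive amino acids.

        # Count 3-mer (k-mer, k=3) frequencies across a donor's TCR sequences.
        # The toy repertoire below is illustrative only.
        from collections import Counter
        from itertools import product

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard residues
        KMERS = ["".join(p) for p in product(AMINO_ACIDS, repeat=3)]  # 8,000 possible 3-mers

        def kmer_features(tcr_sequences, k=3):
            """Return a normalized 3-mer frequency vector for one donor's repertoire."""
            counts = Counter()
            for seq in tcr_sequences:
                for i in range(len(seq) - k + 1):
                    counts[seq[i:i + k]] += 1
            total = sum(counts.values()) or 1
            return [counts[kmer] / total for kmer in KMERS]

        donor = ["CASSLGQAYEQYF", "CASSIRSSYEQYF", "CASSPGTGGTDTQYF"]  # toy repertoire
        features = kmer_features(donor)
        print(len(features))  # one 8,000-dimensional feature vector per donor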
    First, the researchers employed an unsupervised learning approach, in which donors were automatically sorted based on patterns found in the data, and showed that the k-mer distributions separated donors into distinct clusters according to whether or not they had previously been infected with cytomegalovirus (CMV). Because unsupervised learning algorithms are not given information about which donors had been infected with CMV, this result indicated that the k-mer information is effective in capturing characteristics of a patient’s immune status. Then, the scientists used the k-mer distribution data for a supervised learning task, in which the algorithm was given the TCR data of each donor, along with labels indicating which donors were infected with a specific disease. The algorithm was then trained to predict the label for unseen samples, and its prediction performance was tested for CMV and HIV.
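    For the supervised step, here is a generic sketch along the lines described above, with scikit-learn’s gradient boosting standing in for the authors’ method and randomly generated placeholder data in place of real repertoires and infection labels; in practice the feature matrix would come from per-donor k-mer vectors like those computed above.

        # Train a classifier on per-donor k-mer feature vectors with known infection labels
        # and evaluate it on held-out donors. Placeholder random data is used here only to
        # make the sketch runnable; it does not reproduce the study's results.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.random((120, 8000))       # placeholder: 120 donors x 8,000 k-mer features
        y = rng.integers(0, 2, size=120)  # placeholder: infected (1) or not (0)

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.25, stratify=y, random_state=0)
        clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
        print("ROC AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))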
    “We found that existing machine learning methods can suffer from learning instability and reduced accuracy when the number of samples drops below a certain critical size. In contrast, MotifBoost performed just as well on the large dataset, and still provided a good result on the small dataset,” says senior author Tetsuya J. Kobayashi. This research may lead to new tests for viral exposure and immune status based on T-cell composition.
    This research is published in Frontiers in Immunology as “Comparative study of repertoire classification methods reveals data efficiency of k-mer feature extraction.”
    Story Source:
    Materials provided by Institute of Industrial Science, The University of Tokyo. Note: Content may be edited for style and length.

  • Scientists reveal genetic architecture underlying alcohol, cigarette abuse

    Have you ever wondered why one person can smoke cigarettes for a year and easily quit, while another person will become addicted for life? Why can’t some people stop themselves from abusing alcohol, while others can take it or leave it? One reason is a person’s genetic proclivity to abuse substances. UNC School of Medicine researchers led by Hyejung Won, PhD, are beginning to understand these underlying genetic differences. The more they learn, the better the chance they will be able to create therapies to help the millions of people who struggle with addiction.
    Won, assistant professor of genetics and member of the UNC Neuroscience Center, and colleagues identified genes linked to cigarette smoking and drinking. The researchers found that these genes are over-represented in certain kinds of neurons — brain cells that trigger other cells to send chemical signals throughout the brain.
    The researchers, who published their work in the journal Molecular Psychiatry, also found that the genes underlying cigarette smoking were linked to the perception of pain and response to food, as well as the abuse of other drugs, such as cocaine. Other genes associated with alcohol use were linked to stress and learning, as well as abuse of other drugs, such as morphine.
    Given the lack of current treatment options for substance use disorder, the researchers also conducted analyses of a publicly available drug database to identify potential new treatments for substance abuse.
    “We found that antipsychotics and other mood stabilizers could potentially provide therapeutic relief for individuals struggling with substance abuse,” said Nancy Sey, graduate student in the Won lab and the first author of the paper. “And we’re confident our research provides a good foundation for research focused on creating better treatments to address drug dependency.”
    Parsing the Genome
    Long-term substance use and substance use disorders have been linked to many common diseases and conditions, such as lung cancer, liver disease, and mental illnesses. Yet, few treatment options are available, largely due to gaps in our understanding of the biological processes involved.

  • New model predicts how temperature affects life from quantum to classical scales

    Every biological process depends critically on temperature. It’s true of the very small, the very large, and every scale in-between, from molecules to ecosystems and across every environment.
    A general theory describing how life depends on temperature has been lacking — until now. In a paper published in the Proceedings of the National Academy of Sciences, researchers led by Jose Ignacio Arroyo, a Santa Fe Institute Postdoctoral Fellow, introduce a simple framework that rigorously predicts how temperature affects living things at all scales.
    “It is very fundamental,” says SFI External Professor Pablo Marquet, an ecologist at the Pontificia Universidad Católica de Chile in Santiago. Marquet, Arroyo’s Ph.D. thesis advisor, also worked on the model. “You can apply this to pretty much every process that is affected by temperature. We hope it will be a landmark contribution.”
    Marquet notes that such a theory could help researchers make accurate predictions in a range of areas, including biological responses to climate change, the spread of infectious diseases, and food production.
    Previous attempts to generalize the effects of temperature on biology have lacked the “big picture” implications built into the new model, says Marquet. Biologists and ecologists often use the Arrhenius equation, for example, to describe how temperature affects the rates of chemical reactions. That approach successfully approximates how temperature influences some biological processes, but it can’t fully account for many others, including metabolism and growth rate.
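    For reference, the Arrhenius equation mentioned here expresses a rate constant k in terms of a pre-exponential factor A, the activation energy E_a, the gas constant R, and the absolute temperature T:

        k = A \exp\left(-\frac{E_a}{R T}\right)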
    Arroyo initially set out to develop a general mathematical model to predict the behavior of any variable in biology. He quickly realized, however, that temperature was a kind of universal predictor and could guide the development of a new model. He started with a theory in chemistry that describes the kinetics of enzymes, but with a few additions and assumptions, he extended the model from the quantum-molecular level to larger, macroscopic scales.
    Importantly, the model combines three elements lacking in earlier attempts. First, like its counterpart in chemistry, it’s derived from first principles. Second, the heart of the model is a single, simple equation with only a few parameters. (Most existing models require a plethora of assumptions and parameters.) Third, “it’s universal in the sense that it can explain patterns and behaviors for any microorganisms or any taxa in any environment,” he says. All temperature responses for different processes, taxa, and scales collapse to the same general functional form.
    “I think that our ability to systematize temperature response has the potential to reveal novel unification in biological processes in order to resolve a variety of controversies,” says SFI Professor Chris Kempes who, along with SFI Professor Geoffrey West, helped the team bridge the quantum-to-classical scales.
    The PNAS paper describes predictions from the new model that align with empirical observations of diverse phenomena, including the metabolic rate of an insect, the relative germination of alfalfa, the growth rate of a bacterium, and the mortality rate of a fruit fly.
    In future publications, Arroyo says, the group plans to derive new predictions from this model — many of which were planned for the first publication. “The paper was just getting too big,” he says.
    Story Source:
    Materials provided by Santa Fe Institute. Note: Content may be edited for style and length.

  • Go with the flow: New findings about moving electricity could improve fusion devices

    Researchers at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) have found that updating a mathematical model to include a physical property known as resistivity could lead to the improved design of doughnut-shaped fusion facilities known as tokamaks.
    “Resistivity is the property of any substance that inhibits the flow of electricity,” said PPPL physicist Nathaniel Ferraro, one of the collaborating researchers. “It’s kind of like the viscosity of a fluid, which inhibits things moving through it. For example, a stone will move more slowly through molasses than water, and more slowly through water than through air.”
    Scientists have discovered a new way that resistivity can cause instabilities in the plasma edge, where temperatures and pressures rise sharply. By incorporating resistivity into models that predict the behavior of plasma, a soup of electrons and atomic nuclei that makes up 99% of the visible universe, scientists can design systems for future fusion facilities that make the plasma more stable.
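    As a point of reference (a textbook relation, not necessarily the exact equations used in this study), resistivity commonly enters extended magnetohydrodynamic models of plasma through a resistive Ohm’s law, in which the resistivity η links the electric field E, the plasma velocity v, the magnetic field B, and the current density J; setting η = 0 recovers the ideal, resistivity-free description:

        \mathbf{E} + \mathbf{v} \times \mathbf{B} = \eta \, \mathbf{J}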
    “We want to use this knowledge to figure out how to develop a model that allows us to plug in certain plasma characteristics and predict whether the plasma will be stable before we actually do an experiment,” said Andreas Kleiner, a PPPL physicist who was the lead author of a paper reporting the results in Nuclear Fusion. “Basically, in this research, we saw that resistivity matters and our models ought to include it,” Kleiner said.
    Fusion, the power that drives the sun and stars, combines light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei — and generates massive amounts of energy. Scientists seek to harness fusion on Earth for a virtually inexhaustible supply of power to generate electricity.
    Scientists want the plasma to be stable because instabilities can lead to plasma eruptions known as edge-localized modes (ELMs) that can damage internal components of the tokamak over time, requiring those components to be replaced more frequently. Future fusion reactors, however, will have to run for months at a time without stopping for repairs.

  • AI tech to automate process of denture design and enhance treatment efficiency without compromising accuracy

    Loss of permanent teeth is usually caused by dental diseases or trauma and is common in the global population, especially among the elderly due to aging and relatively poorer oral health.
    Failure to replace a missing tooth not only affects facial aesthetics and chewing function, but may also lead to jawbone loss and shifting of the surrounding teeth, which can cause malocclusion and bite irregularities with a significant impact on the health of the remaining teeth, gums, jaw muscles and jaw joints.
    Artificial teeth, also known as bridges and dentures, are prosthetic devices used to replace missing teeth. It is essential for a false tooth to resemble the patient’s original tooth so that the patient can retain his or her original appearance, chewing function, and oral and physical health.
    Currently, the process of designing and creating dentures is highly time-consuming, as the existing computerised design process requires tedious manual inputs, collection of teeth occlusion information, and multiple denture fitting procedures due to the limited accuracy of existing technologies.
    Researchers from the Faculty of Dentistry at the University of Hong Kong (HKU) and the Department of Computer Science of Chu Hai College of Higher Education collaborated to develop a new approach that uses artificial intelligence to automate the design of individualised dentures, in order to enhance treatment efficiency and improve the patient experience.
    The AI technology used in the process was based on a 3D Generative Adversarial Network (3D-GAN) algorithm and was tested on 175 participants recruited at HKU. The study shows that the AI technology could reconstruct the shape of a natural, healthy tooth and automate the process of false tooth design with high accuracy.
    “The 3D GAN algorithm was selected due to its superior performance on 3D object reconstruction compared to other AI algorithms. In the preliminary study, 3D GAN was able to rebuild similar shapes to the original teeth for 60% of the cases. It is expected to mature with more AI training data,” co-Investigator, Dr Reinhard Chau explained.
    The new approach only requires the digital model of a patient’s dentition to function. It can learn the features of an individual’s teeth from the rest of the dentition and generate a false tooth that looks like the missing tooth.
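    For readers unfamiliar with the architecture, the following is a minimal 3D-GAN sketch in PyTorch operating on voxelized shapes; the 64-cubed resolution, layer widths, and latent size are illustrative assumptions and not the network reported by the HKU and Chu Hai College team. In practice the generator would also be conditioned on the patient’s remaining dentition so that the output matches the surrounding teeth, as the article describes.

        # Minimal 3D-GAN sketch: a generator that maps a latent vector to a 64x64x64
        # occupancy grid (a candidate tooth shape) and a discriminator that scores how
        # realistic a grid looks. Hyperparameters are illustrative assumptions.
        import torch
        import torch.nn as nn

        class Generator(nn.Module):
            def __init__(self, latent_dim=128):
                super().__init__()
                self.net = nn.Sequential(
                    nn.ConvTranspose3d(latent_dim, 256, 4, 1, 0), nn.BatchNorm3d(256), nn.ReLU(True),  # 4^3
                    nn.ConvTranspose3d(256, 128, 4, 2, 1), nn.BatchNorm3d(128), nn.ReLU(True),         # 8^3
                    nn.ConvTranspose3d(128, 64, 4, 2, 1), nn.BatchNorm3d(64), nn.ReLU(True),           # 16^3
                    nn.ConvTranspose3d(64, 32, 4, 2, 1), nn.BatchNorm3d(32), nn.ReLU(True),            # 32^3
                    nn.ConvTranspose3d(32, 1, 4, 2, 1), nn.Sigmoid(),                                   # 64^3
                )

            def forward(self, z):
                return self.net(z.view(z.size(0), -1, 1, 1, 1))

        class Discriminator(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv3d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),    # 32^3
                    nn.Conv3d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),   # 16^3
                    nn.Conv3d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),  # 8^3
                    nn.Conv3d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, True), # 4^3
                    nn.Conv3d(256, 1, 4, 1, 0), nn.Sigmoid(),              # single realism score
                )

            def forward(self, x):
                return self.net(x).view(-1)

        G, D = Generator(), Discriminator()
        fake = G(torch.randn(2, 128))     # two generated voxel grids
        print(fake.shape, D(fake).shape)  # torch.Size([2, 1, 64, 64, 64]) torch.Size([2])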
    “This will facilitate the treatment workflow for dentists in replacing a missing tooth, as the preparation and fitting process will require minimal time, and a patient will not need to stay at the clinic for long hours,” said Principal Investigator Dr Walter Lam.
    The study entitled “Artificial intelligence-designed single molar dental prostheses: A protocol of prospective experimental study” is published in the journal PLoS ONE. The preliminary results of the study were presented in the recent International Association of Dental Research (IADR) General Session. The study won the IADR Neal Garrett Clinical Research Prize and First runner-up in the 2022 IADR-SEA Hatton Award — Senior Category.
    Story Source:
    Materials provided by The University of Hong Kong. Note: Content may be edited for style and length.