More stories

  • Motion capture reveals why VAR in football struggles with offside decisions

    New research by the University of Bath has used motion capture technology to assess the accuracy of Video Assistant Referee (VAR) technologies in football. The study suggests that VAR is useful for preventing obvious mistakes but is currently not precise enough to give accurate judgements every time.
    VAR was introduced into association football in 2018 to help referees review decisions on goals, red cards, penalties and offsides. The technology uses footage from pitch-side cameras, meaning that VAR operators can view the action from different angles and then offer their judgements on incidents to the head referee, who makes the final decision.
    However, the accuracy and application of VAR have also been questioned by some, including high-profile pundits such as Gary Lineker and Alan Shearer, following controversial decisions that can change the course of a game.
    Critics of VAR further argue that it hampers the flow of the game; however, some research suggests it has reduced the number of fouls, offsides and yellow cards.
    Dr Pooya Soltani, from the University of Bath’s Centre for Analysis of Motion, Entertainment Research and Applications (CAMERA), used optical motion capture systems to assess the accuracy of VAR systems.
    He filmed a football player receiving the ball from a teammate, viewed from different camera angles, whilst recording the 3D positions of the ball and players using optical motion capture cameras.
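    As a rough illustration of why that precision matters, the sketch below reduces an offside call to comparing tracked positions along the pitch axis at the moment of the pass. It is a simplified toy model with made-up numbers, not the study’s actual analysis pipeline, and it ignores the other conditions of the offside law; the point is that a position error of a few centimetres can flip the call.

```python
# Toy offside check (hypothetical; not the University of Bath analysis code).
def is_offside(attacker_x, second_last_defender_x, ball_x):
    """Positions in metres along the pitch axis; larger x means closer to the goal line.
    Ignores the other offside conditions (own half, active involvement, etc.)."""
    return attacker_x > second_last_defender_x and attacker_x > ball_x

# Ground truth from optical motion capture vs. an estimate from camera footage.
mocap_attacker_x, video_attacker_x = 31.42, 31.30   # ~12 cm estimation error
defender_x, ball_x = 31.35, 28.00

print(is_offside(mocap_attacker_x, defender_x, ball_x))   # True  -> offside
print(is_offside(video_attacker_x, defender_x, ball_x))   # False -> the call flips
```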

  • Physicists use quantum simulation tools to study, understand exotic state of matter

    Physicists have demonstrated how simulations using quantum computing can enable observation of a distinctive state of matter taken out of its normal equilibrium. Such novel states of matter could one day lead to developments in fast, powerful quantum information storage and precision measurement science.
    Thomas Iadecola worked his way through the title of the latest research paper that includes his theoretical and analytical work, patiently explaining digital quantum simulation, Floquet systems and symmetry-protected topological phases.
    Then he offered explanations of nonequilibrium systems, time crystals, 2T periodicity and the 2016 Nobel Prize in Physics.
    Iadecola’s corner of quantum condensed matter physics — the study of how states of matter emerge from collections of atoms and subatomic particles — can be counterintuitive and needs an explanation at almost every turn and term.
    The bottom line, as explained by the Royal Swedish Academy of Sciences in announcing that 2016 physics prize to David Thouless, Duncan Haldane and Michael Kosterlitz, is that researchers are revealing more and more of the secrets of exotic matter, “an unknown world where matter can assume strange states.”
    The new paper published in the journal Nature and co-authored by Iadecola, an Iowa State University assistant professor of physics and astronomy and an Ames National Laboratory scientist, describes simulations using quantum computing that enabled observation of a distinctive state of matter taken out of its normal equilibrium.

  • Idea of ice age 'species pump' in the Philippines boosted by new way of drawing evolutionary trees

    Does the Philippines’ astonishing biodiversity result in part from rising and falling seas during the ice ages?
    Scientists have long thought the unique geography of the Philippines — coupled with seesawing ocean levels — could have created a “species pump” that triggered massive diversification by isolating, then reconnecting, groups of species again and again on islands. They call the idea the “Pleistocene aggregate island complex (PAIC) model” of diversification.
    But hard evidence, connecting bursts of speciation to the precise times that global sea levels rose and fell, has been scant until now.
    A groundbreaking Bayesian method and new statistical analyses of genomic data from geckos in the Philippines show that the timing of gecko diversification during the ice ages gives strong statistical support, for the first time, to the PAIC model, or “species pump.” The investigation, with roots at the University of Kansas, was just published in the Proceedings of the National Academy of Sciences.
    “The Philippines is an isolated archipelago, currently including more than 7,100 islands, but this number was dramatically reduced, possibly to as few as six or seven giant islands, during the Pleistocene,” said co-author Rafe Brown, curator-in-charge of the herpetology division of the Biodiversity Institute and Natural History Museum at KU. “The aggregate landmasses were composed of many of today’s smaller islands, which became connected together by dry land as sea levels fell, and all that water was tied up in glaciers. It’s been hypothesized that this kind of fragmentation and fusion of land, which happened as sea levels repeatedly fluctuated over the last 4 million years, sets the stage for a special evolutionary process, which may have triggered simultaneous clusters or bursts of speciation in unrelated organisms present at the time. In this case, we tested this prediction in two different genera of lizards, each with species found only in the Philippines.”
    For decades, the Philippines has been a hotbed of fieldwork by biologists with KU’s Biodiversity Institute, where the authors analyzed genetic samples of Philippine geckos as well as other animals. However, even with today’s technology and scientists’ ability to characterize variation from across the genome, the development of powerful statistical approaches capable of handling genome-scale data is still catching up — particularly in challenging cases, like the task of estimating past times that species formed, using genetic data collected from populations surviving today.

  • Magnetic memory milestone

    Computers and smartphones have different kinds of memory, which vary in speed and power efficiency depending on where they are used in the system. Typically, larger computers, especially those in data centers, use a lot of magnetic hard drives, which are now less common in consumer systems. The magnetic technology these are based on provides very high capacity but lacks the speed of solid-state system memory. Devices based on upcoming spintronic technology may be able to bridge that gap and radically improve upon even the theoretical performance of classical electronic devices.
    Professor Satoru Nakatsuji and Project Associate Professor Tomoya Higo from the Department of Physics at the University of Tokyo, together with their team, explore the world of spintronics and other related areas of solid state physics — broadly speaking, the physics of things that function without moving. Over the years, they have studied special kinds of magnetic materials, some of which have very unusual properties. You’ll be familiar with ferromagnets, as these are the kinds that exist in many everyday applications like computer hard drives and electric motors — you probably even have some stuck to your refrigerator. However, of greater interest to the team are more obscure magnetic materials called antiferromagnets.
    “Like ferromagnets, antiferromagnets’ magnetic properties arise from the collective behavior of their component particles, in particular the spins of their electrons, something analogous to angular momentum,” said Nakatsuji. “Both materials can be used to encode information by changing localized groups of constituent particles. However, antiferromagnets have a distinct advantage in the high speed at which these changes to the information-storing spin states can be made, at the cost of increased complexity.”
    “Some spintronic memory devices already exist. MRAM (magnetoresistive random access memory) has been commercialized and can replace electronic memory in some situations, but it is based on ferromagnetic switching,” said Higo. “After considerable trial and error, I believe we are the first to report the successful switching of spin states in antiferromagnetic material Mn3Sn by using the same method as that used for ferromagnets in the MRAM, meaning we have coaxed the antiferromagnetic substance into acting as a simple memory device.”
    This method of switching is called spin-orbit torque (SOT) switching, and it’s cause for excitement in the technology sector. It uses a fraction of the power required by conventional switching to change the state of a bit (1 or 0) in memory, and although the researchers’ experiments involved switching their Mn3Sn sample in as little as a few milliseconds (thousandths of a second), they are confident that SOT switching could occur on the picosecond (trillionths of a second) scale, which would be orders of magnitude faster than the switching speed of current state-of-the-art electronic computer chips.
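    To put those quoted time scales side by side, the snippet below does the back-of-the-envelope arithmetic; the numbers are illustrative values taken from the paragraph above, not measurements.

```python
# Rough comparison of the switching times quoted above (illustrative values only).
demonstrated_s = 1e-3    # a few milliseconds, as in the reported experiments
projected_s    = 1e-12   # the picosecond scale the team believes is reachable

print(f"speed-up factor: {demonstrated_s / projected_s:.0e}")   # 1e+09, i.e. nine orders of magnitude
```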
    “We achieved this due to the unique material Mn3Sn,” said Nakatsuji. “It proved far easier to work with in this way than other antiferromagnetic materials may have been.”
    “There is no rule book on how to fabricate this material. We aim to create a pure, flat crystal lattice of Mn3Sn from manganese and tin using a process called molecular beam epitaxy,” said Higo. “There are many parameters to this process that have to be fine-tuned, and we are still refining the process to see how it might be scaled up if it’s to become an industrial method one day.”
    Story Source:
    Materials provided by University of Tokyo.

  • Melanoma thickness equally hard for algorithms and dermatologists to judge

    Assessing the thickness of melanoma is difficult, whether done by an experienced dermatologist or a well-trained machine-learning algorithm. A study from the University of Gothenburg shows that the algorithm and the dermatologists had an equal success rate in interpreting dermoscopic images.
    In diagnosing melanoma, dermatologists evaluate whether it is an aggressive form (“invasive melanoma”), where the cancer cells grow down into the dermis and there is a risk of spreading to other parts of the body, or a milder form (“melanoma in situ,” MIS) that develops in the outer skin layer, the epidermis, only. Invasive melanomas that grow deeper than one millimeter into the skin are considered thick and, as such, more aggressive.
    Importance of thickness
    Melanomas are assessed by investigation with a dermatoscope — a type of magnifying glass fitted with a bright light. Diagnosing melanoma is often relatively simple, but estimating its thickness is a much greater challenge.
    “As well as providing valuable prognostic information, the thickness may affect the choice of surgical margins for the first operation and how promptly it needs to be performed,” says Sam Polesie, associate professor (docent) of dermatology and venereology at Sahlgrenska Academy, University of Gothenburg. Polesie is also a dermatologist at Sahlgrenska University Hospital and the study’s first author.
    Tie between man and machine
    Using a web platform, 438 international dermatologists assessed nearly 1,500 melanoma images captured with a dermatoscope. The dermatologists’ results were then compared with those from a machine-learning algorithm trained in classifying melanoma depth.

  • 'Pulling back the curtain' to reveal a molecular key to The Wizard of Oz

    Many people and companies worry about sensitive data getting hacked, so encrypting files with digital keys has become more commonplace. Now, researchers reporting in ACS Central Science have developed a durable molecular encryption key from sequence-defined polymers that are built and deconstructed in a sequential way. They hid their molecular key in the ink of a letter, which was mailed and then used to decrypt a file with text from The Wonderful Wizard of Oz.
    Securely sharing data relies on encryption algorithms that jumble up the information and only reveal it when the correct code or digital encryption key is used. Researchers have been developing molecular strategies, including DNA chains and polymers, to durably store and transport encryption keys. Currently, nucleic acids can store more information than synthetic polymers. The challenge with polymers is that when they get too long, storing more data with each additional monomer becomes less efficient, and figuring out the information they’re hiding with analytical instruments becomes extremely difficult. Recently, Eric Anslyn and colleagues developed a method to deconstruct polymers in a sequential way, allowing their structures to be determined more easily with liquid chromatography-mass spectrometry (LC/MS). So, Anslyn, James Ruether and others wanted to test the method on a mixture of unique polymers hidden in ink to see if the approach could be used to reveal a complex molecular encryption key.
    First, the researchers generated a 256-character-long binary key that could encrypt and decrypt text files when entered into an algorithm. Next, they encoded the key into polymer sequences of eight 10-monomer-long oligourethanes. Only the middle eight monomers held the key, and the two ends acted as placeholders for synthesis and decoding. The decoding placeholder was a unique, isotopically labeled “fingerprint” monomer in each sequence, indicating where each polymer’s encoded information fit in the order of the final digital key. Then the researchers mixed the eight polymers together and used a sequential depolymerization method and LC/MS to determine the original structures and the digital key. Finally, one group of the researchers combined the polymers with isopropanol, glycerol and soot to make an ink, which they used to write a letter that they mailed to other colleagues, who didn’t know the encoded information. These scientists extracted the ink from the paper and followed the same sequential analysis to successfully reconstruct the binary key. They entered the encryption key into the algorithm, revealing a plain text file of The Wonderful Wizard of Oz. The researchers say that their results demonstrate that molecular information encryption with sequence-defined polymer mixtures is durable enough for real-world applications, such as hiding secret messages in letters and plastic objects.
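    The sketch below is a software analogue of that encoding scheme, under the assumption that the 256-character binary key is simply split into eight 32-character chunks, one per polymer, with an index tag standing in for the isotopically labeled fingerprint monomer. It illustrates the bookkeeping only, not the authors’ chemistry or code.

```python
# Software analogue of the polymer key scheme (a sketch, not the authors' method).
import secrets

key = format(secrets.randbits(256), "0256b")            # 256-character binary key

# One chunk per oligourethane; the index tag plays the role of the "fingerprint"
# monomer that records where the chunk belongs in the final key.
tagged_chunks = [(i, key[32 * i:32 * (i + 1)]) for i in range(8)]

# Mixing the polymers loses their order...
mixture = sorted(tagged_chunks, key=lambda _: secrets.randbits(32))

# ...but sequencing recovers the tags, and the tags restore the key.
recovered = "".join(chunk for _, chunk in sorted(mixture))
assert recovered == key
```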
    The authors acknowledge funding from the Army Research Office, the Howard Hughes Medical Institute, the Keck Foundation and the Welch Regents Chair.
    Story Source:
    Materials provided by American Chemical Society.

  • Learning to fight infection

    Scientific advancements have often been held back by the need for high volumes of data, which can be costly, time-consuming, and sometimes difficult to collect. But there may be a solution to this problem when investigating how our bodies fight illness: a new machine learning method called “MotifBoost.” This approach can help interpret data from T-cell receptors (TCRs) to identify past infections by specific pathogens. By focusing on a collection of short sequences of amino acids in the TCRs, a research team achieved more accurate results with smaller datasets. This work may shed light on the way the human immune system recognizes germs, which may lead to improved health outcomes.
    The recent pandemic has highlighted the vital importance of the human body’s ability to fight back against novel threats. The adaptive immune system uses specialized cells, including T-cells, which prepare an array of diverse receptors that can recognize antigens specific to invading germs even before they arrive for the first time. Therefore, the diversity of the receptors is an important topic of investigation. However, the correspondence between receptors and the antigens they recognize is often difficult to determine experimentally, and current computational methods often fail if not provided with enough data.
    Now, scientists from the Institute of Industrial Science at The University of Tokyo have developed a new machine learning method that can predict the infection of a donor based on limited data of TCRs. “MotifBoost” focuses on very short segments, called k-mers, in each receptor. Although the protein motifs considered by scientists are usually much longer, the team found that extracting the frequency of each combination of three consecutive amino acids was highly effective. “Our machine learning methods trained on small-scale datasets can supplement conventional classification methods which only work on very large datasets,” first author Yotaro Katayama says. MotifBoost was inspired by the fact that different people usually produce similar TCRs when exposed to the same pathogen.
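    A minimal sketch of that k-mer idea, assuming each donor’s repertoire is simply a list of TCR amino-acid sequences (the sequences below are toy examples, and this is not the MotifBoost implementation itself):

```python
# Count 3-mers (k = 3) across a donor's TCR sequences and normalize to frequencies.
from collections import Counter

def kmer_frequencies(sequences, k=3):
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    total = sum(counts.values()) or 1
    return {kmer: n / total for kmer, n in counts.items()}

donor = ["CASSLGQAYEQYF", "CASSPGTGELFF"]     # toy CDR3 sequences
features = kmer_frequencies(donor)
print(features["CAS"])                        # frequency of the 3-mer "CAS"
```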
    First, the researchers employed an unsupervised learning approach, in which donors were automatically sorted based on patterns found in the data, and showed that, based on their k-mer distributions, donors formed distinct clusters according to whether or not they had previously been infected with cytomegalovirus (CMV). Because unsupervised learning algorithms do not have information about which donors had been infected with CMV, this result indicated that the k-mer information is effective in capturing characteristics of a patient’s immune status. Then, the scientists used the k-mer distribution data for a supervised learning task, in which the algorithm was given the TCR data of each donor, along with labels for which donors were infected with a specific disease. The algorithm was then trained to predict the label for unseen samples, and the prediction performance was tested for CMV and HIV.
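    Those two analysis steps can be sketched on synthetic data as below; scikit-learn and the toy feature matrix are assumptions for illustration, not the tooling or data used in the paper.

```python
# Clustering without labels, then supervised prediction, on synthetic k-mer profiles.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=100)          # 1 = CMV-positive donor (synthetic)
X = rng.random((100, 400))                     # 100 donors x 400 possible 3-mers
X[labels == 1, :20] += 0.5                     # stand-in for shared infection motifs

# Step 1: unsupervised clustering -- do donors separate without seeing the labels?
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
agreement = max((clusters == labels).mean(), (clusters != labels).mean())
print("cluster/label agreement:", agreement)

# Step 2: supervised classification -- predict infection status for held-out donors.
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```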
    “We found that existing machine learning methods can suffer from learning instability and reduced accuracy when the number of samples drops below a certain critical size. In contrast, MotifBoost performed just as well on the large dataset, and still provided a good result on the small dataset,” says senior author Tetsuya J. Kobayashi. This research may lead to new tests for viral exposure and immune status based on T-cell composition.
    This research is published in Frontiers in Immunology as “Comparative study of repertoire classification methods reveals data efficiency of k-mer feature extraction.”
    Story Source:
    Materials provided by Institute of Industrial Science, The University of Tokyo.

  • Scientists reveal genetic architecture underlying alcohol, cigarette abuse

    Have you ever wondered why one person can smoke cigarettes for a year and easily quit, while another becomes addicted for life? Why can’t some people stop themselves from abusing alcohol, while others can take it or leave it? One reason is a person’s genetic proclivity to abuse substances. UNC School of Medicine researchers led by Hyejung Won, PhD, are beginning to understand these underlying genetic differences. The more they learn, the better the chance they will be able to create therapies to help the millions of people who struggle with addiction.
    Won, assistant professor of genetics and member of the UNC Neuroscience Center, and colleagues identified genes linked to cigarette smoking and drinking. The researchers found that these genes are over-represented in certain kinds of neurons — brain cells that trigger other cells to send chemical signals throughout the brain.
    The researchers, who published their work in the journal Molecular Psychiatry, also found that the genes underlying cigarette smoking were linked to the perception of pain and response to food, as well as the abuse of other drugs, such as cocaine. Other genes associated with alcohol use were linked to stress and learning, as well as abuse of other drugs, such as morphine.
    Given the lack of current treatment options for substance use disorder, the researchers also conducted analyses of a publicly available drug database to identify potential new treatments for substance abuse.
    “We found that antipsychotics and other mood stabilizers could potentially provide therapeutic relief for individuals struggling with substance abuse,” said Nancy Sey, graduate student in the Won lab and the first author of the paper. “And we’re confident our research provides a good foundation for research focused on creating better treatments to address drug dependency.”
    Parsing the Genome
    Long-term substance use and substance use disorders have been linked to many common diseases and conditions, such as lung cancer, liver disease, and mental illnesses. Yet, few treatment options are available, largely due to gaps in our understanding of the biological processes involved.