More stories

  • The numbers don't lie: Australia is failing at maths and we need to find a new formula to arrest the decline

    Divide, subtract, add, multiply: whatever way you cut it, Australia is heading in one direction when it comes to global maths rankings — downwards.
    From an OECD mathematics ranking of 11th in the world 20 years ago, Australian secondary students are now languishing in 29th place out of 38 countries, according to the most recent statistics.
    The sliding maths rankings have created widespread debate over whether curriculum changes are needed in our schools, but a new international paper co-authored by University of South Australia cognitive psychologist Dr Fernando Marmolejo-Ramos could provide part of the solution.
    In the latest edition of Integrative Psychological and Behavioral Science, Dr Marmolejo-Ramos and researchers from China and Iran explain why simple gestures such as hand motions are important in helping students understand mathematical concepts.
    “Many people struggle with mathematics and there is a lot of anxiety around it because it is an abstract topic,” Dr Marmolejo-Ramos says. “You see the numbers, equations and graphs, but unless you engage human motor and sensory skills, they can be very difficult to grasp.”
    To get maths concepts across, it is important to bring together language, speech intonation, facial expressions and hand gestures, particularly the latter, the researchers say.

  • New method melds data to make a 3-D map of cells' activities

    Just as it’s hard to understand a conversation without knowing its context, it can be difficult for biologists to grasp the significance of gene expression without knowing a cell’s environment. To solve that problem, researchers at Princeton Engineering have developed a method to elucidate a cell’s surroundings so that biologists can make more meaning of gene expression information.
    The researchers, led by Professor of Computer Science Ben Raphael, hope the new system will open the door to identifying rare cell types and choosing cancer treatment options with new precision. Raphael is the senior author of a paper describing the method published May 16 in Nature Methods.
    The basic technique of linking gene expression with a cell’s environment, called spatial transcriptomics (ST), has been around for several years. Scientists section tissue samples onto a microscale grid and link each spot on the grid with information about gene expression. The problem is that current computational tools can only analyze spatial patterns of gene expression in two dimensions. Experiments that use multiple slices from a single tissue sample — such as a region of a brain, heart or tumor — are difficult to synthesize into a complete picture of the cell types in the tissue.
    The Princeton researchers’ method, called PASTE (for Probabilistic Alignment of ST Experiments), integrates information from multiple slices taken from the same tissue sample, providing a three-dimensional view of gene expression within a tumor or a developing organ. When sequence coverage in an experiment is limited due to technical or cost issues, PASTE can also merge information from multiple tissue slices into a single two-dimensional consensus slice with richer gene expression information.
    “Our method was motivated by the observation that oftentimes biologists will perform multiple experiments from the same tissue,” said Raphael. “Now, these replicate experiments are not exactly the same cells, but they’re from the same tissue and therefore should be highly similar.”
    The team’s technique can align multiple slices from a single tissue sample, categorizing cells based on their gene expression profiles while preserving the physical location of the cells within the tissue.
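    As a rough illustration of the alignment step, here is a minimal sketch on synthetic data. PASTE itself solves a richer optimization that also accounts for the spots' spatial coordinates; the simple one-to-one expression matching below (Hungarian algorithm) is a stand-in for that method, not a reproduction of it.

    ```python
    # Toy sketch: match spots across two slices of the same tissue by
    # gene-expression similarity. Synthetic data; illustrative only.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(0)
    n_spots, n_genes = 50, 200

    # Two adjacent slices: slice B is a noisy replicate of slice A.
    slice_a = rng.poisson(5, size=(n_spots, n_genes)).astype(float)
    slice_b = slice_a + rng.normal(0, 1, size=(n_spots, n_genes))

    # Expression dissimilarity between every cross-slice spot pair.
    cost = np.linalg.norm(slice_a[:, None, :] - slice_b[None, :, :], axis=2)

    # Assign each spot in slice A to its best counterpart in slice B.
    row, col = linear_sum_assignment(cost)
    print("mean matched-pair distance:", cost[row, col].mean())
    ```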

  • Scientists identify characteristics to better define long COVID

    A research team supported by the National Institutes of Health has identified characteristics of people with long COVID and those likely to have it. Using machine learning techniques, scientists analyzed an unprecedented collection of electronic health records (EHRs) available for COVID-19 research to better identify who has long COVID. Exploring de-identified EHR data in the National COVID Cohort Collaborative (N3C), a national, centralized public database led by NIH’s National Center for Advancing Translational Sciences (NCATS), the team identified more than 100,000 likely long COVID cases as of October 2021 (as of May 2022, the count is more than 200,000). The findings appeared May 16 in The Lancet Digital Health.
    Long COVID is marked by wide-ranging symptoms, including shortness of breath, fatigue, fever, headaches, “brain fog” and other neurological problems. Such symptoms can last for many months or longer after an initial COVID-19 diagnosis. One reason long COVID is difficult to identify is that many of its symptoms are similar to those of other diseases and conditions. A better characterization of long COVID could lead to improved diagnoses and new therapeutic approaches.
    “It made sense to take advantage of modern data analysis tools and a unique big data resource like N3C, where many features of long COVID can be represented,” said co-author Emily Pfaff, Ph.D., a clinical informaticist at the University of North Carolina at Chapel Hill.
    The N3C data enclave currently includes information representing more than 13 million people nationwide, including nearly 5 million COVID-19-positive cases. The resource enables rapid research on emerging questions about COVID-19 vaccines, therapies, risk factors and health outcomes.
    The new research is part of a related, larger trans-NIH initiative, Researching COVID to Enhance Recovery (RECOVER), which aims to improve the understanding of the long-term effects of COVID-19, called post-acute sequelae of SARS-CoV-2 infection (PASC). RECOVER will accurately identify people with PASC and develop approaches for its prevention and treatment. The program also will answer critical research questions about the long-term effects of COVID through clinical trials, longitudinal observational studies, and more.
    In the Lancet study, Pfaff, Melissa Haendel, Ph.D., at the University of Colorado Anschutz Medical Campus, and their colleagues examined patient demographics, health care use, diagnoses and medications in the health records of 97,995 adult COVID-19 patients in the N3C. They used this information, along with data on nearly 600 long COVID patients from three long COVID clinics, to create three machine learning models to identify long COVID patients.
    In machine learning, scientists “train” computational methods to rapidly sift through large amounts of data to reveal new insights — in this case, about long COVID. The models looked for patterns in the data that could help researchers both understand patient characteristics and better identify individuals with the condition.
    The models focused on identifying potential long COVID patients among three groups in the N3C database: all COVID-19 patients, patients hospitalized with COVID-19, and patients who had COVID-19 but were not hospitalized. The models proved to be accurate, as people identified as at risk for long COVID were similar to patients seen at long COVID clinics. The machine learning systems classified approximately 100,000 patients in the N3C database whose profiles were close matches to those with long COVID.
    “Once you’re able to determine who has long COVID in a large database of people, you can begin to ask questions about those people,” said Josh Fessel, M.D., Ph.D., senior clinical advisor at NCATS and a scientific program lead in RECOVER. “Was there something different about those people before they developed long COVID? Did they have certain risk factors? Was there something about how they were treated during acute COVID that might have increased or decreased their risk for long COVID?”
    The models searched for common features, including new medications, doctor visits and new symptoms, in patients with a positive COVID diagnosis who were at least 90 days out from their acute infection. The models identified patients as having long COVID if they went to a long COVID clinic or demonstrated long COVID symptoms and likely had the condition but hadn’t been diagnosed.
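    To make the workflow concrete, here is a minimal sketch of the general approach: a gradient-boosted classifier trained on tabular EHR-style features. The feature names and data below are invented for illustration and do not reproduce the paper's models or the N3C schema.

    ```python
    # Minimal sketch: flag likely long-COVID patients from tabular
    # EHR-derived features. All features and data are synthetic.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 5000
    X = np.column_stack([
        rng.poisson(2, n),      # new medications since acute infection
        rng.poisson(4, n),      # doctor visits in the post-acute window
        rng.integers(0, 2, n),  # fatigue symptom flag
        rng.integers(0, 2, n),  # shortness-of-breath flag
    ])
    # Synthetic label loosely tied to the features.
    y = (X @ np.array([0.4, 0.2, 1.0, 1.0]) + rng.normal(0, 1, n)) > 3

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
    ```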
    “We want to incorporate the new patterns we’re seeing with the diagnosis code for COVID and include it in our models to try to improve their performance,” said the University of Colorado’s Haendel. “The models can learn from a greater variety of patients and become more accurate. We hope we can use our long COVID patient classifier for clinical trial recruitment.”
    This study was funded by NCATS, which contributed to the design, maintenance and security of the N3C Enclave, and the NIH RECOVER Initiative, supported by NIH OT2HL161847. RECOVER is coordinating, among others, the participant recruitment protocol to which this work contributes. The analyses were conducted with data and tools accessed through the NCATS N3C Data Enclave and supported by NCATS U24TR002306.

  • Shaping the future of light through reconfigurable metasurfaces

    The technological advancement of optical lenses has long been a significant marker of human scientific achievement. Eyeglasses, telescopes, cameras, and microscopes have all literally and figuratively allowed us to see the world in a new light. Lenses are also fundamental to the semiconductor industry’s manufacture of nanoelectronics.
    One of the most impactful breakthroughs in lens technology in recent history has been the development of photonic metasurfaces — artificially engineered nano-scale materials with remarkable optical properties. Georgia Tech researchers at the forefront of this technology have now demonstrated the first-ever electrically tunable photonic metasurface platform, in a study published in Nature Communications.
    “Metasurfaces can make the optical systems very thin, and as they become easier to control and tune, you’ll soon find them in cell phone cameras and similar electronic imaging systems,” said Ali Adibi, professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology.
    The pronounced tuning achieved with the new platform represents a critical advance towards miniaturized reconfigurable metasurfaces. The study reports a record eleven-fold change in reflective properties, a wide spectral tuning range, and much faster tuning speeds.
    Heating Up Metasurfaces
    Metasurfaces are a class of nanophotonic materials in which a large array of miniaturized elements is engineered to affect the transmission and reflection of light at different frequencies in a controlled way.

  • The way of water: Making advanced electronics with H2O

    Water is the secret ingredient in a simple way to create key components for solar cells, X-ray detectors and other optoelectronics devices.
    The next generation of photovoltaics, semiconductors and LEDs could be made using perovskites — an exciting and versatile class of nanomaterials with a distinctive crystal structure.
    Perovskites have already shown similar efficiency to silicon, are cheaper to make, and feature a tuneable bandgap, meaning the energy they are able to absorb, reflect or conduct can be changed to suit different purposes.
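    As a concrete illustration of what a tuneable bandgap buys you, the sketch below converts a bandgap energy into the longest wavelength of light the material can absorb, using the standard relation λ = hc/E. The example bandgap values are generic round numbers, not figures from the study.

    ```python
    # Bandgap energy sets the absorption edge: the longest wavelength a
    # material can absorb. Example values are generic, not from the study.
    PLANCK_EV_NM = 1239.84  # h*c expressed in eV*nm

    def absorption_edge_nm(bandgap_ev: float) -> float:
        """Longest absorbable wavelength (nm) for a given bandgap (eV)."""
        return PLANCK_EV_NM / bandgap_ev

    for gap in (1.5, 2.0, 2.5):  # a plausible tuning range, for illustration
        print(f"bandgap {gap} eV -> edge {absorption_edge_nm(gap):.0f} nm")
    ```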
    Ordinarily, water is kept as far away as possible during the process of creating perovskites. The presence of moisture can lead to defects in the materials, causing them to fall apart more quickly when they’re being used in a device.
    That’s why perovskites for scientific research are often made via spin coating in the sealed environment of a nitrogen glove box.
    Now, though, members of the ARC Centre of Excellence in Exciton Science have found a simple way to control the growth of phase-pure perovskite crystals by harnessing water as a positive factor. This liquid-based mechanism works at room temperature, so the approach remains cost effective.

  • Electronic skin: Physicist develops multisensory hybrid material

    The “smart skin” developed by Anna Maria Coclite is very similar to human skin. It senses pressure, humidity and temperature simultaneously and produces electronic signals. More sensitive robots or more intelligent prostheses are thus conceivable.
    The skin is the largest sensory organ and at the same time the protective covering of the human body. It “feels” several sensory inputs at the same time and reports information about humidity, temperature and pressure to the brain. For Anna Maria Coclite, a material with such multisensory properties is “a kind of ‘holy grail’ in the technology of intelligent artificial materials. In particular, robotics and smart prosthetics would benefit from a better integrated, more precise sensing system similar to human skin.” The ERC grant winner and researcher at the Institute of Solid State Physics at TU Graz has succeeded in developing the three-in-one hybrid material “smart skin” for the next generation of artificial, electronic skin using a novel process. The result of this pioneering research has now been published in the journal Advanced Materials Technologies.
    As delicate as a fingertip
    For almost six years, the team worked on the development of smart skin as part of Coclite’s ERC project Smart Core. With 2,000 individual sensors per square millimetre, the hybrid material is even more sensitive than a human fingertip. Each of these sensors consists of a unique combination of materials: a smart polymer in the form of a hydrogel inside and a shell of piezoelectric zinc oxide. Coclite explains: “The hydrogel can absorb water and thus expands upon changes in humidity and temperature. In doing so, it exerts pressure on the piezoelectric zinc oxide, which responds to this and all other mechanical stresses with an electrical signal.” The result is a wafer-thin material that reacts simultaneously to force, moisture and temperature with extremely high spatial resolution and emits corresponding electronic signals. “The first artificial skin samples are six micrometres thick, or 0.006 millimetres. But it could be even thinner,” says Anna Maria Coclite. In comparison, the human epidermis is 0.03 to 2 millimetres thick. Human skin perceives features down to about one square millimetre in size. The smart skin has a resolution a thousand times finer and can register objects that are too small for human skin to detect (such as microorganisms).
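    The sensing chain Coclite describes can be sketched with the textbook piezoelectric relation Q = d33 · F: swelling of the hydrogel applies a force to the zinc oxide shell, which produces a charge. The coefficient below is an approximate literature value for bulk ZnO, not a measurement of this material.

    ```python
    # Sketch of the transduction step: force on piezoelectric ZnO -> charge.
    # d33 is an approximate textbook value for bulk ZnO, not a measured
    # property of the smart-skin nanorods.
    D33_ZNO = 12.4e-12  # piezoelectric coefficient, C/N

    def piezo_charge(force_newton: float) -> float:
        """Charge (C) generated by a piezoelectric element under a force (N)."""
        return D33_ZNO * force_newton

    for f in (1e-6, 1e-4, 1e-2):  # micro- to centi-newton forces
        print(f"force {f:.0e} N -> charge {piezo_charge(f):.2e} C")
    ```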
    Material processing at the nanoscale
    The individual sensor layers are very thin and at the same time equipped with sensor elements covering the entire surface. This was made possible by a process unique worldwide, in which the researchers combined three known methods from physical chemistry for the first time: chemical vapour deposition for the hydrogel material, atomic layer deposition for the zinc oxide, and nanoimprint lithography for the polymer template. The lithographic preparation of the polymer template was the responsibility of the research group “Hybrid electronics and structuring” headed by Barbara Stadlober. The group is part of Joanneum Research’s Materials Institute based in Weiz.
    Several fields of application are now opening up for the skin-like hybrid material. In healthcare, for example, the sensor material could independently detect microorganisms and report them accordingly. Also conceivable are prostheses that give the wearer information about temperature or humidity, or robots that can perceive their environment more sensitively. On the path to application, smart skin scores with a decisive advantage: the sensory nanorods — the “smart core” of the material — are produced using a vapour-based manufacturing process. This process is already well established in production plants for integrated circuits, for example. The production of smart skin can thus be easily scaled and implemented in existing production lines.
    The properties of smart skin are now being optimized even further. Anna Maria Coclite and her team — here in particular the PhD student Taher Abu Ali — want to extend the temperature range to which the material reacts and improve the flexibility of the artificial skin.
    Story Source:
    Materials provided by Graz University of Technology. Original written by Susanne Filzwieser. Note: Content may be edited for style and length.

  • New approach allows for faster ransomware detection

    Engineering researchers have developed a new approach for implementing ransomware detection techniques, allowing them to detect a broad range of ransomware far more quickly than previous systems.
    Ransomware is a type of malware. When a system is infiltrated by ransomware, the ransomware encrypts that system’s data — making the data inaccessible to users. The people responsible for the ransomware then extort the affected system’s operators, demanding money from the users in exchange for granting them access to their own data.
    Ransomware extortion is hugely expensive, and instances of ransomware extortion are on the rise. The FBI reports receiving 3,729 ransomware complaints in 2021, with costs of more than $49 million. What’s more, 649 of those complaints were from organizations classified as critical infrastructure.
    “Computing systems already make use of a variety of security tools that monitor incoming traffic to detect potential malware and prevent it from compromising the system,” says Paul Franzon, co-author of a paper on the new ransomware detection approach. “However, the big challenge here is detecting ransomware quickly enough to prevent it from getting a foothold in the system. Because as soon as ransomware enters the system, it begins encrypting files.” Franzon is Cirrus Logic Distinguished Professor of Electrical and Computer Engineering at North Carolina State University.
    “There’s a machine-learning algorithm called XGBoost that is very good at detecting ransomware,” says Archit Gajjar, first author of the paper and a Ph.D. student at NC State. “However, when systems run XGBoost as software through a CPU or GPU, it’s very slow. And attempts to incorporate XGBoost into hardware systems have been hampered by a lack of flexibility — they focus on very specific challenges, and that specificity makes it difficult or impossible for them to monitor for the full array of ransomware attacks.
    “We’ve developed a hardware-based approach that allows XGBoost to monitor for a wide range of ransomware attacks, but is much faster than any of the software approaches,” Gajjar says.
    The new approach is called FAXID, and in proof-of-concept testing, the researchers found it was just as accurate as software-based approaches at detecting ransomware. The big difference was speed. FAXID was up to 65.8 times faster than software running XGBoost on a CPU and up to 5.3 times faster than software running XGBoost on a GPU.
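    As a rough illustration of the software baseline that FAXID accelerates, the sketch below trains an XGBoost classifier on synthetic behavioural features and times batch inference on the CPU. The features, sizes and model settings are assumptions for illustration, not the paper's setup.

    ```python
    # Time XGBoost inference on CPU: the software baseline that a
    # hardware accelerator like FAXID is designed to beat.
    # Features are synthetic stand-ins (e.g. file-entropy, I/O-rate stats).
    import time
    import numpy as np
    from xgboost import XGBClassifier

    rng = np.random.default_rng(2)
    n, d = 100_000, 16
    X = rng.random((n, d))     # synthetic behavioural feature vectors
    y = rng.integers(0, 2, n)  # benign (0) vs. ransomware-like (1) labels

    model = XGBClassifier(n_estimators=100, max_depth=6).fit(X, y)

    start = time.perf_counter()
    model.predict(X)           # batch inference on the CPU
    print(f"CPU inference: {time.perf_counter() - start:.3f} s for {n} samples")
    ```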
    “Another advantage of FAXID is that it allows us to run problems in parallel,” Gajjar says. “You could devote all of the dedicated security hardware’s resources to ransomware detection, and detect ransomware more quickly. But you could also allocate the security hardware’s computing power to separate problems. For example, you could devote a certain percentage of the hardware to ransomware detection and another percentage of the hardware to another challenge — such as fraud detection.”
    “Our work on FAXID was funded by the Center for Advanced Electronics through Machine Learning (CAEML), which is a public-private partnership,” Franzon says. “The technology is already being made available to members of the center, and we know of at least one company that is making plans to implement it in their systems.”
    The paper, “FAXID: FPGA-Accelerated XGBoost Inference for Data Centers using HLS,” is being presented at the 30th IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM), being held in New York City from May 15-18. The paper was co-authored by Priyank Kashyap, a Ph.D. student at NC State; Aydin Aysu, an assistant professor of electrical and computer engineering at NC State; and Sumon Dey and Chris Cheng of Hewlett Packard Enterprise.
    The work was supported by CAEML, through National Science Foundation grant number CNS #16-244770, and CAEML member companies.
    Story Source:
    Materials provided by North Carolina State University. Original written by Matt Shipman. Note: Content may be edited for style and length.

  • Eavesdroppers can hack 6G frequency with DIY metasurface

    Crafty hackers can make a tool to eavesdrop on some 6G wireless signals in as little as five minutes using office paper, an inkjet printer, a metallic foil transfer and a laminator.
    The wireless security hack was discovered by engineering researchers from Rice University and Brown University, who will present their findings and demonstrate the attack this week in San Antonio at ACM WiSec 2022, the Association for Computing Machinery’s annual conference on security and privacy in wireless and mobile networks.
    “Awareness of a future threat is the first step to counter that threat,” said study co-author Edward Knightly, Rice’s Sheafor-Lindsay Professor of Electrical and Computer Engineering. “The frequencies that are vulnerable to this attack aren’t in use yet, but they are coming and we need to be prepared.”
    In the study, Knightly, Brown University engineering Professor Daniel Mittleman and colleagues showed an attacker could easily make a sheet of office paper covered with 2D foil symbols — a metasurface — and use it to redirect part of a 150 gigahertz “pencil beam” transmission between two users.
    They dubbed the attack “Metasurface-in-the-Middle” as a nod to both the hacker’s tool and the way it is wielded. Metasurfaces are thin sheets of material with patterned designs that manipulate light or electromagnetic waves. “Man-in-the-middle” is a computer security industry classification for attacks in which an adversary secretly inserts themselves between two parties.
    The 150 gigahertz frequency is higher than is used in today’s 5G cellular or Wi-Fi networks. But Knightly said wireless carriers are looking to roll out 150 gigahertz and similar frequencies known as terahertz waves or millimeter waves over the next decade.
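    A back-of-envelope calculation shows why an office printer is enough at these frequencies: treated as a simple diffraction grating, a printed foil pattern with a millimetre-scale period can steer a 150 gigahertz beam by tens of degrees. The grating period below is an assumed illustrative value, not one from the paper.

    ```python
    # Why a printed foil pattern can redirect a 150 GHz beam: treat the
    # metasurface as a diffraction grating. The grating period is an
    # assumed illustrative value, not taken from the paper.
    import math

    c = 3.0e8           # speed of light, m/s
    f = 150e9           # carrier frequency, Hz
    wavelength = c / f  # = 2 mm, a scale office printing can pattern
    d = 4e-3            # assumed grating period of the foil pattern, m

    # First-order diffraction at normal incidence: sin(theta) = wavelength / d
    theta = math.degrees(math.asin(wavelength / d))
    print(f"wavelength = {wavelength * 1e3:.1f} mm, steered angle = {theta:.0f} deg")
    ```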