More stories

  •

    New technique offers faster security for non-volatile memory tech

    Researchers have developed a technique that leverages hardware and software to improve file system security for next-generation memory technologies called non-volatile memories (NVMs). The new encryption technique also permits faster performance than existing software security technologies.
    “NVMs are an emerging technology that allows rapid access to the data, and retains data even when a system crashes or loses power,” says Amro Awad, senior author of a paper on the work and an assistant professor of electrical and computer engineering at North Carolina State University. “However, the features that give NVMs these attractive characteristics also make it difficult to encrypt files on NVM devices — which raises security concerns. We’ve developed a way to secure files on NVM devices without sacrificing the speed that makes NVMs attractive.”
    “Our technique allows for file-level encryption in fast NVM memories, while cutting the related execution time significantly,” says Kazi Abu Zubair, first author of the paper and a Ph.D. student at NC State.
    Traditionally, computers use two types of data storage. Dynamic random access memory (DRAM) allows quick access to stored data, but will lose that data if the system crashes. Long-term storage technologies, such as hard drives, are good at retaining data even if a system loses power — but store the data in a way that makes it slower to access.
    NVMs combine the best features of both technologies. However, securing files on NVM devices can be challenging.
    Existing methods for file system encryption use software, which is not particularly fast. Historically, this wasn’t a problem because the technologies for accessing file data from long-term storage devices weren’t particularly fast either.
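    For context, the software path the researchers aim to improve on looks roughly like the sketch below: every byte of a file passes through CPU-side encryption before reaching the storage medium. This is a generic illustration using Python’s cryptography package, not the NC State hardware-assisted technique; the key handling and file names are assumptions.

        # Generic software-path file encryption (illustrative only; the NC State
        # work moves this kind of per-file protection into NVM-aware hardware).
        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=256)    # hypothetical per-file key
        aesgcm = AESGCM(key)

        def encrypt_file(src: str, dst: str) -> None:
            nonce = os.urandom(12)                   # 96-bit nonce required by AES-GCM
            with open(src, "rb") as f:
                plaintext = f.read()
            ciphertext = aesgcm.encrypt(nonce, plaintext, None)
            with open(dst, "wb") as f:
                f.write(nonce + ciphertext)          # nonce stored alongside ciphertext

    Every call like this spends CPU time on top of the raw NVM access, which is why software-only encryption erodes the speed advantage that makes NVMs attractive.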

  •

    Artificial intelligence may improve diabetes diagnosis, study shows

    Using a fully automated artificial intelligence (AI) deep learning model, researchers were able to identify early signs of type 2 diabetes on abdominal CT scans, according to a new study published in the journal Radiology.
    Type 2 diabetes affects approximately 13% of all U.S. adults, and an additional 34.5% of adults meet the criteria for prediabetes. Because of the slow onset of symptoms, it is important to diagnose the disease in its early stages. Some cases of prediabetes can last up to eight years, and an earlier diagnosis gives patients time to make lifestyle changes that alter the progression of the disease.
    Abdominal CT imaging can be a promising tool to diagnose type 2 diabetes. CT imaging is already widely used in clinical practice, and it can provide a significant amount of information about the pancreas. Previous studies have shown that patients with diabetes tend to accumulate more visceral fat and fat within the pancreas than non-diabetic patients. However, not much work has been done to study the liver, muscles and blood vessels around the pancreas, said study co-senior author Ronald M. Summers, M.D., Ph.D., senior investigator and staff radiologist at the National Institutes of Health Clinical Center in Bethesda, Maryland.
    “The analysis of both pancreatic and extra-pancreatic features is a novel approach and has not been shown in previous work to our knowledge,” said first author Hima Tallam, B.S.E., M.D./Ph.D. student.
    The manual analysis of low-dose non-contrast pancreatic CT images by a radiologist or trained specialist is a time-intensive and difficult process. To address these clinical challenges, there is a need for the improvement of automated image analysis of the pancreas, the authors said.
    For this retrospective study, Dr. Summers and colleagues, in close collaboration with co-senior author Perry J. Pickhardt, M.D., professor of radiology at the University of Wisconsin School of Medicine & Public Health, used a dataset of patients who had undergone routine colorectal cancer screening with CT at the University of Wisconsin Hospital and Clinics. Of the 8,992 patients screened between 2004 and 2016, 572 had been diagnosed with type 2 diabetes and 1,880 with dysglycemia, a term that refers to blood sugar levels that are too low or too high. There was no overlap between the diabetes and dysglycemia diagnoses.
    To build the deep learning model, the researchers used a total of 471 images obtained from a variety of datasets, including the Medical Data Decathlon, The Cancer Imaging Archive and the Beyond Cranial Vault challenge. The 471 images were then divided into three subsets: 424 for training, 8 for validation and 39 for testing. The researchers also included data from four rounds of active learning.
    The deep learning model displayed excellent results, demonstrating virtually no difference compared to manual analysis. In addition to the various pancreatic features, the model also analyzed the visceral fat, density and volumes of the surrounding abdominal muscles and organs.
    The results showed that patients with diabetes had lower pancreas density and higher visceral fat amounts than patients without diabetes.
    “We found that diabetes was associated with the amount of fat within the pancreas and inside the patients’ abdomens,” Dr. Summers said. “The more fat in those two locations, the more likely the patients were to have diabetes for a longer period of time.”
    The best predictors of type 2 diabetes in the final model included intrapancreatic fat percentage, pancreas fractal dimension, plaque severity between the L1 and L4 vertebral levels, average liver CT attenuation, and BMI. The deep learning model used these predictors to accurately discern patients with and without diabetes.
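    The paper’s exact model is not spelled out here, but a minimal sketch of a classifier built on the predictors named above might look like the following; the feature names, placeholder data, and use of scikit-learn are illustrative assumptions, not the study’s implementation.

        # Illustrative classifier over the predictors named above (not the study's model).
        # In practice the feature values would come from the automated CT analysis.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        FEATURES = ["intrapancreatic_fat_pct", "pancreas_fractal_dim",
                    "l1_l4_plaque_severity", "mean_liver_hu", "bmi"]

        rng = np.random.default_rng(42)
        X = rng.random((200, len(FEATURES)))    # placeholder feature matrix
        y = rng.integers(0, 2, size=200)        # placeholder diabetes labels

        clf = LogisticRegression(max_iter=1000)
        print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())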
    “This study is a step towards the wider use of automated methods to address clinical challenges,” the authors said. “It may also inform future work investigating the reason for pancreatic changes that occur in patients with diabetes.”

  •

    Adding AI to museum exhibits increases learning, keeps kids engaged longer

    Hands-on exhibits are staples of science and children’s museums around the world, and kids love them. The exhibits invite children to explore scientific concepts in fun and playful ways.
    But do kids actually learn from them? Ideally, museum staff, parents or caregivers are on hand to help guide the children through the exhibits and facilitate learning, but that is not always possible.
    Researchers from Carnegie Mellon University’s Human-Computer Interaction Institute (HCII) have demonstrated a more effective way to support learning and increase engagement. They used artificial intelligence to create a new genre of interactive, hands-on exhibits that includes an intelligent, virtual assistant to interact with visitors.
    When the researchers compared their intelligent exhibit to a traditional one, they found that the intelligent exhibit increased learning and the time spent at the exhibit.
    “Having artificial intelligence and computer vision turned the play into learning,” said Nesra Yannier, HCII faculty member and head of the project, who called the results “purposeful play.”
    Earthquake tables are popular exhibits. In a typical example, kids build towers and then watch them tumble on a shaking table. Signs around the exhibit try to engage kids in thinking about science as they play, but it is not clear how well these work or how often they are even read.

  •

    Scientists develop a recyclable pollen-based paper for repeated printing and ‘unprinting’

    Scientists at Nanyang Technological University, Singapore (NTU Singapore) have developed a pollen-based ‘paper’ that, after being printed on, can be ‘erased’ and reused multiple times without any damage to the paper.
    In a research paper published online in Advanced Materials on 5 April, the NTU Singapore scientists demonstrated how high-resolution colour images could be printed on the non-allergenic pollen paper with a laser printer, and then ‘unprinted’ — by completely removing the toner without damaging the paper — with an alkaline solution. They demonstrated that this process could be repeated at least eight times.
    This innovative, printer-ready pollen paper could become an eco-friendly alternative to conventional paper, which is made via a multi-step process with a significant negative environmental impact, said the NTU team led by Professors Subra Suresh and Cho Nam-Joon.
    It could also help to reduce the carbon emissions and energy usage associated with conventional paper recycling, which involves repulping, de-toning (removal of printer toner) and reconstruction.
    The other members of this all-NTU research team are research fellow Dr Ze Zhao, graduate students Jingyu Deng and Hyunhyuk Tae, and former graduate student Mohammed Shahrudin Ibrahim.
    Prof Subra Suresh, NTU President and senior author of the paper, said: “Through this study, we showed that we could print high-resolution colour images on paper produced from a natural, plant-based material that was rendered non-allergenic through a process we recently developed. We further demonstrated the feasibility of doing so repeatedly without destroying the paper, making this material a viable eco-friendly alternative to conventional wood-based paper. This is a new approach to paper recycling — not just by making paper in a more sustainable way, but also by extending the lifespan of the paper so that we get the maximum value out of each piece of paper we produce. The concepts established here, with further developments in scalable manufacturing, could be adapted and extended to produce other “directly printable” paper-based products such as storage and shipping cartons and containers.”
    Prof Cho Nam-Joon, senior author of the paper, said: “Aside from being easily recyclable, our pollen-based paper is also highly versatile. Unlike wood-based conventional paper, pollen is generated in large amounts and is naturally renewable, making it potentially an attractive raw material in terms of scalability, economics, and environmental sustainability. In addition, by integrating conductive materials with the pollen paper, we could potentially use the material in soft electronics, green sensors, and generators to achieve advanced functions and properties.”

  •

    Honey holds potential for making brain-like computer chips

    VANCOUVER, Wash. — Honey might be a sweet solution for developing environmentally friendly components for neuromorphic computers, systems designed to mimic the neurons and synapses found in the human brain.
    Hailed by some as the future of computing, neuromorphic systems are much faster and use much less power than traditional computers. Washington State University engineers have demonstrated one way to make them more organic too. In a study published in Journal of Physics D, the researchers show that honey can be used to make a memristor, a component similar to a transistor that can not only process but also store data in memory.
    “This is a very small device with a simple structure, but it has very similar functionalities to a human neuron,” said Feng Zhao, associate professor in WSU’s School of Engineering and Computer Science and corresponding author on the study. “This means if we can integrate millions or billions of these honey memristors together, then they can be made into a neuromorphic system that functions much like a human brain.”
    For the study, Zhao and first author Brandon Sueoka, a WSU graduate student in Zhao’s lab, created memristors by processing honey into a solid form and sandwiching it between two metal electrodes, making a structure similar to a human synapse. They then tested the honey memristors’ ability to mimic the work of synapses, with fast switching-on and switching-off speeds of 100 and 500 nanoseconds, respectively. The memristors also emulated the synapse functions known as spike-timing dependent plasticity and spike-rate dependent plasticity, which are responsible for learning processes in human brains and retaining new information in neurons.
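    Spike-timing dependent plasticity has a standard pair-based form that neuromorphic hardware is commonly benchmarked against; the sketch below shows that rule with illustrative constants, not measured parameters of the WSU honey devices.

        # Pair-based STDP: the weight change depends on the time difference between
        # a presynaptic and a postsynaptic spike. Constants are illustrative only.
        import math

        A_PLUS, A_MINUS = 0.010, 0.012   # potentiation / depression amplitudes
        TAU = 20e-3                      # time constant in seconds

        def stdp_weight_change(t_pre: float, t_post: float) -> float:
            dt = t_post - t_pre
            if dt > 0:                   # pre fires before post: strengthen
                return A_PLUS * math.exp(-dt / TAU)
            return -A_MINUS * math.exp(dt / TAU)   # post before pre: weaken

        print(stdp_weight_change(0.000, 0.005))    # 5 ms pre-to-post gap: positive update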
    The WSU engineers created the honey memristors on a microscale, so each is about the size of a human hair. The research team led by Zhao plans to develop them on a nanoscale, about 1/1000 the width of a human hair, and bundle many millions or even billions together to make a full neuromorphic computing system.
    Currently, conventional computer systems are based on what’s called the von Neumann architecture. Named after its creator, this architecture involves an input, usually from a keyboard and mouse, and an output, such as a monitor. It also has a CPU, or central processing unit, and RAM, or memory storage. Transferring data through all these mechanisms from input to processing to memory to output takes a lot of power, at least compared to the human brain, Zhao said. For instance, the Fugaku supercomputer uses upwards of 28 megawatts, roughly 28 million watts, to run, while the brain uses only around 10 to 20 watts.
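    The scale of that gap is easy to check, taking the brain at roughly 20 watts (an assumption at the upper end of the range Zhao cites):

        # Rough ratio between Fugaku's power draw and a ~20 W human brain.
        fugaku_watts = 28e6
        brain_watts = 20
        print(f"{fugaku_watts / brain_watts:,.0f}x")   # about 1,400,000x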

  •

    Chemical data management: an open way forward

    One of the most challenging aspects of modern chemistry is managing data. For example, when synthesizing a new compound, scientists will go through multiple attempts of trial-and-error to find the right conditions for the reaction, generating in the process massive amounts of raw data. Such data is of incredible value, as, like humans, machine-learning algorithms can learn much from failed and partially successful experiments.
    The current practice, however, is to publish only the most successful experiments, since no human can meaningfully process the massive amounts of failed ones. Machine learning has changed this: processing such volumes is exactly what these methods can do, provided the data are stored in a machine-actionable format for anyone to use.
    “For a long time, we needed to compress information due to the limited page count in printed journal articles,” says Professor Berend Smit, who directs the Laboratory of Molecular Simulation at EPFL Valais Wallis. “Nowadays, many journals do not even have printed editions anymore; however, chemists still struggle with reproducibility problems because journal articles are missing crucial details. Researchers ‘waste’ time and resources replicating ‘failed’ experiments of authors and struggle to build on top of published results as raw data are rarely published.”
    But volume is not the only problem here; data diversity is another: research groups use different tools like Electronic Lab Notebook software, which store data in proprietary formats that are sometimes incompatible with each other. This lack of standardization makes it nearly impossible for groups to share data.
    Now Smit, together with Luc Patiny and Kevin Jablonka at EPFL, has published a perspective in Nature Chemistry presenting an open platform for the entire chemistry workflow: from the inception of a project to its publication.
    The scientists envision the platform as “seamlessly” integrating three crucial steps: data collection, data processing, and data publication — all with minimal cost to researchers. The guiding principle is that data should be FAIR: easily findable, accessible, interoperable, and reusable. “At the moment of data collection, the data will be automatically converted into a standard FAIR format, making it possible to automatically publish all ‘failed’ and partially successful experiments together with the most successful experiment,” says Smit.
    But the authors go a step further, proposing that data should also be machine-actionable. “We are seeing more and more data-science studies in chemistry,” says Jablonka. “Indeed, recent results in machine learning try to tackle some of the problems chemists believe are unsolvable. For instance, our group has made enormous progress in predicting optimal reaction conditions using machine-learning models. But those models would be much more valuable if they could also learn from reaction conditions that fail; otherwise, they remain biased because only the successful conditions are published.”
    Finally, the authors propose five concrete steps that the field must take to create a FAIR data-management plan:
    • The chemistry community should embrace its own existing standards and solutions.
    • Journals need to make deposition of reusable raw data, where community standards exist, mandatory.
    • We need to embrace the publication of “failed” experiments.
    • Electronic Lab Notebooks that do not allow exporting all data into an open, machine-actionable form should be avoided.
    • Data-intensive research must enter our curricula.
    “We think there is no need to invent new file formats or technologies,” says Patiny. “In principle, all the technology is there, and we need to embrace existing technologies and make them interoperable.”
    The authors also point out that just storing data in any electronic lab notebook — the current trend — does not necessarily mean that humans and machines can reuse the data. Rather, the data must be structured and published in a standardized format, and they also must contain enough context to enable data-driven actions.
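    As a rough illustration of what “structured, with enough context” could mean in practice, a machine-actionable record of a single synthesis attempt might look like the sketch below; the field names and values are hypothetical and do not represent the schema of the EPFL platform.

        # Hypothetical machine-actionable record: failed and successful attempts are
        # captured in the same structured form so that both humans and machine-learning
        # pipelines can reuse them. Field names are illustrative only.
        import json

        record = {
            "project": "synthesis-of-compound-X",
            "attempt": 7,
            "conditions": {"temperature_C": 80, "solvent": "ethanol", "time_h": 12},
            "outcome": {"success": False, "yield_pct": 0.0,
                        "notes": "no precipitate formed"},
        }

        print(json.dumps(record, indent=2))   # standard, tool-agnostic serialization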
    “Our perspective offers a vision of what we think are the key components to bridge the gap between data and machine learning for core problems in chemistry,” says Smit. “We also provide an open science solution in which EPFL can take the lead.”
    Story Source:
    Materials provided by Ecole Polytechnique Fédérale de Lausanne. Original written by Nik Papageorgiou.

  •

    Making a ‘sandwich’ out of magnets and topological insulators, potential for lossless electronics

    A Monash University-led research team has discovered that a structure comprising an ultra-thin topological insulator sandwiched between two 2D ferromagnetic insulators becomes a large-bandgap quantum anomalous Hall insulator.
    Such a heterostructure provides an avenue towards viable ultra-low energy future electronics, or even topological photovoltaics.
    Topological Insulator: The Filling in the Sandwich
    In the researchers’ new heterostructure, a ferromagnetic material forms the ‘bread’ of the sandwich, while a topological insulator (i.e., a material displaying nontrivial topology) takes the place of the ‘filling’.
    Combining magnetism and nontrivial band topology gives rise to quantum anomalous Hall (QAH) insulators, which host exotic quantum phases such as the QAH effect, in which current flows without dissipation along quantized edge states.
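    For reference, the hallmark of a QAH insulator is a Hall conductance quantized by an integer topological invariant, the Chern number C (a standard textbook relation, not a result specific to the Monash heterostructure):

        \sigma_{xy} = C \, \frac{e^{2}}{h}, \qquad C \in \mathbb{Z}

    where e is the electron charge and h is Planck’s constant; a nonzero C corresponds to edge channels that carry current without dissipation.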
    Inducing magnetic order in topological insulators via proximity to a magnetic material offers a promising pathway towards achieving the QAH effect at higher temperatures (approaching or exceeding room temperature) for lossless transport applications.

  •

    Understanding the use of bicycle sharing systems with statistics

    Bicycle sharing systems (BSSs) are a popular transport system in many of the world’s big cities. Not only do BSSs provide a convenient and eco-friendly mode of travel, they also help reduce traffic congestion. Moreover, bicycles can be rented at one port and returned at a different port. Despite these advantages, however, BSSs cannot rely solely on their users to maintain the availability of bicycles at all ports at all times. This is because many bicycle trips go in only one direction, causing excess bicycles to accumulate at some ports and shortages at others.
    This problem is generally solved by rebalancing, which involves strategically dispatching special trucks to relocate excess bicycles to other ports, where they are needed. Efficient rebalancing, however, is an optimization problem of its own, and Professor Tohru Ikeguchi and his colleagues from Tokyo University of Science, Japan, have devoted much work to finding optimal rebalancing strategies. In a study from 2021, they proposed a method for computing optimal rebalancing tours in a relatively short time. However, the researchers only checked the performance of their algorithm using randomly generated ports as benchmarks, which may not reflect the conditions of BSS ports in the real world.
    Addressing this issue, Prof. Ikeguchi has recently led another study, together with PhD student Ms. Honami Tsushima, to find more realistic benchmarks. In their paper published in Nonlinear Theory and Its Applications, IEICE, the researchers sought to create these benchmarks by statistically analyzing the actual usage history of rented and returned bicycles in real BSSs. “Bike sharing systems could become the preferred public transport system globally in the future. It is, therefore, an important issue to address in our societies,” Prof. Ikeguchi explains.
    The researchers used publicly available data from four real BSSs located in four major U.S. cities: Boston, Washington DC, New York City, and Chicago. Save for Boston, these cities have over 560 ports each, with total bicycle counts in the thousands.
    First, a preliminary analysis revealed that an excess and lack of bicycles occurred across all four BSSs during all months of the year, verifying the need for active rebalancing. Next, the team sought to understand the temporal patterns of rented and returned bicycles, for which they treated the logged rent and return events as “point processes.”
    The researchers independently analyzed both point processes using three approaches, namely raster plots, coefficient of variation, and local variation. Raster plots helped them find periodic usage patterns, while coefficient of variation and local variation allowed them to measure the global and local variabilities, respectively, of the random intervals between consecutive bicycle rent or return events.
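    Both interval statistics have standard definitions, sketched below for a list of event timestamps; the simulated timestamps are placeholders, not the BSS data. For a Poisson process, both the coefficient of variation and the local variation are close to 1.

        # Coefficient of variation (Cv) and local variation (Lv) of inter-event intervals.
        # Cv captures global variability; Lv captures variability between consecutive
        # intervals. Both are ~1 for a Poisson process.
        import numpy as np

        def cv_and_lv(event_times: np.ndarray) -> tuple[float, float]:
            t = np.diff(np.sort(event_times))                    # inter-event intervals
            cv = t.std() / t.mean()
            lv = np.mean(3.0 * (t[:-1] - t[1:]) ** 2 / (t[:-1] + t[1:]) ** 2)
            return cv, lv

        rng = np.random.default_rng(0)
        rent_times = np.cumsum(rng.exponential(5.0, size=1000))  # simulated rent events
        print(cv_and_lv(rent_times))                             # both values near 1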
    The analyses of raster plots yielded useful insights about how the four BSSs were used in their respective cities. Most bicycles were used during the daytime and fewer overnight, producing a periodic pattern. Interestingly, from the analyses of the local variation, the team found that usage patterns were similar between weekdays and weekends, contradicting the results of previous studies. Finally, the results indicated that the statistical characteristics of the temporal patterns of rented and returned bikes followed a Poisson process — a widely studied random process — only in New York City. This was an important finding, given the research team’s original objective. “We can now create realistic benchmark instances whose temporal patterns of rented and returned bicycles follow the Poisson process. This, in turn, can help improve the bicycle rebalancing model we proposed in our earlier work,” explains Prof. Ikeguchi.
    Overall, this study paves the way to a deeper understanding of how people use BSSs. Moreover, through further detailed analyses, it should be possible to gain insight into more complex aspects of human life, as Prof. Ikeguchi remarks: “We believe that the analysis of BSS data will lead not only to efficient bike sharing but also to a better understanding of the social dynamics of human movement.”
    In any case, making BSSs a more efficient and attractive option will, hopefully, make a larger percentage of people choose the bicycle as their preferred means of transportation.