More stories

  •

    Scientists develop a recyclable pollen-based paper for repeated printing and ‘unprinting’

    Scientists at Nanyang Technological University, Singapore (NTU Singapore) have developed a pollen-based ‘paper’ that, after being printed on, can be ‘erased’ and reused multiple times without any damage to the paper.
    In a research paper published online in Advanced Materials on 5 April, the NTU Singapore scientists demonstrated how high-resolution colour images could be printed on the non-allergenic pollen paper with a laser printer, and then ‘unprinted’ — by completely removing the toner without damaging the paper — with an alkaline solution. They demonstrated that this process could be repeated at least eight times.
    This innovative, printer-ready pollen paper could become an eco-friendly alternative to conventional paper, which is made via a multi-step process with a significant negative environmental impact, said the NTU team led by Professors Subra Suresh and Cho Nam-Joon.
    It could also help to reduce the carbon emissions and energy usage associated with conventional paper recycling, which involves repulping, de-toning (removal of printer toner) and reconstruction.
    The other members of this all-NTU research team are research fellow Dr Ze Zhao, graduate students Jingyu Deng and Hyunhyuk Tae, and former graduate student Mohammed Shahrudin Ibrahim.
    Prof Subra Suresh, NTU President and senior author of the paper, said: “Through this study, we showed that we could print high-resolution colour images on paper produced from a natural, plant-based material that was rendered non-allergenic through a process we recently developed. We further demonstrated the feasibility of doing so repeatedly without destroying the paper, making this material a viable eco-friendly alternative to conventional wood-based paper. This is a new approach to paper recycling — not just by making paper in a more sustainable way, but also by extending the lifespan of the paper so that we get the maximum value out of each piece of paper we produce. The concepts established here, with further developments in scalable manufacturing, could be adapted and extended to produce other “directly printable” paper-based products such as storage and shipping cartons and containers.”
    Prof Cho Nam-Joon, senior author of the paper, said: “Aside from being easily recyclable, our pollen-based paper is also highly versatile. Unlike wood-based conventional paper, pollen is generated in large amounts and is naturally renewable, making it potentially an attractive raw material in terms of scalability, economics, and environmental sustainability. In addition, by integrating conductive materials with the pollen paper, we could potentially use the material in soft electronics, green sensors, and generators to achieve advanced functions and properties.”

  •

    Honey holds potential for making brain-like computer chips

    VANCOUVER, Wash. — Honey might be a sweet solution for developing environmentally friendly components for neuromorphic computers, systems designed to mimic the neurons and synapses found in the human brain.
    Hailed by some as the future of computing, neuromorphic systems are much faster and use much less power than traditional computers. Washington State University engineers have demonstrated one way to make them more organic too. In a study published in Journal of Physics D, the researchers show that honey can be used to make a memristor, a component similar to a transistor that can not only process but also store data in memory.
    “This is a very small device with a simple structure, but it has very similar functionalities to a human neuron,” said Feng Zhao, associate professor in WSU’s School of Engineering and Computer Science and corresponding author on the study. “This means if we can integrate millions or billions of these honey memristors together, then they can be made into a neuromorphic system that functions much like a human brain.”
    For the study, Zhao and first author Brandon Sueoka, a WSU graduate student in Zhao’s lab, created memristors by processing honey into a solid form and sandwiching it between two metal electrodes, making a structure similar to a human synapse. They then tested the honey memristors’ ability to mimic the work of synapses with high switching on and off speeds of 100 and 500 nanoseconds respectively. The memristors also emulated the synapse functions known as spike-timing dependent plasticity and spike-rate dependent plasticity, which are responsible for learning processes in human brains and retaining new information in neurons.
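    The spike-timing dependent plasticity the memristors emulated can be illustrated with a minimal numerical sketch in Python; the rule and the constants below are generic textbook assumptions, not the model or values reported in the WSU study.

    # Minimal sketch of a spike-timing dependent plasticity (STDP) rule.
    # The constants (a_plus, a_minus, tau_ms) are illustrative assumptions,
    # not values from the honey-memristor study.
    import math

    def stdp_weight_change(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
        """Weight update for one pre/post spike pair.

        dt_ms = t_post - t_pre: positive when the presynaptic spike
        arrives first (potentiation), negative otherwise (depression).
        """
        if dt_ms > 0:
            return a_plus * math.exp(-dt_ms / tau_ms)   # strengthen the synapse
        if dt_ms < 0:
            return -a_minus * math.exp(dt_ms / tau_ms)  # weaken the synapse
        return 0.0

    print(stdp_weight_change(5.0))   # pre before post: small positive change
    print(stdp_weight_change(-5.0))  # post before pre: small negative change

    In a memristive implementation, the analogous operation would be nudging the device’s conductance up or down by a corresponding amount.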
    The WSU engineers created the honey memristors on a microscale, so they are about the width of a human hair. The research team led by Zhao plans to develop them on a nanoscale, about 1/1000 the width of a human hair, and bundle many millions or even billions together to make a full neuromorphic computing system.
    Currently, conventional computer systems are based on what’s called the von Neumann architecture. Named after its creator, this architecture involves an input, usually from a keyboard and mouse, and an output, such as the monitor. It also has a CPU, or central processing unit, and RAM, or memory storage. Transferring data through all these mechanisms from input to processing to memory to output takes a lot of power, at least compared to the human brain, Zhao said. For instance, the Fugaku supercomputer uses upwards of 28 megawatts (28 million watts) to run, while the brain uses only around 10 to 20 watts.

  •

    Chemical data management: an open way forward

    One of the most challenging aspects of modern chemistry is managing data. For example, when synthesizing a new compound, scientists will go through multiple attempts of trial-and-error to find the right conditions for the reaction, generating in the process massive amounts of raw data. Such data is of incredible value, as, like humans, machine-learning algorithms can learn much from failed and partially successful experiments.
    The current practice, however, is to publish only the most successful experiments, since no human can meaningfully process the massive amounts of failed ones. AI has changed this: processing those failed and partially successful experiments is exactly what machine-learning methods can do, provided the data are stored in a machine-actionable format for anyone to use.
    “For a long time, we needed to compress information due to the limited page count in printed journal articles,” says Professor Berend Smit, who directs the Laboratory of Molecular Simulation at EPFL Valais Wallis. “Nowadays, many journals do not even have printed editions anymore; however, chemists still struggle with reproducibility problems because journal articles are missing crucial details. Researchers ‘waste’ time and resources replicating ‘failed’ experiments of authors and struggle to build on top of published results as raw data are rarely published.”
    But volume is not the only problem here; data diversity is another: research groups use different tools like Electronic Lab Notebook software, which store data in proprietary formats that are sometimes incompatible with each other. This lack of standardization makes it nearly impossible for groups to share data.
    Now Smit, together with Luc Patiny and Kevin Jablonka at EPFL, has published a perspective in Nature Chemistry presenting an open platform for the entire chemistry workflow: from the inception of a project to its publication.
    The scientists envision the platform as “seamlessly” integrating three crucial steps: data collection, data processing, and data publication — all with minimal cost to researchers. The guiding principle is that data should be FAIR: easily findable, accessible, interoperable, and re-usable. “At the moment of data collection, the data will be automatically converted into a standard FAIR format, making it possible to automatically publish all ‘failed’ and partially successful experiments together with the most successful experiment,” says Smit.
    But the authors go a step further, proposing that data should also be machine-actionable. “We are seeing more and more data-science studies in chemistry,” says Jablonka. “Indeed, recent results in machine learning try to tackle some of the problems chemists believe are unsolvable. For instance, our group has made enormous progress in predicting optimal reaction conditions using machine-learning models. But those models would be much more valuable if they could also learn from reaction conditions that fail; otherwise, they remain biased because only the successful conditions are published.”
    Finally, the authors propose five concrete steps that the field must take to create a FAIR data-management plan:
    • The chemistry community should embrace its own existing standards and solutions.
    • Journals need to make deposition of reusable raw data mandatory where community standards exist.
    • We need to embrace the publication of “failed” experiments.
    • Electronic Lab Notebooks that do not allow exporting all data into an open, machine-actionable form should be avoided.
    • Data-intensive research must enter our curricula.
    “We think there is no need to invent new file formats or technologies,” says Patiny. “In principle, all the technology is there, and we need to embrace existing technologies and make them interoperable.”
    The authors also point out that just storing data in any electronic lab notebook — the current trend — does not necessarily mean that humans and machines can reuse the data. Rather, the data must be structured and published in a standardized format, and they also must contain enough context to enable data-driven actions.
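    As a rough illustration of what machine-actionable data means in practice (a minimal sketch; the field names and file layout are invented for this example, not the schema proposed in the perspective), every attempt of a synthesis, including the failed ones, can be stored with its conditions and outcome in a structured file that both humans and machine-learning code can read back:

    # Minimal sketch of a structured, machine-actionable experiment record.
    # Field names and values are illustrative only.
    import json

    experiments = [
        {"attempt": 1, "temperature_C": 80,  "solvent": "ethanol",
         "time_h": 12, "yield_percent": 0,  "outcome": "failed"},
        {"attempt": 2, "temperature_C": 100, "solvent": "toluene",
         "time_h": 12, "yield_percent": 41, "outcome": "partial"},
        {"attempt": 3, "temperature_C": 110, "solvent": "toluene",
         "time_h": 24, "yield_percent": 87, "outcome": "success"},
    ]

    # Keeping every attempt (not just the best one) preserves the 'failed'
    # data that machine-learning models need in order to avoid bias.
    with open("synthesis_campaign.json", "w") as fh:
        json.dump({"project": "example-compound", "attempts": experiments}, fh, indent=2)

    # Any downstream analysis can reload and filter the full history.
    with open("synthesis_campaign.json") as fh:
        history = json.load(fh)
    failed = [a for a in history["attempts"] if a["outcome"] == "failed"]
    print(len(failed), "failed attempt(s) retained")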
    “Our perspective offers a vision of what we think are the key components to bridge the gap between data and machine learning for core problems in chemistry,” says Smit. “We also provide an open science solution in which EPFL can take the lead.”
    Story Source:
    Materials provided by Ecole Polytechnique Fédérale de Lausanne. Original written by Nik Papageorgiou.

  •

    Making a ‘sandwich’ out of magnets and topological insulators, potential for lossless electronics

    A Monash University-led research team has discovered that a structure comprising an ultra-thin topological insulator sandwiched between two 2D ferromagnetic insulators becomes a large-bandgap quantum anomalous Hall insulator.
    Such a heterostructure provides an avenue towards viable ultra-low energy future electronics, or even topological photovoltaics.
    Topological Insulator: The Filling in the Sandwich
    In the researchers’ new heterostructure, a ferromagnetic material forms the ‘bread’ of the sandwich, while a topological insulator (i.e., a material displaying nontrivial topology) takes the place of the ‘filling’.
    Combining magnetism and nontrivial band topology gives rise to quantum anomalous Hall (QAH) insulators and to exotic quantum phases such as the QAH effect, in which current flows without dissipation along quantized edge states.
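    For context, the hallmark of the QAH effect is a Hall conductance that is quantized even without an external magnetic field; in standard notation (general background, not a result specific to this study):
    \[ \sigma_{xy} = C \, \frac{e^{2}}{h}, \qquad \sigma_{xx} = 0, \]
    where C is an integer (the Chern number), e is the electron charge and h is Planck’s constant; the vanishing longitudinal conductance is what makes the edge-state transport dissipationless.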
    Inducing magnetic order in topological insulators via proximity to a magnetic material offers a promising pathway towards achieving the QAH effect at higher temperatures (approaching or exceeding room temperature) for lossless transport applications.

  •

    Understanding the use of bicycle sharing systems with statistics

    Bicycle sharing systems (BSSs) are a popular transport system in many of the world’s big cities. Not only do BSSs provide a convenient and eco-friendly mode of travel, they also help reduce traffic congestion. Moreover, bicycles can be rented at one port and returned at a different port. Despite these advantages, however, BSSs cannot rely solely on their users to maintain the availability of bicycles at all ports at all times. This is because many bicycle trips only go in one direction, causing excess bicycles at some ports and a lack of bicycles at others.
    This problem is generally solved by rebalancing, which involves strategically dispatching special trucks to relocate excess bicycles to other ports, where they are needed. Efficient rebalancing, however, is an optimization problem of its own, and Professor Tohru Ikeguchi and his colleagues from Tokyo University of Science, Japan, have devoted much work to finding optimal rebalancing strategies. In a study from 2021, they proposed a method for finding optimal rebalancing tours in a relatively short time. However, the researchers only checked the performance of their algorithm using randomly generated ports as benchmarks, which may not reflect the conditions of BSS ports in the real world.
    Addressing this issue, Prof. Ikeguchi has recently led another study, together with PhD student Ms. Honami Tsushima, to find more realistic benchmarks. In their paper published in Nonlinear Theory and Its Applications, IEICE, the researchers sought to create these benchmarks by statistically analyzing the actual usage history of rented and returned bicycles in real BSSs. “Bike sharing systems could become the preferred public transport system globally in the future. It is, therefore, an important issue to address in our societies,” Prof. Ikeguchi explains.
    The researchers used publicly available data from four real BSSs located in four major cities in the USA: Boston, Washington DC, New York City, and Chicago. Save for Boston, these cities have over 560 ports each, for a total number of bicycles in the thousands.
    First, a preliminary analysis revealed that an excess and lack of bicycles occurred across all four BSSs during all months of the year, verifying the need for active rebalancing. Next, the team sought to understand the temporal patterns of rented and returned bicycles, for which they treated the logged rent and return events as “point processes.”
    The researchers independently analyzed both point processes using three approaches, namely raster plots, coefficient of variation, and local variation. Raster plots helped them find periodic usage patterns, while coefficient of variation and local variation allowed them to measure the global and local variabilities, respectively, of the random intervals between consecutive bicycle rent or return events.
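    As a rough numerical illustration of these two measures (a minimal sketch in Python with made-up intervals, not the study’s data or code), the coefficient of variation captures the overall spread of the inter-event intervals, while the local variation compares each interval only with the one that follows it:

    # Minimal sketch of the two interval statistics named above.
    # The intervals are invented; the formulas are the standard definitions
    # of the coefficient of variation (CV) and local variation (Lv).
    import statistics

    intervals_min = [4.0, 7.5, 3.2, 10.1, 5.6, 6.3, 2.8, 9.4]  # minutes between events

    # Coefficient of variation: global variability of the intervals.
    cv = statistics.pstdev(intervals_min) / statistics.fmean(intervals_min)

    # Local variation: variability between consecutive intervals only.
    lv = (3.0 / (len(intervals_min) - 1)) * sum(
        ((t1 - t2) / (t1 + t2)) ** 2
        for t1, t2 in zip(intervals_min, intervals_min[1:])
    )

    # For a Poisson process both values are expected to be close to 1.
    print(f"CV = {cv:.2f}, Lv = {lv:.2f}")

    Values well above 1 indicate bursty usage, while values well below 1 indicate overly regular usage.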
    The analyses of raster plots yielded useful insights about how the four BSSs were used in their respective cities. Most bicycles were used during daytime and fewer overnight, producing a periodic pattern. Interestingly, from the analyses of the local variation, the team found that usage patterns were similar between weekdays and weekends, contradicting the results of previous studies. Finally, the results indicated that the statistical characteristics of the temporal patterns of rented and returned bikes followed a Poisson process — a widely studied random process — only in New York City. This was an important finding, given the original objective of the research team. “We can now create realistic benchmark instances whose temporal patterns of rented and returned bicycles follow the Poisson process. This, in turn, can help improve the bicycle rebalancing model we proposed in our earlier work,” explains Prof. Ikeguchi.
    Overall, this study paves the way to a deeper understanding of how people use BSSs. Moreover, through further detailed analyses, it should be possible to gain insight into more complex aspects of human life, as Prof. Ikeguchi remarks: “We believe that the analysis of BSS data will lead not only to efficient bike sharing but also to a better understanding of the social dynamics of human movement.”
    In any case, making BSSs a more efficient and attractive option will, hopefully, make a larger percentage of people choose the bicycle as their preferred means of transportation.

  •

    The future of 5G+ infrastructure could be built tile by tile

    5G+ (5G/Beyond 5G) is the fastest-growing segment and the only significant opportunity for investment growth in the wireless network infrastructure market, according to the latest forecast by Gartner, Inc. But currently 5G+ technologies rely on large antenna arrays that are typically bulky and come only in very limited sizes, making them difficult to transport and expensive to customize.
    Researchers from Georgia Tech’s College of Engineering have developed a novel and flexible solution to address the problem. Their additively manufactured tile-based approach can construct on-demand, massively scalable arrays of 5G+ (5G/Beyond 5G)-enabled smart skins with the potential to enable intelligence on nearly any surface or object. The study, recently published in Scientific Reports, describes the approach, which is not only much easier to scale and customize than current practices, but also shows no performance degradation when flexed or scaled to a very large number of tiles.
    “Typically, there are a lot of smaller wireless network systems working together, but they are not scalable. With the current techniques, you can’t increase, decrease, or direct bandwidth, especially for very large areas,” said Tentzeris. “Being able to utilize and scale this novel tile-based approach makes this possible.”
    Tentzeris says his team’s modular application equipped with 5G+ capability has the potential for immediate, large-scale impact as the telecommunications industry continues to rapidly transition to standards for faster, higher capacity, and lower latency communications.
    Building the Tiles
    In Georgia Tech’s new approach, flexible and additively manufactured tiles are assembled onto a single, flexible underlying layer. This allows tile arrays to be attached to a multitude of surfaces. The architecture also allows very large 5G+ phased/electronically steerable antenna array networks to be installed on the fly. According to Tentzeris, a tile array could even be attached to an unmanned aerial vehicle (UAV) to surge broadband capacity in low-coverage areas.
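    As general background on what “electronically steerable” means here (standard phased-array theory, not details of the Georgia Tech tiles): the beam is steered not by moving the antenna but by feeding each element n of the array with a progressive phase shift
    \[ \phi_{n} = -\,\frac{2\pi}{\lambda}\, n \, d \, \sin\theta_{0}, \]
    where d is the element spacing, \lambda is the wavelength and \theta_{0} is the desired beam direction; extending this phase law across all assembled elements is what lets a modular array act as one large steerable aperture.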

  •

    Technology has the potential to change the patient-provider relationship

    Healthcare technology continues to evolve and has the potential to significantly change the relationship between providers and their patients. A study from the U.S. Department of Veterans Affairs, Regenstrief Institute and Indiana University School of Medicine analyzed perspectives on personal health records.
    Personal health records are different from electronic health records because they are used by the patient as opposed to the provider. They are sometimes referred to as patient portals and allow the patient to see test results, medications and other health information.
    The research team interviewed providers, patients and caregivers associated with the Richard L. Roudebush VA Medical Center about their thoughts on personal health records and how they could be used.
    “During the interviews, patients expressed the potential for personal health records to deepen their relationship with their provider and to allow them to be more understood. Physicians were interested in having more clinical information sharing to facilitate better care,” said study author David Haggstrom, M.D., MAS, director of the Regenstrief Institute Center for Health Services Research, core investigator at the VA Health Services Research and Development (HSR&D) Center for Health Information and Communication (CHIC) and associate professor of medicine at IU School of Medicine. “These different visions of the value of these records show the need for discussions between physicians and patients to set expectations about the uses of PHRs.”
    Both doctors and patients raised concerns about workflow.
    “Patient portals have already created an additional strain on medical staff, and patients are sensitive to that. Careful thought needs to be given to how health systems and teams deploy PHRs to still provide patient-centered care,” said Dr. Haggstrom.
    The next steps for personal health records involve implementing them more widely, tailoring them for specific conditions and making them more user-friendly.
    Dr. Haggstrom is currently leading a five-year clinical trial using a personal health record created specifically for cancer patients. The research team will be looking at both the quality of care and the impact on the patient-provider relationship.
    In addition to Dr. Haggstrom, Thomas Carr, M.D., of VA CHIC is an author. The study was supported in part by VA HSR&D CDA 07-016, the VA Advanced Medical Informatics Fellowship Program and the Livestrong Foundation.
    Story Source:
    Materials provided by Regenstrief Institute.

  •

    Study shows gaps in how STEM organizations collect demographic information

    Professional organizations in science, technology, engineering and mathematics (STEM) fields could more effectively collect data on underrepresented groups in their fields, according to a new survey published March 31 in Science. With more robust information, STEM organizations could better target efforts to recruit and retain a more diverse membership.
    “We want to start a conversation among STEM organizations,” said Nicholas Burnett, lead author of the study and a postdoctoral researcher in the Department of Neurobiology, Physiology and Behavior at the University of California, Davis. “The ultimate goal is to increase representation of these groups, and you can’t do that without knowing where to target resources.”
    Burnett’s coauthors on the study are: Alyssa Hernandez, Harvard University; Emily King, UC Berkeley; Richelle Tanner, Chapman University; and Kathryn Wilsterman, University of Montana, Missoula.
    The researchers surveyed 164 U.S.-based STEM organizations, drawn mostly from a list of societies affiliated with the American Association for the Advancement of Science. The organizations were asked about the kinds of demographic information they collected on their members and conference attendees, and how they put it to use. Survey results were not associated with any particular organization, and the researchers did not ask for actual demographic information from the respondents: only what categories of information were collected.
    Seventy-three organizations responded to the survey, representing over 700,000 constituents in a range of fields from life sciences and physical sciences to mathematics and technology.
    While most organizations (80 percent) collected some demographic data, exactly what they collected varied. Many organizations followed the kind of breakdown used by federal agencies, offering a number of options for “race and ethnicity” but also lumping together several disparate groups under one category (such as “Asian American and Pacific Islander”).