More stories

  • 3D printing of 'organic electronics'

    When looking at the future of production of micro-scale organic electronics, Mohammad Reza Abidian — associate professor of Biomedical Engineering at the University of Houston Cullen College of Engineering — sees their potential for use in flexible electronics and bioelectronics, via multiphoton 3D printers.
    The newest paper from his research group examines the possibility of that technology. “Multiphoton Lithography of Organic Semiconductor Devices for 3D Printing of Flexible Electronic Circuits, Biosensors, and Bioelectronics” was published online in Advanced Materials.
    Over the past few years, 3D printing of electronics has become a promising technology due to its potential applications in emerging fields such as nanoelectronics and nanophotonics. Among 3D microfabrication technologies, multiphoton lithography (MPL) is considered the state of the art, offering true 3D fabrication capability, an excellent level of spatial and temporal control, and versatile photosensitive materials mostly composed of acrylate-based polymers/monomers or epoxy-based photoresists.
    “In this paper we introduced a new photosensitive resin doped with an organic semiconductor material (OS) to fabricate highly conductive 3D microstructures with high-quality structural features via the MPL process,” Abidian said.
    They showed that the fabrication process could be performed both on glass and on the flexible substrate poly(dimethylsiloxane). They demonstrated that loading as little as 0.5 wt% OS into the resin remarkably increased the electrical conductivity of the printed organic semiconductor composite polymer by more than 10 orders of magnitude.
    “The excellent electrical conductivity can be attributed to the presence of OS in the cross-linked polymer chains, providing both ionic and electronic conduction pathways along the polymer chains,” Abidian said.

  • Topology and machine learning reveal hidden relationship in amorphous silicon

    Theoretical scientists have used topological mathematics and machine learning to identify a hidden relationship between nano-scale structures and thermal conductivity in amorphous silicon, a glassy form of the material with no repeating crystalline order.
    A study describing their technique appeared in the Journal of Chemical Physics on 23 June.
    Amorphous solids, such as glass, obsidian, wax, and plastics, have no long-range repeating, or crystalline, structure in the atoms or molecules they are made of. This contrasts with crystalline solids, such as salt, most metals and rocks. Because they lack long-range order in their structure, the thermal conductivity of amorphous solids can be far lower than that of a crystalline solid composed of the same material.
    However, there can still be some medium-range order on the scale of nanometers. This medium-range order should affect the propagation and diffusion of atomic vibrations, which carry heat. The heat transport in disordered materials is of special interest to physicists due to its importance in industrial applications. The amorphous form of silicon is used in an enormous range of applications in the modern world, from solar cells to image sensors. For this reason, researchers have intensively investigated the structural signature of the medium-range order in amorphous silicon and how it relates to thermal conductivity.
    “For better control over applications that make use of amorphous silicon, controlling its thermal properties is high on engineers’ wish list,” said Emi Minamitani, the corresponding author of the study and a theoretical molecular scientist with the Institute for Molecular Science in Okazaki, Japan. “Extracting the nano-scale structural characteristics in amorphous materials, including medium-range order, is an important key.”
    Unfortunately, researchers have struggled to carry out this task because it is difficult to determine the essential nano-scale features of disordered systems using traditional techniques.
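    The overall approach, turning each atomic configuration into a structural descriptor and regressing thermal conductivity on it, can be sketched on synthetic data. The study used persistent homology; the descriptor below is a much cruder pair-distance histogram, and the "conductivity" is an invented function of a disorder parameter, so everything here is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def structure_descriptor(positions, bins=8, r_max=6.0):
    """Histogram of interatomic distances: a crude stand-in for the
    persistent-homology descriptors used in studies like this one."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    dists = dists[np.triu_indices_from(dists, k=1)]  # unique atom pairs
    hist, _ = np.histogram(dists, bins=bins, range=(0.0, r_max), density=True)
    return hist

# Synthetic "amorphous" samples: a small lattice blurred by a disorder
# parameter, with a toy conductivity that decays as disorder grows.
grid = np.stack(np.meshgrid(*[np.arange(4.0)] * 3, indexing="ij"), axis=-1).reshape(-1, 3)
X, y = [], []
for disorder in rng.uniform(0.0, 0.5, size=200):
    positions = grid + rng.normal(0.0, disorder, size=grid.shape)
    X.append(structure_descriptor(positions))
    y.append(1.0 / (1.0 + 5.0 * disorder))
X, y = np.asarray(X), np.asarray(y)

# Ridge regression (closed form) mapping descriptors to conductivity.
Xb = np.hstack([X, np.ones((len(X), 1))])  # add a bias column
w = np.linalg.solve(Xb.T @ Xb + 1e-3 * np.eye(Xb.shape[1]), Xb.T @ y)
pred = Xb @ w
r2 = 1.0 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"descriptor-to-conductivity fit, training R^2 = {r2:.2f}")
```

    Even this toy descriptor tracks the hidden disorder parameter well enough for a linear model to recover the structure-conductivity relationship, which is the core idea behind using richer topological features on real amorphous silicon.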

  • Quantum network nodes with warm atoms

    Communication networks need nodes at which information is processed or rerouted. Physicists at the University of Basel have now developed a network node for quantum communication networks that can store single photons in a vapor cell and pass them on later.
    In quantum communication networks, information is transmitted by single particles of light (photons). At the nodes of such a network, buffer elements are needed that can temporarily store, and later re-emit, the quantum information contained in the photons.
    Researchers at the University of Basel in the group of Prof. Philipp Treutlein have now developed a quantum memory that is based on an atomic gas inside a glass cell. The atoms do not have to be specially cooled, which makes the memory easy to produce and versatile, even for satellite applications. Moreover, the researchers have realized a single photon source which allowed them to test the quality and storage time of the quantum memory. Their results were recently published in the scientific journal PRX Quantum.
    Warm atoms in vapor cells
    “The suitability of warm atoms in vapor cells for quantum memories has been investigated for the past twenty years,” says Gianni Buser, who worked on the experiment as a PhD student. “Usually, however, attenuated laser beams — and hence classical light — were used.” In classical light, the number of photons hitting the vapor cell in a certain period follows a statistical distribution; on average it is one photon, but sometimes it can be two, three or none.
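    That photon-number statistics argument is easy to check numerically. The sketch below samples a Poisson distribution with a mean of one photon per pulse, which is how attenuated laser light behaves; the numbers are illustrative, not measurements from the Basel experiment:

```python
import numpy as np

rng = np.random.default_rng(1)

# Attenuated laser ("classical") light: the photon number per pulse
# follows a Poisson distribution with a chosen mean.
pulses = rng.poisson(lam=1.0, size=100_000)  # mean of one photon per pulse

frac_zero = (pulses == 0).mean()   # empty pulses
frac_one = (pulses == 1).mean()    # exactly one photon
frac_multi = (pulses >= 2).mean()  # two or more photons

print(f"0 photons: {frac_zero:.1%}, 1 photon: {frac_one:.1%}, "
      f">=2 photons: {frac_multi:.1%}")
# Poisson theory: P(0) = P(1) = 1/e (about 36.8%), P(>=2) about 26.4%,
# whereas an ideal heralded single-photon source delivers one photon every time.
```

    Roughly a quarter of classical pulses carry two or more photons and a third carry none, which is why a dedicated single-photon source is needed to test the memory with true quantum light.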
    To test the quantum memory with “quantum light” — that is, always precisely one photon — Treutlein and his co-workers developed a dedicated single photon source that emits exactly one photon at a time. The instant when that happens is heralded by a second photon, which is always sent out simultaneously with the first one. This allows the quantum memory to be activated at the right moment.
    The single photon is then directed into the quantum memory where, with the help of a control laser beam, the photon causes more than a billion rubidium atoms to take on a so-called superposition state of two possible energy levels of the atoms. The photon itself vanishes in the process, but the information contained in it is transformed into the superposition state of the atoms. A brief pulse of the control laser can then read out that information after a certain storage time and transform it back into a photon.
    Reducing read-out noise
    “Up to now, a critical point has been noise — additional light that is produced during the read-out and that can compromise the quality of the photon,” explains Roberto Mottola, another PhD student in Treutlein’s lab. Using a few tricks, the physicists were able to reduce that noise sufficiently so that after storage times of several hundred nanoseconds the single photon quality was still high.
    “Those storage times are not very long, and we didn’t actually optimize them for this study,” Treutlein says, “but already now they are more than a hundred times longer than the duration of the stored single photon pulse.” This means that the quantum memory developed by the Basel researchers can already be employed for interesting applications. For instance, it can synchronize randomly produced single photons, which can then be used in various quantum information applications.
    Story Source:
    Materials provided by University of Basel. Note: Content may be edited for style and length.

  • New deep learning model helps the automated screening of common eye disorders

    A new deep learning (DL) model that can identify disease-related features from images of eyes has been unveiled by a group of Tohoku University researchers. This ‘lightweight’ DL model can be trained with a small number of images, even ones with a high degree of noise, and is resource-efficient, meaning it is deployable on mobile devices.
    Details were published in the journal Scientific Reports on May 20, 2022.
    With many societies aging and medical personnel in short supply, self-monitoring and tele-screening of diseases that rely on DL models are becoming more routine. Yet deep learning algorithms are generally task-specific, built to identify or detect general objects such as humans, animals, or road signs.
    Identifying diseases, on the other hand, demands precise measurement of tumors, tissue volume, or other abnormalities. Doing so requires a model to look at separate images and mark boundaries in a process known as segmentation. But accurate prediction demands greater computational power, making such models difficult to deploy on mobile devices.
    “There is always a trade-off between accuracy, speed and computational resources when it comes to DL models,” says Toru Nakazawa, co-author of the study and professor at Tohoku University’s Department of Ophthalmology. “Our developed model has better segmentation accuracy and enhanced model training reproducibility, even with fewer parameters — making it efficient and more lightweight when compared to other commercial software.”
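    The paper's exact architecture is not described here, but one standard way segmentation networks are made lightweight is to replace ordinary convolutions with depthwise-separable ones; the parameter arithmetic below is a generic illustration of that kind of saving, not the authors' design:

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then a 1 x 1
    pointwise convolution that mixes channels."""
    return k * k * c_in + c_in * c_out

standard = conv_params(3, 128, 128)                  # one standard 3x3 layer
separable = depthwise_separable_params(3, 128, 128)  # its separable equivalent
print(f"standard: {standard} weights, separable: {separable} weights, "
      f"reduction: {standard / separable:.1f}x")
```

    For a 3x3 layer with 128 channels in and out, the separable version needs roughly an eighth of the weights, which is the kind of trade-off that makes a model deployable on mobile devices.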
    Professor Nakazawa, Associate Professor Parmanand Sharma, Dr Takahiro Ninomiya, and students from the Department of Ophthalmology worked with professor Takayuki Okatani from Tohoku University’s Graduate School of Information Sciences to produce the model.
    Using low resource devices, they obtained measurements of the foveal avascular zone, a region with the fovea centralis at the center of the retina, to enhance screening for glaucoma.
    “Our model is also capable of detecting/segmenting optic discs and hemorrhages in fundus images with high precision,” added Nakazawa.
    In the future, the group is hopeful of deploying the lightweight model to screen for other common eye disorders and other diseases.
    Story Source:
    Materials provided by Tohoku University.

  • Wearable chemical sensor is as good as gold

    Researchers created a special ultrathin sensor, spun from gold, that can be attached directly to the skin without irritation or discomfort. The sensor can measure different biomarkers or substances to perform on-body chemical analysis. It works using a technique called Raman spectroscopy, where laser light aimed at the sensor is changed slightly depending on whatever chemicals are present on the skin at that point. The sensor can be finely tuned to be extremely sensitive, and is robust enough for practical use.
    Wearable technology is nothing new. Perhaps you or someone you know wears a smartwatch. Many of these can monitor certain health indicators such as heart rate, but at present they cannot measure chemical signatures, which could be useful for medical diagnosis. Smartwatches and more specialized medical monitors are also relatively bulky and often quite costly. Prompted by such shortfalls, a team of researchers from the Department of Chemistry at the University of Tokyo sought a new way to sense various health conditions and environmental matters in a noninvasive and cost-effective manner.
    “A few years ago, I came across a fascinating method for producing robust stretchable electronic components from another research group at the University of Tokyo,” said Limei Liu, a visiting scholar at the time of the study and currently a lecturer at Yangzhou University in China. “These devices are spun from ultrafine threads coated with gold, so they can be attached to the skin without issue, as gold does not react with or irritate the skin in any way. As sensors, however, they were limited to detecting motion, and we were looking for something that could sense chemical signatures, biomarkers and drugs. So we built upon this idea and created a noninvasive sensor that exceeded our expectations and inspired us to explore ways to improve its functionality even further.”
    The main component of the sensor is the fine gold mesh. Gold is unreactive, meaning that when it comes into contact with a substance the team wishes to measure — for example a potential disease biomarker present in sweat — it does not chemically alter that substance. Instead, because the gold mesh is so fine, it provides a surprisingly large surface for that biomarker to bind to, and this is where the other components of the sensor come in. When a low-power laser is pointed at the gold mesh, some of the laser light is absorbed and some is reflected. Most of the reflected light has the same energy as the incoming light. However, some incoming light loses energy to the biomarker or other measurable substance, and the discrepancy in energy between incident and reflected light is unique to the substance in question. An instrument called a spectrometer can use this unique energy fingerprint to identify the substance. This method of chemical identification is known as Raman spectroscopy.
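    That energy discrepancy is conventionally reported as a Raman shift in wavenumbers, computed from the incident and scattered wavelengths. The wavelengths below are illustrative round numbers, not values from this study:

```python
def raman_shift_cm1(lambda_in_nm, lambda_out_nm):
    """Raman shift in wavenumbers (cm^-1) from the incident and
    scattered wavelengths, using 1 cm = 1e7 nm."""
    return 1e7 / lambda_in_nm - 1e7 / lambda_out_nm

# Illustrative example: a 785 nm excitation laser and light
# scattered at 830 nm after losing energy to a molecule.
shift = raman_shift_cm1(785.0, 830.0)
print(f"Raman shift: {shift:.1f} cm^-1")
```

    Each measurable substance produces its own characteristic set of such shifts, which is the "energy fingerprint" the spectrometer reads.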
    “Currently, our sensors need to be finely tuned to detect specific substances, and we wish to push both the sensitivity and specificity even further in future,” said Assistant Professor Tinghui Xiao. “With this, we think applications like glucose monitoring, ideal for sufferers of diabetes, or even virus detection, might be possible.”
    “There is also potential for the sensor to work with other methods of chemical analysis besides Raman spectroscopy, such as electrochemical analysis, but all these ideas require a lot more investigation,” said Professor Keisuke Goda. “In any case, I hope this research can lead to a new generation of low-cost biosensors that can revolutionize health monitoring and reduce the financial burden of health care.”
    Story Source:
    Materials provided by University of Tokyo.

  • A new model sheds light on how we learn motor skills

    Researchers from the University of Tsukuba have developed a mathematical model of motor learning that reflects the motor learning process in the human brain. Their findings suggest that motor exploration — that is, increased variability in movements — is important when learning a new task. These results may lead to improved motor rehabilitation in patients after injury or disease.
    Even seemingly simple movements are very complex to perform, and the way we learn how to perform new movements remains unclear. Researchers from Japan have recently proposed a new model of motor learning that combines a number of different theories. A study published this month in Neural Networks revealed that their model can simulate motor learning in humans surprisingly well, paving the way for a greater understanding of how our brains work.
    For even a relatively simple task, such as reaching out and picking up an object, there are a huge number of potential combinations of angles between the different joints involved. The same goes for your muscles — there is an almost endless combination of muscles and forces that can be used together to perform an action. With all of these possible combinations of joints and muscles — not to mention the underlying neuronal activity — how do we ever learn to make any movements at all? Researchers at the University of Tsukuba aimed to address this question.
    The research team first created a mathematical model to imitate the learning process that occurs for new motor tasks. They designed the model to reflect many of the processes that are thought to occur in the brain when a new skill is learned. The researchers then tested their model by attempting to simulate the results of three recent studies that were conducted in humans, in which individuals were asked to perform completely new motor tasks.
    “We were surprised at how well our simulations managed to reproduce many of the results of previous studies in humans,” says Professor Jun Izawa, senior author of the study. “With our model, we were able to bridge the gap between a number of different proposed mechanisms of motor learning, such as motor exploration, redundancy solving, and error-based learning.”
    In their model, larger amounts of motor exploration — that is, variability in movements — were found to help with the learning of sensitivity derivatives, which measure how commands from the brain affect motor error. In this way, errors were transformed into motor corrections.
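    A toy version of this idea, not the authors' actual model, fits in a few lines: a learner injects exploration noise into a scalar motor command, correlates that noise with the resulting error to estimate the sensitivity derivative, and then uses the estimate to turn errors into corrections. All gains and rates below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

a = -2.0        # unknown plant gain: how a motor command maps to an outcome
target = 1.0    # desired outcome of the movement
u = 0.0         # motor command, to be learned
J_est = 0.0     # estimated sensitivity derivative d(error)/d(command)
baseline = 0.0  # running estimate of the unperturbed error
sigma = 0.2     # magnitude of motor exploration

for trial in range(3000):
    noise = rng.normal(0.0, sigma)    # motor exploration
    error = a * (u + noise) - target  # outcome of the noisy movement vs. target

    # Correlating exploration with the error reveals how commands move the error.
    J_est += 0.02 * ((error - baseline) * noise / sigma**2 - J_est)
    baseline += 0.1 * (error - baseline)

    # Error-based learning: correct the command using the learned sensitivity.
    u -= 0.05 * J_est * error

final_error = abs(a * u - target)
print(f"learned sensitivity {J_est:.2f} (true gain {a}), "
      f"final error {final_error:.3f}")
```

    Without exploration (sigma near zero) the learner has no way to discover that the plant gain is negative, so its corrections would push the command in the wrong direction, which illustrates why variability helps learning in this framework.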
    “Our success at simulating real results from human studies was encouraging,” explains first author Lucas Rebelo Dal’Bello. “It suggests that our proposed learning mechanism might accurately reflect what occurs in the brain during motor learning.”
    The findings of this study, which indicate the importance of motor exploration in motor learning, provide insights into how motor learning might occur in the human brain. They also suggest that motor exploration should be encouraged when a new motor task is being learned; this may be helpful for motor rehabilitation after injury or disease.
    This work was supported by KAKENHI (Scientific Research on Innovative Areas 19H04977 and 19H05729). LD was supported by a Japanese Government (Monbukagakusho: MEXT) Scholarship.
    Story Source:
    Materials provided by University of Tsukuba.

  • Earth’s oldest known wildfires raged 430 million years ago

    Bits of charcoal entombed in ancient rocks unearthed in Wales and Poland push back the earliest evidence for wildfires to around 430 million years ago. Besides breaking the previous record by about 10 million years, the finds help pin down how much oxygen was in Earth’s atmosphere at the time.

    The ancient atmosphere must have contained at least 16 percent oxygen, researchers report June 13 in Geology. That conclusion is based on modern-day lab tests that show how much oxygen it takes for a wildfire to take hold and spread.

    While oxygen makes up 21 percent of our air today, over the last 600 million years or so oxygen levels in Earth’s atmosphere have fluctuated between 13 percent and 30 percent (SN: 12/13/05). Long-term models simulating past oxygen concentrations are based on processes such as the burial of coal swamps, mountain building, erosion and the chemical changes associated with them. But those models, some of which predict oxygen levels as low as 10 percent for this time period, provide broad-brush strokes of trends and may not capture brief spikes and dips, say Ian Glasspool and Robert Gastaldo, both paleobotanists at Colby College in Waterville, Maine.

    Charcoal, a remnant of wildfire, is physical evidence that provides, at the least, a minimum threshold for oxygen concentrations. That’s because oxygen is one of three ingredients needed to create a wildfire. The second, ignition, came from lightning in the ancient world, says Glasspool. The third, fuel, came from burgeoning plants and fungi 430 million years ago, during the Silurian Period. The predominant greenery was low-growing plants just a couple of centimeters tall. Scattered among this diminutive ground cover were occasional knee-high to waist-high plants and Prototaxites fungi that towered up to nine meters tall. Before this time, most plants were single-celled and lived in the seas.

    Once plants left the ocean and began to thrive, wildfire followed. “Almost as soon as we have evidence of plants on land, we have evidence of wildfire,” says Glasspool.

    That evidence includes tiny chunks of partially charred plants — including charcoal as identified by its microstructure — as well as conglomerations of charcoal and associated minerals embedded within fossilized hunks of Prototaxites fungi. Those samples came from rocks of known ages that formed from sediments dumped just offshore of ancient landmasses. This wildfire debris was carried offshore in streams or rivers before it settled, accumulated and was preserved, the researchers suggest.

    The microstructure of this fossilized and partially charred bit of plant, unearthed in Poland from sediments that are almost 425 million years old, reveals that it was burnt by some of Earth’s earliest known wildfires. Ian Glasspool/Colby College

    The discovery adds to previous evidence, including analyses of pockets of fluid trapped in halite minerals formed during the Silurian, that suggests that atmospheric oxygen during that time approached or even exceeded the 21 percent concentration seen today, the pair note.

    “The team has good evidence for charring,” says Lee Kump, a biogeochemist at Penn State who wasn’t involved in the new study. Although its evidence points to higher oxygen levels than some models suggest for that time, it’s possible that oxygen was a substantial component of the atmosphere even earlier than the Silurian, he says.

    “We can’t rule out that oxygen levels weren’t higher even further back,” says Kump. “It could be that plants from that era weren’t amenable to leaving a charcoal record.”

  • Methods from weather forecasting can be adapted to assess risk of COVID-19 exposure

    Techniques used in weather forecasting can be repurposed to provide individuals with a personalized assessment of their risk of exposure to COVID-19 or other viruses, according to new research published by Caltech scientists.
    The technique has the potential to be more effective and less intrusive than blanket lockdowns for combatting the spread of disease, says Tapio Schneider, the Theodore Y. Wu Professor of Environmental Science and Engineering; senior research scientist at JPL, which Caltech manages for NASA; and the lead author of a study on the new research that was published by PLOS Computational Biology on June 23.
    “For this pandemic, it may be too late,” Schneider says, “but this is not going to be the last epidemic that we will face. This is useful for tracking other infectious diseases, too.”
    In principle, the idea is simple: Weather forecasting models ingest a lot of data — for example, measurements of wind speed and direction, temperature, and humidity from local weather stations, in addition to satellite data. They use the data to assess what the current state of the atmosphere is, forecast the weather evolution into the future, and then repeat the cycle by blending the forecast atmospheric state with new data. In the same way, disease risk assessment also harnesses various types of available data to make an assessment about an individual’s risk of exposure to or infection with disease, forecasts the spread of disease across a network of human contacts using an epidemiological model, and then repeats the cycle by blending the forecast with new data. Such assessments might use the results of an institution’s surveillance testing, data from wearable sensors, self-reported symptoms and close contacts as recorded by smartphones, and municipalities’ disease-reporting dashboards.
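    The forecast-then-blend cycle can be sketched as a scalar Bayesian filter on one person's risk. The transmission rate, contact structure, and test characteristics below are invented for illustration and this is not the paper's actual algorithm, which operates over a whole contact network:

```python
# A minimal forecast/update cycle on one person's infection risk,
# in the spirit of data assimilation.
def forecast(risk, contact_risk, beta=0.3):
    """Epidemiological step: risk grows with exposure to risky contacts."""
    return risk + beta * contact_risk * (1.0 - risk)

def update(risk, test_positive, sensitivity=0.9, false_positive=0.05):
    """Bayesian blend of the forecast with a noisy test result."""
    if test_positive:
        likelihood_ratio = sensitivity / false_positive
    else:
        likelihood_ratio = (1.0 - sensitivity) / (1.0 - false_positive)
    odds = risk / (1.0 - risk) * likelihood_ratio
    return odds / (1.0 + odds)

risk = 0.01                               # prior: low background risk
risk = forecast(risk, contact_risk=0.4)   # close contact with a risky individual
risk = update(risk, test_positive=False)  # then a negative rapid test
print(f"assessed risk: {risk:.3f}")
```

    The contact raises the assessed risk and the negative test pulls it back down, but not to zero, mirroring how the proposed system continually revises a graded risk estimate as new data arrives.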
    The research presented in PLOS Computational Biology is proof of concept. However, its end result would be a smartphone app that would provide an individual with a frequently updated numerical assessment (i.e., a percentage) reflecting their likelihood of having been exposed to or infected with a particular infectious disease agent, such as COVID-19.
    Such an app would be similar to existing COVID-19 exposure notification apps but more sophisticated and effective in its use of data, Schneider and his colleagues say. Those apps provide a binary exposure assessment (“yes, you have been exposed,” or, in the case of no exposure, radio silence); the new app described in the study would provide a more nuanced understanding of continually changing risks of exposure and infection as individuals come close to others and as data about infections is propagated across a continually evolving contact network.