More stories

  • BioCro software for growing virtual crops improved

    A team from the University of Illinois has revamped the popular crop growth simulation software BioCro, making it a more user-friendly and efficient way to predict crop yield. The updated version, BioCro II, allows modelers to use the technology much more easily and includes faster and more accurate algorithms.
    “In the original BioCro, all the math that the modelers were using was mixed into the programming language, which many people weren’t familiar with, so it was easy to make mistakes,” said Justin McGrath, a Research Plant Physiologist for the U.S. Department of Agriculture, Agricultural Research Service (USDA-ARS) at Illinois. “BioCro II separates those so modelers can do less programming and can instead focus on the equations.”
    Separating the equations from the programming language lets researchers try new simulations more easily. For example, if a project is studying how a gene can help plants use light more efficiently, the equations for that specific gene can be added to an existing model rather than rewriting the entire model to include the new information. The separation also allows the software to work well alongside other models, a large improvement over the original BioCro.
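    To picture what this separation looks like in practice, here is a minimal sketch of the modular idea. BioCro II itself is implemented in C++ with an R interface, and every module name and parameter value below is hypothetical; this Python toy only illustrates the design principle the team describes: self-contained modules sharing a common interface, so a modeler can swap a single equation, such as a new photosynthesis variant, without touching the rest of the model.

    ```python
    # Illustrative sketch only: hypothetical Python mimicking BioCro II's
    # modular design, in which each biological process is a self-contained
    # "module" mapping named inputs to named outputs.

    def canopy_photosynthesis(state):
        # Toy light-response curve (hypothetical parameters, not BioCro's).
        light = state["light"]
        return {"assimilation": 20.0 * light / (light + 200.0)}

    def improved_photosynthesis(state):
        # A candidate "new gene" variant: same interface, different equation.
        light = state["light"]
        return {"assimilation": 25.0 * light / (light + 150.0)}

    def biomass_growth(state):
        # Converts assimilated carbon into a daily biomass increment.
        return {"biomass_rate": 0.05 * state["assimilation"]}

    def run_model(modules, initial_state, days):
        # Evaluate the module chain once per "day" and accumulate biomass.
        biomass = 0.0
        for _ in range(days):
            state = dict(initial_state)
            for module in modules:
                state.update(module(state))  # each module adds its outputs
            biomass += state["biomass_rate"]
        return biomass

    baseline = run_model([canopy_photosynthesis, biomass_growth], {"light": 500.0}, 120)
    variant = run_model([improved_photosynthesis, biomass_growth], {"light": 500.0}, 120)
    print(f"baseline yield: {baseline:.1f}  variant yield: {variant:.1f}")
    ```

    In this toy world, the whole “ten gene changes” experiment Lochocki describes reduces to swapping one module in a list and rerunning the comparison.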
    In a recent study, published in in silico Plants, McGrath and his team discuss all the improvements they made to the original BioCro software, and why they were necessary to improve modeling capabilities for researchers.
    “If you’ve got a gene and you’re wondering how much it can improve yield, you have a tiny piece in the context of the whole plant. Modeling lets you take that one change, put it in the plant and compare yield with and without that change,” said Edward Lochocki, lead author on the paper and postdoctoral researcher for RIPE. “With the updates we’ve made in BioCro II, if you have ten gene changes to make, you can look at all of them quickly and gauge relative importance before moving the work into the field.”
    This work is part of Realizing Increased Photosynthetic Efficiency (RIPE), an international research project that aims to increase global food production by developing food crops that turn the sun’s energy into food more efficiently, with support from the Bill & Melinda Gates Foundation, Foundation for Food & Agriculture Research, and U.K. Foreign, Commonwealth & Development Office.
    “BioCro II represents a complete revamp of the original BioCro, eliminating significant duplication of code, improving the efficiency of code, and eliminating hard-wired parameters,” said RIPE Director Stephen Long, Ikenberry Endowed University Chair of Crop Sciences and Plant Biology at Illinois’ Carl R. Woese Institute for Genomic Biology. “All these changes make it much easier to use the model for new species and cultivars, as well as link to other models, as indeed recently demonstrated by adapting BioCro II for soybean.”
    With the latest updates, crop modeling with BioCro II will let researchers quickly test ideas and get results to farmers faster.
    RIPE is led by the University of Illinois in partnership with The Australian National University, Chinese Academy of Sciences, Commonwealth Scientific and Industrial Research Organisation, Lancaster University, Louisiana State University, University of California, Berkeley, University of Cambridge, University of Essex, and U.S. Department of Agriculture, Agricultural Research Service.

  • Metasurface-based antenna turns ambient radio waves into electric power

    Researchers have developed a new metasurface-based antenna that represents an important step toward making it practical to harvest energy from radio waves, such as the ones used in cell phone networks or Bluetooth connections. This technology could potentially provide wireless power to sensors, LEDs and other simple devices with low energy requirements.
    “By eliminating wired connections and batteries, these antennas could help reduce costs, improve reliability and make some electrical systems more efficient,” said research team leader Jiangfeng Zhou from the University of South Florida. “This would be useful for powering smart home sensors such as those used for temperature, lighting and motion or sensors used to monitor the structure of buildings or bridges, where replacing a battery might be difficult or impossible.”
    In the journal Optical Materials Express, the researchers report that lab tests of their new antenna showed it can harvest 100 microwatts of power from low-power radio waves, enough to run simple devices. This was possible because the metamaterial used to make the antenna exhibits perfect absorption of radio waves and was designed to work at low intensities.
    “Although more work is needed to miniaturize the antenna, our device crosses a key threshold of 100 microwatts of harvested power with high efficiency using ambient power levels found in the real world,” said Clayton Fowler, the team member who fabricated the sample and performed the measurements. “The technology could also be adapted so that a radio wave source could be provided to power or charge devices around a room.”
    Harvesting energy from the air
    Scientists have been trying to capture energy from radio waves for quite some time, but it has been difficult to obtain enough energy to be useful. This is changing thanks to the development of metamaterials and the ever-growing number of ambient sources of radio frequency energy available, such as cell phone networks, Wi-Fi, GPS, and Bluetooth signals.
    “With the huge explosion in radio wave-based technologies, there will be a lot of waste electromagnetic emissions that could be collected,” said Zhou. “This, combined with advancements in metamaterials, has created a ripe environment for new devices and applications that could benefit from collecting this waste energy and putting it to use.”
    Metamaterials use small, carefully designed structures to interact with light and radio waves in ways that naturally occurring materials do not. To make the energy-harvesting antenna, the researchers used a metamaterial designed for high absorption of radio waves and that allows a higher voltage to flow across the device’s diode. This improved its efficiency at turning radio waves into power, particularly at low intensity.
    Testing with ambient power levels
    For lab tests of the device, which measured 16 cm by 16 cm, the researchers measured the amount of power harvested while changing the power and frequency of a radio source between 0.7 and 2.0 GHz. They demonstrated the ability to harvest 100 microwatts of power from radio waves with an intensity of just 0.4 microwatts per square centimeter, approximately the intensity of radio waves 100 meters from a cell phone tower.
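    Those figures are consistent with the near-perfect absorption described above. A back-of-envelope check, assuming uniform illumination of the full 16 cm by 16 cm aperture:

    ```python
    # Rough sanity check of the reported numbers (assumes uniform
    # illumination over the full aperture).
    area_cm2 = 16 * 16                             # 256 cm^2
    intensity_uW_per_cm2 = 0.4                     # reported ambient intensity
    incident_uW = area_cm2 * intensity_uW_per_cm2  # ~102 uW reaching the device
    harvested_uW = 100                             # reported harvested power
    print(f"incident: {incident_uW:.1f} uW, efficiency: {harvested_uW / incident_uW:.0%}")
    # -> incident: 102.4 uW, efficiency: 98%
    ```

    Harvesting 100 of the roughly 102 microwatts striking the device leaves little room to improve efficiency, which is why the remaining work focuses on miniaturization rather than absorption.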
    “We also placed a cell phone very close to the antenna during a phone call, and it captured enough energy to power an LED during the call,” said Zhou. “Although it would be more practical to harvest energy from cell phone towers, this demonstrated the power capturing abilities of the antenna.”
    Because the current version of the antenna is much larger than most of the devices it would potentially power, the researchers are working to make it smaller. They would also like to make a version that could collect energy from multiple types of radio waves simultaneously so that more energy could be gathered.
    Story Source:
    Materials provided by Optica.

  • Spintronics: Innovative crystals for future computer electronics

    While modern computers are already very fast, they also consume vast amounts of electricity. For some years now, a much-discussed new technology, still in its infancy, has promised to one day revolutionise computing: spintronics. The word is a portmanteau of “spin” and “electronics,” because in these components information is carried not by electrons flowing through computer chips but by the electrons’ spin. A team of researchers including staff from Goethe University Frankfurt has now identified materials with surprisingly fast properties for spintronics. The results have been published in the journal Nature Materials.
    “You have to imagine the electron spins as if they were tiny magnetic needles which are attached to the atoms of a crystal lattice and which communicate with one another,” says Cornelius Krellner, Professor for Experimental Physics at Goethe University Frankfurt. How these magnetic needles interact with one another depends fundamentally on the properties of the material. To date, spintronics research has concentrated above all on ferromagnetic materials, in which, much as in iron magnets, the magnetic needles prefer to point in one direction. In recent years, however, the focus has shifted increasingly to so-called antiferromagnets, because these materials promise even faster and more efficient switching than other spintronic materials.
    In antiferromagnets, neighbouring magnetic needles always point in opposite directions. If one atomic magnetic needle is pushed in one direction, its neighbour turns to face the opposite way, which in turn causes the next-but-one neighbour to point in the same direction as the first needle again. “As this interplay takes place very quickly and with virtually no friction loss, it offers considerable potential for entirely new forms of electronic componentry,” explains Krellner.
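    The alternating order Krellner describes falls out of even the simplest toy model. The sketch below is purely illustrative and not taken from the study: it brute-forces the ground state of a short one-dimensional Ising chain whose antiferromagnetic coupling penalises aligned neighbours, and recovers the alternating up-down pattern.

    ```python
    # Toy model (not from the study): a 1D Ising chain with an
    # antiferromagnetic coupling J > 0, so the bond energy J * s_i * s_j
    # is lowest when neighbouring spins point in opposite directions.
    from itertools import product

    J = 1.0   # antiferromagnetic coupling strength (arbitrary units)
    N = 8     # number of spins in the chain

    def energy(spins):
        return J * sum(spins[i] * spins[i + 1] for i in range(N - 1))

    # Brute-force search over all 2^N configurations for the ground state.
    ground = min(product((-1, 1), repeat=N), key=energy)
    print(ground)   # (-1, 1, -1, 1, -1, 1, -1, 1): perfectly alternating
    ```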
    Crystals containing atoms from the group of rare earths are regarded as especially interesting candidates for spintronics, as these comparatively heavy atoms have strong magnetic moments; chemists call the corresponding electron states 4f orbitals. Among the rare-earth metals, some of which are neither rare nor expensive, are elements such as praseodymium and neodymium, which are also used in magnet technology. The research team studied a total of seven materials with different rare-earth atoms, from praseodymium to holmium.
    The problem in the development of spintronic materials is that such components require perfectly designed crystals, as the smallest discrepancies immediately degrade the overall magnetic order in the material. This is where the expertise in Frankfurt came into play. “The rare earths melt at about 1000 degrees Celsius, but the rhodium that is also needed for the crystal does not melt until about 2000 degrees Celsius,” says Krellner. “This is why customary crystallisation methods do not work here.”
    Instead, the scientists used hot indium as a solvent. The rare earths, as well as the rhodium and silicon that are required, dissolve in it at about 1500 degrees Celsius. The graphite crucible was kept at this temperature for about a week and then gently cooled. As a result, the desired crystals grew in the form of thin disks with an edge length of two to three millimetres. The team then studied these with the aid of X-rays produced at the Berlin synchrotron BESSY II and at the Swiss Light Source of the Paul Scherrer Institute in Switzerland.
    “The most important finding is that in the crystals we have grown, the rare-earth atoms react magnetically with one another very quickly, and that the strength of these reactions can be specifically adjusted through the choice of atoms,” says Krellner. This opens the path for further optimisation, though spintronics remains purely fundamental research, years away from the production of commercial components.
    There are still a great many problems to be solved on the path to market maturity, however. For instance, the crystals, which are produced in blazing heat, only display convincing magnetic properties at temperatures below about minus 170 degrees Celsius. “We suspect that the operating temperatures can be raised significantly by adding iron atoms or similar elements,” says Krellner. “But it remains to be seen whether the magnetic properties are then just as positive.” Thanks to the new results, the researchers now have a better idea of where it makes sense to change parameters.
    Story Source:
    Materials provided by Goethe University Frankfurt.

  • New data analysis tool uncovers important COVID-19 clues

    A new data analysis tool developed by Yale researchers has revealed the specific immune cell types associated with increased risk of death from COVID-19, they report Feb. 28 in the journal Nature Biotechnology.
    Immune system cells such as T cells and antibody-producing B cells are known to provide broad protection against pathogens such as SARS-CoV-2, the virus that causes COVID-19. And large-scale data analyses of millions of cells have given scientists a broad overview of the immune system’s response to this particular virus. However, they have also found that some immune cell responses, including responses by cell types that are usually protective, can occasionally trigger deadly inflammation and death in patients.
    Other data analysis tools that allow for examination down to the level of single cells have given scientists some clues about culprits in severe COVID cases. But such focused views often lack the context of particular cell groupings that might cause better or poorer outcomes.
    The Multiscale PHATE tool, a machine learning tool developed at Yale, allows researchers to pass through all resolutions of data, from millions of cells to a single cell, within minutes. The technology builds on an algorithm called PHATE, created in the lab of Smita Krishnaswamy, associate professor of genetics and computer science, which overcomes many of the shortcomings of existing data visualization tools.
    “Machine learning algorithms typically focus on a single resolution view of the data, ignoring information that can be found in other more focused views,” said Manik Kuchroo, a doctoral candidate at Yale School of Medicine who helped develop the technology and is co-lead author of the paper. “For this reason, we created Multiscale PHATE which allows users to zoom in and focus on specific subsets of their data to perform more detailed analysis.”
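    Multiscale PHATE itself builds on diffusion condensation rather than ordinary clustering, but the zoom-in workflow Kuchroo describes can be sketched with standard tools. In the hypothetical example below, k-means stands in for the actual algorithm: a coarse pass identifies broad populations, then a finer pass re-clusters only the one population of interest.

    ```python
    # Conceptual sketch only: k-means here stands in for Multiscale PHATE's
    # diffusion-condensation approach. The point is the workflow: cluster
    # coarsely, pick a group, then re-cluster just that subset.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    cells = rng.normal(size=(10_000, 20))   # stand-in for immune-cell features

    # Coarse view: a handful of broad cell populations.
    coarse = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(cells)

    # Zoom in: resolve finer subtypes within one coarse population.
    subset = cells[coarse == 2]
    fine = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(subset)
    print(subset.shape, np.bincount(fine))  # sizes of the finer subgroups
    ```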
    Kuchroo, who works in Krishnaswamy’s lab, used the new tool to analyze 55 million blood cells taken from 163 patients admitted to Yale New Haven Hospital with severe cases of COVID-19. Looking broadly, they found that high levels of T cells seem to be protective against poor outcomes, while high levels of two white blood cell types, granulocytes and monocytes, were associated with higher mortality.
    However, when the researchers drilled down to a more granular level, they discovered that TH17 helper T cells were also associated with higher mortality when they clustered with cells expressing the immune signaling molecules IL-17 and IFNG.
    By measuring the quantities of these cells in the blood, the researchers report, they could predict with 83% accuracy whether a patient would live or die.
    “We were able to rank order risk factors of mortality to show which are the most dangerous,” Krishnaswamy said.
    In theory, the new data analysis tool could be used to fine-tune risk assessment in a host of diseases, she said.
    Jessie Huang in the Yale Department of Computer Science and Patrick Wong in the Department of Immunobiology are co-lead authors of the paper. Akiko Iwasaki, the Waldemar Von Zedtwitz Professor of Immunobiology, is co-corresponding author.
    Story Source:
    Materials provided by Yale University. Original written by Bill Hathaway.

  • Computer drug simulations offer warning about promising diabetes and cancer treatment

    Using computer drug simulations, researchers have found that doctors need to be wary of prescribing a particular treatment for all cancer types and all patients.
    The drug, called metformin, has traditionally been prescribed for diabetes but has been used in clinical settings as a cancer treatment in recent years.
    The researchers say while metformin shows great promise, it also has negative consequences for some types of cancers.
    “Metformin is a wonder drug, and we are just beginning to understand all its possible benefits,” said Mehrshad Sadria, a PhD candidate in applied mathematics at the University of Waterloo. “Doctors need to examine the value of the drug on a case-by-case basis, because for some cancers and some patient profiles, it may actually have the opposite of the intended effect by protecting tumour cells against stress.”
    The computer-simulated treatments use models that replicate both the drug and the cancerous cells in a virtual environment. Such models can give clinical trials in humans a considerable head-start and can provide insights to medical practitioners that would take much longer to be discovered in the field.
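    To give a flavour of what a simulated treatment looks like, here is a toy model, emphatically not the authors’ actual equations: a tumour grows logistically, and the drug contributes a dose-dependent kill term. Every parameter value is hypothetical.

    ```python
    # Toy pharmacodynamic model (hypothetical parameters throughout):
    # logistic tumour growth minus a drug-dependent kill term.
    from scipy.integrate import solve_ivp

    r, K = 0.3, 1.0    # growth rate and carrying capacity (arbitrary units)
    kill = 0.25        # drug-induced death rate at full dose

    def tumour(t, y, dose):
        n = y[0]
        return [r * n * (1 - n / K) - kill * dose * n]

    t_span, y0 = (0, 60), [0.1]
    untreated = solve_ivp(tumour, t_span, y0, args=(0.0,))
    treated = solve_ivp(tumour, t_span, y0, args=(1.0,))
    print(f"day-60 tumour burden: untreated {untreated.y[0, -1]:.2f}, "
          f"treated {treated.y[0, -1]:.2f}")
    ```

    In the study’s terms, the warning is that for some cancers and patient profiles the drug’s effect can flip sign, protecting tumour cells against stress rather than killing them; a model like this is built to flag exactly that kind of regime change before a trial begins.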
    “In clinical settings, drugs can sometimes be prescribed in a trial and error manner,” said Anita Layton, professor of applied mathematics and Canada 150 Research Chair in mathematical biology and medicine at Waterloo. “Our mathematical models help accelerate clinical trials and remove some of the guesswork. What we see with this drug is that it can do a lot of good but needs more study.”
    The researchers say their work shows the importance of precision medicine when considering the use of metformin for cancer and other diseases. Precision medicine is an approach that assumes each patient requires individualized medical assessment and treatment.
    “Diseases and treatments are complicated,” Sadria said. “Everything about the patient matters, and even small differences, such as age, gender, and genetic and epigenetic profiles, can have a big impact on the effect of a drug. All these things are important and can affect a patient’s drug outcome. In addition, no one drug works for everyone, so doctors need to take a close look at each patient when considering treatments like metformin.”
    Sadria, Layton and co-author Deokhwa Seo’s paper was published in the journal BMC Cancer.
    Story Source:
    Materials provided by University of Waterloo.

  • A potential breakthrough for production of superior battery technology

    Micro supercapacitors could revolutionise the way we use batteries by increasing their lifespan and enabling extremely fast charging. Manufacturers of everything from smartphones to electric cars are therefore investing heavily into research and development of these electronic components. Now, researchers at Chalmers University of Technology, Sweden, have developed a method that represents a breakthrough for how such supercapacitors can be produced.
    “When discussing new technologies, it is easy to forget how important the manufacturing method is for them to actually be commercially produced and have an impact in society. Here, we have developed methods that can really work in production,” explains Agin Vyas, doctoral student at the Department of Microtechnology and Nanoscience at Chalmers University of Technology and lead author of the article.
    Supercapacitors consist of two electrical conductors separated by an insulating layer. They can store electrical energy and have many positive properties compared with a normal battery, such as much more rapid charging, more efficient energy distribution, and a much longer lifespan without loss of performance over charge and discharge cycles. When a supercapacitor is combined with a battery in an electrically powered product, the battery life can be extended many times over, up to four times for commercial electric vehicles. And whether for personal electronic devices or industrial technologies, the benefits for the end consumer could be huge.
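    Some rough, purely hypothetical numbers show the scale involved: a supercapacitor’s stored energy follows E = ½CV², and its charging time at constant current is set mainly by that current rather than by slow electrode chemistry.

    ```python
    # Illustrative numbers only (all values hypothetical).
    C = 100.0   # capacitance in farads
    V = 2.7     # rated voltage in volts
    I = 20.0    # charging current in amperes

    energy_J = 0.5 * C * V**2   # stored energy: E = 1/2 * C * V^2
    charge_s = C * V / I        # time to reach rated voltage at constant current
    print(f"stored energy: {energy_J:.0f} J, charge time: {charge_s:.1f} s")
    # -> stored energy: 365 J, charge time: 13.5 s
    ```

    Charging in seconds rather than tens of minutes is what makes the battery-plus-supercapacitor pairing attractive: the capacitor absorbs the fast, punishing bursts of current, sparing the battery the cycles that age it.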
    “It would of course be very convenient to be able to quickly charge, for example, an electric car or not have to change or charge batteries as often as we currently do in our smartphones. But it would also represent a great environmental benefit and be much more sustainable, if batteries had a longer lifespan and did not need to be recycled in complicated processes,” says Agin Vyas.
    Manufacturing a big challenge
    But in practice, today’s supercapacitors are too large for many applications where they could be useful. They need to be about the same size as the battery they are connected to, which is an obstacle to integrating them in mobile phones or electric cars. Therefore, a large part of today’s research and development of supercapacitors is about making them smaller — significantly so.
    Agin Vyas and his colleagues have been working on developing ‘micro’ supercapacitors. These are so small that they can fit on the system circuits which control various functions in mobile phones, computers, electric motors and almost all the electronics we use today. This solution is also called ‘system-on-a-chip’.
    One of the most important challenges is that the minimal units need to be manufactured in such a way that they become compatible with other components in a system circuit and can easily be tailored for different areas of use. The new paper demonstrates a manufacturing process in which micro-supercapacitors are integrated with the most common way of manufacturing system circuits (known as CMOS).
    “We used a method known as spin coating, a cornerstone technique in many manufacturing processes. This allows us to choose different electrode materials. We also use alkylamine chains in reduced graphene oxide, to show how that leads to a higher charging and storage capacity,” explains Agin Vyas.
    “Our method is scalable and would involve reduced costs for the manufacturing process. It represents a great step forward in production technology and an important step towards the practical application of micro-supercapacitors in both everyday electronics and industrial applications.”
    A method has also been developed for producing micro-supercapacitors from up to ten different materials in one unified manufacturing process, which means that their properties can be easily tailored to suit several different end applications.
    Story Source:
    Materials provided by Chalmers University of Technology. Original written by Karin Wik.

  • Research team makes breakthrough discovery in light interactions with nanoparticles, paving the way for advances in optical computing

    Computers are an indispensable part of our daily lives, and the need for ones that can work faster, solve complex problems more efficiently, and leave smaller environmental footprints by minimizing the required energy for computation is increasingly urgent. Recent progress in photonics has shown that it’s possible to achieve more efficient computing through optical devices that use interactions between metamaterials and light waves to apply mathematical operations of interest on the input signals, and even solve complex mathematical problems. But to date, such computers have required a large footprint and precise, large-area fabrication of the components, which, because of their size, are difficult to scale into more complex networks.
    A newly published paper in Physical Review Letters from researchers at the Advanced Science Research Center at the CUNY Graduate Center (CUNY ASRC) details a breakthrough discovery in nanomaterials and light-wave interactions that paves the way for development of small, low-energy optical computers capable of advanced computing.
    “The increasing energy demands of large data centers and inefficiencies in current computing architectures have become a real challenge for our society,” said Andrea Alù, Ph.D., the paper’s corresponding author, founding director of the CUNY ASRC’s Photonics Initiative and Einstein Professor of Physics at the Graduate Center. “Our work demonstrates that it’s possible to design a nanoscale object that can efficiently interact with light to solve complex mathematical problems with unprecedented speeds and nearly zero energy demands.”
    In their study, CUNY ASRC researchers designed a nanoscale object made of silicon so that, when interrogated with light waves carrying an arbitrary input signal, it is able to encode the corresponding solution of a complex mathematical problem into the scattered light. The solution is calculated at the speed of light, and with minimal energy consumption.
    “This finding is promising because it offers a practical pathway for creating a new generation of very energy-efficient, ultrafast, ultracompact nanoscale optical computers and other nanophotonic technologies that can be used for classical and quantum computations,” said Heedong Goh, Ph.D., the paper’s lead author and a postdoctoral research associate with Alù’s lab. “The very small size of these nanoscale optical computers is particularly appealing for scalability, because multiple nanostructures can be combined and connected together through light scattering to realize complex nanoscale computing networks.”
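    A numerical analogy conveys the principle, though it says nothing about the device physics: in wave-based analog solvers of this kind, light bouncing repeatedly through a structure that encodes an operator A settles into the solution of x = b + Ax, just as the fixed-point iteration below converges to the solution of (I - A)x = b.

    ```python
    # Conceptual analogy only: each loop iteration plays the role of one
    # round of light scattering through a structure encoding operator A.
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(4, 4))
    A *= 0.5 / np.linalg.norm(A, 2)   # scale A so the iteration converges
    b = rng.normal(size=4)            # the "input signal" carried by the light

    x = np.zeros(4)
    for _ in range(200):              # recursive scattering
        x = b + A @ x

    # The scattered field encodes the solution of (I - A) x = b.
    print(np.allclose(x, np.linalg.solve(np.eye(4) - A, b)))   # True
    ```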
    Story Source:
    Materials provided by Advanced Science Research Center, GC/CUNY.

  • Researcher urges caution on AI in mammography

    Analyzing breast-cancer tumors with artificial intelligence has the potential to improve healthcare efficiency and outcomes. But doctors should proceed cautiously, because similar technological leaps previously led to higher rates of false-positive tests and over-treatment.
    That’s according to a new editorial in JAMA Health Forum co-written by Joann G. Elmore, MD, MPH, a researcher at the UCLA Jonsson Comprehensive Cancer Center, the Rosalinde and Arthur Gilbert Foundation Endowed Chair in Health Care Delivery and professor of medicine at the David Geffen School of Medicine at UCLA.
    “Without a more robust approach to the evaluation and implementation of AI, given the unabated adoption of emergent technology in clinical practice, we are failing to learn from our past mistakes in mammography,” the JAMA Health Forum editorial states. The piece, posted online Friday, was co-written with Christoph I. Lee, MD, MS, MBA, a professor of radiology at the University of Washington School of Medicine.
    One of those “past mistakes in mammography,” according to the authors, was adjunct computer-aided detection (CAD) tools, which grew rapidly in popularity in the field of breast cancer screening starting more than two decades ago. CAD was approved by the FDA in 1998, and by 2016 more than 92% of U.S. imaging facilities were using the technology to interpret mammograms and hunt for tumors. But the evidence showed CAD did not improve mammography accuracy. “CAD tools are associated with increased false positive rates, leading to overdiagnosis of ductal carcinoma in situ and unnecessary diagnostic testing,” the authors wrote. Medicare stopped paying for CAD in 2018, but by then the tools had racked up more than $400 million a year in unnecessary health costs.
    “The premature adoption of CAD is a premonitory symptom of the wholehearted embrace of emergent technologies prior to fully understanding their impact on patient outcomes,” Elmore and Lee wrote.
    The doctors suggest several safeguards to put in place to avoid “repeating past mistakes,” including tying Medicare reimbursement to “improved patient outcomes, not just improved technical performance in artificial settings.”
    Story Source:
    Materials provided by University of California – Los Angeles Health Sciences.