More stories

  • Researchers develop deep learning model to predict breast cancer

    Researchers have developed a new, interpretable artificial intelligence (AI) model to predict 5-year breast cancer risk from mammograms, according to a new study published today in Radiology, a journal of the Radiological Society of North America (RSNA).
    One in 8 women, or approximately 13% of the female population in the U.S., will develop invasive breast cancer in their lifetime and 1 in 39 women (3%) will die from the disease, according to the American Cancer Society. Breast cancer screening with mammography, for many women, is the best way to find breast cancer early when treatment is most effective. Having regularly scheduled mammograms can significantly lower the risk of dying from breast cancer. However, it remains unclear how to precisely predict which women will develop breast cancer through screening alone.
    Mirai, a state-of-the-art, deep learning-based algorithm, has demonstrated proficiency as a tool to help predict breast cancer. However, because little is known about its reasoning process, the algorithm invites overreliance by radiologists and can contribute to incorrect diagnoses.
    “Mirai is a black box — a very large and complex neural network, similar in construction to ChatGPT — and no one knew how it made its decisions,” said the study’s lead author, Jon Donnelly, B.S., a Ph.D. student in the Department of Computer Science at Duke University in Durham, North Carolina. “We developed an interpretable AI method that allows us to predict breast cancer from mammograms 1 to 5 years in advance. AsymMirai is much simpler and much easier to understand than Mirai.”
    For the study, Donnelly and colleagues in the Department of Computer Science and Department of Radiology compared their newly developed mammography-based deep learning model called AsymMirai to Mirai’s 1- to 5-year breast cancer risk predictions. AsymMirai was built on the “front end” deep learning portion of Mirai, while replacing the rest of that complicated method with an interpretable module: local bilateral dissimilarity, which looks at tissue differences between the left and right breasts.
    “Previously, differences between the left and right breast tissue were used only to help detect cancer, not to predict it in advance,” Donnelly said. “We discovered that Mirai uses comparisons between the left and right sides, which is how we were able to design a substantially simpler network that also performs comparisons between the sides.”
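    As a rough illustration of the idea (a simplified sketch under assumed shapes and aggregation, not the authors’ published code), a local bilateral dissimilarity score can be thought of as mirroring one breast’s deep feature map onto the other and flagging the most asymmetric region:

    ```python
    # Illustrative sketch only (not AsymMirai itself): a "local bilateral
    # dissimilarity" score from deep feature maps of the left and right
    # breasts. Shapes, alignment, and aggregation are assumptions.
    import numpy as np

    def local_bilateral_dissimilarity(feat_left: np.ndarray,
                                      feat_right: np.ndarray) -> float:
        """feat_left, feat_right: (H, W, C) feature maps produced by the same
        deep-learning "front end" applied to each breast's mammogram."""
        # Mirror the right-breast features so roughly corresponding regions
        # line up with the left breast.
        feat_right = feat_right[:, ::-1, :]

        # Cosine dissimilarity at each spatial location.
        num = (feat_left * feat_right).sum(axis=-1)
        denom = (np.linalg.norm(feat_left, axis=-1) *
                 np.linalg.norm(feat_right, axis=-1) + 1e-8)
        dissimilarity = 1.0 - num / denom          # (H, W) map

        # The most asymmetric local region drives the risk score.
        return float(dissimilarity.max())
    ```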
    For the study, the researchers compared 210,067 mammograms from 81,824 patients in the EMory BrEast imaging Dataset (EMBED) from January 2013 to December 2020 using both Mirai and AsymMirai models. The researchers found that their simplified deep learning model performed almost as well as the state-of-the-art Mirai for 1- to 5-year breast cancer risk prediction.
    The results also supported the clinical importance of breast asymmetry and, as a result, highlight the potential of bilateral dissimilarity as a future imaging marker for breast cancer risk.
    Since the reasoning behind AsymMirai’s predictions is easy to understand, it could be a valuable adjunct to human radiologists in breast cancer diagnoses and risk prediction, Donnelly said.
    “We can, with surprisingly high accuracy, predict whether a woman will develop cancer in the next 1 to 5 years based solely on localized differences between her left and right breast tissue,” he said. “This could have public impact because it could, in the not-too-distant future, affect how often women receive mammograms.”

  • Backyard insect inspires invisibility devices, next gen tech

    Leafhoppers, a common backyard insect, secrete and coat themselves in tiny mysterious particles that could provide both the inspiration and the instructions for next-generation technology, according to a new study led by Penn State researchers. In a first, the team precisely replicated the complex geometry of these particles, called brochosomes, and gained a clearer understanding of how they absorb both visible and ultraviolet light.
    This could allow the development of bioinspired optical materials with possible applications ranging from invisibility cloaking devices to coatings that harvest solar energy more efficiently, said Tak-Sing Wong, professor of mechanical engineering and biomedical engineering. Wong led the study, which was published today (March 18) in the Proceedings of the National Academy of Sciences of the United States of America (PNAS).
    The unique, tiny particles have an unusual soccer ball-like geometry with cavities, and their exact purpose for the insects has been something of a mystery to scientists since the 1950s. In 2017, Wong led the Penn State research team that was the first to create a basic, synthetic version of brochosomes in an effort to better understand their function.
    “This discovery could be very useful for technological innovation,” said Lin Wang, postdoctoral scholar in mechanical engineering and the lead author of the study. “With a new strategy to regulate light reflection on a surface, we might be able to hide the thermal signatures of humans or machines. Perhaps someday people could develop a thermal invisibility cloak based on the tricks used by leafhoppers. Our work shows how understanding nature can help us develop modern technologies.”
    Wang went on to explain that even though scientists have known about brochosome particles for three-quarters of a century, making them in a lab has been a challenge due to the complexity of the particle’s geometry.
    “It has been unclear why the leafhoppers produce particles with such complex structures,” Wang said. “We managed to make these brochosomes using a high-tech 3D-printing method in the lab. We found that these lab-made particles can reduce light reflection by up to 94%. This is a big discovery because it’s the first time we’ve seen nature do something like this, where it controls light in such a specific way using hollow particles.”
    Theories on why leafhoppers coat themselves in brochosome armor have ranged from keeping the insects free of contaminants and water to providing a superhero-like invisibility cloak. However, the new understanding of the particles’ geometry raises the strong possibility that their main purpose is camouflage that helps the insects avoid predators, according to Wong, the study’s corresponding author.

    The researchers have found that the size of the holes in the brochosome that give it a hollow, soccer ball-like appearance is extremely important. The size is consistent across leafhopper species, no matter the size of the insect’s body. The brochosomes are roughly 600 nanometers in diameter — about half the size of a single bacterium — and the brochosome pores are around 200 nanometers.
    “That makes us ask a question,” Wong said. “Why this consistency? What is the secret of having brochosomes of about 600 nanometers with about 200-nanometer pores? Does that serve some purpose?”
    The researchers found the unique design of brochosomes serves a dual purpose — absorbing ultraviolet (UV) light, which reduces visibility to predators with UV vision, such as birds and reptiles, and scattering visible light, creating an anti-reflective shield against potential threats. The size of the holes is perfect for absorbing light at the ultraviolet frequency.
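    As a quick back-of-the-envelope check on that statement (the band edges below are standard definitions, not data from the study), the reported 200-nanometer pores are a sizeable fraction of an ultraviolet wavelength but much smaller relative to visible light:

    ```python
    # Compare the reported brochosome pore diameter with UV and visible
    # wavelength bands (band edges are standard values, not study data).
    pore_nm = 200
    bands = {"UV-B": (280, 315), "UV-A": (315, 400), "visible": (400, 700)}
    for name, (short, long) in bands.items():
        print(f"{name}: pore/wavelength = {pore_nm/long:.2f} to {pore_nm/short:.2f}")
    ```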
    This potentially could lead to a variety of applications for humans using synthetic brochosomes, such as more efficient solar energy harvesting systems, coatings that protect pharmaceuticals from light-induced damage, advanced sunscreens for better skin protection against sun damage and even cloaking devices, researchers said. To test this, the team first had to make synthetic brochosomes, a major challenge in and of itself.
    In their 2017 study, the researchers mimicked some features of brochosomes, particularly the dimples and their distribution, using synthetic materials. This allowed them to begin understanding the optical properties. However, they were only able to make something that looked like brochosomes, not an exact replica.
    “This is the first time we are able to make the exact geometry of the natural brochosome,” Wong said, explaining that the researchers were able to create scaled synthetic replicas of the brochosome structures by using advanced 3D-printing technology.

    They printed a scaled-up version that was 20,000 nanometers in size, or roughly one-fifth the diameter of a human hair. Using 3D printing, the researchers precisely replicated the shape and morphology, as well as the number and placement of pores, producing still-small faux brochosomes that were large enough to characterize optically.
    They used a micro-Fourier transform infrared (FTIR) spectrometer to examine how the brochosomes interacted with infrared light of different wavelengths, helping the researchers understand how the structures manipulate light.
    Next, the researchers said they plan to improve the synthetic brochosome fabrication to enable production at a scale closer to the size of natural brochosomes. They will also explore additional applications for synthetic brochosomes, such as information encryption, where brochosome-like structures could be used as part of an encryption system where data is only visible under certain light wavelengths.
    Wang noted that their brochosome work demonstrates the value of a biomimetic research approach, where scientists look to nature for inspiration.
    “Nature has been a good teacher for scientists to develop novel advanced materials,” Wang said. “In this study, we have just focused on one insect species, but there are many more amazing insects out there that are waiting for material scientists to study, and they may be able to help us solve various engineering problems. They are not just bugs; they are inspirations.”
    Along with Wong and Wang from Penn State, other researchers on the study include Sheng Shen, professor of mechanical engineering, and Zhuo Li, doctoral candidate in mechanical engineering, both at Carnegie Mellon University, who contributed to the simulations in this study. Wang and Li contributed equally to this work, for which the researchers have filed a U.S. provisional patent. The Office of Naval Research supported this research.

  • Two artificial intelligences talk to each other

    Performing a new task based solely on verbal or written instructions, and then describing it to others so that they can reproduce it, is a cornerstone of human communication that still resists artificial intelligence (AI). A team from the University of Geneva (UNIGE) has succeeded in modelling an artificial neural network capable of this cognitive prowess. After learning and performing a series of basic tasks, this AI was able to provide a linguistic description of them to a “sister” AI, which in turn performed them. These promising results, especially for robotics, are published in Nature Neuroscience.
    Performing a new task without prior training, on the sole basis of verbal or written instructions, is a unique human ability. What’s more, once we have learned the task, we are able to describe it so that another person can reproduce it. This dual capacity distinguishes us from other species which, to learn a new task, need numerous trials accompanied by positive or negative reinforcement signals, without being able to communicate it to other members of their species.
    A sub-field of artificial intelligence (AI) — natural language processing — seeks to recreate this human faculty, with machines that understand and respond to vocal or textual data. This technique is based on artificial neural networks, inspired by our biological neurons and by the way they transmit electrical signals to each other in the brain. However, the neural calculations that would make it possible to achieve the cognitive feat described above are still poorly understood.
    “Currently, conversational agents using AI are capable of integrating linguistic information to produce text or an image. But, as far as we know, they are not yet capable of translating a verbal or written instruction into a sensorimotor action, and even less explaining it to another artificial intelligence so that it can reproduce it,” explains Alexandre Pouget, full professor in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine.
    A model brain
    The researcher and his team have succeeded in developing an artificial neuronal model with this dual capacity, albeit with prior training. “We started with an existing model of artificial neurons, S-Bert, which has 300 million neurons and is pre-trained to understand language. We ‘connected’ it to another, simpler network of a few thousand neurons,” explains Reidar Riveland, a PhD student in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine, and first author of the study.
    In the first stage of the experiment, the neuroscientists trained this network to simulate Wernicke’s area, the part of our brain that enables us to perceive and interpret language. In the second stage, the network was trained to reproduce Broca’s area, which, under the influence of Wernicke’s area, is responsible for producing and articulating words. The entire process was carried out on conventional laptop computers. Written instructions in English were then transmitted to the AI.
    For example: pointing to the location — left or right — where a stimulus is perceived; responding in the opposite direction of a stimulus; or, more complex still, choosing the brighter of two visual stimuli that differ slightly in contrast. The scientists then evaluated the results of the model, which simulated the intention of moving, or in this case pointing. “Once these tasks had been learned, the network was able to describe them to a second network — a copy of the first — so that it could reproduce them. To our knowledge, this is the first time that two AIs have been able to talk to each other in a purely linguistic way,” says Alexandre Pouget, who led the research.
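    To make the setup concrete, the sketch below is an architectural simplification under assumed dimensions (it is not the published model): a frozen, pre-trained sentence encoder such as S-Bert supplies an instruction embedding, and a small recurrent network maps that embedding plus the visual stimuli to a motor decision, with a second head producing an embedding that a copy of the network could decode.

    ```python
    # Architectural sketch only (assumed sizes, not the published network):
    # a frozen pretrained sentence encoder feeds a small recurrent
    # "sensorimotor" network that turns an instruction plus visual input
    # into a motor output (e.g., point left or right).
    import torch
    import torch.nn as nn

    class InstructedSensorimotorNet(nn.Module):
        def __init__(self, lang_dim=768, stim_dim=32, hidden_dim=256, act_dim=2):
            super().__init__()
            self.rnn = nn.GRU(lang_dim + stim_dim, hidden_dim, batch_first=True)
            self.motor_head = nn.Linear(hidden_dim, act_dim)   # e.g. point left/right
            self.lang_head = nn.Linear(hidden_dim, lang_dim)   # describe the task back

        def forward(self, instruction_emb, stimuli):
            # instruction_emb: (B, lang_dim) from the frozen language model
            # stimuli:         (B, T, stim_dim) time series of visual input
            lang = instruction_emb.unsqueeze(1).expand(-1, stimuli.size(1), -1)
            h, _ = self.rnn(torch.cat([lang, stimuli], dim=-1))
            action = self.motor_head(h[:, -1])        # motor decision
            description = self.lang_head(h[:, -1])    # embedding a sibling net could decode
            return action, description
    ```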
    For future humanoids
    This model opens new horizons for understanding the interaction between language and behaviour. It is particularly promising for the robotics sector, where the development of technologies that enable machines to talk to each other is a key issue. “The network we have developed is very small. Nothing now stands in the way of developing, on this basis, much more complex networks that would be integrated into humanoid robots capable of understanding us but also of understanding each other,” conclude the two researchers.

  • Where quantum computers can score

    The travelling salesman problem is considered a prime example of a combinatorial optimisation problem. Now a Berlin team led by theoretical physicist Prof. Dr. Jens Eisert of Freie Universität Berlin and HZB has shown that a certain class of such problems can actually be solved better and much faster with quantum computers than with conventional methods.
    Quantum computers use so-called qubits which, unlike the bits in conventional logic circuits, are not restricted to being either zero or one but can exist in superpositions of both states. These qubits are realised by highly cooled atoms, ions or superconducting circuits, and it is still physically very complex to build a quantum computer with many qubits. However, mathematical methods can already be used to explore what fault-tolerant quantum computers could achieve in the future. “There are a lot of myths about it, and sometimes a certain amount of hot air and hype. But we have approached the issue rigorously, using mathematical methods, and delivered solid results on the subject. Above all, we have clarified in what sense there can be any advantages at all,” says Prof. Dr. Jens Eisert, who heads a joint research group at Freie Universität Berlin and Helmholtz-Zentrum Berlin.
    The well-known problem of the travelling salesman serves as a prime example: A traveller has to visit a number of cities and then return to his home town. Which is the shortest route? Although this problem is easy to understand, it becomes increasingly complex as the number of cities increases and computation time explodes. The travelling salesman problem stands for a group of optimisation problems that are of enormous economic importance, whether they involve railway networks, logistics or resource optimisation. Good enough solutions can be found using approximation methods.
    The team led by Jens Eisert and his colleague Jean-Pierre Seifert has now used purely analytical methods to evaluate how a quantum computer with qubits could solve this class of problems. A classic thought experiment with pen and paper and a lot of expertise. “We simply assume, regardless of the physical realisation, that there are enough qubits and look at the possibilities of performing computing operations with them,” explains Vincent Ulitzsch, a PhD student at the Technical University of Berlin. In doing so, they unveiled similarities to a well-known problem in cryptography, i.e. the encryption of data. “We realised that we could use the Shor algorithm to solve a subclass of these optimisation problems,” says Ulitzsch. This means that the computing time no longer “explodes” with the number of cities (exponentially, as 2^N), but only increases polynomially, i.e. as N^x, where x is a constant. The solution obtained in this way is also qualitatively much better than the approximate solution using the conventional algorithm.
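    The practical difference between the two growth rates is easy to see with a toy calculation (the exponent x = 3 below is an arbitrary placeholder, not a value from the paper):

    ```python
    # Toy comparison of the scaling claim: exponential (2**N) versus
    # polynomial (N**x) growth in the number of cities N.
    for n in (10, 20, 40, 80):
        print(f"N={n:>3}: 2**N = {2**n:.3e}   N**3 = {n**3:,}")
    ```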
    “We have shown that for a specific but very important and practically relevant class of combinatorial optimisation problems, quantum computers have a fundamental advantage over classical computers for certain instances of the problem,” says Eisert.

  • Projection mapping leaves the darkness behind

    Images projected onto objects in the real world create impressive displays that educate and entertain. However, current projection mapping systems all have one common limitation: they only work well in the dark. In a study recently published in IEEE Transactions on Visualization and Computer Graphics, researchers from Osaka University suggest a way to bring projection mapping “into the light.”
    Conventional projection mapping, which turns any three-dimensional surface into an interactive display, requires darkness because any illumination in the surroundings also illuminates the surface of the target object used for display. This means that black and dark colors appear too bright and can’t be displayed properly. In addition, the projected images always look like they are glowing, but not all real objects are luminous, which restricts the range of objects that can be displayed. Displays in dark environments have another disadvantage. Multiple viewers can interact with an illuminated scene, but they are less able to interact with each other in the dark environment.
    “To get around this problem, we use projectors to reproduce normal illumination on every part of the room except the display object itself,” says Masaki Takeuchi, lead author of the study. “In essence, we create the illusion of global illumination without using actual global illumination.”
    Projecting global illumination requires a set of techniques that differs from those of conventional projection mapping. The research team use several standard projectors to illuminate the room along with a projector with a wide aperture and large-format lens to soften the crisp edges of shadows. These luminaire projectors illuminate the environment, but the target object remains in shadow. Conventional texture projectors are then used to map the texture onto its shadowed surface.
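    A highly simplified sketch of the masking idea (a construction for illustration, not the Osaka pipeline) is to split each frame into a luminaire image that lights everything except the object’s silhouette and a texture image that fills only that silhouette:

    ```python
    # Simplified sketch (assumed interfaces, not the authors' system):
    # luminaire projectors light the room but leave the display object's
    # silhouette dark, while a texture projector fills only that silhouette.
    import numpy as np

    def split_frames(object_mask: np.ndarray, texture: np.ndarray,
                     room_brightness: float = 1.0):
        """object_mask: (H, W) boolean silhouette of the display object as
        seen from the projector; texture: (H, W, 3) desired appearance."""
        luminaire = np.full(texture.shape, room_brightness, dtype=np.float32)
        luminaire[object_mask] = 0.0                   # keep the object in shadow
        textured = np.zeros_like(texture, dtype=np.float32)
        textured[object_mask] = texture[object_mask]   # paint only the object
        return luminaire, textured
    ```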
    The researchers built a prototype environment and evaluated the performance of their approach. One aspect they evaluated was whether the objects were perceived by humans in aperture-color mode (where the colors appear to radiate from the object itself) or surface-color mode (in which the light appears to be reflected from a colored surface).
    “To our knowledge, we are the first to consider this,” says Daisuke Iwai, senior author of the study. “However, we believe it is fundamental for producing realistic environments.”
    The researchers found that, using their method, they could project texture images onto objects without making the object appear to glow. Instead, the textures were perceived to be the true colors of the object’s surface.
    In future, the researchers plan to add more projectors to handle the complex illumination in the areas next to the display object. Eventually, they aim to produce scenes that are indistinguishable from real-world three-dimensional scenes. They believe that this approach will enable visual design environments for industrial products or packaging, in which the participants can interact not only with their design under natural light but also with each other, facilitating communication and improving design performance.

  • Holographic message encoded in simple plastic

    There are many ways to store data — digitally, on a hard disk, or using analogue storage technology, for example as a hologram. In most cases, it is technically quite complicated to create a hologram: High-precision laser technology is normally used for this.
    However, if the aim is simply to store data in a physical object, then holography can be done quite easily, as has now been demonstrated at TU Wien: A 3D printer can be used to produce a panel from normal plastic in which a QR code can be stored, for example. The message is read using terahertz rays — electromagnetic radiation that is invisible to the human eye.
    The hologram as a data storage device
    A hologram is completely different from an ordinary image. In an ordinary image, each pixel has a clearly defined position. If you tear off a piece of the picture, a part of the content is lost.
    In a hologram, however, the image is formed by contributions from all areas of the hologram simultaneously. If you take away a piece of the hologram, the rest can still create the complete image (albeit perhaps a blurrier version). With the hologram, the information is not stored pixel by pixel, but rather, all of the information is spread out over the whole hologram.
    “We have applied this principle to terahertz beams,” says Evan Constable from the Institute of Solid State Physics at TU Wien. “These are electromagnetic rays in the range of around one hundred to several thousand gigahertz, comparable to the radiation of a cell phone or a microwave oven — but with a significantly higher frequency.”
    This terahertz radiation is sent to a thin plastic plate. This plate is almost transparent to the terahertz rays, but it has a higher refractive index than the surrounding air, so at each point of the plate, it changes the incident wave a little. “A wave then emanates from each point of the plate, and all these waves interfere with each other,” says Evan Constable. “If you have adjusted the thickness of the plate in just the right way, point by point, then the superposition of all these waves produces exactly the desired image.”
    It is similar to throwing lots of little stones into a pond in a precisely calculated way so that the water waves from all these stones add up to a very specific overall wave pattern.
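    As a generic sketch of how such a point-by-point profile can be computed (a common iterative Fourier approach, offered as an assumption rather than the TU Wien method), one optimises a phase pattern whose far field reproduces the target image and then converts each phase delay into a local thickness via Δφ = 2π(n - 1)t/λ:

    ```python
    # Generic sketch (not the TU Wien code): a Gerchberg-Saxton-style
    # iteration finds a phase-only hologram, and the phase is converted to
    # plate thickness via delta_phi = 2*pi*(n - 1)*t / wavelength.
    import numpy as np

    def thickness_profile(target_intensity, wavelength_m, n_plastic=1.5, iters=200):
        target_amp = np.sqrt(target_intensity)
        field = np.exp(1j * 2 * np.pi * np.random.rand(*target_amp.shape))
        for _ in range(iters):
            far = np.fft.fft2(field)                          # propagate to far field
            far = target_amp * np.exp(1j * np.angle(far))     # impose the target image
            field = np.exp(1j * np.angle(np.fft.ifft2(far)))  # keep phase only
        phase = np.angle(field) % (2 * np.pi)
        # Convert the phase delay at each point into printed plate thickness.
        return phase * wavelength_m / (2 * np.pi * (n_plastic - 1))
    ```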

    A piece of cheap plastic as a high-tech storage unit for valuable items
    In this way, it was possible to encode a Bitcoin wallet address (consisting of 256 bits) in a piece of plastic. By shining terahertz rays of the correct wavelength through this plastic plate, a terahertz ray image is created that produces exactly the desired code. “In this way, you can securely store a value of tens of thousands of euros in an object that only costs a few cents,” says Evan Constable.
    In order for the plate to generate the correct code, one first has to calculate how thick the plate has to be at each point, so that it changes the terahertz wave in exactly the right way. Evan Constable and his collaborators made the code for obtaining this thickness profile available for free on GitHub. “Once you have this thickness profile, all you need is an ordinary 3D printer to print the plate and you have the desired information stored holographically,” explains Constable. The aim of the research work was not only to make holography with terahertz waves possible, but also to demonstrate how well the technology for working with these waves has progressed and how precisely this still rather unusual range of electromagnetic radiation can already be used today.

  • New technique helps AI tell when humans are lying

    Researchers have developed a new training tool to help artificial intelligence (AI) programs better account for the fact that humans don’t always tell the truth when providing personal information. The new tool was developed for use in contexts when humans have an economic incentive to lie, such as applying for a mortgage or trying to lower their insurance premiums.
    “AI programs are used in a wide variety of business contexts, such as helping to determine how large of a mortgage an individual can afford, or what an individual’s insurance premiums should be,” says Mehmet Caner, co-author of a paper on the work. “These AI programs generally use mathematical algorithms driven solely by statistics to do their forecasting. But the problem is that this approach creates incentives for people to lie, so that they can get a mortgage, lower their insurance premiums, and so on.
    “We wanted to see if there was some way to adjust AI algorithms in order to account for these economic incentives to lie,” says Caner, who is the Thurman-Raytheon Distinguished Professor of Economics in North Carolina State University’s Poole College of Management.
    To address this challenge, the researchers developed a new set of training parameters that can be used to inform how the AI teaches itself to make predictions. Specifically, the new training parameters focus on recognizing and accounting for a human user’s economic incentives. In other words, the AI trains itself to recognize circumstances in which a human user might lie to improve their outcomes.
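    The paper’s exact formulation is not spelled out here, but one generic way to build such an incentive into training (a sketch in the spirit of strategic classification, with the cost budget and feature split being assumptions) is to let each sample’s manipulable features shift toward a better predicted outcome before the model is fit against them:

    ```python
    # Sketch only (not the method in the paper): fit a logistic model
    # against "best-response" inputs, where applicants exaggerate their
    # manipulable features within a cost budget to improve their outcome.
    import numpy as np

    def fit_incentive_aware(X, y, manipulable, budget=0.5, lr=0.1, epochs=200):
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            X_best = X.copy()
            # Each applicant nudges manipulable features toward approval.
            X_best[:, manipulable] += budget * np.sign(w[manipulable])
            p = 1.0 / (1.0 + np.exp(-X_best @ w))      # predicted approval probability
            w -= lr * X_best.T @ (p - y) / len(y)      # gradient step on log-loss
        return w
    ```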
    In proof-of-concept simulations, the modified AI was better able to detect inaccurate information from users.
    “This effectively reduces a user’s incentive to lie when submitting information,” Caner says. “However, small lies can still go undetected. We need to do some additional work to better understand where the threshold is between a ‘small lie’ and a ‘big lie.’”
    The researchers are making the new AI training parameters publicly available, so that AI developers can experiment with them.
    “This work shows we can improve AI programs to reduce economic incentives for humans to lie,” Caner says. “At some point, if we make the AI clever enough, we may be able to eliminate those incentives altogether.”

  • Advance for soft robotics manufacturing, design

    Soft robots use pliant materials such as elastomers to interact safely with the human body and other challenging, delicate objects and environments. A team of Rice University researchers has developed an analytical model that can predict the curing time of platinum-catalyzed silicone elastomers as a function of temperature. The model could help reduce energy waste and improve throughput for elastomer-based components manufacturing.
    “In our study, we looked at elastomers as a class of materials that enables soft robotics, a field that has seen a huge surge in growth over the past decade,” said Daniel Preston, a Rice assistant professor of mechanical engineering and corresponding author on a study published in Cell Reports Physical Science. “While there is some related research on materials like epoxies and even on several specific silicone elastomers, until now there was no detailed quantitative account of the curing reaction for many of the commercially available silicone elastomers that people are actually using to make soft robots. Our work fills that gap.”
    The platinum-catalyzed silicone elastomers that Preston and his team studied typically start out as two viscoelastic liquids that, when mixed together, transform over time into a rubbery solid. As a liquid mixture, they can be poured into intricate molds and thus used for casting complex components. The curing process can occur at room temperature, but it can also be sped up using heat.
    Manufacturing processes involving elastomers have typically relied on empirical estimates for temperature and duration to control the curing process. However, this ballpark approach makes it difficult to predict how elastomers will behave under varying curing conditions. Having a quantitative framework to determine exactly how temperature impacts curing speed will enable manufacturers to maximize efficiency and reduce waste.
    “Previously, using existing models to predict elastomers’ curing behavior under varying temperature conditions was a much more challenging task,” said Te Faye Yap, a graduate student in the Preston lab who is lead author on the study. “There’s a huge need to make manufacturing processes more efficient and reduce waste, both in terms of energy consumption and materials.”
    To understand how temperature impacts the curing process, the researchers used a rheometer — an instrument that measures the mechanical properties of liquids and soft solids — to analyze the curing behavior of six commercially available platinum-catalyzed elastomers.
    “We were able to develop a model based on what is called the Arrhenius relationship that relates this curing reaction rate to the temperature at which the elastomer is being cured,” Preston said. “Now we have a really nice quantitative understanding of exactly how temperature impacts curing speed.”
    The Arrhenius framework, a formula that relates the rate of chemical reactions to temperature, has been used in a variety of contexts such as semiconductor processing and virus inactivation. Preston and his group have used the framework in some of their prior work and found it also applies to curing reactions for materials like epoxies as described in previous studies. In this study, the researchers used the Arrhenius framework along with rheological data to develop an analytical model that could directly impact manufacturing practices.
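    As a minimal illustration, the Arrhenius relationship k = A·exp(-Ea/RT) implies that cure time, which scales roughly as 1/k, can be extrapolated from one temperature to another once the activation energy is known (the activation energy below is a placeholder, not a value fitted in the study):

    ```python
    # Minimal Arrhenius sketch: extrapolate cure time to a new temperature.
    # The activation energy is a placeholder, not a fitted value from the study.
    import numpy as np

    R = 8.314  # gas constant, J/(mol*K)

    def cure_time_at(T_new_C, T_ref_C, t_ref_min, Ea_J_per_mol=60e3):
        """Cure time scales as 1/k with k = A * exp(-Ea / (R * T))."""
        T_new, T_ref = T_new_C + 273.15, T_ref_C + 273.15
        return t_ref_min * np.exp(Ea_J_per_mol / R * (1 / T_new - 1 / T_ref))

    # Example: a 240-minute room-temperature cure extrapolated to 70 degrees C.
    print(f"{cure_time_at(70, 25, 240):.0f} min")
    ```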

    “In this work, we really probed the curing reaction as a function of the temperature of the elastomer, but we also looked in depth at the mechanical properties of the elastomers when cured at elevated temperatures meant to achieve these higher throughputs and curing speeds,” Preston said.
    The researchers conducted mechanical testing on elastomer samples that were cured at room temperature and at elevated temperatures to see whether heating treatments impact the materials’ mechanical properties.
    “We found that exposing the elastomers to 70 degrees Celsius (158 Fahrenheit) does not alter the tensile and compressive properties of the material when compared to components that were cured at room temperature,” Yap said. “Moreover, to demonstrate the usage of accelerated curing when making a device, we fabricated soft, pneumatically actuated grippers at both elevated and room temperature conditions, and we observed no difference in the performance of the grippers upon pressurizing.”
    While temperature did not seem to have an effect on the elastomers’ ability to withstand mechanical stress, the researchers found that it did impact adhesion between components.
    “Say we’ve already cured a few different components that need to be assembled together into the complete, soft robotic system,” Preston said. “When we then try to adhere these components to each other, there’s an impact on the adhesion or the ability to stick them together. In this case, that is greatly affected by the extent of curing that has occurred before we tried to bond.”
    The research advances scientific understanding of how temperature can be used to manipulate fabrication processes involving elastomers, which could open up the soft robotics design space for new or improved applications. One key area of interest is the biomedical industry.

    “Surgical robots often benefit from being compliant or soft in nature, because operating inside the human body means you want to minimize the risk of puncture or bruising to tissue or organs,” Preston said. “So a lot of the robots that now operate inside the human body are moving to softer architectures and are benefiting from that. Some researchers have also started to look into using soft robotic systems to help reposition patients confined to a bed for long periods of time to try to avoid putting pressure on certain areas.”
    Other areas of potential use for soft robotics are agriculture (for instance picking fruits or vegetables that are fragile or bruise easily), disaster relief (search-and-rescue operations in impacted areas with limited or difficult access) and research (collecting or handling samples).
    “This study provides a framework that could expand the design space for manufacturing with thermally cured elastomers to create complex structures that exhibit high elasticity, which can be used to develop medical devices, shock absorbers and soft robots,” Yap said.
    Silicone elastomers’ unique properties — biocompatibility, flexibility, thermal resistance, shock absorption, insulation and more — will continue to be an asset in a range of industries, and the current research can help expand and improve their use beyond current capabilities.
    The research was supported by the National Science Foundation (2144809), the Rice Academy of Fellows, NASA (80NSSC21K1276), the National GEM Consortium and the US Department of Energy through an appointment with the Energy Efficiency & Renewable Energy Science, Technology and Policy Program administered by the Oak Ridge Institute for Science and Education (ORISE) and managed by Oak Ridge Associated Universities (ORAU) under contract number DE-SC0014664.