More stories

  •

    ‘On the Move’ examines how climate change will alter where people live

    On the Move. Abrahm Lustgarten. Farrar, Straus and Giroux, $30.

    Ellen Herdell’s nerves were nearing a breaking point. The fortysomething, lifelong Californian had noticed her home was increasingly threatened by wildfires. After relatives lost their house to a blaze and the constant threat traumatized her 9-year-old daughter, Herdell found herself up at 3 a.m. one night in 2020 searching Zillow for homes in Vermont.

    She’s not alone. Across the United States, people facing extreme fires, storms, floods and heat are looking for the escape hatch. In On the Move, Abrahm Lustgarten examines who these people are, where they live, where climate change may cause them to move and how this reshuffling will impact the country (SN: 5/12/20).

  •

    The math problem that took nearly a century to solve: Secret to Ramsey numbers

    We’ve all been there: staring at a math test with a problem that seems impossible to solve. What if finding the solution to a problem took almost a century? For mathematicians who dabble in Ramsey theory, this is very much the case. In fact, little progress had been made in solving Ramsey problems since the 1930s.
    Now, University of California San Diego researchers Jacques Verstraete and Sam Mattheus have found the answer to r(4,t), a longstanding Ramsey problem that has perplexed the math world for decades.
    What was Ramsey’s problem, anyway?
    In mathematical parlance, a graph is a collection of points and the lines between those points. Ramsey theory suggests that if the graph is large enough, you’re guaranteed to find some kind of order within it — either a set of points with no lines between them or a set of points with all possible lines between them (these sets are called “cliques”). This is written as r(s,t), where s is the number of points that are all connected and t is the number of points with no connections between them.
    To those of us who don’t deal in graph theory, the most well-known Ramsey problem, r(3,3), is sometimes called “the theorem on friends and strangers” and is explained by way of a party: in a group of six people, you will find at least three people who all know each other or three people who all don’t know each other. The answer to r(3,3) is six.
    “It’s a fact of nature, an absolute truth,” Verstraete states. “It doesn’t matter what the situation is or which six people you pick — you will find three people who all know each other or three people who all don’t know each other. You may be able to find more, but you are guaranteed that there will be at least three in one clique or the other.”
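    The friends-and-strangers fact is small enough to verify by exhaustive search. The sketch below (a self-contained check, not from the article) colors every pair in a group as “know” or “don’t know” and confirms that a single-colored triple is unavoidable with six people but not with five:

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    # coloring maps each pair (i, j) to 0 ("strangers") or 1 ("friends")
    for a, b, c in combinations(range(n), 3):
        if coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]:
            return True
    return False

def every_coloring_has_triangle(n):
    # Try all 2-colorings of the complete graph on n points
    edges = list(combinations(range(n), 2))
    for bits in product([0, 1], repeat=len(edges)):
        if not has_mono_triangle(n, dict(zip(edges, bits))):
            return False
    return True

print(every_coloring_has_triangle(5))  # False: five people are not enough
print(every_coloring_has_triangle(6))  # True: r(3,3) = 6
```

    With five people, seating them in a circle and calling only the neighbours “friends” avoids any all-friend or all-stranger triple, which is why the answer is exactly six.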
    What happened after mathematicians found that r(3,3) = 6? Naturally, they wanted to know r(4,4), r(5,5), and r(4,t) where the number of points that are not connected is variable. The solution to r(4,4) is 18 and is proved using a theorem created by Paul Erdös and George Szekeres in the 1930s.

    Currently r(5,5) is still unknown.
    A good problem fights back
    Why is something so simple to state so hard to solve? It turns out to be more complicated than it appears. Let’s say you knew the solution to r(5,5) was somewhere between 40 and 50. If you started with 45 points, there would be more than 10^234 graphs to consider!
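    The count is easy to reproduce: each of the 990 pairs among 45 points either has a line or not, so the number of graphs is 2 raised to the number of pairs. A few lines of Python (illustrative, not from the paper) confirm the scale:

```python
from math import comb

points = 45
edges = comb(points, 2)       # 990 possible lines between 45 points
num_graphs = 2 ** edges       # each line is either present or absent

print(edges)                  # 990
print(len(str(num_graphs)))   # 299 digits, i.e. well over 10**234
```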
    “Because these numbers are so notoriously difficult to find, mathematicians look for estimations,” Verstraete explained. “This is what Sam and I have achieved in our recent work. How do we find not the exact answer, but the best estimates for what these Ramsey numbers might be?”
    Math students learn about Ramsey problems early on, so r(4,t) has been on Verstraete’s radar for most of his professional career. In fact, he first saw the problem in print in Erdös on Graphs: His Legacy of Unsolved Problems, written by two UC San Diego professors, Fan Chung and the late Ron Graham. The problem is a conjecture from Erdös, who offered $250 to the first person who could solve it.
    “Many people have thought about r(4,t) — it’s been an open problem for over 90 years,” Verstraete said. “But it wasn’t something that was at the forefront of my research. Everybody knows it’s hard and everyone’s tried to figure it out, so unless you have a new idea, you’re not likely to get anywhere.”
    Then about four years ago, Verstraete was working on a different Ramsey problem with a mathematician at the University of Illinois-Chicago, Dhruv Mubayi. Together they discovered that pseudorandom graphs could advance the current knowledge on these old problems.

    In 1947, Erdös discovered that using random graphs could give good lower bounds on Ramsey problems. What Verstraete and Mubayi discovered was that sampling from pseudorandom graphs frequently gives better bounds on Ramsey numbers than random graphs. These bounds — upper and lower limits on the possible answer — tightened the range of estimations they could make. In other words, they were getting closer to the truth.
    In 2019, to the delight of the math world, Verstraete and Mubayi used pseudorandom graphs to solve r(3,t). However, Verstraete struggled to build a pseudorandom graph that could help solve r(4,t).
    He began pulling in different areas of math outside of combinatorics, including finite geometry, algebra and probability. Eventually he joined forces with Mattheus, a postdoctoral scholar in his group whose background was in finite geometry.
    “It turned out that the pseudorandom graph we needed could be found in finite geometry,” Verstraete stated. “Sam was the perfect person to come along and help build what we needed.”
    Once they had the pseudorandom graph in place, they still had to puzzle out several pieces of math. It took almost a year, but eventually they realized they had a solution: r(4,t) is close to a cubic function of t. If you want a party where there will always be four people who all know each other or t people who all don’t know each other, you will need roughly t^3 people present. There is a small asterisk (actually an o) because, remember, this is an estimate, not an exact answer. But t^3 is very close to the exact answer.
    The findings are currently under review with the Annals of Mathematics.
    “It really did take us years to solve,” Verstraete stated. “And there were many times where we were stuck and wondered if we’d be able to solve it at all. But one should never give up, no matter how long it takes.”
    Verstraete emphasizes the importance of perseverance — something he reminds his students of often. “If you find that the problem is hard and you’re stuck, that means it’s a good problem. Fan Chung said a good problem fights back. You can’t expect it just to reveal itself.”
    Verstraete knows such dogged determination is well-rewarded: “I got a call from Fan saying she owes me $250.”

  •

    100 kilometers of quantum-encrypted transfer

    Researchers at DTU have successfully distributed a quantum-secure key using a method called Continuous Variable Quantum Key Distribution (CV QKD). They have made the method work over a record 100 km of fibre — the longest distance ever achieved with CV QKD. The advantage of the method is that it can be applied to the existing Internet infrastructure.
    Quantum computers threaten the existing algorithm-based encryption schemes that currently secure data transfers against eavesdropping and surveillance. Quantum computers are not yet powerful enough to break them, but it is only a matter of time. If a quantum computer succeeds in breaking the most secure algorithms, it leaves an open door to all data connected via the internet. This has accelerated the development of a new encryption method based on the principles of quantum physics.
    But to succeed, researchers must overcome one of the challenges of quantum mechanics — ensuring consistency over longer distances. Continuous Variable Quantum Key Distribution has so far worked best over short distances.
    “We have achieved a wide range of improvements, especially regarding the loss of photons along the way. In this experiment, published in Science Advances, we securely distributed a quantum-encrypted key 100 kilometres via fibre optic cable. This is a record distance with this method,” says Tobias Gehring, an associate professor at DTU, who, together with a group of researchers at DTU, aims to be able to distribute quantum-encrypted information around the world via the internet.
    Secret keys from quantum states of light
    When data needs to be sent from A to B, it must be protected. Encryption combines data with a secure key distributed between sender and receiver so both can access the data. A third party must not be able to figure out the key while it is being transmitted; otherwise, the encryption will be compromised. Key exchange is, therefore, essential in encrypting data.
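    To see why a shared key matters, consider the simplest symmetric scheme, the one-time pad: sender and receiver combine the data with the same random key. The sketch below is a generic illustration (the message and key are made up, and this is not DTU’s protocol):

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each data byte with the matching key byte
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # the secret both parties must share

ciphertext = xor_bytes(message, key)     # sender encrypts
recovered = xor_bytes(ciphertext, key)   # receiver decrypts with the same key
assert recovered == message
```

    Anyone who learns the key can decrypt the ciphertext, which is why detecting eavesdropping during the key exchange is the whole point of QKD.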
    Quantum Key Distribution (QKD) is an advanced technology that researchers are developing for exactly this kind of key exchange. The technology ensures the exchange of cryptographic keys by using light from quantum mechanical particles called photons.

    When a sender sends information encoded in photons, the quantum mechanical properties of the photons are exploited to create a unique key for the sender and receiver. Attempts by others to measure or observe photons in a quantum state will instantly change their state. It is therefore physically impossible to measure the light without disturbing the signal.
    “It is impossible to make a copy of a quantum state, as when making a copy of an A4 sheet — if you try, it will be an inferior copy. That’s what ensures that it is not possible to copy the key. This can protect critical infrastructure such as health records and the financial sector from being hacked,” explains Tobias Gehring.
    Works via existing infrastructure
    The Continuous Variable Quantum Key Distribution (CV QKD) technology can be integrated into the existing internet infrastructure.
    “The advantage of using this technology is that we can build a system that resembles what optical communication already relies on.”
    The backbone of the internet is optical communication. It works by sending data via infrared light running through optical fibres. They function as light guides laid in cables, ensuring we can send data worldwide. Data can be sent faster and over longer distances via fibre optic cables, and light signals are less susceptible to interference, which is called noise in technical terms.

    “It is a standard technology that has been used for a long time. So, you don’t need to invent anything new to be able to use it to distribute quantum keys, and it can make implementation significantly cheaper. And we can operate at room temperature,” explains Tobias Gehring, adding:
    “But CV QKD technology works best over shorter distances. Our task is to increase the distance. And the 100 kilometres is a big step in the right direction.”
    Noise, errors and assistance from machine learning
    The researchers succeeded in increasing the distance by addressing three factors that limit their system in exchanging the quantum-encrypted keys over longer distances:
    Machine learning enabled earlier detection of the disturbances affecting the system. Noise, as these disturbances are called, can arise, for example, from electromagnetic radiation, which can distort or destroy the quantum states being transmitted. Detecting the noise earlier made it possible to counteract its effect more effectively.
    Furthermore, the researchers have become better at correcting errors that can occur along the way, which can be caused by noise, interference, or imperfections in the hardware.
    “In our upcoming work, we will use the technology to establish a secure communication network between Danish ministries to secure their communication. We will also attempt to generate secret keys between, for example, Copenhagen and Odense to enable companies with branches in both cities to establish quantum-safe communication,” Tobias Gehring says.
    Facts:
    We don’t exactly know what happens — yet.
    Quantum Key Distribution was developed as a concept in 1984 by Bennett and Brassard; the physicist Artur Ekert proposed an entanglement-based variant in 1991, and the first experimental demonstrations of QKD followed in the early 1990s. These contributions have been crucial for developing modern QKD protocols: the sets of rules, procedures and conventions that determine how a device should perform the key exchange.
    Quantum Key Distribution (QKD) is based on a fundamental uncertainty in copying photons in a quantum state. Photons are the quantum mechanical particles that light consists of.
    Photons in a quantum state carry a fundamental uncertainty, meaning it is not possible to know with certainty whether a given state contains one photon or several photons, also called coherent photons. This prevents a hacker from measuring the number of photons, making it impossible to make an exact copy of the state.
    They also carry a fundamental randomness because photons are in multiple states simultaneously, also called superposition. The superposition of photons collapses into a random state when the measurement occurs. This makes it impossible to measure precisely which phase they are in while in superposition.
    Together, these properties make it nearly impossible for a hacker to copy a key without introducing errors, and the system will know if a hacker is trying to break in and can shut down immediately. In other words, it becomes impossible for a hacker to first steal the key and then to avoid the door locking as he tries to put the key in the lock.
    Continuous Variable Quantum Key Distribution (CV QKD) focuses on measuring the smooth properties of quantum states in photons. It can be compared to conveying information in a stream of all the nuances of colours instead of conveying information step by step in each colour.
    Facts:
    The Innovation Fund Denmark, the Danish National Research Foundation, the European Union’s Horizon Europe research and innovation program, the Carlsberg Foundation, and the Czech Science Foundation support the project.
    The research group comprises Adnan A.E. Hajomer, Nitin Jain, Hou-Man Chin, Ivan Derkach, Ulrik L. Andersen, and Tobias Gehring.
    The Danish Quantum Communication Infrastructure (QCI.DK) targets the first deployment of Danish quantum communication technologies in a versatile network supporting real-life Quantum Key Distribution applications.

  •

    I spy with my speedy eye — scientists discover speed of visual perception ranges widely in humans

    Using a blink-and-you’ll-miss-it experiment, researchers from Trinity College Dublin have discovered that individuals differ widely in the rate at which they perceive visual signals. Some people perceive a rapidly changing visual cue at frequencies that others cannot, which means some access more visual information per timeframe than others.
    This discovery suggests some people have an innate advantage in certain settings where response time is crucial, such as in ball sports, or in competitive gaming.
    The rate at which we perceive the world is known as our “temporal resolution,” and in many ways it is similar to the refresh rate of a computer monitor.
    The researchers, from the Department of Zoology in the School of Natural Sciences and the Trinity College Institute of Neuroscience, found that there is considerable variation among people in their temporal resolution, meaning some people effectively see more “images per second” than others.
    To quantify this, the scientists used the “critical flicker fusion threshold,” a measure of the maximum frequency at which an individual can perceive a flickering light source.
    If the light source flickers above a person’s threshold, they will not be able to see that it is flickering, and instead see the light as steady. Some participants in the experiment indicated they saw the light as completely still when it was in fact flashing about 35 times per second, while others were still able to perceive the flashing at rates of over 60 times per second.
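    The threshold logic can be expressed in a couple of lines; the numbers below come from the range reported above, and the function is an illustration rather than the study’s actual procedure:

```python
def perceives_flicker(flicker_hz: float, threshold_hz: float) -> bool:
    # A light flickering at or above the observer's critical flicker
    # fusion threshold looks steady; below the threshold, the flicker
    # remains visible.
    return flicker_hz < threshold_hz

# Two observers at the ends of the reported range, viewing one 50 Hz light
print(perceives_flicker(50.0, 35.0))  # False: the light looks steady
print(perceives_flicker(50.0, 60.0))  # True: the flicker is still visible
```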
    Clinton Haarlem, PhD Candidate in the School of Natural Sciences, is the first author of the article, which has just been published in the journal PLOS ONE. He said: “We also measured temporal resolution on multiple occasions in the same participants and found that even though there is significant variation among individuals, the trait appears to be quite stable over time within individuals.”
    Though our visual temporal resolution is quite stable from day to day in general, a post-hoc analysis did suggest that there may be slightly more variation over time within females than within males.

    “We don’t yet know how this variation in visual temporal resolution might affect our day-to-day lives, but we believe that individual differences in perception speed might become apparent in high-speed situations where one might need to locate or track fast-moving objects, such as in ball sports, or in situations where visual scenes change rapidly, such as in competitive gaming,” added Clinton Haarlem.
    “This suggests that some people may have an advantage over others before they have even picked up a racquet and hit a tennis ball, or grabbed a controller and jumped into some fantasy world online.”
    Andrew Jackson, Professor in Zoology in Trinity’s School of Natural Sciences, said: “What I think is really interesting about this project is how a zoologist, a geneticist and a psychologist can all find different angles to this work. For me as a zoologist the consequences of variation in visual perception likely has profound implications for how predators and prey interact, with various arms-races existing for investment in brain processing power and clever strategies to exploit weaknesses in one’s enemy.”
    Kevin Mitchell, Associate Professor in Developmental Neurobiology in Trinity’s School of Genetics and Microbiology, and the Trinity College Institute of Neuroscience, said: “Because we only have access to our own subjective experience, we might naively expect that everyone else perceives the world in the same way we do. Examples like colour blindness show that isn’t always true, but there are many less well known ways that perception can vary too. This study characterises one such difference — in the ‘frame rate’ of our visual systems. Some people really do seem to see the world faster than others.”

  •

    Study uses artificial intelligence to show how personality influences the expression of our genes

    An international study led by the University of Granada (UGR) using artificial intelligence has shown that our personalities alter the expression of our genes. The findings shed new light on the long-standing mystery of how the mind and body interact.
    The study, published in Molecular Psychiatry (a Nature Portfolio journal), examines how an individual’s personality and underlying outlook on life regulate their gene expression, and thus affect their health and well-being. It is the first study to measure the transcription of the entire genome in relation to human personality.
    The multi- and interdisciplinary study was led by researchers from the Andalusian Interuniversity Research Institute in Data Science and Computational Intelligence (DaSCI), the UGR’s Department of Computer Science and Artificial Intelligence, and the Biohealth Research Institute in Granada (ibs.GRANADA). It was carried out in collaboration with Professor Robert Cloninger (Washington University in St. Louis), researchers from Baylor College of Medicine (Texas, USA) and the Young Finns Study (Finland).
    The international research team (made up of specialists in genetics, medicine, psychology and computer science) used data from the Young Finns Study, an extensive study conducted in the general population of Finland over four decades during which relevant information was collected on participants’ health, physical condition and lifestyle. In addition, participants were subjected to extensive personality assessments that addressed both temperament (habits and emotional reactivity) and character (conscious goals and values). The results showed that certain outlooks on life are conducive to a healthy, fulfilling and long life, while others lead to a stressful, unhealthy and short life.
    The study analysed the regulation of gene expression in these individuals, taking into account three levels of self-awareness that were measured through their combined temperament and character profiles. These levels were designated “unregulated” (individuals dominated by irrational emotions and habits associated with their traditions and obedience to authority), “organised” (self-sufficient individuals capable of intentionally regulating their habits and cooperating with others for mutual benefit) and “creative” (self-transcendent individuals who adapt their habits to live in harmony with others, with nature or with the universe, even if this requires occasional personal sacrifices).
    Two key findings
    As UGR researcher and co-lead author of the study Coral del Val explains: “In our research we made two key discoveries about the expression and organisation of genes according to the personality profiles of these individuals. First, we discovered a network of 4,000 genes that clustered into multiple modules that were expressed in specific regions of the brain. Some of these genes had already been linked in previous studies to the inheritance of human personality. Second, we discovered that the modules formed a functional interaction network capable of orchestrating changes in gene expression in order to adapt to varying internal and external conditions. The modules turned on and off in a flexible manner, facilitating adaptation to the everyday challenges we all face, and choreographing our development.”
    The researchers showed that the changes in the patterns of interaction between these modules were orchestrated by two sub-networks. One network regulated emotional reactivity (anxiety, fear, etc.), while the other regulated what a person perceives as meaningful (e.g. production of concepts and language). “What’s most remarkable is the fact that the networks for emotion and meaning are coordinated by a control centre made up of six genes,” notes Elisa Díaz de la Guardia-Bolívar, the other co-lead author of the study. “It is particularly interesting that we found that the six genes of the control hub are highly preserved throughout evolution, from single-celled organisms to modern humans. This finding confirms their beneficial role in regulating the functioning of all forms of life on Earth,” she adds.

    Identifying these gene networks and the control hub regulating gene expression in humans has practical value because it shows how people can improve the quality of their health, happiness and overall quality of daily life, despite the challenges and stresses we all face.
    The UGR’s Igor Zwir explains: “In previous research, we found significant differences in well-being between people in the three personality groups, depending on their level of self-awareness. Specifically, those with greater self-awareness (the creative group) reported greater well-being compared to the organised and unregulated groups. We have now shown that these levels of self-awareness are also strongly associated with the regulation of gene expression in the same order (creative > organised > unregulated). This suggests that a person can improve their health and well-being by cultivating a more self-transcendent and creative outlook on life.”
    However, he cautions that it remains to be confirmed whether the regulation of gene expression through interventions that enhance self-awareness is the mediating factor in the association between self-awareness and well-being. Nevertheless, treatments that promote greater self-transcendence and mindfulness have also been shown to contribute to improvements in all aspects of health, including physical, mental, social and spiritual well-being. It is therefore plausible that the regulation of gene expression is the real mediator in this association.
    As the researchers predicted, certain types of genes, such as transcription factors, microRNAs and long non-coding RNAs, showed extensive enrichment in the 4,000-gene integrated molecular network. However, the most significant enrichment was observed in a group of RNAs that are thought to have played a crucial role in the origin of cellular life. These RNAs have the ability to form membraneless compartments and carry out chemical reactions, allowing them to adapt rapidly to stress. This process, known as liquid-liquid phase separation (LLPS), creates a comprehensive bioreactor in which the chemicals that are essential for life can be synthesised.
    “We are delighted to discover the important roles of different types of genes in health and personality. It is amazing to see that evolution has preserved genes that are thought to have been important in the origin of life, allowing for the increasing plasticity, complexity and consciousness that we observe in humans. The innovative computational methods used in this project enable us to study complex biological systems in humans in an ethical, non-intrusive and beneficial way, with the aim of understanding how to live healthily,” says Professor Cloninger. He adds: “These findings clearly demonstrate that a person’s mind and body are deeply interconnected. Each influences the other, so they are not separate. It is important to recognise that our future well-being is not entirely determined by our past or present conditions; rather, we can cultivate our own well-being in a creative process full of open-ended possibilities.”

  •

    Cellphone compass can measure tiny concentrations of compounds important for human health

    Nearly every modern cellphone has a built-in compass, or magnetometer, that detects the direction of Earth’s magnetic field, providing critical information for navigation. Now a team of researchers at the National Institute of Standards and Technology (NIST) has developed a technique that uses an ordinary cellphone magnetometer for an entirely different purpose — to measure the concentration of glucose, a marker for diabetes, to high accuracy.
    The same technique, which uses the magnetometer in conjunction with magnetic materials designed to change their shape in response to biological or environmental cues, could be used to rapidly and cheaply measure a host of other biomedical properties for monitoring or diagnosing human disease. The method also has the potential to detect environmental toxins, said NIST scientist Gary Zabow.
    In their proof-of-concept study, Zabow and fellow NIST researcher Mark Ferris clamped to a cellphone a tiny well containing the solution to be tested and a strip of hydrogel — a porous material that swells when immersed in water. The researchers embedded tiny magnetic particles within the hydrogel, which was engineered to react either to the presence of glucose or to pH levels (a measure of acidity) by expanding or contracting. Changing pH levels can be associated with a variety of biological disorders.
    As the hydrogels enlarged or shrunk, they moved the magnetic particles closer to or farther from the cellphone’s magnetometer, which detected the corresponding changes in the strength of the magnetic field. Employing this strategy, the researchers measured glucose concentrations as small as a few millionths of a mole per liter (the mole is the scientific unit for a certain number of atoms or molecules in a substance). Although such high sensitivity is not required for at-home monitoring of glucose levels using a drop of blood, it might in the future enable routine testing for glucose in saliva, which contains a much smaller concentration of the sugar.
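    The geometry behind the measurement can be sketched with a textbook dipole model: the field of a small magnet falls off as the cube of the distance, so even slight hydrogel swelling shifts the field at the magnetometer noticeably. The magnetic moment and distances below are invented for illustration and are not the study’s values:

```python
import math

def on_axis_dipole_field(moment: float, distance: float) -> float:
    # On-axis field of a point magnetic dipole: B = mu0 * m / (2 * pi * r**3)
    mu0 = 4e-7 * math.pi  # vacuum permeability, T*m/A
    return mu0 * moment / (2 * math.pi * distance ** 3)

m = 1e-6                                  # aggregate moment, A*m^2 (made up)
b_near = on_axis_dipole_field(m, 2.0e-3)  # particles 2.0 mm from the sensor
b_far = on_axis_dipole_field(m, 2.2e-3)   # hydrogel swells: 10% farther away

print(b_near / b_far)  # ~1.33: a 10% distance change gives ~33% field change
```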
    The researchers reported their findings in the March 30, 2024 edition of Nature Communications.
    Engineered, or “smart,” hydrogels like the ones the NIST team employed are inexpensive and relatively easy to fabricate, Ferris said, and can be tailored to react to a host of different compounds that medical researchers may want to measure. In their experiments, he and Zabow stacked single layers of two different hydrogels, each of which contracted and expanded at different rates in response to pH or glucose. These bilayers amplified the motion of the hydrogels, making it easier for the magnetometer to track changes in magnetic field strength.
    Because the technique does not require any electronics or power source beyond that of the cellphone nor call for any special processing of the sample, it offers an inexpensive way to conduct testing — even in locations with relatively few resources.

    Future efforts to improve the accuracy of such measurements using cellphone magnetometers might allow detection of DNA strands, specific proteins and histamines — compounds involved in the body’s immune response — at concentrations as low as a few tens of nanomoles (billionths of a mole).
    That improvement could have substantial benefit. For instance, measuring histamines, which are typically detected in urine at concentrations ranging from about 45 to 190 nanomoles per liter, would ordinarily require a 24-hour urine collection and a sophisticated laboratory analysis.
    “An at-home test using a cellphone magnetometer sensitive to nanomolar concentrations would allow measurements to be done with much less hassle,” said Ferris. More generally, enhanced sensitivity would be essential when only a small amount of a substance is available for testing in extremely dilute quantities, Zabow added.
    Similarly, the team’s study suggests that a cellphone magnetometer can measure pH levels with the same sensitivity as a thousand-dollar benchtop meter but at a fraction of the cost. A home-brewer or a baker could use the magnetometer to quickly test the pH of various liquids to perfect their craft, and an environmental scientist could measure the pH of ground water samples on-site with higher accuracy than a litmus test strip could provide.
    In order to make the cellphone measurements a commercial success, engineers will need to develop a method to mass produce the hydrogel test strips and ensure that they have a long shelf life, Zabow said. Ideally, he added, the hydrogel strips should be designed to react more quickly to environmental cues in order to speed up measurements.

  •

    Physics-based predictive tool will speed up battery and superconductor research

    From lithium-ion batteries to next-generation superconductors, the functionality of many modern, advanced technologies depends on the physical property known as intercalation. Unfortunately, it’s difficult to identify in advance which of the many possible intercalated materials are stable, which necessitates a lot of trial-and-error lab work in product development.
    Now, in a study recently published in ACS Physical Chemistry Au, researchers from the Institute of Industrial Science, The University of Tokyo, and collaborating partners have devised a straightforward equation that correctly predicts the stability of intercalated materials. The systematic design guidelines enabled by this work will speed up the development of upcoming high-performance electronics and energy-storage devices.
    To appreciate the research team’s achievement, we need to understand the context of this research. Intercalation is the reversible insertion of guests (atoms or molecules) into hosts (for example, 2D-layered materials). The purpose of intercalation is commonly to modify the host’s properties or structure for improved device performance, as seen in, for example, commercial lithium-ion batteries. Although many synthetic methods are available for preparing intercalated materials, researchers have had no reliable means of predicting which host-guest combinations are stable. Therefore, much lab work has been needed to devise new intercalated materials for imparting next-generation device functionalities. Minimizing this lab work by proposing a straightforward predictive tool for host-guest stability was the goal of the research team’s study.
    “We are the first to develop accurate predictive tools for host-guest intercalation energies, and the stability of intercalated compounds,” explains Naoto Kawaguchi, lead author of the study. “Our analysis, based on a database of 9,000 compounds, uses straightforward principles from undergraduate first-year chemistry.”
    A particular highlight of the work is that only two guest properties and eight host-derived descriptors were necessary for the researchers’ energy and stability calculations. In other words, no initial ‘best guesses’ were needed, only the underlying physics of the host-guest systems. Furthermore, the researchers validated their model against nearly 200 sets of regression coefficients.
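    The flavor of such a model can be sketched as an ordinary least-squares fit on synthetic data with the same descriptor count (two guest properties plus eight host descriptors); the actual descriptors, data and coefficients in the paper are of course different:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 2 guest properties + 8 host descriptors per compound
n_compounds, n_features = 200, 10
X = rng.normal(size=(n_compounds, n_features))
true_coef = rng.normal(size=n_features)
y = X @ true_coef + 0.05 * rng.normal(size=n_compounds)  # "intercalation energy"

# Ordinary least squares with an intercept term
X1 = np.hstack([X, np.ones((n_compounds, 1))])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

rmse = float(np.sqrt(np.mean((X1 @ coef - y) ** 2)))
print(rmse)  # residual near the 0.05 noise level: the linear fit recovers the trend
```

    A model this simple is attractive precisely because each fitted coefficient can be read off against a physical descriptor, which is what gives the approach its interpretability.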
    “We’re excited because our regression model formulation is straightforward and physically reasonable,” says Teruyasu Mizoguchi, senior author. “Other computational models in the literature lack a physical basis or validation against unknown intercalated compounds.”
    This work is an important step forward in minimizing the laborious lab work that’s typically required to prepare intercalated materials. Given that many current and upcoming energy storage and electronic devices depend on such materials, the time and expense required for corresponding research and development will be minimized. Consequently, products with advanced functionalities will reach the market faster than previously possible.