More stories

  • Computer scientists unveil novel attacks on cybersecurity

    Researchers have found two novel types of attacks that target the conditional branch predictor found in high-end Intel processors, which could be exploited to compromise billions of processors currently in use.
    The multi-university and industry research team, led by computer scientists at the University of California San Diego, will present their work at the 2024 ACM ASPLOS Conference, which begins tomorrow. The paper, “Pathfinder: High-Resolution Control-Flow Attacks Exploiting the Conditional Branch Predictor,” is based on findings from scientists at UC San Diego, Purdue University, Georgia Tech, the University of North Carolina at Chapel Hill and Google.
    They discovered a unique attack, the first to target a feature of the branch predictor called the Path History Register, which tracks both branch order and branch addresses. As a result, it exposes more information, with greater precision, than prior attacks, which lacked insight into the exact structure of the branch predictor.
    In response to the findings, Intel and Advanced Micro Devices (AMD) have addressed the concerns raised by the researchers and advised users about the security issues. Today, Intel is set to issue a Security Announcement, while AMD will release a Security Bulletin.
    In software, frequent branching occurs as programs navigate different paths based on varying data values. The direction of these branches, whether “taken” or “not taken,” provides crucial insights into the executed program data. Given the significant impact of branches on modern processor performance, a crucial optimization known as the “branch predictor” is employed. This predictor anticipates future branch outcomes by referencing past histories stored within prediction tables. Previous attacks have exploited this mechanism by analyzing entries in these tables to discern recent branch tendencies at specific addresses.
    In this new study, researchers leverage modern predictors’ utilization of a Path History Register (PHR) to index prediction tables. The PHR records the addresses and precise order of the last 194 taken branches in recent Intel architectures. With innovative techniques for capturing the PHR, the researchers demonstrate the ability to not only capture the most recent outcomes but also every branch outcome in sequential order. Remarkably, they uncover the global ordering of all branches. Despite the PHR typically retaining the most recent 194 branches, the researchers present an advanced technique to recover a significantly longer history.
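    To make the role of the PHR concrete, the sketch below models it as a fixed-width footprint that is updated only on taken branches and mixes in a few bits of the branch and target addresses. The width, shift amount, bit selection and function names here are illustrative assumptions for this article, not Intel’s undocumented update function or the paper’s reverse-engineered details.

    ```python
    # Toy model of a Path History Register (PHR): a fixed-width footprint updated
    # only on *taken* branches, mixing in bits of the branch and target addresses.
    # Width, shift amount and bit selection are illustrative assumptions, not
    # Intel's undocumented update function.

    PHR_BITS = 194   # the article: recent Intel parts track the last 194 taken branches
    SHIFT = 2        # assumed: each taken branch pushes older history up by two bits
    MASK = (1 << PHR_BITS) - 1

    def phr_update(phr: int, branch_addr: int, target_addr: int) -> int:
        """Fold one taken branch into the footprint (illustrative only)."""
        injected = (branch_addr ^ target_addr) & 0b11   # a couple of address bits
        return ((phr << SHIFT) ^ injected) & MASK

    def phr_of_path(path):
        """Footprint left by a sequence of (branch_addr, target_addr) taken branches."""
        phr = 0
        for branch, target in path:
            phr = phr_update(phr, branch, target)
        return phr

    # The same branches taken in a different *order* leave a different footprint,
    # which is why recovering the PHR exposes the global ordering of branches,
    # not just each branch's recent tendency at a given address.
    path_a = [(0x4010A1, 0x4010C4), (0x4020B2, 0x4020D8)]
    path_b = list(reversed(path_a))
    assert phr_of_path(path_a) != phr_of_path(path_b)
    ```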
    “We successfully captured sequences of tens of thousands of branches in precise order, utilizing this method to leak secret images during processing by the widely used image library, libjpeg,” said Hosein Yavarzadeh, a UC San Diego Computer Science and Engineering Department PhD student and lead author of the paper.

    The researchers also introduce an exceptionally precise Spectre-style poisoning attack, enabling attackers to induce intricate patterns of branch mispredictions within victim code. “This manipulation leads the victim to execute unintended code paths, inadvertently exposing its confidential data,” said UC San Diego computer science Professor Dean Tullsen.
    “While prior attacks could misdirect a single branch or the first instance of a branch executed multiple times, we now have such precise control that we could misdirect the 732nd instance of a branch taken thousands of times,” said Tullsen.
    The team presents a proof-of-concept where they force an encryption algorithm to transiently exit earlier, resulting in the exposure of reduced-round ciphertext. Through this demonstration, they illustrate the ability to extract the secret AES encryption key.
    “Pathfinder can reveal the outcome of almost any branch in almost any victim program, making it the most precise and powerful microarchitectural control-flow extraction attack that we have seen so far,” said Kazem Taram, an assistant professor of computer science at Purdue University and a UC San Diego computer science PhD graduate.
    In addition to Dean Tullsen and Hosein Yavarzadeh, other UC San Diego coauthors are Archit Agarwal and Deian Stefan. Other coauthors include Christina Garman and Kazem Taram, Purdue University; Daniel Moghimi, Google; Daniel Genkin, Georgia Tech; and Max Christman and Andrew Kwong, University of North Carolina at Chapel Hill.
    This work was partially supported by the Air Force Office of Scientific Research (FA9550-20-1-0425); the Defense Advanced Research Projects Agency (W912CG-23-C-0022 and HR00112390029); the National Science Foundation (CNS-2155235, CNS-1954712, and CAREER CNS-2048262); the Alfred P. Sloan Research Fellowship; and gifts from Intel, Qualcomm, and Cisco.

  • The end of the quantum tunnel

    Quantum mechanical effects such as radioactive decay, and ‘tunneling’ more generally, display intriguing mathematical patterns. Two researchers at the University of Amsterdam now show that a 40-year-old mathematical discovery can be used to fully encode and understand this structure.
    Quantum physics — easy and hard
    In the quantum world, processes can be separated into two distinct classes. One class, that of the so-called ‘perturbative’ phenomena, is relatively easy to detect, both in an experiment and in a mathematical computation. Examples are plentiful: the light that atoms emit, the energy that solar cells produce, the states of qubits in a quantum computer. These quantum phenomena depend on Planck’s constant, the fundamental constant of nature that determines how the quantum world differs from our large-scale world, but in a simple way. Despite the ridiculous smallness of this constant — expressed in everyday units of kilograms, metres and seconds, it takes a value whose first nonzero digit appears only at the 34th decimal place — the fact that Planck’s constant is not exactly zero is enough to compute such quantum effects.
    Then, there are the ‘nonperturbative’ phenomena. One of the best known is radioactive decay: a process where due to quantum effects, elementary particles can escape the attractive force that ties them to atomic nuclei. If the world were ‘classical’ — that is, if Planck’s constant were exactly zero — this attractive force would be impossible to overcome. In the quantum world, decay does occur, but still only occasionally; a single uranium atom, for example, would on average take over four billion years to decay. The collective name for such rare quantum events is ‘tunneling’: for the particle to escape, it has to ‘dig a tunnel’ through the energy barrier that keeps it tied to the nucleus. A tunnel that can take billions of years to dig, and makes The Shawshank Redemption look like child’s play.
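    The sense in which tunneling is ‘nonperturbative’ can be made concrete with the textbook WKB estimate for the probability of escaping through an energy barrier $V(x)$ at energy $E$ (the integral runs between the classical turning points $x_1$ and $x_2$):

    $$ P \;\sim\; \exp\!\left(-\frac{2}{\hbar}\int_{x_1}^{x_2}\sqrt{2m\,\bigl(V(x)-E\bigr)}\,\mathrm{d}x\right). $$

    Because a factor like $e^{-A/\hbar}$ has no power-series expansion around $\hbar = 0$ (every term of its Taylor series vanishes), no finite amount of perturbation theory in Planck’s constant can reproduce it; this is exactly the kind of contribution that the framework described below is designed to organize.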
    Mathematics to the rescue
    Mathematically, nonperturbative quantum effects are much more difficult to describe than their perturbative cousins. Still, over the century that quantum mechanics has existed, physicists have found many ways to deal with these effects, and to describe and predict them accurately. “Still, in this century-old problem, there was work left to be done,” says Alexander van Spaendonck, one of the authors of the new publication. “The descriptions of tunneling phenomena in quantum mechanics needed further unification — a framework in which all such phenomena could be described and investigated using a single mathematical structure.”
    Surprisingly, such a structure was found in 40-year-old mathematics. In the 1980s, French mathematician Jean Écalle had set up a framework that he dubbed resurgence, and that had precisely this goal: giving structure to nonperturbative phenomena. So why did it take 40 years for the natural combination of Écalle’s formalism and the application to tunneling phenomena to be taken to their logical conclusion? Marcel Vonk, the other author of the publication, explains: “Écalle’s original papers were lengthy — over 1000 pages all combined — highly technical, and only published in French. As a result, it took until the mid-2000s before a significant number of physicists started getting familiar with this ‘toolbox’ of resurgence. Originally, it was mostly applied to simple ‘toy models’, but of course the tools were also tried on real-life quantum mechanics. Our work takes these developments to their logical conclusion.”
    Beautiful structure

    That conclusion is that one of the tools in Écalle’s toolbox, that of a ‘transseries’, is perfectly suited to describe tunneling phenomena in essentially any quantum mechanics problem, and does so always in the same way. By spelling out the mathematical details, the authors found that it became possible not only to unify all tunneling phenomena into a single mathematical object, but also to describe certain ‘jumps’ in how big the role of these phenomena is — an effect known as Stokes’ phenomenon.
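    Schematically, and with details that vary from problem to problem, a transseries augments the ordinary perturbative expansion in Planck’s constant with a tower of exponentially small tunneling sectors, each carrying its own expansion:

    $$ Z(\hbar) \;\simeq\; \sum_{n\ge 0} a^{(0)}_n \hbar^{n} \;+\; \sigma\, e^{-A/\hbar} \sum_{n\ge 0} a^{(1)}_n \hbar^{n} \;+\; \sigma^{2} e^{-2A/\hbar} \sum_{n\ge 0} a^{(2)}_n \hbar^{n} \;+\; \cdots $$

    Here $A$ is the tunneling ‘action’ and the bookkeeping parameter $\sigma$ records how strongly the nonperturbative sectors contribute; the jumps of $\sigma$ as one moves around in parameter space are Stokes’ phenomenon, the effect referred to below. (This is a generic schematic form, not the specific expressions worked out in the paper.)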
    Van Spaendonck: “Using our description of Stokes’ phenomenon, we were able to show that certain ambiguities that had plagued the ‘classical’ methods of computing nonperturbative effects — infinitely many, in fact — all dropped out in our method. The underlying structure turned out to be even more beautiful than we originally expected. The transseries that describes quantum tunneling turns out to split — or ‘factorize’ — in a surprising way: into a ‘minimal’ transseries that describes the basic tunneling phenomena that essentially exist in any quantum mechanics problem, and an object that we called the ‘median transseries’ that describes the more problem-specific details, and that depends for example on how symmetric a certain quantum setting is.”
    With this mathematical structure completely clarified, the next question is of course where the new lessons can be applied and what physicists can learn from them. In the case of radioactivity, for example, some atoms are stable whereas others decay. In other physical models, the lists of stable and unstable particles may vary as one slightly changes the setup — a phenomenon known as ‘wall-crossing’. What the researchers have in mind next is to clarify this notion of wall-crossing using the same techniques. This difficult problem has again been studied by many groups in many different ways, but now a similar unifying structure might be just around the corner. There is certainly light at the end of the tunnel.

  • New algorithm cuts through ‘noisy’ data to better predict tipping points

    Whether you’re trying to predict a climate catastrophe or a mental health crisis, mathematics tells us to look for fluctuations.
    Changes in data, from wildlife populations to anxiety levels, can be an early warning signal that a system is approaching a critical threshold, known as a tipping point, beyond which those changes may accelerate or even become irreversible.
    But which data points matter most? And which are just noise?
    A new algorithm developed by University at Buffalo researchers can identify the data points that best predict when a tipping point is near. Detailed in Nature Communications, this theoretical framework uses stochastic differential equations to observe the fluctuation of data points, or nodes, and then determine which should be used to calculate an early warning signal.
    Simulations confirmed this method was more accurate at predicting theoretical tipping points than randomly selecting nodes.
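    The authors’ algorithm itself is not reproduced in this article, but the flavor of a variance-based early warning signal on a network, and of comparing a chosen node subset against a random one, can be sketched with a toy model: coupled noisy linear (Ornstein–Uhlenbeck-type) dynamics whose restoring force weakens as the system drifts toward an instability. The network, parameter values and the naive selection rule below are all illustrative, not the method in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A small random network of N nodes (illustrative).
    N = 20
    A = (rng.random((N, N)) < 0.15).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                   # symmetric, no self-loops

    def simulate(decay, coupling=0.05, steps=20000, dt=0.01, sigma=0.1):
        """Coupled noisy dynamics: dx = (-decay*x + coupling*A@x) dt + noise."""
        x = np.zeros(N)
        out = np.empty((steps, N))
        for t in range(steps):
            x += (-decay * x + coupling * (A @ x)) * dt \
                 + sigma * np.sqrt(dt) * rng.standard_normal(N)
            out[t] = x
        return out

    def warning_signal(samples, nodes):
        """Early warning signal: variance of the average over a node subset."""
        return samples[:, nodes].mean(axis=1).var()

    # As `decay` shrinks toward the stability boundary, fluctuations grow --
    # the classic variance-based early warning of an approaching tipping point.
    far = simulate(decay=1.0)
    near = simulate(decay=0.3)

    # Naive selection rule (NOT the paper's): pick the individually noisiest nodes.
    noisiest = np.argsort(near.var(axis=0))[-5:]
    random5 = rng.choice(N, size=5, replace=False)

    for label, nodes in [("noisiest 5", noisiest), ("random 5", random5)]:
        print(f"{label}: far={warning_signal(far, nodes):.4f}  "
              f"near={warning_signal(near, nodes):.4f}")
    ```

    As the article notes below, picking the individually noisiest nodes is not necessarily optimal, since strongly correlated nodes add little independent information; the paper’s contribution is a principled way to choose the subset.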
    “Every node is somewhat noisy — in other words, it changes over time — but some may change earlier and more drastically than others when a tipping point is near. Selecting the right set of nodes may improve the quality of the early warning signal, as well as help us avoid wasting resources observing uninformative nodes,” says the study’s lead author, Naoki Masuda, PhD, professor and director of graduate studies in the UB Department of Mathematics, within the College of Arts and Sciences.
    The study was co-authored by Neil Maclaren, a postdoctoral research associate in the Department of Mathematics, and Kazuyuki Aihara, executive director of the International Research Center for Neurointelligence at the University of Tokyo.

    The work was supported by the National Science Foundation and the Japan Science and Technology Agency.
    Warning signals connected via networks
    The algorithm is unique in that it fully incorporates network science into the process. While early warning signals have been applied to ecology and psychology for the last two decades, little research has focused on how those signals are connected within a network, Masuda says.
    Consider depression. Recent research has treated it and other mental disorders as networks of symptoms that influence one another and create feedback loops. A loss of appetite could signal the onset of five other symptoms in the near future, depending on how close those symptoms are to it in the network.
    “As a network scientist, I felt network science could offer a unique or perhaps even improved approach to early warning signals,” Masuda says.
    By thoroughly considering systems as networks, the researchers found that simply selecting the nodes with the highest fluctuations was not the best strategy. That’s because some selected nodes may be too closely related to other selected nodes.
    “Even if we combine two nodes with nice early warning signals, we don’t necessarily get a more accurate signal. Sometimes combining a node with a good signal and another node with a mid-quality signal actually gives us a better signal,” Masuda says.
    While the team validated the algorithm with numerical simulations, they say it can readily be applied to actual data because it does not require information about the network structure itself; it only requires two different states of the networked system to determine an optimal set of nodes.
    “The next steps will be to collaborate with domain experts such as ecologists, climate scientists and medical doctors to further develop and test the algorithm with their empirical data and get insights into their problems,” Masuda says.

  • From disorder to order: Flocking birds and ‘spinning’ particles

    Researchers Kazuaki Takasan and Kyogo Kawaguchi of the University of Tokyo with Kyosuke Adachi of RIKEN, Japan’s largest comprehensive research institution, have demonstrated that ferromagnetism, an ordered state of atoms, can be induced by increasing particle motility and that repulsive forces between atoms are sufficient to maintain it. The discovery not only extends the concept of active matter to quantum systems but also contributes to the development of novel technologies that rely on the magnetic properties of particles, such as magnetic memory and quantum computing. The findings were published in the journal Physical Review Research.
    Flocking birds, swarming bacteria, cellular flows. These are all examples of active matter, a state in which individual agents, such as birds, bacteria, or cells, self-organize. The agents change from a disordered to an ordered state in what is called a “phase transition.” As a result, they move together in an organized fashion without an external controller.
    “Previous studies have shown that the concept of active matter can apply to a wide range of scales, from nanometers (biomolecules) to meters (animals),” says Takasan, the first author. “However, it has not been known whether the physics of active matter can be applied usefully in the quantum regime. We wanted to fill in that gap.”
    To fill the gap, the researchers needed to demonstrate a possible mechanism that could induce and maintain an ordered state in a quantum system. It was a collaborative work between physics and biophysics. The researchers took inspiration from the phenomena of flocking birds because, due to the activity of each agent, the ordered state is more easily achieved than in other types of active matter. They created a theoretical model in which atoms were essentially mimicking the behavior of birds. In this model, when they increased the motility of the atoms, the repulsive forces between atoms rearranged them into an ordered state called ferromagnetism. In the ferromagnetic state, spins, the angular momentum of subatomic particles and nuclei, align in one direction, just like how flocking birds face the same direction while flying.
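    The quantum model itself is only described qualitatively here, but the classical intuition it borrows from flocking can be illustrated with the standard Vicsek model, in which self-propelled agents align with their neighbors and a polar order parameter rises from near zero (disordered) toward one (an ordered, flock-like state) as noise falls or activity and density rise. The parameter values below are illustrative, and this classical sketch is an analogue, not the authors’ quantum calculation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def vicsek_order(n=300, box=10.0, speed=0.3, radius=1.0, noise=0.3, steps=400):
        """Minimal 2D Vicsek model; returns the final polar order parameter
        (1 = all agents move in the same direction, ~0 = fully disordered)."""
        pos = rng.random((n, 2)) * box
        theta = rng.uniform(-np.pi, np.pi, n)
        for _ in range(steps):
            # Align with the mean heading of neighbors within `radius`
            # (periodic boundaries), then add angular noise.
            d = pos[:, None, :] - pos[None, :, :]
            d -= box * np.round(d / box)                 # minimum-image convention
            neighbors = (d ** 2).sum(-1) < radius ** 2   # includes self
            mean_sin = (neighbors * np.sin(theta)[None, :]).sum(axis=1)
            mean_cos = (neighbors * np.cos(theta)[None, :]).sum(axis=1)
            theta = np.arctan2(mean_sin, mean_cos) \
                    + noise * rng.uniform(-np.pi, np.pi, n)
            pos = (pos + speed * np.column_stack((np.cos(theta),
                                                  np.sin(theta)))) % box
        return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

    # Weak noise: an ordered, flock-like state; strong noise: a disordered gas.
    print("weak noise  :", round(vicsek_order(noise=0.1), 2))   # typically near 1
    print("strong noise:", round(vicsek_order(noise=1.0), 2))   # much smaller
    ```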
    “It was surprising at first to find that the ordering can appear without elaborate interactions between the agents in the quantum model,” Takasan reflects on the finding. “It was different from what was expected based on biophysical models.”
    The researchers took a multi-faceted approach to ensure their finding was not a fluke. Thankfully, the results of computer simulations, mean-field theory (a statistical theory of particles) and mathematical proofs based on linear algebra were all consistent. This strengthened the reliability of their finding, the first step in a new line of research.
    “The extension of active matter to the quantum world has only recently begun, and many aspects are still open,” says Takasan. “We would like to further develop the theory of quantum active matter and reveal its universal properties.”

  • Barcodes expand range of high-resolution sensor

    The same geometric quirk that lets visitors murmur messages around the circular dome of the whispering gallery at St. Paul’s Cathedral in London or across St. Louis Union Station’s whispering arch also enables the construction of high-resolution optical sensors. Whispering-gallery-mode (WGM) resonators have been used for decades to detect chemical signatures, DNA strands and even single molecules.
    In the same way that the architecture of a whispering gallery bends and focuses sound waves, WGM microresonators confine and concentrate light in a tiny circular path. This enables WGM resonators to detect and quantify physical and biochemical characteristics, making them ideal for high-resolution sensing applications in fields such as biomedical diagnostics and environmental monitoring. However, the broad use of WGM resonators has been limited by their narrow dynamic range as well as their limited resolution and accuracy.
    In a recent study, Lan Yang, the Edwin H. & Florence G. Skinner Professor, and Jie Liao, a postdoctoral research associate, both in the Preston M. Green Department of Electrical & Systems Engineering in the McKelvey School of Engineering at Washington University in St. Louis, demonstrate a transformative approach to overcoming these limitations: optical WGM barcodes for multimode sensing. Liao and Yang’s innovative technique allows simultaneous monitoring of multiple resonant modes within a single WGM resonator, exploiting the distinctive response of each mode and vastly expanding the range of measurements achievable.
    WGM sensing uses a specific wavelength of light that can circulate around the perimeter of the microresonator millions of times. When the sensor encounters a molecule, the resonant frequency of the circulating light shifts. Researchers can then measure that shift to detect and identify the presence of specific molecules.
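    In its simplest textbook form, the resonance condition behind this shift says that light circulating a resonator of radius $R$ with effective refractive index $n_{\mathrm{eff}}$ is resonant when a whole number $m$ of wavelengths fits around the rim:

    $$ m\,\lambda_m \;=\; 2\pi R\, n_{\mathrm{eff}}, \qquad\text{so}\qquad \frac{\Delta\lambda_m}{\lambda_m} \;\approx\; \frac{\Delta n_{\mathrm{eff}}}{n_{\mathrm{eff}}} \;+\; \frac{\Delta R}{R}. $$

    A molecule binding at the surface slightly perturbs the effective index (and the effective path), nudging every resonant wavelength $\lambda_m$ at once; tracking many modes $m$ simultaneously, rather than a single one, is what the barcode approach exploits.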
    “Multimode sensing allows us to pick up multiple resonance changes in wavelength, rather than just one,” Liao explained. “With multiple modes, we can expand optical WGM sensing to a greater range of wavelengths, achieve greater resolution and accuracy, and ultimately sense more particles.”
    Liao and Yang found the theoretical limit of WGM detection and used it to estimate the sensing capabilities of a multimode system. They compared conventional single-mode with multimode sensing and determined that while single-mode sensing is limited to a very narrow range — about 20 picometers (pm), constrained by the laser hardware — the range for multimode sensing is potentially limitless using the same setup.
    “More resonance means more information,” Liao said. “We derived a theoretically infinite range, though we’re practically limited by the sensing apparatus. In this study, the experimental limit we found was about 350 times larger with the new method than the conventional method for WGM sensing.”
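    Taking the two figures quoted here at face value, that factor of roughly 350 over a single-mode window of about 20 pm corresponds to a demonstrated span on the order of

    $$ 350 \times 20\ \mathrm{pm} \;=\; 7{,}000\ \mathrm{pm} \;=\; 7\ \mathrm{nm}. $$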
    Commercial applications of multimode WGM sensing could include biomedical, chemical and environmental uses, Yang said. In biomedical applications, for instance, researchers could detect subtle changes in molecular interactions with unprecedented sensitivity to improve disease diagnosis and drug discovery. In environmental monitoring, with the capability to detect minute changes in environmental parameters such as temperature and pressure, multimode sensing could enable early warning systems for natural disasters or facilitate monitoring pollution levels in air and water.
    This new technology also enables continuous monitoring of chemical reactions, as demonstrated in the recent experiments conducted by Yang’s group. This capability holds promise for real-time analysis and control of chemical processes, offering potential applications in fields such as pharmaceuticals, materials science, and the food industry.
    “WGM resonators’ ultrahigh sensitivity lets us detect single particles and ions, but the potential of this powerful technology has not been fully utilized because we can’t use this ultrasensitive sensor directly to measure a complete unknown,” Liao added. “Multimode sensing enables that look into the unknown. By expanding our dynamic range to look at millions of particles, we can take on more ambitious projects and solve real-world problems.”

  • Imaging technique shows new details of peptide structures

    A new imaging technique developed by engineers at Washington University in St. Louis can give scientists a much closer look at fibril assemblies, stacks of peptides like amyloid beta, most notably associated with Alzheimer’s disease.
    These cross-β fibril assemblies are also useful building blocks within designer biomaterials for medical applications, but their resemblance to their amyloid beta cousins, whose tangles are a symptom of neurodegenerative disease, is concerning. Researchers want to learn how different sequences of these peptides are linked to their varying toxicity and function, for both naturally occurring peptides and their synthetic engineered cousins.
    Now, scientists can get a close enough look at fibril assemblies to see there are notable differences in how synthetic peptides stack compared with amyloid beta. These results stem from a fruitful collaboration between lead author Matthew Lew, associate professor in the Preston M. Green Department of Electrical & Systems Engineering, and Jai Rudra, associate professor of biomedical engineering, in WashU’s McKelvey School of Engineering.
    “We engineer microscopes to enable better nanoscale measurements so that the science can move forward,” Lew said.
    In a paper published in ACS Nano, Lew and colleagues outline how they used the Nile red chemical probe to light up cross-β fibrils. Their technique, called single-molecule orientation-localization microscopy (SMOLM), uses the flashes of light from Nile red to visualize the fiber structures formed by synthetic peptides and by amyloid beta.
    The bottom line: these assemblies are much more complicated and heterogeneous than anticipated. But that’s good news, because it means there’s more than one way to safely stack your proteins. With better measurements and images of fibril assemblies, bioengineers can better understand the rules that dictate how protein grammar affects toxicity and biological function, leading to more effective and less toxic therapeutics.
    First, scientists need to see the differences between these assemblies, which is very challenging because of their tiny scale.

    “The helical twist of these fibers is impossible to discern using an optical microscope, or even some super-resolution microscopes, because these things are just too small,” Lew said.
    With high-dimensional imaging technology developed in Lew’s lab over the past couple of years, the researchers are able to see those differences.
    A typical fluorescence microscope uses fluorescent molecules as light bulbs to highlight certain aspects of a biological target. In this work, the researchers used one of those probes, Nile red, as a sensor for its surroundings. As Nile red randomly explores its environment and collides with the fibrils, it emits flashes of light that can be measured to determine where the fluorescent probe is and how it is oriented. From that data, the researchers can piece together the full picture of engineered fibrils that stack very differently from natural ones like amyloid beta.
    Their image of these fibril assemblies made the cover of ACS Nano and was put together by first author Weiyan Zhou, who color-coded it based on where the Nile red molecules were pointing. The resulting image, a flowing blue and red assembly of peptides, looks like a river valley.
    They plan to continue to develop techniques like SMOLM to open new avenues of studying biological structures and processes at the nanoscale.
    “We are seeing things you can’t see with existing technology,” Lew said.

  • New circuit boards can be repeatedly recycled

    A recent United Nations report found that the world generated 137 billion pounds of electronic waste in 2022, an 82% increase from 2010. Yet less than a quarter of 2022’s e-waste was recycled. While many things impede a sustainable afterlife for electronics, one is that we don’t have systems at scale to recycle the printed circuit boards (PCBs) found in nearly all electronic devices.
    PCBs — which house and interconnect chips, transistors and other components — typically consist of layers of thin glass fiber sheets coated in hard plastic and laminated together with copper. That plastic can’t easily be separated from the glass, so PCBs often pile up in landfills, where their chemicals can seep into the environment. Or they’re burned to extract valuable metals, such as gold and copper. This burning, often undertaken in developing nations, is wasteful and can be toxic — especially for those doing the work without proper protections.
    A team led by researchers at the University of Washington developed a new PCB that performs on par with traditional materials and can be recycled repeatedly with negligible material loss. The researchers used a solvent that transforms a type of vitrimer — a cutting-edge class of sustainable polymers — into a jelly-like substance without damaging it, allowing the solid components to be plucked out for reuse or recycling.
    The vitrimer jelly can then be repeatedly used to make new, high-quality PCBs, unlike conventional plastics that degrade significantly with each recycling. With these “vPCBs” (vitrimer printed circuit boards), researchers recovered 98% of the vitrimer and 100% of the glass fiber, as well as 91% of the solvent used for recycling.
    The researchers published their findings April 26 in Nature Sustainability.
    “PCBs make up a pretty large fraction of the mass and volume of electronic waste,” said co-senior author Vikram Iyer, a UW assistant professor in the Paul G. Allen School of Computer Science & Engineering. “They’re constructed to be fireproof and chemical-proof, which is great in terms of making them very robust. But that also makes them basically impossible to recycle. Here, we created a new material formulation that has the electrical properties comparable to conventional PCBs as well as a process to recycle them repeatedly.”
    Vitrimers are a class of polymers first developed in 2015. When exposed to certain conditions, such as heat above a specific temperature, their molecules can rearrange and form new bonds. This makes them both “healable” (a bent PCB could be straightened, for instance) and highly recyclable.

    “On a molecular level, polymers are kind of like spaghetti noodles, which wrap and get compacted,” said co-senior author Aniruddh Vashisth, a UW assistant professor in the mechanical engineering department. “But vitrimers are distinct because the molecules that make up each noodle can unlink and relink. It’s almost like each piece of spaghetti is made of small Legos.”
    The team’s process to create the vPCB deviated only slightly from those used for PCBs. Conventionally, semi-cured PCB layers are held in cool, dry conditions where they have a limited shelf life before they’re laminated in a heat press. Because vitrimers can form new bonds, researchers laminated fully cured vPCB layers. The researchers found that to recycle the vPCBs they could immerse the material in an organic solvent that has a relatively low boiling point. This swelled the vPCB’s plastic without damaging the glass sheets and electronic components, letting the researchers extract these for reuse.
    This process allows for several paths to more sustainable, circular PCB lifecycles. Damaged circuit boards, such as those with cracks or warping, can in some cases be repaired. If they aren’t repaired, they can be separated from their electronic components. Those components can then be recycled or reused, while the vitrimer and glass fibers can be recycled into new vPCBs.
    The team tested its vPCB for strength and electrical properties and found that it performed comparably to the most common PCB material (FR-4). Vashisth and co-author Bichlien H. Nguyen, a principal researcher at Microsoft Research and an affiliate assistant professor in the Allen School, are now using artificial intelligence to explore new vitrimer formulations for different uses.
    Producing vPCBs wouldn’t entail major changes to manufacturing processes.
    “The nice thing is that a lot of industries — such as aerospace, automotive and even electronics — already have processing set up for the sorts of two-part epoxies that we use here,” said lead author Zhihan Zhang, a UW doctoral student in the Allen School.
    The team analyzed the environmental impact and found recycled vPCBs could entail a 48% reduction in global warming potential and an 81% reduction in carcinogenic emissions compared to traditional PCBs. While this work presents a technology solution, the team notes that a significant hurdle to recycling vPCBs at scale would be creating systems and incentives to gather e-waste so it can be recycled.
    “For real implementation of these systems, there needs to be cost parity and strong governmental regulations in place,” said Nguyen. “Moving forward, we need to design and optimize materials with sustainability metrics as a first principle.”
    Additional co-authors include Agni K. Biswal, a UW postdoctoral scholar in the mechanical engineering department; Ankush Nandi, a UW doctoral student in the mechanical engineering department; Kali Frost, a senior applied scientist at Microsoft Research; Jake A. Smith, a senior researcher at Microsoft Research and an affiliate researcher in the Allen School; and Shwetak Patel, a UW professor in the Allen School and the electrical and computer engineering department. This research is funded by the Microsoft Climate Research Initiative, an Amazon Research Award and the Google Research Scholar Program. Zhang was supported by the UW Clean Energy Institute Graduate Fellowship.

  • Surprising evolutionary pattern in yeast study

    University of North Carolina at Charlotte Assistant Professor of Bioinformatics Abigail Leavitt LaBella has co-led an ambitious research study — published in the widely influential journal Science — that reports intriguing findings made through innovative artificial intelligence analysis about yeasts, the small fungi that are key contributors to biotechnology, food production and human health. The findings challenge accepted frameworks within which yeast evolution is studied and provide access to an incredibly rich yeast analysis dataset that could have major implications for future evolutionary biology and bioinformatics research.
    LaBella, who joined UNC Charlotte’s Department of Bioinformatics in the College of Computing and Informatics as an assistant professor and researcher at the North Carolina Research Campus in 2022, conducted the study with co-lead author Dana A. Opulente of Villanova University. They collaborated with fellow researchers from Vanderbilt University and the University of Wisconsin at Madison, along with colleagues from research institutions across the globe.
    This is the flagship study of the Y1000+ Project, a massive inter-institutional yeast genome sequencing and phenotyping endeavor that LaBella joined as a postdoctoral researcher at Vanderbilt University.
    “Yeasts are single-celled fungi that play critical roles in our everyday lives. They make bread and beer, are used in the production of medicine, can cause infection, and as close relatives to animals have helped us learn about how cancer works,” said LaBella. “We wanted to know how these small fungi have evolved to have such an incredible range of functions and features. With the characterization of over one thousand yeasts, we found that yeasts do not fit the adage ‘jack of all trades, master of none.'”
    This study contributes to a basic understanding of how these microbes change over time while generating more than 900 new genome sequences for yeasts — many of which could be leveraged in fields such as agricultural pest control, drug development and biofuel production.
    LaBella and her co-authors — through an artificial intelligence-assisted, machine-learning analysis of the Y1000+ Project’s dataset comprising 1,154 strains of the ancient, single-celled yeast subphylum Saccharomycotina — attempted to answer an important question: Why do some yeasts eat (or metabolize) only a few types of carbon for energy while others can eat more than a dozen?
    The total number of carbon sources used by a yeast for energy is known in ecological terms as its carbon niche breadth. Humans also vary in their carbon niche breadth — for example, some people can metabolize lactose while others cannot.
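    As a rough illustration of the quantity being measured, and of the trade-off test discussed below, carbon niche breadth can be read straight off a strains-by-carbon-sources growth table: count the sources on which a strain grows, then ask whether breadth is negatively correlated with how well the strain grows on the sources it uses. The numbers, threshold and variable names below are synthetic placeholders, not the Y1000+ data or the study’s machine-learning analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic growth-rate table: rows = yeast strains, columns = carbon sources.
    # (Made-up numbers; the real Y1000+ dataset covers 1,154 strains.)
    n_strains, n_sources = 50, 18
    growth = rng.gamma(shape=2.0, scale=0.2, size=(n_strains, n_sources))
    growth *= rng.random((n_strains, n_sources)) < 0.5    # many pairs: no growth

    THRESHOLD = 0.05                                       # assumed cutoff for "can grow"

    # Carbon niche breadth: how many sources each strain can grow on.
    breadth = (growth > THRESHOLD).sum(axis=1)

    # Mean growth rate on the sources a strain actually uses.
    used_mean = np.array([
        row[row > THRESHOLD].mean() if (row > THRESHOLD).any() else 0.0
        for row in growth
    ])

    # A "jack of all trades, master of none" trade-off would show up as a clearly
    # negative correlation between breadth and growth on the used sources; the
    # study reports finding little evidence for such a penalty in the real data.
    r = np.corrcoef(breadth, used_mean)[0, 1]
    print(f"breadth vs. growth correlation: r = {r:+.2f}")
    ```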

    Evolutionary biology research has supported two key overarching paradigms about niche breadth, the phenomenon explaining why some yeast organisms (“specialists”) evolve to metabolize only a small number of carbon forms as fuel while others (“generalists”) evolve to consume and grow on a broad variety of carbon forms. The first paradigm holds that being a generalist comes with trade-offs: the ability to process a wide range of carbon forms is expected to come at the expense of the yeast’s capacity to process and grow on each carbon form efficiently. The second is that specialists and generalists evolve to fit either profile through the combined effects of intrinsic traits of their respective genomes and extrinsic influences from the varying environments in which yeasts live.
    LaBella and her colleagues found ample evidence supporting the idea that there are identifiable, intrinsic genetic differences in yeast specialists versus generalists, specifically that generalists tend to have a larger total number of genes than specialists. For example, they found that generalists are more likely to be able to synthesize carnitine, a molecule that is involved in energy production and often sold as an exercise supplement.
    But unexpectedly, their research found very limited evidence for the anticipated evolutionary trade-off, in which a yeast’s ability to process many forms of carbon would come at the expense of its ability to process each of them efficiently and grow accordingly, and vice versa.
    “We saw that the yeasts that could grow on lots of carbon substrates are actually very good growers,” said LaBella. “That was a very surprising finding to us.”
    While the findings of this specific experiment and the innovative machine-learning mechanisms used in its analysis could have major implications for bioinformatics, ecology, metabolomics and evolutionary biology, the publication of this study means that the Y1000+ Project’s massive compendium of yeast data is now available for scholars worldwide to use as a starting point to amplify their own yeast research.
    “This dataset will be a huge resource going forward,” said LaBella.