More stories

  • Lead-vacancy centers in diamond as building blocks for large-scale quantum networks

    Much like how electric circuits use components to control electronic signals, quantum networks rely on special components and nodes to transfer quantum information between different points, forming the foundation for building quantum systems. In the case of quantum networks, color centers in diamond, which are defects intentionally added to a diamond crystal, are crucial for generating and maintaining stable quantum states over long distances.
    When stimulated by external light, these color centers in diamond emit photons carrying information about their internal electronic states, especially the spin states. The interaction between the emitted photons and the spin states of the color centers enables quantum information to be transferred between different nodes in quantum networks.
    A well-known example of a color center in diamond is the nitrogen-vacancy (NV) center, in which a nitrogen atom sits adjacent to a vacancy (a missing carbon atom) in the diamond lattice. However, the photons emitted from NV centers do not have well-defined frequencies and are affected by interactions with the surrounding environment, making it challenging to maintain a stable quantum system.
    To address this, an international group of researchers, including Associate Professor Takayuki Iwasaki from Tokyo Institute of Technology, has developed a single negatively charged lead-vacancy (PbV) center in diamond, where a lead atom is inserted between neighboring vacancies in a diamond crystal. In the study published in the journal Physical Review Letters on February 15, 2024, the researchers reveal that the PbV center emits photons of specific frequencies that are not influenced by the crystal’s vibrational energy. These characteristics make the photons dependable carriers of quantum information for large-scale quantum networks.
    For stable and coherent quantum states, the emitted photon must be transform-limited, meaning that its frequency spread is the minimum allowed by the emitter’s excited-state lifetime. Additionally, the photon should be emitted into the zero-phonon line (ZPL), meaning that the energy of the emission corresponds purely to the change in the electronic configuration of the quantum system and is not partly exchanged with the vibrational modes (phonons) of the crystal lattice.
    To fabricate the PbV center, the researchers introduced lead ions beneath the diamond surface through ion implantation. An annealing process was then carried out to repair the damage caused by the implantation. The resulting PbV center forms a spin-1/2 system with four distinct energy levels, with both the ground and the excited state split into two levels. On photoexciting the PbV center, electronic transitions between these levels produced four distinct ZPLs, which the researchers labeled A, B, C, and D in order of decreasing transition energy. Among these, the C transition was found to have a transform-limited linewidth of 36 MHz.
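    As a rough worked example of what “transform-limited” implies (the relation is the standard Fourier limit; the lifetime below is derived from the quoted linewidth rather than reported in the paper), a short Python sketch:

      import math

      # Transform (Fourier) limit for a purely lifetime-broadened transition:
      # linewidth (FWHM) = 1 / (2 * pi * excited-state lifetime)
      linewidth_hz = 36e6                             # 36 MHz C-transition linewidth (from the article)
      lifetime_s = 1 / (2 * math.pi * linewidth_hz)
      print(f"Implied excited-state lifetime: {lifetime_s * 1e9:.1f} ns")   # ~4.4 ns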
    “We investigated the optical properties of single PbV centers under resonant excitation and demonstrated that the C transition, one of the ZPLs, nearly reaches the transform limit at 6.2 K without prominent phonon-induced relaxation and spectral diffusion,” says Dr. Iwasaki.
    The PbV center stands out by being able to maintain its linewidth at approximately 1.2 times the transform-limit at temperatures as high as 16 K. This is important to achieve around 80% visibility in two-photon interference. In contrast, color centers like SiV, GeV, and SnV need to be cooled to much lower temperatures (4 K to 6 K) for similar conditions. By generating well-defined photons at relatively high temperatures compared to other color centers, the PbV center can function as an efficient quantum light-matter interface, which enables quantum information to be carried long distances by photons via optical fibers.
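    One commonly used rule of thumb ties two-photon interference visibility to how far the linewidth exceeds the transform limit. A minimal sketch under that assumption (a simple Lorentzian estimate, not the paper’s analysis):

      def hom_visibility(broadening_factor: float) -> float:
          """Rough estimate for Lorentzian lines: V ~ transform-limited linewidth / actual linewidth."""
          return 1.0 / broadening_factor

      print(hom_visibility(1.2))   # ~0.83, consistent with the ~80% visibility figure quoted above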
    “These results can pave the way for the PbV center to become a building block to construct large-scale quantum networks,” concludes Dr. Iwasaki.

  • This tiny chip can safeguard user data while enabling efficient computing on a smartphone

    Health-monitoring apps can help people manage chronic diseases or stay on track with fitness goals, using nothing more than a smartphone. However, these apps can be slow and energy-inefficient because the vast machine-learning models that power them must be shuttled between a smartphone and a central memory server.
    Engineers often speed things up using hardware that reduces the need to move so much data back and forth. While these machine-learning accelerators can streamline computation, they are susceptible to attackers who can steal secret information.
    To reduce this vulnerability, researchers from MIT and the MIT-IBM Watson AI Lab created a machine-learning accelerator that is resistant to the two most common types of attacks. Their chip can keep a user’s health records, financial information, or other sensitive data private while still enabling huge AI models to run efficiently on devices.
    The team developed several optimizations that enable strong security while only slightly slowing the device. Moreover, the added security does not impact the accuracy of computations. This machine-learning accelerator could be particularly beneficial for demanding AI applications like augmented and virtual reality or autonomous driving.
    While implementing the chip would make a device slightly more expensive and less energy-efficient, that is sometimes a worthwhile price to pay for security, says lead author Maitreyi Ashok, an electrical engineering and computer science (EECS) graduate student at MIT.
    “It is important to design with security in mind from the ground up. If you are trying to add even a minimal amount of security after a system has been designed, it is prohibitively expensive. We were able to effectively balance a lot of these tradeoffs during the design phase,” says Ashok.
    Her co-authors include Saurav Maji, an EECS graduate student; Xin Zhang and John Cohn of the MIT-IBM Watson AI Lab; and senior author Anantha Chandrakasan, MIT’s chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of EECS. The research will be presented at the IEEE Custom Integrated Circuits Conference.

    Side-channel susceptibility
    The researchers targeted a type of machine-learning accelerator called digital in-memory compute (IMC). A digital IMC chip performs computations inside a device’s memory, where pieces of a machine-learning model are stored after being moved over from a central server.
    The entire model is too big to store on the device, but by breaking it into pieces and reusing those pieces as much as possible, IMC chips reduce the amount of data that must be moved back and forth.
    But IMC chips can be susceptible to hackers. In a side-channel attack, a hacker monitors the chip’s power consumption and uses statistical techniques to reverse-engineer data as the chip computes. In a bus-probing attack, the hacker can steal bits of the model and dataset by probing the communication between the accelerator and the off-chip memory.
    Digital IMC speeds computation by performing millions of operations at once, but this complexity makes it tough to prevent attacks using traditional security measures, Ashok says.
    She and her collaborators took a three-pronged approach to blocking side-channel and bus-probing attacks.

    First, they employed a security measure where data in the IMC are split into random pieces. For instance, a bit zero might be split into three bits that still equal zero after a logical operation. The IMC never computes with all pieces in the same operation, so a side-channel attack could never reconstruct the real information.
    But for this technique to work, random bits must be added to split the data. Because digital IMC performs millions of operations at once, generating so many random bits would involve too much computing. For their chip, the researchers found a way to simplify computations, making it easier to effectively split data while eliminating the need for random bits.
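    For readers unfamiliar with this kind of masking, here is a minimal sketch of the baseline idea the paragraph describes: a secret bit is split into random shares that individually look random but combine (here, by XOR) back to the original. This is generic Boolean masking for illustration, not the authors’ optimized scheme, which avoids the cost of generating fresh random bits:

      import secrets

      def split_bit(bit: int, shares: int = 3) -> list[int]:
          """Split a secret bit into `shares` random pieces that XOR back to the bit."""
          pieces = [secrets.randbits(1) for _ in range(shares - 1)]
          last = bit
          for p in pieces:
              last ^= p            # pick the final share so the XOR of all shares equals `bit`
          return pieces + [last]

      def combine(pieces: list[int]) -> int:
          out = 0
          for p in pieces:
              out ^= p
          return out

      shares = split_bit(0)
      print(shares, "->", combine(shares))   # each share looks random; together they recover 0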
    Second, they prevented bus-probing attacks using a lightweight cipher that encrypts the model stored in off-chip memory. This lightweight cipher only requires simple computations. In addition, they only decrypted the pieces of the model stored on the chip when necessary.
    Third, to improve security, they generated the key that decrypts the cipher directly on the chip, rather than moving it back and forth with the model. They generated this unique key from random variations in the chip that are introduced during manufacturing, using what is known as a physically unclonable function.
    “Maybe one wire is going to be a little bit thicker than another. We can use these variations to get zeros and ones out of a circuit. For every chip, we can get a random key that should be consistent because these random properties shouldn’t change significantly over time,” Ashok explains.
    They reused the memory cells on the chip, leveraging the imperfections in these cells to generate the key. This requires less computation than generating a key from scratch.
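    As a toy sketch of the general physically-unclonable-function idea (the cell count, noise model, and majority-vote scheme below are illustrative assumptions, not the chip’s actual circuit):

      import random

      random.seed(7)                                             # stands in for one chip's fixed manufacturing variations
      NUM_CELLS = 128
      cell_bias = [random.random() for _ in range(NUM_CELLS)]    # each cell's tendency to read out as 1

      def read_cells() -> list[int]:
          """One noisy readout: each cell returns 1 with its own device-specific probability."""
          return [1 if random.random() < b else 0 for b in cell_bias]

      def derive_key(readouts: int = 15) -> int:
          """Majority-vote repeated readouts into a stable, device-unique key.
          Real designs add error correction for marginal cells."""
          votes = [0] * NUM_CELLS
          for _ in range(readouts):
              for i, bit in enumerate(read_cells()):
                  votes[i] += bit
          bits = [1 if v > readouts // 2 else 0 for v in votes]
          return int("".join(map(str, bits)), 2)

      print(hex(derive_key()))   # repeatable on "this chip", different on a chip with other variations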
    “As security has become a critical issue in the design of edge devices, there is a need to develop a complete system stack focusing on secure operation. This work focuses on security for machine-learning workloads and describes a digital processor that uses cross-cutting optimization. It incorporates encrypted data access between memory and processor, approaches to preventing side-channel attacks using randomization, and exploiting variability to generate unique codes. Such designs are going to be critical in future mobile devices,” says Chandrakasan.
    Safety testing
    To test their chip, the researchers took on the role of hackers and tried to steal secret information using side-channel and bus-probing attacks.
    Even after making millions of attempts, they couldn’t reconstruct any real information or extract pieces of the model or dataset. The cipher also remained unbreakable. By contrast, it took only about 5,000 samples to steal information from an unprotected chip.
    The addition of security did reduce the energy efficiency of the accelerator, and it also required a larger chip area, which would make it more expensive to fabricate.
    The team is planning to explore methods that could reduce the energy consumption and size of their chip in the future, which would make it easier to implement at scale.
    “As it becomes too expensive, it becomes harder to convince someone that security is critical. Future work could explore these tradeoffs. Maybe we could make it a little less secure but easier to implement and less expensive,” Ashok says.
    The research is funded, in part, by the MIT-IBM Watson AI Lab, the National Science Foundation, and a Mathworks Engineering Fellowship.

  • Super Mario hackers’ tricks could protect software from bugs

    Video gamers who exploit glitches in games can help experts better understand buggy software, students at the University of Bristol suggest.
    Known as ‘speedrunners’, these gamers complete games as quickly as possible by working out and exploiting their malfunctions.
    The students examined four classic Super Mario games, and analysed 237 known glitches within them, classifying a variety of weaknesses. This research explores whether these are the same as the bugs exploited in more conventional software.
    Nintendo’s Super Mario is the quintessential video game. To understand the sorts of glitches speedrunners exploit, the students examined four of the earliest Mario platforming games — Super Mario Bros (1985), Super Mario Bros. 3 (1988), Super Mario World (1990) and Super Mario 64 (1996). Whilst these games are old, they are still competitively run by speedrunners, with new records reported in the news. The games are also well understood, having been studied by speedrunners for decades, ensuring that there are large numbers of well-researched glitches available for analysis.
    Currently, the world record time for conquering Super Mario World stands at a blistering 41 seconds. By classifying the weaknesses behind the 237 known glitches in these games, the team set out to see whether they can help software engineers make applications more robust.
    In the Super Mario platforming games Mario must rescue Princess Peach by jumping through an obstacle course of various platforms to reach a goal, avoiding baddies or defeating them by jumping on their heads. Players can collect power-ups along the way to unlock special abilities, and coins to increase their score. The Mario series of games is one of Nintendo’s flagship products, and one of the most influential video game series of all time.
    Dr Joseph Hallett from Bristol’s School of Computer Science explained: “Many early video games, such as the Super Mario games we have examined, were written for consoles that differ from the more uniform PC-like hardware of modern gaming systems.

    “Constraints stemming from the hardware, such as limited memory and buses, meant that aggressive optimization and tricks were required to make games run well.
    “Many of these techniques (for example, the NES’s memory mapping) are niche and can lead to bugs, by being so different to how many programmers usually expect things to work.”
    “Programming for these systems is closer to embedded development than most modern software, as it requires working around the limits of the hardware to create games. Despite the challenges of programming these systems, new and retro-inspired games are still being released.”
    Categorizing bugs in software allows developers to understand and address similar problems.
    The Common Weakness Enumeration (CWE) is a category system for hardware and software weaknesses and vulnerabilities. The team identified seven new categories of weakness not previously specified in the CWE.
    Dr Hallett explained: “We found that some of the glitches speedrunners use don’t have neat categorizations in existing software defect taxonomies, and that there may be new kinds of bugs to look for in more general software.”
    The team thematically analysed the glitches against a code book of existing software weaknesses (the CWE), using a qualitative research method that helps categorize complex phenomena.
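    For illustration only, such a coding exercise might look like the miniature below: each glitch description is tagged with an existing CWE entry where one fits, or flagged when it does not. The CWE names are real, but the glitch-to-CWE assignments are hypothetical examples, not the paper’s actual coding:

      # Hypothetical coding of glitch descriptions against real CWE entries.
      CWE = {
          "CWE-787": "Out-of-bounds Write",
          "CWE-190": "Integer Overflow or Wraparound",
          "CWE-843": "Type Confusion",
      }

      glitch_codes = {
          "item slot write spills into adjacent memory": "CWE-787",   # illustrative assignment
          "counter wraps past its maximum value": "CWE-190",          # illustrative assignment
          "object handled as the wrong entity type": "CWE-843",       # illustrative assignment
          "clipping through a slope at high speed": None,             # no neat existing category
      }

      uncategorized = [g for g, c in glitch_codes.items() if c is None]
      print(f"{len(uncategorized)} glitch(es) with no existing CWE category:", uncategorized)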

    Dr Hallett continued: “The cool bit of this research is that academia is starting to appreciate the work speedrunners do and to study something that hasn’t really been treated seriously before.
    “By studying speedrunners’ glitches we can better understand how they do it and whether the bugs they use are the same ones other software gets hacked with.
    “It turns out the speedrunners have some tricks that we didn’t know about before.”
    Now the team have turned their hand to studying Pokémon video games.

  • Hey Dave, I’ve got an idea for you: What’s the potential of AI-led workshopping?

    In a paper published in JOSPT Open (Journal of Orthopaedic and Sports Physical Therapy), UTS Graduate School of Health Senior Lecturer in Physiotherapy Dr Joshua Pate and PhD candidate Rebecca Fechner write that AI chatbots offer a novel avenue for idea generation, simulating multidisciplinary workshops that traditionally require significant time and resources.
    “We sought to simulate a multidisciplinary workshop on a complex clinical research question using three freely available AI chatbots — ChatGPT, Bing Creative Mode and Google Bard — aiming to broaden and accelerate the co-generation of ideas,” Dr Pate said.
    “Our focus was on AI’s practical applications in educational and clinical settings, particularly to address the challenge of pain in schools, but the findings likely generalise for wider applications — essentially anyone wanting help brainstorming on problem solving, policy or practice.
    “We found that the different chatbots provided some different responses to each of the prompts, but overall the most prominent responses were similar. In our simulation they consistently suggested an online platform or curriculum for pain science education in schools.
    “So, the consistency in responses suggested some reliability of the chatbots in co-generating ideas, while the differences between chatbots offered a range of perspectives, enriching the brainstorming process.
    “These freely available chatbots are accessible tools for broadening participation in idea generation across various domains.
    “While the technology might not be good at the details right now, the chatbots are very good at finding new perspectives and connecting dots in previously unexplored ways.”
    Dr Pate said the study highlights the potential for AI to contribute to educational strategies and clinical research, leveling the field for resource-limited settings. “By demonstrating the chatbots’ ability to seemingly simulate complex workshops, we provide a proof-of-concept that could influence future research methodologies and policy making,” he said.

  • Manipulating the geometry of ‘electron universe’ in magnets

    Researchers at Tohoku University and the Japan Atomic Energy Agency have developed fundamental experiments and theories to manipulate the geometry of the ‘electron universe,’ which describes the structure of electronic quantum states in a manner mathematically similar to the actual universe, within a magnetic material under ambient conditions.
    The investigated geometric property — i.e., the quantum metric — was detected as an electric signal distinct from ordinary electrical conduction. This breakthrough reveals the fundamental quantum science of electrons and paves the way for designing innovative spintronic devices utilizing the unconventional conduction emerging from the quantum metric.
    Details were published in the journal Nature Physics on April 22, 2024.
    Electric conduction, which is crucial for many devices, follows Ohm’s law: a current responds proportionally to the applied voltage. But to realize new devices, scientists have had to find ways to go beyond this law. Here is where quantum mechanics comes in. A unique quantum geometric property known as the quantum metric can generate non-Ohmic conduction. The quantum metric is inherent to the material itself, a fundamental characteristic of its quantum structure.
    The term ‘quantum metric’ draws its inspiration from the ‘metric’ concept in general relativity, which explains how the geometry of the universe distorts under the influence of intense gravitational forces, such as those around black holes. Similarly, in the pursuit of designing non-Ohmic conduction within materials, comprehending and harnessing the quantum metric becomes imperative. This metric delineates the geometry of the ‘electron universe,’ analogous to the physical universe. Specifically, the challenge lies in manipulating the quantum-metric structure within a single device and discerning its impact on electrical conduction at room temperature.
    The research team reported successful manipulation of the quantum-metric structure at room temperature in a thin-film heterostructure comprising an exotic magnet, Mn3Sn, and a heavy metal, Pt. Mn3Sn exhibits essential magnetic texture when adjacent to Pt, which is drastically modulated by an applied magnetic field. They detected and magnetically controlled a non-Ohmic conduction termed the second-order Hall effect, where voltage responds orthogonally and quadratically to the applied electric current. Through theoretical modeling, they confirmed that the observations can be exclusively described by the quantum metric.
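    As a minimal numerical illustration of what “orthogonal and quadratic” means (the coefficient and current values are arbitrary placeholders, not measured data): the transverse voltage grows with the square of the longitudinal current, so reversing the current does not reverse the signal.

      import numpy as np

      chi = 2.5e-3                          # assumed second-order response coefficient (arbitrary units)
      I_x = np.linspace(-1.0, 1.0, 5)       # applied current along x (arbitrary units)

      V_y = chi * I_x**2                    # second-order Hall voltage: transverse, quadratic, even in I_x
      for i, v in zip(I_x, V_y):
          print(f"I_x = {i:+.2f}  ->  V_y = {v:.2e}")   # same sign for +I_x and -I_x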
    “Our second-order Hall effect arises from the quantum-metric structure that couples with the specific magnetic texture at the Mn3Sn/Pt interface. Hence, we can flexibly manipulate the quantum metric by modifying the magnetic structure of the material through spintronic approaches and verify such manipulation in the magnetic control of the second-order Hall effect,” explained Jiahao Han, the lead author of this study.
    The main contributor to the theoretical analysis, Yasufumi Araki, added, “Theoretical predictions posit the quantum metric as a fundamental concept that connects the material properties measured in experiments to the geometric structures studied in mathematical physics. However, confirming its evidence in experiments has remained challenging. I hope that our experimental approach to accessing the quantum metric will advance such theoretical studies.”
    Principal investigator Shunsuke Fukami further added, “Until now, the quantum metric has been believed to be inherent and uncontrollable, much like the universe, but we now need to change this perception. Our findings, particularly the flexible control at room temperature, may offer new opportunities to develop functional devices such as rectifiers and detectors in the future.”

  • Unlocking spin current secrets: A new milestone in spintronics

    Spintronics is a field garnering immense attention for its range of potential advantages over conventional electronics. These include reduced power consumption, high-speed operation, non-volatility, and the potential for new functionalities.
    Spintronics exploits the intrinsic spin of electrons, and fundamental to the field is controlling the flows of the spin degree of freedom, i.e., spin currents. Scientists are constantly looking at ways to create, remove, and control them for future applications.
    Detecting spin currents is no easy feat. It requires macroscopic voltage measurements, which look at the overall voltage changes across a material. However, a common stumbling block has been a lack of understanding of how the spin current actually moves, or propagates, within the material itself.
    “Using neutron scattering and voltage measurements, we demonstrated that the magnetic properties of the material can predict how a spin current changes with temperature,” points out Yusuke Nambu, co-author of the paper and an associate professor at Tohoku University’s Institute for Materials Research (IMR).
    Nambu and his colleagues discovered that the spin current signal changes direction at a specific magnetic temperature and decreases at low temperatures. Additionally, they found that the spin direction, or magnon polarization, flips both above and below this critical magnetic temperature. This change in magnon polarization correlates with the spin current’s reversal, shedding light on its propagation direction.
    Furthermore, the material studied displayed magnetic behaviors with distinct gap energies. This suggests that below the temperature linked to this gap energy, spin current carriers are absent, leading to the observed decrease in the spin current signal at lower temperatures. Remarkably, the spin current’s temperature dependence follows an exponential decay, mirroring the neutron scattering results.
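    A minimal sketch of the gapped, Arrhenius-like temperature dependence described above (the gap value is an assumed placeholder, not the measured one):

      import numpy as np

      k_B = 8.617e-5          # Boltzmann constant in eV/K
      gap_eV = 1.0e-3         # assumed magnon gap energy (placeholder, ~1 meV)

      def spin_current_signal(T_kelvin: float) -> float:
          """Signal suppressed as exp(-gap / k_B T) once carriers freeze out below the gap."""
          return float(np.exp(-gap_eV / (k_B * T_kelvin)))

      for T in (2, 5, 10, 20, 50):
          print(f"T = {T:2d} K  ->  relative signal {spin_current_signal(T):.3f}")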
    Nambu emphasizes that their findings underscore the significance of understanding microscopic details in spintronics research. “By clarifying the magnetic behaviors and their temperature variations, we can gain a comprehensive understanding of spin currents in insulating magnets, paving the way for predicting spin currents more accurately and potentially developing advanced materials with enhanced performance.”

  • Perfecting the view on a crystal’s imperfection

    Single-photon emitters (SPEs) are akin to microscopic lightbulbs that emit only one photon (a quantum of light) at a time. These tiny structures hold immense importance for the development of quantum technology, particularly in applications such as secure communications and high-resolution imaging. However, many materials that contain SPEs are impractical for use in mass manufacturing due to their high cost and the difficulty of integrating them into complex devices.
    In 2015, scientists discovered SPEs within a material called hexagonal boron nitride (hBN). Since then, hBN has gained widespread attention and application across various quantum fields and technologies, including sensors, imaging, cryptography, and computing, thanks to its layered structure and ease of manipulation.
    The emergence of SPEs within hBN stems from imperfections in the material’s crystal structure, but the precise mechanisms governing their development and function have remained elusive. Now, a new study published in Nature Materials reveals significant insights into the properties of hBN, offering a solution to discrepancies in previous research on the proposed origins of SPEs within the material.
    The study involves a collaborative effort spanning three major institutions: the Advanced Science Research Center at the CUNY Graduate Center (CUNY ASRC); the National Synchrotron Light Source II (NSLS-II) user facility at Brookhaven National Laboratory; and the National Institute for Materials Science. Gabriele Grosso, a professor with the CUNY ASRC’s Photonics Initiative and the CUNY Graduate Center’s Physics program, and Jonathan Pelliciari, a beamline scientist at NSLS-II, led the study.
    The collaboration was sparked by a conversation at the annual NSLS-II and Center for Functional Nanomaterials Users’ Meeting when researchers from CUNY ASRC and NSLS-II realized how their unique expertise, skills, and resources could uncover some novel insights, sparking the idea for the hBN experiment. The work brought together physicists with diverse areas of expertise and instrumentation skillsets who rarely collaborate in such a close manner.
    Using advanced techniques based on X-ray scattering and optical spectroscopy, the research team uncovered a fundamental energy excitation occurring at 285 millielectron volts. This excitation triggers the generation of harmonic electronic states that give rise to single photons — similar to how musical harmonics produce notes across multiple octaves.
    Intriguingly, these harmonics correlate with the energies of SPEs observed across numerous experiments conducted worldwide. The discovery connects previous observations and provides an explanation for the variability observed in earlier findings. Identification of this harmonic energy scale points to a common underlying origin and reconciles the diverse reports on hBN properties over the last decade.
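    As a small worked example of the harmonic picture (only the 285 meV fundamental comes from the study; the number of harmonics shown and the wavelength conversion are added here for context):

      fundamental_eV = 0.285            # fundamental excitation reported in the study (285 meV)
      hc_eV_nm = 1239.84                # photon energy-wavelength relation: E [eV] * wavelength [nm] = 1239.84

      for n in range(1, 9):             # first few harmonics (the cutoff of 8 is arbitrary)
          energy = n * fundamental_eV
          print(f"harmonic {n}: {energy:.3f} eV  (~{hc_eV_nm / energy:.0f} nm)")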

    “Everyone was reporting different properties and different energies of the single photons that seemed to contradict each other,” said Grosso. “The beauty of our findings is that with a single energy scale and harmonics, we can organize and connect all of these findings that were thought to be completely disconnected. Using the music analogy, the single photon properties people reported were basically different notes on the same music sheet.”
    While the defects in hBN give rise to its distinctive quantum emissions, they also present a significant challenge in research efforts to understand them.
    “Defects are one of the most difficult physical phenomena to study, because they are very localized and hard to replicate,” explained Pelliciari. “Think of it this way; if you want to make a perfect circle, you can calculate a way to always replicate it. But if you want to replicate an imperfect circle, that’s much harder.”
    The implications of the team’s work extend far beyond hBN. The researchers say the findings are a stepping stone for studying defects in other materials containing SPEs. Understanding quantum emission in hBN holds the potential to drive advancements in quantum information science and technologies, facilitating secure communications and enabling powerful computation that can vastly expand and expedite research efforts.
    “These results are exciting because they connect measurements across a wide range of optical excitation energies, from single digits to hundreds of electron volts,” said Enrique Mejia, a Ph.D. student in Grosso’s lab and lead author of the work conducted at the CUNY ASRC. “We can clearly distinguish between samples with and without SPEs, and we can now explain how the observed harmonics are responsible for a wide range of single photon emitters.”
    This work was funded by LDRD, FWP DOE on quantum information science, DOE BES, and DOE ECA. The work at CUNY was supported by the National Science Foundation (NSF), the CUNY Graduate Center Physics Program, the CUNY Advanced Science Research Center, and the CUNY Research Foundation.

  • AI tool creates ‘synthetic’ images of cells for enhanced microscopy analysis

    Observing individual cells through microscopes can reveal a range of important cell biological phenomena that frequently play a role in human diseases, but the process of distinguishing single cells from each other and their background is extremely time consuming — and a task that is well-suited for AI assistance.
    AI models learn how to carry out such tasks by using a set of data that are annotated by humans, but the process of distinguishing cells from their background, called “single-cell segmentation,” is both time-consuming and laborious. As a result, there is only a limited amount of annotated data to use in AI training sets. UC Santa Cruz researchers have developed a method to solve this by building a microscopy image generation AI model to create realistic images of single cells, which are then used as “synthetic data” to train an AI model to better carry out single-cell segmentation.
    The new software is described in a new paper published in the journal iScience. The project was led by Assistant Professor of Biomolecular Engineering Ali Shariati and his graduate student Abolfazl Zargari. The model, called cGAN-Seg, is freely available on GitHub.
    “The images that come out of our model are ready to be used to train segmentation models,” Shariati said. “In a sense we are doing microscopy without a microscope, in that we are able to generate images that are very close to real images of cells in terms of the morphological details of the single cell. The beauty of it is that when they come out of the model, they are already annotated and labeled. The images show a ton of similarities to real images, which then allows us to generate new scenarios that have not been seen by our model during the training.”
    Images of individual cells seen through a microscope can help scientists learn about cell behavior and dynamics over time, improve disease detection, and find new medicines. Subcellular details such as texture can help researchers answer important questions, like if a cell is cancerous or not.
    Manually finding and labeling the boundaries of cells from their background is extremely difficult, however, especially in tissue samples where there are many cells in an image. It could take researchers several days to manually perform cell segmentation on just 100 microscopy images.
    Deep learning can speed up this process, but an initial data set of annotated images is needed to train the models — at least thousands of images are needed as a baseline to train an accurate deep learning model. Even if the researchers can find and annotate 1,000 images, those images may not contain the variation of features that appear across different experimental conditions.

    “You want to show your deep learning model works across different samples with different cell types and different image qualities,” Zargari said. “For example if you train your model with high quality images, it’s not going to be able to segment the low quality cell images. We can rarely find such a good data set in the microscopy field.”
    To address this issue, the researchers created an image-to-image generative AI model that takes a limited set of annotated, labeled cell images and generates more, introducing more intricate and varied subcellular features and structures to create a diverse set of “synthetic” images. Notably, they can generate annotated images with a high density of cells, which are especially difficult to annotate by hand and are especially relevant for studying tissues. This technique works to process and generate images of different cell types as well as different imaging modalities, such as those taken using fluorescence or histological staining.
    Zargari, who led the development of the generative model, employed a commonly used AI algorithm called a “cycle generative adversarial network” for creating realistic images. The generative model is enhanced with so-called “augmentation functions” and a “style injecting network,” which helps the generator to create a wide variety of high quality synthetic images that show different possibilities for what the cells could look like. To the researchers’ knowledge, this is the first time style injecting techniques have been used in this context.
    Then, this diverse set of synthetic images created by the generator are used to train a model to accurately carry out cell segmentation on new, real images taken during experiments.
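    A toy sketch of the pipeline just described, in PyTorch: one network turns annotated masks into synthetic images, a second maps images back to masks, and a cycle-consistency loss ties them together. This is a bare-bones stand-in rather than the cGAN-Seg architecture (no adversarial discriminators or style injection), and all shapes and layer sizes are assumptions:

      import torch
      import torch.nn as nn

      class TinyTranslator(nn.Module):
          """Toy image-to-image network standing in for a CycleGAN-style generator."""
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
              )
          def forward(self, x):
              return self.net(x)

      G_mask2img = TinyTranslator()      # generates a synthetic microscopy image from a mask
      G_img2mask = TinyTranslator()      # recovers the mask, enforcing cycle consistency

      mask = torch.rand(4, 1, 64, 64)    # a small batch of annotated masks (dummy data)
      fake_img = G_mask2img(mask)        # synthetic, already-annotated image
      recovered = G_img2mask(fake_img)
      cycle_loss = nn.functional.l1_loss(recovered, mask)   # cycle-consistency term
      cycle_loss.backward()              # adversarial and style terms omitted for brevity

    The synthetic image-and-mask pairs produced this way are what would then feed the downstream segmentation model.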
    “Using a limited data set, we can train a good generative model. Using that generative model, we are able to generate a more diverse and larger set of annotated, synthetic images. Using the generated synthetic images we can train a good segmentation model — that is the main idea,” Zargari said.
    The researchers compared the results of their model using synthetic training data to more traditional methods of training AI to carry out cell segmentation across different types of cells. They found that their model produces significantly improved segmentation compared to models trained with conventional, limited training data. This confirms to the researchers that providing a more diverse dataset during training of the segmentation model improves performance.
    Through these enhanced segmentation capabilities, the researchers will be able to better detect cells and study variability between individual cells, especially among stem cells. In the future, the researchers hope to use the technology they have developed to move beyond still images to generate videos, which can help them pinpoint which factors influence the fate of a cell early in its life and predict their future.
    “We are generating synthetic images that can also be turned into a time lapse movie, where we can generate the unseen future of cells,” Shariati said. “With that, we want to see if we are able to predict the future states of a cell, like if the cell is going to grow, migrate, differentiate or divide.”