More stories


    Making the most of quite little: Improving AI training for edge sensor time series

    Engineers at the Tokyo Institute of Technology (Tokyo Tech) have demonstrated a simple computational approach for improving the way artificial intelligence classifiers, such as neural networks, can be trained based on limited amounts of sensor data. The emerging applications of the internet of things often require edge devices that can reliably classify behaviors and situations based on time series. However, training data are difficult and expensive to acquire. The proposed approach promises to substantially increase the quality of classifier training, at almost no extra cost.
    In recent times, the prospect of having huge numbers of Internet of Things (IoT) sensors quietly and diligently monitoring countless aspects of human, natural, and machine activities has gained ground. As our society becomes more and more hungry for data, scientists, engineers, and strategists increasingly hope that the additional insight which we can derive from this pervasive monitoring will improve the quality and efficiency of many production processes, also resulting in improved sustainability.
    The world in which we live is incredibly complex, and this complexity is reflected in a huge multitude of variables that IoT sensors may be designed to monitor. Some are natural, such as the amount of sunlight, moisture, or the movement of an animal, while others are artificial, for example, the number of cars crossing an intersection or the strain applied to a suspended structure like a bridge. What these variables all have in common is that they evolve over time, creating what is known as time series, and that meaningful information is expected to be contained in their relentless changes. In many cases, researchers are interested in classifying a set of predetermined conditions or situations based on these temporal changes, as a way of reducing the amount of data and making it easier to understand. For instance, measuring how frequently a particular condition or situation arises is often taken as the basis for detecting and understanding the origin of malfunctions, pollution increases, and so on.
    Some types of sensors measure variables that in themselves change very slowly over time, such as moisture. In such cases, it is possible to transmit each individual reading over a wireless network to a cloud server, where the analysis of large amounts of aggregated data takes place. However, more and more applications require measuring variables that change rather quickly, such as the accelerations tracking the behavior of an animal or the daily activity of a person. Since many readings per second are often required, it becomes impractical or impossible to transmit the raw data wirelessly, due to limitations of available energy, data charges, and, in remote locations, bandwidth. To circumvent this issue, engineers all over the world have long been looking for clever and efficient ways to pull aspects of data analysis away from the cloud and into the sensor nodes themselves. This is often called edge artificial intelligence, or edge AI. In general terms, the idea is to send wirelessly not the raw recordings, but the results of a classification algorithm searching for particular conditions or situations of interest, resulting in a much more limited amount of data from each node.
    There are, however, many challenges to face. Some are physical and stem from the need to fit a good classifier into what is usually a rather limited amount of space and weight, often running it on very little power so that long battery life can be achieved. “Good engineering solutions to these requirements are emerging every day, but the real challenge holding back many real-world solutions is actually another. Classification accuracy is often just not good enough, and society requires reliable answers to start trusting a technology,” says Dr. Hiroyuki Ito, head of the Nano Sensing Unit where the study was conducted. “Many exemplary applications of artificial intelligence such as self-driving cars have shown that how good or poor an artificial classifier is depends heavily on the quality of the data used to train it. But, more often than not, sensor time series data are really demanding and expensive to acquire in the field. For example, considering cattle behavior monitoring, to acquire it engineers need to spend time at farms, instrumenting individual cows and having experts patiently annotate their behavior based on video footage,” adds co-author Dr. Korkut Kaan Tokgoz, formerly part of the same research unit and now with Sabanci University in Turkey.
    Because training data are so precious, engineers have started looking for new ways of making the most of even a very limited amount of data available to train edge AI devices. An important trend in this area is using techniques known as “data augmentation,” wherein some manipulations, deemed reasonable based on experience, are applied to the recorded data so as to mimic the variability and uncertainty that can be encountered in real applications. “For example, in our previous work, we simulated the unpredictable rotation of a collar containing an acceleration sensor around the neck of a monitored cow, and found that the additional data generated in this way could really improve the performance in behavior classification,” explains Ms. Chao Li, doctoral student and lead author of the study [1]. “However, we also realized that we needed a much more general approach to augmenting sensor time series, one that could in principle be used for any kind of data and not make specific assumptions about the measurement condition. Moreover, in real-world situations, there are actually two issues, related but distinct. The first is that the overall amount of training data is often limited. The second is that some situations or conditions occur much more frequently than others, and this is unavoidable. For example, cows naturally spend much more time resting or ruminating than drinking. Yet, accurately measuring the less frequent behaviors is quite essential to properly judge the welfare status of an animal. A cow that does not drink will surely succumb, even though the accuracy of classifying drinking may have low impact on common training approaches due to its rarity. This is called the data imbalance problem,” she adds.
    The computational research performed by the researchers at Tokyo Tech, initially targeted at improving cattle behavior monitoring, offers a possible solution to these problems by combining two very different and complementary approaches. The first is known as sampling, and consists of extracting “snippets” of time series corresponding to the conditions to be classified, always starting from different, random instants. How many snippets are extracted is adjusted carefully, ensuring that one always ends up with approximately the same number of snippets across all the behaviors to be classified, regardless of how common or rare they are. This results in a more balanced dataset, which is decidedly preferable as a basis for training any classifier such as a neural network. Because the procedure is based on selecting subsets of actual data, it avoids the artifacts that may stem from artificially synthesizing new snippets to make up for the less represented behaviors. The second is known as surrogate data, and involves a very robust numerical procedure to generate, from any existing time series, any number of new ones that preserve some key features but are completely uncorrelated. “This virtuous combination turned out to be very important, because sampling may cause a lot of duplication of the same data, when certain behaviors are too rare compared to others. Surrogate data are never the same and prevent this problem, which can very negatively affect the training process. And a key aspect of this work is that the data augmentation is integrated with the training process, so different data are always presented to the network throughout its training,” explains Mr. Jim Bartels, co-author and doctoral student at the unit.
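    The balanced-sampling idea described above can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the team's code: the function name and toy data are invented, and labeling each snippet by the class at its starting instant is a simplification.

```python
import numpy as np

def balanced_snippets(series, labels, classes, snippet_len, per_class, seed=None):
    """Draw the same number of random fixed-length snippets for every class,
    so rare behaviors end up as well represented as common ones."""
    rng = np.random.default_rng(seed)
    out, out_labels = [], []
    for c in classes:
        # instants where a snippet of class c can start (simplification:
        # the snippet is labeled by the class at its starting sample)
        starts = np.flatnonzero(labels[: len(labels) - snippet_len + 1] == c)
        # sample with replacement, so even very rare classes yield enough snippets
        chosen = rng.choice(starts, size=per_class, replace=True)
        for s in chosen:
            out.append(series[s : s + snippet_len])
            out_labels.append(c)
    return np.stack(out), np.array(out_labels)

# toy 1-D series: class 0 (e.g. ruminating) is common, class 1 (e.g. drinking) is rare
series = np.sin(np.linspace(0, 20, 1000))
labels = np.zeros(1000, dtype=int)
labels[100:120] = 1
X, y = balanced_snippets(series, labels, classes=[0, 1], snippet_len=10, per_class=50, seed=0)
print(X.shape, np.bincount(y))  # (100, 10) [50 50]
```

    Note that the rare class is heavily duplicated by this resampling, which is exactly the problem the surrogate-data step is meant to counter.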
    Surrogate time series are generated by completely scrambling the phases of one or more signals, thus rendering them totally unrecognizable when their changes over time are considered. However, the distribution of values, the autocorrelation, and, if there are multiple signals, the cross-correlation, are perfectly preserved. “In another previous work, we found that many empirical operations such as reversing and recombining time series actually helped to improve training. As these operations change the nonlinear content of the data, we later reasoned that the sort of linear features which are retained during surrogate generation are probably key to performance, at least for the application of cow behavior recognition that I focus on,” further explains Ms. Chao Li [2]. “The method of surrogate time series originates from an entirely different field, namely the study of nonlinear dynamics in complex systems like the brain, for which such time series are used to help distinguish chaotic behavior from noise. By bringing together our different experiences, we quickly realized that they could be helpful for this application, too,” adds Dr. Ludovico Minati, second author of the study and also with the Nano Sensing Unit. “However, considerable caution is needed because no two application scenarios are ever the same, and what holds true for the time series reflecting cow behaviors may not be valid for other sensors monitoring different types of dynamics. In any case, the elegance of the proposed method is that it is quite essential, simple, and generic. Therefore, it will be easy for other researchers to quickly try it out on their specific problems,” he adds.
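    A minimal phase-scrambling surrogate can be sketched as follows. This is an illustrative reconstruction, not the authors' code: preserving the exact value distribution additionally requires an amplitude-adjustment step, and preserving cross-correlations requires applying the same random phases to all channels.

```python
import numpy as np

def phase_surrogate(x, seed=None):
    """Phase-randomized surrogate of a real time series: identical amplitude
    spectrum (hence identical autocorrelation), completely scrambled phases."""
    rng = np.random.default_rng(seed)
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(spec))
    new = np.abs(spec) * np.exp(1j * phases)
    new[0] = spec[0]            # keep the mean (DC bin) unchanged
    if n % 2 == 0:
        new[-1] = spec[-1]      # keep the Nyquist bin real for even lengths
    return np.fft.irfft(new, n=n)

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(1024))  # a correlated toy signal (random walk)
s = phase_surrogate(x, seed=1)
# the power spectra (and thus autocorrelations) match, the waveforms do not
print(np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(s))))  # True
```

    Each call with a different seed yields a new, uncorrelated surrogate, which is what allows fresh data to be presented to the network throughout training.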
    After this interview, the team explained that this type of research will be applied first of all to improving the classification of cattle behaviors, for which it was initially intended and on which the unit is conducting multidisciplinary research in partnership with other universities and companies. “One of our main goals is to successfully demonstrate high accuracy on a small, inexpensive device that can monitor a cow over its entire lifetime, allowing early detection of disease and therefore really improving not only animal welfare but also the efficiency and sustainability of farming,” concludes Dr. Hiroyuki Ito. The methodology and results are reported in a recent article published in the journal IEEE Sensors [3].


    Using math to better treat cancer

    Researchers at the University of Waterloo have identified a new method for scheduling radiation therapy that could be as much as 22 percent more effective at killing cancer cells than current standard radiation treatment regimens.
    While many mathematical studies have examined how to optimize the scheduling of radiation treatment for maximum effectiveness against cancer, most of these studies assume “intratumoral homogeneity” — that is, that all of the cancer cells are the same. In recent years, however, scientists have realized that tumours are made up of many different kinds of cells. Most importantly, they include cancer stem cells, which are more resistant to radiation than other kinds of cells.
    “The problem with any calculation involving cancer is that it’s super hard to get exact values because things vary from cancer type to cancer type, patient to patient, even within the tumour,” said Cameron Meaney, a PhD candidate in Applied Mathematics at Waterloo and the lead researcher on the study.
    This new algorithm can generalize the differing radiation resistances of stem cells and non-stem cells, allowing doctors to predict how a tumour will respond to treatment before gathering exact data on an individual’s cancer.
    The model has limitations, Meaney explained, as tumours contain far more than two kinds of cells. What it does, however, is provide clinical researchers with a better starting point for treatment research.
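    The study's actual optimization is more involved, but the standard linear-quadratic (LQ) survival model with two cell populations is enough to see why heterogeneity matters for scheduling. In the sketch below, the radiosensitivity values alpha and beta are hypothetical, chosen only to make the stem-cell population more radioresistant; they are not taken from the paper.

```python
import math

def surviving_fraction(dose_per_fraction, n_fractions, alpha, beta):
    """Linear-quadratic model: fraction of cells surviving n equal dose fractions."""
    d = dose_per_fraction
    return math.exp(-n_fractions * (alpha * d + beta * d * d))

# hypothetical radiosensitivities: stem cells (more resistant) vs bulk tumor cells
alpha_stem, beta_stem = 0.05, 0.01
alpha_bulk, beta_bulk = 0.30, 0.03

# the same total dose (60 Gy) delivered as 30 x 2 Gy or as 15 x 4 Gy
for n, d in [(30, 2.0), (15, 4.0)]:
    s_stem = surviving_fraction(d, n, alpha_stem, beta_stem)
    s_bulk = surviving_fraction(d, n, alpha_bulk, beta_bulk)
    print(f"{n} x {d} Gy: stem survival {s_stem:.3e}, bulk survival {s_bulk:.3e}")
```

    With these (invented) parameters, the two schedules rank differently for the two populations, which is the crux of the scheduling problem once a tumor is treated as heterogeneous.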
    “The results of the algorithm are important because they shed light on the idea that heterogeneity in tumours matters for planning treatment,” Meaney said.
    The next step the researchers hope to see is an application of their algorithm to clinical studies: will their suggested therapy schedule outperform existing scheduling practices in a lab trial?
    Story Source:
    Materials provided by University of Waterloo. Note: Content may be edited for style and length.


    Dry pet food may be more environmentally friendly than wet food

    Pet owners may have a new reason to reach for the kibble.

    Dry cat and dog food tends to be better for the environment than wet food, veterinary nutritionist Vivian Pedrinelli of the University of São Paulo in Brazil and colleagues report. Their analysis of more than 900 pet diets shows that nearly 90 percent of calories in wet chow come from animal sources. That’s roughly double the share of calories from animal ingredients in dry food.

    The team factored in the cost of different pet food ingredients across several environmental measures. The findings, described November 17 in Scientific Reports, suggest that wet food production uses more land and water and emits more greenhouse gases than dry food.  

    Scientists already knew that meat-heavy human diets drive greenhouse gas emissions (SN: 5/5/22). But when it comes to environmental sustainability, “we shouldn’t ignore pet food,” says Peter Alexander, an economist at the University of Edinburgh who was not involved in the work.

    Just how much various pet foods impact the environment isn’t clear, Alexander says. Commercial cat and dog fare isn’t typically made from prime cuts of meat. Instead, the ingredient lists often include animal byproducts — the gristle and bits people aren’t likely to eat anyway.

    How to calculate the carbon cost of these leftovers is an ongoing debate, says Gregory Okin, an environmental scientist at the University of California, Los Angeles who was not involved with the study.

    Some argue that the byproducts in pet food are essentially free, since they come from animals already raised for human consumption. Others note that any calories require energy and therefore incur an environmental cost. Plus, animal ingredients in pet food might not be just scraps. If they contain even a small amount of human-edible meat, that could add up to a big impact.

    Knowing that there’s an environmental difference between moist morsels and crunchier cuisines could be helpful for eco-conscious pet owners, Okin says. Having that info handy at the grocery store is “super important when people are making decisions,” he adds. “There are consumers who want to pay attention.”


    AI tailors artificial DNA for future drug development

    With the help of an AI, researchers at Chalmers University of Technology, Sweden, have succeeded in designing synthetic DNA that controls the cells’ protein production. The technology can contribute to the development and production of vaccines, drugs for severe diseases, and alternative food proteins much faster and at significantly lower cost than today.
    How our genes are expressed is a process that is fundamental to the functionality of cells in all living organisms. Simply put, the genetic code in DNA is transcribed to the molecule messenger RNA (mRNA), which tells the cell’s factory which protein to produce and in which quantities.
    Researchers have put a lot of effort into trying to control gene expression because it can, among other things, contribute to the development of protein-based drugs. A recent example is the mRNA vaccine against Covid-19, which instructed the body’s cells to produce the same protein found on the surface of the coronavirus. The body’s immune system could then learn to form antibodies against the virus. Likewise, it is possible to teach the body’s immune system to defeat cancer cells or other complex diseases if one understands the genetic code behind the production of specific proteins.
    Most of today’s new drugs are protein-based, but the techniques for producing them are both expensive and slow, because it is difficult to control how the DNA is expressed. Last year, a research group at Chalmers, led by Aleksej Zelezniak, Associate Professor of Systems Biology, took an important step in understanding and controlling how much of a protein is made from a certain DNA sequence.
    “First it was about being able to fully ‘read’ the DNA molecule’s instructions. Now we have succeeded in designing our own DNA that contains the exact instructions to control the quantity of a specific protein,” says Aleksej Zelezniak about the research group’s latest important breakthrough.
    DNA molecules made-to-order
    The principle behind the new method is similar to when an AI generates faces that look like real people. By learning what a large selection of faces looks like, the AI can then create completely new but natural-looking faces. It is then easy to modify a face by, for example, saying that it should look older, or have a different hairstyle. On the other hand, programming a believable face from scratch, without the use of AI, would have been much more difficult and time-consuming. Similarly, the researchers’ AI has been taught the structure and regulatory code of DNA. The AI then designs synthetic DNA whose regulatory information can easily be modified toward the desired level of gene expression. Simply put, the AI is told how much of a gene is desired and then ‘prints’ the appropriate DNA sequence.
    “DNA is an incredibly long and complex molecule. It is thus experimentally extremely challenging to make changes to it by iteratively reading and changing it, then reading and changing it again. This way it takes years of research to find something that works. Instead, it is much more effective to let an AI learn the principles of navigating DNA. What otherwise takes years is now shortened to weeks or days,” says first author Jan Zrimec, a research associate at the National Institute of Biology in Slovenia and past postdoc in Aleksej Zelezniak’s group.
    The researchers have developed their method in the yeast Saccharomyces cerevisiae, whose cells resemble mammalian cells. The next step is to use human cells. The researchers have hopes that their progress will have an impact on the development of new as well as existing drugs.
    “Protein-based drugs for complex diseases or alternative sustainable food proteins can take many years and can be extremely expensive to develop. Some are so expensive that it is impossible to obtain a return on investment, making them economically nonviable. With our technology, it is possible to develop and manufacture proteins much more efficiently so that they can be marketed,” says Aleksej Zelezniak.
    The authors of the study are Jan Zrimec, Xiaozhi Fu, Azam Sheikh Muhammad, Christos Skrekas, Vykintas Jauniskis, Nora K. Speicher, Christoph S. Börlin, Vilhelm Verendel, Morteza Haghir Chehreghani, Devdatt Dubhashi, Verena Siewers, Florian David, Jens Nielsen and Aleksej Zelezniak.
    The researchers are active at Chalmers University of Technology, Sweden; National Institute of Biology, Slovenia; Biomatter Designs, Lithuania; Institute of Biotechnology, Lithuania; BioInnovation Institute, Denmark; and King’s College London, UK.


    Sweet new way to print microchip patterns on curvy surfaces

    NIST scientist Gary Zabow had never intended to use candy in his lab. It was only as a last resort that he had even tried burying microscopic magnetic dots in hardened chunks of sugar — hard candy, basically — and sending these sweet packages to colleagues in a biomedical lab. The sugar dissolves easily in water, freeing the magnetic dots for their studies without leaving any harmful plastics or chemicals behind.
    By chance, Zabow had left one of these sugar pieces, embedded with arrays of micromagnetic dots, in a beaker, and it did what sugar does with time and heat — it melted, coating the bottom of the beaker in a gooey mess.
    “No problem,” he thought. He would just dissolve away the sugar, as normal. Except this time when he rinsed out the beaker, the microdots were gone. But they weren’t really missing; instead of releasing into the water, they had been transferred onto the bottom of the glass where they were casting a rainbow reflection.
    “It was those rainbow colors that really surprised me,” Zabow recalls. The colors indicated that the arrays of microdots had retained their unique pattern.
    This sweet mess gave him an idea. Could regular table sugar be used to bring the power of microchips to new and unconventional surfaces? Zabow’s findings on this potential transfer printing process were published in Science on Nov. 25.
    Semiconductor chips, micropatterned surfaces, and electronics all rely on microprinting, the process of putting precise but minuscule patterns millionths to billionths of a meter wide onto surfaces to give them new properties. Traditionally, these tiny mazes of metals and other materials are printed on flat wafers of silicon. But as the possibilities for semiconductor chips and smart materials expand, these intricate, tiny patterns need to be printed on new, unconventional, non-flat surfaces.


    A far-sighted approach to machine learning

    Picture two teams squaring off on a football field. The players can cooperate to achieve an objective, and compete against other players with conflicting interests. That’s how the game works.
    Creating artificial intelligence agents that can learn to compete and cooperate as effectively as humans remains a thorny problem. A key challenge is enabling AI agents to anticipate future behaviors of other agents when they are all learning simultaneously.
    Because of the complexity of this problem, current approaches tend to be myopic; the agents can only guess the next few moves of their teammates or competitors, which leads to poor performance in the long run.
    Researchers from MIT, the MIT-IBM Watson AI Lab, and elsewhere have developed a new approach that gives AI agents a farsighted perspective. Their machine-learning framework enables cooperative or competitive AI agents to consider what other agents will do as time approaches infinity, not just over a few next steps. The agents then adapt their behaviors accordingly to influence other agents’ future behaviors and arrive at an optimal, long-term solution.
    This framework could be used by a group of autonomous drones working together to find a lost hiker in a thick forest, or by self-driving cars that strive to keep passengers safe by anticipating future moves of other vehicles driving on a busy highway.
    “When AI agents are cooperating or competing, what matters most is when their behaviors converge at some point in the future. There are a lot of transient behaviors along the way that don’t matter very much in the long run. Reaching this converged behavior is what we really care about, and we now have a mathematical way to enable that,” says Dong-Ki Kim, a graduate student in the MIT Laboratory for Information and Decision Systems (LIDS) and lead author of a paper describing this framework.


    Achieving a quantum fiber

    Invented in 1970 by Corning Incorporated, low-loss optical fiber became the best means to efficiently transport information from one place to another over long distances without loss of information. The most common way of data transmission nowadays is through conventional optical fibers — one single core channel transmits the information. However, with the exponential increase of data generation, these systems are reaching information-carrying capacity limits. Thus, research now focuses on finding new ways to utilize the full potential of fibers by examining their inner structure and applying new approaches to signal generation and transmission. Moreover, applications in quantum technology are enabled by extending this research from classical to quantum light.
    In the late 50s, the physicist Philip W. Anderson (who also made important contributions to particle physics and superconductivity) predicted what is now called Anderson localization. For this discovery, he received the 1977 Nobel Prize in Physics. Anderson showed theoretically under which conditions an electron in a disordered system can either move freely through the system as a whole, or be tied to a specific position as a “localized electron.” This disordered system can for example be a semiconductor with impurities.
    Later, the same theoretical approach was applied to a variety of disordered systems, and it was deduced that also light could experience Anderson localization. Experiments in the past have demonstrated Anderson localization in optical fibers, realizing the confinement or localization of light — classical or conventional light — in two dimensions while propagating it through the third dimension. While these experiments had shown successful results with classical light, so far no one had tested such systems with quantum light — light consisting of quantum correlated states. That is, until recently.
    In a study published in Communications Physics, ICFO researchers Alexander Demuth, Robin Camphausen, and Alvaro Cuevas, led by ICREA Prof. at ICFO Valerio Pruneri, in collaboration with Nick Borrelli, Thomas Seward, Lisa Lamberson and Karl W. Koch from Corning, together with Alessandro Ruggeri from Micro Photon Devices (MPD) and Federica Villa and Francesca Madonini from Politecnico di Milano, have been able to successfully demonstrate the transport of two-photon quantum states of light through a phase-separated Anderson localization optical fiber (PSF).
    A conventional optical fiber vs an Anderson localization fiber
    Contrary to conventional single mode optical fibers, where data is transmitted through a single core, a phase separated fiber (PSF) or phase separated Anderson localization fiber is made of many glass strands embedded in a glass matrix of two different refractive indexes. During its fabrication, as borosilicate glass is heated and melted, it is drawn into a fiber, where one of the two phases of different refractive indexes tends to form elongated glass strands. Since there are two refractive indexes within the material, this generates what is known as a lateral disorder, which leads to transverse (2D) Anderson localization of light in the material.
    Experts in optical fiber fabrication at Corning created a fiber that can propagate multiple optical beams through a single strand by harnessing Anderson localization. In contrast to multicore fiber bundles, this PSF proved very suitable for such experiments, since many parallel optical beams can propagate through the fiber with minimal spacing between them.
    The team of scientists, experts in quantum communications, wanted to transport quantum information as efficiently as possible through Corning’s phase-separated optical fiber. In the experiment, the PSF connects a transmitter and a receiver. The transmitter is a quantum light source (built by ICFO). The source generates quantum correlated photon pairs via spontaneous parametric down-conversion (SPDC) in a non-linear crystal, in which one high-energy photon is converted into a pair of photons, each with lower energy. The low-energy photon pairs have a wavelength of 810 nm. Due to momentum conservation, spatial anti-correlation arises. The receiver is a single-photon avalanche diode (SPAD) array camera, developed by Polimi and MPD. The SPAD array camera, unlike common CMOS cameras, is so sensitive that it can detect single photons with extremely low noise; it also has very high time resolution, such that the arrival time of the single photons is known with high precision.
    Quantum light
    The ICFO team engineered the optical setup to send the quantum light through the phase-separated Anderson localization fiber and detected its arrival with the SPAD array camera. The SPAD array enabled them not only to detect the pairs of photons but also to identify them as pairs, as they arrive at the same time (coincident). As the pairs are quantum correlated, knowing where one of the two photons is detected tells us the other photon’s location. The team verified this correlation right before and after sending the quantum light through PSF, successfully showing that the spatial anti-correlation of the photons was indeed maintained.
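    The anti-correlation check can be illustrated numerically. The toy model below uses invented numbers and is not the actual SPAD data pipeline: it simply places the two photons of each pair at nearly opposite positions relative to the beam center, as momentum conservation dictates, and confirms the strong negative correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy model of SPDC spatial anti-correlation on a detector array:
# photon 2 lands at (nearly) the mirror position of photon 1
n_pairs = 10_000
x1 = rng.normal(0.0, 1.0, n_pairs)          # photon 1 position (arbitrary units)
x2 = -x1 + rng.normal(0.0, 0.05, n_pairs)   # photon 2: mirrored, small spread

# a Pearson correlation close to -1 signals spatial anti-correlation:
# detecting one photon reveals (almost exactly) where its partner is
r = np.corrcoef(x1, x2)[0, 1]
print(round(r, 2))
```

    Verifying that this correlation coefficient is the same before and after the fiber is, in essence, what the team's measurement established.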
    After this demonstration, the ICFO team then set out to show how to improve their results in future work. For this, they conducted a scaling analysis, in order to find out the optimal size distribution of the elongated glass strands for the quantum light wavelength of 810 nm. After a thorough analysis with classical light they were able to identify the current limitations of phase-separated fiber and propose improvements of its fabrication, in order to minimize attenuation and loss of resolution during transport.
    The results of this study have shown this approach to be potentially attractive for scalable fabrication processes in real-world applications in quantum imaging or quantum communications, especially for the fields of high-resolution endoscopy, entanglement distribution and quantum key distribution.


    Spin correlation between paired electrons demonstrated

    Physicists at the University of Basel have experimentally demonstrated for the first time that there is a negative correlation between the two spins of an entangled pair of electrons from a superconductor. For their study, the researchers used spin filters made of nanomagnets and quantum dots, as they report in the scientific journal Nature.
    The entanglement between two particles is among those phenomena in quantum physics that are hard to reconcile with everyday experiences. If entangled, certain properties of the two particles are closely linked, even when far apart. Albert Einstein described entanglement as a “spooky action at a distance.” Research on entanglement between light particles (photons) was awarded this year’s Nobel Prize in Physics.
    Two electrons can be entangled as well — for example in their spins. In a superconductor, the electrons form so-called Cooper pairs responsible for the lossless electrical currents and in which the individual spins are entangled.
    For several years, researchers at the Swiss Nanoscience Institute and the Department of Physics at the University of Basel have been able to extract electron pairs from a superconductor and spatially separate the two electrons. This is achieved by means of two quantum dots — nanoelectronic structures connected in parallel, each of which only allows single electrons to pass.
    Opposite electron spins from Cooper pairs
    The team of Prof. Dr. Christian Schönenberger and Dr. Andreas Baumgartner, in collaboration with researchers led by Prof. Dr. Lucia Sorba from the Istituto Nanoscienze-CNR and the Scuola Normale Superiore in Pisa have now been able to experimentally demonstrate what has long been expected theoretically: electrons from a superconductor always emerge in pairs with opposite spins.
    Using an innovative experimental setup, the physicists were able to measure that the spin of one electron points upwards when the other is pointing downwards, and vice versa. “We have thus experimentally proven a negative correlation between the spins of paired electrons,” explains project leader Andreas Baumgartner.
    The researchers achieved this by using a spin filter they developed in their laboratory. Using tiny magnets, they generated individually adjustable magnetic fields in each of the two quantum dots that separate the Cooper pair electrons. Since the spin also determines the magnetic moment of an electron, only one particular type of spin is allowed through at a time.
    “We can adjust both quantum dots so that mainly electrons with a certain spin pass through them,” explains first author Dr. Arunav Bordoloi. “For example, an electron with spin up passes through one quantum dot and an electron with spin down passes through the other quantum dot, or vice versa. If both quantum dots are set to pass only the same spins, the electric currents in both quantum dots are reduced, even though an individual electron may well pass through a single quantum dot.”
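    The ideal correlation that such spin filters probe can be sketched with a toy sampler. This models only the textbook quantum-mechanical prediction for a spin singlet, not the quantum-dot experiment itself: for measurement axes separated by an angle delta, the probability of equal outcomes is (1 − cos δ)/2, so aligned filters give perfect anti-correlation.

```python
import numpy as np

def singlet_outcomes(theta_a, theta_b, n, rng):
    """Sample +/-1 spin outcomes for a singlet pair measured along axes
    theta_a and theta_b; quantum prediction: E = -cos(theta_a - theta_b)."""
    delta = theta_a - theta_b
    p_same = (1.0 - np.cos(delta)) / 2.0  # probability both filters give the same sign
    a = rng.choice([-1, 1], size=n)       # outcome at the first filter
    same = rng.random(n) < p_same
    b = np.where(same, a, -a)             # outcome at the second filter
    return a, b

rng = np.random.default_rng(0)

# aligned filters: outcomes are perfectly anti-correlated
a, b = singlet_outcomes(0.0, 0.0, 100_000, rng)
print(np.mean(a * b))  # -1.0
```

    Anti-aligned filters (delta = π) flip this to perfect positive correlation, which matches the observation that setting both quantum dots to pass the same spin suppresses the currents.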
    “With this method, we were able to detect such negative correlations between electron spins from a superconductor for the first time,” Andreas Baumgartner concludes. “Our experiments are a first step, but not yet a definitive proof of entangled electron spins, since we cannot set the orientation of the spin filters arbitrarily — but we are working on it.”
    The research, which was recently published in Nature, is considered an important step toward further experimental investigations of quantum mechanical phenomena, such as the entanglement of particles in solids, which is also a key component of quantum computers.
    Story Source:
    Materials provided by University of Basel. Original written by Christel Möller. Note: Content may be edited for style and length.