More stories

  • Scientists create entangled photons 100 times more efficiently than previously possible

    Super-fast quantum computers and communication devices could revolutionize countless aspects of our lives — but first, researchers need a fast, efficient source of the entangled pairs of photons such systems use to transmit and manipulate information. Researchers at Stevens Institute of Technology have done just that, not only creating a chip-based photon source 100 times more efficient than previously possible, but also bringing massive quantum device integration within reach.
    “It’s long been suspected that this was possible in theory, but we’re the first to show it in practice,” said Yuping Huang, Gallagher associate professor of physics and director of the Center for Quantum Science and Engineering.
    To create photon pairs, researchers trap light in carefully sculpted nanoscale microcavities; as light circulates in the cavity, its photons resonate and split into entangled pairs. But there’s a catch: at present, such systems are extremely inefficient, requiring a torrent of incoming laser light comprising hundreds of millions of photons before a single entangled photon pair will grudgingly drip out at the other end.
    Huang and colleagues at Stevens have now developed a new chip-based photon source that’s 100 times more efficient than any previous device, allowing the creation of tens of millions of entangled photon pairs per second from a single microwatt-powered laser beam.
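    As a rough back-of-the-envelope check (a sketch with assumed values, since the article does not state the exact pump wavelength or pair rate), one can estimate how many pump photons a microwatt beam delivers each second and what pair-generation efficiency the reported output implies:

    ```python
    # Back-of-the-envelope estimate of pair-generation efficiency.
    # Assumed values (not stated in the article): a 1-microwatt pump at a
    # telecom wavelength of 1550 nm, and an output of 2e7 entangled pairs per second.
    h = 6.626e-34          # Planck constant, J*s
    c = 3.0e8              # speed of light, m/s
    wavelength = 1550e-9   # assumed pump wavelength, m
    pump_power = 1e-6      # assumed pump power, W (one microwatt)
    pair_rate = 2e7        # assumed output, entangled pairs per second

    photon_energy = h * c / wavelength             # ~1.3e-19 J per pump photon
    pump_photon_rate = pump_power / photon_energy  # ~8e12 photons per second
    efficiency = pair_rate / pump_photon_rate      # pairs generated per pump photon

    print(f"pump photons per second: {pump_photon_rate:.2e}")
    print(f"pairs per pump photon:   {efficiency:.2e}")   # on the order of 1e-6
    ```

    Under those assumptions, roughly one pair emerges for every few hundred thousand pump photons, about a hundredfold better than the "hundreds of millions of photons per pair" figure quoted for earlier systems.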
    “This is a huge milestone for quantum communications,” said Huang, whose work will appear in the Dec. 17 issue of Physical Review Letters.
    Working with Stevens graduate students Zhaohui Ma and Jiayang Chen, Huang built on his laboratory’s previous research to carve extremely high-quality microcavities into flakes of lithium niobate crystal. The racetrack-shaped cavities internally reflect photons with very little loss of energy, enabling light to circulate longer and interact with greater efficiency.
    By fine-tuning additional factors such as temperature, the team was able to create an unprecedentedly bright source of entangled photon pairs. In practice, that allows photon pairs to be produced in far greater quantities for a given amount of incoming light, dramatically reducing the energy needed to power quantum components.
    The team is already working on ways to further refine their process, and they expect to soon attain the true Holy Grail of quantum optics: a system that can turn a single incoming photon into an entangled pair of outgoing photons, with virtually no waste energy along the way. “It’s definitely achievable,” said Chen. “At this point we just need incremental improvements.”
    Until then, the team plans to continue refining their technology, and seeking ways to use their photon source to drive logic gates and other quantum computing or communication components. “Because this technology is already chip-based, we’re ready to start scaling up by integrating other passive or active optical components,” explained Huang.
    The ultimate goal, Huang said, is to make quantum devices so efficient and cheap to operate that they can be integrated into mainstream electronic devices. “We want to bring quantum technology out of the lab, so that it can benefit every single one of us,” he explained. “Someday soon we want kids to have quantum laptops in their backpacks, and we’re pushing hard to make that a reality.”

    Story Source:
    Materials provided by Stevens Institute of Technology. Note: Content may be edited for style and length.

  • Scientists simulate a large-scale virus, M13

    Scientists have developed a procedure that combines various resolution levels in a computer simulation of a biological virus. Their procedure maps a large-scale model that includes features such as the virus structure and nanoparticles onto its corresponding coarse-grained molecular model. This approach opens the prospect of simulating an entire virus at the molecular level.

  • Method finds hidden warning signals in measurements collected over time

    When you’re responsible for a multimillion-dollar satellite hurtling through space at thousands of miles per hour, you want to be sure it’s running smoothly. And time series can help.
    A time series is simply a record of a measurement taken repeatedly over time. It can keep track of a system’s long-term trends and short-term blips. Examples include the infamous Covid-19 curve of new daily cases and the Keeling curve that has tracked atmospheric carbon dioxide concentrations since 1958. In the age of big data, “time series are collected all over the place, from satellites to turbines,” says Kalyan Veeramachaneni. “All that machinery has sensors that collect these time series about how they’re functioning.”
    But analyzing those time series, and flagging anomalous data points in them, can be tricky. Data can be noisy. If a satellite operator sees a string of high temperature readings, how do they know whether it’s a harmless fluctuation or a sign that the satellite is about to overheat?
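    The simplest automatic check is a fixed threshold or a rolling z-score, which is exactly the kind of rule that struggles with noisy data. A minimal sketch of that baseline (the readings below are made up for illustration):

    ```python
    import numpy as np

    # Hypothetical stream of temperature readings from a satellite sensor.
    readings = np.array([21.0, 21.2, 20.9, 21.1, 21.3, 24.8, 21.0, 21.2, 25.1, 25.3])

    # Rolling z-score: flag points that sit far from the recent mean.
    window = 5
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        z = (readings[i] - recent.mean()) / (recent.std() + 1e-9)
        status = "ANOMALY" if abs(z) > 3.0 else "ok"
        print(i, readings[i], status)
    ```

    A rule like this cannot tell a harmless spike from the start of a real overheating trend, and tightening or loosening the threshold simply trades missed anomalies for false alarms.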
    That’s a problem Veeramachaneni, who leads the Data-to-AI group in MIT’s Laboratory for Information and Decision Systems, hopes to solve. The group has developed a new, deep-learning-based method of flagging anomalies in time series data. Their approach, called TadGAN, outperformed competing methods and could help operators detect and respond to major changes in a range of high-value systems, from a satellite flying through space to a computer server farm buzzing in a basement.
    The research will be presented at this month’s IEEE BigData conference. The paper’s authors include Data-to-AI group members Veeramachaneni, postdoc Dongyu Liu, visiting research student Alexander Geiger, and master’s student Sarah Alnegheimish, as well as Alfredo Cuesta-Infante of Spain’s Rey Juan Carlos University.
    High stakes
    For a system as complex as a satellite, time series analysis must be automated. The satellite company SES, which is collaborating with Veeramachaneni, receives a flood of time series from its communications satellites — about 30,000 unique parameters per spacecraft. Human operators in SES’ control room can only keep track of a fraction of those time series as they blink past on the screen. For the rest, they rely on an alarm system to flag out-of-range values. “So they said to us, ‘Can you do better?'” says Veeramachaneni. The company wanted his team to use deep learning to analyze all those time series and flag any unusual behavior.
    The stakes of this request are high: If the deep learning algorithm fails to detect an anomaly, the team could miss an opportunity to fix things. But if it rings the alarm every time there’s a noisy data point, human reviewers will waste their time constantly checking up on the algorithm that cried wolf. “So we have these two challenges,” says Liu. “And we need to balance them.”
    Rather than strike that balance solely for satellite systems, the team endeavored to create a more general framework for anomaly detection — one that could be applied across industries. They turned to deep-learning systems called generative adversarial networks (GANs), often used for image analysis.
    A GAN consists of a pair of neural networks. One network, the “generator,” creates fake images, while the second network, the “discriminator,” processes images and tries to determine whether they’re real images or fake ones produced by the generator. Through many rounds of this process, the generator learns from the discriminator’s feedback and becomes adept at creating hyper-realistic fakes. The technique is deemed “unsupervised” learning, since it doesn’t require a prelabeled dataset where images come tagged with their subjects. (Large labeled datasets can be hard to come by.)
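    As general background (this is a generic toy GAN, not the TadGAN architecture itself), a minimal training loop in PyTorch looks roughly like this, with placeholder network sizes and one-dimensional data standing in for images or time series:

    ```python
    import torch
    import torch.nn as nn

    # Toy "real" data: samples from a Gaussian that the generator must learn to imitate.
    def real_batch(n=64):
        return torch.randn(n, 1) * 0.5 + 2.0

    generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
    discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(1000):
        # Discriminator step: label real samples 1 and generated samples 0.
        real = real_batch()
        fake = generator(torch.randn(64, 8)).detach()
        d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
                 bce(discriminator(fake), torch.zeros(64, 1))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # Generator step: try to make the discriminator output 1 on its fakes.
        fake = generator(torch.randn(64, 8))
        g_loss = bce(discriminator(fake), torch.ones(64, 1))
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()
    ```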
    The team adapted this GAN approach for time series data. “From this training strategy, our model can tell which data points are normal and which are anomalous,” says Liu. It does so by checking for discrepancies — possible anomalies — between the real time series and the fake GAN-generated time series. But the team found that GANs alone weren’t sufficient for anomaly detection in time series, because they can fall short in pinpointing the real time series segment against which the fake ones should be compared. As a result, “if you use GAN alone, you’ll create a lot of false positives,” says Veeramachaneni.
    To guard against false positives, the team supplemented their GAN with an algorithm called an autoencoder — another technique for unsupervised deep learning. In contrast to GANs’ tendency to cry wolf, autoencoders are more prone to miss true anomalies. That’s because autoencoders tend to capture too many patterns in the time series, sometimes interpreting an actual anomaly as a harmless fluctuation — a problem called “overfitting.” By combining a GAN with an autoencoder, the researchers crafted an anomaly detection system that struck the perfect balance: TadGAN is vigilant, but it doesn’t raise too many false alarms.
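    One simple way to turn those two ingredients into a single anomaly score, blending the reconstruction error with the critic (discriminator) output, is sketched below; the exact weighting and post-processing TadGAN uses may differ:

    ```python
    import numpy as np

    def anomaly_scores(real, reconstructed, critic_scores, alpha=0.5):
        """Blend pointwise reconstruction error with a critic score.

        real, reconstructed: arrays of shape (T,) for one time series.
        critic_scores: discriminator output per time step; higher means "looks real".
        alpha: assumed weighting between the two signals (a free parameter here).
        """
        recon_error = np.abs(real - reconstructed)
        critic_anomaly = -critic_scores  # low "realness" suggests an anomaly

        # Standardize both signals so they can be combined on one scale.
        recon_z = (recon_error - recon_error.mean()) / (recon_error.std() + 1e-9)
        critic_z = (critic_anomaly - critic_anomaly.mean()) / (critic_anomaly.std() + 1e-9)
        return alpha * recon_z + (1 - alpha) * critic_z

    # Toy usage with synthetic data and one injected anomaly.
    t = np.linspace(0, 10, 200)
    real = np.sin(t)
    real[120:125] += 2.0                              # the anomaly
    reconstructed = np.sin(t)                         # model reproduces the normal pattern
    critic = np.ones_like(t)
    critic[120:125] = 0.1                             # critic is less confident there
    scores = anomaly_scores(real, reconstructed, critic)
    print("most anomalous step:", scores.argmax())    # lands inside the injected window
    ```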
    Standing the test of time series
    Plus, TadGAN beat the competition. The traditional approach to time series forecasting, called ARIMA, was developed in the 1970s. “We wanted to see how far we’ve come, and whether deep learning models can actually improve on this classical method,” says Alnegheimish.
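    For reference, the classical baseline can be set up in a few lines with statsmodels; this is a generic ARIMA residual check on synthetic data, not the exact configuration used in the paper's benchmarks:

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Synthetic series with a level shift standing in for an anomaly.
    rng = np.random.default_rng(0)
    series = np.sin(np.linspace(0, 20, 300)) + rng.normal(0, 0.1, 300)
    series[200:210] += 1.5

    # Fit a small ARIMA model and flag points with unusually large residuals.
    model = ARIMA(series, order=(2, 0, 1)).fit()
    residuals = model.resid
    threshold = 4 * residuals.std()
    anomalies = np.where(np.abs(residuals) > threshold)[0]
    print("flagged indices:", anomalies)  # should cluster around the injected shift
    ```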
    The team ran anomaly detection tests on 11 datasets, pitting ARIMA against TadGAN and seven other methods, including some developed by companies like Amazon and Microsoft. TadGAN outperformed ARIMA in anomaly detection for eight of the 11 datasets. The second-best algorithm, developed by Amazon, only beat ARIMA for six datasets.
    Alnegheimish emphasized that their goal was not only to develop a top-notch anomaly detection algorithm, but also to make it widely usable. “We all know that AI suffers from reproducibility issues,” she says. The team has made TadGAN’s code freely available, and they issue periodic updates. Plus, they developed a benchmarking system for users to compare the performance of different anomaly detection models.
    “This benchmark is open source, so someone can go try it out. They can add their own model if they want to,” says Alnegheimish. “We want to mitigate the stigma around AI not being reproducible. We want to ensure everything is sound.”
    Veeramachaneni hopes TadGAN will one day serve a wide variety of industries, not just satellite companies. For example, it could be used to monitor the performance of computer apps that have become central to the modern economy. “To run a lab, I have 30 apps. Zoom, Slack, Github — you name it, I have it,” he says. “And I’m relying on them all to work seamlessly and forever.” The same goes for millions of users worldwide.
    TadGAN could help companies like Zoom monitor time series signals in their data center — like CPU usage or temperature — to help prevent service breaks, which could threaten a company’s market share. In future work, the team plans to package TadGAN in a user interface, to help bring state-of-the-art time series analysis to anyone who needs it.

  • Artificial intelligence classifies supernova explosions with unprecedented accuracy

    Artificial intelligence is classifying real supernova explosions without the traditional use of spectra, thanks to a team of astronomers at the Center for Astrophysics | Harvard & Smithsonian. The complete data sets and resulting classifications are publicly available for open use.
    By training a machine learning model to categorize supernovae based on their visible characteristics, the astronomers were able to classify real data from the Pan-STARRS1 Medium Deep Survey for 2,315 supernovae with an accuracy rate of 82-percent without the use of spectra.
    The astronomers developed a software program that classifies different types of supernovae based on their light curves, or how their brightness changes over time. “We have approximately 2,500 supernovae with light curves from the Pan-STARRS1 Medium Deep Survey, and of those, 500 supernovae with spectra that can be used for classification,” said Griffin Hosseinzadeh, a postdoctoral researcher at the CfA and lead author on the first of two papers published in The Astrophysical Journal. “We trained the classifier using those 500 supernovae to classify the remaining supernovae where we were not able to observe the spectrum.”
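    The general recipe, training on the spectroscopically labeled subset and then predicting labels for the photometric-only events, can be sketched with scikit-learn; the features and model below are placeholders, not the classifier actually described in the papers:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Placeholder features summarizing each light curve (e.g. peak brightness,
    # rise time, decline rate, a color index); random numbers stand in for them here.
    rng = np.random.default_rng(42)
    n_labeled, n_unlabeled = 500, 1815               # ~500 with spectra, the rest without
    X_labeled = rng.normal(size=(n_labeled, 4))
    y_labeled = rng.integers(0, 5, size=n_labeled)   # e.g. five supernova classes
    X_unlabeled = rng.normal(size=(n_unlabeled, 4))

    # Train on the labeled subset, then classify the supernovae without spectra.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_labeled, y_labeled)
    predictions = clf.predict(X_unlabeled)
    confidence = clf.predict_proba(X_unlabeled).max(axis=1)
    print(predictions[:10], confidence[:10])
    ```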
    Edo Berger, an astronomer at the CfA, explained that by asking the artificial intelligence to answer specific questions, the results become increasingly accurate. “The machine learning looks for a correlation with the original 500 spectroscopic labels. We ask it to compare the supernovae in different categories: color, rate of evolution, or brightness. By feeding it real existing knowledge, it leads to the highest accuracy, between 80- and 90-percent.”
    Although this is not the first machine learning project for supernovae classification, it is the first time that astronomers have had access to a real data set large enough to train an artificial intelligence-based supernovae classifier, making it possible to create machine learning algorithms without the use of simulations.
    “If you make a simulated light curve, it means you are making an assumption about what supernovae will look like, and your classifier will then learn those assumptions as well,” said Hosseinzadeh. “Nature will always throw some additional complications in that you did not account for, meaning that your classifier will not do as well on real data as it did on simulated data. Because we used real data to train our classifiers, it means our measured accuracy is probably more representative of how our classifiers will perform on other surveys.” As the classifier categorizes the supernovae, said Berger, “We will be able to study them both in retrospect and in real-time to pick out the most interesting events for detailed follow up. We will use the algorithm to help us pick out the needles and also to look at the haystack.”
    The project has implications not only for archival data, but also for data that will be collected by future telescopes. The Vera C. Rubin Observatory is expected to go online in 2023, and will lead to the discovery of millions of new supernovae each year. This presents both opportunities and challenges for astrophysicists, where limited telescope time leads to limited spectral classifications.
    “When the Rubin Observatory goes online it will increase our discovery rate of supernovae by 100-fold, but our spectroscopic resources will not increase,” said Ashley Villar, a Simons Junior Fellow at Columbia University and lead author on the second of the two papers, adding that while roughly 10,000 supernovae are currently discovered each year, scientists only take spectra of about 10-percent of those objects. “If this holds true, it means that only 0.1-percent of supernovae discovered by the Rubin Observatory each year will get a spectroscopic label. The remaining 99.9-percent of data will be unusable without methods like ours.”
    Unlike past efforts, where data sets and classifications have been available to only a limited number of astronomers, the data sets from the new machine learning algorithm will be made publicly available. The astronomers have created easy-to-use, accessible software, and also released all of the data from Pan-STARRS1 Medium Deep Survey along with the new classifications for use in other projects. Hosseinzadeh said, “It was really important to us that these projects be useful for the entire supernova community, not just for our group. There are so many projects that can be done with these data that we could never do them all ourselves.” Berger added, “These projects are open data for open science.”
    This project was funded in part by a grant from the National Science Foundation (NSF) and the Harvard Data Science Initiative (HDSI).

  • Catalyst research: Molecular probes require highly precise calculations

    Catalysts are indispensable for many technologies. To further improve heterogeneous catalysts, it is necessary to analyze the complex processes on their surfaces, where the active sites are located. Scientists at Karlsruhe Institute of Technology (KIT), together with colleagues from Spain and Argentina, have now made decisive progress: as reported in Physical Review Letters, they use calculation methods with so-called hybrid functionals for the reliable interpretation of experimental data.
    Many important technologies, such as processes for energy conversion, emission reduction, or the production of chemicals, work only with suitable catalysts. For this reason, highly efficient materials for heterogeneous catalysis are gaining importance. In heterogeneous catalysis, the material acting as a catalyst and the reacting substances exist in different phases, for instance as a solid and a gas. Material compositions can be determined reliably by various methods. Processes taking place on the catalyst surface, however, can be detected by hardly any analysis method. “But it is these highly complex chemical processes on the outermost surface of the catalyst that are of decisive importance,” says Professor Christof Wöll, Head of KIT’s Institute of Functional Interfaces (IFG). “There, the active sites are located, where the catalyzed reaction takes place.”
    Precise Examination of the Surface of Powder Catalysts
    Among the most important heterogeneous catalysts are cerium oxides, i.e. compounds of the rare-earth metal cerium with oxygen. They exist in powder form and consist of nanoparticles of controlled structure. The shape of the nanoparticles considerably influences the reactivity of the catalyst. To study the processes on the surface of such powder catalysts, researchers recently started to use probe molecules, such as carbon monoxide molecules, that bind to the nanoparticles. These probes are then measured by infrared reflection absorption spectroscopy (IRRAS). Infrared radiation causes molecules to vibrate. From the vibration frequencies of the probe molecules, detailed information can be obtained on the type and composition of the catalytic sites. So far, however, interpretation of the experimental IRRAS data has been very difficult, because technologically relevant powder catalysts have many vibration bands, whose exact assignment is challenging. Theoretical calculations were of no help, because the deviation from experiment, even for model systems, was so large that experimentally observed vibration bands could not be assigned precisely.
    Long Calculation Time — High Accuracy
    Researchers of KIT’s Institute of Functional Interfaces (IFG) and Institute of Catalysis Research and Technology (IKFT), in cooperation with colleagues from Spain and Argentina coordinated by Dr. M. Verónica Ganduglia-Pirovano from Consejo Superior de Investigaciones Científicas (CSIC) in Madrid, have now identified and solved a major problem of theoretical analysis. As reported in Physical Review Letters, systematic theoretical studies and validation of the results using model systems revealed that the theoretical methods used so far have some fundamental weaknesses. Such weaknesses generally occur in calculations using density functional theory (DFT), a method with which the quantum mechanical ground state of a many-electron system can be determined from the density of its electrons. The researchers found that the weaknesses can be overcome with so-called hybrid functionals, which combine DFT with the Hartree-Fock method, an approximation method in quantum chemistry. This makes the calculations very complex, but also highly precise. “The calculation times required by these new methods are longer by a factor of 100 than for conventional methods,” says Christof Wöll. “But this drawback is more than compensated by the excellent agreement with the experimental systems.” Using nanoscaled cerium oxide catalysts, the researchers demonstrated this progress, which may contribute to making heterogeneous catalysts more effective and durable.
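    For readers unfamiliar with the term, a hybrid functional mixes a fraction of exact Hartree-Fock exchange into a standard DFT exchange-correlation functional. A common form (the PBE0-type mixing, shown here as general background; the specific functional used in the study is not named in this summary) is

    $$E_{xc}^{\mathrm{hybrid}} = a\,E_{x}^{\mathrm{HF}} + (1-a)\,E_{x}^{\mathrm{DFT}} + E_{c}^{\mathrm{DFT}}, \qquad a = 0.25 \ \text{(PBE0)}.$$

    Evaluating the nonlocal Hartree-Fock exchange term is what makes such calculations far more expensive than conventional DFT, consistent with the factor-of-100 increase in computing time the researchers quote.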
    The results of the work also represent an important contribution to the new Collaborative Research Center (CRC) “TrackAct — Tracking the Active Site in Heterogeneous Catalysis for Emission Control” at KIT. Professor Christof Wöll and Dr. Yuemin Wang from IFG as well as Professor Felix Studt and Dr. Philipp Pleßow from IKFT are among the principal investigators of this interdisciplinary CRC that is aimed at holistically understanding catalytic processes.

    Story Source:
    Materials provided by Karlsruher Institut für Technologie (KIT). Note: Content may be edited for style and length.

  • Longest intergalactic gas filament discovered

    More than half of the matter in our universe has so far remained hidden from us. However, astrophysicists had a hunch where it might be: In so-called filaments, unfathomably large thread-like structures of hot gas that surround and connect galaxies and galaxy clusters. A team led by the University of Bonn (Germany) has now for the first time observed a gas filament with a length of 50 million light years. Its structure is strikingly similar to the predictions of computer simulations. The observation therefore also confirms our ideas about the origin and evolution of our universe. The results are published in the journal Astronomy & Astrophysics.
    We owe our existence to a tiny aberration. Pretty much exactly 13.8 billion years ago, the Big Bang occurred. It marked the beginning of space and time, and of all the matter that makes up our universe today. Although this matter was initially concentrated at a single point, it expanded at breakneck speed, forming a gigantic gas cloud in which matter was almost uniformly distributed.
    Almost, but not completely: In some parts the cloud was a bit denser than in others. And for this reason alone there are planets, stars and galaxies today. This is because the denser areas exerted slightly higher gravitational forces, which drew the gas from their surroundings towards them. More and more matter therefore concentrated at these regions over time. The space between them, however, became emptier and emptier. Over the course of a good 13 billion years, a kind of sponge structure developed: large “holes” without any matter, with areas in between where thousands of galaxies are gathered in a small space, so-called galaxy clusters.
    Fine web of gas threads
    If it really happened that way, the galaxies and clusters should still be connected by remnants of this gas, like the gossamer-thin threads of a spider web. “According to calculations, more than half of all baryonic matter in our universe is contained in these filaments — this is the form of matter of which stars and planets are composed, as are we ourselves,” explains Prof. Dr. Thomas Reiprich from the Argelander Institute for Astronomy at the University of Bonn. Yet it has so far escaped our gaze: because the filaments are so enormously extended, the matter in them is extremely dilute, containing just ten particles per cubic meter, which is much less than the best vacuum we can create on Earth.
    However, with a new measuring instrument, the eROSITA space telescope, Reiprich and his colleagues were now able to make the gas fully visible for the first time. “eROSITA has very sensitive detectors for the type of X-ray radiation that emanates from the gas in filaments,” explains Reiprich. “It also has a large field of view — like a wide-angle lens, it captures a relatively large part of the sky in a single measurement, and at a very high resolution.” This allows detailed images of such huge objects as filaments to be taken in a comparatively short time.
    Confirmation of the standard model
    In their study, the researchers examined a celestial object called Abell 3391/95. This is a system of three galaxy clusters, which is about 700 million light years away from us. The eROSITA images show not only the clusters and numerous individual galaxies, but also the gas filaments connecting these structures. The entire filament is 50 million light years long. But it may be even more enormous: The scientists assume that the images only show a section.
    “We compared our observations with the results of a simulation that reconstructs the evolution of the universe,” explains Reiprich. “The eROSITA images are strikingly similar to computer-generated graphics. This suggests that the widely accepted standard model for the evolution of the universe is correct.” Most importantly, the data show that the missing matter is probably actually hidden in the filaments.

    Story Source:
    Materials provided by University of Bonn. Note: Content may be edited for style and length.

  • Tiny quantum computer solves real optimization problem

    Quantum computers have already managed to surpass ordinary computers in solving certain tasks — unfortunately, totally useless ones. The next milestone is to get them to do useful things. Researchers at Chalmers University of Technology, Sweden, have now shown that they can solve a small part of a real logistics problem with their small, but well-functioning quantum computer.
    Interest in building quantum computers has gained considerable momentum in recent years, and feverish work is underway in many parts of the world. In 2019, Google’s research team made a major breakthrough when their quantum computer managed to solve a task far more quickly than the world’s best supercomputer. The downside is that the solved task had no practical use whatsoever — it was chosen because it was judged to be easy to solve for a quantum computer, yet very difficult for a conventional computer.
    Therefore, an important task is now to find useful, relevant problems that are beyond the reach of ordinary computers, but which a relatively small quantum computer could solve.
    “We want to be sure that the quantum computer we are developing can help solve relevant problems early on. Therefore, we work in close collaboration with industrial companies,” says theoretical physicist Giulia Ferrini, one of the leaders of Chalmers University of Technology’s quantum computer project, which began in 2018.
    Together with Göran Johansson, Giulia Ferrini led the theoretical work when a team of researchers at Chalmers, including an industrial doctoral student from the aviation logistics company Jeppesen, recently showed that a quantum computer can solve an instance of a real problem in the aviation industry.
    The algorithm proven on two qubits
    All airlines are faced with scheduling problems. For example, assigning individual aircraft to different routes represents an optimisation problem, one that grows very rapidly in size and complexity as the number of routes and aircraft increases.
    Researchers hope that quantum computers will eventually be better at handling such problems than today’s computers. The basic building block of the quantum computer — the qubit — is based on completely different principles than the building blocks of today’s computers, allowing a quantum computer to handle enormous amounts of information with relatively few qubits.
    However, due to their different structure and function, quantum computers must be programmed in other ways than conventional computers. One proposed algorithm that is believed to be useful on early quantum computers is the so-called Quantum Approximate Optimization Algorithm (QAOA).
    The Chalmers research team has now successfully executed this algorithm on their quantum computer — a processor with two qubits — and showed that it can successfully solve the problem of assigning aircraft to routes. In this first demonstration, the result could be easily verified as the scale was very small — it involved only two airplanes.
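    A minimal simulated sketch of what a one-layer, two-qubit QAOA does is shown below (numpy and scipy only, with a made-up cost vector standing in for the aircraft-assignment costs; this is not the team's actual hardware implementation or problem mapping):

    ```python
    import numpy as np
    from scipy.linalg import expm
    from scipy.optimize import minimize

    # Toy cost: qubit i encodes which of two routes aircraft i flies; assignments
    # that put both aircraft on the same route are made more expensive (values invented).
    costs = np.array([3.0, 1.0, 1.0, 3.0])    # basis order: |00>, |01>, |10>, |11>

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    I = np.eye(2, dtype=complex)
    mixer_hamiltonian = np.kron(X, I) + np.kron(I, X)
    plus = np.ones(4, dtype=complex) / 2.0     # uniform superposition |+>|+>

    def qaoa_state(params):
        gamma, beta = params
        # One QAOA layer: phase with the (diagonal) cost, then mix with X rotations.
        state = np.exp(-1j * gamma * costs) * plus
        return expm(-1j * beta * mixer_hamiltonian) @ state

    def expected_cost(params):
        probs = np.abs(qaoa_state(params)) ** 2
        return float(np.sum(probs * costs))

    # Classically optimize the two circuit angles, as the hybrid QAOA loop would.
    result = minimize(expected_cost, x0=[0.5, 0.5], method="Nelder-Mead")
    probs = np.abs(qaoa_state(result.x)) ** 2
    for bits, p in zip(["00", "01", "10", "11"], probs):
        print(bits, round(p, 3))
    ```

    After the angles are optimized, the probability mass shifts toward the two cheaper assignments, which mirrors, at toy scale, the behaviour the Chalmers experiment verified on hardware for the two-airplane instance.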
    Potential to handle many aircraft
    With this feat, the researchers were the first to show that the QAOA algorithm can solve the problem of assigning aircraft to routes in practice. They also managed to run the algorithm one level further than anyone before, an achievement that requires very good hardware and accurate control.
    “We have shown that we have the ability to map relevant problems onto our quantum processor. We still have a small number of qubits, but they work well. Our plan has been to first make everything work very well on a small scale, before scaling up,” says Jonas Bylander, senior researcher responsible for the experimental design, and one of the leaders of the project of building a quantum computer at Chalmers.
    The theorists in the research team also simulated solving the same optimisation problem for up to 278 aircraft, which would require a quantum computer with 25 qubits.
    “The results remained good as we scaled up. This suggests that the QAOA algorithm has the potential to solve this type of problem at even larger scales,” says Giulia Ferrini.
    Surpassing today’s best computers would, however, require much larger devices. The researchers at Chalmers have now begun scaling up and are currently working with five quantum bits. The plan is to reach at least 20 qubits by 2021 while maintaining the high quality.

  • How the spread of the internet is changing migration

    The spread of the Internet is shaping migration in profound ways. A McGill-led study of over 150 countries links Internet penetration with migration intentions and behaviours, suggesting that digital connectivity plays a key role in migration decisions and actively supports the migration process.
    Countries with higher proportions of Internet users tend to have more people who are willing to emigrate. At the individual level, the association between Internet use and intention to migrate is stronger among women and those with less education. The association is also stronger for economic migrants than for political migrants, according to the team of international researchers from McGill University, University of Oxford, University of Calabria, and Bocconi University.
    “The digital revolution brought about by the advent of the Internet has transformed our societies, economies, and way of life. Migration is no exception in this revolution,” says co-author Luca Maria Pesando, an Assistant Professor in the Department of Sociology and Centre on Population Dynamics at McGill University.
    In the study, published in Population and Development Review, the researchers tracked Internet use and migration pathways with data from the World Bank, the International Telecommunication Union, the Global Peace Index, the Arab Barometer, and the Gallup World Poll, an international survey of citizens across 160 countries.
    Their findings underscore the importance of the Internet as an informational channel for migrants who leave their country in search of better opportunities. Unlike political migrants, who might be pushed, for example, by the sudden explosion of a civil conflict, economic migrants’ decisions are more likely to benefit from access to information provided by the Internet, and more likely to be shaped by aspirations of brighter futures in their destination countries.
    “The Internet not only gives us access to more information; it allows us to easily compare ourselves to others living in other — often wealthier — countries through social media,” says Pesando.
    Case study of Italy
    Looking at migration data in Italy — a country that has witnessed sizeable increases in migrant inflows over the past two decades — the researchers found a strong correlation between Internet use in migrants’ countries of origin and the presence of people from those countries in the Italian population register in the following year. Tracking migrants, including asylum seekers and refugees, passing through the Sant’Anna immigration Centre in Calabria, the researchers also found a link between migrants’ digital skills and knowledge of the Internet and voluntary departure from the Centre in search of better economic opportunities.
    “Our findings contribute to the growing research on digital demography, where Internet-generated data or digital breadcrumbs are used to study migration and other demographic phenomena,” says Pesando. “Our work suggests that the Internet acts not just as an instrument to observe migration behaviors, but indeed actively supports the migration process.”
    As next steps, the research team, which includes Francesco Billari of Bocconi University and Ridhi Kashyap and Valentina Rotondi of University of Oxford, will explore how digital technology and connectivity affect social development outcomes, ranging from women’s empowerment to reproductive health and children’s wellbeing across generations.

    Story Source:
    Materials provided by McGill University. Note: Content may be edited for style and length.