More stories

  • Development of fusion energy

    The U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) is collaborating with private industry on cutting-edge fusion research aimed at achieving commercial fusion energy. This work, enabled through a public-private DOE grant program, supports efforts to develop high-performance, fusion-grade plasmas. In one such project, PPPL is working in coordination with MIT’s Plasma Science and Fusion Center (PSFC) and Commonwealth Fusion Systems, a start-up spun out of MIT that is developing a tokamak fusion device called “SPARC.”
    The goal of the project is to predict the leakage of fast “alpha” particles produced during the fusion reactions in SPARC, given the size and potential misalignments of the superconducting magnets that confine the plasma. These particles can create a largely self-heated or “burning plasma” that fuels fusion reactions. Development of burning plasma is a major scientific goal for fusion energy research. However, leakage of alpha particles could slow or halt the production of fusion energy and damage the interior of the SPARC facility.
    New superconducting magnets
    Key features of the SPARC machine include its compact size and powerful magnetic fields enabled by the ability of new superconducting magnets to operate at higher fields and stresses than existing superconducting magnets. These features will enable design and construction of smaller and less-expensive fusion facilities, as described in recent publications by the SPARC team — assuming that the fast alpha particles created in fusion reactions can be contained long enough to keep the plasma hot.
    “Our research indicates that they can be,” said PPPL physicist Gerrit Kramer, who participates in the project through the DOE Innovation Network for Fusion Energy (INFUSE) program. The two-year-old program, for which PPPL physicist Ahmed Diallo serves as deputy director, aims to speed private-sector development of fusion energy through partnerships with national laboratories.

    Well-confined
    “We found that the alpha particles are indeed well confined in the SPARC design,” said Kramer, coauthor of a paper in the Journal of Plasma Physics that reports the findings. He worked closely with the lead author Steven Scott, a consultant to Commonwealth Fusion Systems and former long-time physicist at PPPL.
    Kramer used the SPIRAL computer code developed at PPPL to verify the particle confinement. “The code, which simulates the wavy pattern, or ripples, in a magnetic field that could allow the escape of fast particles, showed good confinement and lack of damage to the SPARC walls,” Kramer said. Moreover, he added, “the SPIRAL code agreed well with the ASCOT code from Finland. While the two codes are completely different, the results were similar.”
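    For context, the “ripples” Kramer describes arise because a tokamak’s toroidal field is produced by a finite number of discrete coils, so the field is slightly stronger directly beneath a coil than midway between coils. The article does not describe SPIRAL’s internals, but a standard measure of this effect is the ripple amplitude δ = (B_max − B_min)/(B_max + B_min); the short sketch below evaluates it for an illustrative coil set. The coil count, field strength and ripple value are assumptions for illustration, not SPARC design parameters.

```python
import math

def toroidal_ripple_field(B0, n_coils, phi, delta0):
    """Toy model of the toroidal field at a fixed major radius.

    B0      -- nominal field strength in tesla (assumed value)
    n_coils -- number of discrete toroidal-field coils (assumed)
    phi     -- toroidal angle in radians
    delta0  -- assumed ripple amplitude at this radius
    """
    # The field is modulated with the periodicity of the coil set.
    return B0 * (1.0 + delta0 * math.cos(n_coils * phi))

# Illustrative numbers only; not SPARC design values.
B0, n_coils, delta0 = 12.0, 18, 0.003
samples = [toroidal_ripple_field(B0, n_coils, 2 * math.pi * k / 360, delta0)
           for k in range(360)]
B_max, B_min = max(samples), min(samples)

# Standard ripple measure: delta = (B_max - B_min) / (B_max + B_min)
print(f"ripple amplitude = {(B_max - B_min) / (B_max + B_min):.4f}")
```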
    The findings gladdened Scott. “It’s gratifying to see the computational validation of our understanding of ripple-induced losses,” he said, “since I studied the issue experimentally back in the early 1980s for my doctoral dissertation.”
    Fusion reactions combine light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei, or ions, that comprises 99 percent of the visible universe — to generate massive amounts of energy. Scientists around the world are seeking to create fusion as a virtually unlimited source of power for generating electricity.
    Key guidance
    Kramer and colleagues noted that misalignment of the SPARC magnets will increase the ripple-induced losses of fusion particles, leading to increased power striking the walls. Their calculations should provide key guidance to the SPARC engineering team about how well the magnets must be aligned to avoid excessive power loss and wall damage. Properly aligned magnets will enable studies of plasma self-heating for the first time and the development of improved techniques for plasma control in future fusion power plants.

  • A pursuit of better testing to sort out the complexities of ADHD

    The introduction of computer simulation to the identification of symptoms in children with attention deficit/hyperactivity disorder (ADHD) has potential to provide an additional objective tool to gauge the presence and severity of behavioral problems, Ohio State University researchers suggest in a new publication.
    Most mental health disorders are diagnosed and treated based on clinical interviews and questionnaires — and, for about a century, data from cognitive tests has been added to the diagnostic process to help clinicians learn more about how and why people behave in a certain way.
    Cognitive testing in ADHD is used to identify a variety of symptoms and deficits, including selective attention, poor working memory, altered time perception, difficulties in maintaining attention and impulsive behavior. In the most common class of performance tests, children are told to either press a computer key or avoid hitting a key when they see a certain word, symbol or other stimulus.
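    As an illustration of what such a performance test yields, the sketch below scores a simulated go/no-go style session using the conventional summary measures: mean response time on correct “go” trials, omission errors (missed go trials) and commission errors (responses to “no-go” stimuli). The trial data are invented for illustration and are not drawn from the studies under review.

```python
# Hypothetical go/no-go session: each trial is (stimulus, responded, rt_seconds).
# "go" trials require a key press; "no-go" trials require withholding the press.
trials = [
    ("go", True, 0.42), ("go", True, 0.55), ("go", False, None),
    ("no-go", False, None), ("go", True, 0.61), ("no-go", True, 0.38),
    ("go", True, 0.47), ("no-go", False, None), ("go", True, 0.52),
]

go_rts = [rt for stim, resp, rt in trials if stim == "go" and resp]
omissions = sum(1 for stim, resp, _ in trials if stim == "go" and not resp)
commissions = sum(1 for stim, resp, _ in trials if stim == "no-go" and resp)

print(f"mean RT on correct go trials: {sum(go_rts) / len(go_rts):.3f} s")
print(f"omission errors:   {omissions}")
print(f"commission errors: {commissions}")
```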
    For ADHD, however, these cognitive tests often don’t capture the complexity of symptoms. The advent of computational psychiatry — comparing a computer-simulated model of normal brain processes to dysfunctional processes observed in tests — could be an important supplement to the diagnostic process for ADHD, the Ohio State researchers report in a new review published in the journal Psychological Bulletin.
    The research team reviewed 50 studies of cognitive tests for ADHD and described how three common types of computational models could supplement these tests.
    It is widely recognized that children with ADHD take longer to make decisions while performing tasks than children who don’t have the disorder, and tests have relied on average response times to explain the difference. But there are intricacies to that dysfunction that a computational model could help pinpoint, providing information clinicians, parents and teachers could use to make life easier for kids with ADHD.

    “We can use models to simulate the decision process and see how decision-making happens over time — and do a better job of figuring out why children with ADHD take longer to make decisions,” said Nadja Ging-Jehli, lead author of the review and a graduate student in psychology at Ohio State.
    Ging-Jehli completed the review with Ohio State faculty members Roger Ratcliff, professor of psychology, and L. Eugene Arnold, professor emeritus of psychiatry and behavioral health.
    The researchers offer recommendations for testing and clinical practice to achieve three principal goals: better characterizing ADHD and any accompanying mental health diagnoses such as anxiety and depression, improving treatment outcomes (about one-third of patients with ADHD do not respond to medical treatment), and potentially predicting which children will “lose” the ADHD diagnosis as adults.
    Decision-making behind the wheel of a car helps illustrate the problem: Drivers know that when a red light turns green, they can go through an intersection — but not everyone hits the gas pedal at the same time. A common cognitive test of this behavior would repeatedly expose drivers to the same red light-green light scenario to arrive at an average reaction time and use that average, and deviations from it, to categorize the typical versus disordered driver.
    This approach has been used to determine that individuals with ADHD are typically slower to “start driving” than those without ADHD. But that determination leaves out a range of possibilities that help explain why they’re slower — they could be distracted, daydreaming, or feeling nervous in a lab setting. The broad distribution of reactions captured by computer modeling could provide more, and useful, information.

    “In our review, we show that this method has multiple problems that prevent us from understanding the underlying characteristics of a mental-health disorder such as ADHD, and that also prevent us from finding the best treatment for different individuals,” Ging-Jehli said. “We can use computational modeling to think about the factors that generate the observed behavior. These factors will broaden our understanding of a disorder, acknowledging that there are different types of individuals who have different deficits that also call for different treatments.
    “We are proposing using the entire distribution of the reaction times, taking into consideration the slowest and the fastest reaction times to distinguish between different types of ADHD.”
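    The review does not prescribe a single model, but one widely used computational account of two-choice response times, associated with coauthor Roger Ratcliff’s work, is the drift-diffusion model, in which noisy evidence accumulates toward a decision boundary. The sketch below simulates such a process and reports the full response-time distribution rather than just its mean; the parameter values are arbitrary illustrations, not estimates from any ADHD study.

```python
import random
import statistics

def simulate_ddm(drift, boundary, noise=1.0, dt=0.001, non_decision=0.3):
    """Simulate one trial of a simple drift-diffusion process.

    Evidence starts midway between 0 and `boundary` and accumulates with
    mean rate `drift` plus Gaussian noise until it crosses either bound.
    Returns (choice, response_time_in_seconds).
    """
    x, t = boundary / 2.0, 0.0
    while 0.0 < x < boundary:
        x += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
        t += dt
    return ("upper" if x >= boundary else "lower", t + non_decision)

random.seed(1)
rts = [simulate_ddm(drift=0.8, boundary=1.2)[1] for _ in range(2000)]

# Summarizing by the mean alone hides the shape of the distribution;
# quantiles preserve information about the fastest and slowest responses.
deciles = statistics.quantiles(rts, n=10)
print(f"mean RT: {statistics.mean(rts):.3f} s")
print("RT deciles:", [round(q, 3) for q in deciles])
```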
    The review also identified a complicating factor for ADHD research going forward — a broader range of externally evident symptoms as well as subtle characteristics that are hard to detect with the most common testing methods. Understanding that children with ADHD have so many biologically based differences suggests that a single task-based test is not sufficient to make a meaningful ADHD diagnosis, the researchers say.
    “ADHD is not only the child who is fidgeting and restless in a chair. It’s also the child who is inattentive because of daydreaming. Even though that child is more introverted and doesn’t express as many symptoms as a child with hyperactivity, that doesn’t mean that child doesn’t suffer,” Ging-Jehli said. Daydreaming is especially common in girls, who are not enrolled in ADHD studies nearly as frequently as boys, she said.
    Ging-Jehli described computational psychiatry as a tool that could also take into account — continuing the analogy — mechanical differences in the car, and how that could influence driver behavior. These dynamics can make it harder to understand ADHD, but also open the door to a broader range of treatment options.
    “We need to account for the different types of drivers and we need to understand the different conditions to which we expose them. Based on only one observation, we cannot make conclusions about diagnosis and treatment options,” she said.
    “However, cognitive testing and computational modeling should not be seen as an attempt to replace existing clinical interviews and questionnaire-based procedures, but as complements that add value by providing new information.”
    According to the researchers, a battery of tasks gauging social and cognitive characteristics, rather than a single task, should be assigned for a diagnosis, and more consistency is needed across studies to ensure the same cognitive tasks are used to assess the appropriate cognitive concepts.
    Finally, combining cognitive testing with physiological tests — especially eye-tracking and EEGs that record electrical activity in the brain — could provide powerful objective and quantifiable data to make a diagnosis more reliable and help clinicians better predict which medicines would be most effective.
    Ging-Jehli is putting these suggestions to the test in her own research, applying a computational model in a study of a specific neurological intervention in children with ADHD.
    “The purpose of our analysis was to show there’s a lack of standardization and so much complexity, and symptoms are hard to measure with existing tools,” Ging-Jehli said. “We need to understand ADHD better for children and adults to have a better quality of life and get the treatment that is most appropriate.”
    This research was supported by the Swiss National Science Foundation and the National Institute on Aging.

  • More effective training model for robots

    Multi-domain operations, the Army’s future operating concept, requires autonomous agents with learning components to operate alongside the warfighter. New Army research reduces the unpredictability of current methods for training reinforcement learning policies so that they are more practically applicable to physical systems, especially ground robots.
    These learning components will permit autonomous agents to reason and adapt to changing battlefield conditions, said Army researcher Dr. Alec Koppel from the U.S. Army Combat Capabilities Development Command, now known as DEVCOM, Army Research Laboratory.
    The underlying adaptation and re-planning mechanism consists of reinforcement learning-based policies. Making these policies efficiently obtainable is critical to making the MDO operating concept a reality, he said.
    According to Koppel, policy gradient methods in reinforcement learning are the foundation for scalable algorithms for continuous spaces, but existing techniques cannot incorporate broader decision-making goals such as risk sensitivity, safety constraints, exploration and divergence to a prior.
    Designing autonomous behaviors when the relationship between dynamics and goals is complex may be addressed with reinforcement learning, which has recently gained attention for solving previously intractable tasks such as the strategy games Go and chess and video games such as Atari and StarCraft II, Koppel said.
    Prevailing practice, unfortunately, demands astronomical sample complexity, such as thousands of years of simulated gameplay, he said. This sample complexity renders many common training mechanisms inapplicable to the data-starved settings required by the MDO context for the Next-Generation Combat Vehicle, or NGCV.

    “To facilitate reinforcement learning for MDO and NGCV, training mechanisms must improve sample efficiency and reliability in continuous spaces,” Koppel said. “Through the generalization of existing policy search schemes to general utilities, we take a step towards breaking existing sample efficiency barriers of prevailing practice in reinforcement learning.”
    Koppel and his research team developed new policy search schemes for general utilities, whose sample complexity is also established. They observed that the resulting policy search schemes reduce the volatility of reward accumulation, yield efficient exploration of unknown domains and provide a mechanism for incorporating prior experience.
    “This research contributes an augmentation of the classical Policy Gradient Theorem in reinforcement learning,” Koppel said. “It presents new policy search schemes for general utilities, whose sample complexity is also established. These innovations are impactful to the U.S. Army through their enabling of reinforcement learning objectives beyond the standard cumulative return, such as risk sensitivity, safety constraints, exploration and divergence to a prior.”
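    For readers unfamiliar with the baseline being generalized: the classical Policy Gradient Theorem concerns the ordinary cumulative-return objective, and the simplest estimator built on it is REINFORCE. The sketch below shows that standard estimator on a toy two-action bandit problem; it is meant only to fix ideas about what a policy search scheme is and does not implement the general-utilities extension described in the Army research. The reward means and learning rate are illustrative assumptions.

```python
import math
import random

# Toy problem: two actions with unknown expected rewards; the policy is a
# softmax over two preferences theta[0] and theta[1].
TRUE_MEANS = [0.2, 0.8]   # assumed reward means, for illustration only
theta = [0.0, 0.0]
lr = 0.1

def policy_probs(theta):
    exps = [math.exp(t) for t in theta]
    z = sum(exps)
    return [e / z for e in exps]

random.seed(0)
for step in range(2000):
    probs = policy_probs(theta)
    action = 0 if random.random() < probs[0] else 1
    reward = random.gauss(TRUE_MEANS[action], 0.1)
    # REINFORCE: for a softmax policy, d(log pi(a)) / d(theta[i]) = 1[i == a] - pi(i).
    for i in range(2):
        grad_log = (1.0 if i == action else 0.0) - probs[i]
        theta[i] += lr * reward * grad_log

print("learned action probabilities:", [round(p, 3) for p in policy_probs(theta)])
```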
    Notably, in the context of ground robots, he said, data is costly to acquire.
    “Reducing the volatility of reward accumulation, ensuring one explores an unknown domain in an efficient manner, or incorporating prior experience, all contribute towards breaking existing sample efficiency barriers of prevailing practice in reinforcement learning by alleviating the amount of random sampling one requires in order to complete policy optimization,” Koppel said.
    The future of this research is very bright, and Koppel has dedicated his efforts towards making his findings applicable to innovative technology for Soldiers on the battlefield.
    “I am optimistic that reinforcement-learning equipped autonomous robots will be able to assist the warfighter in exploration, reconnaissance and risk assessment on the future battlefield,” Koppel said. “Making this vision a reality is essential to what motivates which research problems I dedicate my efforts to.”
    The next step for this research is to incorporate the broader decision-making goals enabled by general utilities in reinforcement learning into multi-agent settings and investigate how interactive settings between reinforcement learning agents give rise to synergistic and antagonistic reasoning among teams.
    According to Koppel, the technology that results from this research will be capable of reasoning under uncertainty in team scenarios.

  • Important milestone in the creation of a quantum computer

    One of the obstacles to progress in the quest for a working quantum computer has been that the working devices that go into a quantum computer and perform the actual calculations, the qubits, have hitherto been made by universities and in small numbers. But in recent years, a pan-European collaboration, in partnership with French microelectronics leader CEA-Leti, has been exploring everyday transistors — present by the billions in all our mobile phones — for their use as qubits. The French company Leti makes giant wafers full of devices, and, after measuring them, researchers at the Niels Bohr Institute, University of Copenhagen, have found these industrially produced devices to be suitable as a qubit platform capable of moving to the second dimension, a significant step toward a working quantum computer. The result is now published in Nature Communications.
    Quantum dots in a two-dimensional array are a leap ahead
    One of the key features of the devices is the two-dimensional array of quantum dots, or more precisely, a two-by-two lattice of quantum dots. “What we have shown is that we can realize single-electron control in every single one of these quantum dots. This is very important for the development of a qubit, because one of the possible ways of making qubits is to use the spin of a single electron. So reaching this goal of controlling the single electrons and doing it in a 2D array of quantum dots was very important for us,” says Fabio Ansaloni, former PhD student, now postdoc at the Center for Quantum Devices, NBI.
    Using electron spins has proven to be advantageous for the implementation of qubits. In fact, their “quiet” nature means spins interact only weakly with the noisy environment, an important requirement for obtaining high-performing qubits.
    Extending quantum computer processors to the second dimension has been shown to be essential for a more efficient implementation of quantum error correction routines. Quantum error correction will enable future quantum computers to be fault-tolerant against individual qubit failures during computations.
    The importance of industry-scale production
    Assistant Professor at the Center for Quantum Devices, NBI, Anasua Chatterjee adds: “The original idea was to make an array of spin qubits, get down to single electrons and become able to control them and move them around. In that sense it is really great that Leti was able to deliver the samples we have used, which in turn made it possible for us to attain this result. A lot of credit goes to the pan-European project consortium, and generous funding from the EU, helping us to slowly move from the level of a single quantum dot with a single electron to having two electrons, and now moving on to two-dimensional arrays. Two-dimensional arrays are a really big goal, because that’s beginning to look like something you absolutely need to build a quantum computer. So Leti has been involved with a series of projects over the years, which have all contributed to this result.”

    The credit for getting this far belongs to many projects across Europe
    The development has been gradual. In 2015, researchers in Grenoble succeeded in making the first spin qubit, but this was based on holes, not electrons. Back then, the performance of the devices made in the “hole regime” was not optimal, and the technology has advanced so that the devices now at NBI can have two-dimensional arrays in the single-electron regime. The progress is threefold, the researchers explain: “First, producing the devices in an industrial foundry is a necessity. The scalability of a modern, industrial process is essential as we start to make bigger arrays, for example for small quantum simulators. Second, when making a quantum computer, you need an array in two dimensions, and you need a way of connecting the external world to each qubit. If you have 4-5 connections for each qubit, you quickly end up with an unrealistic number of wires going out of the low-temperature setup. But what we have managed to show is that we can have one gate per electron, and you can read and control with the same gate. And lastly, using these tools we were able to move and swap single electrons in a controlled way around the array, a challenge in itself.”
    Two-dimensional arrays can control errors
    Controlling errors occurring in the devices is a chapter in itself. The computers we use today produce plenty of errors, but they are corrected through what is called the repetition code. In a conventional computer, you can have information in either a 0 or a 1. In order to be sure that the outcome of a calculation is correct, the computer repeats the calculation, and if one transistor makes an error, it is corrected through simple majority. If the majority of the calculations performed in other transistors point to 1 and not 0, then 1 is chosen as the result. This is not possible in a quantum computer, since you cannot make an exact copy of a qubit, so quantum error correction works in another way: State-of-the-art physical qubits do not have low error rates yet, but if enough of them are combined in the 2D array, they can keep each other in check, so to speak. This is another advantage of the now-realized 2D array.
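    The classical repetition-and-majority idea described above is easy to make concrete. The sketch below encodes a single bit as three copies, flips each copy independently with some error probability, and recovers the value by majority vote. It illustrates only the classical scheme the article contrasts with quantum error correction; the error probability and number of copies are arbitrary choices for illustration.

```python
import random

def transmit_with_repetition(bit, p_flip, copies=3):
    """Encode `bit` as `copies` identical bits, flip each independently with
    probability `p_flip`, then decode by majority vote."""
    received = [bit ^ (1 if random.random() < p_flip else 0) for _ in range(copies)]
    return 1 if sum(received) > copies // 2 else 0

random.seed(0)
p_flip, trials = 0.1, 100_000
errors = sum(transmit_with_repetition(1, p_flip) != 1 for _ in range(trials))

# With p = 0.1, majority voting fails only when 2 or 3 copies flip:
# 3*p^2*(1-p) + p^3 = 0.028, well below the raw error rate of 0.1.
print(f"raw error rate: {p_flip}, after repetition code: {errors / trials:.4f}")
```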
    The next step from this milestone
    The result realized at the Niels Bohr Institute shows that it is now possible to control single electrons and to perform the experiment in the absence of a magnetic field. So the next step will be to look for spins — spin signatures — in the presence of a magnetic field. This will be essential to implement single- and two-qubit gates between the single qubits in the array. Theory has shown that a handful of single- and two-qubit gates, called a complete set of quantum gates, are enough to enable universal quantum computation.

  • Mathematical modeling can help balance economy, health during pandemic

    This summer, when bars and restaurants and stores began to reopen across the United States, people headed out despite the continuing threat of COVID-19.
    As a result, many areas, including the St. Louis region, saw increases in cases in July.
    Using mathematical modeling, new interdisciplinary research from the lab of Arye Nehorai, the Eugene & Martha Lohman Professor of Electrical Engineering in the Preston M. Green Department of Electrical & Systems Engineering at Washington University in St. Louis, determines the best course of action when it comes to walking the line between economic stability and the best possible health outcomes.
    The group — which also includes David Schwartzman, a business economics PhD candidate at Olin Business School, and Uri Goldsztejn, a PhD candidate in biomedical engineering at the McKelvey School of Engineering — published their findings Dec. 22 in PLOS ONE.
    The model indicates that of the scenarios they consider, communities could maximize economic productivity and minimize disease transmission if, until a vaccine were readily available, seniors mostly remained at home while younger people gradually returned to the workforce.
    “We have developed a predictive model for COVID-19 that considers, for the first time, its intercoupled effect on both economic and health outcomes for different quarantine policies,” Nehorai said. “You can have an optimal quarantine policy that minimizes the effect both on health and on the economy.”
    The work was an expanded version of a Susceptible, Exposed, Infectious, Recovered (SEIR) model, a commonly used mathematical tool for predicting the spread of infections. This dynamic model allows for people to be moved between groups known as compartments, and for each compartment to influence the other in turn.

    At their most basic, these models divide the population into four compartments: those who are susceptible, exposed, infectious and recovered. In an innovation to this traditional model, Nehorai’s team also included infected but asymptomatic people, taking into account the most up-to-date understanding of how transmission may differ between asymptomatic and symptomatic people, as well as how their behaviors might differ. This turned out to be highly influential in the model’s outcomes.
    People were then divided into different “sub-compartments,” for example by age (seniors are those older than 60) or by productivity. Productivity was a measure of a person’s ability to work from home under quarantine measures; the researchers used college degrees as a proxy for who could continue to work during a period of quarantine.
    Then they got to work, developing equations which modeled the ways in which people moved from one compartment to another. Movement was affected by policy as well as the decisions an individual made.
    Interestingly, the model included a dynamic mortality rate — one that shrank over time. “We had a mortality rate that accounted for improvements in medical knowledge over time,” said Goldsztejn. “And we see that now; mortality rates have gone down.”
    “For example,” Goldsztejn said, “if the economy is decreasing, there is more incentive to leave quarantine,” which might show up in the model as people moving from the isolated compartment to the susceptible compartment. On the other hand, moving from infectious to recovered was based less on a person’s actions and is better determined by recovery and mortality rates.
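    To make the modeling approach concrete, the sketch below integrates a bare-bones SEIR-style model with an added asymptomatic-infectious compartment and a mortality rate that decays over time, in the spirit of the features described above. The compartment structure, parameter values and time step are illustrative assumptions, not the published model from the PLOS ONE paper.

```python
import math

# Minimal SEIR-type sketch with an asymptomatic compartment (A) and a
# time-decaying infection fatality rate. All parameter values are assumed.
N = 1_000_000
S, E, I, A, R, D = N - 10.0, 10.0, 0.0, 0.0, 0.0, 0.0

beta_I, beta_A = 0.25, 0.15   # transmission rates (symptomatic, asymptomatic)
sigma = 1 / 5.0               # 1 / mean latent period in days
p_asym = 0.4                  # fraction of cases that remain asymptomatic
gamma = 1 / 10.0              # recovery rate

for day in range(76 * 7):     # 76-week horizon, matching the scenarios' timeline
    ifr = 0.01 * (0.5 + 0.5 * math.exp(-day / 180))  # mortality improves over time
    new_exposed = (beta_I * I + beta_A * A) * S / N
    new_infectious = sigma * E
    deaths_today = ifr * gamma * I
    S -= new_exposed
    E += new_exposed - new_infectious
    A += p_asym * new_infectious - gamma * A
    I += (1 - p_asym) * new_infectious - gamma * I
    R += gamma * A + (1 - ifr) * gamma * I
    D += deaths_today

print(f"simulated deaths after 76 weeks: {D:,.0f} (toy parameters only)")
```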

    The team looked at three scenarios, according to Schwartzman. In all three scenarios, the given timeline was 76 weeks — at which time it was assumed a vaccine would be available — and seniors remained mostly quarantined until then:
    1. Strict isolation measures were maintained throughout.
    2. After the curve was flattened, isolation measures were rapidly relaxed and younger people returned to normal movement.
    3. After the curve was flattened, isolation measures were slowly lifted for younger people.
    “The third scenario was the best in terms of economic damage and health outcomes,” he said, “because in the rapid relaxation scenario, there was another wave of disease spread and restrictions would be reinstated.”
    Specifically, they found that in the first scenario there are 235,724 deaths and the economy shrinks by 34%.
    In the second scenario, where there was a rapid relaxation of isolation measures, a second outbreak occurs for a total of 525,558 deaths, and the economy shrinks by 32.2%.
    With a gradual relaxation, as in the third scenario, there are 262,917 deaths, and the economy shrinks by 29.8%.
    “We wanted to show there is a tradeoff,” Nehorai said. “And we wanted to find, mathematically, where is the sweet spot?” As with so many things, the “sweet spot” was not at either extreme — total lockdown or carrying on as if there was no virus.
    Another key finding was one no one should be surprised to hear: “People’s sensitivity to contagiousness is related to the precautions they take,” Nehorai said. “It’s still critical to use precautions — masks, social distancing, avoiding crowds and washing hands.”

  • Plastic drinking water pipes exposed to high heat can leak hazardous chemicals

    In August, a massive wildfire tore through the San Lorenzo Valley north of Santa Cruz, Calif., destroying almost 1,500 structures and exposing many others to extreme heat. Before the fire was even out, lab tests revealed benzene levels as high as 9.1 parts per billion in residential water samples — nine times higher than the state’s maximum safety level.
    This isn’t the first time the carcinogen has followed wildfires: California water managers found unsafe levels of benzene and other volatile organic compounds, or VOCs, in Santa Rosa after the Tubbs Fire in 2017, and in Paradise after the Camp Fire in 2018.
    Scientists suspected that, among other possibilities, plastic drinking water pipes exposed to extreme heat released the chemicals (SN: 11/13/20). Now, lab experiments show that’s possible.  
    Andrew Whelton, an environmental engineer at Purdue University in West Lafayette, Ind., and colleagues subjected commonly available pipes to temperatures from 200° to 400° Celsius. Those temperatures, hot enough to damage but not destroy pipes, can occur as heat radiates from nearby flames, Whelton says.
    A plastic water pipe (left) and meter box (right) recovered from homes in Paradise, Calif., after the Camp Fire scorched the community in 2018 reveal the degree to which plastics can melt when exposed to high temperatures. (Image: Andrew Whelton/Purdue University, CC-BY-ND)
    When the researchers then submerged the pipes in water and cooled them, varying amounts of benzene and VOCs — more than 100 chemicals in some tests — leached from 10 of the 11 types of pipe into the water, the team reports December 14 in Environmental Science: Water Research & Technology.
    “Some contamination for the past fires likely originated from thermally damaged plastics,” says Whelton. It’s impossible to do experiments in the midst of a raging fire to pinpoint the exact source of the contamination, he says, but inspecting damaged pipes after the fact can suggest what temperatures they may have experienced.
    Benzene exposure can cause immediate health problems, including skin and throat irritation and dizziness, as well as longer-term effects such as leukemia. The team suggests testing drinking water if fire comes anywhere near your property and, if possible, replacing any plastic in a home’s water system with heat-resistant metal.

  • Quantum wave in helium dimer filmed for the first time

    Anyone entering the world of quantum physics must prepare themselves for quite a few things unknown in the everyday world: Noble gases form compounds, atoms behave like particles and waves at the same time, and events that in the macroscopic world exclude each other occur simultaneously.
    In the world of quantum physics, Reinhard Dörner and his team are working with molecules which — in the sense of most textbooks — ought not to exist: helium compounds with two atoms, known as helium dimers. Helium is called a noble gas precisely because it does not form any compounds. However, if the gas is cooled down to just 10 degrees above absolute zero (minus 273 °C) and then pumped through a small nozzle into a vacuum chamber, which makes it even colder, then — very rarely — such helium dimers form. These are by far the most weakly bound stable molecules in the universe, and the two atoms in the molecule are correspondingly extremely far apart from each other. While a chemical bond between two atoms commonly measures about 1 angstrom (0.1 nanometres), helium dimers on average measure 50 times as much, i.e., 52 angstroms.
    The scientists in Frankfurt irradiated such helium dimers with an extremely powerful laser flash, which slightly twisted the bond between the two helium atoms. This was enough to make the two atoms fly apart. They then saw — for the very first time — the helium atom flying away as a wave and recorded it on film.
    According to quantum physics, objects behave like a particle and a wave at the same time, something that is best known from light particles (photons), which on the one hand superimpose like waves, where they can pile up or cancel each other out (interference), but on the other hand, as a “solar wind,” can propel spacecraft via their solar sails, for example.
    That the researchers were able to observe and film the helium atom flying away as a wave at all in their laser experiment was due to the fact that the helium atom only flew away with a certain probability: with 98 per cent probability it was still bound to its second helium partner, and with 2 per cent probability it flew away. These two helium atom waves — here it comes, quantum physics! — superimposed, and their interference could be measured.
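    As a purely illustrative calculation, not the analysis from the Frankfurt experiment, the sketch below superimposes two one-dimensional matter waves whose probability weights match the 98 per cent bound and 2 per cent escaping contributions quoted above, and shows that the combined probability density oscillates, which is the interference signature being measured. The wave numbers and spatial grid are arbitrary assumptions.

```python
import cmath

# Two outgoing matter waves with probability weights 0.98 and 0.02;
# the amplitudes are the square roots of the probabilities.
a, b = 0.98 ** 0.5, 0.02 ** 0.5
k1, k2 = 5.0, 6.0          # arbitrary wave numbers (assumed, in 1/length units)

def density(x):
    """Probability density of the superposition a*e^{i k1 x} + b*e^{i k2 x}."""
    psi = a * cmath.exp(1j * k1 * x) + b * cmath.exp(1j * k2 * x)
    return abs(psi) ** 2

samples = [density(0.05 * n) for n in range(200)]
# The density oscillates between (a - b)^2 and (a + b)^2 instead of staying
# flat, which is the interference pattern the experiment records.
print(f"min density: {min(samples):.3f}, max density: {max(samples):.3f}")
print(f"expected range: {(a - b) ** 2:.3f} to {(a + b) ** 2:.3f}")
```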
    The measurement of such “quantum waves” can be extended to quantum systems with several partners, such as the helium trimer composed of three helium atoms. The helium trimer is interesting because it can form what is referred to as an “exotic Efimov state,” says Maksim Kunitski, first author of the study: “Such three-particle systems were predicted by Russian theorist Vitaly Efimov in 1970 and first corroborated on caesium atoms. Five years ago, we discovered the Efimov state in the helium trimer. The laser pulse irradiation method we’ve now developed might allow us in future to observe the formation and decay of Efimov systems and thus better understand quantum physical systems that are difficult to access experimentally.”

    Story Source:
    Materials provided by Goethe University Frankfurt.