More stories

  • Income inequality can harm children’s achievement in maths – but not reading, 27-year study suggests

    Inequalities in income affect how well children do in maths — but not reading, the most comprehensive study of its kind has found.
    The analysis, published in the journal Educational Review and drawing on data stretching from 1992 to 2019, revealed that 10-year-olds in US states with bigger gaps in income did less well in maths than those living in states where earnings were more evenly distributed.
    With income inequality in the US the highest in the developed world, researcher Professor Joseph Workman argues that addressing social inequality may do more to boost academic achievement than reforming schools or curricula — favoured methods of policymakers.
    Income inequality — a measure of how unevenly income is distributed through a population — has long been associated with a host of health and social problems including mental health issues, lack of trust, higher rates of imprisonment and lower rates of social mobility.
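    As a minimal illustration of what such a measure computes, the sketch below calculates the Gini coefficient, one widely used inequality index; the study may rely on a different statistic, and the incomes here are invented for the example.

```python
def gini(incomes):
    """Gini coefficient: 0 means perfectly equal, 1 maximally unequal.

    Computed as the mean absolute difference between all pairs of
    incomes, normalised by twice the mean income.
    """
    n = len(incomes)
    mean = sum(incomes) / n
    # Mean absolute difference over all ordered pairs of incomes.
    mad = sum(abs(a - b) for a in incomes for b in incomes) / (n * n)
    return mad / (2 * mean)

print(gini([40_000] * 5))  # 0.0: a perfectly even distribution
print(round(gini([20_000, 25_000, 30_000, 60_000, 250_000]), 3))  # 0.514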
    Income inequality may also affect academic achievement through various routes.
    For instance, income inequality is linked to higher rates of divorce, substance abuse and child maltreatment, the stresses of which may affect a child’s development. It is also associated with higher odds of babies being of a low weight at birth — something which can raise their risk of developmental delays as they grow up.

  • Making self-driving cars human-friendly

    Automated vehicles could be made more pedestrian-friendly thanks to new research which could help them predict when people will cross the road.
    University of Leeds-led scientists investigating how to better understand human behaviour in traffic say that neuroscientific theories of how the brain makes decisions can be used in automated vehicle technology to improve safety and make the vehicles more human-friendly.
    The researchers set out to determine whether a decision-making model called drift diffusion could predict when pedestrians would cross a road in front of approaching cars, and whether it could be used in scenarios where the car gives way to the pedestrian, either with or without explicit signals. This prediction capability will allow the autonomous vehicle to communicate more effectively with pedestrians, in terms of its movements in traffic and any external signals such as flashing lights, to maximise traffic flow and decrease uncertainty.
    Drift diffusion models assume that people reach a decision by accumulating noisy sensory evidence over time until it crosses a threshold, at which point the decision is made.
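    In computational terms, the model is a biased random walk toward a decision boundary. A minimal simulation of that idea is sketched below; the parameter values are invented for illustration, and the published model additionally folds in cues such as vehicle distance, speed and headlight flashes.

```python
import random

def drift_diffusion(drift=0.4, noise=1.0, threshold=2.0, dt=0.01, max_t=10.0):
    """Accumulate noisy evidence until it crosses a decision threshold.

    Returns the decision time in seconds, or None if no decision is
    reached within max_t. A larger drift means stronger evidence and,
    on average, faster decisions.
    """
    evidence, t = 0.0, 0.0
    while t < max_t:
        # Each time step adds the mean drift plus Gaussian noise
        # (an Euler-Maruyama step of the underlying diffusion process).
        evidence += drift * dt + random.gauss(0.0, noise) * dt ** 0.5
        t += dt
        if abs(evidence) >= threshold:
            return t  # threshold crossed: the pedestrian commits
    return None

decided = [t for t in (drift_diffusion() for _ in range(1000)) if t is not None]
print(f"mean decision time: {sum(decided) / len(decided):.2f} s")
```

    Averaging over many simulated pedestrians yields a distribution of crossing-decision times that can be compared against observed behaviour.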
    Professor Gustav Markkula, from the University of Leeds’ Institute for Transport Studies and the senior author of the study, said: “When making the decision to cross, pedestrians seem to be adding up lots of different sources of evidence, not only relating to the vehicle’s distance and speed, but also using communicative cues from the vehicle in terms of deceleration and headlight flashes.
    “When a vehicle is giving way, pedestrians will often feel quite uncertain about whether the car is actually yielding, and will often end up waiting until the car has almost come to a full stop before starting to cross. Our model clearly shows this state of uncertainty borne out, meaning it can be used to help design how automated vehicles behave around pedestrians in order to limit uncertainty, which in turn can improve both traffic safety and traffic flow.”

  • Calculating the path of cancer

    Biologists at Cold Spring Harbor Laboratory (CSHL) are using a mathematical approach developed in CSHL Assistant Professor David McCandlish’s lab to find solutions to a diverse set of biological problems. Originally created as a way to understand interactions between different mutations in proteins, the tool is now being used by McCandlish and his collaborators to learn about the complexities of gene expression and the chromosomal mutations associated with cancer. McCandlish says:
    “This is one of the things that’s really fascinating about mathematical research: sometimes you can see connections between topics which on the surface seem so different, but at a mathematical level they might be using some of the same technical ideas.”
    All of these questions involve mapping the likelihood of different variations on a biological theme: which combinations of mutations are most likely to arise in a particular protein, for example, or which chromosome mutations are most often found together in the same cancer cell. McCandlish explains that these are problems of density estimation — a statistical tool that predicts how often an event happens. Density estimation can be relatively straightforward, such as charting different heights within a group of people. But when dealing with complex biological sequences, such as the hundreds or thousands of amino acids strung together to build a protein, predicting the probability of each potential sequence becomes astonishingly complex.
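    For the simple one-dimensional case, such as the heights example, density estimation can be sketched in a few lines. The snippet below uses a plain Gaussian kernel density estimator with made-up data, far simpler than the methods the lab applies to sequence space.

```python
import math

def gaussian_kde(samples, bandwidth=5.0):
    """Return a function estimating probability density from samples."""
    n = len(samples)
    norm = n * bandwidth * math.sqrt(2 * math.pi)

    def density(x):
        # Place a small Gaussian bump on each observation and sum them.
        return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                   for s in samples) / norm

    return density

heights_cm = [152, 160, 165, 168, 170, 171, 175, 178, 183, 190]
density = gaussian_kde(heights_cm)
print(f"estimated density at 170 cm: {density(170):.4f}")
```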
    McCandlish explains the fundamental problem his team is using math to address: “Sometimes if you make, say one mutation to a protein sequence, it doesn’t do anything. The protein works fine. And if you make a second mutation, it still works fine, but then if you put the two of them together, now you’ve got a broken protein. We’ve been trying to come up with methods to model not just interactions between pairs of mutations, but between three or four or any number of mutations.”
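    The situation McCandlish describes is known as epistasis, and it is exactly what purely additive models miss. A toy illustration, with entirely hypothetical fitness values:

```python
# Hypothetical protein "fitness" (1.0 = fully functional) per genotype.
fitness = {
    (): 1.0,          # wild type
    ("A",): 1.0,      # mutation A alone: protein still works
    ("B",): 1.0,      # mutation B alone: protein still works
    ("A", "B"): 0.0,  # both together: broken protein
}

# An additive model predicts the double mutant from single-mutant effects.
effect_a = fitness[("A",)] - fitness[()]  # 0.0
effect_b = fitness[("B",)] - fitness[()]  # 0.0
predicted = fitness[()] + effect_a + effect_b
print(predicted, "predicted vs", fitness[("A", "B")], "observed")  # 1.0 vs 0.0
```

    No assignment of independent effects to A and B can reproduce this table, which is why the methods explicitly model interactions of arbitrary order.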
    The methods they have developed can be used to interpret data from experiments that measure how hundreds of thousands of different combinations of mutations impact the function of a protein.
    This study, reported in the Proceedings of the National Academy of Sciences, began with conversations with two other CSHL colleagues: CSHL Fellow Jason Sheltzer and Associate Professor Justin Kinney. They worked with McCandlish to apply his methods to gene expression and the evolution of cancer mutations. Software released by McCandlish’s team will enable other researchers to use these same approaches in their own work. He says he hopes it will be applied to a variety of biological problems.
    Story Source:
    Materials provided by Cold Spring Harbor Laboratory. Original written by Jennifer Michalowski.

  • Precious metals from electronic waste in seconds

    In what should be a win-win-win for the environment, a process developed at Rice University to extract valuable metals from electronic waste would also use up to 500 times less energy than current lab methods and produce a byproduct clean enough for agricultural land.
    The flash Joule heating method introduced last year to produce graphene from carbon sources like waste food and plastic has been adapted to recover rhodium, palladium, gold and silver for reuse.
    A report in Nature Communications by the Rice lab of chemist James Tour also shows highly toxic heavy metals including chromium, arsenic, cadmium, mercury and lead are removed from the flashed materials, leaving a byproduct with minimal metal content.
    Instantly heating the waste to 3,400 Kelvin (5,660 degrees Fahrenheit) with a jolt of electricity vaporizes the precious metals, and the gases are vented away for separation, storage or disposal. Tour said that with more than 40 million tons of e-waste produced globally every year, there is plenty of potential for “urban mining.”
    “Here, the largest growing source of waste becomes a treasure,” Tour said. “This will curtail the need to go all over the world to mine from ores in remote and dangerous places, stripping the Earth’s surface and using gobs of water resources. The treasure is in our dumpsters.”
    He noted that an increasingly rapid turnover of personal devices like cell phones has driven the worldwide rise of electronic waste, with only about 20% of it currently being recycled.

  • Induced flaws in quantum materials could enhance superconducting properties

    In a surprising discovery, an international team of researchers, led by scientists in the University of Minnesota Center for Quantum Materials, found that deformations in quantum materials that cause imperfections in the crystal structure can actually improve the material’s superconducting and electrical properties.
    The groundbreaking findings could provide new insight for developing the next generation of quantum-based computing and electronic devices.
    The research just appeared in Nature Materials, a peer-reviewed scientific journal published by Nature Publishing Group.
    “Quantum materials have unusual magnetic and electrical properties that, if understood and controlled, could revolutionize virtually every aspect of society and enable highly energy-efficient electrical systems and faster, more accurate electronic devices,” said study co-author Martin Greven, a Distinguished McKnight Professor in the University of Minnesota’s School of Physics and Astronomy and the Director of the Center for Quantum Materials. “The ability to tune and modify the properties of quantum materials is pivotal to advances in both fundamental research and modern technology.”
    Elastic deformation of materials occurs when the material is subjected to stress but returns to its original shape once the stress is removed. In contrast, plastic deformation is the non-reversible change of a material’s shape in response to an applied stress — or, more simply, the act of squeezing or stretching it until it loses its shape. Plastic deformation has been used by blacksmiths and engineers for thousands of years. An example of a material with a large plastic deformation range is wet chewing gum, which can be stretched to dozens of times its original length.
    While elastic deformation has been extensively used to study and manipulate quantum materials, the effects of plastic deformation have not yet been explored. In fact, conventional wisdom would lead scientists to believe that “squeezing” or “stretching” quantum materials may remove their most intriguing properties.

  • How apples get their shapes

    Apples are among the oldest and most recognizable fruits in the world. But have you ever really considered an apple’s shape? Apples are relatively spherical except for that characteristic dimple at the top where the stem grows.
    How do apples grow that distinctive shape?
    Now, a team of mathematicians and physicists has used observations, lab experiments, theory and computation to understand the growth and form of the cusp of an apple.
    The paper is published in Nature Physics.
    “Biological shapes are often organized by the presence of structures that serve as focal points,” said L Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology, and of Physics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and senior author of the study. “These focal points can sometimes take the form of singularities where deformations are localized. A ubiquitous example is seen in the cusp of an apple, the inward dimple where the stalk meets the fruit.”
    Mahadevan had already developed a simple theory to explain the form and growth of apples, but the project began to bear fruit when the researchers were able to connect observations of real apples at different growth stages, and gel experiments mimicking that growth, with theory and computations.

  • Foundational step shows quantum computers can be better than the sum of their parts

    Pobody’s nerfect — not even the indifferent, calculating bits that are the foundation of computers. But JQI Fellow Christopher Monroe’s group, together with colleagues from Duke University, has made progress toward ensuring we can trust the results of quantum computers even when they are built from pieces that sometimes fail. They have shown in an experiment, for the first time, that an assembly of quantum computing pieces can be better than the worst parts used to make it. In a paper published in the journal Nature on Oct. 4, 2021, the team shared how they took this landmark step toward reliable, practical quantum computers.
    In their experiment, the researchers combined several qubits — the quantum version of bits — so that they functioned together as a single unit called a logical qubit. They created the logical qubit based on a quantum error correction code so that, unlike the individual physical qubits, errors can be easily detected and corrected, and they made it fault-tolerant — capable of containing errors to minimize their negative effects.
    “Qubits composed of identical atomic ions are natively very clean by themselves,” says Monroe, who is also a Fellow of the Joint Center for Quantum Information and Computer Science and a College Park Professor in the Department of Physics at the University of Maryland. “However, at some point, when many qubits and operations are required, errors must be reduced further, and it is simpler to add more qubits and encode information differently. The beauty of error correction codes for atomic ions is they can be very efficient and can be flexibly switched on through software controls.”
    This is the first time that a logical qubit has been shown to be more reliable than the most error-prone step required to make it. The team was able to successfully put the logical qubit into its starting state and measure it 99.4% of the time, despite relying on six quantum operations that are individually expected to work only about 98.9% of the time.
    That might not sound like a big difference, but it’s a crucial step in the quest to build much larger quantum computers. If the six quantum operations were assembly line workers, each focused on one task, the assembly line would only produce the correct initial state 93.6% of the time (98.9% multiplied by itself six times) — roughly ten times worse than the error measured in the experiment. That improvement is because in the experiment the imperfect pieces work together to minimize the chance of quantum errors compounding and ruining the result, similar to watchful workers catching each other’s mistakes.
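    The arithmetic behind that comparison is worth a quick check:

```python
# Six independent operations, each succeeding 98.9% of the time, with
# no error correction: all six must succeed.
p_step = 0.989
p_uncorrected = p_step ** 6
print(f"uncorrected success rate: {p_uncorrected:.1%}")  # ~93.6%

# Compare error rates: ~6.4% uncorrected versus the 0.6% measured with
# the fault-tolerant logical qubit, roughly a tenfold improvement.
print(f"error ratio: {(1 - p_uncorrected) / (1 - 0.994):.1f}x")
```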
    The results were achieved using Monroe’s ion-trap system at UMD, which uses up to 32 individual charged atoms — ions — that are cooled with lasers and suspended over electrodes on a chip. The researchers then use each ion as a qubit by manipulating it with lasers.

  • Scientists are one step closer to error-correcting quantum computers

    Mistakes happen — especially in quantum computers. The fragile quantum bits, or qubits, that make up the machines are notoriously error-prone, but now scientists have shown that they can fix the flubs.

    Computers that harness the rules of quantum mechanics show promise for making calculations far out of reach for standard computers (SN: 6/29/17). But without a mechanism for fixing the computers’ mistakes, the answers that a quantum computer spits out could be gobbledygook (SN: 6/22/20).

    Combining the power of multiple qubits into one can solve the error woes, researchers report October 4 in Nature. Scientists used nine qubits to make a single, improved qubit called a logical qubit, which, unlike the individual qubits from which it was made, can be probed to check for mistakes.

    “This is a key demonstration on the path to build a large-scale quantum computer,” says quantum physicist Winfried Hensinger of the University of Sussex in Brighton, England, who was not involved in the new study.

    Still, that path remains a long one, Hensinger says. To do complex calculations, scientists will have to dramatically scale up the number of qubits in the machines. But now that scientists have shown that they can keep errors under control, he says, “there’s nothing fundamentally stopping us from building a useful quantum computer.”

    In a logical qubit, information is stored redundantly. That allows researchers to check and fix mistakes in the data. “If a piece of it goes missing, you can reconstruct it from the other pieces, like Voldemort,” says quantum physicist David Schuster of the University of Chicago, who was not involved with the new research. (The Harry Potter villain kept his soul safe by concealing it in multiple objects called Horcruxes.)

    In the new study, four additional auxiliary qubits interfaced with the logical qubit to identify errors in its data. Future quantum computers could make calculations using logical qubits in place of the original, faulty qubits, repeatedly checking and fixing any errors that crop up.

    To make their logical qubit, the researchers used a technique called a Bacon-Shor code, applying it to qubits made of ytterbium ions hovering above an ion-trapping chip inside a vacuum and manipulated with lasers. The researchers also designed their sequences of operations so that errors don’t multiply uncontrollably, a property known as “fault tolerance.”
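    The flavor of that redundancy can be captured with the simplest classical analogue, a three-bit repetition code. This is only a cartoon of the idea, since the nine-qubit Bacon-Shor code must also guard against quantum phase errors.

```python
import random

def encode(bit):
    """Store one logical bit redundantly across three physical bits."""
    return [bit, bit, bit]

def add_noise(bits, flip_prob=0.1):
    """Flip each physical bit independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def decode(bits):
    """Majority vote recovers the logical bit if at most one bit flipped."""
    return int(sum(bits) >= 2)

trials = 100_000
failures = sum(decode(add_noise(encode(0))) != 0 for _ in range(trials))
# With p = 0.1 per bit, theory gives a logical error rate of
# 3*p^2*(1-p) + p^3 = 2.8%, well below the 10% physical rate.
print(f"logical error rate: {failures / trials:.3%}")
```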

    Thanks to those efforts, the new logical qubit had a lower error rate than that of the most flawed components that made it up, says quantum physicist Christopher Monroe of the University of Maryland in College Park and Duke University.

    However, the team didn’t quite complete the full process envisioned for error correction. While the computer detected the errors that arose, the researchers didn’t correct the mistakes and continue on with computation. Instead, they fixed errors after the computer was finished. In a full-fledged example, scientists would detect and correct errors multiple times on the fly.

    Demonstrating quantum error correction is a necessity for building useful quantum computers. “It’s like achieving criticality with [nuclear] fission,” Schuster says. Once that nuclear science barrier was passed in 1942, it led to technologies like nuclear power and atomic bombs (SN: 11/29/17).

    As quantum computers gradually draw closer to practical usefulness, companies are investing in the devices. Technology companies such as IBM, Google and Intel host major quantum computing endeavors. On October 1, a quantum computing company cofounded by Monroe, called IonQ, went public; Monroe spoke to Science News while on a road trip to ring the opening bell at the New York Stock Exchange.

    The new result suggests that full-fledged quantum error correction is almost here, says coauthor Kenneth Brown, a quantum engineer also at Duke University. “It really shows that we can get all the pieces together and do all the steps.”