More stories

  •

    The dark matter mystery deepens with the demise of a reported detection

    First of two parts

    In mystery stories, the chief suspect almost always gets exonerated before the end of the book, typically because a key piece of evidence turned out to be wrong.

    In science, key evidence is supposed to be right. But sometimes it’s not. In the mystery of the invisible “dark matter” in space, evidence implicating one chief suspect has now been directly debunked. WIMPs, tiny particles widely regarded as prime dark matter candidates, have failed to appear in an experiment designed specifically to test the lone previous study claiming to detect them.

    For decades, physicists have realized that most of the universe’s matter is nothing like earthly matter, which is made mostly from protons and neutrons. Gravitational influences on visible matter (stars and galaxies) indicate that some dark stuff of unknown identity pervades the cosmos. Ordinary matter accounts for less than 20 percent of the cosmic matter abundance.

    For unrelated reasons, theorists have also long suggested that nature possesses mysterious types of tiny particles predicted by a mathematical framework known as supersymmetry, or SUSY for short. Those particles would be massive by subatomic standards but would interact only weakly with other matter, and so are known as Weakly Interacting Massive Particles, hence WIMPs.

    Of the many possible species of WIMPs, one (presumably the lightest one) should have the properties necessary to explain the dark matter messing with the motion of stars and galaxies (SN: 12/27/12). Way back in the last century, searches began for WIMPs in an effort to demonstrate their existence and identify which species made up the dark matter.

    In 1998, one research team announced apparent success. An experiment called DAMA (for DArk MAtter, get it?), consisting of a particle detector buried under the Italian Alps, seemingly did detect particles with properties matching some physicists’ expectations for a dark matter signal.

    It was a tricky experiment to perform, relying on the premise that space is full of swarms of WIMPs. A detector containing chunks of sodium iodide should give off a flash of light when hit by a WIMP. But other particles from natural radioactive substances would also produce flashes of light even if WIMPs are a myth.

    So the experimenters adopted a clever suggestion proposed earlier by physicists Katherine Freese, David Spergel and Andrzej Drukier, known formally as an annual modulation test. But let’s just call it the June-December approach.

    As the Earth orbits the sun, the sun also moves, traveling around the Milky Way galaxy, carried by a spiral arm in the direction of the constellation Cygnus. If the galaxy really is full of WIMPs, the sun should be constantly plowing through them, generating a “WIMP wind.” (It’s like the wind you feel if you stick your head out of the window of a moving car.) In June, the Earth’s orbit moves it in the same direction as the sun’s motion around the galaxy — into the wind. But in December, the Earth moves the opposite direction, away from the wind. So more WIMPs should be striking the Earth in June than in December. It’s just like the way your car windshield smashes into more raindrops when driving forward than when going in reverse.

    As the sun moves through space, it should collide with dark matter particles called WIMPs, if they exist. When the Earth’s revolution carries it in the same direction as the sun, in summer, the resulting “WIMP wind” should appear stronger, with more WIMP collisions detected in June than in December. (Image: GEOATLAS/GRAPHI-OGRE, adapted by T. Dubé)

    At an astrophysics conference in Paris in December 1998, Pierluigi Belli of the DAMA team reported a clear signal (or at least a strong hint) that more particles arrived in June than December. (More precisely, the results showed an annual modulation in frequency of light flashes, peaking around June with a minimum in December.) The DAMA data indicated a WIMP weighing in at 59 billion electron volts, roughly 60 times the mass of a proton.
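The modulation DAMA reported is conventionally fit with a one-parameter cosine riding on a constant background rate. A minimal sketch of that model, with illustrative placeholder numbers rather than DAMA's fitted values:

```python
import math

# Standard annual-modulation form for the expected event rate:
#     R(t) = R0 + Sm * cos(2*pi * (t - t0) / T)
# T is one year, and t0 falls in early June, when Earth's orbital velocity
# adds most strongly to the sun's motion through the galactic halo.
# R0 and Sm are illustrative placeholders, not DAMA's fitted values.

T = 365.25    # modulation period, days
t0 = 152.5    # phase, day of year (~June 2)
R0 = 1.0      # average event rate (arbitrary units)
Sm = 0.02     # modulation amplitude (arbitrary units)

def rate(t):
    """Expected event rate on day t of the year."""
    return R0 + Sm * math.cos(2 * math.pi * (t - t0) / T)

june, december = rate(t0), rate(t0 + T / 2)   # maximum vs. minimum
```

The experimental question is then whether the fitted Sm differs significantly from zero, and whether the fitted phase really peaks near June.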

    But some experts had concerns about the DAMA team’s data analysis. And other searches for WIMPs, with different detectors and strategies, should have found WIMPs if DAMA was right — but didn’t. Still, DAMA persisted. An advanced version of the experiment, DAMA/LIBRA, continued to find the June-December disparity.

    Perhaps DAMA was more sensitive to WIMPs than other experiments. After all, the other searches did not duplicate DAMA’s methods. Some used substances other than sodium iodide as a detecting material, or watched for slight temperature increases as a sign of a WIMP collision rather than flashes of light.

    For that matter, WIMPs might not be what theorists originally thought. DAMA’s initial report of a roughly 60-proton-mass WIMP assumed that the WIMPs were colliding with iodine atoms. But later data suggested that perhaps the WIMPs were hitting sodium atoms, implying a much lighter WIMP mass — lighter than other experiments had been optimally designed to detect. Yet another possibility: Maybe trace amounts of the metallic element thallium (much heavier atoms than either iodine or sodium) had been the WIMP targets. But a recent review of that proposal found once again that the DAMA results could not be reconciled with the absence of a signal in other experiments.

    And now DAMA’s hope for vindication has been further dashed by a new underground experiment, this one in Spain. Scientists with the ANAIS collaboration have repeated the June-December test with sodium iodide detectors, in an effort to reproduce DAMA’s results with the same method and materials. After three years of operation, the ANAIS team reports no sign of WIMPs.

    To be fair, the no-WIMP conclusion relies on a lot of seriously sophisticated technical analysis. It’s not just a matter of counting light flashes. You have to collect rigorous data on the behavior of nine different sodium iodide modules. You have to correct for the presence of rare radioactive isotopes generated by cosmic ray collisions while the modules were still under construction. And then the statistical analysis needed to discern a winter-summer signal difference is not something you should try at home (unless you’re fully versed in things like the least-squares periodogram or the Lomb-Scargle technique). Plus, ANAIS is still going, with plans to collect two more years of data before issuing a final analysis. So the judgment on DAMA’s WIMPs is not necessarily final.
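To make the statistical idea concrete, here is a toy Lomb-Scargle periodogram in plain NumPy, applied to synthetic, unevenly sampled count rates with a small annual wobble. This is a sketch of the technique named above, not the ANAIS pipeline, and all numbers are invented for illustration:

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classic Lomb-Scargle periodogram for unevenly sampled data."""
    y = y - y.mean()
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2 * np.pi * f
        # The offset tau makes the sine and cosine terms orthogonal
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

# Synthetic 3-year rate series: constant background, small annual
# modulation peaking in early June, plus noise and uneven sampling.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 3 * 365.25, 500))      # observation days
y = 10 + 0.5 * np.cos(2 * np.pi * (t - 152.5) / 365.25) \
    + rng.normal(0, 0.3, t.size)

periods = np.linspace(250, 500, 2000)             # trial periods, days
power = lomb_scargle(t, y, 1.0 / periods)
best_period = periods[np.argmax(power)]           # should land near 365.25
```

A real analysis must also handle detector backgrounds and slow efficiency drifts, which is exactly why the judgment takes years of data.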

    Nevertheless, it doesn’t look good for WIMPs, at least for the WIMPs motivated by belief in supersymmetry.   

    Sadly for SUSY fans, searches for WIMPs from space are not the only bad news. Attempts to produce WIMPs in particle accelerators have also so far failed. Dark matter might just turn out to consist of some other kind of subatomic particle.

    If so, it would be a plot twist worthy of Agatha Christie, kind of like Poirot turning out to be the killer. For symmetry has long been physicists’ most reliable friend, guiding many great successes, from Einstein’s relativity theory to the standard model of particles and forces.

    Still, failure to find SUSY particles so far does not necessarily mean they don’t exist. Supersymmetry just might not be as simple as it first seemed. And SUSY particles might just be harder to detect than scientists originally surmised. But if supersymmetry does turn out not to be so super, scientists might need to reflect on the ways that faith in symmetry can lead them astray.

  •

    More than words: Using AI to map how the brain understands sentences

    Have you ever wondered why you are able to hear a sentence and understand its meaning — given that the same words in a different order would have an entirely different meaning? New research involving neuroimaging and A.I. describes the complex network within the brain that comprehends the meaning of a spoken sentence.
    “It has been unclear whether the integration of this meaning is represented in a particular site in the brain, such as the anterior temporal lobes, or reflects a more network level operation that engages multiple brain regions,” said Andrew Anderson, Ph.D., research assistant professor in the University of Rochester Del Monte Institute for Neuroscience and lead author of the study, which was published in the Journal of Neuroscience. “The meaning of a sentence is more than the sum of its parts. Take a very simple example — ‘the car ran over the cat’ and ‘the cat ran over the car’ — each sentence has exactly the same words, but those words have a totally different meaning when reordered.”
    The study is an example of how the application of artificial neural networks, or A.I., is enabling researchers to unlock the extremely complex signaling in the brain that underlies functions such as processing language. The researchers gathered brain activity data from study participants who read sentences while undergoing fMRI. These scans showed activity spanning a network of different brain regions — the anterior and posterior temporal lobes, inferior parietal cortex, and inferior frontal cortex. Using the computational model InferSent — an A.I. model developed by Facebook and trained to produce unified semantic representations of sentences — the researchers were able to predict patterns of fMRI activity reflecting the encoding of sentence meaning across those brain regions.
    “It’s the first time that we’ve applied this model to predict brain activity within these regions, and that provides new evidence that contextualized semantic representations are encoded throughout a distributed language network, rather than at a single site in the brain.”
    Anderson and his team believe the findings could be helpful in understanding clinical conditions. “We’re deploying similar methods to try to understand how language comprehension breaks down in early Alzheimer’s disease. We are also interested in moving the models forward to predict brain activity elicited as language is produced. The current study had people read sentences, in the future we’re interested in moving forward to predict brain activity as people might speak sentences.”
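The encoding-model logic behind studies like this — map each sentence's embedding vector to its voxel activity pattern with regularized linear regression, then test predictions on held-out sentences — can be sketched with synthetic data. This is an illustrative stand-in, not the study's actual pipeline, and it does not use the real InferSent model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 120 "sentences" with 100-dim embedding vectors
# (a real study would use e.g. InferSent sentence vectors) and 50
# "voxel" responses per sentence, generated from a hidden linear map.
n_sent, n_dim, n_vox = 120, 100, 50
X = rng.normal(size=(n_sent, n_dim))         # sentence embeddings
W_true = rng.normal(size=(n_dim, n_vox))     # hidden embedding-to-voxel map
Y = X @ W_true + rng.normal(scale=3.0, size=(n_sent, n_vox))  # noisy responses

# Train the encoding model on 90 sentences, hold out 30 for testing
X_tr, X_te = X[:90], X[90:]
Y_tr, Y_te = Y[:90], Y[90:]

# Ridge regression: W = (X'X + lam*I)^(-1) X'Y
lam = 10.0
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_dim), X_tr.T @ Y_tr)
Y_pred = X_te @ W

# Score: mean correlation between predicted and observed voxel responses
r = float(np.mean([np.corrcoef(Y_pred[:, v], Y_te[:, v])[0, 1]
                   for v in range(n_vox)]))
```

If the predicted activity correlates with the held-out scans better than chance, the embedding space captures something about how those brain regions encode sentence meaning.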
    Story Source:
    Materials provided by University of Rochester Medical Center. Original written by Kelsie Smith Hayduk. Note: Content may be edited for style and length.

  •

    How UK, South Africa coronavirus variants escape immunity

    All viruses mutate as they make copies of themselves to spread and thrive. SARS-CoV-2, the virus that causes COVID-19, is proving to be no different. More than 4,000 variants of SARS-CoV-2 have been identified so far, and COVID-19 has already killed more than 2.7 million people worldwide during the pandemic.
    The UK variant, also known as B.1.1.7, was first detected in September 2020, and is now causing 98 percent of all COVID-19 cases in the United Kingdom. And it appears to be gaining a firm grip in about 100 other countries it has spread to in the past several months, including France, Denmark, and the United States.
    The World Health Organization says B.1.1.7 is one of several variants of concern along with others that have emerged in South Africa and Brazil.
    “The UK, South Africa, and Brazil variants are more contagious and escape immunity easier than the original virus,” said Victor Padilla-Sanchez, a research scientist at The Catholic University of America. “We need to understand why they are more infectious and, in many cases, more deadly.”
    All three variants have undergone changes to their spike protein — the part of the virus which attaches to human cells. As a result, they are better at infecting cells and spreading.
    In a research paper published in January 2021 in Research Ideas and Outcomes, Padilla-Sanchez discusses the UK and South African variants in detail. He presents a computational analysis of the structure of the spike glycoprotein bound to the ACE2 receptor where the mutations have been introduced. His paper outlines the reason why these variants bind better to human cells.

  •

    Discovery of non-toxic semiconductors with a direct band gap in the near-infrared

    NIMS and the Tokyo Institute of Technology have jointly discovered that the chemical compound Ca3SiO is a direct transition semiconductor, making it a potentially promising infrared LED and infrared detector component. This compound — composed of calcium, silicon and oxygen — is cheap to produce and non-toxic. Many of the existing infrared semiconductors contain toxic chemical elements, such as cadmium and tellurium. Ca3SiO may be used to develop less expensive and safer near-infrared semiconductors.
    Infrared wavelengths have been used for many purposes, including optical fiber communications, photovoltaic power generation and night vision devices. Existing semiconductors capable of emitting infrared radiation (i.e., direct transition semiconductors) contain toxic chemical compounds, such as mercury cadmium telluride and gallium arsenide. Infrared semiconductors free of toxic chemical elements are generally incapable of emitting infrared radiation (i.e., indirect transition semiconductors). It is desirable to develop high-performance infrared devices using non-toxic, direct transition semiconductors with a band gap in the infrared range.
    Conventionally, the semiconductive properties of materials, such as energy band gap, have been controlled by combining two chemical elements from the groups flanking group IV, such as III and V or II and VI. In this conventional strategy, the energy band gap becomes narrower when heavier elements are used: consequently, this strategy has led to the development of direct transition semiconductors composed of toxic elements, such as mercury cadmium telluride and gallium arsenide. To discover infrared semiconductors free of toxic elements, this research group took an unconventional approach: they focused on crystalline structures in which silicon atoms behave as tetravalent anions rather than their normal tetravalent cation state. The group ultimately chose oxysilicides (e.g., Ca3SiO) and oxygermanides with an inverse perovskite crystalline structure, synthesized them, evaluated their physical properties and conducted theoretical calculations. These processes revealed that these compounds exhibit a very small band gap of approximately 0.9 eV, corresponding to a wavelength of 1.4 μm, indicating their great potential to serve as direct transition semiconductors. These compounds with a small direct band gap may potentially be effective in absorbing, detecting and emitting long infrared wavelengths even when they are processed into thin films, making them very promising near-infrared semiconductor materials to be used in infrared sources (e.g., LEDs) and detectors.
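The quoted gap and wavelength are consistent with the standard photon-energy relation λ = hc/E (with hc ≈ 1240 eV·nm), which is easy to check:

```python
# Photon wavelength corresponding to a band gap: lambda = h*c / E.
# hc ~ 1239.84 eV*nm, so the 0.9 eV gap quoted in the text should map
# to a wavelength of roughly 1.4 micrometers, in the near-infrared.
HC_EV_NM = 1239.84   # Planck constant times speed of light, in eV*nm

def gap_to_wavelength_nm(e_gap_ev):
    """Wavelength (nm) of a photon whose energy equals the band gap."""
    return HC_EV_NM / e_gap_ev

lam_nm = gap_to_wavelength_nm(0.9)   # ~1378 nm, i.e. ~1.4 micrometers
```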
    In future research, we plan to develop high-intensity infrared LEDs and highly sensitive infrared detectors by synthesizing these compounds in the form of large single-crystals, developing thin film growth processes and controlling their physical properties through doping and transforming them into solid solutions. If these efforts bear fruit, toxic chemical elements currently used in existing near-infrared semiconductors may be replaced with non-toxic ones.
    Story Source:
    Materials provided by National Institute for Materials Science, Japan. Note: Content may be edited for style and length.

  •

    Novel thermometer can accelerate quantum computer development

    Researchers at Chalmers University of Technology, Gothenburg, Sweden, have developed a novel type of thermometer that can simply and quickly measure temperatures during quantum calculations with extremely high accuracy. The breakthrough provides a valuable benchmarking tool for quantum computing — and opens the door to experiments in the exciting field of quantum thermodynamics.
    Key components in quantum computers are the coaxial cables and waveguides — structures that guide waveforms and act as the vital connection between the quantum processor and the classical electronics that control it. Microwave pulses travel along the waveguides to the quantum processor, and are cooled down to extremely low temperatures along the way. The waveguide also attenuates and filters the pulses, enabling the extremely sensitive quantum computer to work with stable quantum states.
    In order to have maximum control over this mechanism, the researchers need to be sure that these waveguides are not carrying noise due to thermal motion of electrons on top of the pulses that they send. In other words, they have to measure the temperature of the electromagnetic fields at the cold end of the microwave waveguides, the point where the controlling pulses are delivered to the computer’s qubits. Working at the lowest possible temperature minimises the risk of introducing errors in the qubits.
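Why millikelvin temperatures matter can be made concrete with the textbook Bose-Einstein occupation of a microwave mode, n̄ = 1/(e^(hf/kT) − 1). This generic estimate (not the Chalmers group's calibration) shows how quickly thermal noise photons vanish as the waveguide's cold end is cooled:

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
K = 1.380649e-23     # Boltzmann constant, J/K

def thermal_photons(f_hz, t_kelvin):
    """Mean thermal photon number of a mode at frequency f and temperature T."""
    return 1.0 / math.expm1(H * f_hz / (K * t_kelvin))

f = 5e9                               # a typical qubit-control frequency, 5 GHz
n_10mk = thermal_photons(f, 0.010)    # ~4e-11: effectively no noise photons
n_50mk = thermal_photons(f, 0.050)    # ~8e-3: orders of magnitude more
```

The steep exponential suppression is why even a few tens of millikelvin of excess field temperature matters, and why measuring it directly at the waveguide's end is so valuable.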
    Until now, researchers have only been able to measure this temperature indirectly, with a relatively large delay. Now, with the Chalmers researchers’ novel thermometer, very low temperatures can be measured directly at the receiving end of the waveguide — very accurately and with extremely high time resolution.
    “Our thermometer is a superconducting circuit, directly connected to the end of the waveguide being measured. It is relatively simple — and probably the world’s fastest and most sensitive thermometer for this particular purpose at the millikelvin scale,” says Simone Gasparinetti, Assistant Professor at the Quantum Technology Laboratory, Chalmers University of Technology.
    Important for measuring quantum computer performance
    The researchers at the Wallenberg Centre for Quantum Technology, WACQT, aim to build a quantum computer — based on superconducting circuits — with at least 100 well-functioning qubits, performing correct calculations by 2030. That requires a processor operating temperature close to absolute zero, ideally down to 10 millikelvin. The new thermometer gives the researchers an important tool for measuring how good their systems are and what shortcomings exist — a necessary step to be able to refine the technology and achieve their goal.

  •

    Machine learning shows potential to enhance quantum information transfer

    Army-funded researchers demonstrated a machine learning approach that corrects quantum information in systems composed of photons, improving the outlook for deploying quantum sensing and quantum communications technologies on the battlefield.
    When photons are used as the carriers of quantum information to transmit data, that information is often distorted due to environment fluctuations destroying the fragile quantum states necessary to preserve it.
    Researchers from Louisiana State University exploited a type of machine learning to correct for information distortion in quantum systems composed of photons. In work published in Advanced Quantum Technologies, the team demonstrated that machine learning techniques using the self-learning and self-evolving features of artificial neural networks can help correct distorted information. These results outperformed traditional protocols that rely on conventional adaptive optics.
    “We are still in the fairly early stages of understanding the potential for machine learning techniques to play a role in quantum information science,” said Dr. Sara Gamble, program manager at the Army Research Office, an element of U.S. Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory. “The team’s result is an exciting step forward in developing this understanding, and it has the potential to ultimately enhance the Army’s sensing and communication capabilities on the battlefield.”
    For this research, the team used a type of neural network to correct for distorted spatial modes of light at the single-photon level.
    “The random phase distortion is one of the biggest challenges in using spatial modes of light in a wide variety of quantum technologies, such as quantum communication, quantum cryptography, and quantum sensing,” said Narayan Bhusal, doctoral candidate at LSU. “Our method is remarkably effective and time-efficient compared to conventional techniques. This is an exciting development for the future of free-space quantum technologies.”
    According to the research team, this smart quantum technology demonstrates the possibility of encoding multiple bits of information in a single photon in realistic communication protocols affected by atmospheric turbulence.
    “Our technique has enormous implications for optical communication and quantum cryptography,” said Omar Magaña Loaiza, assistant professor of physics at LSU. “We are now exploring paths to implement our machine learning scheme in the Louisiana Optical Network Initiative to make it smart, more secure, and quantum.”
    Story Source:
    Materials provided by U.S. Army Research Laboratory. Note: Content may be edited for style and length.

  •

    Researchers' algorithm designs soft robots that sense

    There are some tasks that traditional robots — the rigid and metallic kind — simply aren’t cut out for. Soft-bodied robots, on the other hand, may be able to interact with people more safely or slip into tight spaces with ease. But for robots to reliably complete their programmed duties, they need to know the whereabouts of all their body parts. That’s a tall task for a soft robot that can deform in a virtually infinite number of ways.
    MIT researchers have developed an algorithm to help engineers design soft robots that collect more useful information about their surroundings. The deep-learning algorithm suggests an optimized placement of sensors within the robot’s body, allowing it to better interact with its environment and complete assigned tasks. The advance is a step toward the automation of robot design. “The system not only learns a given task, but also how to best design the robot to solve that task,” says Alexander Amini. “Sensor placement is a very difficult problem to solve. So, having this solution is extremely exciting.”
    The research will be presented during April’s IEEE International Conference on Soft Robotics and will be published in the journal IEEE Robotics and Automation Letters. Co-lead authors are Amini and Andrew Spielberg, both PhD students in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Other co-authors include MIT PhD student Lillian Chin, and professors Wojciech Matusik and Daniela Rus.
    Creating soft robots that complete real-world tasks has been a long-running challenge in robotics. Their rigid counterparts have a built-in advantage: a limited range of motion. Rigid robots’ finite array of joints and limbs usually makes for manageable calculations by the algorithms that control mapping and motion planning. Soft robots are not so tractable.
    Soft-bodied robots are flexible and pliant — they generally feel more like a bouncy ball than a bowling ball. “The main problem with soft robots is that they are infinitely dimensional,” says Spielberg. “Any point on a soft-bodied robot can, in theory, deform in any way possible.” That makes it tough to design a soft robot that can map the location of its body parts. Past efforts have used an external camera to chart the robot’s position and feed that information back into the robot’s control program. But the researchers wanted to create a soft robot untethered from external aid.
    “You can’t put an infinite number of sensors on the robot itself,” says Spielberg. “So, the question is: How many sensors do you have, and where do you put those sensors in order to get the most bang for your buck?” The team turned to deep learning for an answer.
    The researchers developed a novel neural network architecture that both optimizes sensor placement and learns to efficiently complete tasks. First, the researchers divided the robot’s body into regions called “particles.” Each particle’s rate of strain was provided as an input to the neural network. Through a process of trial and error, the network “learns” the most efficient sequence of movements to complete tasks, like gripping objects of different sizes. At the same time, the network keeps track of which particles are used most often, and it culls the lesser-used particles from the set of inputs for the network’s subsequent trials.
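The cull-the-least-used-inputs idea can be sketched schematically. In the toy version below, candidate "particle" signals are scored by a simple stand-in metric (correlation with the task target) and only the top few are kept; the real system instead tracks usage inside a trained network, and all names and numbers here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 20 candidate "particle" strain signals, of which only a few
# actually carry information about the task target.
n_samples, n_particles = 200, 20
X = rng.normal(size=(n_samples, n_particles))
informative = [2, 7, 11]                      # hypothetical useful particles
y = X[:, informative].sum(axis=1) + rng.normal(scale=0.1, size=n_samples)

def prune_to_k(X, y, k):
    """Keep the k inputs most correlated with the target — a greedy
    stand-in for tracking which particles the network actually uses."""
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1]
                     for j in range(X.shape[1])])
    keep = np.argsort(scores)[-k:]
    return sorted(keep.tolist())

sensors = prune_to_k(X, y, 3)   # should recover the informative particles
```

The surviving input indices are then the natural candidates for physical sensor sites.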
    By optimizing the most important particles, the network also suggests where sensors should be placed on the robot to ensure efficient performance. For example, in a simulated robot with a grasping hand, the algorithm might suggest that sensors be concentrated in and around the fingers, where precisely controlled interactions with the environment are vital to the robot’s ability to manipulate objects. While that may seem obvious, it turns out the algorithm vastly outperformed humans’ intuition on where to site the sensors.
    The researchers pitted their algorithm against a series of expert predictions. For three different soft robot layouts, the team asked roboticists to manually select where sensors should be placed to enable the efficient completion of tasks like grasping various objects. Then they ran simulations comparing the human-sensorized robots to the algorithm-sensorized robots. And the results weren’t close. “Our model vastly outperformed humans for each task, even though I looked at some of the robot bodies and felt very confident on where the sensors should go,” says Amini. “It turns out there are a lot more subtleties in this problem than we initially expected.”
    Spielberg says their work could help to automate the process of robot design. In addition to developing algorithms to control a robot’s movements, “we also need to think about how we’re going to sensorize these robots, and how that will interplay with other components of that system,” he says. And better sensor placement could have industrial applications, especially where robots are used for fine tasks like gripping. “That’s something where you need a very robust, well-optimized sense of touch,” says Spielberg. “So, there’s potential for immediate impact.”
    “Automating the design of sensorized soft robots is an important step toward rapidly creating intelligent tools that help people with physical tasks,” says Rus. “The sensors are an important aspect of the process, as they enable the soft robot to ‘see’ and understand the world and its relationship with the world.”
    This research was funded, in part, by the National Science Foundation and the Fannie and John Hertz Foundation.

  •

    Tunable smart materials

    Researchers developed a system of self-assembling polymer microparticles with adjustable concentrations of two types of attached residues. They found that tuning the concentration of each type allowed them to control the aggregation and resulting shape of the clusters. This work may lead to advances in ‘smart’ materials, including sensors and damage-resistant surfaces.
    Scientists from the Graduate School of Science at Osaka University created superabsorbent polymer (SAP) microparticles that self-assemble into structures that can be modified by adjusting the proportion of particle type. This research may lead to new tunable biomimetic “smart materials” that can sense and respond to specific chemicals.
    Biological molecules in living organisms have a remarkable ability to form self-assembled structures when triggered by an external molecule. This has led scientists to try to create other “smart materials” that respond to their environment. Now, a team of researchers at Osaka University has come up with a tunable system involving poly(sodium acrylate) microparticles that can have one of two types of chemical groups attached. The adjustable parameters x and y refer to the molar percent of microparticles with β-cyclodextrin (βCD) and adamantyl (Ad) residues, respectively.
    “We found that the macroscopic shape of assemblies formed by microparticles was dependent on the residue content,” co-senior author Akihito Hashidzume says. In order for assemblies to form, x needed to be at least 22.3; however, the shape of assemblies could be controlled by varying y. As the value of y increased, the clusters became more and more elongated. The team hypothesized that at higher values of y, small clusters could form early and stick together, leading to elongated aggregates. Conversely, when y was small, clusters would only stick together after many collisions, resulting in more spherical aggregates. This provides a way to tune the shape of the resulting clusters. The team measured the aggregates under a microscope and used statistical analysis to characterize the shapes of the assemblies.
    “On the basis of these findings, we hope to help reveal the origin of the diverse shape of living organisms, which are macroscopic assemblies controlled by molecular recognition,” co-senior author Akira Harada says. This research may also lead to the development of new smart sensors that can form clusters large enough to be seen with the naked eye.
    Story Source:
    Materials provided by Osaka University. Note: Content may be edited for style and length.