More stories

  • A universal system for decoding any type of data sent across a network

    Every piece of data that travels over the internet — from paragraphs in an email to 3D graphics in a virtual reality environment — can be altered by the noise it encounters along the way, such as electromagnetic interference from a microwave or Bluetooth device. The data are coded so that when they arrive at their destination, a decoding algorithm can undo the negative effects of that noise and retrieve the original data.
    Since the 1950s, most error-correcting codes and decoding algorithms have been designed together. Each code had a structure that corresponded with a particular, highly complex decoding algorithm, which often required the use of dedicated hardware.
    Researchers at MIT, Boston University, and Maynooth University in Ireland have now created the first silicon chip that is able to decode any code, regardless of its structure, with maximum accuracy, using a universal decoding algorithm called Guessing Random Additive Noise Decoding (GRAND). By eliminating the need for multiple, computationally complex decoders, GRAND enables increased efficiency that could have applications in augmented and virtual reality, gaming, 5G networks, and connected devices that rely on processing a high volume of data with minimal delay.
    The research at MIT is led by Muriel Médard, the Cecil H. and Ida Green Professor in the Department of Electrical Engineering and Computer Science, and was co-authored by Amit Solomon and Wei An, both graduate students at MIT; Rabia Tugce Yazicigil, assistant professor of electrical and computer engineering at Boston University; Arslan Riaz and Vaibhav Bansal, both graduate students at Boston University; Ken R. Duffy, director of the Hamilton Institute at the National University of Ireland at Maynooth; and Kevin Galligan, a Maynooth graduate student. The research will be presented at the European Solid-State Device Research and Circuits Conference next week.
    Focus on noise
    One way to think of these codes is as redundant hashes (in this case, a series of 1s and 0s) added to the end of the original data. The rules for the creation of that hash are stored in a specific codebook.
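    To make the guess-the-noise idea concrete, here is a rough sketch in Python. It uses a toy binary codebook and a plain loop, not the chip's implementation or the team's code; the codebook, the `grand_decode` helper, and the error budget are illustrative assumptions. The decoder ranks noise patterns from most to least likely (fewest flipped bits first) and returns the first guess that lands back in the codebook:

    ```python
    import itertools

    # Toy codebook with minimum distance 3, so any single flipped bit is correctable.
    # GRAND itself is code-agnostic: it works with whatever codebook is supplied.
    CODEBOOK = {
        (0, 0, 0, 0, 0, 0),
        (1, 1, 1, 0, 0, 0),
        (0, 0, 0, 1, 1, 1),
        (1, 1, 1, 1, 1, 1),
    }

    def grand_decode(received, max_flips=2):
        """Guess noise patterns from most to least likely (fewest bit flips first),
        remove each guess from the received word, and stop at the first codebook hit."""
        n = len(received)
        for weight in range(max_flips + 1):          # zero flips is the likeliest noise
            for positions in itertools.combinations(range(n), weight):
                candidate = list(received)
                for i in positions:
                    candidate[i] ^= 1                # undo the guessed bit flip
                if tuple(candidate) in CODEBOOK:
                    return tuple(candidate)          # first hit is the decoder's answer
        return None                                  # abandon guessing beyond max_flips

    sent = (1, 1, 1, 0, 0, 0)
    received = (1, 1, 1, 0, 0, 1)                    # one bit corrupted by channel noise
    print(grand_decode(received))                    # -> (1, 1, 1, 0, 0, 0)
    ```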

  • Emissions from computing and ICT could be worse than previously thought

    Global computing could be responsible for a greater share of greenhouse gas emissions than previously thought, and these emissions will continue to rise significantly unless action is taken, a new study warns.
    A team of researchers from Lancaster University and the sustainability consultancy Small World Consulting Ltd claims that previous calculations of ICT’s share of global greenhouse gas emissions, estimated at 1.8-2.8%, likely fall short of the sector’s real climate impact because they show only a partial picture.
    The researchers point out that some of these prior estimates do not account for the full life cycle and supply chain of ICT products and infrastructure: the energy expended in manufacturing the products and equipment; the carbon cost of their components and the operational carbon footprint of the companies behind them; the energy consumed when using the equipment; and their disposal once they have fulfilled their purpose.
    The researchers argue that ICT’s true share of global greenhouse gas emissions could be around 2.1-3.9%, though they stress that there are still significant uncertainties around these calculations. Although like-for-like comparisons are difficult, these figures suggest ICT’s emissions exceed those of the aviation industry, which accounts for around 2% of global emissions.
    In addition, the paper warns that new trends in computing and ICT such as big data and AI, the Internet of Things, as well as blockchain and cryptocurrencies, risk driving further substantial growth in ICT’s greenhouse gas footprint.
    In their new paper, ‘The real climate and transformative impact of ICT: A critique of estimates, trends and regulations’, published today in the journal Patterns, the researchers examined two central issues: ICT’s own carbon footprint and ICT’s impact on the rest of the economy.

  • AI can make better clinical decisions than humans: Study

    It’s an old adage: there’s no harm in getting a second opinion. But what if that second opinion could be generated by a computer, using artificial intelligence? Would it come up with better treatment recommendations than your professional proposes?
    A pair of Canadian mental-health researchers believe it can. In a study published in the Journal of Applied Behavior Analysis, Marc Lanovaz of Université de Montréal and Kieva Hranchuk of St. Lawrence College, in Ontario, make a case for using AI in treating behavioural problems.
    “Medical and educational professionals frequently disagree on the effectiveness of behavioral interventions, which may cause people to receive inadequate treatment,” said Lanovaz, an associate professor who heads the Applied Behavioural Research Lab at UdeM’s School of Psychoeducation.
    To find a better way, Lanovaz and Hranchuk, a professor of behavioural science and behavioural psychology at St. Lawrence, compiled simulated data from 1,024 individuals receiving treatment for behavioral issues.
    The researchers then compared the treatment conclusions drawn in each case by five doctoral-level behavior analysts with those produced by a computer model the two academics developed using machine learning.
    “The five professionals only came to the same conclusions approximately 75 per cent of the time,” said Lanovaz. “More importantly, machine learning produced fewer decision-making errors than did all the professionals.”
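    As a purely hypothetical illustration of the kind of comparison described above (the accuracy figures, the unanimity measure, and the simulated data below are invented for the example, not taken from the study), one could contrast rater agreement with a model's error rate like this:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: ground-truth decisions for 1,024 simulated cases,
    # five imperfect human raters, and one model's predictions.
    n_cases = 1024
    truth = rng.integers(0, 2, n_cases)                              # 0 = stop treatment, 1 = continue
    raters = np.array([np.where(rng.random(n_cases) < 0.85, truth, 1 - truth)
                       for _ in range(5)])                           # each rater ~85% accurate
    model = np.where(rng.random(n_cases) < 0.95, truth, 1 - truth)   # model ~95% accurate

    # Share of cases where all five raters reach the same conclusion.
    unanimous = np.mean(np.all(raters == raters[0], axis=0))

    # Error rates: share of cases where a decision differs from the ground truth.
    rater_errors = (raters != truth).mean(axis=1)
    model_error = (model != truth).mean()

    print(f"unanimous agreement among raters: {unanimous:.2f}")
    print(f"rater error rates: {np.round(rater_errors, 2)}, model error rate: {model_error:.2f}")
    ```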
    Given these very positive results, the next step would be to “integrate our models in an app that could automatically make decisions or provide feedback about how treatment is progressing,” he added.
    The goal, the researchers believe, should be to use machine learning to facilitate the work of professionals, not actually replace them, while also making treatment decisions more consistent and predictable.
    “For example, doctors could someday use the technology to help them decide whether to continue or terminate the treatment of people with disorders as varied as autism, ADHD, anxiety and depression,” Lanovaz said.
    “Individualized clinical and educational decision-making is one of the cornerstones of psychological and behavioral treatment. Our study may thus lead to better treatment options for the millions of individuals who receive these types of services worldwide.”
    Story Source:
    Materials provided by University of Montreal.

  • Largest virtual universe free for anyone to explore

    Forget about online games that promise you a “whole world” to explore. An international team of researchers has generated an entire virtual UNIVERSE, and made it freely available on the cloud to everyone.
    Uchuu (meaning “Outer Space” in Japanese) is the largest and most realistic simulation of the Universe to date. The Uchuu simulation consists of 2.1 trillion particles in a computational cube an unprecedented 9.63 billion light-years to a side. For comparison, that’s about three-quarters the distance between Earth and the most distant observed galaxies. Uchuu will allow us to study the evolution of the Universe on a level of both size and detail inconceivable until now.
    Uchuu focuses on the large-scale structure of the Universe: mysterious halos of dark matter which control not only the formation of galaxies, but also the fate of the entire Universe itself. The scale of these structures ranges from the largest galaxy clusters down to the smallest galaxies. Individual stars and planets aren’t resolved, so don’t expect to find any alien civilizations in Uchuu. But one way that Uchuu wins big in comparison to other virtual worlds is the time domain; Uchuu simulates the evolution of matter over almost the entire 13.8 billion year history of the Universe from the Big Bang to the present. That is over 30 times longer than the time since animal life first crawled out of the seas on Earth.
    Julia F. Ereza, a Ph.D. student at IAA-CSIC who uses Uchuu to study the large-scale structure of the Universe explains the importance of the time domain, “Uchuu is like a time machine: we can go forward, backward and stop in time, we can ‘zoom in’ on a single galaxy or ‘zoom out’ to visualize a whole cluster, we can see what is really happening at every instant and in every place of the Universe from its earliest days to the present, being an essential tool to study the Cosmos.”
    An international team of researchers from Japan, Spain, U.S.A., Argentina, Australia, Chile, France, and Italy created Uchuu using ATERUI II, the world’s most powerful supercomputer dedicated to astronomy. Even with all this power, it still took a year to produce Uchuu. Tomoaki Ishiyama, an associate professor at Chiba University who developed the code used to generate Uchuu, explains, “To produce Uchuu we have used … all 40,200 processors (CPU cores) available exclusively for 48 hours each month. Twenty million supercomputer hours were consumed, and 3 Petabytes of data were generated, the equivalent of 894,784,853 pictures from a 12-megapixel cell phone.”
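    As a quick back-of-the-envelope check on the figures in that quote (the number of production months and the implied per-photo size are assumptions for illustration, not stated in the source):

    ```python
    # Rough sanity check of the quoted Uchuu production numbers.
    cores = 40_200                      # ATERUI II CPU cores used, per the quote
    hours_per_month = 48                # exclusive hours per month, per the quote
    months = 12                         # assumption: roughly one year of production runs

    core_hours = cores * hours_per_month * months
    print(f"core-hours: {core_hours:,}")           # ~23 million, same order as the quoted 20 million

    data_bytes = 3e15                   # 3 petabytes (decimal definition assumed)
    photos = 894_784_853                # quoted number of 12-megapixel photos
    print(f"implied size per photo: {data_bytes / photos / 1e6:.1f} MB")   # ~3.4 MB each
    ```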
    Before you start worrying about download time, the research team used high-performance computational techniques to compress information on the formation and evolution of dark matter haloes in the Uchuu simulation into a 100-terabyte catalog. This catalog is now available to everyone on the cloud in an easy-to-use format thanks to the computational infrastructure skun6 located at the Instituto de Astrofísica de Andalucía (IAA-CSIC), the RedIRIS group, and the Galician Supercomputing Center (CESGA). Future data releases will include catalogues of virtual galaxies and gravitational lensing maps.
    Big Data science products from Uchuu will help astronomers learn how to interpret Big Data galaxy surveys expected in coming years from facilities like the Subaru Telescope and the ESA Euclid space mission.
    Story Source:
    Materials provided by National Institutes of Natural Sciences.

  • After 20 years of trying, scientists succeed in doping a 1D chain of cuprates

    When scientists study unconventional superconductors — complex materials that conduct electricity with zero loss at relatively high temperatures — they often rely on simplified models to get an understanding of what’s going on.
    Researchers know these quantum materials get their abilities from electrons that join forces to form a sort of electron soup. But modeling this process in all its complexity would take far more time and computing power than anyone can imagine having today. So for understanding one key class of unconventional superconductors — copper oxides, or cuprates — researchers created, for simplicity, a theoretical model in which the material exists in just one dimension, as a string of atoms. They made these one-dimensional cuprates in the lab and found that their behavior agreed with the theory pretty well.
    Unfortunately, these 1D atomic chains lacked one thing: They could not be doped, a process where some atoms are replaced by others to change the number of electrons that are free to move around. Doping is one of several factors scientists can adjust to tweak the behavior of materials like these, and it’s a critical part of getting them to superconduct.
    Now scientists at the Department of Energy’s SLAC National Accelerator Laboratory and Stanford and Clemson universities have synthesized the first 1D cuprate material that can be doped. Their analysis of the doped material suggests that the most prominent proposed model of how cuprates achieve superconductivity is missing a key ingredient: an unexpectedly strong attraction between neighboring electrons in the material’s atomic structure, or lattice. That attraction, they said, may be the result of interactions with natural lattice vibrations.
    The team reported their findings today in Science.
    “The inability to controllably dope one-dimensional cuprate systems has been a significant barrier to understanding these materials for more than two decades,” said Zhi-Xun Shen, a Stanford professor and investigator with the Stanford Institute for Materials and Energy Sciences (SIMES) at SLAC.

  • Breakthrough achievement in quantum computing

    A University of Texas at San Antonio (UTSA) researcher is part of a collaboration that has set a world record for innovation in quantum computing. The accomplishment comes from R. Tyler Sutherland, an assistant professor in the College of Sciences Department of Physics and Astronomy and the College of Engineering and Integrated Design’s Department of Electrical Engineering, who developed the theory behind the record-setting experiment.
    Sutherland and his team set the world record for the most accurate entangling gate ever demonstrated without lasers.
    According to Sutherland, an entangling gate takes two qubits (quantum bits) and performs an operation on the second qubit that is conditioned on the state of the first qubit.
    “For example, if the state of qubit A is 0, an entangling gate doesn’t do anything to qubit B, but if the state of qubit A is 1, then the gate flips the state of qubit B from 0 to 1 or 1 to 0,” he said. “The name comes from the fact that this can generate a quantum mechanical property called ‘entanglement’ between the qubits.”
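    The behavior Sutherland describes is that of a standard two-qubit CNOT (controlled-NOT) gate. The short NumPy sketch below is a textbook illustration of that gate, not the trapped-ion, microwave-driven gate from the paper: it shows the gate acting on basis states and, applied to a superposition of the control qubit, producing an entangled Bell state.

    ```python
    import numpy as np

    # CNOT entangling gate on two qubits, written in the computational basis
    # |00>, |01>, |10>, |11> (control qubit listed first).
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],    # |10> -> |11>
                     [0, 0, 1, 0]])   # |11> -> |10>

    ket = lambda s: np.eye(4)[int(s, 2)]          # basis state from a bit string

    print(CNOT @ ket("00"))   # control is 0: target untouched -> |00>
    print(CNOT @ ket("10"))   # control is 1: target flipped   -> |11>

    # Applied to a superposition of the control qubit, the same gate produces an
    # entangled Bell state: (|00> + |11>) / sqrt(2).
    plus_zero = (ket("00") + ket("10")) / np.sqrt(2)
    print(CNOT @ plus_zero)
    ```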
    Sutherland adds that making the entangling gates in a quantum computer “laser-free” makes quantum computers more cost-effective and easier to use. He says the price of an integrated circuit that performs a laser-free gate is negligible compared to the tens of thousands of dollars it costs for a laser that does the same thing.
    “Laser-free gate methods do not have the drawbacks of photon scattering, energy, cost and calibration that are typically associated with using lasers,” said Sutherland. “This alternative gate method matches the accuracy of lasers by instead using microwaves, which are less expensive and easier to calibrate.”
    This quantum computing accomplishment is detailed in a paper Sutherland co-authored titled “High-fidelity laser-free universal control of trapped-ion qubits.” It was published in the scientific journal Nature on September 8.
    Quantum computers have the potential to solve certain complex problems exponentially faster than classical supercomputers. One of the most promising uses for quantum computers is to simulate quantum mechanical processes themselves, chemical reactions for example, which could exponentially reduce the experimental trial and error required to solve difficult problems. These computers are being explored in many industries including science, engineering, finance and logistics.
    “Broadly speaking, the goal of my research is to increase human control over quantum mechanics,” said Sutherland. “Giving people power over a different part of nature hands them a new toolkit. What they will eventually build with it is uncertain.”
    That uncertainty, says Sutherland, is what excites him most.
    Sutherland’s research background includes quantum optics, which studies how quantum mechanical systems emit light. He earned his Ph.D. at Purdue University and went on to Lawrence Livermore National Laboratory for his postdoc, where he began working on experimental applications for quantum computers.
    He became a tenure-track assistant professor at UTSA last August as part of the university’s Quantum Computation and Quantum Information Cluster Hiring Initiative.
    Story Source:
    Materials provided by University of Texas at San Antonio. Original written by Bruce Forey.

  • Researchers enlist robot swarms to mine lunar resources

    With scientists beginning to more seriously consider constructing bases on celestial bodies such as the moon, the idea of space mining is growing in popularity.
    After all, if someone from Los Angeles was moving to New York to build a house, it would be a lot easier to buy the building materials in New York rather than buy them in Los Angeles and lug them 2,800 miles. Considering the distance between Earth and the moon is about 85 times greater, and that getting there requires defying gravity, using the moon’s existing resources is an appealing idea.
    A University of Arizona team, led by researchers in the College of Engineering, has received $500,000 in NASA funding for a new project to advance space-mining methods that use swarms of autonomous robots. As a Hispanic-Serving Institution, the university was eligible to receive funding through NASA’s Minority University Research and Education Project Space Technology Artemis Research Initiative.
    “It’s really exciting to be at the forefront of a new field,” said Moe Momayez, interim head of the Department of Mining and Geological Engineering and the David & Edith Lowell Chair in Mining and Geological Engineering. “I remember watching TV shows as a kid, like ‘Space: 1999,’ which is all about bases on the moon. Here we are in 2021, and we’re talking about colonizing the moon.”
    Blast Off!
    According to the Giant Impact Hypothesis, Earth and the moon came from a common parent body, so scientists expect their chemical compositions to be relatively similar. Mining on the moon’s surface could turn up rare earth metals needed for technologies such as smartphones and medical equipment, titanium for use in titanium alloys, precious metals such as gold and platinum, and helium-3 — a stable helium isotope that could fuel nuclear power plants but is extremely rare on Earth.

  • New ways to improve the science of ‘trade-offs’

    QUT researchers working on complicated problems in agriculture, ecology and medicine have developed a mathematical model to enable faster solutions.
    Questions about intervention, such as how strong it should be and how long it should last, are just some of the judgment calls doctors and scientists face in everyday decision-making.
    From crop production to chemotherapy, new research published in the Journal of the Royal Society Interface improves how the ‘best’ intervention strategies are determined.
    Professor Matthew Simpson, PhD researcher Jesse Sharp and Professor Kevin Burrage from QUT’s Centre for Data Science and Australian Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS) have developed the new mathematical method to simulate different scenarios faster and reach optimal solutions.
    Mr Sharp, who is studying for his PhD, said the method involved optimal control theory, which could be described as a “science of trade-offs” between competing objectives.
    “Using mathematical optimisation techniques helps us to make smarter, more efficient resource allocation decisions,” he said.
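    As a toy illustration of the kind of trade-off that optimal control formalises (the logistic dynamics, parameter values, and cost weighting below are illustrative assumptions, not the model from the paper), one can sweep a constant treatment intensity and watch the total cost balance the remaining problem against the effort spent:

    ```python
    import numpy as np

    # Choose a constant treatment intensity u for a quantity x (e.g. a pest or
    # tumour burden) that regrows logistically, trading off how much of x remains
    # against the cost of applying the control.
    def objective(u, x0=1.0, r=0.5, T=20.0, dt=0.01, effort_weight=0.2):
        x, cost = x0, 0.0
        for _ in range(int(T / dt)):
            x += dt * (r * x * (1 - x) - u * x)       # logistic growth minus treatment
            cost += dt * (x + effort_weight * u**2)   # penalise both the state and the effort
        return cost

    controls = np.linspace(0, 2, 81)
    costs = [objective(u) for u in controls]
    best = controls[int(np.argmin(costs))]
    print(f"best constant control ~ {best:.2f}")      # too little leaves x high, too much wastes effort
    ```

    In a full optimal control treatment the intensity would be allowed to vary over time rather than being held constant, which is exactly the kind of computation the new method aims to speed up.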