More stories

  • Sensor breakthrough paves way for groundbreaking map of world under Earth’s surface

    An object hidden below ground has been located using quantum technology — a long-awaited milestone with profound implications for industry, human knowledge and national security.
    University of Birmingham researchers from the UK National Quantum Technology Hub in Sensors and Timing have reported their achievement in Nature. It is a world first for a quantum gravity gradiometer operating outside laboratory conditions.
    The quantum gravity gradiometer, which was developed under a contract for the Ministry of Defence and in the UKRI-funded Gravity Pioneer project, was used to find a tunnel buried outdoors in real-world conditions, one metre below the ground surface. The achievement wins an international race to take the technology outside.
    The sensor works by detecting variations in microgravity using the principles of quantum physics, which is based on manipulating nature at the sub-molecular level.
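    To see why a buried tunnel is detectable at all: a void is less dense than the surrounding soil, so gravity directly above it is minutely weaker, and a gradiometer reads out the spatial variation of that deficit. The sketch below is a textbook forward model only (an infinite-cylinder approximation with assumed soil density, tunnel radius and depth), not the Birmingham team's analysis:

```python
import numpy as np

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
RHO_SOIL = 1800.0   # assumed soil density, kg/m^3
R = 1.0             # assumed tunnel radius, m
DEPTH = 2.0         # assumed depth of the tunnel axis, m

def gz_anomaly(x):
    """Vertical gravity anomaly (m/s^2) at horizontal offset x from an
    air-filled tunnel, modelled as an infinite cylinder of density -RHO_SOIL."""
    return 2 * np.pi * G * (-RHO_SOIL) * R**2 * DEPTH / (x**2 + DEPTH**2)

x = np.linspace(-10.0, 10.0, 401)   # survey line across the tunnel, m
g = gz_anomaly(x)
grad = np.gradient(g, x)            # spatial variation, what a gradiometer senses

print(f"peak anomaly:  {g.min():.2e} m/s^2")    # ~ -3.8e-7 m/s^2: 'microgravity'
print(f"peak gradient: {np.abs(grad).max():.2e} 1/s^2")
```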
    The success opens a commercial path to significantly improved mapping of what exists below ground level.
    This will mean:
    • Reduced costs and delays to construction, rail and road projects.
    • Improved prediction of natural phenomena such as volcanic eruptions.
    • Discovery of hidden natural resources and built structures.
    • Understanding of archaeological mysteries without damaging excavation.
    Professor Kai Bongs, Head of Cold Atom Physics at the University of Birmingham and Principal Investigator of the UK Quantum Technology Hub Sensors and Timing, said: “This is an ‘Edison moment’ in sensing that will transform society, human understanding and economies.”

  • Evidence for exotic magnetic phase of matter

    Scientists at the U.S. Department of Energy’s Brookhaven National Laboratory have discovered a long-predicted magnetic state of matter called an “antiferromagnetic excitonic insulator.”
    “Broadly speaking, this is a novel type of magnet,” said Brookhaven Lab physicist Mark Dean, senior author on a paper describing the research just published in Nature Communications. “Since magnetic materials lie at the heart of much of the technology around us, new types of magnets are both fundamentally fascinating and promising for future applications.”
    The new magnetic state involves strong magnetic attraction between electrons in a layered material that makes the electrons want to arrange their magnetic moments, or “spins,” into a regular up-down “antiferromagnetic” pattern. The idea that such antiferromagnetism could be driven by quirky electron coupling in an insulating material was first predicted in the 1960s as physicists explored the differing properties of metals, semiconductors, and insulators.
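    To make “antiferromagnetic” concrete, consider the simplest Ising caricature (an illustration only, not the model used in the paper): with a positive coupling J that penalises aligned neighbours, the regular up-down pattern has the lowest energy.

```python
import numpy as np

J = 1.0  # coupling constant; J > 0 favours anti-aligned neighbours here

def ising_energy(spins):
    """Nearest-neighbour Ising energy E = J * sum_i s_i * s_{i+1}, open chain."""
    spins = np.asarray(spins)
    return J * np.sum(spins[:-1] * spins[1:])

ferro = np.ones(8)                 # up, up, up, ... (all spins aligned)
antiferro = np.array([1, -1] * 4)  # up, down, up, down, ... (antiferromagnetic)

print("aligned pattern energy:   ", ising_energy(ferro))      # +7.0, penalised
print("antiferromagnetic energy: ", ising_energy(antiferro))  # -7.0, favoured
```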
    “Sixty years ago, physicists were just starting to consider how the rules of quantum mechanics apply to the electronic properties of materials,” said Daniel Mazzone, a former Brookhaven Lab physicist who led the study and is now at the Paul Scherrer Institut in Switzerland. “They were trying to work out what happens as you make the electronic ‘energy gap’ between an insulator and a conductor smaller and smaller. Do you just change a simple insulator into a simple metal where the electrons can move freely, or does something more interesting happen?”
    The prediction was that, under certain conditions, you could get something more interesting: namely, the “antiferromagnetic excitonic insulator” just discovered by the Brookhaven team.
    Why is this material so exotic and interesting? To understand, let’s dive into those terms and explore how this new state of matter forms.

  • Monitoring Arctic permafrost with satellites, supercomputers, and deep learning

    Permafrost (ground that has remained frozen for two or more consecutive years) makes up a large part of the Earth, around 15% of the Northern Hemisphere.
    Permafrost is important for our climate, locking up large amounts of carbon from ancient biomass and making tundra soil a carbon sink. However, permafrost’s innate characteristics and changing nature are not broadly understood.
    As global warming heats the Earth and causes soil thawing, the permafrost carbon cycle is expected to accelerate and release soil-contained greenhouse gases into the atmosphere, creating a feedback loop that will exacerbate climate change.
    Remote sensing is one way of getting a handle on the breadth, dynamics, and changes to permafrost. “It’s like a virtual passport to see this remote and difficult to reach part of the world,” says Chandi Witharana, assistant professor of Natural Resources & the Environment at the University of Connecticut. “Satellite imaging helps us monitor remote landscape in a detailed manner that we never had before.”
    Over the past two decades, much of the Arctic has been mapped with extreme precision by commercial satellites. These maps are a treasure trove of data about this largely underexplored region. But the data is so large and unwieldy that it makes scholarship difficult, Witharana says.
    With funding and support from the U.S. National Science Foundation (NSF) as part of the “Navigating the New Arctic” program, Witharana, together with Kenton McHenry from the National Center for Supercomputing Applications and Arctic researcher Anna Liljedahl of the Woodwell Climate Research Center, is making data about Arctic permafrost much more accessible.
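    The “deep learning” in the story’s title typically means running a convolutional segmentation network over huge numbers of satellite image tiles to label permafrost features pixel by pixel. The following is an illustrative PyTorch sketch of that general pattern (band count, tile size and architecture are assumptions, not the project’s actual pipeline):

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy encoder-decoder that assigns a class to every pixel of a satellite
    tile (e.g. permafrost feature vs. background). Illustrative only."""
    def __init__(self, in_bands=4, n_classes=2):   # 4 bands: e.g. R, G, B, NIR
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(in_bands, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # downsample 2x
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, n_classes, 1),           # per-pixel class scores
        )

    def forward(self, x):
        return self.decode(self.encode(x))

tile = torch.randn(1, 4, 256, 256)    # one synthetic 256x256 four-band tile
logits = TinySegNet()(tile)
print(logits.shape)                   # torch.Size([1, 2, 256, 256])
```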

  • Insect wingbeats will help quantify biodiversity

    Insect populations are plummeting worldwide, with major consequences for our ecosystems and without us quite knowing why. A new AI method from the University of Copenhagen is set to help monitor and catalogue insect biodiversity, which until now has been quite challenging.
    Insects are vital as plant pollinators, as a food source for a wide variety of animals and as decomposers of dead material in nature. But in recent decades, they have been struggling. It is estimated that 40 percent of insect species are in decline and a third of them are endangered.
    Therefore, it is more important than ever to monitor insect biodiversity, both to understand insects’ decline and, hopefully, to help them. So far, this task has been difficult and resource-intensive. In part, this is because insects are small and very mobile. Moreover, monitoring has traditionally required researchers and public agencies to set up traps, capture insects and study them under the microscope.
    To overcome these hurdles, University of Copenhagen researchers have developed a method that uses the data obtained from an infrared sensor to recognize and detect the wingbeats of individual insects. The AI method is based on unsupervised machine learning — where the algorithms can group insects belonging to the same species without any human input. The results from this method could provide information about the diversity of insect species in a natural space without anyone needing to catch and count the critters by hand.
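    A plausible minimal version of that pipeline (a sketch under assumptions about the sensor signal, not the Copenhagen group’s code): reduce each recorded insect pass to its wingbeat spectrum, then let an unsupervised clusterer group similar spectra without any labels.

```python
import numpy as np
from sklearn.cluster import KMeans

FS = 5000                        # assumed sensor sampling rate, Hz
t = np.arange(0, 0.2, 1 / FS)    # 0.2 s of signal per insect pass
rng = np.random.default_rng(0)

def insect(f0):
    """Fake sensor trace: wingbeat fundamental f0, one harmonic, noise."""
    return (np.sin(2 * np.pi * f0 * t)
            + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
            + 0.1 * rng.standard_normal(t.size))

def wingbeat_features(signal):
    """Normalised magnitude spectrum up to 1 kHz: frequency plus harmonics."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / FS)
    feats = spectrum[freqs <= 1000]
    return feats / (np.linalg.norm(feats) + 1e-12)

# 20 passes each from two hypothetical "species" beating at 120 Hz vs 440 Hz.
X = np.array([wingbeat_features(insect(f)) for f in [120] * 20 + [440] * 20])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # two clean clusters recovered with no species labels (unsupervised)
```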
    “Our method makes it much easier to keep track of how insect populations are evolving. There has been a huge loss of insect biomass in recent years. But until we know exactly why insects are in decline, it is difficult to develop the right solutions. This is where our method can contribute new and important knowledge,” states PhD student Klas Rydhmer of the Department of Geosciences and Natural Resource Management at UCPH’s Faculty of Science, who helped develop the method.
    Advanced artificial intelligence
    The researchers had already developed an algorithm that identifies pests in agricultural fields. Building upon it, they developed this new algorithm to identify and count various insect populations in nature based on the measurements obtained from the sensor.

  • Physicists harness electrons to make 'synthetic dimensions'

    Our spatial sense doesn’t extend beyond the familiar three dimensions, but that doesn’t stop scientists from playing with whatever lies beyond.
    Rice University physicists are pushing spatial boundaries in new experiments. They’ve learned to control electrons in gigantic Rydberg atoms with such precision they can create “synthetic dimensions,” important tools for quantum simulations.
    The Rice team developed a technique to engineer the Rydberg states of ultracold strontium atoms by applying resonant microwave electric fields to couple many states together. A Rydberg state occurs when one electron in the atom is energetically bumped up to a highly excited state, supersizing its orbit to make the atom thousands of times larger than normal.
    Ultracold Rydberg atoms are about a millionth of a degree above absolute zero. By precisely and flexibly manipulating the electron motion, Rice Quantum Initiative researchers coupled latticelike Rydberg levels in ways that simulate aspects of real materials. The techniques could also help realize systems that can’t be achieved in real three-dimensional space, creating a powerful new platform for quantum research.
    Rice physicists Tom Killian, Barry Dunning and Kaden Hazzard, all members of the initiative, detailed the research along with lead author and graduate student Soumya Kanungo in a paper published in Nature Communications. The study built off previous work on Rydberg atoms that Killian and Dunning first explored in 2018.
    Rydberg atoms possess many regularly spaced quantum energy levels, which can be coupled by microwaves that allow the highly excited electron to move from level to level. Dynamics in this “synthetic dimension” are mathematically equivalent to a particle moving between lattice sites in a real crystal.
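    That equivalence is easy to state precisely: N microwave-coupled Rydberg levels behave like N lattice sites in a tight-binding model, with the microwave amplitude playing the role of the hopping strength. A minimal sketch of the mathematics (illustrative, not the Rice group's code):

```python
import numpy as np

N = 10   # number of coupled Rydberg levels = synthetic "lattice sites"
J = 1.0  # hopping amplitude, set in the lab by the microwave coupling strength

# Tight-binding Hamiltonian on an open chain: H[i, i+1] = H[i+1, i] = J
H = J * (np.eye(N, k=1) + np.eye(N, k=-1))

energies, modes = np.linalg.eigh(H)
print(energies)   # the band E_m = 2J*cos(m*pi/(N+1)), exactly as in a 1D crystal

# An electron starting in level 0 "hops" along the synthetic dimension over time.
psi0 = np.zeros(N)
psi0[0] = 1.0
t = 2.0
psi_t = modes @ (np.exp(-1j * energies * t) * (modes.conj().T @ psi0))
print(np.round(np.abs(psi_t) ** 2, 3))   # occupation spreading across the levels
```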

  • Artificial intelligence tutoring outperforms expert instructors in neurosurgical training

    The COVID-19 pandemic has presented both challenges and opportunities for medical training. Remote learning technology has become increasingly important in several fields. A new study finds that in a remote environment, an artificial intelligence (AI) tutoring system can outperform expert human instructors.
    The Neurosurgical Simulation and Artificial Intelligence Learning Centre at The Neuro (Montreal Neurological Institute-Hospital) recruited seventy medical students to perform virtual brain tumour removals on a neurosurgical simulator. Students were randomly assigned to receive instruction and feedback by either an AI tutor or a remote expert instructor, with a third control group receiving no instruction.
    An AI-powered tutor called the Virtual Operative Assistant (VOA) used a machine learning algorithm to teach safe and efficient surgical technique and provided personalized feedback, while a deep learning Intelligent Continuous Expertise Monitoring System (ICEMS) and a panel of experts assessed student performance.
    In the other group, remote instructors watched a live feed of the surgical simulations and provided feedback based on the student’s performance.
    The researchers found that students who received VOA instruction and feedback learned surgical skills 2.6 times faster and achieved 36 per cent better performance compared to those who received instruction and feedback from remote instructors. And while researchers expected students instructed by VOA to experience greater stress and negative emotion, they found no significant difference between the two groups.
    Surgical skill plays an important role in patient outcomes both during and after brain surgery. VOA may be an effective way to increase neurosurgeon performance, improving patient safety while reducing the burden on human instructors.
    “Artificially intelligent tutors like the VOA may become a valuable tool in the training of the next generation of neurosurgeons,” says Dr. Rolando Del Maestro, the study’s senior author. “The VOA significantly improved expertise while fostering an excellent learning environment. Ongoing studies are assessing how in-person instructors and AI-powered intelligent tutors can most effectively be used together to improve the mastery of neurosurgical skills.”
    “Intelligent tutoring systems can use a variety of simulation platforms to provide almost unlimited chances for repetitive practice without the constraints imposed by the availability of supervision,” says Ali Fazlollahi, the study’s first author. “With continued research, increased development, and dissemination of intelligent tutoring systems, we can be better prepared for ever-evolving future challenges.”
    This study, published in the Journal of the American Medical Association (JAMA Network Open) on Feb. 22, 2022, was funded by the Franco Di Giovanni Foundation, the Royal College of Physicians and Surgeons of Canada, and the Brain Tumour Foundation of Canada Tumour Research Grant along with The Neuro. Cognitive assessment was led by Dr. Jason Harley at McGill University’s Department of Surgery.
    The Neuro
    The Neuro – The Montreal Neurological Institute-Hospital – is a bilingual, world-leading destination for brain research and advanced patient care. Since its founding in 1934 by renowned neurosurgeon Dr. Wilder Penfield, The Neuro has grown to be the largest specialized neuroscience research and clinical center in Canada, and one of the largest in the world. The seamless integration of research, patient care, and training of the world’s top minds makes The Neuro uniquely positioned to have a significant impact on the understanding and treatment of nervous system disorders. In 2016, The Neuro became the first institute in the world to fully embrace the Open Science philosophy, creating the Tanenbaum Open Science Institute. The Montreal Neurological Institute is a McGill University research and teaching institute. The Montreal Neurological Hospital is part of the Neuroscience Mission of the McGill University Health Centre. For more information, please visit www.theneuro.ca
    Story Source:
    Materials provided by McGill University. Note: Content may be edited for style and length.

  • Can machine-learning models overcome biased datasets?

    Artificial intelligence systems may be able to complete tasks quickly, but that doesn’t mean they always do so fairly. If the datasets used to train machine-learning models contain biased data, the system is likely to exhibit that same bias when it makes decisions in practice.
    For instance, if a dataset contains mostly images of white men, then a facial-recognition model trained with this data may be less accurate for women or people with different skin tones.
    A group of researchers at MIT, in collaboration with researchers at Harvard University and Fujitsu, Ltd., sought to understand when and how a machine-learning model is capable of overcoming this kind of dataset bias. They used an approach from neuroscience to study how training data affects whether an artificial neural network can learn to recognize objects it has not seen before. A neural network is a machine-learning model that mimics the human brain in the way it contains layers of interconnected nodes, or “neurons,” that process data.
    The new results show that diversity in training data has a major influence on whether a neural network is able to overcome bias, but at the same time dataset diversity can degrade the network’s performance. They also show that how a neural network is trained, and the specific types of neurons that emerge during the training process, can play a major role in whether it is able to overcome a biased dataset.
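    The core effect is easy to reproduce on synthetic data (an illustration of dataset bias in general, not the MIT/Harvard/Fujitsu experiments): train a classifier on a set where one group dominates, then score each group separately.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, class_axis):
    """Two-class blobs whose class signal lives on one feature axis; groups
    expressing the label on different axes mimic under-represented subgroups."""
    y = rng.integers(0, 2, n)
    X = rng.standard_normal((n, 2))
    X[:, class_axis] += 2.5 * y      # class separation along this axis
    return X, y

# Biased training set: 950 samples from group A, only 50 from group B.
Xa, ya = make_group(950, class_axis=0)
Xb, yb = make_group(50, class_axis=1)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced test sets expose the gap the biased training data created.
Xta, yta = make_group(2000, class_axis=0)
Xtb, ytb = make_group(2000, class_axis=1)
print("group A accuracy:", model.score(Xta, yta))   # high
print("group B accuracy:", model.score(Xtb, ytb))   # far lower, near chance
```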
    “A neural network can overcome dataset bias, which is encouraging. But the main takeaway here is that we need to take into account data diversity. We need to stop thinking that if you just collect a ton of raw data, that is going to get you somewhere. We need to be very careful about how we design datasets in the first place,” says Xavier Boix, a research scientist in the Department of Brain and Cognitive Sciences (BCS) and the Center for Brains, Minds, and Machines (CBMM), and senior author of the paper.
    Co-authors include former graduate students Spandan Madan, a corresponding author who is currently pursuing a PhD at Harvard, Timothy Henry, Jamell Dozier, Helen Ho, and Nishchal Bhandari; Tomotake Sasaki, a former visiting scientist now a researcher at Fujitsu; Frédo Durand, a professor of electrical engineering and computer science and a member of the Computer Science and Artificial Intelligence Laboratory; and Hanspeter Pfister, the An Wang Professor of Computer Science at the Harvard School of Engineering and Applied Sciences. The research appears today in Nature Machine Intelligence.

  • Hiddenite: A new AI processor for reduced computational power consumption based on a cutting-edge neural network theory

    A new accelerator chip called “Hiddenite” that can achieve state-of-the-art accuracy in the calculation of sparse “hidden neural networks” with lower computational burdens has now been developed by Tokyo Tech researchers. By employing the proposed on-chip model construction, which is the combination of weight generation and “supermask” expansion, the Hiddenite chip drastically reduces external memory access for enhanced computational efficiency.
    Deep neural networks (DNNs) are complex machine learning architectures for artificial intelligence (AI) that require numerous parameters to learn to predict outputs. DNNs can, however, be “pruned,” thereby reducing the computational burden and model size. A few years ago, the “lottery ticket hypothesis” took the machine learning world by storm. The hypothesis stated that a randomly initialized DNN contains subnetworks that achieve accuracy equivalent to the original DNN after training. The larger the network, the more “lottery tickets” for successful optimization. These lottery tickets thus allow “pruned” sparse neural networks to achieve accuracies equivalent to more complex, “dense” networks, thereby reducing overall computational burdens and power consumption.
    One technique to find such subnetworks is the hidden neural network (HNN) algorithm, which uses AND logic (where the output is only high when all the inputs are high) on the initialized random weights and a “binary mask” called a “supermask”. The supermask, defined by the top-k% highest scores, denotes the unselected and selected connections as 0 and 1, respectively. The HNN improves computational efficiency from the software side. However, the computation of neural networks also requires improvements in the hardware components.
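    In outline, the supermask mechanics look like this (a minimal sketch of the idea described above, not Hiddenite's implementation): the weights keep their random initial values forever, and a binary mask built from the top-k% of learned scores decides which connections participate.

```python
import numpy as np

rng = np.random.default_rng(42)

def masked_layer(shape, scores, k_percent):
    """Hidden-neural-network layer: frozen random weights, gated by a binary
    'supermask' that keeps only the top-k% highest-scoring connections."""
    weights = rng.standard_normal(shape)                  # never trained
    threshold = np.percentile(scores, 100 - k_percent)
    supermask = (scores >= threshold).astype(np.float32)  # 1 = selected, 0 = not
    return weights * supermask                            # effective sparse weights

scores = rng.random((4, 4))   # in a real HNN these scores are learned, not random
effective = masked_layer((4, 4), scores, k_percent=25)
print(effective)              # about 25% of entries survive; the rest are pruned
```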
    Traditional DNN accelerators offer high performance, but they do not consider the power consumption caused by external memory access. Now, researchers from Tokyo Institute of Technology (Tokyo Tech), led by Professors Jaehoon Yu and Masato Motomura, have developed a new accelerator chip called “Hiddenite,” which can calculate hidden neural networks with drastically reduced power consumption. “Reducing the external memory access is the key to reducing power consumption. Currently, achieving high inference accuracy requires large models. But this increases external memory access to load model parameters. Our main motivation behind the development of Hiddenite was to reduce this external memory access,” explains Prof. Motomura. Their study will feature in the upcoming International Solid-State Circuits Conference (ISSCC) 2022, an international conference showcasing the pinnacles of achievement in integrated circuits.
    “Hiddenite” stands for Hidden Neural Network Inference Tensor Engine and is the first HNN inference chip. The Hiddenite architecture offers three-fold benefits to reduce external memory access and achieve high energy efficiency. The first is that it offers on-chip weight generation for re-generating weights by using a random number generator. This eliminates the need to access the external memory and store the weights. The second benefit is the provision of the “on-chip supermask expansion,” which reduces the number of supermasks that need to be loaded by the accelerator. The third improvement offered by the Hiddenite chip is the high-density four-dimensional (4D) parallel processor that maximizes data re-use during the computational process, thereby improving efficiency.
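    The first benefit hinges on a simple property of pseudo-random numbers: they are reproducible from a seed, so only the seed needs storing and the weights can be re-created wherever they are needed instead of being fetched from external memory. A conceptual sketch (a software PRNG standing in for the chip's hardware generator):

```python
import numpy as np

SEED = 1234   # the only "model weights" that must actually be stored

def regenerate_weights(seed, shape):
    """Re-create identical random weights from a seed, emulating
    Hiddenite-style on-chip weight generation (no external weight fetch)."""
    return np.random.default_rng(seed).standard_normal(shape)

w_first = regenerate_weights(SEED, (256, 256))
w_again = regenerate_weights(SEED, (256, 256))
print(np.array_equal(w_first, w_again))             # True: bit-identical
print(w_first.nbytes, "bytes never fetched from off-chip memory")
```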
    “The first two factors are what set the Hiddenite chip apart from existing DNN inference accelerators,” reveals Prof. Motomura. “Moreover, we also introduced a new training method for hidden neural networks, called ‘score distillation,’ in which the conventional knowledge distillation weights are distilled into the scores because hidden neural networks never update the weights. The accuracy using score distillation is comparable to the binary model while being half the size of the binary model.”
    Based on the Hiddenite architecture, the team has designed, fabricated, and measured a prototype chip with Taiwan Semiconductor Manufacturing Company’s (TSMC) 40nm process. The chip is only 3mm x 3mm and handles 4,096 MAC (multiply-and-accumulate) operations at once. It achieves a state-of-the-art level of computational efficiency, up to 34.8 tera-operations per second per watt (TOPS/W), while reducing the amount of model transfer to half that of binarized networks.
    These findings and their successful demonstration in a real silicon chip are sure to cause another paradigm shift in the world of machine learning, paving the way for faster, more efficient, and ultimately more environmentally friendly computing.
    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.