More stories

  • Physicists harness electrons to make 'synthetic dimensions'

    Our spatial sense doesn’t extend beyond the familiar three dimensions, but that doesn’t stop scientists from playing with whatever lies beyond.
    Rice University physicists are pushing spatial boundaries in new experiments. They’ve learned to control electrons in gigantic Rydberg atoms with such precision they can create “synthetic dimensions,” important tools for quantum simulations.
    The Rice team developed a technique to engineer the Rydberg states of ultracold strontium atoms by applying resonant microwave electric fields to couple many states together. A Rydberg state occurs when one electron in the atom is energetically bumped up to a highly excited state, supersizing its orbit to make the atom thousands of times larger than normal.
    Ultracold Rydberg atoms are about a millionth of a degree above absolute zero. By precisely and flexibly manipulating the electron motion, Rice Quantum Initiative researchers coupled latticelike Rydberg levels in ways that simulate aspects of real materials. The techniques could also help realize systems that can’t be achieved in real three-dimensional space, creating a powerful new platform for quantum research.
    Rice physicists Tom Killian, Barry Dunning and Kaden Hazzard, all members of the initiative, detailed the research along with lead author and graduate student Soumya Kanungo in a paper published in Nature Communications. The study built off previous work on Rydberg atoms that Killian and Dunning first explored in 2018.
    Rydberg atoms possess many regularly spaced quantum energy levels, which can be coupled by microwaves that allow the highly excited electron to move from level to level. Dynamics in this “synthetic dimension” are mathematically equivalent to a particle moving between lattice sites in a real crystal.
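    The equivalence described here is the standard tight-binding model: each Rydberg level plays the role of a lattice site, and the microwave coupling plays the role of the hopping amplitude between neighboring sites. A minimal numerical sketch (the level count and coupling strength are illustrative, not values from the paper):

```python
import numpy as np

# Illustrative tight-binding model: N "lattice sites" are really
# N adjacent Rydberg levels, and the hopping amplitude t stands in
# for the resonant microwave coupling strength (values invented).
N = 10          # number of coupled Rydberg levels ("sites")
t = 1.0         # microwave-induced coupling ("hopping") amplitude

# Hamiltonian: hopping between neighboring levels only.
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = -t

# The spectrum reproduces the open-chain tight-binding band,
# E_k = -2t cos(k) with k = m*pi/(N+1), m = 1..N.
energies = np.linalg.eigvalsh(H)
ks = np.arange(1, N + 1) * np.pi / (N + 1)
assert np.allclose(np.sort(energies), np.sort(-2 * t * np.cos(ks)))
```

    Because the same matrix describes a particle hopping on a real one-dimensional lattice, anything engineered in the microwave couplings (disorder, varying hopping strengths) maps directly onto a simulated material.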

  • Artificial intelligence tutoring outperforms expert instructors in neurosurgical training

    The COVID-19 pandemic has presented both challenges and opportunities for medical training. Remote learning technology has become increasingly important in several fields. A new study finds that in a remote environment, an artificial intelligence (AI) tutoring system can outperform expert human instructors.
    The Neurosurgical Simulation and Artificial Intelligence Learning Centre at The Neuro (Montreal Neurological Institute-Hospital) recruited seventy medical students to perform virtual brain tumour removals on a neurosurgical simulator. Students were randomly assigned to receive instruction and feedback from either an AI tutor or a remote expert instructor, with a third control group receiving no instruction.
    An AI-powered tutor called the Virtual Operative Assistant (VOA) used a machine learning algorithm to teach safe and efficient surgical technique and provided personalized feedback, while a deep learning Intelligent Continuous Expertise Monitoring System (ICEMS) and a panel of experts assessed student performance.
    In the other group, remote instructors watched a live feed of the surgical simulations and provided feedback based on the student’s performance.
    The researchers found that students who received VOA instruction and feedback learned surgical skills 2.6 times faster and achieved 36 per cent better performance compared to those who received instruction and feedback from remote instructors. And while researchers expected students instructed by VOA to experience greater stress and negative emotion, they found no significant difference between the two groups.
    Surgical skill plays an important role in patient outcomes both during and after brain surgery. VOA may be an effective way to increase neurosurgeon performance, improving patient safety while reducing the burden on human instructors.
    “Artificially intelligent tutors like the VOA may become a valuable tool in the training of the next generation of neurosurgeons,” says Dr. Rolando Del Maestro, the study’s senior author. “The VOA significantly improved expertise while fostering an excellent learning environment. Ongoing studies are assessing how in-person instructors and AI-powered intelligent tutors can most effectively be used together to improve the mastery of neurosurgical skills.”
    “Intelligent tutoring systems can use a variety of simulation platforms to provide almost unlimited chances for repetitive practice without the constraints imposed by the availability of supervision,” says Ali Fazlollahi, the study’s first author. “With continued research, increased development, and dissemination of intelligent tutoring systems, we can be better prepared for ever-evolving future challenges.”
    This study, published in the Journal of the American Medical Association (JAMA Network Open) on Feb. 22, 2022, was funded by the Franco Di Giovanni Foundation, the Royal College of Physicians and Surgeons of Canada, and the Brain Tumour Foundation of Canada Tumour Research Grant along with The Neuro. Cognitive assessment was led by Dr. Jason Harley at McGill University’s Department of Surgery.
    The Neuro
    The Neuro – The Montreal Neurological Institute-Hospital – is a bilingual, world-leading destination for brain research and advanced patient care. Since its founding in 1934 by renowned neurosurgeon Dr. Wilder Penfield, The Neuro has grown to be the largest specialized neuroscience research and clinical center in Canada, and one of the largest in the world. The seamless integration of research, patient care, and training of the world’s top minds make The Neuro uniquely positioned to have a significant impact on the understanding and treatment of nervous system disorders. In 2016, The Neuro became the first institute in the world to fully embrace the Open Science philosophy, creating the Tanenbaum Open Science Institute. The Montreal Neurological Institute is a McGill University research and teaching institute. The Montreal Neurological Hospital is part of the Neuroscience Mission of the McGill University Health Centre. For more information, please visit www.theneuro.ca
    Story Source:
    Materials provided by McGill University. Note: Content may be edited for style and length.

  • Can machine-learning models overcome biased datasets?

    Artificial intelligence systems may be able to complete tasks quickly, but that doesn’t mean they always do so fairly. If the datasets used to train machine-learning models contain biased data, it is likely the system could exhibit that same bias when it makes decisions in practice.
    For instance, if a dataset contains mostly images of white men, then a facial-recognition model trained with this data may be less accurate for women or people with different skin tones.
    A group of researchers at MIT, in collaboration with researchers at Harvard University and Fujitsu, Ltd., sought to understand when and how a machine-learning model is capable of overcoming this kind of dataset bias. They used an approach from neuroscience to study how training data affects whether an artificial neural network can learn to recognize objects it has not seen before. A neural network is a machine-learning model that loosely mimics the human brain, with layers of interconnected nodes, or “neurons,” that process data.
    The new results show that diversity in training data has a major influence on whether a neural network is able to overcome bias, but at the same time dataset diversity can degrade the network’s performance. They also show that how a neural network is trained, and the specific types of neurons that emerge during the training process, can play a major role in whether it is able to overcome a biased dataset.
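    A simple way to surface the kind of bias described above is to break a model's accuracy down by subgroup instead of reporting one aggregate number. A hedged sketch with synthetic evaluation results (the group names, sizes, and error rates are invented to mimic a biased model, not data from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic evaluation log: each example belongs to a demographic
# group and carries a flag for whether the model got it right.
# The per-group accuracies are invented for illustration.
groups = rng.choice(["majority", "minority"], size=1000, p=[0.9, 0.1])
correct = np.where(groups == "majority",
                   rng.random(1000) < 0.95,   # ~95% accuracy on majority
                   rng.random(1000) < 0.70)   # ~70% accuracy on minority

for g in ("majority", "minority"):
    mask = groups == g
    print(f"{g}: share of data {mask.mean():.0%}, "
          f"accuracy {correct[mask].mean():.0%}")
```

    The aggregate accuracy here would look high because the majority group dominates the dataset; only the per-group breakdown reveals the disparity.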
    “A neural network can overcome dataset bias, which is encouraging. But the main takeaway here is that we need to take into account data diversity. We need to stop thinking that if you just collect a ton of raw data, that is going to get you somewhere. We need to be very careful about how we design datasets in the first place,” says Xavier Boix, a research scientist in the Department of Brain and Cognitive Sciences (BCS) and the Center for Brains, Minds, and Machines (CBMM), and senior author of the paper.
    Co-authors include former graduate students Spandan Madan, a corresponding author who is currently pursuing a PhD at Harvard, Timothy Henry, Jamell Dozier, Helen Ho, and Nishchal Bhandari; Tomotake Sasaki, a former visiting scientist now a researcher at Fujitsu; Frédo Durand, a professor of electrical engineering and computer science and a member of the Computer Science and Artificial Intelligence Laboratory; and Hanspeter Pfister, the An Wang Professor of Computer Science at the Harvard School of Engineering and Applied Sciences. The research appears today in Nature Machine Intelligence.

  • Hiddenite: A new AI processor for reduced computational power consumption based on a cutting-edge neural network theory

    A new accelerator chip called “Hiddenite” that can achieve state-of-the-art accuracy in the calculation of sparse “hidden neural networks” with lower computational burdens has now been developed by Tokyo Tech researchers. By employing the proposed on-chip model construction, which is the combination of weight generation and “supermask” expansion, the Hiddenite chip drastically reduces external memory access for enhanced computational efficiency.
    Deep neural networks (DNNs) are complex machine learning architectures for artificial intelligence (AI) that require numerous parameters to learn to predict outputs. DNNs can, however, be “pruned,” thereby reducing the computational burden and model size. A few years ago, the “lottery ticket hypothesis” took the machine learning world by storm. The hypothesis stated that a randomly initialized DNN contains subnetworks that achieve accuracy equivalent to the original DNN after training. The larger the network, the more “lottery tickets” for successful optimization. These lottery tickets thus allow “pruned” sparse neural networks to achieve accuracies equivalent to more complex, “dense” networks, thereby reducing overall computational burdens and power consumption.
    One technique to find such subnetworks is the hidden neural network (HNN) algorithm, which uses AND logic (where the output is only high when all the inputs are high) on the initialized random weights and a “binary mask” called a “supermask”. The supermask, defined by the top-k% highest scores, denotes the unselected and selected connections as 0 and 1, respectively. The HNN improves computational efficiency on the software side. However, the computation of neural networks also requires improvements in the hardware components.
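    The supermask construction above can be sketched directly: a score ranks each connection of a fixed, randomly initialized weight matrix, the top-k% highest-scoring connections get mask value 1 and the rest get 0, and the effective subnetwork is the elementwise AND of weights and mask. A minimal illustration (the matrix size and 50% threshold are arbitrary, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Fixed random weights: never trained, and on a chip like Hiddenite
# never even stored, since they can be re-generated from an RNG seed.
weights = rng.standard_normal((4, 4))

# Learned scores rank each connection; training updates these only.
scores = rng.standard_normal((4, 4))

def supermask(scores, k_percent):
    """Binary mask keeping the top-k% highest-scoring connections."""
    k = int(scores.size * k_percent / 100)
    threshold = np.sort(scores, axis=None)[-k]
    return (scores >= threshold).astype(int)

mask = supermask(scores, k_percent=50)
effective = weights * mask   # masked-out connections contribute nothing

assert mask.sum() == 8       # 50% of the 16 connections survive
```

    Because the weights themselves stay fixed, only the much smaller mask (and the scores that define it) ever needs to be stored or transferred, which is the property the Hiddenite hardware exploits.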
    Traditional DNN accelerators offer high performance, but they do not consider the power consumption caused by external memory access. Now, researchers from Tokyo Institute of Technology (Tokyo Tech), led by Professors Jaehoon Yu and Masato Motomura, have developed a new accelerator chip called “Hiddenite,” which can calculate hidden neural networks with drastically improved power consumption. “Reducing the external memory access is the key to reducing power consumption. Currently, achieving high inference accuracy requires large models. But this increases external memory access to load model parameters. Our main motivation behind the development of Hiddenite was to reduce this external memory access,” explains Prof. Motomura. Their study will feature in the upcoming International Solid-State Circuits Conference (ISSCC) 2022, an international conference showcasing the pinnacles of achievement in integrated circuits.
    “Hiddenite” stands for Hidden Neural Network Inference Tensor Engine and is the first HNN inference chip. The Hiddenite architecture offers three-fold benefits to reduce external memory access and achieve high energy efficiency. The first is on-chip weight generation, which re-generates the weights using a random number generator. This eliminates the need to store the weights in, and fetch them from, external memory. The second is “on-chip supermask expansion,” which reduces the number of supermasks the accelerator needs to load. The third is a high-density four-dimensional (4D) parallel processor that maximizes data re-use during computation, thereby improving efficiency.
    “The first two factors are what set the Hiddenite chip apart from existing DNN inference accelerators,” reveals Prof. Motomura. “Moreover, we also introduced a new training method for hidden neural networks, called ‘score distillation,’ in which the conventional knowledge distillation weights are distilled into the scores because hidden neural networks never update the weights. The accuracy using score distillation is comparable to the binary model while being half the size of the binary model.”
    Based on the Hiddenite architecture, the team designed, fabricated, and measured a prototype chip in Taiwan Semiconductor Manufacturing Company's (TSMC) 40 nm process. The chip measures only 3 mm × 3 mm and handles 4,096 multiply-and-accumulate (MAC) operations at once. It achieves a state-of-the-art level of computational efficiency, up to 34.8 trillion operations per second (TOPS) per watt, while reducing the amount of model transfer to half that of binarized networks.
    These findings and their successful exhibition in a real silicon chip are sure to cause another paradigm shift in the world of machine learning, paving the way for faster, more efficient, and ultimately more environment-friendly computing.
    Story Source:
    Materials provided by Tokyo Institute of Technology.

  • Versatile ‘nanocrystal gel’ could enable advances in energy, defense and telecommunications

    New applications in energy, defense and telecommunications could receive a boost after a team from The University of Texas at Austin created a new type of “nanocrystal gel” — a gel composed of tiny nanocrystals each 10,000 times smaller than the width of a human hair that are linked together into an organized network.
    The crux of the team’s discovery is that this new material is easily tunable. That is, it can be switched between two different states by changing the temperature. This means the material can work as an optical filter, absorbing different frequencies of light depending on whether it’s in a gelled state or not. So, it could be used, for example, on the outside of buildings to control heating or cooling dynamically. This type of optical filter also has applications for defense, particularly for thermal camouflage.
    The gels can be customized for these wide-ranging applications because both the nanocrystals and the molecular linkers that connect them into networks are designer components. Nanocrystals can be chemically tuned to be useful for routing communications through fiber-optic networks or keeping the temperature of spacecraft steady on remote planetary bodies. Linkers can be designed to cause gels to switch based on ambient temperature or detection of environmental toxins.
    “You could shift the apparent heat signature of an object by changing the infrared properties of its skin,” said Delia Milliron, professor and chair of the McKetta Department of Chemical Engineering in the Cockrell School of Engineering. “It could also be useful for telecommunications, which use infrared wavelengths.”
    The new research is published in a recent issue of the journal Science Advances.
    The team, led by graduate students Jiho Kang and Stephanie Valenzuela, did this work through the university’s Center for Dynamics and Control of Materials, a National Science Foundation Materials Research Science and Engineering Center that brings together engineers and scientists from across campus to collaborate on materials science research.
    The lab experiments allowed the team to watch the material switch back and forth between its two states, gel and not-gel (that is, free-floating nanocrystals suspended in liquid), triggered by specific temperature changes.
    Supercomputer simulations done at UT’s Texas Advanced Computing Center helped them to understand what was happening in the gel at the microscopic level when heat was applied. Based on theories of chemistry and physics, the simulations revealed the types of chemical bonds that hold the nanocrystals together in a network, and how those bonds break when hit with heat, causing the gel to break down.
    This is the second unique nanocrystal gel created by this team, and they continue to pursue advances in this arena. Kang is currently working to create a nanocrystal gel that can change between four states, making it even more versatile and useful. That gel would be a blend of two different types of nanocrystals, each able to switch between states in response to chemical signals or temperature changes. Such tunable nanocrystal gels are called “programmable” materials.
    Story Source:
    Materials provided by University of Texas at Austin.

  • Forget handheld virtual reality controllers: a smile, frown or clench will suffice

    Our face can unlock a smartphone, provide access to a secure building and speed up passport control at airports, verifying our identity for numerous purposes.
    An international team of researchers from Australia, New Zealand and India has taken facial recognition technology to the next level, using a person’s expression to manipulate objects in a virtual reality setting without the use of a handheld controller or touchpad.
    In a world-first study led by the University of Queensland, human-computer interaction experts used neural processing techniques to capture a person’s smile, frown and clenched jaw, using each expression to trigger specific actions in virtual reality environments.
    One of the researchers involved in the experiment, University of South Australia’s Professor Mark Billinghurst, says the system has been designed to recognise different facial expressions via an EEG headset.
    “A smile was used to trigger the ‘move’ command; a frown for the ‘stop’ command and a clench for the ‘action’ command, in place of a handheld controller performing these actions,” says Prof Billinghurst.
    “Essentially we are capturing common facial expressions such as anger, happiness and surprise and implementing them in a virtual reality environment.”
    The researchers designed three virtual environments — happy, neutral and scary — and measured each person’s cognitive and physiological state while they were immersed in each scenario.
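    Downstream of the EEG classifier, the mapping Prof. Billinghurst describes is a simple dispatch from recognized expression to VR command. A hedged sketch of that layer (the function and dictionary names are invented; the real classifier that produces the expression label is stubbed out entirely):

```python
# Hypothetical dispatch layer: the real system classifies EEG signals
# into expressions; here only the expression-to-command mapping from
# the article is modeled.
EXPRESSION_TO_COMMAND = {
    "smile": "move",     # smile triggers the 'move' command
    "frown": "stop",     # frown triggers the 'stop' command
    "clench": "action",  # jaw clench triggers the 'action' command
}

def handle_expression(expression: str) -> str:
    """Map a recognized facial expression to its VR command."""
    try:
        return EXPRESSION_TO_COMMAND[expression]
    except KeyError:
        return "idle"    # unrecognized expression: do nothing

assert handle_expression("smile") == "move"
assert handle_expression("blink") == "idle"
```

    The hard part of the system is, of course, the classification step that turns noisy EEG signals into one of these labels; the command mapping itself is deliberately trivial so users can learn it quickly.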

  • CROPSR: A new tool to accelerate genetic discoveries

    Commercially viable biofuel crops are vital to reducing greenhouse gas emissions, and a new tool developed by the Center for Advanced Bioenergy and Bioproducts Innovation (CABBI) should accelerate their development — as well as genetic editing advances overall.
    The genomes of crops are tailored by generations of breeding to optimize specific traits, and until recently breeders were limited to selection on naturally occurring diversity. CRISPR/Cas9 gene-editing technology can change this, but the software tools necessary for designing and evaluating CRISPR experiments have so far been based on the needs of editing in mammalian genomes, which don’t share the same characteristics as complex crop genomes.
    Enter CROPSR, the first open-source software tool for genome-wide design and evaluation of guide RNA (gRNA) sequences for CRISPR experiments, created by scientists at CABBI, a Department of Energy-funded Bioenergy Research Center (BRC). The genome-wide approach significantly shortens the time required to design a CRISPR experiment, reducing the challenge of working with crops and accelerating gRNA sequence design, evaluation, and validation, according to the study published in BMC Bioinformatics.
    “CROPSR provides the scientific community with new methods and a new workflow for performing CRISPR/Cas9 knockout experiments,” said CROPSR developer Hans Müller Paul, a molecular biologist and Ph.D. student with co-author Matthew Hudson, Professor of Crop Sciences at the University of Illinois Urbana-Champaign. “We hope that the new software will accelerate discovery and reduce the number of failed experiments.”
    To better meet the needs of crop geneticists, the team built software that lifts restrictions imposed by other packages on design and evaluation of gRNA sequences, the guides used to locate targeted genetic material. Team members also developed a new machine learning model that would not avoid guides for repetitive genomic regions often found in plants, a problem with existing tools. The CROPSR scoring model provided much more accurate predictions, even in non-crop genomes, the authors said.
    “The goal was to incorporate features to make life easier for the scientist,” Müller Paul said.
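    For context, the most basic step any gRNA-design tool performs is scanning a sequence for Cas9's NGG PAM motif and taking the bases immediately 5' of it as a candidate guide. A minimal sketch of that generic CRISPR/Cas9 rule (this illustrates the textbook scan only, not CROPSR's actual genome-wide scoring pipeline):

```python
def candidate_guides(sequence: str, guide_len: int = 20):
    """Yield (position, guide, PAM) for every NGG PAM on the forward
    strand, where the guide is the guide_len bases 5' of the PAM."""
    sequence = sequence.upper()
    for i in range(guide_len, len(sequence) - 2):
        pam = sequence[i:i + 3]
        if pam[1:] == "GG":               # Cas9 PAM: any base, then GG
            yield i - guide_len, sequence[i - guide_len:i], pam

# Toy sequence with a single PAM ("TGG") placed after 24 bases.
seq = "ATGC" * 6 + "TGG" + "ACGT" * 3
hits = list(candidate_guides(seq))
assert all(len(g) == 20 and p.endswith("GG") for _, g, p in hits)
```

    Real tools then score each candidate for on-target efficiency and off-target risk; CROPSR's contribution is doing that genome-wide with a scoring model tuned for repetitive plant genomes rather than mammalian ones.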

  • Vortex microscope sees more than ever before

    Understanding the nitty-gritty of how molecules interact with each other in the real, messy, dynamic environment of a living body is a challenge that must be overcome in order to understand a host of diseases, such as Alzheimer’s.
    Until now, researchers could capture the motion of a single molecule, and they could capture its rotation — how it tumbles as it bumps into surrounding molecules — but only by compromising 3D resolution.
    Now, the lab of Matthew Lew, assistant professor of electrical and systems engineering at the McKelvey School of Engineering at Washington University in St. Louis, has developed an imaging method that provides an unprecedented look at a molecule as it spins and rolls through liquid, providing the most comprehensive picture yet of molecular dynamics collected using optical microscopes.
    The research was published in a special issue of the Journal of Physical Chemistry B. The Feb. 17, 2022, Festschrift is dedicated to Nobel laureate William E. (W.E.) Moerner, an imaging pioneer, Washington University alumnus and mentor to more than 100 students over the years, including Lew.
    Moerner was the first person to observe optical signatures of a single molecule; previously, researchers weren’t sure it was even possible to measure such signals.
    Now Lew’s lab is the first to be able to visualize the orientation and direction of a molecule’s rotational movement — how it spins and wobbles — while it’s in a liquid system.