More stories

  • Insect wingbeats will help quantify biodiversity

    Insect populations are plummeting worldwide, with major consequences for our ecosystems, and the reasons are still not fully understood. A new AI method from the University of Copenhagen is set to help monitor and catalogue insect biodiversity, a task that until now has been quite challenging.
    Insects are vital as plant pollinators, as a food source for a wide variety of animals and as decomposers of dead material in nature. But in recent decades, they have been struggling. It is estimated that 40 percent of insect species are in decline and a third of them are endangered.
    Monitoring insect biodiversity is therefore more important than ever, both to understand the decline and, hopefully, to reverse it. So far, the task has been difficult and resource-intensive, partly because insects are small and highly mobile, and partly because researchers and public agencies have had to set up traps, capture insects and examine them under a microscope.
    To overcome these hurdles, University of Copenhagen researchers have developed a method that uses data from an infrared sensor to detect and recognize the wingbeats of individual insects. The method is based on unsupervised machine learning: the algorithms group insects belonging to the same species without any human labelling. The results could provide information about the diversity of insect species in a natural space without anyone needing to catch and count the critters by hand.
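    The press release does not detail the group's pipeline, but the underlying idea of clustering wingbeat signals without labels can be sketched. In the illustrative Python below, each detected fly-by is reduced to a few spectral features (the fundamental wingbeat frequency and its harmonic content) and grouped with a density-based clusterer; the feature choices and parameters are assumptions for demonstration, not the published method.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

def wingbeat_features(signal, sample_rate=5000):
    """Reduce one insect fly-by (a 1-D sensor trace) to a small feature
    vector: fundamental wingbeat frequency plus relative harmonic power."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    band = (freqs > 20) & (freqs < 1000)   # typical insect wingbeat range
    f0 = freqs[band][np.argmax(spectrum[band])]
    def power(f):
        return spectrum[np.abs(freqs - f).argmin()]
    return [f0, power(2 * f0) / power(f0), power(3 * f0) / power(f0)]

def group_by_species(events):
    """events: list of 1-D arrays, one per detected insect pass. Returns a
    cluster label per event; DBSCAN needs no preset number of species."""
    X = StandardScaler().fit_transform(
        np.array([wingbeat_features(e) for e in events]))
    return DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
```

    Counting the members of each cluster would then give a species-level abundance estimate without anyone identifying insects by hand.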
    “Our method makes it much easier to keep track of how insect populations are evolving. There has been a huge loss of insect biomass in recent years. But until we know exactly why insects are in decline, it is difficult to develop the right solutions. This is where our method can contribute new and important knowledge,” states PhD student Klas Rydhmer of the Department of Geosciences and Natural Resource Management at UCPH’s Faculty of Science, who helped develop the method.
    Advanced artificial intelligence
    The researchers had already developed an algorithm that identifies pests in agricultural fields. Building on that work, the new algorithm goes beyond flagging pests: it identifies and counts the various insect populations in nature from the measurements obtained by the sensor.

  • Physicists harness electrons to make 'synthetic dimensions'

    Our spatial sense doesn’t extend beyond the familiar three dimensions, but that doesn’t stop scientists from playing with whatever lies beyond.
    Rice University physicists are pushing spatial boundaries in new experiments. They’ve learned to control electrons in gigantic Rydberg atoms with such precision they can create “synthetic dimensions,” important tools for quantum simulations.
    The Rice team developed a technique to engineer the Rydberg states of ultracold strontium atoms by applying resonant microwave electric fields to couple many states together. A Rydberg state occurs when one electron in the atom is energetically bumped up to a highly excited state, supersizing its orbit to make the atom thousands of times larger than normal.
    Ultracold Rydberg atoms are about a millionth of a degree above absolute zero. By precisely and flexibly manipulating the electron motion, Rice Quantum Initiative researchers coupled latticelike Rydberg levels in ways that simulate aspects of real materials. The techniques could also help realize systems that can’t be achieved in real three-dimensional space, creating a powerful new platform for quantum research.
    Rice physicists Tom Killian, Barry Dunning and Kaden Hazzard, all members of the initiative, detailed the research along with lead author and graduate student Soumya Kanungo in a paper published in Nature Communications. The study built on previous work on Rydberg atoms that Killian and Dunning first explored in 2018.
    Rydberg atoms possess many regularly spaced quantum energy levels, which can be coupled by microwaves that allow the highly excited electron to move from level to level. Dynamics in this “synthetic dimension” are mathematically equivalent to a particle moving between lattice sites in a real crystal.
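    That mathematical equivalence is straightforward to make concrete. The sketch below is an illustration, not the Rice group's code: it builds the tight-binding Hamiltonian for a chain of microwave-coupled levels, with each hopping amplitude J[n] standing in for a microwave coupling strength, and evolves an electron placed on the middle "site."

```python
import numpy as np
from scipy.linalg import expm

N = 10                    # number of coupled Rydberg levels ("lattice sites")
J = np.full(N - 1, 1.0)   # hopping amplitudes, set by microwave field strengths

# Tight-binding Hamiltonian: H[n, n+1] = H[n+1, n] = J[n]  (hbar = 1)
H = np.diag(J, 1) + np.diag(J, -1)

psi = np.zeros(N, dtype=complex)
psi[N // 2] = 1.0                      # electron starts on the middle level
for t in (0.0, 1.0, 2.0):
    psi_t = expm(-1j * H * t) @ psi    # Schrodinger time evolution
    print(f"t={t}: populations {np.round(np.abs(psi_t) ** 2, 3)}")
```

    Making the couplings alternate between strong and weak values would turn this uniform chain into a Su-Schrieffer-Heeger-type lattice, one example of the band-structure physics such synthetic dimensions can emulate.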

  • Artificial intelligence tutoring outperforms expert instructors in neurosurgical training

    The COVID-19 pandemic has presented both challenges and opportunities for medical training. Remote learning technology has become increasingly important in several fields. A new study finds that in a remote environment, an artificial intelligence (AI) tutoring system can outperform expert human instructors.
    The Neurosurgical Simulation and Artificial Intelligence Learning Centre at The Neuro (Montreal Neurological Institute-Hospital) recruited seventy medical students to perform virtual brain tumour removals on a neurosurgical simulator. Students were randomly assigned to receive instruction and feedback from either an AI tutor or a remote expert instructor, with a third control group receiving no instruction.
    An AI-powered tutor called the Virtual Operative Assistant (VOA) used a machine learning algorithm to teach safe and efficient surgical technique and provided personalized feedback, while a deep learning Intelligent Continuous Expertise Monitoring System (ICEMS) and a panel of experts assessed student performance.
    In the other group, remote instructors watched a live feed of the surgical simulations and provided feedback based on the student’s performance.
    The researchers found that students who received VOA instruction and feedback learned surgical skills 2.6 times faster and achieved 36 per cent better performance compared to those who received instruction and feedback from remote instructors. And while researchers expected students instructed by VOA to experience greater stress and negative emotion, they found no significant difference between the two groups.
    Surgical skill plays an important role in patient outcomes both during and after brain surgery. VOA may be an effective way to increase neurosurgeon performance, improving patient safety while reducing the burden on human instructors.
    “Artificially intelligent tutors like the VOA may become a valuable tool in the training of the next generation of neurosurgeons,” says Dr. Rolando Del Maestro, the study’s senior author. “The VOA significantly improved expertise while fostering an excellent learning environment. Ongoing studies are assessing how in-person instructors and AI-powered intelligent tutors can most effectively be used together to improve the mastery of neurosurgical skills.”
    “Intelligent tutoring systems can use a variety of simulation platforms to provide almost unlimited chances for repetitive practice without the constraints imposed by the availability of supervision,” says Ali Fazlollahi, the study’s first author. “With continued research, increased development, and dissemination of intelligent tutoring systems, we can be better prepared for ever-evolving future challenges.”
    This study, published in the Journal of the American Medical Association (JAMA Network Open) on Feb. 22, 2022, was funded by the Franco Di Giovanni Foundation, the Royal College of Physicians and Surgeons of Canada, and the Brain Tumour Foundation of Canada Tumour Research Grant along with The Neuro. Cognitive assessment was led by Dr. Jason Harley at McGill University’s Department of Surgery.
    The Neuro
    The Neuro – The Montreal Neurological Institute-Hospital – is a bilingual, world-leading destination for brain research and advanced patient care. Since its founding in 1934 by renowned neurosurgeon Dr. Wilder Penfield, The Neuro has grown to be the largest specialized neuroscience research and clinical centre in Canada, and one of the largest in the world. The seamless integration of research, patient care, and training of the world’s top minds makes The Neuro uniquely positioned to have a significant impact on the understanding and treatment of nervous system disorders. In 2016, The Neuro became the first institute in the world to fully embrace the Open Science philosophy, creating the Tanenbaum Open Science Institute. The Montreal Neurological Institute is a McGill University research and teaching institute. The Montreal Neurological Hospital is part of the Neuroscience Mission of the McGill University Health Centre. For more information, please visit www.theneuro.ca
    Story Source:
    Materials provided by McGill University.

  • Can machine-learning models overcome biased datasets?

    Artificial intelligence systems may be able to complete tasks quickly, but that doesn’t mean they always do so fairly. If the datasets used to train machine-learning models contain biased data, it is likely the system could exhibit that same bias when it makes decisions in practice.
    For instance, if a dataset contains mostly images of white men, then a facial-recognition model trained with this data may be less accurate for women or people with different skin tones.
    A group of researchers at MIT, in collaboration with researchers at Harvard University and Fujitsu, Ltd., sought to understand when and how a machine-learning model is capable of overcoming this kind of dataset bias. They used an approach from neuroscience to study how training data affects whether an artificial neural network can learn to recognize objects it has not seen before. A neural network is a machine-learning model that mimics the human brain in the way it contains layers of interconnected nodes, or “neurons,” that process data.
    The new results show that diversity in training data has a major influence on whether a neural network is able to overcome bias, but at the same time dataset diversity can degrade the network’s performance. They also show that how a neural network is trained, and the specific types of neurons that emerge during the training process, can play a major role in whether it is able to overcome a biased dataset.
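    The facial-recognition example above can be reproduced in miniature. The following toy sketch is hypothetical (it is not the paper's experiments): a classifier is trained on data in which a spurious feature agrees with the label with varying probability, then evaluated on an unbiased test set. The more biased the training data, the worse the model transfers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, bias):
    """Label depends (noisily) on feature 0; feature 1 is a spurious
    shortcut that agrees with the label with probability `bias`."""
    y = rng.integers(0, 2, n)
    signal = y + rng.normal(0.0, 1.0, n)           # weakly informative feature
    agree = rng.random(n) < bias
    spurious = np.where(agree, y, 1 - y) + rng.normal(0.0, 0.1, n)
    return np.column_stack([signal, spurious]), y

for bias in (0.5, 0.8, 0.99):                      # 0.5 = diverse, 0.99 = biased
    X_train, y_train = make_data(5000, bias)
    X_test, y_test = make_data(5000, 0.5)          # correlation broken at test
    model = LogisticRegression().fit(X_train, y_train)
    print(f"train bias {bias:.2f} -> unbiased test accuracy "
          f"{model.score(X_test, y_test):.2f}")
```

    With the training bias near 1.0, the model leans on the shortcut feature and its accuracy on the unbiased test set collapses, mirroring the failure mode described above.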
    “A neural network can overcome dataset bias, which is encouraging. But the main takeaway here is that we need to take into account data diversity. We need to stop thinking that if you just collect a ton of raw data, that is going to get you somewhere. We need to be very careful about how we design datasets in the first place,” says Xavier Boix, a research scientist in the Department of Brain and Cognitive Sciences (BCS) and the Center for Brains, Minds, and Machines (CBMM), and senior author of the paper.
    Co-authors include former graduate students Spandan Madan, a corresponding author who is currently pursuing a PhD at Harvard, Timothy Henry, Jamell Dozier, Helen Ho, and Nishchal Bhandari; Tomotake Sasaki, a former visiting scientist now a researcher at Fujitsu; Frédo Durand, a professor of electrical engineering and computer science and a member of the Computer Science and Artificial Intelligence Laboratory; and Hanspeter Pfister, the An Wang Professor of Computer Science at the Harvard School of Engineering and Applied Sciences. The research appears today in Nature Machine Intelligence.

  • Hiddenite: A new AI processor for reduced computational power consumption based on a cutting-edge neural network theory

    Tokyo Tech researchers have developed a new accelerator chip called “Hiddenite” that achieves state-of-the-art accuracy in the calculation of sparse “hidden neural networks” with lower computational burdens. By employing the proposed on-chip model construction, which combines weight generation and “supermask” expansion, the Hiddenite chip drastically reduces external memory access for enhanced computational efficiency.
    Deep neural networks (DNNs) are a complex machine-learning architecture for artificial intelligence (AI) that requires numerous parameters to learn to predict outputs. DNNs can, however, be “pruned,” thereby reducing the computational burden and model size. A few years ago, the “lottery ticket hypothesis” took the machine learning world by storm. The hypothesis stated that a randomly initialized DNN contains subnetworks that achieve accuracy equivalent to the original DNN after training. The larger the network, the more “lottery tickets” it contains for successful optimization. These lottery tickets allow “pruned” sparse neural networks to achieve accuracies equivalent to more complex, “dense” networks, thereby reducing the overall computational burden and power consumption.
    One technique for finding such subnetworks is the hidden neural network (HNN) algorithm, which applies AND logic (where the output is high only when all the inputs are high) to the initialized random weights and a binary mask called a “supermask.” The supermask, defined by the top-k% highest scores, marks unselected and selected connections as 0 and 1, respectively. In this way, the HNN reduces computational cost on the software side.
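    In code, the supermask step reduces to a percentile threshold. The sketch below is purely illustrative (it is not the Hiddenite implementation): frozen random weights are combined with a mask that keeps only the top-k% highest-scoring connections, where the scores, not the weights, are what training updates.

```python
import numpy as np

rng = np.random.default_rng(42)

def supermask(scores, k_percent):
    """Binary mask: 1 for connections scoring in the top k%, else 0."""
    threshold = np.percentile(scores, 100 - k_percent)
    return (scores >= threshold).astype(np.float32)

weights = rng.standard_normal((256, 256))  # random weights, never trained
scores = rng.standard_normal((256, 256))   # training updates these instead
mask = supermask(scores, k_percent=30)     # keep the 30% best-scoring links
effective_weights = weights * mask         # the sparse "hidden" subnetwork
print("fraction of active connections:", mask.mean())  # ~0.30
```

    However, the computation of neural networks also requires improvements in the hardware components.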
    Traditional DNN accelerators offer high performance, but they do not consider the power consumption caused by external memory access. Now, researchers from Tokyo Institute of Technology (Tokyo Tech), led by Professors Jaehoon Yu and Masato Motomura, have developed a new accelerator chip called “Hiddenite,” which can calculate hidden neural networks with drastically improved power consumption. “Reducing the external memory access is the key to reducing power consumption. Currently, achieving high inference accuracy requires large models. But this increases external memory access to load model parameters. Our main motivation behind the development of Hiddenite was to reduce this external memory access,” explains Prof. Motomura. Their study will feature in the upcoming International Solid-State Circuits Conference (ISSCC) 2022, an international conference showcasing the pinnacles of achievement in integrated circuits.
    “Hiddenite” stands for Hidden Neural Network Inference Tensor Engine and is the first HNN inference chip. The Hiddenite architecture offers three benefits that reduce external memory access and achieve high energy efficiency. The first is on-chip weight generation, which regenerates weights using a random number generator, eliminating the need to store them in, and fetch them from, external memory. The second is “on-chip supermask expansion,” which reduces the number of supermasks the accelerator needs to load. The third is a high-density four-dimensional (4D) parallel processor that maximizes data re-use during computation, thereby improving efficiency.
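    The first of these benefits rests on a simple observation: because a hidden neural network never updates its weights, a deterministic random number generator can reproduce them from a stored seed whenever they are needed, so only the seed and the supermask have to be kept. A conceptual sketch, with the seed value purely illustrative:

```python
import numpy as np

SEED = 1234  # the only "weight data" that must be stored

def regenerate_weights(shape, seed=SEED):
    """Recreate the frozen random weights on demand from the seed."""
    return np.random.default_rng(seed).standard_normal(shape)

w_train = regenerate_weights((256, 256))   # generated at training time
w_chip = regenerate_weights((256, 256))    # regenerated later "on-chip"
assert np.array_equal(w_train, w_chip)     # identical, no weight fetches needed
```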
    “The first two factors are what set the Hiddenite chip apart from existing DNN inference accelerators,” reveals Prof. Motomura. “Moreover, we also introduced a new training method for hidden neural networks, called ‘score distillation,’ in which the conventional knowledge distillation weights are distilled into the scores because hidden neural networks never update the weights. The accuracy using score distillation is comparable to the binary model while being half the size of the binary model.”
    Based on the Hiddenite architecture, the team has designed, fabricated and measured a prototype chip with Taiwan Semiconductor Manufacturing Company’s (TSMC) 40 nm process. The chip measures only 3 mm x 3 mm and handles 4,096 MAC (multiply-and-accumulate) operations at once. It achieves a state-of-the-art computational efficiency of up to 34.8 tera-operations per second per watt (TOPS/W), while reducing the amount of model transfer to half that of binarized networks.
    These findings and their successful exhibition in a real silicon chip are sure to cause another paradigm shift in the world of machine learning, paving the way for faster, more efficient, and ultimately more environment-friendly computing.
    Story Source:
    Materials provided by Tokyo Institute of Technology.

  • Versatile ‘nanocrystal gel’ could enable advances in energy, defense and telecommunications

    New applications in energy, defense and telecommunications could receive a boost after a team from The University of Texas at Austin created a new type of “nanocrystal gel”: a gel composed of tiny nanocrystals, each 10,000 times smaller than the width of a human hair, linked together into an organized network.
    The crux of the team’s discovery is that this new material is easily tunable. That is, it can be switched between two different states by changing the temperature. This means the material can work as an optical filter, absorbing different frequencies of light depending on whether it’s in a gelled state or not. So, it could be used, for example, on the outside of buildings to control heating or cooling dynamically. This type of optical filter also has applications for defense, particularly for thermal camouflage.
    The gels can be customized for these wide-ranging applications because both the nanocrystals and the molecular linkers that connect them into networks are designer components. Nanocrystals can be chemically tuned to route communications through fiber-optic networks or to keep the temperature of spacecraft steady on remote planetary bodies. Linkers can be designed to make the gels switch in response to ambient temperature or to the detection of environmental toxins.
    “You could shift the apparent heat signature of an object by changing the infrared properties of its skin,” said Delia Milliron, professor and chair of the McKetta Department of Chemical Engineering in the Cockrell School of Engineering. “It could also be useful for telecommunications, which all use infrared wavelengths.”
    The new research is published in a recent issue of the journal Science Advances.
    The team, led by graduate students Jiho Kang and Stephanie Valenzuela, did this work through the university’s Center for Dynamics and Control of Materials, a National Science Foundation Materials Research Science and Engineering Center that brings together engineers and scientists from across campus to collaborate on materials science research.
    Lab experiments allowed the team to watch the material change back and forth between its two states, gel and not-gel (that is, free-floating nanocrystals suspended in liquid), triggered by specific temperature changes.
    Supercomputer simulations done at UT’s Texas Advanced Computing Center helped them to understand what was happening in the gel at the microscopic level when heat was applied. Based on theories of chemistry and physics, the simulations revealed the types of chemical bonds that hold the nanocrystals together in a network, and how those bonds break when hit with heat, causing the gel to break down.
    This is the second unique nanocrystal gel created by this team, and they continue to pursue advances in this arena. Kang is currently working to create a nanocrystal gel that can change between four states, making it even more versatile and useful. That gel would be a blend of two different types of nanocrystals, each able to switch between states in response to chemical signals or temperature changes. Such tunable nanocrystal gels are called “programmable” materials.
    Story Source:
    Materials provided by University of Texas at Austin.

  • Core memory weavers and Navajo women made the Apollo missions possible

    The historic Apollo moon missions are often associated with high-visibility test flights, dazzling launches and spectacular feats of engineering. But intricate, challenging handiwork — comparable to weaving — was just as essential to putting men on the moon. Beyond Neil Armstrong, Buzz Aldrin and a handful of other names that we remember were hundreds of thousands of men and women who contributed to Apollo over a decade. Among them: the Navajo women who assembled state-of-the-art integrated circuits for the Apollo Guidance Computer and the women employees of Raytheon who wove the computer’s core memory.

    In 1962, when President John F. Kennedy declared that putting Americans on the moon should be the top priority for NASA, computers were large mainframes; they occupied entire rooms. And so one of the most daunting yet crucial challenges was developing a highly stable, reliable and portable computer to control and navigate the spacecraft.

    NASA chose to use cutting-edge integrated circuits in the Apollo Guidance Computer. These commercial circuits had been introduced only recently. Also known as microchips, they were revolutionizing electronics and computing, contributing to the gradual miniaturization of computers from mainframes to today’s smartphones. NASA sourced the circuits from the original Silicon Valley start-up, Fairchild Semiconductor. Fairchild was also leading the way in the practice known as outsourcing; the company opened a factory in Hong Kong in the early 1960s, which by 1966 employed 5,000 people, compared with Fairchild’s 3,000 California employees.

    At the same time, Fairchild sought low-cost labor within the United States. Lured by tax incentives and the promise of a labor force with almost no other employment options, Fairchild opened a plant in Shiprock, N.M., within the Navajo reservation, in 1965. The Fairchild factory operated until 1975 and employed more than 1,000 individuals at its peak, most of them Navajo women manufacturing integrated circuits.

    It was challenging work. Electrical components had to be placed on tiny chips made of a semiconductor such as silicon and connected by wires in precise locations, creating complex and varying patterns of lines and geometric shapes. The Navajo women’s work “was performed using a microscope and required painstaking attention to detail, excellent eyesight, high standards of quality and intense focus,” writes digital media scholar Lisa Nakamura.

    A brochure commemorating the dedication of Fairchild Semiconductor’s plant in Shiprock, N.M., included this Fairchild 9040 integrated circuit. (Courtesy of the Computer History Museum)

    In a brochure commemorating the dedication of the Shiprock plant, Fairchild directly compared the assembly of integrated circuits with what the company portrayed as the traditional, feminine, Indigenous craft of rug-weaving. The Shiprock brochure juxtaposed a photo of a microchip with one of a geometric-patterned rug, and another of a woman weaving such a rug. That portrayal, Nakamura argues, reinforced racial and gender stereotypes. The work was dismissed as “women’s work,” depriving the Navajo women of appropriate recognition and commensurate compensation.  Journalists and Fairchild employees also “depict[ed] electronics manufacture as a high-tech version of blanket weaving performed by willing and skillful Indigenous women,” Nakamura notes, yet “the women who performed this labor did so for the same reason that women have performed factory labor for centuries — to survive.”

    Far from the Shiprock desert, outside of Boston, women employees at Raytheon assembled the Apollo Guidance Computer’s core memory with a process that in this case directly mimicked weaving. Again, the moon missions demanded a stable and compact way of storing Apollo’s computing instructions. Core memory used metal wires threaded through tiny doughnut-shaped ferrite rings, or “cores,” to represent 1s and 0s. All of this core memory was woven by hand, with women sitting on opposite sides of a panel passing a wire-threaded needle back and forth to create a particular pattern. (In some cases, a woman worked alone, passing the needle through the panel to herself.)

    Women employees of Raytheon assembled core memory for the Apollo Guidance Computer by threading metal wires through rings. (This unnamed woman was described as a “space age needleworker” in a Raytheon press kit. Courtesy of the collection of David Meerman Scott, Raytheon public relations)

    Apollo engineers referred to this process of building memory as the “LOL,” or “Little Old Ladies,” method. Yet this work was so mission critical that it was tested and inspected multiple times. Mary Lou Rogers, who worked on Apollo, recalled, “[Each component] had to be looked at by three or four people before it was stamped off. We had a group of inspectors come in for the federal government to check our work all the time.”

    The core memory was also known as rope memory, and those who supervised its development were “rope mothers.” We know a great deal about one rope mother — Margaret Hamilton. She has been recognized with the Presidential Medal of Freedom, among other awards, and is now remembered as the woman who oversaw most of the Apollo software. But her efforts were unrecognized by many at the time. Hamilton recalled, “At the beginning, nobody thought software was that big a deal. But then they began to realize how much they were relying on it…. Astronauts’ lives were at stake. Our software needed to be ultrareliable and it needed to be able to detect an error and recover from it at any time during the mission. And it all had to fit on the hardware.” Yet, little is known about the thousands of others who performed this mission-critical work of weaving integrated circuits and core memory.

    Margaret Hamilton is known for overseeing the development of the Apollo software. (Draper Laboratory, restored by Adam Cuerden/Wikimedia Commons)

    At the time, Fairchild’s representation of the Navajo women’s work as a feminine craft differentiated it from the high-status and masculine work of engineering. As Nakamura has written, the work “came to be understood as affective labor, or a ‘labor of love.’” Similarly, the work performed at Raytheon was described by Eldon Hall, who led the Apollo Guidance Computer’s hardware design, as “tender loving care.” Journalists and even a Raytheon manager presented this work as requiring no thinking and no skill.

    Recently, the communications scholar Samantha Shorey, engineer Daniela Rosner, technologist Brock Craft and quilt artist Helen Remick firmly overturned the notion that weaving core memory was a “no-brainer” with their Making Core Memory project. In nine workshops, they invited participants to weave core memory “patches” using metal matrices, beads and conductive threads, showcasing the deep focus and meticulous attention to detail required. The patches were then assembled in an electronic quilt that played aloud accounts from 1960s Apollo engineers and Raytheon managers. The Making Core Memory collaboration challenged the dichotomy of masculine, high-status, well-paid science and engineering cognitive labor versus feminine, low-status, low-paid, manual labor.

    A 1975 NASA report that summarized the Apollo missions spoke glowingly of the Apollo computing systems — but mentioned none of the Navajo or Raytheon women. “The performance of the computer was flawless,” the report declared. “Perhaps the most significant accomplishment during Apollo pertaining to guidance, navigation, and control was the demonstration of the versatility and adaptability of the computer software.”

    That computer, and that software, relied on the skilled, technical, embodied expertise and labor of thousands of women, including women of color. They were indubitably women of science, and their untold stories call us to reconsider who does science, and what counts as scientific expertise.

  • Forget handheld virtual reality controllers: a smile, frown or clench will suffice

    Our face can unlock a smartphone, provide access to a secure building and speed up passport control at airports, verifying our identity for numerous purposes.
    An international team of researchers from Australia, New Zealand and India has taken facial recognition technology to the next level, using a person’s expression to manipulate objects in a virtual reality setting without the use of a handheld controller or touchpad.
    In a world-first study led by the University of Queensland, human-computer interaction experts used neural processing techniques to capture a person’s smile, frown and clenched jaw, mapping each expression to specific actions in virtual reality environments.
    One of the researchers involved in the experiment, University of South Australia’s Professor Mark Billinghurst, says the system has been designed to recognise different facial expressions via an EEG headset.
    “A smile was used to trigger the ‘move’ command; a frown for the ‘stop’ command and a clench for the ‘action’ command, in place of a handheld controller performing these actions,” says Prof Billinghurst.
    “Essentially we are capturing common facial expressions such as anger, happiness and surprise and implementing them in a virtual reality environment.”
    The researchers designed three virtual environments — happy, neutral and scary — and measured each person’s cognitive and physiological state while they were immersed in each scenario.
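    The release does not describe the decoding pipeline in detail, so the following sketch is hypothetical: windows of EEG activity are summarized as feature vectors, a standard classifier learns to distinguish the three expressions, and each prediction is mapped to its command (“move,” “stop,” “action”). All names, features and parameters here are stand-ins for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Expression-to-command mapping described in the article.
COMMANDS = {"smile": "move", "frown": "stop", "clench": "action"}

def train_decoder(X, y):
    """X: one row of EEG features per time window; y: expression labels."""
    return RandomForestClassifier(n_estimators=100).fit(X, y)

def dispatch(decoder, window):
    """Classify one feature window and return the VR command to issue."""
    expression = decoder.predict(window.reshape(1, -1))[0]
    return COMMANDS[expression]

# Demo on synthetic data: 3 expression classes, 8 features per window.
rng = np.random.default_rng(7)
labels = np.repeat(list(COMMANDS), 50)                # 50 windows per class
X = rng.normal(0, 1, (150, 8)) + (np.arange(150) // 50)[:, None]
decoder = train_decoder(X, labels)
print(dispatch(decoder, X[0]))  # expected output: "move"
```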