More stories

  • AI learns through the eyes and ears of a child

    AI systems, such as GPT-4, can now learn and use human language, but they learn from astronomical amounts of language input — much more than children receive when learning how to understand and speak a language. The best AI systems train on text with a word count in the trillions, whereas children receive just millions per year.
    Due to this enormous data gap, researchers have been skeptical that recent AI advances can tell us much about human learning and development. An ideal test for demonstrating a connection would involve training an AI model, not on massive data from the web, but on only the input that a single child receives. What would the model be able to learn then?
    A team of New York University researchers ran this exact experiment. They trained a multimodal AI system through the eyes and ears of a single child, using headcam video recordings spanning the period from when the child was six months old through their second birthday. They examined whether the AI model could learn the words and concepts present in a child’s everyday experience.
    Their findings, reported in the latest issue of the journal Science, showed that the model, or neural network, could in fact learn a substantial number of words and concepts from limited slices of what the child experienced. Notably, the video captured only about 1% of the child’s waking hours, yet that was sufficient for genuine language learning.
    “We show, for the first time, that a neural network trained on this developmentally realistic input from a single child can learn to link words to their visual counterparts,” says Wai Keen Vong, a research scientist at NYU’s Center for Data Science and the paper’s first author. “Our results demonstrate how recent algorithmic advances paired with one child’s naturalistic experience has the potential to reshape our understanding of early language and concept acquisition.”
    “By using AI models to study the real language-learning problem faced by children, we can address classic debates about what ingredients children need to learn words — whether they need language-specific biases, innate knowledge, or just associative learning to get going,” adds Brenden Lake, an assistant professor in NYU’s Center for Data Science and Department of Psychology and the paper’s senior author. “It seems we can get more with just learning than commonly thought.”
    Vong, Lake, and their NYU colleagues, Wentao Wang and Emin Orhan, analyzed the child’s learning process captured on first-person video, recorded via a lightweight, head-mounted camera on a weekly basis from six months of age through 25 months, using more than 60 hours of footage. The footage contained approximately a quarter of a million word instances (i.e., the number of words communicated, many of them repeatedly) linked with video frames of what the child saw when those words were spoken, and it covered a wide range of activities across development, including mealtimes, reading books, and the child playing.

    The NYU researchers then trained a multimodal neural network with two separate modules: one that takes in single video frames (the vision encoder) and another that takes in the transcribed child-directed speech (the language encoder). These two encoders were combined and trained using an algorithm called contrastive learning, which aims to learn useful input features and their cross-modal associations. For instance, when a parent says something in view of the child, some of the words used are likely referring to something the child can see, so comprehension is instilled by linking visual and linguistic cues.
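    In rough outline, this kind of contrastive objective can be sketched as follows. The snippet below is a minimal, hypothetical PyTorch-style illustration, not the authors’ released code; the encoder outputs, batch construction, and temperature value are all assumptions.

    ```python
    import torch
    import torch.nn.functional as F

    def contrastive_loss(frame_emb, utterance_emb, temperature=0.07):
        """InfoNCE-style loss over a batch of (video frame, utterance) pairs.

        Matching pairs are pulled together in a shared embedding space;
        mismatched pairs within the batch are pushed apart.
        """
        f = F.normalize(frame_emb, dim=-1)       # (batch, dim), from the vision encoder
        u = F.normalize(utterance_emb, dim=-1)   # (batch, dim), from the language encoder

        logits = f @ u.T / temperature           # cosine similarities, scaled
        targets = torch.arange(f.size(0))        # the i-th frame matches the i-th utterance

        # Symmetric cross-entropy: frame-to-utterance and utterance-to-frame
        return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2
    ```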
    “This provides the model a clue as to which words should be associated with which objects,” explains Vong. “Combining these cues is what enables contrastive learning to gradually determine which words belong with which visuals and to capture the learning of a child’s first words.”
    After training the model, the researchers tested it using the same kinds of evaluations used to measure word learning in infants — presenting the model with the target word and an array of four different image options and asking it to select the image that matches the target word. Their results showed that the model was able to learn a substantial number of the words and concepts present in the child’s everyday experience. Furthermore, for some of the words the model learned, it could generalize them to very different visual instances than those seen at training, reflecting an aspect of generalization also seen in children when they are tested in the lab.
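    Schematically, this four-image test can be scored as in the hypothetical sketch below; the encode_text and encode_image method names and the trial format are assumed for illustration and are not the authors’ evaluation code.

    ```python
    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def forced_choice_accuracy(model, trials):
        """Each trial pairs a target word with four candidate images, exactly one
        of which depicts the target. The model 'chooses' the image whose embedding
        is most similar to the word's embedding."""
        correct = 0
        for word, images, target_index in trials:
            word_emb = model.encode_text(word)                    # assumed model API
            scores = [cosine_similarity(word_emb, model.encode_image(img))
                      for img in images]
            if scores.index(max(scores)) == target_index:
                correct += 1
        return correct / len(trials)
    ```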
    “These findings suggest that this aspect of word learning is feasible from the kind of naturalistic data that children receive while using relatively generic learning mechanisms such as those found in neural networks,” observes Lake.
    The work was supported by the U.S. Department of Defense’s Defense Advanced Research Projects Agency (N6600119C4030) and the National Science Foundation (1922658). Participation of the child was approved by the parents and the methodology was approved by NYU’s Institutional Review Board.

  • Photonics-based wireless link breaks speed records for data transmission

    From coffee-shop customers who connect their laptops to the local Wi-Fi network to remote weather monitoring stations in the Antarctic, wireless communication is an essential part of modern life. Researchers worldwide are currently working on the next evolution of communication networks, called “beyond 5G” or 6G networks. To enable the near-instantaneous communication needed for applications like augmented reality or the remote control of surgical robots, ultra-high data speeds will be needed on wireless channels. In a study published recently in IEICE Electronics Express, researchers from Osaka University and IMRA AMERICA have found a way to increase these data speeds by using lasers to reduce the noise in the system.
    To pack in large amounts of data and keep responses fast, the sub-terahertz band, which extends from 100 GHz to 300 GHz, will be used by 6G transmitters and receivers. A sophisticated approach called “multi-level signal modulation” is used to further increase the data transmission rate of these wireless links. However, when operating at the top end of these extremely high frequencies, multi-level signal modulation becomes highly sensitive to noise. To work well, it relies on precise reference signals, and when these signals begin to shift forward and backward in time (a phenomenon called “phase noise”), the performance of multi-level signal modulation drops.
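    As a rough illustration of why higher-order modulation is attractive (and why its sensitivity to phase noise matters), a single channel’s gross bit rate is simply the symbol rate multiplied by the bits carried per symbol. The numbers below are purely illustrative; the study’s actual symbol rate and constellation are not given here.

    ```python
    import math

    def bit_rate_gbps(symbol_rate_gbaud, constellation_size):
        """Gross single-channel bit rate: symbols per second times bits per symbol."""
        return symbol_rate_gbaud * math.log2(constellation_size)

    # Purely illustrative numbers, not the parameters used in the study:
    print(bit_rate_gbps(60, 16))  # 16-QAM at 60 Gbaud -> 240.0 Gb/s
    print(bit_rate_gbps(60, 64))  # 64-QAM at 60 Gbaud -> 360.0 Gb/s
    ```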
    “This problem has limited 300-GHz communications so far,” says Keisuke Maekawa, lead author of the study. “However, we found that at high frequencies, a signal generator based on a photonic device had much less phase noise than a conventional electrical signal generator.”
    Specifically, the team used a stimulated Brillouin scattering laser, which employs interactions between sound and light waves, to generate a precise signal. They then set up a 300 GHz-band wireless communication system that employs the laser-based signal generator in both the transmitter and receiver. The system also used on-line digital signal processing (DSP) to demodulate the signals in the receiver and increase the data rate.
    “Our team achieved a single-channel transmission rate of 240 gigabits per second,” says Tadao Nagatsuma, PI of the project. “This is the highest transmission rate obtained so far in the world using on-line DSP.”
    As 5G spreads across the globe, researchers are working hard to develop the technology that will be needed for 6G, and the results of this study are a significant step toward 300 GHz-band wireless communication. The researchers anticipate that with multiplexing techniques (where more than one channel can be used) and more sensitive receivers, the data rate can be increased to 1 terabit per second, ushering in a new era of near-instantaneous global communication.

  • Hexagonal copper disk lattice unleashes spin wave control

    A collaborative group of researchers has developed a potential means of controlling spin waves by creating a hexagonal pattern of copper disks on a magnetic insulator. The breakthrough is expected to lead to greater efficiency and miniaturization of communication devices in fields such as artificial intelligence and automation technology.
    Details of the study were published in the journal Physical Review Applied on January 30, 2024.
    In a magnetic material, the spins of electrons are aligned. When these spins undergo coordinated movement, they generate ripples in the magnetic order, dubbed spin waves. Spin waves generate little heat and offer an abundance of advantages for next-generation devices.
    Implementing spin waves in semiconductor circuits, which conventionally rely on electrical currents, could lessen power consumption and promote high integration. Since spin waves are waves, they tend to propagate in random directions unless controlled by structures and other means. As such, elements capable of generating, propagating, superimposing, and measuring spin waves are being competitively developed worldwide.
    “We leveraged the wavelike nature of spin waves to successfully control their propagation directly,” points out Taichi Goto, associate professor at Tohoku University’s Electrical Communication Research Institute, and co-author of the paper. “We did so by first developing an excellent magnetic insulator material called magnetic garnet film, which has low spin wave losses. We then periodically arranged small copper disks with diameters less than 1 mm on this film.”
    By arranging copper disks in a hexagonal pattern resembling snowflakes, Goto and his colleagues could effectively reflect the spin waves. Furthermore, by rotating this two-dimensional magnonic crystal (the periodic copper-disk lattice on the garnet film) and changing the incident angle of the spin waves, the researchers revealed that the frequency at which the magnonic band gap occurs remains largely unchanged over the range from 10 to 30 degrees. This suggests the potential for the two-dimensional magnonic crystal to freely control the propagation direction of spin waves.
    Goto notes the novelty of their findings: “To date, there have been no experimental confirmations of changes in the spin wave incident angle for a two-dimensional magnonic crystal comprising a magnetic insulator and copper disks, making this the world’s first report.”
    Looking ahead, the team hopes to demonstrate the direction control of spin waves using two-dimensional magnonic crystals and to develop functional components that utilize this technology.

  • How to run a password update campaign efficiently and with minimal IT costs

    Updating passwords for all users of a company or institution’s internal computer systems is stressful and disruptive to both users and IT professionals. Many studies have looked at user struggles and password best practices. But very little research has been done to determine how a password update campaign can be conducted most efficiently and with minimal IT costs. Until now.
    A team of computer scientists at the University of California San Diego partnered with the campus’ Information Technology Services to analyze the messaging for a campuswide mandatory password change impacting almost 10,000 faculty and staff members. The team found that email notifications to update passwords potentially yielded diminishing returns after three messages. They also found that a prompt to update passwords while users were trying to log in was effective for those who had ignored email reminders. Researchers also found that users whose jobs didn’t require much computer use struggled the most with the update.
    To the team’s knowledge, it’s the first time an empirical analysis of a mandatory password update has been conducted at this large a scale and in the wild, rather than as part of a simulation or controlled experiment.
    The research team hopes that lessons from their analysis will be helpful to IT professionals at other institutions and companies.
    The team presented their work at ACSAC ’23: Annual Computer Security Applications Conference in December 2023.
    During the campaign, almost 10,000 faculty and staff at UC San Diego received four emails at roughly weekly intervals prompting them to change their single sign-on password. Users who still hadn’t changed their password even after receiving four emails then got a prompt to do so as they logged in.
    The emails were clearly effective, leading between 5 and 15% of users to update their passwords during each wave of emails. However, even after four such email prompts, a quarter of users had not completed the update procedure.

    The finding contradicts a previous study that found 98% of participants changed their passwords after receiving multiple email messages. But that study had a much smaller sample size.
    Remarkably, 80% of the remaining users who hadn’t changed their passwords after the email campaign finally did so when they were prompted at log in.
    “The active single sign-on prompting was a big winner across the board,” said Ariana Mirian, the paper’s first author, who earned her Ph.D. in the UC San Diego Department of Computer Science and Engineering. “You managed to get people who are stubborn, and maybe not paying attention, to take action, and that’s huge.”
    Researchers also noted that, despite concerns from the campus, the campaign did not generate a significant increase in tickets to the IT help desk. Ticket volume did increase three- to fourfold during the campaign, but tickets related to the password update represented only 8% of all requests.
    Not surprisingly, the users who struggled the most worked in areas where they are not required to log in to their computers regularly, such as maintenance, recreation, and dining services.
    “Targeting such users earlier, or forgoing email reminders and using login intercepts from the start, or even using a different notification mechanism such as text messages, may be more effective,” the researchers write.
    The research was funded in part by the National Science Foundation, the UC San Diego CSE postdoctoral fellows program, the Irwin Mark and Joel Klein Jacobs Chair in Information and Computer Science, and operational support from the UC San Diego Center for Networked Systems.
    Paper: “An Empirical Analysis of the Enterprise-Wide Mandatory Password Updates,” Ariana Mirian, Grant Ho, Stefan Savage, and Geoffrey M. Voelker, Department of Computer Science and Engineering, University of California San Diego.

  • Promising heart drugs ID’d by cutting-edge combo of machine learning, human learning

    University of Virginia scientists have developed a new approach to machine learning — a form of artificial intelligence — to identify drugs that help minimize harmful scarring after a heart attack or other injuries.
    The new machine-learning tool has already found a promising candidate to help prevent harmful heart scarring in a way distinct from previous drugs. The UVA researchers say their cutting-edge computer model has the potential to predict and explain the effects of drugs for other diseases as well.
    “Many common diseases such as heart disease, metabolic disease and cancer are complex and hard to treat,” said researcher Anders R. Nelson, PhD, a computational biologist and former student in the lab of UVA’s Jeffrey J. Saucerman, PhD. “Machine learning helps us reduce this complexity, identify the most important factors that contribute to disease and better understand how drugs can modify diseased cells.”
    “On its own, machine learning helps us to identify cell signatures produced by drugs,” said Saucerman, of UVA’s Department of Biomedical Engineering, a joint program of the School of Medicine and School of Engineering. “Bridging machine learning with human learning helped us not only predict drugs against fibrosis [scarring] but also explain how they work. This knowledge is needed to design clinical trials and identify potential side effects.”
    Combining Machine Learning, Human Learning
    Saucerman and his team combined a computer model based on decades of human knowledge with machine learning to better understand how drugs affect cells called fibroblasts. These cells help repair the heart after injury by producing collagen and contracting the wound. But they can also cause harmful scarring, called fibrosis, as part of the repair process. Saucerman and his team wanted to see if a selection of promising drugs would give doctors more ability to prevent scarring and, ultimately, improve patient outcomes.
    Previous attempts to identify drugs targeting fibroblasts have focused only on selected aspects of fibroblast behavior, and how these drugs work often remains unclear. This knowledge gap has been a major challenge in developing targeted treatments for heart fibrosis. So Saucerman and his colleagues developed a new approach called “logic-based mechanistic machine learning” that not only predicts drugs but also predicts how they affect fibroblast behaviors.

    They began by looking at the effect of 13 promising drugs on human fibroblasts, then used that data to train the machine learning model to predict the drugs’ effects on the cells and how they behave. The model was able to predict a new explanation of how the drug pirfenidone, already approved by the federal Food and Drug Administration for idiopathic pulmonary fibrosis, suppresses contractile fibers inside the fibroblast that stiffen the heart. The model also predicted how another type of contractile fiber could be targeted by the experimental Src inhibitor WH4023, which they experimentally validated with human cardiac fibroblasts.
    Additional research is needed to verify the drugs work as intended in animal models and human patients, but the UVA researchers say their research suggests mechanistic machine learning represents a powerful tool for scientists seeking to discover biological cause-and-effect. The new findings, they say, speak to the great potential the technology holds to advance the development of new treatments — not just for heart injury but for many diseases.
    “We’re looking forward to testing whether pirfenidone and WH4023 also suppress the fibroblast contraction of scars in preclinical animal models,” Saucerman said. “We hope this provides an example of how machine learning and human learning can work together to not only discover but also understand how new drugs work.”
    The research was supported by the National Institutes of Health, grants HL137755, HL007284, HL160665, HL162925 and 1S10OD021723-01A1.

  • Swarming cicadas, stock traders, and the wisdom of the crowd

    Pick almost any location in the eastern United States — say, Columbus, Ohio. Every 13 or 17 years, as the soil warms in springtime, vast swarms of cicadas emerge from their underground burrows singing their deafening song, take flight and mate, producing offspring for the next cycle.
    This noisy phenomenon repeats all over the eastern and southeastern US as 17 distinct broods emerge in staggered years. In spring 2024, billions of cicadas are expected as two different broods — one that appears every 13 years and another that appears every 17 years — emerge simultaneously.
    Previous research has suggested that cicadas emerge once the soil temperature reaches 18°C, but even within a small geographical area, differences in sun exposure, foliage cover or humidity can lead to variations in temperature.
    Now, in a paper published in the journal Physical Review E, researchers from the University of Cambridge have discovered how such synchronous cicada swarms can emerge despite these temperature differences.
    The researchers developed a mathematical model for decision-making in an environment with variations in temperature and found that communication between cicada nymphs allows the group to come to a consensus about the local average temperature that then leads to large-scale swarms. The model is closely related to one that has been used to describe ‘avalanches’ in decision-making like those among stock market traders, leading to crashes.
    Mathematicians have been captivated by the appearance of 17- and 13-year cycles in various species of cicadas, and have previously developed mathematical models that showed how the appearance of such large prime numbers is a consequence of evolutionary pressures to avoid predation. However, the mechanism by which swarms emerge coherently in a given year has not been understood.
    In developing their model, the Cambridge team was inspired by previous research on decision-making that represents each member of a group by a ‘spin’ like that in a magnet, but instead of pointing up or down, the two states represent the decision to ‘remain’ or ‘emerge’.

    The local temperature experienced by the cicadas is then like a magnetic field that tends to align the spins and varies slowly from place to place on the scale of hundreds of metres, from sunny hilltops to shaded valleys in a forest. Communication between nearby nymphs is represented by an interaction between the spins that leads to local agreement of neighbours.
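    A toy version of this picture can be written as a one-dimensional Ising-style simulation, sketched below. It is illustrative only; the lattice size, coupling strength, noise level, and microclimate profile are assumptions, not the published model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Each site is a nymph: spin +1 means "emerge", -1 means "remain".
    # The slowly varying local temperature acts like a local magnetic field,
    # and the coupling J nudges neighbouring nymphs toward the same decision.
    N = 200                                                          # nymphs along a transect
    J = 1.0                                                          # communication strength
    local_temp = 18.0 + 0.5 * np.sin(np.linspace(0, 4 * np.pi, N))   # microclimate, deg C
    field = local_temp - 18.0                                        # above/below the 18 C threshold
    spins = -np.ones(N)                                              # everyone starts underground
    beta = 2.0                                                       # inverse "decision noise"

    for _ in range(20000):                                           # Metropolis-style updates
        i = rng.integers(N)
        neighbours = spins[(i - 1) % N] + spins[(i + 1) % N]
        dE = 2 * spins[i] * (J * neighbours + field[i])              # energy cost of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i] *= -1

    print("fraction deciding to emerge:", (spins > 0).mean())
    ```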
    The researchers showed that in the presence of such interactions the swarms are large and space-filling, involving every member of the population in a range of local temperature environments, unlike the case without communication in which every nymph is on its own, responding to every subtle variation in microclimate.
    The research was carried out by Professor Raymond E. Goldstein, the Alan Turing Professor of Complex Physical Systems in the Department of Applied Mathematics and Theoretical Physics (DAMTP); Professor Robert L. Jack of DAMTP and the Yusuf Hamied Department of Chemistry; and Dr Adriana I. Pesci, a Senior Research Associate in DAMTP.
    “As an applied mathematician, there is nothing more interesting than finding a model capable of explaining the behaviour of living beings, even in the simplest of cases,” said Pesci.
    The researchers say that while their model does not require any particular means of communication between underground nymphs, acoustical signalling is a likely candidate, given the ear-splitting sounds that the swarms make once they emerge from underground.
    The researchers hope that their conjecture regarding the role of communication will stimulate field research to test the hypothesis.
    “If our conjecture that communication between nymphs plays a role in swarm emergence is confirmed, it would provide a striking example of how Darwinian evolution can act for the benefit of the group, not just the individual,” said Goldstein.
    This work was supported in part by the Complex Physical Systems Fund.

  • Engineers develop hack to make automotive radar ‘hallucinate’

    A black sedan cruises down a quiet suburban road, its driver humming Christmas carols while the car’s autopilot handles the driving. Suddenly, red flashing lights and audible warnings blare to life, snapping the driver from their peaceful reprieve. They look at the dashboard screen and see the outline of a car speeding toward them for a head-on collision, yet the headlights reveal nothing ahead through the windshield.
    Despite the incongruity, the car’s autopilot grabs control and swerves into a ditch. Exasperated, the driver looks around the vicinity, finding no other vehicles as the incoming danger disappears from the screen. Moments later, the real threat emerges — a group of hijackers jogging toward the immobilized vehicle.
    This scene seems destined to become a common plot point in Hollywood films for decades to come. But due to the complexities of modern automotive detection systems, it remains firmly in the realm of science fiction. At least for the moment.
    Engineers at Duke University, led by Miroslav Pajic, the Dickinson Family Associate Professor of Electrical and Computer Engineering, and Tingjun Chen, assistant professor of electrical and computer engineering, have now demonstrated a system they’ve dubbed “MadRadar” for fooling automotive radar sensors into believing almost anything is possible.
    The technology can hide the approach of an existing car, create a phantom car where none exists or even trick the radar into thinking a real car has quickly deviated from its actual course. And it can achieve this feat in the blink of an eye without having any prior knowledge about the specific settings of the victim’s radar, making it the most troublesome threat to radar security to date.
    The researchers say MadRadar shows that manufacturers should immediately begin taking steps to better safeguard their products.
    The research will be presented at the 2024 Network and Distributed System Security Symposium (NDSS), taking place February 26 to March 1 in San Diego, California.

    “Without knowing much about the targeted car’s radar system, we can make a fake vehicle appear out of nowhere or make an actual vehicle disappear in real-world experiments,” Pajic said. “We’re not building these systems to hurt anyone, we’re demonstrating the existing problems with current radar systems to show that we need to fundamentally change how we design them.”
    In modern cars that feature assistive and autonomous driving systems, radar is typically used to detect moving vehicles in front of and around the car. It also augments visual and laser-based systems that detect vehicles ahead of or behind the car.
    Because there are now so many different cars using radar on a typical highway, it is unlikely that any two vehicles will have the exact same operating parameters, even if they share a make and model. For example, they might use slightly different operating frequencies or take measurements at slightly different intervals. Because of this, previous demonstrations of radar-spoofing systems have needed to know the specific parameters being used.
    “Think of it like trying to stop someone from listening to the radio,” explained Pajic. “To block the signal or to hijack it with your own broadcast, you’d need to know what station they were listening to first.”
    In the MadRadar demonstration, the team from Duke showed off the capabilities of a radar-spoofing system they’ve built that can accurately detect a car’s radar parameters in less than a quarter of a second. Once they’ve been discovered, the system can send out its own radar signals to fool the target’s radar.
    In one demonstration, MadRadar sends signals to the target car to make it perceive another car where none actually exists. This involves modifying the signal’s characteristics in time and velocity in such a way that it mimics what a real contact would look like.
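    The underlying geometry is the standard radar mapping from round-trip delay and Doppler shift to range and closing speed, sketched below as a schematic calculation. The 77 GHz carrier is an assumed, typical automotive value; this is not the MadRadar implementation.

    ```python
    C = 299_792_458.0  # speed of light, m/s

    def apparent_range_m(round_trip_delay_s):
        """A reflected (or spoofed) return delayed by tau appears at range c * tau / 2."""
        return C * round_trip_delay_s / 2

    def apparent_velocity_mps(doppler_shift_hz, carrier_hz=77e9):
        """A Doppler shift f_d maps to a closing speed v = f_d * c / (2 * f_c)."""
        return doppler_shift_hz * C / (2 * carrier_hz)

    # A phantom contact about 50 m ahead closing at about 30 m/s corresponds to
    # roughly a 333 ns delay and a 15.4 kHz Doppler shift at a 77 GHz carrier:
    print(apparent_range_m(333e-9))       # ~49.9 m
    print(apparent_velocity_mps(15.4e3))  # ~30 m/s
    ```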

    In a second and much more complicated example, it fools the target’s radar into thinking the opposite — that there is no passing car when one actually does exist. It achieves this by delicately adding masking signals around the car’s true location to create a sort of bright spot that confuses the radar system.
    “You have to be judicious about adding signals to the radar system, because if you simply flooded the entire field of vision, it’d immediately know something was wrong,” said David Hunt, a PhD student working in Pajic’s lab.
    In a third kind of attack, the researchers mix the two approaches to make it seem as though an existing car has suddenly changed course. The researchers recommend that carmakers try randomizing a radar system’s operating parameters over time and adding safeguards to the processing algorithms to spot similar attacks.
    “Imagine adaptive cruise control, which uses radar, believing that the car in front of me was speeding up, causing your own car to speed up, when in reality it wasn’t changing speed at all,” said Pajic. “If this were done at night, by the time your car’s cameras figured it out you’d be in trouble.”
    Each of these attack demonstrations, the researchers emphasize, was done on real-world radar systems in actual cars moving at roadway speeds. It’s an impressive feat, given that if the spoofing radar signals are even a microsecond off the mark, the fake datapoint would be misplaced by the length of a football field.
    “These lessons go far beyond radar systems in cars as well,” Pajic said. “If you want to build drones that can explore dark environments, like in search and rescue or reconnaissance operations, that don’t cost thousands of dollars, radar is the way to go.”
    This research was supported by the Office of Naval Research (N00014-23-1-2206, N00014-20-1-2745), the Air Force Office of Scientific Research (FA9550-19-1-0169), the National Science Foundation (CNS-1652544, CNS-2211944), and the National AI Institute for Edge Computing Leveraging Next Generation Wireless Networks (Athena) (CNS-2112562).

  • Scientists make breakthrough in quantum materials research

    Researchers at the University of California, Irvine and Los Alamos National Laboratory, publishing in the latest issue of Nature Communications, describe the discovery of a new method that transforms everyday materials, such as glass, into materials that scientists can use to make quantum computers.
    “The materials we made are substances that exhibit unique electrical or quantum properties because of their specific atomic shapes or structures,” said Luis A. Jauregui, professor of physics & astronomy at UCI and lead author of the new paper. “Imagine if we could transform glass, typically considered an insulating material, and convert it into efficient conductors akin to copper. That’s what we’ve done.”
    Conventional computers use silicon as a conductor, but silicon has limits. Quantum computers stand to help bypass these limits, and methods like those described in the new study will help quantum computers become an everyday reality.
    “This experiment is based on the unique capabilities that we have at UCI for growing high-quality quantum materials. How can we transform these materials that are poor conductors into good conductors?” said Jauregui, who’s also a member of UCI’s Eddleman Quantum Institute. “That’s what we’ve done in this paper. We’ve been applying new techniques to these materials, and we’ve transformed them to being good conductors.”
    The key, Jauregui explained, was applying the right kind of strain to materials at the atomic scale. To do this, the team designed a special apparatus called a “bending station” at the machine shop in the UCI School of Physical Sciences that allowed them to apply large strain to change the atomic structure of a material called hafnium pentatelluride from a “trivial” material into a material fit for a quantum computer.
    “To create such materials, we need to ‘poke holes’ in the atomic structure,” said Jauregui. “Strain allows us to do that.”
    “You can also turn the atomic structure change on or off by controlling the strain, which is useful if you want to create an on-off switch for the material in a quantum computer in the future,” said Jinyu Liu, who is the first author of the paper and a postdoctoral scholar working with Jauregui.

    “I am pleased by the way theoretical simulations offer profound insights into experimental observations, thereby accelerating the discovery of methods for controlling the quantum states of novel materials,” said co-author Ruqian Wu, professor of physics and Associate Director of the UCI Center for Complex and Active Materials — a National Science Foundation Materials Research Science and Engineering Center (MRSEC). “This underscores the success of collaborative efforts involving diverse expertise in frontier research.”
    “I’m excited that our team was able to show that these elusive and much-sought-after material states can be made,” said Michael Pettes, study co-author and scientist with the Center for Integrated Nanotechnologies at Los Alamos National Laboratory. “This is promising for the development of quantum devices, and the methodology we demonstrate is compatible for experimentation on other quantum materials as well.”
    Right now, quantum computers only exist in a few places, such as in the offices of companies like IBM, Google and Rigetti. “Google, IBM and many other companies are looking for effective quantum computers that we can use in our daily lives,” said Jauregui. “Our hope is that this new research helps make the promise of quantum computers more of a reality.”
    Funding came from the UCI-MRSEC, an NSF CAREER grant to Jauregui, and Los Alamos National Laboratory Directed Research and Development (LDRD) Directed Research program funds.