More stories

  •

    Affordances in the brain: The human superpower AI hasn’t mastered

    How do you intuitively know that you can walk on a footpath and swim in a lake? Researchers from the University of Amsterdam have discovered unique brain activations that reflect how we can move our bodies through an environment. The study not only sheds new light on how the human brain works, but also shows where artificial intelligence is lagging behind. According to the researchers, AI could become more sustainable and human-friendly if it incorporated this knowledge about the human brain.
    When we see a picture of an unfamiliar environment — a mountain path, a busy street, or a river — we immediately know how we could move around in it: walk, cycle, swim or not go any further. That sounds simple, but how does your brain actually determine these action opportunities?
    PhD student Clemens Bartnik and a team of co-authors show how we make estimates of possible actions thanks to unique brain patterns. The team, led by computational neuroscientist Iris Groen, also compared this human ability with a large number of AI models, including ChatGPT. “AI models turned out to be less good at this and still have a lot to learn from the efficient human brain,” Groen concludes.
    Viewing images in the MRI scanner
    Using an MRI scanner, the team investigated what happens in the brain when people look at various photos of indoor and outdoor environments. The participants used a button to indicate whether the image invited them to walk, cycle, drive, swim, boat or climb. At the same time, their brain activity was measured.
    “We wanted to know: when you look at a scene, do you mainly see what is there — such as objects or colors — or do you also automatically see what you can do with it,” says Groen. “Psychologists call the latter ‘affordances’ — opportunities for action; imagine a staircase that you can climb, or an open field that you can run through.”
    Unique processes in the brain
    The team discovered that certain areas in the visual cortex become active in a way that cannot be explained by visible objects in the image. “What we saw was unique,” says Groen. “These brain areas not only represent what can be seen, but also what you can do with it.” The brain did this even when participants were not given an explicit action instruction. “These action possibilities are therefore processed automatically,” says Groen. “Even if you do not consciously think about what you can do in an environment, your brain still registers it.”

    The research thus demonstrates for the first time that affordances are not only a psychological concept, but also a measurable property of our brains.
    What AI doesn’t understand yet
    The team also compared how well AI algorithms — such as image recognition models or GPT-4 — can estimate what you can do in a given environment. The models turned out to be worse than humans at predicting possible actions. “When trained specifically for action recognition, they could somewhat approximate human judgments, but the human brain patterns didn’t match the models’ internal calculations,” Groen explains.
    “Even the best AI models don’t give exactly the same answers as humans, even though it’s such a simple task for us,” Groen says. “This shows that our way of seeing is deeply intertwined with how we interact with the world. We connect our perception to our experience in a physical world. AI models can’t do that because they only exist in a computer.”
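    As a rough illustration of the kind of comparison described above (assumed for exposition, not the study’s actual analysis pipeline), one could correlate human affordance judgments with a vision model’s predicted action probabilities across a set of images; every value below is a random placeholder:

    ```python
    # Illustrative sketch only: comparing human affordance judgments with a
    # model's predicted action probabilities. All data here are random
    # placeholders standing in for real ratings and model outputs.
    import numpy as np

    ACTIONS = ["walk", "cycle", "drive", "swim", "boat", "climb"]
    n_images = 100

    rng = np.random.default_rng(0)
    human = rng.random((n_images, len(ACTIONS)))  # fraction of participants choosing each action
    model = rng.random((n_images, len(ACTIONS)))  # a model's predicted probability per action

    # One simple agreement measure: per-action correlation across images.
    for i, action in enumerate(ACTIONS):
        r = np.corrcoef(human[:, i], model[:, i])[0, 1]
        print(f"{action:>6}: r = {r:+.2f}")
    ```

    The study’s actual comparison also involved relating the models’ internal activations to the measured brain patterns, which goes well beyond this toy correlation.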
    AI can still learn from the human brain
    The research thus touches on larger questions about the development of reliable and efficient AI. “As more sectors — from healthcare to robotics — use AI, it is becoming important that machines not only recognize what something is, but also understand what it can do,” Groen explains. “For example, a robot that has to find its way in a disaster area, or a self-driving car that can tell a bike path from a driveway.”
    Groen also points to the sustainability aspect of AI. “Current AI training methods use a huge amount of energy and are often only accessible to large tech companies. More knowledge about how our brain works, and how the human brain processes certain information very quickly and efficiently, can help make AI smarter, more economical and more human-friendly.” More

  •

    Half of today’s jobs could vanish—Here’s how smart countries are future-proofing workers

    Artificial intelligence is spreading into many aspects of life, from communications and advertising to grading tests. But with the growth of AI comes a shake-up in the workplace.
    New research from the University of Georgia is shedding light on how different countries are preparing for how AI will impact their workforces.
    According to previous research, almost half of today’s jobs could vanish over the next 20 years. But it’s not all doom and gloom.
    Researchers also estimate that 65% of current elementary school students will have jobs in the future that don’t exist now. Most of these new careers will require advanced AI skills and knowledge.
    “Human soft skills, such as creativity, collaboration and communication cannot be replaced by AI.” — Lehong Shi, College of Education
    To tackle these challenges, governments around the world are taking steps to help their citizens gain the skills they’ll need. The present study examined 50 countries’ national AI strategies, focusing on policies for education and the workforce.
    Learning what other countries are doing could help the U.S. improve its own plans for workforce preparation in the era of AI, the researcher said.

    “AI skills and competencies are very important,” said Lehong Shi, author of the study and an assistant research scientist at UGA’s Mary Frances Early College of Education. “If you want to be competitive in other areas, it’s very important to prepare employees to work with AI in the future.”
    Some countries put a greater focus on training, education
    Shi used six indicators to evaluate how much each country prioritized AI workforce training and education: the plan’s objective, how goals will be reached, examples of projects, how success will be measured, how projects will be supported and the timelines for each project.
    Each nation was classified as giving high, medium or low priority to preparing an AI-competent workforce, depending on how thoroughly each aspect of its plan was detailed.
    Of the countries studied, only 13 gave high priority to training the current workforce and improving AI education in schools. Eleven of those were European countries, with Mexico and Australia being the two exceptions. This may be because European nations tend to have more resources for training and cultures of lifelong learning, the researcher said.
    The United States was one of 23 countries that considered workforce training and AI education a medium priority, with a less detailed plan compared to countries that saw them as a high priority.

    Different countries prioritize different issues when it comes to AI preparation
    Some common themes emerged between countries, even when their approaches to AI differed. For example, almost every nation aimed to establish or improve AI-focused programs in universities. Some also aimed to improve AI education for K-12 students.
    On-the-job training was also a priority for more than half the countries, with some offering industry-specific training programs or internships. However, few focused on vulnerable populations such as the elderly or unemployed through programs to teach them basic AI skills.
    Shi stressed that just because a country gives lower priority to education and workforce preparation doesn’t mean AI isn’t on its radar. Some Asian countries, for example, put more effort into improving national security and health care rather than education.
    Cultivating interest in AI could help students prepare for careers
    Some countries took a lifelong approach to developing these specialized skills. Germany, for instance, emphasized creating a culture that encourages interest in AI. Spain started teaching kids AI-related skills as early as preschool.
    Of the many actions governments took, Shi noted one area that needs more emphasis when preparing future AI-empowered workplaces. “Human soft skills, such as creativity, collaboration and communication cannot be replaced by AI,” Shi said. “And they were only mentioned by a few countries.”
    Developing these sorts of “soft skills” is key to making sure students and employees continue to have a place in the workforce.
    This study was published in Human Resource Development Review. More

  •

    Quantum breakthrough: ‘Magic states’ now easier, faster, and way less noisy

    For decades, quantum computers that perform calculations millions of times faster than conventional computers have remained a tantalizing yet distant goal. However, a new breakthrough in quantum physics may have just sped up the timeline.
    In an article published in PRX Quantum, researchers from the Graduate School of Engineering Science and the Center for Quantum Information and Quantum Biology at The University of Osaka devised a method that can be used to prepare high-fidelity “magic states” for use in quantum computers with dramatically less overhead and unprecedented accuracy.
    Quantum computers harness the fantastic properties of quantum mechanics, such as entanglement and superposition, to perform calculations much more efficiently than classical computers can. Such machines could catalyze innovations in fields as diverse as engineering, finance, and biotechnology. But before this can happen, there is a significant obstacle that must be overcome.
    “Quantum systems have always been extremely susceptible to noise,” says lead researcher Tomohiro Itogawa. “Even the slightest perturbation in temperature or a single wayward photon from an external source can easily ruin a quantum computer setup, making it useless. Noise is absolutely the number one enemy of quantum computers.”
    Thus, scientists have become very interested in building so-called fault-tolerant quantum computers, which are robust enough to continue computing accurately even when subject to noise. Magic state distillation, in which a single high-fidelity quantum state is prepared from many noisy ones, is a popular method for creating such systems. But there is a catch.
    “The distillation of magic states is traditionally a very computationally expensive process because it requires many qubits,” explains Keisuke Fujii, senior author. “We wanted to explore if there was any way of expediting the preparation of the high-fidelity states necessary for quantum computation.”
    Following this line of inquiry, the team was inspired to create a “level-zero” version of magic state distillation, in which a fault-tolerant circuit is developed at the physical qubit, or “zeroth,” level as opposed to higher, more abstract levels. In addition to requiring far fewer qubits, the new method reduced spatial and temporal overhead by roughly a factor of several dozen compared with the traditional version in numerical simulations.
    Itogawa and Fujii are optimistic that the era of quantum computing is not as far off as we imagine. Whether one calls it magic or physics, this technique certainly marks an important step toward the development of larger-scale quantum computers that can withstand noise. More
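    As a footnote for readers unfamiliar with the term, here is a minimal, purely illustrative sketch (an assumption for exposition, not the paper’s protocol) of what a “magic state” is and of the fidelity figure that distillation aims to raise:

    ```python
    # Illustrative only: the |T> "magic state" and the fidelity of a noisy copy,
    # the quantity that magic state distillation trades many noisy copies to improve.
    import numpy as np

    # Ideal magic state |T> = (|0> + e^{i*pi/4}|1>) / sqrt(2)
    t = np.array([1.0, np.exp(1j * np.pi / 4)]) / np.sqrt(2)
    rho_ideal = np.outer(t, t.conj())

    # A noisy copy: the ideal state mixed with depolarizing noise (placeholder rate).
    p = 0.05
    rho_noisy = (1 - p) * rho_ideal + p * np.eye(2) / 2

    # Fidelity <T|rho|T> of the noisy copy with respect to the ideal state.
    fidelity = np.real(t.conj() @ rho_noisy @ t)
    print(f"fidelity of noisy copy: {fidelity:.4f}")  # 0.9750 for p = 0.05
    ```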

  •

    MIT’s tiny 5G receiver could make smart devices last longer and work anywhere

    MIT researchers have designed a compact, low-power receiver for 5G-compatible smart devices that is about 30 times more resilient to a certain type of interference than some traditional wireless receivers.
    The low-cost receiver would be ideal for battery-powered internet of things (IoT) devices like environmental sensors, smart thermostats, or other devices that need to run continuously for a long time, such as health wearables, smart cameras, or industrial monitoring sensors.
    The researchers’ chip uses a passive filtering mechanism that consumes less than a milliwatt of static power while protecting both the input and output of the receiver’s amplifier from unwanted wireless signals that could jam the device.
    Key to the new approach is a novel arrangement of precharged, stacked capacitors, which are connected by a network of tiny switches. These minuscule switches need much less power to be turned on and off than those typically used in IoT receivers.
    The receiver’s capacitor network and amplifier are carefully arranged to leverage a phenomenon in amplification that allows the chip to use much smaller capacitors than would typically be necessary.
    “This receiver could help expand the capabilities of IoT gadgets. Smart devices like health monitors or industrial sensors could become smaller and have longer battery lives. They would also be more reliable in crowded radio environments, such as factory floors or smart city networks,” says Soroush Araei, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on the receiver.
    He is joined on the paper by Mohammad Barzgari, a postdoc in the MIT Research Laboratory of Electronics (RLE); Haibo Yang, an EECS graduate student; and senior author Negar Reiskarimian, the X-Window Consortium Career Development Assistant Professor in EECS at MIT and a member of the Microsystems Technology Laboratories and RLE. The research was recently presented at the IEEE Radio Frequency Integrated Circuits Symposium.

    A new standard
    A receiver acts as the intermediary between an IoT device and its environment. Its job is to detect and amplify a wireless signal, filter out any interference, and then convert it into digital data for processing.
    Traditionally, IoT receivers operate on fixed frequencies and suppress interference using a single narrow-band filter, which is simple and inexpensive.
    But the new technical specifications of the 5G mobile network enable reduced-capability devices that are more affordable and energy-efficient. This opens a range of IoT applications to the faster data speeds and increased network capability of 5G. These next-generation IoT devices need receivers that can tune across a wide range of frequencies while still being cost-effective and low-power.
    “This is extremely challenging because now we need to not only think about the power and cost of the receiver, but also flexibility to address numerous interferers that exist in the environment,” Araei says.
    To reduce the size, cost, and power consumption of an IoT device, engineers can’t rely on the bulky, off-chip filters that are typically used in devices that operate on a wide frequency range.

    One solution is to use a network of on-chip capacitors that can filter out unwanted signals. But these capacitor networks are prone to a special type of signal noise known as harmonic interference.
    In prior work, the MIT researchers developed a novel switch-capacitor network that targets these harmonic signals as early as possible in the receiver chain, filtering out unwanted signals before they are amplified and converted into digital bits for processing.
    Shrinking the circuit
    Here, they extended that approach by using the novel switch-capacitor network as the feedback path in an amplifier with negative gain. This configuration leverages the Miller effect, a phenomenon that enables small capacitors to behave like much larger ones.
    “This trick lets us meet the filtering requirement for narrow-band IoT without physically large components, which drastically shrinks the size of the circuit,” Araei says.
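    As a back-of-the-envelope sketch of the Miller effect the chip exploits, a feedback capacitor around an inverting amplifier looks roughly (1 + |A|) times larger at the input; the capacitance and gain values below are assumed for illustration and are not taken from the paper:

    ```python
    # Hypothetical numbers illustrating the Miller effect: a small feedback
    # capacitor around an amplifier with negative gain -A presents an effective
    # input capacitance of about C * (1 + A).
    C_feedback_pF = 1.0   # small on-chip capacitor (assumed value)
    gain_A = 20.0         # magnitude of the amplifier's negative gain (assumed)

    C_effective_pF = C_feedback_pF * (1 + gain_A)
    print(f"{C_feedback_pF:.1f} pF in feedback behaves like ~{C_effective_pF:.0f} pF at the input")
    ```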
    Their receiver has an active area of less than 0.05 square millimeters.
    One challenge the researchers had to overcome was determining how to apply enough voltage to drive the switches while keeping the overall power supply of the chip at only 0.6 volts.
    In the presence of interfering signals, such tiny switches can turn on and off in error, especially if the voltage required for switching is extremely low.
    To address this, the researchers came up with a novel solution, using a special circuit technique called bootstrap clocking. This method boosts the control voltage just enough to ensure the switches operate reliably while using less power and fewer components than traditional clock boosting methods.
    Taken together, these innovations enable the new receiver to consume less than a milliwatt of power while blocking about 30 times more harmonic interference than traditional IoT receivers.
    “Our chip also is very quiet, in terms of not polluting the airwaves. This comes from the fact that our switches are very small, so the amount of signal that can leak out of the antenna is also very small,” Araei adds.
    Because their receiver is smaller than traditional devices and relies on switches and precharged capacitors instead of more complex electronics, it could be more cost-effective to fabricate. In addition, since the receiver design can cover a wide range of signal frequencies, it could be implemented on a variety of current and future IoT devices.
    Now that they have developed this prototype, the researchers want to enable the receiver to operate without a dedicated power supply, perhaps by harvesting Wi-Fi or Bluetooth signals from the environment to power the chip.
    This research is supported, in part, by the National Science Foundation. More

  •

    Scientists create ‘universal translator’ for quantum tech

    UBC researchers are proposing a solution to a key hurdle in quantum networking: a device that can “translate” microwave to optical signals and vice versa.
    The technology could serve as a universal translator for quantum computers — enabling them to talk to each other over long distances and converting up to 95 per cent of a signal with virtually no noise. And it all fits on a silicon chip, the same material found in everyday computers.
    “It’s like finding a translator that gets nearly every word right, keeps the message intact and adds no background chatter,” says study author Mohammad Khalifa, who conducted the research during his PhD at UBC’s faculty of applied science and the UBC Blusson Quantum Matter Institute.
    “Most importantly, this device preserves the quantum connections between distant particles and works in both directions. Without that, you’d just have expensive individual computers. With it, you get a true quantum network.”
    How it works
    Quantum computers process information using microwave signals. But to send that information across cities or continents, it needs to be converted into optical signals that travel through fibre optic cables. These signals are so fragile that even tiny disturbances during translation can destroy them.
    That’s a problem for entanglement, the phenomenon quantum computers rely on, where two particles remain connected regardless of distance. Einstein called it “spooky action at a distance.” Losing that connection means losing the quantum advantage. The UBC device, described in npj Quantum Information, could enable long-distance quantum communication while preserving these entangled links.

    The silicon solution
    The team’s model is a microwave-optical photon converter that can be fabricated on a silicon wafer. The breakthrough lies in tiny engineered flaws, magnetic defects intentionally embedded in silicon to control its properties. When microwave and optical signals are precisely tuned, electrons in these defects convert one signal to the other without absorbing energy, avoiding the instability that plagues other transformation methods.
    The device also runs efficiently at extremely low power — just millionths of a watt. The authors outlined a practical design that uses superconducting components, materials that conduct electricity perfectly, alongside this specially engineered silicon.
    What’s next
    While the work is still theoretical, it marks an important step in quantum networking.
    “We’re not getting a quantum internet tomorrow — but this clears a major roadblock,” says the study’s senior author Dr. Joseph Salfi, an assistant professor in the department of electrical and computer engineering and principal investigator at UBC Blusson QMI.
    “Currently, reliably sending quantum information between cities remains challenging. Our approach could change that: silicon-based converters could be built using existing chip fabrication technology and easily integrated into today’s communication infrastructure.”
    Eventually, quantum networks could enable virtually unbreakable online security, GPS that works indoors, and the power to tackle problems beyond today’s reach such as designing new medicines or predicting weather with dramatically improved accuracy. More

  •

    U.S. seal populations have rebounded — and so have their conflicts with humans

    More

  •

    AI at light speed: How glass fibers could replace silicon brains

    Imagine a computer that does not rely only on electronics but uses light to perform tasks faster and more efficiently. A collaboration between two research teams, from Tampere University in Finland and Université Marie et Louis Pasteur in France, has now demonstrated a novel way of processing information using light and optical fibers, opening up the possibility of building ultra-fast computers.
    The study, performed by postdoctoral researchers Dr. Mathilde Hary from Tampere University and Dr. Andrei Ermolaev from the Université Marie et Louis Pasteur, Besançon, demonstrated how laser light inside thin glass fibers can mimic the way artificial intelligence (AI) processes information. Their work investigated a particular class of computing architecture known as an extreme learning machine (ELM), an approach inspired by neural networks.
    “Instead of using conventional electronics and algorithms, computation is achieved by taking advantage of the nonlinear interaction between intense light pulses and the glass,” Hary and Ermolaev explain.
    Traditional electronics is approaching its limits in terms of bandwidth, data throughput and power consumption. AI models are growing larger and more energy-hungry, and electronics can process data only up to a certain speed. Optical fibers, on the other hand, can transform input signals at speeds thousands of times faster and amplify tiny differences via extreme nonlinear interactions to make them discernible.
    Towards efficient computing
    In their recent work, the researchers used femtosecond laser pulses (a billion times shorter than a camera flash) and an optical fiber that confines light in an area smaller than a fraction of a human hair to demonstrate the working principle of an optical ELM system. The pulses are short enough to contain a large number of different wavelengths, or colors. By sending these into the fiber with relative delays encoded according to an image, the researchers showed that the spectrum of wavelengths at the fiber’s output, transformed by the nonlinear interaction of light and glass, contains sufficient information to classify handwritten digits (like those used in the popular MNIST AI benchmark). According to the researchers, the best systems reached an accuracy of over 91%, close to state-of-the-art digital methods, in under one picosecond.
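    For readers unfamiliar with extreme learning machines, the sketch below is a minimal digital analogue (assumed for exposition, not the optical setup): a fixed random nonlinear layer followed by a linear readout trained in a single step. In the optical experiment, nonlinear pulse propagation through the fiber effectively plays the role of the fixed random layer, and only the readout is trained.

    ```python
    # Minimal digital extreme learning machine (ELM) on toy data.
    # The hidden layer is random and never trained; only the linear readout is fit.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy two-class data (a stand-in for images such as MNIST digits).
    n, d = 400, 16
    X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, d)), rng.normal(+1.0, 1.0, (n // 2, d))])
    y = np.hstack([np.zeros(n // 2), np.ones(n // 2)])

    # Fixed random nonlinear projection (the part the fiber replaces optically).
    W = rng.normal(size=(d, 200))
    H = np.tanh(X @ W)

    # Linear readout trained in closed form by ridge regression.
    ridge = 1e-3
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ y)

    accuracy = np.mean((H @ beta > 0.5) == y)
    print(f"training accuracy: {accuracy:.2%}")
    ```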
    What is remarkable is that the best results did not occur at the maximum level of nonlinear interaction or complexity, but rather arose from a delicate balance between fiber length, dispersion (the difference in propagation speed between wavelengths) and power levels.

    “Performance is not simply a matter of pushing more power through the fiber. It depends on how precisely the light is initially structured, in other words how information is encoded, and how it interacts with the fiber properties,” says Hary.
    By harnessing the potential of light, this research could pave the way towards new ways of computing while exploring routes towards more efficient architectures.
    “Our models show how dispersion, nonlinearity and even quantum noise influence performance, providing critical knowledge for designing the next generation of hybrid optical-electronic AI systems,” continues Ermolaev.
    Advancing optical nonlinearity through collaborative research in AI and photonics
    Both research teams are internationally recognized for their expertise in nonlinear light-matter interactions. Their collaboration brings together theoretical understanding and state-of-the-art experimental capabilities to harness optical nonlinearity for various applications.
    “This work demonstrates how fundamental research in nonlinear fiber optics can drive new approaches to computation. By merging physics and machine learning, we are opening new paths toward ultrafast and energy-efficient AI hardware” say Professors Goëry Genty from Tampere University and John Dudley and Daniel Brunner from the Université Marie et Louis Pasteur, who led the teams.
    The research combines nonlinear fiber optics and applied AI to explore new types of computing. In the future, the teams aim to build on-chip optical systems that can operate in real time and outside the lab. Potential applications range from real-time signal processing to environmental monitoring and high-speed AI inference.
    The project is funded by the Research Council of Finland, the French National Research Agency and the European Research Council. More

  •

    Thinking AI models emit 50x more CO2—and often for nothing

    No matter which questions we ask an AI, the model will come up with an answer. To produce this information – regardless of whether the answer is correct or not – the model uses tokens. Tokens are words or parts of words that are converted into a string of numbers that can be processed by the LLM.
    This conversion, as well as other computing processes, produces CO2 emissions. Many users, however, are unaware of the substantial carbon footprint associated with these technologies. Now, researchers in Germany have measured and compared the CO2 emissions of different, already trained, LLMs using a set of standardized questions.
    “The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions,” said Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences and first author of the Frontiers in Communication study. “We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models.”
    ‘Thinking’ AI causes most emissions
    The researchers evaluated 14 LLMs ranging from seven to 72 billion parameters on 1,000 benchmark questions across diverse subjects. Parameters determine how LLMs learn and process information.
    Reasoning models, on average, created 543.5 ‘thinking’ tokens per question, whereas concise models required just 37.7 tokens per question. Thinking tokens are additional tokens that reasoning LLMs generate before producing an answer. A higher token footprint always means higher CO2 emissions. It doesn’t, however, necessarily mean the resulting answers are more correct, as elaborate detail is not always essential for correctness.
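    To see how the token gap translates into an emissions gap, here is a rough arithmetic sketch; the per-token emission factor is an assumed placeholder, not a figure from the study:

    ```python
    # Rough arithmetic using the averages quoted above; the emission factor is hypothetical.
    reasoning_thinking_tokens = 543.5  # avg extra 'thinking' tokens per question (from the study)
    concise_answer_tokens = 37.7       # avg tokens per question for concise models (from the study)
    g_co2e_per_token = 0.005           # assumed grams of CO2 equivalent per generated token

    print(f"thinking-to-concise token ratio: {reasoning_thinking_tokens / concise_answer_tokens:.1f}x")
    print(f"thinking tokens alone: {reasoning_thinking_tokens * g_co2e_per_token:.2f} g CO2e per question")
    print(f"concise answer:        {concise_answer_tokens * g_co2e_per_token:.2f} g CO2e per question")
    ```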
    The most accurate model was the reasoning-enabled Cogito model with 70 billion parameters, reaching 84.9% accuracy. The model produced three times more CO2 emissions than similarly sized models that generated concise answers. “Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies,” said Dauner. “None of the models that kept emissions below 500 grams of CO2 equivalent achieved higher than 80% accuracy on answering the 1,000 questions correctly.” CO2 equivalent is the unit used to measure the climate impact of various greenhouse gases.

    Subject matter also resulted in significantly different levels of CO2 emissions. Questions that required lengthy reasoning processes, for example abstract algebra or philosophy, led to up to six times higher emissions than more straightforward subjects, like high school history.
    Practicing thoughtful use
    The researchers said they hope their work will cause people to make more informed decisions about their own AI use. “Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,” Dauner pointed out.
    Choice of model, for instance, can make a significant difference in CO2 emissions. For example, having DeepSeek R1 (70 billion parameters) answer 600,000 questions would create CO2 emissions equal to a round-trip flight from London to New York. Meanwhile, Qwen 2.5 (72 billion parameters) can answer more than three times as many questions (about 1.9 million) with similar accuracy rates while generating the same emissions.
    The researchers said that their results may be impacted by the choice of hardware used in the study, an emission factor that may vary regionally depending on local energy grid mixes, and the examined models. These factors may limit the generalizability of the results.
    “If users know the exact CO2 cost of their AI-generated outputs, such as casually turning themselves into an action figure, they might be more selective and thoughtful about when and how they use these technologies,” Dauner concluded. More