More stories

  • Climate change is coming for your cheese

    By altering cows’ diets, climate change can affect cheese’s nutritional value and sensory traits such as taste, color and texture. This is true at least for Cantal — a firm, unpasteurized cheese from the Auvergne region in central France, researchers report February 20 in the Journal of Dairy Science.

    Cows in this region typically graze on local grass. But as climate change causes more severe droughts, some dairy producers are adapting by shifting their cows to other feeds, such as corn. “Farmers are looking for feed with better yields than grass or that are more resilient to droughts,” but they also want to know how dietary changes affect their products, says animal scientist Matthieu Bouchon.

    For almost five months in 2021, Bouchon and colleagues at France’s National Research Institute for Agriculture, Food and Environment tested 40 dairy cows from two different breeds — simulating a drought and supplementing grass with other fodder, largely corn, in varying amounts.

    The research team tested climate-adapted diets on cows at France’s National Research Institute for Agriculture, Food and Environment. Credit: INRAE/Matthieu Bouchon

    The team sampled milk from all cows at regular intervals. Milk’s fatty acid and protein profiles impact cheese formation, melting qualities and nutrition, so the researchers chemically identified distributions of those molecules with a technique called gas chromatography. They also identified beneficial microbes in the milk by making Petri dish cultures.

    They found that a corn-based diet did not affect milk yield and even led to an estimated reduction in the greenhouse gas methane from cows’ belching. But grass-fed cows’ cheese was richer and more savory than that from cows mostly or exclusively fed corn. Grass-based diets also yielded cheese with more heart-healthy omega-3 fatty acids and higher counts of probiotic lactic acid bacteria. The authors suggest that producers whose fodder is based on corn should include fresh vegetation in it to maintain cheese quality.

  • Sharper than lightning: Oxford’s one-in-6.7-million quantum breakthrough

    Physicists at the University of Oxford have set a new global benchmark for the accuracy of controlling a single quantum bit, achieving the lowest-ever error rate for a quantum logic operation — just 0.000015%, or one error in 6.7 million operations. This record-breaking result represents nearly an order of magnitude improvement over the previous benchmark, set by the same research group a decade ago.
    To put the result in perspective: a person is more likely to be struck by lightning in a given year (a 1 in 1.2 million chance) than one of Oxford’s quantum logic gates is to make an error.
    The findings, published in Physical Review Letters, are a major advance towards having robust and useful quantum computers.
    “As far as we are aware, this is the most accurate qubit operation ever recorded anywhere in the world,” said Professor David Lucas, co-author on the paper, from the University of Oxford’s Department of Physics. “It is an important step toward building practical quantum computers that can tackle real-world problems.”
    To perform useful calculations on a quantum computer, millions of operations will need to be run across many qubits. This means that if the error rate is too high, the final result of the calculation will be meaningless. Although error correction can be used to fix mistakes, this comes at the cost of requiring many more qubits. By reducing the error, the new method reduces the number of qubits required and consequently the cost and size of the quantum computer itself.
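    As a back-of-the-envelope illustration (using only the figures quoted in this article, not calculations from the paper itself), the short sketch below checks that 0.000015% corresponds to about one error in 6.7 million operations, compares that with the lightning odds above, and shows why error correction is still needed once a computation involves millions of operations.

    ```python
    # Back-of-the-envelope check of the error figures quoted in this article.
    # Assumes errors are independent across operations (a simplification).

    p_gate = 0.000015 / 100   # 0.000015% expressed as a probability (~1.5e-07)
    print(f"about 1 error in {1 / p_gate:,.0f} operations")        # ~6.7 million

    lightning = 1 / 1.2e6     # quoted yearly odds of being struck by lightning
    print(f"lightning is ~{lightning / p_gate:.1f}x more likely than a gate error")

    # Why error correction still matters: over N operations the chance of at
    # least one error is 1 - (1 - p)^N, which grows quickly with N.
    for n_ops in (1_000_000, 100_000_000):
        p_fail = 1 - (1 - p_gate) ** n_ops
        print(f"{n_ops:>11,} operations -> {p_fail:.1%} chance of at least one error")
    ```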
    Co-lead author Molly Smith (Graduate Student, Department of Physics, University of Oxford), said: “By drastically reducing the chance of error, this work significantly reduces the infrastructure required for error correction, opening the way for future quantum computers to be smaller, faster, and more efficient. Precise control of qubits will also be useful for other quantum technologies such as clocks and quantum sensors.”
    This unprecedented level of precision was achieved using a trapped calcium ion as the qubit (quantum bit). Trapped ions are a natural choice for storing quantum information because of their long lifetimes and robustness. Unlike the conventional approach, which uses lasers, the Oxford team controlled the quantum state of the calcium ion using electronic (microwave) signals.

    This method offers greater stability than laser control and has other benefits for building a practical quantum computer. For instance, electronic control is much cheaper and more robust than lasers, and easier to integrate into ion-trapping chips. Furthermore, the experiment was conducted at room temperature and without magnetic shielding, simplifying the technical requirements for a working quantum computer.
    The previous best single-qubit error rate, set by the same Oxford team in 2014, was 1 in 1 million. The group’s expertise led to the launch of the spinout company Oxford Ionics in 2019, which has become an established leader in high-performance trapped-ion qubit platforms.
    Whilst this record-breaking result marks a major milestone, the research team caution that it is part of a larger challenge. Quantum computing requires both single- and two-qubit gates to function together. Currently, two-qubit gates still have significantly higher error rates — around 1 in 2000 in the best demonstrations to date — so reducing these will be crucial to building fully fault-tolerant quantum machines.
    The experiments were carried out at the University of Oxford’s Department of Physics by Molly Smith, Aaron Leu, Dr Mario Gely and Professor David Lucas, together with a visiting researcher, Dr Koichiro Miyanishi, from the University of Osaka’s Centre for Quantum Information and Quantum Biology.
    The Oxford scientists are part of the UK Quantum Computing and Simulation (QCS) Hub, part of the ongoing UK National Quantum Technologies Programme.

  • Photonic quantum chips are making AI smarter and greener

    One of the current hot research topics is the combination of two recent technological breakthroughs: machine learning and quantum computing. An experimental study shows that even small-scale quantum computers can boost the performance of machine learning algorithms. This was demonstrated on a photonic quantum processor by an international team of researchers at the University of Vienna. The work, recently published in Nature Photonics, shows promising new applications for optical quantum computers.
    Recent scientific breakthroughs have reshaped the development of future technologies. On the one hand, machine learning and artificial intelligence have already revolutionized our lives from everyday tasks to scientific research. On the other hand, quantum computing has emerged as a new paradigm of computation.
    The combination of these two promising fields has opened up a new line of research: Quantum Machine Learning. This field aims to find potential improvements in the speed, efficiency or accuracy of algorithms when they run on quantum platforms. Achieving such an advantage on today’s quantum computers, however, remains an open challenge.
    An international team of researchers took the next step and designed a novel experiment, carried out by scientists at the University of Vienna. The setup features a quantum photonic circuit built at the Politecnico di Milano (Italy), which runs a machine learning algorithm first proposed by researchers at Quantinuum (United Kingdom). The goal was to classify data points using a photonic quantum computer and single out the contribution of quantum effects, to understand the advantage over classical computers. The experiment showed that even small quantum processors can perform better than conventional algorithms. “We found that for specific tasks our algorithm commits fewer errors than its classical counterpart,” explains Philip Walther from the University of Vienna, lead of the project. “This implies that existing quantum computers can show good performances without necessarily going beyond the state-of-the-art technology,” adds Zhenghao Yin, first author of the publication in Nature Photonics.
    Another interesting aspect of the new research is that photonic platforms can consume less energy than standard computers. “This could prove crucial in the future, given that machine learning algorithms are becoming infeasible due to their high energy demands,” emphasizes co-author Iris Agresti.
    The researchers’ result has an impact both on quantum computation, since it identifies tasks that benefit from quantum effects, and on standard computing. Indeed, new algorithms inspired by quantum architectures could be designed to reach better performance while reducing energy consumption.

  • How outdated phones can power smart cities and save the seas

    Each year, more than 1.2 billion smartphones are produced globally. The production of electronic devices is not only energy-intensive but also consumes valuable natural resources. Additionally, the manufacturing and delivery processes release a significant amount of CO2 into the atmosphere. Meanwhile, devices are aging faster than ever — users replace their still-functional phones on average every 2 to 3 years. At best, old devices are recycled; at worst, they end up in landfills.
    Although the most sustainable solution would be to change consumer behavior and consider more carefully whether every new model truly requires replacing the old one, this is easier said than done. Rapid technological development quickly renders older devices obsolete. Therefore, alternative solutions are needed — such as extending the lifespan of devices by giving them an entirely new purpose.
    This is precisely the approach tested by researchers Huber Flores, Ulrich Norbisrath, and Zhigang Yin from the University of Tartu’s Institute of Computer Science, along with Perseverance Ngoy from the Institute of Technology and their international colleagues. “Innovation often begins not with something new, but with a new way of thinking about the old, re-imagining its role in shaping the future,” explained Huber Flores, Associate Professor of Pervasive Computing. They demonstrated that old smartphones can be successfully repurposed into tiny data centers capable of efficiently processing and storing data. They also found that building such a data center is remarkably inexpensive — around 8 euros per device.
    These tiny data centers have a wide range of applications. For example, they could be used in urban environments like bus stops to collect real-time data on the number of passengers, which could then be used to optimize public transportation networks.
    In the project’s first stage, the researchers removed the phones’ batteries and replaced them with external power sources to reduce the risk of chemical leakage into the environment. Then, four phones were connected together, fitted with 3D-printed casings and holders, and turned into a working prototype ready to be re-used, fostering sustainable practices for old electronics.
    The prototype was then successfully tested underwater, where it participated in marine life monitoring by helping to count different sea species. Normally, these kinds of tasks require a scuba diver to record video and bring it to the surface for analysis. But with the prototype, the whole process was done automatically underwater.
    The team’s results show that outdated technology doesn’t have to end up as waste. With minimal resources, these devices can be given a new purpose, contributing to the development of more environmentally friendly and sustainable digital solutions.
    “Sustainability is not just about preserving the future — it’s about reimagining the present, where yesterday’s devices become tomorrow’s opportunities,” commented Ulrich Norbisrath, Associate Professor of Software Engineering.

  • This “robot bird” flies at 45 mph through forests — with no GPS or light

    Unlike birds, which navigate unknown environments with remarkable speed and agility, drones typically rely on external guidance or pre-mapped routes. However, a groundbreaking development by Professor Fu Zhang and researchers from the Department of Mechanical Engineering in the Faculty of Engineering at the University of Hong Kong (HKU) has enabled drones and micro air vehicles (MAVs) to emulate the flight capabilities of birds more closely than ever before.
    The team has developed the Safety-Assured High-Speed Aerial Robot (SUPER), capable of flying at speeds exceeding 20 meters per second and avoiding obstacles as thin as 2.5 millimeters – such as power lines or twigs – using only onboard sensors and computing power. With a compact design featuring a wheelbase of just 280 mm and a takeoff weight of 1.5 kg, SUPER demonstrates exceptional agility, navigating dense forests at night and skillfully avoiding thin wires.
    Professor Zhang describes this invention as a game-changer in the field of drone technology, “Picture a ‘Robot Bird’ swiftly maneuvering through the forest, effortlessly dodging branches and obstacles at high speeds. This is a significant step forward in autonomous flight technology. Our system allows MAVs to navigate complex environments at high speeds with a level of safety previously unattainable. It’s like giving the drone the reflexes of a bird, enabling it to dodge obstacles in real-time while racing toward its goal.”
    The breakthrough lies in the sophisticated integration of hardware and software. SUPER uses a lightweight 3D light detection and ranging (LIDAR) sensor capable of detecting obstacles up to 70 meters away with pinpoint accuracy. This is paired with an advanced planning framework that generates two trajectories during flight: one that optimizes speed by venturing into unknown spaces and another that prioritizes safety by remaining within known, obstacle-free zones.
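    The article does not reproduce the planner itself, but the two-trajectory idea can be sketched in a few lines of code. The snippet below is a deliberately simplified, hypothetical illustration on a 2D grid (none of the function names or geometry come from the SUPER system): it plans a fast trajectory straight toward the goal that may cross unmapped space, keeps a backup trajectory confined to cells the sensor has already verified as free, and commits to the fast one only when it can be certified collision-free against the known map.

    ```python
    # Simplified, hypothetical sketch of the two-trajectory idea
    # (a 2D grid toy, not the actual SUPER planner; all names are invented).
    import numpy as np

    def straight_line(start, end, n=50):
        """Sample n points along a straight segment from start to end."""
        t = np.linspace(0.0, 1.0, n)[:, None]
        return (1 - t) * np.asarray(start, float) + t * np.asarray(end, float)

    def certified_free(points, known_free):
        """True only if every sampled point lies in a cell already mapped as free."""
        cells = {tuple(c) for c in np.floor(points).astype(int)}
        return cells <= known_free

    def plan_step(pos, goal, known_free):
        """One planning cycle: prefer the fast trajectory, keep a safe fallback."""
        fast = straight_line(pos, goal)           # may cross unmapped space
        fallback_cell = min(known_free,           # known-free cell closest to goal
                            key=lambda c: np.hypot(c[0] - goal[0], c[1] - goal[1]))
        safe = straight_line(pos, (fallback_cell[0] + 0.5, fallback_cell[1] + 0.5))
        # Commit to the fast trajectory only if it is certifiably collision-free
        # within the known map; otherwise fall back to the verified-safe one.
        return ("fast", fast) if certified_free(fast, known_free) else ("safe", safe)

    # Toy map: a short corridor of cells the LIDAR has already verified as free.
    known_free = {(x, 0) for x in range(5)}
    label, _ = plan_step(pos=(0.5, 0.5), goal=(9.5, 0.5), known_free=known_free)
    print(label)  # "safe": the goal still lies beyond the mapped corridor
    ```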
    By processing LIDAR data directly as point clouds, the system significantly reduces computation time, enabling rapid decision-making even at high velocities. The technology has been tested in various real-life applications, such as the autonomous exploration of ancient sites, and has demonstrated seamless navigation in both indoor and outdoor environments.
    “The ability to avoid thin obstacles and navigate tight spaces opens up new possibilities for applications like search and rescue, where every second counts. SUPER’s robustness in various lighting conditions, including nighttime, makes it a reliable tool for round-the-clock operations,” said Mr Yunfan Ren, the lead author of the research paper.
    The research team envisions a wide range of applications for this innovative technology, including autonomous delivery, power line inspection, forest monitoring, autonomous exploration, and mapping. In search and rescue missions, MAVs equipped with SUPER technology could swiftly navigate disaster zones – such as collapsed buildings or dense forests – day and night, locating survivors or assessing hazards more efficiently than current drones. Moreover, in disaster relief scenarios, they could deliver crucial supplies to remote and inaccessible areas.

  • Scientists built a transistor that could leave silicon in the dust

    Hailed as one of the greatest inventions of the 20th century, transistors are integral components of modern electronics that amplify or switch electrical signals. As electronics become smaller, it is becoming increasingly difficult to continue scaling down silicon-based transistors. Has the development of our electronics hit a wall?
    Now, a research team led by the Institute of Industrial Science, The University of Tokyo, has sought a solution. As detailed in their new paper, presented at the 2025 Symposium on VLSI Technology and Circuits, the team ditched silicon and instead opted to create a transistor made from gallium-doped indium oxide (InGaOx). This material can be structured as a crystalline oxide, whose orderly crystal lattice is well suited for electron mobility.
    “We also wanted our crystalline oxide transistor to feature a ‘gate-all-around’ structure, whereby the gate, which turns the current on or off, surrounds the channel where the current flows,” explains Anlan Chen, lead author of the study. “By wrapping the gate entirely around the channel, we can enhance efficiency and scalability compared with traditional gates.”
    With these goals in mind, the team got to work. The researchers knew they would need to introduce impurities into the indium oxide by ‘doping’ it with gallium, giving the material more favorable electrical properties.
    “Indium oxide contains oxygen-vacancy defects, which facilitate carrier scattering and thus lower device stability,” says Masaharu Kobayashi, senior author. “We doped indium oxide with gallium to suppress oxygen vacancies and in turn improve transistor reliability.”
    The team used atomic-layer deposition to coat the channel region of a gate-all-around transistor with a thin film of InGaOx, one atomic layer at a time. After deposition, the film was heated to transform it into the crystalline structure needed for electron mobility. This process ultimately enabled the fabrication of a gate-all-around ‘metal oxide-based field-effect transistor’ (MOSFET).
    “Our gate-all-around MOSFET, containing a gallium-doped indium oxide layer, achieves high mobility of 44.5 cm²/V·s,” explains Dr Chen. “Crucially, the device demonstrates promising reliability by operating stably under applied stress for nearly three hours. In fact, our MOSFET outperformed similar devices that have previously been reported.”
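    For readers unfamiliar with the unit, carrier mobility relates drift velocity to the applied electric field (v = μE). The quick calculation below is purely illustrative: the mobility is the value quoted above, but the field strength is an arbitrary example, not a figure from the paper.

    ```python
    # Illustrative only: what a mobility of 44.5 cm^2/(V*s) means via v = mu * E.
    mu = 44.5   # cm^2/(V*s), the channel mobility reported for the InGaOx MOSFET
    E = 1.0e4   # V/cm, an arbitrary example field strength (not from the paper)
    v = mu * E  # electron drift velocity in cm/s
    print(f"drift velocity at {E:.0e} V/cm: {v:.2e} cm/s")  # ~4.45e+05 cm/s
    ```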
    The efforts shown by the team have provided the field with a new transistor design that considers the importance of both materials and structure. The research is a step towards the development of reliable, high-density electronic components suited for applications with high computational demand, such as big data and artificial intelligence. These tiny transistors promise to help next-gen technology run smoothly, making a big difference to our everyday lives.
    The article “A Gate-All-Around Nanosheet Oxide Semiconductor Transistor by Selective Crystallization of InGaOx for Performance and Reliability Enhancement” was presented at the 2025 Symposium on VLSI Technology and Circuits.

  • Trees ‘remember’ times of water abundance and scarcity

    How trees fare under drought depends heavily on their past experiences.

    In some cases, adversity breeds resilience: Spruce trees that experience long-term droughts are more resistant to future droughts, owing to an impressive ability to adjust their canopies to save water, researchers in Germany report May 16 in Plant Biology.

    On the other hand, trees may suffer when they’ve known only wet conditions and are blindsided by droughts. Pines in Switzerland, for example, have needles that appear to acclimatize to wet periods in ways that make them more vulnerable to drought, another group of scientists reported last year.

  • Guardrails, education urged to protect adolescent AI users

    The effects of artificial intelligence on adolescents are nuanced and complex, according to a report from the American Psychological Association that calls on developers to prioritize features that protect young people from exploitation, manipulation and the erosion of real-world relationships.
    “AI offers new efficiencies and opportunities, yet its deeper integration into daily life requires careful consideration to ensure that AI tools are safe, especially for adolescents,” according to the report, entitled “Artificial Intelligence and Adolescent Well-being: An APA Health Advisory.” “We urge all stakeholders to ensure youth safety is considered relatively early in the evolution of AI. It is critical that we do not repeat the same harmful mistakes made with social media.”
    The report was written by an expert advisory panel and follows two earlier APA reports, on social media use in adolescence and on healthy video content recommendations.
    The AI report notes that adolescence — which it defines as ages 10-25 — is a long development period and that age is “not a foolproof marker for maturity or psychological competence.” It is also a time of critical brain development, which argues for special safeguards aimed at younger users.
    “Like social media, AI is neither inherently good nor bad,” said APA Chief of Psychology Mitch Prinstein, PhD, who spearheaded the report’s development. “But we have already seen instances where adolescents developed unhealthy and even dangerous ‘relationships’ with chatbots, for example. Some adolescents may not even know they are interacting with AI, which is why it is crucial that developers put guardrails in place now.”
    The report makes a number of recommendations to make certain that adolescents can use AI safely. These include:
    Ensuring there are healthy boundaries with simulated human relationships. Adolescents are less likely than adults to question the accuracy and intent of information offered by a bot rather than a human.
    Creating age-appropriate defaults in privacy settings, interaction limits and content. This will involve transparency, human oversight and support, and rigorous testing, according to the report.
    Encouraging uses of AI that can promote healthy development. AI can assist in brainstorming, creating, summarizing and synthesizing information — all of which can make it easier for students to understand and retain key concepts, the report notes. But it is critical for students to be aware of AI’s limitations.
    Limiting access to and engagement with harmful and inaccurate content. AI developers should build in protections to prevent adolescents’ exposure to harmful content.
    Protecting adolescents’ data privacy and likenesses. This includes limiting the use of adolescents’ data for targeted advertising and the sale of their data to third parties.
    The report also calls for comprehensive AI literacy education, integrating it into core curricula and developing national and state guidelines for literacy education.
    “Many of these changes can be made immediately, by parents, educators and adolescents themselves,” Prinstein said. “Others will require more substantial changes by developers, policymakers and other technology professionals.”
    Report: https://www.apa.org/topics/artificial-intelligence-machine-learning/health-advisory-ai-adolescent-well-being
    In addition to the report, APA.org offers further resources and guidance for parents on AI and keeping teens safe, and for teens on AI literacy.