More stories


    Mathematical paradoxes demonstrate the limits of AI

    Humans are usually pretty good at recognising when they get things wrong, but artificial intelligence systems are not. According to a new study, AI generally suffers from inherent limitations due to a century-old mathematical paradox.
    Like some people, AI systems often have a degree of confidence that far exceeds their actual abilities. And like an overconfident person, many AI systems don’t know when they’re making mistakes. Sometimes it’s even more difficult for an AI system to realise when it’s making a mistake than to produce a correct result.
    Researchers from the University of Cambridge and the University of Oslo say that instability is the Achilles’ heel of modern AI and that a mathematical paradox shows AI’s limitations. Neural networks, the state-of-the-art tool in AI, roughly mimic the links between neurons in the brain. The researchers show that there are problems where stable and accurate neural networks exist, yet no algorithm can produce such a network. Only in specific cases can algorithms compute stable and accurate neural networks.
    The researchers propose a classification theory describing when neural networks can be trained to provide a trustworthy AI system under certain conditions. Their results are reported in the Proceedings of the National Academy of Sciences.
    Deep learning, the leading AI technology for pattern recognition, has been the subject of numerous breathless headlines. Examples include diagnosing disease more accurately than physicians or preventing road accidents through autonomous driving. However, many deep learning systems are untrustworthy and easy to fool.
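    To make the notion of "easy to fool" concrete, here is a minimal sketch (not the technique analysed in the paper) of the standard fast-gradient-sign method, one common way a trained network is shown to be unstable; `model`, `x` and `y` are assumed to be a differentiable PyTorch classifier, an input batch and its labels.

    ```python
    # Hedged illustration of instability: a tiny, bounded change to the input
    # often flips a network's prediction.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, eps=0.01):
        """Return an input nudged by at most eps per element in the direction
        that increases the loss, illustrating how small perturbations can
        change the model's output."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).detach()
    ```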
    “Many AI systems are unstable, and it’s becoming a major liability, especially as they are increasingly used in high-risk areas such as disease diagnosis or autonomous vehicles,” said co-author Professor Anders Hansen from Cambridge’s Department of Applied Mathematics and Theoretical Physics. “If AI systems are used in areas where they can do real harm if they go wrong, trust in those systems has got to be the top priority.”
    The paradox identified by the researchers traces back to two 20th-century mathematical giants: Alan Turing and Kurt Gödel. At the beginning of the 20th century, mathematicians attempted to justify mathematics as the ultimate consistent language of science. However, Turing and Gödel showed a paradox at the heart of mathematics: it is impossible to prove whether certain mathematical statements are true or false, and some computational problems cannot be tackled with algorithms. And whenever a mathematical system is rich enough to describe the arithmetic we learn at school, it cannot prove its own consistency.


    Public transport: AI assesses resilience of timetables

    A brief traffic jam, a stuck door, or many passengers getting on and off at a stop — even small delays in the timetables of trains and buses can lead to major problems. A new artificial intelligence (AI) could help design schedules that are less susceptible to such minor disruptions. It was developed by a team from Martin Luther University Halle-Wittenberg (MLU), the Fraunhofer Institute for Industrial Mathematics ITWM and the University of Kaiserslautern. The study was published in “Transportation Research Part C: Emerging Technologies.”
    The team was looking for an efficient way to test how well timetables can compensate for minor, unavoidable disruptions and delays. In technical terms, this is called robustness. Until now, such timetable optimisations have required elaborate computer simulations that calculate the routes of a large number of passengers under different scenarios. A single simulation can easily take several minutes of computing time. However, many thousands of such simulations are needed to optimise timetables. “Our new method enables a timetable’s robustness to be very accurately estimated within milliseconds,” says Professor Matthias Müller-Hannemann from the Institute of Computer Science at MLU. The researchers from Halle and Kaiserslautern used numerous methods for evaluating timetables in order to train their artificial intelligence. The team tested the new AI using timetables for Göttingen and part of southern Lower Saxony and achieved very good results.
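    As a rough illustration of the idea (a minimal sketch, not the MLU/ITWM/Kaiserslautern model): robustness scores obtained from expensive offline simulations can be used to train a fast surrogate regressor, which then scores new candidate timetables in milliseconds. The feature vectors and scores below are purely hypothetical placeholders.

    ```python
    # Surrogate-model sketch: learn simulation-derived robustness scores from
    # timetable features, then predict scores for new candidates instantly.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)

    # Hypothetical training data: one row per simulated timetable
    # (e.g. buffer times, transfer slack) and its simulated robustness score.
    X_train = rng.random((1000, 8))
    y_train = rng.random(1000)

    surrogate = GradientBoostingRegressor().fit(X_train, y_train)

    # During optimisation, score thousands of candidate timetables at once.
    candidates = rng.random((5000, 8))
    predicted_robustness = surrogate.predict(candidates)
    ```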
    “Delays are unavoidable. They happen, for example, when there is a traffic jam during rush hour, when a door of the train jams, or when a particularly large number of passengers get on or off at a stop,” Müller-Hannemann says. When transfers are tightly scheduled, even a few minutes of delay can lead to travellers missing their connections. “In the worst case, they miss the last connection of the day,” adds co-author Ralf Rückert. Another consequence is that vehicle rotations can be disrupted so that follow-on journeys begin with a delay and the problem continues to grow.
    There are limited ways to counteract such delays ahead of time: travel times between stops and waiting times at stops could be calculated more generously, and larger time buffers could be planned at terminal stops and between subsequent trips. However, all of this comes at the expense of economic efficiency. The new method could now help optimise timetables so that a very good balance can be achieved between passenger needs, such as fast connections and few transfers, timetable robustness against disruptions, and the economic constraints of the transport companies.
    The study was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within the framework of the research unit “Integrated Planning for Public Transport.”
    Story Source:
    Materials provided by Martin-Luther-Universität Halle-Wittenberg. Note: Content may be edited for style and length.


    BirdBot is energy-efficient thanks to nature as a model

    If a Tyrannosaurus rex living 66 million years ago had a leg structure similar to that of an ostrich running in the savanna today, then we can assume that bird legs have stood the test of time — a good example of evolutionary selection.
    Graceful, elegant, powerful — flightless birds like the ostrich are a mechanical wonder. Ostriches, some of which weigh over 100 kg, run through the savanna at up to 55 km/h. The ostrich’s outstanding locomotor performance is thought to be enabled by the animal’s leg structure. Unlike humans, birds fold their feet back when pulling their legs up towards their bodies. Why do the animals do this? Why is this foot movement pattern energy-efficient for walking and running? And can the bird’s leg structure, with all its bones, muscles, and tendons, be transferred to walking robots?
    Alexander Badri-Spröwitz has spent more than five years on these questions. At the Max Planck Institute for Intelligent Systems (MPI-IS), he leads the Dynamic Locomotion Group. His team works at the interface between biology and robotics in the field of biomechanics and neurocontrol. The dynamic locomotion of animals and robots is the group’s main focus.
    Together with his doctoral student Alborz Aghamaleki Sarvestani, Badri-Spröwitz has constructed a robot leg that, like its natural model, is energy-efficient: BirdBot needs fewer motors than other machines and could, theoretically, scale to large size. On March 16th, Badri-Spröwitz, Aghamaleki Sarvestani, the roboticist Metin Sitti, a director at MPI-IS, and biology professor Monica A. Daley of the University of California, Irvine, published their research in the journal Science Robotics.
    Compliant spring-tendon network made of muscles and tendons
    When walking, humans pull their feet up and bend their knees, but their feet and toes point forward almost unchanged. Birds are different — in the swing phase, they fold their feet backward. But what is the function of this motion? Badri-Spröwitz and his team attribute this movement to a mechanical coupling. “It’s not the nervous system, it’s not electrical impulses, it’s not muscle activity,” Badri-Spröwitz explains. “We hypothesized a new function of the foot-leg coupling through a network of muscles and tendons that extends across multiple joints. These multi-joint muscle-tendon structures coordinate foot folding in the swing phase. In our robot, we have implemented the coupled mechanics in the leg and foot, which enables energy-efficient and robust robot walking. Our results demonstrating this mechanism in a robot lead us to believe that similar efficiency benefits also hold true for birds,” he explains.


    Scientists devise new technique to increase chip yield from semiconductor wafer

    Scientists from the Nanyang Technological University, Singapore (NTU Singapore) and the Korea Institute of Machinery & Materials (KIMM) have developed a technique to create a highly uniform and scalable semiconductor wafer, paving the way to higher chip yield and more cost-efficient semiconductors.
    Semiconductor chips commonly found in smartphones and computers are difficult and complex to make, requiring highly advanced machines and special environments to manufacture.
    Chips are typically fabricated on silicon wafers, which are then diced into the small chips used in devices. However, the process is imperfect and not all chips from the same wafer work or operate as desired. These defective chips are discarded, lowering semiconductor yield while increasing production cost.
    The ability to produce uniform wafers at the desired thickness is the most important factor in ensuring that every chip fabricated on the same wafer performs correctly.
    Nanotransfer-based printing — a process that uses a polymer mould to print metal onto a substrate through pressure, or ‘stamping’ — has gained traction in recent years as a promising technology for its simplicity, relative cost-effectiveness, and high throughput.
    However, the technique uses a chemical adhesive layer, which causes negative effects, such as surface defects and performance degradation when printed at scale, as well as human health hazards. For these reasons, mass adoption of the technology, and hence its application in devices, has been limited.


    What's the prevailing opinion on social media? Look at the flocks, says researcher

    A University at Buffalo communication researcher has developed a framework for measuring the slippery concept of social media public opinion.
    These collective views on a topic or issue expressed on social media, distinct from the conclusions drawn from survey-based public opinion polling, have never been easy to measure. But the “murmuration” framework developed and tested by Yini Zhang, PhD, an assistant professor of communication in the UB College of Arts and Sciences, and her collaborators addresses challenges, such as identifying online demographics and accounting for opinion manipulation, that are characteristic of these digital battlegrounds of public discourse.
    Murmuration identifies meaningful groups of social media actors based on the “who-follows-whom” relationship. The actors attract like-minded followers to form “flocks,” which serve as the units of analysis. As opinions form and shift in response to external events, the flocks’ unfolding opinions move like the fluid murmuration of airborne starlings.
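    A minimal sketch of this idea (not the authors’ pipeline; the follower edges below are invented for illustration) is to cluster a who-follows-whom graph so that accounts following similar sets of others fall into the same group, or “flock.”

    ```python
    # Group accounts into "flocks" by community detection on a follower graph.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Directed edge (a, b) means account a follows account b (toy data).
    follows = [("u1", "a"), ("u2", "a"), ("u3", "a"),
               ("u4", "b"), ("u5", "b"), ("u6", "b"), ("u3", "b")]
    G = nx.DiGraph(follows)

    # Accounts that follow similar accounts end up in the same community.
    flocks = greedy_modularity_communities(G.to_undirected())
    for i, flock in enumerate(flocks):
        print(f"flock {i}: {sorted(flock)}")
    ```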
    The framework and the findings from an analysis of social network structure and opinion expression from over 193,000 Twitter accounts, which followed more than 1.3 million other accounts, suggest that flock membership can predict opinion and that the murmuration framework reveals distinct patterns of opinion intensity. The researchers studied Twitter because of the ability to see who is following whom, information that is not publicly accessible on other platforms.
    The results, published in the Journal of Computer-Mediated Communication, further support the echo chamber tendencies prevalent on social media, while adding important nuance to existing knowledge.
    “By identifying different flocks and examining the intensity, temporal pattern and content of their expression, we can gain deeper insights far beyond where liberals and conservatives stand on a certain issue,” says Zhang, an expert in social media and political communication. “These flocks are segments of the population, defined not by demographic variables of questionable salience, like white women aged 18-29, but by their online connections and response to events.
    “As such, we can observe opinion variations within an ideological camp and opinions of people that might not be typically assumed to have an opinion on certain issues. We see the flocks as naturally occurring, responding to things as they happen, in ways that take a conversational element into consideration.”
    Zhang says it’s important not to confuse public opinion, as measured by survey-based polling methods, with social media public opinion.
    “Arguably, social media public opinion is twice removed from the general public opinion measured by surveys,” says Zhang. “First, not everyone uses social media. Second, among those who do, only a subset of them actually express opinions on social media. They tend to be strongly opinionated and thus more willing to express their views publicly.”
    Murmuration offers insights that can complement information gathered through survey-based polling. It also moves away from mining social media for the text of specific tweets, which, once removed from its context, makes it difficult to accurately determine what led to a discussion, when it began, and how it evolved over time. Murmuration instead takes full advantage of social media’s dynamic nature.
    “Murmuration can allow for research that makes better use of social media data to study public opinion as a form of social interaction and reveal underlying social dynamics,” says Zhang.
    Story Source:
    Materials provided by University at Buffalo. Original written by Bert Gambini. Note: Content may be edited for style and length.


    Pivotal technique harnesses cutting-edge AI capabilities to model and map the natural environment

    Scientists have developed a pioneering new technique that harnesses the cutting-edge capabilities of AI to model and map the natural environment in intricate detail.
    A team of experts, including Charlie Kirkwood from the University of Exeter, has created a sophisticated new approach to modelling the Earth’s natural features with greater detail and accuracy.
    The new technique can recognise intricate features and aspects of the terrain far beyond the capabilities of more traditional methods and use these to generate enhanced-quality environmental maps.
    Crucially, the new system could also pave the way to new discoveries about relationships within the natural environment that may help tackle some of the major climate and environmental issues of the 21st century.
    The study is published in leading journal Mathematical Geosciences, as part of a special issue on geostatistics and machine learning.
    Modelling and mapping the environment is a lengthy, time-consuming and expensive process. Cost limits the number of observations that can be obtained, which means that creating comprehensive, spatially continuous maps depends on filling in the gaps between these observations.
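    A minimal sketch of the gap-filling step (a generic geostatistical baseline, not the method introduced in the paper; the coordinates and values are invented): fit a Gaussian-process regressor to sparse point observations and predict on a dense grid to obtain a spatially continuous map with uncertainty estimates.

    ```python
    # Interpolate sparse environmental observations onto a continuous map.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Sparse observations: (x, y) locations and a measured value at each.
    coords = np.array([[0.1, 0.2], [0.4, 0.8], [0.7, 0.3], [0.9, 0.9]])
    values = np.array([1.2, 3.4, 2.1, 4.0])

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(coords, values)

    # Predict on a dense grid to fill the gaps between observations.
    xx, yy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
    grid = np.column_stack([xx.ravel(), yy.ravel()])
    mapped_values, uncertainty = gp.predict(grid, return_std=True)
    ```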


    Tiny battery-free devices float in the wind like dandelion seeds

    Wireless sensors can monitor how temperature, humidity or other environmental conditions vary across large swaths of land, such as farms or forests.
    These tools could provide unique insights for a variety of applications, including digital agriculture and monitoring climate change. One problem, however, is that it is currently time-consuming and expensive to physically place hundreds of sensors across a large area.
    Inspired by how dandelions use the wind to distribute their seeds, a University of Washington team has developed a tiny sensor-carrying device that can be blown by the wind as it tumbles toward the ground. The system is about 30 times as heavy as a 1-milligram dandelion seed but can still travel up to 100 meters, about the length of a football field, in a moderate breeze from the point where it was released by a drone. Once on the ground, the device, which can hold at least four sensors, uses solar panels to power its onboard electronics and can share sensor data up to 60 meters away.
    The team published these results March 16 in Nature.
    “We show that you can use off-the-shelf components to create tiny things. Our prototype suggests that you could use a drone to release thousands of these devices in a single drop. They’ll all be carried by the wind a little differently, and basically you can create a 1,000-device network with this one drop,” said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. “This is amazing and transformational for the field of deploying sensors, because right now it could take months to manually deploy this many sensors.”
    Because the devices have electronics on board, it’s challenging to make the whole system as light as an actual dandelion seed. The first step was to develop a shape that would allow the system to take its time falling to the ground so that it could be tossed around by a breeze. The researchers tested 75 designs to determine what would lead to the smallest “terminal velocity,” or the maximum speed a device would have as it fell through the air.
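    For context, terminal velocity is reached when air drag balances the device’s weight; the standard relation (general physics, not a figure from the paper) is

    \[
    m g = \tfrac{1}{2}\,\rho\,C_d\,A\,v_t^{2}
    \qquad\Longrightarrow\qquad
    v_t = \sqrt{\frac{2 m g}{\rho\,C_d\,A}},
    \]

    where \(m\) is the device’s mass, \(g\) the gravitational acceleration, \(\rho\) the air density, \(C_d\) the drag coefficient and \(A\) the projected area. Lighter, wider designs lower \(v_t\), which is why the dandelion-like shape lets the device stay aloft long enough to drift on the breeze.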


    Toward a quantum computer that calculates molecular energy

    Quantum computers are getting bigger, but there are still few practical ways to take advantage of their extra computing power. To get over this hurdle, researchers are designing algorithms to ease the transition from classical to quantum computers. In a new study in Nature, researchers unveil an algorithm that reduces the statistical errors, or noise, produced by quantum bits, or qubits, in crunching chemistry equations.
    Developed by Columbia chemistry professor David Reichman and postdoc Joonho Lee with researchers at Google Quantum AI, the algorithm uses up to 16 qubits on Sycamore, Google’s 53-qubit computer, to calculate ground state energy, the lowest energy state of a molecule. “These are the largest quantum chemistry calculations that have ever been done on a real quantum device,” Reichman said.
    The ability to accurately calculate ground state energy will enable chemists to develop new materials, said Lee, who is also a visiting researcher at Google Quantum AI. The algorithm could be used to design materials to speed up nitrogen fixation for farming and hydrolysis for making clean energy, among other sustainability goals, he said.
    The algorithm uses quantum Monte Carlo, a family of methods for calculating probabilities when there are a large number of random, unknown variables at play, as in a game of roulette. Here, the researchers used their algorithm to determine the ground state energy of three systems: a four-hydrogen-atom system (H4), using eight qubits for the calculation; molecular nitrogen (N2), using 12 qubits; and solid diamond, using 16 qubits.
    Ground state energy is influenced by variables such as the number of electrons in a molecule, the direction in which they spin, and the paths they take as they orbit a nucleus. This electronic energy is encoded in the Schrödinger equation. Solving the equation on a classical computer becomes exponentially harder as molecules get bigger, although methods for estimating the solution have made the process easier. How quantum computers might circumvent the exponential scaling problem has been an open question in the field.
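    For reference, the equation in question is the time-independent Schrödinger equation in its eigenvalue form,

    \[
    \hat{H}\,\Psi = E\,\Psi,
    \]

    where \(\hat{H}\) is the Hamiltonian operator encoding the electrons’ kinetic and interaction energies, \(\Psi\) is the wave function and \(E\) is the energy; the ground state is the solution with the lowest \(E\).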
    In principle, quantum computers should be able to handle exponentially larger and more complex calculations, like those needed to solve the Schrodinger equation, because the qubits that make them up take advantage of quantum states. Unlike binary digits, or bits, made up of ones and zeros, qubits can exist in two states simultaneously. Qubits, however, are fragile and error-prone: the more qubits used, the less accurate the final answer. Lee’s algorithm harnesses the combined power of classical and quantum computers to solve chemistry equations more efficiently while minimizing the quantum computer’s mistakes.
    “It’s the best of both worlds,” Lee said. “We leveraged tools that we already had as well as tools that are considered state-of-the-art in quantum information science to refine quantum computational chemistry.”
    A classical computer can handle most of Lee’s quantum Monte Carlo simulation. Sycamore jumps in for the last, most computationally complex step: the calculation of the overlap between a trial wave function — a guess at the mathematical description of the ground state energy that can be implemented by the quantum computer — and a sample wave function, which is part of the Monte Carlo’s statistical process. This overlap provides a set of constraints, known as the boundary condition, to the Monte Carlo sampling, which ensures the statistical efficiency of the calculation.
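    A loose conceptual sketch of that division of labour (not the Google/Columbia implementation; every object below, from the toy “Hamiltonian” to the random samples, is an illustrative stand-in) is a classical Monte Carlo loop that defers only the trial/sample overlap to a quantum step:

    ```python
    # Toy hybrid loop: classical sampling and energy evaluation, with the
    # overlap between trial and sampled wave functions treated as the step
    # that would be delegated to the quantum processor.
    import numpy as np

    rng = np.random.default_rng(0)

    H = rng.normal(size=(8, 8))
    H = (H + H.T) / 2  # toy symmetric "Hamiltonian"

    trial_state = rng.normal(size=8) + 1j * rng.normal(size=8)
    trial_state /= np.linalg.norm(trial_state)  # toy trial wave function

    def quantum_overlap(trial, sample):
        # Stand-in for the quantity computed on the quantum device.
        return np.vdot(trial, sample)

    def local_energy(sample):
        # Classical step: a toy energy estimate for one sample.
        return float(np.real(np.vdot(sample, H @ sample)))

    # Classical Monte Carlo part: draw samples and combine their energies,
    # weighted by a positive function of the quantum-evaluated overlap.
    raw = rng.normal(size=(200, 8)) + 1j * rng.normal(size=(200, 8))
    samples = [s / np.linalg.norm(s) for s in raw]
    weights = np.array([abs(quantum_overlap(trial_state, s)) ** 2 for s in samples])
    energies = np.array([local_energy(s) for s in samples])

    print(np.sum(weights * energies) / np.sum(weights))  # toy estimate
    ```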
    The prior record for solving ground state energy used 12 qubits and a method called the variational quantum eigensolver, or VQE. But VQE ignored the effects of interacting electrons, an important variable in calculating ground state energy that Lee’s quantum Monte Carlo algorithm now includes. Adding virtual correlation techniques from classical computers could help chemists tackle even larger molecules, Lee said.
    The hybrid classical-quantum calculations in this new work were found to be as accurate as some of the best classical methods. This suggests that problems could be solved more accurately and/or quickly with a quantum computer than without — a key milestone for quantum computing. Lee and his colleagues will continue to tweak their algorithm to make it more efficient, while engineers work to build better quantum hardware.
    “The feasibility of solving larger and more challenging chemical problems will only increase with time,” Lee said. “This gives us hope that quantum technologies that are being developed will be practically useful.”
    Story Source:
    Materials provided by Columbia University. Original written by Ellen Neff. Note: Content may be edited for style and length.