More stories

  • Natural history specimens have never been so accessible

    With the help of 16 grants from the National Science Foundation, researchers have painstakingly taken computed tomography (CT) scans of more than 13,000 individual specimens to create 3D images of more than half of all the world’s animal groups, including mammals, fishes, amphibians and reptiles.
    The research team, made up of members from The University of Texas at Arlington and 25 other institutions, is now a quarter of the way through inputting nearly 30,000 media files into the open-source repository MorphoSource. This will allow researchers and scholars to share findings and improve access to material critical for scientific discovery.
    “Thanks to this exciting openVertebrate project, also called oVert, anyone — scientists, researchers, students, teachers, artists — can now look online to research the anatomy of just about any animal imaginable without leaving home,” said Gregory Pandelis, collections manager of UT Arlington’s Amphibian and Reptile Diversity Research Center. “This will help reduce wear and tear on many rare specimens while increasing access to them at the same time.”
    A summary of the project has just been published in the peer-reviewed journal BioScience; it reviews the specimens scanned to date and offers a glimpse of how the data might be used in the future.
    For example, one research team has used the data to conclude that Spinosaurus, a massive dinosaur that was larger than Tyrannosaurus rex and thought to be aquatic, would have actually been a poor swimmer, and thus likely stayed on land. Another study revealed that frogs have evolved to gain and lose the ability to grow teeth more than any other animal.
    The value of oVert extends beyond scientific inquiry. Artists are using the 3D models to create realistic animal replicas. Photographs of oVert specimens have been displayed as part of museum exhibits. In addition, specimens have been incorporated into virtual reality headsets that allow users to interact with the animals.
    Educators also are able to use oVert models in their classrooms. From the outset of the project, the research team placed a strong emphasis on K-12 outreach, organizing workshops where teachers could learn how to use the data in their classrooms.
    “As a kid who loved all things science- and nature-related and had a particular interest in skeletal anatomy, I would go to great pains to collect, preserve and study skulls and other specimens for my childhood natural history collection, the start of my scientific inspiration,” Pandelis said. “Realizing that you could study these things digitally with just a few clicks on a computer was eye-opening for me, and it opened up the path to my current research using CT scans of snake specimens to study their skull evolution. Now, this wealth of data has been opened and made publicly accessible to anyone who has a professional, recreational or educational interest in anatomy and morphology. Natural history specimens have never been so accessible and impactful.”
    In the next phase of the project, the team will create sophisticated tools to analyze the data collected. Since researchers have never before had digital access to so many 3D natural history specimens, it will take further developments in machine learning and supercomputing to use them to their full potential.

  • How surface roughness influences the adhesion of soft materials

    Adhesive tape and sticky notes are easy to attach to a surface but can be difficult to remove. This phenomenon, known as adhesion hysteresis, is fundamental to soft, elastic materials: adhesive contact forms more easily than it breaks. Researchers at the University of Freiburg, the University of Pittsburgh and the University of Akron in the US have now discovered that this adhesion hysteresis is caused by the surface roughness of the adhering soft materials. Through a combination of experimental observations and simulations, the team demonstrated that roughness interferes with the separation process, causing the materials to detach in minute, abrupt movements that release parts of the adhesive bond incrementally. Dr. Antoine Sanner and Prof. Dr. Lars Pastewka from the Department of Microsystems Engineering and the livMatS Cluster of Excellence at the University of Freiburg, Dr. Nityanshu Kumar and Prof. Dr. Ali Dhinojwala from the University of Akron, and Prof. Dr. Tevis Jacobs from the University of Pittsburgh have published their results in the journal Science Advances.
    “Our findings will make it possible to specifically control the adhesion properties of soft materials through surface roughness,” says Sanner. “They will also allow new and improved applications to be developed in soft robotics or production technology in the future, for example for grippers or placement systems.”
    Sudden jumping movement of the edge of the contact
    Until now, researchers have hypothesized that viscoelastic energy dissipation causes adhesion hysteresis in soft solids. In other words, energy is lost as heat because the material deforms during the contact cycle: it is compressed when making contact and expands during release. These energy losses counteract the movement of the contact surface, which increases the adhesive force during separation. Contact ageing, i.e. the formation of chemical bonds on the contact surface, has also been suggested as a cause: here, the longer the contact exists, the greater the adhesion. “Our simulations show that the observed hysteresis can be explained without these specific energy dissipation mechanisms. The only source of energy dissipation in our numerical model is the sudden jumping movement of the edge of the contact, which is induced by the roughness,” says Sanner.
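    The pinning picture can be sketched with a toy model (an illustration of the general mechanism only, not the team’s numerical model; the landscape and every parameter below are invented): a contact-edge coordinate dragged by an elastic spring across a corrugated energy landscape advances and recedes along different force branches, because it intermittently jumps between local minima and dissipates energy in each jump.

```python
# Toy stick-slip model of a pinned contact edge (illustration only, not the
# authors' model): an edge coordinate x is dragged by a spring of stiffness k
# across a corrugated "roughness" landscape. Because x jumps abruptly between
# local minima, the advancing and receding force branches differ -- an
# elementary picture of roughness-induced adhesion hysteresis.
import numpy as np

rng = np.random.default_rng(0)
k = 0.02                                  # spring stiffness (arbitrary units)
xs = np.linspace(0.0, 100.0, 20001)       # candidate edge positions

# "roughness": a few random-phase cosines standing in for a rough surface
qs = 2 * np.pi * np.array([0.7, 1.3, 2.9, 5.1])
amps = 0.02 / np.sqrt(qs)
phis = rng.uniform(0, 2 * np.pi, qs.size)
rough = sum(a * np.cos(q * xs + p) for a, q, p in zip(amps, qs, phis))

def settle(x_driver, x_now):
    """Slide downhill from x_now to the nearest local energy minimum
    (athermal limit: no barrier crossing, only abrupt jumps)."""
    E = 0.5 * k * (xs - x_driver) ** 2 + rough
    i = int(np.argmin(np.abs(xs - x_now)))
    while True:
        if i > 0 and E[i - 1] < E[i]:
            i -= 1
        elif i < xs.size - 1 and E[i + 1] < E[i]:
            i += 1
        else:
            return xs[i]

x = 0.0
advancing, receding = [], []
for xd in np.linspace(0, 100, 400):       # loading: contact formation
    x = settle(xd, x)
    advancing.append(k * (xd - x))        # force felt by the driver
for xd in np.linspace(100, 0, 400):       # unloading: separation
    x = settle(xd, x)
    receding.append(k * (xd - x))

# The gap between the two branches is the hysteresis: work dissipated by jumps.
print("mean force while advancing:", np.mean(advancing))
print("mean force while receding: ", np.mean(receding))
```

    In this caricature the only dissipation channel is the jumps themselves, which is exactly the point the Freiburg simulations make for realistic roughness.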
    Adhesion hysteresis calculated for realistic surface roughness
    This sudden jumping motion is clearly recognisable in the simulations of the Freiburg researchers and in the adhesion experiments of the University of Akron. “The abrupt change in the contact surface was already mentioned in the 1990s as a possible cause of adhesion hysteresis, but previous theoretical work on this was limited to simplified surface properties,” explains Kumar. “We have succeeded for the first time in calculating the adhesion hysteresis for realistic surface roughness. This is based on the efficiency of the numerical model and an extremely detailed surface characterisation carried out by researchers at the University of Pittsburgh,” says Jacobs.

  • Balancing training data and human knowledge makes AI act more like a scientist

    When you teach a child how to solve puzzles, you can either let them figure it out through trial and error, or you can guide them with some basic rules and tips. Similarly, incorporating rules and tips — such as the laws of physics — into AI training could make models more efficient and more reflective of the real world. However, helping the AI assess the value of different rules can be a tricky task.
    Researchers report March 8 in the journal Nexus that they have developed a framework for assessing the relative value of rules and data in “informed machine learning models” that incorporate both. They showed that, by doing so, they could help the AI incorporate basic laws of the real world and better navigate scientific tasks such as solving complex mathematical problems and optimizing experimental conditions in chemistry.
    “Embedding human knowledge into AI models has the potential to improve their efficiency and ability to make inferences, but the question is how to balance the influence of data and knowledge,” says first author Hao Xu of Peking University. “Our framework can be employed to evaluate different knowledge and rules to enhance the predictive capability of deep learning models.”
    Generative AI models like ChatGPT and Sora are purely data-driven — the models are given training data and teach themselves via trial and error. However, with only data to work from, these systems have no way to learn physical laws, such as gravity or fluid dynamics, and they also struggle in situations that differ from their training data. An alternative approach is informed machine learning, in which researchers provide the model with some underlying rules to help guide its training, but little is known about the relative importance of rules vs. data in driving model accuracy.
    “We are trying to teach AI models the laws of physics so that they can be more reflective of the real world, which would make them more useful in science and engineering,” says senior author Yuntian Chen of the Eastern Institute of Technology, Ningbo.
    To improve the performance of informed machine learning, the team developed a framework to calculate the contribution of an individual rule to a given model’s predictive accuracy. The researchers also examined interactions between different rules because most informed machine learning models incorporate multiple rules, and having too many rules can cause models to collapse.
    This allowed them to optimize models by tweaking the relative influence of different rules and to filter out redundant or interfering rules entirely. They also identified some rules that worked synergistically and other rules that were completely dependent on the presence of other rules.
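    The bookkeeping behind such an analysis can be illustrated with a minimal sketch (the general ablation idea only, not the authors’ framework; the regression model, the two rules and every parameter are invented for the example): fit an informed model whose loss carries a penalty term per rule, score a rule by how much the held-out error grows when it is dropped, and score a pair of rules by how far their joint effect deviates from the sum of their individual effects.

```python
# Sketch of rule-contribution accounting in an "informed" model (an
# illustration of the general idea, not the paper's implementation).
# Model: polynomial regression on noisy data. Invented example rules:
#   R1: f(0) = 0   and   R2: f(1) = 1   (both true of the ground truth x**2).
import numpy as np

rng = np.random.default_rng(1)
x_train = rng.uniform(0, 1, 15)
y_train = x_train ** 2 + 0.1 * rng.normal(size=x_train.size)
x_val = np.linspace(0, 1, 200)
y_val = x_val ** 2                        # noiseless held-out ground truth

DEG = 6                                   # deliberately over-flexible model

def features(x):
    return np.vander(x, DEG + 1, increasing=True)

def fit(rules, lam=10.0):
    """Least squares with one quadratic penalty row per active rule."""
    rows, rhs = [features(x_train)], [y_train]
    if "R1" in rules:                     # penalize (f(0) - 0)^2
        rows.append(np.sqrt(lam) * features(np.array([0.0])))
        rhs.append(np.zeros(1))
    if "R2" in rules:                     # penalize (f(1) - 1)^2
        rows.append(np.sqrt(lam) * features(np.array([1.0])))
        rhs.append(np.sqrt(lam) * np.ones(1))
    M, v = np.vstack(rows), np.concatenate(rhs)
    return np.linalg.lstsq(M, v, rcond=None)[0]

def val_error(rules):
    return np.mean((features(x_val) @ fit(rules) - y_val) ** 2)

base = val_error({"R1", "R2"})
solo = {r: val_error({"R1", "R2"} - {r}) - base for r in ("R1", "R2")}
for r, dv in solo.items():
    print(f"contribution of {r}: {dv:.2e}")  # error increase when r is dropped
# joint effect vs. sum of solo effects: synergy or redundancy between rules
joint = val_error(set()) - base
print(f"pairwise interaction: {joint - sum(solo.values()):.2e}")
```

    A rule whose removal barely changes the held-out error is redundant in this toy setting and could be filtered out; a strongly negative contribution would flag an interfering rule.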

    “We found that the rules have different kinds of relationships, and we use these relationships to make model training faster and get higher accuracy,” says Chen.
    The researchers say that their framework has broad practical applications in engineering, physics, and chemistry. In the paper, they demonstrated the method’s potential by using it to optimize machine learning models to solve multivariate equations and to predict the results of thin-layer chromatography experiments, thereby optimizing future experimental chemistry conditions.
    Next, the researchers plan to develop their framework into a plugin tool that can be used by AI developers. Ultimately, they also want to train their models so that the models can extract knowledge and rules directly from data, rather than having rules selected by human researchers.
    “We want to make it a closed loop by making the model into a real AI scientist,” says Chen. “We are working to develop a model that can directly extract knowledge from the data and then use this knowledge to create rules and improve itself.”
    This research was supported by the National Center for Applied Mathematics Shenzhen, the Shenzhen Key Laboratory of Natural Gas Hydrates, the SUSTech-Qingdao New Energy Technology Research Institute, and the National Natural Science Foundation of China.

  • The role of machine learning and computer vision in Imageomics

    A new field promises to usher in an era of using machine learning and computer vision to tackle small- and large-scale questions about the biology of organisms around the globe.
    The field of imageomics aims to help explore fundamental questions about biological processes on Earth by combining images of living organisms with computer-enabled analysis and discovery.
    Wei-Lun Chao, an investigator at The Ohio State University’s Imageomics Institute and a distinguished assistant professor of engineering inclusive excellence in computer science and engineering at Ohio State, gave an in-depth presentation about the latest research advances in the field last month at the annual meeting of the American Association for the Advancement of Science.
    Chao and two other presenters described how imageomics could transform society’s understanding of the biological and ecological world by turning research questions into computable problems. Chao’s presentation focused on imageomics’ potential application for micro to macro-level problems.
    “Nowadays we have many rapid advances in machine learning and computer vision techniques,” said Chao. “If we use them appropriately, they could really help scientists solve critical but laborious problems.”
    While some research problems might take years or decades to solve manually, imageomics researchers suggest that with the aid of machine learning and computer vision techniques — such as pattern recognition and multi-modal alignment — the rate and efficiency of next-generation scientific discovery could increase dramatically.
    “If we can incorporate the biological knowledge that people have collected over decades and centuries into machine learning techniques, we can help improve their capabilities in terms of interpretability and scientific discovery,” said Chao.

    One of the ways Chao and his colleagues are working toward this goal is by creating foundation models in imageomics that leverage data from all kinds of sources to enable various tasks. Another is to develop machine learning models capable of identifying and even discovering traits that help computers recognize and classify objects in images, which is the approach Chao’s team took.
    “Traditional methods for image classification with trait detection require a huge amount of human annotation, but our method doesn’t,” said Chao. “We were inspired to develop our algorithm through how biologists and ecologists look for traits to differentiate various species of biological organisms.”
    Conventional machine learning-based image classifiers achieve a great level of accuracy by analyzing an image as a whole and then labeling it as a certain object category. However, Chao’s team takes a more proactive approach: their method teaches the algorithm to actively look for traits, like colors and patterns, that are specific to an object’s class — such as its animal species — while the image is being analyzed.
    This way, imageomics can offer biologists a much more detailed account of what is and isn’t revealed in an image, paving the way to quicker and more accurate visual analysis. Most excitingly, Chao said, the method has been shown to handle very challenging fine-grained recognition tasks, such as identifying butterfly mimics, whose appearance is characterized by fine detail and variety in wing patterns and coloring.
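    The flavor of such a trait-forward design can be sketched with a generic prototype-style classifier (a sketch in the spirit of the description above, not necessarily the team’s actual architecture; every layer size and name below is invented): learn a bank of trait detectors over a convolutional feature map, record where and how strongly each trait fires, and classify from that trait evidence rather than from a single whole-image embedding.

```python
# Generic "detect traits, then classify from trait evidence" model in PyTorch
# (an illustrative sketch, not the actual architecture from Chao's team).
import torch
import torch.nn as nn

class TraitClassifier(nn.Module):
    def __init__(self, n_traits=32, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(        # tiny stand-in CNN backbone
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # each trait is a learned 1x1 detector applied across the feature map
        self.traits = nn.Conv2d(128, n_traits, kernel_size=1)
        # class logits are a linear readout of which traits were found
        self.classify = nn.Linear(n_traits, n_classes)

    def forward(self, x):
        fmap = self.backbone(x)                     # (B, 128, H, W)
        trait_maps = self.traits(fmap)              # where each trait fires
        # a trait counts as present if it fires strongly anywhere in the image
        trait_scores = trait_maps.amax(dim=(2, 3))  # (B, n_traits)
        return self.classify(trait_scores), trait_maps

model = TraitClassifier()
logits, trait_maps = model(torch.randn(2, 3, 64, 64))
print(logits.shape, trait_maps.shape)  # class scores + inspectable trait maps
```

    The returned trait maps are what make such a model inspectable: they show which image regions supplied the evidence for each class, the kind of detailed account described above.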
    The ease with which the algorithm can be used could also allow imageomics to be applied to a variety of other purposes, ranging from climate research to materials science, he said.
    Chao said that one of the most challenging parts of fostering imageomics research is integrating different parts of scientific culture to collect enough data and form novel scientific hypotheses from them.
    It’s one of the reasons why collaboration between different types of scientists and disciplines is such an integral part of the field, he said. Imageomics research will continue to evolve, but for now, Chao is enthusiastic about its potential to allow for the natural world to be seen and understood in brand-new, interdisciplinary ways.
    “What we really want is for AI to have strong integration with scientific knowledge, and I would say imageomics is a great starting point towards that,” he said.
    Chao’s AAAS presentation, titled “An Imageomics Perspective of Machine Learning and Computer Vision: Micro to Global,” was part of the session “Imageomics: Powering Machine Learning for Understanding Biological Traits.”

  • Method rapidly verifies that a robot will avoid collisions

    Before a robot can grab dishes off a shelf to set the table, it must ensure its gripper and arm won’t crash into anything and potentially shatter the fine china. As part of its motion planning process, a robot typically runs “safety check” algorithms that verify its trajectory is collision-free.
    However, sometimes these algorithms generate false positives, claiming a trajectory is safe when the robot would actually collide with something. Other methods that can avoid false positives are typically too slow for robots in the real world.
    Now, MIT researchers have developed a safety check technique that can prove with 100 percent accuracy that a robot’s trajectory will remain collision-free (assuming the model of the robot and environment is itself accurate). Their method, which is so precise it can discriminate between trajectories that differ by only millimeters, provides proof in only a few seconds.
    But a user doesn’t need to take the researchers’ word for it — the mathematical proof generated by this technique can be checked quickly with relatively simple math.
    The researchers accomplished this using a special algorithmic technique, called sum-of-squares programming, and adapted it to effectively solve the safety check problem. Using sum-of-squares programming enables their method to generalize to a wide range of complex motions.
    This technique could be especially useful for robots that must move rapidly while avoiding collisions in spaces crowded with objects, such as food preparation robots in a commercial kitchen. It is also well-suited to situations where robot collisions could cause injuries, like home health robots that care for frail patients.
    “With this work, we have shown that you can solve some challenging problems with conceptually simple tools. Sum-of-squares programming is a powerful algorithmic idea, and while it doesn’t solve every problem, if you are careful in how you apply it, you can solve some pretty nontrivial problems,” says Alexandre Amice, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique.

    Amice is joined on the paper by fellow EECS graduate student Peter Werner and senior author Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The work will be presented at the International Conference on Robotics and Automation.
    Certifying safety
    Many existing methods that check whether a robot’s planned motion is collision-free do so by simulating the trajectory and checking every few seconds to see whether the robot hits anything. But these static safety checks can’t tell if the robot will collide with something in the intermediate seconds.
    This might not be a problem for a robot wandering around an open space with few obstacles, but for robots performing intricate tasks in small spaces, a few seconds of motion can make an enormous difference.
    Conceptually, one way to prove that a robot is not headed for a collision would be to hold up a piece of paper that separates the robot from any obstacles in the environment. Mathematically, this piece of paper is called a hyperplane. Many safety check algorithms work by generating this hyperplane at a single point in time. However, each time the robot moves, a new hyperplane needs to be recomputed to perform the safety check.
    Instead, this new technique generates a hyperplane function that moves with the robot, so it can prove that an entire trajectory is collision-free rather than working one hyperplane at a time.

    The researchers used sum-of-squares programming, an algorithmic toolbox that can effectively turn a static problem into a function. This function is an equation that describes where the hyperplane needs to be at each point in the planned trajectory so it remains collision-free.
    Sum-of-squares can generalize the optimization program to find a family of collision-free hyperplanes. Often, sum-of-squares is considered a heavy optimization that is only suitable for offline use, but the researchers have shown that for this problem it is extremely efficient and accurate.
    “The key here was figuring out how to apply sum-of-squares to our particular problem. The biggest challenge was coming up with the initial formulation. If I don’t want my robot to run into anything, what does that mean mathematically, and can the computer give me an answer?” Amice says.
    In the end, as the name suggests, sum-of-squares produces a function that is the sum of several squared terms. Such a function is always nonnegative, since the square of any number can never be negative.
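    The certificate search itself is a small semidefinite program. As a minimal sketch of that idea (using an off-the-shelf convex solver, not the researchers’ code; the example polynomial is invented), one can certify that a univariate polynomial is a sum of squares by finding a positive semidefinite Gram matrix that reproduces its coefficients:

```python
# p(t) is a sum of squares iff p(t) = z(t)^T Q z(t) for some positive
# semidefinite Gram matrix Q, where z(t) = [1, t, ..., t^d]. Finding Q is a
# small semidefinite program; cvxpy is used here. Example polynomial invented.
import cvxpy as cp
import numpy as np

p = [2.0, 0.0, -2.0, 0.0, 1.0]   # p(t) = 2 - 2t^2 + t^4 = (t^2 - 1)^2 + 1
d = (len(p) - 1) // 2            # z(t) holds the d + 1 monomials 1..t^d

Q = cp.Variable((d + 1, d + 1), PSD=True)

# match coefficients: the coefficient of t^k in z^T Q z sums Q[i, j] over i+j=k
constraints = [
    sum(Q[i, k - i] for i in range(max(0, k - d), min(k, d) + 1)) == p[k]
    for k in range(2 * d + 1)
]
prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve()

if prob.status == cp.OPTIMAL:
    # Eigendecomposing Q yields the explicit squares; expanding them back out
    # is the "relatively simple math" a human can check by hand.
    w, V = np.linalg.eigh(Q.value)
    print("SOS certificate found; eigenvalues of Q:", np.round(w, 4))
else:
    print("no SOS certificate (p may not be a sum of squares)")
```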
    Trust but verify
    By double-checking that the hyperplane function contains squared values, a human can easily verify that the function is positive, which means the trajectory is collision-free, Amice explains.
    While the method certifies with perfect accuracy, this assumes the user has an accurate model of the robot and environment; the mathematical certifier is only as good as the model.
    “One really nice thing about this approach is that the proofs are really easy to interpret, so you don’t have to trust me that I coded it right because you can check it yourself,” he adds.
    They tested their technique in simulation by certifying that complex motion plans for robots with one and two arms were collision-free. At its slowest, their method took just a few hundred milliseconds to generate a proof, making it much faster than some alternate techniques.
    While their approach is fast enough to be used as a final safety check in some real-world situations, it is still too slow to be implemented directly in a robot motion planning loop, where decisions need to be made in microseconds, Amice says.
    The researchers plan to accelerate their process by ignoring situations that don’t require safety checks, like when the robot is far away from any objects it might collide with. They also want to experiment with specialized optimization solvers that could run faster.
    This work was supported, in part, by Amazon and the U.S. Air Force Research Laboratory.

  • Drawings of mathematical problems predict their resolution

    Solving arithmetic problems, even simple subtractions, involves mental representations whose influence remains to be clarified. Visualizing these representations would enable us to better understand our reasoning and adapt our teaching methods. A team from the University of Geneva (UNIGE), in collaboration with CY Cergy Paris University (CYU) and University of Burgundy (uB), analyzed drawings made by children and adults when solving simple problems. The scientists found that, whatever the age of the participant, the most effective calculation strategies were associated with certain drawing typologies. These results, published in the journal Memory & Cognition, open up new perspectives for the teaching of mathematics.
    Learning mathematics often involves small problems, linked to concrete everyday situations. For example, pupils have to add up quantities of flour to make a recipe or subtract sums of money to find out what’s left in their wallets after shopping. They are thus led to translate statements into algorithmic procedures to find the solution. This translation of words into solving strategies involves a stage of mental representation of mathematical information, such as numbers or the arithmetic operation to be performed, and non-mathematical information, such as the context of the problem.
    The cardinal or ordinal dimensions of problems
    Having a clearer idea of these mental representations would enable a better understanding of the choice of calculation strategies. Scientists from UNIGE, CYU and uB conducted a study with 10-year-old children and adults, asking them to solve simple problems with the instruction to use as few calculation steps as possible. The participants were then asked to produce a drawing or diagram explaining their problem-solving strategy for each statement. The contexts of some problems called on the cardinal properties of numbers — the quantity of elements in a set — others on their ordinal properties — their position in an ordered list.
    The former involved marbles, fishes, or books, for example: “Paul has 8 red marbles. He also has blue marbles. In total, Paul has 11 marbles. Jolene has as many blue marbles as Paul, and some green marbles. She has 2 green marbles less than Paul has red marbles. In total, how many marbles does Jolene have?” The latter involved lengths or durations, for example: “Sofia traveled for 8 hours. Her trip started during the day. Sofia arrived at 11. Fred leaves at the same time as Sofia. Fred’s trip lasted 2 hours less than Sofia’s. What time was it when Fred arrived?”
    Both of the above problems share the same mathematical structure, and both can be solved by a long strategy in 3 steps: 11 - 8 = 3; 8 - 2 = 6; 6 + 3 = 9, but also in a single calculation: 11 - 2 = 9, using a simple subtraction. However, the mental representations of these problems are very different, and the researchers wanted to determine whether the type of representation could predict the calculation strategy, in 1 or 3 steps, of those who solve them.
    “Our hypothesis was that cardinal problems — such as the one involving marbles — would inspire cardinal drawings, i.e. diagrams with identical individual elements, such as crosses or circles, or with overlaps of elements in sets or subsets. Similarly, we assumed that ordinal problems — such as the one mentioning travel times — would lead to ordinal representations, i.e. diagrams with axes, graduations or intervals — and that these ordinal drawings would reflect participants’ representations and indicate that they would be more successful in identifying the one-step solution strategy,” explains Hippolyte Gros, former post-doctoral fellow at UNIGE’s Faculty of Psychology and Educational Sciences, associate professor at CYU, and first author of the study.

    Identifying mental representations through drawings
    These hypotheses were validated by analyzing the drawings of 52 adults and 59 children. “We have shown that, irrespective of their experience — since the same results were obtained in both children and adults — the use of strategies by the participants depends on their representation of the problem, and that this is influenced by the non-mathematical information contained in the problem statement, as revealed by their drawings,” says Emmanuel Sander, full professor at UNIGE’s Faculty of Psychology and Educational Sciences. “Our study also shows that, even after years of experience in solving addition and subtraction, the difference between cardinal and ordinal problems remains very marked. The majority of participants were only able to solve problems of the second type in a single step.”
    Improving mathematical learning through drawing analysis
    The team also noted that drawings showing ordinal representations were more frequently associated with a one-step solution, even if the problem was cardinal. In other words, drawing with a scale or an axis is linked to the choice of the fastest calculation. “From a pedagogical point of view, this suggests that the presence of specific features in a student’s drawing may indicate whether or not his or her representation of the problem is the most efficient one for meeting the instructions — in this case, solving with the fewest calculations possible,” observes Jean-Pierre Thibaut, full professor at the uB Laboratory for Research on Learning and Development.
    “Thus, when it comes to subtracting individual elements, a representation via an axis — rather than via subsets — is more effective in finding the fastest method. Analysis of students’ drawings in arithmetic can therefore enable targeted interventions to help them translate problems into more optimal representations. One way of doing this is to work on the graphical representation of statements in class, to help students understand the most direct strategies,” concludes Hippolyte Gros.

  • How air pollution may make it harder for pollinators to find flowers

    Air pollution may blunt the signature scents of some night-blooming flowers, jeopardizing pollination.

    When the aroma of a pale evening primrose encounters certain pollutants in the night air, the pollutants destroy key scent molecules, lab and field tests show. As a result, moths and other nocturnal pollinators may find it difficult to detect the fragrance and navigate to the flower, researchers report in the Feb. 9 Science.

    The finding highlights how air pollution can affect more than human health. “It’s really going deeper … affecting ecosystems and food security,” says Joel Thornton, an atmospheric scientist at the University of Washington in Seattle. “Pollination is so important for agriculture.”

  • Making quantum bits fly

    Two physicists at the University of Konstanz are developing a method that could enable the stable exchange of information in quantum computers. In the leading role: photons that make quantum bits “fly.”
    Quantum computers are considered the next big evolutionary step in information technology. They are expected to solve computing problems that today’s computers simply cannot solve — or would take ages to do so. Research groups around the world are working on making the quantum computer a reality. This is anything but easy, because the basic components of such a computer, the quantum bits or qubits, are extremely fragile. One type of qubit consists of the intrinsic angular momentum (spin) of a single electron, i.e. it exists at the scale of an atom. It is hard enough to keep such a fragile system intact. It is even more difficult to interconnect two or more of these qubits. So how can a stable exchange of information between qubits be achieved?
    Flying qubits
    The two Konstanz physicists Benedikt Tissot and Guido Burkard have now developed a theoretical model of how the information exchange between qubits could succeed by using photons as a “means of transport” for quantum information. The general idea: the information content (electron spin state) of the material qubit is converted into a “flying qubit,” namely a photon. Photons are “light quanta,” the basic building blocks of the electromagnetic radiation field. The special feature of the new model: stimulated Raman emission is used to convert the qubit into a photon. This procedure allows more control over the photon. “We are proposing a paradigm shift from optimizing the control during the generation of the photon to directly optimizing the temporal shape of the light pulse in the flying qubit,” explains Guido Burkard.
    Benedikt Tissot compares the basic procedure with the Internet: “In a classic computer, we have our bits, which are encoded on a chip in the form of electrons. If we want to send information over long distances, the information content of the bits is converted into a light signal that is transmitted through optical fibers.” The principle of information exchange between qubits in a quantum computer is very similar: “Here, too, we have to convert the information into states that can be easily transmitted — and photons are ideal for this,” explains Tissot.
    A three-level system for controlling the photon
    “We need to consider several aspects,” says Tissot: “We want to control the direction in which the information flows — as well as when, how quickly and where it flows to. That’s why we need a system that allows for a high level of control.” The researchers’ method makes this control possible by means of resonator-enhanced, stimulated Raman emissions. Behind this term is a three-level system, which leads to a multi-stage procedure. These stages offer the physicists control over the photon that is created. “We have ‘more buttons’ here that we can operate to control the photon,” Tissot illustrates.
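    A rough illustration of why a three-level system offers those extra control knobs is the textbook stimulated-Raman transfer protocol (STIRAP) below, simulated with QuTiP (a toy example, not the authors’ resonator-enhanced model; the pulse shapes and timings are invented). Two shaped pulses couple two ground states to a common excited state, and reshaping the pulses reshapes how the population is steered; in photon-emitting versions of such protocols, the same knobs shape the emitted light pulse.

```python
# Toy stimulated-Raman transfer in a three-level "Lambda" system with QuTiP
# (illustration only, not the authors' scheme). Two Gaussian pulses in the
# counterintuitive STIRAP order move population |g0> -> |g1> while keeping
# the fragile excited state |e> only weakly occupied.
import numpy as np
from qutip import basis, mesolve

g0, e, g1 = (basis(3, i) for i in range(3))   # ground 0, excited, ground 1

H_pump = g0 * e.dag() + e * g0.dag()          # couples |g0> <-> |e>
H_stokes = g1 * e.dag() + e * g1.dag()        # couples |g1> <-> |e>

# Stokes pulse peaks *before* the pump pulse (the STIRAP ordering); the
# string coefficients are the time-dependent Rabi frequencies.
H = [
    [H_pump, "6.28 * exp(-((t - 6.0) / 1.5) ** 2)"],
    [H_stokes, "6.28 * exp(-((t - 4.0) / 1.5) ** 2)"],
]

tlist = np.linspace(0.0, 10.0, 400)
result = mesolve(H, g0, tlist, c_ops=[], e_ops=[g1 * g1.dag(), e * e.dag()])

print("final population in |g1>:", round(float(result.expect[0][-1]), 3))
print("peak population in |e>:  ", round(float(max(result.expect[1])), 3))
```

    Changing the pulse amplitudes, widths or delay in this sketch changes the transfer, which is the kind of temporal-shape control the quote above refers to.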
    Stimulated Raman emission is an established method in physics. However, using it to send qubit states directly is unusual. The new method might make it possible to balance out environmental perturbations and the unwanted side effects of rapid changes in the temporal shape of the light pulse, so that information transport can be implemented more accurately. The detailed procedure was published in the journal Physical Review Research in February 2024.