More stories

    The influence of AI on trust in human interaction

    As AI becomes increasingly realistic, our trust in those with whom we communicate may be compromised. Researchers at the University of Gothenburg have examined how advanced AI systems impact our trust in the individuals we interact with.
    In one scenario, a would-be scammer, believing he is calling an elderly man, is instead connected to a computer system that communicates through pre-recorded loops. The scammer spends considerable time attempting the fraud, patiently listening to the “man’s” somewhat confusing and repetitive stories. Oskar Lindwall, a professor of communication at the University of Gothenburg, observes that it often takes a long time for people to realize they are interacting with a technical system.
    Together with Jonas Ivarsson, professor of informatics, he has written an article titled “Suspicious Minds: The Problem of Trust and Conversational Agents,” exploring how individuals interpret and relate to situations in which one of the parties might be an AI agent. The article highlights the negative consequences of harboring suspicion toward others, such as the damage it can cause to relationships.
    Ivarsson provides an example of a romantic relationship where trust issues arise, leading to jealousy and an increased tendency to search for evidence of deception. The authors argue that being unable to fully trust a conversational partner’s intentions and identity may result in excessive suspicion even when there is no reason for it.
    Their study discovered that during interactions between two humans, some behaviors were interpreted as signs that one of them was actually a robot.
    The researchers suggest that a pervasive design perspective is driving the development of AI with increasingly human-like features. While this may be appealing in some contexts, it can also be problematic, particularly when it is unclear who you are communicating with. Ivarsson questions whether AI should have such human-like voices, as they create a sense of intimacy and lead people to form impressions based on the voice alone.
    In the case of the would-be fraudster calling the “older man,” the scam is only exposed after a long time, which Lindwall and Ivarsson attribute to the believability of the human voice and the assumption that the confused behavior is due to age. Once an AI has a voice, we infer attributes such as gender, age, and socio-economic background, making it harder to identify that we are interacting with a computer.
    The researchers propose creating AI with well-functioning and eloquent voices that are still clearly synthetic, increasing transparency.
    Communication with others involves not only deception but also relationship-building and joint meaning-making. The uncertainty of whether one is talking to a human or a computer affects this aspect of communication. While it might not matter in some situations, such as cognitive-behavioral therapy, other forms of therapy that require more human connection may be negatively impacted.
    Jonas Ivarsson and Oskar Lindwall analyzed data made available on YouTube. They studied three types of conversations, along with audience reactions and comments on them. In the first type, a robot calls a person to book a hair appointment, unbeknownst to the person on the other end. In the second type, a person calls another person for the same purpose. In the third type, telemarketers are transferred to a computer system with pre-recorded speech.

    Scurrying centipedes inspire many-legged robots that can traverse difficult landscapes

    Centipedes are known for their wiggly walk. With tens to hundreds of legs, they can traverse any terrain without stopping.
    “When you see a scurrying centipede, you’re basically seeing an animal that inhabits a world that is very different than our world of movement,” said Daniel Goldman, the Dunn Family Professor in the School of Physics. “Our movement is largely dominated by inertia. If I swing my leg, I land on my foot and I move forward. But in the world of centipedes, if they stop wiggling their body parts and limbs, they basically stop moving instantly.”
    Intrigued by whether the many limbs could be helpful for locomotion in this world, a team of physicists, engineers, and mathematicians at the Georgia Institute of Technology is using this style of movement to its advantage. The team developed a new theory of multilegged locomotion and created many-legged robotic models, discovering that, just as the theory predicted, a robot with redundant legs could move across uneven surfaces without any additional sensing or control technology.
    These robots can move over complex, bumpy terrain — and there is potential to use them for agriculture, space exploration, and even search and rescue.
    The researchers presented their work in two papers: “Multilegged Matter Transport: A Framework for Locomotion on Noisy Landscapes,” published in Science in May, and “Self-Propulsion via Slipping: Frictional Swimming in Multilegged Locomotors,” published in Proceedings of the National Academy of Sciences in March.
    A Leg Up
    For the Science paper, the researchers drew on mathematician Claude Shannon’s communication theory, which shows how to transmit signals reliably over distance, to understand why a multilegged robot was so successful at locomotion. The theory suggests that one way to ensure a message gets from point A to point B over a noisy line is not to send it as an analog signal, but to break it into discrete digital units and repeat those units with an appropriate code.
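    The analogy can be made concrete with a few lines of code. The sketch below is a generic illustration of Shannon-style repetition coding, not the team’s model: a message is sent over a channel that randomly flips bits, and repeating each bit and majority-voting at the receiver drives the error rate down, much as extra leg pairs let the robot absorb individual missteps.

    ```python
    import random

    def noisy_channel(bits, flip_prob=0.2):
        """Flip each bit independently with probability flip_prob."""
        return [b ^ (random.random() < flip_prob) for b in bits]

    def encode(bits, reps):
        """Repetition code: send each bit `reps` times."""
        return [b for b in bits for _ in range(reps)]

    def decode(bits, reps):
        """Majority-vote each group of `reps` received bits."""
        return [int(sum(bits[i:i + reps]) > reps / 2)
                for i in range(0, len(bits), reps)]

    message = [random.randint(0, 1) for _ in range(1000)]
    for reps in (1, 3, 7, 15):  # more repetition, like more legs
        received = decode(noisy_channel(encode(message, reps)), reps)
        errors = sum(m != r for m, r in zip(message, received))
        print(f"{reps:2d} repetitions -> error rate {errors / len(message):.3f}")
    ```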

    “We were inspired by this theory, and we tried to see if redundancy could be helpful in matter transportation,” said Baxi Chong, a physics postdoctoral researcher. “So, we started this project to see what would happen if we had more legs on the robot: four, six, eight legs, and even 16 legs.”
    A team led by Chong, including School of Mathematics postdoctoral fellow Daniel Irvine and Professor Greg Blekherman, developed a theory proposing that adding leg pairs increases a robot’s ability to move robustly over challenging surfaces — a concept they call spatial redundancy. This redundancy lets the robot’s legs succeed on their own, without sensors to interpret the environment: if one leg falters, the abundance of legs keeps the robot moving regardless. In effect, the robot becomes a reliable system for transporting itself, and even a load, from A to B across difficult or “noisy” landscapes. The concept is comparable to guaranteeing punctuality in wheeled transport by laying a smooth track or rail, except that here reliability is achieved without having to engineer the environment.
    “With an advanced bipedal robot, many sensors are typically required to control it in real time,” Chong said. “But in applications such as search and rescue, exploring Mars, or even micro robots, there is a need to drive a robot with limited sensing. There are many reasons for such a sensor-free approach. The sensors can be expensive and fragile, or the environments can change so fast that there isn’t enough sensor-controller response time.”
    To test this, Juntao He, a Ph.D. student in robotics, conducted a series of experiments in which he and Daniel Soto, a master’s student in the George W. Woodruff School of Mechanical Engineering, built terrains to mimic an inconsistent natural environment. He then tested the robot while increasing its number of legs by two each time, starting with six and eventually expanding to 16. As the leg count increased, the robot moved more agilely across the terrain, even without sensors, just as the theory predicted. Eventually, they tested the robot outdoors, where it traversed a variety of real terrains.
    “It’s truly impressive to witness the multilegged robot’s proficiency in navigating both lab-based terrains and outdoor environments,” Juntao said. “While bipedal and quadrupedal robots heavily rely on sensors to traverse complex terrain, our multilegged robot utilizes leg redundancy and can accomplish similar tasks with open-loop control.”
    Next Steps

    The researchers are already applying their discoveries to farming. Goldman has co-founded a company that aspires to use these robots to weed farmland where weedkillers are ineffective.
    “They’re kind of like a Roomba but outside for complex ground,” Goldman said. “A Roomba works because it has wheels that function well on flat ground. Until the development of our framework, we couldn’t confidently predict locomotor reliability on bumpy, rocky, debris-ridden terrain. We now have the beginnings of such a scheme, which could be used to ensure that our robots traverse a crop field in a certain amount of time.”
    The researchers also want to refine the robot. They know why the centipede robot framework is functional, but now they’re determining the optimal number of legs to achieve motion without sensing in a way that is cost-effective yet still retains the benefits.
    “In this paper, we asked, ‘How do you predict the minimum number of legs to achieve such tasks?'” Chong said. “Currently we only prove that a minimum number exists, but we don’t know the exact number of legs needed. Further, we need to better understand the tradeoff between energy, speed, power, and robustness in such a complex system.”

    AI could run a million microbial experiments per year

    An artificial intelligence system enables robots to conduct autonomous scientific experiments — as many as 10,000 per day — potentially driving a drastic leap forward in the pace of discovery in areas from medicine to agriculture to environmental science.
    The work, reported today in Nature Microbiology, was led by a professor now at the University of Michigan.
    The artificial intelligence platform, dubbed BacterAI, mapped the metabolism of two microbes associated with oral health — with no baseline information to start with. Bacteria consume some combination of the 20 amino acids needed to support life, but each species requires specific nutrients to grow. The U-M team wanted to know which amino acids the beneficial microbes in our mouths need, so that researchers can promote their growth.
    “We know almost nothing about most of the bacteria that influence our health. Understanding how bacteria grow is the first step toward reengineering our microbiome,” said Paul Jensen, U-M assistant professor of biomedical engineering who was at the University of Illinois when the project started.
    Figuring out the combination of amino acids that bacteria like is tricky, however. Those 20 amino acids yield more than a million possible combinations (2^20, or 1,048,576), just based on whether each amino acid is present or not. Yet BacterAI was able to discover the amino acid requirements for the growth of both Streptococcus gordonii and Streptococcus sanguinis.
    To find the right formula for each species, BacterAI tested hundreds of combinations of amino acids per day, honing its focus and changing combinations each morning based on the previous day’s results. Within nine days, it was producing accurate predictions 90% of the time.

    Unlike conventional approaches that feed labeled data sets into a machine-learning model, BacterAI creates its own data set through a series of experiments. By analyzing the results of previous trials, it comes up with predictions of what new experiments might give it the most information. As a result, it figured out most of the rules for feeding bacteria with fewer than 4,000 experiments.
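    In outline, that strategy is a form of active learning, and it can be sketched in a few lines. The toy loop below is a hypothetical illustration, not BacterAI’s actual algorithm: a made-up growth rule stands in for the bacteria, a random-forest classifier stands in for the agent’s model, and each simulated day the agent runs the batch of untested media it is least certain about.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    N_AA = 20  # one presence/absence flag per amino acid

    def grows(medium):
        """Hidden 'ground truth' growth rule, invented for this sketch."""
        return bool(medium[0] and medium[3] and not medium[7])

    pool = rng.integers(0, 2, size=(5000, N_AA))   # candidate media
    pool[0, :] = 0
    pool[0, [0, 3]] = 1    # guarantee a growing medium in the seed batch...
    pool[1, :] = 0         # ...and one that does not grow

    tested = list(range(100))                      # seed experiments
    results = [grows(pool[i]) for i in tested]

    for day in range(5):
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(pool[tested], results)
        # Uncertainty sampling: the media the model is least sure about
        # are the most informative experiments to run next.
        p_grow = model.predict_proba(pool)[:, 1]
        uncertainty = np.abs(p_grow - 0.5)
        uncertainty[tested] = np.inf               # skip finished experiments
        batch = np.argsort(uncertainty)[:200]      # today's robot batch
        tested.extend(int(i) for i in batch)
        results.extend(grows(pool[i]) for i in batch)
        accuracy = (model.predict(pool) == [grows(x) for x in pool]).mean()
        print(f"day {day}: {len(tested)} experiments, accuracy {accuracy:.2f}")
    ```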
    “When a child learns to walk, they don’t just watch adults walk and then say ‘Ok, I got it,’ stand up, and start walking. They fumble around and do some trial and error first,” Jensen said.
    “We wanted our AI agent to take steps and fall down, to come up with its own ideas and make mistakes. Every day, it gets a little better, a little smarter.”
    Little to no research has been conducted on roughly 90% of bacteria, and the amount of time and resources needed to learn even basic scientific information about them using conventional methods is daunting. Automated experimentation can drastically speed up these discoveries. The team ran up to 10,000 experiments in a single day.
    But the applications go beyond microbiology. Researchers in any field can set up questions as puzzles for AI to solve through this kind of trial and error.
    “With the recent explosion of mainstream AI over the last several months, many people are uncertain about what it will bring in the future, both positive and negative,” said Adam Dama, a former engineer in the Jensen Lab and lead author of the study. “But to me, it’s very clear that focused applications of AI like our project will accelerate everyday research.”
    The research was funded by the National Institutes of Health with support from NVIDIA.

    Researchers develop manual for engineering spin dynamics in nanomagnets

    An international team of researchers at the University of California, Riverside, and the Institute of Magnetism in Kyiv, Ukraine, has developed a comprehensive manual for engineering spin dynamics in nanomagnets — an important step toward advancing spintronic and quantum-information technologies.
    Despite their small size, nanomagnets — found in most spintronic applications — reveal rich dynamics of spin excitations, or “magnons,” the quantum-mechanical units of spin fluctuations. Due to its nanoscale confinement, a nanomagnet can be considered a zero-dimensional system with a discrete magnon spectrum, similar to the spectrum of an atom.
    “The magnons interact with each other, thus constituting nonlinear spin dynamics,” said Igor Barsukov, an assistant professor of physics and astronomy at UC Riverside and a corresponding author on the study that appears in the journal Physical Review Applied. “Nonlinear spin dynamics is a major challenge and a major opportunity for improving the performance of spintronic technologies such as spin-torque memory, oscillators, and neuromorphic computing.”
    Barsukov explained that magnon interactions obey a set of rules known as selection rules. The researchers have now formulated these rules in terms of the symmetries of magnetization configurations and magnon profiles.
    The new work continues the efforts to tame nanomagnets for next-generation computation technologies. In a previous publication, the team demonstrated experimentally that symmetries can be used for engineering magnon interactions.
    “We recognized the opportunity, but also noticed that much work needed to be done to understand and formulate the selection rules,” Barsukov said.
    According to the researchers, a comprehensive set of rules reveals the mechanisms behind the magnon interaction.
    “It can be seen as a guide for spintronics labs for debugging and designing nanomagnet devices,” said Arezoo Etesamirad, the first author of the paper who worked in the Barsukov lab and recently graduated with a doctoral degree in physics. “It lays the foundation for developing an experimental toolset for tunable magnetic neurons, switchable oscillators, energy-efficient memory, and quantum-magnonic and other next-generation nanomagnetic applications.”
    Barsukov and Etesamirad were joined in the research by Rodolfo Rodriguez of UCR; and Julia Kharlan and Roman Verba of the Institute of Magnetism in Kyiv, Ukraine.
    The study was funded by the U.S. National Science Foundation, National Academy of Sciences of Ukraine, National Research Foundation of Ukraine, National Science Center — Poland, and NVIDIA Corporation.

    Researchers use generative AI to design novel proteins

    Researchers at the University of Toronto have developed an artificial intelligence system that can create proteins not found in nature using generative diffusion, the same technology behind popular image-creation platforms such as DALL-E and Midjourney.
    The system will help advance the field of generative biology, which promises to speed drug development by making the design and testing of entirely new therapeutic proteins more efficient and flexible.
    “Our model learns from image representations to generate fully new proteins, at a very high rate,” says Philip M. Kim, a professor in the Donnelly Centre for Cellular and Biomolecular Research at U of T’s Temerty Faculty of Medicine. “All our proteins appear to be biophysically real, meaning they fold into configurations that enable them to carry out specific functions within cells.”
    Today, the journal Nature Computational Science published the findings, the first of their kind in a peer-reviewed journal. Kim’s lab also published a pre-print on the model last summer through the open-access server bioRxiv, ahead of two similar pre-prints from last December, RF Diffusion by the University of Washington and Chroma by Generate Biomedicines.
    Proteins are made from chains of amino acids that fold into three-dimensional shapes, which in turn dictate protein function. Those shapes evolved over billions of years and are varied and complex, but also limited in number. With a better understanding of how existing proteins fold, researchers have begun to design folding patterns not produced in nature.
    But a major challenge, says Kim, has been to imagine folds that are both possible and functional. “It’s been very hard to predict which folds will be real and work in a protein structure,” says Kim, who is also a professor in the departments of molecular genetics and computer science at U of T. “By combining biophysics-based representations of protein structure with diffusion methods from the image generation space, we can begin to address this problem.”
    The new system, which the researchers call ProteinSGM, draws from a large set of image-like representations of existing proteins that encode their structure accurately. The researchers feed these images into a generative diffusion model, which gradually adds noise until each image becomes all noise. The model tracks how the images become noisier and then runs the process in reverse, learning how to transform random pixels into clear images that correspond to fully novel proteins.
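    The mechanics of that forward-and-reverse process are easy to show on toy data. The sketch below runs standard denoising-diffusion arithmetic on 2-D Gaussian points rather than protein images, with a per-step least-squares regression standing in for ProteinSGM’s neural network (for Gaussian toy data the optimal noise predictor is linear, so plain least squares can stand in for the network).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T = 200
    betas = np.linspace(1e-4, 0.05, T)   # forward-process noise schedule
    alphas = 1.0 - betas
    abar = np.cumprod(alphas)

    # Toy stand-in for "protein images": points from a correlated Gaussian.
    data = rng.multivariate_normal([2.0, -1.0], [[1.0, 0.8], [0.8, 1.0]], 4000)

    # Learn a noise predictor eps_hat(x_t, t) for every timestep.
    coefs = []
    for t in range(T):
        eps = rng.standard_normal(data.shape)
        x_t = np.sqrt(abar[t]) * data + np.sqrt(1 - abar[t]) * eps
        X = np.hstack([x_t, np.ones((len(x_t), 1))])
        W, *_ = np.linalg.lstsq(X, eps, rcond=None)   # linear "denoiser"
        coefs.append(W)

    # Reverse process: start from pure noise and denoise step by step.
    x = rng.standard_normal((2000, 2))
    for t in reversed(range(T)):
        X = np.hstack([x, np.ones((len(x), 1))])
        eps_hat = X @ coefs[t]
        x = (x - betas[t] / np.sqrt(1 - abar[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)

    print("target mean:", data.mean(0).round(2), "sampled mean:", x.mean(0).round(2))
    print("target cov:\n", np.cov(data.T).round(2), "\nsampled cov:\n", np.cov(x.T).round(2))
    ```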

    Jin Sub (Michael) Lee, a doctoral student in the Kim lab and first author on the paper, says that optimizing the early stage of this image generation process was one of the biggest challenges in creating ProteinSGM. “A key idea was the proper image-like representation of protein structure, such that the diffusion model can learn how to generate novel proteins accurately,” says Lee, who is from Vancouver but did his undergraduate degree in South Korea and master’s in Switzerland before choosing U of T for his doctorate.
    Also difficult was validation of the proteins produced by ProteinSGM. The system generates many structures, often unlike anything found in nature. Almost all of them look real according to standard metrics, says Lee, but the researchers needed further proof.
    To test their new proteins, Lee and his colleagues first turned to OmegaFold, a structure-prediction tool similar to DeepMind’s AlphaFold 2. Both platforms use AI to predict the structure of proteins based on amino acid sequences.
    With OmegaFold, the team confirmed that almost all their novel sequences fold into the desired and also novel protein structures. They then chose a smaller number to create physically in test tubes, to confirm the structures were proteins and not just stray strings of chemical compounds.
    “With matches in OmegaFold and experimental testing in the lab, we could be confident these were properly folded proteins. It was amazing to see validation of these fully new protein folds that don’t exist anywhere in nature,” Lee says.
    Next steps based on this work include further development of ProteinSGM for antibodies and other proteins with the most therapeutic potential, Kim says. “This will be a very exciting area for research and entrepreneurship,” he adds.
    Lee says he would like to see generative biology move toward joint design of protein sequences and structures, including protein side-chain conformations. Most research to date has focused on generation of backbones, the primary chemical structures that hold proteins together.
    “Side-chain configurations ultimately determine protein function, and although designing them means an exponential increase in complexity, it may be possible with proper engineering,” Lee says. “We hope to find out.”

    The future of data storage lies in DNA microcapsules

    Storing data in DNA sounds like science fiction, yet it lies in the near future. Professor Tom de Greef expects the first DNA data center to be up and running within five to ten years. Data won’t be stored as zeros and ones in a hard drive but in the base pairs that make up DNA: AT and CG. Such a data center would take the form of a lab, many times smaller than the ones today. De Greef can already picture it all. In one part of the building, new files will be encoded via DNA synthesis. Another part will contain large fields of capsules, each capsule packed with a file. A robotic arm will remove a capsule, read its contents and place it back.
    We’re talking about synthetic DNA. In the lab, bases are stuck together in a certain order to form synthetically produced strands of DNA. Files and photos that are currently stored in data centers can then be stored in DNA. For now, the technique is suitable only for archival storage. This is because the reading of stored data is very expensive, so you want to consult the DNA files as little as possible.
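    The encoding step itself is simple to illustrate. Each base can carry two bits, so a byte stream maps to a base sequence a quarter as long. The mapping below is a common textbook scheme chosen for illustration, not the one used in the study; real DNA storage codecs also add error correction and avoid troublesome sequences such as long runs of the same base.

    ```python
    # Hypothetical 2-bits-per-base mapping, for illustration only.
    BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
    BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

    def encode(data: bytes) -> str:
        bits = "".join(f"{byte:08b}" for byte in data)
        return "".join(BITS_TO_BASE[bits[i:i + 2]]
                       for i in range(0, len(bits), 2))

    def decode(strand: str) -> bytes:
        bits = "".join(BASE_TO_BITS[base] for base in strand)
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

    strand = encode(b"hello")   # 5 bytes become 20 bases
    print(strand)               # CGGACGCCCGTACGTACGTT
    assert decode(strand) == b"hello"
    ```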
    Large, energy-guzzling data centers made obsolete
    Data storage in DNA offers many advantages. A DNA file can be stored much more compactly, for instance, and the lifespan of the data is also many times longer. But perhaps most importantly, this new technology renders large, energy-guzzling data centers obsolete. And this is desperately needed, warns De Greef, “because in three years, we will generate so much data worldwide that we won’t be able to store half of it.”
    Together with PhD student Bas Bögels, Microsoft and a group of university partners, De Greef has developed a new technique to make the innovation of data storage with synthetic DNA scalable. The results have been published today in the journal Nature Nanotechnology. De Greef works at the Department of Biomedical Engineering and the Institute for Complex Molecular Systems (ICMS) at TU Eindhoven and serves as a visiting professor at Radboud University.
    Scalable
    The idea of using strands of DNA for data storage emerged in the 1980s but was far too difficult and expensive at the time. It became technically possible three decades later, when DNA synthesis started to take off. George Church, a geneticist at Harvard Medical School, elaborated on the idea in 2011. Since then, synthesis and the reading of data have become exponentially cheaper, finally bringing the technology to the market.

    In recent years, De Greef and his group have looked mainly into reading the stored data. For the time being, this is the biggest problem facing this new technique. The PCR method currently used for this, called ‘random access’, is highly error-prone. You can therefore only read one file at a time and, in addition, the data quality deteriorates too much each time you read a file. Not exactly scalable.
    Here’s how it works: PCR (polymerase chain reaction) creates millions of copies of the piece of DNA that you need by adding a primer with the desired DNA code. Each cycle roughly doubles the amount of target DNA, so some 20 cycles already yield about a million copies. Laboratory coronavirus tests, for example, are based on this: even a minuscule amount of viral material from your nose is detectable once it has been copied so many times. But if you want to read multiple files simultaneously, you need multiple primer pairs doing their work at the same time, and this introduces many errors into the copying process.
    Every capsule contains one file
    This is where the capsules come into play. De Greef’s group developed a microcapsule of proteins and a polymer and then anchored one file per capsule. De Greef: “These capsules have thermal properties that we can use to our advantage.” Above 50 degrees Celsius, the capsules seal themselves, allowing the PCR process to take place separately within each capsule, leaving little room for error. De Greef calls this “thermo-confined PCR.” In the lab, the group has so far managed to read 25 files simultaneously without significant error.
    If you then lower the temperature again, the copies detach from the capsule and the anchored original remains, meaning that the quality of your original file does not deteriorate. De Greef: “We currently stand at a loss of 0.3 percent after three reads, compared to 35 percent with the existing method.”
    Searchable with fluorescence
    And that’s not all. De Greef has also made the data library even easier to search. Each file is given a fluorescent label and each capsule its own color. A device can then recognize the colors and separate them from one another. This brings us back to the imaginary robotic arm at the beginning of this story, which will neatly select the desired file from the pool of capsules in the future.
    This solves the problem of reading the data. De Greef: “Now it’s just a matter of waiting until the costs of DNA synthesis fall further. The technique will then be ready for application.” As a result, he hopes that the Netherlands will soon be able to open its inaugural DNA data center — a world first.

    Quantum computer in reverse gear

    Today’s computers are based on microprocessors that execute so-called gates. A gate can, for example, be an AND operation, which outputs 1 only if both of its input bits are 1. These gates, and thus computers, are irreversible: algorithms cannot simply run backwards. “If you take the multiplication 2*2=4, you cannot simply run this operation in reverse, because 4 could be 2*2, but likewise 1*4 or 4*1,” explains Wolfgang Lechner, professor of theoretical physics at the University of Innsbruck. If running operations in reverse were possible, however, it would be feasible to factorize large numbers, i.e. divide them into their factors; the hardness of factorization is an important pillar of modern cryptography.

    Martin Lanthaler, Ben Niehoff and Wolfgang Lechner from the Department of Theoretical Physics at the University of Innsbruck and the quantum spin-off ParityQC have now developed exactly this inversion of algorithms with the help of quantum computers. The starting point is a classical logic circuit, which multiplies two numbers. If two integers are entered as the input value, the circuit returns their product. Such a circuit is built from irreversible operations. “However, the logic of the circuit can be encoded within ground states of a quantum system,” explains Martin Lanthaler from Wolfgang Lechner’s team. “Thus, both multiplication and factorization can be understood as ground-state problems and solved using quantum optimization methods.”
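    The ground-state framing can be illustrated classically. In the toy sketch below (not the parity-architecture encoding itself), candidate factors are written in binary and the energy E(p, q) = (p*q - N)^2 is zero exactly when p and q multiply to N, so factoring amounts to finding a lowest-energy state. The sketch searches the landscape by brute force, which is precisely the step a quantum optimizer would replace with tunneling.

    ```python
    from itertools import product

    def factor_by_ground_state(N, n_bits=4):
        """Toy ground-state search: E(p, q) = (p*q - N)**2 vanishes exactly
        on factorizations of N. Brute force stands in for the quantum
        optimizer, which would search the same landscape via tunneling."""
        best = None
        for bits in product((0, 1), repeat=2 * n_bits):
            p = sum(b << i for i, b in enumerate(bits[:n_bits]))
            q = sum(b << i for i, b in enumerate(bits[n_bits:]))
            if p < 2 or q < 2:          # skip the trivial factors 0 and 1
                continue
            energy = (p * q - N) ** 2
            if best is None or energy < best[0]:
                best = (energy, p, q)
        return best

    energy, p, q = factor_by_ground_state(15)
    print(p, q, energy)   # 5 3 0 -- zero energy, so 5 * 3 == 15
    ```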
    Superposition of all possible results
    “The core of our work is the encoding of the basic building blocks of the multiplier circuit, specifically AND gates, half and full adders with the parity architecture as the ground state problem on an ensemble of interacting spins,” says Martin Lanthaler. The coding allows the entire circuit to be built from repeating subsystems that can be arranged on a two-dimensional grid. By stringing several of these subsystems together, larger problem instances can be realized. Instead of the classical brute force method, where all possible factors are tested, quantum methods can speed up the search process: To find the ground state, and thus solve an optimization problem, it is not necessary to search the whole energy landscape, but deeper valleys can be reached by “tunneling.”
    The current research work provides a blueprint for a new type of quantum computer to solve the factorization problem, which is a cornerstone of modern cryptography. This blueprint is based on the parity architecture developed at the University of Innsbruck and can be implemented on all current quantum computing platforms.
    The results were recently published in Communications Physics, a Nature Portfolio journal. Financial support for the research was provided by the Austrian Science Fund FWF, the European Union and the Austrian Research Promotion Agency FFG, among others.

    Researchers detect and classify multiple objects without images

    Researchers have developed a new high-speed way to detect the location, size and category of multiple objects without acquiring images or requiring complex scene reconstruction. Because the new approach greatly decreases the computing power necessary for object detection, it could be useful for identifying hazards while driving.
    “Our technique is based on a single-pixel detector, which enables efficient and robust multi-object detection directly from a small number of 2D measurements,” said research team leader Liheng Bian from the Beijing Institute of Technology in China. “This type of image-free sensing technology is expected to solve the problems of heavy communication load, high computing overhead and low perception rate of existing visual perception systems.”
    Today’s image-free perception methods can only achieve classification, single object recognition or tracking. To accomplish all three at once, the researchers developed a technique known as image-free single-pixel object detection (SPOD). In the Optica Publishing Group journal Optics Letters, they report that SPOD can achieve an object detection accuracy of just over 80%.
    The SPOD technique builds on the research group’s previous accomplishments in developing imaging-free sensing technology as efficient scene perception technology. Their prior work includes image-free classification, segmentation and character recognition based on a single-pixel detector.
    “For autonomous driving, SPOD could be used with lidar to help improve scene reconstruction speed and object detection accuracy,” said Bian. “We believe that it has a high enough detection rate and accuracy for autonomous driving while also reducing the transmission bandwidth and computing resource requirements needed for object detection.”
    Detection without images
    Automating advanced visual tasks — whether used to navigate a vehicle or track a moving plane — usually requires detailed images of a scene to extract the features necessary to identify an object. However, this requires either complex imaging hardware or complicated reconstruction algorithms, which leads to high computational cost, long running times and a heavy data transmission load. For this reason, the traditional “image first, perceive later” approach may not be best for object detection.

    Image-free sensing methods based on single-pixel detectors can cut down on the computational power needed for object detection. Instead of employing a pixelated detector such as a CMOS or CCD, single-pixel imaging illuminates the scene with a sequence of structured light patterns and then records the transmitted light intensity to acquire the spatial information of objects. This information is then used to computationally reconstruct the object or to calculate its properties.
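    The measurement model is compact enough to sketch. The lines below illustrate generic single-pixel sensing, not SPOD’s trained network: each structured pattern is projected onto the scene and the detector records a single number, the total transmitted intensity, so a 5% sampling rate turns a 1,024-pixel scene into just 51 measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    H = W = 32                        # scene resolution (1,024 pixels)
    scene = np.zeros((H, W))
    scene[8:14, 10:20] = 1.0          # a bright rectangular "object"

    # A 5% sampling rate means far fewer measurements than pixels.
    n_patterns = int(0.05 * H * W)    # 51 patterns
    patterns = rng.integers(0, 2, size=(n_patterns, H, W)).astype(float)

    # One single-pixel measurement per pattern: modulate the scene with the
    # pattern and record only the total transmitted intensity.
    measurements = (patterns * scene).sum(axis=(1, 2))

    # This 51-value vector, not an image, is what a downstream model
    # (e.g. SPOD's encoder-decoder) would consume to locate and classify
    # objects directly.
    print(measurements.shape)         # (51,)
    ```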
    For SPOD, the researchers used a small but optimized structured light pattern to quickly scan the entire scene and obtain 2D measurements. These measurements are fed into a deep learning model known as a transformer-based encoder to extract the high-dimensional meaningful features in the scene. These features are then fed into a multi-scale attention network-based decoder, which outputs the class, location and size information of all targets in the scene simultaneously.
    “Compared to the full-size pattern used by other single-pixel detection methods, the small, optimized pattern produces better image-free sensing performance,” said group member Lintao Peng. “Also, the multi-scale attention network in the SPOD decoder reinforces the network’s attention to the target area in the scene. This allows more efficient extraction of scene features, enabling state-of-the-art object detection performance.”
    Proof-of-concept demonstration
    To experimentally demonstrate SPOD, the researchers built a proof-of-concept setup. Images randomly selected from the Pascal VOC 2012 test dataset were printed on film and used as target scenes. At a sampling rate of 5%, the average time to complete spatial light modulation and image-free object detection per scene with SPOD was just 0.016 seconds, much faster than performing scene reconstruction first (0.05 seconds) and then object detection (0.018 seconds). SPOD showed an average detection accuracy of 82.2% for all the object classes included in the test dataset.
    “Currently, SPOD cannot detect every possible object category because the existing object detection dataset used to train the model only contains 80 categories,” said Peng. “However, when faced with a specific task, the pre-trained model can be fine-tuned to achieve image-free multi-object detection of new target classes for applications such as pedestrian, vehicle or boat detection.”
    Next, the researchers plan to extend the image-free perception technology to other kinds of detectors and computational acquisition systems to achieve reconstruction-free sensing technology.