More stories

  • Groundbreaking soft valve technology enabling sensing and control integration in soft robots

    Soft inflatable robots have emerged as a promising paradigm for applications that require inherent safety and adaptability. However, integrating sensing and control systems into these robots without compromising their softness, form factor, or capabilities has posed significant challenges. Addressing this obstacle, a research team jointly led by Professor Jiyun Kim (Department of New Material Engineering, UNIST) and Professor Jonbum Bae (Department of Mechanical Engineering, UNIST) has developed groundbreaking “soft valve” technology, an all-in-one solution that integrates sensors and control valves while maintaining complete softness.
    Traditionally, soft robot bodies coexisted with rigid electronic components for perception purposes. The study conducted by this research team introduces a novel approach to overcome this limitation by creating soft analogs of sensors and control valves that operate without electricity. The resulting tube-shaped part serves dual functions: detecting external stimuli and precisely controlling driving motion using only air pressure. By eliminating the need for electricity-dependent components, these all-soft valves enable safe operation underwater or in environments where sparks may pose risks — while simultaneously reducing weight burdens on robotic systems. Moreover, each component is inexpensive at approximately 800 Won.
    “Previous soft robots had flexible bodies but relied on hard electronic parts for stimulus detection sensors and drive control units,” explained Professor Kim. “Our study focuses on making both sensors and drive control parts using soft materials.”
    The research team showcased various applications utilizing this groundbreaking technology. They created universal tongs capable of delicately picking up fragile items such as potato chips — preventing breakage caused by excessive force exerted by conventional rigid robot hands. Additionally, they successfully employed these all-soft components to develop wearable elbow assist robots designed to reduce muscle burden caused by repetitive tasks or strenuous activities involving arm movements. The elbow support automatically adjusts according to the angle at which an individual’s arm is bent — a breakthrough contributing to a 63% average decrease in the force exerted on the elbow when wearing the robot.
    The soft valve operates by utilizing air flow within a tube-shaped structure. When tension is applied to one end of the tube, a helically wound thread inside compresses it, controlling inflow and outflow of air. This accordion-like motion allows for precise and flexible movements without relying on electrical power.
    Furthermore, the research team confirmed that by programming different structures or numbers of threads within the tube, they could accurately control airflow variations. This programmability enables customized adjustments to suit specific situations and requirements — providing flexibility in driving unit response even with consistent external forces applied to the end of the tube.
    “These newly developed components can be easily employed using material programming alone, eliminating electronic devices,” expressed Professor Bae with excitement about this development. “This breakthrough will significantly contribute to advancements in various wearable systems.”
    This groundbreaking soft valve technology marks a significant step toward fully soft, electronics-free robots capable of autonomous operation — a crucial milestone for enhancing safety and adaptability across numerous industries.
    Support for this work was provided by various organizations including Korea’s National Research Foundation (NRF), Korea Institute of Materials Science (KIMS), and Korea Evaluation Institute of Industrial Technology (KEIT). More

  • Are US teenagers more likely than others to exaggerate their math abilities?

    A major new study has revealed that American teenagers are more likely than their peers in other English-speaking countries to brag about their math ability.
    Research using data from 40,000 15-year-olds across nine English-speaking nations found that those in North America were the most likely to exaggerate their mathematical knowledge, while those in Ireland and Scotland were the least likely to do so.
    The study, published in the peer-reviewed journal Assessment in Education: Principles, Policy & Practice, used responses from the OECD Programme for International Student Assessment (PISA), in which participants took a two-hour maths test alongside a 30-minute background questionnaire.
    They were asked how familiar they were with each of 16 mathematical terms — but three of the terms were fake.
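    To make that over-claiming measure concrete, here is a minimal sketch of one way to score such responses; it is not the study's actual procedure, and the term lists and the 1-to-5 familiarity scale below are illustrative assumptions only.
    ```python
    # Illustrative only: a toy over-claiming index from self-reported familiarity
    # with real and invented maths terms (1 = never heard of it, 5 = know it well).
    # The term lists below are hypothetical stand-ins, not the actual PISA items.
    real_terms = ["polygon", "vectors", "exponential function"]
    fake_terms = ["quadratic integer scaling", "subjective fraction", "declarative geometry"]

    responses = {
        "polygon": 4, "vectors": 3, "exponential function": 2,
        "quadratic integer scaling": 3, "subjective fraction": 2, "declarative geometry": 1,
    }

    def mean_familiarity(terms):
        return sum(responses[t] for t in terms) / len(terms)

    # Higher claimed familiarity with terms that do not exist signals over-claiming.
    overclaiming_index = mean_familiarity(fake_terms)
    print(f"Mean familiarity with fake terms: {overclaiming_index:.2f} / 5")
    ```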
    Further questions revealed those who claimed familiarity with non-existent mathematical concepts were also more likely to display overconfidence in their academic prowess, problem-solving skills and perseverance.
    For instance, they claimed higher levels of competence in calculating a discount on a television and in finding their way to a destination. Two thirds of those most likely to overestimate their mathematical ability were confident they could work out the petrol consumption of a car, compared to just 40 per cent of those least likely to do so.
    Those likely to over-claim were also more likely to say that, if their mobile phone stopped sending texts, they would consult a manual (41 per cent versus 30 per cent), while those less likely to over-claim tended to say they would react by pressing all the buttons (56 per cent versus 49 per cent). More

  • AI-driven tool makes it easy to personalize 3D-printable models

    As 3D printers have become cheaper and more widely accessible, a rapidly growing community of novice makers are fabricating their own objects. To do this, many of these amateur artisans access free, open-source repositories of user-generated 3D models that they download and fabricate on their 3D printer.
    But adding custom design elements to these models poses a steep challenge for many makers, since it requires the use of complex and expensive computer-aided design (CAD) software, and is especially difficult if the original representation of the model is not available online. Plus, even if a user is able to add personalized elements to an object, ensuring those customizations don’t hurt the object’s functionality requires an additional level of domain expertise that many novice makers lack.
    To help makers overcome these challenges, MIT researchers developed a generative-AI-driven tool that enables the user to add custom design elements to 3D models without compromising the functionality of the fabricated objects. A designer could utilize this tool, called Style2Fab, to personalize 3D models of objects using only natural language prompts to describe their desired design. The user could then fabricate the objects with a 3D printer.
    “For someone with less experience, the essential problem they faced has been: Now that they have downloaded a model, as soon as they want to make any changes to it, they are at a loss and don’t know what to do. Style2Fab would make it very easy to stylize and print a 3D model, but also experiment and learn while doing it,” says Faraz Faruqi, a computer science graduate student and lead author of a paper introducing Style2Fab.
    Style2Fab is driven by deep-learning algorithms that automatically partition the model into aesthetic and functional segments, streamlining the design process.
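    As a rough illustration of that aesthetic-versus-functional split, the sketch below is hypothetical and is not Style2Fab's actual code: it loads a downloaded model with the open-source trimesh library, splits it geometrically, and uses a placeholder size heuristic where Style2Fab would apply its learned classifier. The file name and threshold are assumptions.
    ```python
    # Hypothetical sketch of the kind of pipeline described above, not Style2Fab itself.
    import trimesh

    mesh = trimesh.load("downloaded_model.stl")    # placeholder path to a repository model
    segments = mesh.split(only_watertight=False)   # crude geometric segmentation

    def looks_functional(segment, full_mesh):
        """Placeholder rule: treat large segments as functional, small ones as aesthetic.
        Style2Fab uses a learned, data-driven classifier for this step instead."""
        return segment.area > 0.25 * full_mesh.area

    for i, seg in enumerate(segments):
        role = "functional" if looks_functional(seg, mesh) else "aesthetic"
        print(f"segment {i}: {role} (surface area {seg.area:.1f})")

    # Only segments flagged "aesthetic" would then be restyled from the user's
    # text prompt, leaving load-bearing or connecting geometry untouched.
    ```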
    In addition to empowering novice designers and making 3D printing more accessible, Style2Fab could also be utilized in the emerging area of medical making. Research has shown that considering both the aesthetic and functional features of an assistive device increases the likelihood a patient will use it, but clinicians and patients may not have the expertise to personalize 3D-printable models.
    With Style2Fab, a user could customize the appearance of a thumb splint so it blends in with her clothing without altering the functionality of the medical device, for instance. Providing a user-friendly tool for the growing area of DIY assistive technology was a major motivation for this work, adds Faruqi. More

  • Verbal nonsense reveals limitations of AI chatbots

    The era of artificial-intelligence chatbots that seem to understand and use language the way we humans do has begun. Under the hood, these chatbots use large language models, a particular kind of neural network. But a new study shows that large language models remain vulnerable to mistaking nonsense for natural language. To a team of researchers at Columbia University, it’s a flaw that might point toward ways to improve chatbot performance and help reveal how humans process language.
    In a paper published online today in Nature Machine Intelligence, the scientists describe how they challenged nine different language models with hundreds of pairs of sentences. For each pair, people who participated in the study picked which of the two sentences they thought was more natural, meaning that it was more likely to be read or heard in everyday life. The researchers then tested the models to see if they would rate each sentence pair the same way the humans had.
    In head-to-head tests, more sophisticated AIs based on what researchers refer to as transformer neural networks tended to perform better than simpler recurrent neural network models and statistical models that just tally the frequency of word pairs found on the internet or in online databases. But all the models made mistakes, sometimes choosing sentences that sound like nonsense to a human ear.
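    A simple way to reproduce the flavor of this comparison, though not the paper's exact protocol, is to score each sentence with an off-the-shelf language model and treat the higher-probability sentence as the one the model finds more natural. The sketch below uses GPT-2 via the Hugging Face transformers library as a stand-in for the models tested.
    ```python
    # Illustrative sketch: which of two sentences does a language model find more natural?
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def log_likelihood(sentence):
        """Total log-probability the model assigns to the sentence."""
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            # With labels == input_ids, the returned loss is the mean negative
            # log-likelihood per token, so multiply back by the token count.
            loss = model(input_ids=ids, labels=ids).loss
        return -loss.item() * ids.size(1)

    pair = ("That is the narrative we have been sold.",
            "This is the week you have been dying.")
    scores = {s: log_likelihood(s) for s in pair}
    print("model prefers:", max(scores, key=scores.get))
    ```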
    “That some of the large language models perform as well as they do suggests that they capture something important that the simpler models are missing,” said Nikolaus Kriegeskorte, PhD, a principal investigator at Columbia’s Zuckerman Institute and a coauthor on the paper. “That even the best models we studied still can be fooled by nonsense sentences shows that their computations are missing something about the way humans process language.”
    Consider the following sentence pair that both human participants and the AIs assessed in the study:
    That is the narrative we have been sold.
    This is the week you have been dying. More

  • New camera offers ultrafast imaging at a fraction of the normal cost

    Capturing blur-free images of fast movements like falling water droplets or molecular interactions requires expensive ultrafast cameras that acquire millions of images per second. In a new paper, researchers report a camera that could offer a much less expensive way to achieve ultrafast imaging for a wide range of applications such as real-time monitoring of drug delivery or high-speed lidar systems for autonomous driving.
    “Our camera uses a completely new method to achieve high-speed imaging,” said Jinyang Liang from the Institut national de la recherche scientifique (INRS) in Canada. “It has an imaging speed and spatial resolution similar to commercial high-speed cameras but uses off-the-shelf components that would likely cost less than a tenth of today’s ultrafast cameras, which can start at close to $100,000.”
    In Optica, Optica Publishing Group’s journal for high-impact research, Liang and collaborators from Concordia University in Canada and Meta Platforms Inc. show that their new diffraction-gated real-time ultrahigh-speed mapping (DRUM) camera can capture a dynamic event in a single exposure at 4.8 million frames per second. They demonstrate this capability by imaging the fast dynamics of femtosecond laser pulses interacting with liquid and laser ablation in biological samples.
    “In the long term, I believe that DRUM photography will contribute to advances in biomedicine and automation-enabling technologies such as lidar, where faster imaging would allow more accurate sensing of hazards,” said Liang. “However, the paradigm of DRUM photography is quite generic. In theory, it can be used with any CCD and CMOS cameras without degrading their other advantages such as high sensitivity.”
    Creating a better ultrafast camera
    Despite a great deal of progress in ultrafast imaging, today’s methods are still expensive and complex to implement. Their performance is also limited by trade-offs between the number of frames captured in each movie and light throughput or temporal resolution. To overcome these issues, the researchers developed a new time-gating method known as time-varying optical diffraction.
    Cameras use gates to control when light hits the sensor. For example, the shutter in a traditional camera is a type of gate that opens and closes once. In time-gating, the gate is opened and closed in quick succession a certain number of times before the sensor reads out the image. This captures a short high-speed movie of a scene. More
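    As a rough numerical illustration of that idea, and not the DRUM implementation itself, the sketch below opens a gate briefly several times within one exposure and records what a sensor would see at each opening; the gate width, gate count and test signal are assumptions, while the 4.8-million-frames-per-second figure comes from the paper.
    ```python
    # Toy illustration of time-gating: sample a fast-changing scene at several
    # distinct moments within a single exposure, before one sensor readout.
    import numpy as np

    fps = 4.8e6                       # imaging speed reported for the DRUM camera
    dt_between_gates = 1.0 / fps      # ~208 ns between gate openings
    gate_width = 20e-9                # assumed 20 ns gate opening (hypothetical value)
    num_gates = 7                     # assumed number of openings per exposure

    t = np.linspace(0.0, num_gates * dt_between_gates, 20000)   # fine time axis
    scene = np.sin(2 * np.pi * 1.0e6 * t) ** 2                  # some fast-varying intensity

    gate_times = np.arange(num_gates) * dt_between_gates
    samples = [scene[(t >= g) & (t < g + gate_width)].mean() for g in gate_times]

    print(f"time between gated samples: {dt_between_gates * 1e9:.0f} ns")
    print("recorded frames:", np.round(samples, 3))
    ```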

  • Evolution wired human brains to act like supercomputers

    Scientists have confirmed that human brains are naturally wired to perform advanced calculations, much like a high-powered computer, to make sense of the world through a process known as Bayesian inference.
    In a study published in the journal Nature Communications, researchers from the University of Sydney, University of Queensland and University of Cambridge developed a specific mathematical model that closely matches how human brains work when interpreting visual information. The model contained everything needed to carry out Bayesian inference.
    Bayesian inference is a statistical method that combines prior knowledge with new evidence to make intelligent guesses. For example, if you know what a dog looks like and you see a furry animal with four legs, you might use your prior knowledge to guess it’s a dog.
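    To make the dog example concrete, here is a minimal worked version of Bayes’ rule; the prior and likelihood numbers are made up for illustration, not taken from the study.
    ```python
    # Worked Bayesian inference for the dog example (all probabilities are assumed).
    prior_dog = 0.30                 # prior belief: 30% of animals encountered are dogs
    p_obs_given_dog = 0.85           # chance of seeing "furry, four legs" if it is a dog
    p_obs_given_not_dog = 0.20       # chance of the same observation otherwise

    # Bayes' rule: P(dog | observation) = P(observation | dog) * P(dog) / P(observation)
    p_obs = p_obs_given_dog * prior_dog + p_obs_given_not_dog * (1 - prior_dog)
    posterior_dog = p_obs_given_dog * prior_dog / p_obs

    print(f"P(dog | furry, four legs) = {posterior_dog:.2f}")   # about 0.65
    ```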
    This inherent capability enables people to interpret the environment with extraordinary precision and speed, unlike machines that can be bested by simple CAPTCHA security measures when prompted to identify fire hydrants in a panel of images.
    The study’s senior investigator Dr Reuben Rideaux, from the University of Sydney’s School of Psychology, said: “Despite the conceptual appeal and explanatory power of the Bayesian approach, how the brain calculates probabilities is largely mysterious.”
    “Our new study sheds light on this mystery. We discovered that the basic structure and connections within our brain’s visual system are set up in a way that allows it to perform Bayesian inference on the sensory data it receives.
    “What makes this finding significant is the confirmation that our brains have an inherent design that allows this advanced form of processing, enabling us to interpret our surroundings more effectively.”
    The study’s findings not only confirm existing theories about the brain’s use of Bayesian-like inference but open doors to new research and innovation, where the brain’s natural ability for Bayesian inference can be harnessed for practical applications that benefit society. More

  • Take the money now or later? Financial scarcity doesn’t lead to poor decision making

    When people feel that their resources are scarce — that they don’t have enough money or time to meet their needs — they often make decisions that favor short-term gains over long-term benefits. Because of that, researchers have argued that scarcity pushes people to make myopic, impulsive decisions. But a study published by the American Psychological Association provides support for a different, less widely held view: People experiencing scarcity make reasonable decisions based on their circumstances, and only prioritize short-term benefits over long-term gains when scarcity threatens their more immediate needs.
    “This research challenges the predominant view that when people feel poor or live in poverty, they become impatient and shortsighted and can’t or don’t think about the future,” said study co-author Eesha Sharma, Ph.D., of San Diego State University. “It provides a framework, instead, for understanding that when people are experiencing financial scarcity, they’re trying to make the best decision they can, given the circumstances they’re in.”
    The research was published in the Journal of Personality and Social Psychology.
    Sharma and co-authors Stephanie Tully, Ph.D., of the University of Southern California, and Xiang Wang, Ph.D., of Lingnan University in Hong Kong, wanted to distinguish between two competing ideas: that people’s preference for shorter-term gains reflects impatience and impulsivity, or that it reflects more intentional, deliberate decision-making. To do so, they examined how people’s decisions change depending on the timeline of the needs that they feel they don’t have enough money for.
    “Needs exist across a broad time horizon,” said Tully. “We often think about immediate needs like food or shelter, but people can experience scarcity related to future needs, too, such as replacing a run-down car before it dies, buying a house or paying for college. Yet research on scarcity has focused almost exclusively on immediate needs.”
    In the current study, the researchers conducted five experiments in which they measured or induced a sense of scarcity in participants, and examined how the choices people made changed depending on whether that scarcity was related to a shorter- or longer-term need.
    Overall, they found that when people feel that they don’t have enough resources to meet an immediate need, such as food or shelter, they are more likely to make decisions that offer an immediate payout, even if it comes at the expense of receiving a larger payout later. But when scarcity threatens a longer-term need, such as replacing a run-down car, people experiencing scarcity are no less willing to wait for larger, later rewards — and in some cases are more willing to wait — compared with people not experiencing scarcity. More

  • Images of simulated cities help artificial intelligence to understand real streetscapes

    Recent advances in artificial intelligence and deep learning have revolutionized many industries, and might soon help recreate your neighborhood as well. Given images of a landscape, deep-learning models can help urban landscapers visualize plans for redevelopment, improving scenery and preventing costly mistakes.
    To accomplish this, however, models must be able to correctly identify and categorize each element in a given image. This step, called instance segmentation, remains challenging for machines owing to a lack of suitable training data. Although it is relatively easy to collect images of a city, generating the ‘ground truth’, that is, the labels that tell the model if its segmentation is correct, involves painstakingly segmenting each image, often by hand.
    Now, to address this problem, researchers at Osaka University have developed a way to train these data-hungry models using computer simulation. First, a realistic 3D city model is used to generate the segmentation ground truth. Then, an image-to-image model generates photorealistic images from the ground truth images. The result is a dataset of realistic images similar to those of an actual city, complete with precisely generated ground-truth labels that do not require manual segmentation.
    “Synthetic data have been used in deep learning before,” says lead author Takuya Kikuchi. “But most landscape systems rely on 3D models of existing cities, which remain hard to build. We also simulate the city structure, but we do it in a way that still generates effective training data for models in the real world.”
    After the 3D model of a realistic city is generated procedurally, segmentation images of the city are created with a game engine. Finally, a generative adversarial network, which is a neural network that uses game theory to learn how to generate realistic-looking images, is trained to convert images of shapes into images with realistic city textures. This image-to-image model creates the corresponding street-view images.
    “This removes the need for datasets of real buildings, which are not publicly available. Moreover, several individual objects can be separated, even if they overlap in the image,” explains corresponding author Tomohiro Fukuda. “But most importantly, this approach saves human effort, and the costs associated with that, while still generating good training data.”
    To prove this, a segmentation model called a ‘mask region-based convolutional neural network’ was trained on the simulated data and another was trained on real data. The models performed similarly on instances of large, distinct buildings, even though the time to produce the dataset was reduced by 98%.
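    For readers unfamiliar with that model family, the sketch below runs a pretrained Mask R-CNN from torchvision on a single street photo; it is a generic example, not the authors' training setup, and the image path and score threshold are placeholders.
    ```python
    # Generic instance-segmentation example with a pretrained Mask R-CNN (not the study's code).
    import torch
    import torchvision
    from torchvision.io import read_image
    from torchvision.transforms.functional import convert_image_dtype

    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = convert_image_dtype(read_image("street_view.jpg"), torch.float)  # placeholder path
    with torch.no_grad():
        prediction = model([image])[0]      # dict with boxes, labels, scores, per-instance masks

    keep = prediction["scores"] > 0.7       # keep only confident detections
    print(f"{int(keep.sum())} object instances segmented")
    ```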
    The researchers plan to see if improvements to the image-to-image model increase performance under more conditions. For now, this approach generates large amounts of data with an impressively low amount of effort. The researchers’ achievement will address current and upcoming shortages of training data, reduce costs associated with dataset preparation and help to usher in a new era of deep learning-assisted urban landscaping. More