More stories

  • Engineers develop breakthrough ‘robot skin’

    Smart, stretchable and highly sensitive, a new soft sensor developed by UBC and Honda researchers opens the door to a wide range of applications in robotics and prosthetics.
    When applied to the surface of a prosthetic arm or a robotic limb, the sensor skin provides touch sensitivity and dexterity, enabling tasks that can be difficult for machines such as picking up a piece of soft fruit. The sensor is also soft to the touch, like human skin, which helps make human interactions safer and more lifelike.
    “Our sensor can sense several types of forces, allowing a prosthetic or robotic arm to respond to tactile stimuli with dexterity and precision. For instance, the arm can hold fragile objects like an egg or a glass of water without crushing or dropping them,” said study author Dr. Mirza Saquib Sarwar, who created the sensor as part of his PhD work in electrical and computer engineering at UBC’s faculty of applied science.
    Giving machines a sense of touch
    The sensor is primarily composed of silicone rubber, the same material used to make many skin special effects in movies. The team’s unique design gives it the ability to buckle and wrinkle, just like human skin.
    “Our sensor uses weak electric fields to sense objects, even at a distance, much as touchscreens do. But unlike touchscreens, this sensor is supple and can detect forces into and along its surface. This unique combination is key to adoption of the technology for robots that are in contact with people,” explained Dr. John Madden, senior study author and a professor of electrical and computer engineering who leads the Advanced Materials and Process Engineering Laboratory (AMPEL) at UBC.
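    The excerpt doesn’t spell out the electrode layout or the signal processing, so the following is only a rough sketch of the principle described above: reading small capacitance changes produced by weak electric fields and resolving them into a normal (pressing) force and a shear (sliding) force, plus a simple proximity check. The four-pad taxel geometry, the scale factors and the threshold are illustrative assumptions, not the UBC/Honda design.

        import numpy as np

        # Hypothetical 4-pad capacitive taxel: a deformable bump sits over four
        # sensing pads (N, S, E, W). Pressing raises all four capacitances;
        # shearing shifts the bump, creating an imbalance between opposite pads.
        K_NORMAL = 2.0   # assumed newtons per unit of summed capacitance change
        K_SHEAR  = 1.5   # assumed newtons per unit of differential change

        def taxel_forces(dC):
            """dC: length-4 array of capacitance changes [north, south, east, west]
            (arbitrary units). Returns (normal, shear_x, shear_y) force estimates."""
            n, s, e, w = dC
            normal  = K_NORMAL * (n + s + e + w)   # pressing raises all pads
            shear_y = K_SHEAR  * (n - s)           # bump slides toward north
            shear_x = K_SHEAR  * (e - w)           # bump slides toward east
            return normal, shear_x, shear_y

        # Proximity sensing: even before contact, a grounded object (e.g. a hand)
        # disturbs the fringing field; a small drop in total capacitance can be
        # flagged as "something is near".
        def is_near(dC, threshold=-0.05):
            return dC.sum() < threshold

        print(taxel_forces(np.array([0.30, 0.10, 0.20, 0.20])))  # pressed and sheared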
    The UBC team developed the technology in collaboration with Frontier Robotics, Honda’s research institute. Honda has been innovating in humanoid robotics since the 1980s, and developed the well-known ASIMO robot. It has also developed devices to assist walking, and the emerging Honda Avatar Robot.

  • Vision via sound for the blind

    Australian researchers have developed cutting-edge technology known as “acoustic touch” that helps people ‘see’ using sound. The technology has the potential to transform the lives of those who are blind or have low vision.
    Around 39 million people worldwide are blind, according to the World Health Organisation, and an additional 246 million people live with low vision, impacting their ability to participate in everyday life activities.
    The next generation smart glasses, which translate visual information into distinct sound icons, were developed by researchers from the University of Technology Sydney and the University of Sydney, together with Sydney start-up ARIA Research.
    “Smart glasses typically use computer vision and other sensory information to translate the wearer’s surroundings into computer-synthesized speech,” said Distinguished Professor Chin-Teng Lin, a global leader in brain-computer interface research from the University of Technology Sydney.
    “However, acoustic touch technology sonifies objects, creating unique sound representations as they enter the device’s field of view. For example, the sound of rustling leaves might signify a plant, or a buzzing sound might represent a mobile phone,” he said.
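    The paper details the hardware and the sound design, which this excerpt doesn’t reproduce; the sketch below only illustrates the sonification idea described above: objects trigger their sound icons while they sit inside the glasses’ field of view. The object labels, sound-file names and field-of-view angle are invented for the example.

        import math

        # Hypothetical mapping from detected object classes to "sound icons".
        SOUND_ICONS = {
            "plant": "rustling_leaves.wav",
            "phone": "buzzing.wav",
            "cup":   "soft_chime.wav",
        }

        FOV_DEGREES = 60  # assumed horizontal field of view of the glasses

        def bearing_deg(dx, dz):
            """Angle of an object relative to straight ahead (z forward, x right)."""
            return math.degrees(math.atan2(dx, dz))

        def sonify(detections):
            """detections: list of (label, dx, dz) in metres from the wearer's head.
            Returns the sound icons to play for objects inside the field of view."""
            cues = []
            for label, dx, dz in detections:
                if abs(bearing_deg(dx, dz)) <= FOV_DEGREES / 2 and label in SOUND_ICONS:
                    cues.append((SOUND_ICONS[label], bearing_deg(dx, dz)))  # pan by bearing
            return cues

        print(sonify([("plant", -0.4, 1.2), ("phone", 2.0, 0.5)]))  # only the plant is in view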
    A study into the efficacy and usability of acoustic touch technology to assist people who are blind, led by Dr Howe Zhu from the University of Technology Sydney, has just been published in the journal PLOS ONE.
    The researchers tested the device with 14 participants: seven individuals with blindness or low vision, and seven blindfolded sighted individuals who served as a control group.

  • Using sound to test devices, control qubits

    Acoustic resonators are everywhere. In fact, there is a good chance you’re holding one in your hand right now. Most smart phones today use bulk acoustic resonators as radio frequency filters to filter out noise that could degrade a signal. These filters are also used in most Wi-Fi and GPS systems.
    Acoustic resonators are more stable than their electrical counterparts, but they can degrade over time. There is currently no easy way to actively monitor and analyze the degradation of the material quality of these widely used devices.
    Now, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), in collaboration with researchers at the OxideMEMS Lab at Purdue University, have developed a system that uses atomic vacancies in silicon carbide to measure the stability and quality of acoustic resonators. What’s more, these vacancies could also be used for acoustically-controlled quantum information processing, providing a new way to manipulate quantum states embedded in this commonly-used material.
    “Silicon carbide, which is the host for both the quantum reporters and the acoustic resonator probe, is a readily available commercial semiconductor that can be used at room temperature,” said Evelyn Hu, the Tarr-Coyne Professor of Applied Physics and of Electrical Engineering and the Robin Li and Melissa Ma Professor of Arts and Sciences, and senior author of the paper. “As an acoustic resonator probe, this technique in silicon carbide could be used in monitoring the performance of accelerometers, gyroscopes and clocks over their lifetime and, in a quantum scheme, has potential for hybrid quantum memories and quantum networking.”
    The research was published in Nature Electronics.
    A look inside acoustic resonators
    Silicon carbide is a common material for microelectromechanical systems (MEMS), which include bulk acoustic resonators.

  • Clinical trials could yield better data with fewer patients thanks to new tool

    University of Alberta researchers have developed a new statistical tool to evaluate the results of clinical trials, with the aim of allowing smaller trials to ask more complex research questions and get effective treatments to patients more quickly.
    In their paper, the team reports on their new “Chauhan Weighted Trajectory Analysis,” which they developed to improve on the Kaplan-Meier estimator, the standard tool since 1958.
    The Kaplan-Meier test limits researchers because it can only assess binary questions, such as whether patients survived or died on a treatment. It can’t include other factors such as adverse drug reactions or quality-of-life measures such as being able to walk or care for yourself. The new tool allows simultaneous evaluation and visualization of multiple outcomes in one graph.
    “In general, diseases aren’t binary,” explains first author Karsh Chauhan, a fourth-year MD student at the U of A. “Now we can capture the severity of diseases — whether they make patients sick, whether they put them in hospital, whether they lead to death — and we can capture both the rise and the fall of how patients do on different treatments.”
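    The excerpt doesn’t give the Chauhan weighting scheme itself, so the sketch below only contrasts the two ideas. The first function is the standard Kaplan-Meier product-limit estimator, which tracks a single binary event; the second is a hypothetical weighted multi-state score in which each patient’s clinical state at each time point is mapped to a severity weight and averaged across the treatment arm. The states and weights are invented for illustration and are not those of the published method.

        import numpy as np

        def kaplan_meier(times, events):
            """times: follow-up time per patient; events: 1 = event, 0 = censored.
            Returns (event_times, survival) for the standard binary estimator."""
            times, events = np.asarray(times, float), np.asarray(events, int)
            ts = np.unique(times[events == 1])
            s, surv = 1.0, []
            for t in ts:
                at_risk = np.sum(times >= t)
                d = np.sum((times == t) & (events == 1))
                s *= 1.0 - d / at_risk
                surv.append(s)
            return ts, np.array(surv)

        # Hypothetical multi-state scoring: 0 = well, 1 = symptomatic,
        # 2 = hospitalized, 3 = dead.  Weights are illustrative only.
        WEIGHTS = {0: 0.0, 1: 0.3, 2: 0.7, 3: 1.0}

        def weighted_trajectory(states):
            """states: array of shape (patients, timepoints) of clinical states.
            Returns the mean severity burden of the arm at each timepoint, so two
            arms can be compared on one curve rather than on a single binary event."""
            w = np.vectorize(WEIGHTS.get)(states)
            return w.mean(axis=0)

        states = np.array([[0, 1, 1, 3, 3],    # one row per hypothetical patient
                           [0, 0, 2, 2, 1]])
        print(weighted_trajectory(states))     # severity burden at each timepoint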
    John Mackey, a breast cancer medical oncologist and professor emeritus of oncology, added that this tool allows researchers to run a smaller, less expensive and quicker trial with fewer patients, and to get the overall benefit of a new treatment out into the world more rapidly.
    The two began working on the statistical tool three years ago when they were designing a clinical trial for a new device to prevent bedsores, which affect many patients with long-term illness. They wanted to look at how the severity of illness changed during treatment, but the Kaplan-Meier test wasn’t going to help.
    “Dr. Mackey said to me, ‘If the tool doesn’t exist, then why don’t you build it yourself?’ That was very exciting,” says Chauhan, who also has a BSc in engineering physics, which he calls a degree in “problem-solving.”

  • Can AI grasp related concepts after learning only one?

    Humans have the ability to learn a new concept and then immediately use it to understand related uses of that concept — once children know how to “skip,” they understand what it means to “skip twice around the room” or “skip with your hands up.”
    But are machines capable of this type of thinking? In the late 1980s, Jerry Fodor and Zenon Pylyshyn, philosophers and cognitive scientists, posited that artificial neural networks — the engines that drive artificial intelligence and machine learning — are not capable of making these connections, known as “compositional generalizations.” In the decades since, scientists have been developing ways to instill this capacity in neural networks and related technologies, but with only mixed success, keeping the debate alive.
    Researchers at New York University and Spain’s Pompeu Fabra University have now developed a technique — reported in the journal Nature — that advances the ability of these tools, such as ChatGPT, to make compositional generalizations. This technique, Meta-learning for Compositionality (MLC), outperforms existing approaches and is on par with, and in some cases better than, human performance. MLC centers on training neural networks — the engines driving ChatGPT and related technologies for speech recognition and natural language processing — to become better at compositional generalization through practice.
    Developers of existing systems, including large language models, have hoped that compositional generalization will emerge from standard training methods, or have developed special-purpose architectures in order to achieve these abilities. MLC, in contrast, shows how explicitly practicing these skills allows these systems to unlock new powers, the authors note.
    “For 35 years, researchers in cognitive science, artificial intelligence, linguistics, and philosophy have been debating whether neural networks can achieve human-like systematic generalization,” says Brenden Lake, an assistant professor in NYU’s Center for Data Science and Department of Psychology and one of the authors of the paper. “We have shown, for the first time, that a generic neural network can mimic or exceed human systematic generalization in a head-to-head comparison.”
    In exploring the possibility of bolstering compositional learning in neural networks, the researchers created MLC, a novel learning procedure in which a neural network is continuously updated to improve its skills over a series of episodes. In an episode, MLC receives a new word and is asked to use it compositionally — for instance, to take the word “jump” and then create new word combinations, such as “jump twice” or “jump around right twice.” MLC then receives a new episode that features a different word, and so on, each time improving the network’s compositional skills.
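    The Nature paper specifies the actual network and training data; the excerpt gives only the gist, so here is a loose sketch of what one such episode might look like. The toy grammar, the nonsense words and the episode format are placeholders, not the authors’ implementation; the point is only that each episode defines a new word and then asks for compositions of it.

        import random

        # Toy compositional grammar: a primitive maps to a symbol; modifiers
        # rearrange or repeat it ("twice" repeats, "around right" adds a turn).
        def interpret(phrase, lexicon):
            words = phrase.split()
            out = [lexicon[words[0]]]
            if "twice" in words:
                out = out * 2
            if "around" in words and "right" in words:
                out = ["TURN_R"] + out
            return out

        def make_episode(vocab=("dax", "zup", "wif")):
            """One MLC-style episode: a study example defining a new word, plus
            query phrases the learner must answer compositionally."""
            word = random.choice(vocab)
            lexicon = {word: word.upper()}           # e.g. "dax" -> "DAX"
            study   = [(word, interpret(word, lexicon))]
            queries = [(f"{word} twice", interpret(f"{word} twice", lexicon)),
                       (f"{word} around right twice",
                        interpret(f"{word} around right twice", lexicon))]
            return study, queries

        # Meta-training loop (model omitted): across many episodes the network is
        # optimized to map (study examples, query) -> correct output sequence, so
        # that composing a freshly learned word correctly becomes a practiced skill.
        for _ in range(3):
            study, queries = make_episode()
            print(study, "->", queries)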
    To test the effectiveness of MLC, Lake, co-director of NYU’s Minds, Brains, and Machines Initiative, and Marco Baroni, a researcher at the Catalan Institute for Research and Advanced Studies and professor at the Department of Translation and Language Sciences of Pompeu Fabra University, conducted a series of experiments with human participants that were identical to the tasks performed by MLC.
    Rather than learning the meaning of actual words — terms humans would already know — participants had to learn the meaning of nonsensical terms (e.g., “zup” and “dax”) as defined by the researchers and apply them in different ways. MLC performed as well as the human participants — and, in some cases, better than its human counterparts. MLC and people also outperformed ChatGPT and GPT-4, which, despite their striking general abilities, showed difficulties with this learning task.
    “Large language models such as ChatGPT still struggle with compositional generalization, though they have gotten better in recent years,” observes Baroni, a member of Pompeu Fabra University’s Computational Linguistics and Linguistic Theory research group. “But we think that MLC can further improve the compositional skills of large language models.”

  • Highest-resolution single-photon superconducting camera

    Researchers at the National Institute of Standards and Technology (NIST) and their colleagues have built a superconducting camera containing 400,000 pixels — 400 times more than any other device of its type.
    Superconducting cameras allow scientists to capture very weak light signals, whether from distant objects in space or parts of the human brain. Having more pixels could open up many new applications in science and biomedical research.
    The NIST camera is made up of grids of ultrathin electrical wires, cooled to near absolute zero, in which current moves with no resistance until a wire is struck by a photon. In these superconducting-nanowire cameras, the energy imparted by even a single photon can be detected because it shuts down the superconductivity at a particular location (pixel) on the grid. Combining all the locations and intensities of all the photons makes up an image.
    The first superconducting cameras capable of detecting single photons were developed more than 20 years ago. Since then, the devices have contained no more than a few thousand pixels — too limited for most applications.
    Creating a superconducting camera with a greater number of pixels has posed a serious challenge: each of the camera’s superconducting components must be cooled to ultralow temperatures to function properly, and connecting every one of many thousands of chilled pixels to its own readout wire leading out of the cooling system would be all but impossible.
    NIST researchers Adam McCaughan and Bakhrom Oripov and their collaborators at NASA’s Jet Propulsion Laboratory in Pasadena, California, and the University of Colorado Boulder overcame that obstacle by combining the signals from many pixels onto just a few room-temperature readout wires.
    A general property of any superconducting wire is that it allows current to flow freely up to a certain maximum “critical” current. To take advantage of that behavior, the researchers applied a current just below the maximum to the sensors. Under that condition, if even a single photon strikes a pixel, it destroys the superconductivity. The current is no longer able to flow without resistance through the nanowire and is instead shunted to a small resistive heating element connected to each pixel. The shunted current creates an electrical signal that can rapidly be detected.
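    The numbers and the circuit below are illustrative, not NIST’s design; they simply walk through the detection logic described above for a single pixel: bias the nanowire a little below its critical current, let a photon absorption break the superconductivity, and divert the bias into a resistive element so the event shows up as a voltage pulse. (In the real camera, many such pixels share a few room-temperature readout wires, as noted above.)

        # Illustrative single-pixel model of a superconducting-nanowire detector.
        I_CRITICAL = 10e-6              # assumed critical current of the nanowire, 10 uA
        I_BIAS     = 0.9 * I_CRITICAL   # biased just below the critical current
        R_HEATER   = 50.0               # assumed resistance of the per-pixel shunt element

        def pixel_response(photon_absorbed: bool) -> float:
            """Return the readout voltage from one pixel.

            Superconducting: current flows with zero resistance, so no voltage.
            Photon hit: superconductivity is locally destroyed, the bias current
            is shunted into the resistive element, producing a measurable pulse."""
            if not photon_absorbed:
                return 0.0                    # zero-resistance path, no signal
            return I_BIAS * R_HEATER          # V = I * R pulse on the readout line

        print(pixel_response(False))   # 0.0 V: nothing detected
        print(pixel_response(True))    # ~0.45 mV pulse flags a photon at this pixel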

  • Climate change likely impacted human populations in the Neolithic and Bronze Age

    Human populations in Neolithic Europe fluctuated with changing climates, according to a study published October 25, 2023 in the open-access journal PLOS ONE by Ralph Großmann of Kiel University, Germany and colleagues.
    The archaeological record is a valuable resource for exploring the relationship between humans and the environment, particularly how each is affected by the other. In this study, researchers examined Central European regions rich in archaeological remains and geologic sources of climate data, using these resources to identify correlations between human population trends and climate change.
    The three regions examined are the Circumharz region of central Germany, the Czech Republic/Lower Austria region, and the Northern Alpine Foreland of southern Germany. Researchers compiled over 3400 published radiocarbon dates from archaeological sites in these regions to serve as indicators of ancient populations, following the logic that more dates are available from larger populations leaving behind more materials. Climate data came from cave formations in these regions which provide datable information about ancient climate conditions. These data span 3550-1550 BC, from the Late Neolithic to the Early Bronze Age.
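    The excerpt describes the population proxy only in outline (more dated material implies more people). One common way to turn a collection of radiocarbon dates into such a curve is a summed probability distribution; the sketch below uses plain Gaussian uncertainties rather than a proper calibration curve, so it is a simplification of the published workflow, and the dates in it are invented.

        import numpy as np

        def summed_probability(dates_bc, errors, grid_bc):
            """dates_bc: central date of each sample (years BC); errors: 1-sigma
            uncertainties.  Each date contributes a normalized Gaussian; the sum
            over all samples serves as a relative population proxy over time.
            (Real studies calibrate each date against a curve such as IntCal20.)"""
            total = np.zeros_like(grid_bc, dtype=float)
            for mu, sigma in zip(dates_bc, errors):
                p = np.exp(-0.5 * ((grid_bc - mu) / sigma) ** 2)
                total += p / p.sum()          # each sample contributes equal weight
            return total

        grid  = np.arange(3550, 1550, -5.0)            # 3550-1550 BC in 5-year steps
        dates = [3300, 3280, 2900, 2200, 2150, 2100]   # invented example dates (BC)
        errs  = [40, 60, 50, 30, 45, 35]               # invented 1-sigma errors (years)
        proxy = summed_probability(np.array(dates), np.array(errs), grid)
        print(grid[np.argmax(proxy)], "BC has the densest dated activity in this toy set")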
    The study found a notable correlation between climate and human populations. During warm and wet times, populations tended to increase, likely bolstered by improved crops and economies. During cold and dry times, populations often decreased, sometimes experiencing major cultural shifts with potential evidence of increasing social inequality, such as the emergence of high-status “princely burials” of some individuals in the Circumharz region.
    These results suggest that at least some of the trends in human populations over time can be attributed to the effects of changing climates. The authors acknowledge that these data are susceptible to skewing by limitations of the archaeological record in these regions, and that more data will be important to support these results. This type of study is crucial for understanding human connectivity to the environment and the impacts of changing climates on human cultures.
    The authors add: “Between 5500 and 3500 years ago, climate was a major factor in population development in the regions around the Harz Mountains, in the northern Alpine foreland and in the region of what is now the Czech Republic and Austria. However, not only the population size, but also the social structures changed with climate fluctuations.”

  • ‘Dim-witted’ pigeons use the same principles as AI to solve tasks

    A new study provides evidence that pigeons tackle some problems just as artificial intelligence would — allowing them to solve difficult tasks that would vex humans.
    Previous research had shown that pigeons could learn to solve complex categorization tasks for which human ways of thinking — such as selective attention and explicit rule use — would not be useful.
    Researchers had theorized that pigeons used a “brute force” method of solving problems that is similar to what is used in AI models, said Brandon Turner, lead author of the new study and professor of psychology at The Ohio State University.
    But this study may have proven it: Turner and a colleague tested a simple AI model to see if it could solve the problems in the way they thought pigeons did — and it worked.
    “We found really strong evidence that the mechanisms guiding pigeon learning are remarkably similar to the same principles that guide modern machine learning and AI techniques,” Turner said.
    “Our findings suggest that in the pigeon, nature may have found a way to make an incredibly efficient learner that has no ability to generalize or extrapolate like humans would.”
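    The excerpt calls the model only “a simple AI model” driven by associative, error-corrected learning rather than rules, so the sketch below is one plausible reading of that description: a nearest-exemplar classifier whose association weights are nudged after each mistake. Its structure and parameters are guesses for illustration, not Turner and Wasserman’s model.

        import numpy as np

        class ExemplarLearner:
            """Toy associative learner: store every stimulus seen, predict the
            category whose stored exemplars are most similar, and weaken the
            associations that produced an error.  No rules, no extrapolation
            beyond similarity: the 'brute force' strategy described above."""

            def __init__(self, lr=0.2):
                self.exemplars, self.weights, self.labels, self.lr = [], [], [], lr

            def _scores(self, x):
                x = np.asarray(x, float)
                scores = {}
                for e, w, y in zip(self.exemplars, self.weights, self.labels):
                    sim = np.exp(-np.linalg.norm(x - e))      # similarity to stored exemplar
                    scores[y] = scores.get(y, 0.0) + sim * w
                return scores

            def predict(self, x):
                scores = self._scores(x)
                return max(scores, key=scores.get) if scores else None

            def learn(self, x, y):
                guess = self.predict(x)
                if guess is not None and guess != y:          # error-driven update
                    for i, lab in enumerate(self.labels):
                        if lab == guess:
                            self.weights[i] -= self.lr        # weaken the wrong answer
                self.exemplars.append(np.asarray(x, float))   # memorize the new case
                self.weights.append(1.0)
                self.labels.append(y)

        learner = ExemplarLearner()
        learner.learn([0.1, 0.9], "A")
        learner.learn([0.8, 0.2], "B")
        print(learner.predict([0.15, 0.85]))   # "A", chosen by similarity, not by rule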
    Turner conducted the study with Edward Wasserman, a professor of psychology at the University of Iowa. Their results were published recently in the journal iScience.