More stories

  • This 3D printed gripper doesn’t need electronics to function

    This soft robotic gripper is not only 3D printed in one print, it also doesn’t need any electronics to work.
    The device was developed by a team of roboticists at the University of California San Diego, in collaboration with researchers at the BASF corporation, who detailed their work in a recent issue of Science Robotics.
    The researchers wanted to design a soft gripper that would be ready to use right off the 3D printer, equipped with built-in gravity and touch sensors. As a result, the gripper can pick up, hold, and release objects. No such gripper existed before this work.
    “We designed functions so that a series of valves would allow the gripper to both grip on contact and release at the right time,” said Yichen Zhai, a postdoctoral researcher in the Bioinspired Robotics and Design Lab at the University of California San Diego and the lead author of the paper, which was published in the June 21 issue of Science Robotics. “It’s the first time such a gripper can both grip and release. All you have to do is turn the gripper horizontally. This triggers a change in the airflow in the valves, making the two fingers of the gripper release.”
    This fluidic logic allows the gripper to remember when it has grasped an object and is holding on to it. When it detects the object’s weight pulling to the side as the gripper rotates toward horizontal, it releases the object.
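    The grip-on-contact, release-on-rotation behaviour can be pictured as a tiny state machine. The sketch below is a hypothetical software analogue of the fluidic logic — the real device implements this with 3D-printed pneumatic valves, and the 90-degree release threshold here is an assumption:

```python
# Hypothetical software analogue of the gripper's fluidic logic. The
# real device implements this with 3D-printed pneumatic valves; the
# 90-degree release threshold below is an assumption for illustration.

class FluidicGripper:
    def __init__(self):
        self.holding = False  # valve state "remembers" an active grasp

    def on_contact(self):
        """Touch response: contact with an object closes the fingers."""
        self.holding = True

    def on_orientation(self, angle_deg):
        """Gravity response: near-horizontal rotation redirects airflow
        through the valves and opens the fingers."""
        if self.holding and abs(angle_deg - 90) < 10:
            self.holding = False
```

    The key point mirrored here is that the "memory" of holding an object lives in the valve state itself, with no electronics involved.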
    Soft robotics holds the promise of allowing robots to interact safely with humans and delicate objects. This gripper can be mounted on a robotic arm for industrial manufacturing applications, food production and the handling of fruits and vegetables. It can also be mounted onto a robot for research and exploration tasks. In addition, it can function untethered, with a bottle of high-pressure gas as its only power source.
    Most 3D-printed soft robots, by contrast, have a certain degree of stiffness, contain many leaks when they come off the printer, and need a fair amount of processing and assembly after printing before they can be used.

  • Fusion model hot off the wall

    Humans may never be able to tame the Sun, but hydrogen plasma — which makes up most of the Sun’s interior — can be confined in a magnetic field for fusion power generation, with one caveat.
    The extremely hot plasmas, typically around 100 million degrees Celsius, confined in tokamaks — donut-shaped fusion reactors — damage the containment walls of these mega devices. Researchers inject hydrogen and inert gases near the device wall to cool the plasma through radiation and recombination, the reverse of ionization. Heat-load mitigation is critical to extending the lifetime of future fusion devices.
    Understanding and predicting the vibrational and rotational temperatures of hydrogen molecules near the walls could help enhance recombination, but effective strategies have remained elusive.
    An international team of researchers led by Kyoto University has recently found a way to explain the rotational temperatures measured in three different experimental fusion devices in Japan and the United States. Their model evaluates the surface interactions and electron-proton collisions of hydrogen molecules.
    “In our model, we targeted the evaluation on the rotational temperatures in the low energy levels, enabling us to explain the measurements from several experimental devices,” adds corresponding author Nao Yoneda of KyotoU’s Graduate School of Engineering.
    By enabling the prediction and control of the rotational temperature near the wall surface, the team was able to dissipate plasma heat flux and optimize the devices’ operative conditions.
    “We still need to understand the mechanisms of rotational-vibrational hydrogen excitations,” Yoneda reflects, “but we were pleased that the versatility of our model also allowed us to reproduce the measured rotational temperatures reported in the literature.”

  • Breakthrough in Monte Carlo computer simulations

    Researchers at Leipzig University have developed a highly efficient method for investigating systems with long-range interactions that have long puzzled experts. These systems can be gases or solid materials such as magnets, whose atoms interact not only with their neighbours but also with atoms far beyond. Professor Wolfhard Janke and his team of researchers use Monte Carlo computer simulations for this purpose. This stochastic process, named after the Monte Carlo casino, generates random system states from which the desired properties of the system can be determined. In this way, Monte Carlo simulations provide deep insights into the physics of phase transitions. The researchers have developed a new algorithm that can perform these simulations in a matter of days, where conventional methods would have taken centuries. They have published their new findings in the journal Physical Review X.
    A physical system is in equilibrium when its macroscopic properties such as pressure or temperature do not change over time. Nonequilibrium processes occur when environmental changes push a system out of equilibrium and the system then seeks a new state of equilibrium. “These processes are increasingly becoming the focus of attention for statistical physicists worldwide. While a large number of studies have analysed numerous aspects of nonequilibrium processes for systems with short-range interactions, we are only just beginning to understand the role of long-range interactions in such processes,” explains Janke.
    The curse of long-range interactions
    For short-range systems, whose components interact only with their nearby neighbours, the number of operations needed to calculate the evolution of the entire system over time increases linearly with the number of components. For long-range interacting systems, each component’s interaction with every other component, even distant ones, must be included. As the system grows, the runtime therefore increases quadratically. A team of scientists led by Professor Janke has now succeeded in reducing this algorithmic complexity by restructuring the algorithm and using a clever combination of suitable data structures. For large systems, this leads to a massive reduction in the required computing time and allows completely new questions to be investigated.
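    To see where the quadratic cost comes from, consider a naive Metropolis update for a one-dimensional long-range Ising chain: flipping a single spin requires a sum over all other spins. The sketch below illustrates only that baseline scaling — it is not the Leipzig team’s improved algorithm, and the power-law coupling J(r) = r^(-sigma) and parameter values are assumptions:

```python
import math
import random

# Illustrative baseline only -- NOT the Leipzig algorithm. A naive
# Metropolis sweep for a 1D long-range Ising chain with power-law
# couplings J(r) = r**(-sigma); sigma and beta are assumed values.
# Each single-spin update sums over all N-1 other spins, so one full
# sweep costs O(N^2) -- the scaling the new method reduces.

def naive_sweep(spins, sigma=1.5, beta=1.0):
    n = len(spins)
    for i in range(n):
        # Local field at spin i: every other spin contributes,
        # however distant -- the "curse" of long-range interactions.
        field = sum(spins[j] / abs(i - j) ** sigma
                    for j in range(n) if j != i)
        delta_e = 2.0 * spins[i] * field  # energy change of flipping spin i
        if delta_e <= 0 or random.random() < math.exp(-beta * delta_e):
            spins[i] = -spins[i]
    return spins
```

    A sweep over N spins thus performs on the order of N² coupling evaluations; the published method restructures this so that large systems no longer pay the quadratic cost.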
    New horizons opened
    The article shows how the new method can be efficiently applied to nonequilibrium processes in systems with long-range interactions. One example describes spontaneous ordering in an initially disordered “hot” system, in which, after an abrupt temperature drop, ordered domains grow over time until an ordered equilibrium state is reached. From daily life, we know that when we take a hot shower next to a cold window, droplets form on the glass: the hot steam cools down quickly and the droplets grow larger. A related example is cooling at controlled, slower rates, where the formation of vortices and other structures is of particular interest, as these play an important role in cosmology and in solid-state physics.
    In addition, researchers at the Institute of Theoretical Physics have already successfully applied the algorithm to the process of phase separation, in which, for example, two types of particles spontaneously separate. Such nonequilibrium processes play a fundamental role both in industrial applications and in the functioning of cells in biological systems. These examples illustrate the wide range of application scenarios that this methodological advance offers for basic research and practical applications.
    Computer simulations form the third pillar of modern physics, alongside experiments and analytical approaches. A large number of issues in physics can only be approached approximately or not at all with analytical methods. With an experimental approach, certain issues are often difficult to access and require complex experimental set-ups, sometimes lasting years. Computer simulations have therefore contributed significantly to the understanding of a broad spectrum of physical systems in recent decades.

  • Researchers develop low-cost sensor to enhance robots’ sense of touch

    Researchers from Queen Mary University of London, along with collaborators from China and the USA, have developed the L3 F-TOUCH sensor to enhance tactile capabilities in robots, allowing them to “feel” objects and adjust their grip accordingly.
    Achieving human-level dexterity during manipulation and grasping has been a long-standing goal in robotics. To accomplish this, having a reliable sense of tactile information and force is essential for robots. A recent study, published in IEEE Robotics and Automation Letters, describes the L3 F-TOUCH sensor that enhances the force sensing capabilities of classic tactile sensors. The sensor is lightweight, low-cost, and wireless, making it an affordable option for retrofitting existing robot hands and graspers.
    The human hand can sense pressure, temperature, texture, and pain. It can also distinguish between objects based on their shape, size, weight, and other physical properties. Many current robot hands or graspers are not even close to human hands, as they lack integrated haptic capabilities, which complicates handling objects. Without knowledge of the interaction forces and the shape of the handled object, robot fingers have no “feel of touch,” and objects can easily slip out of the hand’s fingers or, if fragile, even be crushed.
    The study, led by Professor Kaspar Althoefer of Queen Mary University of London, presents the new L3 F-TOUCH, a high-resolution fingertip sensor whose name stands for Lightweight, Low-cost, and wireLess communication. The sensor can measure an object’s geometry and determine the forces needed to interact with it. Unlike other sensors that estimate interaction forces from tactile information in camera images, the L3 F-TOUCH measures interaction forces directly, achieving higher measurement accuracy.
    “In contrast to its competitors, which estimate interaction forces by reconstructing them from camera images of the deformation of their soft elastomer, the L3 F-TOUCH measures interaction forces directly through an integrated mechanical suspension structure with a mirror system, achieving higher measurement accuracy and a wider measurement range. The sensor is physically designed to decouple force measurements from geometry information. The sensed three-axis force is therefore immune to contact geometry, unlike in competing sensors. Through embedded wireless communications, the sensor also outperforms competitors with regard to integrability with robot hands,” says Professor Kaspar Althoefer.
    When the sensor touches a surface, a compact suspension structure lets the elastomer — a rubber-like material that deforms to capture high-resolution contact geometry — displace under the external force. The elastomer’s displacement is tracked by detecting the movement of a special marker, a so-called ARTag, and contact forces along the three major axes (x, y, and z) are then obtained via a calibration process.
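    The calibration step can be pictured as fitting a map from marker displacement to force. The sketch below is a hypothetical illustration of the simplest such map, a linear one — it is not the sensor’s actual calibration procedure, and the matrix values are invented:

```python
# Hypothetical illustration of the calibration step, not the sensor's
# actual procedure: a linear map, fitted from reference loads, converts
# the tracked ARTag displacement into a three-axis force. The matrix
# values below are invented.

def displacement_to_force(dx, dy, dz, calibration):
    """Map marker displacement (mm) to force (N) via a 3x3 calibration matrix."""
    d = (dx, dy, dz)
    return [sum(calibration[i][j] * d[j] for j in range(3)) for i in range(3)]

# Diagonal, stiffness-like calibration (illustrative values only).
CALIBRATION = [[2.0, 0.0, 0.0],
               [0.0, 2.0, 0.0],
               [0.0, 0.0, 5.0]]

force = displacement_to_force(1.0, 0.5, 0.2, CALIBRATION)
```

    In practice the calibration matrix would be fitted by pressing the sensor against known reference loads and recording the corresponding marker displacements.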
    “We will focus our future work on extending the sensor’s capabilities to measure not only force along the three major axes but also rotational forces such as twist, which could be experienced during screw fastening while remaining accurate and compact. These advancements can enable the sense of touch for more dynamic and agile robots in manipulation tasks, even in human-robot interaction settings, like for patient rehabilitation or physical support of the elderly.” adds Professor Althoefer.
    This breakthrough could pave the way for more advanced and reliable robotics, as the L3 F-TOUCH sensor gives robots a sense of touch, making them more capable of handling objects and performing complex manipulation tasks.

  • A simpler method for learning to control a robot

    Researchers from MIT and Stanford University have devised a new machine-learning approach that could be used to control a robot, such as a drone or autonomous vehicle, more effectively and efficiently in dynamic environments where conditions can change rapidly.
    This technique could help an autonomous vehicle learn to compensate for slippery road conditions to avoid going into a skid, allow a robotic free-flyer to tow different objects in space, or enable a drone to closely follow a downhill skier despite being buffeted by strong winds.
    The researchers’ approach incorporates certain structure from control theory into the model-learning process in a way that leads to an effective method for controlling complex dynamics, such as those caused by the impact of wind on the trajectory of a flying vehicle. One way to think about this structure is as a hint that can help guide how to control a system.
    “The focus of our work is to learn intrinsic structure in the dynamics of the system that can be leveraged to design more effective, stabilizing controllers,” says Navid Azizan, the Esther and Harold E. Edgerton Assistant Professor in the MIT Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS), and a member of the Laboratory for Information and Decision Systems (LIDS). “By jointly learning the system’s dynamics and these unique control-oriented structures from data, we’re able to naturally create controllers that function much more effectively in the real world.”
    Using this structure in a learned model, the researchers’ technique immediately extracts an effective controller from the model, as opposed to other machine-learning methods that require a controller to be derived or learned separately with additional steps. With this structure, their approach is also able to learn an effective controller using fewer data than other approaches. This could help their learning-based control system achieve better performance faster in rapidly changing environments.
    “This work tries to strike a balance between identifying structure in your system and just learning a model from data,” says lead author Spencer M. Richards, a graduate student at Stanford University. “Our approach is inspired by how roboticists use physics to derive simpler models for robots. Physical analysis of these models often yields a useful structure for the purposes of control — one that you might miss if you just tried to naively fit a model to data. Instead, we try to identify similarly useful structure from data that indicates how to implement your control logic.”
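    A classical example of such control-oriented structure is control-affine dynamics, x_dot = f(x) + g(x)·u, from which a stabilizing controller can be read off directly by cancelling the drift term. The scalar sketch below illustrates that general idea under assumed dynamics — it is not the paper’s method:

```python
# Illustrative sketch, not the paper's method: if learned dynamics have
# the control-affine form x_dot = f(x) + g(x) * u, a controller can be
# read off directly by cancelling the drift f and imposing desired
# linear error dynamics (scalar case; gains and dynamics are assumed).

def extract_controller(f, g, gain=2.0):
    """Return u(x, x_ref) driving x toward x_ref for x_dot = f(x) + g(x)*u."""
    def controller(x, x_ref):
        x_dot_desired = -gain * (x - x_ref)   # target closed-loop behaviour
        return (x_dot_desired - f(x)) / g(x)  # cancel drift, shape dynamics
    return controller

# Example with an assumed drag-like drift term and unit input gain.
u = extract_controller(f=lambda x: -0.5 * x, g=lambda x: 1.0)
```

    The point mirrored here is the one quoted above: once the learned model carries the right structure, the controller falls out of the model directly instead of being derived or learned in a separate step.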
    Additional authors of the paper are Jean-Jacques Slotine, professor of mechanical engineering and of brain and cognitive sciences at MIT, and Marco Pavone, associate professor of aeronautics and astronautics at Stanford. The research will be presented at the International Conference on Machine Learning (ICML).

  • Robotic hand rotates objects using touch, not vision

    Inspired by the effortless way humans handle objects without seeing them, a team led by engineers at the University of California San Diego has developed a new approach that enables a robotic hand to rotate objects solely through touch, without relying on vision.
    Using their technique, the researchers built a robotic hand that can smoothly rotate a wide array of objects, from small toys and cans to fruits and vegetables, without bruising or squishing them. The robotic hand accomplished these tasks using only touch information.
    The work could aid in the development of robots that can manipulate objects in the dark.
    The team recently presented their work at the 2023 Robotics: Science and Systems Conference.
    To build their system, the researchers attached 16 touch sensors to the palm and fingers of a four-fingered robotic hand. Each sensor costs about $12 and serves a simple function: detect whether an object is touching it or not.
    What makes this approach unique is that it relies on many low-cost, low-resolution touch sensors that use simple, binary signals — touch or no touch — to perform robotic in-hand rotation. These sensors are spread over a large area of the robotic hand.
    This contrasts with a variety of other approaches that rely on a few high-cost, high-resolution touch sensors affixed to a small area of the robotic hand, primarily at the fingertips.
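    Because each sensor reports only touch or no touch, a full observation is simply a 16-element binary contact pattern. The sketch below shows a hypothetical binarization step; the threshold and raw readings are invented for illustration:

```python
# Hypothetical illustration: each of the 16 sensors reports a binary
# touch / no-touch signal, so one observation is a 16-element contact
# pattern. The threshold and raw readings below are invented.

def contact_pattern(pressures, threshold=0.1):
    """Binarize raw sensor readings into touch (1) / no-touch (0) flags."""
    return [1 if p > threshold else 0 for p in pressures]

readings = [0.0, 0.3, 0.05, 0.5] + [0.0] * 12  # 16 raw sensor values
observation = contact_pattern(readings)         # binary pattern for the controller
```

    Spreading many such cheap binary signals across the palm and fingers, rather than concentrating a few rich signals at the fingertips, is what distinguishes this design.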

  • A new type of quantum bit in semiconductor nanostructures

    Researchers have created a quantum superposition state in a semiconductor nanostructure that might serve as a basis for quantum computing. The trick: two optical laser pulses that act as a single terahertz laser pulse.
    A German-Chinese research team has successfully created a quantum bit in a semiconductor nanostructure. Using a special energy transition, the researchers created a superposition state in a quantum dot — a tiny area of the semiconductor — in which an electron hole simultaneously possessed two different energy levels. Such superposition states are fundamental for quantum computing. However, excitation of the state would require a large-scale free-electron laser that can emit light in the terahertz range. Additionally, this wavelength is too long to focus the beam on the tiny quantum dot. The German-Chinese team has now achieved the excitation with two finely tuned short-wavelength optical laser pulses.
    The team headed by Feng Liu from Zhejiang University in Hangzhou, together with a group led by Dr. Arne Ludwig from Ruhr University Bochum and other researchers from China and the UK, report their findings in the journal “Nature Nanotechnology,” published online on 24 July 2023.
    Lasers trigger the radiative Auger process
    The team made use of the so-called radiative Auger transition. In this process, an electron recombines with a hole, releasing its energy partly in the form of a single photon and partly by transferring the energy to another electron. The same process can also be observed with electron holes — in other words, missing electrons. In 2021, a research team succeeded for the first time in specifically stimulating the radiative Auger transition in a semiconductor.
    In the current project, the researchers showed that the radiative Auger process can be coherently driven: they used two different laser beams with intensities in a specific ratio to each other. With the first laser, they excited an electron-hole pair in the quantum dot to create a quasiparticle consisting of two holes and an electron. With a second laser, they triggered the radiative Auger process to elevate one hole to a series of higher energy states.
    Two states simultaneously
    The team used finely tuned laser pulses to create a superposition between the hole ground state and the higher energy state. The hole thus existed in both states simultaneously. Such superpositions are the basis for quantum bits, which, unlike conventional bits, exist not only in the states “0” and “1,” but also in superpositions of both.
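    In standard quantum-information notation (not taken from the paper), such a hole qubit is a superposition of the ground state and the higher radiative-Auger level:

```latex
% Superposition state of the hole qubit: |g> is the hole ground state,
% |e> the higher energy level reached via the radiative Auger process;
% alpha and beta are complex amplitudes with unit total probability.
\[
  \lvert \psi \rangle = \alpha \,\lvert g \rangle + \beta \,\lvert e \rangle ,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 .
\]
```

    The limiting cases α = 1 and β = 1 correspond to the classical bit values “0” and “1”; every other choice of amplitudes is a genuine superposition of both.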
    Hans-Georg Babin produced the high-purity semiconductor samples for the experiment at Ruhr University Bochum under the supervision of Dr. Arne Ludwig at the Chair for Applied Solid State Physics headed by Professor Andreas Wieck. In the process, the researchers increased the ensemble homogeneity of the quantum dots and ensured the high purity of the structures produced. These measures facilitated the experiments carried out by the Chinese partners working with Jun-Yong Yan and Feng Liu.

  • AI can ask another AI for a second opinion on medical scans

    Researchers at Monash University have designed a new co-training AI algorithm for medical imaging that can effectively mimic the process of seeking a second opinion.
    Published recently in Nature Machine Intelligence, the research addressed the limited availability of human annotated, or labelled, medical images by using an adversarial, or competitive, learning approach against unlabelled data.
    This research, by Monash University faculties of Engineering and IT, will advance the field of medical image analysis for radiologists and other health experts.
    PhD candidate Himashi Peiris of the Faculty of Engineering, said the research design had set out to create a competition between the two components of a “dual-view” AI system.
    “One part of the AI system tries to mimic how radiologists read medical images by labelling them, while the other part of the system judges the quality of the AI-generated labelled scans by benchmarking them against the limited labelled scans provided by radiologists,” said Ms Peiris.
    “Traditionally radiologists and other medical experts annotate, or label, medical scans by hand highlighting specific areas of interest, such as tumours or other lesions. These labels provide guidance or supervision for training AI models.
    “This method relies on the subjective interpretation of individuals, is time-consuming and prone to errors and extended waiting periods for patients seeking treatments.”
    The availability of large-scale annotated medical image datasets is often limited, as it requires significant effort, time and expertise to annotate many images manually.
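    The “judging” component described above can be thought of as scoring how well AI-generated masks overlap the few expert-labelled ones. A Dice overlap is a standard segmentation score of this kind; the snippet below illustrates the idea only and is not the paper’s actual loss function:

```python
# Illustration of the "second opinion" idea, not the paper's loss: the
# judging network can be seen as scoring overlap between an AI-generated
# mask and an expert-labelled one. Dice overlap is a standard such
# segmentation score (masks here are flat lists of 0/1 pixels).

def dice(mask_a, mask_b):
    """Dice overlap: 1.0 for identical masks, 0.0 for disjoint ones."""
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0

expert_mask = [0, 1, 1, 0]     # invented toy masks
predicted_mask = [0, 1, 0, 0]
score = dice(expert_mask, predicted_mask)  # the segmenter learns to raise this
```

    In the adversarial setup, the labelling network is trained to produce masks the judging network cannot distinguish from expert ones, which is how unlabelled scans contribute to training.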