More stories

  • Female AI ‘teammate’ generates more participation from women

    An artificial intelligence-powered virtual teammate with a female voice boosts participation and productivity among women on teams dominated by men, according to new Cornell University research.
    The findings suggest that the gender of an AI’s voice can positively tweak the dynamics of gender-imbalanced teams and could help inform the design of bots used for human-AI teamwork, researchers said.
    The findings mirror previous research that shows minority teammates are more likely to participate if the team adds members similar to them, said Angel Hsing-Chi Hwang, postdoctoral associate in information science and lead author of the paper.
    To better understand how AI can help gender-imbalanced teams, Hwang and Andrea Stevenson Won, associate professor of communication and the paper’s co-author, carried out an experiment with around 180 men and women who were assigned to groups of three and asked to collaborate virtually on a set of tasks (the study only included participants who identified as either male or female).
    Each three-person group had either one woman and two men or one man and two women, plus a fourth teammate: an agent in the form of an abstract shape with either a male or female voice, which would appear on screen and read instructions, contribute ideas and handle timekeeping. There was a catch: the bot wasn’t completely automated. In what’s referred to in human-computer interaction as a “Wizard of Oz” experiment, Hwang was behind the scenes, feeding lines generated by ChatGPT into the bot.
    After the experiment, Hwang and Won analyzed the chat logs of team conversations to determine how often participants offered ideas or arguments. They also asked participants to reflect on the experience.
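    The coding of what counted as an idea or argument was done by the researchers; purely as a minimal sketch of the underlying tally, here is how per-speaker turn counts might be pulled from a chat log (the log format and speaker labels below are assumptions, not the study’s data):

    ```python
    # Minimal sketch: tally how often each participant speaks in a team chat log.
    # The (speaker, message) format and labels are illustrative assumptions.
    from collections import Counter

    chat_log = [
        ("P1", "What if we rank the items by weight first?"),
        ("P2", "Agreed, and water should be near the top."),
        ("AI", "Two minutes remaining for this task."),
        ("P1", "Matches before the mirror, I'd argue."),
        ("P3", "Works for me."),
    ]

    turns = Counter(speaker for speaker, _ in chat_log if speaker != "AI")
    total = sum(turns.values())
    for speaker, n in turns.most_common():
        print(f"{speaker}: {n} turns ({n / total:.0%} of human turns)")
    ```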
    “When we looked at participants’ actual behaviors, that’s where we started to see differences between men and women and how they were reacting when there was either a female agent or a male agent on the team,” Hwang said.
    “One interesting thing about this study is that most participants didn’t express a preference for a male- or female-sounding voice,” Won said. “This implies that people’s social inferences about AI can be influential even when people don’t believe they are important.”
    When women were in the minority, they participated more when the AI’s voice was female, while men in the minority were more talkative but were less focused on tasks when working with a male-sounding bot, researchers found. Unlike the men, women reported significantly more positive perceptions of the AI teammate when women were the minority members, according to researchers.
    “With only a gendered voice, the AI agent can provide a small degree of support to women minority members in a group,” said Hwang.

  • 3D-printed mini-actuators can move small soft robots, lock them into new shapes

    Researchers from North Carolina State University have demonstrated miniature soft hydraulic actuators that can be used to control the deformation and motion of soft robots that are less than a millimeter thick. The researchers have also demonstrated that this technique works with shape memory materials, allowing users to repeatedly lock the soft robots into a desired shape and return to the original shape as needed.
    “Soft robotics holds promise for many applications, but it is challenging to design the actuators that drive the motion of soft robots on a small scale,” says Jie Yin, corresponding author of a paper on the work and an associate professor of mechanical and aerospace engineering at NC State. “Our approach makes use of commercially available multi-material 3D printing technologies and shape memory polymers to create soft actuators on a microscale that allow us to control very small soft robots, which allows for exceptional control and delicacy.”
    The new technique relies on creating soft robots that consist of two layers. The first layer is a flexible polymer that is created using 3D printing technologies and incorporates a pattern of microfluidic channels — essentially very small tubes running through the material. The second layer is a flexible shape memory polymer. Altogether, the soft robot is only 0.8 millimeters thick.
    By pumping fluid into the microfluidic channels, users create hydraulic pressure that forces the soft robot to move and change shape. The pattern of microfluidic channels controls the motion and shape change of the soft robot — whether it bends, twists, and so on. In addition, the amount of fluid being introduced, and how quickly it is introduced, controls how quickly the soft robot moves and the amount of force the soft robot exerts.
    If users wish to ‘freeze’ the soft robot’s shape, they can apply moderate heat (64°C, or 147°F), and then let the robot cool briefly. This prevents the soft robot from reverting to its original shape, even after the liquid in the microfluidic channels is pumped out. If users want to return the soft robot to its original shape, they simply apply the heat again after pumping out the liquid, and the robot relaxes to its original configuration.
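    Purely as a sketch of that grip-lock-transport-release sequence: the pump and heater interfaces below are hypothetical stand-ins, not a real driver API; only the 64°C transition temperature comes from the article.

    ```python
    # Hypothetical sketch of the actuation cycle described above.
    T_LOCK_C = 64  # shape-memory transition temperature reported by the researchers

    class Pump:        # stand-in for a syringe-pump driver (assumed API)
        def infuse(self, ul, ul_per_s): print(f"infuse {ul} uL at {ul_per_s} uL/s")
        def withdraw(self, ul):         print(f"withdraw {ul} uL")

    class Heater:      # stand-in for a small infrared heat source (assumed API)
        def hold(self, temp_c, seconds): print(f"hold {temp_c} C for {seconds} s")

    def grip_lock_transport_release(pump, heater):
        pump.infuse(50, 10)        # pressurize channels: gripper pinches closed;
                                   # volume sets force, infusion rate sets speed
        heater.hold(T_LOCK_C, 20)  # heat above the transition, then cool briefly
        pump.withdraw(50)          # pump the liquid out; locked shape is retained
        # ... transport the held object to its destination ...
        heater.hold(T_LOCK_C, 20)  # reheat with channels empty: the robot relaxes
                                   # to its original shape, releasing the object

    grip_lock_transport_release(Pump(), Heater())
    ```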
    “A key factor here is fine-tuning the thickness of the shape memory layer relative to the layer that contains the microfluidic channels,” says Yinding Chi, co-lead author of the paper and a former Ph.D. student at NC State. “You need the shape memory layer to be thin enough to bend when the actuator’s pressure is applied, but thick enough to get the soft robot to retain its shape even after the pressure is removed.”
    To demonstrate the technique, the researchers created a soft robot “gripper,” capable of picking up small objects. The researchers applied hydraulic pressure, causing the gripper to pinch closed on an object. By applying heat, the researchers were able to fix the gripper in its “closed” position, even after releasing pressure from the hydraulic actuator. The gripper could then be moved — transporting the object it held — into a new position. Researchers then applied heat again, causing the gripper to release the object it had picked up. Video of these soft robots in action can be found at https://youtu.be/5SIwsw9IyIc.

    “Because these soft robots are so thin, we can heat them up to 64°C quickly and easily using a small infrared light source — and they also cool very quickly,” says Haitao Qing, co-lead author of the paper and a Ph.D. student at NC State. “So this entire series of operations only takes about two minutes.
    “And the movement does not have to be a gripper that pinches,” says Qing. “We’ve also demonstrated a gripper that was inspired by vines in nature. These grippers quickly wrap around an object and clasp it tightly, allowing for a secure grip.
    “This paper serves as a proof-of-concept for this new technique, and we’re excited about potential applications for this class of miniature soft actuators in small-scale soft robots, shape-shifting machines, and biomedical engineering.”
    This work was done with support from the National Science Foundation under grants 2126072 and 2329674.

  • Virtual reality as a reliable shooting performance-tracking tool

    Virtual reality technology can do more than teach weaponry skills to law enforcement and military personnel, a new study suggests: It can accurately record shooting performance and reliably track individuals’ progress over time.
    In the study of 30 people with a range of experience levels in handling a rifle, researchers at The Ohio State University found that a ballistic simulator captured data on the shooters’ accuracy, decision-making and reaction time — down to the millimeter in distance and millisecond in time — on a consistent basis.
    In addition to confirming that the simulator — called the VirTra V-100 — is a dependable research tool, the findings could lead to establishing the first-ever standardized performance scores for virtual reality ballistics training.
    “To our knowledge, we’re the first team to answer the question of whether the simulator could be converted to an assessment tool and if it’s credible to use it day-to-day,” said Alex Buga, first author of the study and a PhD student in kinesiology at Ohio State.
    “We’ve figured out how to export the data and interpret it. We’ve focused on the three big challenges of marksmanship, decision-making and reaction time to measure 21 relevant variables — allowing us to put a report in a user’s hand and say, ‘This is how accurate, precise, focused and fast you are.’”
    The study was published online June 6 in The Journal of Strength and Conditioning Research.
    U.S. military leaders and law enforcement agencies have shown an interest in increasing the use of virtual reality for performance assessment, said Buga and senior study author Jeff Volek, professor of human sciences at Ohio State. Earlier this year, an Ohio Attorney General Task Force on the Future of Police Training in Ohio recommended incorporating virtual reality technology into training protocols.

    Volek is the principal investigator on a $10 million U.S. Department of Defense grant focused on improving the health of military service members, veterans and the American public. As part of that initiative, the research team is investigating the extent to which nutritional ketosis reduces detrimental effects of sleep loss on cognitive and physical performance in ROTC cadets — including their shooting ability as measured by the VirTra simulator. Verifying the simulator’s results for research purposes triggered the attempt to extract and analyze its data.
    “We were using it as an outcome variable for research, and we found that it has very good day-to-day reproducibility of performance, which is crucial for research,” Volek said. “You want a sensitive and reproducible outcome in your test where there’s not a lot of device or equipment variation.”
    Because the lab also focuses on human performance in first responders, researchers’ conversations with military and law enforcement communities convinced Buga that data collected by the simulator could be more broadly useful.
    “I created a few programs that enabled us to calculate the shooting data and produce objective training measures,” he said. “This equipment is close to what the military and police use every day, so this has potential to be used as a screening tool across the country.”
    Users of the simulator operate the infrared-guided M4 rifle by shooting at a large screen onto which different digitally generated visuals are projected — no headset required. The rifle at Ohio State has been retrofitted to produce the same recoil as a police or military weapon.
    The study participants included civilians, police and SWAT officers, and ROTC cadets. Each first completed a single familiarization session with the simulator, then performed multiple rounds of three different tasks in each of three study performance sessions.

    In the first task, participants fired at the same target a total of 50 times to produce measures of shooting precision. The decision-making assessment involved shooting twice within two seconds at designated shapes and colors on a screen displaying multiple shape and color choices. In the reaction-time scenario, participants shot at a series of plates from left to right as rapidly as possible.
    Internal consistency ratings showed the simulator generated good to excellent test-retest agreement on the 21 variables measured.
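    The release doesn’t name the reliability statistic used, but test-retest agreement of this kind is conventionally quantified with an intraclass correlation coefficient. A minimal sketch, assuming ICC(2,1) (two-way random effects, absolute agreement) on one of the 21 variables across the three sessions, with synthetic data standing in for real scores:

    ```python
    import numpy as np

    def icc_2_1(x):
        """ICC(2,1): two-way random effects, absolute agreement.
        x: (n_subjects, k_sessions) scores for one variable."""
        n, k = x.shape
        grand = x.mean()
        rows = x.mean(axis=1)   # per-subject means
        cols = x.mean(axis=0)   # per-session means
        msr = k * np.sum((rows - grand) ** 2) / (n - 1)        # between subjects
        msc = n * np.sum((cols - grand) ** 2) / (k - 1)        # between sessions
        sse = np.sum((x - rows[:, None] - cols[None, :] + grand) ** 2)
        mse = sse / ((n - 1) * (k - 1))                        # residual
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    rng = np.random.default_rng(0)
    skill = rng.normal(50, 10, size=(30, 1))         # stable per-shooter ability
    scores = skill + rng.normal(0, 3, size=(30, 3))  # 3 sessions, session noise
    print(f"ICC(2,1) = {icc_2_1(scores):.2f}")       # ~0.9
    ```

    By the commonly used Koo and Li cutoffs, ICC values above 0.75 read as good and above 0.9 as excellent, which is the scale the “good to excellent” phrasing suggests.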
    All participants were well-rested and completed the study sessions at about the same time of day. Self-evaluations showed that participants’ overall confidence about their shooting performance increased from their first to final sessions. They also rated the simulator as a realistic and low-stress shooting assessment tool.
    The low stress and well-rested conditions were important to establishing baseline performance measures, the researchers noted, which then would enable evaluating how injuries and other physical demands of first-responder professions affect shooting performance.
    “This simulator could be used to assess the effectiveness of specific training programs designed to improve shooting performance, or to evaluate marksmanship in response to various stressors encountered by the same law enforcement and military personnel,” Buga said. “These novel lines of evidence have enabled us to push the boundaries of tactical research and set the groundwork for using virtual reality in sophisticated training scenarios that support national defense goals.”
    Additional co-authors, all from Ohio State, included Drew Decker, Bradley Robinson, Christopher Crabtree, Justen Stoner, Lucas Arce, Xavier El-Shazly, Madison Kackley, Teryn Sapper, John Paul Anders and William Kraemer.

  • Researchers harness AI for autonomous discovery and optimization of materials

    Today, researchers are developing ways to accelerate discovery by combining automated experiments, artificial intelligence and high-performance computing. A novel tool developed at Oak Ridge National Laboratory that leverages those technologies has demonstrated that AI can influence materials synthesis and conduct associated experiments without human supervision.
    This autonomous materials synthesis tool uses pulsed laser deposition, or PLD, to deposit a thin layer of substance onto a base material. It then employs AI to analyze how the quality of the newly created material relates to the synthesis conditions, such as temperature, pressure and energy emitted during the PLD process. The AI suggests a revised set of conditions that may yield improved quality and then controls the PLD equipment to conduct the next experiment.
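    The release doesn’t specify the algorithm, but this suggest-measure-update loop is the classic ask/tell pattern of Bayesian optimization. A minimal sketch under that assumption, using scikit-optimize with a Gaussian-process surrogate; the parameter ranges and the measure_film_quality() stand-in for the PLD run and characterization step are illustrative, not ORNL’s code:

    ```python
    import numpy as np
    from skopt import Optimizer

    def measure_film_quality(params):
        # Placeholder for a real PLD run plus in-situ characterization;
        # a synthetic peaked response so the sketch runs standalone.
        t, p, f = params
        return -((t - 700) / 100) ** 2 - (np.log10(p) + 2) ** 2 - (f - 2) ** 2

    opt = Optimizer(
        dimensions=[(500.0, 900.0),   # substrate temperature (C)
                    (1e-3, 1e-1),     # chamber pressure (Torr)
                    (1.0, 3.0)],      # laser fluence (J/cm^2)
        base_estimator="GP",          # Gaussian-process surrogate model
    )

    for _ in range(20):                       # autonomous experiment budget
        conditions = opt.ask()                # AI proposes the next synthesis run
        quality = measure_film_quality(conditions)
        opt.tell(conditions, -quality)        # skopt minimizes, so negate quality

    print("best conditions found:", opt.get_result().x)
    ```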
    “We built computer control of all processes into the system and incorporated some hardware innovations to enable AI to drive experimentation,” said the study’s leader, Sumner Harris of the Center for Nanophase Materials Sciences at ORNL. “The automation allows us to perform our work 10 times faster, and the AI can understand huge parameter spaces with far fewer samples.”

  • Algae offer real potential as a renewable electricity source

    The need to transition away from fossil fuels to more sustainable energy production is critical. That’s why a team of Concordia researchers is looking at a potential power source that not only produces no carbon emissions but removes carbon as it works: algae.
    Researchers at the Optical-Bio Microsystems Lab recently published a new paper on this topic in the journal Energies. In it, they describe their method of extracting energy from the photosynthesis process of algae suspended in a specialized solution and housed in small power cells. Configured correctly, these cells can generate enough energy to power low- and ultra-low power devices such as Internet of Things (IoT) sensors.
    “The idea of the micro photosynthetic power cell is to extract electrons produced through the process of photosynthesis,” says Kirankumar Kuruvinashetti, PhD ’20, now a Mitacs postdoctoral associate at the University of Calgary.
    “Photosynthesis produces oxygen and electrons. Our model traps the electrons, which allows us to generate electricity. So more than being a zero-emission technology, it’s a negative carbon emission technology: it absorbs carbon dioxide from the atmosphere and gives you a current. Its only byproduct is water.”
    Power generated day and night
    The micro photosynthetic power cell consists of an anode and a cathode chamber separated by a honeycomb-shaped proton exchange membrane. The researchers fabricated microelectrodes on both sides of the membrane to collect the charges released by the algae during photosynthesis. Each chamber measures only two centimetres by two centimetres by four millimetres.
    The algae are suspended in a two-millilitre solution in the anode chamber, while the cathode chamber is filled with potassium ferricyanide, an electron acceptor. Once the algae undergo photosynthesis and begin to release electrons, the electrons are collected by the microelectrodes on the membrane, creating a current.

    The protons, meanwhile, pass through the membrane into the cathode chamber, where the potassium ferricyanide is reduced to ferrocyanide.
    The process also works without direct sunlight, though at a lower intensity, explains PhD candidate and paper co-author Dhilippan Panneerselvam.
    “Just like humans, algae are constantly breathing — but they intake carbon dioxide and release oxygen. Due to their photosynthesis machinery, they also release electrons during respiration. The electricity generation is not stopped. The electrons are continuously harvested.”
    Muthukumaran Packirisamy, professor in the Department of Mechanical, Industrial and Aerospace Engineering and the paper’s corresponding author, admits the system is not yet able to compete in power generation with technologies such as photovoltaic cells. The maximum possible terminal voltage of a single micro photosynthetic power cell is only 1.0 V.
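    As a back-of-the-envelope illustration of what that limit implies for stacking cells: only the 1.0 V figure is from the article; the per-cell current and the sensor node’s requirements below are made-up assumptions.

    ```python
    import math

    V_CELL = 1.0        # max terminal voltage per cell, from the article
    I_CELL_UA = 10.0    # assumed usable current per cell (microamps) - hypothetical
    V_LOAD, I_LOAD_UA = 3.3, 50.0  # hypothetical ultra-low-power IoT sensor node

    in_series = math.ceil(V_LOAD / V_CELL)        # cells per string for voltage
    strings = math.ceil(I_LOAD_UA / I_CELL_UA)    # parallel strings for current
    print(f"{in_series} in series x {strings} in parallel = "
          f"{in_series * strings} cells")         # -> 4 x 5 = 20 cells
    ```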
    But he believes that, with enough research and development, including artificial intelligence-assisted integration technologies, this technology has the potential to be a viable, affordable and clean power source in the future.
    It also offers significant manufacturing advantages over other systems, he says.
    “Our system does not use any of the hazardous gases or microfibres needed for the silicon fabrication technology that photovoltaic cells rely on. Furthermore, disposing of silicon computer chips is not easy. We use biocompatible polymers, so the whole system is easily decomposable and very cheap to manufacture.”

  • Researchers create realistic virtual rodent

    The agility with which humans and animals move is an evolutionary marvel that no robot has yet been able to closely emulate. To help probe the mystery of how brains control movement, Harvard neuroscientists have created a virtual rat with an artificial brain that can move around just like a real rodent.
    Bence Ölveczky, professor in the Department of Organismic and Evolutionary Biology, led a group of researchers who collaborated with scientists at Google’s DeepMind AI lab to build a biomechanically realistic digital model of a rat. Using high-resolution data recorded from real rats, they trained an artificial neural network — the virtual rat’s “brain” — to control the virtual body in a physics simulator called MuJoCo, where gravity and other forces are present.
    Publishing in Nature, the researchers found that activations in the virtual control network accurately predicted neural activity measured from the brains of real rats producing the same behaviors, said Ölveczky, who is an expert at training (real) rats to learn complex behaviors in order to study their neural circuitry. The feat represents a new approach to studying how the brain controls movement, Ölveczky said, by leveraging advances in deep reinforcement learning and AI, as well as 3D movement-tracking in freely behaving animals.
    The collaboration was “fantastic,” Ölveczky said. “DeepMind had developed a pipeline to train biomechanical agents to move around complex environments. We simply didn’t have the resources to run simulations like those, to train these networks.”
    Working with the Harvard researchers was, likewise, “a really exciting opportunity for us,” said co-author and Google DeepMind Senior Director of Research Matthew Botvinick. “We’ve learned a huge amount from the challenge of building embodied agents: AI systems that not only have to think intelligently, but also have to translate that thinking into physical action in a complex environment. It seemed plausible that taking this same approach in a neuroscience context might be useful for providing insights in both behavior and brain function.”
    Graduate student Diego Aldarondo worked closely with DeepMind researchers to train the artificial neural network to implement what are called inverse dynamics models, which scientists believe our brains use to guide movement. When we reach for a cup of coffee, for example, our brain quickly calculates the trajectory our arm should follow and translates this into motor commands. Similarly, based on data from actual rats, the network was fed a reference trajectory of the desired movement and learned to produce the forces to generate it. This allowed the virtual rat to imitate a diverse range of behaviors, even ones it hadn’t been explicitly trained on.
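    As a minimal sketch of that inverse-dynamics idea: the actual model was trained with deep reinforcement learning on a tracking reward in MuJoCo, whereas here plain supervised regression and random tensors stand in, and all dimensions are invented.

    ```python
    import torch
    import torch.nn as nn

    STATE_DIM, REF_DIM, ACT_DIM = 74, 38, 38   # assumed sizes, not the paper's

    # Inverse dynamics model: (current state, reference next pose) -> joint forces
    policy = nn.Sequential(
        nn.Linear(STATE_DIM + REF_DIM, 256), nn.Tanh(),
        nn.Linear(256, 256), nn.Tanh(),
        nn.Linear(256, ACT_DIM),
    )
    optim = torch.optim.Adam(policy.parameters(), lr=1e-4)

    for step in range(1000):
        state = torch.randn(64, STATE_DIM)     # stand-in for simulator state
        ref_next = torch.randn(64, REF_DIM)    # reference pose from real-rat data
        target = torch.randn(64, ACT_DIM)      # forces that realize that pose
        pred = policy(torch.cat([state, ref_next], dim=-1))
        loss = nn.functional.mse_loss(pred, target)
        optim.zero_grad()
        loss.backward()
        optim.step()
    ```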
    These simulations may launch an untapped area of virtual neuroscience in which AI-simulated animals, trained to behave like real ones, provide convenient and fully transparent models for studying neural circuits, and even how such circuits are compromised in disease. While Ölveczky’s lab is interested in fundamental questions about how the brain works, the platform could be used, as one example, to engineer better robotic control systems.
    A next step might be to give the virtual animal autonomy to solve tasks akin to those encountered by real rats. “From our experiments, we have a lot of ideas about how such tasks are solved, and how the learning algorithms that underlie the acquisition of skilled behaviors are implemented,” Ölveczky continued. “We want to start using the virtual rats to test these ideas and help advance our understanding of how real brains generate complex behavior.”

  • New technique could help build quantum computers of the future

    Quantum computers have the potential to solve complex problems in human health, drug discovery, and artificial intelligence millions of times faster than some of the world’s fastest supercomputers. A network of quantum computers could advance these discoveries even faster. But before that can happen, the computer industry will need a reliable way to string together billions of qubits — or quantum bits — with atomic precision.
    Connecting qubits, however, has been challenging for the research community. Some methods form qubits by placing an entire silicon wafer in a rapid annealing oven at very high temperatures. With these methods, qubits randomly form from defects (also known as color centers or quantum emitters) in silicon’s crystal lattice. And without knowing exactly where qubits are located in a material, a quantum computer of connected qubits will be difficult to realize.
    But now, getting qubits to connect may soon be possible. A research team led by Lawrence Berkeley National Laboratory (Berkeley Lab) says that they are the first to use a femtosecond laser to create and “annihilate” qubits on demand, and with precision, by doping silicon with hydrogen.
    The advance could enable quantum computers that use programmable optical qubits or “spin-photon qubits” to connect quantum nodes across a remote network. It could also advance a quantum internet that is not only more secure but could also transmit more data than current optical-fiber information technologies.
    “To make a scalable quantum architecture or network, we need qubits that can reliably form on-demand, at desired locations, so that we know where the qubit is located in a material. And that’s why our approach is critical,” said Kaushalya Jhuria, a postdoctoral scholar in Berkeley Lab’s Accelerator Technology & Applied Physics (ATAP) Division. She is the first author on a new study that describes the technique in the journal Nature Communications. “Because once we know where a specific qubit is sitting, we can determine how to connect this qubit with other components in the system and make a quantum network.”
    “This could carve out a potential new pathway for industry to overcome challenges in qubit fabrication and quality control,” said principal investigator Thomas Schenkel, head of the Fusion Science & Ion Beam Technology Program in Berkeley Lab’s ATAP Division. His group will host the first cohort of students from the University of Hawaii in June as part of a DOE Fusion Energy Sciences-funded RENEW project on workforce development where students will be immersed in color center/qubit science and technology.
    Forming qubits in silicon with programmable control
    The new method uses a gas environment to form programmable defects called “color centers” in silicon. These color centers are candidates for special telecommunications qubits, or “spin photon qubits.” The method also uses an ultrafast femtosecond laser to anneal silicon with pinpoint precision at the exact locations where those qubits should form. A femtosecond laser delivers very short pulses of energy within a quadrillionth of a second to a focused target the size of a speck of dust.

    Spin photon qubits emit photons that can carry information encoded in electron spin across long distances — ideal properties to support a secure quantum network. Qubits are the smallest components of a quantum information system; unlike a classical bit, a qubit can encode data as 0, 1, or a superposition of both states.
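    In standard notation (textbook background, not from the study itself), that superposition is written:

    ```latex
    % A qubit state as a superposition of the computational basis states.
    \[
      \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
      \qquad \alpha, \beta \in \mathbb{C}, \quad
      \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 .
    \]
    ```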
    With help from Boubacar Kanté, a faculty scientist in Berkeley Lab’s Materials Sciences Division and professor of electrical engineering and computer sciences (EECS) at UC Berkeley, the team used a near-infrared detector to characterize the resulting color centers by probing their optical (photoluminescence) signals.
    What they uncovered surprised them: a quantum emitter called the Ci center. Owing to its simple structure, stability at room temperature, and promising spin properties, the Ci center is an interesting spin photon qubit candidate that emits photons in the telecom band. “We knew from the literature that Ci can be formed in silicon, but we didn’t expect to actually make this new spin photon qubit candidate with our approach,” Jhuria said.
    The researchers learned that processing silicon with a low femtosecond laser intensity in the presence of hydrogen helped to create the Ci color centers. Further experiments showed that increasing the laser intensity can increase the mobility of hydrogen, which passivates undesirable color centers without damaging the silicon lattice, Schenkel explained.
    A theoretical analysis performed by Liang Tan, staff scientist in Berkeley Lab’s Molecular Foundry, shows that the brightness of the Ci color center is boosted by several orders of magnitude in the presence of hydrogen, confirming their observations from laboratory experiments.
    “The femtosecond laser pulses can kick out hydrogen atoms or bring them back, allowing the programmable formation of desired optical qubits in precise locations,” Jhuria said.

    The team plans to use the technique to integrate optical qubits in quantum devices such as reflective cavities and waveguides, and to discover new spin photon qubit candidates with properties optimized for selected applications.
    “Now that we can reliably make color centers, we want to get different qubits to talk to each other — which is an embodiment of quantum entanglement — and see which ones perform the best. This is just the beginning,” said Jhuria.
    “The ability to form qubits at programmable locations in a material like silicon that is available at scale is an exciting step towards practical quantum networking and computing,” said Cameron Geddes, Director of the ATAP Division.
    Theoretical analysis for the study was performed at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab with support from the NERSC QIS@Perlmutter program.
    The Molecular Foundry and NERSC are DOE Office of Science user facilities at Berkeley Lab.
    This work was supported by the DOE Office of Fusion Energy Sciences.

  • Trash-sorting robot mimics complex human sense of touch

    Today’s intelligent robots can accurately recognize many objects through vision and touch. Tactile information, obtained through sensors, along with machine learning algorithms, enables robots to identify objects they have previously handled.
    However, such sensing is easily confused by objects that are similar in size and shape, or by objects the robot has never encountered. Background noise and variation in shape and size within the same type of object further limit robot perception.
    In Applied Physics Reviews, published by AIP Publishing, researchers from Tsinghua University worked to overcome the difficulty of robotic recognition of common, yet complex, items.
    Humans possess many different types of touch sensing, one of which is thermal feeling. This allows us to sense the wind blowing, perceive hot and cold, and discriminate between matter types, such as wood and metal, because of the different cooling sensations produced. The researchers aimed to mimic this ability by designing a robotic tactile sensing method that incorporated thermal sensations for more robust and accurate object detection.
    “We propose utilizing spatiotemporal tactile sensing during hand grasping to extend the robotic function and ability to simultaneously perceive multi-attributes of the grasped object, including thermal conductivity, thermal diffusivity, surface roughness, contact pressure, and temperature,” said author Rong Zhu.
    The team created a layered sensor with material detection at the surface and pressure sensitivity at the bottom, with a porous middle layer sensitive to thermal changes. They paired this sensor with an efficient cascade classification algorithm that rules out object types in order, from easy to hard, starting with simple categories like empty cartons before moving on to orange peels or scraps of cloth.
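    As a minimal sketch of that easy-to-hard cascade (the stages, confidence threshold, and toy scikit-learn models below are illustrative assumptions, not the paper’s algorithm):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    class Cascade:
        """Try stages in order, easy -> hard; stop at the first confident one."""
        def __init__(self, stages, threshold=0.6):
            self.stages = stages          # list of (name, fitted model) pairs
            self.threshold = threshold    # minimum top-class probability to stop

        def predict(self, x):
            for name, model in self.stages:
                proba = model.predict_proba(x.reshape(1, -1))[0]
                if proba.max() >= self.threshold:  # confident: classify and stop
                    return name, model.classes_[proba.argmax()]
            return "unresolved", None              # defer (e.g., re-grasp)

    # Toy stages trained on random stand-in sensor features:
    rng = np.random.default_rng(0)
    easy = LogisticRegression().fit(rng.normal(size=(100, 5)),
                                    rng.choice(["empty carton", "other"], 100))
    hard = LogisticRegression().fit(rng.normal(size=(100, 5)),
                                    rng.choice(["orange peel", "cloth scrap"], 100))
    print(Cascade([("easy", easy), ("hard", hard)]).predict(rng.normal(size=5)))
    ```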
    To test the capabilities of their method, the team created an intelligent robot tactile system to sort garbage. The robot picked up a range of common trash items, including empty cartons, bread scraps, plastic bags, plastic bottles, napkins, sponges, orange peels, and expired drugs. It sorted the trash into separate containers for recyclables, food scraps, hazardous waste, and other waste. Their system achieved a classification accuracy of 98.85% in recognizing diverse garbage objects not encountered previously. This successful garbage-sorting behavior could greatly reduce human labor in real-life scenarios and offer broad applicability for smart life technologies.
    Future research in this area will focus on enhancing robotic embodied intelligence and autonomous implementation.
    “In addition, by combining this sensor with brain-computer interface technology, tactile information collected by the sensor could be converted into neural signals acceptable to the human brain, re-empowering tactile perception capabilities for people with hand disabilities,” said Zhu.