More stories

  • ‘Smart’ tech is coming to a city near you

    If you own an internet-connected “smart” device, chances are it knows a lot about your home life.
    If you raid the pantry at 2 a.m. for a snack, your smart lights can tell. That’s because they track every time they’re switched on and off.
    Your Roomba knows the size and layout of your home and sends it to the cloud. Smart speakers eavesdrop on your every word, listening for voice commands.
    But the data-driven smart tech trend also extends far beyond our kitchens and living rooms. Over the past 20 years, city governments have been partnering with tech companies to collect real-time data on daily life in our cities, too.
    In urban areas worldwide, sidewalks, streetlights and buildings are equipped with sensors that log foot traffic and driving and parking patterns, and can even detect and pinpoint where gunshots may have been fired.
    In Singapore, for example, thousands of sensors and cameras installed across the city track everything from crowd density and traffic congestion to smoking where it’s not allowed.

    Copenhagen uses smart air quality sensors to monitor and map pollution levels.
    A 2016 report from the National League of Cities estimates that 66% of American cities had already invested in some type of ‘smart city’ technology, from intelligent meters that collect and share data on residents’ energy or water usage to sensor-laden street lights that can detect illegally parked cars.
    Proponents say the data collected will make cities cleaner, safer, more efficient. But many Americans worry that the benefits and harms of smart city tech may not be evenly felt across communities, says Pardis Emami-Naeini, assistant professor of computer science and director of the InSPIre Lab at Duke University.
    That’s one of the key takeaways of a survey Emami-Naeini and colleagues presented April 25 at the ACM CHI Conference on Human Factors in Computing Systems (CHI 2023) in Hamburg, Germany.
    Nearly 350 people from across the United States participated in the survey. In addition, the researchers conducted qualitative interviews with 21 people aged 24 to 71 from underserved neighborhoods in Seattle that have been prioritized for smart city projects over the next 10 to 15 years.

    The study explored public attitudes on a variety of smart city technologies currently in use, from air quality sensors to surveillance cameras.
    While public awareness of smart cities was limited — most of the study respondents had never even heard of the term — researchers found that Americans have concerns about the ethical implications of the data being collected, particularly from marginalized communities.
    One of the technologies participants had significant concerns about was gunshot detection, which uses software and microphones placed around a neighborhood to detect gunfire and pinpoint its location, rather than relying solely on 911 calls to police.
    The technology is used in more than 135 cities across the U.S., including Chicago, Sacramento, Philadelphia and Durham.
    Though respondents acknowledged the potential benefits to public safety, they worried that the tech could contribute to racial disparities in policing, particularly when disproportionately installed in Black and brown neighborhoods.
    Some said the mere existence of smart city tech such as gunshot detectors or security cameras in their neighborhood could contribute to negative perceptions of safety that deter future home buyers and businesses.
    Even collecting and sharing seemingly innocuous data such as air quality raised concerns for some respondents, who worried it could potentially drive up insurance rates in poorer neighborhoods exposed to higher levels of pollution.
    In both interviews and surveys, people with lower incomes expressed more concern about the ethical implications of smart city tech than those with higher income levels.
    Emami-Naeini has spent several years studying the privacy concerns raised by smart devices and appliances in the home. But when she started asking people how they felt about the risks posed by smart tech in cities, she noticed a shift. Even when people weren’t concerned about the impacts of particular types of data collection on a personal level, she says they were still concerned about potential harms for the larger community.
    “They were concerned about how their neighborhoods would be perceived,” Emami-Naeini says. “They thought that it would widen disparities that they already see in marginalized neighborhoods.”
    Lack of attention to such concerns can hamstring smart city efforts, Emami-Naeini says.
    A proposed high-tech development in Toronto, for example, was cancelled after citizens and civic leaders raised concerns about what would happen with the data collected by the neighborhood’s sensors and devices, and how much of the city the tech company wanted to control.
    In 2017, San Diego launched a $30 million project to cover half the city with smart streetlights in an attempt to improve traffic congestion, but faced backlash after it surfaced that police had been quietly using the footage to solve crimes.
    “It’s not just a waste of resources — it damages people’s trust,” Emami-Naeini says.
    Worldwide, spending on smart city initiatives is expected to reach $203 billion by 2024. But amid the enthusiasm, Emami-Naeini says, a key component has been neglected: the needs and views of city residents.
    “There’s a lack of user-centered research on this topic, especially from a privacy and ethics perspective,” Emami-Naeini says.
    To make sure the ‘smart cities’ of the future are designed with residents firmly in mind, “transparency and communication are really important.”
    Her team’s findings indicate that people want to know things like where sensors are located, what kinds of data they collect and how often, how the data will be used, who has access, whether they have the ability to opt in or opt out, and who to contact if something goes wrong.
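    To make that concrete, here is a minimal sketch (in Python) of the kind of disclosure record a city could publish for each sensor, covering the questions survey respondents raised. The schema and field names are hypothetical illustrations, not an existing standard.

    ```python
    from dataclasses import dataclass

    @dataclass
    class SensorDisclosure:
        location: str                # where the sensor is installed
        data_collected: list[str]    # what kinds of data it gathers
        collection_interval: str     # how often data are collected
        purpose: str                 # how the data will be used
        data_access: list[str]       # who has access to the data
        opt_out_contact: str | None  # opt-in/opt-out mechanism, if any
        problem_contact: str         # who to contact if something goes wrong

    # Hypothetical example entry for one smart streetlight.
    streetlight = SensorDisclosure(
        location="5th Ave & Pine St streetlight",
        data_collected=["parking occupancy counts"],
        collection_interval="every 30 seconds",
        purpose="parking enforcement and planning",
        data_access=["city transportation department"],
        opt_out_contact=None,
        problem_contact="privacy@example.gov",
    )
    print(streetlight.purpose)
    ```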
    The researchers hope the insights generated from their research will help inform the design of smart city initiatives and keep people front and center in all stages of a project, from brainstorming to deployment.
    “Communities that come together can actually change the fate of these projects,” Emami-Naeini says. “I think it’s really important to make sure that people’s voices are being heard, proactively and not reactively.”
    This work was supported by the U.S. National Science Foundation (CNS-1565252 and CNS-2114230), the University of Washington Tech Policy Lab (which receives support from the William and Flora Hewlett Foundation, the John D. and Catherine T. MacArthur Foundation, Microsoft, and the Pierre and Pamela Omidyar Fund at the Silicon Valley Community Foundation), and gifts from Google and Woven Planet.

  • How a horse whisperer can help engineers build better robots

    Humans and horses have enjoyed a strong working relationship for nearly 10,000 years — a partnership that transformed how food was produced, people were transported and even how wars were fought and won. Today, we look to horses for companionship, recreation and as teammates in competitive activities like racing, dressage and showing.
    Can these age-old interactions between people and their horses teach us something about building robots designed to improve our lives? Researchers with the University of Florida say yes.
    “There are no fundamental guiding principles for how to build an effective working relationship between robots and humans,” said Eakta Jain, an associate professor of computer and information science and engineering at UF’s Herbert Wertheim College of Engineering. “As we work to improve how humans interact with autonomous vehicles and other forms of AI, it occurred to me that we’ve done this before with horses. This relationship has existed for millennia but was never leveraged to provide insights for human-robot interaction.”
    Jain, who did her doctoral work at the Robotics Institute at Carnegie Mellon University, conducted a year of field work observing the special interactions between horses and humans at the UF Horse Teaching Unit in Gainesville, Florida. She will present her findings today at the ACM Conference on Human Factors in Computing Systems in Hamburg, Germany.
    Much as horses did thousands of years ago, robots are entering our lives and workplaces as companions and teammates. They vacuum our floors and help educate and entertain our children, and studies are showing that social robots can be effective therapy tools for improving mental and physical health. Increasingly, robots are found in factories and warehouses, working collaboratively with human workers; sometimes they are even called co-bots.
    As a member of the UF Transportation Institute, Jain was leading the human factors subgroup, which examines how humans should interact with autonomous vehicles, or AVs.

    “For the first time, cars and trucks can observe nearby vehicles and keep an appropriate distance from them as well as monitor the driver for signs of fatigue and attentiveness,” Jain said. “However, the horse has had these capabilities for a long time. I thought why not learn from our partnership with horses for transportation to help solve the problem of natural interaction between humans and AVs.”
    Looking at our history with animals to help shape our future with robots is not a new concept, though most studies have been inspired by the relationship humans have with dogs. Jain and her colleagues in the College of Engineering and UF Equine Sciences are the first to bring together engineering and robotics researchers with horse experts and trainers to conduct on-the-ground field studies with the animals.
    The multidisciplinary collaboration involved expertise in engineering, animal sciences and qualitative research methodologies, Jain explained. She first reached out to Joel McQuagge, from UF’s equine behavior and management program, who oversees the UF Horse Teaching Unit. He hadn’t thought about this connection between horses and robots, but he provided Jain with full access, and she spent months observing classes. She interviewed and observed horse experts, including thoroughbred trainers and devoted horse owners. Christina Gardner-McCune, an associate professor in UF’s department of computer and information science and engineering, provided expertise in qualitative data analysis.
    Data collected through observations and thematic analyses resulted in findings that can be applied by human-robot interaction researchers and robot designers.
    “Some of the findings are concrete and easy to visualize, while others are more abstract,” she says. “For example, we learned that a horse speaks with its body. You can see its ears pointing to where something caught its attention. We could build in similar types of nonverbal expressions in our robots, like ears that point when there is a knock on the door or something visual in the car when there’s a pedestrian on that side of the street.”
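    As a toy illustration of that idea, the sketch below maps hypothetical events to nonverbal cues, the way a horse’s ears point toward whatever caught its attention. The event names and cues are invented for illustration; this is not code from Jain’s study.

    ```python
    # Map events a robot can sense to nonverbal cues it can display.
    EVENT_CUES = {
        "knock_at_door": "swivel ear-like antennae toward the door",
        "pedestrian_left": "pulse a light strip on the car's left side",
        "pedestrian_right": "pulse a light strip on the car's right side",
    }

    def express(event: str) -> str:
        """Return the nonverbal cue for an event, mimicking a horse's pointed ears."""
        return EVENT_CUES.get(event, "no cue mapped for this event")

    print(express("pedestrian_left"))
    ```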
    A more abstract and groundbreaking finding is the notion of respect. When trainers first work with a horse, they look for signs that the horse respects its human partner.

    “We don’t typically think about respect in the context of human-robot interactions,” Jain says. “What ways can a robot show you that it respects you? Can we design behaviors similar to what the horse uses? Will that make the human more willing to work with the robot?”
    Jain, originally from New Delhi, says she grew up with robots the way people grow up with animals. Her father is an engineer who made educational and industrial robots, and her mother was a computer science teacher who ran her school’s robotics club.
    “Robots were the subject of many dinner table conversations,” she says, “so I was exposed to human-robot interactions early.”
    Horses, however, were new to her: during her yearlong study of the human-horse relationship, she learned how to ride and says she hopes to one day own a horse.
    “At first, I thought I could learn by observing and talking to people,” she says. “There is no substitute for doing, though. I had to feel for myself how the horse-human partnership works. From the first time I got on a horse, I fell in love with them.”

  • Jellyfish-like robots could one day clean up the world’s oceans

    Most of the world is covered in oceans, which are unfortunately highly polluted. One strategy for combating the mounds of waste found in these very sensitive ecosystems — especially around coral reefs — is to employ robots to handle the cleanup. Existing underwater robots, however, are mostly bulky and rigid-bodied, unable to explore and sample in complex, unstructured environments, and noisy because of their electrical motors or hydraulic pumps.
    For a more suitable design, scientists at the Max Planck Institute for Intelligent Systems (MPI-IS) in Stuttgart looked to nature for inspiration. They configured a jellyfish-inspired, versatile, energy-efficient and nearly noise-free robot the size of a hand. Jellyfish-Bot is a collaboration between the Physical Intelligence and Robotic Materials departments at MPI-IS; the work, “A Versatile Jellyfish-like Robotic Platform for Effective Underwater Propulsion and Manipulation,” was published in Science Advances.
    To build the robot, the team used electrohydraulic actuators through which electricity flows. The actuators serve as artificial muscles which power the robot. Surrounding these muscles are air cushions as well as soft and rigid components which stabilize the robot and make it waterproof. This way, the high voltage running through the actuators cannot contact the surrounding water. A power supply periodically provides electricity through thin wires, causing the muscles to contract and expand. This allows the robot to swim gracefully and to create swirls underneath its body.
    “When a jellyfish swims upwards, it can trap objects along its path as it creates currents around its body. In this way, it can also collect nutrients. Our robot, too, circulates the water around it. This function is useful in collecting objects such as waste particles. It can then transport the litter to the surface, where it can later be recycled. It is also able to collect fragile biological samples such as fish eggs. Meanwhile, there is no negative impact on the surrounding environment. The interaction with aquatic species is gentle and nearly noise-free,” Tianlu Wang explains. He is a postdoc in the Physical Intelligence Department at MPI-IS and first author of the publication.
    His co-author Hyeong-Joon Joo from the Robotic Materials Department continues: “70% of marine litter is estimated to sink to the seabed. Plastics make up more than 60% of this litter, taking hundreds of years to degrade. Therefore, we saw an urgent need to develop a robot to manipulate objects such as litter and transport it upwards. We hope that underwater robots could one day assist in cleaning up our oceans.”
    Jellyfish-Bots are capable of moving and trapping objects without physical contact, operating alone or in groups. Each robot works faster than other comparable inventions, reaching speeds of up to 6.1 cm/s, and requires an input power of only around 100 mW. Even if the polymer insulating the robot were one day torn apart, it would remain safe for humans and fish. Meanwhile, the noise the robot makes cannot be distinguished from background levels. In this way Jellyfish-Bot interacts gently with its environment without disturbing it — much like its natural counterpart.
    The robot consists of several layers: some stiffen the robot, others serve to keep it afloat or insulate it. A further polymer layer functions as a floating skin. Electrically powered artificial muscles known as HASELs are embedded into the middle of the different layers. HASELs are plastic pouches filled with a liquid dielectric and partially covered by electrodes. Applying a high voltage across an electrode charges it positively, while the surrounding water becomes negatively charged. This generates a force between the positively charged electrode and the negatively charged water that pushes the dielectric oil inside the pouches back and forth, causing the pouches to contract and relax — resembling a real muscle. HASELs can sustain the high electrical stresses generated by the charged electrodes and are protected against water by an insulating layer. This is important, as HASEL muscles had never before been used to build an underwater robot.
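    The actuation cycle is easy to caricature in a few lines of code. The toy model below drives an artificial muscle with a periodic high-voltage square wave and lets the contraction relax toward a target that scales with the square of the voltage, as electrostatic stress does; the drive frequency, voltage and time constant are invented for illustration, not measured values from the paper.

    ```python
    import numpy as np

    # Periodic high-voltage drive: on/off square wave (all values assumed).
    dt, t_end, freq = 1e-3, 2.0, 3.0             # s, s, Hz
    t = np.arange(0.0, t_end, dt)
    v_drive = np.where(np.sin(2 * np.pi * freq * t) > 0, 6e3, 0.0)  # volts

    # First-order toy response: contraction relaxes toward a target that
    # scales with V^2, since electrostatic stress is quadratic in voltage.
    tau = 0.03                                   # assumed muscle time constant, s
    contraction = np.zeros_like(t)
    for i in range(1, len(t)):
        target = (v_drive[i] / 6e3) ** 2         # normalized to 0..1
        contraction[i] = contraction[i - 1] + (dt / tau) * (target - contraction[i - 1])

    print(f"peak contraction (normalized): {contraction.max():.2f}")
    ```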
    The first step was to develop Jellyfish-Bot with a single electrode and six fingers, or arms. In the second step, the team divided the single electrode into separate groups so they could be actuated independently.
    “We achieved grasping objects by making four of the arms function as a propeller, and the other two as a gripper. Or we actuated only a subset of the arms, in order to steer the robot in different directions. We also looked into how we can operate a collective of several robots. For instance, we took two robots and let them pick up a mask, which is very difficult for a single robot alone. Two robots can also cooperate in carrying heavy loads. However, at this point, our Jellyfish-Bot needs a wire. This is a drawback if we really want to use it one day in the ocean,” Hyeong-Joon Joo says.
    Perhaps wires powering robots will soon be a thing of the past. “We aim to develop wireless robots. Luckily, we have achieved the first step towards this goal. We have incorporated all the functional modules like the battery and wireless communication parts so as to enable future wireless manipulation,” Tianlu Wang continues. The team attached a buoyancy unit at the top of the robot and a battery and microcontroller to the bottom. They then took their invention for a swim in the pond of the Max Planck Stuttgart campus, and could successfully steer it along. So far, however, they could not direct the wireless robot to change course and swim the other way.

  • Creating a tsunami early warning system using artificial intelligence

    Tsunamis are incredibly destructive waves that can destroy coastal infrastructure and cause loss of life. Early warnings for such natural disasters are difficult because the risk of a tsunami is highly dependent on the features of the underwater earthquake that triggers it.
    In Physics of Fluids, by AIP Publishing, researchers from the University of California, Los Angeles and Cardiff University in the U.K. developed an early warning system that combines state-of-the-art acoustic technology with artificial intelligence to immediately classify earthquakes and determine potential tsunami risk.
    Underwater earthquakes can trigger tsunamis if a large amount of water is displaced, so determining the type of earthquake is critical to assessing the tsunami risk.
    “Tectonic events with a strong vertical slip element are more likely to raise or lower the water column compared to horizontal slip elements,” said co-author Bernabe Gomez. “Thus, knowing the slip type at the early stages of the assessment can reduce false alarms and enhance the reliability of the warning systems through independent cross-validation.”
    In these cases, time is of the essence, and relying on deep ocean wave buoys to measure water levels often leaves insufficient evacuation time. Instead, the researchers propose measuring the acoustic radiation (sound) produced by the earthquake, which carries information about the tectonic event and travels significantly faster than tsunami waves. Underwater microphones, called hydrophones, record the acoustic waves and monitor tectonic activity in real time.
    “Acoustic radiation travels through the water column much faster than tsunami waves. It carries information about the originating source and its pressure field can be recorded at distant locations, even thousands of kilometers away from the source. The derivation of analytical solutions for the pressure field is a key factor in the real-time analysis,” co-author Usama Kadri said.
    The computational model triangulates the source of the earthquake from the hydrophone recordings, and AI algorithms classify its slip type and magnitude. It then calculates important properties like effective length and width, uplift speed, and duration, which dictate the size of the tsunami.
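    The triangulation step can be illustrated with a toy time-difference-of-arrival calculation: given arrival times at several hydrophones and the speed of sound in seawater, a grid search recovers the source location. This sketch is a schematic stand-in for the authors’ model; the hydrophone layout and source are invented.

    ```python
    import numpy as np

    C_WATER = 1500.0  # approximate speed of sound in seawater, m/s

    # Invented hydrophone array (corners of a 200 km square) and source.
    hydrophones = np.array([[0.0, 0.0], [200e3, 0.0], [0.0, 200e3], [200e3, 200e3]])
    true_source = np.array([130e3, 60e3])

    # Synthetic arrival times; the emission time t0 is unknown to the solver.
    t0 = 12.0
    arrivals = t0 + np.linalg.norm(hydrophones - true_source, axis=1) / C_WATER

    def misfit(xy):
        """Mismatch of predicted vs observed arrival-time differences (t0 cancels)."""
        pred = np.linalg.norm(hydrophones - xy, axis=1) / C_WATER
        return np.sum(((pred - pred[0]) - (arrivals - arrivals[0])) ** 2)

    # Coarse 1 km grid search; a real system would refine, e.g. with Gauss-Newton.
    xs = np.linspace(0.0, 200e3, 201)
    best = min((misfit(np.array([x, y])), x, y) for x in xs for y in xs)
    print(f"estimated source: ({best[1] / 1e3:.0f} km, {best[2] / 1e3:.0f} km)")
    ```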
    The authors tested their model with available hydrophone data and found that it almost instantaneously and reliably recovered the earthquake parameters, with low computational demand. They are improving the model by factoring in more information to increase the accuracy of the tsunami characterization.
    Their work predicting tsunami risk is part of a larger project to enhance hazard warning systems. The tsunami classification is a back-end component of software that can improve the safety of offshore platforms and ships.

  • Scientists have full state of a quantum liquid down cold

    A team of physicists has illuminated certain properties of quantum systems by observing how their fluctuations spread over time. The research offers an intricate understanding of a complex phenomenon that is foundational to quantum computing — a method that can perform certain calculations significantly more efficiently than conventional computing.
    “In an era of quantum computing it’s vital to generate a precise characterization of the systems we are building,” explains Dries Sels, an assistant professor in New York University’s Department of Physics and an author of the paper, which appears in the journal Nature Physics. “This work reconstructs the full state of a quantum liquid, consistent with the predictions of a quantum field theory — similar to those that describe the fundamental particles in our universe.”
    Sels adds that the breakthrough offers promise for technological advancement.
    “Quantum computing relies on the ability to generate entanglement between different subsystems, and that’s exactly what we can probe with our method,” he notes. “The ability to do such precise characterization could also lead to better quantum sensors — another application area of quantum technologies.”
    The research team, which included scientists from Vienna University of Technology, ETH Zurich, Free University of Berlin, and the Max Planck Institute of Quantum Optics, performed tomography of a quantum system — the reconstruction of a specific quantum state with the aim of seeking experimental evidence of a theory.
    The studied quantum system consisted of ultracold atoms — atoms whose near-zero temperatures slow them down, making their motion easier to analyze — trapped on an atom chip.
    In their work, the scientists created two “copies” of this quantum system — cigar-shaped clouds of atoms that evolve over time without influencing each other. At different stages of this process, the team performed a series of experiments that revealed the two copies’ correlations.
    “By constructing an entire history of these correlations, we can infer what is the initial quantum state of the system and extract its properties,” explains Sels. “Initially, we have a very strongly coupled quantum liquid, which we split into two so that it evolves as two independent liquids, and then we recombine it to reveal the ripples that are in the liquid.
    “It’s like watching the ripples in a pond after throwing a rock in it and inferring the properties of the rock, such as its size, shape, and weight.”
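    For readers who want a taste of the mathematics: a standard observable in such split-cloud experiments is a two-point correlation of the relative phase field, sketched below. This is an illustrative textbook form, not necessarily the exact quantity reconstructed in this paper.

    ```latex
    % Two-point correlation of the relative phase \varphi between the two
    % "copies", measured at points z and z' a time t after the split:
    C(z, z'; t) = \left\langle e^{\, i \left[ \varphi(z,t) - \varphi(z',t) \right]} \right\rangle
    ```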
    This research was supported by grants from the Air Force Office of Scientific Research (FA9550-21-1-0236) and the U.S. Army Research Office (W911NF-20-1-0163), as well as by the Austrian Science Fund (FWF) and the German Research Foundation (DFG).

  • Researchers use AI to discover new planet outside solar system

    A University of Georgia research team has confirmed evidence of a previously unknown planet outside of our solar system, and they used machine learning tools to detect it.
    A recent study by the team showed that machine learning can correctly determine whether an exoplanet is present by looking at protoplanetary disks, the disks of gas and dust that surround newly formed stars.
    The newly published findings represent a first step toward using machine learning to identify previously overlooked exoplanets.
    “We confirmed the planet using traditional techniques, but our models directed us to run those simulations and showed us exactly where the planet might be,” said Jason Terry, doctoral student in the UGA Franklin College of Arts and Sciences department of physics and astronomy and lead author on the study.
    “When we applied our models to a set of older observations, they identified a disk that wasn’t known to have a planet despite having already been analyzed. Like previous discoveries, we ran simulations of the disk and found that a planet could re-create the observation.”
    According to Terry, the models suggested a planet’s presence, indicated by several images that strongly highlighted a particular region of the disk that turned out to have the characteristic sign of a planet — an unusual deviation in the velocity of the gas near the planet.
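    The underlying signal can be mimicked in a few lines: build a smooth rotation profile for the disk’s gas, inject a localized velocity deviation, and flag where the residual from the smooth model peaks. This toy sketch illustrates the idea only; the UGA team’s actual pipeline uses trained machine-learning models on real observations, and all numbers below are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    r = np.hypot(x, y) + 1e-3                    # avoid division by zero at center

    v_model = r ** -0.5                          # smooth Keplerian-like rotation profile
    kink = 0.3 * np.exp(-((x - 0.4) ** 2 + (y + 0.2) ** 2) / 0.005)  # injected planet signature
    observed = v_model + kink + rng.normal(0.0, 0.02, (n, n))        # plus measurement noise

    residual = np.abs(observed - v_model)        # deviation from the smooth model
    iy, ix = np.unravel_index(residual.argmax(), residual.shape)
    print(f"strongest deviation near x={x[iy, ix]:+.2f}, y={y[iy, ix]:+.2f}")
    ```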
    “This is an incredibly exciting proof of concept. We knew from our previous work that we could use machine learning to find known forming exoplanets,” said Cassandra Hall, assistant professor of computational astrophysics and principal investigator of the Exoplanet and Planet Formation Research Group at UGA. “Now, we know for sure that we can use it to make brand new discoveries.”
    The discovery highlights how machine learning can enhance scientists’ work, serving as an added tool that improves researchers’ accuracy and saves time in an endeavor as vast as investigating deep space.
    The models were able to detect a signal in data that people had already analyzed; they found something that previously had gone undetected.
    “This demonstrates that our models — and machine learning in general — have the ability to quickly and accurately identify important information that people can miss. This has the potential to dramatically speed up analysis and subsequent theoretical insights,” Terry said. “It only took about an hour to analyze that entire catalog and find strong evidence for a new planet in a specific spot, so we think there will be an important place for these types of techniques as our datasets get even larger.”

  • New programmable smart fabric responds to temperature and electricity

    A new smart material developed by researchers at the University of Waterloo is activated by both heat and electricity, making it the first ever to respond to two different stimuli.
    The unique design paves the way for a wide variety of potential applications, including clothing that warms up while you walk from the car to the office in winter and vehicle bumpers that return to their original shape after a collision.
    Inexpensively made with polymer nano-composite fibres from recycled plastic, the programmable fabric can change its colour and shape when stimuli are applied.
    “As a wearable material alone, it has almost infinite potential in AI, robotics and virtual reality games and experiences,” said Dr. Milad Kamkar, a chemical engineering professor at Waterloo. “Imagine feeling warmth or a physical trigger eliciting a more in-depth adventure in the virtual world.”
    The novel fabric design is a product of the happy union of soft and hard materials, featuring a combination of highly engineered polymer composites and stainless steel in a woven structure.
    Researchers created a device similar to a traditional loom to weave the smart fabric. The resulting process is extremely versatile, enabling design freedom and macro-scale control of the fabric’s properties.
    The fabric can also be activated by a lower voltage of electricity than previous systems, making it more energy-efficient and cost-effective. In addition, lower voltage allows integration into smaller, more portable devices, making it suitable for use in biomedical devices and environment sensors.
    “The idea of these intelligent materials was first bred and born from biomimicry science,” said Kamkar, director of the Multi-scale Materials Design (MMD) Centre at Waterloo.
    “Through the ability to sense and react to environmental stimuli such as temperature, this is proof of concept that our new material can interact with the environment to monitor ecosystems without damaging them.”
    The next step for researchers is to improve the fabric’s shape-memory performance for applications in the field of robotics. The aim is to construct a robot that can effectively carry and transfer weight to complete tasks.

  • Better superconductors with palladium

    It is one of the most exciting races in modern physics: How can we produce the best superconductors that remain superconducting even at the highest possible temperatures and ambient pressure? In recent years, a new era of superconductivity has begun with the discovery of nickelates. These superconductors are based on nickel, which is why many scientists speak of the “nickel age of superconductivity research.” In many respects, nickelates are similar to cuprates, which are based on copper and were discovered in the 1980s.
    But now a new class of materials is coming into play: In a cooperation between TU Wien and universities in Japan, it was possible to simulate the behaviour of various materials more precisely on the computer than before. There is a “Goldilocks zone” in which superconductivity works particularly well. And this zone is reached neither with nickel nor with copper, but with palladium. This could usher in a new “age of palladates” in superconductivity research. The results have now been published in the scientific journal Physical Review Letters.
    The search for higher transition temperatures
    At high temperatures, superconductors behave much like other conducting materials. But when they are cooled below a certain threshold, they change dramatically: their electrical resistance disappears completely, and suddenly they can conduct electricity without any loss. The temperature at which a material switches between the superconducting and the normally conducting state is called the “critical temperature.”
    “We have now been able to calculate this ‘critical temperature’ for a whole range of materials. With our modelling on high-performance computers, we were able to predict the phase diagram of nickelate superconductivity with a high degree of accuracy, as experiments later confirmed,” says Prof. Karsten Held from the Institute of Solid State Physics at TU Wien.
    Many materials become superconducting only just above absolute zero (-273.15°C), while others retain their superconducting properties even at much higher temperatures. A superconductor that still remains superconducting at normal room temperature and normal atmospheric pressure would fundamentally revolutionise the way we generate, transport and use electricity. However, such a material has not yet been discovered. Nevertheless, high-temperature superconductors, including those from the cuprate class, play an important role in technology — for example, in the transmission of large currents or in the production of extremely strong magnetic fields.
    Copper? Nickel? Or Palladium?
    The search for the best possible superconducting materials is difficult: there are many different chemical elements that come into question. You can put them together in different structures, you can add tiny traces of other elements to optimise superconductivity. “To find suitable candidates, you have to understand on a quantum-physical level how the electrons interact with each other in the material,” says Prof. Karsten Held.
    These calculations showed that there is an optimum for the interaction strength of the electrons: the interaction must be strong, but not too strong. In between lies a “golden zone” that makes it possible to achieve the highest transition temperatures.
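    A toy curve makes the “golden zone” intuition concrete: if the transition temperature rises with interaction strength but is suppressed when the interaction grows too strong, the product of the two trends peaks at an intermediate value. The functional form below is purely illustrative, not from the TU Wien calculations.

    ```python
    import numpy as np

    U = np.linspace(0.1, 5.0, 50)   # electron interaction strength, arbitrary units
    Tc = U * np.exp(-U / 1.5)       # grows at weak coupling, suppressed at strong

    print(f"toy optimum near U = {U[np.argmax(Tc)]:.1f} (arbitrary units)")
    ```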
    Palladates as the optimal solution
    This golden zone of medium interaction can be reached neither with cuprates nor with nickelates — but one can hit the bull’s eye with a new type of material: so-called palladates. “Palladium is directly one row below nickel in the periodic table. The properties are similar, but the electrons there are on average somewhat further away from the atomic nucleus and from each other, so the electronic interaction is weaker,” says Karsten Held.
    The model calculations show how to achieve optimal transition temperatures for palladates. “The computational results are very promising,” says Karsten Held. “We hope that we can now use them to initiate experimental research. If, with palladates, we have a whole new, additional class of materials available to better understand superconductivity and to create even better superconductors, this could bring the entire research field forward.”