More stories

  • Unraveling the mathematics behind wiggly worm knots

    For millennia, humans have used knots for all kinds of reasons — to tie rope, braid hair, or weave fabrics. But there are organisms that are better at tying knots and far superior — and faster — at untangling them.
    Tiny California blackworms intricately tangle themselves by the thousands to form ball-shaped blobs that allow them to execute a wide range of biological functions. But, most striking of all, while the worms tangle over a period of several minutes, they can untangle in mere milliseconds, escaping at the first sign of a threat from a predator.
    Saad Bhamla, assistant professor in the School of Chemical and Biomolecular Engineering at Georgia Tech, wanted to understand precisely how the blackworms execute their tangling and untangling movements. To investigate, Bhamla and a team of researchers at Georgia Tech linked up with mathematicians at MIT. Their research, published in Science, could influence the design of fiber-like, shapeshifting robotics that self-assemble and move in ways that are fast and reversible. The study also highlights how cross-disciplinary collaboration can answer some of the most perplexing questions in disparate fields.
    Capturing the Inside of a Worm Blob
    Fascinated by the science of ultrafast movement and collective behavior, Bhamla and Harry Tuazon, a graduate student in Bhamla’s lab, have studied California blackworms for years, observing how they use collective movement to form blobs and then disperse.
    “We wanted to understand the exact mechanics behind how the worms change their movement dynamics to achieve tangling and ultrafast untangling,” Bhamla said. “Also, these are not just typical filaments like string, ethernet cables, or spaghetti — these are living, active tangles that are out of equilibrium, which adds a fascinating layer to the question.”
    Tuazon, a co-first author of the study, collected videos of his experiments with the worms, including macro videos of the worms’ collective dispersal mechanism and microscopic videos of one, two, three, and several worms to capture their movements.

    “I was shocked when I pointed a UV light toward the worm blobs and they dispersed so explosively,” Tuazon said. “But to understand this complex and mesmerizing maneuver, I started conducting experiments with only a few worms.”
    Bhamla and Tuazon approached MIT mathematicians Jörn Dunkel and Vishal Patil (a graduate student at the time and now a postdoctoral fellow at Stanford University) about a collaboration. After seeing Tuazon’s videos, the two theorists, who specialize in knots and topology, were eager to join.
    “Knots and tangles are a fascinating area where physics and mechanics meet some very interesting math,” said Patil, co-first author on the paper. “These worms seemed like a good playground to investigate topological principles in systems made up of filaments.”
    A key moment for Patil was when he viewed Tuazon’s video of a single worm that had been provoked into the escape response. Patil noticed the worm moved in a figure-eight pattern, turning its head in clockwise and counterclockwise spirals as its body followed.
    The researchers thought this helical gait pattern might play a role in the worms’ ability to tangle and untangle. But to mathematically quantify the worm tangle structures and model how they braid around each other, Patil and Dunkel needed experimental data.

    Bhamla and Tuazon set about to find an imaging technique that would allow them to peer inside the worm blob so they could gather more data. After much trial and error, they landed on an unexpected solution: ultrasound. By placing a live worm blob in nontoxic jelly and using a commercial ultrasound machine, they were finally able to observe the inside of the intricate worm tangles.
    “Capturing the inside structure of a live worm blob was a real challenge,” Tuazon said. “We tried all sorts of imaging techniques for months, including X-rays, confocal microscopy, and tomography, but none of them gave us the real-time resolution we needed. Ultimately, ultrasound turned out to be the solution.”
    After analyzing the ultrasound videos, Tuazon and other researchers in Bhamla’s lab painstakingly tracked the movement of the worms by hand, plotting more than 46,000 data points for Patil and Dunkel to use to understand the mathematics behind the movements.
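    To give a concrete sense of how tracked centerline coordinates can be turned into a tangle measure, the sketch below builds two toy helical worm centerlines and counts crossings between their projections. This is a minimal Python illustration only; the helical parametrization and the crossing-count proxy are assumptions made for this sketch and are not the topological machinery used in the Science paper.

    import numpy as np

    def helical_centerline(phase, chirality, turns=3, length=1.0, radius=0.15, n=120):
        """Toy 3-D centerline of a worm carrying a helical wave.
        chirality is +1 or -1; all parameters are illustrative, not fitted to blackworm data."""
        s = np.linspace(0.0, 1.0, n)
        theta = chirality * 2.0 * np.pi * turns * s + phase
        return np.stack([length * s, radius * np.cos(theta), radius * np.sin(theta)], axis=1)

    def _cross2(u, v):
        """z-component of the 2-D cross product."""
        return u[0] * v[1] - u[1] * v[0]

    def _segments_cross(p1, p2, p3, p4):
        """True if 2-D segments p1-p2 and p3-p4 intersect (generic position)."""
        d1 = _cross2(p4 - p3, p1 - p3)
        d2 = _cross2(p4 - p3, p2 - p3)
        d3 = _cross2(p2 - p1, p3 - p1)
        d4 = _cross2(p2 - p1, p4 - p1)
        return d1 * d2 < 0 and d3 * d4 < 0

    def projected_crossings(a, b):
        """Count crossings between the xy-projections of two polylines,
        a crude proxy for how entangled the pair is."""
        count = 0
        for i in range(len(a) - 1):
            for j in range(len(b) - 1):
                if _segments_cross(a[i, :2], a[i + 1, :2], b[j, :2], b[j + 1, :2]):
                    count += 1
        return count

    # Example: an interleaved pair of helical worms versus a pair pulled apart.
    w1 = helical_centerline(phase=0.0, chirality=+1)
    w2 = helical_centerline(phase=np.pi, chirality=+1)
    w3 = helical_centerline(phase=np.pi, chirality=-1) + np.array([0.0, 1.0, 0.0])
    print("interleaved pair crossings:", projected_crossings(w1, w2))
    print("separated pair crossings:  ", projected_crossings(w1, w3))
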
    Explaining Tangling and Untangling
    Answering the questions of how the worms untangle quickly required a combination of mechanics and topology. Patil built a mathematical model to explain how helical gaits can lead to tangling and untangling. By testing the model using a simulation framework, Patil was able to create a visualization of worms tangling.
    The model predicted that each worm formed a tangle with at least two other worms, revealing why the worm blobs were so cohesive. Patil then showed that the same class of helical gaits could explain how they untangle. The simulations were uncanny in their resemblance to real ultrasound images and showed that the worms’ alternating helical wave motions enabled the tangling and the ultrafast untangling escape mechanism.  
    “What’s striking is these tangled structures are extremely complicated. They are disordered and complex structures, but these living worm structures are able to manipulate these knots for crucial functions,” Patil said.
    While it has been known for decades that the worms move in a helical gait, no one had ever made the connection between that movement and how they escape. The researchers’ work revealed how the mechanical movements of individual worms determine their emergent collective behavior and topological dynamics. It is also the first mathematical theory of active tangling and untangling.
    “This observation may seem like a mere curiosity, but its implications are far-reaching. Active filaments are ubiquitous in biological structures, from DNA strands to entire organisms,” said Eva Kanso, program director at the National Science Foundation and professor of mechanical engineering at the University of Southern California.
    “These filaments serve myriads of functions and can provide a general motif for engineering multifunctional structures and materials that change properties on demand. Just as the worm blobs perform remarkable tangling and untangling feats, so may future bioinspired materials defy the limits of conventional structures by exploiting the interplay between mechanics, geometry, and activity.”
    The researchers’ model demonstrates the advantages of different types of tangles, which could allow for programming a wide range of behaviors into multifunctional, filament-like materials, from polymers to shapeshifting soft robotic systems. Many companies, such as 3M, already use nonwoven materials made of tangling fibers in products, including bandages and N95 masks. The worms could inspire new nonwoven materials and topological shifting matter.
    “Actively shapeshifting topological matter is currently the stuff of science fiction,” said Bhamla. “Imagine a soft, nonwoven material made of millions of stringlike filaments that can tangle and untangle on command, forming a smart adhesive bandage that shape-morphs as a wound heals, or a smart filtration material that alters pore topology to trap particles of different sizes or chemical properties. The possibilities are endless.”
    Georgia Tech researchers Emily Kaufman, Tuhin Chakrabortty, and David Qin contributed to this study.
    CITATION: Patil, et al. “Ultrafast reversible self-assembly of living tangled matter.” Science. 28 April 2023.
    DOI: https://www.science.org/doi/10.1126/science.ade7759
    Writer: Catherine Barzler, Georgia Tech
    Video: Candler Hobbs, Georgia Tech
    Original footage and photography: Georgia Tech
    Simulations: MIT

  • ChatGPT scores nearly 50 per cent on board certification practice test for ophthalmology, study shows

    A study of ChatGPT found the artificial intelligence tool answered less than half of the test questions correctly from a study resource commonly used by physicians when preparing for board certification in ophthalmology.
    The study, published in JAMA Ophthalmology and led by St. Michael’s Hospital, a site of Unity Health Toronto, found ChatGPT correctly answered 46 per cent of questions when the test was first conducted in Jan. 2023. When researchers administered the same test one month later, ChatGPT scored more than 10 percentage points higher.
    The potential of AI in medicine and exam preparation has garnered excitement since ChatGPT became publicly available in Nov. 2022. It’s also raising concern for the potential of incorrect information and cheating in academia. ChatGPT is free, available to anyone with an internet connection, and works in a conversational manner.
    “ChatGPT may have an increasing role in medical education and clinical practice over time; however, it is important to stress the responsible use of such AI systems,” said Dr. Rajeev H. Muni, principal investigator of the study and a researcher at the Li Ka Shing Knowledge Institute at St. Michael’s. “ChatGPT as used in this investigation did not answer sufficient multiple-choice questions correctly for it to provide substantial assistance in preparing for board certification at this time.”
    Researchers used a dataset of practice multiple choice questions from the free trial of OphthoQuestions, a common resource for board certification exam preparation. To ensure ChatGPT’s responses were not influenced by concurrent conversations, entries or conversations with ChatGPT were cleared prior to inputting each question and a new ChatGPT account was used. Questions that used images and videos were not included because ChatGPT only accepts text input.
    Of 125 text-based multiple-choice questions, ChatGPT answered 58 (46 per cent) correctly when the study was first conducted in Jan. 2023. When researchers repeated the analysis in Feb. 2023, its performance improved to 58 per cent.
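    As a quick back-of-the-envelope check of those figures (the February correct-answer count is not reported directly, so it is inferred here from the 58 per cent figure):

    # Counts from the article; the February count is an inferred estimate.
    total_questions = 125
    correct_jan = 58
    accuracy_jan = correct_jan / total_questions           # 0.464, reported as 46 per cent
    accuracy_feb = 0.58                                     # reported February accuracy
    correct_feb = round(accuracy_feb * total_questions)     # roughly 72-73 questions
    print(f"Jan 2023: {correct_jan}/{total_questions} = {accuracy_jan:.0%}")
    print(f"Feb 2023: ~{correct_feb}/{total_questions} = {accuracy_feb:.0%} "
          f"(+{(accuracy_feb - accuracy_jan) * 100:.0f} percentage points)")
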
    “ChatGPT is an artificial intelligence system that has tremendous promise in medical education. Though it provided incorrect answers to board certification questions in ophthalmology about half the time, we anticipate that ChatGPT’s body of knowledge will rapidly evolve,” said Dr. Marko Popovic, a co-author of the study and a resident physician in the Department of Ophthalmology and Vision Sciences at the University of Toronto.
    ChatGPT closely matched how trainees answer questions, and selected the same multiple-choice response as the most common answer provided by ophthalmology trainees 44 per cent of the time. ChatGPT selected the multiple-choice response that was least popular among ophthalmology trainees 11 per cent of the time, second least popular 18 per cent of the time, and second most popular 22 per cent of the time.
    “ChatGPT performed most accurately on general medicine questions, answering 79 per cent of them correctly. On the other hand, its accuracy was considerably lower on questions for ophthalmology subspecialties. For instance, the chatbot answered 20 per cent of questions correctly on oculoplastics and zero per cent correctly from the subspecialty of retina. The accuracy of ChatGPT will likely improve most in niche subspecialties in the future,” said Andrew Mihalache, lead author of the study and undergraduate student at Western University.

  • Speedy robo-gripper reflexively organizes cluttered spaces

    When manipulating an arcade claw, a player can plan all she wants. But once she presses the joystick button, it’s a game of wait-and-see. If the claw misses its target, she’ll have to start from scratch for another chance at a prize.
    The slow and deliberate approach of the arcade claw is similar to state-of-the-art pick-and-place robots, which use high-level planners to process visual images and plan out a series of moves to grab for an object. If a gripper misses its mark, it’s back to the starting point, where the controller must map out a new plan.
    Looking to give robots a more nimble, human-like touch, MIT engineers have now developed a gripper that grasps by reflex. Rather than start from scratch after a failed attempt, the team’s robot adapts in the moment to reflexively roll, palm, or pinch an object to get a better hold. It’s able to carry out these “last centimeter” adjustments (a riff on the “last mile” delivery problem) without engaging a higher-level planner, much like how a person might fumble in the dark for a bedside glass without much conscious thought.
    The new design is the first to incorporate reflexes into a robotic planning architecture. For now, the system is a proof of concept and provides a general organizational structure for embedding reflexes into a robotic system. Going forward, the researchers plan to program more complex reflexes to enable nimble, adaptable machines that can work with and among humans in ever-changing settings.
    “In environments where people live and work, there’s always going to be uncertainty,” says Andrew SaLoutos, a graduate student in MIT’s Department of Mechanical Engineering. “Someone could put something new on a desk or move something in the break room or add an extra dish to the sink. We’re hoping a robot with reflexes could adapt and work with this kind of uncertainty.”
    SaLoutos and his colleagues will present a paper on their design in May at the IEEE International Conference on Robotics and Automation (ICRA). His MIT co-authors include postdoc Hongmin Kim, graduate student Elijah Stanger-Jones, Menglong Guo SM ’22, and professor of mechanical engineering Sangbae Kim, the director of the Biomimetic Robotics Laboratory at MIT.

    High and low
    Many modern robotic grippers are designed for relatively slow and precise tasks, such as repetitively fitting together the same parts on a factory assembly line. These systems depend on visual data from onboard cameras; processing that data limits a robot’s reaction time, particularly if it needs to recover from a failed grasp.
    “There’s no way to short-circuit out and say, oh shoot, I have to do something now and react quickly,” SaLoutos says. “Their only recourse is just to start again. And that takes a lot of time computationally.”
    In their new work, Kim’s team built a more reflexive and reactive platform, using fast, responsive actuators that they originally developed for the group’s mini cheetah — a nimble, four-legged robot designed to run, leap, and quickly adapt its gait to various types of terrain.
    The team’s design includes a high-speed arm and two lightweight, multijointed fingers. In addition to a camera mounted to the base of the arm, the team incorporated custom high-bandwidth sensors at the fingertips that instantly record the force and location of any contact as well as the proximity of the finger to surrounding objects more than 200 times per second.

    The researchers designed the robotic system such that a high-level planner initially processes visual data of a scene, marking the object’s current location, where the gripper should pick it up, and the location where the robot should set it down. Then, the planner sets a path for the arm to reach out and grasp the object. At this point, the reflexive controller takes over.
    If the gripper fails to grab hold of the object, rather than backing out and starting again as most grippers do, the robot runs an algorithm the team wrote that instructs it to quickly act out any of three grasp maneuvers, which they call “reflexes,” in response to real-time measurements at the fingertips. The three reflexes kick in within the last centimeter of the robot approaching an object and enable the fingers to grab, pinch, or drag the object until it has a better hold.
    They programmed the reflexes to be carried out without having to involve the high-level planner. Instead, the reflexes are organized at a lower decision-making level, so that they can respond as if by instinct, rather than having to carefully evaluate the situation to plan an optimal fix.
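    To make that two-level structure concrete, here is a minimal Python sketch of how a fast reflex loop might sit beneath a slower planner and act on raw fingertip readings alone. The sensor fields, thresholds, and reflex names below are hypothetical placeholders for illustration, not the team’s actual controller.

    import random
    from dataclasses import dataclass

    @dataclass
    class FingertipReading:
        contact_force: float   # force at the fingertip, in newtons (hypothetical units)
        proximity: float       # distance to the nearest surface, in centimeters

    def read_fingertips():
        """Stand-in for the high-bandwidth fingertip sensors sampled over 200 times per second."""
        return FingertipReading(contact_force=random.uniform(0.0, 2.0),
                                proximity=random.uniform(0.0, 2.0))

    def choose_reflex(reading):
        """Pick one of three hypothetical grasp reflexes from raw fingertip data,
        without consulting the high-level planner."""
        if reading.contact_force > 1.0:
            return "pinch"    # firm contact: squeeze to secure the object
        if reading.proximity < 0.5:
            return "drag"     # object nearby but not held: drag it inward
        return "grab"         # otherwise, re-close the fingers around it

    def grasp_with_reflexes(max_cycles=200):
        """Reflex loop running during the last centimeter of the approach;
        the planner only supplied the approach path beforehand."""
        for _ in range(max_cycles):
            reading = read_fingertips()
            reflex = choose_reflex(reading)
            # ... send `reflex` to the low-level joint controller here ...
            if reading.contact_force > 1.5:   # crude "grasp secured" condition
                return True
        return False   # only now would control fall back to the high-level planner

    if __name__ == "__main__":
        print("grasp secured:", grasp_with_reflexes())
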
    “It’s like how, instead of having the CEO micromanage and plan every single thing in your company, you build a trust system and delegate some tasks to lower-level divisions,” Kim says. “It may not be optimal, but it helps the company react much more quickly. In many cases, waiting for the optimal solution makes the situation much worse or irrecoverable.”
    Cleaning via reflex
    The team demonstrated the gripper’s reflexes by clearing a cluttered shelf. They set a variety of household objects on a shelf, including a bowl, a cup, a can, an apple, and a bag of coffee grounds. They showed that the robot was able to quickly adapt its grasp to each object’s particular shape and, in the case of the coffee grounds, squishiness. Out of 117 attempts, the gripper quickly and successfully picked and placed objects more than 90 percent of the time, without having to back out and start over after a failed grasp.
    A second experiment showed how the robot could also react in the moment. When researchers shifted a cup’s position, the gripper, despite having no visual update of the new location, was able to readjust and essentially feel around until it sensed the cup in its grasp. Compared to a baseline grasping controller, the gripper’s reflexes increased the area of successful grasps by over 55 percent.
    Now, the engineers are working to include more complex reflexes and grasp maneuvers in the system, with a view toward building a general pick-and-place robot capable of adapting to cluttered and constantly changing spaces.
    “Picking up a cup from a clean table — that specific problem in robotics was solved 30 years ago,” Kim notes. “But a more general approach, like picking up toys in a toybox, or even a book from a library shelf, has not been solved. Now with reflexes, we think we can one day pick and place in every possible way, so that a robot could potentially clean up the house.”
    This research was supported, in part, by Advanced Robotics Lab of LG Electronics and the Toyota Research Institute.
    Video: https://youtu.be/XxDi-HEpXn4

  • Can jack-of-all-trades AI reshape medicine?

    The vast majority of AI models used in medicine today are “narrow specialists,” trained to perform one or two tasks, such as scanning mammograms for signs of breast cancer or detecting lung disease on chest X-rays.
    But the everyday practice of medicine involves an endless array of clinical scenarios, symptom presentations, possible diagnoses, and treatment conundrums. So, if AI is to deliver on its promise to reshape clinical care, it must reflect that complexity of medicine and do so with high fidelity, says Pranav Rajpurkar, assistant professor of biomedical informatics in the Blavatnik Institute at HMS.
    Enter generalist medical AI, a more evolved form of machine learning capable of performing complex tasks in a wide range of scenarios.
    Akin to general medicine physicians, Rajpurkar explained, generalist medical AI models can integrate multiple data types — such as MRI scans, X-rays, blood test results, medical texts, and genomic testing — to perform a range of tasks, from making complex diagnostic calls to supporting clinical decisions to choosing optimal treatment. And they can be deployed in a variety of settings, from the exam room to the hospital ward to the outpatient GI procedure suite to the cardiac operating room.
    While the earliest versions of generalist medical AI have started to emerge, its true potential and depth of capabilities have yet to materialize.
    “The rapidly evolving capabilities in the field of AI have completely redefined what we can do in the field of medical AI,” writes Rajpurkar in a newly published perspective in Nature, on which he is co-senior author with Eric Topol of the Scripps Research Institute and colleagues from Stanford University, Yale University, and the University of Toronto.

    Generalist medical AI is on the cusp of transforming clinical medicine as we know it, but with this opportunity come serious challenges, the authors say.
    In the article, the authors discuss the defining features of generalist medical AI, identify various clinical scenarios where these models can be used, and chart the road forward for their design, development, and deployment.
    Features of generalist medical AI
    Key characteristics that render generalist medical AI models superior to conventional models are their adaptability, their versatility, and their ability to apply existing knowledge to new contexts.
    For example, a traditional AI model trained to spot brain tumors on a brain MRI will look at a lesion on an image to determine whether it’s a tumor. It can provide no information beyond that. By contrast, a generalist model would look at a lesion and determine what type of lesion it is — a tumor, a cyst, an infection, or something else. It may recommend further testing and, depending on the diagnosis, suggest treatment options.

    “Compared with current models, generalist medical AI will be able to perform more sophisticated reasoning and integrate multiple data types, which lets it build a more detailed picture of a patient’s case,” said study co-first author Oishi Banerjee, a research associate in the Rajpurkar lab, which is already working on designing such models.
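    As a rough illustration of what integrating multiple data types could mean in practice, here is a minimal Python sketch of a hypothetical generalist-model interface. The class, field names, and stub function are invented for this sketch and do not correspond to any existing system or to the models being built in the Rajpurkar lab.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class PatientCase:
        """Heterogeneous inputs a generalist model might accept for one case."""
        history: str
        lab_results: dict[str, float] = field(default_factory=dict)
        imaging: Optional[bytes] = None     # e.g., raw MRI or X-ray pixel data
        genomics: Optional[str] = None      # e.g., a variant call summary

    def generalist_model(case: PatientCase, instruction: str) -> str:
        """Placeholder for a generalist model: one model, many tasks, with the task
        described in plain language rather than baked in by retraining."""
        # A real system would run a large multimodal model here; this stub only
        # reports which task it was given and which data types were supplied.
        modalities = [name for name, value in vars(case).items() if value]
        return f"Task: {instruction!r} using modalities: {', '.join(modalities)}"

    case = PatientCase(history="62-year-old with new headaches",
                       lab_results={"WBC": 11.2},
                       imaging=b"...")
    print(generalist_model(case, "Characterize the brain lesion and suggest next tests"))
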
    According to the authors, generalist models will be able to:
    • Adapt easily to new tasks without the need for formal retraining. They will perform the task by simply having it explained to them in plain English or another language.
    • Analyze various types of data — images, medical text, lab results, genetic sequencing, patient histories, or any combination thereof — and generate a decision. In contrast, conventional AI models are limited to using predefined data types — text only, image only — and only in certain combinations.
    • Apply medical knowledge to reason through previously unseen tasks and use medically accurate language to explain their reasoning.
    Clinical scenarios for use of generalist medical AI
    The researchers outline many areas in which generalist medical AI models would offer comprehensive solutions.
    Some of them are:
    • Radiology reports. Generalist medical AI would act as a versatile digital radiology assistant to reduce workload and minimize rote work. These models could draft radiology reports that describe both abnormalities and relevant normal findings, while also taking into account the patient’s history. These models would also combine text narrative with visualization to highlight areas on an image described by the text. The models would also be able to compare previous and current findings on a patient’s image to illuminate telltale changes suggestive of disease progression.
    • Real-time surgery assistance. If an operating team hits a roadblock during a procedure — such as failure to find a mass in an organ — the surgeon could ask the model to review the last 15 minutes of the procedure to look for any misses or oversights. If a surgeon encounters an ultra-rare anatomic feature during surgery, the model could rapidly access all published work on this procedure to offer insight in real time.
    • Decision support at the patient bedside. Generalist models would offer alerts and treatment recommendations for hospitalized patients by continuously monitoring their vital signs and other parameters, including the patient’s records. The models would be able to anticipate looming emergencies before they occur. For example, a model might alert the clinical team when a patient is on the brink of going into circulatory shock and immediately suggest steps to avert it.
    Ahead, promise and peril
    Generalist medical AI models have the potential to transform health care, the authors say. They can alleviate clinician burnout, reduce clinical errors, and expedite and improve clinical decision-making.
    Yet, these models come with unique challenges. Their strongest features — extreme versatility and adaptability — also pose the greatest risks, the researchers caution, because they will require the collection of vast and diverse data.
    Some critical pitfalls include:
    • Need for extensive, ongoing training. To ensure the models can switch data modalities quickly and adapt in real time depending on the context and type of question asked, they will need to undergo extensive training on diverse data from multiple complementary sources and modalities. That training would have to be undertaken periodically to keep up with new information. For instance, in the case of new SARS-CoV-2 variants, a model must be able to quickly retrieve key features on X-ray images of pneumonia caused by an older variant to contrast with lung changes associated with a new variant.
    • Validation. Generalist models will be uniquely difficult to validate due to the versatility and complexity of tasks they will be asked to perform. This means the model needs to be tested on a wide range of cases it might encounter to ensure its proper performance. What this boils down to, Rajpurkar said, is defining the conditions under which the models perform and the conditions under which they fail.
    • Verification. Compared with conventional models, generalist medical AI will handle much more data, more varied types of data, and data of greater complexity. This will make it that much more difficult for clinicians to determine how accurate a model’s decision is. For instance, a conventional model would look at an imaging study or a whole-slide image when classifying a patient’s tumor. A single radiologist or pathologist could verify whether the model was correct. By comparison, a generalist model could analyze pathology slides, CT scans, and medical literature, among many other variables, to classify and stage the disease and make a treatment recommendation. Such a complex decision would require verification by a multidisciplinary panel that includes radiologists, pathologists, and oncologists to assess the accuracy of the model. The researchers note that designers could make this verification process easier by incorporating explanations, such as clickable links to supporting passages in the literature, to allow clinicians to efficiently verify the model’s predictions. Another important feature would be building models that quantify their level of uncertainty.
    • Biases. It is no secret that medical AI models can perpetuate biases, which they can acquire during training when exposed to limited datasets obtained from non-diverse populations. Such risks will be magnified when designing generalist medical AI due to the unprecedented scale and complexity of the datasets needed during their training. To minimize this risk, generalist medical AI models must be thoroughly validated to ensure that they do not underperform on particular populations, such as minority groups, the researchers recommend. Additionally, they will need to undergo continuous auditing and regulation after deployment.
    “These are serious but not insurmountable hurdles,” Rajpurkar said. “Having a clear-eyed understanding of all the challenges early on will help ensure that generalist medical AI delivers on its tremendous promise to change the practice of medicine for the better.”

  • Tunneling electrons

    By superimposing two laser fields of different strengths and frequencies, the electron emission from metals can be measured and controlled with a precision of a few attoseconds, physicists from Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), the University of Rostock and the University of Konstanz have shown. The findings could lead to new quantum-mechanical insights and enable electronic circuits that are a million times faster than today’s. The researchers have now published their findings in the journal Nature.
    Light is capable of releasing electrons from metal surfaces. This observation was already made in the first half of the 19th century by Alexandre Edmond Becquerel and later confirmed in various experiments, among others by Heinrich Hertz and Wilhelm Hallwachs. Since the photoelectric effect could not be reconciled with the light wave theory, Albert Einstein came to the conclusion that light must consist not only of waves, but also of particles. He laid the foundation for quantum mechanics.
    Strong laser light allows electrons to tunnel
    With the development of laser technology, research into the photoelectric effect has gained a new impetus. “Today, we can produce extremely strong and ultrashort laser pulses in a wide variety of spectral colors,” explains Prof. Dr. Peter Hommelhoff, Chair for Laser Physics at the Department of Physics at FAU. “This inspired us to capture and control the duration and intensity of the electron release of metals with greater accuracy.” So far, scientists have only been able to determine laser-induced electron dynamics precisely in gases — with an accuracy of a few attoseconds. Quantum dynamics and emission time windows have not yet been measured on solids.
    This is exactly what the researchers at FAU, the University of Rostock and the University of Konstanz have now succeeded in doing for the first time. They used a special strategy: instead of just a strong laser pulse, which releases the electrons from a sharp tungsten tip, they also used a second, weaker laser pulse with twice the frequency. “In principle, you have to know that with very strong laser light, the individual photons are no longer responsible for the release of the electrons, but rather the electric field of the laser,” explains Dr. Philip Dienstbier, a research associate at Peter Hommelhoff’s chair and lead author of the study. “The electrons then tunnel through the metal interface into the vacuum.” By deliberately superimposing the two light waves, the physicists can control the shape and strength of the laser field — and thus also the emission of the electrons.
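    A toy calculation illustrates how the relative phase of the two colors can shape the emission window. The sketch below adds a strong field at frequency w to a weaker field at 2w and applies a generic Fowler-Nordheim-like tunneling rate that rises exponentially with the instantaneous field; the amplitudes, barrier constant, phases, and the assumption that emission only occurs when the field points out of the tip are all illustrative, not the experimental parameters.

    import numpy as np

    w = 2 * np.pi          # fundamental angular frequency (arbitrary units, one cycle = 1)
    E1, E2 = 1.0, 0.2      # strong and weak field amplitudes (illustrative)
    C = 3.0                # barrier constant in the Fowler-Nordheim-like rate (illustrative)
    t = np.linspace(0.0, 1.0, 2000)   # one optical cycle

    def emission_rate(phi):
        """Instantaneous tunneling rate ~ exp(-C/E) for relative phase phi,
        counting only the half-cycles where the field points out of the tip."""
        field = E1 * np.cos(w * t) + E2 * np.cos(2 * w * t + phi)
        outward = np.maximum(field, 0.0)
        return np.exp(-C / (outward + 1e-9))

    for phi in (0.0, np.pi / 2, np.pi):
        rate = emission_rate(phi)
        window = np.mean(rate > 0.5 * rate.max())   # fraction of the cycle dominating emission
        print(f"relative phase {phi:4.2f} rad -> emission window ~ {window:.1%} of the cycle")
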
    Circuits a million times faster
    In the experiment, the researchers were able to determine the duration of the electron flow to 30 attoseconds — thirty billionths of a billionth of a second. This ultra-precise limitation of the emission time window could advance basic and application-related research in equal measure. “The phase shift of the two laser pulses allows us to gain deeper insights into the tunnel process and the subsequent movement of the electron in the laser field,” says Philip Dienstbier. “This enables new quantum mechanical insights into both the emission from the solid state body and the light fields used.”
    The most important field of application is light-field-driven electronics: With the proposed two-color method, the laser light can be modulated in such a way that an exactly defined sequence of electron pulses and thus of electrical signals could be generated. Dienstbier: “In the foreseeable future, it will be possible to integrate the components of our test setup — light sources, metal tip, electron detector — into a microchip.” Complex circuits with bandwidths up to the petahertz range are then conceivable — that would be almost a million times faster than current electronics.

  • Thawing permafrost may unleash industrial pollution across the Arctic

    As the Arctic’s icebound ground warms, it may unleash toxic substances across the region.

    By the end of the century, the thaw threatens to destabilize facilities at more than 2,000 industrial sites, such as mines and pipelines, and further compromise more than 5,000 already contaminated areas, researchers report March 28 in Nature Communications.

    Those numbers come from the first comprehensive study to pinpoint where Arctic permafrost thaw could release industrial pollutants. But there are probably even more contaminated areas that we don’t know about, says permafrost researcher Moritz Langer of the Alfred Wegener Institute in Potsdam, Germany. “We only see the tip of the iceberg.”

    Toxic substances released from these locations could jeopardize fish and other animals living in Arctic waterways, as well as the health of people who depend on them.

    Permafrost is any soil, sediment or rock that remains frozen for at least two years. Step on the ground in the Arctic and chances are that permafrost lies underfoot. For decades, people have treated the frozen earth as staunch and largely immobile. Industries constructed infrastructure atop its firmness, and within it they buried their refuse and sludge. In some places, scientists and others have used permafrost to store radioactive waste.

    But the Arctic is warming nearly four times as fast as the rest of the planet as a result of climate change, and as much as 65 percent of the region’s permafrost may disappear by 2100 (SN: 8/11/22).

    That could release some worrisome things, says climate scientist Kimberley Miner of NASA’s Jet Propulsion Laboratory in Pasadena, Calif., who wasn’t involved in the study. In 2021, Miner and her colleagues warned that the thawing of Arctic permafrost could release antibiotic-resistant bacteria, viruses and radioactive waste from nuclear-testing programs into the environment.

    Keen to identify where the warming could spread industrial pollutants, Langer and his colleagues first analyzed the range of Arctic permafrost and whereabouts of industrial infrastructure. They identified about 4,500 sites — including oil fields, mines and abandoned military installations — in places where permafrost probably exists. Next, the team used contamination data from Alaska and Canada — regions with accessible records — and found that as of January 2021, about 3,600 contaminated locations occupy the two regions. These include waste areas and places where pollutants were accidentally released.

    Realistically, these numbers are probably an undercount, Langer says, because many incidents of contamination have likely gone undocumented.

    Focusing on Alaska, the researchers found that diesel, gasoline and related petrochemicals make up about half of the pollutants reported. Lead, arsenic and mercury — substances toxic to fish, people and other organisms — were reported too. But in many cases, the type of pollutant was not documented. “That’s a big problem,” Langer says, in part because it makes understanding the risks of a particular leak or spill much harder.

    Using the locations of industrial sites and North American contamination data, Langer and colleagues extrapolated where industrial contamination and permafrost might coexist across the entirety of the Arctic, finding 13,000 to 20,000 such sites may exist today. Then, they used computer simulations to investigate the impact of current and future levels of climate change.

    Today, there may already be a risk of permafrost degrading at about 1,000 of the known industrial sites and 2,200 to 4,800 of the known and estimated contaminated locations, they found.

    In a low-emissions scenario in which warming rises by up to 2 degrees Celsius above preindustrial levels by the end of the century, those numbers increase to more than 2,100 industrial sites and 5,600 to 10,000 contaminated areas. An increase of about 4.3 degrees C would probably affect almost all the known and projected locations.

    “We’re going to need to think about keeping [pollutants] where they need to be,” Miner says, “not just leaving them on the landscape where we feel like.”

    The new findings are probably conservative, Langer says, partly because the analysis didn’t consider that infrastructure itself can warm the ground. What’s more, even if it doesn’t fully thaw, “warming of the permafrost causes quite a bit of problem,” says civil engineer Guy Doré of Université Laval in Quebec City, who wasn’t involved in the study. Permafrost that warms from –5° C to –2° C can lose half of its load-bearing capacity, he says, destabilizing infrastructure.

    Today, no international regulations mandate industries in the Arctic to document the substances they use and store, or what happens to them. Without that information, Langer says, it’ll be difficult to assess and manage the growing risk of contamination.

    He plans to visit decades-old oil drilling facilities in Canada to study how the changing permafrost has affected the containment of drilling fluids. “That’s the next step,” he says, “to understand better how [industrial contaminants] spread into the landscape.”

  • ‘Smart’ tech is coming to a city near you

    If you own an internet-connected “smart” device, chances are it knows a lot about your home life.
    If you raid the pantry at 2 a.m. for a snack, your smart lights can tell. That’s because they track every time they’re switched on and off.
    Your Roomba knows the size and layout of your home and sends it to the cloud. Smart speakers eavesdrop on your every word, listening for voice commands.
    But the data-driven smart tech trend also extends far beyond our kitchens and living rooms. Over the past 20 years, city governments have been partnering with tech companies to collect real-time data on daily life in our cities, too.
    In urban areas worldwide, sidewalks, streetlights and buildings are equipped with sensors that log foot traffic, driving and parking patterns, even detect and pinpoint where gunshots may have been fired.
    In Singapore, for example, thousands of sensors and cameras installed across the city track everything from crowd density and traffic congestion to smoking where it’s not allowed.

    Copenhagen uses smart air quality sensors to monitor and map pollution levels.
    A 2016 report from the National League of Cities estimates that 66% of American cities had already invested in some type of ‘smart city’ technology, from intelligent meters that collect and share data on residents’ energy or water usage to sensor-laden street lights that can detect illegally parked cars.
    Proponents say the data collected will make cities cleaner, safer, more efficient. But many Americans worry that the benefits and harms of smart city tech may not be evenly felt across communities, says Pardis Emami-Naeini, assistant professor of computer science and director of the InSPIre Lab at Duke University.
    That’s one of the key takeaways of a survey Emami-Naeini and colleagues presented April 25 at the ACM CHI Conference on Human Factors in Computing Systems (CHI 2023) in Hamburg, Germany.
    Nearly 350 people from across the United States participated in the survey. In addition, the researchers conducted qualitative interviews with 21 people aged 24 to 71 from underserved neighborhoods in Seattle that have been prioritized for smart city projects over the next 10 to 15 years.

    The study explored public attitudes on a variety of smart city technologies currently in use, from air quality sensors to surveillance cameras.
    While public awareness of smart cities was limited — most of the study respondents had never even heard of the term — researchers found that Americans have concerns about the ethical implications of the data being collected, particularly from marginalized communities.
    One of the technologies participants had significant concerns about was gunshot detection, which uses software and microphones placed around a neighborhood to detect gunfire and pinpoint its location, rather than relying solely on 911 calls to police.
    The technology is used in more than 135 cities across the U.S., including Chicago, Sacramento, Philadelphia and Durham.
    Though respondents acknowledged the potential benefits to public safety, they worried that the tech could contribute to racial disparities in policing, particularly when disproportionately installed in Black and brown neighborhoods.
    Some said the mere existence of smart city tech such as gunshot detectors or security cameras in their neighborhood could contribute to negative perceptions of safety that deter future home buyers and businesses.
    Even collecting and sharing seemingly innocuous data such as air quality raised concerns for some respondents, who worried it could potentially drive up insurance rates in poorer neighborhoods exposed to higher levels of pollution.
    In both interviews and surveys, people with lower incomes expressed more concern about the ethical implications of smart city tech than those with higher income levels.
    Emami-Naeini has spent several years studying the privacy concerns raised by smart devices and appliances in the home. But when she started asking people how they felt about the risks posed by smart tech in cities, she noticed a shift. Even when people weren’t concerned about the impacts of particular types of data collection on a personal level, she says they were still concerned about potential harms for the larger community.
    “They were concerned about how their neighborhoods would be perceived,” Emami-Naeini says. “They thought that it would widen disparities that they already see in marginalized neighborhoods.”
    Lack of attention to such concerns can hamstring smart city efforts, Emami-Naeini says.
    A proposed high-tech development in Toronto, for example, was canceled after citizens and civic leaders raised concerns about what would happen to the data collected by the neighborhood’s sensors and devices, and about how much of the city the tech company wanted to control.
    In 2017, San Diego launched a $30 million project to cover half the city with smart streetlights in an attempt to ease traffic congestion, but faced backlash after it surfaced that police had been quietly using the footage to solve crimes.
    “It’s not just a waste of resources — it damages people’s trust,” Emami-Naeini says.
    Worldwide, spending on smart city initiatives is expected to reach $203 billion by 2024. But amid the enthusiasm, Emami-Naeini says, a key component has been neglected: the needs and views of city residents.
    “There’s a lack of user-centered research on this topic, especially from a privacy and ethics perspective,” Emami-Naeini says.
    To make sure the ‘smart cities’ of the future are designed with residents firmly in mind, she says, “transparency and communication are really important.”
    Her team’s findings indicate that people want to know things like where sensors are located, what kinds of data they collect and how often, how the data will be used, who has access, whether they have the ability to opt in or opt out, and who to contact if something goes wrong.
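    One way to picture that wish list is as a machine-readable disclosure record attached to each sensor. The sketch below is purely hypothetical; the field names and example values mirror the survey findings rather than any existing smart city standard or API.

```python
# Hypothetical "sensor disclosure" record; field names simply mirror the questions
# residents raised in the study and do not come from any existing standard.
from dataclasses import dataclass

@dataclass
class SensorDisclosure:
    location: str              # where the sensor is installed
    data_collected: list[str]  # what kinds of data it collects
    collection_interval: str   # how often data are collected
    purpose: str               # how the data will be used
    data_access: list[str]     # who has access to the data
    opt_out_available: bool    # whether residents can opt in or out
    contact: str               # who to contact if something goes wrong

# Example entry for an invented air quality sensor.
example = SensorDisclosure(
    location="Streetlight #1042, 5th Ave and Pine St",
    data_collected=["air quality (PM2.5)", "ambient temperature"],
    collection_interval="every 5 minutes",
    purpose="neighborhood pollution mapping",
    data_access=["city environmental office", "public dashboard (aggregated)"],
    opt_out_available=False,
    contact="city-data-office@example.org",
)
print(example)
```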
    The researchers hope the insights generated from their research will help inform the design of smart city initiatives and keep people front and center in all stages of a project, from brainstorming to deployment.
    “Communities that come together can actually change the fate of these projects,” Emami-Naeini says. “I think it’s really important to make sure that people’s voices are being heard, proactively and not reactively.”
    This work was supported by the U.S. National Science Foundation (CNS-1565252 and CNS-2114230), the University of Washington Tech Policy Lab (which receives support from the William and Flora Hewlett Foundation, the John D. and Catherine T. MacArthur Foundation, Microsoft, and the Pierre and Pamela Omidyar Fund at the Silicon Valley Community Foundation), and gifts from Google and Woven Planet.

  • in

    How a horse whisperer can help engineers build better robots

    Humans and horses have enjoyed a strong working relationship for nearly 10,000 years — a partnership that transformed how food was produced, how people were transported, and even how wars were fought and won. Today, we look to horses for companionship and recreation, and as teammates in competitive activities like racing, dressage and showing.
    Can these age-old interactions between people and their horses teach us something about building robots designed to improve our lives? Researchers with the University of Florida say yes.
    “There are no fundamental guiding principles for how to build an effective working relationship between robots and humans,” said Eakta Jain, an associate professor of computer and information science and engineering at UF’s Herbert Wertheim College of Engineering. “As we work to improve how humans interact with autonomous vehicles and other forms of AI, it occurred to me that we’ve done this before with horses. This relationship has existed for millennia but was never leveraged to provide insights for human-robot interaction.”
    Jain, who did her doctoral work at the Robotics Institute at Carnegie Mellon University, conducted a year of field work observing the special interactions between horses and humans at the UF Horse Teaching Unit in Gainesville, Florida. She will present her findings today at the ACM Conference on Human Factors in Computing Systems in Hamburg, Germany.
    Like horses did thousands of years before, robots are entering our lives and workplaces as companions and teammates. They vacuum our floors and help educate and entertain our children, and studies show that social robots can be effective therapy tools for improving mental and physical health. Increasingly, robots are found in factories and warehouses, working collaboratively with human workers and sometimes even called co-bots.
    As a member of the UF Transportation Institute, Jain was leading the human factor subgroup that examines how humans should interact with autonomous vehicles, or AVs.

    “For the first time, cars and trucks can observe nearby vehicles and keep an appropriate distance from them as well as monitor the driver for signs of fatigue and attentiveness,” Jain said. “However, the horse has had these capabilities for a long time. I thought why not learn from our partnership with horses for transportation to help solve the problem of natural interaction between humans and AVs.”
    Looking at our history with animals to help shape our future with robots is not a new concept, though most studies have been inspired by the relationship humans have with dogs. Jain and her colleagues in the College of Engineering and UF Equine Sciences are the first to bring together engineering and robotics researchers with horse experts and trainers to conduct on-the-ground field studies with the animals.
    The multidisciplinary collaboration involved expertise in engineering, animal sciences and qualitative research methodologies, Jain explained. She first reached out to Joel McQuagge, from UF’s equine behavior and management program, who oversees the UF Horse Teaching Unit. He hadn’t thought about this connection between horses and robots, but he provided Jain with full access, and she spent months observing classes. She interviewed and observed horse experts, including thoroughbred trainers and devoted horse owners. Christina Gardner-McCune, an associate professor in UF’s department of computer and information science and engineering, provided expertise in qualitative data analysis.
    Data collected through observations and thematic analyses resulted in findings that can be applied by human-robot interaction researchers and robot designers.
    “Some of the findings are concrete and easy to visualize, while others are more abstract,” she says. “For example, we learned that a horse speaks with its body. You can see its ears pointing to where something caught its attention. We could build in similar types of nonverbal expressions in our robots, like ears that point when there is a knock on the door or something visual in the car when there’s a pedestrian on that side of the street.”
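    As a thought experiment rather than anything from the study itself, that idea of event-driven “body language” could be prototyped as a simple mapping from detected events to expressive cues; the event names and cue types below are invented for illustration.

```python
# Toy sketch of event-driven nonverbal cues, loosely inspired by the way a horse's
# ears point toward whatever caught its attention. Event names and cues are invented.
from enum import Enum

class Cue(Enum):
    EARS_POINT_LEFT = "ears point left"
    EARS_POINT_RIGHT = "ears point right"
    DASH_LIGHT_LEFT = "dashboard light on the left"
    DASH_LIGHT_RIGHT = "dashboard light on the right"

def nonverbal_cue(event: str, side: str) -> Cue:
    """Pick a body-language-style cue for a detected event and its direction."""
    if event == "knock_at_door":
        return Cue.EARS_POINT_LEFT if side == "left" else Cue.EARS_POINT_RIGHT
    if event == "pedestrian_detected":
        return Cue.DASH_LIGHT_LEFT if side == "left" else Cue.DASH_LIGHT_RIGHT
    raise ValueError(f"no cue defined for event {event!r}")

print(nonverbal_cue("pedestrian_detected", "right").value)
```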
    A more abstract and groundbreaking finding is the notion of respect. When a trainer first works with a horse, the trainer looks for signs that the horse respects its human partner.

    “We don’t typically think about respect in the context of human-robot interactions,” Jain says. “What ways can a robot show you that it respects you? Can we design behaviors similar to what the horse uses? Will that make the human more willing to work with the robot?”
    Jain, originally from New Delhi, says she grew up with robots the way people grow up with animals. Her father is an engineer who made educational and industrial robots, and her mother was a computer science teacher who ran her school’s robotics club.
    “Robots were the subject of many dinner table conversations,” she says, “so I was exposed to human-robot interactions early.”
    However, during her yearlong study of the human-horse relationship, she learned how to ride and says she hopes to own a horse one day.
    “At first, I thought I could learn by observing and talking to people,” she says. “There is no substitute for doing, though. I had to feel for myself how the horse-human partnership works. From the first time I got on a horse, I fell in love with them.”