More stories

  • This soft robot withstands crushing pressures at the ocean’s greatest depths

    Inspired by a strange fish that can withstand the punishing pressures of the deepest reaches of the ocean, scientists have devised a soft autonomous robot capable of keeping its fins flapping — even in the deepest part of the Mariana Trench.
    The team, led by roboticist Guorui Li of Zhejiang University in Hangzhou, China, successfully field-tested the robot’s ability to swim at depths ranging from 70 meters to nearly 11,000 meters, it reports March 4 in Nature.
    Challenger Deep is the lowest of the low, the deepest part of the Mariana Trench. It bottoms out at about 10,900 meters below sea level (SN: 12/11/12). The pressure from all that overlying water is about a thousand times the atmospheric pressure at sea level, translating to about 103 million pascals (or 15,000 pounds per square inch). “It’s about the equivalent of an elephant standing on top of your thumb,” says deep-sea physiologist and ecologist Mackenzie Gerringer of State University of New York at Geneseo, who was not involved in the new study.
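    As a rough sanity check on those figures, hydrostatic pressure grows as water density times gravity times depth. The short Python sketch below works through the arithmetic with nominal, round-number assumptions (not the study's measurements):

    ```python
    # Back-of-the-envelope check on the pressure at Challenger Deep.
    # Values are rough assumptions (nominal seawater density, ~10,900 m depth),
    # not measurements from the study.
    rho_seawater = 1025        # kg/m^3, typical seawater density
    g = 9.81                   # m/s^2
    depth_m = 10_900           # approximate depth of Challenger Deep

    pressure_pa = rho_seawater * g * depth_m
    print(f"{pressure_pa / 1e6:.0f} MPa")        # ~110 MPa, same order as the ~103 MPa quoted above
    print(f"{pressure_pa / 101_325:.0f} atm")    # roughly a thousand times sea-level pressure
    print(f"{pressure_pa * 0.000145:.0f} psi")   # close to the ~15,000 psi figure
    ```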

    The tremendous pressures at these hadal depths — the deepest ocean zone, between 6,000 and 11,000 meters — present a tough engineering challenge, Gerringer says. Traditional deep-sea robots or manned submersibles are heavily reinforced with rigid metal frames so as not to crumple — but these vessels are bulky and cumbersome, and the risk of structural failure remains high.
    To design robots that can maneuver gracefully through shallower waters, scientists have previously looked to soft-bodied ocean creatures, such as the octopus, for inspiration (SN: 9/17/14). As it happens, such a deep-sea muse also exists: Pseudoliparis swirei, or the Mariana hadal snailfish, a mostly squishy, translucent fish that lives as much as 8,000 meters deep in the Mariana Trench.
    In 2018, researchers described three newly discovered species of deep-sea snailfish (one shown) found in the Pacific Ocean’s Atacama Trench, living at depths down to about 7,500 meters. Also found in the Mariana Trench, such fish are well adapted for living in high-pressure, deep-sea environments, with only partially hardened skulls and soft, streamlined, energy-efficient bodies. Credit: Newcastle University
    Gerringer, one of the researchers who first described the deep-sea snailfish in 2014, constructed a 3-D printed soft robot version of it several years later to better understand how it swims. Her robot contained a synthesized version of the watery goo inside the fish’s body that most likely adds buoyancy and helps it swim more efficiently (SN: 1/3/18).
    But devising a robot that can swim under extreme pressure to investigate the deep-sea environment is another matter. Autonomous exploration robots require electronics not only to power their movement, but also to perform various tasks, whether testing water chemistry, lighting up and filming the denizens of deep ocean trenches, or collecting samples to bring back to the surface. Under the squeeze of water pressure, these electronics can grind against one another.
    So Li and his colleagues decided to borrow one of the snailfish’s adaptations to high-pressure life: Its skull is not completely fused together with hardened bone. That extra bit of malleability allows the pressure on the skull to equalize. In a similar vein, the scientists decided to distribute the electronics — the “brain” — of their robot fish farther apart than they normally would, and then encase them in soft silicone to keep them from touching.
    The design of the new soft robot (left) was inspired by the deep-sea snailfish (illustrated, right), which is adapted to live in the very high-pressure environments of the deepest parts of the ocean. The snailfish’s skull is incompletely ossified, or hardened, which allows external and internal pressures to equalize. Spreading apart the robot’s sensitive electronics and encasing them in silicone keeps the parts from squeezing together. The robot’s flapping fins are inspired by the thin pectoral fins of the fish (although the real fish doesn’t use its fins to swim). Credit: Li et al/Nature 2021
    The team also designed a soft body that slightly resembles the snailfish, with two fins that the robot can use to propel itself through the water. (Gerringer notes that the actual snailfish doesn’t flap its fins, but wriggles its body like a tadpole.) To flap the fins, the robot is equipped with batteries that power artificial muscles: electrodes sandwiched between two membranes that deform in response to the electrical charge.
    The team tested the robot in several environments: 70 meters deep in a lake; about 3,200 meters deep in the South China Sea; and finally, at the very bottom of the ocean. The robot was allowed to swim freely in the first two trials. For the Challenger Deep trial, however, the researchers kept a tight grip, using the extendable arm of a deep-sea lander to hold the robot while it flapped its fins.
    This machine “pushes the boundaries of what can be achieved” with biologically inspired soft robots, write roboticists Cecilia Laschi of the National University of Singapore and Marcello Calisti of the University of Lincoln in England. The pair have a commentary on the research in the same issue of Nature. That said, the machine is still a long way from deployment, they note. It swims more slowly than other underwater robots, and doesn’t yet have the strength to withstand powerful underwater currents. But it “lays the foundations” for future such robots to help answer lingering questions about these mysterious reaches of the ocean, they write.
    Researchers successfully ran a soft autonomous robot through several field tests at different depths in the ocean. At 3,224 meters deep in the South China Sea, the tests demonstrated that the robot could swim autonomously (free swim test). The team also tested the robot’s ability to move under even the most extreme pressures in the ocean. A deep-sea lander’s extendable arm held the robot as it flapped its fins at a depth of 10,900 meters in the Challenger Deep, the lowest part of the Mariana Trench (extreme pressure test). These tests suggest that such robots may, in future, be able to aid in autonomous exploration of the deepest parts of the ocean, the researchers say.
    Deep-sea trenches are known to be teeming with microbial life, which happily feeds on the bonanza of organic material — from algae to animal carcasses — that finds its way to the bottom of the sea. That microbial activity hints that the trenches may play a significant role in Earth’s carbon cycle, which is in turn linked to the planet’s regulation of its climate.
    The discovery of microplastics in Challenger Deep is also incontrovertible evidence that even the bottom of the ocean isn’t really that far away, Gerringer says (SN: 11/20/20). “We’re impacting these deep-water systems before we’ve even found out what’s down there. We have a responsibility to help connect these seemingly otherworldly systems, which are really part of our planet.”

  • Helping soft robots turn rigid on demand

    Imagine a robot.
    Perhaps you’ve just conjured a machine with a rigid, metallic exterior. While robots armored with hard exoskeletons are common, they’re not always ideal. Soft-bodied robots, inspired by fish or other squishy creatures, might better adapt to changing environments and work more safely with people.
    Roboticists generally have to decide whether to design a hard- or soft-bodied robot for a particular task. But that tradeoff may no longer be necessary.
    Working with computer simulations, MIT researchers have developed a concept for a soft-bodied robot that can turn rigid on demand. The approach could enable a new generation of robots that combine the strength and precision of rigid robots with the fluidity and safety of soft ones.
    “This is the first step in trying to see if we can get the best of both worlds,” says James Bern, the paper’s lead author and a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
    Bern will present the research at the IEEE International Conference on Soft Robotics next month. Bern’s advisor, Daniela Rus, who is the CSAIL director and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, is the paper’s other author.

    Roboticists have experimented with myriad mechanisms to operate soft robots, including inflating balloon-like chambers in a robot’s arm or grabbing objects with vacuum-sealed coffee grounds. However, a key unsolved challenge for soft robotics is control — how to drive the robot’s actuators in order to achieve a given goal.
    Until recently, most soft robots were controlled manually, but in 2017 Bern and his colleagues proposed that an algorithm could take the reins. Using a simulation to help control a cable-driven soft robot, they picked a target position for the robot and had a computer figure out how much to pull on each of the cables in order to get there. A similar sequence happens in our bodies each time we reach for something: A target position for our hand is translated into contractions of the muscles in our arm.
    Now, Bern and his colleagues are using similar techniques to ask a question that goes beyond the robot’s movement: “If I pull the cables in just the right way, can I get the robot to act stiff?” Bern says he can — at least in a computer simulation — thanks to inspiration from the human arm. While contracting the biceps alone can bend your elbow to a certain degree, contracting the biceps and triceps simultaneously can lock your arm rigidly in that position. Put simply, “you can get stiffness by pulling on both sides of something,” says Bern. So, he applied the same principle to his robots.
    The researchers’ paper lays out a way to simultaneously control the position and stiffness of a cable-driven soft robot. The method takes advantage of the robots’ multiple cables — using some to twist and turn the body, while using others to counterbalance each other to tweak the robot’s rigidity. Bern emphasizes that the advance isn’t a revolution in mechanical engineering, but rather a new twist on controlling cable-driven soft robots.
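    The co-contraction idea can be illustrated with a toy one-dimensional model: a joint pulled by two antagonistic cables, where the difference in tensions sets its position and their sum sets how stiffly it resists a push. The sketch below is a hand-rolled illustration of that principle under a simple linear tension-to-stiffness assumption; it is not the controller described in the paper:

    ```python
    # Toy 1-D illustration of antagonistic cables: the difference between the two
    # tensions sets the joint's rest position, while their sum sets how stiffly it
    # resists a disturbance. Linear model and numbers are illustrative assumptions.

    def joint_response(t_left, t_right, push_torque, radius=0.01, k_per_newton=2.0):
        """Return (rest_angle, angle_after_push) in radians for given cable tensions."""
        stiffness = k_per_newton * (t_left + t_right)          # co-contraction -> stiffer joint
        rest_angle = radius * (t_right - t_left) / stiffness   # tension imbalance rotates the joint
        deflection = push_torque / stiffness                   # how far an external push moves it
        return rest_angle, rest_angle + deflection

    # Same rest position, very different rigidity:
    print(joint_response(t_left=1.0,  t_right=1.0,  push_torque=0.05))   # low tensions: soft
    print(joint_response(t_left=10.0, t_right=10.0, push_torque=0.05))   # high tensions: stiff
    ```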
    “This is an intuitive way of expanding how you can control a soft robot,” he says. “It’s just encoding that idea [of on-demand rigidity] into something a computer can work with.” Bern hopes his roadmap will one day allow users to control a robot’s rigidity as easily as its motion.
    On the computer, Bern used his roadmap to simulate movement and rigidity adjustment in robots of various shapes. He tested how well the robots, when stiffened, could resist displacement when pushed. Generally, the robots remained rigid as intended, though they were not equally resistant from all angles.
    Bern is building a prototype robot to test out his rigidity-on-demand control system. But he hopes to one day take the technology out of the lab. “Interacting with humans is definitely a vision for soft robotics,” he says. Bern points to potential applications in caring for human patients, where a robot’s softness could enhance safety, while its ability to become rigid could allow for lifting when necessary.
    “The core message is to make it easy to control robots’ stiffness,” says Bern. “Let’s start making soft robots that are safe but can also act rigid on demand, and expand the spectrum of tasks robots can perform.”

  • New generation of tiny, agile drones introduced

    If you’ve ever swatted a mosquito away from your face, only to have it return again (and again and again), you know that insects can be remarkably acrobatic and resilient in flight. Those traits help them navigate the aerial world, with all of its wind gusts, obstacles, and general uncertainty. Such traits are also hard to build into flying robots, but MIT Assistant Professor Kevin Yufeng Chen has built a system that approaches insects’ agility.
    Chen, a member of the Department of Electrical Engineering and Computer Science and the Research Laboratory of Electronics, has developed insect-sized drones with unprecedented dexterity and resilience. The aerial robots are powered by a new class of soft actuator, which allows them to withstand the physical travails of real-world flight. Chen hopes the robots could one day aid humans by pollinating crops or performing machinery inspections in cramped spaces.
    Chen’s work appears this month in the journal IEEE Transactions on Robotics. His co-authors include MIT PhD student Zhijian Ren, Harvard University PhD student Siyi Xu, and City University of Hong Kong roboticist Pakpong Chirarattananon.
    Typically, drones require wide open spaces because they’re neither nimble enough to navigate confined spaces nor robust enough to withstand collisions in a crowd. “If we look at most drones today, they’re usually quite big,” says Chen. “Most of their applications involve flying outdoors. The question is: Can you create insect-scale robots that can move around in very complex, cluttered spaces?”
    According to Chen, “The challenge of building small aerial robots is immense.” Pint-sized drones require a fundamentally different construction from larger ones. Large drones are usually powered by motors, but motors lose efficiency as you shrink them. So, Chen says, for insect-like robots “you need to look for alternatives.”
    The principal alternative until now has been employing a small, rigid actuator built from piezoelectric ceramic materials. While piezoelectric ceramics allowed the first generation of tiny robots to take flight, they’re quite fragile. And that’s a problem when you’re building a robot to mimic an insect — foraging bumblebees endure a collision about once every second.
    Chen designed a more resilient tiny drone using soft actuators instead of hard, fragile ones. The soft actuators are made of thin rubber cylinders coated in carbon nanotubes. When voltage is applied to the carbon nanotubes, they produce an electrostatic force that squeezes and elongates the rubber cylinder. Repeated elongation and contraction causes the drone’s wings to beat — fast.
    Chen’s actuators can flap nearly 500 times per second, giving the drone insect-like resilience. “You can hit it when it’s flying, and it can recover,” says Chen. “It can also do aggressive maneuvers like somersaults in the air.” And it weighs in at just 0.6 grams, approximately the mass of a large bumble bee. The drone looks a bit like a tiny cassette tape with wings, though Chen is working on a new prototype shaped like a dragonfly.
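    One way to see why the flapping tracks the drive signal: the electrostatic (Maxwell) stress on a soft dielectric actuator scales roughly with the square of the applied voltage, so a purely sinusoidal drive produces strain at twice the drive frequency. The sketch below illustrates that relationship with made-up numbers; the drive scheme and values are assumptions for illustration, not the parameters reported by Chen's team:

    ```python
    # Illustrative sketch: normalized actuator strain taken as proportional to the
    # square of a sinusoidal drive voltage. Numbers are assumptions, not the paper's.
    import math

    drive_freq_hz = 250     # with a strain ~ V^2 response, this yields ~500 contractions per second
    v_peak = 1.0            # normalized drive amplitude

    for step in range(6):
        t = step / (drive_freq_hz * 8)                          # a few samples across one drive cycle
        v = v_peak * math.sin(2 * math.pi * drive_freq_hz * t)  # drive voltage
        strain = v ** 2                                         # normalized elongation, proportional to V^2
        print(f"t = {t * 1000:.2f} ms  strain ≈ {strain:.2f}")
    ```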
    Building insect-like robots can provide a window into the biology and physics of insect flight, a longstanding avenue of inquiry for researchers. Chen’s work addresses these questions through a kind of reverse engineering. “If you want to learn how insects fly, it is very instructive to build a scale robot model,” he says. “You can perturb a few things and see how it affects the kinematics or how the fluid forces change. That will help you understand how those things fly.” But Chen aims to do more than add to entomology textbooks. His drones can also be useful in industry and agriculture.
    Chen says his mini-aerialists could navigate complex machinery to ensure safety and functionality. “Think about the inspection of a turbine engine. You’d want a drone to move around [an enclosed space] with a small camera to check for cracks on the turbine plates.”
    Other potential applications include artificial pollination of crops or completing search-and-rescue missions following a disaster. “All those things can be very challenging for existing large-scale robots,” says Chen. Sometimes, bigger isn’t better.

    Story Source:
    Materials provided by Massachusetts Institute of Technology. Original written by Daniel Ackerman. Note: Content may be edited for style and length.

  • Environmental impact of computation and the future of green computing

    When you think about your carbon footprint, what comes to mind? Driving and flying, probably. Perhaps home energy consumption or those daily Amazon deliveries. But what about watching Netflix or having Zoom meetings? Ever thought about the carbon footprint of the silicon chips inside your phone, smartwatch or the countless other devices inside your home?
    Every aspect of modern computing, from the smallest chip to the largest data center comes with a carbon price tag. For the better part of a century, the tech industry and the field of computation as a whole have focused on building smaller, faster, more powerful devices — but few have considered their overall environmental impact.
    Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) are trying to change that.
    “Over the next decade, the demand, number and types of devices are only going to grow,” said Udit Gupta, a PhD candidate in Computer Science at SEAS. “We want to know what impact that will have on the environment and how we, as a field, should be thinking about how we adopt more sustainable practices.”
    Gupta, along with Gu-Yeon Wei, the Robert and Suzanne Case Professor of Electrical Engineering and Computer Science, and David Brooks, the Haley Family Professor of Computer Science, will present a paper on the environmental footprint of computing at the IEEE International Symposium on High-Performance Computer Architecture on March 3rd, 2021.
    The SEAS research is part of a collaboration with Facebook, where Gupta is an intern, and Arizona State University.

    The team not only explored every aspect of computing, from chip architecture to data center design, but also mapped the entire lifetime of a device, from manufacturing to recycling, to identify the stages where the most emissions occur.
    The team found that most emissions related to modern mobile and data-center equipment come from hardware manufacturing and infrastructure.
    “A lot of the focus has been on how we reduce the amount of energy used by computers, but we found that it’s also really important to think about the emissions from just building these processors,” said Brooks. “If manufacturing is really important to emissions, can we design better processors? Can we reduce the complexity of our devices so that manufacturing emissions are lower?”
    Take chip design, for example.
    Today’s chips are optimized for size, performance and battery life. The typical chip is about 100 square millimeters of silicon and houses billions of transistors. But at any given time, only a portion of that silicon is being used. In fact, if all the transistors were fired up at the same time, the device would exhaust its battery life and overheat. This so-called dark silicon improves a device’s performance and battery life but it’s wildly inefficient if you consider the carbon footprint that goes into manufacturing the chip.

    “You have to ask yourself, what is the carbon impact of that added performance,” said Wei. “Dark silicon offers a boost in energy efficiency but what’s the cost in terms of manufacturing? Is there a way to design a smaller and smarter chip that uses all of the silicon available? That is a really intricate, interesting, and exciting problem.”
    The same issues face data centers. Today, data centers, some of which span many millions of square feet, account for 1 percent of global energy consumption, a number that is expected to grow.
    As cloud computing continues to grow, decisions about where to run applications — on a device or in a data center — are being made based on performance and battery life, not carbon footprint.
    “We need to be asking what’s greener, running applications on the device or in a data center,” said Gupta. “These decisions must optimize for global carbon emissions by taking into account application characteristics, efficiency of each hardware device, and varying power grids over the day.”
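    A minimal sketch of that kind of decision: estimate the emissions of running a job on the device versus shipping it to a data center, using the energy each option consumes and the carbon intensity of the grid powering it. Every name and number below is an illustrative placeholder, not a figure from the paper:

    ```python
    # Sketch of a carbon-aware placement decision: compare estimated emissions for
    # running a job on the device versus offloading it (including network transfer),
    # then pick the lower one. Values are illustrative assumptions.

    def emissions_g(energy_kwh, grid_intensity_g_per_kwh):
        return energy_kwh * grid_intensity_g_per_kwh

    def choose_location(device_kwh, datacenter_kwh, transfer_kwh,
                        device_grid, datacenter_grid):
        on_device = emissions_g(device_kwh, device_grid)
        offloaded = emissions_g(datacenter_kwh, datacenter_grid) + emissions_g(transfer_kwh, device_grid)
        return "device" if on_device <= offloaded else "data center"

    # A data center on a cleaner grid can win even after paying for the transfer.
    print(choose_location(device_kwh=0.002, datacenter_kwh=0.001, transfer_kwh=0.0003,
                          device_grid=500, datacenter_grid=200))
    ```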
    The researchers are also challenging industry to look at the chemicals used in manufacturing.
    Adding environmental impact to the parameters of computational design requires a massive cultural shift in every level of the field, from undergraduate CS students to CEOs.
    To that end, Brooks has partnered with Embedded EthiCS, a Harvard program that embeds philosophers directly into computer science courses to teach students how to think through the ethical and social implications of their work. Brooks is including an Embedded EthiCS module on computational sustainability in COMPSCI 146: Computer Architecture this spring.
    The researchers also hope to partner with faculty from Environmental Science and Engineering at SEAS and the Harvard University Center for the Environment to explore how to enact change at the policy level.
    “The goal of this paper is to raise awareness of the carbon footprint associated with computing and to challenge the field to add carbon footprint to the list of metrics we consider when designing new processes, new computing systems, new hardware, and new ways to use devices. We need this to be a primary objective in the development of computing overall,” said Wei.
    The paper was co-authored by Sylvia Lee, Jordan Tse, Hsien-Hsin S. Lee and Carole-Jean Wu from Facebook and Young Geun Kim from Arizona State University.

  • A quantum internet is closer to reality, thanks to this switch

    When quantum computers become more powerful and widespread, they will need a robust quantum internet to communicate.
    Purdue University engineers have addressed an issue barring the development of quantum networks that are big enough to reliably support more than a handful of users.
    The method, demonstrated in a paper published in Optica, could help lay the groundwork for when a large number of quantum computers, quantum sensors and other quantum technology are ready to go online and communicate with each other.
    The team deployed a programmable switch to adjust how much data goes to each user by selecting and redirecting wavelengths of light carrying the different data channels, making it possible to increase the number of users without adding to photon loss as the network gets bigger.
    If photons are lost, quantum information is lost — a problem that tends to happen the farther photons have to travel through fiber optic networks.
    “We show a way to do wavelength routing with just one piece of equipment — a wavelength-selective switch — to, in principle, build a network of 12 to 20 users, maybe even more,” said Andrew Weiner, Purdue’s Scifres Family Distinguished Professor of Electrical and Computer Engineering. “Previous approaches have required physically interchanging dozens of fixed optical filters tuned to individual wavelengths, which made the ability to adjust connections between users not practically viable and photon loss more likely.”
    Instead of needing to add these filters each time that a new user joins the network, engineers could just program the wavelength-selective switch to direct data-carrying wavelengths over to each new user — reducing operational and maintenance costs as well as making a quantum internet more efficient.
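    Conceptually, the switch replaces a rack of fixed filters with a routing table that software can rewrite. The Python sketch below illustrates that idea with a toy channel assignment; the data structures and channel counts are assumptions for illustration, not the Purdue hardware's interface:

    ```python
    # Toy sketch of programmable wavelength routing: a software-controlled table
    # stands in for fixed optical filters, so adding a user means rewriting the
    # table rather than installing hardware. Names and counts are illustrative.

    routing_table = {}                      # wavelength channel index -> user

    def add_user(name, n_channels, free_channels):
        """Reprogram the switch to send n_channels from the free pool to a new user."""
        assigned = [free_channels.pop() for _ in range(n_channels)]
        for ch in assigned:
            routing_table[ch] = name
        return assigned

    free = list(range(16))                  # pretend the switch exposes 16 channels
    print(add_user("alice", 2, free))       # a heavier user gets two channels
    print(add_user("bob", 1, free))
    print(routing_table)
    ```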

    The wavelength-selective switch also can be programmed to adjust bandwidth according to a user’s needs, which has not been possible with fixed optical filters. Some users may be using applications that require more bandwidth than others, much as watching shows through a web-based streaming service uses more bandwidth than sending an email.
    For a quantum internet, forming connections between users and adjusting bandwidth means distributing entanglement among the users in a network. Entanglement is the ability of photons to maintain a fixed quantum mechanical relationship with one another no matter how far apart they may be, and it plays a key role in quantum computing and quantum information processing.
    “When people talk about a quantum internet, it’s this idea of generating entanglement remotely between two different stations, such as between quantum computers,” said Navin Lingaraju, a Purdue Ph.D. student in electrical and computer engineering. “Our method changes the rate at which entangled photons are shared between different users. These entangled photons might be used as a resource to entangle quantum computers or quantum sensors at the two different stations.”
    Purdue researchers performed the study in collaboration with Joseph Lukens, a research scientist at Oak Ridge National Laboratory. The wavelength-selective switch that the team deployed is based on similar technology used for adjusting bandwidth for today’s classical communication.
    The switch is also capable of using a “flex grid,” as classical lightwave communication systems now do, to partition bandwidth among users at a variety of wavelengths and locations, rather than being restricted to a series of fixed wavelengths, each of which would have a fixed bandwidth, or information-carrying capacity, at a fixed location.
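    The flex-grid idea can be sketched in a few lines: carve one continuous band of spectrum into variable-width slices sized to each user's demand, rather than handing every user an identical fixed-width channel. The band edges and demand numbers below are illustrative assumptions:

    ```python
    # Minimal sketch of flex-grid allocation: slice widths are proportional to each
    # user's demand. Band edges (in THz) and demands are illustrative assumptions.

    def flex_grid(band_start_thz, band_width_thz, demands):
        """Return {user: (slice_start_THz, slice_end_THz)} with widths proportional to demand."""
        total = sum(demands.values())
        slices, cursor = {}, band_start_thz
        for user, demand in demands.items():
            width = band_width_thz * demand / total
            slices[user] = (round(cursor, 4), round(cursor + width, 4))
            cursor += width
        return slices

    print(flex_grid(193.1, 0.4, {"alice": 3, "bob": 1, "carol": 1}))
    ```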
    “For the first time, we are trying to take something sort of inspired by these classical communications concepts using comparable equipment to point out the potential advantages it has for quantum networks,” Weiner said.
    The team is working on building larger networks using the wavelength-selective switch. The work was funded by the U.S. Department of Energy, the National Science Foundation and Oak Ridge National Laboratory.

    Story Source:
    Materials provided by Purdue University. Original written by Kayla Wiles. Note: Content may be edited for style and length.

  • New research highlights impact of the digital divide

    The coronavirus pandemic has drawn new attention to the digital divide, as the need for online schooling and working from home has disproportionately hurt those without computer equipment and skills.
    Research by Paul A. Pavlou, dean of the C. T. Bauer College of Business at the University of Houston, found that people with basic Information Technology (IT) skills — including the ability to use email, copy and paste files and work with an Excel spreadsheet — are more likely to be employed, even in jobs that aren’t explicitly tied to those skills.
    People with more advanced IT skills generally earned higher salaries, the researchers found. The work is described in Information Systems Research.
    “Unemployment and low wages remain pressing societal challenges in the wake of increased automation, more so for traditionally-disadvantaged groups in the labor market, such as women, minorities, and the elderly,” the researchers wrote. “However, workers who possess relevant IT skills might have an edge in an increasingly digital economy.”
    The findings, Pavlou said, reinforce the need for robust public policy to ensure people, especially women, older workers and others who are more likely to face employment discrimination, have the basic IT skills needed for the modern working world, since few companies provide on-the-job training in those skills.
    “Very few people can get these skills from their employer. Workers are expected to obtain these IT skills themselves, in order to get a job in the first place,” he said. “And the less privileged they are, the harder time they have obtaining these skills, which require computer equipment and internet access.”
    That leaves many workers, especially from under-represented populations in the labor market, unable to even apply for work, as more job applications — and now, interviews — are handled online.
    In addition to Pavlou, co-authors on the paper include Hilal Atasoy of Rutgers University and Rajiv Banker from Temple University.
    The analysis was conducted using two datasets from the Turkish Statistical Institute, and Pavlou said the findings are especially relevant for the developing world, where people are less likely to have IT skills and access to computer equipment than they are in the United States.
    But the pandemic has laid bare unequal access to technology in the United States, too, as schools and universities struggle to provide students with computers, internet hotspots and other equipment to continue their educations online.
    The work thus has implications for marginalized workers in the United States and other developed countries, Pavlou said. That includes women and older workers, who are more likely to opt out of the labor force if they cannot work from home — jobs that are more likely to require at least basic tech savvy.
    “The digital divide is a major societal problem,” Pavlou said. “I think the pandemic will make it even more pronounced. People with basic IT skills will have access to more opportunities, and it is imperative for educational institutions to provide these IT skills, especially in traditionally-disadvantaged populations.”

    Story Source:
    Materials provided by University of Houston. Note: Content may be edited for style and length.

  • Study highlights pitfalls associated with 'cybervetting' job candidates

    A recent study of how human resources professionals review online information and social media profiles of job candidates highlights the ways in which so-called “cybervetting” can introduce bias and moral judgment into the hiring process.
    “The study drives home that cybervetting is ultimately assessing each job candidate’s moral character,” says Steve McDonald, corresponding author of the study and a professor of sociology at North Carolina State University. “It is equally clear that many of the things hiring professionals are looking at make it more likely for bias to play a role in hiring.”
    For this study, the researchers conducted in-depth interviews with 61 human resources professionals involved in recruitment and hiring across many industries. Study participants ranged from in-house HR staff to executive recruitment consultants to professionals at staffing agencies.
    “One of the things that cropped up repeatedly was that cybervetting not only judges people’s behavior, but how that behavior is presented,” says Amanda Damarin, co-author of the paper and an associate professor of sociology at Georgia State University. “For example, one participant noted that his organization had no problem with employees drinking alcohol, but did not want to see any photos of alcohol in an employee’s social media feed.
    “There’s a big disconnect here. On the one hand, HR professionals view social media as being an ‘authentic’ version of who people really are; but those same HR professionals are also demanding that people carefully curate how they present themselves on social media.”
    “It was also clear that people were rarely looking for information related to job tasks — a point some study participants brought up themselves,” McDonald says. “And the things they did look for reflected their explicit or implicit biases.”
    For example, study participants referenced looking for things like posts about hiking and family photos of Christmas. But most people who hike are white, and most people who post Christmas photos are Christians. Study participants also expressed a preference for online profiles that signaled “active” and “energetic” lifestyles, which could lead to discrimination against older or disabled job seekers.
    And it was often unclear what job candidates could do to address concerns about bias in cybervetting. For example, while many study participants noted that putting a photo online created the opportunity for bias to affect the hiring process, other study participants noted that not having a “professional” profile picture was in itself a “red flag.”
    “Some workers have a social media profile that sends the right signals and can take advantage of cybervetting,” McDonald says. “But for everyone else, they are not only at a disadvantage, but they don’t even know they are at a disadvantage — much less why they are at a disadvantage. Because they don’t necessarily know what employers are looking for.”
    “Some of the people we interviewed were very aware that cybervetting could lead to increased bias; some even avoided cybervetting for that reason,” Damarin says. “But others were enthusiastic about its use.”
    Researchers say one of the key takeaways from the work is that there need to be clear guidelines or best practices for the use of cybervetting, if it is going to be used at all.
    “The second takeaway is that the biases and moral judgments we are hearing about from these HR professionals are almost certainly being incorporated into software programs designed to automate the review of job candidates,” McDonald says. “These prejudices will simply be baked into the algorithms, making them a long-term problem for both organizations and job seekers.”

    Story Source:
    Materials provided by North Carolina State University. Original written by Matt Shipman. Note: Content may be edited for style and length.

  • Human instinct can be as useful as algorithms in detecting online 'deception'

    Travellers looking to book a hotel should trust their gut instinct when it comes to online reviews rather than relying on computer algorithms to weed out the fake ones, a new study suggests.
    Research, led by the University of York in collaboration with Nanyang Technological University, Singapore, shows the challenges of online ‘fake’ reviews for both users and computer algorithms. It suggests that a greater awareness of the linguistic characteristics of ‘fake’ reviews can allow online users to spot the ‘real’ from the ‘fake’ for themselves.
    Dr Snehasish Banerjee, Lecturer in Marketing from the University of York’s Management School, said: “Reading and writing online reviews of hotels, restaurants, venues and so on, is a popular activity for online users, but alongside this, ‘fake’ reviews have also increased.
    “Companies can now use computer algorithms to distinguish the ‘fake’ from the ‘real’ with a good level of accuracy, but the extent to which company websites use these algorithms is unclear and so some ‘fake’ reviews slip through the net.
    “We wanted to understand whether human analysis was capable of filling this gap and whether more could be done to educate online users on how to approach these reviews.”
    The researchers tasked 380 people to respond to questions about three hotel reviews — some authentic, others fake — based on their perception of the reviews. The users could rely on the same cues that computer algorithms use to discern ‘fake’ reviews, which include the number of superlatives in the review, the level of detail, whether it was easy to read, and whether it appeared noncommittal.
    For those who were already sceptical of online reviews, this was a relatively straightforward task, but most could not identify factors such as ‘easy to read’ and ‘non-committal’ the way a computer algorithm could. In the absence of this skill, the participants relied on ‘gut instinct’.
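    The cues described above are straightforward to compute mechanically, which is partly why algorithms handle them well. The sketch below scores a review on crude proxies for superlatives, hedging, detail and readability; the word lists and metrics are illustrative assumptions, not the features used in the study:

    ```python
    # Hand-rolled sketch of surface cues for review screening: superlative counts,
    # noncommittal hedging, and crude proxies for detail and readability.
    # Word lists and metrics are illustrative assumptions, not the study's features.
    import re

    SUPERLATIVES = {"best", "amazing", "perfect", "worst", "incredible", "excellent"}
    HEDGES = {"maybe", "perhaps", "probably", "seemed", "might"}

    def review_cues(text):
        words = re.findall(r"[a-z']+", text.lower())
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        return {
            "superlatives": sum(w in SUPERLATIVES for w in words),
            "hedges": sum(w in HEDGES for w in words),                 # noncommittal language
            "unique_words": len(set(words)),                           # stand-in for level of detail
            "avg_sentence_len": len(words) / max(len(sentences), 1),   # stand-in for readability
        }

    print(review_cues("Best hotel ever! The staff seemed nice, maybe the best breakfast. Amazing!"))
    ```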
    Dr Banerjee said: “The outcomes were surprisingly effective. We often assume that the human brain is no match for a computer, but in actual fact there are certain things we can do to train the mind in approaching some aspects of life differently.
    “Following this study, we are recommending that people need to curb their instincts on truth and deception bias — the tendency to either approach online content with the assumption that it is all true or all fake respectively — as neither method works in the online environment.
    “Online users often fail to detect fake reviews because they do not proactively look for deception cues. There is a need to change this default review-reading habit, and if the new habit is practised long enough, they will eventually be able to rely on their gut instinct for fake review detection.”
    The research also reminds businesses that ethical standards should be upheld to ensure that genuine experiences of their services are reflected online.

    Story Source:
    Materials provided by University of York. Note: Content may be edited for style and length.