More stories


    Extending battery life in smartphones, electric cars

    A University of Central Florida researcher is working to make portable devices and electric vehicles stay charged longer by extending the life of the rechargeable lithium-ion batteries powering them.
    Assistant Professor Yang Yang is doing this by making the batteries more efficient, with some of his latest work focusing on keeping an internal metal structure, the anode, from falling apart over time by applying a thin, film-like coating of copper and tin. The new technique is detailed in a recent study in the journal Advanced Materials.
    An anode releases electrons that travel to a companion structure inside the battery, the cathode, creating the current that powers the device.
    “Our work has shown that the rate of degradation of the anode can be reduced by more than 1,000 percent by using a copper-tin film compared to a tin film that is often used,” said Yang, who is with UCF’s NanoScience Technology Center.
    Yang is an expert in improving batteries, including making them safer and able to withstand extreme temperatures.
    The technique is unique because of its use of the copper-tin alloy and is an important improvement in stabilizing rechargeable battery performance, Yang says.
    It is also scalable, from the smallest smartphone battery up to the larger batteries that power electric cars and trucks.
    “We are motivated by our most recent research progress in alloyed materials for various applications,” he says. “Each alloy is unique in composition, structure and property.”

    Story Source:
    Materials provided by University of Central Florida. Original written by Robert Wells. Note: Content may be edited for style and length.


    Divide and conquer: A new formula to minimise 'mathemaphobia'

    Maths — it’s the subject some kids love to hate, yet despite its lack of popularity, mathematics is critical for a STEM-capable workforce and vital for Australia’s current and future productivity.
    In a new study by the University of South Australia in collaboration with the Australian Council for Educational Research, researchers have been exploring the impact of anxiety on learning maths, finding that boosting student confidence is pivotal to greater engagement with the subject.
    Maths anxiety, or ‘mathemaphobia’, is the sense of fear, worry and nervousness that students may experience when participating in mathematical tasks.
    A quarter to a third of Australian secondary students report feeling tense, nervous or helpless when doing maths, and it's this reaction that shapes their decisions about whether to keep studying it.
    Lead researcher Dr Florence Gabriel says maths anxiety is one of the biggest barriers to students choosing to study maths, especially at senior school and tertiary levels.
    “Many of us would have felt some sort of maths anxiety in the past — a sense of panic or worry, feelings of failure, or even a faster heart rate — all of which are associated with stress,” Dr Gabriel says.


    “Maths anxiety is essentially an emotional reaction, but it’s just like stress in other situations.
    “When students experience maths anxiety, they’ll tend to hurry through maths questions, lose focus, or simply give up when it all seems too hard. Not surprisingly, these reactions compound and lead to poor maths achievement — and later a reluctance to engage with the subject at all.
    “To break this cycle, our research shows that we need to build and grow student confidence in maths, especially before starting a new maths concept.
    “This draws on the notion of self-regulated learning, where students have the ability to understand, track and control their own learning.
    “By drawing a student’s attention to instances where they’ve previously overcome a difficult maths challenge, or to a significant maths success, we’re essentially building their confidence and belief in their own abilities, and it’s this that will start to counteract negative emotions.”
    The study assessed the responses of 4295 Australian 15-year-old students who participated in the 2012 cycle of the OECD’s Programme for International Student Assessment (PISA).


    It focussed on the psychological factors of maths learning: motivation (the belief that maths is important and useful for their future); maths self-concept (the belief in their ability to do maths); maths anxiety (self-feelings when doing maths); perseverance (their willingness to continue to work on difficult problems); maths self-efficacy (their self-belief that they can successfully solve maths problems); and maths literacy (the ability to apply maths to the real world).
    “Importantly, our research shows the domino effect that these variables have on one another,” Dr Gabriel says.
    “Through structural equation modelling, our data shows that low motivation and self-concept will lead to maths anxiety, which in turn affects perseverance, self-efficacy and, ultimately, maths achievement.
    “By developing a student’s ability to reflect on past successes — before maths anxiety sets in — we can break through some of the negative and emotional beliefs about maths and, hopefully, pave the way for students to accept and engage with maths in the future.”
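    The "domino effect" Dr Gabriel describes can be made concrete with a small worked example. Below is a minimal sketch using simulated data and chained ordinary least-squares regressions in place of the study's full structural equation model; the variable names, coefficients and sample values are illustrative assumptions, not the study's estimates.

    ```python
    # Illustrative sketch of the mediation chain described in the study:
    # motivation / self-concept -> maths anxiety -> self-efficacy -> achievement.
    # Simulated data; chained OLS regressions stand in for the full SEM.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 4295  # same size as the PISA sample analysed in the study
    motivation = rng.normal(size=n)
    self_concept = rng.normal(size=n)
    anxiety = -0.4 * motivation - 0.5 * self_concept + rng.normal(size=n)
    self_efficacy = -0.6 * anxiety + rng.normal(size=n)
    achievement = 0.7 * self_efficacy - 0.2 * anxiety + rng.normal(size=n)

    df = pd.DataFrame({"motivation": motivation, "self_concept": self_concept,
                       "anxiety": anxiety, "self_efficacy": self_efficacy,
                       "achievement": achievement})

    # Estimate each link of the chain separately.
    for formula in ["anxiety ~ motivation + self_concept",
                    "self_efficacy ~ anxiety",
                    "achievement ~ self_efficacy + anxiety"]:
        print(smf.ols(formula, data=df).fit().params.round(2))
    ```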


    Kid influencers are promoting junk food brands on YouTube — garnering more than a billion views

    Kids with wildly popular YouTube channels are frequently promoting unhealthy food and drinks in their videos, warn researchers at NYU School of Global Public Health and NYU Grossman School of Medicine in a new study published in the journal Pediatrics.
    Food and beverage companies spend $1.8 billion a year marketing their products to young people. Although television advertising is a major source of food marketing, companies have dramatically increased online advertising in response to consumers’ growing social media use.
    “Kids already see several thousand food commercials on television every year, and adding these YouTube videos on top of it may make it even more difficult for parents and children to maintain a healthy diet,” said Marie Bragg, assistant professor of public health nutrition at NYU School of Global Public Health and assistant professor in the Department of Population Health at NYU Langone. “We need a digital media environment that supports healthy eating instead of discouraging it.”
    YouTube is the second most visited website in the world and is a popular destination for kids seeking entertainment. More than 80 percent of parents with a child younger than 12 years old allow their child to watch YouTube, and 35 percent of parents report that their kid watches YouTube regularly.
    “The allure of YouTube may be especially strong in 2020 as many parents are working remotely and have to juggle the challenging task of having young kids at home because of COVID-19,” said Bragg, the study’s senior author.
    When finding videos for young children to watch, millions of parents turn to videos of “kid influencers,” or children whose parents film them doing activities such as science experiments, playing with toys, or celebrating their birthdays. The growing popularity of these YouTube videos has caught the attention of companies, which advertise or sponsor posts to promote their products before or during videos. In fact, the highest-paid YouTube influencer of the past two years was an 8-year-old who earned $26 million last year.


    “Parents may not realize that kid influencers are often paid by food companies to promote unhealthy food and beverages in their videos. Our study is the first to quantify the extent to which junk food product placements appear in YouTube videos from kid influencers,” said Bragg.
    Bragg and her colleagues identified the five most popular kid influencers on YouTube of 2019 — whose ages ranged from 3 to 14 years old — and analyzed their most-watched videos. Focusing on a sample of 418 YouTube videos, they recorded whether food or drinks were shown in the videos and which items and brands appeared, and assessed the products’ nutritional quality.
    The researchers found that nearly half of the most-popular videos from kid influencers (42.8 percent) promoted food and drinks. More than 90 percent of the products shown were unhealthy branded food, drinks, or fast food toys, with fast food as the most frequently featured junk food, followed by candy and soda. Only a few videos featured unhealthy unbranded items like hot dogs (4 percent), healthy unbranded items like fruit (3 percent), and healthy branded items like yogurt brands (2 percent).
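    As a toy illustration of how such a video-coding exercise can be tallied, the sketch below aggregates category shares and views with pandas; the video labels and view counts are invented for the example. Only the published percentages above come from the study.

    ```python
    # Toy tally of coded videos; labels and numbers are invented for illustration.
    import pandas as pd

    videos = pd.DataFrame({
        "video_id": range(6),
        "category": ["unhealthy_branded", "unhealthy_branded", "fast_food_toy",
                     "unhealthy_unbranded", "healthy_unbranded", "healthy_branded"],
        "views": [2_100_000, 950_000, 1_500_000, 120_000, 80_000, 60_000],
    })

    print(videos["category"].value_counts(normalize=True).mul(100).round(1))  # % of videos
    print(videos.groupby("category")["views"].sum())  # total exposure per category
    ```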
    The videos featuring junk food product placements were viewed more than 1 billion times — a staggering level of exposure for food and beverage companies.
    “It was concerning to see that kid influencers are promoting a high volume of junk food in their YouTube videos, and that those videos are generating enormous amounts of screen time for these unhealthy products,” said Bragg.
    While the researchers do not know which food and drink product placements were paid endorsements, they find these videos problematic for public health because they enable food companies to directly — but subtly — promote unhealthy foods to young children and their parents.
    “It’s a perfect storm for encouraging poor nutrition — research shows that people trust influencers because they appear to be ‘everyday people,’ and when you see these kid influencers eating certain foods, it doesn’t necessarily look like advertising. But it is advertising, and numerous studies have shown that children who see food ads consume more calories than children who see non-food ads, which is why the National Academy of Medicine and World Health Organization identify food marketing as a major driver of childhood obesity,” said Bragg.
    The researchers encourage federal and state regulators to strengthen and enforce regulations of junk food advertising by kid influencers.
    “We hope that the results of this study encourage the Federal Trade Commission and state attorneys general to focus on this issue and identify strategies to protect children and public health,” said study co-author Jennifer Pomeranz, assistant professor of public health policy and management at NYU School of Global Public Health.


    Extreme events in quantum cascade lasers

    Extreme events occur in many observable contexts. Nature is a prolific source: rogue water waves surging high above the swell, monsoon rains, wildfire, etc. From climate science to optics, physicists have classified the characteristics of extreme events, extending the notion to their respective domains of expertise. For instance, extreme events can take place in telecommunication data streams. In fiber-optic communications where a vast number of spatio-temporal fluctuations can occur in transoceanic systems, a sudden surge is an extreme event that must be suppressed, as it can potentially alter components associated with the physical layer or disrupt the transmission of private messages.
    Recently, extreme events have been observed in quantum cascade lasers, as reported by researchers from Télécom Paris (France) in collaboration with UC Los Angeles (USA) and TU Darmstadt (Germany). The giant pulses that characterize these extreme events can provide the sudden, sharp bursts necessary for communication in neuromorphic systems inspired by the brain’s powerful computational abilities. Based on a quantum cascade laser (QCL) emitting mid-infrared light, the researchers developed a basic optical neuron system operating 10,000× faster than biological neurons. Their report is published in Advanced Photonics.
    Giant pulses, fine tuning
    Olivier Spitz, Télécom Paris research fellow and first author on the paper, notes that the giant pulses in QCLs can be triggered successfully by adding a “pulse-up excitation,” a brief, small-amplitude increase in the bias current. Senior author Frédéric Grillot, Professor at Télécom Paris and the University of New Mexico, explains that this triggering ability is of paramount importance for applications such as optical neuron-like systems, which require optical bursts to be triggered in response to a perturbation.
    The team’s optical neuron system demonstrates behaviors like those observed in biological neurons, such as thresholding, phasic spiking, and tonic spiking. Fine tuning of modulation and frequency allows control of time intervals between spikes. Grillot explains, “The neuromorphic system requires a strong, super-threshold stimulus for the system to fire a spiking response, whereas phasic and tonic spiking correspond to single or continuous spike firing following the arrival of a stimulus.” To replicate the various biological neuronal responses, interruption of regular successions of bursts corresponding to neuronal activity is also required.
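    Thresholding, phasic spiking and tonic spiking are the standard repertoire of spiking-neuron models. As a generic point of reference, here is a minimal leaky integrate-and-fire sketch; it is a textbook toy with arbitrary parameters, not the authors' optical QCL system.

    ```python
    # Minimal leaky integrate-and-fire neuron illustrating thresholding:
    # only a super-threshold stimulus makes the neuron fire (tonically).
    # This is a generic textbook model, not the optical QCL neuron itself.
    import numpy as np

    def lif_spike_times(stimulus, threshold=1.0, tau=10.0, dt=0.1):
        """Integrate a stimulus; record a spike and reset when v crosses threshold."""
        v, spikes = 0.0, []
        for step, s in enumerate(stimulus):
            v += dt * (-v / tau + s)   # leaky integration toward tau * s
            if v >= threshold:
                spikes.append(step * dt)
                v = 0.0                # reset after firing
        return spikes

    weak = np.full(1000, 0.05)    # settles at v = 0.5, below threshold
    strong = np.full(1000, 0.15)  # would settle at v = 1.5, so it keeps firing
    print(lif_spike_times(weak))    # -> [] (no response)
    print(lif_spike_times(strong))  # -> regularly spaced spike times
    ```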
    Quantum cascade laser
    Grillot notes that the findings reported by his team demonstrate the superior potential of quantum cascade lasers compared to standard diode lasers or VCSELs, for which more complex techniques are currently required to achieve neuromorphic properties.
    Experimentally demonstrated for the first time in 1994, quantum cascade lasers were originally developed for use under cryogenic temperatures. Their development has advanced rapidly, allowing use at warmer temperatures, up to room temperature. Due to the large number of wavelengths they can achieve (from 3 to 300 microns), QCLs contribute to many industrial applications such as spectroscopy, optical countermeasures, and free-space communications.
    According to Grillot, the physics involved in QCLs is totally different from that in diode lasers. “The advantage of quantum cascade lasers over diode lasers comes from the sub-picosecond electronic transitions among the conduction-band states (subbands) and a carrier lifetime much shorter than the photon lifetime,” says Grillot. He remarks that QCLs exhibit completely different light emission behaviors under optical feedback, including but not limited to giant pulse occurrences, laser responses to modulation, and frequency comb dynamics.

    Story Source:
    Materials provided by SPIE–International Society for Optics and Photonics. Original written by Renae Keep. Note: Content may be edited for style and length.


    Future VR could employ new ultrahigh-res display

    By expanding on existing designs for electrodes of ultra-thin solar panels, Stanford researchers and collaborators in Korea have developed a new architecture for OLED — organic light-emitting diode — displays that could enable televisions, smartphones and virtual or augmented reality devices with resolutions of up to 10,000 pixels per inch (PPI). (For comparison, the resolutions of new smartphones are around 400 to 500 PPI.)
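    To put those figures in physical terms (our arithmetic, not the paper's), the pixel pitch is just one inch divided by the pixel density:

    ```python
    # Back-of-envelope pixel pitch from pixels-per-inch.
    MM_PER_INCH = 25.4

    def pixel_pitch_um(ppi):
        """Center-to-center pixel spacing in micrometers for a given density."""
        return MM_PER_INCH / ppi * 1000

    print(pixel_pitch_um(10_000))  # ~2.5 um for the proposed meta-OLED
    print(pixel_pitch_um(450))     # ~56 um for a typical current smartphone
    ```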
    Such high-pixel-density displays will be able to provide stunning images with true-to-life detail — something that will be even more important for headset displays designed to sit just centimeters from our faces.
    The advance is based on research by Stanford University materials scientist Mark Brongersma in collaboration with the Samsung Advanced Institute of Technology (SAIT). Brongersma was initially put on this research path because he wanted to create an ultra-thin solar panel design.
    “We’ve taken advantage of the fact that, on the nanoscale, light can flow around objects like water,” said Brongersma, who is a professor of materials science and engineering and senior author of the Oct. 22 Science paper detailing this research. “The field of nanoscale photonics keeps bringing new surprises and now we’re starting to impact real technologies. Our designs worked really well for solar cells and now we have a chance to impact next generation displays.”
    In addition to having a record-setting pixel density, the new “metaphotonic” OLED displays would be brighter and have better color accuracy than existing versions, and they’d be easier and more cost-effective to produce as well.
    Hidden gems
    At the heart of an OLED are organic, light-emitting materials. These are sandwiched between highly reflective and semi-transparent electrodes that enable current injection into the device. When electricity flows through an OLED, the emitters give off red, green or blue light. Each pixel in an OLED display is composed of smaller sub-pixels that produce these primary colors. When the resolution is sufficiently high, the pixels are perceived as one color by the human eye. OLEDs are an attractive technology because they are thin, light and flexible and produce brighter and more colorful images than other kinds of displays.


    This research aims to offer an alternative to the two types of OLED displays that are currently commercially available. One type — called a red-green-blue OLED — has individual sub-pixels that each contain only one color of emitter. These OLEDs are fabricated by spraying each layer of materials through a fine metal mesh to control the composition of each pixel. They can only be produced on a small scale, however, like what would be used for a smartphone.
    Larger devices like TVs employ white OLED displays. Each of these sub-pixels contains a stack of all three emitters and then relies on filters to determine the final sub-pixel color, which is simpler to fabricate. Since the filters reduce the overall output of light, white OLED displays are more power-hungry and prone to having images burn into the screen.
    OLED displays were on the mind of Won-Jae Joo, a SAIT scientist, when he visited Stanford from 2016 to 2018. During that time, Joo listened to a presentation by Stanford graduate student Majid Esfandyarpour about an ultrathin solar cell technology he was developing in Brongersma’s lab and realized it had applications beyond renewable energy.
    “Professor Brongersma’s research themes were all very academically profound and were like hidden gems for me as an engineer and researcher at Samsung Electronics,” said Joo, who is lead author of the Science paper.
    Joo approached Esfandyarpour after the presentation with his idea, which led to a collaboration between researchers at Stanford, SAIT and Hanyang University in Korea.


    “It was quite exciting to see that a problem that we have already thought about in a different context can have such an important impact on OLED displays,” said Esfandyarpour.
    A fundamental foundation
    The crucial innovation behind both the solar panel and the new OLED is a base layer of reflective metal with nanoscale (smaller than microscopic) corrugations, called an optical metasurface. The metasurface can manipulate the reflective properties of light and thereby allow the different colors to resonate in the pixels. These resonances are key to facilitating effective light extraction from the OLEDs.
    “This is akin to the way musical instruments use acoustic resonances to produce beautiful and easily audible tones,” said Brongersma, who conducted this research as part of the Geballe Laboratory for Advanced Materials at Stanford.
    For example, red emitters have a longer wavelength of light than blue emitters, which, in conventional RGB-OLEDs, translates to sub-pixels of different heights. In order to create a flat screen overall, the materials deposited above the emitters have to be laid down in unequal thicknesses. By contrast, in the proposed OLEDs, the base layer corrugations allow each pixel to be the same height and this facilitates a simpler process for large-scale as well as micro-scale fabrication.
    In lab tests, the researchers successfully produced miniature proof-of-concept pixels. Compared with color-filtered white-OLEDs (which are used in OLED televisions) these pixels had a higher color purity and a twofold increase in luminescence efficiency — a measure of how bright the screen is compared to how much energy it uses. They also allow for an ultrahigh pixel density of 10,000 pixels-per-inch.
    The next steps for integrating this work into a full-size display are being pursued by Samsung, and Brongersma eagerly awaits the results, hoping to be among the first people to see the meta-OLED display in action.


    How genetic variation gives rise to differences in mathematical ability

    DNA variation in a gene called ROBO1 is associated with early anatomical differences in a brain region that plays a key role in quantity representation, potentially explaining how genetic variability might shape mathematical performance in children, according to a study published October 22nd in the open-access journal PLOS Biology by Michael Skeide of the Max Planck Institute for Human Cognitive and Brain Sciences, and colleagues. Specifically, the authors found that genetic variants of ROBO1 in young children are associated with grey matter volume in the right parietal cortex, which in turn predicts mathematical test scores in second grade.
    Mathematical ability is known to be heritable and related to several genes that play a role in brain development. But it has not been clear how math-related genes might sculpt the developing human brain. As a result, it is an open question how genetic variation could give rise to differences in mathematical ability. To address this gap in knowledge, Skeide and his collaborators combined genotyping with brain imaging in unschooled children without mathematical training.
    The authors analyzed 18 single nucleotide polymorphisms (SNPs) — genetic variants affecting a single DNA building block — in 10 genes previously implicated in mathematical performance. They then examined the relationship between these variants and the volume of grey matter (which mainly consists of nerve cell bodies), across the whole brain in a total of 178 three- to six-year-old children who underwent magnetic resonance imaging. Finally, they identified brain regions whose grey matter volumes could predict math test scores in second grade.
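    A minimal sketch of this kind of genotype-imaging association test appears below, assuming simulated allele dosages and a single regional grey matter measure; the study's actual pipeline (whole-brain imaging, covariates, careful multiple-comparison handling) is far richer.

    ```python
    # Sketch of a genotype-imaging association: regress a grey matter measure
    # on allele dosage (0/1/2) for each SNP. Data are simulated; only the
    # sample sizes echo the study.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_children, n_snps = 178, 18
    dosage = rng.integers(0, 3, size=(n_children, n_snps))  # minor-allele counts
    grey_matter = 0.3 * dosage[:, 0] + rng.normal(size=n_children)  # one true effect

    for snp in range(n_snps):
        res = stats.linregress(dosage[:, snp], grey_matter)
        if res.pvalue < 0.05 / n_snps:  # Bonferroni-corrected threshold
            print(f"SNP {snp}: slope={res.slope:.2f}, p={res.pvalue:.1e}")
    ```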
    They found that variants in ROBO1, a gene that regulates prenatal growth of the outermost layer of neural tissue in the brain, are associated with the grey matter volume in the right parietal cortex, a key brain region for quantity representation. Moreover, grey matter volume within these regions predicted the children’s math test scores at seven to nine years of age. According to the authors, the results suggest that genetic variability might shape mathematical ability by influencing the early development of the brain’s basic quantity processing system.

    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.


    AI detects hidden earthquakes

    Measures of Earth’s vibrations zigged and zagged across Mostafa Mousavi’s screen one morning in Memphis, Tenn. As part of his PhD studies in geophysics, he sat scanning earthquake signals recorded the night before, verifying that decades-old algorithms had detected true earthquakes rather than tremors generated by ordinary things like crashing waves, passing trucks or stomping football fans.
    “I did all this tedious work for six months, looking at continuous data,” Mousavi, now a research scientist at Stanford’s School of Earth, Energy & Environmental Sciences (Stanford Earth), recalled recently. “That was the point I thought, ‘There has to be a much better way to do this stuff.'”
    This was in 2013. Handheld smartphones were already loaded with algorithms that could break down speech into sound waves and come up with the most likely words in those patterns. Using artificial intelligence, they could even learn from past recordings to become more accurate over time.
    Seismic waves and sound waves aren’t so different. One moves through rock and fluid, the other through air. Yet while machine learning had transformed the way personal computers process and interact with voice and sound, the algorithms used to detect earthquakes in streams of seismic data have hardly changed since the 1980s.
    That has left a lot of earthquakes undetected.
    Big quakes are hard to miss, but they’re rare. Meanwhile, imperceptibly small quakes happen all the time. Occurring on the same faults as bigger earthquakes — and involving the same physics and the same mechanisms — these “microquakes” represent a cache of untapped information about how earthquakes evolve — but only if scientists can find them.


    In a recent paper published in Nature Communications, Mousavi and co-authors describe a new method for using artificial intelligence to bring into focus millions of these subtle shifts of the Earth. “By improving our ability to detect and locate these very small earthquakes, we can get a clearer view of how earthquakes interact or spread out along the fault, how they get started, even how they stop,” said Stanford geophysicist Gregory Beroza, one of the paper’s authors.
    Focusing on what matters
    Mousavi began working on technology to automate earthquake detection soon after his stint examining daily seismograms in Memphis, but his models struggled to tune out the noise inherent to seismic data. A few years later, after joining Beroza’s lab at Stanford in 2017, he started to think about how to solve this problem using machine learning.
    The group has produced a series of increasingly powerful detectors. A 2018 model called PhaseNet, developed by Beroza and graduate student Weiqiang Zhu, adapted algorithms from medical image processing to excel at phase-picking, which involves identifying the precise start of two different types of seismic waves. Another machine learning model, released in 2019 and dubbed CRED, was inspired by voice-trigger algorithms in virtual assistant systems and proved effective at detection. Both models learned the fundamental patterns of earthquake sequences from a relatively small set of seismograms recorded only in northern California.
    In the Nature Communications paper, the authors report they’ve developed a new model to detect very small earthquakes with weak signals that current methods usually overlook, and to pick out the precise timing of the seismic phases using earthquake data from around the world. They call it Earthquake Transformer.


    According to Mousavi, the model builds on PhaseNet and CRED, and “embeds those insights I got from the time I was doing all of this manually.” Specifically, Earthquake Transformer mimics the way human analysts look at the set of wiggles as a whole and then home in on a small section of interest.
    People do this intuitively in daily life — tuning out less important details to focus more intently on what matters. Computer scientists call it an “attention mechanism” and frequently use it to improve text translations. But it’s new to the field of automated earthquake detection, Mousavi said. “I envision that this new generation of detectors and phase-pickers will be the norm for earthquake monitoring within the next year or two,” he said.
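    For readers unfamiliar with the term, the sketch below shows standard scaled dot-product attention in a few lines of numpy; it illustrates the generic mechanism, not Earthquake Transformer's specific architecture.

    ```python
    # Generic scaled dot-product attention: weight the values V by how well
    # each query in Q matches each key in K (softmax = "focus on what matters").
    import numpy as np

    def attention(Q, K, V):
        scores = Q @ K.T / np.sqrt(K.shape[-1])           # query-key similarity
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
        return weights @ V

    rng = np.random.default_rng(0)
    Q = K = V = rng.normal(size=(4, 8))  # e.g. 4 time steps of an 8-dim embedding
    print(attention(Q, K, V).shape)      # -> (4, 8)
    ```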
    The technology could allow analysts to focus on extracting insights from a more complete catalog of earthquakes, freeing up their time to think more about what the pattern of earthquakes means, said Beroza, the Wayne Loel Professor of Earth Science at Stanford Earth.
    Hidden faults
    Understanding patterns in the accumulation of small tremors over decades or centuries could be key to minimizing surprises — and damage — when a larger quake strikes.
    The 1989 Loma Prieta quake ranks as one of the most destructive earthquake disasters in U.S. history, and as one of the largest to hit northern California in the past century. It’s a distinction that speaks less to extraordinary power in the case of Loma Prieta than to gaps in earthquake preparedness, hazard mapping and building codes — and to the extreme rarity of large earthquakes.
    Only about one in five of the approximately 500,000 earthquakes detected globally by seismic sensors every year produce shaking strong enough for people to notice. In a typical year, perhaps 100 quakes will cause damage.
    In the late 1980s, computers were already at work analyzing digitally recorded seismic data, and they determined the occurrence and location of earthquakes like Loma Prieta within minutes. Limitations in both the computers and the waveform data, however, left many small earthquakes undetected and many larger earthquakes only partially measured.
    After the harsh lesson of Loma Prieta, many California communities have come to rely on maps showing fault zones and the areas where quakes are likely to do the most damage. Fleshing out the record of past earthquakes with Earthquake Transformer and other tools could make those maps more accurate and help to reveal faults that might otherwise come to light only in the wake of destruction from a larger quake, as happened with Loma Prieta in 1989, and with the magnitude-6.7 Northridge earthquake in Los Angeles five years later.
    “The more information we can get on the deep, three-dimensional fault structure through improved monitoring of small earthquakes, the better we can anticipate earthquakes that lurk in the future,” Beroza said.
    Earthquake Transformer
    To determine an earthquake’s location and magnitude, existing algorithms and human experts alike look for the arrival time of two types of waves. The first set, known as primary or P waves, advance quickly — pushing, pulling and compressing the ground like a Slinky as they move through it. Next come shear or S waves, which travel more slowly but can be more destructive as they move the Earth side to side or up and down.
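    Those two arrival times carry location information. In the textbook approach (standard seismology, not specific to this paper), the S-minus-P delay combined with typical crustal wave speeds gives the distance to the source:

    ```python
    # Textbook S-minus-P distance estimate with typical crustal wave speeds.
    VP, VS = 6.0, 3.5  # P and S speeds in km/s (representative values)

    def distance_km(sp_delay_s):
        """Distance implied by an S-P arrival delay of sp_delay_s seconds."""
        return sp_delay_s * (VP * VS) / (VP - VS)

    print(distance_km(10.0))  # ~84 km for a 10-second delay
    ```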
    To test Earthquake Transformer, the team wanted to see how it worked with earthquakes not included in the training data used to teach the algorithm what a true earthquake and its seismic phases look like. The training data included one million hand-labeled seismograms, recorded mostly over the past two decades in earthquake-prone regions around the world, excluding Japan. For the test, they selected five weeks of continuous data recorded in the region of Japan shaken 20 years ago by the magnitude-6.6 Tottori earthquake and its aftershocks.
    The model detected and located 21,092 events — more than two and a half times the number of earthquakes picked out by hand, using data from only 18 of the 57 stations that Japanese scientists originally used to study the sequence. Earthquake Transformer proved particularly effective for the tiny earthquakes that are harder for humans to pick out and that are being recorded in overwhelming numbers as seismic sensors multiply.
    “Previously, people had designed algorithms to say, find the P wave. That’s a relatively simple problem,” explained co-author William Ellsworth, a research professor in geophysics at Stanford. Pinpointing the start of the S wave is more difficult, he said, because it emerges from the erratic last gasps of the fast-moving P waves. Other algorithms have been able to produce extremely detailed earthquake catalogs, including huge numbers of small earthquakes missed by analysts — but their pattern-matching algorithms work only in the region supplying the training data.
    With Earthquake Transformer running on a simple computer, analysis that would ordinarily take months of expert labor was completed within 20 minutes. That speed is made possible by algorithms that search for the existence of an earthquake and the timing of the seismic phases in tandem, using information gleaned from each search to narrow down the solution for the others.
    “Earthquake Transformer gets many more earthquakes than other methods, whether it’s people sitting and trying to analyze things by looking at the waveforms, or older computer methods,” Ellsworth said. “We’re getting a much deeper look at the earthquake process, and we’re doing it more efficiently and accurately.”
    The researchers trained and tested Earthquake Transformer on historic data, but the technology is ready to flag tiny earthquakes almost as soon as they happen. According to Beroza, “Earthquake monitoring using machine learning in near real-time is coming very soon.”


    Individuals may legitimize hacking when angry with system or authority

    University of Kent research has found that when individuals feel that a system or authority is unresponsive to their demands, they are more likely to legitimise hacker activity at an organisation’s expense.
    Individuals are more likely to experience anger when they believe that systems or authorities have overlooked pursuing justice on their behalf or listening to their demands. In turn, the study found that if the systems or authorities in question were a victim of hacking, individuals would be more likely to legitimise the hackers’ disruptive actions as a way to manifest their own anger against the organisation.
    With more organisations at risk of cyber security breaches, and more elements of individuals’ social lives taking place online, this research is timely in highlighting how hackers are perceived by individuals seeking justice.
    The research, led by Maria Heering and Dr Giovanni Travaglino at the University of Kent’s School of Psychology, was carried out with British undergraduate students and participants recruited through the academic survey platform Prolific Academic. The participants were presented with fictional scenarios of unfair treatment from authorities, with complaints either dismissed or pursued, before they were told that hackers had defaced the authorities’ websites. Participants were then asked to indicate how much they disagreed or agreed with the hackers’ actions. Participants predominantly supported the hackers, perceiving them as a way to ‘get back at’ systems that did not listen to their demands.
    Maria Heering said: ‘When individuals perceive a system as unjust, they are motivated to participate in political protest and collective action to promote social change. However, if they believe they will not have voice, they will legitimise groups and individuals who disrupt the system on their behalf. While this study explored individuals’ feelings of anger, there is certainly more to be explored in this research area. For example, there might be important differences between the psychological determinants of individuals’ support for humorous, relatively harmless forms of hacking, and more serious and dangerous ones.’

    Story Source:
    Materials provided by University of Kent. Note: Content may be edited for style and length.