More stories

  • Renewable energy transition makes dollars and sense

    Making the transition to a renewable energy future will have environmental and long-term economic benefits and is possible in terms of energy return on energy invested (EROI), UNSW Sydney researchers have found.
    Their research, published in the international journal Ecological Economics recently, disproves the claim that a transition to large-scale renewable energy technologies and systems will damage the macro-economy by taking up too large a chunk of global energy generation.
    Honorary Associate Professor Mark Diesendorf, in collaboration with Prof Tommy Wiedmann of UNSW Engineering, analysed dozens of studies on renewable electricity systems in regions where wind and/or solar could provide most of the electricity generation in future, such as Australia and the United States.
    The Clean Energy Australia report states that renewable energy’s contribution to Australia’s total electricity generation is already at 24 per cent.
    Lead author A/Prof Diesendorf is a renewable energy researcher with expertise in electricity generation, while co-author Prof Tommy Wiedmann is a sustainability scientist.
    A/Prof Diesendorf said their findings were controversial, given that some fossil fuel and nuclear power supporters, as well as some economists, reject a transition to large-scale renewable electricity.

    “These critics claim the world’s economy would suffer because they argue renewables require too much lifecycle energy to build, to the point of diverting all that energy away from other uses,” he said.
    “Our paper shows that there is no credible scientific evidence to support such claims, many of which are founded upon a study published in 2014 that used data up to 30 years old.
    “There were still research papers coming out in 2018 using the old data and that prompted me to examine the errors made by those perpetuating the misconception.”
    A/Prof Diesendorf said critics’ reliance on outdated figures was “ridiculous” for both solar and wind technology.
    “It was very early days back then and those technologies have changed so dramatically just in the past 10 years, let alone the past three decades,” he said.

    “This evolution is reflected in their cost reductions: wind by about 30 per cent and solar by 85 to 90 per cent in the past decade. These cost reductions reflect increasing EROIs.”
    A/Prof Diesendorf said fears about macro-economic damage from a transition to renewable energy had been exaggerated.
    “Not only did these claims rely on outdated data, but they also failed to consider the energy efficiency advantages of transitioning away from fuel combustion and they also overestimated storage requirements,” he said.
    “I was unsurprised by our results, because I have been following the literature for several years and doubted the quality of the studies that supported the previous beliefs about low EROIs for wind and solar.”
    Spotlight on wind and solar
    A/Prof Diesendorf said the study focused on wind and solar renewables which could provide the vast majority of electricity, and indeed almost all energy, for many parts of the world in future.
    “Wind and solar are the cheapest of all existing electricity generation technologies and are also widely available geographically,” he said.
    “We critically examined the case for large-scale electricity supply-demand systems in regions with high solar and/or high wind resources that could drive the transition to 100 per cent renewable electricity, either within these regions or where power could be economically transmitted to these regions.
    “In these regions — including Australia, the United States, Middle East, North Africa, China, parts of South America and northern Europe — variable renewable energy (VRE) such as wind and/or solar can provide the major proportion of annual electricity generation.
    “For storage, we considered hydroelectricity, including pumped hydro, batteries charged with excess wind and/or solar power, and concentrated solar thermal (CST) with thermal storage, which is a solar energy technology that uses sunlight to generate heat.”
    Energy cost/benefit ratio approach
    Co-author Prof Wiedmann said the researchers used Net Energy Analysis as their conceptual framework within which to identify the strengths and weaknesses of past studies in determining the EROI of renewable energy technologies and systems.
    “We used the established Net Energy Analysis method because it’s highly relevant to the issue of EROI: it aims to calculate all energy inputs into making a technology in order to understand the full impact,” Prof Wiedmann said.
    “From mining the raw materials and minerals processing, to building and operating the technology, and then deconstructing it at the end of its life. So, it’s a lifecycle assessment of all energy which humans use to make a technology.”
    Renewable transition possible
    A/Prof Diesendorf said their findings revealed that a transition from fossil fuels to renewable energy was worthwhile, contradicting the assumptions and results of many previous studies on the EROIs of wind and solar.
    “We found that the EROIs of wind and solar technologies are generally high and increasing; typically, solar at a good site could generate the lifecycle primary energy required to build itself in one to two years of operation, while large-scale wind does it in three to six months,” he said.
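    Those payback times map directly onto EROI through the standard net-energy relations below; the worked numbers are illustrative assumptions (a 25-year plant life and mid-range payback times), not figures taken from the paper.

```latex
% EROI and energy payback time (EPBT); illustrative numbers, not taken from the paper
\[
\mathrm{EROI} = \frac{E_{\mathrm{out,\,lifetime}}}{E_{\mathrm{in,\,lifecycle}}}
\qquad
\mathrm{EPBT} = \frac{E_{\mathrm{in,\,lifecycle}}}{E_{\mathrm{out,\,annual}}}
\qquad
\mathrm{EROI} \approx \frac{T_{\mathrm{lifetime}}}{\mathrm{EPBT}}
\]
% Example with an assumed 25-year operating life:
%   solar, EPBT = 1.5 yr  ->  EROI ~ 25 / 1.5 ~ 17
%   wind,  EPBT = 0.4 yr  ->  EROI ~ 25 / 0.4 ~ 60
```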
    “The impact of storage on EROI depends on the quantities and types of storage adopted and their operational strategies. In the regions we considered, the quantity of storage required to maintain generation reliability is relatively small.
    “We also discovered that taking into account the low energy conversion efficiency of fossil-fuelled electricity greatly increases the relative EROIs of wind and solar.
    “Finally, we found the macro-economic impact of a rapid transition to renewable electricity would at worst be temporary and would be unlikely to be major.”
    A more sustainable future
    A/Prof Diesendorf said he hoped the study’s results would give renewed confidence to businesses and governments considering or already making a transition to more sustainable electricity technologies and systems.
    “This could be supported by government policy, which is indeed the case in some parts of Australia — including the ACT, Victoria and South Australia — where there’s strong support for the transition,” he said.
    “A number of mining companies in Australia are also going renewable, such as a steel producer which has a power purchase agreement with a solar farm to save money, while a zinc refinery built its own solar farm to supply cheaper electricity.”
    A/Prof Diesendorf said the Australian Government, however, could help with more policies to smooth the transition to renewable energy.
    “In Australia the transition is happening because renewable energy is much cheaper than fossil fuels, but there are many roadblocks and potholes in the way,” he said.
    “For example, wind and solar farms have inadequate transmission lines to feed power into cities and major industries, and we need more support for storage to better balance the variability of wind and solar.
    “So, I hope our research will help bolster support for continuing the transition, because we have discredited the claim that the EROIs of electricity renewables are so low that a transition could displace investment in other sectors.”

  • Agriculture and fossil fuels are driving record-high methane emissions

    Methane levels in the atmosphere are at an all-time high. But curbing emissions of that potent greenhouse gas requires knowing where methane is being released, and why. Now, a global inventory of methane sources reveals the major culprits behind rising methane pollution in the 21st century.
    Agriculture, landfill waste and fossil fuel use were the primary reasons that Earth’s atmosphere absorbed about 40 million metric tons more methane from human activities in 2017 than it did per year in the early 2000s. Expanding agriculture dominated methane release in places like Africa, South Asia and Oceania, while increasing fossil fuel use heightened emissions in China and the United States, researchers report online July 14 in Environmental Research Letters.
    Methane “is one of the most important greenhouse gases — arguably the second most important after CO2,” says Alexander Turner, an atmospheric scientist who will join the University of Washington in Seattle in 2021.
    Although there is far less methane than carbon dioxide in the atmosphere, methane can trap about 30 times as much heat over a century as the same amount of CO2. Tallying methane sources “is really important if you want to understand how the climate is going to evolve,” says Turner, who wasn’t involved in the new study. It can also help prioritize strategies to quell pollution, like consuming less meat to cut down on emissions from cattle ranches and using aircraft or satellites to scout out leaky gas pipelines to fix (SN: 11/14/19).  
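    As a rough sense of scale, and purely as back-of-envelope arithmetic rather than a figure reported in the study, combining the roughly 40-million-ton rise in annual emissions with that factor-of-30 potency gives an extra warming burden on the order of a gigatonne of CO2-equivalent per year:

```latex
% Back-of-envelope CO2-equivalent of the reported rise (simple arithmetic, not a study figure)
\[
40\ \mathrm{Mt\,CH_4\,yr^{-1}} \times 30\ \mathrm{\tfrac{t\,CO_2e}{t\,CH_4}}
\approx 1.2\ \mathrm{Gt\,CO_2e\,yr^{-1}}
\]
```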

    Marielle Saunois, an atmospheric scientist at the Pierre Simon Laplace Institute in Paris, and colleagues cataloged global methane pollution in 2017 — the most recent year with complete data — using atmospheric measurements from towers and aircraft around the world. The isotope, or type of carbon, in methane samples contained clues about its source — such as whether the methane was emitted by the oil and gas industry, or by microbes living in rice paddies, landfills or the guts of belching cattle (SN: 11/18/15). The team compared the 2017 observations with average annual emissions from 2000 to 2006.
    In 2017, human activities pumped about 364 million metric tons of methane into the atmosphere, compared with 324 million tons per year, on average, in the early 2000s. About half of that 12 percent increase was the result of expanding agriculture and landfills, while the other half arose from fossil fuels. Emissions from natural sources like wetlands, on the other hand, held relatively steady.
    Emissions rose most sharply in Africa and the Middle East, and South Asia and Oceania. Both regions ramped up emissions by 10 million to 15 million metric tons. Agricultural sources, such as cattle ranches and paddy fields, were responsible for a 10-million-ton rise in emissions from South Asia and Oceania and a surge almost as big in Africa, the authors estimate. Emissions swelled by 5 to 10 million tons in China and North America, where fossil fuels drove pollution. In the United States alone, fossil fuels boosted methane release by about 4 million tons.

    One region that did not show an uptick in methane was the Arctic. That’s curious, because the Arctic is warming faster than anywhere else in the world, and is covered in permafrost — which is expected to release lots of methane into the air as it thaws, says Tonya DelSontro, an aquatic biogeochemist at the University of Geneva not involved in the work (SN: 7/1/20).
    The new findings could mean that the Arctic has not bled much methane into the atmosphere yet — or that scientists have not collected enough data from this remote area to accurately gauge its methane emission trends, DelSontro says (SN: 12/19/16). 
    The new methane budget may track emissions only through 2017, but “the atmosphere does not suggest that anything has slowed down for methane emissions in the last two years,” says study coauthor Rob Jackson, an environmental scientist at Stanford University. “If anything, it’s possibly speeding up.” By the end of 2019, the methane concentration in the atmosphere reached about 1,875 parts per billion — up from about 1,857 parts per billion in 2017, according to the U.S. National Oceanic and Atmospheric Administration.

  • A Raspberry Pi-based virtual reality system for small animals

    The Raspberry Pi Virtual Reality system (PiVR) is a versatile tool for presenting virtual reality environments to small, freely moving animals (such as flies and fish larvae), according to a study published July 14, 2020 in the open-access journal PLOS Biology by David Tadres and Matthieu Louis of the University of California, Santa Barbara. The use of PiVR, together with techniques like optogenetics, will facilitate the mapping and characterization of neural circuits involved in behavior.
    PiVR consists of a behavioral arena, a camera, a Raspberry Pi microcomputer, an LED controller, and a touchscreen. This system can implement a feedback loop between real-time behavioral tracking and delivery of a stimulus. PiVR is a versatile, customizable system that costs less than $500, takes less than six hours to build (using a 3D printer), and was designed to be accessible to a wide range of neuroscience researchers.
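    The feedback loop itself is conceptually simple; the sketch below is a minimal, hypothetical illustration of the idea (camera-based tracking driving a light stimulus) written against generic OpenCV and RPi.GPIO calls, not the actual PiVR software, and the pin number, threshold and virtual-stimulus rule are arbitrary choices for the example.

```python
# Minimal closed-loop sketch in the spirit of PiVR (not the project's actual code):
# track a dark animal on a bright arena and drive an LED when it enters a virtual zone.
import cv2
import RPi.GPIO as GPIO  # only available on a Raspberry Pi

LED_PIN = 18                   # assumed wiring; PiVR itself uses a dedicated LED controller
GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)
led = GPIO.PWM(LED_PIN, 1000)  # 1 kHz PWM for intensity control
led.start(0)

cap = cv2.VideoCapture(0)      # Pi camera exposed as a video device
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # The animal appears dark on a bright arena; threshold and take the centroid.
        _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
        m = cv2.moments(mask)
        if m["m00"] > 0:
            x = m["m10"] / m["m00"]
            # Virtual stimulus: LED intensity ramps up in the right half of the arena.
            width = gray.shape[1]
            duty = 100.0 * max(0.0, (x - width / 2) / (width / 2))
            led.ChangeDutyCycle(duty)
finally:
    cap.release()
    led.stop()
    GPIO.cleanup()
```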
    In the new study, Tadres and Louis used their PiVR system to present virtual realities to small, freely moving animals during optogenetic experiments. Optogenetics is a technique that enables researchers to use light to control the activity of neurons in living animals, allowing them to examine causal relationships between the activity of genetically-labeled neurons and specific behaviors.
    As a proof-of-concept, Tadres and Louis used PiVR to study sensory navigation in response to gradients of chemicals and light in a range of animals. They showed how fruit fly larvae change their movements in response to real and virtual odor gradients. They then demonstrated how adult flies adapt their speed of movement to avoid locations associated with bitter tastes evoked by optogenetic activation of their bitter-sensing neurons. In addition, they showed that zebrafish larvae modify their turning maneuvers in response to changes in the intensity of light mimicking spatial gradients. According to the authors, PiVR represents a low-barrier technology that should empower many labs to characterize animal behavior and study the functions of neural circuits.
    “More than ever,” the authors note, “neuroscience is technology-driven. In recent years, we have witnessed a boom in the use of closed-loop tracking and optogenetics to create virtual sensory realities. Integrating new interdisciplinary methodology in the lab can be daunting. With PiVR, our goal has been to make virtual reality paradigms accessible to everyone, from professional scientists to high-school students. PiVR should help democratize cutting-edge technology to study behavior and brain functions.”

    Story Source:
    Materials provided by PLOS.

  • Ups and downs in COVID-19 data may be caused by data reporting practices

    As data accumulates on COVID-19 cases and deaths, researchers have observed patterns of peaks and valleys that repeat on a near-weekly basis. But understanding what’s driving those patterns has remained an open question.
    A study published this week in mSystems reports that those oscillations arise from variations in testing practices and data reporting, rather than from societal practices around how people are infected or treated. The findings suggest that epidemiological models of infectious disease should take problems with diagnosis and reporting into account.
    “The practice of acquiring data is as important at times as the data itself,” said computational biologist Aviv Bergman, Ph.D., at the Albert Einstein College of Medicine in New York City, and microbiologist Arturo Casadevall, M.D., Ph.D., at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. Bergman and Casadevall worked on the study with Yehonatan Sella, Ph.D., at Albert Einstein, and physician-scientist Peter Agre, Ph.D., at Johns Hopkins.
    The study began when Agre, who co-won the 2003 Nobel Prize in Chemistry, noticed that precise weekly fluctuations in the data were clearly linked to the day of the week. “We became very suspicious,” said Bergman.
    The researchers collected the total number of daily tests, positive tests, and deaths in U.S. national data over 161 days, from January through the end of June. They also collected New York City-specific data and Los Angeles-specific data from early March through late June. To better understand the oscillating patterns, they conducted a power spectrum analysis, which is a methodology for identifying different frequencies within a signal. (It’s often used in signal and image processing, but the authors believe the new work represents the first application to epidemiological data.)
    The analysis pointed to a 7-day cycle in the rise and fall of national new cases, and 6.8-day and 6.9-day cycles in New York City and Los Angeles, respectively. Those oscillations are reflected in analyses that have found, for example, that the mortality rate is higher at the end of the week or on the weekend.
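    A power spectrum analysis of this kind is easy to reproduce on any daily time series; the sketch below uses synthetic counts (not the study's data) to show how a weekly reporting rhythm appears as a peak at a period of about seven days.

```python
# Sketch: find a weekly cycle in a daily count series with a power spectrum.
# Synthetic data for illustration only; the study used real US/NYC/LA counts.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(161)                                  # ~161 days, as in the national dataset
trend = 200 + 5 * days                                 # slowly growing baseline
weekly = 0.3 * trend * np.sin(2 * np.pi * days / 7)    # weekly reporting rhythm
counts = rng.poisson(np.clip(trend + weekly, 1, None))

# Remove the slow trend, then look at which frequencies dominate.
detrended = counts - np.polyval(np.polyfit(days, counts, 2), days)
power = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(len(days), d=1.0)              # cycles per day

peak = freqs[1:][np.argmax(power[1:])]                 # ignore the zero-frequency term
print(f"dominant period = {1 / peak:.1f} days")        # about 7 days for this synthetic series
```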
    Alarmed by the consistency of the signal, the researchers looked for an explanation. They reported that an increase in social gatherings on the weekends was likely not a factor, since the time from exposure to the coronavirus to showing symptoms can range from 4-14 days. Previous analyses have also suggested that patients receive lower-quality care later in the week, but the new analysis didn’t support that hypothesis.
    The researchers then examined reporting practices. Some areas, like New York City and Los Angeles, report deaths according to when the individual died. But national data publishes deaths according to when the death was reported — not when it occurred. In large datasets that report the date of death, rather than the date of the report, the apparent oscillations vanish. Similar discrepancies in case reporting explained the oscillations found in new case data.
    The authors of the new study note that weekend interactions or health care quality may influence outcomes, but these societal factors do not significantly contribute to the repeated patterns.
    “These oscillations are a harbinger of problems in the public health response,” said Casadevall.
    The researchers emphasized that no connection exists between the number of tests and the number of cases, and that unless data reporting practices change, the oscillations will remain. “And as long as there are infected people, these oscillations, due to fluctuations in the number of tests administered and reporting, will always be observed,” said Bergman, “even if the number of cases drops.”

  • A new path for electron optics in solid-state systems

    Electrons can interfere in the same manner as water, acoustical or light waves do. When exploited in solid-state materials, such effects promise novel functionality for electronic devices, in which elements such as interferometers, lenses or collimators could be integrated for controlling electrons at the scale of micro- and nanometres. However, so far such effects have been demonstrated mainly in one-dimensional devices, for example in nanotubes, or under specific conditions in two-dimensional graphene devices. Writing in Physical Review X, a collaboration including the Department of Physics groups of Klaus Ensslin, Thomas Ihn and Werner Wegscheider in the Laboratory for Solid State Physics and Oded Zilberberg at the Institute of Theoretical Physics, now introduces a novel general scenario for realizing electron optics in two dimensions.

    The main functional principle of optical interferometers is the interference of monochromatic waves that propagate in the same direction. In such interferometers, the interference can be observed as a periodic oscillation of the transmitted intensity on varying the wavelength of the light. However, the period of the interference pattern strongly depends on the incident angle of the light, and, as a result, the interference pattern is averaged out if light is sent through the interferometer at all possible incident angles at once. The same arguments apply to the interference of matter waves as described by quantum mechanics, and in particular to interferometers in which electrons interfere.
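    The angle dependence described here is just the textbook Fabry-Pérot resonance condition (standard optics, not a formula from the new paper):

```latex
% Standard Fabry-Perot resonance condition (textbook optics, not a result of the new paper)
\[
2 n d \cos\theta = m \lambda, \qquad m = 1, 2, 3, \dots
\]
% Because the resonant wavelength depends on the incidence angle \theta, summing the
% transmitted intensity over all angles washes out the interference fringes.
```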
    As part of their PhD projects, experimentalist Matija Karalic and theorist Antonio Štrkalj have investigated the phenomenon of electronic interference in a solid-state system consisting of two coupled semiconductor layers, InAs and GaSb. They discovered that the band inversion and hybridization present in this system provide a novel transport mechanism that guarantees non-vanishing interference even when all angles of incidence occur. Through a combination of transport measurements and theoretical modelling, they found that their devices operate as a Fabry-Pérot interferometer in which electrons and holes form hybrid states and interfere.
    The significance of these results extends well beyond the specific InAs/GaSb realization explored in this work, as the reported mechanism requires only the two ingredients of band inversion and hybridization. New paths are therefore open for engineering electron-optical phenomena in a broad variety of materials.

    Story Source:
    Materials provided by ETH Zurich Department of Physics.

    Journal Reference:
    Matija Karalic, Antonio Štrkalj, Michele Masseroni, Wei Chen, Christopher Mittag, Thomas Tschirky, Werner Wegscheider, Thomas Ihn, Klaus Ensslin, Oded Zilberberg. Electron-Hole Interference in an Inverted-Band Semiconductor Bilayer. Physical Review X, 2020; 10 (3) DOI: 10.1103/PhysRevX.10.031007


  • Wireless aquatic robot could clean water and transport cells

    Researchers at Eindhoven University of Technology developed a tiny plastic robot, made of responsive polymers, which moves under the influence of light and magnetism. In the future this ‘wireless aquatic polyp’ should be able to attract and capture contaminant particles from the surrounding liquid or pick up and transport cells for analysis in diagnostic devices. The researchers published their results in the journal PNAS.
    The mini robot is inspired by a coral polyp, a small soft creature with tentacles that makes up the corals in the ocean. Doctoral candidate Marina Pilz Da Cunha: “I was inspired by the motion of these coral polyps, especially their ability to interact with the environment through self-made currents.” The stem of the living polyps makes a specific movement that creates a current which attracts food particles. Subsequently, the tentacles grab the food particles floating by.
    The wireless artificial polyp is 1 by 1 cm and has a stem that reacts to magnetism and tentacles that are steered by light. “Combining two different stimuli is rare since it requires delicate material preparation and assembly, but it is interesting for creating untethered robots because it allows for complex shape changes and tasks to be performed,” explains Pilz Da Cunha. The tentacles are moved by shining light on them. Different wavelengths lead to different results. For example, the tentacles ‘grab’ under the influence of UV light, while they ‘release’ with blue light.
    FROM LAND TO WATER
    The device now presented can grab and release objects underwater, a new capability compared with the light-guided package-delivery mini robot the researchers presented earlier this year. That land-based robot couldn’t work underwater, because the polymers making it up act through photothermal effects: the heat generated by the light, rather than the light itself, fueled the robot. Pilz Da Cunha: “Heat dissipates in water, which makes it impossible to steer the robot under water.” She therefore developed a photomechanical polymer material that moves under the influence of light only, not heat.

    And that is not its only advantage. Besides operating underwater, this new material can hold its deformation after being activated by light. While the photothermal material immediately returns to its original shape after the stimulus has been removed, the molecules in the photomechanical material actually take on a new state. This allows different stable shapes to be maintained for a longer period of time. “That helps to control the gripper arm; once something has been captured, the robot can keep holding it until it is addressed by light once again to release it,” says Pilz Da Cunha.
    FLOW ATTRACTS PARTICLES
    By placing a rotating magnet underneath the robot, the stem circles around its axis. Pilz Da Cunha: “It was therefore possible to actually move floating objects in the water towards the polyp, in our case oil droplets.”
    The position of the tentacles (open, closed or something in between), turned out to have an influence on the fluid flow. “Computer simulations, with different tentacle positions, eventually helped us to understand and get the movement of the stem exactly right. And to ‘attract’ the oil droplets towards the tentacles,” explains Pilz Da Cunha.
    OPERATION INDEPENDENT OF THE WATER COMPOSITION
    An added advantage is that the robot operates independently of the composition of the surrounding liquid. This is unique, because the dominant stimuli-responsive materials used for underwater applications today, hydrogels, are sensitive to their environment. Hydrogels therefore behave differently in contaminated water. Pilz Da Cunha: “Our robot also works in the same way in salt water, or water with contaminants. In fact, in the future the polyp may be able to filter contaminants out of the water by catching them with its tentacles.”

    NEXT STEP: SWIMMING ROBOT
    PhD student Pilz Da Cunha is now working on the next step: an array of polyps that can work together. She hopes to realize the transport of particles, in which one polyp passes a package on to the next. A swimming robot is also on her wish list. Here, she thinks of biomedical applications such as capturing specific cells.
    To achieve this, the researchers still have to work on the wavelengths to which the material responds. “UV light affects cells and the depth of penetration in the human body is limited. In addition, UV light might damage the robot itself, making it less durable. Therefore we want to create a robot that doesn’t need UV light as a stimulus,” concludes Pilz Da Cunha.
    Video: https://www.youtube.com/watch?v=QYklipdzesI&feature=emb_logo

  • 'Knock codes' for smartphone security are easily predicted

    Smartphone owners who unlock their devices with knock codes aren’t as safe as they think, according to researchers from New Jersey Institute of Technology, the George Washington University and Ruhr University Bochum.
    Knock codes work by letting people select patterns to tap on a phone’s locked screen. LG popularized the method in 2014, and there are now approximately 700,000 people using it in the U.S. alone, along with one million downloads worldwide of clone applications for Google Android devices, the researchers said.
    Raina Samuel, a doctoral student in computer science at NJIT’s Ying Wu College of Computing, said she had the idea for this research while attending a security conference in 2017.
    “During that conference I heard our co-author Adam Aviv give a presentation. He was talking about passwords, PINs, shoulder surfing and how these mobile methods of authentication can be manipulated and insecure sometimes,” she said. “At the time, I had an LG phone and I was using the knock codes. It was a bit of a personal interest for me.”
    Knock codes typically present users with a 2-by-2 grid, which must be tapped in the correct sequence to unlock their phone. The sequence is between six and ten taps. The researchers analyzed how easily an attacker could guess a tapping pattern.
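    For scale, the nominal keyspace of such a scheme is easy to count; the figure below is simple arithmetic, not a number quoted by the researchers.

```python
# Nominal keyspace of 2x2 knock codes with 6-10 taps (simple counting, not from the paper).
keyspace = sum(4 ** length for length in range(6, 11))
print(keyspace)  # 1,396,736 possible codes in theory
```

    The security problem is therefore not the raw count of possible codes but how unevenly users actually sample it.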
    In an online study, 351 participants picked codes. The researchers found that 65% of users started their codes in the top left corner, often proceeding to the top right corner next, which could be attributed to Western reading habits. They also found that increasing the size of the grid didn’t help, instead making the users more likely to pick shorter codes.

    “Knock codes really intrigued me as I have spent a lot of time working on other mobile authentication options, such as PINs or Android patterns, and had never heard of these,” Aviv, an associate professor of computer science at GW, said. “Turns out, while less popular than PINs or patterns, there are still a surprising number of people using knock codes, so it’s important to understand the security and usability properties of them.”
    The researchers also tested a blocklist of common codes, so that survey participants would pick something harder to guess. The list contained the 30 most popular codes. The first three were:
    Top left, top right, bottom left, bottom right, top left, top right (Hourglass shape)
    Top left, top right, bottom right, bottom left, top left, top right (Square shape)
    Top left, top left, top right, top right, bottom left, bottom left (Number 7 shape)
    The researchers said there should be a feature that blocks codes which are too easy to guess and advises users to pick stronger ones, similar to how some websites respond when users create password-protected accounts.
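    Such a blocklist is straightforward to enforce when a code is first set; the sketch below is a hypothetical illustration, with the tap encoding and function names invented for the example rather than taken from the study's implementation.

```python
# Hypothetical enrollment-time blocklist check for knock codes.
# Taps are encoded as grid positions: TL, TR, BL, BR.
BLOCKLIST = {
    ("TL", "TR", "BL", "BR", "TL", "TR"),   # hourglass shape
    ("TL", "TR", "BR", "BL", "TL", "TR"),   # square shape
    ("TL", "TL", "TR", "TR", "BL", "BL"),   # number-7 shape
    # ...the study's full list contained the 30 most popular codes
}

def accept_code(code: tuple[str, ...]) -> bool:
    """Reject codes that are the wrong length or too common to be safe."""
    if not 6 <= len(code) <= 10:
        return False
    return code not in BLOCKLIST

print(accept_code(("TL", "TR", "BL", "BR", "TL", "TR")))  # False: on the blocklist
print(accept_code(("BR", "TL", "BR", "TR", "BL", "TL")))  # True
```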
    The study showed that knock codes are difficult to memorize. Approximately one in ten participants forgot their code by the end of the study, even though it lasted only five minutes. In addition, entering a knock code to unlock the display took 5 seconds on average, compared with about 4.5 seconds for a PIN and only 3 seconds for an Android unlock pattern.
    The research team also included Ruhr University’s Philipp Markert. Aviv asked Markert to join their project when peer reviewers said the study of knock code patterns should be done on phones, not on computer simulations. Markert adapted the study’s programming for this change.
    “I’m always interested in new authentication schemes, and I worked with Adam on a similar project about PINs, so when he asked me to join the team, I didn’t think twice.” Markert said.
    The paper will be presented at the 16th Symposium on Usable Privacy and Security, held concurrently with the prestigious USENIX Security Symposium August 9-11. Funding was supplied by the Army Research Laboratory, National Science Foundation and Germany’s North Rhine-Westphalian Experts on Research in Digitalization.

  • Links between video games and gambling run deeper than previously thought, study reveals

    A range of video game practices have potentially dangerous links to problem gambling, a study has revealed.
    Building on previous research by the same author, which exposed a link between problem gambling and video game loot boxes, the new study suggests that a number of other practices in video games, such as token wagering, real-money gaming, and social casino spending, are also significantly linked to problem gambling.
    The research provides evidence that players who engage in these practices are also more likely to suffer from disordered gaming — a condition where persistent and repeated engagement with video games causes an individual significant impairment or distress.
    Author of the study, Dr David Zendle from the Department of Computer Science at the University of York, said: “These findings suggest that the relationship between gaming and problem gambling is more complex than many people think.”
    “When we go beyond loot boxes, we can see that there are multiple novel practices in gaming that incorporate elements of gambling. All of them are linked to problem gambling, and all seem prevalent. This may pose an important public health risk. Further research is urgently needed.”
    For the study, just under 1,100 participants were quota-sampled to represent the UK population in terms of age, gender, and ethnicity. They were then asked about their gaming and gambling habits.
    The study revealed that a significant proportion (18.5%) of the participants had engaged in some behaviour that related to both gaming and gambling, such as playing a social casino game or spending money on a loot box.
    Dr Zendle added: “There are currently loopholes that mean some gambling-related elements of video games avoid regulation. For example, social casinos are ‘video games’ that are basically a simulation of gambling: you can spend real money in them, and the only thing that stops them being regulated as proper gambling is that winnings cannot be converted into cash.
    “We need to have regulations in place that address all of the similarities between gambling and video games. Loot boxes aren’t the only element of video games that overlaps with gambling: They’re just a tiny symptom of this broader convergence.”
    Last year, University of York academics, including Dr David Zendle, contributed to a House of Commons select committee inquiry whose report called for video game loot boxes to be regulated under gambling law and for their sale to children to be banned. Dr Zendle also provided key evidence to the recent House of Lords select committee inquiry that likewise produced a report recommending the regulation of loot boxes as gambling.

    Story Source:
    Materials provided by University of York.