More stories

  • Ultrafast laser pulses could lessen data storage energy needs

    A discovery from an experiment with magnets and lasers could be a boon to energy-efficient data storage.
    “We wanted to study the physics of light-magnet interaction,” said Rahul Jangid, who led the data analysis for the project while earning his Ph.D. in materials science and engineering at UC Davis under associate professor Roopali Kukreja. “What happens when you hit a magnetic domain with very short pulses of laser light?”
    Domains are regions within a magnet whose magnetization can be flipped between north and south poles. This property is used for data storage, for example in computer hard drives.
    Jangid and his colleagues found that when a magnet is hit with a pulsed laser, the domain walls in the ferromagnetic layers move at approximately 66 km/s, about 100 times faster than the previously accepted speed limit.
    Domain walls moving at this speed could drastically affect the way data is stored and processed, offering a means of faster, more stable memory and reducing energy consumption in spintronic devices such as hard disk drives that use the spin of electrons within magnetic metallic multilayers to store, process or transmit information.
    “No one thought it was possible to move these walls that fast because they should hit their limit,” said Jangid. “It sounds absolutely bananas, but it’s true.”
    It’s “bananas” because of the Walker breakdown phenomenon, which holds that domain walls can only be driven so fast before they effectively break down and stop moving. This research, however, gives evidence that lasers can drive domain walls at velocities far beyond that limit.

    While most personal devices like laptops and cell phones use faster flash storage, data centers use cheaper, slower hard disk drives. However, each time a bit of information is processed, or flipped, the drive generates a magnetic field by running current through a coil of wire, wasting a lot of energy as heat. If, instead, a drive could use laser pulses on the magnetic layers, the device would operate at a lower voltage and bit flips would take significantly less energy.
    Current projections indicate that by 2030, information and communications technology will account for 21% of the world’s energy demand, exacerbating climate change. The finding, reported in a paper by Jangid and co-authors titled “Extreme Domain Wall Speeds under Ultrafast Optical Excitation” in the journal Physical Review Letters on Dec. 19, comes at a time when energy-efficient technologies are paramount.
    When laser meets magnet
    To conduct the experiment, Jangid and his collaborators, including researchers from the National Institute of Standards and Technology; UC San Diego; the University of Colorado, Colorado Springs; and Stockholm University, used the Free Electron Laser Radiation for Multidisciplinary Investigations (FERMI) facility, a free-electron laser source based in Trieste, Italy.
    “Free electron lasers are insane facilities,” Jangid said. “It’s a 2-mile-long vacuum tube, and you take a small number of electrons, accelerate them up to the speed of light, and at the end wiggle them to create X-rays so bright that, if you’re not careful, your sample could be vaporized. Think of it like taking all the sunlight falling on the Earth and focusing it all on a penny — that’s how much photon flux we have at free electron lasers.”
    At FERMI, the group used X-rays to measure what occurs when a nano-scale magnet with multiple layers of cobalt, iron and nickel is excited by femtosecond pulses. A femtosecond is 10⁻¹⁵ seconds, or one-millionth of one-billionth of a second.

    “There are more femtoseconds in one second than there are days in the age of the universe,” Jangid said. “These are extremely small, extremely fast measurements that are difficult to wrap your head around.”
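    A quick back-of-the-envelope check of that comparison (illustrative arithmetic only, assuming a universe age of about 13.8 billion years):

    ```python
    # Femtoseconds in one second: 1 s / 1e-15 s
    fs_per_second = 1 / 1e-15                 # 1e15 femtoseconds

    # Days in the age of the universe (~13.8 billion years)
    days_in_universe = 13.8e9 * 365.25        # ~5e12 days

    print(f"{fs_per_second:.2e} femtoseconds in one second")
    print(f"{days_in_universe:.2e} days in the age of the universe")
    assert fs_per_second > days_in_universe   # holds, by a factor of ~200
    ```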
    Jangid, who was analyzing the data, saw that it was these ultrafast laser pulses exciting the ferromagnetic layers that set the domain walls in motion. Based on how fast those domain walls were moving, the study posits that ultrafast laser pulses can switch a stored bit of information approximately 1,000 times faster than the magnetic-field or spin-current-based methods in use now.
    The future of ultrafast phenomena
    The technology is far from being practically applied, as current lasers consume a lot of power. However, a process similar to the way compact discs, or CDs, use lasers to store information and CD players use lasers to play it back could potentially work in the future, Jangid said.
    The next steps include further exploring the physics of mechanisms that enable ultrafast domain wall velocities higher than the previously known limits, as well as imaging the domain wall motion.
    This research will continue at UC Davis under Kukreja. Jangid is now pursuing similar research at National Synchrotron Light Source 2 at Brookhaven National Laboratory.
    “There are so many aspects of ultrafast phenomena that we are just starting to understand,” Jangid said. “I’m eager to tackle the open questions that could unlock transformative advancements in low-power spintronics, data storage, and information processing.”

  • Tiny AI-based bio-loggers revealing the interesting bits of a bird’s day

    Have you ever wondered what wild animals do all day? Documentaries offer a glimpse into their lives, but animals under a watchful eye rarely do anything interesting, and the true essence of their behavior remains elusive. Now, researchers from Japan have developed a camera system that can capture these behaviors.
    In a study recently published in PNAS Nexus, researchers from Osaka University have created a small sensor-based data logger (called a bio-logger) that automatically detects and records video of infrequent behaviors in wild seabirds without supervision by researchers.
    Infrequent behaviors, such as diving into the water for food, can lead to new insights or even new directions in research. But observing enough of these behaviors to draw any conclusions is difficult, especially when they take place in an environment that is not hospitable to humans, such as the open ocean. As a result, the detailed behaviors of these animals remain largely unknown.
    “Video cameras attached to the animal are an excellent way to observe behavior,” says Kei Tanigaki, lead author of the study. However, video cameras are very power hungry, and this leads to a trade-off. “Either the video only records until the battery runs out, in which case you might miss the rare behavior, or you use a larger, heavier battery, which is not suitable for the animal.”
    To avoid having to make this choice for the wild seabirds under study, the team uses low-power sensors, such as accelerometers, to determine when an unusual behavior is taking place. The camera is then turned on, the behavior is recorded, and the camera powers off until the next time. This bio-logger is the first to use artificial intelligence for this task.
    “We use a method called an isolation forest,” says Takuya Maekawa, senior author. “This method detects outlier events well, but like many other artificial intelligence algorithms, it is computationally complex. This means, like the video cameras, it is power hungry.” For the bio-loggers, the researchers needed a lightweight algorithm, so they trained the original isolation forest on their data and then used it as a “teacher” to train a smaller “student” outlier detector installed on the bio-logger.
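    A minimal sketch of that teacher-student setup, assuming scikit-learn and synthetic accelerometer features (the authors’ actual features, model sizes and thresholds are not given here):

    ```python
    import numpy as np
    from sklearn.ensemble import IsolationForest
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    # Stand-in for windowed accelerometer features; real bio-logger
    # data would replace this synthetic matrix.
    X = rng.normal(size=(10_000, 6))

    # "Teacher": a full isolation forest flags outlier windows offline.
    teacher = IsolationForest(n_estimators=100, random_state=0).fit(X)
    is_outlier = (teacher.predict(X) == -1).astype(int)  # 1 = rare behavior

    # "Student": a tiny tree distilled from the teacher's labels,
    # cheap enough to run on the bio-logger itself.
    student = DecisionTreeClassifier(max_depth=4, random_state=0)
    student.fit(X, is_outlier)

    def should_record(window: np.ndarray) -> bool:
        """On-device check: power the camera on for unusual windows."""
        return bool(student.predict(window.reshape(1, -1))[0])
    ```

    The teacher runs once on archived data; only the distilled student has to fit within the logger’s power budget.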
    The final bio-logger weighs 23 g, less than 5% of the body weight of the streaked shearwaters under study. Eighteen bio-loggers were deployed, yielding a total of 205 hours of low-power sensor data and 76 five-minute videos. This was enough data to reveal novel aspects of the birds’ head-shaking and foraging behaviors.
    This approach, which overcomes the battery-life limitation of most bio-loggers, will help us understand the behaviors of wildlife that venture into human-inhabited areas. It will also enable observation of animals in extreme environments inaccessible to humans. This means that many other rare behaviors — from sweet-potato washing by Japanese monkeys to penguins feeding on jellyfish — can now be studied in the future.

  • New AI makes better permafrost maps

    New insights from artificial intelligence about permafrost coverage in the Arctic may soon give policy makers and land managers the high-resolution view they need to predict climate-change-driven threats to infrastructure such as oil pipelines, roads and national security facilities.
    “The Arctic is warming four times faster than the rest of the globe, and permafrost is a component of the Arctic that’s changing really rapidly,” said Evan Thaler, a Chick Keller Postdoctoral Fellow at Los Alamos National Laboratory. Thaler is corresponding author of a paper published in the journal Earth and Space Science on an innovative application of AI to permafrost data.
    “Current models don’t give the resolution needed to understand how permafrost thaw is changing the environment and affecting infrastructure,” Thaler said. “Our model creates high-resolution maps telling us where permafrost is now and where it is likely to change in the future.”
    The AI models also identify the landscape and ecological features driving the predictions, such as vegetative greenness, landscape slope angle and the duration of snow cover.
    AI versus field data
    Thaler was part of a team with fellow Los Alamos researchers Joel Rowland, Jon Schwenk and Katrina Bennett, plus collaborators from Lawrence Berkeley National Laboratory, that used a form of AI called supervised machine learning. The work tested the accuracy of three different AI approaches against field data collected by Los Alamos researchers from three watersheds with patchy permafrost on the Seward Peninsula in Alaska.
    Permafrost, or ground that stays below freezing for two years or more, covers about one-sixth of the exposed land in the Northern Hemisphere, Thaler said. Thawing permafrost is already disrupting roads, oil pipelines and other facilities built over it, and it carries a range of environmental hazards as well.

    As air temperatures warm under climate change, the thawing ground releases water. It flows to lower terrain, rivers, lakes and the ocean, causing land-surface subsidence, transporting minerals, altering the direction of groundwater, changing soil chemistry and releasing carbon to the atmosphere.
    Useful results
    The resolution of the most widely used current pan-arctic model for permafrost is about one-third square mile, far too coarse to predict how changing permafrost will undermine a road or pipeline, for instance. The new Los Alamos AI model determines surface permafrost coverage to a resolution of just under 100 square feet, smaller than a typical parking space and far more practical for assessing risk at a specific location.
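    To put those two resolutions side by side (illustrative arithmetic based on the figures quoted above):

    ```python
    SQFT_PER_SQMI = 5280 ** 2            # 27,878,400 square feet per square mile

    pan_arctic_cell = SQFT_PER_SQMI / 3  # ~one-third of a square mile, in sq ft
    new_model_cell = 100                 # just under 100 square feet

    print(f"{pan_arctic_cell / new_model_cell:,.0f}x smaller map cells")
    # -> roughly 93,000x finer-grained than the pan-arctic model
    ```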
    Using their AI model trained on data from three sites on the Seward Peninsula, the team generated a map showing large areas without any permafrost around the Seward sites, matching the field data with 83% accuracy. Using the pan-arctic model for comparison, the team generated a map of the same sites with only 50% accuracy.
    “It’s the highest accuracy pan-arctic product to date, but it obviously isn’t good enough for site-specific predictions,” Thaler said. “The pan-arctic product predicts 100% of that site is permafrost, but our model predicts only 68%, which we know is closer to the real percentage based on field data.”
    Feeding the AI models
    This initial study proved the concept of the Los Alamos model on the Seward data, delivering acceptable accuracy for terrain similar to the location where the field data was collected. To measure each model’s transferability, the team also trained it on data from one site and then ran it on data from a second site, with different terrain, that it had not seen before. None of the models transferred well: none produced a map matching the actual findings at the second site.

    Thaler said the team will do additional work on the AI algorithms to improve the model’s transferability to other areas across the Arctic. “We want to be able to train on one data set and then apply the model to a place it hasn’t seen before. We just need more data from more diverse landscapes to train the models, and we hope to collect that data soon,” he said.
    Part of the study involved comparing the accuracy of three different AI approaches — extremely randomized trees, support vector machines and an artificial neural network — to see which model came closest to matching the “ground truth” data gathered in field observations at the Seward Peninsula. Part of that data was used to train the AI models. Each model then generated a map based on unseen data predicting the extent of near-surface permafrost.
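    In scikit-learn terms, that three-way comparison looks roughly like the sketch below (placeholder data throughout; the team’s actual features, labels and tuning are not described in this article):

    ```python
    import numpy as np
    from sklearn.ensemble import ExtraTreesClassifier
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(42)

    # Placeholder terrain features (e.g., greenness, slope angle, snow
    # duration) and binary near-surface permafrost labels.
    X = rng.normal(size=(2_000, 3))
    y = (X @ np.array([1.5, -2.0, 1.0]) + rng.normal(size=2_000) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    models = {
        "extremely randomized trees": ExtraTreesClassifier(random_state=0),
        "support vector machine": SVC(),
        "artificial neural network": MLPClassifier(max_iter=1000, random_state=0),
    }

    for name, model in models.items():
        model.fit(X_train, y_train)
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"{name}: {acc:.1%} accuracy against held-out 'ground truth'")
    ```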
    While the Los Alamos research demonstrated a marked improvement over the best — and widely used — pan-arctic model, the results from the team’s three AI models were mixed, with the support vector machines showing the most promise for transferability.

  • Online versus reality: Social media influences perceptions

    People may form inaccurate impressions of us from our social media posts, finds new Cornell University research that is the first to examine perceptions of our personalities based on social media posts.
    An analysis of Facebook status updates found substantial discrepancies, across a range of personality traits, between how viewers saw the authors and how the authors saw themselves. Viewers rated the Facebook users on average as having lower self-esteem and being more self-revealing, for example, than the users rated themselves.
    Status updates containing photos, video or links in addition to text facilitated more accurate assessments than those with just text, the researchers found. Overall, they said, the study sheds light on the dynamic process by which a cyber audience tries to make sense of who we are from isolated fragments of shared information, jointly constructing our digital identity.
    “The impression people form about us on social media based on what we post can differ from the way we view ourselves,” said Qi Wang, professor of psychology and director of the Culture & Cognition Lab. “A mismatch between who we are and how people perceive us could influence our ability to feel connected online and the benefits of engaging in social media interaction.”
    Wang is the lead author of “The Self Online: When Meaning-Making is Outsourced to the Cyber Audience,” published in PLOS One.
    Prior research has focused on perceptions of personality traits gleaned from personal websites, such as blogs or online profiles, finding that readers can assess them accurately. The Cornell researchers believe their study is the first to investigate audience perceptions of social media users through their posts, on platforms where users often don’t share cohesive personal narratives while interacting with “friends” they may know only a little or sometimes not at all.
    Interestingly, the study found that Facebook status updates generated perceptions of users that were consistent with cultural norms in offline contexts concerning gender and ethnicity — even though viewers were blind to their identities. For example, female Facebook users were rated as more extraverted than male users, in line with general findings that women score higher on extraversion. White Facebook users were seen as being more extraverted and having greater self-esteem than Asian users, whose cultures place more emphasis on modesty, Wang said.

    “We present ourselves in line with our cultural frameworks,” she said, “and others can discern our ‘cultured persona’ through meaning making of our posts.”
    The scholars said future research should explore this “outsourced meaning-making process” with larger samples of posts, and on other popular platforms such as Instagram and X, formerly known as Twitter.
    Wang said the findings could help developers design interfaces that allow people to express themselves most authentically. For users, misunderstandings about who they are on social media might not cause direct harm, she said, but could hinder their efforts to foster good communication and relationships.
    “If people’s view of us is very different from who we actually are, or how we would like to be perceived,” Wang said, “it could undermine our social life and well-being.”

  • New deepfake detector designed to be less biased

    The image spoke for itself.
    University at Buffalo computer scientist and deepfake expert Siwei Lyu created a photo collage out of the hundreds of faces that his detection algorithms had incorrectly classified as fake — and the new composition clearly had a predominantly darker skin tone.
    “A detection algorithm’s accuracy should be statistically independent from factors like race,” Lyu says, “but obviously many existing algorithms, including our own, inherit a bias.”
    Lyu, PhD, co-director of the UB Center for Information Integrity, and his team have now developed what they believe are the first-ever deepfake detection algorithms specifically designed to be less biased.
    Their two machine learning methods — one that makes algorithms aware of demographics and one that leaves them blind to demographics — reduced disparities in accuracy across races and genders while, in some cases, still improving overall accuracy.
    The research was presented at the Winter Conference on Applications of Computer Vision (WACV), held Jan. 4-8, and was supported in part by the U.S. Defense Advanced Research Projects Agency (DARPA).
    Lyu, the study’s senior author, collaborated with his former student, Shu Hu, PhD, now an assistant professor of computer and information technology at Indiana University-Purdue University Indianapolis, as well as George Chen, PhD, assistant professor of information systems at Carnegie Mellon University. Other contributors include Yan Ju, a PhD student in Lyu’s Media Forensic Lab at UB, and postdoctoral researcher Shan Jia.

    Ju, the study’s first author, says detection tools are often less scrutinized than the artificial intelligence tools they keep in check, but that doesn’t mean they don’t need to be held accountable, too.
    “Deepfakes have been so disruptive to society that the research community was in a hurry to find a solution,” she says, “but even though these algorithms were made for a good cause, we still need to be aware of their collateral consequences.”
    Demographic-aware vs. demographic-agnostic
    Recent studies have found large disparities in deepfake detection algorithms’ error rates across races — up to a 10.7% difference in one study. In particular, it’s been shown that some are better at judging the authenticity of lighter-skinned subjects than darker-skinned ones.
    This can result in certain groups being more at risk of having their real image pegged as a fake, or perhaps even more damaging, a doctored image of them pegged as real.
    The problem is not necessarily the algorithms themselves, but the data they’ve been trained on. Middle-aged white men are often overrepresented in such photo and video datasets, so the algorithms are better at analyzing them than they are at analyzing underrepresented groups, says Lyu, SUNY Empire Professor in the UB Department of Computer Science and Engineering, within the School of Engineering and Applied Sciences.

    “Say one demographic group has 10,000 samples in the dataset and the other only has 100. The algorithm will sacrifice accuracy on the smaller group in order to minimize errors on the larger group,” he adds. “So it reduces overall errors, but at the expense of the smaller group.”
    While other studies have attempted to make databases more demographically balanced — a time-consuming process — Lyu says his team’s study is the first attempt to actually improve the fairness of the algorithms themselves.
    To explain their method, Lyu uses an analogy of a teacher being evaluated by student test scores.
    “If a teacher has 80 students do well and 20 students do poorly, they’ll still end up with a pretty good average,” he says. “So instead we want to give a weighted average to the students around the middle, forcing them to focus more on everyone instead of the dominating group.”
    First, their demographic-aware method supplied algorithms with datasets that labeled subjects’ gender — male or female — and race — white, Black, Asian or other — and instructed them to minimize errors on the less represented groups.
    “We’re essentially telling the algorithms that we care about overall performance, but we also want to guarantee that the performance of every group meets certain thresholds, or at least is only so much below the overall performance,” Lyu says.
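    One simple way to encode that kind of guarantee (an illustrative sketch, not the paper’s actual objective; the slack threshold is an assumption):

    ```python
    import numpy as np

    def group_aware_loss(per_sample_loss: np.ndarray,
                         group_ids: np.ndarray,
                         slack: float = 0.05) -> float:
        """Mean loss plus a penalty whenever any demographic group's
        mean loss exceeds the overall mean by more than `slack`."""
        overall = per_sample_loss.mean()
        penalty = 0.0
        for g in np.unique(group_ids):
            group_loss = per_sample_loss[group_ids == g].mean()
            penalty += max(0.0, group_loss - (overall + slack))
        return overall + penalty

    # Toy example: the small group (1) has much higher loss than group 0,
    # so the objective is penalized until the gap closes.
    losses = np.array([0.1, 0.1, 0.1, 0.1, 0.8, 0.9])
    groups = np.array([0, 0, 0, 0, 1, 1])
    print(group_aware_loss(losses, groups))
    ```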
    However, datasets typically aren’t labeled for race and gender. Thus, the team’s demographic-agnostic method classifies deepfake videos not based on the subjects’ demographics — but on features in the video not immediately visible to the human eye.
    “Maybe a group of videos in the dataset corresponds to a particular demographic group or maybe it corresponds with some other feature of the video, but we don’t need demographic information to identify them,” Lyu says. “This way, we do not have to handpick which groups should be emphasized. It’s all automated based on which groups make up that middle slice of data.”
    Improving fairness — and accuracy
    The team tested their methods using the popular FaceForensics++ dataset and the state-of-the-art Xception detection algorithm. This improved all of the algorithm’s fairness metrics, such as the gap in false positive rates across races, with the demographic-aware method performing best of all.
    Most importantly, Lyu says, their methods actually increased the overall detection accuracy of the algorithm — from 91.49% to as high as 94.17%.
    However, when the Xception algorithm was used with different datasets, or FaceForensics++ with different algorithms, the methods — while still improving most fairness metrics — slightly reduced overall detection accuracy.
    “There can be a small tradeoff between performance and fairness, but we can guarantee that the performance degradation is limited,” Lyu says. “Of course, the fundamental solution to the bias problem is improving the quality of the datasets, but for now, we should incorporate fairness into the algorithms themselves.”

  • ‘Smart glove’ can boost hand mobility of stroke patients

    This month, a group of stroke survivors in B.C. will test a new technology designed to aid their recovery, and ultimately restore use of their limbs and hands.
    Participants will wear a groundbreaking new “smart glove” capable of tracking their hand and finger movements during rehabilitation exercises supervised by Dr. Janice Eng, a leading stroke rehabilitation specialist and professor of medicine at UBC.
    The glove incorporates a sophisticated network of highly sensitive sensor yarns and pressure sensors that are woven into a comfortable stretchy fabric, enabling it to track, capture and wirelessly transmit even the smallest hand and finger movements.
    “With this glove, we can monitor patients’ hand and finger movements without the need for cameras. We can then analyze and fine-tune their exercise programs for the best possible results, even remotely,” says Dr. Eng.
    Precision in a wearable device
    UBC electrical and computer engineering professor Dr. Peyman Servati, PhD student Arvin Tashakori and their team at their startup, Texavie, created the smart glove for collaboration on the stroke project. Dr. Servati highlighted a number of breakthroughs, described in a paper published last week in Nature Machine Intelligence.
    “This is the most accurate glove we know of that can track hand and finger movement and grasping force without requiring motion-capture cameras. Thanks to machine learning models we developed, the glove can accurately determine the angles of all finger joints and the wrist as they move. The technology is highly precise and fast, capable of detecting small stretches and pressures and predicting movement with at least 99-per-cent accuracy — matching the performance of costly motion-capture cameras.”
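    As a rough illustration of learning this kind of sensor-to-joint-angle mapping (a sketch with synthetic data; Texavie’s real sensor layout, model architecture and training are assumptions here):

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(7)

    # Synthetic stand-in: stretch/pressure yarn readings (inputs) mapped
    # to finger and wrist joint angles in degrees (outputs).
    n_samples, n_sensors, n_joints = 5_000, 16, 20
    sensor_readings = rng.normal(size=(n_samples, n_sensors))
    true_map = rng.normal(size=(n_sensors, n_joints))
    joint_angles = np.tanh(sensor_readings @ true_map) * 45  # bounded angles

    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                         random_state=0)
    model.fit(sensor_readings, joint_angles)

    # Predict joint angles for one new frame of sensor data.
    frame = rng.normal(size=(1, n_sensors))
    print(model.predict(frame).round(1))
    ```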
    Unlike other products in the market, the glove is wireless and comfortable, and can be easily washed after removing the battery. Dr. Servati and his team have developed advanced methods to manufacture the smart gloves and related apparel at a relatively low cost locally.

    Augmented reality and robotics
    Dr. Servati envisions a seamless transition of the glove into the consumer market with ongoing improvements, in collaboration with different industrial partners. The team also sees potential applications in virtual reality and augmented reality, animation and robotics.
    “Imagine being able to accurately capture hand movements and interactions with objects and have it automatically display on a screen. There are endless applications. You can type text without needing a physical keyboard, control a robot, or translate American Sign Language into written speech in real time, providing easier communication for individuals who are deaf or hard of hearing.”

  • Advancement in thermoelectricity could light up the Internet of Things

    Imagine stoplights and cars communicating with each other to optimize the flow of traffic. This isn’t science fiction — it’s the Internet of Things (IoT), i.e., objects that sense their surroundings and respond via the internet. As the global population rises and such technologies continue to develop, you might wonder — what will power this digital world of tomorrow?
    Wind, solar, yes. Something all around us might not immediately come to mind though — heat. Now, in a study recently published in Nature Communications, a multi-institutional research team including Osaka University has unveiled a breakthrough in clean energy: greatly improved thermoelectric conversion. One of its many potential applications? That’s right, the IoT.
    Large-scale, global integration of the IoT is limited by the lack of a suitable energy supply. Realistically, an energy supply for the IoT must be local and small scale. Miniaturized thermoelectric conversion can help solve this energy-supply problem by harvesting the otherwise wasted heat from microelectronics as a source of electricity. However, for practical applications, the efficiency of current thermoelectric-energy conversion is insufficient. Improving this efficiency was the goal of the research team’s study.
    “In our work, we demonstrate a two-dimensional electron gas (2DEG) system with multiple subbands that uses gallium arsenide. The system is different from conventional methods of thermoelectric conversion,” explain Yuto Uematsu and Yoshiaki Nakamura, lead and senior authors of the study. “Our system facilitates better conversion from temperature (heat) to electricity, and improves the mobility of electrons in their 2D sheet. This readily benefits everyday devices like semiconductors.”
    Incredibly, the researchers were able to improve the power factor of thermoelectric conversion by a factor of 4 compared with conventional 2DEG systems. Other technologies like resonant scattering have not been as efficient for thermoelectric conversion.
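    For context, the figures of merit involved are standard (textbook definitions, not specific to this study): the power factor combines the Seebeck coefficient S with the electrical conductivity σ, and the dimensionless figure of merit ZT additionally folds in the thermal conductivity κ and the absolute temperature T:

    ```latex
    \mathrm{PF} = S^{2}\sigma
    \qquad
    ZT = \frac{S^{2}\sigma}{\kappa}\,T
    ```

    A fourfold improvement in PF at fixed κ and T would therefore translate directly into a fourfold improvement in ZT.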
    The team’s findings could open the way to a sustainable power source for the IoT. Thin thermoelectric films on substrates made of gallium arsenide would be suitable for IoT application. For example, these could power environmental monitoring systems in remote locations or wearable devices for medical monitoring.
    “We’re excited because we have expanded upon the principles of a process that is crucial to clean energy and the development of a sustainable IoT,” says Yoshiaki Nakamura, senior author. “What’s more, our methodology can be applied to any element-based material; the practical applications are far reaching.”
    This work is an important step forward in maximizing the utility of thermoelectric power generation in modern microelectronics and is especially suitable for the IoT. As the results are not limited to gallium arsenide, further advancements to the system are possible, with sustainability and the IoT potentially benefitting greatly.

  • Do violent video games numb us towards real violence?

    Neuroscientists from the University of Vienna and the Karolinska Institute in Stockholm have investigated whether playing violent video games leads to a reduction in human empathy. To do this, they had adult test subjects play a violent video game repeatedly over the course of an experiment lasting several weeks. Before and after, their empathic responses to the pain of another person were measured. It was found that the violent video game had no discernible effect on empathy and underlying brain activity. These results have now been published in the journal eLife.
    Video games have become an integral part of the everyday life of many children and adults. Many of the most popular video games contain explicit depictions of extreme violence. Therefore, concerns have been raised that these games may blunt the empathy of their players and could therefore lower the inhibition threshold for real violence. An international research team led by Viennese neuroscientists Claus Lamm and Lukas Lengersdorff has now investigated whether this is actually the case.
    The Austrian and Swedish researchers invited a total of 89 adult male subjects to take part in the study. A key selection criterion was that the subjects had had little or no previous contact with violent video games, ensuring that the results were not influenced by prior experience with these games. In a first experimental session, the subjects’ baseline level of empathy was assessed: brain scans recorded how they reacted when a second person was administered painful electric shocks.

    Then, the video game phase of the experiment began, during which the test subjects came to the research laboratory seven times to play a video game for one hour each time. The participants in the experimental group played a highly violent version of the game Grand Theft Auto V and were given the task of killing as many other game characters as possible. In the control group, all violence had been removed from the game and the participants were instead tasked with taking photos of other game characters. Finally, after the video game phase was over, the test subjects were examined a second time to determine whether their empathic responses had changed.
    The analysis of the data showed that the video game violence had no discernible effect on the empathic abilities of the test subjects. The reactions of the participants in the experimental group who were confronted with extreme depictions of violence did not differ statistically from those of the participants who only had to take photos. In addition, there were no significant differences in the activity of brain regions that had been identified in other studies as being associated with empathy — such as the anterior insular and anterior midcingulate cortex.
    Does that mean that concerns about violence in video games are unfounded? The authors advise against jumping to conclusions. “Precisely because this is such a sensitive topic, we have to be very careful when interpreting these results,” explains lead author Lukas Lengersdorff, who carried out the study as part of his doctoral studies. “The conclusion should not be that violent video games are now definitively proven to be harmless. Our study lacks the data to make such statements.” According to the neuroscientist and statistician, the value of the study lies rather in the fact that it allows a sober look at previous results. “A few hours of video game violence have no significant influence on the empathy of mentally healthy adult test subjects. We can clearly draw this conclusion. Our results thus contradict those of previous studies, in which negative effects were reported after just a few minutes of play.” In these previous studies, participants had played the violent video game immediately before data collection. “Such experimental designs are not able to distinguish the short-term and long-term effects of video games,” explains Lengersdorff.
    According to research group leader and co-author Claus Lamm, the study also sets a new standard for future research in this area: “Strong experimental controls and longitudinal research designs that allow causal conclusions to be drawn are needed to make clear statements about the effects of violent video games. We wanted to take a step in this direction with our study.” It is now the task of further research to check whether negative consequences remain absent even after significantly longer exposure to video game violence — and whether this is also the case for vulnerable subpopulations. “The most important question is of course: are children and young people also immune to violence in video games? The young brain is highly plastic, so repeated exposure to depictions of violence could have a much greater effect. But of course these questions are difficult to investigate experimentally without running up against the limits of scientific ethics,” says Lamm.