More stories

  • Revealing the complex magnetization reversal mechanism with topological data analysis

    Spintronic devices and their operation are governed by the microstructures of magnetic domains. These magnetic domain structures undergo complex, drastic changes when an external magnetic field is applied to the system. The resulting fine structures are not reproducible, and it is challenging to quantify the complexity of magnetic domain structures. Our understanding of the magnetization reversal phenomenon is thus limited to crude visual inspections and qualitative methods, representing a severe bottleneck in material design. It has been difficult to predict even the stability and shape of the magnetic domain structures in Permalloy, a well-known material that has been studied for over a century.
    Addressing this issue, a team of researchers headed by Professor Masato Kotsugi from Tokyo University of Science, Japan, recently developed an AI-based method for analyzing material functions more quantitatively. In their work published in Science and Technology of Advanced Materials: Methods, the team used topological data analysis to develop a super-hierarchical and explanatory analysis method for magnetization reversal processes. In simple terms, “super-hierarchical,” according to the research team, refers to the connection between micro- and macro-scale properties, which are usually treated in isolation but jointly contribute to the physical explanation.
    The team quantified the complexity of the magnetic domain structures using persistent homology, a mathematical tool from computational topology that measures topological features of data persisting across multiple scales. The team further visualized the magnetization reversal process in two-dimensional space using principal component analysis, a data analysis procedure that summarizes large datasets with a smaller set of “summary indices,” facilitating visualization and analysis. As Prof. Kotsugi explains, “The topological data analysis can be used for explaining the complex magnetization reversal process and evaluating the stability of the magnetic domain structure quantitatively.” The team discovered that this analysis can detect slight structural changes, invisible to the human eye, that indicate a hidden feature dominating the metastable/stable reversal processes. They also successfully traced the branching of the macroscopic reversal process back to its cause in the original microscopic magnetic domain structure.
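    The pipeline described above, topological descriptors of domain images followed by principal component analysis, can be sketched in a simplified form. The snippet below is an illustrative stand-in, not the authors' method: it counts connected components of binary "domain" images as a crude zero-dimensional topological summary (real persistent homology tracks features across all scales), then projects the resulting feature vectors with a NumPy-only PCA.

    ```python
    import numpy as np

    def betti_0(image):
        """Count connected components (0-dim topological features) of a
        binary image via flood fill; a crude stand-in for one slice of a
        persistence computation."""
        img = image.copy()
        rows, cols = img.shape
        count = 0
        for i in range(rows):
            for j in range(cols):
                if img[i, j]:
                    count += 1
                    stack = [(i, j)]
                    while stack:
                        r, c = stack.pop()
                        if 0 <= r < rows and 0 <= c < cols and img[r, c]:
                            img[r, c] = 0
                            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
        return count

    def pca_2d(features):
        """Project feature vectors onto their two leading principal
        components using an SVD (no external ML library needed)."""
        X = features - features.mean(axis=0)
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        return X @ Vt[:2].T

    # Toy "domain images" at several (hypothetical) field strengths:
    rng = np.random.default_rng(0)
    images = [rng.random((32, 32)) < p for p in np.linspace(0.2, 0.8, 10)]
    # One simple topological descriptor per image at several crop sizes:
    feats = np.array([[betti_0(im[:k, :]) for k in (8, 16, 24, 32)]
                      for im in images], dtype=float)
    coords = pca_2d(feats)  # each image becomes a point in 2D
    print(coords.shape)
    ```

    The 2D coordinates play the role of the paper's low-dimensional visualization: structurally similar states land near each other, so a reversal trajectory traces a curve through this plane.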
    The novelty of this research lies in its ability to connect magnetic domain microstructures and macroscopic magnetic functions freely across hierarchies by applying the latest mathematical advances in topology and machine learning. This enables the detection of subtle microscopic changes and the subsequent prediction of stable/metastable states in advance, which was hitherto impossible. “This super-hierarchical and explanatory analysis would improve the reliability of spintronics devices and our understanding of stochastic/deterministic magnetization reversal phenomena,” says Prof. Kotsugi.
    Interestingly, the new algorithm, with its superior explanatory capability, can also be applied to the study of chaotic phenomena such as the butterfly effect. On the technological front, it could improve the reliability of next-generation magnetic memory writing and aid the development of new hardware for the next generation of devices.
    Story Source:
    Materials provided by Tokyo University of Science. Note: Content may be edited for style and length.

  • Researchers propose methods for automatic detection of doxing

    A new automated approach to detect doxing — a form of cyberbullying in which certain private or personally identifiable information is publicly shared without an individual’s consent or knowledge — may help social media platforms better protect their users, according to researchers from Penn State’s College of Information Sciences and Technology.
    The research on doxing could lead to more immediate flagging and removal of sensitive personal information that has been shared without the owner’s authorization. To date, the research team has studied only Twitter, where their novel approach uses machine learning to differentiate which tweets containing personally identifiable information are maliciously shared rather than self-disclosed.
    They have identified an approach that was able to automatically detect doxing on Twitter with over 96% accuracy, which could help the platform — and eventually other social media platforms — more quickly and easily identify true cases of doxing.
    “The focus is to identify cases where people collect sensitive personal information about others and publicly disclose it as a way of scaring, defaming, threatening or silencing them,” said Younes Karimi, doctoral candidate and lead author on the paper. “This is dangerous because once this information is posted, it can quickly be shared with many people and even go beyond Twitter. The person to whom the information belongs needs to be protected.”
    In their work, the researchers collected and curated a dataset of nearly 180,000 tweets that were likely to contain doxed information. Using machine learning techniques, they categorized the data as containing personal information tied to either an individual’s identity (their Social Security number) or an individual’s location (their IP address), and manually labeled more than 3,100 of the tweets found to contain either piece of information. They then further classified the data to differentiate malicious disclosures from self-disclosures. Next, the researchers examined the tweets for common potential motivations behind disclosures, determined whether the intent was likely defensive or malicious, and indicated whether each case could be characterized as doxing.
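    The two-stage pipeline described above, first detecting sensitive patterns and then separating self-disclosure from disclosure by someone else, can be illustrated with a toy heuristic. This sketch is not the paper's machine learning approach: it uses regular expressions for the two information types and a crude first-person-pronoun test in place of a trained classifier.

    ```python
    import re

    # Patterns for the two information types studied (illustrative only):
    SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
    IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
    FIRST_PERSON = {"i", "my", "mine", "me", "i'm", "im"}

    def classify_tweet(text):
        """Return (info_type, disclosure_type) for a tweet, or None if no
        sensitive pattern is found. Self-disclosure is guessed from
        first-person pronouns; everything else is flagged as a potential
        doxing instance for review."""
        if SSN_RE.search(text):
            info = "ssn"
        elif IP_RE.search(text):
            info = "ip"
        else:
            return None
        tokens = {t.strip(".,!?").lower() for t in text.split()}
        disclosure = "self" if tokens & FIRST_PERSON else "second/third-party"
        return info, disclosure

    print(classify_tweet("My SSN is 123-45-6789, please help"))  # ('ssn', 'self')
    print(classify_tweet("His server is at 192.168.0.1"))  # ('ip', 'second/third-party')
    ```

    A real system would replace the pronoun heuristic with a trained language model, which is what lets the researchers' approach reach its reported accuracy on ambiguous phrasing.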
    “Not all doxing instances are necessarily malicious,” explained Karimi. “For example, a parent of a missing child might benignly share their private information with the desperate hope of finding them.”
    Next, the researchers used nine different approaches based on existing natural language processing methods and models to automatically detect instances of doxing and malicious disclosures of two of the most sensitive types of private information, Social Security numbers and IP addresses, in their collected dataset. They compared the results and identified the approach with the highest accuracy, presenting their findings in November at the 25th ACM Conference on Computer-Supported Cooperative Work and Social Computing.
    According to Karimi, this work is especially critical in a time when leading social media platforms — including Twitter — are conducting mass layoffs, minimizing the number of workers responsible for reviewing content that may violate the platforms’ terms of service. One platform’s policy, for example, states that unless a case of doxing has clearly abusive intent, the owner of the publicly shared information or their authorized representative must contact the platform before enforcement action is taken. Under this policy, private information could remain publicly available for long periods of time if the owner of the information is not aware that it has been shared.
    “While there exist some prior studies on detection of private information in general, and social media platforms apply some automated approaches for detecting cyberbullying, these do not differentiate self-disclosures from malicious disclosures by second and third parties in tweets,” he said. “Fewer people are now in charge of acting on these manual user reports, so adding automation can help them narrow down the most important and sensitive reports and prioritize them.”
    Karimi collaborated with Anna Squicciarini, Frymoyer Chair in Information Sciences and Technology, and Shomir Wilson, assistant professor of information sciences and technology, on the paper.
    Story Source:
    Materials provided by Penn State. Original written by Jess Hallman. Note: Content may be edited for style and length.

  • Screen time linked to OCD in U.S. preteens

    During the holidays, kids often spend more time on screens, leaving parents to wonder: Is it causing harm? Possibly.
    For preteens, the odds of developing OCD over a two-year period increased by 13% for every hour they played video games and by 11% for every hour they watched videos, according to a new national study led by UC San Francisco researchers, published Dec. 12 in the Journal of Adolescent Health.
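    Per-hour odds ratios like these compound multiplicatively rather than additively. The arithmetic below is purely illustrative (it assumes the log-linear dose response that logistic models typically report, and combines the two figures under independence), not a calculation from the study itself.

    ```python
    # Per-hour odds ratios reported in the study:
    OR_GAMING = 1.13  # +13% odds per daily hour of video games
    OR_VIDEOS = 1.11  # +11% odds per daily hour of watching videos

    def combined_odds_ratio(gaming_hours, video_hours):
        """Odds ratios from a logistic model compound multiplicatively,
        so h hours corresponds to OR**h under a log-linear dose response."""
        return OR_GAMING ** gaming_hours * OR_VIDEOS ** video_hours

    # e.g. two hours of gaming plus one hour of videos per day:
    print(round(combined_odds_ratio(2, 1), 2))  # prints 1.42
    ```

    So a hypothetical three hours of daily screen use split this way corresponds to roughly 42% higher odds than none, under those assumptions.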
    “Children who spend excessive time playing video games report feeling the need to play more and more and being unable to stop despite trying,” said Jason Nagata, MD, lead author of the study and assistant professor of pediatrics at UCSF. “Intrusive thoughts about video game content could develop into obsessions or compulsions.”
    Watching videos, too, can allow for compulsive viewing of similar content — and algorithms and advertisements can exacerbate that behavior, he added.
    OCD is a mental health condition involving recurrent and unwanted thoughts as well as repetitive behaviors that a person feels driven to perform. These intrusive thoughts and behaviors can become severely disabling for the sufferers and those close to them.
    “Screen addictions are associated with compulsivity and loss of behavioral control, which are core symptoms of OCD,” Nagata said.
    Create a Family Media Plan
    Researchers asked 9,204 preteens ages 9-10 years how much time they spent on different types of platforms; the average was 3.9 hours per day. Two years later, the researchers asked their caregivers about OCD symptoms and diagnoses. Use of screens for educational purposes was excluded.
    At the two-year mark, 4.4% of preteens had developed new-onset OCD. Video games and streaming videos were each connected to a higher risk of developing OCD. Texting, video chat and social media were not individually linked with OCD, but that may be because the preteens in the sample didn’t use them much, researchers said. Results may differ for older teens, they added.
    In July, Nagata and his colleagues found that excessive screen time was linked to disruptive behavior disorders in 9- to 11-year-olds, though social media was the biggest contributor in that case. In 2021, they found adolescent screen time had doubled during the pandemic.
    “Although screen time can have important benefits such as education and increased socialization, parents should be aware of the potential risks, especially to mental health,” said Nagata. “Families can develop a media use plan, which could include screen-free times, such as before bedtime.”
    Story Source:
    Materials provided by University of California – San Francisco. Original written by Jess Berthold. Note: Content may be edited for style and length.

  • Frequently using digital devices to soothe young children may backfire

    It’s a scene many parents have experienced — just as they’re trying to cook dinner, take a phone call or run an errand, their child has a meltdown.
    And sometimes, handing a fussy preschooler a digital device seems to offer a quick fix. But this calming strategy could be linked to worse behavior challenges down the road, new findings suggest.
    Frequent use of devices like smartphones and tablets to calm upset children ages 3-5 was associated with increased emotional dysregulation in kids, particularly in boys, according to a Michigan Medicine study in JAMA Pediatrics.
    “Using mobile devices to settle down a young child may seem like a harmless, temporary tool to reduce stress in the household, but there may be long term consequences if it’s a regular go-to soothing strategy,” said lead author Jenny Radesky, M.D., a developmental behavioral pediatrician at University of Michigan Health C.S. Mott Children’s Hospital.
    “Particularly in early childhood, devices may displace opportunities for development of independent and alternative methods to self-regulate.”
    The study included 422 parents and 422 children ages 3-5 who participated between August 2018 and January 2020, before the COVID-19 pandemic started. Researchers analyzed parent and caregiver responses to how often they used devices as a calming tool and associations to symptoms of emotional reactivity or dysregulation over a six-month period.

  • Computer vision technology effective at determining proper mask wearing in a hospital setting, pilot study finds

    In early 2020, before COVID-19 vaccines and effective treatments were widely available, universal mask wearing was a central strategy for preventing the transmission of COVID-19. But hospitals and other settings with mask mandates faced a challenge. Reminding patients, visitors and employees to wear masks needed to be done manually, which was time consuming and labor intensive. Researchers from Brigham and Women’s Hospital (BWH), a founding member of the Mass General Brigham health care system, and Massachusetts Institute of Technology (MIT) set out to test a tool to automate monitoring and reminders about mask adherence using a computer vision algorithm. The team conducted a pilot study among hospital employees who volunteered to participate and found that the technology worked effectively and most participants reported a positive experience interacting with the system at a hospital entrance. Results of the study are published in BMJ Open.
    “To change a behavior, like mask wearing, takes a lot of effort, even among healthcare professionals,” said lead author Peter Chai, MD, MMS, of the Department of Emergency Medicine. “Our study suggests that a computer visualization system like this could be helpful the next time there is a respiratory viral pandemic for which masking is an essential strategy for controlling the spread of infection in a hospital setting.”
    “We recognize the challenges in ensuring appropriate mask usage, and the potential barriers associated with personnel-based notification of mask misuse by colleagues. Here we describe a computer vision-based alternative and our colleagues’ assessment of the initial acceptability of the platform,” said senior author C. Giovanni Traverso, MB, BChir, PhD, of the Department of Medicine at BWH and the Department of Mechanical Engineering at MIT.
    For the study, the team used a computer vision program developed using lower-resolution closed-circuit television still frames to detect mask wearing. Between April 26 and April 30, 2020, researchers invited employees entering one of the main hospital entrances to participate in an observational study testing the computer vision model. The team enrolled 111 participants, who interacted with the system and were surveyed about their experience.
    The computer visualization system accurately detected mask adherence 100 percent of the time. Most participants (87 percent) reported a positive experience interacting with the system in the hospital.
    The pilot was limited to employees at a single hospital and may not be generalizable to other settings. In addition, behaviors and attitudes toward masking have changed throughout the course of the pandemic and may differ across the United States. Future study is needed to identify barriers to implementing computer visualization systems in healthcare settings versus other public institutions.
    “Our data suggest that individuals in a hospital setting are receptive to the use of computer visualization systems to help detect and offer reminders about effective mask wearing, particularly at the height of a pandemic as a way to keep themselves safe while serving on the front lines of a healthcare emergency,” said Chai. “Continued development of detection systems could give us a useful tool in the context of the COVID-19 pandemic or in preparation for preventing the spread of future airborne pathogens.”
    Story Source:
    Materials provided by Brigham and Women’s Hospital. Note: Content may be edited for style and length.

  • Deep-space optical communication demonstration project forges ahead

    Researchers report new results from the NASA Deep Space Optical Communications (DSOC) technology demonstration project, which develops and tests new advanced laser sources for deep-space optical communication. The ability to perform free-space optical communication throughout the solar system would go beyond the capabilities of the radio communication systems used now and provide the bandwidth necessary for future space missions to transmit large amounts of data, including high-definition images and video.
    The demonstration system consists of a flight laser transceiver, a ground laser transmitter and a ground laser receiver. The downlink transmitter has been installed on the Psyche spacecraft, which will travel to a unique metal asteroid also called Psyche, which orbits the Sun between Mars and Jupiter.
    Malcolm W. Wright, from the Jet Propulsion Laboratory, California Institute of Technology, will present the functional and environmental test results of the DSOC downlink flight laser transmitter assembly and ground uplink transmitter assembly at the Optica Laser Congress, 11-15 December 2022.
    Validating deep-space optical communications will allow streaming back high-definition imagery during robotic and manned exploration of planetary bodies, while utilizing resources comparable to state-of-the-art radio-frequency telecommunications.
    Transmitting into deep space
    Although free-space optical communications from space to ground have been demonstrated at distances as far away as the moon, extending such links to deep space ranges requires new types of laser transmitters. The downlink flight laser must have a high photon efficiency while supporting near kilowatt peak power. The uplink laser requires multi-kilowatt average powers with narrow linewidth, good beam quality and low modulation rates.
    The flight laser transmitter assembly uses a 5 W average power Er-Yb co-doped fiber-based master oscillator power amplifier laser with discrete pulse widths from 0.5 to 8 ns, producing a polarized output beam at 1550 nm with an extinction ratio of more than 33 dB. The laser passed verification and environmental tests before being integrated into the spacecraft. End-to-end testing of the flight laser transmitter with the ground receiver assembly also validated the optical link performance for a variety of pulse formats and verified the interface to the DSOC electronics assembly.
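    The quoted figures (5 W average power, nanosecond pulses, near-kilowatt peak power, >33 dB extinction) fit together through simple duty-cycle arithmetic. The back-of-the-envelope sketch below uses an assumed pulse repetition rate for illustration; it is not a published DSOC parameter.

    ```python
    # Figures quoted in the article:
    avg_power_w = 5.0       # average optical power
    pulse_width_s = 8e-9    # longest discrete pulse width
    rep_rate_hz = 625e3     # ASSUMED repetition rate, illustrative only

    # For a pulsed source, average power = peak power * duty cycle:
    duty_cycle = pulse_width_s * rep_rate_hz
    peak_power_w = avg_power_w / duty_cycle
    print(f"duty cycle: {duty_cycle:.3%}, peak power: {peak_power_w:.0f} W")

    # A 33 dB extinction ratio means the "on" level exceeds the "off"
    # level by a factor of 10**(33/10):
    extinction_linear = 10 ** (33 / 10)
    print(f"extinction ratio: ~{extinction_linear:.0f}x")
    ```

    With these assumed numbers the peak power lands at 1 kW, consistent with the "near kilowatt peak power" requirement the article mentions for the downlink laser.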
    Launching a new approach
    The ground uplink transmitter assembly can support optical links with up to 5.6 kW average power at 1064 nm. It includes ten kilowatt-class continuous-wave fiber-based laser transmitters modified to support the required modulation formats. A remotely located chiller provides thermal management for the lasers and power supplies. The uplink laser will also provide a beacon onto which the flight transceiver can lock.
    “Using multiple individual laser sources that propagate through sub-apertures on the telescope’s primary mirror relieves the power requirement from a single source,” said Wright. “It also allows atmospheric turbulence mitigation and reduces the power density on the telescope mirrors.”
    Now that spacecraft-level testing is complete, the Psyche spacecraft — with the flight laser transceiver aboard — will be integrated into a launch vehicle. The DSOC technology demonstration will begin shortly after launch and continue for one year as the spacecraft travels away from Earth and eventually performs a flyby of Mars.
    Story Source:
    Materials provided by Optica. Note: Content may be edited for style and length.

  • Curved spacetime in the lab

    According to Einstein’s Theory of Relativity, space and time are inextricably connected. In our Universe, whose curvature is barely measurable, the structure of this spacetime is fixed. In a laboratory experiment, researchers from Heidelberg University have succeeded in realising an effective spacetime that can be manipulated. In their research on ultracold quantum gases, they were able to simulate an entire family of curved universes to investigate different cosmological scenarios and compare them with the predictions of a quantum field theoretical model. The research results were published in Nature.
    The emergence of space and time on cosmic time scales from the Big Bang to the present is the subject of current research that can only be based on the observation of our single Universe. The expansion and curvature of space are essential to cosmological models. In a flat space like our current Universe, the shortest distance between two points is always a straight line. “It is conceivable, however, that our Universe was curved in its early phase. Studying the consequences of a curved spacetime is therefore a pressing question in research,” states Prof. Dr Markus Oberthaler, a researcher at the Kirchhoff Institute for Physics at Heidelberg University. With his “Synthetic Quantum Systems” research group, he developed a quantum field simulator for this purpose.
    The quantum field simulator created in the lab consists of a cloud of potassium atoms cooled to just a few nanokelvins above absolute zero. This produces a Bose-Einstein condensate — a special quantum mechanical state of the atomic gas that is reached at very cold temperatures. Prof. Oberthaler explains that the Bose-Einstein condensate is a perfect background against which the smallest excitations, i.e. changes in the energy state of the atoms, become visible. The form of the atomic cloud determines the dimensionality and the properties of spacetime on which these excitations ride like waves. In our Universe, there are three dimensions of space as well as a fourth: time.
    In the experiment conducted by the Heidelberg physicists, the atoms are trapped in a thin layer. The excitations can therefore only propagate in two spatial directions — the space is two-dimensional. At the same time, the atomic cloud in the remaining two dimensions can be shaped in almost any way, whereby it is also possible to realise curved spacetimes. The interaction between the atoms can be precisely adjusted by a magnetic field, changing the propagation speed of the wavelike excitations on the Bose-Einstein condensate.
    “For the waves on the condensate, the propagation speed depends on the density and the interaction of the atoms. This gives us the opportunity to create conditions like those in an expanding universe,” explains Prof. Dr Stefan Flörchinger. The researcher, who previously worked at Heidelberg University and joined the University of Jena at the beginning of this year, developed the quantum field theoretical model used to quantitatively compare the experimental results.
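    The dependence Prof. Flörchinger describes, propagation speed set by the density and interaction of the atoms, is captured by the Bogoliubov speed of sound in a dilute condensate, c = sqrt(g·n/m) with contact coupling g = 4πħ²a/m. The scattering length and density below are illustrative assumed values, not the Heidelberg group's experimental parameters.

    ```python
    import math

    hbar = 1.054571817e-34      # reduced Planck constant, J*s
    a0 = 5.29177210903e-11      # Bohr radius, m
    m = 39 * 1.66053906660e-27  # potassium-39 atomic mass, kg
    a = 100 * a0                # ASSUMED s-wave scattering length
    n = 1e20                    # ASSUMED atomic density, m^-3

    # Contact interaction coupling and Bogoliubov speed of sound:
    g = 4 * math.pi * hbar**2 * a / m
    c = math.sqrt(g * n / m)
    print(f"speed of sound: {c * 1e3:.1f} mm/s")
    ```

    The result is a few millimeters per second, which is why tuning the scattering length with a Feshbach resonance, as in the experiment, directly reshapes the "spacetime" that the wavelike excitations experience.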
    Using the quantum field simulator, cosmic phenomena, such as the production of particles based on the expansion of space, and even the spacetime curvature can be made measurable. “Cosmological problems normally take place on unimaginably large scales. To be able to specifically study them in the lab opens up entirely new possibilities in research by enabling us to experimentally test new theoretical models,” states Celia Viermann, the primary author of the “Nature” article. “Studying the interplay of curved spacetime and quantum mechanical states in the lab will occupy us for some time to come,” says Markus Oberthaler, whose research group is also part of the STRUCTURES Cluster of Excellence at Ruperto Carola.
    The work was conducted as part of Collaborative Research Centre 1225, “Isolated Quantum Systems and Universality in Extreme Conditions” (ISOQUANT), of Heidelberg University.
    Story Source:
    Materials provided by Heidelberg University. Note: Content may be edited for style and length.

  • How AI found the words to kill cancer cells

    Using new machine learning techniques, researchers at UC San Francisco (UCSF), in collaboration with a team at IBM Research, have developed a virtual molecular library of thousands of “command sentences” for cells, based on combinations of “words” that guided engineered immune cells to seek out and tirelessly kill cancer cells.
    The work, published online Dec. 8, 2022, in Science, represents the first time such sophisticated computational approaches have been applied to a field that, until now, has progressed largely through ad hoc tinkering and engineering cells with existing, rather than synthesized, molecules.
    The advance allows scientists to predict which elements — natural or synthesized — they should include in a cell to give it the precise behaviors required to respond effectively to complex diseases.
    “This is a vital shift for the field,” said Wendell Lim, PhD, the Byers Distinguished Professor of Cellular and Molecular Pharmacology, who directs the UCSF Cell Design Institute and led the study. “Only by having that power of prediction can we get to a place where we can rapidly design new cellular therapies that carry out the desired activities.”
    Meet the Molecular Words That Make Cellular Command Sentences
    Much of therapeutic cell engineering involves choosing or creating receptors that, when added to the cell, will enable it to carry out a new function. Receptors are molecules that bridge the cell membrane to sense the outside environment and provide the cell with instructions on how to respond to environmental conditions.