More stories

  • Time watching videos may stunt toddler language development, but it depends on why they’re watching

    A new study from SMU psychologist Sarah Kucker and colleagues reveals that passive video use among toddlers can negatively affect language development, but their caregiver’s motivations for exposing them to digital media could also lessen the impact.
    Results show that children between the ages of 17 and 30 months spend an average of nearly two hours per day watching videos — a 100 percent increase from prior estimates gathered before the COVID pandemic. The research reveals a negative association between high levels of digital media watching and children’s vocabulary development.
    Children exposed to videos by caregivers for their calming or “babysitting” benefits tended to use phrases and sentences with fewer words. However, the negative impact on language skills was mitigated when videos were used for educational purposes or to foster social connections — such as through video chats with family members.
    “In those first couple years of life, language is one of the core components of development that we know media can impact,” said Kucker, assistant professor of psychology in SMU’s Dedman College of Humanities & Sciences. “There’s less research focused on toddlers using digital media than older ages, which is why we’re trying to understand better how digital media affects this age group and what type of screen time is beneficial and what is not.”
    Published in the journal Acta Paediatrica, the study involved 302 caregivers of children between 17 and 30 months. Caregivers answered questions about their child’s words, sentences, and how much time they spend on different media activities each day. Those activities included video/TV, video games, video chat, and e-books, with caregivers explaining why they use each activity with their child. Print book reading was also compared.
    Researchers looked at the amount of media use and the reasons caregivers gave for their children’s media use. These factors were then compared with the children’s vocabulary and with the average length of their utterances when combining two or more words (the mean length of utterance).
    Kucker suggests that caregivers need to consider what kind of videos their children are watching (whether for learning or fun) and how they interact with toddlers who are watching them. She acknowledges that parents often use digital media to occupy children while they complete tasks. Kucker recommends that caregivers consider how much digital media they allow young children to use and whether they can interact with the children while it is in use.
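    For readers unfamiliar with the measure, mean length of utterance is simply the average number of words (or morphemes) per utterance. The minimal sketch below is purely illustrative; the word-based variant and the sample utterances are assumptions, not the study’s scoring procedure.

      # Illustrative word-based mean length of utterance (MLU-w) calculation.
      # The study's own scoring (e.g., morpheme-based MLU derived from caregiver
      # reports) may differ; this only sketches the general idea.

      def mean_length_of_utterance(utterances):
          """Average number of words per utterance."""
          lengths = [len(u.split()) for u in utterances if u.strip()]
          return sum(lengths) / len(lengths) if lengths else 0.0

      sample = ["more juice", "doggie go outside", "ball"]
      print(mean_length_of_utterance(sample))  # -> 2.0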
    The study’s findings underscore the need for parents, caregivers, and educators to be aware of the potential effects of digital media on language development in children 30 months and under. By understanding the types of digital media children are exposed to and the reasons behind its usage, appropriate measures can be taken to ensure more healthy language development.
    Future research by Kucker and her colleagues will continue to explore the types of videos young children watch, how they use screens with others, and whether two hours of daily digital media is the new normal for young children and, if so, how that affects language development.
    Research team members included Rachel Barr from Georgetown University and Lynn K. Perry from the University of Miami. Research reported in this press release was supported by the Eunice Kennedy Shriver National Institute of Child Health & Human Development of the National Institutes of Health under Award Number R15HD101841. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

  • Engineers achieve breakthrough in quantum sensing

    A collaborative project led by Professor Zhiqin Chu, Professor Can Li and Professor Ngai Wong, at the Department of Electrical and Electronic Engineering of the University of Hong Kong (HKU) has made a breakthrough in enhancing the speed and resolution of widefield quantum sensing, leading to new opportunities in scientific research and practical applications.
    By collaborating with scientists from Mainland China and Germany, the team has successfully developed a groundbreaking quantum sensing technology using a neuromorphic vision sensor, which is designed to mimic the human vision system. This sensor is capable of encoding changes in fluorescence intensity into spikes during optically detected magnetic resonance (ODMR) measurements. The key advantage of this approach is that it results in highly compressed data volumes and reduced latency, making the system more efficient than traditional methods. This breakthrough in quantum sensing holds potential for various applications in fields such as monitoring dynamic processes in biological systems.
    The research paper, titled “Widefield Diamond Quantum Sensing with Neuromorphic Vision Sensors,” has been published in the journal Advanced Science.
    “Researchers worldwide have spent much effort looking into ways to improve the measurement accuracy and spatiotemporal resolution of camera sensors. But a fundamental challenge remains: handling the massive amount of data, in the form of image frames, that needs to be transferred from the camera sensors for further processing. This data transfer can significantly limit the temporal resolution, which is typically no more than 100 fps due to the use of frame-based image sensors. What we did was try to overcome that bottleneck,” said Zhiyuan Du, the first author of the paper and a PhD candidate at the Department of Electrical and Electronic Engineering.
    Du said his professor’s focus on quantum sensing had inspired him and other team members to break new ground in the area. He is also driven by a passion for integrating sensing and computing.
    “The latest development provides new insights for high-precision and low-latency widefield quantum sensing, with possibilities for integration with emerging memory devices to realise more intelligent quantum sensors,” he added.
    The team’s experiment with an off-the-shelf event camera demonstrated a 13× improvement in temporal resolution, with precision in detecting ODMR resonance frequencies comparable to that of the state-of-the-art, highly specialized frame-based approach. The new technology was successfully deployed in monitoring dynamically modulated laser heating of gold nanoparticles coated on a diamond surface. “It would be difficult to perform the same task using existing approaches,” Du said.

    Unlike traditional sensors that record absolute light intensity levels, neuromorphic vision sensors encode changes in light intensity as “spikes,” similar to biological vision systems, leading to improved temporal resolution (on the order of microseconds) and dynamic range (>120 dB). This approach is particularly effective in scenarios where image changes are infrequent, such as object tracking and autonomous vehicles, as it eliminates redundant static background signals.
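    As a rough illustration of the event-based principle described above (not the team’s actual sensor pipeline), a pixel can be made to emit a signed “spike” whenever its log-intensity changes by more than a fixed contrast threshold, so static background produces no data at all. The threshold value and array sizes below are arbitrary assumptions.

      import numpy as np

      # Toy event-camera model: emit +1/-1 spikes where a pixel's log-intensity
      # changes by more than a contrast threshold since its last event.
      # Purely illustrative; real neuromorphic sensors do this asynchronously in hardware.

      THRESHOLD = 0.15  # arbitrary log-intensity contrast threshold

      def frame_to_events(prev_log, frame):
          log_i = np.log(frame + 1e-6)
          diff = log_i - prev_log
          events = np.zeros_like(diff, dtype=np.int8)
          events[diff > THRESHOLD] = 1    # brightness increased -> ON spike
          events[diff < -THRESHOLD] = -1  # brightness decreased -> OFF spike
          # Update the reference only where an event fired (static pixels stay silent)
          prev_log = np.where(events != 0, log_i, prev_log)
          return events, prev_log

      frames = np.random.rand(10, 64, 64)   # stand-in for fluorescence frames
      ref = np.log(frames[0] + 1e-6)
      for f in frames[1:]:
          ev, ref = frame_to_events(ref, f)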
    “We anticipate that our successful demonstration of the proposed method will revolutionise widefield quantum sensing, significantly improving performance at an affordable cost,” said Professor Zhiqin Chu.
    “This also brings closer the realisation of near-sensor processing with emerging memory-based electronic synapse devices,” said Professor Can Li.
    “The technology’s potential for industrial use should be explored further, such as studying dynamic changes in currents in materials and identifying defects in microchips,” said Professor Ngai Wong.

  • Accelerating the discovery of single-molecule magnets with deep learning

    Synthesizing or studying certain materials in a laboratory setting often poses challenges due to safety concerns, impractical experimental conditions, or cost constraints. In response, scientists are increasingly turning to deep learning methods, which involve developing and training machine learning models to recognize patterns and relationships in data on material properties, compositions, and behaviors. Using deep learning, scientists can quickly predict material properties from a material’s composition, structure, and other relevant features, identify promising candidates for further investigation, and optimize synthesis conditions.
    Now, in a study published on 1 February 2024 in the International Union of Crystallography Journal (IUCrJ), Professor Takashiro Akitsu, Assistant Professor Daisuke Nakane, and Mr. Yuji Takiguchi from Tokyo University of Science (TUS) have used deep learning to predict single-molecule magnets (SMMs) from a pool of 20,000 metal complexes. This innovative strategy streamlines the material discovery process by minimizing the need for lengthy experiments.
    Single-molecule magnets (SMMs) are metal complexes that demonstrate magnetic relaxation behavior at the individual molecule level, where magnetic moments undergo changes or relaxation over time. These materials have potential applications in the development of high-density memory, quantum molecular spintronic devices, and quantum computing devices. SMMs are characterized by having a high effective energy barrier (Ueff) for the magnetic moment to flip. However, these values are typically in the range of tens to hundreds of Kelvins, making SMMs challenging to synthesize.
    The researchers used deep learning to identify the relationship between molecular structures and SMM behavior in metal complexes with salen-type ligands. These metal complexes were chosen because they can be easily synthesized by complexing aldehydes and amines with various 3d and 4f metals. For the dataset, the researchers screened 800 papers published from 2011 to 2021, collecting crystal-structure information and determining whether each complex exhibited SMM behavior. Additionally, they obtained 3D structural details of the molecules from the Cambridge Structural Database.
    The molecular structure of the complexes was represented using voxels or 3D pixels, where each element was assigned a unique RGB value. Subsequently, these voxel representations served as input to a 3D Convolutional Neural Network model based on the ResNet architecture. This model was specifically designed to classify molecules as either SMMs or non-SMMs by analyzing their 3D molecular images.
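    To make the voxel-to-classifier idea concrete, here is a heavily simplified sketch of a 3D convolutional classifier that takes an RGB-encoded voxel grid and outputs SMM / non-SMM logits. The grid size, channel widths and layer count are illustrative assumptions, not the ResNet-based architecture the authors used.

      import torch
      import torch.nn as nn

      # Minimal 3D-CNN sketch: classify an RGB-encoded voxel grid (element -> colour)
      # as SMM or non-SMM. All sizes are arbitrary assumptions for illustration only.

      class VoxelClassifier(nn.Module):
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                  nn.MaxPool3d(2),
                  nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool3d(1),
              )
              self.head = nn.Linear(32, 2)  # two classes: SMM vs. non-SMM

          def forward(self, x):             # x: (batch, 3, depth, height, width)
              return self.head(self.features(x).flatten(1))

      voxels = torch.rand(4, 3, 32, 32, 32)  # toy batch of voxelised complexes
      logits = VoxelClassifier()(voxels)     # -> shape (4, 2)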
    When the model was trained on a dataset of crystal structures of metal complexes containing salen-type ligands, it achieved a 70% accuracy rate in distinguishing between the two categories. When the model was tested on 20,000 crystal structures of metal complexes containing Schiff bases, it successfully identified the metal complexes reported as single-molecule magnets. “This is the first report of deep learning on the molecular structures of SMMs,” says Prof. Akitsu.
    Many of the predicted SMM structures involved multinuclear dysprosium complexes, known for their high Ueff values. While this method simplifies the SMM discovery process, it is important to note that the model’s predictions are solely based on training data and do not explicitly link chemical structures with their quantum chemical calculations, a preferred method in AI-assisted molecular design. Further experimental research is required to obtain the data of SMM behavior under uniform conditions.
    However, this simplified approach has its advantages. It reduces the need for complex computational calculations and avoids the challenging task of simulating magnetism. Prof. Akitsu concludes: “Adopting such an approach can guide the design of innovative molecules, bringing about significant savings in time, resources, and costs in the development of functional materials.”

  • Tapping into the 300 GHz band with an innovative CMOS transmitter

    New phased-array transmitter design overcomes common problems of CMOS technology in the 300 GHz band, as reported by scientists from Tokyo Tech. Thanks to its remarkable area efficiency, low power consumption, and high data rate, the proposed transmitter could pave the way to many technological applications in the 300 GHz band, including body and cell monitoring, radar, 6G wireless communications, and terahertz sensors.
    Today, most frequencies above the 250 GHz mark remain unallocated. Accordingly, many researchers are developing 300 GHz transmitters/receivers to capitalize on the low atmospheric absorption at these frequencies, as well as the potential for extremely high data rates that comes with it.
    However, high-frequency electromagnetic waves attenuate rapidly when travelling through free space. To combat this problem, transmitters must compensate by achieving a large effective radiated power. While some interesting solutions have been proposed over the past few years, no 300 GHz-band transmitter manufactured via conventional CMOS processes has simultaneously realized high output power and small chip size.
    Now, a research team led by Professor Kenichi Okada from Tokyo Institute of Technology (Tokyo Tech), together with NTT Corporation (Headquarters: Chiyoda-ku, Tokyo; President & CEO: Akira Shimada; “NTT”), has developed a 300 GHz-band transmitter that solves these issues through several key innovations. Their work will be presented at the 2024 IEEE International Solid-State Circuits Conference (ISSCC).
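    To give a sense of why output power matters so much at these frequencies, free-space path loss grows with the square of both distance and frequency (the Friis relation). The quick calculation below, with an arbitrarily chosen 10 m link distance, is only meant to illustrate the scale of the problem and is not a figure from the study.

      import math

      # Friis free-space path loss: FSPL(dB) = 20*log10(4*pi*d*f/c).
      # The 10 m distance is an arbitrary example, not a value from the paper.
      def fspl_db(distance_m, freq_hz):
          c = 299_792_458.0
          return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

      print(round(fspl_db(10, 300e9), 1))  # ~102 dB at 10 m and 300 GHz
      print(round(fspl_db(10, 3e9), 1))    # ~62 dB at 10 m and 3 GHz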
    The proposed solution is a phased-array transmitter composed of 64 radiating elements, arranged as 16 integrated circuits with four antennas each. Since the elements are arranged in three dimensions by stacking printed circuit boards (PCBs), the transmitter supports 2D beam steering. Simply put, the transmitted power can be aimed both vertically and horizontally, allowing for fast beam steering and efficient tracking of receivers. Notably, the antennas used are Vivaldi antennas, which can be implemented directly on-chip and have a shape and emission profile well suited to high frequencies.
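    As general background on how 2D beam steering works in any phased array (this is textbook behaviour, not the specific control scheme of the Tokyo Tech/NTT chip), each element is driven with a phase offset proportional to its position projected onto the desired beam direction. The half-wavelength spacing, 8×8 grid and steering angles below are arbitrary assumptions; the 64-element count merely matches the figure mentioned above.

      import numpy as np

      # Textbook phased-array phase profile for steering a beam toward (theta, phi).
      # Element spacing and grid geometry are assumptions, not the chip's layout.
      def steering_phases(nx=8, ny=8, theta_deg=20.0, phi_deg=45.0, spacing_wl=0.5):
          th, ph = np.radians(theta_deg), np.radians(phi_deg)
          x = np.arange(nx) * spacing_wl           # element positions in wavelengths
          y = np.arange(ny) * spacing_wl
          xx, yy = np.meshgrid(x, y, indexing="ij")
          # Phase (radians) each element needs so the emitted waves add up
          # constructively in the target direction.
          return -2 * np.pi * (xx * np.sin(th) * np.cos(ph) +
                               yy * np.sin(th) * np.sin(ph))

      phases = steering_phases()  # shape (8, 8): one phase per element, 64 in total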
    An important feature of the proposed transmitter is its power amplifier (PA)-last architecture. By placing the amplification stage right before the antennas, the system only needs to amplify signals that have already been conditioned and processed. This leads to higher efficiency and better amplifier performance.
    The researchers also addressed a few common problems that arise with conventional transistor layouts in CMOS processes, namely high gate resistance and large parasitic capacitances. They optimized their layout by adding additional drain paths and vias and by altering the geometry and element placement between metal layers. “Compared to the standard transistor layout, the parasitic resistance and capacitances in the proposed transistor layout are all mitigated,” remarks Prof. Okada. “In turn, the transistor-gain corner frequency, which is the point where the transistor’s amplification starts to decrease at higher frequencies, was increased from 250 to 300 GHz.”
    On top of these innovations, the team designed and implemented a multi-stage 300 GHz power amplifier to be used with each antenna. Thanks to excellent impedance matching between stages, the amplifiers demonstrated outstanding performance, as Prof. Okada highlights: “The proposed power amplifiers achieved a gain higher than 20 dB from 237 to 267 GHz, with a sharp cut-off frequency to suppress out-of-band undesired signals.” The proposed amplifier also achieves a noise figure of 15 dB, as evaluated with a noise measurement system in the 300 GHz band.
    The researchers tested their design through both simulations and experiments, obtaining very promising results. Remarkably, the proposed transmitter achieved a data rate of 108 Gb/s in on-PCB probe measurements, which is substantially higher than other state-of-the-art 300 GHz-band transmitters.
    Moreover, the transmitter also displayed remarkable area efficiency compared to other CMOS-based designs alongside low power consumption, highlighting its potential for miniaturized and power-constrained applications. Some notable use cases are sixth-generation (6G) wireless communications, high-resolution terahertz sensors, and human body and cell monitoring.

  • Study identifies distinct brain organization patterns in women and men

    A new study by Stanford Medicine investigators unveils a new artificial intelligence model that was more than 90% successful at determining whether scans of brain activity came from a woman or a man.
    The findings, to be published Feb. 19 in the Proceedings of the National Academy of Sciences, help resolve a long-term controversy about whether reliable sex differences exist in the human brain and suggest that understanding these differences may be critical to addressing neuropsychiatric conditions that affect women and men differently.
    “A key motivation for this study is that sex plays a crucial role in human brain development, in aging, and in the manifestation of psychiatric and neurological disorders,” said Vinod Menon, PhD, professor of psychiatry and behavioral sciences and director of the Stanford Cognitive and Systems Neuroscience Laboratory. “Identifying consistent and replicable sex differences in the healthy adult brain is a critical step toward a deeper understanding of sex-specific vulnerabilities in psychiatric and neurological disorders.”
    Menon is the study’s senior author. The lead authors are senior research scientist Srikanth Ryali, PhD, and academic staff researcher Yuan Zhang, PhD.
    “Hotspots” that most helped the model distinguish male brains from female ones include the default mode network, a brain system that helps us process self-referential information, and the striatum and limbic network, which are involved in learning and how we respond to rewards.
    The investigators noted that this work does not weigh in on whether sex-related differences arise early in life or may be driven by hormonal differences or by the different societal circumstances that men and women may be more likely to encounter.
    Uncovering brain differences
    The extent to which a person’s sex affects how their brain is organized and operates has long been a point of dispute among scientists. While we know the sex chromosomes we are born with help determine the cocktail of hormones our brains are exposed to — particularly during early development, puberty and aging — researchers have long struggled to connect sex to concrete differences in the human brain. Brain structures tend to look much the same in men and women, and previous research examining how brain regions work together has also largely failed to turn up consistent brain indicators of sex.

    In their current study, Menon and his team took advantage of recent advances in artificial intelligence, as well as access to multiple large datasets, to pursue a more powerful analysis than has previously been employed. First, they created a deep neural network model, which learns to classify brain imaging data: As the researchers showed brain scans to the model and told it that it was looking at a male or female brain, the model started to “notice” what subtle patterns could help it tell the difference.
    This model demonstrated superior performance compared with those in previous studies, in part because it used a deep neural network that analyzes dynamic MRI scans. This approach captures the intricate interplay among different brain regions. When the researchers tested the model on around 1,500 brain scans, it could almost always tell if the scan came from a woman or a man.
    The model’s success suggests that detectable sex differences do exist in the brain but just haven’t been picked up reliably before. The fact that it worked so well across different datasets, including brain scans from multiple sites in the U.S. and Europe, makes the findings especially convincing, as it controls for many confounds that can plague studies of this kind.
    “This is a very strong piece of evidence that sex is a robust determinant of human brain organization,” Menon said.
    Making predictions
    Until recently, a model like the one Menon’s team employed would help researchers sort brains into different groups but wouldn’t provide information about how the sorting happened. Today, however, researchers have access to a tool called “explainable AI,” which can sift through vast amounts of data to explain how a model’s decisions are made.

    Using explainable AI, Menon and his team identified the brain networks that were most important to the model’s judgment of whether a brain scan came from a man or a woman. They found the model was most often looking to the default mode network, striatum, and the limbic network to make the call.
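    As a heavily simplified sketch of the two ideas above, one can train a small network to classify region-by-time fMRI data and then use input gradients as a crude “explainable AI” signal to see which regions drive the decision. The architecture, data shapes and gradient-saliency method below are illustrative assumptions, not the Stanford team’s spatiotemporal model or their attribution technique.

      import torch
      import torch.nn as nn

      # Toy stand-in for a brain-scan classifier: input is (batch, regions, timepoints),
      # output is a two-class logit pair. All shapes and layers are arbitrary assumptions.
      model = nn.Sequential(
          nn.Conv1d(in_channels=100, out_channels=32, kernel_size=5),  # 100 brain regions
          nn.ReLU(),
          nn.AdaptiveAvgPool1d(1),
          nn.Flatten(),
          nn.Linear(32, 2),
      )

      scan = torch.rand(1, 100, 200, requires_grad=True)  # 200 timepoints of toy data
      logits = model(scan)
      logits[0, logits.argmax()].backward()

      # Crude gradient saliency: regions whose signal most influences the prediction.
      region_importance = scan.grad.abs().mean(dim=2).squeeze()  # shape (100,)
      top_regions = region_importance.topk(5).indices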
    The team then wondered if they could create another model that could predict how well participants would do on certain cognitive tasks based on functional brain features that differ between women and men. They developed sex-specific models of cognitive abilities: One model effectively predicted cognitive performance in men but not women, and another in women but not men. The findings indicate that functional brain characteristics varying between sexes have significant behavioral implications.
    “These models worked really well because we successfully separated brain patterns between sexes,” Menon said. “That tells me that overlooking sex differences in brain organization could lead us to miss key factors underlying neuropsychiatric disorders.”
    While the team applied their deep neural network model to questions about sex differences, Menon says the model can be applied to answer questions regarding how just about any aspect of brain connectivity might relate to any kind of cognitive ability or behavior. He and his team plan to make their model publicly available for any researcher to use.
    “Our AI models have very broad applicability,” Menon said. “A researcher could use our models to look for brain differences linked to learning impairments or social functioning differences, for instance — aspects we are keen to understand better to aid individuals in adapting to and surmounting these challenges.”
    The research was sponsored by the National Institutes of Health (grants MH084164, EB022907, MH121069, K25HD074652 and AG072114), the Transdisciplinary Initiative, the Uytengsu-Hamilton 22q11 Programs, the Stanford Maternal and Child Health Research Institute, and the NARSAD Young Investigator Award.

  • Online digital data and AI for monitoring biodiversity

    The seemingly random information that people post online could be used to generate insights about biodiversity and its conservation.
    “I think it’s quite amazing that images and comments that people post online can be used to infer changes in biodiversity,” says Dr. Andrea Soriano-Redondo, the lead author of a new article published in the journal PLOS Biology and a researcher at the Helsinki Lab of Interdisciplinary Conservation Science at the University of Helsinki.
    Scientists from the University of Helsinki together with colleagues from other universities and institutions around the world propose a strategy for integrating online digital data from media platforms to complement monitoring efforts to help address the global biodiversity crisis in light of the Kunming-Montreal Global Biodiversity Framework.
    “Online digital data, such as social media data, can be used to strengthen existing assessments of the status and trends of biodiversity, the pressures upon it, and the conservation solutions being implemented, as well as to generate novel insights about human-nature interactions,” says Dr. Andrea Soriano-Redondo.
    “The most common sources of online biodiversity data include web pages, news media, social media, image- and video-sharing platforms, and digital books and encyclopedias. These data, for example geolocated distribution data, can be filtered and processed by researchers to target specific research questions and are increasingly being used to explore ecological processes and to investigate the distribution, spatiotemporal trends, phenology, ecological interactions, or behavior of species or assemblages and their drivers of change,” she continues.
    Data generated through the framework in near real-time could be continuously integrated with other independently collected biodiversity datasets and used for real-time applications.
    “Data relevant to assessment of species extinction or ecosystem collapse risk, for example, could be mobilized into the workflows for generating the IUCN Red List of Threatened Species and Red List of Ecosystems,” says Dr. Thomas Brooks, chief scientist of the International Union for Conservation of Nature and a co-author of the article.

    “Other data on sites of global significance for the persistence of biodiversity could be served to the appropriate national coordination groups to strengthen their efforts in identifying Key Biodiversity Areas,” he continues.
    Data on the illegal wildlife trade could also be integrated with the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) Trade Database or the Trade Records Analysis of Flora and Fauna in Commerce (TRAFFIC) open-source wildlife seizure and incident data.
    Online digital data can also be used to explore human-nature interactions from multiple perspectives.
    “We have successfully used social media data to identify instances of illegal wildlife trade. There is great potential to use these data to provide novel insights into human-nature interactions and how they shape, both positively and negatively, biodiversity conservation,” says Professor Enrico Di Minin, the article’s senior co-author, from the University of Helsinki.
    “The necessary technology to implement the work is available, but it will require harnessing expertise from multiple sectors and academic disciplines, as well as the collaboration of digital media companies. Most importantly, we need to ensure full access to the data so as to maximize its potential to help address the global biodiversity crisis and other sustainability challenges,” he continues.

  • New chip opens door to AI computing at light speed

    Penn Engineers have developed a new chip that uses light waves, rather than electricity, to perform the complex math essential to training AI. The chip has the potential to radically accelerate the processing speed of computers while also reducing their energy consumption.
    The silicon-photonic (SiPh) chip’s design is the first to bring together Benjamin Franklin Medal Laureate and H. Nedwill Ramsey Professor Nader Engheta’s pioneering research in manipulating materials at the nanoscale to perform mathematical computations using light — the fastest possible means of communication — with the SiPh platform, which uses silicon, the cheap, abundant element used to mass-produce computer chips.
    The interaction of light waves with matter represents one possible avenue for developing computers that supersede the limitations of today’s chips, which are essentially based on the same principles as chips from the earliest days of the computing revolution in the 1960s.
    In a paper in Nature Photonics, Engheta’s group, together with that of Firooz Aflatouni, Associate Professor in Electrical and Systems Engineering, describes the development of the new chip. “We decided to join forces,” says Engheta, leveraging the fact that Aflatouni’s research group has pioneered nanoscale silicon devices.
    Their goal was to develop a platform for performing what is known as vector-matrix multiplication, a core mathematical operation in the development and function of neural networks, the computer architecture that powers today’s AI tools.
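    For context, the operation the chip accelerates is the same vector-matrix product that sits inside every neural-network layer. The toy sizes in the sketch below are arbitrary, and this numpy version obviously runs electronically rather than optically; it only shows what the chip computes with light.

      import numpy as np

      # The core operation of a neural-network layer, and the one the photonic
      # chip performs with light: y = W @ x, followed by a nonlinearity.
      rng = np.random.default_rng(0)
      W = rng.standard_normal((4, 8))   # layer weights (4 outputs, 8 inputs)
      x = rng.standard_normal(8)        # input activation vector

      y = W @ x                         # vector-matrix multiplication
      activation = np.maximum(y, 0)     # ReLU nonlinearity applied afterwards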
    Instead of using a silicon wafer of uniform height, explains Engheta, “you make the silicon thinner, say 150 nanometers,” but only in specific regions. Those variations in height — without the addition of any other materials — provide a means of controlling the propagation of light through the chip, since the variations in height can be distributed to cause light to scatter in specific patterns, allowing the chip to perform mathematical calculations at the speed of light.
    Due to the constraints imposed by the commercial foundry that produced the chips, Aflatouni says, this design is already ready for commercial applications, and could potentially be adapted for use in graphics processing units (GPUs), the demand for which has skyrocketed with the widespread interest in developing new AI systems. “They can adopt the Silicon Photonics platform as an add-on,” says Aflatouni, “and then you could speed up training and classification.”
    In addition to faster speed and less energy consumption, Engheta and Aflatouni’s chip has privacy advantages: because many computations can happen simultaneously, there will be no need to store sensitive information in a computer’s working memory, rendering a future computer powered by such technology virtually unhackable. “No one can hack into a non-existing memory to access your information,” says Aflatouni.
    This study was conducted at the University of Pennsylvania School of Engineering and Applied Science and supported in part by a grant from the U.S. Air Force Office of Scientific Research’s (AFOSR) Multidisciplinary University Research Initiative (MURI) to Engheta (FA9550-21-1-0312) and a grant from the U.S. Office of Naval Research (ONR) to Aflatouni (N00014-19-1-2248).
    Other co-authors include Vahid Nikkhah, Ali Pirmoradi, Farshid Ashtiani and Brian Edwards of Penn Engineering.

  • A new design for quantum computers

    Creating a quantum computer powerful enough to tackle problems we cannot solve with current computers remains a big challenge for quantum physicists. A well-functioning quantum simulator — a specific type of quantum computer — could lead to new discoveries about how the world works at the smallest scales. Quantum scientist Natalia Chepiga from Delft University of Technology has developed a guide on how to upgrade these machines so that they can simulate even more complex quantum systems. The study is now published in Physical Review Letters.
    “Creating useful quantum computers and quantum simulators is one of the most important and debated topics in quantum science today, with the potential to revolutionise society,” says researcher Natalia Chepiga. Quantum simulators are a type of quantum computer, Chepiga explains: “Quantum simulators are meant to address open problems of quantum physics to further push our understanding of nature. Quantum computers will have wide applications in various areas of social life, for example in finances, encryption and data storage.”
    Steering wheel
    “A key ingredient of a useful quantum simulator is a possibility to control or manipulate it,” says Chepiga. “Imagine a car without a steering wheel. It can only go forward but cannot turn. Is it useful? Only if you need to go in one particular direction, otherwise the answer will be ‘no!’. If we want to create a quantum computer that will be able to discover new physics phenomena in the near-future, we need to build a ‘steering wheel’ to tune into what seems interesting. In my paper I propose a protocol that creates a fully controllable quantum simulator.”
    Recipe
    The protocol is a recipe — a set of ingredients that a quantum simulator should have to be tunable. In the conventional setup of a quantum simulator, rubidium (Rb) or cesium (Cs) atoms are targeted by a single laser. As a result, these atoms absorb the laser light and thereby become more energetic; they become excited. “I show that if we were to use two lasers with different frequencies or colours, thereby exciting these atoms to different states, we could tune the quantum simulators to many different settings,” Chepiga explains.
    The protocol offers an additional dimension of what can be simulated. “Imagine that you have only seen a cube as a sketch on a flat piece of paper, but now you get a real 3D cube that you can touch, rotate and explore in different ways,” Chepiga continues. “Theoretically we can add even more dimensions by bringing in more lasers.”
    Simulating many particles
    “The collective behaviour of a quantum system with many particles is extremely challenging to simulate,” Chepiga explains. “Beyond a few dozen particles, modelling with our usual computers or a supercomputer has to rely on approximations.” When the interactions of more particles, temperature and motion are taken into account, there are simply too many calculations for the computer to perform.
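    One way to see why classical simulation hits a wall, under the simplifying assumption of two-level (qubit-like) particles: the memory needed to store the full quantum state doubles with every particle added. The sketch below just counts complex amplitudes; it is an illustration of the scaling, not a statement about any specific simulator.

      # Memory needed to store the full state vector of n two-level quantum particles,
      # assuming 16 bytes per complex amplitude. Shows the exponential growth that
      # forces approximations beyond a few dozen particles.
      for n in (10, 30, 50):
          amplitudes = 2 ** n
          print(f"{n} particles: {amplitudes:.3e} amplitudes, "
                f"{amplitudes * 16 / 1e9:.3e} GB")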
    Quantum simulators are composed of quantum particles, which means that the components are entangled. “Entanglement is a sort of mutual information that quantum particles share between themselves. It is an intrinsic property of the simulator and therefore allows it to overcome this computational bottleneck.”