More stories

  • Making sense of life’s random rhythms

    Life’s random rhythms surround us, from the hypnotic, synchronized blinking of fireflies… to the back-and-forth motion of a child’s swing… to slight variations in the otherwise steady lub-dub of the human heart.
    But truly understanding those rhythms — called stochastic, or random, oscillations — has eluded scientists. While researchers and clinicians have had some success in parsing brain waves and heartbeats, they’ve been unable to compare or catalogue an untold number of variations and sources.
    Gaining such insight into the underlying source of oscillations “could lead to advances in neural science, cardiac science and any number of different fields,” said Peter Thomas, a professor of applied mathematics at Case Western Reserve University.
    Thomas is part of an international team that says it has developed a novel, universal framework for comparing and contrasting oscillations — regardless of their different underlying mechanisms — which could become a critical step toward someday fully understanding them.
    Their findings were recently published in Proceedings of the National Academy of Sciences.
    “We turned the problem of comparing oscillators into a linear algebra problem,” Thomas said. “What we have done is vastly more precise than what was available before. It’s a major conceptual advance.”
    The researchers say others can now compare, better understand — and even manipulate — oscillators previously considered to have completely different properties.
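    The release does not spell out the construction, but the flavor of recasting a stochastic oscillator as a linear algebra object can be sketched in a few lines: simulate a noisy phase oscillator, build the empirical transition matrix over phase bins, and read an effective frequency and a phase-diffusion rate off its slowest-decaying eigenvalue, so that two oscillators with very different mechanisms can be compared through the same pair of numbers. The code below is an illustrative toy, not the authors’ framework; the function names and parameters are invented for the example.

```python
# Illustrative toy, not the published framework: treat a noisy phase
# oscillator as a Markov chain on binned phases, estimate the one-step
# transition matrix from a simulated trajectory, and compare oscillators
# via the slowest-decaying eigenvalue of that matrix.
import numpy as np

def noisy_phase_trajectory(omega, noise, n_steps=200_000, dt=0.01, seed=0):
    """Simulate d(phi) = omega*dt + noise*dW on the circle [0, 2*pi)."""
    rng = np.random.default_rng(seed)
    steps = omega * dt + noise * np.sqrt(dt) * rng.standard_normal(n_steps)
    return np.mod(np.cumsum(steps), 2 * np.pi)

def slow_mode(phases, n_bins=60, dt=0.01):
    """Return (effective frequency, phase-diffusion rate) read off the
    slowest-decaying eigenvalue of the empirical transition matrix."""
    bins = np.minimum((phases / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    counts = np.zeros((n_bins, n_bins))
    np.add.at(counts, (bins[:-1], bins[1:]), 1.0)          # count bin-to-bin transitions
    P = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
    evals = np.linalg.eigvals(P)
    evals = evals[np.argsort(-np.abs(evals))]
    lam = evals[1]                      # skip the trivial eigenvalue ~1 of the stochastic matrix
    mu = np.log(lam) / dt               # continuous-time rate: i*omega_eff - D_eff
    return abs(mu.imag), -mu.real

for name, (omega, noise) in {"slow, clean": (1.0, 0.1),
                             "fast, noisy": (3.0, 0.8)}.items():
    freq, diff = slow_mode(noisy_phase_trajectory(omega, noise))
    print(f"{name}: effective frequency ~ {freq:.2f}, phase diffusion ~ {diff:.3f}")
```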

  • Can AI help hospitals spot patients in need of extra non-medical assistance?

    In the rush to harness artificial intelligence and machine learning tools to make care more efficient at hospitals nationwide, a new study points to another possible use: identifying patients with non-medical needs that could affect their health and ability to receive care.
    These social determinants of health — everything from transportation and housing to food supply and availability of family and friends as supports — can play a major role in a patient’s health and use of health care services.
    The new study focuses on a patient population with especially complex needs: people with Alzheimer’s disease or other forms of dementia. Their condition can make them especially reliant on others to get them to medical appointments and social activities, handle medications and finances, shop and prepare food, and more.
    The results of the study show that a rule-based natural language processing tool successfully identified patients with unstable access to transportation, food insecurity, social isolation, financial problems and signs of abuse, neglect, or exploitation.
    The researchers found that a rule-based NLP tool — a kind of AI that analyzes human speech or writing — was far superior to deep learning and regularized logistic regression algorithms for identifying patients’ social determinants of health.
    However, even the NLP tool did not perform well enough at identifying needs related to housing, or to affording or taking medication.
    The study was led by Elham Mahmoudi, Ph.D., a health economist at Michigan Medicine, the University of Michigan’s academic medical center, and Wenbo Wu, Ph.D., who completed the work while earning a doctorate at the U-M School of Public Health and is now at New York University. Mahmoudi and two other authors are in the Department of Family Medicine.
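    As a rough illustration of what a rule-based screen of clinical notes can look like (the study’s actual rules, categories, and negation handling are more sophisticated and are not reproduced here), the sketch below flags sentences that match hand-written patterns for a few of the social-need categories named above; all patterns and function names are invented for the example.

```python
# Minimal sketch of a rule-based screen for social needs in free-text notes.
# The categories mirror those named in the study; the patterns, negation
# handling, and function names here are illustrative, not the study's rules.
import re

RULES = {
    "transportation": [r"\bno (car|ride|transportation)\b", r"\bmissed .* due to transport"],
    "food_insecurity": [r"\bfood (bank|pantry|insecurity)\b", r"\bskipp?ing meals\b"],
    "social_isolation": [r"\blives alone\b", r"\bno (family|social) support\b"],
    "financial_strain": [r"\bcannot afford\b", r"\bfinancial (difficulty|strain|problems)\b"],
    "abuse_or_neglect": [r"\b(elder )?(abuse|neglect|exploitation)\b"],
}
NEGATION = re.compile(r"\b(denies|no evidence of|without)\b[^.]*$", re.IGNORECASE)

def flag_social_needs(note: str) -> set:
    """Return the categories whose patterns match the note, skipping
    matches preceded by a simple negation cue in the same sentence."""
    flags = set()
    for sentence in re.split(r"(?<=[.!?])\s+", note):
        for category, patterns in RULES.items():
            for pattern in patterns:
                match = re.search(pattern, sentence, re.IGNORECASE)
                if match and not NEGATION.search(sentence[:match.start()]):
                    flags.add(category)
    return flags

note = ("Patient lives alone and reports she cannot afford her copays. "
        "Daughter denies any neglect. Missed last visit due to transport issues.")
print(flag_social_needs(note))
# flags transportation, social isolation, and financial strain; the negated
# mention of neglect is ignored
```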

  • Distribution of genetic information during bacterial cell division

    The precise segregation of DNA and the faithful inheritance of plasmids are crucial steps in bacterial cell division. Now, a team of researchers led by Seán Murray at the Max Planck Institute for Terrestrial Microbiology has developed a computational simulation that explains a key mechanism of DNA segregation. Their findings pave the way for experimental testing and reveal fundamental biochemical principles relevant to synthetic biology and medical applications.
    The faithful inheritance of genetic material by the next generation is a fundamental process underlying all forms of life. Central to this process is the accurate transmission of copied genetic material during cell division. A research team led by Seán Murray at the Max Planck Institute for Terrestrial Microbiology has now successfully developed a computational simulation of this central process. Unlike experimental techniques, which are often limited by their resolution, stochastic modeling makes it possible to unravel the underlying processes of DNA segregation and to understand the fine structure of the proteins involved.
    An essential part of this process in many bacteria is the formation of a large macromolecular complex called the partition complex, which assembles as part of the ParABS system. Here the ParB protein moves the DNA by interacting with DNA-bound ParA-ATP, thereby allowing active separation of the DNA. Correct functioning requires precise interactions between the protein subunits and the DNA.
    “Sliding and bridging” principle
    Despite their significance, both the structure of the protein complexes and the mechanisms behind their assembly have remained elusive. Building on recent discoveries, the research team has developed a model showing that the DNA and ParB dimers can follow a “sliding and bridging” principle.
    Graduate student Lara Connolley, first author of the study, focused on the process of loading ParB dimers onto DNA, which occurs at specific regions known as parS sites. “According to our stochastic model, ParB dimers attach to DNA at parS sites by forming a protein clamp and then slide along the DNA strand, much like beads on a chain. We also predict that short-lived bridges organize the DNA into hairpin and helical structures to condense the DNA. Furthermore, these bridges do not interfere with sliding,” explains Lara Connolley. Research group leader Seán Murray adds: “The bridging interactions between dimers lead to DNA bending and the formation of a variety of structures. Further research into these structural variations could potentially be the key to understanding the role of ParB in different biological contexts.” The study opens the door for further research and experimentation to build on the findings.
    The next step is to carry out experiments to test and validate the model predictions in more detail. In addition, studies in different bacterial species would help to better understand the diversity present in the structure of the partition complex. “Our study provides a deeper insight into the world of DNA segregation and has potential relevance to many different bacterial species, as well as low copy number plasmids, which are also segregated by the ParABS system,” says Max Planck scientist Seán Murray. “Antibiotic resistance genes are located on such plasmids. Therefore, in addition to being important as basic research, these results could also be important for public health.”
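    To give a flavor of the kind of stochastic model involved, the toy Monte Carlo below simulates only the loading-and-sliding part of the picture: clamp-like dimers load at a central parS site on a 1D lattice, diffuse along it, and unbind at a fixed rate, producing an occupancy profile that peaks at parS and decays with distance. The lattice size, rates, and the omission of explicit bridging are illustrative choices, not the parameters of the published model.

```python
# Toy Monte Carlo of the loading-and-sliding part of the picture:
# ParB-like clamps load at a central parS site on a 1D DNA lattice, slide
# by unbiased diffusion, and unbind at a fixed rate. All rates and sizes
# are illustrative, and bridging between clamps is not modeled.
import random

L = 401            # lattice sites, with parS in the middle
PARS = L // 2
LOAD_RATE = 0.2    # probability per step that a new clamp loads at parS
OFF_RATE = 0.001   # probability per step that a bound clamp unbinds
STEPS = 20_000

random.seed(1)
clamps = []        # positions of bound clamps (several may share a site)
for _ in range(STEPS):
    if random.random() < LOAD_RATE:
        clamps.append(PARS)                     # loading happens only at parS
    moved = []
    for pos in clamps:
        if random.random() < OFF_RATE:
            continue                            # clamp falls off the DNA
        pos += random.choice((-1, 1))           # unbiased 1D sliding
        if 0 <= pos < L:
            moved.append(pos)                   # clamps sliding off the ends are lost
    clamps = moved

# Coarse occupancy profile: loading at a single site plus sliding gives a
# distribution peaked at parS that decays with distance from it.
profile = [0] * 9
for pos in clamps:
    profile[min(abs(pos - PARS) // 25, 8)] += 1
print("clamps bound:", len(clamps))
print("occupancy vs distance from parS (25-site bins):", profile)
```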

  • Scientists discover novel way of reading data in antiferromagnets, unlocking their use as computer memory

    Scientists led by Nanyang Technological University, Singapore (NTU Singapore) investigators have made a significant advance in developing alternative materials for the high-speed memory chips that let computers access information quickly, materials that bypass the limitations of existing ones.
    They have discovered a way that allows them to make sense of previously hard-to-read data stored in these alternative materials, known as antiferromagnets.
    Researchers consider antiferromagnets to be attractive materials for making computer memory chips because they are potentially more energy efficient than traditional ones made of silicon. Memory chips made of antiferromagnets are not subject to the size and speed constraints, or the corruption issues, that are inherent to chips made with certain magnetic materials.
    Computer data is stored as code comprising a string of 1s and 0s. Currently, methods exist to “write” data onto antiferromagnets, by configuring them so that they can represent either the number 1 or 0.
    However, “reading” this data from antiferromagnets has proved elusive to researchers as there were no practical methods in the past that could figure out which number the materials were coded as.
    Now scientists led by Associate Professor Gao Weibo from NTU’s School of Physical and Mathematical Sciences (SPMS) have found a solution.
    Results from their experiments, published online in the scientific journal Nature in June 2023, showed that at ultra-low temperatures close to those of outer space, passing a current through the antiferromagnets produced a unique voltage across them.
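    The release describes the readout only at a high level: pass a current, measure a voltage, infer the stored bit. Assuming, purely for illustration, that the two antiferromagnetic states produce voltages of opposite sign, a readout decision rule could look like the sketch below; the threshold and sample values are invented.

```python
# Hypothetical readout decision rule for an antiferromagnetic memory bit.
# Assumes (not stated in the release) that the two stored states produce
# voltages of opposite sign under a probe current, so a bit can be decoded
# by averaging repeated measurements and thresholding the sign.
import statistics

def read_bit(voltage_samples_uV, threshold_uV=0.05):
    """Return 1, 0, or None (inconclusive) from repeated voltage readings."""
    mean_v = statistics.fmean(voltage_samples_uV)
    if mean_v > threshold_uV:
        return 1
    if mean_v < -threshold_uV:
        return 0
    return None   # signal too small to call, e.g. dominated by noise

# Simulated readings in microvolts.
print(read_bit([0.31, 0.27, 0.35, 0.22, 0.30]))   # noisy positive signal -> 1
print(read_bit([-0.29, -0.33, -0.25, -0.31]))     # noisy negative signal -> 0
```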

  • Scientists invent smallest known way to guide light

    Directing light from place to place is the backbone of our modern world. Beneath the oceans and across continents, fiber optic cables carry light that encodes everything from YouTube videos to banking transactions — all inside strands about the size of a hair.
    University of Chicago Prof. Jiwoong Park, however, wondered what would happen if you made even thinner and flatter strands — in effect, so thin that they’re actually 2D instead of 3D. What would happen to the light?
    Through a series of innovative experiments, he and his team found that a sheet of glass crystal just a few atoms thick could trap and carry light. Not only that, but the light was guided surprisingly efficiently and could travel relatively long distances — up to a centimeter, which is very far in the world of light-based computing.
    The research, published Aug. 10 in Science, demonstrates what are essentially 2D photonic circuits, and could open paths to new technology.
    “We were utterly surprised by how powerful this super-thin crystal is; not only can it hold energy, but deliver it a thousand times further than anyone has seen in similar systems,” said lead study author Jiwoong Park, a professor and chair of chemistry and faculty member of the James Franck Institute and Pritzker School of Molecular Engineering. “The trapped light also behaved like it is traveling in a 2D space.”
    Guiding light
    The newly invented system is a way to guide light — known as a waveguide — that is essentially two-dimensional. In tests, the researchers found they could use extremely tiny prisms, lenses, and switches to guide the path of the light along a chip — all the ingredients for circuits and computations.

  • Arrays of quantum rods could enhance TVs or virtual reality devices

    Flat screen TVs that incorporate quantum dots are now commercially available, but it has been more difficult to create arrays of their elongated cousins, quantum rods, for commercial devices. Quantum rods can control both the polarization and color of light, to generate 3D images for virtual reality devices.
    Using scaffolds made of folded DNA, MIT engineers have come up with a new way to precisely assemble arrays of quantum rods. By depositing quantum rods onto a DNA scaffold in a highly controlled way, the researchers can regulate their orientation, which is a key factor in determining the polarization of light emitted by the array. This makes it easier to add depth and dimensionality to a virtual scene.
    “One of the challenges with quantum rods is: How do you align them all at the nanoscale so they’re all pointing in the same direction?” says Mark Bathe, an MIT professor of biological engineering and the senior author of the new study. “When they’re all pointing in the same direction on a 2D surface, then they all have the same properties of how they interact with light and control its polarization.”
    MIT postdocs Chi Chen and Xin Luo are the lead authors of the paper, which appears today in Science Advances. Robert Macfarlane, an associate professor of materials science and engineering; Alexander Kaplan PhD ’23; and Moungi Bawendi, the Lester Wolfe Professor of Chemistry, are also authors of the study.
    Nanoscale structures
    Over the past 15 years, Bathe and others have led in the design and fabrication of nanoscale structures made of DNA, also known as DNA origami. DNA, a highly stable and programmable molecule, is an ideal building material for tiny structures that could be used for a variety of applications, including delivering drugs, acting as biosensors, or forming scaffolds for light-harvesting materials.
    Bathe’s lab has developed computational methods that allow researchers to simply enter a target nanoscale shape they want to create, and the program will calculate the sequences of DNA that will self-assemble into the right shape. They also developed scalable fabrication methods that incorporate quantum dots into these DNA-based materials.
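    The link between rod orientation and polarization described above can be illustrated with a simple model: if each rod is treated as an emitter polarized along its long axis (an idealization), the polarization contrast measured from the whole array grows as the spread of rod orientations shrinks. The numbers below are illustrative, not measurements from the study.

```python
# Why alignment matters: treating each quantum rod as a dipole emitter
# polarized along its long axis (an idealization), the ensemble's measured
# polarization contrast grows as the spread of rod orientations shrinks.
import numpy as np

def polarization_contrast(angle_std_deg, n_rods=100_000, seed=0):
    """Degree of linear polarization for rods whose in-plane angles are
    normally distributed about 0 with the given standard deviation."""
    rng = np.random.default_rng(seed)
    theta = np.deg2rad(rng.normal(0.0, angle_std_deg, n_rods))
    i_parallel = np.mean(np.cos(theta) ** 2)   # intensity through a polarizer along x
    i_perp = np.mean(np.sin(theta) ** 2)       # intensity through a polarizer along y
    return (i_parallel - i_perp) / (i_parallel + i_perp)

for spread in (5, 20, 45, 90):
    print(f"orientation spread {spread:>2} deg -> "
          f"polarization contrast {polarization_contrast(spread):.2f}")
```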

  • Artificial intelligence designs advanced materials

    In a world where annual economic losses from corrosion surpass 2.5 trillion US dollars, the quest for corrosion-resistant alloys and protective coatings continues unabated. Artificial intelligence (AI) is playing an increasingly pivotal role in designing new alloys. Yet the predictive power of AI models in foreseeing corrosion behaviour and suggesting optimal alloy formulas has remained elusive.
    Scientists at the Max-Planck-Institut für Eisenforschung (MPIE) have now developed a machine learning model that enhances predictive accuracy by up to 15% compared with existing frameworks. The model uncovers new but realistic corrosion-resistant alloy compositions. Its distinct power arises from fusing numerical and textual data. Although initially developed for the critical problem of pitting corrosion in high-strength alloys, the model can be extended to all alloy properties. The researchers published their latest results in the journal Science Advances.
    Merging texts and numbers
    “Every alloy has unique properties concerning its corrosion resistance. These properties depend not only on the alloy composition itself, but also on the alloy’s manufacturing process. Current machine learning models are only able to benefit from numerical data. However, processing methodologies and experimental testing protocols, which are mostly documented by textual descriptors, are crucial to explaining corrosion,” explains Kasturi Narasimha Sasidhar, lead author of the publication and former postdoctoral researcher at MPIE. The research team used language processing methods, akin to ChatGPT, in combination with machine learning (ML) techniques for numerical data and developed a fully automated natural language processing framework. Moreover, incorporating textual data into the ML framework makes it possible to identify enhanced alloy compositions resistant to pitting corrosion. “We trained the deep-learning model with intrinsic data that contain information about corrosion properties and composition. Now the model is capable of identifying alloy compositions that are critical for corrosion resistance even if the individual elements were not fed into the model initially,” says Michael Rohwerder, co-author of the publication and head of the group Corrosion at MPIE.
    Pushing boundaries: automated data mining and image processing
    In the recently devised framework, Sasidhar and his team harnessed manually gathered data as textual descriptors. Presently, their objective lies in automating the process of data mining and seamlessly integrating it into the existing framework. The incorporation of microscopy images marks another milestone, envisioning the next generation of AI frameworks that converge textual, numerical, and image-based data.
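    The fusion of textual and numerical descriptors can be sketched with generic, off-the-shelf tools: extract features from free-text processing descriptions, concatenate them with numeric composition features, and fit a single regressor on the combined vector. The sketch below uses TF-IDF and ridge regression as stand-ins for the deep-learning framework described above, and the alloys, texts, and target values are invented purely to show the plumbing.

```python
# Sketch of the general pattern (not MPIE's actual framework): concatenate
# TF-IDF features from free-text processing descriptions with numeric
# composition features, then fit one regressor on the fused feature vector.
# The alloys, texts, and target values below are invented for illustration.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline

data = pd.DataFrame({
    "cr_wt_pct":  [17.0, 20.5, 16.0, 25.0],
    "mo_wt_pct":  [2.0, 6.0, 0.0, 3.5],
    "n_wt_pct":   [0.03, 0.20, 0.02, 0.10],
    "processing": ["solution annealed then water quenched",
                   "cold rolled, solution annealed, aged",
                   "as cast, no heat treatment",
                   "hot rolled then solution annealed"],
    # Stand-in target: pitting potential in mV (invented numbers).
    "pitting_potential_mv": [320, 610, 180, 520],
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "processing"),                        # textual descriptors
    ("nums", "passthrough", ["cr_wt_pct", "mo_wt_pct", "n_wt_pct"]),  # numerical descriptors
])
model = Pipeline([("features", features), ("reg", Ridge(alpha=1.0))])
model.fit(data, data["pitting_potential_mv"])

candidate = pd.DataFrame({
    "cr_wt_pct": [22.0], "mo_wt_pct": [4.0], "n_wt_pct": [0.15],
    "processing": ["solution annealed then water quenched"],
})
print("predicted pitting potential (mV):", model.predict(candidate)[0].round(1))
```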

  • A roadmap to help AI technologies speak African languages

    From text-generating ChatGPT to voice-activated Siri, artificial intelligence-powered tools are designed to aid our everyday life — as long as you speak a language they support. These technologies are out of reach for billions of people who don’t use English, French, Spanish or other mainstream languages, but researchers in Africa are looking to change that. In a study published August 11 in the journal Patterns, scientists draw a roadmap to develop better AI-driven tools for African languages.
    “It doesn’t make sense to me that there are limited AI tools for African languages,” says first author and AI researcher Kathleen Siminyu of the Masakhane Research Foundation, a grassroots network of African scientists who aim to spur accessible AI tools for those who speak African languages. “Inclusion and representation in the advancement of language technology is not a patch you put at the end — it’s something you think about up front.”
    Many of these tools rely on a field of AI called natural language processing, a technology that enables computers to understand human languages. Computers can master a language through training, where they pick up on patterns in speech and text data. However, they fail when data in a particular language is scarce, as is the case for many African languages. To fill the gap, the research team first identified key players involved in developing African language tools and explored their experiences, motivations, focus areas, and challenges. These people include writers and editors who create and curate content, as well as linguists, software engineers, and entrepreneurs who are crucial in establishing the infrastructure for language tools.
    Interviews with the key players revealed four central themes to consider in designing African language tools. First, bearing the impact of colonization, Africa is a multilingual society where African languages are central to people’s cultural identities and key to societal participation in education, politics, the economy, and more. Second, there is a need to support African content creation. This includes building basic tools such as dictionaries, spell checkers, and keyboards for African languages, and removing financial and administrative barriers to translating government communications into multiple national languages, including African languages. Third, the creation of African language technologies will benefit from collaboration between linguistics and computer science. There should also be a focus on creating human-centered tools that help individuals unlock greater potential. Fourth, developers should be mindful of communities and of ethical practices during the collection, curation, and use of data.
    “There’s a growing number of organizations working in this space, and this study allows us to coordinate efforts in building impactful language tools,” says Siminyu. “The findings highlight and articulate what the priorities are, in terms of time and financial investments.”
    Next, the team plans to expand the study and include more participants to understand the communities that AI language technologies may impact. They will also address barriers that may hinder people’s access to the technology. The team hopes their study could serve as a roadmap to help develop a wide range of language tools, from translation services to misinformation-catching content moderators. The findings may also pave the way to preserve indigenous African languages.
    “I would love for us to live in a world where Africans can have as good quality of life and access to information and opportunities as somebody fluent in English, French, Mandarin, or other languages,” says Siminyu.