More stories

  •

    Novel software that combines gene activity and tissue location to decode disease mechanisms

    In disease research, it’s important to know both which genes are expressed and where in a tissue that expression is happening, but marrying the two sets of information can be challenging.
    “Single-cell technologies, especially in the emerging field of spatial transcriptomics, help scientists see where in a tissue the genes are turned on or off. It combines information about gene activity with the exact locations within the disease tissues,” explains Fan Zhang, PhD, assistant professor of medicine with a secondary appointment in the Department of Biomedical Informatics at the University of Colorado School of Medicine.
    “This is really valuable because it lets physicians and researchers see not just which genes are active, but also where they are active, which can give key insights into how different cells behave and interact in diseased conditions,” she continues.
    Effectively combining location and genetic information has been a tough obstacle for researchers — until now.
    Zhang and her lab developed a new computational machine learning method — called Spatial Transcriptomic multi-viEW, or “STew” for short — that enables the joint analysis of spatial variation and gene expression changes in a scalable way that can handle large numbers of cells.
    This new technology may help researchers learn more about the spatial biology behind many different diseases and lead them to better treatment therapies.
    A path toward an accurate target for effective treatment
    The new technology is accurate in finding significant patterns that show where specific cell activities happen, which is important for understanding how cells work and how clinical tissues are structured in diseases. Zhang’s lab has already successfully applied STew to human tissues, including human brains, inflamed skin, and breast cancer tumors.
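The general idea behind such joint analysis can be sketched in a few lines: standardize the two views (expression and spatial coordinates), weight and concatenate them, and reduce the result to a shared low-dimensional embedding. This is a minimal illustration of the multi-view idea, not the published STew algorithm; the weighting scheme and synthetic data below are invented for the demo.

```python
import numpy as np

def joint_embed(expr, coords, alpha=0.5, n_dims=2):
    """Toy multi-view embedding: z-score each view, weight and
    concatenate them, and keep the top axes of joint variation via a
    truncated SVD. Illustrates the idea of analyzing gene expression
    and spatial position together; not the published STew algorithm."""
    def zscore(m):
        return (m - m.mean(axis=0)) / (m.std(axis=0) + 1e-9)
    views = np.hstack([alpha * zscore(expr), (1 - alpha) * zscore(coords)])
    u, s, _ = np.linalg.svd(views, full_matrices=False)
    return u[:, :n_dims] * s[:n_dims]

# synthetic demo: two spatial domains with distinct expression programs
rng = np.random.default_rng(0)
half = 100
coords = np.vstack([rng.normal(0, 1, (half, 2)), rng.normal(5, 1, (half, 2))])
expr = np.vstack([rng.normal(0, 1, (half, 50)), rng.normal(2, 1, (half, 50))])
emb = joint_embed(expr, coords)  # one row per cell, shape (200, 2)
```

In the joint embedding, cells that share both a spatial domain and an expression program land close together, which is what makes downstream clustering spatially aware.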

    For Zhang, who studies inflammatory diseases using computational AI tools and translational approaches, finding a good target for treatment is often a challenge, but STew could help change that.
    “With inflamed joints, for example, the genes causing inflammation could be closer to the blood vessel through interacting with mesenchymal structures, or they could be farther away, but knowing that exact location and cell-cell communication patterns helps us better understand the underlying mechanisms,” she says.
    By merging spatial biology and molecular diversity, STew gives researchers a new dimension in classifying patient heterogeneity.
    “If you only use gene expression to classify patients, you don’t have the full picture,” Zhang says. “Once you add in spatial information, you have a more comprehensive understanding.”
    “We expect STew to be effective in uncovering critical molecular and cellular signals in various clinical conditions, like different types of tumors and autoimmune disorders, opening new avenues for targeting dysregulated immune pathways for therapeutic intervention in these diseases,” she continues.
    A novel software-driven route to empowering collaboration
    There’s another perk that comes with the development of STew: collaboration. Scientific discoveries often benefit from experts from different fields working together.

    Because STew has such wide applicability, Zhang says the software will bring researchers together in new and exciting ways that will ultimately benefit the field of medicine and offer promise to patients in need of treatments.
    “We want to encourage researchers across specialties, skillsets, and even departments to collaborate in ways that they previously might not have been able to do,” Zhang says. “We can accomplish more together, so it’s important to boost data-driven and AI tool-motivated collaboration in a way that is meaningful.”

  •

    Aiding the displaced with data

    In times of crisis, effective humanitarian aid depends largely on the fast and efficient allocation of resources and personnel. Accurate data about the locations and movements of affected people in these situations is essential for this. Researchers from the University of Tokyo, working with the World Bank, have produced a framework to analyze and visualize population mobility data, which could help in such cases.
    Wars, famines, outbreaks, natural disasters … There are, sadly, many reasons why populations might be forced or feel compelled to leave their homes in search of refuge elsewhere, and these cases continue to grow. The United Nations estimated in 2023 that there were over 100 million forcibly displaced people in the world. Over 62 million of these individuals are considered internally displaced people (IDPs), those in particularly vulnerable situations due to being stuck within the borders of their countries, from which they might be trying to flee.
    The circumstances that displace populations are inevitably chaotic, and information infrastructure can be impeded, particularly, though not exclusively, in cases of conflict. So, authorities and agencies trying to get a handle on crises are often operating with limited data on the people they are trying to help. But the lack of data is not the only problem; being able to easily interpret data, so that nonexperts can make effective decisions based on it, is also an issue, especially in rapidly evolving situations where the stakes, and tensions, are high.
    “It’s practically impossible to provide aid agencies and others with accurate real-time data on affected populations. The available data will often be too fragmented to be useful directly,” said Associate Professor Yuya Shibuya from the Interfaculty Initiative in Information Studies. “There have been many efforts to use GPS data for such things, and in normal situations, it has been shown to be useful to model population behavior. But in times of crisis, patterns of predictability break down and the quality of data decreases. As data scientists, we explore ways to mitigate these problems and have developed a tracking framework for monitoring population movements by studying IDPs displaced in Russia’s invasion of Ukraine in 2022.”
    Even though Ukraine has good enough network coverage throughout to acquire GPS data, the data generated is not representative of the entire population. There are also privacy concerns, and likely other significant gaps in data due to the nature of conflict itself. As such, it’s no trivial task to model the way populations move. Shibuya and her team had access to a limited dataset which covered the period a few weeks before and a few weeks after the initial invasion on Feb. 24, 2022. This data contained over 9 million location records from over 100,000 anonymous IDPs who opted in to share their location data.
    “From these records, we could estimate people’s home locations at the regional level based on regular patterns in advance of the invasion. To make sure this limited data could be used to represent the entire population, we compared our estimates to survey data from the International Organization for Migration of the U.N.,” said Shibuya. “From there, we looked at when and where people moved just prior to and for some time after the invasion began. The majority of IDPs were from the capital, Kyiv, and some people left as early as five weeks before Feb. 24, perhaps in anticipation, though it was two weeks after that day that four times as many people left. However, a week later still, there was evidence some people started to return.”
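A toy version of the home-location step might look like the following: take each anonymous user's pre-invasion night-time pings and call the most frequent region their home. The record fields, the 21:00–06:00 night window, and the modal-region rule are illustrative assumptions, not the study's actual heuristics.

```python
from collections import Counter

def estimate_home_region(pings, cutoff_day):
    """Estimate a user's home region as the modal region of their
    night-time pings before a cutoff date. A simplified stand-in for
    the regular-pattern estimation described in the study; the field
    names and the 21:00-06:00 night window are illustrative."""
    night = [p["region"] for p in pings
             if p["day"] < cutoff_day and (p["hour"] >= 21 or p["hour"] < 6)]
    if not night:
        return None
    return Counter(night).most_common(1)[0][0]

# days counted relative to the invasion (day 0); records invented for the demo
pings = [
    {"day": -20, "hour": 23, "region": "Kyiv"},
    {"day": -18, "hour": 2,  "region": "Kyiv"},
    {"day": -15, "hour": 22, "region": "Kyiv"},
    {"day": -14, "hour": 13, "region": "Lviv"},  # daytime trip, ignored
    {"day": 3,   "hour": 23, "region": "Lviv"},  # post-invasion, ignored
]
home = estimate_home_region(pings, cutoff_day=0)  # "Kyiv"
```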
    That some people return to afflicted areas is just one factor that confounds population mobility models — in fact, people may move between locations, sometimes multiple times. Trying to represent this with a simple map with arrows showing populations could get cluttered fast. Shibuya’s team used color-coded charts to visualize its data, which show population movements in and out of regions at different times (dynamic data) in a single image.
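The in/out-flow summary behind such charts can be sketched as a region-by-region transition count per week, from which net inflows fall out directly. The regions and trajectories below are invented for illustration; real opt-in GPS data would be far sparser and noisier.

```python
import numpy as np

def flow_matrix(trajectories, regions, weeks):
    """Count week-to-week moves between regions. flows[w, i, j] is the
    number of users observed in region i at week w and region j at
    week w + 1 -- a minimal sketch of the dynamic in/out-flow data
    visualized in the color-coded charts."""
    idx = {r: i for i, r in enumerate(regions)}
    flows = np.zeros((weeks - 1, len(regions), len(regions)), dtype=int)
    for traj in trajectories:            # one region per week per user
        for w in range(weeks - 1):
            flows[w, idx[traj[w]], idx[traj[w + 1]]] += 1
    return flows

# invented trajectories: region observed each week for three users
regions = ["Kyiv", "Lviv", "Abroad"]
trajs = [
    ["Kyiv", "Lviv", "Lviv"],   # fled west and stayed
    ["Kyiv", "Lviv", "Kyiv"],   # fled, then returned
    ["Lviv", "Lviv", "Lviv"],   # stayed put
]
f = flow_matrix(trajs, regions, weeks=3)
# net inflow into Kyiv (index 0) for each weekly transition
net = f[:, :, 0].sum(axis=1) - f[:, 0, :].sum(axis=1)  # outflow, then return
```

A heatmap of such matrices over time conveys returns and repeated moves that a static arrow map cannot.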
    “I want visualizations like these to help humanitarian agencies gauge how to allocate human resources and physical resources like food and medicine. As they tell you about dynamic changes in populations, not just A to B movements, I think it could mean aid gets to where it’s needed and when it’s needed more efficiently, reducing waste and overheads,” said Shibuya. “Another thing we found that could be useful is that people’s migration patterns vary, and socioeconomic status seems to be a factor in this. People from more affluent areas tended to move farther from their homes than others. There is demographic diversity and good simulations ought to reflect this diversity and not make too many assumptions.”
    The team worked with the World Bank on this study, as the international organization could provide the data necessary for the analyses. They hope to look into other kinds of situations too, such as natural disasters, political conflicts, environmental issues and more. Ultimately, by performing research like this, Shibuya hopes to produce better general models of human behavior in crisis situations in order to alleviate some of the impacts those situations can create.

  •

    Best of both worlds: Innovative positioning system enhances versatility and accuracy of drone-viewpoint mixed reality applications

    A research group at Osaka University has developed an innovative positioning system, correctly aligning the coordinates of the real and virtual worlds without the need to define routes in advance. This is achieved by integrating two vision-based self-location estimation methods: visual positioning systems (VPS) and natural feature-based tracking. This development will lead to the realization of versatile drone-based mixed reality (MR) using drones available on the market. Drone-based MR is expected to see use in a variety of applications in the future, such as urban landscape simulation and support for maintenance and inspection work, contributing to further development of drone applications, especially in the fields of architecture, engineering, and construction (AEC).
    In recent years, there has been a growing interest in the integration of drones across diverse sectors, particularly within AEC. The use of drones in AEC has expanded due to their superior features in terms of time, accuracy, safety, and cost. The amalgamation of drones with MR stands out as a promising avenue as it is not restricted by the user’s range of action and is effective when performing landscape simulations for large-scale spaces such as cities and buildings. Previous studies proposed methods to integrate MR and commercial drones using versatile technologies such as screen sharing and streaming delivery; however, these methods required predefined drone flight routes to match the movements of the real and virtual world, thus reducing the versatility of the application and limiting use cases of MR.
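Aligning the coordinates of the real and virtual worlds amounts to estimating a rigid transform (rotation plus translation) between matched 3-D points from the two frames. The sketch below uses the classic Kabsch algorithm as a generic stand-in for that alignment step; the paper's actual VPS-plus-natural-feature-tracking pipeline is more involved, and the demo values are invented.

```python
import numpy as np

def align_frames(real_pts, virtual_pts):
    """Estimate the rotation r and translation t mapping real-world
    coordinates into the virtual frame from matched 3-D points, via
    the Kabsch algorithm. A generic coordinate-alignment sketch, not
    the paper's integrated VPS + feature-tracking method."""
    rc, vc = real_pts.mean(axis=0), virtual_pts.mean(axis=0)
    h = (virtual_pts - vc).T @ (real_pts - rc)   # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(u @ vt))           # guard against reflections
    r = u @ np.diag([1.0, 1.0, d]) @ vt
    t = vc - r @ rc
    return r, t

# demo: the virtual frame is the real frame rotated 90 degrees about z,
# then shifted (values invented for the demo)
r_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 0.5])
real = np.random.default_rng(1).uniform(-5, 5, size=(10, 3))
virtual = real @ r_true.T + t_true
r_est, t_est = align_frames(real, virtual)  # recovers r_true, t_true
```

Once such a transform is known, virtual content can be rendered from the drone's viewpoint without predefining the flight route.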
    While this research does not implement a drone-based MR application for actual use, the proposed alignment system is highly versatile and has the potential for various additional functionalities in the future. This brings us one step closer to realizing drone-centric MR applications that can be utilized throughout the entire lifecycle of architectural projects, from the initial stages of design and planning to later stages such as maintenance and inspection.
    First author Airi Kinoshita says, “The integration of drones and MR has the potential to solve various social issues, such as those in urban planning and infrastructure development and maintenance, disaster response and humanitarian aid, cultural protection and tourism, and environmental conservation, by freeing MR users from the constraints of experiencing only their immediate vicinity, enabling MR expression from a freer perspective.”

  •

    The embryo assembles itself

    Biological processes depend on puzzle pieces coming together and interacting. Under specific conditions, these interactions can create something new without external input. This is called self-organization, as seen in a school of fish or a flock of birds. Interestingly, the mammalian embryo develops similarly. In PNAS, David Brückner and Gašper Tkačik from the Institute of Science and Technology Austria (ISTA) introduce a mathematical framework that analyzes self-organization from a single cell to a multicellular organism.
    When an embryo develops, many types of cells with different functions need to be generated. For example, some cells will become part of the eye and record visual stimuli, while others will be part of the gut and help digest food. To determine their roles, cells are constantly communicating with each other using chemical signals.
    Thanks to this communication, during development, everything is well synchronized and coordinated, and yet there is no central control responsible for this. The cell collective is self-organized and orchestrated by the interactions between the individuals. Each cell reacts to signals of its neighbors. Based on such self-organization, the mammalian embryo develops from a single fertilized egg cell into a multicellular organism.
    David Brückner and Gašper Tkačik from the Institute of Science and Technology Austria (ISTA) have now established a mathematical framework that helps analyze this process and predict its optimal parameters. Published in PNAS, this approach represents a unifying mathematical language to describe biological self-organization in embryonic development and beyond.
    The self-assembling embryo
    In nature, self-organization is all around us: we can observe it in fish schools, bird flocks, or insect collectives, and even in microscopic processes regulated by cells. NOMIS fellow and ISTA postdoc David Brückner is interested in getting a better understanding of these processes from a theoretical standpoint. His focus lies on embryonic development — a complex process governed by genetics and cells communicating with each other.
    During embryonic development, a single fertilized cell turns into a multicellular embryo containing organs with lots of different features. “For many steps in this developmental process, the system has no extrinsic signal that directs it what to do. There is an intrinsic property of the system that allows it to establish patterns and structures,” says Brückner. “The intrinsic property is what is known as self-organization.” Even with unpredictable factors — which physicists call “noise” — the embryonic patterns are formed reliably and consistently. In recent years, scientists have gained a deeper understanding of the molecular details that drive this complex process. A mathematical framework to analyze and quantify its performance, however, was lacking. The language of information theory provides answers.

    Bridging expertise
    “Information theory is a universal language to quantify structure and regularity in statistical ensembles, which are a collection of replicates of the same process. Embryonic development can be seen as such a process that reproducibly generates functional organisms that are very similar but not identical,” says Gašper Tkačik, professor at ISTA and expert in this field. For a long time, Tkačik has been studying how information gets processed in biological systems, for instance in the fly embryo. “In the early fly embryo, patterns are not self-organized,” he continues. “The mother fly puts chemicals into the egg that instruct the cells on what actions to take.” As the Tkačik group had already developed a framework for this system, Brückner reached out to develop one for the mammalian embryo as well. “With Gašper’s expertise in information theory, we were able to put it together,” Brückner adds excitedly.
    Beyond embryo development?
    During embryonic development, cells exchange signals and are constantly subject to random, unpredictable fluctuations (noise). Therefore, cellular interactions must be robust. The new framework measures how these interactions may be optimized to withstand noise. Using computer simulations of interacting cells, the scientists explored the conditions under which a system can still reach a stable final result despite such fluctuations.
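A toy simulation in this spirit: cells on a line smooth a noisy intrinsic gradient through neighbor-to-neighbor communication, then commit to one of two fates, and reproducibility is scored across replicate "embryos." This is an illustrative stand-in invented for the demo, not one of the paper's three developmental models or its information-theoretic measure.

```python
import numpy as np

def develop(n_cells, noise, rounds=20, rng=None):
    """Toy 1-D 'embryo': a shallow intrinsic gradient plus noise is
    smoothed by repeated nearest-neighbour averaging (standing in for
    cell-cell communication), then each cell commits to fate 0 or 1."""
    rng = rng or np.random.default_rng()
    x = np.linspace(-1, 1, n_cells) + rng.normal(0, noise, n_cells)
    kernel = np.ones(3) / 3
    for _ in range(rounds):
        x = np.convolve(np.pad(x, 1, mode="edge"), kernel, mode="valid")
    return (x > 0).astype(int)

def reproducibility(noise, reps=50, n_cells=40):
    """Average per-position agreement of fates across replicates, a
    crude stand-in for an information-theoretic reproducibility score."""
    rng = np.random.default_rng(0)
    fates = np.array([develop(n_cells, noise, rng=rng) for _ in range(reps)])
    p = fates.mean(axis=0)              # fraction of replicates with fate 1
    return float(np.mean(np.maximum(p, 1 - p)))

low_noise = reproducibility(0.1)   # pattern forms reliably
high_noise = reproducibility(3.0)  # noise degrades reproducibility
```

Sweeping the communication parameters (here, the number of averaging rounds) and asking where reproducibility stays high is the flavor of optimization question the framework formalizes.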
    Although the framework has proven to be successful on three different developmental models that all rely on chemical and mechanical signaling, additional work will be required to apply it to experimental recordings of developmental systems. “In the future, we want to study more complex models with more parameters and dimensions,” Tkačik says. “By quantifying more complex models, we could also apply our framework to experimentally measured patterns of chemical signals in developing embryos,” adds Brückner. For this purpose, the two theoretical scientists will team up with experimentalists.

  •

    Groundbreaking progress in quantum physics: How quantum field theories decay and fission

    An international research team led by Marcus Sperling, a researcher at the Faculty of Physics, University of Vienna, has sparked interest in the scientific community with pioneering results in quantum physics: In their current study, the researchers reinterpret the Higgs mechanism, which gives elementary particles mass and triggers phase transitions, using the concept of “magnetic quivers.” The work has now been published in the journal “Physical Review Letters.”
    The foundation of Marcus Sperling’s research, which lies at the intersection of physics and mathematics, is Quantum Field Theory (QFT) — a physical-mathematical concept within quantum physics focused on describing particles and their interactions at the subatomic level. Since 2018, he has developed the so-called “magnetic quivers” along with colleagues — a graphical tool that summarizes all the information needed to define a QFT, thus displaying complex interactions between particle fields or other physical quantities clearly and intuitively.
    Metaphorical Magnetic Quivers
    A quiver consists of directed arrows and nodes. The arrows represent the quantum fields (matter fields), while the nodes represent the interactions — e.g., strong, weak, or electromagnetic — between the fields. The direction of the arrows indicates how the fields are charged under the interactions, e.g., what electric charge the particles carry. Marcus Sperling explains, “The term ‘magnetic’ is also used metaphorically here to point to the unexpected quantum properties that are made visible by these representations. Similar to the spin of an electron, which can be detected through a magnetic field, magnetic quivers reveal certain properties or structures in the QFTs that may not be obvious at first glance.” Thus, they offer a practical way to visualize and analyze complex quantum phenomena, facilitating new insights into the underlying mechanisms of the quantum world.
    Supersymmetric QFTs
    For the current study, the stable ground states (vacua) — the lowest energy configuration in which no particles or excitations are present — in a variety of “supersymmetric QFTs” were explored. These QFTs, with their simplified space-time symmetry, serve as a laboratory environment, as they resemble real physical systems of subatomic particles but have certain mathematical properties that facilitate calculations. FWF START award winner Sperling said, “Our research deals with the fundamentals of our understanding of physics. Only after we have understood the QFTs in our laboratory environment can we apply these insights to more realistic QFT models.” The concept of magnetic quivers — one of the main research topics of Sperling’s START project at the University of Vienna — was used as a tool to provide a precise geometric description of the new quantum vacua.
    Decay & Fission: Higgs Mechanism Reinterpreted
    With calculations based on linear algebra, the research team demonstrated that — analogous to radioactivity in atomic nuclei — a magnetic quiver can decay into a more stable state or fission into two separate quivers. These transformations offer a new understanding of the Higgs mechanism in QFTs, which either decay into simpler QFTs or fission into separate, independent QFTs. Physicist Sperling stated, “The Higgs mechanism explains how elementary particles acquire their mass by interacting with the Higgs field, which permeates the entire universe. Particles interact with this field as they move through space — similar to a swimmer moving through water.” A particle that has no mass usually moves at the speed of light. However, when it interacts with the Higgs field, it “sticks” to this field and becomes sluggish, leading to the manifestation of its mass. The Higgs mechanism is thus a crucial concept for understanding the fundamental building blocks and forces of the universe. Mathematically, the “decay and fission” algorithm is based on the principles of linear algebra and a clear definition of stability. It operates autonomously and requires no external inputs. The results achieved through physics-inspired methods are not only relevant in physics but also in mathematical research: They offer a fundamental and universally valid description of the complex, intertwined structures of the quantum vacua, representing a significant advance in mathematics.

  •

    Development of revolutionary color-tunable photonic devices

    A team at Pohang University of Science and Technology (POSTECH), spearheaded by Professor Su Seok Choi and Ph.D. candidate Seungmin Nam from the Department of Electrical Engineering, has developed a novel stretchable photonic device that can control light wavelengths in all directions. This pioneering study was published in Light: Science & Applications on May 22.
    Structural colors are produced through the interaction of light with microscopic nanostructures, creating vibrant hues without relying on traditional color mixing methods. Conventional displays and image sensors blend the three primary colors (red, green, and blue), while structural color technology leverages the inherent wavelengths of light, resulting in more vivid and diverse color displays. This innovative approach is gaining recognition as a promising technology in the nano-optics and photonics industries.
    Traditional color mixing techniques, which use dyes or luminescent materials, are limited to passive and fixed color representation. In contrast, tunable color technology dynamically controls nanostructures corresponding to specific light wavelengths, allowing for the free adjustment of pure colors. Previous research has primarily been limited to unidirectional color tuning, typically shifting colors from red to blue. Reversing this shift — from blue to red, which has a longer wavelength — has been a significant challenge. Current technology only allows adjustments towards shorter wavelengths, making it difficult to achieve diverse color representation in the ideal free wavelength direction. Therefore, a new optical device capable of bidirectional and omnidirectional wavelength adjustment is needed to maximize the utilization of wavelength control technology.
    Professor Choi’s team addressed these challenges by integrating chiral liquid crystal elastomers (CLCEs) with dielectric elastomer actuators (DEAs). CLCEs are flexible materials capable of structural color changes, while DEAs induce flexible deformation of dielectrics in response to electrical stimuli. The team optimized the actuator structure to allow both expansion and contraction, combining it with CLCEs, and developed a highly adaptable stretchable device. This device can freely adjust the wavelength position across the visible spectrum, from shorter to longer wavelengths and vice versa.
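The underlying optics can be sketched with the cholesteric reflection relation λ = n̄ · p (average refractive index times helix pitch): stretching the film thins it and shortens the pitch (blue shift), while contraction thickens it (red shift), which is what bidirectional actuation exploits. The index value, rest pitch, and incompressibility assumption below are illustrative, not taken from the paper.

```python
def reflected_wavelength(pitch_nm, n_avg=1.6):
    """Central reflected wavelength of a chiral (cholesteric) liquid
    crystal: lambda = n_avg * pitch. The average index of 1.6 is a
    typical textbook value, assumed here rather than measured."""
    return n_avg * pitch_nm

def strained_pitch(rest_pitch_nm, areal_strain):
    """Pitch of an (assumed) incompressible CLCE film under in-plane
    actuation: expanding both in-plane axes by (1 + strain) thins the
    film by 1 / (1 + strain)^2, and contraction thickens it -- hence
    shifts toward both shorter and longer wavelengths are possible."""
    return rest_pitch_nm / (1 + areal_strain) ** 2

p0 = 280.0  # nm, illustrative rest pitch
rest = reflected_wavelength(strained_pitch(p0, 0.0))         # 448 nm, blue
stretched = reflected_wavelength(strained_pitch(p0, 0.2))    # shorter: blue shift
contracted = reflected_wavelength(strained_pitch(p0, -0.2))  # longer: red shift
```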
    In their experiments, the researchers demonstrated that their CLCE-based photonic device could control structural colors over a broad range of visible wavelengths (from blue at 450nm to red at 650nm) using electrical stimuli. This represents a significant advancement over previous technologies, which were limited to unidirectional wavelength tuning.
    This research not only establishes a foundational technology for advanced photonic devices but also highlights its potential for various industrial applications.
    Professor Choi remarked, “This technology can be applied in displays, optical sensors, optical camouflage, direct optical analogue encryption, biomimetic sensors, and smart wearable devices, among many other applications involving light, color, and broadband electromagnetic waves beyond the visible band. We aim to expand its application scope through ongoing research.”

  •

    Enhancing nanofibrous acoustic energy harvesters with artificial intelligence

    Scientists at the Terasaki Institute for Biomedical Innovation (TIBI) have employed artificial intelligence techniques to improve the design and production of nanofibers used in wearable nanofiber acoustic energy harvesters (NAEHs). These acoustic devices capture sound energy from the environment and convert it into electrical energy, which can then be applied in useful devices, such as hearing aids.
    Many efforts have been made to capture naturally occurring and abundant energy sources from our surrounding environment. Relatively recent advances such as solar panels and wind turbines allow us to efficiently harvest energy from the sun and wind, convert it into electrical energy, and store it for various applications. Similarly, conversions of acoustic energy can be seen in amplifying devices such as microphones, as well as in wearable, flexible electronic devices for personalized healthcare.
    Recently, there has been much interest in using piezoelectric nanogenerators — devices that convert mechanical vibrations, stress, or strain into electrical power — as acoustic energy harvesters. These nanogenerators can convert mechanical energy from sound waves to generate electricity; however, this conversion of sound waves is inefficient, as it occurs mainly in the high frequency sound range, and most environmental sound waves are in the low frequency range. Additionally, choosing optimal materials, structural design, and fabrication parameters makes the production of piezoelectric nanogenerators challenging.
    As described in their paper in Nano Research, the TIBI scientists’ approach to these challenges was two-fold: first, they chose their materials strategically and elected to fabricate nanofibers from polyvinylidene fluoride (PVDF), a polymer known for its ability to capture acoustic energy efficiently. When making the nanofiber mixture, polyurethane (PU) was added to the PVDF solution to impart flexibility, and electrospinning (a technique for generating ultrathin fibers) was used to produce the composite PVDF/PU nanofibers.
    Secondly, the team applied artificial intelligence (AI) techniques to determine the best fabrication parameters involved in electrospinning the PVDF/polyurethane nanofibers; these parameters included the applied voltage, electrospinning time, and drum rotation speed. Employing these techniques allowed the team to tune the parameter values to obtain maximum power generation from their PVDF/PU nanofibers.
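As a schematic of such a parameter-tuning loop, the sketch below runs a random search over the three electrospinning parameters against a synthetic objective. The objective function, its peak location, and the parameter ranges are invented for the demo, and the study's AI techniques are more sophisticated than random search; the point is only the shape of the optimization loop.

```python
import random

def power_output(voltage_kv, time_min, speed_rpm):
    """Synthetic stand-in for measured power density, peaking at an
    arbitrary (18 kV, 90 min, 1500 rpm); the real study fits AI
    models to experimental harvester data instead."""
    return (100.0
            - (voltage_kv - 18.0) ** 2
            - 0.02 * (time_min - 90.0) ** 2
            - 0.0001 * (speed_rpm - 1500.0) ** 2)

def random_search(trials=2000, seed=42):
    """Simplest possible tuning loop over the three electrospinning
    parameters named in the study."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        params = (rng.uniform(10, 30),     # applied voltage, kV
                  rng.uniform(30, 180),    # electrospinning time, min
                  rng.uniform(500, 3000))  # drum rotation speed, rpm
        score = power_output(*params)
        if score > best_score:
            best, best_score = params, score
    return best, best_score

(best_v, best_t, best_s), score = random_search()  # near the synthetic optimum
```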
    To make their nanoacoustic energy harvester, the TIBI scientists fashioned their PVDF/PU nanofibers into a nanofibrous mat and sandwiched it between aluminum mesh layers that functioned as electrodes. The entire assembly was then encased by two flexible frames.
    In tests against conventionally fabricated NAEHs, the resultant AI-generated PVDF/PU NAEHs were found to have better overall performance, yielding a power density level more than 2.5 times higher and a significantly higher energy conversion efficiency (66% vs 42%). Furthermore, the AI-generated PVDF/PU NAEHs were able to obtain these results when tested with a wide range of low-frequency sound — well within the levels found in ambient background noise. This allows for excellent sound recognition and the ability to distinguish words with high resolution.
    “Models using artificial intelligence optimization, such as the one described here, minimize time spent on trial and error and maximize the effectiveness of the finished product,” said Ali Khademhosseini, Ph.D., TIBI’s director and CEO. “This can have far-reaching effects on the fabrication of medical devices with significant practicability.”

  •

    Researchers develop technology that may allow stroke patients to undergo rehab at home

    For survivors of strokes, which afflict nearly 800,000 Americans each year, regaining fine motor skills like writing and using utensils is critical for recovering independence and quality of life. But getting intensive, frequent rehabilitation therapy can be challenging and expensive.
    Now, researchers at NYU Tandon School of Engineering are developing a new technology that could allow stroke patients to undergo rehabilitation exercises at home by tracking their wrist movements through a simple setup: a smartphone strapped to the forearm and a low-cost gaming controller called the Novint Falcon.
    The Novint Falcon, a desktop robot typically used for video games, can guide users through specific arm motions and track the trajectory of its controller. But it cannot directly measure the angle of the user’s wrist, which is essential data for therapists providing remote rehabilitation.
    In a paper presented at SPIE Smart Structures + Nondestructive Evaluation 2024, the researchers proposed using the Falcon in tandem with a smartphone’s built-in motion sensors to precisely monitor wrist angles during rehab exercises.
    “Patients would strap their phone to their forearm and manipulate this robot,” said Maurizio Porfiri, NYU Tandon Institute Professor and director of its Center for Urban Science + Progress (CUSP), who is the paper’s senior author. “Data from the phone’s inertial sensors can then be combined with the robot’s measurements through machine learning to infer the patient’s wrist angle.”
    The researchers collected data from a healthy subject performing tasks with the Falcon while wearing motion sensors on the forearm and hand to capture the true wrist angle. They then trained an algorithm to predict the wrist angles based on the sensor data and Falcon controller movements.
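The training step can be sketched as a regression from sensor features to wrist angle. Here ordinary least squares on synthetic data stands in for the study's machine-learning model, and the feature names and true coefficients are illustrative assumptions.

```python
import numpy as np

def fit_wrist_model(features, angles):
    """Ordinary least squares from sensor features to wrist angle
    (bias column appended) -- a minimal synthetic stand-in for the
    model trained on real IMU + Falcon data in the study."""
    x = np.hstack([features, np.ones((len(features), 1))])
    w, *_ = np.linalg.lstsq(x, angles, rcond=None)
    return w

def predict(w, features):
    x = np.hstack([features, np.ones((len(features), 1))])
    return x @ w

# synthetic session: 3 illustrative features (e.g. phone pitch, phone
# roll, Falcon handle height); the true relationship is invented
rng = np.random.default_rng(0)
feats = rng.normal(size=(300, 3))
true_w = np.array([12.0, -5.0, 3.0])
angles = feats @ true_w + 10.0 + rng.normal(0, 1.0, 300)  # degrees, with noise
w = fit_wrist_model(feats, angles)
pred = predict(w, feats)
r2 = 1 - np.sum((angles - pred) ** 2) / np.sum((angles - angles.mean()) ** 2)
```

With clean synthetic data the fit is nearly exact; real IMU signals would need nonlinear models and careful validation across subjects.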
    The resulting algorithm could predict wrist angles with over 90% accuracy, a promising initial step toward enabling remote therapy with real-time feedback in the absence of an in-person therapist.

    “This technology could allow patients to undergo rehabilitation exercises at home while providing detailed data to therapists remotely assessing their progress,” said Roni Barak Ventura, the paper’s lead author, who was an NYU Tandon postdoctoral fellow at the time of the study. “It’s a low-cost, user-friendly approach to increasing access to crucial post-stroke care.”
    The researchers plan to further refine the algorithm using data from more subjects. Ultimately, they hope the system could help stroke survivors stick to intensive rehab regimens from the comfort of their homes.
    “The ability to do rehabilitation exercises at home with automatic tracking could dramatically improve quality of life for stroke patients,” said Barak Ventura. “This portable, affordable technology has great potential for making a difficult recovery process much more accessible.”
    This study adds to NYU Tandon’s body of work that aims to improve stroke recovery. In 2022, researchers from NYU Tandon began collaborating with the FDA to design a regulatory science tool based on biomarkers to objectively assess the efficacy of rehabilitation devices for post-stroke motor recovery and guide their optimal usage. A study from earlier this year unveiled advances in technology that uses implanted brain electrodes to recreate the speaking voice of someone who has lost speech ability, which can be an outcome of stroke.