More stories

  • Bridging traditional economics and econophysics

How do asset markets work? Which stocks behave similarly? Economists, physicists, and mathematicians all work intensively on these questions, but often know little about what is happening outside their own discipline. A new paper now builds a bridge.
    In a new study, researchers of the Complexity Science Hub highlight the connecting elements between traditional financial market research and econophysics. “We want to create an overview of the models that exist in financial economics and those that researchers in physics and mathematics have developed so that everybody can benefit from it,” explains Matthias Raddant from the Complexity Science Hub and the University for Continuing Education Krems.
    Scientists from both fields try to classify or even predict how the market will behave. They aim to create a large-scale correlation matrix describing the correlation of one stock to all other stocks. “Progress, however, is often barely noticed, if at all, by researchers in other disciplines. Researchers in finance hardly know that physicists are researching similar topics and just call it something different. That’s why we want to build a bridge,” says Raddant.
    WHAT ARE THE DIFFERENCES?
    Experts in the traditional financial markets field are very concerned with accurately describing how volatile stocks are statistically. However, their fine-grained models no longer work adequately when the data set becomes too large and includes tens of thousands of stocks.
    Physicists, on the other hand, can handle large amounts of data very well. Their motto is: “The more data I have, the nicer it is because then I can see certain regularities better,” explains Raddant. They also work based on correlations, but they model financial markets as evolving complex networks. These networks describe dependencies that can reveal asset comovement, i.e., which stocks behave fundamentally similarly and therefore group together. However, physicists and mathematicians may not know what insights already exist in the finance literature and what factors need to be considered.
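To make the shared object of study concrete, here is a minimal sketch (not from the paper, and using synthetic data) of the correlation matrix both fields build from return time series, together with the thresholded network view favored in econophysics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy daily returns for 5 hypothetical stocks over 250 days.
# Stocks 0-2 share one "sector" factor, stocks 3-4 share another.
common_a = rng.normal(0, 0.01, 250)
common_b = rng.normal(0, 0.01, 250)
returns = np.column_stack(
    [common_a + rng.normal(0, 0.005, 250) for _ in range(3)]
    + [common_b + rng.normal(0, 0.005, 250) for _ in range(2)]
)

# The central object in both literatures: the NxN correlation matrix.
corr = np.corrcoef(returns, rowvar=False)

# A simple network view: link stocks whose correlation exceeds a threshold.
threshold = 0.5
adjacency = (corr > threshold) & ~np.eye(5, dtype=bool)

print(adjacency.astype(int))
```

With this setup the same-sector stocks group together while cross-sector links vanish; the choice of threshold (and of more refined filtering methods) is itself an active research topic.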
    DIFFERENT LANGUAGE
    In their study, Raddant and his co-author, CSH external faculty member Tiziana Di Matteo of King’s College London, note that the mechanical parts that go into these models are often relatively similar, but their language is different. On the one hand, researchers in finance try to discover companies’ connecting features. On the other hand, physicists and mathematicians are working on creating order out of many time series of stocks, where certain regularities occur. “What physicists and mathematicians call regularities, economists call properties of companies, for example,” says Raddant.
    AVOIDING RESEARCH THAT GETS LOST
    “Through this study, we wish to sensitize young scientists, in particular, who are working on an interdisciplinary basis in financial markets, to the connecting elements between the disciplines,” says Raddant. The aim is for researchers who do not come from financial economics to know the vocabulary and the essential research questions they have to address. Otherwise, there is a risk of producing research that is of no interest to anyone in finance and financial economics.
    On the other hand, scientists from the disciplines traditionally involved with financial markets must understand how to describe large data sets and statistical regularities with methods from physics and network science.

  • AI could replace humans in social science research

    In an article published yesterday in the journal Science, leading researchers from the University of Waterloo, University of Toronto, Yale University and the University of Pennsylvania look at how AI (large language models or LLMs in particular) could change the nature of their work.
    “What we wanted to explore in this article is how social science research practices can be adapted, even reinvented, to harness the power of AI,” said Igor Grossmann, professor of psychology at Waterloo.
    Grossmann and colleagues note that large language models trained on vast amounts of text data are increasingly capable of simulating human-like responses and behaviours. This offers novel opportunities for testing theories and hypotheses about human behaviour at great scale and speed.
    Traditionally, social sciences rely on a range of methods, including questionnaires, behavioral tests, observational studies, and experiments. A common goal in social science research is to obtain a generalized representation of characteristics of individuals, groups, cultures, and their dynamics. With the advent of advanced AI systems, the landscape of data collection in social sciences may shift.
    “AI models can represent a vast array of human experiences and perspectives, possibly giving them a higher degree of freedom to generate diverse responses than conventional human participant methods, which can help to reduce generalizability concerns in research,” said Grossmann.
    “LLMs might supplant human participants for data collection,” said UPenn psychology professor Philip Tetlock. “In fact, LLMs have already demonstrated their ability to generate realistic survey responses concerning consumer behaviour. Large language models will revolutionize human-based forecasting in the next 3 years. It won’t make sense for humans unassisted by AIs to venture probabilistic judgments in serious policy debates. I put a 90% chance on that. Of course, how humans react to all of that is another matter.”
    While opinions on the feasibility of this application of advanced AI systems vary, studies using simulated participants could be used to generate novel hypotheses that could then be confirmed in human populations.
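As a purely illustrative sketch of what a simulated-participant pipeline might look like: the article prescribes no implementation, and the `simulated_participant` function below is a hypothetical stub standing in for an actual language-model call, so that the surrounding machinery (personas, sampling, aggregation) can be shown.

```python
import random
import statistics

def simulated_participant(persona: dict, question: str, rng: random.Random) -> int:
    """Hypothetical stand-in for an LLM call: returns a 1-7 Likert response.

    A real study would prompt a language model with the persona and the
    question; here we simply draw from a persona-dependent distribution so
    the pipeline around the model can be illustrated.
    """
    base = persona["baseline_agreement"]
    return max(1, min(7, round(rng.gauss(base, 1.0))))

rng = random.Random(42)
# 300 simulated respondents spanning three attitude profiles.
personas = [{"baseline_agreement": b} for b in (2, 4, 6)] * 100

responses = [simulated_participant(p, "I trust new technologies.", rng)
             for p in personas]
print(f"n={len(responses)}, mean={statistics.mean(responses):.2f}")
```

The aggregate statistics from such runs would then serve only to generate hypotheses, which, as the authors note, still need confirmation in human populations.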
    But the researchers warn of some of the possible pitfalls in this approach — including the fact that LLMs are often trained to exclude socio-cultural biases that exist for real-life humans. This means that sociologists using AI in this way couldn’t study those biases.
    Professor Dawn Parker, a co-author on the article from the University of Waterloo, notes that researchers will need to establish guidelines for the governance of LLMs in research.
    “Pragmatic concerns with data quality, fairness, and equity of access to the powerful AI systems will be substantial,” Parker said. “So, we must ensure that social science LLMs, like all scientific models, are open-source, meaning that their algorithms and ideally data are available to all to scrutinize, test, and modify. Only by maintaining transparency and replicability can we ensure that AI-assisted social science research truly contributes to our understanding of human experience.”

  • Advanced universal control system may revolutionize lower limb exoskeleton control and optimize user experience

    A team of researchers has developed a new method for controlling lower limb exoskeletons using deep reinforcement learning. The method, described in a study published in the Journal of NeuroEngineering and Rehabilitation on March 19, 2023, enables more robust and natural walking control for users of lower limb exoskeletons. “Robust walking control of a lower limb rehabilitation exoskeleton coupled with a musculoskeletal model via deep reinforcement learning” is available open access.
    While advances in wearable robotics have helped restore mobility for people with lower limb impairments, current control methods for exoskeletons are limited in their ability to provide natural and intuitive movements for users. This can compromise balance and contribute to user fatigue and discomfort. Few studies have focused on the development of robust controllers that can optimize the user’s experience in terms of safety and independence.
    Existing exoskeletons for lower limb rehabilitation employ a variety of technologies to help the user maintain balance, including special crutches and sensors, according to co-author Ghaith Androwis, PhD, senior research scientist in the Center for Mobility and Rehabilitation Engineering Research at Kessler Foundation and director of the Center’s Rehabilitation Robotics and Research Laboratory. Exoskeletons that operate without such helpers allow more independent walking, but at the cost of added weight and slow walking speed.
    “Advanced control systems are essential to developing a lower limb exoskeleton that enables autonomous, independent walking under a range of conditions,” said Dr. Androwis. The novel method developed by the research team uses deep reinforcement learning to improve exoskeleton control. Reinforcement learning is a type of artificial intelligence that enables machines to learn from their own experiences through trial and error.
    “Using a musculoskeletal model coupled with an exoskeleton, we simulated the movements of the lower limb and trained the exoskeleton control system to achieve natural walking patterns using reinforcement learning,” explained corresponding author Xianlian Zhou, PhD, associate professor and director of the BioDynamics Lab in the Department of Biomedical Engineering at New Jersey Institute of Technology (NJIT). “We are testing the system in real-world conditions with a lower limb exoskeleton being developed by our team and the results show the potential for improved walking stability and reduced user fatigue.”
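The trial-and-error loop at the heart of reinforcement learning can be illustrated in miniature. The toy below is a generic tabular Q-learning example on a one-dimensional reach-the-goal task, not the team's controller, which uses deep reinforcement learning with a full musculoskeletal simulation:

```python
import random

# Toy environment: the agent must reach position 4 from position 0;
# the two actions move one step left or right.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2
rng = random.Random(0)

for _ in range(500):                      # episodes of trial and error
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = (rng.choice(ACTIONS) if rng.random() < epsilon
             else max(ACTIONS, key=lambda act: q[(s, act)]))
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else -0.1
        # Q-learning update toward reward plus discounted best future value.
        best_next = max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)]
print(policy)
```

After training, the learned policy steps toward the goal from every state, the same learn-from-experience principle the exoskeleton controller applies at vastly greater scale.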
    The team determined that their proposed model generated a universal robust walking controller capable of handling various levels of human-exoskeleton interactions without the need for tuning parameters. The new system has the potential to benefit a wide range of users, including those with spinal cord injuries, multiple sclerosis, stroke, and other neurological conditions. The researchers plan to continue testing the system with users and further refine the control algorithms to improve walking performance.
    “We are excited about the potential of this new system to improve the quality of life for people with lower limb impairments,” said Dr. Androwis. “By enabling more natural and intuitive walking patterns, we hope to help users of exoskeletons to move with greater ease and confidence.”

  • Energy harvesting via vibrations: Researchers develop highly durable and efficient device

    An international research group has engineered a new energy-generating device by combining piezoelectric composites with carbon fiber-reinforced polymer (CFRP), a commonly used material that is both light and strong. The new device transforms vibrations from the surrounding environment into electricity, providing an efficient and reliable means for self-powered sensors.
    Details of the group’s research were published in the journal Nano Energy on June 13, 2023.
    Energy harvesting involves converting energy from the environment into usable electrical energy and is crucial for ensuring a sustainable future.
    “Everyday items, from fridges to street lamps, are connected to the internet as part of the Internet of Things (IoT), and many of them are equipped with sensors that collect data,” says Fumio Narita, co-author of the study and professor at Tohoku University’s Graduate School of Environmental Studies. “But these IoT devices need power to function, which is challenging if they are in remote places, or if there are lots of them.”
    The sun’s rays, heat, and vibration all can generate electrical power. Vibrational energy can be utilized thanks to piezoelectric materials’ ability to generate electricity when physically stressed. Meanwhile, CFRP lends itself to applications in the aerospace and automotive industries, sports equipment, and medical equipment because of its durability and lightness.
    “We pondered whether a piezoelectric vibration energy harvester (PVEH), harnessing the robustness of CFRP together with a piezoelectric composite, could be a more efficient and durable means of harvesting energy,” says Narita.
    The group fabricated the device using a combination of CFRP and potassium sodium niobate (KNN) nanoparticles mixed with epoxy resin. The CFRP served as both an electrode and a reinforcement substrate.
    The so-called C-PVEH device lived up to its expectations. Tests and simulations revealed that it could maintain high performance even after being bent more than 100,000 times. It proved capable of storing the generated electricity and powering LED lights. Additionally, it outperformed other KNN-based polymer composites in terms of energy output density.
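For a sense of what "storing the generated electricity" amounts to in practice, here is a back-of-the-envelope calculation of the energy a harvester banks in a storage capacitor, using the standard relation E = ½CV². All component values are illustrative assumptions, not figures from the paper:

```python
# Illustrative figures only, not from the Nano Energy paper.
capacitance = 10e-6          # F, storage capacitor charged by the harvester
voltage = 5.0                # V, reached after rectifying the piezo output
device_volume_cm3 = 2.0      # cm^3, assumed harvester volume

energy_j = 0.5 * capacitance * voltage**2          # E = 1/2 * C * V^2
density_uj_per_cm3 = energy_j / device_volume_cm3 * 1e6

print(f"stored energy: {energy_j * 1e6:.0f} uJ, "
      f"density: {density_uj_per_cm3:.1f} uJ/cm^3")
```

Microjoule-scale reserves of this kind are enough for the intermittent duty cycles of low-power IoT sensors, which is why energy output density is the figure of merit the authors emphasize.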
    The C-PVEH will help propel the development of self-powered IoT sensors, leading to more energy-efficient IoT devices.
    Narita and his colleagues are also excited about the technological advancements of their breakthrough. “As well as the societal benefits of our C-PVEH device, we are thrilled with the contributions we have made to the field of energy harvesting and sensor technology. The blend of excellent energy output density and high resilience can guide future research into other composite materials for diverse applications.”

  • Terahertz-to-visible light conversion for future telecommunications

    A study carried out by a research team from the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), the Catalan Institute of Nanoscience and Nanotechnology (ICN2), the University of Exeter Centre for Graphene Science, and TU Eindhoven demonstrates that graphene-based materials can be used to efficiently convert high-frequency signals into visible light, and that this mechanism is ultrafast and tunable. The team presents its findings in Nano Letters. These outcomes open the path to exciting applications in near-future information and communication technologies.
    The ability to convert signals from one frequency regime to another is key to various technologies, in particular in telecommunications, where, for example, data processed by electronic devices are often transmitted as optical signals through glass fibers. To enable significantly higher data transmission rates, future 6G wireless communication systems will need to extend the carrier frequency above 100 gigahertz up to the terahertz range. Terahertz waves are a part of the electromagnetic spectrum that lies between microwaves and infrared light. However, terahertz waves can only be used to transport data wirelessly over very limited distances. “Therefore, a fast and controllable mechanism to convert terahertz waves into visible or infrared light will be required, which can be transported via optical fibers. Imaging and sensing technologies could also benefit from such a mechanism,” says Dr. Igor Ilyakov of the Institute of Radiation Physics at HZDR.
    What is missing so far is a material that is capable of upconverting photon energies by a factor of about 1000. The team has only recently identified the strong nonlinear response of so-called Dirac quantum materials, e.g. graphene and topological insulators, to terahertz light pulses. “This manifests in the highly efficient generation of high harmonics, that is, light with a multiple of the original laser frequency. These harmonics are still within the terahertz range; however, there have also been first observations of visible light emission from graphene upon infrared and terahertz excitation,” recalls Dr. Sergey Kovalev of the Institute of Radiation Physics at HZDR. “Until now, this effect has been extremely inefficient, and the underlying physical mechanism unknown.”
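The factor-of-1000 scale is easy to sanity-check from photon energies alone, using E = hf with the standard physical constants (the exact ratio depends, of course, on which terahertz and visible frequencies one compares):

```python
# Rough check of the photon-energy upconversion scale: compare a 1 THz
# photon with a visible (500 nm) photon. Constants are CODATA values.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

E_thz = h * 1e12 / eV            # photon energy at 1 THz (about 4 meV)
E_visible = h * c / 500e-9 / eV  # photon energy at 500 nm (about 2.5 eV)

print(f"1 THz photon: {E_thz * 1000:.2f} meV, "
      f"500 nm photon: {E_visible:.2f} eV, "
      f"ratio: {E_visible / E_thz:.0f}")
```

A ratio of several hundred for these particular frequencies, rising toward 1000 for sub-terahertz carriers, is consistent with the upconversion factor the team cites.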
    The mechanism behind it
    The new results provide a physical explanation for this mechanism and show how the light emission can be strongly enhanced by using highly doped graphene or by using a grating-graphene metamaterial — a material with a tailored structure characterized by special optical, electrical or magnetic properties. The team also observed that the conversion occurs very rapidly — on the sub-nanosecond time scale, and that it can be controlled by electrostatic gating.
    “We ascribe the light frequency conversion in graphene to a terahertz-induced thermal radiation mechanism, that is, the charge carriers absorb electromagnetic energy from the incident terahertz field. The absorbed energy rapidly distributes in the material, leading to carrier heating; and finally this leads to emission of photons in the visible spectrum, quite like light emitted by any heated object,” explains Prof. Klaas-Jan Tielrooij of ICN2’s Ultrafast Dynamics in Nanoscale Systems group and Eindhoven University of Technology.
    The tunability and speed of the terahertz-to-visible light conversion achieved in graphene-based materials has great potential for application in information and communication technologies. The underlying ultrafast thermodynamic mechanism could certainly produce an impact on terahertz-to-telecom interconnects, as well as in any technology that requires ultrafast frequency conversion of signals.

  • High-quality child care contributes to later success in science, math

    Children who receive high-quality child care as babies, toddlers and preschoolers do better in science, technology, engineering and math through high school, and that link is stronger among children from low-income backgrounds, according to research published by the American Psychological Association.
    “Our results suggest that caregiving quality in early childhood can build a strong foundation for a trajectory of STEM success,” said study author Andres S. Bustamante, PhD, of the University of California Irvine. “Investing in quality child care and early childhood education could help remedy the underrepresentation of racially and ethnically diverse populations in STEM fields.”
    The research was published in the journal Developmental Psychology.
    Many studies have demonstrated that higher quality caregiving in early childhood is associated with better school readiness for young children from low-income families. But not as many have looked at how the effects of early child care extend into high school, and even fewer have focused specifically on STEM subjects, according to Bustamante.
    To investigate those questions, Bustamante and his colleagues examined data from 979 families who participated in the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development, from the time of the child’s birth in 1991 until 2006.
    As part of the study, trained observers visited the day cares and preschools of all the children who were enrolled for 10 or more hours per week. The observers visited when the children were 6, 15, 24, 36 and 54 months old, and rated two aspects of the child care: the extent to which the caregivers provided a warm and supportive environment and responded to children’s interests and emotions, and the amount of cognitive stimulation they provided through using rich language, asking questions to probe the children’s thinking, and providing feedback to deepen the children’s understanding of concepts.
    The researchers then looked at how the students performed in STEM subjects in elementary and high school. To measure STEM success, they examined the children’s scores on the math and reasoning portions of a standardized test in grades three to five. To measure high school achievement, the researchers looked at standardized test scores and the students’ most advanced science course completed, the most advanced math course completed, GPA in science courses and GPA in math courses.
    Overall, they found that both aspects of caregiving quality (more cognitive stimulation and better caregiver sensitivity-responsivity) predicted greater STEM achievement in late elementary school (third, fourth and fifth grade), which in turn predicted greater STEM achievement in high school at age 15. Sensitive and responsive caregiving in early childhood was a stronger predictor of high school STEM performance for children from low-income families compared with children from higher income families.
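The structure of this finding, early caregiving predicting elementary STEM scores, which in turn predict high school STEM scores, is a classic mediation pattern. The sketch below illustrates it on entirely synthetic data (the coefficients and noise levels are invented; only the sample size of 979 echoes the study):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 979  # same sample size as the study; the data here are synthetic

# Synthetic illustration of the mediation structure the study reports:
# caregiving quality -> elementary STEM scores -> high school STEM scores.
care_quality = rng.normal(0, 1, n)
elem_stem = 0.4 * care_quality + rng.normal(0, 1, n)
hs_stem = 0.5 * elem_stem + 0.1 * care_quality + rng.normal(0, 1, n)

def slope(x, y):
    """OLS slope of y on x (with intercept), via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(care_quality, elem_stem)  # early care -> elementary STEM
b = slope(elem_stem, hs_stem)       # elementary STEM -> high school STEM
print(f"a={a:.2f}, b={b:.2f}, indirect path a*b={a * b:.2f}")
```

The product a*b is the indirect effect running through elementary achievement; the published analysis additionally handles covariates and income moderation that this toy version omits.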
    “Our hypothesis was that cognitive stimulation would be more strongly related to STEM outcomes because those kinds of interactions provide the foundation for exploration and inquiry, which are key in STEM learning,” Bustamante said. “However, what we saw was that the caregiver sensitivity and responsiveness was just as predictive of later STEM outcomes, highlighting the importance of children’s social emotional development and settings that support cognitive and social emotional skills.”
    Overall, Bustamante said, research and theory suggest that high-quality early care practices support a strong foundation for science learning. “Together, these results highlight caregiver cognitive stimulation and sensitivity and responsiveness in early childhood as an area for investment to strengthen the STEM pipeline, particularly for children from low-income households.”

  • Video games spark exciting new frontier in neuroscience

    University of Queensland researchers have used an algorithm from a video game to gain insights into the behaviour of molecules within live brain cells.
    Dr Tristan Wallis and Professor Frederic Meunier from UQ’s Queensland Brain Institute came up with the idea while in lockdown during the COVID-19 pandemic.
    “Combat video games use a very fast algorithm to track the trajectory of bullets, to ensure the correct target is hit on the battlefield at the right time,” Dr Wallis said.
    “The technology has been optimised to be highly accurate, so the experience feels as realistic as possible.
    “We thought a similar algorithm could be used to analyse tracked molecules moving within a brain cell.”
    Until now, technology has only been able to detect and analyse molecules in space, and not how they behave in space and time.

    “Scientists use super-resolution microscopy to look into live brain cells and record how tiny molecules within them cluster to perform specific functions,” Dr Wallis said.
    “Individual proteins bounce and move in a seemingly chaotic environment, but when you observe these molecules in space and time, you start to see order within the chaos.
    “It was an exciting idea — and it worked.”
    Dr Wallis used coding tools to build an algorithm that is now used by several labs to gather rich data about brain cell activity.
    “Rather than tracking bullets to the bad guys in video games, we applied the algorithm to observe molecules clustering together — which ones, when, where, for how long and how often,” Dr Wallis said.

    “This gives us new information about how molecules perform critical functions within brain cells and how these functions can be disrupted during ageing and disease.”
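The key idea, grouping detections that are close in both space and time rather than in space alone, can be sketched in a few lines. This is a naive connected-components toy with invented thresholds, not the published algorithm:

```python
# Cluster molecule detections that are close in BOTH space and time.
# Each detection is (x, y, frame); thresholds are illustrative only.
detections = [
    (0.0, 0.0, 0), (0.1, 0.1, 1), (0.2, 0.0, 2),  # one transient cluster
    (5.0, 5.0, 0), (5.1, 5.0, 1),                  # another, far away
    (0.1, 0.0, 50),                                # same place, much later
]
EPS, TAU = 0.5, 5  # spatial and temporal linking thresholds

def linked(p, q):
    (x1, y1, t1), (x2, y2, t2) = p, q
    close_in_space = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= EPS
    return close_in_space and abs(t1 - t2) <= TAU

# Connected components under the space-time linking rule.
clusters, unassigned = [], list(range(len(detections)))
while unassigned:
    stack, cluster = [unassigned.pop(0)], []
    while stack:
        i = stack.pop()
        cluster.append(i)
        for j in unassigned[:]:
            if linked(detections[i], detections[j]):
                unassigned.remove(j)
                stack.append(j)
    clusters.append(sorted(cluster))

print(clusters)
```

Note that the last detection lands in its own cluster even though it sits where the first cluster was: spatially identical, temporally distinct, which is exactly the "order within the chaos" that space-only analysis misses.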
    Professor Meunier said the potential impact of the approach was exponential.
    “Our team is already using the technology to gather valuable evidence about proteins such as Syntaxin-1A, essential for communication within brain cells,” Professor Meunier said.
    “Other researchers are also applying it to different research questions.
    “And we are collaborating with UQ mathematicians and statisticians to expand how we use this technology to accelerate scientific discoveries.”
    Professor Meunier said it was gratifying to see the effect of a simple idea.
    “We used our creativity to solve a research challenge by merging two unrelated high-tech worlds, video games and super-resolution microscopy,” he said.
    “It has brought us to a new frontier in neuroscience.”
    The research was published in Nature Communications.

  • Metaverse could put a dent in global warming

    For many technology enthusiasts, the metaverse has the potential to transform almost every facet of human life, from work to education to entertainment. Now, new Cornell University research shows it could have environmental benefits, too.
    Researchers find the metaverse could lower global surface temperature by up to 0.02 degrees Celsius before the end of the century.
    The team’s paper, “Growing Metaverse Sector Can Reduce Greenhouse Gas Emissions by 10 Gt CO2e in the United States by 2050,” published June 14 in Energy and Environmental Science.
    They used AI-based modeling to analyze data from key sectors — technology, energy, environment and business — to anticipate the growth of metaverse usage and the impact of its most promising applications: remote work, virtual traveling, distance learning, gaming and non-fungible tokens (NFTs).
    The researchers projected metaverse expansion through 2050 along three different trajectories — slow, nominal and fast — and they looked to previous technologies, such as television, the internet and the iPhone, for insight into how quickly that adoption might occur. They also factored in the amount of energy that increasing usage would consume. The modeling suggested that within 30 years, the technology would be adopted by more than 90% of the population.
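Technology-adoption projections of this kind are commonly modeled as a logistic S-curve. The sketch below is a generic illustration loosely matched to the ">90% within 30 years" claim; the midpoint and rate parameters are illustrative, not the paper's fitted values:

```python
import math

def adoption(year, midpoint=2035, rate=0.35):
    """Fraction of the population that has adopted by `year`,
    under a logistic (S-shaped) adoption curve."""
    return 1.0 / (1.0 + math.exp(-rate * (year - midpoint)))

for year in (2025, 2035, 2045, 2053):
    print(year, f"{adoption(year):.0%}")
```

The curve starts slowly, accelerates around the midpoint, and saturates, which is why comparisons to television (slow) versus smartphones (fast) amount to different choices of the rate parameter.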
    “One thing that did surprise us is that this metaverse is going to grow much quicker than what we expected,” said Fengqi You, professor in energy systems engineering and the paper’s senior author. “Look at earlier technologies — TV, for instance. It took decades to be eventually adopted by everyone. Now we are really in an age of technology explosion. Think of our smartphones. They grew very fast.”
    Currently, two of the biggest industry drivers of metaverse development are Meta and Microsoft, both of which contributed to the study. Meta has been focusing on individual experiences, such as gaming, while Microsoft specializes in business solutions, including remote conferencing and distance learning.

    Limiting business travel would generate the largest environmental benefit, according to You.
    “Think about the decarbonization of our transportation sector,” he said. “Electric vehicles work, but you can’t drive a car to London or Tokyo. Do I really have to fly to Singapore for a conference tomorrow? That will be an interesting decision-making point for some stakeholders to consider as we move forward with these technologies with human-machine interface in a 3D virtual world.”
    The paper notes that by 2050 the metaverse industry could potentially lower greenhouse gas emissions by 10 gigatons; lower atmospheric carbon dioxide concentration by 4.0 parts per million; decrease effective radiative forcing by 0.035 watts per square meter; and lower total domestic energy consumption by 92 EJ, a reduction that surpasses the annual nationwide energy consumption of all end-use sectors in previous years.
    These findings could help policymakers understand how metaverse industry growth can accelerate progress towards achieving net-zero emissions targets and spur more flexible decarbonization strategies. Metaverse-based remote working, distance learning and virtual tourism could be promoted to improve air quality. In addition to alleviating air pollutant emissions, the reduction of transportation and commercial energy usage could help transform the way energy is distributed, with more energy supply going towards the residential sector.
    “This mechanism is going to help, but in the end, it is going to help lower the global surface temperature by up to 0.02 degrees,” You said. “There are so many sectors in this economy. You cannot count on the metaverse to do everything. But it could do a little bit if we leverage it in a reasonable way.”
    The research was supported by the National Science Foundation.