More stories

  • Large language models respond differently based on user’s motivation

    A new study recently published in the Journal of the American Medical Informatics Association (JAMIA) reveals how large language models (LLMs) respond to different motivational states. In their evaluation of three LLM-based generative conversational agents (GAs) — ChatGPT, Google Bard, and Llama 2 — PhD student Michelle Bak and Assistant Professor Jessie Chin of the School of Information Sciences at the University of Illinois Urbana-Champaign found that while GAs can identify users’ motivational states and provide relevant information when individuals have established goals, they are less likely to provide guidance when users are hesitant or ambivalent about changing their behavior.
    Bak provides the example of an individual with diabetes who is resistant to changing their sedentary lifestyle.
    “If they were advised by a doctor that exercising would be necessary to manage their diabetes, it would be important to provide information through GAs that helps them increase an awareness about healthy behaviors, become emotionally engaged with the changes, and realize how their unhealthy habits might affect people around them. This kind of information can help them take the next steps toward making positive changes,” said Bak.
    Current GAs lack specific information about these processes, which puts the individual at a health disadvantage. Conversely, for individuals who are committed to changing their physical activity levels (e.g., have joined personal fitness training to manage chronic depression), GAs are able to provide relevant information and support.
    “This major gap of LLMs in responding to certain states of motivation suggests future directions of LLMs research for health promotion,” said Chin.
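    To make the evaluation setup concrete, a minimal sketch of this kind of probe is shown below: the same health question posed by users at different stages of readiness to change, with the replies checked for concrete guidance. The chatbot call and the keyword check are hypothetical placeholders, not the instruments used in the JAMIA study.

```python
# Hypothetical probe: the same diabetes scenario framed by users in different
# motivational states. ask_chatbot() is a stand-in for a real ChatGPT, Bard or
# Llama 2 client, and the keyword check is a crude stand-in for the study's coding.
PROFILES = {
    "resistant":  "I have diabetes, but honestly I don't see the point of exercising.",
    "ambivalent": "I know I should probably exercise for my diabetes, but I keep putting it off.",
    "committed":  "I've joined a fitness programme to manage my diabetes. What should I do next?",
}

def ask_chatbot(message: str) -> str:
    """Placeholder for a call to a conversational agent; swap in a real client here."""
    return "Here is a step-by-step plan you could follow this week..."  # canned reply so the sketch runs

def offers_concrete_guidance(reply: str) -> bool:
    # Crude keyword check standing in for the study's coding of motivational support.
    return any(word in reply.lower() for word in ("plan", "step", "goal", "schedule"))

for state, message in PROFILES.items():
    reply = ask_chatbot(message)
    print(f"{state:10s} -> concrete guidance: {offers_concrete_guidance(reply)}")
```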
    Bak’s research goal is to develop a digital health solution that uses natural language processing and psychological theories to promote preventive health behaviors. She earned her bachelor’s degree in sociology from the University of California, Los Angeles.
    Chin’s research aims to translate social and behavioral sciences theories to design technologies and interactive experiences to promote health communication and behavior across the lifespan. She leads the Adaptive Cognition and Interaction Design (ACTION) Lab at the University of Illinois. Chin holds a BS in psychology from National Taiwan University, an MS in human factors, and a PhD in educational psychology with a focus on cognitive science in teaching and learning from the University of Illinois.

  • ‘Smart swarms’ of tiny robots inspired by natural herd mentality

    In natural ecosystems, the herd mentality plays a major role — from schools of fish to beehives to ant colonies. This collective behavior allows the whole to exceed the sum of its parts and better respond to threats and challenges.
    This behavior inspired researchers from The University of Texas at Austin, and for more than a year they’ve been working on creating “smart swarms” of microscopic robots. The researchers engineered social interactions among these tiny machines so that they can act as one coordinated group, performing tasks better than they would if they were moving as individuals or at random.
    “All these groups, flocks of birds, schools of fish and others, each member of the group has this natural inclination to work in concert with its neighbor, and together they are smarter, stronger and more efficient than they would be on their own,” said Yuebing Zheng, associate professor in the Walker Department of Mechanical Engineering and Texas Materials Institute. “We wanted to learn more about the mechanisms that make this happen and see if we can reproduce it.”
    Zheng and his team first showcased these innovations in a paper published in Advanced Materials last year. But they’ve taken things a step further in a new paper published recently in Science Advances.
    In the new paper, Zheng and his team have given these swarms a new trait called adaptive time delay. This concept allows each microrobot within the swarm to adapt its motion to changes in local surroundings. By doing this, the swarm showed a significant increase in responsivity without decreasing its robustness — the ability to quickly respond to any environment change while maintaining the integrity of the swarm.
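    As a rough illustration of the idea (not the researchers' control law), the toy simulation below lets each agent steer toward the average heading its neighbours had a few time steps ago, with that delay adapted to local crowding; the parameters and the adaptation rule are invented for this sketch.

```python
# Toy delayed-alignment swarm: agents align with neighbours' past headings, and the
# delay shrinks in crowded regions. Illustrative assumptions only, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
n_agents, steps, speed = 50, 300, 0.05
pos = rng.uniform(0, 1, (n_agents, 2))
heading = rng.uniform(0, 2 * np.pi, n_agents)
history = [heading.copy()]

for _ in range(steps):
    # Hypothetical adaptation rule: crowded agents use more recent information (shorter delay).
    dists = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    neighbours = dists < 0.2
    delay = np.clip(10 - neighbours.sum(axis=1), 1, 10).astype(int)

    new_heading = heading.copy()
    for i in range(n_agents):
        past = history[max(0, len(history) - delay[i])]  # neighbours' headings delay[i] steps ago
        mean_dir = np.arctan2(np.sin(past[neighbours[i]]).mean(),
                              np.cos(past[neighbours[i]]).mean())
        new_heading[i] += 0.3 * np.sin(mean_dir - heading[i])  # relax toward the delayed average
    heading = new_heading
    history.append(heading.copy())
    pos = (pos + speed * np.column_stack((np.cos(heading), np.sin(heading)))) % 1.0

# Order parameter near 1.0 means the swarm has aligned into coordinated motion.
print(f"alignment: {abs(np.exp(1j * heading).mean()):.2f}")
```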
    This finding builds on a novel optical feedback system — the ability to direct these microrobots in a collective way using controllable light patterns. This system was first unveiled in the researchers’ 2023 paper — recently chosen as an “editor’s choice” by Advanced Materials — and it facilitated the development of adaptive time delay for microrobots.
    The adaptive time delay strategy offers potential for scalability and integration into larger machinery. This approach could significantly enhance the operational efficiency of autonomous drone fleets. Similarly, it could enable convoys of trucks and cars to autonomously navigate extensive highway journeys in unison, with improved responsiveness and increased robustness. Just as schools of fish communicate and follow one another, so will these machines. As a result, there is no need for any kind of central control, which takes more data and energy to operate.
    “Nanorobots, on an individual basis, are vulnerable to complex environments; they struggle to navigate effectively in challenging conditions such as bloodstreams or polluted waters,” said Zhihan Chen, a Ph.D. student in Zheng’s lab and co-author on the new paper. “This collective motion can help them better navigate a complicated environment and reach the target efficiently and avoid obstacles or threats.”
    Having proven this swarm mentality in the lab setting, the next step is to introduce more obstacles. These experiments were conducted in a static liquid solution. Up next, they’ll try to repeat the behavior in flowing liquid. And then they’ll move to replicate it inside an organism.
    Once fully developed, these smart swarms could serve as advanced drug delivery forces, able to navigate the human body and elude its defenses to bring medicine to its target. Or, they could operate like iRobot robotic vacuums, but for contaminated water, collectively cleaning every bit of an area together.

  • Computer scientists show the way: AI models need not be SO power hungry

    The development of AI models is an overlooked climate culprit. Computer scientists at the University of Copenhagen have created a recipe book for designing AI models that use much less energy without compromising performance. They argue that a model’s energy consumption and carbon footprint should be a fixed criterion when designing and training AI models.
    It has gradually become common knowledge that colossal amounts of energy are needed to run a Google search, talk to Siri, ask ChatGPT to get something done, or use AI in any other way. One study estimates that by 2027, AI servers will consume as much energy as Argentina or Sweden. Indeed, a single ChatGPT prompt is estimated to consume, on average, as much energy as forty mobile phone charges. But, as computer science researchers at the University of Copenhagen point out, the research community and the industry have yet to make the development of energy-efficient, and thus more climate-friendly, AI models a priority.
    “Today, developers are narrowly focused on building AI models that are effective in terms of the accuracy of their results. It’s like saying that a car is effective because it gets you to your destination quickly, without considering the amount of fuel it uses. As a result, AI models are often inefficient in terms of energy consumption,” says Assistant Professor Raghavendra Selvan from the Department of Computer Science, whose research looks into possibilities for reducing AI’s carbon footprint.
    But the new study, of which he and computer science student Pedram Bakhtiarifard are two of the authors, demonstrates that a great deal of CO2e emissions can be avoided without compromising the precision of an AI model. Doing so demands keeping climate costs in mind throughout the design and training phases of AI models.
    “If you put together a model that is energy efficient from the get-go, you reduce the carbon footprint in each phase of the model’s ‘life cycle’. This applies both to the model’s training, which is a particularly energy-intensive process that often takes weeks or months, as well as to its application,” says Selvan.
    Recipe book for the AI industry
    In their study, the researchers calculated how much energy it takes to train more than 400,000 AI models of the convolutional neural network type — this was done without actually training all these models. Among other things, convolutional neural networks are used to analyse medical imagery, for language translation, and for object and face recognition — a function you might know from the camera app on your smartphone.

    Based on the calculations, the researchers present a benchmark collection of AI models that use less energy to solve a given task, but which perform at approximately the same level. The study shows that by opting for other types of models or by adjusting models, 70-80% energy savings can be achieved during the training and deployment phases, with a decrease in performance of only 1% or less. And according to the researchers, this is a conservative estimate.
    “Consider our results as a recipe book for the AI professionals. The recipes don’t just describe the performance of different algorithms, but how energy efficient they are. And that by swapping one ingredient with another in the design of a model, one can often achieve the same result. So now, the practitioners can choose the model they want based on both performance and energy consumption, and without needing to train each model first,” says Pedram Bakhtiarifard, who continues:
    “Oftentimes, many models are trained before finding the one that is suspected of being the most suitable for solving a particular task. This makes the development of AI extremely energy-intensive. Therefore, it would be more climate-friendly to choose the right model from the outset, while choosing one that does not consume too much power during the training phase.”
    The researchers stress that in some fields, like self-driving cars or certain areas of medicine, model precision can be critical for safety. Here, it is important not to compromise on performance. However, this shouldn’t be a deterrence to striving for high energy efficiency in other domains.
    “AI has amazing potential. But if we are to ensure sustainable and responsible AI development, we need a more holistic approach that not only has model performance in mind, but also climate impact. Here, we show that it is possible to find a better trade-off. When AI models are developed for different tasks, energy efficiency ought to be a fixed criterion — just as it is standard in many other industries,” concludes Raghavendra Selvan.
    The “recipe book” put together in this work is available as an open-source dataset for other researchers to experiment with. The information about all these 423,000 architectures is published on GitHub, where AI practitioners can access it using simple Python scripts.
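    As a hedged sketch of how such a table might be used in practice (the column names and the tiny in-memory table below are placeholders, not the published dataset's actual schema), one could load the architectures and keep only those on the accuracy-versus-energy Pareto front:

```python
# Illustrative only: keep models where no alternative is both more accurate and cheaper
# to train. The column names and the small table are hypothetical placeholders standing
# in for the published benchmark, which is far larger.
import pandas as pd

models = pd.DataFrame({
    "architecture": ["net_a", "net_b", "net_c", "net_d"],
    "accuracy":     [0.91,    0.90,    0.93,    0.89],
    "energy_kwh":   [120.0,   60.0,    300.0,   80.0],
})

def pareto_front(df, score_col="accuracy", cost_col="energy_kwh"):
    df = df.sort_values(cost_col)          # cheapest models first
    best_so_far = float("-inf")
    keep = []
    for _, row in df.iterrows():
        if row[score_col] > best_so_far:   # better than every cheaper model seen so far
            keep.append(row)
            best_so_far = row[score_col]
    return pd.DataFrame(keep)

print(pareto_front(models))                # net_b, net_a and net_c survive; net_d does not
```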
    The UCPH researchers estimated how much energy it takes to train the 429,000 convolutional neural networks in this dataset. Among other things, these are used for object detection, language translation and medical image analysis.
    It is estimated that training the 429,000 neural networks the study looked at would, on its own, require 263,000 kWh. This equals the amount of energy that an average Danish citizen consumes over 46 years, and it would take one computer about 100 years to do the training. The authors did not actually train these models themselves but estimated the figures using another AI model, thus saving 99% of the energy it would have taken.
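    A quick back-of-the-envelope check shows that these figures are mutually consistent (assuming the single computer runs continuously): spread over 46 years the total corresponds to roughly 5,700 kWh per year, and spread over 100 years of computation it corresponds to an average draw of about 0.3 kW.

```latex
\frac{263{,}000\ \text{kWh}}{46\ \text{years}} \approx 5{,}700\ \text{kWh/year},
\qquad
\frac{263{,}000\ \text{kWh}}{100 \times 8{,}760\ \text{h}} \approx 0.3\ \text{kW}
```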
    Training AI models consumes a lot of energy, and thereby emits a lot of CO2e. This is due to the intensive computations performed while training a model, typically run on powerful computers. This is especially true for large models, like the language model behind ChatGPT. AI tasks are often processed in data centers, which demand significant amounts of power to keep computers running and cool. The energy source for these centers, which may rely on fossil fuels, influences their carbon footprint.

  • Drawing inspiration from plants: A metal-air paper battery for wearable devices

    Drawing inspiration from the way plants breathe, a group of researchers at Tohoku University has created a paper-based magnesium-air battery that can be used in GPS sensors or pulse oximeter sensors. Taking advantage of paper’s recyclability and lightweight nature, the engineered battery holds promise for a more environmentally friendly source of energy.
    For over two millennia, paper has been a staple of human civilization. But these days, the usage of paper is not limited to writing. It is also playing a pivotal role in ushering in a greener future.
    Lightweight and thin paper-based devices help reduce dependence on metal or plastic materials, whilst at the same time being easier to dispose of. From paper-based diagnostic devices that deliver economical and rapid detection of infectious diseases to batteries and energy devices that offer an environmentally friendly alternative for power generation, scientists are finding ingenious ways to put this versatile material to use.
    Now, a team of researchers at Tohoku University has reported on a high-performance magnesium-air (Mg-air) battery that is paper-based and activated by water.
    “We drew inspiration for this device from the respiration mechanism of plants,” points out Hiroshi Yabu, corresponding author of the study. “Photosynthesis is analogous to the charge and discharge process in batteries. Just as plants harness solar energy to synthesize sugar from water in the ground and carbon dioxide from the air, our battery utilizes magnesium as a substrate to generate power from oxygen and water.”
    To fabricate the battery, Yabu and his colleagues bonded magnesium foil onto paper and added the cathode catalyst and gas diffusion layer directly to the other side of the paper. The paper battery achieved an open-circuit voltage of 1.8 volts, a current density of 100 mA/cm² at 1.0 volt, and a maximum power output of 103 mW/cm².
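    As a quick sanity check (assuming the quoted current density is the one delivered at 1.0 volt), power density is simply voltage times current density, which sits close to the reported peak of 103 mW/cm²:

```latex
P = V \times J = 1.0\ \text{V} \times 100\ \text{mA/cm}^2 = 100\ \text{mW/cm}^2
```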
    “Not only did the battery demonstrate impressive performance results, it operates without using toxic materials — instead using carbon cathodes and a pigment electrocatalyst that have passed stringent assessments,” adds Yabu.
    The researchers put the battery to the test in a pulse oximeter sensor and a GPS sensor, illustrating its versatility for wearable devices.

  • The math problem that took nearly a century to solve: Secret to Ramsey numbers

    We’ve all been there: staring at a math test with a problem that seems impossible to solve. What if finding the solution to a problem took almost a century? For mathematicians who dabble in Ramsey theory, this is very much the case. In fact, little progress had been made in solving Ramsey problems since the 1930s.
    Now, University of California San Diego researchers Jacques Verstraete and Sam Mattheus have found the answer to r(4,t), a longstanding Ramsey problem that has perplexed the math world for decades.
    What was Ramsey’s problem, anyway?
    In mathematical parlance, a graph is a series of points and the lines in between those points. Ramsey theory suggests that if the graph is large enough, you’re guaranteed to find some kind of order within it — either a set of points with no lines between them or a set of points with all possible lines between them (these sets are called “cliques”). This is written as r(s,t) where s are the points with lines and t are the points without lines.
    To those of us who don’t deal in graph theory, the most well-known Ramsey problem, r(3,3), is sometimes called “the theorem on friends and strangers” and is explained by way of a party: in a group of six people, you will find at least three people who all know each other or three people who all don’t know each other. The answer to r(3,3) is six.
    “It’s a fact of nature, an absolute truth,” Verstraete states. “It doesn’t matter what the situation is or which six people you pick — you will find three people who all know each other or three people who all don’t know each other. You may be able to find more, but you are guaranteed that there will be at least three in one clique or the other.”
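    For readers who like to check such claims by machine, the short brute-force script below (an illustration, not the researchers' code) colours every edge of a 5-person and a 6-person party graph in two colours and looks for a single-coloured triangle; it confirms that 5 people are not enough but 6 always are.

```python
# Brute-force check of r(3,3) = 6: every 2-colouring of K6's edges contains a
# monochromatic triangle, while K5 admits a colouring with none. Illustrative only.
from itertools import combinations, product

def has_monochromatic_triangle(n, colouring):
    """colouring maps each edge (i, j) with i < j to colour 0 or 1."""
    return any(colouring[(a, b)] == colouring[(a, c)] == colouring[(b, c)]
               for a, b, c in combinations(range(n), 3))

def every_colouring_has_triangle(n):
    edges = list(combinations(range(n), 2))
    return all(has_monochromatic_triangle(n, dict(zip(edges, bits)))
               for bits in product((0, 1), repeat=len(edges)))

print(every_colouring_has_triangle(5))  # False: 5 people are not enough
print(every_colouring_has_triangle(6))  # True: among 6 people it is unavoidable
```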
    What happened after mathematicians found that r(3,3) = 6? Naturally, they wanted to know r(4,4), r(5,5), and r(4,t) where the number of points that are not connected is variable. The solution to r(4,4) is 18 and is proved using a theorem created by Paul Erdös and George Szekeres in the 1930s.

    Currently r(5,5) is still unknown.
    A good problem fights back
    Why is something so simple to state so hard to solve? It turns out to be more complicated than it appears. Let’s say you knew the solution to r(5,5) was somewhere between 40 and 50. If you started with 45 points, there would be more than 10²³⁴ graphs to consider!
    “Because these numbers are so notoriously difficult to find, mathematicians look for estimations,” Verstraete explained. “This is what Sam and I have achieved in our recent work. How do we find not the exact answer, but the best estimates for what these Ramsey numbers might be?”
    Math students learn about Ramsey problems early on, so r(4,t) has been on Verstraete’s radar for most of his professional career. In fact, he first saw the problem in print in Erdös on Graphs: His Legacy of Unsolved Problems, written by two UC San Diego professors, Fan Chung and the late Ron Graham. The problem is a conjecture from Erdös, who offered $250 to the first person who could solve it.
    “Many people have thought about r(4,t) — it’s been an open problem for over 90 years,” Verstraete said. “But it wasn’t something that was at the forefront of my research. Everybody knows it’s hard and everyone’s tried to figure it out, so unless you have a new idea, you’re not likely to get anywhere.”
    Then about four years ago, Verstraete was working on a different Ramsey problem with a mathematician at the University of Illinois-Chicago, Dhruv Mubayi. Together they discovered that pseudorandom graphs could advance the current knowledge on these old problems.

    In 1947, Erdös discovered that using random graphs could give good lower bounds on Ramsey problems. What Verstraete and Mubayi discovered was that sampling from pseudorandom graphs frequently gives better bounds on Ramsey numbers than random graphs. These bounds — upper and lower limits on the possible answer — tightened the range of estimations they could make. In other words, they were getting closer to the truth.
    In 2019, to the delight of the math world, Verstraete and Mubayi used pseudorandom graphs to solve r(3,t). However, Verstraete struggled to build a pseudorandom graph that could help solve r(4,t).
    He began pulling in different areas of math outside of combinatorics, including finite geometry, algebra and probability. Eventually he joined forces with Mattheus, a postdoctoral scholar in his group whose background was in finite geometry.
    “It turned out that the pseudorandom graph we needed could be found in finite geometry,” Verstraete stated. “Sam was the perfect person to come along and help build what we needed.”
    Once they had the pseudorandom graph in place, they still had to puzzle out several pieces of math. It took almost a year, but eventually they realized they had a solution: r(4,t) is close to a cubic function of t. If you want a party where there will always be four people who all know each other or t people who all don’t know each other, you will need roughly t³ people present. There is a small asterisk (actually an o) because, remember, this is an estimate, not an exact answer. But t³ is very close to the exact answer.
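    In the asymptotic notation mathematicians use, one compact way to write that hedged statement (the exponent is exactly 3 once the lower-order "small o" corrections are absorbed) is:

```latex
r(4,t) = t^{3 + o(1)} \quad \text{as } t \to \infty
```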
    The findings are currently under review with the Annals of Mathematics.
    “It really did take us years to solve,” Verstraete stated. “And there were many times where we were stuck and wondered if we’d be able to solve it at all. But one should never give up, no matter how long it takes.”
    Verstraete emphasizes the importance of perseverance — something he reminds his students of often. “If you find that the problem is hard and you’re stuck, that means it’s a good problem. Fan Chung said a good problem fights back. You can’t expect it just to reveal itself.”
    Verstraete knows such dogged determination is well-rewarded: “I got a call from Fan saying she owes me $250.”

  • 100 kilometers of quantum-encrypted transfer

    Researchers at DTU have successfully distributed a quantum-secure key using a method called Continuous Variable Quantum Key Distribution (CV QKD). The researchers have managed to make the method work over a record 100 km distance — the longest distance ever achieved using the CV QKD method. The advantage of the method is that it can be applied to the existing Internet infrastructure.
    Quantum computers threaten existing algorithm-based encryptions, which currently secure data transfers against eavesdropping and surveillance. They are not yet powerful enough to break them, but it’s a matter of time. If a quantum computer succeeds in figuring out the most secure algorithms, it leaves an open door to all data connected via the internet. This has accelerated the development of a new encryption method based on the principles of quantum physics.
    But to succeed, researchers must overcome one of the challenges of quantum mechanics — ensuring consistency over longer distances. Continuous Variable Quantum Key Distribution has so far worked best over short distances.
    “We have achieved a wide range of improvements, especially regarding the loss of photons along the way. In this experiment, published in Science Advances, we securely distributed a quantum-encrypted key 100 kilometres via fibre optic cable. This is a record distance with this method,” says Tobias Gehring, an associate professor at DTU, who, together with a group of researchers at DTU, aims to be able to distribute quantum-encrypted information around the world via the internet.
    Secret keys from quantum states of light
    When data needs to be sent from A to B, it must be protected. Encryption combines data with a secure key distributed between sender and receiver so both can access the data. A third party must not be able to figure out the key while it is being transmitted; otherwise, the encryption will be compromised. Key exchange is, therefore, essential in encrypting data.
    Quantum Key Distribution (QKD) is an advanced technology that researchers are working on for these crucial key exchanges. The technology ensures the exchange of cryptographic keys by using light from quantum mechanical particles called photons.

    When a sender sends information encoded in photons, the quantum mechanical properties of the photons are exploited to create a unique key for the sender and receiver. Attempts by others to measure or observe photons in a quantum state will instantly change their state. Therefore, it is physically impossible to measure light without disturbing the signal.
    “It is impossible to make a copy of a quantum state, as when making a copy of an A4 sheet — if you try, it will be an inferior copy. That’s what ensures that it is not possible to copy the key. This can protect critical infrastructure such as health records and the financial sector from being hacked,” explains Tobias Gehring.
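    The principle can be illustrated with a toy, discrete-variable key exchange in the style of the original 1984 protocol. Note that this is not the continuous-variable scheme used in the DTU experiment, and the numbers below are illustrative: whenever an eavesdropper measures and resends the photons, roughly a quarter of the sifted key bits no longer match, so the intrusion becomes visible.

```python
# Toy BB84-style sketch: measuring in a mismatched basis randomises the outcome,
# which is the stand-in for quantum measurement disturbance. Illustrative only.
import random

BASES = "+x"

def measure(bit, prep_basis, meas_basis):
    # Matching bases reproduce the bit; mismatched bases give a random outcome.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def run(n=20000, eavesdrop=False):
    alice_bits = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.choice(BASES) for _ in range(n)]
    sent_bits, sent_bases = alice_bits, alice_bases
    if eavesdrop:
        # Eve measures in random bases and resends what she saw, disturbing the states.
        eve_bases = [random.choice(BASES) for _ in range(n)]
        sent_bits = [measure(b, pb, eb) for b, pb, eb in zip(alice_bits, alice_bases, eve_bases)]
        sent_bases = eve_bases
    bob_bases = [random.choice(BASES) for _ in range(n)]
    bob_bits = [measure(b, pb, bb) for b, pb, bb in zip(sent_bits, sent_bases, bob_bases)]
    # Keep only positions where Alice and Bob used the same basis ("sifting"),
    # then estimate the error rate, which exposes an eavesdropper.
    kept = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    return sum(alice_bits[i] != bob_bits[i] for i in kept) / len(kept)

print(f"error rate without eavesdropping: {run():.3f}")              # ~0.000
print(f"error rate with eavesdropping:    {run(eavesdrop=True):.3f}")  # ~0.250
```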
    Works via existing infrastructure
    The Continuous Variable Quantum Key Distribution (CV QKD) technology can be integrated into the existing internet infrastructure.
    “The advantage of using this technology is that we can build a system that resembles what optical communication already relies on.”
    The backbone of the internet is optical communication. It works by sending data via infrared light running through optical fibres. They function as light guides laid in cables, ensuring we can send data worldwide. Data can be sent faster and over longer distances via fibre optic cables, and light signals are less susceptible to interference, which is called noise in technical terms.

    “It is a standard technology that has been used for a long time. So, you don’t need to invent anything new to be able to use it to distribute quantum keys, and it can make implementation significantly cheaper. And we can operate at room temperature,” explains Tobias Gehring, adding:
    “But CV QKD technology works best over shorter distances. Our task is to increase the distance. And the 100 kilometres is a big step in the right direction.”
    Noise, Errors, and Assistance from Machine Learning
    The researchers succeeded in increasing the distance by addressing three factors that limit their system in exchanging the quantum-encrypted keys over longer distances:
    Machine learning was used to measure, at an earlier stage, the disturbances affecting the system. Noise, as these disturbances are called, can arise, for example, from electromagnetic radiation, which can distort or destroy the quantum states being transmitted. This earlier detection of the noise made it possible to reduce its effect more effectively.
    Furthermore, the researchers have become better at correcting errors that can occur along the way, which can be caused by noise, interference, or imperfections in the hardware.
    “In our upcoming work, we will use the technology to establish a secure communication network between Danish ministries to secure their communication. We will also attempt to generate secret keys between, for example, Copenhagen and Odense to enable companies with branches in both cities to establish quantum-safe communication,” Tobias Gehring says.
    Facts:
    We don’t exactly know what happens — yet.
    Quantum Key Distribution was developed as a concept in 1984 by Bennett and Brassard, while the physicist and computer pioneer Artur Ekert and his colleagues carried out the first practical implementation of QKD in 1992. Their contribution has been crucial for developing modern QKD protocols, a set of rules, procedures, or conventions that determine how a device should perform a task.
    Quantum Key Distribution (QKD) is based on a fundamental uncertainty in copying photons in a quantum state. Photons are the quantum mechanical particles that light consists of.
    Photons in a quantum state carry a fundamental uncertainty, meaning it is not possible to know with certainty whether a given state contains one photon or several photons collected together (a so-called coherent state). This prevents a hacker from measuring the number of photons, making it impossible to make an exact copy of a state.
    They also carry a fundamental randomness because photons are in multiple states simultaneously, also called superposition. The superposition of photons collapses into a random state when the measurement occurs. This makes it impossible to measure precisely which phase they are in while in superposition.
    Together, it becomes nearly impossible for a hacker to copy a key without introducing errors, and the system will know if a hacker is trying to break in and can shut down immediately. In other words, it becomes impossible for a hacker to first steal the key and then to avoid the door locking as he tries to put the key in the lock.
    Continuous Variable Quantum Key Distribution (CV QKD) focuses on measuring the smooth properties of quantum states in photons. It can be compared to conveying information in a stream of all the nuances of colours instead of conveying information step by step in each colour.
    Facts:
    The Innovation Fund Denmark, the Danish National Research Foundation, the European Union’s Horizon Europe research and innovation program, the Carlsberg Foundation, and the Czech Science Foundation support the project.
    The research group comprises Adnan A.E. Hajomer, Nitin Jain, Hou-Man Chin, Ivan Derkach, Ulrik L. Andersen, and Tobias Gehring.
    The Danish Quantum Communication Infrastructure (QCI.DK) targets the first deployment of Danish quantum communication technologies in a versatile network supporting real-life Quantum Key Distribution applications.

  • I spy with my speedy eye — scientists discover speed of visual perception ranges widely in humans

    Using a blink-and-you’ll-miss-it experiment, researchers from Trinity College Dublin have discovered that individuals differ widely in the rate at which they perceive visual signals. Some people perceive a rapidly changing visual cue at frequencies that others cannot, which means some access more visual information per timeframe than others.
    This discovery suggests some people have an innate advantage in certain settings where response time is crucial, such as in ball sports, or in competitive gaming.
    The rate at which we perceive the world is known as our “temporal resolution,” and in many ways it is similar to the refresh rate of a computer monitor.
    The researchers, from the Department of Zoology in the School of Natural Sciences and the Trinity College Institute of Neuroscience, found that there is considerable variation among people in their temporal resolution, meaning some people effectively see more “images per second” than others.
    To quantify this, the scientists used the “critical flicker fusion threshold,” a measure for the maximum frequency at which an individual can perceive a flickering light source.
    If the light source flickers above a person’s threshold, they will not be able to see that it is flickering, and instead see the light as steady. Some participants in the experiment indicated they saw the light as completely still when it was in fact flashing about 35 times per second, while others were still able to perceive the flashing at rates of over 60 times per second.
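    One common way to measure such a threshold in the lab is with an adaptive “staircase”: the flicker rate is raised while the participant still reports flicker and lowered once the light looks steady, and the frequencies at which the responses flip are averaged. The sketch below simulates that procedure with an invented observer; the threshold, noise level, step size and trial count are illustrative assumptions, not the parameters of the Trinity experiment.

```python
# Hypothetical staircase estimate of a critical flicker fusion threshold.
# The simulated observer and all parameters are placeholders for illustration.
import random

def simulated_observer(flicker_hz, true_threshold_hz=47.0, noise_hz=2.0):
    """Return True if the observer reports seeing flicker at this frequency."""
    return flicker_hz < true_threshold_hz + random.gauss(0, noise_hz)

def estimate_threshold(trials=200, start_hz=30.0, step_hz=1.0):
    freq = start_hz
    reversals = []
    last_seen = None
    for _ in range(trials):
        seen = simulated_observer(freq)
        if last_seen is not None and seen != last_seen:
            reversals.append(freq)                  # record frequencies where the response flips
        freq += step_hz if seen else -step_hz       # up while flicker is seen, down once it fuses
        last_seen = seen
    tail = reversals[-10:] if reversals else [freq] # average the later reversal points
    return sum(tail) / len(tail)

print(f"estimated flicker fusion threshold: {estimate_threshold():.1f} Hz")
```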
    Clinton Haarlem, PhD Candidate in the School of Natural Sciences, is the first author of the article that has just been published in leading journal PLOS ONE. He said: “We also measured temporal resolution on multiple occasions in the same participants and found that even though there is significant variation among individuals, the trait appears to be quite stable over time within individuals.”
    Though our visual temporal resolution is quite stable from day to day in general, a post-hoc analysis did suggest that there may be slightly more variation over time within females than within males.

    “We don’t yet know how this variation in visual temporal resolution might affect our day-to-day lives, but we believe that individual differences in perception speed might become apparent in high-speed situations where one might need to locate or track fast-moving objects, such as in ball sports, or in situations where visual scenes change rapidly, such as in competitive gaming,” added Clinton Haarlem.
    “This suggests that some people may have an advantage over others before they have even picked up a racquet and hit a tennis ball, or grabbed a controller and jumped into some fantasy world online.”
    Andrew Jackson, Professor in Zoology in Trinity’s School of Natural Sciences, said: “What I think is really interesting about this project is how a zoologist, a geneticist and a psychologist can all find different angles to this work. For me as a zoologist the consequences of variation in visual perception likely has profound implications for how predators and prey interact, with various arms-races existing for investment in brain processing power and clever strategies to exploit weaknesses in one’s enemy.”
    Kevin Mitchell, Associate Professor in Developmental Neurobiology in Trinity’s School of Genetics and Microbiology, and the Trinity College Institute of Neuroscience, said: “Because we only have access to our own subjective experience, we might naively expect that everyone else perceives the world in the same way we do. Examples like colour blindness show that isn’t always true, but there are many less well known ways that perception can vary too. This study characterises one such difference — in the ‘frame rate’ of our visual systems. Some people really do seem to see the world faster than others.”

  • Study uses artificial intelligence to show how personality influences the expression of our genes

    An international study led by the UGR using artificial intelligence has shown that our personalities alter the expression of our genes. The findings shed new light on the long-standing mystery of how the mind and body interact.
    The study, published in the journal Molecular Psychiatry (Nature), examines how an individual’s personality and underlying outlook on life regulate their gene expression, and thus affect their health and well-being. It is the first study to measure the transcription of the entire genome in relation to human personality.
    The multi- and interdisciplinary study was led by researchers from the Andalusian Interuniversity Research Institute in Data Science and Computational Intelligence (DaSCI), the UGR’s Department of Computer Science and Artificial Intelligence, and the Biohealth Research Institute in Granada (ibs.GRANADA). It was carried out in collaboration with Professor Robert Cloninger (Washington University in St. Louis), researchers from Baylor College of Medicine (Texas, USA) and the Young Finns Study (Finland).
    The international research team (made up of specialists in genetics, medicine, psychology and computer science) used data from the Young Finns Study, an extensive study conducted in the general population of Finland over four decades during which relevant information was collected on participants’ health, physical condition and lifestyle. In addition, participants were subjected to extensive personality assessments that addressed both temperament (habits and emotional reactivity) and character (conscious goals and values). The results showed that certain outlooks on life are conducive to a healthy, fulfilling and long life, while others lead to a stressful, unhealthy and short life.
    The study analysed the regulation of gene expression in these individuals, taking into account three levels of self-awareness that were measured through their combined temperament and character profiles. These levels were designated “unregulated” (individuals dominated by irrational emotions and habits associated with their traditions and obedience to authority); “organised” (self-sufficient individuals capable of intentionally regulating their habits and cooperating with others for mutual benefit); and “creative” (self-transcendent individuals who adapt their habits to live in harmony with others, with nature or with the universe, even if this requires occasional personal sacrifices).
    Two key findings
    As UGR researcher and co-lead author of the study Coral del Val explains: “In our research we made two key discoveries about the expression and organisation of genes according to the personality profiles of these individuals. First, we discovered a network of 4,000 genes that clustered into multiple modules that were expressed in specific regions of the brain. Some of these genes had already been linked in previous studies to the inheritance of human personality. Second, we discovered that the modules formed a functional interaction network capable of orchestrating changes in gene expression in order to adapt to varying internal and external conditions. The modules turned on and off in a flexible manner, facilitating adaptation to the everyday challenges we all face, and choreographing our development.”
    The researchers showed that the changes in the patterns of interaction between these modules were orchestrated by two sub-networks. One network regulated emotional reactivity (anxiety, fear, etc.), while the other regulated what a person perceives as meaningful (e.g. production of concepts and language). “What’s most remarkable is the fact that the networks for emotion and meaning are coordinated by a control centre made up of six genes,” notes Elisa Díaz de la Guardia-Bolívar, the other co-lead author of the study. “It is particularly interesting that we found that the six genes of the control hub are highly preserved throughout evolution, from single-celled organisms to modern humans. This finding confirms their beneficial role in regulating the functioning of all forms of life on Earth,” she adds.

    Identifying these gene networks and the control hub regulating gene expression in humans has practical value because it shows how people can improve their health, happiness and overall quality of daily life, despite the challenges and stresses we all face.
    The UGR’s Igor Zwir explains: “In previous research, we found significant differences in well-being between people in the three personality groups, depending on their level of self-awareness. Specifically, those with greater self-awareness (the creative group) reported greater well-being compared to the organised and unregulated groups. We have now shown that these levels of self-awareness are also strongly associated with the regulation of gene expression in the same order (creative > organised > unregulated). This suggests that a person can improve their health and well-being by cultivating a more self-transcendent and creative outlook on life.”
    However, he cautions that it remains to be confirmed whether the regulation of gene expression through interventions that enhance self-awareness is the mediating factor in the association between self-awareness and well-being. Nevertheless, treatments that promote greater self-transcendence and mindfulness have also been shown to contribute to improvements in all aspects of health, including physical, mental, social and spiritual well-being. It is therefore plausible that the regulation of gene expression is the real mediator in this association.
    As the researchers predicted, certain types of genes, such as transcription factors, microRNAs and long non-coding RNAs, showed extensive enrichment in the 4,000-gene integrated molecular network. However, the most significant enrichment was observed in a group of RNAs that are thought to have played a crucial role in the origin of cellular life. These RNAs have the ability to form membraneless compartments and carry out chemical reactions, allowing them to adapt rapidly to stress. This process, known as liquid-liquid phase separation (LLPS), creates a comprehensive bioreactor in which the chemicals that are essential for life can be synthesised.
    “We are delighted to discover the important roles of different types of genes in health and personality. It is amazing to see that evolution has preserved genes that are thought to have been important in the origin of life, allowing for the increasing plasticity, complexity and consciousness that we observe in humans. The innovative computational methods used in this project enable us to study complex biological systems in humans in an ethical, non-intrusive and beneficial way, with the aim of understanding how to live healthily,” says Professor Cloninger. He adds: “These findings clearly demonstrate that a person’s mind and body are deeply interconnected. Each influences the other, so they are not separate. It is important to recognise that our future well-being is not entirely determined by our past or present conditions; rather, we can cultivate our own well-being in a creative process full of open-ended possibilities.”