More stories

  •

    People watched other people shake boxes for science: Here’s why

    When researchers asked hundreds of people to watch other people shake boxes, it took just seconds for almost all of them to figure out what the shaking was for.
    The deceptively simple work by Johns Hopkins University perception researchers is the first to demonstrate that people can tell what others are trying to learn just by watching their actions. Published today in the journal Proceedings of the National Academy of Sciences, the study reveals a key yet neglected aspect of human cognition, and one with implications for artificial intelligence.
    “Just by looking at how someone’s body is moving, you can tell what they are trying to learn about their environment,” said author Chaz Firestone, an assistant professor of psychological and brain sciences who investigates how vision and thought interact. “We do this all the time, but there has been very little research on it.”
    Recognizing another person’s actions is something we do every day, whether it’s guessing which way someone is headed or figuring out what object they’re reaching for. These are known as “pragmatic actions.” Numerous studies have shown people can quickly and accurately identify these actions just by watching them. The new Johns Hopkins work investigates a different kind of behavior: “epistemic actions,” which are performed when someone is trying to learn something.
    For instance, someone might put their foot in a swimming pool because they’re going for a swim, or they might put their foot in a pool to test the water. Though the actions are similar, there are differences, and the Johns Hopkins team surmised that observers would be able to detect another person’s “epistemic goals” just by watching them.
    Across several experiments, researchers asked a total of 500 participants to watch two videos in which someone picks up a box full of objects and shakes it around. One shows someone shaking a box to figure out the number of objects inside it. The other shows someone shaking a box to figure out the shape of the objects inside. Almost every participant knew who was shaking for the number and who was shaking for shape.
    “What is surprising to me is how intuitive this is,” said lead author Sholei Croom, a Johns Hopkins graduate student. “People really can suss out what others are trying to figure out, which shows how we can make these judgments even though what we’re looking at is very noisy and changes from person to person.”
    Added Firestone, “When you think about all the mental calculations someone must make to understand what someone else is trying to learn, it’s a remarkably complicated process. But our findings show it’s something people do easily.”

    The findings could also inform the development of artificial intelligence systems designed to interact with humans, such as a commercial robot assistant that can look at a customer and guess what they’re looking for.
    “It’s one thing to know where someone is headed or what product they are reaching for,” Firestone said. “But it’s another thing to infer whether someone is lost or what kind of information they are seeking.”
    In the future, the team would like to investigate whether people can distinguish someone’s epistemic intent from their pragmatic intent: what is someone up to when they dip their foot in the pool? They’re also interested in when these observational skills emerge in human development and whether it’s possible to build computational models that detail exactly how observed physical actions reveal epistemic intent.
    The Johns Hopkins team also included Hanbei Zhou, a sophomore studying neuroscience.

  •

    AI system self-organizes to develop features of brains of complex organisms

    Cambridge scientists have shown that placing physical constraints on an artificially intelligent system — in much the same way that the human brain has to develop and operate within physical and biological constraints — allows it to develop features of the brains of complex organisms in order to solve tasks.
    As neural systems such as the brain organise themselves and make connections, they have to balance competing demands. For example, energy and resources are needed to grow and sustain the network in physical space, while at the same time optimising the network for information processing. This trade-off shapes all brains within and across species, which may help explain why many brains converge on similar organisational solutions.
    Jascha Achterberg, a Gates Scholar from the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) at the University of Cambridge said: “Not only is the brain great at solving complex problems, it does so while using very little energy. In our new work we show that considering the brain’s problem solving abilities alongside its goal of spending as few resources as possible can help us understand why brains look like they do.”
    Co-lead author Dr Danyal Akarca, also from the MRC CBSU, added: “This stems from a broad principle, which is that biological systems commonly evolve to make the most of what energetic resources they have available to them. The solutions they come to are often very elegant and reflect the trade-offs between various forces imposed on them.”
    In a study published today in Nature Machine Intelligence, Achterberg, Akarca and colleagues created an artificial system intended to model a very simplified version of the brain and applied physical constraints. They found that their system went on to develop certain key characteristics and tactics similar to those found in human brains.
    Instead of real neurons, the system used computational nodes. Neurons and nodes are similar in function, in that each takes an input, transforms it, and produces an output, and a single node or neuron might connect to multiple others, all inputting information to be computed.
    In their system, however, the researchers applied a ‘physical’ constraint on the system. Each node was given a specific location in a virtual space, and the further away two nodes were, the more difficult it was for them to communicate. This is similar to how neurons in the human brain are organised.

    The researchers gave the system a simple task to complete — in this case a simplified version of a maze navigation task typically given to animals such as rats and macaques when studying the brain, where it has to combine multiple pieces of information to decide on the shortest route to get to the end point.
    One of the reasons the team chose this particular task is that, to complete it, the system needs to maintain a number of elements — start location, end location and intermediate steps — and once it has learned to do the task reliably, it is possible to observe, at different moments in a trial, which nodes are important. For example, one particular cluster of nodes may encode the finish locations, while others encode the available routes, and it is possible to track which nodes are active at different stages of the task.
    Initially, the system does not know how to complete the task and makes mistakes. But when it is given feedback it gradually learns to get better at the task. It learns by changing the strength of the connections between its nodes, similar to how the strength of connections between brain cells changes as we learn. The system then repeats the task over and over again, until eventually it learns to perform it correctly.
    With their system, however, the physical constraint meant that the further away two nodes were, the more difficult it was to build a connection between the two nodes in response to the feedback. In the human brain, connections that span a large physical distance are expensive to form and maintain.
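The distance penalty described above can be sketched as a wiring-cost term added to the training loss. The sketch below is purely illustrative (the node count, the 2D coordinates, and the distance-weighted L1 form are invented here, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch: each node gets a coordinate in a virtual space, and
# connections are penalized in proportion to the distance they span.
n_nodes = 100
coords = rng.uniform(0.0, 1.0, size=(n_nodes, 2))        # node positions
weights = rng.normal(0.0, 0.1, size=(n_nodes, n_nodes))  # connection strengths

# Pairwise Euclidean distances between all nodes
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

def wiring_cost(w, d, strength=1.0):
    """Distance-weighted L1 penalty: long connections cost more to maintain."""
    return strength * float(np.sum(np.abs(w) * d))

# During training this term would be added to the task loss, so learning
# prunes long-range connections unless they earn their keep.
cost = wiring_cost(weights, dist)
```

Under such a penalty, gradient descent favours short local connections plus a few well-placed long-range ones, which is one intuition for why hub-like structure can emerge.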
    When the system was asked to perform the task under these constraints, it used some of the same tricks used by real human brains to solve the task. For example, to get around the constraints, the artificial systems started to develop hubs — highly connected nodes that act as conduits for passing information across the network.
    More surprising, however, was that the response profiles of individual nodes themselves began to change: in other words, rather than having a system where each node codes for one particular property of the maze task, like the goal location or the next choice, nodes developed a flexible coding scheme. This means that at different moments in time nodes might be firing for a mix of the properties of the maze. For instance, the same node might be able to encode multiple locations of a maze, rather than needing specialised nodes for encoding specific locations. This is another feature seen in the brains of complex organisms.

    Co-author Professor Duncan Astle, from Cambridge’s Department of Psychiatry, said: “This simple constraint — it’s harder to wire nodes that are far apart — forces artificial systems to produce some quite complicated characteristics. Interestingly, they are characteristics shared by biological systems like the human brain. I think that tells us something fundamental about why our brains are organised the way they are.”
    Understanding the human brain
    The team are hopeful that their AI system could begin to shed light on how these constraints shape differences between people’s brains and contribute to the differences seen in those who experience cognitive or mental health difficulties.
    Co-author Professor John Duncan from the MRC CBSU said: “These artificial brains give us a way to understand the rich and bewildering data we see when the activity of real neurons is recorded in real brains.”
    Achterberg added: “Artificial ‘brains’ allow us to ask questions that it would be impossible to look at in an actual biological system. We can train the system to perform tasks and then play around experimentally with the constraints we impose, to see if it begins to look more like the brains of particular individuals.”
    Implications for designing future AI systems
    The findings are likely to be of interest to the AI community, too, where they could allow for the development of more efficient systems, particularly in situations where there are likely to be physical constraints.
    Dr Akarca said: “AI researchers are constantly trying to work out how to make complex, neural systems that can encode and perform in a flexible way that is efficient. To achieve this, we think that neurobiology will give us a lot of inspiration. For example, the overall wiring cost of the system we’ve created is much lower than you would find in a typical AI system.”
    Many modern AI solutions involve using architectures that only superficially resemble a brain. The researchers say their work shows that the type of problem the AI is solving will influence which architecture is the most powerful to use.
    Achterberg said: “If you want to build an artificially intelligent system that solves similar problems to humans, then ultimately the system will end up looking much closer to an actual brain than systems running on large compute clusters that specialise in very different tasks to those carried out by humans. The architecture and structure we see in our artificial ‘brain’ is there because it is beneficial for handling the specific brain-like challenges it faces.”
    This means that robots that have to process a large amount of constantly changing information with finite energetic resources could benefit from having brain structures not dissimilar to ours.
    Achterberg added: “Brains of robots that are deployed in the real physical world are probably going to look more like our brains because they might face the same challenges as us. They need to constantly process new information coming in through their sensors while controlling their bodies to move through space towards a goal. Many systems will need to run all their computations with a limited supply of electric energy and so, to balance these energetic constraints with the amount of information they need to process, they will probably need brain structures similar to ours.”
    The research was funded by the Medical Research Council, Gates Cambridge, the James S McDonnell Foundation, Templeton World Charity Foundation and Google DeepMind.

  •

    Want better AI? Get input from a real (human) expert

    Can AI be trusted? The question pops up wherever AI is used or discussed — which, these days, is everywhere.
    It’s a question that even some AI systems ask themselves.
    Many machine-learning systems create what experts call a “confidence score,” a value that reflects how confident the system is in its decisions. A low score tells the human user that there is some uncertainty about the recommendation; a high score indicates to the human user that the system, at least, is quite sure of its decisions. Savvy humans know to check the confidence score when deciding whether to trust the recommendation of a machine-learning system.
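As a generic illustration of such a confidence score (this is not PNNL’s actual pipeline), many classifiers expose per-class probabilities, and the maximum probability is a common confidence proxy:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                  # synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic binary labels

clf = LogisticRegression().fit(X, y)

proba = clf.predict_proba(X[:5])   # per-class probabilities for 5 samples
confidence = proba.max(axis=1)     # one confidence score per prediction
```

A score near 0.5 flags an uncertain binary decision, while a score near 1.0 signals that the model, at least, is sure of itself — which, as the article notes, is not the same as being right.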
    Scientists at the Department of Energy’s Pacific Northwest National Laboratory have put forth a new way to evaluate an AI system’s recommendations. They bring human experts into the loop to view how the ML performed on a set of data. The expert learns which types of data the machine-learning system typically classifies correctly, and which data types lead to confusion and system errors. Armed with this knowledge, the experts then offer their own confidence score on future system recommendations.
    The result of having a human look over the shoulder of the AI system? Humans predicted the AI system’s performance more accurately.
    Minimal human effort — just a few hours — evaluating some of the decisions made by the AI program allowed researchers to vastly improve on the AI program’s ability to assess its decisions. In some analyses by the team, the accuracy of the confidence score doubled when a human provided the score.
    The PNNL team presented its results at a recent meeting of the Human Factors and Ergonomics Society in Washington, D.C., part of a session on human-AI robot teaming.

    “If you didn’t develop the machine-learning algorithm in the first place, then it can seem like a black box,” said Corey Fallon, the lead author of the study and an expert in human-machine interaction. “In some cases, the decisions seem fine. In other cases, you might get a recommendation that is a real head-scratcher. You may not understand why it’s making the decisions it is.”
    The grid and AI
    It’s a dilemma that power engineers working with the electric grid face. Their decisions, based on reams of data that change every instant, keep the lights on and the nation running. But power engineers may be reluctant to turn over decision-making authority to machine-learning systems.
    “There are hundreds of research papers about the use of machine learning in power systems, but almost none of them are applied in the real world. Many operators simply don’t trust ML. They have domain experience — something that ML can’t learn,” said coauthor Tianzhixi “Tim” Yin.
    The researchers at PNNL, which has a world-class team modernizing the grid, took a closer look at one machine-learning algorithm applied to power systems. They trained the SVM (support-vector machine) algorithm on real data from the grid’s Eastern Interconnection in the U.S. The program looked at 124 events, deciding whether a generator was malfunctioning, or whether the data was showing other types of events that are less noteworthy.
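The setup can be sketched with scikit-learn’s support-vector classifier on stand-in data (the real study used 124 grid events; the features and labels below are invented for illustration):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(124, 6))            # stand-in event features
y = (X[:, 0] - X[:, 2] > 0).astype(int)  # 1 = generator fault (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = SVC(kernel="rbf").fit(X_tr, y_tr)

accuracy = model.score(X_te, y_te)
# decision_function returns a signed margin; its magnitude is the model's own
# (often miscalibrated) notion of confidence — the gap a human-derived score
# aims to fill.
margins = model.decision_function(X_te)
```

The point of the PNNL work is precisely that these machine-generated margins can mislead, and that an expert who has watched where the model goes wrong can score its future decisions better than the model scores itself.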
    The algorithm was 85% reliable in its decisions. Many of its errors occurred when there were complex power bumps or frequency shifts. Confidence scores created with a human in the loop were a marked improvement over the system’s assessment of its own decisions. The human expert’s input predicted the algorithm’s decisions with much greater accuracy.

    More human, better machine learning
    Fallon and Yin call the new score an “Expert-Derived Confidence” score, or EDC score.
    They found that, on average, when humans weighed in on the data, their EDC scores predicted model behavior that the algorithm’s confidence scores couldn’t predict.
    “The human expert fills in gaps in the ML’s knowledge,” said Yin. “The human provides information that the ML did not have, and we show that that information is significant. The bottom line is that we’ve shown that if you add human expertise to the ML results, you get much better confidence.”
    The work by Fallon and Yin was funded by PNNL through an initiative known as MARS — Mathematics for Artificial Reasoning in Science. The effort is part of a broader effort in artificial intelligence at PNNL. The initiative brought together Fallon, an expert on human-machine teaming and human factors research, and Yin, a data scientist and an expert on machine learning.
    “This is the type of research needed to prepare and equip an AI-ready workforce,” said Fallon. “If people don’t trust the tool, then you’ve wasted your time and money. You’ve got to know what will happen when you take a machine learning model out of the laboratory and put it to work in the real world.
    “I’m a big fan of human expertise and of human-machine teaming. Our EDC scores allow the human to better assess the situation and make the ultimate decision.”

  •

    Gold now has a golden future in revolutionizing wearable devices

    Top Olympic achievers are awarded the gold medal, a symbol revered for wealth and honor both in the East and the West. This metal also serves as a key element in diverse fields due to its stability in air, exceptional electrical conductivity, and biocompatibility. It’s highly favored in medical and energy sectors as the ‘preferred catalyst’ and is increasingly finding application in cutting-edge wearable technologies.
    A research team led by Professor Sei Kwang Hahn and Dr. Tae Yeon Kim from the Department of Materials Science and Engineering at Pohang University of Science and Technology (POSTECH) developed an integrated wearable sensor device that effectively measures and processes two bio-signals simultaneously. Their research findings were featured in Advanced Materials, an international top journal in the materials field.
    Wearable devices, available in various forms like attachments and patches, play a pivotal role in detecting physical, chemical, and electrophysiological signals for disease diagnosis and management. Recent strides in research focus on devising wearables capable of measuring multiple bio-signals concurrently. However, a major challenge has been the disparate materials needed for each signal measurement, leading to interface damage, complex fabrication, and reduced device stability. Additionally, analyzing these varied signals requires further signal-processing systems and algorithms.
    The team tackled this challenge using various shapes of gold (Au) nanowires. While silver (Ag) nanowires, known for their extreme thinness, lightness, and conductivity, are commonly used in wearable devices, the team fused them with gold. Initially, they developed bulk gold nanowires by coating the exterior of the silver nanowires, suppressing the galvanic phenomenon. Subsequently, they created hollow gold nanowires by selectively etching the silver from the gold-coated nanowires. The bulk gold nanowires responded sensitively to temperature variations, whereas the hollow gold nanowires showed high sensitivity to minute changes in strain.
    These nanowires were then patterned onto a substrate made of styrene-ethylene-butylene-styrene (SEBS) polymer, seamlessly integrated without separations. By leveraging two types of gold nanowires, each with distinct properties, they engineered an integrated sensor capable of measuring both temperature and strain. Additionally, they engineered a logic circuit for signal analysis, utilizing the negative gauge factor resulting from introducing micrometer-scale corrugations into the pattern. This approach led to the successful creation of an intelligent wearable device system that not only captures but also analyzes signals simultaneously, all using gold as the single material.
    The team’s sensors exhibited remarkable performance in detecting subtle muscle tremors, identifying heartbeat patterns, recognizing speech through vocal cord tremors, and monitoring changes in body temperature. Notably, these sensors maintained high stability without causing damage to the material interfaces. Their flexibility and excellent stretchability enabled them to conform to curved skin seamlessly.
    Professor Sei Kwang Hahn stated, “This research underscores the potential for the development of a futuristic bioelectronics platform capable of analyzing a diverse range of bio-signals.” He added, “We envision new prospects across various industries including healthcare and integrated electronic systems.”
    The research was sponsored by the Basic Research Program and the Biomedical Technology Development Program of the National Research Foundation of Korea, and POSCO Holdings.


  •

    AI: Researchers develop automatic text recognition for ancient cuneiform tablets

    A new artificial intelligence (AI) software is now able to decipher difficult-to-read texts on cuneiform tablets. It was developed by a team from Martin Luther University Halle-Wittenberg (MLU), Johannes Gutenberg University Mainz, and Mainz University of Applied Sciences. Instead of photos, the AI system uses 3D models of the tablets, delivering significantly more reliable results than previous methods. This makes it possible to search through the contents of multiple tablets to compare them with each other. It also paves the way for entirely new research questions.
    In their new approach, the researchers used 3D models of nearly 2,000 cuneiform tablets, including around 50 from a collection at MLU. According to estimates, around one million such tablets still exist worldwide. Many of them are over 5,000 years old and are thus among humankind’s oldest surviving written records. They cover an extremely wide range of topics: “Everything can be found on them: from shopping lists to court rulings. The tablets provide a glimpse into humankind’s past several millennia ago. However, they are heavily weathered and thus difficult to decipher even for trained eyes,” says Hubert Mara, an assistant professor at MLU.
    This is because the cuneiform tablets are unfired chunks of clay into which writing has been pressed. To complicate matters, the writing system back then was very complex and encompassed several languages. Therefore, not only are optimal lighting conditions needed to recognise the symbols correctly, a lot of background knowledge is required as well. “Up until now it has been difficult to access the content of many cuneiform tablets at once — you sort of need to know exactly what you are looking for and where,” Mara adds.
    His lab came up with the idea of developing a system of artificial intelligence which is based on 3D models. The new system deciphers characters better than previous methods. In principle, the AI system works along the same lines as OCR software (optical character recognition), which converts images of writing and text into machine-readable text. This has many advantages. Once converted into computer text, the writing can be more easily read or searched through. “OCR usually works with photographs or scans. This is no problem for ink on paper or parchment. In the case of cuneiform tablets, however, things are more difficult because the light and the viewing angle greatly influence how well certain characters can be identified,” explains Ernst Stötzner from MLU. He developed the new AI system as part of his master’s thesis under Hubert Mara.
    The team trained the new AI software using three-dimensional scans and additional data. Much of this data was provided by Mainz University of Applied Sciences, which is overseeing a large edition project for 3D models of clay tablets. The AI system subsequently did succeed in reliably recognising the symbols on the tablets. “We were surprised to find that our system even works well with photographs, which are actually a poorer source material,” says Stötzner.
    The work by the researchers from Halle and Mainz provides new access to what has hitherto been a relatively exclusive material and opens up many new lines of inquiry. Up until now it has only been a prototype which is able to reliably discern symbols from two languages. However, a total of twelve cuneiform languages are known to exist. In the future, the software could also help to decipher weathered inscriptions, for example in cemeteries, which are three-dimensional like the cuneiform script.

  •

    Research reveals rare metal could offer revolutionary switch for future quantum devices

    Quantum scientists have discovered a rare phenomenon that could hold the key to creating a ‘perfect switch’ in quantum devices which flips between being an insulator and superconductor.
    The research, led by the University of Bristol and published in Science, found these two opposing electronic states exist within purple bronze, a unique one-dimensional metal composed of individual conducting chains of atoms.
    Tiny changes in the material, for instance prompted by a small stimulus like heat or light, may trigger an instant transition from an insulating state with zero conductivity to a superconductor with unlimited conductivity, and vice versa. This polarised versatility, known as ‘emergent symmetry’, has the potential to offer an ideal On/Off switch in future quantum technology developments.
    Lead author Nigel Hussey, Professor of Physics at the University of Bristol, said: “It’s a really exciting discovery which could provide a perfect switch for quantum devices of tomorrow.
    “The remarkable journey started 13 years ago in my lab when two PhD students, Xiaofeng Xu and Nick Wakeham, measured the magnetoresistance — the change in resistance caused by a magnetic field — of purple bronze.”
    In the absence of a magnetic field, the resistance of purple bronze was highly dependent on the direction in which the electrical current is introduced. Its temperature dependence was also rather complicated. Around room temperature, the resistance is metallic, but as the temperature is lowered, this reverses and the material appears to be turning into an insulator. Then, at the lowest temperatures, the resistance plummets again as it transitions into a superconductor. Despite this complexity, surprisingly, the magnetoresistance was found to be extremely simple. It was essentially the same irrespective of the direction in which the current or field were aligned and followed a perfect linear temperature dependence all the way from room temperature down to the superconducting transition temperature.
    “Finding no coherent explanation for this puzzling behaviour, the data lay dormant and unpublished for the next seven years. A hiatus like this is unusual in quantum research, though the reason for it was not a lack of statistics,” Prof Hussey explained.

    “Such simplicity in the magnetic response invariably belies a complex origin and as it turns out, its possible resolution would only come about through a chance encounter.”
    In 2017, Prof Hussey was working at Radboud University and saw an advertisement for a seminar by physicist Dr Piotr Chudzinski on the subject of purple bronze. At the time few researchers were devoting an entire seminar to this little-known material, so his interest was piqued.
    Prof Hussey said: “In the seminar Chudzinski proposed that the resistive upturn may be caused by interference between the conduction electrons and elusive, composite particles known as ‘dark excitons’. We chatted after the seminar and together proposed an experiment to test his theory. Our subsequent measurements essentially confirmed it.”
    Buoyed by this success, Prof Hussey resurrected Xu and Wakeham’s magnetoresistance data and showed them to Dr Chudzinski. The two central features of the data — the linearity with temperature and the independence on the orientation of current and field — intrigued Chudzinski, as did the fact that the material itself could exhibit both insulating and superconducting behaviour depending on how the material was grown.
    Dr Chudzinski wondered whether rather than transforming completely into an insulator, the interaction between the charge carriers and the excitons he’d introduced earlier could cause the former to gravitate towards the boundary between the insulating and superconducting states as the temperature is lowered. At the boundary itself, the probability of the system being an insulator or a superconductor is essentially the same.
    Prof Hussey said: “Such physical symmetry is an unusual state of affairs and to develop such symmetry in a metal as the temperature is lowered, hence the term ’emergent symmetry’, would constitute a world-first.”
    Physicists are well versed in the phenomenon of symmetry breaking: the lowering of the symmetry of an electron system upon cooling. The complex arrangement of water molecules in an ice crystal is an example of such broken symmetry. But the converse is an extremely rare, if not unique, occurrence. Returning to the water/ice analogy, it is as though, upon cooling the ice further, the complexity of the ice crystals ‘melts’ once again into something as symmetric and smooth as a water droplet.

    Dr Chudzinski, now a Research Fellow at Queen’s University Belfast, said: “Imagine a magic trick where a dull, distorted figure transforms into a beautiful, perfectly symmetric sphere. This is, in a nutshell, the essence of emergent symmetry. The figure in question is our material, purple bronze, while our magician is nature itself.”
    To further test whether the theory held water, an additional 100 individual crystals, some insulating and others superconducting, were investigated by another PhD student, Maarten Berben, working at Radboud University.
    Prof Hussey added: “After Maarten’s Herculean effort, the story was complete and the reason why different crystals exhibited such wildly different ground states became apparent. Looking ahead, it might be possible to exploit this ‘edginess’ to create switches in quantum circuits whereby tiny stimuli induce profound, orders-of-magnitude changes in the switch resistance.”

  • in

    Nostalgia and memories after ten years of social media

    As possibilities have changed and technology has advanced, memories and nostalgia are now a significant part of our use of social media. This is shown in a study from the University of Gothenburg and University West.
    Researchers at the University of Gothenburg and University West have been following a group of eleven active social media users for ten years, allowing them to describe and reflect on how they use the platforms to document and share their lives. The study provides insight into the role of technology in creating experiences and reliving meaningful moments.
    “These types of studies help us look back and understand the culture as it was in the 2010s and 2020s, when social media was a central part of it,” says Beata Jungselius, senior lecturer in informatics at University West and one of the researchers behind the study.
    Social media users engage in what researchers define as “social media nostalgizing,” meaning they actively seek out content that evokes feelings of nostalgia.
    Alexandra Weilenmann, professor of interaction design at the University of Gothenburg, explains that participants in the study have described it as “treating themselves” to a nostalgia trip now and then.
    “Going back and remembering what has happened earlier in life becomes a bigger part of it over time than posting new content,” she says, and explains that in later interviews, it becomes clear that the platforms often serve as diary-like tools that allow memories to be relived.
    Social media platforms are introducing increasingly advanced features to help users interact with older content. Personal, music-infused photo albums generated for us, or reminders of pictures we posted on the same date one, three, or ten years ago, allow for nostalgic experiences that are often seen as positive. The study describes how these features can lead users to reconnect with old friends by “tagging” them in a shared memory. Alexandra Weilenmann and Beata Jungselius believe this could be a deliberate move by social media platforms to encourage users to stay active now that the posting of new content has declined.
    The researchers have noted that it is not just the content itself that evokes feelings of nostalgia: memories of the actual use of social media also play a significant role. For example, one interviewee reminisces about how rewarding the intense communication in forums was, and how it often led to real-life meetings and interactions.
    “It’s only now that we’ve lived with social media long enough to draw conclusions from a study like this. Through our method of studying the same users over ten years, we’ve been able to follow how their usage and attitudes toward the platforms have changed as the platforms have evolved,” says Beata Jungselius.

  • in

    New computer code for mechanics of tissues and cells in three dimensions

    Biological materials are made of individual components, including tiny motors that convert fuel into motion. Through constant consumption of energy, these components create patterns of movement, and the material shapes itself with coherent flows. Such continuously driven materials are called “active matter.” The mechanics of cells and tissues can be described by active matter theory, a scientific framework for understanding the shape, flows, and form of living materials. The theory consists of many challenging mathematical equations.
    Scientists from the Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG) in Dresden, the Center for Systems Biology Dresden (CSBD), and the TU Dresden have now developed an algorithm, implemented in an open-source supercomputer code, that can for the first time solve the equations of active matter theory in realistic scenarios. These solutions bring us a big step closer to solving the century-old riddle of how cells and tissues attain their shape and to designing artificial biological machines.
    Biological processes and behaviors are often very complex. Physical theories provide a precise and quantitative framework for understanding them. The active matter theory offers a framework to understand and describe the behavior of active matter — materials composed of individual components capable of converting a chemical fuel (“food”) into mechanical forces. Several scientists from Dresden were key in developing this theory, among others Frank Jülicher, director at the Max Planck Institute for the Physics of Complex Systems, and Stephan Grill, director at the MPI-CBG. With these principles of physics, the dynamics of active living matter can be described and predicted by mathematical equations. However, these equations are extremely complex and hard to solve. Therefore, scientists require the power of supercomputers to comprehend and analyze living materials. There are different ways to predict the behavior of active matter, with some focusing on the tiny individual particles, others studying active matter at the molecular level, and yet others studying active fluids on a large scale. These studies help scientists see how active matter behaves at different scales in space and over time.
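    To give a flavour of the kind of equations involved: in the widely used active gel formulation associated with Jülicher and colleagues, the usual viscous stress of a fluid is supplemented by an “active” stress powered by the chemical potential difference Δμ released when fuel is consumed, oriented along the local polarity p of the material. Schematically (this is the generic form from the active-matter literature, not necessarily the exact system solved in this study, with η the viscosity, v_{ij} the strain rate, and ζ the activity coefficient):

```latex
\sigma_{ij} \;=\; \underbrace{2\,\eta\, v_{ij}}_{\text{passive, viscous}}
\;+\; \underbrace{\zeta\,\Delta\mu\,\Bigl(p_i p_j - \tfrac{1}{3}\,\delta_{ij}\Bigr)}_{\text{active}}
```

Because the active term continuously injects stress everywhere in the material, the resulting flow equations are nonlinear and strongly coupled, which is why realistic three-dimensional solutions require supercomputers.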
    Solving complex mathematical equations
    Scientists from the research group of Ivo Sbalzarini, TU Dresden Professor at the Center for Systems Biology Dresden (CSBD), research group leader at the Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG), and Dean of the Faculty of Computer Science at TU Dresden, have now developed a computer algorithm to solve the equations of active matter. Their work was published in the journal “Physics of Fluids” and was featured on the cover. They present an algorithm that can solve the complex equations of active matter in three dimensions and in complex-shaped spaces. “Our approach can handle different shapes in three dimensions over time,” says one of the first authors of the study, Abhinav Singh, a trained mathematician. He continues, “Even when the data points are not regularly distributed, our algorithm employs a novel numerical approach that works seamlessly for complex, biologically realistic scenarios and accurately solves the theory’s equations. Using our approach, we can finally understand the long-term behavior of active materials, in both moving and non-moving scenarios, and predict their dynamics. Further, the theory and simulations could be used to program biological materials or create engines at the nano-scale to extract useful work.” The other first author, Philipp Suhrcke, a graduate of TU Dresden’s Computational Modeling and Simulation M.Sc. program, adds, “Thanks to our work, scientists can now, for example, predict the shape of a tissue or when a biological material is going to become unstable or dysregulated, with far-reaching implications for understanding the mechanisms of growth and disease.”
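    The remark about irregularly distributed data points is the crux of mesh-free methods: derivatives must be estimated from scattered samples rather than from a regular grid. The sketch below illustrates the general idea with a local polynomial least-squares fit; it is a simple stand-in, not the actual numerical operators of the published code.

```python
import numpy as np

# Scattered (irregularly spaced) sample points of a known field f = sin(x).
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 2 * np.pi, 200))
f = np.sin(x)

def derivative_at(x0, x, f, k=12):
    """Estimate df/dx at x0 from scattered samples: fit a quadratic
    through the k nearest points and differentiate the fit at x0."""
    idx = np.argsort(np.abs(x - x0))[:k]          # k nearest neighbours
    c2, c1, c0 = np.polyfit(x[idx] - x0, f[idx], 2)  # local quadratic
    return c1                                      # derivative at h = 0

d = derivative_at(np.pi / 3, x, f)   # exact value is cos(pi/3) = 0.5
print(f"estimated derivative: {d:.3f}")
```

Generalizing this kind of local operator to three dimensions, on moving point clouds, while remaining numerically stable is what makes mesh-free active-matter solvers hard.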
    A powerful code for everyone to use
    The scientists implemented their software using the open-source library OpenFPM, meaning that it is freely available for others to use. OpenFPM is developed by the Sbalzarini group to democratize large-scale scientific computing. The authors first developed a custom computer language that allows computational scientists to write supercomputer codes by specifying the equations in mathematical notation, letting the computer do the work of generating correct program code. As a result, they do not have to start from scratch every time they write a code, reducing development times in scientific research from months or years to days or weeks and providing enormous productivity gains. Due to the tremendous computational demands of studying three-dimensional active materials, the new code is scalable on shared- and distributed-memory multi-processor parallel supercomputers, thanks to the use of OpenFPM. Although the application is designed to run on powerful supercomputers, it can also run on regular office computers for studying two-dimensional materials.
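    The idea of "writing the equations in mathematical notation and letting the computer do the work" can be illustrated with a toy solver in which the time-stepping line reads almost exactly like the PDE itself, here the diffusion equation ∂ₜu = D∇²u. This mimics the spirit of the embedded language, not its actual syntax or the active-matter equations of the paper.

```python
import numpy as np

# 1D periodic domain, discretized on a regular grid (a deliberately
# simple stand-in for the framework's mesh-free 3D discretizations).
dx, D, dt = 0.1, 1.0, 0.002
xgrid = np.arange(100) * dx
u = np.exp(-((xgrid - 5.0) ** 2))     # initial Gaussian bump

def laplacian(u):
    """Central-difference second derivative with periodic boundaries."""
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2

for _ in range(500):                  # explicit Euler time stepping
    u = u + dt * D * laplacian(u)     # reads like: d_t u = D * laplacian(u)

total = u.sum() * dx                  # diffusion conserves total "mass"
print(f"total mass: {total:.4f}, peak height: {u.max():.3f}")
```

The point of a generated code is that the operator definitions and the parallel bookkeeping are produced automatically, so the scientist only writes the last, equation-like line.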

    The Principal Investigator of the study, Ivo Sbalzarini, summarizes: “Ten years of our research went into creating this simulation framework and enhancing the productivity of computational science. This now all comes together in a tool for understanding the three-dimensional behavior of living materials. Open-source, scalable, and capable of handling complex scenarios, our code opens new avenues for modeling active materials. This may finally lead us to understand how cells and tissues attain their shape, addressing the fundamental question of morphogenesis that has puzzled scientists for centuries. But it may also help us design artificial biological machines with minimal numbers of components.”
    The computer code that supports the findings of this study is openly available in the 3Dactive-hydrodynamics GitHub repository at https://github.com/mosaic-group/3Dactive-hydrodynamics
    The open source framework OpenFPM is available at https://github.com/mosaic-group/openfpm_pdata
    Related publications for the embedded computer language and the OpenFPM software library: https://doi.org/10.1016/j.cpc.2019.03.007 and https://doi.org/10.1140/epje/s10189-021-00121-x