More stories

  • A sibling-guided strategy to capture the 3D shape of the human face

    A new strategy for capturing the 3D shape of the human face draws on data from sibling pairs and leads to identification of novel links between facial shape traits and specific locations within the human genome. Hanne Hoskens of the Department of Human Genetics at Katholieke Universiteit in Leuven, Belgium, and colleagues present these findings in the open-access journal PLOS Genetics.
    The ability to capture the 3D shape of the human face — and how it varies between individuals with different genetics — can inform a variety of applications, including understanding human evolution, planning for surgery, and forensic sciences. However, existing tools for linking genetics to physical traits require input of simple measurements, such as distance between the eyes, that do not adequately capture the complexities of facial shape.
    Now, Hoskens and colleagues have developed a new strategy for capturing these complexities in a format that can then be studied with existing analytical tools. To do so, they drew on the facial similarities often seen between genetically related siblings. The strategy was initially developed by learning from 3D facial data from a group of 273 pairs of siblings of European ancestry, which revealed 1,048 facial traits that are shared between siblings — and therefore presumably have a genetic basis.
    The researchers then applied their new strategy for capturing face shape to 8,246 individuals of European ancestry, for whom they also had genetic information. This produced data on face-shape similarities between siblings that could then be combined with their genetic data and analyzed with existing tools for linking genetics to physical traits. Doing so revealed 218 locations within the human genome, or loci, that were associated with facial traits shared by siblings.
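The two-step logic, first extract facial traits that siblings share and only then run the genome-wide scan, can be sketched with synthetic numbers. Everything below (the trait space, the noise level, the 0.5 cutoff) is an invented illustration, not the study's actual shape-modeling method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: each face reduced to p shape coordinates
# (think flattened 3D landmark positions), for 273 sibling pairs.
n_pairs, p = 273, 50
shared = rng.normal(size=(n_pairs, p))                  # familial component
face_a = shared + 0.3 * rng.normal(size=(n_pairs, p))   # sibling 1's face
face_b = shared + 0.3 * rng.normal(size=(n_pairs, p))   # sibling 2's face

def sibling_correlation(a, b):
    """Per-trait correlation between siblings: a crude proxy for the
    shared (presumably genetic) signal the study extracts with far more
    sophisticated shape modeling."""
    a_c = a - a.mean(axis=0)
    b_c = b - b.mean(axis=0)
    num = (a_c * b_c).sum(axis=0)
    den = np.sqrt((a_c ** 2).sum(axis=0) * (b_c ** 2).sum(axis=0))
    return num / den

r = sibling_correlation(face_a, face_b)
# Keep only traits siblings share strongly; these become the phenotypes
# fed into a standard genome-wide association scan.
heritable_traits = np.where(r > 0.5)[0]
print(len(heritable_traits), "candidate heritable traits out of", p)
```

The point of the sketch is the filtering step: traits that do not correlate between siblings are dropped before any genetics enters, so the downstream association tools only ever see candidate heritable phenotypes.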
    Further examination of the 218 loci showed that some are the sites of genes that have previously been linked to embryonic facial development and abnormal development of head and facial bones.
    The authors note that this study could serve as the basis for several different directions of future research, including replication of the findings in larger populations, and investigation of the identified genetic loci in order to better understand the biological processes involved in facial development.
    Hoskens adds, “Since siblings are likely to share facial features due to close kinship, traits that are biologically relevant can be extracted from phenotypically similar sibling pairs.”
    Story Source:
Materials provided by PLOS. Note: Content may be edited for style and length.

  • Making AI algorithms show their work

    Artificial intelligence (AI) learning machines can be trained to solve problems and puzzles on their own instead of using rules that we made for them. But often, researchers do not know what rules the machines make for themselves. Cold Spring Harbor Laboratory (CSHL) Assistant Professor Peter Koo developed a new method that quizzes a machine-learning program to figure out what rules it learned on its own and if they are the right ones.
    Computer scientists “train” an AI machine to make predictions by presenting it with a set of data. The machine extracts a series of rules and operations — a model — based on information it encountered during its training. Koo says:
    “If you learn general rules about the math instead of memorizing the equations, you know how to solve those equations. So rather than just memorizing those equations, we hope that these models are learning to solve it and now we can give it any equation and it will solve it.”
    Koo developed a type of AI called a deep neural network (DNN) to look for patterns in RNA strands that increase the ability of a protein to bind to them. Koo trained his DNN, called Residual Bind (RB), with thousands of RNA sequences matched to protein binding scores, and RB became good at predicting scores for new RNA sequences. But Koo did not know whether the machine was focusing on a short sequence of RNA letters — a motif — that humans might expect, or some other secondary characteristic of the RNA strands that they might not.
    Koo and his team developed a new method, called Global Importance Analysis, to test what rules RB generated to make its predictions. He presented the trained network with a carefully designed set of synthetic RNA sequences containing different combinations of motifs and features that the scientists thought might influence RB’s assessments.
    They discovered the network considered more than just the spelling of a short motif. It factored in how the RNA strand might fold over and bind to itself, how close one motif is to another, and other features.
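The core of Global Importance Analysis can be sketched in a few lines. The model below is a hypothetical toy scorer standing in for a trained network like Residual Bind; the analysis measures how much embedding a motif into random background sequences shifts the model's average prediction:

```python
import numpy as np

rng = np.random.default_rng(1)
ALPHABET = list("ACGU")

def random_seq(length):
    return "".join(rng.choice(ALPHABET, size=length))

def embed(seq, motif, pos):
    return seq[:pos] + motif + seq[pos + len(motif):]

def model(seq):
    """Hypothetical stand-in for a trained network such as Residual Bind:
    this toy scorer secretly rewards occurrences of the motif UGCAUG."""
    return float(seq.count("UGCAUG"))

def global_importance(motif, n=2000, length=40):
    """Average change in the model's prediction when the motif is embedded
    into otherwise random background sequences."""
    deltas = []
    for _ in range(n):
        bg = random_seq(length)
        pos = int(rng.integers(0, length - len(motif) + 1))
        deltas.append(model(embed(bg, motif, pos)) - model(bg))
    return float(np.mean(deltas))

print("UGCAUG:", global_importance("UGCAUG"))  # the toy model relies on this motif
print("AAAAAA:", global_importance("AAAAAA"))  # the toy model ignores this one
```

Probing a trained model with designed synthetic inputs, rather than inspecting its weights, is what lets the method ask targeted questions, such as whether motif spacing or RNA folding matters, without retraining anything.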
    Koo hopes to test some key results in a laboratory. But rather than test every prediction in that lab, Koo’s new method acts like a virtual lab. Researchers can design and test millions of different variables computationally, far more than humans could test in a real-world lab.
    “Biology is super anecdotal. You can find a sequence, you can find a pattern but you don’t know ‘Is that pattern really important?’ You have to do these interventional experiments. In this case, all my experiments are all done by just asking the neural network.”
    Story Source:
Materials provided by Cold Spring Harbor Laboratory. Original written by Luis Sandoval. Note: Content may be edited for style and length.

  • Rivers might not be as resilient to drought as once thought

    Rivers ravaged by a lengthy drought may not be able to recover, even after the rains return. Seven years after the Millennium drought baked southeastern Australia, a large fraction of the region’s rivers still show no signs of returning to their predrought water flow, researchers report in the May 14 Science.

    There’s “an implicit assumption that no matter how big a disturbance is, the water will always come back — it’s just a matter of how long it takes,” says Tim Peterson, a hydrologist at Monash University in Melbourne, Australia. “I’ve never been satisfied with that.”

    The years-long drought in southeastern Australia, which began sometime between 1997 and 2001 and lasted until 2010, offered a natural experiment to test this assumption, he says. “It wasn’t the most severe drought” the region has ever experienced, but it was the longest period of low rainfall in the region since about 1900.

    Peterson and colleagues analyzed annual and seasonal streamflow rates in 161 river basins in the region from before, during and after the drought. By 2017, they found, 37 percent of those river basins still weren’t seeing the amount of water flow that they had predrought. Furthermore, of those low-flow rivers, the vast majority — 80 percent — also show no signs that they might recover in the future, the team found.
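A minimal sketch of this kind of recovery test, with fabricated flow numbers and a crude fixed threshold in place of the study's statistical trend analysis:

```python
import numpy as np

rng = np.random.default_rng(2)

# Fabricated example: annual streamflow for a set of basins, split into a
# predrought window (10 years) and a post-drought window (7 years).
n_basins = 161
pre = rng.normal(300.0, 40.0, size=(n_basins, 10))
shift = rng.uniform(0.5, 1.2, size=(n_basins, 1))   # per-basin flow change
post = pre.mean(axis=1, keepdims=True) * shift + rng.normal(0.0, 20.0, size=(n_basins, 7))

def unrecovered(pre, post, tolerance=0.9):
    """Flag basins whose mean post-drought flow is still well below the
    predrought mean (a crude stand-in for the study's statistical tests)."""
    return post.mean(axis=1) < tolerance * pre.mean(axis=1)

flags = unrecovered(pre, post)
print(f"{flags.mean():.0%} of basins still below predrought flow")
```

The real analysis also asks whether each low-flow basin shows any trend back toward its baseline, which is what supports the stronger claim that most of these rivers show no sign of future recovery.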

    Many of southeastern Australia’s rivers had bounced back from previous droughts, including a severe but brief episode in 1983. But even heavy rains in 2010, marking the end of the Millennium drought, weren’t enough to return these basins to their earlier state. That suggests that there is, after all, a limit to rivers’ resilience.

What’s changed in these river basins isn’t yet clear, Peterson says. Postdrought precipitation was similar to predrought precipitation, yet the water isn’t ending up in the streamflow, so it must be going somewhere else. The team examined various possibilities: The water infiltrated into the ground and was stored as groundwater, or it never made it to the ground at all — possibly intercepted by leaves and then evaporated back into the air.

    But none of these explanations were borne out by studies of these sites, the researchers report. The remaining, and most probable, possibility is that the environment has changed: Water is evaporating from soils and transpiring from plants more quickly than it did predrought.

Peterson has long suggested that under certain conditions rivers might not, in fact, recover — and this study confirms that theoretical work, says Peter Troch, a hydrologist at the University of Arizona in Tucson. Enhanced soil evaporation and plant transpiration are examples of positive feedbacks, processes that can enhance the impacts of a drought. “Until his work, this lack of resilience was not anticipated, and all hydrological models did not account for such possibility,” Troch says.

    “This study will definitely inspire other researchers to undertake such work,” he notes. “Hopefully we can gain more insight into the functioning of [river basins’] response to climate change.”

Indeed, the finding that rivers have “finite resilience” to drought is of particular concern as the planet warms and lengthier droughts become more likely, writes hydrologist Flavia Tauro in a commentary in the same issue of Science.

  • New evidence for electron's dual nature found in a quantum spin liquid

    A new discovery led by Princeton University could upend our understanding of how electrons behave under extreme conditions in quantum materials. The finding provides experimental evidence that this familiar building block of matter behaves as if it is made of two particles: one particle that gives the electron its negative charge and another that supplies its magnet-like property, known as spin.
    “We think this is the first hard evidence of spin-charge separation,” said Nai Phuan Ong, Princeton’s Eugene Higgins Professor of Physics and senior author on the paper published this week in the journal Nature Physics.
The experimental results fulfill a prediction made decades ago to explain one of the most mind-bending states of matter, the quantum spin liquid. In all materials, the spin of an electron can point either up or down. In a familiar magnet, all of the spins uniformly point in one direction throughout the sample when it is cooled below a critical temperature.
    However, in spin liquid materials, the spins are unable to establish a uniform pattern even when cooled very close to absolute zero. Instead, the spins are constantly changing in a tightly coordinated, entangled choreography. The result is one of the most entangled quantum states ever conceived, a state of great interest to researchers in the growing field of quantum computing.
    To describe this behavior mathematically, Nobel prize-winning Princeton physicist Philip Anderson (1923-2020), who first predicted the existence of spin liquids in 1973, proposed an explanation: in the quantum regime an electron may be regarded as composed of two particles, one bearing the electron’s negative charge and the other containing its spin. Anderson called the spin-containing particle a spinon.
In this new study, the team searched for signs of the spinon in a spin liquid composed of ruthenium and chlorine atoms. At temperatures a fraction of a Kelvin above absolute zero (or roughly -452 degrees Fahrenheit) and in the presence of a high magnetic field, ruthenium chloride crystals enter the spin liquid state.

  • Quantum machine learning hits a limit

    A new theorem from the field of quantum machine learning has poked a major hole in the accepted understanding about information scrambling.
    “Our theorem implies that we are not going to be able to use quantum machine learning to learn typical random or chaotic processes, such as black holes. In this sense, it places a fundamental limit on the learnability of unknown processes,” said Zoe Holmes, a post-doc at Los Alamos National Laboratory and coauthor of the paper describing the work published today in Physical Review Letters.
    “Thankfully, because most physically interesting processes are sufficiently simple or structured so that they do not resemble a random process, the results don’t condemn quantum machine learning, but rather highlight the importance of understanding its limits,” Holmes said.
    In the classic Hayden-Preskill thought experiment, a fictitious Alice tosses information such as a book into a black hole that scrambles the text. Her companion, Bob, can still retrieve it using entanglement, a unique feature of quantum physics. However, the new work proves that fundamental constraints on Bob’s ability to learn the particulars of a given black hole’s physics means that reconstructing the information in the book is going to be very difficult or even impossible.
“Any information run through an information scrambler such as a black hole will reach a point where the machine learning algorithm stalls out on a barren plateau and thus becomes untrainable. That means the algorithm can’t learn scrambling processes,” said Andrew Sornborger, a computer scientist at Los Alamos and coauthor of the paper. Sornborger is Director of the Quantum Science Center at Los Alamos and leader of the Center’s algorithms and simulation thrust. The Center is a multi-institutional collaboration led by Oak Ridge National Laboratory.
    Barren plateaus are regions in the mathematical space of optimization algorithms where the ability to solve the problem becomes exponentially harder as the size of the system being studied increases. This phenomenon, which severely limits the trainability of large scale quantum neural networks, was described in a recent paper by a related Los Alamos team.
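The barren-plateau effect can be illustrated with a toy numpy experiment. This is a sketch of the general phenomenon, not the paper's quantum-machine-learning setup: for a cost built from Haar-random unitaries, the variance of the gradient with respect to a trainable angle shrinks rapidly as the number of qubits grows, so on larger systems the optimizer sees an almost flat landscape.

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_unitary(dim):
    """Draw a Haar-random unitary via QR decomposition with a phase fix."""
    z = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2.0)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def grad_sample(n_qubits):
    """One random sample of dC/dtheta at theta = 0 for the cost
    C(theta) = <0| W^dag e^{i theta G} V^dag O V e^{-i theta G} W |0>,
    a toy stand-in for one trainable angle deep inside a random circuit."""
    dim = 2 ** n_qubits
    V, W = haar_unitary(dim), haar_unitary(dim)
    O = np.zeros((dim, dim)); O[0, 0] = 1.0        # projector observable
    G = np.diag(rng.choice([1.0, -1.0], size=dim))  # Pauli-like generator
    psi = W[:, 0]                                   # the state W|0>
    A = V.conj().T @ O @ V
    comm = G @ A - A @ G                            # dC/dtheta = i <psi|[G, A]|psi>
    return float((1j * (psi.conj() @ comm @ psi)).real)

# Gradient variance collapses as the system grows: the barren plateau.
variances = {}
for n in (2, 4, 6):
    grads = [grad_sample(n) for _ in range(300)]
    variances[n] = float(np.var(grads))
    print(n, variances[n])
```

The mean gradient is zero by symmetry, so the variance is the signal: once it is exponentially small in system size, gradient-based training stalls, which is the trainability barrier the theorem formalizes for scrambling processes.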
    “Recent work has identified the potential for quantum machine learning to be a formidable tool in our attempts to understand complex systems,” said Andreas Albrecht, a co-author of the research. Albrecht is Director of the Center for Quantum Mathematics and Physics (QMAP) and Distinguished Professor, Department of Physics and Astronomy, at UC Davis. “Our work points out fundamental considerations that limit the capabilities of this tool.”
    In the Hayden-Preskill thought experiment, Alice attempts to destroy a secret, encoded in a quantum state, by throwing it into nature’s fastest scrambler, a black hole. Bob and Alice are the fictitious quantum dynamic duo typically used by physicists to represent agents in a thought experiment.
“You might think that this would make Alice’s secret pretty safe,” Holmes said, “but Hayden and Preskill argued that if Bob knows the unitary dynamics implemented by the black hole and shares a maximally entangled state with the black hole, it is possible to decode Alice’s secret by collecting a few additional photons emitted from the black hole. But this prompts the question: how could Bob learn the dynamics implemented by the black hole? Well, not by using quantum machine learning, according to our findings.”
    A key piece of the new theorem developed by Holmes and her coauthors assumes no prior knowledge of the quantum scrambler, a situation unlikely to occur in real-world science.
“Our work draws attention to the tremendous leverage even small amounts of prior information may play in our ability to extract information from complex systems and potentially reduce the power of our theorem,” Albrecht said. “Our ability to do this can vary greatly among different situations (as we scan from theoretical consideration of black holes to concrete situations controlled by humans here on earth). Future research is likely to turn up interesting examples, both of situations where our theorem remains fully in force, and others where it can be evaded.”

  • How AIs ask for personal information is important for gaining user trust

People may be reluctant to give their personal information to artificial intelligence (AI) systems, even though those systems need it to provide more accurate and personalized services. A new study reveals, however, that the manner in which the systems ask users for information can make a difference.
    In a study, Penn State researchers report that users responded differently when AIs either offered to help the user, or asked for help from the user. This response influenced whether the user trusted the AI with their personal information. They added that these introductions from the AI could be designed in a way to both increase users’ trust, as well as raise their awareness about the importance of personal information.
    The researchers, who presented their findings today at the virtual 2021 ACM CHI Conference on Human Factors in Computing Systems, the premier international conference of human-computer interaction research, found that people who are familiar with technology — power users — preferred AIs that are in need of help, or help-seeking, while non-expert users were more likely to prefer AIs that introduce themselves as simultaneously help-seekers and help-providers.
    As AIs become increasingly ubiquitous, developers need to create systems that can better relate to humans, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory.
    “There’s a need for us to re-think how AI systems talk to human users,” said Sundar. “This has come to the surface because there are rising concerns about how AI systems are starting to take over our lives and know more about us than we realize. So, given these concerns, it may be better if we start to switch from the traditional dialogue scripts into a more collaborative, cooperative communication that acknowledges the agency of the user.”
    Here to help?
The researchers said that traditional AI dialogues usually offer introductions that frame their role as a helper.

  • New research may explain shortages in STEM careers

    A new study by the University of Georgia revealed that more college students change majors within the STEM pipeline than leave the career path of science, technology, engineering and mathematics altogether.
Funded by a National Institutes of Health grant and a National Science Foundation Postdoctoral Fellowship, and done in collaboration with the University of Wisconsin, the study examined interviews, surveys and institutional data gathered from 1,193 students at a U.S. midwestern university over more than six years to observe a single area of the STEM pipeline: biomedical fields of study.
    Out of 921 students who stayed in the biomedical pipeline through graduation, almost half changed their career plans within the biomedical fields.
    “This was almost double the number of students who left biomedical fields altogether,” said Emily Rosenzweig, co-author of the study and assistant professor in the Mary Frances Early College of Education’s department of educational psychology. “This suggests that if we want to fully understand why there are shortages in certain STEM careers, we need to look at those who change plans within the pipeline, not just those who leave it.”
    Rosenzweig examined students’ motivations for changing career plans and found that students were more often inspired to make a change because a new field seemed more attractive.
    This finding pointed to an underexplored research area that educators, policymakers and administrators should devote more attention to in the future. Rather than focusing only on what makes students disenchanted with a particular career, factors that make alternative career paths seem valuable to students need to be considered.
    “The sheer number of changes made by students who remained in the biomedical pipeline highlights the divergence of paths students take in their career decision-making,” Rosenzweig said. “We should not simply assume that students are staying on course and progressing smoothly toward intended careers just because they have not left the [STEM] pipeline.”
    Ultimately, the research provides new insights about students’ motivations for choosing various careers inside the STEM pipeline and demonstrates the importance of understanding this group if schools are to promote retention in particular STEM careers.
    Story Source:
Materials provided by University of Georgia. Original written by Lauren Leathers. Note: Content may be edited for style and length.

  • Interactive typeface for digital text

AdaptiFont has recently been presented at CHI, the leading conference on human factors in computing.
Language is without doubt the most pervasive medium for exchanging knowledge between humans. However, spoken language or abstract text needs to be made visible in order to be read, be it in print or on screen.
    How does the way a text looks affect its readability, that is, how it is being read, processed, and understood? A team at TU Darmstadt’s Centre for Cognitive Science investigated this question at the intersection of perceptual science, cognitive science, and linguistics. Electronic text is even more complex. Texts are read on different devices under different external conditions. And although any digital text is formatted initially, users might resize it on screen, change brightness and contrast of the display, or even select a different font when reading text on the web.
The team of researchers from TU Darmstadt has now developed a system that leaves font design to the user’s visual system. First, they needed a way of synthesizing new fonts. This was achieved with a machine learning algorithm that learned the structure of fonts by analysing 25 popular and classic typefaces. The system can create an infinite number of new fonts that are intermediate forms of the others — for example, visually halfway between Helvetica and Times New Roman.
Some fonts may make text more difficult to read and slow the reader down; others may help the user read more fluently. By measuring reading speed, a second algorithm can then generate new typefaces that increase it.
In a laboratory experiment in which users read texts for over an hour, the research team showed that their algorithm indeed generates new fonts that increase individual users’ reading speed. Interestingly, all readers had their own personalized font that made reading especially easy for them. However, this individual favorite typeface does not necessarily fit all situations. “AdaptiFont therefore can be understood as a system which creates fonts for an individual dynamically and continuously while reading, which maximizes the reading speed at the time of use. This may depend on the content of the text, whether you are tired, or perhaps are using different display devices,” explains Professor Constantin A. Rothkopf, Centre for Cognitive Science and head of the Institute of Psychology of Information Processing at TU Darmstadt.
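The generate-measure-adapt loop can be sketched with invented numbers. The font vectors, the simulated reading-speed function, and the simple 1-D grid search below are all hypothetical stand-ins for AdaptiFont's learned font space and real user measurements:

```python
import numpy as np

# Hypothetical 3-D "font space"; AdaptiFont's real space is learned from
# 25 typefaces, and these anchor vectors are invented for illustration.
helvetica_like = np.array([0.2, 0.8, 0.5])
times_like = np.array([0.9, 0.3, 0.4])

def interpolate(a, b, t):
    """A synthesized font as an intermediate form of two others
    (t = 0.5 would be 'visually halfway' between them)."""
    return (1.0 - t) * a + t * b

def reading_speed(font):
    """Simulated reading speed in words per minute; the real system
    measures the user. This reader's (hypothetical) ideal font peaks it."""
    ideal = np.array([0.6, 0.5, 0.45])
    return 250.0 - 400.0 * float(np.sum((font - ideal) ** 2))

# Generate candidate fonts along the interpolation axis and keep whichever
# one the simulated reader reads fastest.
ts = np.linspace(0.0, 1.0, 21)
speeds = [reading_speed(interpolate(helvetica_like, times_like, t)) for t in ts]
best_t = float(ts[int(np.argmax(speeds))])
print(f"fastest-read font sits at interpolation weight t = {best_t:.2f}")
```

The real system repeats this loop continuously while the user reads, so the "best" font can drift with fatigue, content, or display conditions rather than being fixed once.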
    The AdaptiFont system was recently presented to the scientific community at the Conference on Human Factors in Computing Systems (CHI). A patent application has been filed. Future possible applications are with all electronic devices on which text is read.
    Story Source:
Materials provided by Technische Universität Darmstadt. Note: Content may be edited for style and length.