More stories

  • Future sparkles for diamond-based quantum technology

    Marilyn Monroe famously sang that diamonds are a girl’s best friend, but they are also very popular with quantum scientists. Two new research breakthroughs are poised to accelerate the development of synthetic diamond-based quantum technology, improve scalability, and dramatically reduce manufacturing costs.
    While silicon is traditionally used for computer and mobile phone hardware, diamond has unique properties that make it particularly useful as a base for emerging quantum technologies such as quantum supercomputers, secure communications and sensors.
    However, there are two key problems: cost, and the difficulty of fabricating the single-crystal diamond layer, which is thinner than one millionth of a metre.
    A research team from the ARC Centre of Excellence for Transformative Meta-Optics at the University of Technology Sydney (UTS), led by Professor Igor Aharonovich, has just published two research papers, in Nanoscale and Advanced Quantum Technologies, that address these challenges.
    “For diamond to be used in quantum applications, we need to precisely engineer ‘optical defects’ in the diamond devices — cavities and waveguides — to control, manipulate and read out information in the form of qubits — the quantum version of classical computer bits,” said Professor Aharonovich.
    “It’s akin to cutting holes or carving gullies in a super thin sheet of diamond, to ensure light travels and bounces in the desired direction,” he said.

  • Virtual reality warps your sense of time

    Psychology researchers at UC Santa Cruz have found that playing games in virtual reality creates an effect called “time compression,” where time goes by faster than you think. Grayson Mullen, who was a cognitive science undergraduate at the time, worked with Psychology Professor Nicolas Davidenko to design an experiment that tested how virtual reality’s effects on a game player’s sense of time differ from those of conventional monitors. The results are now published in the journal Timing & Time Perception.
    Mullen designed a maze game that could be played in both virtual reality and conventional formats, and the research team then recruited 41 UC Santa Cruz undergraduate students to test the game. Participants played in both formats, with researchers randomizing which version of the game each student started with. Both versions were essentially the same, but the mazes in each varied slightly so that there was no repetition between formats.
    Participants were asked to stop playing the game whenever they felt like five minutes had passed. Since there were no clocks available, each person had to make this estimate based on their own perception of the passage of time.
    Prior studies of time perception in virtual reality have often asked participants about their experiences after the fact, but in this experiment, the research team wanted to integrate a time-keeping task into the virtual reality experience in order to capture what was happening in the moment. Researchers recorded the actual amount of time that had passed when each participant stopped playing the game, and this revealed a gap between participants’ perception of time and the reality.
    The study found that participants who played the virtual reality version of the game first played for an average of 72.6 seconds longer before feeling that five minutes had passed than students who started on a conventional monitor. In other words, students played for 28.5 percent more time than they realized in virtual reality, compared to conventional formats.
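    As a rough illustration only (not the study’s analysis code), the short sketch below shows how a gap between perceived and actual play time, and a percentage comparison of the kind reported above, could be computed; the stop times, group sizes and the exact comparison used here are hypothetical assumptions.

      # Minimal sketch with hypothetical data: participants were asked to stop
      # when they felt five minutes had passed; compare actual stop times.
      import statistics

      TARGET_SECONDS = 5 * 60  # the perceived duration participants aimed for

      # Hypothetical actual stop times (seconds) for each starting condition.
      vr_first_stops = [350, 390, 410, 365, 400]
      monitor_first_stops = [300, 320, 310, 295, 330]

      vr_mean = statistics.mean(vr_first_stops)
      monitor_mean = statistics.mean(monitor_first_stops)

      print(f"VR-first overshot the 5-minute target by {vr_mean - TARGET_SECONDS:.1f} s on average")
      print(f"VR-first played {vr_mean - monitor_mean:.1f} s longer than monitor-first")
      print(f"Relative difference: {(vr_mean - monitor_mean) / monitor_mean:.1%}")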
    This time compression effect was observed only among participants who played the game in virtual reality first. The paper concluded this was because participants based their judgement of time in the second round on whatever initial time estimates they made during the first round, regardless of format. But if the time compression observed in the first round is translatable to other types of virtual reality experiences and longer time intervals, it could be a big step forward in understanding how this effect works.

  • A sibling-guided strategy to capture the 3D shape of the human face

    A new strategy for capturing the 3D shape of the human face draws on data from sibling pairs and leads to identification of novel links between facial shape traits and specific locations within the human genome. Hanne Hoskens of the Department of Human Genetics at the Katholieke Universiteit Leuven in Belgium, and colleagues present these findings in the open-access journal PLOS Genetics.
    The ability to capture the 3D shape of the human face — and how it varies between individuals with different genetics — can inform a variety of applications, including understanding human evolution, planning for surgery, and forensic sciences. However, existing tools for linking genetics to physical traits require input of simple measurements, such as distance between the eyes, that do not adequately capture the complexities of facial shape.
    Now, Hoskens and colleagues have developed a new strategy for capturing these complexities in a format that can then be studied with existing analytical tools. To do so, they drew on the facial similarities often seen between genetically related siblings. The strategy was initially developed by learning from 3D facial data from a group of 273 pairs of siblings of European ancestry, which revealed 1,048 facial traits that are shared between siblings — and therefore presumably have a genetic basis.
    The researchers then applied their new strategy for capturing face shape to 8,246 individuals of European ancestry, for whom they also had genetic information. This produced data on face-shape similarities between siblings that could then be combined with their genetic data and analyzed with existing tools for linking genetics to physical traits. Doing so revealed 218 locations within the human genome, or loci, that were associated with facial traits shared by siblings.
    Further examination of the 218 loci showed that some are the sites of genes that have previously been linked to embryonic facial development and abnormal development of head and facial bones.
    The authors note that this study could serve as the basis for several different directions of future research, including replication of the findings in larger populations, and investigation of the identified genetic loci in order to better understand the biological processes involved in facial development.
    Hoskens adds, “Since siblings are likely to share facial features due to close kinship, traits that are biologically relevant can be extracted from phenotypically similar sibling pairs.”
    Story Source:
    Materials provided by PLOS.

  • Making AI algorithms show their work

    Artificial intelligence (AI) learning machines can be trained to solve problems and puzzles on their own instead of using rules that we made for them. But often, researchers do not know what rules the machines make for themselves. Cold Spring Harbor Laboratory (CSHL) Assistant Professor Peter Koo developed a new method that quizzes a machine-learning program to figure out what rules it learned on its own and if they are the right ones.
    Computer scientists “train” an AI machine to make predictions by presenting it with a set of data. The machine extracts a series of rules and operations — a model — based on information it encountered during its training. Koo says:
    “If you learn general rules about the math instead of memorizing the equations, you know how to solve those equations. So rather than just memorizing those equations, we hope that these models are learning to solve it and now we can give it any equation and it will solve it.”
    Koo developed a type of AI called a deep neural network (DNN) to look for patterns in RNA strands that increase the ability of a protein to bind to them. Koo trained his DNN, called Residual Bind (RB), with thousands of RNA sequences matched to protein binding scores, and RB became good at predicting scores for new RNA sequences. But Koo did not know whether the machine was focusing on a short sequence of RNA letters — a motif — that humans might expect, or some other secondary characteristic of the RNA strands that they might not.
    Koo and his team developed a new method, called Global Importance Analysis, to test what rules RB generated to make its predictions. He presented the trained network with a carefully designed set of synthetic RNA sequences containing different combinations of motifs and features that the scientists thought might influence RB’s assessments.
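    The authors’ implementation is not reproduced here, but the sketch below illustrates the general idea behind such a global-importance-style test: embed a candidate motif into many random background sequences and measure the average change in the model’s predicted binding score. The model_predict function and the UGCAUG motif are hypothetical stand-ins for the trained network and a motif of interest.

      # Sketch of a global-importance-style test (not the authors' code):
      # embed a candidate motif into random background RNA sequences and
      # measure the average change in predicted binding score.
      import numpy as np

      ALPHABET = list("ACGU")
      rng = np.random.default_rng(0)

      def random_sequences(n, length):
          return ["".join(rng.choice(ALPHABET, size=length)) for _ in range(n)]

      def embed_motif(seq, motif, position):
          return seq[:position] + motif + seq[position + len(motif):]

      def model_predict(sequences):
          # Hypothetical stand-in for the trained network (RB in the story).
          return np.array([s.count("UGCAUG") for s in sequences], dtype=float)

      def global_importance(motif, n=1000, length=50, position=20):
          backgrounds = random_sequences(n, length)
          with_motif = [embed_motif(s, motif, position) for s in backgrounds]
          return model_predict(with_motif).mean() - model_predict(backgrounds).mean()

      print(global_importance("UGCAUG"))  # average effect of embedding the motif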
    They discovered the network considered more than just the spelling of a short motif. It factored in how the RNA strand might fold over and bind to itself, how close one motif is to another, and other features.
    Koo hopes to test some key results in a laboratory. But rather than test every prediction in that lab, Koo’s new method acts like a virtual lab. Researchers can design and test millions of different variables computationally, far more than humans could test in a real-world lab.
    “Biology is super anecdotal. You can find a sequence, you can find a pattern but you don’t know ‘Is that pattern really important?’ You have to do these interventional experiments. In this case, all my experiments are all done by just asking the neural network.”
    Story Source:
    Materials provided by Cold Spring Harbor Laboratory. Original written by Luis Sandoval.

  • New evidence for electron's dual nature found in a quantum spin liquid

    A new discovery led by Princeton University could upend our understanding of how electrons behave under extreme conditions in quantum materials. The finding provides experimental evidence that this familiar building block of matter behaves as if it is made of two particles: one particle that gives the electron its negative charge and another that supplies its magnet-like property, known as spin.
    “We think this is the first hard evidence of spin-charge separation,” said Nai Phuan Ong, Princeton’s Eugene Higgins Professor of Physics and senior author on the paper published this week in the journal Nature Physics.
    The experimental results fulfill a prediction made decades ago to explain one of the most mind-bending states of matter, the quantum spin liquid. In all materials, the spin of an electron can point either up or down. In the familiar magnet, all of the spins uniformly point in one direction throughout the sample when the temperature drops below a critical temperature.
    However, in spin liquid materials, the spins are unable to establish a uniform pattern even when cooled very close to absolute zero. Instead, the spins are constantly changing in a tightly coordinated, entangled choreography. The result is one of the most entangled quantum states ever conceived, a state of great interest to researchers in the growing field of quantum computing.
    To describe this behavior mathematically, Nobel prize-winning Princeton physicist Philip Anderson (1923-2020), who first predicted the existence of spin liquids in 1973, proposed an explanation: in the quantum regime an electron may be regarded as composed of two particles, one bearing the electron’s negative charge and the other containing its spin. Anderson called the spin-containing particle a spinon.
    In this new study, the team searched for signs of the spinon in a spin liquid composed of ruthenium and chlorine atoms. At temperatures a fraction of a kelvin above absolute zero (roughly -452 degrees Fahrenheit) and in the presence of a high magnetic field, ruthenium chloride crystals enter the spin liquid state.

  • Quantum machine learning hits a limit

    A new theorem from the field of quantum machine learning has poked a major hole in the accepted understanding about information scrambling.
    “Our theorem implies that we are not going to be able to use quantum machine learning to learn typical random or chaotic processes, such as black holes. In this sense, it places a fundamental limit on the learnability of unknown processes,” said Zoe Holmes, a post-doc at Los Alamos National Laboratory and coauthor of the paper describing the work published today in Physical Review Letters.
    “Thankfully, because most physically interesting processes are sufficiently simple or structured so that they do not resemble a random process, the results don’t condemn quantum machine learning, but rather highlight the importance of understanding its limits,” Holmes said.
    In the classic Hayden-Preskill thought experiment, a fictitious Alice tosses information such as a book into a black hole that scrambles the text. Her companion, Bob, can still retrieve it using entanglement, a unique feature of quantum physics. However, the new work proves that fundamental constraints on Bob’s ability to learn the particulars of a given black hole’s physics mean that reconstructing the information in the book is going to be very difficult or even impossible.
    “Any information run through an information scrambler such as a black hole will reach a point where the machine learning algorithm stalls out on a barren plateau and thus becomes untrainable. That means the algorithm can’t learn scrambling processes,” said Andrew Sornborger, a computer scientist at Los Alamos and coauthor of the paper. Sornborger is Director of the Quantum Science Center at Los Alamos and leader of the Center’s algorithms and simulation thrust. The Center is a multi-institutional collaboration led by Oak Ridge National Laboratory.
    Barren plateaus are regions in the mathematical space of optimization algorithms where the ability to solve the problem becomes exponentially harder as the size of the system being studied increases. This phenomenon, which severely limits the trainability of large-scale quantum neural networks, was described in a recent paper by a related Los Alamos team.
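    As an illustration of the general phenomenon rather than the Los Alamos team’s calculation, the sketch below estimates the variance of one cost-gradient component for randomly initialized layered circuits of increasing width, using the PennyLane simulator; the circuit layout, depth, observable and sample counts are illustrative assumptions.

      # Sketch (illustrative, not the paper's calculation): for randomly
      # initialized layered circuits, the variance of a cost-gradient
      # component shrinks rapidly as the number of qubits grows.
      import numpy as np
      import pennylane as qml
      from pennylane import numpy as pnp

      def gradient_variance(n_qubits, n_layers=5, n_samples=50, seed=0):
          rng = np.random.default_rng(seed)
          dev = qml.device("default.qubit", wires=n_qubits)

          @qml.qnode(dev)
          def cost(params):
              for layer in range(n_layers):
                  for w in range(n_qubits):
                      qml.RY(params[layer, w], wires=w)
                  for w in range(n_qubits - 1):
                      qml.CNOT(wires=[w, w + 1])
              return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

          grads = []
          for _ in range(n_samples):
              params = pnp.array(rng.uniform(0, 2 * np.pi, size=(n_layers, n_qubits)),
                                 requires_grad=True)
              grads.append(qml.grad(cost)(params)[0, 0])  # one fixed gradient component
          return np.var(grads)

      for n in (2, 4, 6, 8):
          print(n, gradient_variance(n))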
    “Recent work has identified the potential for quantum machine learning to be a formidable tool in our attempts to understand complex systems,” said Andreas Albrecht, a co-author of the research. Albrecht is Director of the Center for Quantum Mathematics and Physics (QMAP) and Distinguished Professor, Department of Physics and Astronomy, at UC Davis. “Our work points out fundamental considerations that limit the capabilities of this tool.”
    In the Hayden-Preskill thought experiment, Alice attempts to destroy a secret, encoded in a quantum state, by throwing it into nature’s fastest scrambler, a black hole. Bob and Alice are the fictitious quantum dynamic duo typically used by physicists to represent agents in a thought experiment.
    “You might think that this would make Alice’s secret pretty safe,” Holmes said, “but Hayden and Preskill argued that if Bob knows the unitary dynamics implemented by the black hole, and shares a maximally entangled state with the black hole, it is possible to decode Alice’s secret by collecting a few additional photons emitted from the black hole. But this prompts the question, how could Bob learn the dynamics implemented by the black hole? Well, not by using quantum machine learning, according to our findings.”
    A key piece of the new theorem developed by Holmes and her coauthors assumes no prior knowledge of the quantum scrambler, a situation unlikely to occur in real-world science.
    “Our work draws attention to the tremendous leverage even small amounts of prior information may play in our ability to extract information from complex systems and potentially reduce the power of our theorem,” Albrecht said. “Our ability to do this can vary greatly among different situations (as we scan from theoretical consideration of black holes to concrete situations controlled by humans here on earth). Future research is likely to turn up interesting examples, both of situations where our theorem remains fully in force, and others where it can be evaded.”

  • How AIs ask for personal information is important for gaining user trust

    People may be reluctant to give their personal information to artificial intelligence (AI) systems, even though those systems need it to provide more accurate and personalized services. However, a new study reveals that the manner in which the systems ask users for information can make a difference.
    In a study, Penn State researchers report that users responded differently when AIs either offered to help the user or asked for help from the user. This response influenced whether the user trusted the AI with their personal information. They added that these introductions from the AI could be designed in a way to both increase users’ trust and raise their awareness about the importance of personal information.
    The researchers, who presented their findings today at the virtual 2021 ACM CHI Conference on Human Factors in Computing Systems, the premier international conference of human-computer interaction research, found that people who are familiar with technology — power users — preferred AIs that are in need of help, or help-seeking, while non-expert users were more likely to prefer AIs that introduce themselves as simultaneously help-seekers and help-providers.
    As AIs become increasingly ubiquitous, developers need to create systems that can better relate to humans, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory.
    “There’s a need for us to re-think how AI systems talk to human users,” said Sundar. “This has come to the surface because there are rising concerns about how AI systems are starting to take over our lives and know more about us than we realize. So, given these concerns, it may be better if we start to switch from the traditional dialogue scripts into a more collaborative, cooperative communication that acknowledges the agency of the user.”
    Here to help?
    The researchers said that traditional AI dialogues usually offer introductions that frame their role as a helper.

  • New research may explain shortages in STEM careers

    A new study by the University of Georgia revealed that more college students change majors within the STEM pipeline than leave the career path of science, technology, engineering and mathematics altogether.
    Funded by a National Institutes of Health grant and a National Science Foundation Postdoctoral Fellowship and done in collaboration with the University of Wisconsin, the study examined interviews, surveys and institutional data from 1,193 students at a U.S. midwestern university for more than six years to observe a single area of the STEM pipeline: biomedical fields of study.
    Out of 921 students who stayed in the biomedical pipeline through graduation, almost half changed their career plans within the biomedical fields.
    “This was almost double the number of students who left biomedical fields altogether,” said Emily Rosenzweig, co-author of the study and assistant professor in the Mary Frances Early College of Education’s department of educational psychology. “This suggests that if we want to fully understand why there are shortages in certain STEM careers, we need to look at those who change plans within the pipeline, not just those who leave it.”
    Rosenzweig examined students’ motivations for changing career plans and found that students were more often inspired to make a change because a new field seemed more attractive.
    This finding pointed to an underexplored research area that educators, policymakers and administrators should devote more attention to in the future. Rather than focusing only on what makes students disenchanted with a particular career, they should also consider the factors that make alternative career paths seem valuable to students.
    “The sheer number of changes made by students who remained in the biomedical pipeline highlights the divergence of paths students take in their career decision-making,” Rosenzweig said. “We should not simply assume that students are staying on course and progressing smoothly toward intended careers just because they have not left the [STEM] pipeline.”
    Ultimately, the research provides new insights about students’ motivations for choosing various careers inside the STEM pipeline and demonstrates the importance of understanding this group if schools are to promote retention in particular STEM careers.
    Story Source:
    Materials provided by University of Georgia. Original written by Lauren Leathers.