More stories

  • Scientists have full state of a quantum liquid down cold

    A team of physicists has illuminated certain properties of quantum systems by observing how their fluctuations spread over time. The research offers an intricate understanding of a complex phenomenon that is foundational to quantum computing — a method that can perform certain calculations significantly more efficiently than conventional computing.
    “In an era of quantum computing it’s vital to generate a precise characterization of the systems we are building,” explains Dries Sels, an assistant professor in New York University’s Department of Physics and an author of the paper, which appears in the journal Nature Physics. “This work reconstructs the full state of a quantum liquid, consistent with the predictions of a quantum field theory — similar to those that describe the fundamental particles in our universe.”
    Sels adds that the breakthrough offers promise for technological advancement.
    “Quantum computing relies on the ability to generate entanglement between different subsystems, and that’s exactly what we can probe with our method,” he notes. “The ability to do such precise characterization could also lead to better quantum sensors — another application area of quantum technologies.”
    The research team, which included scientists from Vienna University of Technology, ETH Zurich, Free University of Berlin, and the Max Planck Institute of Quantum Optics, performed a tomography of a quantum system — the reconstruction of a specific quantum state — in order to seek experimental evidence of a theory.
    The studied quantum system consisted of ultracold atoms — atoms whose near-zero temperature slows their motion, making it easier to analyze — trapped on an atom chip.
    In their work, the scientists created two “copies” of this quantum system — cigar-shaped clouds of atoms that evolve over time without influencing each other. At different stages of this process, the team performed a series of experiments that revealed the two copies’ correlations.
    “By constructing an entire history of these correlations, we can infer what is the initial quantum state of the system and extract its properties,” explains Sels. “Initially, we have a very strongly coupled quantum liquid, which we split into two so that it evolves as two independent liquids, and then we recombine it to reveal the ripples that are in the liquid.
    “It’s like watching the ripples in a pond after throwing a rock in it and inferring the properties of the rock, such as its size, shape, and weight.”
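    A rough idea of what "constructing an entire history of these correlations" can look like in practice is sketched below. This is not the authors' analysis code; it is a minimal illustration, assuming hypothetical arrays of measured relative-phase profiles for the two copies at several evolution times, from which a two-point correlation matrix is estimated at each time.

    ```python
    # Minimal sketch (not the authors' code): estimating the history of two-point
    # phase correlations between the two "copies" of the quantum liquid.
    # `phases[t]` is assumed to hold many measured relative-phase profiles phi(z)
    # at evolution time t; all names here are hypothetical placeholders.
    import numpy as np

    def correlation_history(phases):
        """For each time, average C(z, z') = <cos(phi(z) - phi(z'))> over shots."""
        history = []
        for snapshots in phases:                 # snapshots: (n_shots, n_positions)
            diff = snapshots[:, :, None] - snapshots[:, None, :]
            history.append(np.mean(np.cos(diff), axis=0))
        return history                           # list of (n_positions, n_positions) matrices

    # Toy usage: random phase profiles standing in for real measurements.
    rng = np.random.default_rng(0)
    fake_data = [rng.normal(scale=0.3 * (t + 1), size=(500, 40)) for t in range(5)]
    corrs = correlation_history(fake_data)
    print(corrs[0].shape)                        # (40, 40) correlation matrix at the first time
    ```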
    This research was supported by grants from the Air Force Office of Scientific Research (FA9550-21-1-0236) and the U.S. Army Research Office (W911NF-20-1-0163), as well as the Austrian Science Fund (FWF) and the German Research Foundation (DFG).

  • Researchers use AI to discover new planet outside solar system

    A University of Georgia research team has confirmed evidence of a previously unknown planet outside of our solar system, and they used machine learning tools to detect it.
    A recent study by the team showed that machine learning can correctly determine whether an exoplanet is present by examining protoplanetary disks, the disks of gas surrounding newly formed stars.
    The newly published findings represent a first step toward using machine learning to identify previously overlooked exoplanets.
    “We confirmed the planet using traditional techniques, but our models directed us to run those simulations and showed us exactly where the planet might be,” said Jason Terry, doctoral student in the UGA Franklin College of Arts and Sciences department of physics and astronomy and lead author on the study.
    “When we applied our models to a set of older observations, they identified a disk that wasn’t known to have a planet despite having already been analyzed. Like previous discoveries, we ran simulations of the disk and found that a planet could re-create the observation.”
    According to Terry, the models suggested a planet’s presence, indicated by several images that strongly highlighted a particular region of the disk that turned out to have the characteristic sign of a planet — an unusual deviation in the velocity of the gas near the planet.
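    The "characteristic sign" mentioned here is a localized kink in the gas velocity. A purely illustrative sketch of flagging such a deviation is shown below; it is not the UGA team's pipeline, and the baseline, threshold, and toy data are invented for the example.

    ```python
    # Illustrative sketch only (not the UGA team's pipeline): flag a localized
    # deviation in a disk's gas-velocity map, the kind of kink the models highlighted.
    import numpy as np

    def flag_velocity_kink(velocity_map, threshold=3.0):
        """Return pixel indices where velocity deviates from a smooth azimuthal baseline."""
        baseline = np.median(velocity_map, axis=1, keepdims=True)   # smooth profile per radius
        residual = velocity_map - baseline
        sigma = np.std(residual)
        return np.argwhere(np.abs(residual) > threshold * sigma)

    # Toy usage: a smooth rotation pattern plus one injected "planet" perturbation.
    r, phi = np.meshgrid(np.linspace(1, 10, 100), np.linspace(0, 2 * np.pi, 120), indexing="ij")
    v = 1.0 / np.sqrt(r)                         # Keplerian-like fall-off with radius
    v[40:43, 60:63] += 0.5                       # localized deviation in the gas velocity
    print(flag_velocity_kink(v)[:3])             # pixels flagged near the injected kink
    ```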
    “This is an incredibly exciting proof of concept. We knew from our previous work that we could use machine learning to find known forming exoplanets,” said Cassandra Hall, assistant professor of computational astrophysics and principal investigator of the Exoplanet and Planet Formation Research Group at UGA. “Now, we know for sure that we can use it to make brand new discoveries.”
    The discovery highlights how machine learning can enhance scientists’ work, serving as an added tool that improves researchers’ accuracy and makes better use of their time in an endeavor as vast as investigating deep space.
    The models were able to detect a signal in data that people had already analyzed; they found something that previously had gone undetected.
    “This demonstrates that our models — and machine learning in general — have the ability to quickly and accurately identify important information that people can miss. This has the potential to dramatically speed up analysis and subsequent theoretical insights,” Terry said. “It only took about an hour to analyze that entire catalog and find strong evidence for a new planet in a specific spot, so we think there will be an important place for these types of techniques as our datasets get even larger.”

  • New programmable smart fabric responds to temperature and electricity

    A new smart material developed by researchers at the University of Waterloo is activated by both heat and electricity, making it the first ever to respond to two different stimuli.
    The unique design paves the way for a wide variety of potential applications, including clothing that warms up while you walk from the car to the office in winter and vehicle bumpers that return to their original shape after a collision.
    Inexpensively made with polymer nano-composite fibres from recycled plastic, the programmable fabric can change its colour and shape when stimuli are applied.
    “As a wearable material alone, it has almost infinite potential in AI, robotics and virtual reality games and experiences,” said Dr. Milad Kamkar, a chemical engineering professor at Waterloo. “Imagine feeling warmth or a physical trigger eliciting a more in-depth adventure in the virtual world.”
    The novel fabric design is a product of the happy union of soft and hard materials, featuring a combination of highly engineered polymer composites and stainless steel in a woven structure.
    Researchers created a device similar to a traditional loom to weave the smart fabric. The resulting process is extremely versatile, enabling design freedom and macro-scale control of the fabric’s properties.
    The fabric can also be activated by a lower voltage of electricity than previous systems, making it more energy-efficient and cost-effective. In addition, lower voltage allows integration into smaller, more portable devices, making it suitable for use in biomedical devices and environment sensors.
    “The idea of these intelligent materials was first bred and born from biomimicry science,” said Kamkar, director of the Multi-scale Materials Design (MMD) Centre at Waterloo.
    “Through the ability to sense and react to environmental stimuli such as temperature, this is proof of concept that our new material can interact with the environment to monitor ecosystems without damaging them.”
    The next step for researchers is to improve the fabric’s shape-memory performance for applications in the field of robotics. The aim is to construct a robot that can effectively carry and transfer weight to complete tasks.

  • Better superconductors with palladium

    It is one of the most exciting races in modern physics: How can we produce the best superconductors that remain superconducting even at the highest possible temperatures and ambient pressure? In recent years, a new era of superconductivity has begun with the discovery of nickelates. These superconductors are based on nickel, which is why many scientists speak of the “nickel age of superconductivity research.” In many respects, nickelates are similar to cuprates, which are based on copper and were discovered in the 1980s.
    But now a new class of materials is coming into play: In a cooperation between TU Wien and universities in Japan, it was possible to simulate the behaviour of various materials more precisely on the computer than before. There is a “Goldilocks zone” in which superconductivity works particularly well. And this zone is reached neither with nickel nor with copper, but with palladium. This could usher in a new “age of palladates” in superconductivity research. The results have now been published in the scientific journal Physical Review Letters.
    The search for higher transition temperatures
    At high temperatures, superconductors behave very similarly to other conducting materials. But when they are cooled below a certain threshold, they change dramatically: their electrical resistance disappears completely, and suddenly they can conduct electricity without any loss. This limit, at which a material changes between the superconducting and the normally conducting state, is called the “critical temperature.”
    “We have now been able to calculate this ‘critical temperature’ for a whole range of materials. With our modelling on high-performance computers, we were able to predict the phase diagram of nickelate superconductivity with a high degree of accuracy, as experiments later confirmed,” says Prof. Karsten Held from the Institute of Solid State Physics at TU Wien.
    Many materials become superconducting only just above absolute zero (-273.15°C), while others retain their superconducting properties even at much higher temperatures. A superconductor that still remains superconducting at normal room temperature and normal atmospheric pressure would fundamentally revolutionise the way we generate, transport and use electricity. However, such a material has not yet been discovered. Nevertheless, high-temperature superconductors, including those from the cuprate class, play an important role in technology — for example, in the transmission of large currents or in the production of extremely strong magnetic fields.
    Copper? Nickel? Or Palladium?
    The search for the best possible superconducting materials is difficult: there are many different chemical elements that come into question. You can put them together in different structures, you can add tiny traces of other elements to optimise superconductivity. “To find suitable candidates, you have to understand on a quantum-physical level how the electrons interact with each other in the material,” says Prof. Karsten Held.
    These calculations showed that there is an optimum for the interaction strength of the electrons: the interaction must be strong, but not too strong. In between lies a “Goldilocks zone” that makes it possible to achieve the highest transition temperatures.
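    To make the idea of a peak at intermediate interaction strength concrete, here is a purely schematic toy, not the TU Wien calculation: the functional form and numbers are invented solely to illustrate a transition temperature that rises, peaks, and falls as the interaction strength grows.

    ```python
    # Purely schematic toy (not the TU Wien calculation): the transition temperature
    # rises with interaction strength, peaks in a "Goldilocks zone", then falls again.
    import numpy as np

    U = np.linspace(0.5, 6.0, 200)               # dimensionless interaction strength (toy)
    Tc = U * np.exp(-U / 2.0)                    # invented dome-shaped dependence
    best = U[np.argmax(Tc)]
    print(f"toy optimum at U = {best:.2f}")      # strong, but not too strong
    ```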
    Palladates as the optimal solution
    This Goldilocks zone of intermediate interaction strength can be reached neither with cuprates nor with nickelates — but one can hit the bull’s eye with a new type of material: so-called palladates. “Palladium is directly one row below nickel in the periodic table. The properties are similar, but the electrons there are on average somewhat further away from the atomic nucleus and from each other, so the electronic interaction is weaker,” says Karsten Held.
    The model calculations show how to achieve optimal transition temperatures for palladates. “The computational results are very promising,” says Karsten Held. “We hope that we can now use them to initiate experimental research. If palladates give us a whole new, additional class of materials with which to better understand superconductivity and to create even better superconductors, this could bring the entire research field forward.”

  • Cheaper method for making woven displays and smart fabrics — of any size or shape

    Researchers have developed next-generation smart textiles — incorporating LEDs, sensors, energy harvesting, and storage — that can be produced inexpensively, in any shape or size, using the same machines used to make the clothing we wear every day.
    The international team, led by the University of Cambridge, have previously demonstrated that woven displays can be made at large sizes, but these earlier examples were made using specialised manual laboratory equipment. Other smart textiles can be manufactured in specialised microelectronic fabrication facilities, but these are highly expensive and produce large volumes of waste.
    However, the team found that flexible displays and smart fabrics can be made much more cheaply, and more sustainably, by weaving electronic, optoelectronic, sensing and energy fibre components on the same industrial looms used to make conventional textiles. Their results, reported in the journal Science Advances, demonstrate how smart textiles could be an alternative to larger electronics in sectors including automotive, electronics, fashion and construction.
    Despite recent progress in the development of smart textiles, their functionality, dimensions and shapes have been limited by current manufacturing processes.
    “We could make these textiles in specialised microelectronics facilities, but these require billions of pounds of investment,” said Dr Sanghyo Lee from Cambridge’s Department of Engineering, the paper’s first author. “In addition, manufacturing smart textiles in this way is highly limited, since everything has to be made on the same rigid wafers used to make integrated circuits, so the maximum size we can get is about 30 centimetres in diameter.”
    “Smart textiles have also been limited by their lack of practicality,” said Dr Luigi Occhipinti, also from the Department of Engineering, who co-led the research. “You think of the sort of bending, stretching and folding that normal fabrics have to withstand, and it’s been a challenge to incorporate that same durability into smart textiles.”
    Last year, some of the same researchers showed that if the fibres used in smart textiles were coated with materials that can withstand stretching, they could be compatible with conventional weaving processes. Using this technique, they produced a 46-inch woven demonstrator display.

    Now, the researchers have shown that smart textiles can be made using automated processes, with no limits on their size or shape. Multiple types of fibre devices, including energy storage devices, light-emitting diodes, and transistors were fabricated, encapsulated, and mixed with conventional fibres, either synthetic or natural, to build smart textiles by automated weaving. The fibre devices were interconnected by an automated laser welding method with electrically conductive adhesive.
    The processes were all optimised to minimise damage to the electronic components, which in turn made the smart textiles durable enough to withstand the stretching of an industrial weaving machine. The encapsulation method was designed around the functionality of the fibre devices, and the mechanical forces and thermal energy involved were investigated systematically to enable automated weaving and laser-based interconnection, respectively.
    The research team, working in partnership with textile manufacturers, were able to produce test patches of smart textiles of roughly 50×50 centimetres, although this can be scaled up to larger dimensions and produced in large volumes.
    “These companies have well-established manufacturing lines with high throughput fibre extruders and large weaving machines that can weave a metre square of textiles automatically,” said Lee. “So when we introduce the smart fibres to the process, the result is basically an electronic system that is manufactured exactly the same way other textiles are manufactured.”
    The researchers say it could be possible for large, flexible displays and monitors to be made on industrial looms, rather than in specialised electronics manufacturing facilities, which would make them far cheaper to produce. Further optimisation of the process is needed, however.
    “The flexibility of these textiles is absolutely amazing,” said Occhipinti. “Not just in terms of their mechanical flexibility, but the flexibility of the approach, and to deploy sustainable and eco-friendly electronics manufacturing platforms that contribute to the reduction of carbon emissions and enable real applications of smart textiles in buildings, car interiors and clothing. Our approach is quite unique in that way.”
    The research was supported in part by the European Union and UK Research and Innovation.

  • Nanowire networks learn and remember like a human brain

    An international team led by scientists at the University of Sydney has demonstrated nanowire networks can exhibit both short- and long-term memory like the human brain.
    The research has been published today in the journal Science Advances, led by Dr Alon Loeffler, who received his PhD in the School of Physics, with collaborators in Japan.
    “In this research we found higher-order cognitive function, which we normally associate with the human brain, can be emulated in non-biological hardware,” Dr Loeffler said.
    “This work builds on our previous research in which we showed how nanotechnology could be used to build a brain-inspired electrical device with neural network-like circuitry and synapse-like signalling.
    “Our current work paves the way towards replicating brain-like learning and memory in non-biological hardware systems and suggests that the underlying nature of brain-like intelligence may be physical.”
    Nanowire networks are a type of nanotechnology typically made from tiny, highly conductive silver wires that are invisible to the naked eye. The wires are coated in a plastic material and scattered across each other like a mesh, mimicking aspects of the networked physical structure of a human brain.

    Advances in nanowire networks could herald many real-world applications, such as improving robotics or sensor devices that need to make quick decisions in unpredictable environments.
    “This nanowire network is like a synthetic neural network because the nanowires act like neurons, and the places where they connect with each other are analogous to synapses,” senior author Professor Zdenka Kuncic, from the School of Physics, said.
    “Instead of implementing some kind of machine learning task, in this study Dr Loeffler has actually taken it one step further and tried to demonstrate that nanowire networks exhibit some kind of cognitive function.”
    To test the capabilities of the nanowire network, the researchers gave it a test similar to a common memory task used in human psychology experiments, called the N-Back task.
    For a person, the N-Back task might involve remembering a specific picture of a cat from a series of feline images presented in a sequence. An N-Back score of 7, the average for people, indicates the person can recognise the same image that appeared seven steps back.

    When applied to the nanowire network, the researchers found it could ‘remember’ a desired endpoint in an electric circuit seven steps back, meaning a score of 7 in an N-Back test.
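    For readers unfamiliar with the task, a minimal sketch of N-back scoring is shown below. It reflects the standard psychology task described above, not the study's actual code; the nanowire network's version used electrode voltages rather than pictures.

    ```python
    # Minimal sketch of N-back scoring, as used in human memory experiments
    # (conceptually adapted here; not the study's actual code).
    def n_back_score(stimuli, responses, n=7):
        """Fraction of correct 'match' / 'no match' answers for an n-back task."""
        correct = 0
        trials = 0
        for i in range(n, len(stimuli)):
            is_match = stimuli[i] == stimuli[i - n]
            correct += (responses[i] == is_match)
            trials += 1
        return correct / trials if trials else 0.0

    # Toy usage: a repeating sequence where every item matches the one 7 steps back.
    seq = list("ABCDEFG") * 3
    perfect = [seq[i] == seq[i - 7] if i >= 7 else False for i in range(len(seq))]
    print(n_back_score(seq, perfect, n=7))       # 1.0
    ```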
    “What we did here is manipulate the voltages of the end electrodes to force the pathways to change, rather than letting the network just do its own thing. We forced the pathways to go where we wanted them to go,” Dr Loeffler said.
    “When we implement that, its memory had much higher accuracy and didn’t really decrease over time, suggesting that we’ve found a way to strengthen the pathways to push them towards where we want them, and then the network remembers it.
    “Neuroscientists think this is how the brain works, certain synaptic connections strengthen while others weaken, and that’s thought to be how we preferentially remember some things, how we learn and so on.”
    The researchers said when the nanowire network is constantly reinforced, it reaches a point where that reinforcement is no longer needed because the information is consolidated into memory.
    “It’s kind of like the difference between long-term memory and short-term memory in our brains,” Professor Kuncic said.
    “If we want to remember something for a long period of time, we really need to keep training our brains to consolidate that, otherwise it just kind of fades away over time.
    “One task showed that the nanowire network can store up to seven items in memory at substantially higher than chance levels without reinforcement training, and with near-perfect accuracy with reinforcement training.”

  • ChatGPT is still no match for humans when it comes to accounting

    Last month, OpenAI launched its newest AI chatbot product, GPT-4. According to the folks at OpenAI, the bot, which uses machine learning to generate natural language text, passed the bar exam with a score in the 90th percentile, passed 13 of 15 AP exams and got a nearly perfect score on the GRE Verbal test.
    Inquiring minds at BYU and 186 other universities wanted to know how OpenAI’s tech would fare on accounting exams. So, they put the original version, ChatGPT, to the test. The researchers say that while it still has work to do in the realm of accounting, it’s a game changer that will transform the way everyone teaches and learns — for the better.
    “When this technology first came out, everyone was worried that students could now use it to cheat,” said lead study author David Wood, a BYU professor of accounting. “But opportunities to cheat have always existed. So for us, we’re trying to focus on what we can do with this technology now that we couldn’t do before to improve the teaching process for faculty and the learning process for students. Testing it out was eye-opening.”
    Since its debut in November 2022, ChatGPT has become the fastest growing technology platform ever, reaching 100 million users in under two months. In response to intense debate about how models like ChatGPT should factor into education, Wood decided to recruit as many professors as possible to see how the AI fared against actual university accounting students.
    His co-author recruiting pitch on social media exploded: 327 co-authors from 186 educational institutions in 14 countries participated in the research, contributing 25,181 classroom accounting exam questions. They also recruited undergrad BYU students (including Wood’s daughter, Jessica) to feed another 2,268 textbook test bank questions to ChatGPT. The questions covered accounting information systems (AIS), auditing, financial accounting, managerial accounting and tax, and varied in difficulty and type (true/false, multiple choice, short answer, etc.).
    Although ChatGPT’s performance was impressive, the students performed better. Students scored an overall average of 76.7%, compared to ChatGPT’s score of 47.4%. On 11.3% of questions, ChatGPT scored higher than the student average, doing particularly well on AIS and auditing. But the AI bot did worse on tax, financial, and managerial assessments, possibly because ChatGPT struggled with the mathematical processes required for those question types.
    When it came to question type, ChatGPT did better on true/false questions (68.7% correct) and multiple-choice questions (59.5%), but struggled with short-answer questions (between 28.7% and 39.1%). In general, higher-order questions were harder for ChatGPT to answer. In fact, sometimes ChatGPT would provide authoritative written descriptions for incorrect answers, or answer the same question different ways.
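    A minimal sketch of the kind of grading harness such a study implies is shown below. It is hypothetical: the study fed questions to ChatGPT itself, and the `ask_model` callable and field names here are invented stand-ins for whatever interface and data format were actually used.

    ```python
    # Hypothetical sketch of a grading harness: feed each exam question to a
    # chatbot and tally accuracy by question type. `ask_model` stands in for
    # whatever interface is used; all field names are invented for illustration.
    from collections import defaultdict

    def grade(questions, ask_model):
        """questions: list of dicts with 'prompt', 'answer', and 'qtype' keys."""
        scores = defaultdict(lambda: [0, 0])             # qtype -> [correct, total]
        for q in questions:
            reply = ask_model(q["prompt"]).strip().lower()
            scores[q["qtype"]][0] += reply == q["answer"].strip().lower()
            scores[q["qtype"]][1] += 1
        return {qtype: c / t for qtype, (c, t) in scores.items()}

    # Toy usage with a stub "model" that always answers "True".
    demo = [{"prompt": "Assets = Liabilities + Equity?", "answer": "True", "qtype": "true/false"}]
    print(grade(demo, lambda prompt: "True"))            # {'true/false': 1.0}
    ```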
    “It’s not perfect; you’re not going to be using it for everything,” said Jessica Wood, currently a freshman at BYU. “Trying to learn solely by using ChatGPT is a fool’s errand.”
    The researchers also uncovered some other fascinating trends through the study, including:
    • ChatGPT doesn’t always recognize when it is doing math and makes nonsensical errors, such as adding two numbers in a subtraction problem or dividing numbers incorrectly.
    • ChatGPT often provides explanations for its answers, even when they are incorrect. Other times, ChatGPT’s descriptions are accurate, but it will then proceed to select the wrong multiple-choice answer.
    • ChatGPT sometimes makes up facts. For example, when providing a reference, it generates a real-looking reference that is completely fabricated; the work, and sometimes the authors, do not even exist.
    That said, the authors fully expect GPT-4 to improve exponentially on the accounting questions posed in their study and to address the issues mentioned above. What they find most promising is how the chatbot can help improve teaching and learning, including the ability to design and test assignments, or perhaps be used for drafting portions of a project.
    “It’s an opportunity to reflect on whether we are teaching value-added information or not,” said study coauthor and fellow BYU accounting professor Melissa Larson. “This is a disruption, and we need to assess where we go from here. Of course, I’m still going to have TAs, but this is going to force us to use them in different ways.”

  • Reinforcement learning: From board games to protein design

    Scientists have successfully applied reinforcement learning to a challenge in molecular biology.
    The team of researchers developed powerful new protein design software adapted from a strategy proven adept at board games like Chess and Go. In one experiment, proteins made with the new approach were found to be more effective at generating useful antibodies in mice.
    The findings, reported April 21 in Science, suggest that this breakthrough may soon lead to more potent vaccines. More broadly, the approach could lead to a new era in protein design.
    “Our results show that reinforcement learning can do more than master board games. When trained to solve long-standing puzzles in protein science, the software excelled at creating useful molecules,” said senior author David Baker, professor of biochemistry at the UW School of Medicine in Seattle and a recipient of the 2021 Breakthrough Prize in Life Sciences.
    “If this method is applied to the right research problems,” he said, “it could accelerate progress in a variety of scientific fields.”
    The research is a milestone in tapping artificial intelligence to conduct protein science research. The potential applications are vast, from developing more effective cancer treatments to creating new biodegradable textiles.

    Reinforcement learning is a type of machine learning in which a computer program learns to make decisions by trying different actions and receiving feedback. Such an algorithm can learn to play chess, for example, by testing millions of different moves that lead to victory or defeat on the board. The program is designed to learn from these experiences and become better at making decisions over time.
    To make a reinforcement learning program for protein design, the scientists gave the computer millions of simple starting molecules. The software then made ten thousand attempts at randomly improving each toward a predefined goal. The computer lengthened the proteins or bent them in specific ways until it learned how to contort them into desired shapes.
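    The loop below is a conceptual sketch of that trial-and-feedback idea, not the Baker Lab's software; it is closer to simple greedy hill climbing than to the paper's actual reinforcement-learning method, and the state, moves, and reward are toy placeholders.

    ```python
    # Conceptual sketch only (not the Baker Lab's software): a trial-and-feedback
    # loop that nudges a toy "protein" toward a predefined goal.
    import random

    def improve(state, reward, propose_move, n_attempts=10_000):
        """Greedily keep random moves that raise the reward toward the design goal."""
        best, best_r = state, reward(state)
        for _ in range(n_attempts):
            candidate = propose_move(best)
            r = reward(candidate)
            if r > best_r:                       # feedback: keep only improving moves
                best, best_r = candidate, r
        return best, best_r

    # Toy usage: "design" a chain whose length approaches a target of 50 units.
    reward = lambda s: -abs(len(s) - 50)
    move = lambda s: s + "x" if random.random() < 0.5 else s[:-1] or s
    print(improve("x" * 5, reward, move))
    ```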
    Isaac D. Lutz, Shunzhi Wang, and Christoffer Norn, all members of the Baker Lab, led the research. Their team’s Science manuscript is titled “Top-down design of protein architectures with reinforcement learning.”
    “Our approach is unique because we use reinforcement learning to solve the problem of creating protein shapes that fit together like pieces of a puzzle,” explained co-lead author Lutz, a doctoral student at the UW Medicine Institute for Protein Design. “This simply was not possible using prior approaches and has the potential to transform the types of molecules we can build.”
    As part of this study, the scientists manufactured hundreds of AI-designed proteins in the lab. Using electron microscopes and other instruments, they confirmed that many of the protein shapes created by the computer were indeed realized in the lab.

    “This approach proved not only accurate but also highly customizable. For example, we asked the software to make spherical structures with no holes, small holes, or large holes. Its potential to make all kinds of architectures has yet to be fully explored,” said co-lead author Shunzhi Wang, a postdoctoral scholar at the UW Medicine Institute for Protein Design.
    The team concentrated on designing new nano-scale structures composed of many protein molecules. This required designing both the protein components themselves and the chemical interfaces that allow the nano-structures to self-assemble.
    Electron microscopy confirmed that numerous AI-designed nano-structures were able to form in the lab. As a measure of how accurate the design software had become, the scientists observed many unique nano-structures in which every atom was found to be in the intended place. In other words, the deviation between the intended and realized nano-structure was on average less than the width of a single atom. This is called atomically accurate design.
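    One way to picture "atomically accurate design" is as the mean distance between intended and realized atom positions, compared against a rough atomic width. The sketch below is illustrative only, not the study's analysis code, and assumes the two coordinate sets are already aligned; the ~1 angstrom cutoff is an assumption for the example.

    ```python
    # Illustrative sketch (not the study's analysis code): mean deviation between
    # intended and realized atom positions, compared to a rough atomic width (~1 A).
    import numpy as np

    def mean_deviation(designed, observed):
        """Mean per-atom distance between two aligned (n_atoms, 3) coordinate arrays."""
        return np.mean(np.linalg.norm(designed - observed, axis=1))

    designed = np.random.rand(100, 3) * 50           # toy coordinates, in angstroms
    observed = designed + np.random.normal(scale=0.3, size=designed.shape)
    dev = mean_deviation(designed, observed)
    print(f"{dev:.2f} A, atomically accurate" if dev < 1.0 else f"{dev:.2f} A")
    ```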
    The authors foresee a future in which this approach could enable them and others to create therapeutic proteins, vaccines, and other molecules that could not have been made using prior methods.
    Researchers from the UW Medicine Institute for Stem Cell and Regenerative Medicine used primary cell models of blood vessel cells to show that the designed protein scaffolds outperformed previous versions of the technology. For example, because the receptors that help cells receive and interpret signals were clustered more densely on the more compact scaffolds, they were more effective at promoting blood vessel stability.
    Hannele Ruohola-Baker, a UW School of Medicine professor of biochemistry and one of the study’s authors, spoke to the implications of the investigation for regenerative medicine: “The more accurate the technology becomes, the more it opens up potential applications, including vascular treatments for diabetes, brain injuries, strokes, and other cases where blood vessels are at risk. We can also imagine more precise delivery of factors that we use to differentiate stem cells into various cell types, giving us new ways to regulate the processes of cell development and aging.”
    This work was funded by the National Institutes of Health (P30 GM124169, S10OD018483, 1U19AG065156-01, T90 DE021984, 1P01AI167966); Open Philanthropy Project and The Audacious Project at the Institute for Protein Design; Novo Nordisk Foundation (NNF170C0030446); Microsoft; and Amgen. Research was conducted in part at the Advanced Light Source, a national user facility operated by Lawrence Berkeley National Laboratory on behalf of the Department of Energy.
    News release written by Ian Haydon, UW Medicine Institute for Protein Design.