More stories

  • Read to succeed — in math; study shows how reading skill shapes more than just reading

    A University at Buffalo researcher’s recent work on dyslexia has unexpectedly produced a startling discovery: the cooperating areas of the brain responsible for reading skill are also at work during apparently unrelated activities, such as multiplication.
    Though the division between literacy and math is commonly reflected in the division between the arts and sciences, the findings suggest that reading, writing and arithmetic, the foundational skills informally identified as the three Rs, might actually overlap in ways not previously imagined, let alone experimentally validated.
    “These findings floored me,” said Christopher McNorgan, PhD, the paper’s author and an assistant professor in UB’s Department of Psychology. “They elevate the value and importance of literacy by showing how reading proficiency reaches across domains, guiding how we approach other tasks and solve other problems.
    “Reading is everything, and saying so is more than an inspirational slogan. It’s now a definitive research conclusion.”
    And it’s a conclusion that was not originally part of McNorgan’s design. He had planned only to explore whether it was possible to identify children with dyslexia on the basis of how the brain is wired for reading.
    “It seemed plausible given the work I had recently finished, which identified a biomarker for ADHD,” said McNorgan, an expert in neuroimaging and computational modeling.

    As in that previous study, a novel deep learning approach that makes multiple simultaneous classifications is at the core of McNorgan’s current paper, which appears in the journal Frontiers in Computational Neuroscience.
    Deep learning networks are ideal for uncovering conditional, non-linear relationships.
    Where a linear relationship involves one variable directly influencing another, a non-linear relationship can be slippery, because a change in one variable does not influence the other in any fixed proportion. What is challenging for traditional methods, however, is handled easily by deep learning.
    McNorgan identified dyslexia with 94% accuracy using his first data set, which consisted of functional connectivity measures from 14 good readers and 14 poor readers engaged in a language task.
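    The paper’s specific architecture isn’t detailed here, so the following is only a rough sketch of the general idea: a small feed-forward classifier trained on flattened functional-connectivity features to separate two groups of readers. The subject count matches the study, but the number of brain regions, the random stand-in data, the layer sizes and the single-label setup are all assumptions made for illustration.
    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Hypothetical setup: 28 participants (14 good, 14 poor readers); functional
    # connectivity between 90 brain regions, flattened to its upper triangle.
    n_subjects, n_regions = 28, 90
    n_features = n_regions * (n_regions - 1) // 2

    X = torch.randn(n_subjects, n_features)            # stand-in for real fMRI-derived features
    y = torch.cat([torch.zeros(14), torch.ones(14)])   # 0 = good reader, 1 = poor reader

    model = nn.Sequential(                             # small non-linear classifier
        nn.Linear(n_features, 64), nn.ReLU(),
        nn.Linear(64, 1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for _ in range(200):                               # fit to the toy data
        optimizer.zero_grad()
        loss = loss_fn(model(X).squeeze(1), y)
        loss.backward()
        optimizer.step()

    predictions = (model(X).squeeze(1) > 0).float()
    print("training accuracy:", (predictions == y).float().mean().item())
    ```
    In practice, accuracy would be estimated on held-out participants, for example with cross-validation, rather than on the training data itself.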
    But he needed another data set to determine if his findings could be generalized. So McNorgan chose a math study, which relied on a mental multiplication task, and measured functional connectivity from the fMRI information in that second data set.

    Functional connectivity, despite what the name might imply, is a dynamic description of how the brain is virtually wired from moment to moment. Don’t think of the physical wires in a network, but of how those wires are used throughout the day. When you’re working, your laptop is sending a document to your printer. Later in the day, it might be streaming a movie to your television. How those wires are used depends on whether you’re working or relaxing. Functional connectivity changes according to the immediate task.
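    To make that description concrete, functional connectivity is commonly estimated as the pairwise correlation between brain regions’ activity over time, computed separately for each task block. The sketch below uses made-up numbers and an arbitrary region count; it is not McNorgan’s pipeline, just the general recipe.
    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical fMRI time series: 6 brain regions, 200 time points recorded
    # while a participant performs one task, then another.
    n_regions, n_timepoints = 6, 200
    reading_block = rng.standard_normal((n_regions, n_timepoints))
    math_block = rng.standard_normal((n_regions, n_timepoints))

    # One common estimate of functional connectivity: pairwise correlation of the
    # regions' activity over time. Each task block gets its own matrix.
    fc_reading = np.corrcoef(reading_block)
    fc_math = np.corrcoef(math_block)

    print(fc_reading.shape)                  # (6, 6): symmetric, 1s on the diagonal
    print(np.allclose(fc_reading, fc_math))  # False: the "wiring" differs by task
    ```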
    The brain dynamically rewires itself according to the task at hand, all the time. Imagine reading a list of restaurant specials from a menu board nailed to the wall a few steps away. The visual cortex is working whenever you’re looking at something, but because you’re reading, it works with, or is wired to, the auditory cortex, at least for the moment.
    Pointing to one of the items on the board, you accidentally knock it from the wall. When you reach out to catch it, your brain wiring changes. You’re no longer reading, but trying to catch a falling object, and your visual cortex now works with the pre-motor cortex to guide your hand.
    Different tasks, different wiring; or, as McNorgan explains, different functional networks.
    In the two data sets McNorgan used, participants were engaged in different tasks: language and math. Yet in each case, the connectivity fingerprint was the same, and he was able to identify dyslexia with 94% accuracy whether testing against the reading group or the math group.
    It was a whim, he said, to see how well his model distinguished good readers from poor readers — or from participants who weren’t reading at all. Seeing the accuracy, and the similarity, changed the direction of the paper McNorgan had intended to write.
    Yes, he could identify dyslexia. But it became obvious that the brain’s wiring for reading was also present for math.
    Different task. Same functional networks.
    “The brain should be dynamically wiring itself in a way that’s specifically relevant to doing math because of the multiplication problem in the second data set, but there’s clear evidence of the dynamic configuration of the reading network showing up in the math task,” McNorgan says.
    He says it’s the sort of finding that strengthens the already strong case for supporting literacy.
    “These results show that the way our brain is wired for reading is actually influencing how the brain functions for math,” he said. “That says your reading skill is going to affect how you tackle problems in other domains, and helps us better understand children with learning difficulties in both reading and math.”
    As the line between cognitive domains becomes more blurred, McNorgan wonders what other domains the reading network is actually guiding.
    “I’ve looked at two domains which couldn’t be farther afield,” he said. “If the brain is showing that its wiring for reading is showing up in mental multiplication, what else might it be contributing toward?”
    That’s an open question, for now, according to McNorgan.
    “What I do know because of this research is that an educational emphasis on reading means much more than improving reading skill,” he said. “These findings suggest that learning how to read shapes so much more.”

  • Breakthrough lays groundwork for future quantum networks

    New Army-funded research could help lay the groundwork for future quantum communication networks and large-scale quantum computers.
    Researchers sent entangled qubit states through a communication cable linking one quantum network node to a second node.
    Scientists at the Pritzker School of Molecular Engineering at the University of Chicago, in work funded and managed by the U.S. Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory’s Center for Distributed Quantum Information, also amplified an entangled state over the same cable: they first used the cable to entangle two qubits, one in each of the two nodes, and then entangled these qubits further with other qubits in their nodes. The peer-reviewed research was published on Feb. 24, 2021.
    “The entanglement distribution results the team achieved brought together years of their research related to approaches for transferring quantum states and related to advanced fabrication procedures to realize the experiments,” said Dr. Sara Gamble, program manager at the Army Research Office, an element of the Army’s corporate research laboratory, and co-manager of the CDQI, which funded the work. “This is an exciting achievement and one that paves the way for increasingly complex experiments with additional quantum nodes that we’ll need for the large-scale quantum networks and computers of ultimate interest to the Army.”
    Qubits, or quantum bits, are the basic units of quantum information. By exploiting their quantum properties, like superposition, and their ability to be entangled together, scientists and engineers are creating next-generation quantum computers that will be able to solve previously unsolvable problems.
    The research team uses superconducting qubits, tiny cryogenic circuits that can be manipulated electrically.

    “Developing methods that allow us to transfer entangled states will be essential to scaling quantum computing,” said Prof. Andrew Cleland, the John A. MacLean senior professor of Molecular Engineering Innovation and Enterprise at University of Chicago, who led the research.
    Entanglement is a correlation that can be created between quantum entities such as qubits. When two qubits are entangled and a measurement is made on one, it will affect the outcome of a measurement made on the other, even if that second qubit is physically far away.
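    That measurement correlation can be illustrated with an idealized, textbook-style simulation, shown below. It samples repeated measurements of a two-qubit Bell state in the computational basis; it is not a model of the superconducting circuits or the cable used in the experiment.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Bell state (|00> + |11>) / sqrt(2) shared between qubit A and qubit B.
    bell = np.zeros(4)
    bell[0b00] = bell[0b11] = 1 / np.sqrt(2)

    # Sample 10,000 joint measurements in the computational basis.
    probabilities = np.abs(bell) ** 2
    outcomes = rng.choice(4, size=10_000, p=probabilities)
    a_results, b_results = outcomes // 2, outcomes % 2

    # Each qubit alone looks random, yet the two always agree.
    print("P(A = 1):", np.mean(a_results))                # ~0.5
    print("P(A = B):", np.mean(a_results == b_results))   # 1.0
    ```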
    To send the entangled states through the communication cable — a one-meter-long superconducting cable — the researchers created an experimental set-up with three superconducting qubits in each of two nodes. They connected one qubit in each node to the cable and then sent quantum states, in the form of microwave photons, through the cable with minimal loss of information. The fragile nature of quantum states makes this process quite challenging.
    The researchers developed a system in which the whole transfer process — node to cable to node — takes only a few tens of nanoseconds (a nanosecond is one billionth of a second). That allowed them to send entangled quantum states with very little information loss.

    The system also allowed them to amplify the entanglement of qubits. The researchers used one qubit in each node and entangled them together by essentially sending a half-photon through the cable. They then extended this entanglement to the other qubits in each node. When they were finished, all six qubits in two nodes were entangled in a single globally entangled state.
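    One conventional, gate-level way to picture such a globally entangled state is a GHZ-style circuit: put one qubit in superposition, then spread the entanglement with CNOT gates. The sketch below assumes the Qiskit library and stands in conceptually for, rather than reproduces, the cable-mediated protocol the team used.
    ```python
    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    n_qubits = 6                      # three qubits in each of two nodes
    circuit = QuantumCircuit(n_qubits)

    circuit.h(0)                      # put the first qubit into superposition
    for i in range(n_qubits - 1):
        circuit.cx(i, i + 1)          # each CNOT entangles one more qubit

    state = Statevector.from_instruction(circuit)
    print(state.probabilities_dict())  # roughly {'000000': 0.5, '111111': 0.5}
    ```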
    “We want to show that superconducting qubits have a viable role going forward,” Cleland said.
    A quantum communication network could potentially take advantage of this advance. The group plans to extend their system to three nodes to build three-way entanglement.
    “The team was able to identify a primary limiting factor in this current experiment related to loss in some of the components,” said Dr. Fredrik Fatemi, branch chief for quantum sciences, DEVCOM ARL, and co-manager of CDQI. “They have a clear path forward for increasingly complex experiments which will enable us to explore new regimes in distributed entanglement.”

  • Robots learn faster with quantum technology

    Artificial intelligence is part of our modern life. A crucial question for practical applications is how fast such intelligent machines can learn. An experiment has answered this question, showing that quantum technology enables a speed-up in the learning process. The physicists have achieved this result by using a quantum processor for single photons as a robot.
    Robots solving computer games, recognizing human voices, or helping find optimal medical treatments: those are only a few astonishing examples of what the field of artificial intelligence has produced in the past years. The ongoing race for better machines has led to the question of how and with what means improvements can be achieved. In parallel, huge recent progress in quantum technologies has confirmed the power of quantum physics, not only for its often peculiar and puzzling theories, but also for real-life applications. Hence the idea of merging the two fields: on one hand, artificial intelligence with its autonomous machines; on the other hand, quantum physics with its powerful algorithms.
    Over the past few years, many scientists have started to investigate how to bridge these two worlds, and to study in what ways quantum mechanics can prove beneficial for learning robots, or vice versa. Several fascinating results have shown, for example, robots deciding faster on their next move, or the design of new quantum experiments using specific learning techniques. Yet, robots were still incapable of learning faster, a key feature in the development of increasingly complex autonomous machines.
    Within an international collaboration led by Philip Walther, a team of experimental physicists from the University of Vienna, together with theoreticians from the University of Innsbruck, the Austrian Academy of Sciences, Leiden University, and the German Aerospace Center, has succeeded in experimentally proving for the first time a speed-up in an actual robot’s learning time. The team made use of single photons, the fundamental particles of light, coupled into an integrated photonic quantum processor, which was designed at the Massachusetts Institute of Technology. This processor was used as a robot and for implementing the learning tasks. Here, the robot would learn to route the single photons to a predefined direction. “The experiment could show that the learning time is significantly reduced compared to the case where no quantum physics is used,” says Valeria Saggio, first author of the publication.
    In a nutshell, the experiment can be understood by imagining a robot standing at a crossroad, provided with the task of learning to always take the left turn. The robot learns by obtaining a reward when doing the correct move. Now, if the robot is placed in our usual classical world, then it will try either a left or right turn, and will be rewarded only if the left turn is chosen. In contrast, when the robot exploits quantum technology, the bizarre aspects of quantum physics come into play. The robot can now make use of one of its most famous and peculiar features, the so-called superposition principle. This can be intuitively understood by imagining the robot taking the two turns, left and right, at the same time. “This key feature enables the implementation of a quantum search algorithm that reduces the number of trials for learning the correct path. As a consequence, an agent that can explore its environment in superposition will learn significantly faster than its classical counterpart,” says Hans Briegel, who developed the theoretical ideas on quantum learning agents with his group at the University of Innsbruck.
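    The classical side of that comparison is easy to sketch in code. The toy agent below simply reinforces whichever turn earns a reward until it reliably goes left; the update rule and numbers are invented for illustration, and the quantum speed-up described above, in which the agent explores both turns in superposition, is not modeled here.
    ```python
    import random

    random.seed(0)

    ACTIONS = ["left", "right"]
    preference = {"left": 1.0, "right": 1.0}          # start with no bias

    def choose_action():
        weights = [preference[a] for a in ACTIONS]
        return random.choices(ACTIONS, weights=weights)[0]

    trials, correct_streak = 0, 0
    while correct_streak < 10:                        # stop once "left" is chosen reliably
        trials += 1
        action = choose_action()
        reward = 1 if action == "left" else 0         # only the left turn is rewarded
        preference[action] += reward                  # reinforce rewarded behavior
        correct_streak = correct_streak + 1 if reward else 0

    print("classical agent needed", trials, "trials to learn the task")
    ```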
    This experimental demonstration that machine learning can be enhanced by using quantum computing shows promising advantages when combining these two technologies. “We are just at the beginning of understanding the possibilities of quantum artificial intelligence,” says Philip Walther, “and thus every new experimental result contributes to the development of this field, which is currently seen as one of the most fertile areas for quantum computing.”

    Story Source:
    Materials provided by University of Vienna. Note: Content may be edited for style and length.

  • A tour of ‘Four Lost Cities’ reveals modern ties to ancient people

    Four Lost Cities, by Annalee Newitz. W.W. Norton & Co., $26.95.
    It’s a familiar trope in movies and books: A bright-eyed protagonist moves to the big city in search of fame and fortune. Amid the bustle and lights, all hopes and dreams come true. But why do we cling to this cliché? In Four Lost Cities: A Secret History of the Urban Age, author Annalee Newitz explores ancient settlements to find out why people flock to big cities — and why they leave.  
    The book is divided into four enjoyable, snack-sized sections, one for each city. Each section is accompanied by a handy map, drawn by artist Jason Thompson with engaging, cartoon-style flair.
    Rather than dry history, Newitz makes a special effort to highlight the oddities and innovations that made these cities unique. Take Çatalhöyük, the oldest city they feature, which thrived from 7500 to 5700 B.C. in what is now Turkey. This ancient city persisted for nearly 2,000 years despite lacking things that we might consider necessary to a city, such as roads, dedicated public spaces or shopping areas.
    Newitz also explores Pompeii (700 B.C. to A.D. 79, in modern-day Italy). When paired with Çatalhöyük, it offers insights into how humans developed the distinction between public and private spaces and activities — ideas that would not have made sense before humans began living in large settled groups. The section on Cahokia (A.D. 1050 to 1350) — located in what is now Illinois, across the Mississippi River from St. Louis — offers an unexpected reason for a city’s emergence. Many people link cities with capitalism and trade. Cahokia’s 30-meter-tall pyramids, 20-hectare plazas and a population bigger, at the time, than that of Paris suggest that spiritual revival can also build a major metropolis. Cahokia and Angkor, which reached its peak from A.D. 800 to 1431 in what is now Cambodia, also show how cities can form when power gets concentrated in a few influential people.
    Through touring such diverse cities, Newitz shows that the move to urban life isn’t just a setup for a hero of a story. It’s a common setup for many ancient cultures.

    Each city, of course, eventually fell. Çatalhöyük and Angkor suffered from droughts and flooding (SN: 10/17/18). Pompeii felt the fury of a volcano (SN: 1/23/20). But Newitz also reveals something else: Collapsing infrastructure provided the final push that kept people away. Here we glimpse our potential future, as climate crises and political instability threaten our own urban networks. But Newitz’s vivid imaginings, bright prose and boundless enthusiasm manage to keep the tone optimistic. These cities did end, yes. Yet the people who built them and resided in them lived on. Even in Pompeii, many inhabitants made it out. Collectively, they went to new places and spurred new growth.
    Four Lost Cities is about how cities collapse. But it’s also about what makes a city succeed. It’s not glamour or Wall Street. It’s not good restaurants or big factories. It’s people and their infrastructure. It’s clean water, public spaces, decent roads and opportunities for residents to live with dignity and improve their lot, Newitz explains. And when infrastructure crumbles beyond repair, people inevitably move on. “Our forebears’ eroded palaces and villas warn us about how communities can go wrong,” they write. “But their streets and plazas testify to all the times we built something meaningful together.”

  • Classic math conundrum solved: Superb algorithm for finding the shortest route

    One of the most classic algorithmic problems deals with calculating the shortest path between two points. A more complicated variant of the problem is when the route traverses a changing network — whether this be a road network or the internet. For 40 years, an algorithm has been sought to provide an optimal solution to this problem. Now, computer scientist Christian Wulff-Nilsen of the University of Copenhagen and two research colleagues have come up with a recipe.
    When heading somewhere new, most of us leave it to computer algorithms to help us find the best route, whether by using a car’s GPS or public transport and map apps on our phones. Still, there are times when a proposed route doesn’t quite align with reality. This is because road networks, public transportation networks and other networks aren’t static. The best route can suddenly become the slowest, for example because a queue has formed due to roadworks or an accident.
    People probably don’t think about the complicated math behind routing suggestions in these types of situations. The software being used is trying to solve a variant of the classic algorithmic “shortest path” problem: finding the shortest path in a dynamic network. For 40 years, researchers have been working to find an algorithm that can optimally solve this mathematical conundrum. Now, Christian Wulff-Nilsen of the University of Copenhagen’s Department of Computer Science, along with two colleagues, has succeeded in cracking the nut.
    “We have developed an algorithm, for which we now have mathematical proof, that it is better than every other algorithm up to now — and the closest thing to optimal that will ever be, even if we look 1000 years into the future,” says Associate Professor Wulff-Nilsen. The results were presented at the FOCS 2020 conference.
    Optimal, in this context, refers to an algorithm that spends as little time and as little computer memory as possible to calculate the best route in a given network. This applies not just to road and transportation networks, but also to the internet or any other type of network.
    Networks as graphs
    The researchers represent a network as a so-called dynamic graph. In this context, a graph is an abstract representation of a network consisting of edges, representing roads for example, and nodes, representing intersections. When a graph is dynamic, it can change over time. The new algorithm handles changes consisting of deleted edges — for example, if a stretch of road suddenly becomes inaccessible due to roadworks.

    “The tremendous advantage of seeing a network as an abstract graph is that it can be used to represent any type of network. It could be the internet, where you want to send data via as short a route as possible, a human brain or the network of friendship relations on Facebook. This makes graph algorithms applicable in a wide variety of contexts,” explains Christian Wulff-Nilsen.
    Traditional algorithms assume that a graph is static, which is rarely true in the real world. When these kinds of algorithms are used in a dynamic network, they need to be rerun every time a small change occurs in the graph — which wastes time.
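    To see why rerunning is wasteful, here is a minimal sketch of that static approach: Dijkstra’s algorithm on a toy road network, recomputed from scratch after a single edge is deleted. The graph and weights are invented, and the paper’s decremental algorithm is precisely what avoids this full recomputation.
    ```python
    import heapq

    def dijkstra(graph, source):
        """Shortest distances from source in a weighted directed graph (adjacency dict)."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph.get(u, {}).items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    # Toy road network: edge weights are travel times.
    graph = {"A": {"B": 2, "C": 5}, "B": {"C": 1, "D": 4}, "C": {"D": 1}, "D": {}}
    print(dijkstra(graph, "A"))   # {'A': 0, 'B': 2, 'C': 3, 'D': 4}

    del graph["B"]["C"]           # roadworks close the B -> C stretch
    print(dijkstra(graph, "A"))   # recomputed from scratch: {'A': 0, 'B': 2, 'C': 5, 'D': 6}
    ```
    On a handful of nodes this is instant, but on a continent-scale road network or the internet, recomputing after every small change quickly becomes prohibitive.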
    More data necessitates better algorithms
    Finding better algorithms is not just useful when travelling. It is necessary in virtually any area where data is produced, as Christian Wulff-Nilsen points out:
    “We are living in a time when volumes of data grow at a tremendous rate and the development of hardware simply can’t keep up. In order to manage all of the data we produce, we need to develop smarter software that requires less running time and memory. That’s why we need smarter algorithms,” he says.
    He hopes that it will be possible to use this algorithm, or some of the techniques behind it, in practice, but stresses that the result is theoretical and must first be tested in experiments.
    Background
    The research article “Near-Optimal Decremental SSSP in Dense Weighted Digraphs” was presented at the prestigious FOCS 2020 conference.
    The article was written by Christian Wulff-Nilsen, of the University of Copenhagen’s Department of Computer Science, and former Department of Computer Science PhD student Maximillian Probst Gutenberg and assistant professor Aaron Bernstein of Rutgers University.
    The version of the “shortest path” problem that the researchers solved is called “The Decremental Single-Source Shortest Path Problem.” It is essentially about maintaining the shortest paths in a changing dynamic network from one starting point to all other nodes in a graph. The changes to a network consist of edge removals.
    The paper gives a mathematical proof that the algorithm is essentially the optimal one for dynamic networks. On average, users will be able to change routes according to calculations made in constant time.

  • Large computer language models carry environmental, social risks

    Computer engineers at the world’s largest companies and universities are using machines to scan through tomes of written material. The goal? Teach these machines the gift of language. Do that, some even claim, and computers will be able to mimic the human brain.
    But this impressive compute capability comes with real costs, including perpetuating racism and causing significant environmental damage, according to a new paper, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” The paper is being presented Wednesday, March 10 at the ACM Conference on Fairness, Accountability and Transparency (ACM FAccT).
    This is the first exhaustive review of the literature surrounding the risks that come with rapid growth of language-learning technologies, said Emily M. Bender, a University of Washington professor of linguistics and a lead author of the paper along with Timnit Gebru, a well-known AI researcher.
    “The question we’re asking is what are the possible dangers of this approach and the answers that we’re giving involve surveying literature across a broad range of fields and pulling them together,” said Bender, who is the UW Howard and Frances Nostrand Endowed Professor.
    What the researchers surfaced was that there are downsides to the ever-growing computing power put into natural language models. They discuss how the ever-increasing size of training data for language modeling exacerbates social and environmental issues. Alarmingly, such language models perpetuate hegemonic language and can deceive people into thinking they are having a “real” conversation with a person rather than a machine. The increased computational needs of these models further contribute to environmental degradation.
    The authors were motivated to write the paper because of a trend within the field towards ever-larger language models and their growing spheres of influence.

    The paper has already generated widespread attention due, in part, to the fact that two of the paper’s co-authors say they were recently fired from Google for reasons that remain unsettled. Margaret Mitchell and Gebru, the two now-former Google researchers, said they stand by the paper’s scholarship and point to its conclusions as a clarion call to industry to take heed.
    “It’s very clear that putting in the concerns has to happen right now, because it’s already becoming too late,” said Mitchell, a researcher in AI.
    It takes an enormous amount of computing power to fuel these language model programs, Bender said. That takes up energy at tremendous scale, and that, the authors argue, causes environmental degradation. And those costs aren’t borne by the computer engineers, but rather by marginalized people who cannot afford the environmental costs.
    “It’s not just that there’s big energy impacts here, but also that the carbon impacts of that will bring costs first to people who are not benefiting from this technology,” Bender said. “When we do the cost-benefit analysis, it’s important to think of who’s getting the benefit and who’s paying the cost because they’re not the same people.”
    The large scale of this compute power also can restrict access to only the most well-resourced companies and research groups, leaving out smaller developers outside of the U.S., Canada, Europe and China. That’s because it takes huge machines to run the software necessary to make computers mimic human thought and speech.
    Another risk comes from the training data itself, the authors say. Because the computers read language from the Web and from other sources, they can pick up and perpetuate racist, sexist, ableist, extremist and other harmful ideologies.
    “One of the fallacies that people fall into is well, the internet is big, the internet is everything. If I just scrape the whole internet then clearly I’ve incorporated diverse viewpoints,” Bender said. “But when we did a step-by-step review of the literature, it says that’s not the case right now because not everybody’s on the internet, and of the people who are on the internet, not everybody is socially comfortable participating in the same way.”
    And people can mistake the language models for real human interaction, believing that they’re actually talking with a person or reading something that a person has spoken or written, when, in fact, the language comes from a machine. Thus, the stochastic parrots.
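    A deliberately tiny caricature of the idea, nothing like a real large language model, is a word-level Markov chain: it memorizes which word tends to follow which in its training text and samples from those statistics, producing fluent-looking output with no intent behind it.
    ```python
    import random

    random.seed(0)

    # A toy "stochastic parrot": it only knows which word followed which.
    corpus = ("the internet is big the internet is everything "
              "the model reads the internet and the model writes text").split()

    # Build a table of possible next words for each word.
    next_words = {}
    for current, following in zip(corpus, corpus[1:]):
        next_words.setdefault(current, []).append(following)

    word = "the"
    output = [word]
    for _ in range(12):
        word = random.choice(next_words.get(word, corpus))
        output.append(word)

    print(" ".join(output))   # fluent-looking word salad, no communicative intent
    ```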
    “It produces this seemingly coherent text, but it has no communicative intent. It has no idea what it’s saying. There’s no there there,” Bender said.

  • Robots can use eye contact to draw out reluctant participants in groups

    Eye contact is a key to establishing a connection, and teachers use it often to encourage participation. But can a robot do this too? Can it draw a response simply by making “eye” contact, even with people who are less inclined to speak up? A recent study suggests that it can.
    Researchers at KTH Royal Institute of Technology published results of experiments in which robots led a Swedish word game with individuals whose proficiency in the Nordic language was varied. They found that by redirecting its gaze to less proficient players, a robot can elicit involvement from even the most reluctant participants.
    Researchers Sarah Gillet and Ronald Cumbal say the results offer evidence that robots could play a productive role in educational settings.
    Calling on someone by name isn’t always the best way to elicit engagement, Gillet says. “Gaze can by nature influence very dynamically how much people are participating, especially if there is this natural tendency for imbalance — due to the differences in language proficiency,” she says.
    “If someone is not inclined to participate for some reason, we showed that gaze is able to overcome this difference and help everyone to participate.”
    Cumbal says that studies have shown that robots can support group discussion, but this is the first study to examine what happens when a robot uses gaze in a group interaction that isn’t balanced — when it is dominated by one or more individuals.
    The experiment involved pairs of players — one fluent in Swedish and one who was learning Swedish. The players were instructed to give the robot clues in Swedish so that it could guess the correct term. The face of the robot was an animated projection on a specially designed plastic mask.
    While it would be natural for a fluent speaker to dominate such a scenario, Cumbal says, the robot was able to prompt the participation of the less fluent player by redirecting its gaze naturally toward them and silently waiting for them to hazard an attempt.
    “Robot gaze can modify group dynamics — what role people take in a situation,” he says. “Our work builds on that and shows further that even when there is an imbalance in skills required for the activity, the gaze of a robot can still influence how the participants contribute.”

    Story Source:
    Materials provided by KTH, Royal Institute of Technology. Note: Content may be edited for style and length.

  • An electrically charged glass display smoothly transitions between a spectrum of colors

    Scientists have developed a see-through glass display with a high white light contrast ratio that smoothly transitions between a broad spectrum of colors when electrically charged. The technology, from researchers at Jilin University in Changchun, China, overcomes limitations of existing electrochromic devices by harnessing interactions between metal ions and ligands, opening the door for numerous future applications. The work appears March 10 in the journal Chem.
    “We believe that the method behind this see-through, non-emissive display may accelerate the development of transparent, eye-friendly displays with improved readability for bright working conditions,” says Yu-Mo Zhang, an associate professor of chemistry at Jilin University and an author on the study. “As an inevitable display technology in the near future, non-emissive see-through displays will be ubiquitous and irreplaceable as a part of the Internet of Things, in which physical objects are interconnected through software.”
    With the application of voltage, electrochromic displays offer a platform in which light’s properties can be continuously and reversibly manipulated. These devices have been proposed for use in windows, energy-saving electronic price tags, flashy billboards, rearview mirrors, augmented virtual reality, and even artificial irises. However, current models come with limitations — they tend to have low contrast ratios, especially for white light, poor stability, and limited color variations, all of which have prevented electrochromic displays from reaching their technological potential.
    To overcome these deficiencies, Yuyang Wang and colleagues developed a simple chemical approach in which metal ions induce a wide variety of switchable dyes to take on particular structures, then stabilize them once they have reached the desired configurations. To trigger a color change, the electrical field is simply applied to switch the metal ions’ valences, forming new bonds between the metal ions and molecular switches.
    “Differently from the traditional electrochromic materials, whose color-changing motifs and redox motifs are located at the same site, this new material is an indirect-redox-color-changing system composed by switchable dyes and multivalent metal ions,” says Zhang.
    To test this approach, the researchers fabricated an electrochromic device by injecting a material containing metal salts, dyes, electrolytes, and solvent into a sandwiched device with two electrodes and adhesive as a spacer. Next, they performed a battery of light spectrum and electrochemical tests, finding that the devices could effectively achieve cyan, magenta, yellow, red, green, black, pink, purple, and gray-black displays, while maintaining a high contrast ratio. The prototype also shifted seamlessly from a colorless, transparent display to black — the most useful color for commercial applications — with high coloration efficiency, low transmittance change voltage, and a white light contrast ratio that would be suitable for real transparent displays.
    “The low cost and simple preparation process of this glass device will also benefit its scalable production and commercial applications,” notes Zhang.
    Next, the researchers plan to optimize the display’s performance so that it may quickly meet the requirements of high-end displays for real-world applications. Additionally, to avoid leakage from its liquid components, they plan to develop improved fabrication technologies that can produce solid or semi-solid electrochromic devices.
    “We are hoping that more and more visionary researchers and engineers cooperate with each other to optimize the electrochromic displays and promote their commercialization,” says Zhang.
    The authors received financial support from the National Natural Science Foundation of China.

    Story Source:
    Materials provided by Cell Press. Note: Content may be edited for style and length.