More stories

  • 'Triple contagion': How fears influence coronavirus transmission

    A new mathematical model for predicting infectious disease outbreaks incorporates fear — both of disease and of vaccines — to better understand how pandemics can occur in multiple waves of infections, like those we are seeing with COVID-19. The “Triple Contagion” model of disease and fears, developed by researchers at NYU School of Global Public Health, is published in the Journal of The Royal Society Interface.
    Human behaviors like social distancing (which suppresses spread) and vaccine refusal (which promotes it) have shaped the dynamics of epidemics for centuries. Yet, traditional epidemic models have overwhelmingly ignored human behavior and the fears that drive it.
    “Emotions like fear can override rational behavior and prompt unconstructive behavioral change,” said Joshua Epstein, professor of epidemiology at NYU School of Global Public Health, founding director of the NYU Agent-Based Modeling Laboratory, and the study’s lead author. “Fear of a contagious disease can shift how susceptible individuals behave; they may take action to protect themselves, but abandon those actions prematurely as fear decays.”
    For instance, the fear of catching a virus like SARS-CoV-2 can cause healthy people to self-isolate at home or wear masks, suppressing spread. But, because spread is reduced, the fear can evaporate — leading people to stop isolating or wearing masks too early, when there are still many infected people circulating. This pours fuel — in the form of susceptible people — onto the embers, and a new wave explodes.
    Likewise, fear of COVID-19 has motivated millions of people to get vaccinated. But as vaccines suppress spread and with it the fear of disease, people may fear the vaccine more than they do the infection and forego vaccination, again producing disease resurgence.
    For the first time, the “Triple Contagion” model couples these psychological dynamics to the disease dynamics, uncovering new behavioral mechanisms for pandemic persistence and successive waves of infection.
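    The story does not reproduce the model’s equations, but the feedback it describes (prevalence drives fear, fear suppresses contacts, fear decays as cases fall) can be sketched with a toy SIR-style simulation. The Python sketch below is an illustrative assumption, not the published Triple Contagion model; the fear variable F, its dynamics and all parameter values are invented for demonstration.

      # Toy SIR model with a fear variable that suppresses transmission.
      # Illustrative sketch only, NOT the published "Triple Contagion" model;
      # all parameters and functional forms are assumptions.

      def simulate(days=300, dt=0.1):
          S, I, R = 0.999, 0.001, 0.0   # susceptible, infected, recovered fractions
          F = 0.0                       # fraction of the population currently fearful
          beta0, gamma = 0.35, 0.10     # baseline transmission and recovery rates
          k_fear, k_decay = 5.0, 0.05   # fear grows with prevalence and decays on its own
          history, steps = [], int(days / dt)
          for step in range(steps):
              beta = beta0 * (1.0 - 0.8 * F)        # fearful people cut contacts
              new_inf = beta * S * I
              dS, dI, dR = -new_inf, new_inf - gamma * I, gamma * I
              # Fear tracks prevalence and fades when cases fall, which is what
              # lets a later wave ignite among the remaining susceptibles.
              dF = k_fear * I * (1.0 - F) - k_decay * F
              S, I, R, F = S + dS * dt, I + dI * dt, R + dR * dt, F + dF * dt
              history.append((step * dt, I, F))
          return history

      if __name__ == "__main__":
          for t, I, F in simulate()[::300]:
              print(f"day {t:6.1f}  infected {I:.4f}  fearful {F:.3f}")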

  • Towards next-gen computers: Mimicking brain functions with graphene-diamond junctions

    The human brain holds the secret to our unique personalities. But did you know that it can also form the basis of highly efficient computing devices? Researchers from Nagoya University, Japan, recently showed how to do this, through graphene-diamond junctions that mimic some of the human brain’s functions.
    But why would scientists try to emulate the human brain? Today’s computer architectures struggle with complex data, which limits their processing speed. The human brain, on the other hand, can process highly complex data, such as images, with high efficiency. Scientists have, therefore, tried to build “neuromorphic” architectures that mimic the neural network in the brain.
    A phenomenon essential for memory and learning is “synaptic plasticity,” the ability of synapses (neuronal links) to adapt in response to an increased or decreased activity. Scientists have tried to recreate a similar effect using transistors and “memristors” (electronic memory devices whose resistance can be stored). Recently developed light-controlled memristors, or “photomemristors,” can both detect light and provide non-volatile memory, similar to human visual perception and memory. These excellent properties have opened the door to a whole new world of materials that can act as artificial optoelectronic synapses!
    This motivated the research team from Nagoya University to design graphene-diamond junctions that can mimic the characteristics of biological synapses and key memory functions, opening doors for next-generation image sensing memory devices. In their recent study published in Carbon, the researchers, led by Dr. Kenji Ueda, demonstrated optoelectronically controlled synaptic functions using junctions between vertically aligned graphene (VG) and diamond. The fabricated junctions mimic biological synaptic functions, such as the production of “excitatory postsynaptic current” (EPSC) — the charge induced by neurotransmitters at the synaptic membrane — when stimulated with optical pulses and exhibit other basic brain functions such as the transition from short-term memory (STM) to long-term memory (LTM).
    Dr. Ueda explains, “Our brains are well-equipped to sieve through the information available and store what’s important. We tried something similar with our VG-diamond arrays, which emulate the human brain when exposed to optical stimuli.” He adds, “This study was triggered by a discovery in 2016, when we found a large optically induced conductivity change in graphene-diamond junctions.” Apart from EPSC, STM, and LTM, the junctions also show a paired-pulse facilitation of 300% — an increase in the postsynaptic current when a stimulus arrives shortly after a preceding one.
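    To make the paired-pulse figure concrete, the toy model below treats the synaptic response as a pulse plus a facilitation term that has not yet worn off when the second pulse arrives. It is a generic synapse sketch with invented parameters, not a model of the VG-diamond junctions; it only illustrates how a ratio of roughly 300% at short pulse intervals falls back to 100% at long ones.

      import math

      # Generic toy model of paired-pulse facilitation (PPF): the response to a
      # second pulse is amplified when it arrives shortly after the first.
      # Parameters are illustrative assumptions, not measurements of the
      # graphene-diamond junctions described above.

      TAU_FACIL = 80.0    # decay time of the facilitation effect (ms)
      A1 = 1.0            # amplitude evoked by the first pulse (arbitrary units)
      FACIL_GAIN = 2.5    # extra gain immediately after a pulse

      def second_pulse_amplitude(interval_ms):
          """Amplitude evoked by a second pulse arriving interval_ms after the first."""
          facilitation = FACIL_GAIN * math.exp(-interval_ms / TAU_FACIL)
          return A1 * (1.0 + facilitation)

      for dt in (10, 50, 200, 1000):
          ratio = 100 * second_pulse_amplitude(dt) / A1
          print(f"interval {dt:4d} ms  ->  PPF ratio {ratio:.0f}%")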
    The VG-diamond arrays underwent redox reactions induced by fluorescent light and blue LEDs under a bias voltage. The researchers attributed this to the presence of differently hybridized carbons of graphene and diamond at the junction interface, which led to the migration of ions in response to the light and in turn allowed the junctions to perform photo-sensing and photo-controllable functions similar to those performed by the brain and retina. In addition, the VG-diamond arrays surpassed the performance of conventional rare-metal-based photosensitive materials in terms of photosensitivity and structural simplicity.
    Dr. Ueda says, “Our study provides a better understanding of the working mechanism behind the artificial optoelectronic synaptic behaviors, paving the way for optically controllable brain-mimicking computers with better information-processing capabilities than existing computers.” The future of next-generation computing may not be too far away!
    Story Source:
    Materials provided by Nagoya University.

  • Dissolvable smartwatch makes for easier electronics recycling

    Small electronics, including smartwatches and fitness trackers, aren’t easily dismantled and recycled. So when a new model comes out, most users send the old devices into hazardous waste streams. To simplify small electronics recycling, researchers reporting in ACS Applied Materials & Interfaces have developed a two-metal nanocomposite for circuits that disintegrates when submerged in water. They demonstrated the circuits in a prototype transient device — a functional smartwatch that dissolved within 40 hours.
    Planned obsolescence and the fast pace of technology innovation lead to new devices that continuously replace old versions, generating millions of tons of electronic waste per year. Recycling can reduce the volume of e-waste and is mandatory in many places. However, it often isn’t worth the effort to recycle small consumer electronics because their parts must be salvaged by hand, and some processing steps, such as open burning and acid leaching, can cause health issues and environmental pollution. Dissolvable devices that break apart on demand could solve both of those problems. Previously, Xian Huang and colleagues developed a zinc-based nanocomposite that dissolved in water for use in temporary circuits, but it wasn’t conductive enough for consumer electronics. So, they wanted to improve their dissolvable nanocomposite’s electrical properties while also creating circuits robust enough to withstand everyday use.
    The researchers modified the zinc-based nanocomposite by adding silver nanowires, making it highly conductive. Then, they screen-printed the metallic solution onto pieces of poly(vinyl alcohol) — a polymer that degrades in water — and solidified the circuits by applying small droplets of water that facilitate chemical reactions and then evaporate. With this approach, the team made a smartwatch with multiple nanocomposite-printed circuit boards inside a 3D printed poly(vinyl alcohol) case. The smartwatch had sensors that accurately measured a person’s heart rate, blood oxygen levels and step count, and sent the information to a cellphone app via a Bluetooth connection. The outer package held up to sweat, but once the whole device was fully immersed in water, both the polymer case and circuits dissolved completely within 40 hours. All that was left behind were the watch’s components, such as an organic light-emitting diode (OLED) screen and microcontroller, as well as resistors and capacitors that had been integrated into the circuits. The researchers say the two-metal nanocomposite can be used to produce transient devices with performance matching that of commercial models, which could go a long way toward solving the challenges of small electronics waste.
    The authors do not acknowledge a funding source for this study.
    Story Source:
    Materials provided by American Chemical Society.

  • Mathematician reveals world’s oldest example of applied geometry

    A UNSW mathematician has revealed the origins of applied geometry on a 3700-year-old clay tablet that has been hiding in plain sight in a museum in Istanbul for over a century.
    The tablet — known as Si.427 — was discovered in the late 19th century in what is now central Iraq, but its significance was unknown until the UNSW scientist’s detective work was revealed today.
    Most excitingly, Si.427 is thought to be the oldest known example of applied geometry — and the study, released today in Foundations of Science, also reveals a compelling human story of land surveying.
    “Si.427 dates from the Old Babylonian (OB) period — 1900 to 1600 BCE,” says lead researcher Dr Daniel Mansfield from UNSW Science’s School of Mathematics and Statistics.
    “It’s the only known example of a cadastral document from the OB period, which is a plan used by surveyors to define land boundaries. In this case, it tells us legal and geometric details about a field that’s split after some of it was sold off.”
    This is a significant object because the surveyor uses what are now known as “Pythagorean triples” to make accurate right angles.
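    As a generic illustration of the idea (the story does not list the specific triples recorded on Si.427): a measuring rope divided into lengths of 3, 4 and 5 units closes into a triangle containing an exact right angle, because 3^2 + 4^2 = 9 + 16 = 25 = 5^2. Other triples such as (5, 12, 13) work the same way, letting a surveyor lay out perpendicular boundary lines without ever measuring an angle.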

  • How chemical reactions compute

    A single molecule contains a wealth of information. It includes not only the number of each kind of constituent atom, but also how they’re arranged and how they attach to each other. And during chemical reactions, that information determines the outcome and becomes transformed. Molecules collide, break apart, reassemble, and rebuild in predictable ways.
    There’s another way of looking at a chemical reaction, says Santa Fe Institute External Professor Juan Pérez-Mercader, who is a physicist and astrobiologist based at Harvard University. It’s a kind of computation. A computing device is one that takes information as its input, then mechanically transforms that information and produces some output with a functional purpose. The input and output can be almost anything: numbers, letters, objects, images, symbols, or something else.
    Or, says Pérez-Mercader, molecules. When molecules react, they’re following the same steps that describe computation: Input, transformation, output. “It’s a computation that controls when certain events take place,” says Pérez-Mercader, “but at the nanometer scale, or shorter.”
    Molecules may be small, but their potential as tools of computation is enormous. “This is a very powerful computing tool that needs to be harnessed,” he says, noting that a single mole of a substance has 10^23 elementary chemical processors capable of computation. For the last few years, Pérez-Mercader has been developing a new field he calls “native chemical computation.” It’s a multifaceted quest: He wants to not only exploit chemical computing but also find challenges for which it’s best-suited.
    “If we have such a huge power, what kinds of problems can we tackle?” he asks. They’re not the same as those that might be better solved with a supercomputer, he says. “So what are they good for?”
    He has some ideas. Chemical reactions, he says, are very good at building things. So in 2017, his group “programmed” chemical reactions to use a bunch of molecules to assemble a container. The experiment demonstrated that these molecules, in a sense, could recognize information — and transform it in a specific way, analogous to computation.
    Pérez-Mercader and his chief collaborator on the project, chemical engineer Marta Dueñas-Díez at Harvard and the Repsol Technology Lab in Madrid, recently published a review of their progress on chemical computation. In it, they describe how chemical reactions can be used, in a lab, to build a wide range of familiar computing systems, from simple logic gates to Turing Machines. Their findings, says Pérez-Mercader, suggest that if chemical reactions can be “programmed” like other types of computing machines, they might be exploited for applications in many areas, including intelligent drug delivery, neural networks, or even artificial cells.
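    The story above does not spell out a specific reaction scheme, but the basic idea of a reaction acting as a logic gate can be sketched in a few lines: treat two reactant species as the inputs and a product species as the output. The Python sketch below integrates mass-action kinetics for a hypothetical reaction A + B -> C and reads the output as 1 only when the product concentration passes a threshold, which happens only if both inputs are present, i.e. an AND gate. The species, rate constant and threshold are invented for illustration and are not taken from the published work.

      # Hypothetical chemical AND gate: A + B -> C with mass-action kinetics.
      # Output is 1 if the product concentration [C] passes a threshold.
      # Species, rate constant and threshold are illustrative assumptions.

      def chemical_and(a_present, b_present, k=1.0, dt=0.01, t_end=20.0, threshold=0.5):
          A = 1.0 if a_present else 0.0   # initial [A]
          B = 1.0 if b_present else 0.0   # initial [B]
          C = 0.0
          for _ in range(int(t_end / dt)):
              rate = k * A * B            # mass-action rate of A + B -> C
              A -= rate * dt
              B -= rate * dt
              C += rate * dt
          return (1 if C > threshold else 0), C

      for a in (0, 1):
          for b in (0, 1):
              out, conc = chemical_and(a, b)
              print(f"A={a} B={b} -> output {out}  ([C] = {conc:.2f})")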
    Story Source:
    Materials provided by Santa Fe Institute.

  • Researchers use AI to unlock the secrets of ancient texts

    The Abbey Library of St. Gall in Switzerland is home to approximately 160,000 volumes of literary and historical manuscripts dating back to the eighth century — all of which are written by hand, on parchment, in languages rarely spoken in modern times.
    To preserve these historical accounts of humanity, such texts, numbering in the millions, have been kept safely stored away in libraries and monasteries all over the world. A significant portion of these collections are available to the general public through digital imagery, but experts say there is an extraordinary amount of material that has never been read — a treasure trove of insight into the world’s history hidden within.
    Now, researchers at the University of Notre Dame are developing an artificial neural network that reads complex ancient handwriting and uses measurements of human perception to improve the capabilities of deep-learning transcription.
    “We’re dealing with historical documents written in styles that have long fallen out of fashion, going back many centuries, and in languages like Latin, which are rarely ever used anymore,” said Walter Scheirer, the Dennis O. Doughty Collegiate Associate Professor in the Department of Computer Science and Engineering at Notre Dame. “You can get beautiful photos of these materials, but what we’ve set out to do is automate transcription in a way that mimics the perception of the page through the eyes of the expert reader and provides a quick, searchable reading of the text.”
    In research published in the Institute of Electrical and Electronics Engineers journal Transactions on Pattern Analysis and Machine Intelligence, Scheirer outlines how his team combined traditional methods of machine learning with visual psychophysics — a method of measuring the connections between physical stimuli and mental phenomena, such as the amount of time it takes for an expert reader to recognize a specific character, gauge the quality of the handwriting or identify the use of certain abbreviations.
    Scheirer’s team studied digitized Latin manuscripts that were written by scribes in the Cloister of St. Gall in the ninth century. Readers entered their manual transcriptions into a specially designed software interface. The team then measured reaction times during transcription for an understanding of which words, characters and passages were easy or difficult. Scheirer explained that including that kind of data created a network more consistent with human behavior, reduced errors and provided a more accurate, more realistic reading of the text.
    “It’s a strategy not typically used in machine learning,” Scheirer said. “We’re labeling the data through these psychophysical measurements, which comes directly from psychological studies of perception — by taking behavioral measurements. We then inform the network of common difficulties in the perception of these characters and can make corrections based on those measurements.”
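    The story does not detail how the reaction-time measurements enter the training procedure, so the sketch below shows one simple way such behavioral data could be folded into a loss function: each training example is weighted by how long the human transcriber took on it, so the network is pushed hardest on the characters people find difficult. The data, weighting rule and function names are invented for illustration; this is not the Notre Dame team’s published pipeline.

      import math

      # Generic sketch: fold psychophysical measurements (here, reaction times)
      # into per-example weights on the training loss. Data and weighting rule
      # are invented for illustration; this is not the published pipeline.

      def weighted_cross_entropy(prob_correct, reaction_times):
          """Average cross-entropy, with slow (difficult) examples weighted more."""
          mean_rt = sum(reaction_times) / len(reaction_times)
          weights = [rt / mean_rt for rt in reaction_times]   # harder -> weight > 1
          losses = [-w * math.log(p) for w, p in zip(weights, prob_correct)]
          return sum(losses) / len(losses)

      # Model's predicted probability for the correct character, per example,
      # and the reader's reaction time (ms) when transcribing that character.
      probs = [0.9, 0.8, 0.4, 0.95]
      rts = [400, 450, 1200, 380]

      print(f"weighted loss: {weighted_cross_entropy(probs, rts):.3f}")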
    Using deep learning to transcribe ancient texts is something of great interest to scholars in the humanities.
    “There’s a difference between just taking the photos and reading them, and having a program to provide a searchable reading,” said Hildegund Müller, associate professor in the Department of Classics at Notre Dame. “If you consider the texts used in this study — ninth-century manuscripts — that’s an early stage of the Middle Ages. It’s a long time before the printing press. That’s a time when an enormous amount of manuscripts was produced. There is all sorts of information hidden in these manuscripts — unidentified texts that nobody has seen before.”
    Scheirer said challenges remain. His team is working on improving accuracy of transcriptions, especially in the case of damaged or incomplete documents, as well as how to account for illustrations or other aspects of a page that could be confusing to the network.
    However, the team was able to adjust the program to transcribe Ethiopian texts, adapting it to a language with a completely different set of characters — a first step toward developing a program with the capability to transcribe and translate information for users.
    “In the literary field, it could be really helpful. Every good literary work is surrounded by a vast amount of historical documents, but where it’s really going to be useful is in historical archival research,” said Müller. “There is a great need to advance the digital humanities. When you talk about the Middle Ages and early modern times, if you want to understand the details and consequences of historical events, you have to look through the written material, and these texts are the only thing we have. The problem may be even greater outside the Western world. Think of languages that are disappearing in cultures that are under threat. We must first of all preserve these works, make them accessible and, at some point, incorporate translations to make them a part of cultural processes that are still underway — and we are racing against time.”
    Story Source:
    Materials provided by University of Notre Dame. Original written by Jessica Sieff.

  • Connective issue: AI learns by doing more with less

    Brains have evolved to do more with less. Take a tiny insect brain, which has fewer than a million neurons but shows a diversity of behaviors and is more energy-efficient than current AI systems. These tiny brains serve as models for computing systems that are becoming more sophisticated as billions of silicon neurons can be implemented on hardware.
    The secret to achieving energy-efficiency lies in the silicon neurons’ ability to learn to communicate and form networks, as shown by new research from the lab of Shantanu Chakrabartty, the Clifford W. Murphy Professor in the Preston M. Green Department of Electrical & Systems Engineering at Washington University in St. Louis’ McKelvey School of Engineering.
    Their results were published July 28, 2021 in the journal Frontiers in Neuroscience.
    For several years, his research group studied dynamical systems approaches to address the neuron-to-network performance gap and provide a blueprint for AI systems as energy efficient as biological ones.
    Previous work from his group showed that in a computational system, spiking neurons create perturbations which allow each neuron to “know” which others are spiking and which are responding. It’s as if the neurons were all embedded in a rubber sheet formed by energy constraints; a single ripple, caused by a spike, would create a wave that affects them all. Like all physical processes, systems of silicon neurons tend to self-optimize to their least-energetic states, while also being affected by the other neurons in the network. These constraints come together to form a kind of secondary communication network, where additional information can be communicated through the dynamic but synchronized topology of spikes. It’s like the rubber sheet vibrating in a synchronized rhythm in response to multiple spikes.
    In the latest research result, Chakrabartty and doctoral student Ahana Gangopadhyay showed how the neurons learn to pick the most energy-efficient perturbations and wave patterns in the rubber sheet. They show that if the learning is guided by sparsity (less energy), it’s as if the electrical stiffness of the rubber sheet were adjusted by each neuron so that the entire network vibrates in the most energy-efficient way. Each neuron does this using only local information, which is communicated more efficiently. Communication between the neurons then becomes an emergent phenomenon guided by the need to optimize energy use.
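    The press release describes the mechanism only through the rubber-sheet analogy, so the sketch below is a generic stand-in: a small network of units relaxes to the configuration that minimizes a quadratic “energy” plus a sparsity penalty, the kind of objective under which only a few units stay active. The coupling matrix, drive, penalty strength and update rule are assumptions chosen for illustration; this is not the spiking-neuron model used in the study.

      # Toy illustration: a small network settles into the configuration that
      # minimizes a quadratic "energy" plus an L1 sparsity penalty.
      # Q, b, the penalty strength and the update rule are assumptions for
      # illustration; this is not the spiking-neuron model used in the study.

      Q = [[2.0, 0.5, 0.3],
           [0.5, 2.0, 0.4],
           [0.3, 0.4, 2.0]]      # symmetric coupling ("stiffness") between units
      b = [1.0, 0.2, 0.8]        # external drive to each unit

      def settle(lam, sweeps=200):
          """Coordinate descent on E(x) = 0.5*x'Qx - b'x + lam*sum(x), with x >= 0."""
          n = len(b)
          x = [0.0] * n
          for _ in range(sweeps):
              for i in range(n):
                  cross = sum(Q[i][j] * x[j] for j in range(n) if j != i)
                  x[i] = max(0.0, (b[i] - lam - cross) / Q[i][i])
          return x

      for lam in (0.0, 0.3, 0.7):
          x = settle(lam)
          active = sum(1 for v in x if v > 1e-6)
          print(f"penalty {lam:.1f}: activities {[round(v, 3) for v in x]}  "
                f"({active} active units)")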

  • Running quantum software on a classical computer

    In a paper published in npj Quantum Information, EPFL professor Giuseppe Carleo and Matija Medvidović, a graduate student at Columbia University and at the Flatiron Institute in New York, have found a way to execute a complex quantum computing algorithm on traditional computers instead of quantum ones.
    The specific “quantum software” they are considering is known as Quantum Approximate Optimization Algorithm (QAOA) and is used to solve classical optimization problems in mathematics; it’s essentially a way of picking the best solution to a problem out of a set of possible solutions. “There is a lot of interest in understanding what problems can be solved efficiently by a quantum computer, and QAOA is one of the more prominent candidates,” says Carleo.
    Ultimately, QAOA is meant to help us on the way to the famed “quantum speedup,” the predicted boost in processing speed that we can achieve with quantum computers instead of conventional ones. Understandably, QAOA has a number of proponents, including Google, who have their sights set on quantum technologies and computing in the near future: in 2019 they created Sycamore, a 53-qubit quantum processor, and used it to run a task that they estimated would take a state-of-the-art classical supercomputer around 10,000 years to complete. Sycamore ran the same task in 200 seconds.
    “But the barrier of ‘quantum speedup’ is anything but rigid, and it is being continuously reshaped by new research, thanks also to progress in the development of more efficient classical algorithms,” says Carleo.
    In their study, Carleo and Medvidović address a key open question in the field: can algorithms running on current and near-term quantum computers offer a significant advantage over classical algorithms for tasks of practical interest? “If we are to answer that question, we first need to understand the limits of classical computing in simulating quantum systems,” says Carleo. This is especially important since current-generation quantum processors operate in a regime where they make errors when running quantum “software,” and can therefore only run algorithms of limited complexity.
    Using conventional computers, the two researchers developed a method that can approximately simulate the behavior of a special class of algorithms known as variational quantum algorithms, which are ways of working out the lowest-energy state, or “ground state,” of a quantum system. QAOA is one important example of this family of quantum algorithms, which researchers believe are among the most promising candidates for “quantum advantage” in near-term quantum computers.
    The approach is based on the idea that modern machine-learning tools, e.g. the ones used in learning complex games like Go, can also be used to learn and emulate the inner workings of a quantum computer. The key tool for these simulations is the Neural Network Quantum State, an artificial neural network that Carleo developed in 2016 with Matthias Troyer and that has now been used for the first time to simulate QAOA. The results reach into territory previously considered the province of quantum computing, and they set a new benchmark for the future development of quantum hardware.
    “Our work shows that the QAOA you can run on current and near-term quantum computers can be simulated, with good accuracy, on a classical computer too,” says Carleo. “However, this does not mean that all useful quantum algorithms that can be run on near-term quantum processors can be emulated classically. In fact, we hope that our approach will serve as a guide to devise new quantum algorithms that are both useful and hard to simulate for classical computers.”
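    To make the idea of classically simulating QAOA concrete, the sketch below runs depth-1 QAOA for MaxCut on a three-node ring by brute-force statevector arithmetic with NumPy. It only illustrates what the algorithm computes on the smallest possible instance; it uses exact simulation rather than the neural-network-based approximation developed by Carleo and Medvidović, and the graph, depth and grid search are chosen purely for demonstration.

      import numpy as np

      # Brute-force statevector simulation of depth-1 QAOA for MaxCut on a
      # 3-node ring. Illustrative only: an exact simulation of a tiny instance,
      # not the neural-network-based approximation described in the article.

      EDGES = [(0, 1), (1, 2), (0, 2)]
      N = 3
      DIM = 2 ** N

      def cut_value(z):
          """Number of edges cut by the bitstring encoded in the integer z."""
          bits = [(z >> q) & 1 for q in range(N)]
          return sum(bits[i] != bits[j] for i, j in EDGES)

      CUTS = np.array([cut_value(z) for z in range(DIM)], dtype=float)

      def qaoa_expectation(gamma, beta):
          psi = np.full(DIM, 1.0 / np.sqrt(DIM), dtype=complex)  # uniform superposition
          psi = psi * np.exp(-1j * gamma * CUTS)                 # cost layer: phase by cut value
          c, s = np.cos(beta), np.sin(beta)
          for q in range(N):                                     # mixer: exp(-i*beta*X) on each qubit
              flipped = psi[np.arange(DIM) ^ (1 << q)]
              psi = c * psi - 1j * s * flipped
          return float(np.sum(np.abs(psi) ** 2 * CUTS))          # expected cut of the final state

      # Coarse grid search over the two variational angles.
      best = max(
          ((qaoa_expectation(g, b), g, b)
           for g in np.linspace(0, np.pi, 40)
           for b in np.linspace(0, np.pi, 40)),
          key=lambda t: t[0],
      )
      print(f"best expected cut {best[0]:.3f} at gamma={best[1]:.2f}, beta={best[2]:.2f} "
            f"(the optimal cut for this graph is 2)")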
    Story Source:
    Materials provided by Ecole Polytechnique Fédérale de Lausanne. Original written by Nik Papageorgiou.