More stories

    Mathematician reveals world’s oldest example of applied geometry

    A UNSW mathematician has revealed the origins of applied geometry on a 3700-year-old clay tablet that has been hiding in plain sight in a museum in Istanbul for over a century.
    The tablet — known as Si.427 — was discovered in the late 19th century in what is now central Iraq, but its significance was unknown until the UNSW scientist’s detective work was revealed today.
    Most excitingly, Si.427 is thought to be the oldest known example of applied geometry, and the study, released today in Foundations of Science, also reveals a compelling human story of land surveying.
    “Si.427 dates from the Old Babylonian (OB) period — 1900 to 1600 BCE,” says lead researcher Dr Daniel Mansfield from UNSW Science’s School of Mathematics and Statistics.
    “It’s the only known example of a cadastral document from the OB period, which is a plan used by surveyors to define land boundaries. In this case, it tells us legal and geometric details about a field that’s split after some of it was sold off.”
    This is a significant object because the surveyor uses what are now known as “Pythagorean triples” to make accurate right angles.
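    As a quick illustration of the idea (the specific numbers recorded on Si.427 are not reproduced here), a Pythagorean triple is a set of three whole numbers whose squares satisfy the Pythagorean relation, and by the converse of the Pythagorean theorem a triangle built with those side lengths must contain an exact right angle:

```latex
% The smallest Pythagorean triple, (3, 4, 5):
3^2 + 4^2 = 9 + 16 = 25 = 5^2 .
% By the converse of the Pythagorean theorem, a triangle with sides of
% 3, 4 and 5 units has a perfect right angle opposite its longest side,
% with no angle measurement required.
```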

    How chemical reactions compute

    A single molecule contains a wealth of information. It includes not only the number of each kind of constituent atom, but also how they’re arranged and how they attach to each other. And during chemical reactions, that information determines the outcome and becomes transformed. Molecules collide, break apart, reassemble, and rebuild in predictable ways.
    There’s another way of looking at a chemical reaction, says Santa Fe Institute External Professor Juan Pérez-Mercader, who is a physicist and astrobiologist based at Harvard University. It’s a kind of computation. A computing device is one that takes information as its input, then mechanically transforms that information and produces some output with a functional purpose. The input and output can be almost anything: numbers, letters, objects, images, symbols, or something else.
    Or, says Pérez-Mercader, molecules. When molecules react, they’re following the same steps that describe computation: Input, transformation, output. “It’s a computation that controls when certain events take place,” says Pérez-Mercader, “but at the nanometer scale, or shorter.”
    Molecules may be small, but their potential as tools of computation is enormous. “This is a very powerful computing tool that needs to be harnessed,” he says, noting that a single mole of a substance contains on the order of 10^23 elementary chemical processors capable of computation. For the last few years, Pérez-Mercader has been developing a new field he calls “native chemical computation.” It’s a multifaceted quest: He wants to not only exploit chemical computing but also find the challenges for which it is best suited.
    “If we have such a huge power, what kinds of problems can we tackle?” he asks. They’re not the same as those that might be better solved with a supercomputer, he says. “So what are they good for?”
    He has some ideas. Chemical reactions, he says, are very good at building things. So in 2017, his group “programmed” chemical reactions to use a bunch of molecules to assemble a container. The experiment demonstrated that these molecules, in a sense, could recognize information — and transform it in a specific way, analogous to computation.
    Pérez-Mercader and his chief collaborator on the project, chemical engineer Marta Dueñas-Díez at Harvard and the Repsol Technology Lab in Madrid, recently published a review of their progress on chemical computation. In it, they describe how chemical reactions can be used, in a lab, to build a wide range of familiar computing systems, from simple logic gates to Turing Machines. Their findings, says Pérez-Mercader, suggest that if chemical reactions can be “programmed” like other types of computing machines, they might be exploited for applications in many areas, including intelligent drug delivery, neural networks, or even artificial cells.
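    As a rough sketch of the logic-gate end of that spectrum (the reaction, rate constant, concentrations, and detection threshold below are illustrative assumptions, not values from the published work), a hypothetical bimolecular reaction A + B -> C behaves like an AND gate: the output species C accumulates past a detection threshold only when both inputs are supplied.

```python
# Toy "chemical AND gate": the product C appears only when both inputs A and B
# are present. The rate constant, concentrations, and threshold are illustrative.

def run_reaction(a0, b0, k=1.0, dt=0.01, steps=1000):
    """Integrate A + B -> C with mass-action kinetics using simple Euler steps."""
    a, b, c = a0, b0, 0.0
    for _ in range(steps):
        rate = k * a * b      # mass-action rate of the bimolecular step
        a -= rate * dt
        b -= rate * dt
        c += rate * dt
    return c

THRESHOLD = 0.5  # arbitrary detection level for reading the output as "1"

for a0, b0 in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
    c = run_reaction(a0, b0)
    print(f"A={a0:.0f}, B={b0:.0f} -> [C]={c:.2f}, output={int(c > THRESHOLD)}")
```

    Reading the four printed lines as a truth table gives an output of 1 only for the (1, 1) input, which is the defining behavior of an AND gate.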
    Story Source:
    Materials provided by Santa Fe Institute. Note: Content may be edited for style and length.

    Researchers use AI to unlock the secrets of ancient texts

    The Abbey Library of St. Gall in Switzerland is home to approximately 160,000 volumes of literary and historical manuscripts dating back to the eighth century — all of which are written by hand, on parchment, in languages rarely spoken in modern times.
    To preserve these historical accounts of humanity, such texts, numbering in the millions, have been kept safely stored away in libraries and monasteries all over the world. A significant portion of these collections is available to the general public through digital imagery, but experts say there is an extraordinary amount of material that has never been read — a treasure trove of insight into the world’s history hidden within.
    Now, researchers at the University of Notre Dame are developing an artificial neural network to read complex ancient handwriting, drawing on measurements of human perception to improve the capabilities of deep-learning transcription.
    “We’re dealing with historical documents written in styles that have long fallen out of fashion, going back many centuries, and in languages like Latin, which are rarely ever used anymore,” said Walter Scheirer, the Dennis O. Doughty Collegiate Associate Professor in the Department of Computer Science and Engineering at Notre Dame. “You can get beautiful photos of these materials, but what we’ve set out to do is automate transcription in a way that mimics the perception of the page through the eyes of the expert reader and provides a quick, searchable reading of the text.”
    In research published in the Institute of Electrical and Electronics Engineers journal Transactions on Pattern Analysis and Machine Intelligence, Scheirer outlines how his team combined traditional methods of machine learning with visual psychophysics — a method of measuring the connections between physical stimuli and mental phenomena, such as the amount of time it takes for an expert reader to recognize a specific character, gauge the quality of the handwriting or identify the use of certain abbreviations.
    Scheirer’s team studied digitized Latin manuscripts that were written by scribes in the Cloister of St. Gall in the ninth century. Readers entered their manual transcriptions into a specially designed software interface. The team then measured reaction times during transcription for an understanding of which words, characters and passages were easy or difficult. Scheirer explained that including that kind of data created a network more consistent with human behavior, reduced errors and provided a more accurate, more realistic reading of the text.
    “It’s a strategy not typically used in machine learning,” Scheirer said. “We’re labeling the data through these psychophysical measurements, which comes directly from psychological studies of perception — by taking behavioral measurements. We then inform the network of common difficulties in the perception of these characters and can make corrections based on those measurements.”
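    One simple way such behavioral measurements could enter training (a hypothetical sketch, not necessarily the team’s exact formulation; the model, data shapes, and weighting rule are placeholders) is to turn per-character reaction times into per-sample loss weights, so that characters the expert found hard influence the gradient differently from easy ones:

```python
# Hypothetical example: weight each training sample's loss by a difficulty
# score derived from expert reaction times (longer time = harder sample).
import torch
import torch.nn.functional as F

def psychophysical_weights(reaction_times):
    """Map reaction times (in seconds) to per-sample weights in roughly [0.5, 1.5]."""
    rt = torch.as_tensor(reaction_times, dtype=torch.float32)
    normalized = (rt - rt.mean()) / (rt.std() + 1e-8)
    return 1.0 + 0.5 * torch.tanh(normalized)   # harder samples get larger weights

def weighted_transcription_loss(logits, targets, reaction_times):
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (psychophysical_weights(reaction_times) * per_sample).mean()

# Tiny usage example with a random stand-in for a character classifier's output.
logits = torch.randn(4, 26)                # 4 samples, 26 character classes
targets = torch.tensor([0, 3, 7, 7])       # "correct" characters per sample
rts = [0.8, 2.5, 1.1, 4.0]                 # seconds the expert reader needed
print(weighted_transcription_loss(logits, targets, rts))
```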
    Using deep learning to transcribe ancient texts is of great interest to scholars in the humanities.
    “There’s a difference between just taking the photos and reading them, and having a program to provide a searchable reading,” said Hildegund Müller, associate professor in the Department of Classics at Notre Dame. “If you consider the texts used in this study — ninth-century manuscripts — that’s an early stage of the Middle Ages. It’s a long time before the printing press. That’s a time when an enormous amount of manuscripts was produced. There is all sorts of information hidden in these manuscripts — unidentified texts that nobody has seen before.”
    Scheirer said challenges remain. His team is working on improving the accuracy of transcriptions, especially for damaged or incomplete documents, and on how to account for illustrations or other aspects of a page that could confuse the network.
    However, the team was able to adjust the program to transcribe Ethiopian texts, adapting it to a language with a completely different set of characters — a first step toward developing a program with the capability to transcribe and translate information for users.
    “In the literary field, it could be really helpful. Every good literary work is surrounded by a vast amount of historical documents, but where it’s really going to be useful is in historical archival research,” said Müller. “There is a great need to advance the digital humanities. When you talk about the Middle Ages and early modern times, if you want to understand the details and consequences of historical events, you have to look through the written material, and these texts are the only thing we have. The problem may be even greater outside the Western world. Think of languages that are disappearing in cultures that are under threat. We must first of all preserve these works, make them accessible and, at some point, incorporate translations to make them a part of cultural processes that are still underway — and we are racing against time.”
    Story Source:
    Materials provided by University of Notre Dame. Original written by Jessica Sieff. Note: Content may be edited for style and length.

    Connective issue: AI learns by doing more with less

    Brains have evolved to do more with less. Take a tiny insect brain, which has fewer than a million neurons but shows a diversity of behaviors and is more energy-efficient than current AI systems. These tiny brains serve as models for computing systems that are becoming more sophisticated as billions of silicon neurons can be implemented on hardware.
    The secret to achieving energy-efficiency lies in the silicon neurons’ ability to learn to communicate and form networks, as shown by new research from the lab of Shantanu Chakrabartty, the Clifford W. Murphy Professor in the Preston M. Green Department of Electrical & Systems Engineering at Washington University in St. Louis’ McKelvey School of Engineering.
    Their results were published July 28, 2021 in the journal Frontiers in Neuroscience.
    For several years, his research group has studied dynamical systems approaches to address the neuron-to-network performance gap and to provide a blueprint for AI systems that are as energy-efficient as biological ones.
    Previous work from his group showed that in a computational system, spiking neurons create perturbations which allow each neuron to “know” which others are spiking and which are responding. It’s as if the neurons were all embedded in a rubber sheet formed by energy constraints; a single ripple, caused by a spike, would create a wave that affects them all. Like all physical processes, systems of silicon neurons tend to self-optimize to their least-energetic states, while also being affected by the other neurons in the network. These constraints come together to form a kind of secondary communication network, where additional information can be communicated through the dynamic but synchronized topology of spikes. It’s like the rubber sheet vibrating in a synchronized rhythm in response to multiple spikes.
    In the latest research result, Chakrabartty and doctoral student Ahana Gangopadhyay showed how the neurons learn to pick the most energy-efficient perturbations and wave patterns in the rubber sheet. They show that if the learning is guided by sparsity (less energy), it’s like the electrical stiffness of the rubber sheet is adjusted by each neuron so that the entire network vibrates in a most energy-efficient way. The neuron does this using only local information which is communicated more efficiently. Communications between the neurons then become an emergent phenomenon guided by the need to optimize energy use.
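    A generic toy version of sparsity-guided adaptation (not the specific formulation in the published work; the network size, input statistics, and adaptation rate are made up for illustration) is a population of leaky integrate-and-fire neurons whose thresholds creep upward whenever they fire more often than a low target rate, so overall spiking, and hence energy use, settles into a sparse regime:

```python
# Toy sparsity-guided spiking network: each leaky integrate-and-fire neuron
# raises its threshold when it fires above a low target rate, so network
# activity (a proxy for energy use) drifts toward a sparse regime.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_steps = 50, 2000
v = np.zeros(n_neurons)               # membrane potentials
theta = np.ones(n_neurons)            # adaptive firing thresholds
target_rate = 0.02                    # desired spikes per neuron per step (sparse)
leak, eta = 0.95, 0.05                # leak factor and threshold adaptation rate

spike_counts = []
for _ in range(n_steps):
    v = leak * v + rng.normal(0.1, 0.2, n_neurons)         # leaky integration of noisy drive
    spikes = v >= theta
    v[spikes] = 0.0                                        # reset neurons that spiked
    theta += eta * (spikes.astype(float) - target_rate)    # homeostatic threshold update
    theta = np.clip(theta, 0.1, None)
    spike_counts.append(int(spikes.sum()))

print("mean spikes/step, first 200 steps:", np.mean(spike_counts[:200]))
print("mean spikes/step, last 200 steps: ", np.mean(spike_counts[-200:]))
```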

    Running quantum software on a classical computer

    In a paper published in Nature Quantum Information, EPFL professor Giuseppe Carleo and Matija Medvidović, a graduate student at Columbia University and at the Flatiron Institute in New York, have found a way to execute a complex quantum computing algorithm on traditional computers instead of quantum ones.
    The specific “quantum software” they are considering is known as Quantum Approximate Optimization Algorithm (QAOA) and is used to solve classical optimization problems in mathematics; it’s essentially a way of picking the best solution to a problem out of a set of possible solutions. “There is a lot of interest in understanding what problems can be solved efficiently by a quantum computer, and QAOA is one of the more prominent candidates,” says Carleo.
    Ultimately, QAOA is meant to help us on the way to the famed “quantum speedup,” the predicted boost in processing speed that we can achieve with quantum computers instead of conventional ones. Understandably, QAOA has a number of proponents, including Google, which has its sights set on quantum technologies and computing in the near future: in 2019 it created Sycamore, a 53-qubit quantum processor, and used it to run a task that it estimated would take a state-of-the-art classical supercomputer around 10,000 years to complete. Sycamore ran the same task in 200 seconds.
    “But the barrier of ‘quantum speedup’ is anything but rigid; it is being continuously reshaped by new research, thanks also to progress in the development of more efficient classical algorithms,” says Carleo.
    In their study, Carleo and Medvidović address a key open question in the field: can algorithms running on current and near-term quantum computers offer a significant advantage over classical algorithms for tasks of practical interest? “If we are to answer that question, we first need to understand the limits of classical computing in simulating quantum systems,” says Carleo. This is especially important because current-generation quantum processors operate in a regime where they make errors when running quantum “software,” and can therefore only run algorithms of limited complexity.
    Using conventional computers, the two researchers developed a method that can approximately simulate the behavior of a special class of algorithms known as variational quantum algorithms, which are ways of working out the lowest energy state, or “ground state,” of a quantum system. QAOA is one important example of this family of quantum algorithms, which researchers believe are among the most promising candidates for “quantum advantage” in near-term quantum computers.
    The approach is based on the idea that modern machine-learning tools, such as the ones used to learn complex games like Go, can also be used to learn and emulate the inner workings of a quantum computer. The key tool for these simulations is Neural Network Quantum States, an artificial neural network that Carleo developed in 2016 with Matthias Troyer and that has now been used for the first time to simulate QAOA. The results tackle problems previously considered the province of quantum computing and set a new benchmark for the future development of quantum hardware.
    “Our work shows that the QAOA you can run on current and near-term quantum computers can be simulated, with good accuracy, on a classical computer too,” says Carleo. “However, this does not mean that all useful quantum algorithms that can be run on near-term quantum processors can be emulated classically. In fact, we hope that our approach will serve as a guide to devise new quantum algorithms that are both useful and hard to simulate for classical computers.”
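    To make the idea of classically simulating QAOA concrete in miniature, the sketch below brute-forces the state vector of a depth-1 QAOA circuit for MaxCut on a single two-node graph using plain NumPy (the problem size, angle grid, and code structure are illustrative; the published method uses neural-network quantum states precisely because brute-force simulation like this stops scaling beyond a few dozen qubits):

```python
# Brute-force state-vector simulation of depth-1 QAOA for MaxCut on one edge
# (2 qubits). Illustrative only; it does not scale to interesting problem sizes.
import numpy as np

# Cost function C(z): 1 if the two bits differ (the edge is "cut"), else 0,
# in the basis order |00>, |01>, |10>, |11>.
costs = np.array([0.0, 1.0, 1.0, 0.0])

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def qaoa_expectation(gamma, beta):
    state = np.full(4, 0.5, dtype=complex)           # |++>, the uniform superposition
    state = np.exp(-1j * gamma * costs) * state      # phase separation: exp(-i*gamma*C)
    rx = np.cos(beta) * I2 - 1j * np.sin(beta) * X   # single-qubit mixer exp(-i*beta*X)
    state = np.kron(rx, rx) @ state                  # apply the mixer to both qubits
    return float(np.real(np.vdot(state, costs * state)))  # expected cut value <C>

# Classical grid search over the two variational angles.
angles = np.linspace(0, np.pi, 60)
best = max((qaoa_expectation(g, b), g, b) for g in angles for b in angles)
print("best <C> = %.3f at gamma = %.2f, beta = %.2f" % best)  # approaches 1.0, the optimal cut
```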
    Story Source:
    Materials provided by Ecole Polytechnique Fédérale de Lausanne. Original written by Nik Papageorgiou. Note: Content may be edited for style and length.

    New viable means of storing information for quantum technologies?

    Quantum information could be behind the next technological revolution. By analogy with the bit in classical computing, the qubit is the basic element of quantum computing. However, demonstrating the existence of this information storage unit and putting it to use remain complex, which for now limits its applications.
    In a study published on 3 August 2021 in Physical Review X, an international research team consisting of CNRS researcher Fabio Pistolesi [1] and two foreign researchers used theoretical calculations to show that it is possible to realize a new type of qubit, in which information is stored in the oscillation amplitude of a carbon nanotube. These nanotubes can sustain a large number of oscillations before the motion dies away, a sign of their weak interaction with the environment, which makes them excellent potential qubits. This property would allow for greater reliability in quantum computation.
    A problem nevertheless persists with regard to reading and writing the information stored in the first two energy levels [2] of these oscillators. The scientists showed that this information could be read by exploiting the coupling between electrons, which are negatively charged particles, and the flexural (bending) mode of these nanotubes.
    This coupling changes the spacing between the first energy levels enough to make them addressable independently of the other levels, thereby making it possible to read the information they contain. These promising theoretical predictions have not yet been verified experimentally.
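    A schematic way to see why the change in level spacing matters (this is a textbook illustration, not the detailed model in the paper): an ideal harmonic oscillator has evenly spaced levels, so a drive tuned to the lowest transition also excites all the higher ones, while an effective anharmonicity induced by the electron-vibration coupling shifts the spacings apart and lets the two qubit levels be addressed on their own.

```latex
% Ideal harmonic oscillator: equal spacing, so the 0 -> 1 transition cannot
% be driven without also driving 1 -> 2, 2 -> 3, ...
E_n^{\mathrm{harm}} = \hbar\omega\left(n + \tfrac{1}{2}\right)
\quad\Rightarrow\quad
E_1 - E_0 = E_2 - E_1 = \hbar\omega .

% With an effective anharmonic correction of strength \alpha (illustrative form),
% the lowest transition detunes from the next one and can be driven selectively:
E_n \approx \hbar\omega\left(n + \tfrac{1}{2}\right) - \frac{\alpha}{2}\, n(n-1)
\quad\Rightarrow\quad
E_1 - E_0 = \hbar\omega, \qquad E_2 - E_1 = \hbar\omega - \alpha .
```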
    Notes
    1 — Researcher at the Laboratoire ondes et matières d’Aquitaine (CNRS/Université de Bordeaux). He worked with scientists from the University of Chicago (United States) and the Institute of Photonic Sciences in Barcelona (Spain).
    2 — An energy level is a quantity used to describe physical systems, with each level corresponding to a “state” of the system.
    Story Source:
    Materials provided by CNRS. Note: Content may be edited for style and length.

    Using virtual reality to help students understand the brain's complex systems, researchers demonstrate effectiveness of 3D visualization as a learning tool

    Researchers from the Neuroimaging Center at NYU Abu Dhabi (NYUAD) and the Wisconsin Institute for Discovery at the University of Wisconsin-Madison have developed the UW Virtual Brain Project™, producing unique, interactive, 3D narrated diagrams to help students learn about the structure and function of perceptual systems in the human brain. A new study exploring how students responded to these lessons on desktop PCs and in virtual reality (VR) offers new insights into the benefits of VR as an educational tool.
    Led by Bas Rokers, Associate Professor and Director of NYUAD’s Neuroimaging Center, and Karen Schloss, Assistant Professor of Psychology and a Principal Investigator in the Virtual Environments Group at the Wisconsin Institute for Discovery at the University of Wisconsin-Madison, the researchers have published the findings of their work in a new paper, “The UW Virtual Brain Project: An immersive approach to teaching functional neuroanatomy,” in the journal Translational Issues in Psychological Science from the American Psychological Association (APA). In their experiments, the researchers found that participants showed significant content-based learning on both devices, with no significant differences between PC and VR for content-based learning outcomes. However, VR far exceeded PC viewing for achieving experience-based learning outcomes; in other words, VR was more enjoyable and easier to use.
    “Students are enthusiastic about learning in VR,” said Rokers. “However, our findings indicate that learners can have similar access to learning about functional neuroanatomy through multiple platforms, which means that those who don’t have access to VR technology are not at an inherent disadvantage. The power of VR is its ability to transport learners to new environments they might not otherwise be able to explore. But, importantly, VR is not a substitute for real-world interactions with peers and instructors.”
    The 3D narrated videos are already in active use in classes that include neuroanatomy instruction at both the University of Wisconsin-Madison and NYUAD.
    Story Source:
    Materials provided by New York University. Note: Content may be edited for style and length.

    Decoding how salamanders walk

    Researchers at Tohoku University and the Swiss Federal Institute of Technology in Lausanne, with the support of the Human Frontier Science Program, have decoded the flexible motor control mechanisms underlying salamander walking.
    Their findings were published in the journal Frontiers in Neurorobotics on July 30, 2021.
    Animals with four feet can navigate complex, unpredictable, and unstructured environments. This impressive ability comes from their body-limb coordination.
    The salamander is an excellent specimen for studying body-limb coordination mechanisms. It is an amphibian that walks on four legs, swaying its body from left to right in a motion known as undulation.
    Its nervous system is simpler than that of mammals, and it changes its walking pattern according to the speed at which it is moving.
    To decode the salamander’s movement, researchers led by Professor Akio Ishiguro of the Research Institute of Electrical Communication at Tohoku University modeled the salamander’s nervous system mathematically and physically simulated the model.
    In making the model, the researchers hypothesized that the legs and the body are controlled to support each other’s motion by sharing sensory information. They then reproduced the speed-dependent gait transitions of salamanders through computer simulations.
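    A toy picture of what such body-limb coordination models look like (a generic coupled-oscillator sketch, not the authors’ equations; the frequency and coupling strength are arbitrary) is a pair of phase oscillators standing in for the left and right limbs that settle into anti-phase locking, the alternating pattern of a walking gait; in a full model, sensory feedback would modulate this coupling to produce the speed-dependent gait changes described above.

```python
# Toy body-limb coordination model: two phase oscillators (left and right limbs)
# coupled so that they lock into anti-phase, the alternating pattern of walking.
# Generic illustration, not the model from the paper.
import numpy as np

omega = 2 * np.pi * 1.0       # intrinsic stepping frequency (1 Hz)
k = 2.0                       # coupling strength
dt = 0.001                    # integration step (seconds)
theta = np.array([0.3, 0.5])  # phases start nearly in phase

for _ in range(int(5.0 / dt)):                               # simulate 5 seconds
    d0 = omega + k * np.sin(theta[1] - theta[0] - np.pi)     # each limb prefers a pi phase lag
    d1 = omega + k * np.sin(theta[0] - theta[1] - np.pi)
    theta = theta + dt * np.array([d0, d1])

phase_diff = (theta[1] - theta[0]) % (2 * np.pi)
print(f"final phase difference: {phase_diff:.2f} rad (anti-phase is about 3.14)")
```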
    “We hope this finding provides insights into the essential mechanism behind the adaptive and versatile locomotion of animals,” said Ishiguro.
    The researchers are confident their discovery will aid the development of robots that can move with high agility and adaptability by flexibly changing body-limb coordination patterns.
    Story Source:
    Materials provided by Tohoku University. Note: Content may be edited for style and length.