More stories

  • Trapping spins with sound

    Electrons captured at such color centers typically absorb light in the visible spectrum, so that a transparent material becomes colored in the presence of these centers, for instance in diamond. “Color centers often come along with certain magnetic properties, making them promising systems for applications in quantum technologies, like quantum memories — the qubits — or quantum sensors. The challenge here is to develop efficient methods to control the magnetic quantum property of electrons, or, in this case, their spin states,” explains Dr. Georgy Astakhov from HZDR’s Institute of Ion Beam Physics and Materials Research.
    His team colleague Dr. Alberto Hernández-Mínguez from the Paul-Drude-Institut expands on the subject: “This is typically realized by applying electromagnetic fields, but an alternative method is the use of mechanical vibrations like surface acoustic waves. These are sound waves confined to the surface of a solid that resemble water waves on a lake. They are commonly integrated in microchips as radio frequency filters, oscillators and transformers in current electronic devices like mobile phones, tablets and laptops.”
    Tuning the spin to the sound of a surface
    In their paper, the researchers demonstrate the use of surface acoustic waves for on-chip control of electron spins in silicon carbide, a semiconductor expected to replace silicon in many applications requiring high-power electronics, for instance in electric vehicles. “You might think of this control like tuning a guitar with a regular electronic tuner,” says Dr. Alexander Poshakinskiy from the Ioffe Physical-Technical Institute in St. Petersburg. “Only in our experiment it is a bit more complicated: a magnetic field tunes the resonant frequencies of the electron spin to the frequency of the acoustic wave, while a laser induces transitions between the ground and excited state of the color center.”
    These optical transitions play a fundamental role: they enable optical detection of the spin state by registering the light quanta emitted when the electron returns to the ground state. Thanks to a giant interaction between the periodic vibrations of the crystal lattice and the electrons trapped in the color centers, the scientists achieve simultaneous acoustic control of the electron spin in both its ground and excited states.
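    As a rough, back-of-the-envelope illustration of the tuning step described above (not a calculation from the paper; it assumes a free-electron g-factor of 2, whereas the actual centers may differ), the resonance condition g·μB·B = h·f gives the magnetic field that matches the spin splitting to a given acoustic-wave frequency:

```python
# Rough illustration (not from the paper): field needed to bring an electron
# spin's Zeeman splitting g * mu_B * B into resonance with a surface acoustic
# wave of frequency f, i.e. g * mu_B * B = h * f. Assumes a free-electron
# g-factor of 2; the actual color centers may differ.
H_PLANCK = 6.62607015e-34   # Planck constant, J*s
MU_BOHR = 9.2740100783e-24  # Bohr magneton, J/T

def resonant_field(saw_frequency_hz, g_factor=2.0):
    """Magnetic field (tesla) matching the spin splitting to the SAW frequency."""
    return H_PLANCK * saw_frequency_hz / (g_factor * MU_BOHR)

# SAWs in microchips typically run at radio frequencies around 1 GHz:
print(f"{resonant_field(1e9) * 1e3:.1f} mT")  # prints "35.7 mT"
```

    For acoustic waves in the gigahertz range typical of microchip filters, the required field is a few tens of millitesla, well within reach of a small laboratory magnet.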
    At this point, Hernández-Mínguez brings another physical process into play: precession. “Anybody who played with a spinning top as a child has experienced precession as a change in the orientation of the rotational axis while trying to tilt it. An electron spin can be imagined as a tiny spinning top as well, in our case with a precession axis that, under the influence of the acoustic wave, changes its orientation every time the color center jumps between the ground and excited state. Now, since the amount of time spent by the color center in the excited state is random, the large difference in the alignment of the precession axes in the ground and excited states changes the orientation of the electron spin in an uncontrolled way.”
    After several such jumps, the quantum information stored in the electron spin is lost. In their work, the researchers show a way to prevent this: by appropriately tuning the resonant frequencies of the color center, the precession axes of the spin in the ground and excited states become what the scientists call collinear, so that the spins keep precessing about a well-defined direction even when they jump between the ground and excited states.
    Under this specific condition, the quantum information stored in the electron spin becomes decoupled from the jumps between ground and excited state caused by the laser. This technique of acoustic manipulation provides new opportunities for the processing of quantum information in quantum devices with dimensions similar to those of current microchips. This should have a significant impact on the fabrication cost and, therefore, the availability of quantum technologies to the general public.
    Story Source:
    Materials provided by Helmholtz-Zentrum Dresden-Rossendorf. Note: Content may be edited for style and length.

  • Spiders' web secrets unraveled

    Johns Hopkins University researchers discovered precisely how spiders build webs by using night vision and artificial intelligence to track and record every movement of all eight legs as spiders worked in the dark.
    Their creation of a web-building playbook or algorithm brings new understanding of how creatures with brains a fraction of the size of a human’s are able to create structures of such elegance, complexity and geometric precision. The findings, now available online, are set to be published in the November issue of Current Biology.
    “I first got interested in this topic while I was out birding with my son. After seeing a spectacular web I thought, ‘if you went to a zoo and saw a chimpanzee building this you’d think that’s one amazing and impressive chimpanzee.’ Well this is even more amazing because a spider’s brain is so tiny and I was frustrated that we didn’t know more about how this remarkable behavior occurs,” said senior author Andrew Gordus, a Johns Hopkins behavioral biologist. “Now we’ve defined the entire choreography for web building, which has never been done for any animal architecture at this fine of a resolution.”
    Web-weaving spiders, which build blindly using only the sense of touch, have fascinated humans for centuries. Not all spiders build webs, but those that do belong to a subset of animal species known for their architectural creations, like nest-building birds and pufferfish that create elaborate sand circles when mating.
    The first step to understanding how the relatively small brains of these animal architects support their high-level construction projects is to systematically document and analyze the behaviors and motor skills involved, which until now had never been done, mainly because of the challenges of capturing and recording the actions, Gordus said.
    Here his team studied a hackled orb weaver, a spider native to the western United States that’s small enough to sit comfortably on a fingertip. To observe the spiders during their nighttime web-building work, the lab designed an arena with infrared cameras and infrared lights. With that set-up they monitored and recorded six spiders every night as they constructed webs. They tracked the millions of individual leg actions with machine vision software designed specifically to detect limb movement.

  • Key to resilient energy-efficient AI/machine learning may reside in human brain

    A clearer understanding of how brain cells known as astrocytes function, and how they can be emulated in the physics of hardware devices, may result in artificial intelligence (AI) and machine learning that autonomously self-repairs and consumes much less energy than the technologies currently do, according to a team of Penn State researchers.
    Astrocytes are named for their star shape and are a type of glial cell, which are support cells for neurons in the brain. They play a crucial role in brain functions such as memory, learning, self-repair and synchronization.
    “This project stemmed from recent observations in computational neuroscience, as there has been a lot of effort toward understanding how the brain works, and people are trying to revise the model of simplistic neuron-synapse connections,” said Abhronil Sengupta, assistant professor of electrical engineering and computer science. “It turns out there is a third component in the brain, the astrocytes, which constitute a significant fraction of the cells in the brain, but whose role in machine learning and neuroscience has kind of been overlooked.”
    At the same time, the AI and machine learning fields are experiencing a boom. According to the analytics firm Burning Glass Technologies, demand for AI and machine learning skills is expected to increase at a compound growth rate of 71% by 2025. However, AI and machine learning face a challenge as the use of these technologies increases: they use a lot of energy.
    “An often-underestimated issue of AI and machine learning is the amount of power consumption of these systems,” Sengupta said. “A few years back, for instance, IBM tried to simulate the brain activity of a cat, and in doing so ended up consuming around a few megawatts of power. And if we were to just extend this number to simulate brain activity of a human being on the best possible supercomputer we have today, the power consumption would be even higher than megawatts.”
    All this power usage is due to the complex dance of switches, semiconductors and other mechanical and electrical processes that happens in computer processing, and it grows sharply when the processes are as complex as those AI and machine learning demand. A potential solution is neuromorphic computing, which mimics brain functions. Neuromorphic computing interests researchers because the human brain has evolved to use much less energy for its processes than a computer does, so mimicking those functions would make AI and machine learning more energy-efficient.

  • Innovative chip resolves quantum headache

    Quantum physicists at the University of Copenhagen are reporting an international achievement for Denmark in the field of quantum technology. By simultaneously operating multiple spin qubits on the same quantum chip, they surmounted a key obstacle on the road to the supercomputer of the future. The result bodes well for the use of semiconductor materials as a platform for solid-state quantum computers.
    One of the engineering headaches in the global marathon towards a large functional quantum computer is the control of many basic memory devices — qubits — simultaneously. This is because the control of one qubit is typically negatively affected by simultaneous control pulses applied to another qubit. Now, a pair of young quantum physicists at the University of Copenhagen’s Niels Bohr Institute — PhD student, now postdoc, Federico Fedele, 29, and Asst. Prof. Anasua Chatterjee, 32 — working in the group of Assoc. Prof. Ferdinand Kuemmeth, have managed to overcome this obstacle.
    Global qubit research is based on various technologies. While Google and IBM have come far with quantum processors based on superconductor technology, the UCPH research group is betting on semiconductor qubits — known as spin qubits.
    “Broadly speaking, they consist of electron spins trapped in semiconducting nanostructures called quantum dots, such that individual spin states can be controlled and entangled with each other,” explains Federico Fedele.
    Spin qubits have the advantage of maintaining their quantum states for a long time. This potentially allows them to perform faster and more reliable computations than other platform types. And they are so minuscule that far more of them can be squeezed onto a chip than with other qubit approaches. The more qubits, the greater a computer’s processing power. The UCPH team has extended the state of the art by fabricating and operating four qubits in a 2×2 array on a single chip.
    Circuitry is ‘the name of the game’
    Thus far, the greatest focus of quantum technology has been on producing better and better qubits. Now it’s about getting them to communicate with each other, explains Anasua Chatterjee.

  • Researchers set ‘ultrabroadband’ record with entangled photons

    Quantum entanglement — or what Albert Einstein once referred to as “spooky action at a distance” — occurs when two quantum particles are connected to each other, even when millions of miles apart. Any observation of one particle affects the other as if they were communicating with each other. When this entanglement involves photons, interesting possibilities emerge, including entangling the photons’ frequencies, the bandwidth of which can be controlled.
    Researchers at the University of Rochester have taken advantage of this phenomenon to generate an incredibly large bandwidth by using a thin-film nanophotonic device they describe in Physical Review Letters.
    The breakthrough could lead to:
    • Enhanced sensitivity and resolution for experiments in metrology and sensing, including spectroscopy, nonlinear microscopy, and quantum optical coherence tomography
    • Higher-dimensional encoding of information in quantum networks for information processing and communications
    “This work represents a major leap forward in producing ultrabroadband quantum entanglement on a nanophotonic chip,” says Qiang Lin, professor of electrical and computer engineering. “And it demonstrates the power of nanotechnology for developing future quantum devices for communication, computing, and sensing.”
    No more tradeoff between bandwidth and brightness
    To date, most devices used to generate broadband entanglement of light have resorted to dividing up a bulk crystal into small sections, each with slightly varying optical properties and each generating different frequencies of the photon pairs. The frequencies are then added together to give a larger bandwidth.
    “This is quite inefficient and comes at a cost of reduced brightness and purity of the photons,” says lead author Usman Javid, a PhD student in Lin’s lab. In those devices, “there will always be a tradeoff between the bandwidth and the brightness of the generated photon pairs, and one has to make a choice between the two. We have completely circumvented this tradeoff with our dispersion engineering technique to get both: a record-high bandwidth at a record-high brightness.”
    The thin-film lithium niobate nanophotonic device created by Lin’s lab uses a single waveguide with electrodes on both sides. Whereas a bulk device can be millimeters across, the thin-film device has a thickness of 600 nanometers — more than a million times smaller in its cross-sectional area than a bulk crystal, according to Javid. This makes the propagation of light extremely sensitive to the dimensions of the waveguide.
    Indeed, even a variation of a few nanometers can cause significant changes to the phase and group velocity of the light propagating through it. As a result, the researchers’ thin-film device allows precise control over the bandwidth in which the pair-generation process is momentum-matched. “We can then solve a parameter optimization problem to find the geometry that maximizes this bandwidth,” Javid says.
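    The kind of parameter optimization Javid describes can be sketched with a toy model (every number and the dispersion law below are invented for illustration; the actual device model is far more detailed): expand the phase mismatch around the degenerate frequency and pick the geometry that zeroes the leading dispersion term.

```python
# Toy sketch of dispersion engineering (invented numbers, not the paper's
# model). Near the degenerate frequency the pair-generation phase mismatch
# expands as
#   delta_k(Omega) ~ beta2(w) * Omega**2 + beta4 * Omega**4,
# where Omega is the detuning and w a geometry parameter such as the width.
import numpy as np

L = 5e-3       # waveguide length in metres (assumed)
BETA4 = 1e-55  # fourth-order dispersion, s^4/m (invented)

def beta2(width_nm):
    """Invented linear width dependence; crosses zero at 800 nm."""
    return 1e-25 * (width_nm - 800.0)  # s^2/m

def matched_bandwidth(width_nm):
    """Largest detuning (rad/s) for which |delta_k| * L < pi."""
    omega = np.linspace(0.0, 2e14, 20001)
    delta_k = beta2(width_nm) * omega**2 + BETA4 * omega**4
    matched = np.abs(delta_k) * L < np.pi
    return omega[matched].max() if matched.any() else 0.0

# Scan the geometry and keep the width that maximizes the matched bandwidth:
widths = np.arange(780.0, 821.0, 1.0)
best = widths[np.argmax([matched_bandwidth(w) for w in widths])]
print(best)  # 800.0, where the invented beta2 term vanishes
```

    In this caricature the optimum is simply the width where the quadratic dispersion term vanishes; the real optimization runs over a full numerical model of the waveguide's modes.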
    The device is ready to be deployed in experiments, but only in a lab setting, Javid says. In order to be used commercially, a more efficient and cost-effective fabrication process is needed. And although lithium niobate is an important material for light-based technologies, lithium niobate fabrication is “still in its infancy, and it will take some time to mature enough to make financial sense,” he says.
    Other collaborators include coauthors Jingwei Ling, Mingxiao Li, and Yang He of the Department of Electrical and Computer Engineering, and Jeremy Staffa of the Institute of Optics. Ling, Li, and Staffa are graduate students; Yang He is a postdoctoral researcher.
    The National Science Foundation, the Defense Threat Reduction Agency, and the Defense Advanced Research Projects Agency helped fund the research.
    Story Source:
    Materials provided by University of Rochester. Original written by Bob Marcotte. Note: Content may be edited for style and length.

  • Solving complex learning tasks in brain-inspired computers

    Developing a machine that processes information as efficiently as the human brain has been a long-standing research goal towards true artificial intelligence. An interdisciplinary research team at Heidelberg University and the University of Bern (Switzerland) led by Dr Mihai Petrovici is tackling this problem with the help of biologically-inspired artificial neural networks. Spiking neural networks, which mimic the structure and function of a natural nervous system, represent promising candidates because they are powerful, fast, and energy-efficient. One key challenge is how to train such complex systems. The German-Swiss research team has now developed and successfully implemented an algorithm that achieves such training.
    The nerve cells (or neurons) in the brain transmit information using short electrical pulses known as spikes. These spikes are triggered when a certain stimulus threshold is exceeded. Both the frequency with which a single neuron produces such spikes and the temporal sequence of the individual spikes are critical for the exchange of information. “The main difference between biological spiking networks and artificial neural networks is that, because they use spike-based information processing, they can solve complex tasks such as image recognition and classification with extreme energy efficiency,” states Julian Göltz, a doctoral candidate in Dr Petrovici’s research group.
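    A leaky integrate-and-fire neuron, the standard textbook abstraction of this threshold behavior (a generic model, not the team's specific implementation), can be sketched in a few lines:

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential integrates
# input current, leaks toward rest, and emits a spike whenever it crosses
# threshold. A generic textbook model, not the BrainScaleS hardware.
def lif_spike_times(current, threshold=1.0, leak=0.1, dt=1.0):
    """Return the time steps at which the neuron emits a spike."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(current):
        v += dt * (i_in - leak * v)   # integrate the input, leak toward rest
        if v >= threshold:            # stimulus threshold exceeded:
            spikes.append(t)          # ... emit a spike
            v = 0.0                   # ... and reset the membrane
    return spikes

# A constant drive yields a regular spike train; both the rate and the exact
# timing of the spikes can carry information.
print(lif_spike_times([0.3] * 20))  # [3, 7, 11, 15, 19]
```

    Stronger input drives the membrane to threshold sooner, so both the spike rate and the precise spike times encode the stimulus, which is exactly the information the training algorithm must shape.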
    Both the human brain and the architecturally similar artificial spiking neural networks can only perform at their full potential if the individual neurons are properly connected to one another. But how can brain-inspired — that is, neuromorphic — systems be adjusted to process spiking input correctly? “This question is fundamental for the development of powerful artificial networks based on biological models,” stresses Laura Kriener, also a member of Dr Petrovici’s research team. Special algorithms are required to guarantee that the neurons in a spiking neural network fire at the correct time. These algorithms adjust the connections between the neurons so that the network can perform the required task, such as classifying images with high precision.
    The team under the direction of Dr Petrovici developed just such an algorithm. “Using this approach, we can train spiking neural networks to code and transmit information exclusively in single spikes. They thereby produce the desired results especially quickly and efficiently,” explains Julian Göltz. Moreover, the researchers succeeded in implementing a neural network trained with this algorithm on a physical platform — the BrainScaleS-2 neuromorphic hardware platform developed at Heidelberg University.
    According to the researchers, the BrainScaleS system processes information up to a thousand times faster than the human brain and needs far less energy than conventional computer systems. It is part of the European Human Brain Project, which integrates technologies like neuromorphic computing into an open platform called EBRAINS. “However, our work is not only interesting for neuromorphic computing and biologically inspired hardware. It also acknowledges the demand from the scientific community to transfer so-called Deep Learning approaches to neuroscience and thereby further unveil the secrets of the human brain,” emphasises Dr Petrovici.
    The research was funded by the Manfred Stärk Foundation and the Human Brain Project — one of three European flagship initiatives in Future and Emerging Technologies supported under the European Union’s Horizon 2020 Framework Programme. The research results were published in the journal “Nature Machine Intelligence.”
    Story Source:
    Materials provided by Heidelberg University. Note: Content may be edited for style and length.

  • Technology’s impact on worker well-being

    In the traditional narrative of the evolving 21st century workplace, technological substitution of human employees is treated as a serious concern. But technological complementarity — the use of automation and artificial intelligence to complement workers, rather than replace them — is viewed optimistically as a good thing, improving productivity and wages for those who remain employed.
    That’s the story two graduate researchers from the Georgia Institute of Technology and Georgia State University kept reading, from policymakers and other scholars, as they began their own study of technology’s impact on the workplace. But there was another, more nuanced story that ultimately informed their research.
    “We saw these images on the internet of worker strikes that were happening all around the world in different cities and realized that there was more going on, something beyond the usual optimistic discourse around this topic,” said Daniel Schiff, a Ph.D. candidate in the School of Public Policy at Georgia Tech.
    The photos and stories of unhappy workers protesting conditions in their modern, technologically-enhanced workplaces inspired Schiff and Luísa Nazareno, a graduate researcher in the Andrew Young School of Policy Studies at Georgia State, to dig a little deeper. The result is a new study, “The impact of automation and artificial intelligence on worker well-being,” in the journal Technology in Society.
    The paper presents a more surprising, dynamic, and complex picture of the recent history and likely future of automation and AI in the workplace. Schiff and Nazareno have incorporated multiple disciplines — economics, sociology, psychology, policy, even ethics — to reframe the conversation around automation and AI in the workplace and perhaps help decision makers and researchers think a bit deeper and more broadly about the human component.
    “The well-being of the worker has implications for all of society, for families, even for productivity,” Nazareno said. “If we are really interested in productivity, worker well-being is something that must be taken into account.”
    Changing the Discourse

  • New study solves energy storage and supply puzzle

    Curtin University research has found a simple and affordable method to determine which chemicals and types of metals are best used to store and supply energy. It is a breakthrough for battery-run devices and for technologies that rely on the fast and reliable supply of electricity, including smartphones and tablets.
    Lead author Associate Professor Simone Ciampi from Curtin’s School of Molecular and Life Sciences said this easy, low-cost method of determining how to produce and retain the highest energy charge in a capacitor could be of great benefit to all scientists, engineers and start-ups looking to solve the energy storage challenges of the future.
    “All electronic devices require an energy source. While a battery needs to be recharged over time, a capacitor can be charged instantaneously because it stores energy by separating charged ions, found in ionic liquids,” Associate Professor Ciampi said.
    “There are thousands of types of ionic liquids, a type of ‘liquid salt,’ and until now it was difficult to know which would be best suited for use in a capacitor. What our team has done is devise a quick and easy test, able to be performed in a basic lab, which can measure both the ability to store charge when a solid electrode touches a given ionic liquid — a simple capacitor — as well as the stability of the device when it’s charged.
    “The study has also been able to unveil a model that can predict which ionic liquid is likely to be the best performing for fast charging and long-lasting energy storage.”
    Research co-author and PhD student Mattia Belotti, also from Curtin’s School of Molecular and Life Sciences, said the test simply requires a relatively basic and affordable piece of equipment called a potentiostat.
    “The simplicity of this test means anyone can apply it without the need for expensive equipment. Using this method, our research found that charging the device for 60 seconds produced a full charge, which did not ‘leak’ and begin to diminish for at least four days,” Mr Belotti said.
    “The next step will be to use this new screening method to find ionic liquid/electrode combinations with an even longer duration in the charged state and larger energy density.”
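    As a rough way to picture the "no leak for four days" result (illustrative numbers only, not measurements from the study), capacitor self-discharge is often modeled as a simple RC decay:

```python
# Back-of-the-envelope sketch (illustrative numbers, not from the study):
# self-discharge of a charged capacitor modeled as an RC decay,
#   V(t) = V0 * exp(-t / (R_leak * C)),
# so holding charge for days requires a very large leakage resistance.
import math

def retained_fraction(t_seconds, r_leak_ohm, capacitance_f):
    """Fraction of the initial voltage left after t seconds."""
    return math.exp(-t_seconds / (r_leak_ohm * capacitance_f))

FOUR_DAYS = 4 * 24 * 3600  # 345,600 seconds
# For a hypothetical 1 F capacitor to keep over 99% of its voltage for
# four days, the leakage resistance must be around 35 megaohms or more:
print(retained_fraction(FOUR_DAYS, 3.5e7, 1.0))  # ~0.990
```

    The point of the sketch is only that multi-day charge retention implies an extremely high effective leakage resistance, which is what makes the reported ionic-liquid devices notable.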
    Funded by the Australian Research Council, the study was led by Curtin University and done in collaboration with the Australian National University and Monash University.
    Other Curtin authors include Mr Xin Lyu, Dr Nadim Darwish, Associate Professor Debbie Silvester and Dr Ching Goh, all from the School of Molecular and Life Sciences.
    Story Source:
    Materials provided by Curtin University. Note: Content may be edited for style and length.