More stories

  • Error mitigation approach helps quantum computers level up

    A collaboration between Lawrence Berkeley National Laboratory’s (Berkeley Lab’s) Applied Mathematics and Computational Research Division (AMCRD) and Physics Division has yielded a new approach to error mitigation that could help make quantum computing’s theoretical potential a reality.
    The research team describes this work in a paper published in Physical Review Letters, “Mitigating Depolarizing Noise on Quantum Computers with Noise-Estimation Circuits.”
    “Quantum computers have the potential to solve more complex problems way faster than classical computers,” said Bert de Jong, one of the lead authors of the study and the director of the AIDE-QC and QAT4Chem quantum computing projects. De Jong also leads the AMCRD’s Applied Computing for Scientific Discovery Group. “But the real challenge is quantum computers are relatively new. And there’s still a lot of work that has to be done to make them reliable.”
    For now, one of the problems is that quantum computers are still too error-prone to be consistently useful. This is due in large part to something known as “noise” (errors).
There are different types of noise, including readout noise and gate noise. The former has to do with reading out the result of a run on a quantum computer: the more noise, the higher the chance that a qubit — the quantum equivalent of a bit on a classical computer — will be measured in the wrong state. The latter relates to the actual operations performed: more noise means a higher probability of applying the wrong operation. And the prevalence of noise increases dramatically the more operations one tries to perform on a quantum computer, which makes it harder to tease out the right answer and severely limits quantum computers’ usability as they’re scaled up.
“So noise here just basically means: It’s stuff you don’t want, and it obscures the result you do want,” said Ben Nachman, a Berkeley Lab physicist and co-author on the study who also leads the cross-cutting Machine Learning for Fundamental Physics group.
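    The method named in the paper’s title exploits a convenient property of depolarizing noise: for a traceless observable, a global depolarizing channel simply shrinks the ideal expectation value by a constant factor, so running a noise-estimation circuit whose ideal result is known lets one estimate that factor and rescale. Below is a minimal sketch of the rescaling idea in Python; the numerical values and function names are illustrative assumptions, not taken from the paper.

    ```python
    # Minimal sketch of depolarizing-noise mitigation with a noise-estimation
    # circuit. Under a global depolarizing channel of strength p, a traceless
    # observable obeys: <O>_noisy = (1 - p) * <O>_ideal.

    def estimate_depolarizing_fraction(noisy_ref: float, ideal_ref: float) -> float:
        """Estimate p from a reference circuit whose ideal value is known."""
        return 1.0 - noisy_ref / ideal_ref

    def mitigate(noisy_value: float, p: float) -> float:
        """Rescale a noisy expectation value to undo the depolarization."""
        return noisy_value / (1.0 - p)

    # Hypothetical example: a reference circuit that should ideally return 1.0
    # instead returns 0.82, giving p ~= 0.18; a target circuit's raw result of
    # 0.45 is then rescaled accordingly.
    p = estimate_depolarizing_fraction(noisy_ref=0.82, ideal_ref=1.0)
    print(mitigate(0.45, p))  # ~0.549
    ```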

  • California’s push for computer science education examined

New studies of computer science education at California high schools found that a greater emphasis on computer science education did not produce the anticipated spillover effects, neither improving nor harming students’ math or English language arts skills as measured by school-level test scores.
    However, one trade-off of increased enrollments in computing courses may be that students are taking fewer humanities courses such as the arts and social studies, researchers at the University of Illinois Urbana-Champaign found.
    Paul Bruno and Colleen M. Lewis examined the implications of California’s recent state policies promoting computer science education and the proliferation of these courses in the state’s high schools. Bruno is a professor of education policy, organization and leadership, and Lewis is a professor of computer science in the Grainger College of Engineering, both at Illinois.
    Using data that schools reported to the California Department of Education from 2003-2019, the researchers explored the effects on student test scores and the curricular trade-offs of student enrollments in computer science courses. That study was published in the journal Educational Administration Quarterly.
In a related project, the couple, who are partners in marriage as well as in research, explored equity and diversity among California’s computer science teachers and their students. That study was published in Policy Futures in Education.
The Google Computer Science Education Research Program supported both projects.

  • Development of a diamond transistor with high hole mobility

    Using a new fabrication technique, NIMS has developed a diamond field-effect transistor (FET) with high hole mobility, which allows reduced conduction loss and higher operational speed. This new FET also exhibits normally-off behavior (i.e., electric current flow through the transistor ceases when no gate voltage is applied, a feature that makes electronic devices safer). These results may facilitate the development of low-loss power conversion and high-speed communications devices.
Diamond has excellent wide-bandgap semiconductor properties: its bandgap is larger than those of silicon carbide and gallium nitride, which are already in practical use. Diamond therefore could potentially be used to create power electronics and communications devices capable of operating more energy-efficiently at higher speeds, voltages and temperatures. A number of R&D projects have previously attempted to create FETs using hydrogen-terminated diamond (i.e., diamond whose surface carbon atoms are covalently bonded to hydrogen atoms). However, these efforts failed to fully exploit diamond’s excellent wide-bandgap semiconductor properties: the hole mobility (a measure of how quickly holes can move) of the resulting transistors was only 1-10% of that of the diamond before integration.
The NIMS research team succeeded in developing a high-performance FET by using hexagonal boron nitride (h-BN) as the gate insulator instead of the conventionally used oxides (e.g., alumina), and by employing a new fabrication technique that prevents the surface of the hydrogen-terminated diamond from being exposed to air. At high hole densities, the hole mobility of this FET was five times that of conventional FETs with oxide gate insulators. FETs with high hole mobility can operate with lower electrical resistance, thereby reducing conduction loss, and can be used to develop faster, smaller electronic devices. The team also demonstrated normally-off operation of the FET, an important feature for power electronics applications. The key to the team’s success was that the new fabrication technique removed electron acceptors from the surface of the hydrogen-terminated diamond, even though such acceptors had generally been thought necessary for inducing electrical conductivity in hydrogen-terminated diamond.
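    To see why mobility matters for conduction loss, note that the sheet conductance of a p-type channel scales linearly with hole mobility (sigma_s = q * p_s * mu), so at a fixed hole density a fivefold mobility gain cuts the channel resistance, and with it the resistive conduction loss, fivefold. A back-of-the-envelope illustration in Python, with assumed rather than reported numbers:

    ```python
    # Channel sheet resistance versus hole mobility: sigma_s = q * p_s * mu.
    # All numbers below are illustrative assumptions, not values from the study.

    q = 1.602e-19          # elementary charge (C)
    p_sheet = 5e12         # assumed sheet hole density (cm^-2)
    mu_oxide = 80.0        # assumed mobility with an oxide gate insulator (cm^2/V*s)
    mu_hbn = 5 * mu_oxide  # the reported fivefold improvement with h-BN

    for label, mu in (("oxide", mu_oxide), ("h-BN", mu_hbn)):
        sigma_s = q * p_sheet * mu  # sheet conductance (S per square)
        print(f"{label}: {1.0 / sigma_s:,.0f} ohm/square")
    ```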
    These results are new mileposts in the development of efficient diamond transistors for high-performance power electronics and communications devices. The team hopes to further improve the physical properties of the diamond FET and to make it more suitable for practical use.
Story Source:
Materials provided by National Institute for Materials Science, Japan.

  • Rebooting evolution

    The building blocks of life-saving therapeutics could be developed in days instead of years thanks to new software that simulates evolution.
Proseeker is a new computational tool that mimics the processes of natural selection, producing proteins suited to a range of medicinal and household applications.
    The enzymes in your laundry detergent, the insulin in your diabetes medication or the antibodies used in cancer therapy are currently made in the laboratory using a painstaking process called directed evolution.
Laboratory evolution mimics natural evolution by introducing mutations into naturally sourced proteins and selecting the best mutants, which are then mutated and selected again; this time-intensive, laborious cycle gradually creates useful proteins.
    Scientists at the ARC Centre of Excellence in Synthetic Biology have now discovered a way to perform the entire process of directed evolution using a computer. It can reduce the time required from many months or even years to just days.
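    In outline, directed evolution, whether performed at the bench or in software, is a mutate-score-select loop. The sketch below shows that generic loop in Python; it is a schematic illustration rather than Proseeker’s actual algorithm, and the toy fitness function merely stands in for a real predictor of protein function.

    ```python
    # Generic in-silico directed evolution: mutate, score, select, repeat.
    import random

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def mutate(seq: str, n_mut: int = 1) -> str:
        """Return a copy of `seq` with n_mut random point mutations."""
        s = list(seq)
        for pos in random.sample(range(len(s)), n_mut):
            s[pos] = random.choice(AMINO_ACIDS)
        return "".join(s)

    def evolve(seed: str, fitness, rounds: int = 50, pop: int = 200, keep: int = 10) -> str:
        population = [seed]
        for _ in range(rounds):
            # Diversify: generate random mutants of the current survivors...
            candidates = [mutate(random.choice(population)) for _ in range(pop)]
            # ...then select: keep only the fittest, as in lab-based rounds.
            population = sorted(candidates, key=fitness, reverse=True)[:keep]
        return population[0]

    # Toy objective: enrich positively charged residues (K, R), a crude proxy
    # for DNA binding since the DNA backbone is negatively charged.
    best = evolve("A" * 60, fitness=lambda s: (s.count("K") + s.count("R")) / len(s))
    ```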
    The team was led by Professor Oliver Rackham, Curtin University, in collaboration with Professor Aleksandra Filipovska, the University of Western Australia, and is based at the Harry Perkins Institute of Medical Research in Perth, Western Australia.
To prove how useful this process could be, they took a protein with no function at all and gave it the ability to bind DNA.
    ‘Proteins that bind DNA are currently revolutionising the field of gene therapy where scientists are using them to reverse disease-causing mutations,’ says Professor Rackham. ‘So this could be of great use in the future.
    ‘Reconstituting the entire process of directed evolution represents a radical advance for the field.’
Story Source:
Materials provided by Curtin University. Original written by Lucien Wilkinson.

  • New methods for network visualizations enable change of perspectives and views

When visualizing data as networks, the type of representation is crucial for extracting hidden information and relationships. The research group of Jörg Menche, Adjunct Principal Investigator at the CeMM Research Center for Molecular Medicine of the Austrian Academy of Sciences, Professor at the University of Vienna, and group leader at Max Perutz Labs, has developed a new method for generating network layouts that makes it possible to visualize different kinds of network information in two- and three-dimensional virtual space and to explore different perspectives. The results could also facilitate future research on rare diseases by providing more versatile, comprehensible representations of complex protein interactions.
Network visualizations allow researchers to explore connections between individual data points. However, the more complex and larger the networks, the more difficult it becomes to find the information you are looking for. For lack of suitable layouts, so-called “hairball” visualizations emerge that obscure network structure rather than elucidate it. Scientists from Jörg Menche’s research group at CeMM and Max Perutz Labs (a joint venture of the University of Vienna and the Medical University of Vienna) developed a method that makes it possible to specify in advance which network properties and information should be visually represented, so they can be explored interactively. The results have now been published in Nature Computational Science.
    Reducing complexity
For the study, first author Christiane V. R. Hütter, a PhD student in Jörg Menche’s research group, used the latest dimensionality-reduction techniques, which allow visualizations of networks with thousands of points to be computed within a very short time on a standard laptop. “The key idea behind our research was to develop different views of large networks to capture their complexity, get a more comprehensive picture, and present it in a visually understandable way — similar to looking at maps of the same region with different information content, levels of detail and perspectives.” The Menche Lab scientists developed four different network layouts, which they termed cartographs, as well as two- and three-dimensional visualizations, each following different rules to open up new perspectives on a given dataset. Any network information can be encoded and visualized in this fashion: for example, the structural significance of a particular point, but also functional features. Users can switch between layouts to get a comprehensive picture. Study leader Jörg Menche explains: “Using the new layouts, we can now specify in advance that we want to see, for example, the number of connections of a point within the network, or a particular functional characteristic. In a biological network, for instance, I can explore connections between genes that are associated with a particular disease and what they might have in common.”
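    The general recipe can be pictured as follows: compute a standard embedding to preserve local structure, then dedicate one axis of the layout to whatever node property should be readable at a glance. Here is a minimal sketch in Python with networkx that mimics the spirit of such property-encoding layouts; it is not the authors’ implementation, and the graph and property choices are illustrative.

    ```python
    # Encode a node property (here, degree) directly into a 3D layout:
    # x and y come from a standard 2D embedding, z from the property itself.
    import networkx as nx

    G = nx.barabasi_albert_graph(n=500, m=3, seed=42)  # illustrative network

    xy = nx.spring_layout(G, seed=42)  # 2D positions capture local structure
    deg = dict(G.degree())
    max_deg = max(deg.values())

    # Hubs rise out of the plane, so structural importance becomes visible
    # instead of staying hidden inside a 2D "hairball".
    pos3d = {v: (x, y, deg[v] / max_deg) for v, (x, y) in xy.items()}
    ```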
    The interplay of genes
    The scientists performed a proof-of-concept on both simple model networks and the complex interactome network, which maps all the proteins of the human body and their interactions. This consists of more than 16,000 points and over 300,000 connections. Christiane V.R. Hütter explains: “Using our new layouts, we are now able to visually represent different features of proteins and their connections, such as the close relationship between the biological importance of a protein and its centrality within the network. We can also visualize connection patterns between a group of proteins associated with the same disease that are difficult to decipher using conventional methods.”
    Tailored solutions
The flexibility of the new framework allows users to tailor network visualizations for a specific application. For example, the study authors were able to develop 3D interactome layouts specifically for studying the biological functions of certain genes whose mutations are suspected to cause rare diseases. Jörg Menche adds, “To facilitate the visual representation and also analysis of large networks such as the interactome, our layouts can also be integrated into a virtual reality platform.”

  • Visualization of the origin of magnetic forces by atomic resolution electron microscopy

A joint development team comprising Professor Shibata (the University of Tokyo), JEOL Ltd. and Monash University has succeeded in directly observing the magnetic field of individual atoms, the origin of magnets (magnetic force), for the first time in the world. The observation was conducted using the newly developed Magnetic-field-free Atomic-Resolution STEM (MARS) (1). The team had already succeeded in observing the electric field inside atoms for the first time in 2012. However, because the magnetic fields of atoms are extremely weak compared with their electric fields, observing them had remained out of reach throughout the history of electron microscopy. This is an epoch-making achievement in the development of the microscope.
Electron microscopes offer the highest spatial resolution of any microscope in current use. However, achieving resolution high enough to observe atoms directly requires placing the sample in an extremely strong lens magnetic field. Atomic-resolution observation of materials that are strongly affected by such fields, such as magnets and steels, had therefore been impossible for many years. To overcome this problem, the team developed a lens with a completely new structure in 2019. Using this new lens, the team achieved atomic-resolution observation of magnetic materials unaffected by the lens magnetic field. The team’s next goal was to observe the magnetic fields of the atoms themselves, the origin of magnets (magnetic force), and it continued developing the technology to achieve this.
In the present work, the joint development team took on the challenge of observing the magnetic fields of iron (Fe) atoms in a hematite crystal (α-Fe2O3) by equipping MARS with a newly developed high-sensitivity, high-speed detector and applying computer image processing. To observe the magnetic fields, they used the Differential Phase Contrast (DPC) method (2) at atomic resolution, an ultrahigh-resolution technique for measuring local electromagnetic fields with a scanning transmission electron microscope (STEM) (3), developed by Professor Shibata et al. The results directly demonstrated that iron atoms are themselves tiny magnets (atomic magnets), and they clarified, at the atomic level, the origin of the magnetism (antiferromagnetism (4)) exhibited by hematite.
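    At its core, DPC imaging infers the local field from how it deflects the focused electron probe; on the detector, the deflection appears as a shift of the diffraction pattern’s center of mass, measured at every scan position. The sketch below illustrates that center-of-mass measurement; it is a schematic illustration of the principle, not the team’s processing pipeline.

    ```python
    # Center-of-mass (CoM) shift of a detector image: the basic DPC signal.
    # The measured shift is proportional to the deflecting (Lorentz) force,
    # so mapping it across scan positions yields a vector map of the field.
    import numpy as np

    def center_of_mass_shift(pattern: np.ndarray) -> tuple[float, float]:
        """CoM of a detector image relative to its geometric center, in pixels."""
        ny, nx = pattern.shape
        total = pattern.sum()
        ys, xs = np.mgrid[0:ny, 0:nx]
        cy = (ys * pattern).sum() / total - (ny - 1) / 2
        cx = (xs * pattern).sum() / total - (nx - 1) / 2
        return cy, cx

    # Toy example: a pattern displaced one pixel in +x reports (0.0, 1.0).
    img = np.zeros((5, 5))
    img[2, 3] = 1.0
    print(center_of_mass_shift(img))
    ```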
These results demonstrate the direct observation of atomic magnetic fields and establish a method for measuring them. The method is expected to become a new measurement tool guiding the research and development of a wide range of magnetic materials and devices, including magnets, steels, magnetic memory, magnetic semiconductors, spintronics and topological materials.
    This research was conducted by the joint development team of Professor Naoya Shibata (Director of the Institute of Engineering Innovation, School of Engineering, the University of Tokyo) and Dr. Yuji Kohno et al. (Specialists of JEOL Ltd.) in collaboration with Monash University, Australia, under the Advanced Measurement and Analysis Systems Development (SENTAN), Japan Science and Technology Agency (JST).
    Terms
(1) Magnetic-field-free Atomic-Resolution STEM (MARS)

  • How a single nerve cell can multiply

    Neurons are constantly performing complex calculations to process sensory information and infer the state of the environment. For example, to localize a sound or to recognize the direction of visual motion, individual neurons are thought to multiply two signals. However, how such a computation is carried out has been a mystery for decades. Researchers at the Max Planck Institute for Biological Intelligence, in foundation (i.f.), have now discovered in fruit flies the biophysical basis that enables a specific type of neuron to multiply two incoming signals. This provides fundamental insights into the algebra of neurons — the computations that may underlie countless processes in the brain.
    We easily recognize objects and the direction in which they move. The brain calculates this information based on local changes in light intensity detected by our retina. The calculations occur at the level of individual neurons. But what does it mean when neurons calculate? In a network of communicating nerve cells, each cell must calculate its outgoing signal based on a multitude of incoming signals. Certain types of signals will increase and others will reduce the outgoing signal — processes that neuroscientists refer to as ‘excitation’ and ‘inhibition’.
    Theoretical models assume that seeing motion requires the multiplication of two signals, but how such arithmetic operations are performed at the level of single neurons was previously unknown. Researchers from Alexander Borst’s department at the Max Planck Institute for Biological Intelligence, i.f., have now solved this puzzle in a specific type of neuron.
    Recording from T4 cells
    The scientists focused on so-called T4 cells in the visual system of the fruit fly. These neurons only respond to visual motion in one specific direction. The lead authors Jonatan Malis and Lukas Groschner succeeded for the first time in measuring both the incoming and the outgoing signals of T4 cells. To do so, the neurobiologists placed the animal in a miniature cinema and used minuscule electrodes to record the neurons’ electrical activities. Since T4 cells are among the smallest of all neurons, the successful measurements were a methodological milestone.
    Together with computer simulations, the data revealed that the activity of a T4 cell is constantly inhibited. However, if a visual stimulus moves in a certain direction, the inhibition is briefly lifted. Within this short time window, an incoming excitatory signal is amplified: Mathematically, constant inhibition is equivalent to a division; removing the inhibition results in a multiplication. “We have discovered a simple basis for a complex calculation in a single neuron,” explains Lukas Groschner. “The inverse operation of a division is a multiplication. Neurons seem to be able to exploit this relationship,” adds Jonatan Malis.
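    A textbook way to see why tonic (shunting) inhibition acts divisively, consistent with the mechanism described above though not taken from the paper’s model, is the steady-state voltage of a passive membrane: the inhibitory conductance enters only in the denominator, so raising it divides the response and briefly removing it multiplies the response. A minimal numerical illustration:

    ```python
    # Steady-state depolarization of a passive membrane with an excitatory
    # conductance g_e (reversal E_e) and a shunting inhibitory conductance g_i
    # whose reversal sits at rest (0 mV here):
    #     V = g_e * E_e / (g_e + g_i + g_leak)
    # All conductances are in arbitrary units; the numbers are illustrative.

    def steady_state_v(g_e: float, g_i: float, g_leak: float = 1.0, E_e: float = 60.0) -> float:
        return g_e * E_e / (g_e + g_i + g_leak)

    print(steady_state_v(g_e=0.5, g_i=4.0))  # inhibited: ~5.5 mV
    print(steady_state_v(g_e=0.5, g_i=0.0))  # inhibition lifted: 20.0 mV
    ```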
    Relevance for behavior
    The T4 cell’s ability to multiply is linked to a certain receptor molecule on its surface. “Animals lacking this receptor misperceive visual motion and fail to maintain a stable course in behavioral experiments,” explains co-author Birte Zuidinga, who analyzed the walking trajectories of fruit flies in a virtual reality setup. This illustrates the importance of this type of computation for the animals’ behavior. “So far, our understanding of the basic algebra of neurons was rather incomplete,” says Alexander Borst. “However, the comparatively simple brain of the fruit fly has allowed us to gain insight into this seemingly intractable puzzle.” The researchers assume that similar neuronal computations underlie, for example, our abilities to localize sounds, to focus our attention, or to orient ourselves in space.
Story Source:
Materials provided by Max-Planck-Gesellschaft.

  • Fingertip sensitivity for robots

In a paper published on February 23, 2022 in Nature Machine Intelligence, a team of scientists at the Max Planck Institute for Intelligent Systems (MPI-IS) introduces a robust soft haptic sensor named “Insight” that uses computer vision and a deep neural network to accurately estimate where objects come into contact with the sensor and how large the applied forces are. The research project is a significant step toward robots being able to feel their environment as accurately as humans and animals. Like its natural counterpart, the fingertip sensor is very sensitive, robust, and high-resolution.
The thumb-shaped sensor is made of a soft shell built around a lightweight, stiff skeleton. This skeleton holds up the structure much as bones stabilize the soft finger tissue. The shell is made from an elastomer mixed with dark but reflective aluminum flakes, resulting in an opaque greyish color that prevents any external light from finding its way in. Hidden inside this finger-sized cap is a tiny 160-degree fish-eye camera that records colorful images illuminated by a ring of LEDs.
When an object touches the sensor’s shell, the appearance of the color pattern inside the sensor changes. The camera records images many times per second and feeds this data to a deep neural network. The algorithm detects even the smallest change in light in each pixel. Within a fraction of a second, the trained machine-learning model can map out where exactly the finger is contacting an object, determine how strong the forces are, and indicate the force direction. The model infers what scientists call a force map: it provides a force vector for every point on the three-dimensional fingertip.
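    A minimal sketch of this image-to-force-map regression in PyTorch follows; the architecture, image size, and number of surface points are illustrative assumptions, not the network actually used for Insight.

    ```python
    # Map one camera frame to a 3-component force vector per surface point.
    import torch
    import torch.nn as nn

    N_POINTS = 1024  # assumed number of sample points on the sensor surface

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d((8, 8)),
        nn.Flatten(),
        nn.Linear(32 * 8 * 8, N_POINTS * 3),  # (Fx, Fy, Fz) for every point
    )

    image = torch.rand(1, 3, 128, 128)             # one RGB frame from the camera
    force_map = model(image).view(1, N_POINTS, 3)  # force vector per point
    ```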
    “We achieved this excellent sensing performance through the innovative mechanical design of the shell, the tailored imaging system inside, automatic data collection, and cutting-edge deep learning,” says Georg Martius, Max Planck Research Group Leader at MPI-IS, where he heads the Autonomous Learning Group. His Ph.D. student Huanbo Sun adds: “Our unique hybrid structure of a soft shell enclosing a stiff skeleton ensures high sensitivity and robustness. Our camera can detect even the slightest deformations of the surface from one single image.” Indeed, while testing the sensor, the researchers realized it was sensitive enough to feel its own orientation relative to gravity.
    The third member of the team is Katherine J. Kuchenbecker, the Director of the Haptic Intelligence Department at MPI-IS. She confirms that the new sensor will be useful: “Previous soft haptic sensors had only small sensing areas, were delicate and difficult to make, and often could not feel forces parallel to the skin, which are essential for robotic manipulation like holding a glass of water or sliding a coin along a table,” says Kuchenbecker.
But how does such a sensor learn? Huanbo Sun designed a testbed to generate the training data needed for the machine-learning model to understand the correlation between changes in raw image pixels and the applied forces. The testbed probes the sensor all around its surface and records the true contact force vector together with the camera image inside the sensor. In this way, about 200,000 measurements were generated. It took nearly three weeks to collect the data and another day to train the machine-learning model. Surviving this long experiment with so many different contact forces helped prove the robustness of Insight’s mechanical design, and tests with a larger probe showed how well the sensing system generalizes.
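    Continuing the sketch above, the training step itself is plain supervised regression of predicted force maps against the probe-measured ground truth. Here tiny random tensors stand in for the roughly 200,000 recorded (image, force) pairs, and `model` and `N_POINTS` are reused from the previous sketch.

    ```python
    # Supervised training loop for the image-to-force-map model (illustrative).
    import torch
    import torch.nn.functional as F
    from torch.utils.data import DataLoader, TensorDataset

    images = torch.rand(64, 3, 128, 128)   # stand-in camera frames
    forces = torch.rand(64, N_POINTS, 3)   # stand-in ground-truth force maps
    loader = DataLoader(TensorDataset(images, forces), batch_size=8)

    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for image_batch, force_batch in loader:
        pred = model(image_batch).view(force_batch.shape)
        loss = F.mse_loss(pred, force_batch)
        opt.zero_grad()
        loss.backward()
        opt.step()
    ```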
Another special feature of the thumb-shaped sensor is that it possesses a nail-shaped zone with a thinner elastomer layer. This tactile fovea is designed to detect even tiny forces and detailed object shapes. For this super-sensitive zone, the scientists chose an elastomer thickness of 1.2 mm rather than the 4 mm used on the rest of the finger sensor.
    “The hardware and software design we present in our work can be transferred to a wide variety of robot parts with different shapes and precision requirements. The machine-learning architecture, training, and inference process are all general and can be applied to many other sensor designs,” Huanbo Sun concludes.
    Video: https://youtu.be/lTAJwcZopAA
Story Source:
Materials provided by Max Planck Institute for Intelligent Systems.