More stories


    Consciousness in humans, animals and artificial intelligence

    Two researchers at Ruhr-Universität Bochum (RUB) have come up with a new theory of consciousness. They have long been exploring the nature of consciousness, the question of how and where the brain generates consciousness, and whether animals also have consciousness. The new concept describes consciousness as a state that is tied to complex cognitive operations — and not as a passive basic state that automatically prevails when we are awake.
    Professor Armin Zlomuzica from the Behavioral and Clinical Neuroscience research group at RUB and Professor Ekrem Dere, formerly at Université Paris-Sorbonne, now at RUB, describe their theory in the journal Behavioural Brain Research. The printed version will be published on 15 February 2022; the online article has been available since November 2021.
    “The hypotheses underlying our platform theory of consciousness can be tested in experimental studies,” as the authors describe one advantage of their concept over alternative models. “Thus, the process of consciousness can be explored in humans and animals or even in the context of artificial intelligence.”
    The platform theory in detail
    The complex cognitive operations that, according to platform theory, are associated with consciousness are applied to mental representations that are maintained and processed. They can include perceptions, emotions, sensations, memories, imaginations and associations. Conscious cognitive operations are necessary, for example, in situations where learned behaviour or habits are no longer sufficient for coping. People don’t necessarily need consciousness to drive a car or take a shower. But when something unexpected happens, conscious cognitive actions are required to resolve the situation. They are also necessary to predict future events or problems and to develop suitable coping strategies. Most importantly, conscious cognitive operations form the basis of the adaptive and flexible behaviour that enables humans and animals to adjust to new environmental conditions.
    According to the new theory, conscious cognitive actions take place on the basis of a so-called online platform, a kind of central executive that controls subordinate platforms. The subordinate platforms can act, for example, as storage media for knowledge or activities.
    Electrical junctions between nerve cells crucial
    Conscious cognitive operations are facilitated by the interaction of different neuronal networks. Armin Zlomuzica and Ekrem Dere consider electrical synapses, also known as gap junctions, to be crucial in this context. These structures enable extremely fast transmission of signals between nerve cells. They work much faster than chemical synapses, where communication between cells takes place through the exchange of neurotransmitters and neuromodulators.
    A possible experiment
    The authors suggest for example the following experiment to test their platform theory: a human, an experimental animal or artificial intelligence is confronted with a novel problem that can only be solved by combining two or more rules learned in a different context. This creative combination of stored information and application to a new problem can only be accomplished using conscious cognitive operations.
    By administering pharmacological substances that block gap junctions, the researchers would be able to test whether gap junctions do indeed play a decisive role in the processes. Gap junction blockers should inhibit performance in the experiment. However, routine execution of the individual rules, in the contexts in which they were learned, should still be possible.
    “To what extent an artificial intelligence which is capable of independently solving a new and complex problem for which it has no predefined solution algorithm can likewise be considered conscious has to be tested,” the authors point out. “Several conditions would have to be fulfilled: the first one, for example, would be fulfilled if it successfully proposes a strategy to combat a pandemic by autonomously screening, evaluating, selecting and creatively combining information from the Internet.”
    Story Source:
    Materials provided by Ruhr-University Bochum. Original written by Julia Weiler. Note: Content may be edited for style and length.


    Shellac for printed circuits

    More precise, faster, cheaper: Researchers all over the world have been working for years on producing electrical circuits using additive processes such as robotic 3D printing (so-called robocasting) — with great success, but that success is now creating a problem of its own. The metal particles that make such “inks” electrically conductive are exacerbating the problem of electronic waste, especially since the amount generated is likely to increase in the future in view of new types of disposable sensors, some of which are used for only a few days.
    Unnecessary waste, thinks Gustav Nyström, head of Empa’s Cellulose & Wood Materials lab: “There is an urgent need for materials that balance electronic performance, cost and sustainability.” To develop an environmentally friendly ink, Nyström’s team therefore set ambitious goals: metal-free, non-toxic, biodegradable. And with practical applications in mind: easily formable and stable to moisture and moderate heat.
    With carbon and shellac
    The researchers chose inexpensive carbon as the conductive material, as they recently reported in the journal Scientific Reports. More precisely: elongated graphite platelets mixed with tiny soot particles that establish electrical contact between these platelets — all this in a matrix made of a well-known biomaterial: shellac, which is obtained from the excretions of scale insects. In the past, it was used to make records; today it is used, among other things, as a varnish for wooden instruments and fingernails. Its advantages correspond exactly to the researchers’ desired profile. And on top of that, it is soluble in alcohol — an inexpensive solvent that evaporates after the ink is applied so that it dries.
    Despite these ingredients, the task proved challenging. That’s because whether used in simple screen printing or with modern 3D printers, the ink must exhibit shear thinning behavior: At “rest,” the ink is rather viscous. But at the moment of printing, when it is subjected to a lateral shear force, it becomes somewhat more fluid — just like a non-drip wall paint that only acquires a softer consistency when applied by the force of the roller. When used in additive manufacturing such as 3D printing with a robotic arm, however, this is particularly tricky: An ink that is too viscous would be too tough — but if it becomes too liquid during printing, the solid components could separate and clog the printer’s tiny nozzle.
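    The shear-thinning behaviour described above is commonly modelled with the power-law (Ostwald-de Waele) relation, in which apparent viscosity falls as shear rate rises. A minimal sketch, with illustrative parameter values that are not taken from the paper:

```python
# Shear-thinning (power-law / Ostwald-de Waele) fluid model:
# apparent viscosity drops as shear rate rises whenever the flow index n < 1.
def apparent_viscosity(shear_rate, K=50.0, n=0.4):
    """K: consistency index (Pa*s^n); n: flow index (n < 1 means shear thinning).

    Both values are illustrative, not measured properties of the shellac ink.
    """
    return K * shear_rate ** (n - 1)

at_rest = apparent_viscosity(0.1)     # low shear: the ink is thick and stays put
in_nozzle = apparent_viscosity(100.0)  # high shear during printing: the ink flows
assert at_rest > in_nozzle
```

Tuning K and n is one way to express the trade-off the paragraph describes: viscous enough at rest that the particles stay suspended, fluid enough under shear that the nozzle does not clog.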
    Tests with real applications
    To meet the requirements, the researchers tinkered intensively with the formulation for their ink. They tested two sizes of graphite platelets: 40 micrometers and 7 to 10 micrometers in length. Many variations were also needed in the mixing ratio of graphite and carbon black, because too much carbon black makes the material brittle — with the risk of cracking as the ink dries. By optimizing the formulation and the relative composition of the components, the team was able to develop several variants of the ink that can be used in different 2D and 3D printing processes.
    “The biggest challenge was to achieve high electrical conductivity,” says Xavier Aeby, one of the researchers involved, “and at the same time form a gel-like network of carbon, graphite and shellac.” The team investigated how this material behaves in practice in several steps. For example, with a tiny test cuboid: 15 superimposed grids from the 3D printer — made of fine strands just 0.4 millimeters in diameter. This showed that the ink is suitable even for demanding processes such as robocasting.
    To prove its suitability for real components, the researchers constructed, among other things, a sensor for deformations: a thin PET strip with an ink structure printed on it, whose electrical resistance changed precisely with varying degrees of bending. In addition, tests for tensile strength, stability under water and other properties showed promising results — and so the research team is confident that the new material, which has already been patented, could prove itself in practice. “We hope that this ink system can be used for applications in sustainable printed electronics,” says Gustav Nyström, “for example, for conductive tracks and sensor elements in smart packaging and biomedical devices or in the field of food and environmental sensing.”
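    The deformation sensor works because bending strains the printed trace and changes its resistance. A standard way to relate the two quantities is the gauge factor, GF = (ΔR/R0)/ε; the sketch below inverts that relation to recover strain from a resistance reading, with all numbers illustrative rather than taken from the study:

```python
def strain_from_resistance(R, R0, gauge_factor):
    """Invert GF = (dR/R0) / strain to recover strain from a resistance reading."""
    return (R - R0) / (R0 * gauge_factor)

# Illustrative values, not from the paper:
R0 = 1000.0       # ohms, resistance of the unbent printed trace
GF = 2.0          # assumed gauge factor of the carbon/shellac ink
reading = 1004.0  # ohms while the PET strip is bent
strain = strain_from_resistance(reading, R0, GF)  # 0.002, i.e. 0.2 % strain
```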


    Measuring a quantum computer’s power just got faster and more accurate

    What does a quantum computer have in common with a top draft pick in sports? Both have attracted lots of attention from talent scouts. Quantum computers, experimental machines that can perform some tasks faster than supercomputers, are constantly evaluated, much like young athletes, for their potential to someday become game-changing technology.
    Now, scientist-scouts have their first tool to rank a prospective technology’s ability to run realistic tasks, revealing its true potential and limitations.
    A new kind of benchmark test, designed at Sandia National Laboratories, predicts how likely it is that a quantum processor will run a specific program without errors.
    The so-called mirror-circuit method, published today in Nature Physics, is faster and more accurate than conventional tests, helping scientists develop the technologies that are most likely to lead to the world’s first practical quantum computer, which could greatly accelerate research for medicine, chemistry, physics, agriculture and national security.
    Until now, scientists have been measuring performance on obstacle courses of random operations.
    But according to the new research, conventional benchmark tests underestimate many quantum computing errors. This can lead to unrealistic expectations of how powerful or useful a quantum machine is. Mirror circuits offer a more accurate testing method, according to the paper.
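    The core idea of a mirror circuit can be sketched in a few lines: append the inverses of a circuit's gates in reverse order, so that an error-free processor must return exactly to its starting state, and the measured survival probability of that state becomes the benchmark score. Below is a toy simulation of the ideal, noiseless case, a sketch of the principle rather than the paper's actual protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(dim):
    # QR decomposition of a random complex matrix yields a unitary;
    # rescaling by the phases of R's diagonal fixes the column phases.
    m = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(m)
    return q * (np.diag(r) / np.abs(np.diag(r)))

# A random single-qubit circuit, then its mirror: inverse gates in reverse order.
circuit = [random_unitary(2) for _ in range(5)]
mirror = [u.conj().T for u in reversed(circuit)]

state = np.array([1.0, 0.0], dtype=complex)  # start in |0>
for u in circuit + mirror:
    state = u @ state

# With no noise, the net operation is the identity: survival probability is 1.
survival = abs(state[0]) ** 2
assert abs(survival - 1.0) < 1e-9
```

On real hardware the gates are noisy, so the survival probability drops below 1, and that gap is what the benchmark reads out.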


    Moments of silence point the way towards better superconductors

    High-precision measurements have provided important clues about processes that impair the efficiency of superconductors. Future work building on this research could offer improvements in a range of superconductor devices, such as quantum computers and sensitive particle detectors.
    Superconductivity depends on the presence of electrons bound together in a Cooper pair. Two electrons become coupled because of interactions with the metal lattice, synchronizing with each other despite being hundreds of nanometres apart. Below a critical temperature, these Cooper pairs act as a fluid which doesn’t dissipate energy, thus providing no resistance to electrical current.
    But Cooper pairs sometimes break, dissipating into two quasiparticles — unpaired electrons — that hamper the performance of superconductors. Scientists still don’t know why Cooper pairs break, but the presence of quasiparticles introduces noise into technologies based on superconductors.
    ‘Even if there was only one quasiparticle per billion Cooper pairs, that would limit the performance of quantum bits and prevent a quantum computer from operating flawlessly,’ says Elsa Mannila, who researched quasiparticles at Aalto University before moving to the VTT Technical Research Centre of Finland. ‘If there are more unpaired particles, the lifetime of qubits is also shorter,’ she adds.
    Long silences
    Understanding the origin of these quasiparticles — in other words, knowing why Cooper pairs break — would be a step towards improving the performance of superconductors and the many technologies that rely on them. To answer that question, researchers at Aalto precisely measured the dynamics of Cooper pair breaking in a superconductor.


    Using sparse data to predict lab earthquakes

    A machine-learning approach developed for sparse data reliably predicts fault slip in laboratory earthquakes and could be key to predicting fault slip and potentially earthquakes in the field. The research by a Los Alamos National Laboratory team builds on their previous success using data-driven approaches that worked for slow-slip events in earth but came up short on large-scale stick-slip faults that generate relatively little data — but big quakes.
    “The very long timescale between major earthquakes limits the data sets, since major faults may slip only once in 50 to 100 years or longer, meaning seismologists have had little opportunity to collect the vast amounts of observational data needed for machine learning,” said Paul Johnson, a geophysicist at Los Alamos and a co-author on a new paper, “Predicting Fault Slip via Transfer Learning,” in Nature Communications.
    To compensate for limited data, Johnson said, the team trained a convolutional neural network on the output of numerical simulations of laboratory quakes as well as on a small set of data from lab experiments. Then they were able to predict fault slips in the remaining unseen lab data.
    This research was the first application of transfer learning to numerical simulations for predicting fault slip in lab experiments, Johnson said, and no one has applied it to earth observations.
    With transfer learning, researchers can generalize from one model to another as a way of overcoming data sparsity. The approach allowed the Laboratory team to build on their earlier data-driven machine learning experiments successfully predicting slip in laboratory quakes and apply it to sparse data from the simulations. Specifically, in this case, transfer learning refers to training the neural network on one type of data — simulation output — and applying it to another — experimental data — with the additional step of training on a small subset of experimental data, as well.
    “Our aha moment came when I realized we can take this approach to earth,” Johnson said. “We can simulate a seismogenic fault in earth, then incorporate data from the actual fault during a portion of the slip cycle through the same kind of cross training.” The aim would be to predict fault movement in a seismogenic fault such as the San Andreas, where data is limited by infrequent earthquakes.
    The team first ran numerical simulations of the lab quakes. These simulations involve building a mathematical grid and plugging in values (sometimes just good guesses) to simulate fault behavior.
    For this paper, the convolutional neural network comprised an encoder that boils down the output of the simulation to its key features, which are encoded in the model’s hidden, or latent, space between the encoder and decoder. Those features are the essence of the input data that can predict fault-slip behavior.
    The neural network decoded the simplified features to estimate the friction on the fault at any given time. In a further refinement of this method, the model’s latent space was additionally trained on a small slice of experimental data. Armed with this “cross-training,” the neural network predicted fault-slip events accurately when fed unseen data from a different experiment.
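    The cross-training recipe described above can be illustrated with a deliberately tiny stand-in: a model pretrained on plentiful "simulation" data and then fine-tuned, from that warm start, on a small "experimental" slice drawn from a related but shifted regime. The sketch replaces the paper's convolutional encoder-decoder with ordinary least squares purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: 'simulation' data is plentiful, 'experimental' data is scarce,
# and the two regimes are related but not identical (shifted coefficients).
w_sim, w_exp = np.array([2.0, -1.0]), np.array([2.3, -0.8])
X_sim = rng.normal(size=(500, 2)); y_sim = X_sim @ w_sim
X_exp = rng.normal(size=(20, 2));  y_exp = X_exp @ w_exp

# Step 1: pretrain on the simulation output (least squares).
w = np.linalg.lstsq(X_sim, y_sim, rcond=None)[0]

# Step 2: fine-tune on the small experimental slice with gradient steps,
# starting from the pretrained weights -- the 'transfer' step.
for _ in range(200):
    grad = 2 * X_exp.T @ (X_exp @ w - y_exp) / len(y_exp)
    w -= 0.05 * grad

# After fine-tuning, the model matches the experimental regime despite
# having seen only 20 experimental samples.
assert np.allclose(w, w_exp, atol=1e-2)
```

The pretraining supplies a starting point close enough that the scarce experimental data suffices to close the gap, which is the same logic the team applies to lab quakes and, prospectively, to faults in the earth.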
    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory. Note: Content may be edited for style and length.


    Using ergonomics to reduce pain from technology use

    The use of smartphones, tablets and laptops has become commonplace throughout the world and has been especially prevalent among college students. Recent studies have found that college students have higher levels of screen time, and they utilize multiple devices at higher rates compared to previous generations.
    With the increased use of these devices, especially smartphones, students tend to use a less traditional workspace, such as a couch or a chair with no desk, leading to an increase in musculoskeletal disorders in that age group. A team of Texas A&M researchers led by Mark E. Benden conducted a study looking at the technology students use, the postures they adopt when they use their devices, and the amount of pain the students were currently experiencing.
    Benden and his co-authors found that smartphones have become the most common link to educational materials though they have the least favorable control and display scenario from an ergonomic perspective. Additionally, the team concluded that regardless of device, ergonomic interventions focused on improving posture and facilitating stress management may reduce the likelihood of pain.
    The results of the team’s study were published recently in the open-access, peer-reviewed journal BMC Public Health.
    “When we started this study a few years ago, it was because we had determined that college students were the heavy users of smartphones,” Benden said. “Now those same levels we were concerned about in college students are seen in 40-year-olds, and college students have increased to new levels.”
    Benden, professor and head of the Department of Environmental and Occupational Health (EOH) at the Texas A&M University School of Public Health and director of the Ergo Center, co-authored the study with EOH associate professors Adam Pickens, S. Camille Peres, and Matthew Lee Smith; Ranjana Mehta, associate professor in the Wm Michael Barnes ’64 Department of Industrial & Systems Engineering; Brett Harp, a recent EOH graduate; and Samuel Towne Jr., adjunct assistant professor at the School of Public Health.
    The research team used a 35-minute online survey that asked participants about their technology use, posture when using the technology, current level of pain or discomfort, and their activity and stress levels.
    Among the respondents, 64 percent indicated that their smartphone was the electronic device they used most frequently, followed by laptops, tablets and desktop computers. On average, the students used their smartphone 4.4 hours per day, and they indicated that when doing so, they were more likely to do so on the couch or at a chair with no desk.
    “It is amazing to consider how quickly smartphones have become the dominant tech device in our daily lives with little research into how that level of use would impact our health,” Benden said.
    The researchers found that posture components and stress more consistently contributed to the pain reported by the students, not the variables associated with the devices they were using.
    Still, the researchers point out that in our ever-increasing technology-focused society, efforts are needed to ensure that pain is deferred or delayed until an individual’s later years to preserve the productivity of the workforce.
    “Now that we are moving toward hybrid and/or remote workspaces for our jobs, college students are taking habits formed in dorm and apartment rooms during college into young adulthood as employees in home offices,” Benden said. “We need to get this right or it could have adverse impacts on an entire generation.”
    Story Source:
    Materials provided by Texas A&M University. Original written by Tim Schnettler. Note: Content may be edited for style and length.


    Magnetic ‘hedgehogs’ could store big data in a small space

    Atomic-scale magnetic patterns resembling a hedgehog’s spikes could result in hard disks with massively larger capacities than today’s devices, a new study suggests. The finding could help data centers keep up with the exponentially increasing demand for video and cloud data storage.
    In a study published today in the journal Science, researchers at The Ohio State University used a magnetic microscope to visualize the patterns, formed in thin films of an unusual magnetic material, manganese germanide. Unlike familiar magnets such as iron, the magnetism in this material follows helices, similar to the structure of DNA. This leads to a new zoo of magnetic patterns with names such as hedgehogs, anti-hedgehogs, skyrmions and merons that can be much smaller than today’s magnetic bits.
    “These new magnetic patterns could be used for next-generation data storage,” said Jay Gupta, senior author of the study and a professor of physics at Ohio State. “The density of storage in hard disks is approaching its limits, related to how small you can make the magnetic bits that allow for that storage. And that’s motivated us to look for new materials, where we might be able to make the magnetic bits much smaller.”
    To visualize the magnetic patterns, Gupta and his team used a scanning tunneling microscope in his lab, modified with special tips. This microscope provides pictures of the magnetic patterns with atomic resolution. Their images revealed that in certain parts of the sample, the magnetism at the surface was twisted into a pattern resembling the spikes of a hedgehog. However, in this case the “body” of the hedgehog is only 10 nanometers wide, which is much smaller than today’s magnetic bits (about 50 nanometers), and nearly impossible to visualize. By comparison, a single human hair is about 80,000 nanometers thick.
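    A quick back-of-envelope check of the sizes quoted above: because bits tile a surface, shrinking their linear size from about 50 nanometers to about 10 nanometers multiplies the achievable areal density by roughly (50/10)², a factor of 25.

```python
# Back-of-envelope areal-density gain from the sizes quoted in the article.
bit_today_nm = 50   # approximate size of a conventional magnetic bit
hedgehog_nm = 10    # width of the hedgehog texture's "body"
density_gain = (bit_today_nm / hedgehog_nm) ** 2  # bit count scales with area
assert density_gain == 25.0

hair_nm = 80_000    # a human hair, the article's comparison for scale
assert hair_nm / hedgehog_nm == 8000.0
```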
    The research team also found that the hedgehog patterns could be shifted on the surface with electric currents, or inverted with magnetic fields. This foreshadows the reading and writing of magnetic data, potentially using much less energy than currently possible.
    “There is enormous potential for these magnetic patterns to allow data storage to be more energy efficient,” Gupta said, though he cautions that there is more research to do before the material could be put into use on a data storage site. “We have a huge amount of fundamental science still to do about understanding these magnetic patterns and improving how we control them. But this is a very exciting step.”
    This research was funded by the Defense Advanced Research Projects Agency, a research division of the U.S. Department of Defense. Other Ohio State researchers who co-authored this study include Jacob Repicky, Po-Kuwan Wu, Tao Liu, Joseph Corbett, Tiancong Zhu, Shuyu Cheng, Adam Ahmed, Mohit Randeria and Roland Kawakami.
    Story Source:
    Materials provided by Ohio State University. Original written by Laura Arenschield. Note: Content may be edited for style and length.


    Redrawing the lines: Growing inexpensive, high-quality iron-based superconductors

    Superconducting materials show zero electrical resistance at low temperatures, which allows them to conduct “supercurrents” without dissipation. Recently, a group of scientists led by Dr. Kazumasa Iida from Nagoya University, Japan, developed an inexpensive, scalable way to produce high-temperature superconductors using “grain boundary engineering” techniques. The new method could help develop stronger, inexpensive, and high operating temperature superconductors with impactful technological applications.
    Key to the dissipation-free conduction of currents in superconductors in the presence of a magnetic field is a property called “pinning potential.” Pinning describes how defects in the superconducting matrix pin vortices against the Lorentz force. Controlling the micro-structure of the material allows for careful introduction of defects into the material to form “artificial pinning centers” (APCs), which can then improve its properties. The most common approach to introducing such defects into superconductors is “ion irradiation.” However, ion irradiation is both complicated and expensive.
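    The "pinning potential" picture above amounts to a textbook force balance, sketched here for orientation rather than taken from the paper: a vortex carrying one flux quantum feels a Lorentz force from the transport current, and it remains pinned, keeping conduction dissipation-free, as long as the pinning force exceeds that Lorentz force.

```latex
% Lorentz force per unit length on a vortex carrying one flux quantum
% \Phi_0 in a transport current density J:
F_L = J \, \Phi_0
% Vortices stay pinned while the pinning force F_p exceeds F_L,
% which sets the critical current density:
J_c \approx \frac{F_p}{\Phi_0}
```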
    In their study published in NPG Asia Materials, Professor Iida and his research team successfully grew a thin-film superconductor that has a surprisingly high pinning efficiency without APCs. “Crystalline materials are made up of different regions with different crystalline orientations called ‘grains.’ When the angle between the boundaries of different grains in the material is less than a critical angle, θc, we call it a ‘low-angle grain boundary (LAGB).’ LAGBs contribute to magnetic flux pinning, which enhances the properties of the superconductor,” explains Dr. Iida.
    Iron (Fe)-based superconductors (FBS) are considered to be the next-generation superconductor technology. In their study, Professor Iida and team grew an FBS called “potassium (K)-doped BaFe2As2 (Ba122)” using a technique called “molecular beam epitaxy,” in which the superconductor is grown on a substrate. “The difficulties involved in controlling volatile potassium made the realization of epitaxial K-doped Ba122 challenging, but we succeeded in growing the thin films on fluoride substrates,” says Dr. Iida.
    The team then characterized the FBS using transmission electron microscopy and found that the film was composed of columnar grains approximately 30-60 nm wide. These grains were rotated around the crystallographic principal axes by angles well within θc for K-doped Ba122 and formed LAGB networks.
    The researchers then performed measurements of the thin film’s electrical resistivity and magnetic properties. They observed that the thin films had a surprisingly high critical current (the maximum current in a superconductor above which it transitions to a dissipative state). The LAGB networks further ensured a strong pinning efficiency in the material. “The in-field properties obtained in our study are comparable to those of ion-irradiated K-doped Ba122. Moreover, grain boundary engineering is a simple technique and can be scaled up for industrial applications,” comments Dr. Iida.
    The findings of this study could accelerate the development of strong magnets using superconductors, leading to advances in magnetic resonance imaging (MRI). The widespread application of MRI is currently limited by the high investment and operational cost of the MRI machines due to the cooling costs of the superconductors within. But with simple and inexpensive techniques such as grain boundary engineering for fabricating superconductors, MRIs could become more accessible to patients, improving our quality of life.
    Story Source:
    Materials provided by Nagoya University. Note: Content may be edited for style and length.