More stories

  • Using ultra-low temperatures to understand high-temperature superconductivity

    At low temperatures, certain materials lose their electrical resistance and conduct electricity without any loss — this phenomenon of superconductivity has been known since 1911, but it is still not fully understood. And that is a pity, because finding a material that would still have superconducting properties even at high temperatures would probably trigger a technological revolution.
    A discovery made at TU Wien (Vienna) could be an important step in this direction: A team of solid-state physicists studied an unusual material — a so-called “strange metal” made of ytterbium, rhodium and silicon. Strange metals show an unusual relationship between electrical resistance and temperature. In the case of this material, this correlation can be seen in a particularly wide temperature range, and the underlying mechanism is known. Contrary to previous assumptions, it now turns out that this material is also a superconductor and that superconductivity is closely related to strange metal behaviour. This could be the key to understanding high-temperature superconductivity in other classes of materials as well.
    Strange metal: linear relationship between resistance and temperature
    In ordinary metals, electrical resistance at low temperatures increases with the square of the temperature. In some high-temperature superconductors, however, the situation is completely different: at low temperatures, below the so-called superconducting transition temperature, they show no electrical resistance at all, and above this temperature the resistance increases linearly instead of quadratically with temperature. This is what defines “strange metals.”
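    As a purely numerical illustration of the two behaviours described above (a toy sketch; the coefficients and units below are invented, not measured values):

    ```python
    import numpy as np

    # Toy resistance models in arbitrary units; R0, A and B are illustrative only.
    R0 = 0.1           # residual resistance as T -> 0
    A, B = 2e-4, 5e-3  # quadratic and linear coefficients

    def r_ordinary(T):
        """Ordinary (Fermi-liquid) metal: resistance grows with the square of T."""
        return R0 + A * T**2

    def r_strange(T):
        """'Strange metal': resistance grows linearly with T."""
        return R0 + B * T

    for T in (1, 5, 10, 20, 40):
        print(f"T = {T:>2} K   ordinary: {r_ordinary(T):.4f}   strange: {r_strange(T):.4f}")
    ```

    On a log-log plot of R minus R0 against T, the two models appear as straight lines of slope 2 and 1, which is essentially how the exponent is read off experimentally.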
    “It has therefore already been suspected in recent years that this linear relationship between resistance and temperature is of great importance for superconductivity,” says Prof. Silke Bühler-Paschen, who heads the research area “Quantum Materials” at the Institute of Solid State Physics at TU Wien. “But unfortunately, until now we didn’t know of a suitable material to study this in great depth.” In the case of high-temperature superconductors, the linear relationship between temperature and resistance is usually only detectable in a relatively small temperature range, and, furthermore, various effects that inevitably occur at higher temperatures can influence this relationship in complicated ways.
    Many experiments have already been carried out with an exotic material (YbRh2Si2) that displays strange metal behaviour over an extremely wide temperature range — but, surprisingly, no superconductivity seemed to emerge from this extreme “strange metal” state. “Theoretical considerations have already been put forward to justify why superconductivity is simply not possible here,” says Silke Bühler-Paschen. “Nevertheless, we decided to take another look at this material.”
    Record-breaking temperatures
    At TU Wien, a particularly powerful low-temperature laboratory is available. “There we can study materials under more extreme conditions than other research groups have been able to do so far,” explains Silke Bühler-Paschen. First, the team was able to show that in YbRh2Si2 the linear relationship between resistance and temperature exists in an even larger temperature range than previously thought — and then they made the key discovery: at extremely low temperatures of only one millikelvin, the strange metal turns into a superconductor.
    “This makes our material ideally suited for finding out in what way the strange metal behaviour leads to superconductivity,” says Silke Bühler-Paschen.
    Paradoxically, the very fact that the material only becomes superconducting at very low temperatures ensures that it can be used to study high-temperature superconductivity particularly well: “The mechanisms that lead to superconductivity are visible particularly well at these extremely low temperatures because they are not overlaid by other effects in this regime. In our material, this is the localisation of some of the conduction electrons at a quantum critical point. There are indications that a similar mechanism may also be responsible for the behaviour of high-temperature superconductors such as the famous cuprates,” says Silke Bühler-Paschen.
    Story Source:
    Materials provided by Vienna University of Technology. Note: Content may be edited for style and length.

  • New technology shows promise in detecting, blocking grid cyberattacks

    Researchers from Idaho National Laboratory and New Mexico-based Visgence Inc. have designed and demonstrated a technology that can block cyberattacks from impacting the nation’s electric power grid.
    During a recent live demonstration at INL’s Critical Infrastructure Test Range Complex, the Constrained Cyber Communication Device (C3D) was tested against a series of remote access attempts indicative of a cyberattack. The device alerted operators to the abnormal commands and blocked them automatically, preventing the attacks from accessing and damaging critical power grid components.
    “Protecting our critical infrastructure from foreign adversaries is a key component in the department’s national security posture,” said Patricia Hoffman, acting assistant secretary for the U.S. Department of Energy. “It’s accomplishments like this that expand our efforts to strengthen our electric system against threats while mitigating vulnerabilities. Leveraging the capabilities of Idaho National Laboratory and the other national laboratories will accelerate the modernization of our grid hardware, protecting us from cyberattacks.”
    The C3D device uses advanced communication capabilities to autonomously review and filter commands being sent to protective relay devices. Relays are the heart and soul of the nation’s power grid and are designed to rapidly command breakers to turn off the flow of electricity when a disturbance is detected. For instance, relays can prevent expensive equipment from being damaged when a power line fails because of a severe storm.
    However, relays were not traditionally designed to counter the speed and stealth of a cyberattack, which can send rogue commands to grid equipment in milliseconds. Preventing this kind of attack requires an intelligent, automatic filtering technology.
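    The press release does not detail C3D's internal logic. Purely to illustrate what automatic command filtering can mean in this setting, here is a minimal Python sketch; the command format, allow-lists and rate limit are all invented for the example:

    ```python
    import time
    from collections import deque

    # Hypothetical relay command vocabulary and device inventory (invented).
    ALLOWED_ACTIONS = {"OPEN", "CLOSE", "STATUS"}
    ALLOWED_BREAKERS = {"BRK-7", "BRK-12"}
    MAX_CMDS_PER_SEC = 2          # made-up rate limit; real traffic is far richer

    recent = deque()              # timestamps of recently accepted commands

    def filter_command(ts, breaker_id, action):
        """Return True to forward the command to the relay, False to block it."""
        # Rule 1: reject commands aimed at unknown devices or outside the vocabulary.
        if breaker_id not in ALLOWED_BREAKERS or action not in ALLOWED_ACTIONS:
            return False
        # Rule 2: reject bursts. Legitimate operator commands arrive slowly,
        # while an attack can issue commands within milliseconds.
        while recent and ts - recent[0] > 1.0:
            recent.popleft()
        if len(recent) >= MAX_CMDS_PER_SEC:
            return False
        recent.append(ts)
        return True

    print(filter_command(time.time(), "BRK-7", "OPEN"))    # True: normal command
    print(filter_command(time.time(), "BRK-7", "REBOOT"))  # False: unknown action
    ```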
    “As cyberattacks against the nation’s critical infrastructure have grown more sophisticated, there is a need for a device to provide a last line of defense against threats,” said INL program manager Jake Gentle. “The C3D device sits deep inside a utility’s network, monitoring and blocking cyberattacks before they impact relay operations.”
    To test the technology’s effectiveness, researchers spent nearly a year collaborating with industry experts, including longtime partners from Power Engineers, an international engineering and environmental consulting firm. INL and the Department of Energy also established an industry advisory board consisting of power grid and cybersecurity experts from across the federal government, private industry and academia.
    After thoroughly assessing industry needs and analyzing the makeup of modern cyber threats, researchers designed an electronic device that could be wired into a protective relay’s communication network. Then they constructed a 36-foot mobile substation and connected it to INL’s full-scale electric power grid test bed to establish an at-scale power grid environment.
    With the entire system online, researchers sent a sudden power spike command to the substation relays and monitored the effects from a nearby command center. Instantly, the C3D device blocked the command and prevented the attack from damaging the larger grid.
    The development of the device was funded by DOE’s Office of Electricity under the Protective Relay Permission Communication project. The technology and an associated software package will undergo further testing over the next several months before being made available for licensing to private industry.
    Story Source:
    Materials provided by DOE/Idaho National Laboratory. Note: Content may be edited for style and length.

  • Machine learning models to help photovoltaic systems find their place in the sun

    Although photovoltaic systems constitute a promising way of harnessing solar energy, power grid managers need to accurately predict their power output to schedule generation and maintenance operations efficiently. Scientists from Incheon National University, Korea, have developed a machine learning-based approach that can more accurately estimate the output of photovoltaic systems than similar algorithms, paving the way to a more sustainable society.
    With the looming threat of climate change, it is high time we embrace renewable energy sources on a larger scale. Photovoltaic systems, which generate electricity from the nearly limitless supply of sunlight energy, are one of the most promising ways of generating clean energy. However, integrating photovoltaic systems into existing power grids is not a straightforward process. Because the power output of photovoltaic systems depends heavily on environmental conditions, power plant and grid managers need estimations of how much power will be injected by photovoltaic systems so as to plan optimal generation and maintenance schedules, among other important operational aspects.
    In line with modern trends, if something needs predicting, you can safely bet that artificial intelligence will make an appearance. To date, there are many algorithms that can estimate the power produced by photovoltaic systems several hours ahead by learning from previous data and analyzing current variables. One of them, called adaptive neuro-fuzzy inference system (ANFIS), has been widely applied for forecasting the performance of complex renewable energy systems. Since its inception, many researchers have combined ANFIS with a variety of machine learning algorithms to improve its performance even further.
    In a recent study published in Renewable and Sustainable Energy Reviews, a research team led by Jong Wan Hu from Incheon National University, Korea, developed two new ANFIS-based models to better estimate the power generated by photovoltaic systems ahead of time by up to a full day. These two models are ‘hybrid algorithms’ because they combine the traditional ANFIS approach with two different particle swarm optimization methods, which are powerful and computationally efficient strategies for finding optimal solutions to optimization problems.
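    The authors' hybrid models are not reproduced here, but the particle swarm optimization ingredient is simple to sketch. Below is a minimal, generic PSO loop minimizing a toy stand-in for a model's training error; it illustrates the optimizer that gets coupled to ANFIS, not the study's implementation (the constants are conventional textbook values):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def objective(x):
        # Toy stand-in for a model's training error (e.g., RMSE of predicted power).
        return np.sum((x - 3.0) ** 2, axis=-1)

    n_particles, dim, iters = 30, 5, 100
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients

    pos = rng.uniform(-10, 10, (n_particles, dim))   # candidate parameter vectors
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), objective(pos)    # each particle's personal best
    gbest = pbest[np.argmin(pbest_val)].copy()       # best solution found so far

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = objective(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    print("best parameters found:", np.round(gbest, 3))  # converges toward [3 3 3 3 3]
    ```

    In the hybrid setting, each particle's position would encode ANFIS parameters (such as membership-function shapes), and the objective would be the forecasting error on training data.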
    To assess the performance of their models, the team compared them with other ANFIS-based hybrid algorithms. They tested the predictive abilities of each model using real data from an actual photovoltaic system deployed in Italy in a previous study. The results, as Dr. Hu remarks, were very promising: “One of the two models we developed outperformed all the hybrid models tested, and hence showed great potential for predicting the photovoltaic power of solar systems at both short- and long-time horizons.”
    The findings of this study could have immediate implications in the field of photovoltaic systems from software and production perspectives. “In terms of software, our models can be turned into applications that accurately estimate photovoltaic system values, leading to enhanced performance and grid operation. In terms of production, our methods can translate into a direct increase in photovoltaic power by helping select variables that can be used in the photovoltaic system’s design,” explains Dr. Hu. Let us hope this work helps us in the transition to sustainable energy sources!
    Story Source:
    Materials provided by Incheon National University. Note: Content may be edited for style and length.

  • Bleak cyborg future from brain-computer interfaces if we're not careful

    Surpassing the biological limitations of the brain and using one’s mind to interact with and control external electronic devices may sound like the distant cyborg future, but it could come sooner than we think.
    Researchers from Imperial College London conducted a review of modern commercial brain-computer interface (BCI) devices, and they discuss the primary technological limitations and humanitarian concerns of these devices in APL Bioengineering, from AIP Publishing.
    The most promising method to achieve real-world BCI applications is through electroencephalography (EEG), a method of monitoring the brain noninvasively through its electrical activity. EEG-based BCIs, or eBCIs, will require a number of technological advances prior to widespread use, but more importantly, they will raise a variety of social, ethical, and legal concerns.
    Though it is difficult to understand exactly what a user experiences when operating an external device with an eBCI, a few things are certain. For one, eBCIs can communicate both ways. This allows a person to control electronics, which is particularly useful for medical patients who need help controlling wheelchairs, for example, but it also potentially changes the way the brain functions.
    “For some of these patients, these devices become such an integrated part of themselves that they refuse to have them removed at the end of the clinical trial,” said Rylie Green, one of the authors. “It has become increasingly evident that neurotechnologies have the potential to profoundly shape our own human experience and sense of self.”
    Aside from these potentially bleak mental and physiological side effects, intellectual property concerns are also an issue and may allow private companies that develop eBCI technologies to own users’ neural data.
    “This is particularly worrisome, since neural data is often considered to be the most intimate and private information that could be associated with any given user,” said Roberto Portillo-Lara, another author. “This is mainly because, apart from its diagnostic value, EEG data could be used to infer emotional and cognitive states, which would provide unparalleled insight into user intentions, preferences, and emotions.”
    As the availability of these platforms expands beyond medical treatment, disparities in access to these technologies may exacerbate existing social inequalities. For example, eBCIs used for cognitive enhancement could create extreme imbalances in academic and professional success and in educational advancement.
    “This bleak panorama brings forth an interesting dilemma about the role of policymakers in BCI commercialization,” Green said. “Should regulatory bodies intervene to prevent misuse and unequal access to neurotech? Should society follow instead the path taken by previous innovations, such as the internet or the smartphone, which originally targeted niche markets but are now commercialized on a global scale?”
    She calls on global policymakers, neuroscientists, manufacturers, and potential users of these technologies to begin having these conversations early and collaborate to produce answers to these difficult moral questions.
    “Despite the potential risks, the ability to integrate the sophistication of the human mind with the capabilities of modern technology constitutes an unprecedented scientific achievement, which is beginning to challenge our own preconceptions of what it is to be human,” Green said.

  • Study finds surprising source of social influence

    Imagine you’re a CEO who wants to promote an innovative new product — a time management app or a fitness program. Should you send the product to Kim Kardashian in the hope that she’ll love it and spread the word to her legions of Instagram followers? The answer would be ‘yes’ if successfully transmitting new ideas or behavior patterns was as simple as showing them to as many people as possible.
    However, a forthcoming study in the journal Nature Communications finds that, as prominent and revered as social influencers seem to be, they are in fact unlikely to change a person’s behavior by example — and might actually be detrimental to the cause.
    Why?
    “When social influencers present ideas that are dissonant with their followers’ worldviews — say, for example, that vaccination is safe and effective — they can unintentionally antagonize the people they are seeking to persuade because people typically only follow influencers whose ideas confirm their beliefs about the world,” says Damon Centola, Elihu Katz Professor of Communication, Sociology, and Engineering at Penn, and senior author on the paper.
    So what strategy do we take if we want to use an online or real world neighborhood network to ‘plant’ a new idea? Is there anyone in a social network who is effective at transmitting new beliefs? The new study delivers a surprising answer: yes, and it’s the people you’d least expect to have any pull. To stimulate a shift in thinking, target small groups of people in the “outer edge” or fringe of a network.
    Centola and Douglas Guilbeault, Ph.D., a recent Annenberg graduate, studied over 400 public health networks to discover which people could spread new ideas and behaviors most effectively. They tested every possible person in every network to determine who would be most effective for spreading everything from celebrity gossip to vaccine acceptance.

  • Ultrathin magnet operates at room temperature

    The development of an ultrathin magnet that operates at room temperature could lead to new applications in computing and electronics — such as high-density, compact spintronic memory devices — and new tools for the study of quantum physics.
    The ultrathin magnet, which was recently reported in the journal Nature Communications, could enable advances in next-generation memory, computing, spintronics, and quantum physics. It was discovered by scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley.
    “We’re the first to make a room-temperature 2D magnet that is chemically stable under ambient conditions,” said senior author Jie Yao, a faculty scientist in Berkeley Lab’s Materials Sciences Division and associate professor of materials science and engineering at UC Berkeley.
    “This discovery is exciting because it not only makes 2D magnetism possible at room temperature, but it also uncovers a new mechanism to realize 2D magnetic materials,” added Rui Chen, a UC Berkeley graduate student in the Yao Research Group and lead author on the study.
    The magnetic component of today’s memory devices is typically made of magnetic thin films. But at the atomic level, these magnetic films are still three-dimensional — hundreds or thousands of atoms thick. For decades, researchers have searched for ways to make thinner and smaller 2D magnets and thus enable data to be stored at a much higher density.
    Previous achievements in the field of 2D magnetic materials have brought promising results. But these early 2D magnets lose their magnetism and become chemically unstable at room temperature.

  • New algorithm may help autonomous vehicles navigate narrow, crowded streets

    It is a scenario familiar to anyone who has driven down a crowded, narrow street. Parked cars line both sides, and there isn’t enough space for vehicles traveling in both directions to pass each other. One has to duck into a gap in the parked cars or slow and pull over as far as possible for the other to squeeze by.
    Drivers find a way to negotiate this, but not without close calls and frustration. Programming an autonomous vehicle (AV) to do the same — without a human behind the wheel or knowledge of what the other driver might do — presented a unique challenge for researchers at the Carnegie Mellon University Argo AI Center for Autonomous Vehicle Research.
    “It’s the unwritten rules of the road, that’s pretty much what we’re dealing with here,” said Christoph Killing, a former visiting research scholar in the School of Computer Science’s Robotics Institute and now part of the Autonomous Aerial Systems Lab at the Technical University of Munich. “It’s a difficult bit. You have to learn to negotiate this scenario without knowing if the other vehicle is going to stop or go.”
    While at CMU, Killing teamed up with research scientist John Dolan and Ph.D. student Adam Villaflor to crack this problem. The team presented its research, “Learning To Robustly Negotiate Bi-Directional Lane Usage in High-Conflict Driving Scenarios,” at the International Conference on Robotics and Automation.
    The team believes their research is the first into this specific driving scenario. It requires drivers — human or not — to collaborate to make it past each other safely without knowing what the other is thinking. Drivers must balance aggression with cooperation. An overly aggressive driver, one that just goes without regard for other vehicles, could put itself and others at risk. An overly cooperative driver, one that always pulls over in the face of oncoming traffic, may never make it down the street.
    “I have always found this to be an interesting and sometimes difficult aspect of driving in Pittsburgh,” Dolan said.
    Autonomous vehicles have been heralded as a potential solution to the last-mile challenges of delivery and transportation. But for an AV to deliver a pizza, a package or a person to its destination, it has to be able to navigate tight spaces and unknown driver intentions.
    The team developed a method to model different levels of driver cooperativeness — how likely a driver was to pull over to let the other driver pass — and used those models to train an algorithm that could assist an autonomous vehicle to safely and efficiently navigate this situation. The algorithm has only been used in simulation and not on a vehicle in the real world, but the results are promising. The team found that their algorithm performed better than current models.
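    As a toy illustration of what modelling “levels of driver cooperativeness” can mean (not the team's actual learned models), consider a one-parameter sketch in which the oncoming driver pulls over with a fixed probability:

    ```python
    import random

    def other_driver_yields(cooperativeness):
        """One-parameter model: the oncoming driver pulls over with this probability."""
        return random.random() < cooperativeness

    def run_episode(av_goes, cooperativeness):
        """Outcome of one meeting at a narrow gap between parked cars."""
        other_goes = not other_driver_yields(cooperativeness)
        if av_goes and other_goes:
            return "conflict"   # both squeeze in: the risky case
        if not av_goes and not other_goes:
            return "deadlock"   # both wait: the over-cooperative case
        return "pass"           # exactly one proceeds

    random.seed(1)
    for c in (0.2, 0.5, 0.8):
        outcomes = [run_episode(av_goes=True, cooperativeness=c) for _ in range(1000)]
        print(f"cooperativeness {c}: conflicts in {outcomes.count('conflict') / 10:.1f}% of episodes")
    ```

    Even this crude model reproduces the trade-off described above: an always-go policy risks conflicts against uncooperative drivers, while an always-yield policy can deadlock, which is why the team trained their algorithm against a range of such cooperativeness models.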
    Driving is full of complex scenarios like this one. As the autonomous driving researchers tackle them, they look for ways to make the algorithms and models developed for one scenario, say merging onto a highway, work for other scenarios, like changing lanes or making a left turn against traffic at an intersection.
    “Extensive testing is bringing to light the last percent of tough cases,” Dolan said. “We keep finding these corner cases and keep coming up with ways to handle them.”
    Video: https://www.youtube.com/watch?v=5njRSHcHMBk
    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Aaron Aupperlee. Note: Content may be edited for style and length.

  • Novel techniques extract more accurate data from images degraded by environmental factors

    Computer vision technology is increasingly used in areas such as automatic surveillance systems, self-driving cars, facial recognition, healthcare and social distancing tools. Users require accurate and reliable visual information to fully harness the benefits of video analytics applications, but the quality of video data is often degraded by environmental factors such as rain, night-time conditions or crowds (where multiple people overlap in a scene). Using computer vision and deep learning, a team of researchers led by Yale-NUS College Associate Professor of Science (Computer Science) Robby Tan, who is also from the National University of Singapore’s (NUS) Faculty of Engineering, has developed novel approaches that resolve the problem of low-level vision in videos caused by rain and night-time conditions, and that improve the accuracy of 3D human pose estimation in videos.
    The research was presented at the 2021 Conference on Computer Vision and Pattern Recognition (CVPR).
    Combating visibility issues during rain and night-time conditions
    Night-time images are affected by low light and human-made light effects such as glare, glow, and floodlights, while rain images are affected by rain streaks or rain accumulation (or rain veiling effect).
    “Many computer vision systems, like automatic surveillance and self-driving cars, rely on clear visibility of the input videos to work well. For instance, self-driving cars cannot work robustly in heavy rain, and CCTV automatic surveillance systems often fail at night, particularly if the scenes are dark or there is significant glare or floodlights,” explained Assoc Prof Tan.
    In two separate studies, Assoc Prof Tan and his team introduced deep learning algorithms to enhance the quality of night-time videos and rain videos, respectively. In the first study, they boosted brightness while simultaneously suppressing noise and light effects (glare, glow and floodlights) to yield clear night-time images. The technique is new in that it addresses the challenge of clarity in night-time images and videos when the presence of glare cannot be ignored; existing state-of-the-art methods fail to handle glare.
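    For contrast with the learned approach, a classical, non-learned baseline is easy to write down: gamma correction brightens a dark frame but amplifies noise along with the signal and does nothing about glare, which is precisely the gap the new algorithms address. A minimal OpenCV sketch (the parameters are arbitrary and “night_frame.png” is a placeholder test image):

    ```python
    import cv2
    import numpy as np

    def naive_night_enhance(frame_bgr, gamma=2.2):
        """Brighten a dark frame with gamma correction, then lightly denoise.

        This is a classical baseline, NOT the learned method from the paper:
        it brightens glare and noise along with the scene, which is the failure
        mode the new algorithms are designed to avoid.
        """
        lut = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype(np.uint8)
        brightened = cv2.LUT(frame_bgr, lut)
        return cv2.fastNlMeansDenoisingColored(brightened, None, 5, 5, 7, 21)

    frame = cv2.imread("night_frame.png")  # placeholder: any dark test image
    if frame is not None:
        cv2.imwrite("enhanced.png", naive_night_enhance(frame))
    ```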