More stories

  • A novel all-optical switching method makes optical computing and communication systems more power-efficient

    Photonics researchers have introduced a novel method to control a light beam with another beam through a unique plasmonic metasurface in a linear medium at ultra-low power. This simple linear switching method makes nanophotonic devices such as optical computing and communication systems more sustainable, as it requires only a low intensity of light.
    All-optical switching is the modulation of a signal beam by a control beam in such a way that it provides an ON/OFF conversion function. In general, a light beam can be modulated by another, intense laser beam in the presence of a nonlinear medium.
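    As a purely illustrative aside, the ON/OFF function can be sketched in a few lines of Python; the threshold and contrast values below are our own invented stand-ins and say nothing about the EIR mechanism or the actual device.

    ```python
    # Toy model of all-optical switching: a signal beam is gated ON/OFF by the
    # power of a control beam. All numbers are arbitrary illustrative stand-ins,
    # not parameters of the plasmonic metasurface described in the article.

    def transmitted_signal(signal_in: float, control_power: float,
                           threshold: float = 1e-3, contrast: float = 0.95) -> float:
        """Signal passes when the control beam is ON (above threshold),
        and is strongly attenuated otherwise."""
        if control_power >= threshold:
            return signal_in                    # ON: signal transmitted
        return signal_in * (1.0 - contrast)     # OFF: signal suppressed

    for power in (0.0, 1e-4, 1e-2):  # control powers, arbitrary units
        print(f"control={power:.0e} -> signal out = {transmitted_signal(1.0, power):.2f}")
    ```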
    The switching method developed by the researchers is fundamentally based on the quantum optical phenomenon known as Enhancement of Index of Refraction (EIR).
    “Our work is the first experimental demonstration of this effect on the optical system and its utilization for linear all-optical switching. The research also enlightens the scientific community to achieve loss-compensated plasmonic devices operating at resonance frequencies through extraordinary enhancement of refractive index without using any gain media or nonlinear processes,” says Humeyra Caglayan, Associate Professor (tenure track) in Photonics at Tampere University.
    Optical switching enabled at ultrafast speed
    High-speed switching and a low-loss medium that avoids strong dissipation of the signal during propagation are the basis for developing integrated photonic technology, in which photons are used as information carriers instead of electrons. To realize on-chip ultrafast all-optical switch networks and photonic central processing units, all-optical switching must offer ultrafast switching times, ultralow threshold control power, ultrahigh switching efficiency, and nanoscale feature size.

  • Study explores the promises and pitfalls of evolutionary genomics

    The second century Alexandrian astronomer and mathematician Claudius Ptolemy had a grand ambition. Hoping to make sense of the motion of stars and the paths of planets, he published a magisterial treatise on the subject, known as the Almagest. Ptolemy created a complex mathematical model of the universe that seemed to recapitulate the movements of the celestial objects he observed.
    Unfortunately, a fatal flaw lay at the heart of his cosmic scheme. Following the prejudices of his day, Ptolemy worked from the premise that the Earth was the center of the universe. The Ptolemaic universe, composed of complex “epicycles” to account for planet and star movements, has long since been consigned to the history books, though its conclusions remained scientific dogma for over 1,200 years.
    The field of evolutionary biology is no less subject to misguided theoretical approaches, sometimes producing impressive models that nevertheless fail to convey the true workings of nature as it shapes the dizzying assortment of living forms on Earth.
    A new study examines mathematical models designed to draw inferences about how evolution operates at the level of populations of organisms. The study concludes that such models must be constructed with the greatest care, avoiding unwarranted initial assumptions, weighing the quality of existing knowledge and remaining open to alternate explanations.
    Failure to apply strict procedures in null model construction can lead to theories that seem to square with certain aspects of available data derived from DNA sequencing, yet fail to correctly elucidate underlying evolutionary processes, which are often highly complex and multifaceted.
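    To make the idea of a null model concrete, consider a minimal sketch (ours, not the study’s): simulate neutral genetic drift with a toy Wright-Fisher model and ask how large a frequency change drift alone can produce before selection need be invoked. The population size, time span, and starting frequency below are invented for illustration.

    ```python
    import random

    # Toy Wright-Fisher drift simulation used as a neutral null model: how much
    # can an allele frequency change with no selection at all? Observed changes
    # are judged against this neutral distribution. All parameters are invented.

    N, GENERATIONS, P0 = 200, 50, 0.5  # haploid population, time span, start freq.

    def neutral_change() -> float:
        """Drift P0 forward for GENERATIONS and return the net frequency change."""
        p = P0
        for _ in range(GENERATIONS):
            p = sum(random.random() < p for _ in range(N)) / N  # binomial resampling
        return p - P0

    changes = sorted(abs(neutral_change()) for _ in range(1000))
    cutoff = changes[int(0.95 * len(changes))]  # 95th percentile under neutrality
    print(f"Drift alone explains |Δp| up to about {cutoff:.3f} at the 95% level")
    ```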
    Such theoretical frameworks may offer compelling but ultimately flawed pictures of how evolution actually acts on populations over time, be these populations of bacteria, shoals of fish, or human societies and their various migrations during prehistory.

  • Bumps could smooth quantum investigations

    Atoms do weird things when forced out of their comfort zones. Rice University engineers have thought up a new way to give them a nudge.
    Materials theorist Boris Yakobson and his team at Rice’s George R. Brown School of Engineering have a theory that changing the contour of a layer of 2D material, thus changing the relationships between its atoms, might be simpler to do than previously thought.
    While others twist 2D bilayers — two layers stacked together — of graphene and the like to change their topology, the Rice researchers suggest through computational models that growing or stamping single-layer 2D materials on a carefully designed undulating surface would achieve “an unprecedented level of control” over their magnetic and electronic properties.
    They say the discovery opens a path to explore many-body effects, the interactions between multiple microscopic particles, including quantum systems.
    The paper by Yakobson and two alumni of his lab, co-lead author Sunny Gupta and Henry Yu, appears in Nature Communications.
    The researchers were inspired by recent discoveries that twisting or otherwise deforming 2D material bilayers, such as bilayer graphene, into “magic angles” induces interesting electronic and magnetic phenomena, including superconductivity.
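    For context (our addition, not a claim from the paper): the reason small twist angles matter is the moiré superlattice they create, whose period follows the standard relation below.

    ```latex
    % Standard moire period for two identical lattices (lattice constant a)
    % twisted by a small angle theta; context we add, not taken from the paper.
    \lambda_{\mathrm{moir\acute{e}}} = \frac{a}{2\sin(\theta/2)}
    % For graphene, a \approx 0.246\,\mathrm{nm}, so the "magic" angle
    % \theta \approx 1.1^{\circ} gives \lambda \approx 13\,\mathrm{nm},
    % the scale at which flat electronic bands emerge.
    ```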

  • 'Beam-steering' technology takes mobile communications beyond 5G

    Birmingham scientists have revealed a new beam-steering antenna that increases the efficiency of data transmission for ‘beyond 5G’ — and opens up a range of frequencies for mobile communications that are inaccessible to currently used technologies.
    Experimental results, presented today for the first time at the 3rd International Union of Radio Science Atlantic / Asia-Pacific Radio Science Meeting, show the device can provide continuous ‘wide-angle’ beam steering, allowing it to track a moving mobile phone user in the same way that a satellite dish turns to track a moving object, but with significantly enhanced speeds.
    Devised by researchers from the University of Birmingham’s School of Engineering, the technology has demonstrated vast improvements in data transmission efficiency at frequencies ranging across the millimetre wave spectrum, specifically those identified for 5G (mmWave) and 6G, where high efficiency is currently only achievable using slow, mechanically steered antenna solutions.
    For 5G mmWave applications, prototypes of the beam-steering antenna at 26 GHz have shown unprecedented data transmission efficiency.
    The device is fully compatible with existing 5G specifications that are currently used by mobile communications networks. Moreover, the new technology does not require the complex and inefficient feeding networks needed by commonly deployed antenna systems, instead using a low-complexity system that improves performance and is simple to fabricate.
    The beam-steering antenna was developed by Dr James Churm, Dr Muhammad Rabbani, and Professor Alexandros Feresidis, Head of the Metamaterials Engineering Laboratory, as a solution for fixed base-station antennas, for which current technology shows reduced efficiency at higher frequencies, limiting the use of these frequencies for long-distance transmission.

  • Great timing, supercomputer upgrade lead to successful forecast of volcanic eruption

    In the fall of 2017, geology professor Patricia Gregg and her team had just set up a new volcanic forecasting modeling program on the Blue Waters and iForge supercomputers. Simultaneously, another team was monitoring activity at the Sierra Negra volcano in the Galapagos Islands, Ecuador. One of the scientists on the Ecuador project, Dennis Geist of Colgate University, contacted Gregg, and what happened next was the fortuitous forecast of the June 2018 Sierra Negra eruption five months before it occurred.
    Initially developed on an iMac computer, the new modeling approach had already garnered attention for successfully recreating the unexpected eruption of Alaska’s Okmok volcano in 2008. Gregg’s team, based out of the University of Illinois Urbana-Champaign and the National Center for Supercomputing Applications, wanted to test the model’s new high-performance computing upgrade, and Geist’s Sierra Negra observations showed signs of an imminent eruption.
    “Sierra Negra is a well-behaved volcano,” said Gregg, the lead author of a new report of the successful effort. “Meaning that, before eruptions in the past, the volcano has shown all the telltale signs of an eruption that we would expect to see like groundswell, gas release and increased seismic activity. This characteristic made Sierra Negra a great test case for our upgraded model.”
    However, many volcanoes don’t follow these neatly established patterns, the researchers said. Forecasting eruptions is one of the grand challenges in volcanology, and the development of quantitative models to help with these trickier scenarios is the focus of Gregg and her team’s work.
    Over the winter break of 2017-18, Gregg and her colleagues ran the Sierra Negra data through the new supercomputing-powered model. They completed the run in January 2018 and, even though it was intended as a test, it ended up providing a framework for understanding Sierra Negra’s eruption cycles and evaluating the potential and timing of future eruptions, though nobody realized it at the time.
    “Our model forecasted that the strength of the rocks that contain Sierra Negra’s magma chamber would become very unstable sometime between June 25 and July 5, and possibly result in a mechanical failure and subsequent eruption,” said Gregg, who also is an NCSA faculty fellow. “We presented this conclusion at a scientific conference in March 2018. After that, we became busy with other work and did not look at our models again until Dennis texted me on June 26, asking me to confirm the date we had forecasted. Sierra Negra erupted one day after our earliest forecasted mechanical failure date. We were floored.”
    Though it represents an ideal scenario, the researchers said, the study shows the power of incorporating high-performance supercomputing into practical research. “The advantage of this upgraded model is its ability to constantly assimilate multidisciplinary, real-time data and process it rapidly to provide a daily forecast, similar to weather forecasting,” said Yan Zhan, a former Illinois graduate student and co-author of the study. “This takes an incredible amount of computing power previously unavailable to the volcanic forecasting community.”
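    As a hedged sketch of what such a daily assimilation loop could look like (everything here is a hypothetical stand-in, not the team’s physics-based model):

    ```python
    import random

    # Toy daily forecast loop: each day, ingest new monitoring data, update a
    # stress estimate, and check it against rock strength. The update rule,
    # threshold, and data streams are invented stand-ins for the real
    # multidisciplinary, HPC-scale assimilation described above.

    FAILURE_THRESHOLD = 1.0  # hypothetical normalized strength of the chamber roof
    stress = 0.2             # hypothetical initial normalized stress

    for day in range(1, 181):
        uplift = random.uniform(0.0, 0.01)         # stand-in for GPS/InSAR deformation data
        seismic_rate = random.uniform(0.0, 0.005)  # stand-in for seismicity data
        stress += uplift + seismic_rate            # toy assimilation/update step
        if stress >= FAILURE_THRESHOLD:
            print(f"Day {day}: forecast mechanical failure and possible eruption")
            break
    else:
        print("No failure forecast within the 180-day window")
    ```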

  • AI ethical decision making: Is society ready?

    With the accelerating evolution of technology, artificial intelligence (AI) plays a growing role in decision-making processes. Humans are becoming increasingly dependent on algorithms to process information, recommend certain behaviors, and even take actions on their behalf. A research team has studied how humans react to the introduction of AI decision making. Specifically, they explored the question “Is society ready for AI ethical decision making?” by studying human interaction with autonomous cars.
    The team published their findings on May 6, 2022, in the Journal of Behavioral and Experimental Economics.
    In the first of two experiments, the researchers presented 529 human subjects with an ethical dilemma a driver might face. In the scenario the researchers created, the car driver had to decide whether to crash the car into one group of people or another — the collision was unavoidable. The crash would cause severe harm to one group of people but would save the lives of the other group. The subjects in the study had to rate the driver’s decision, both when the driver was a human and when it was an AI. This first experiment was designed to measure the bias people might have against AI ethical decision making.
    In their second experiment, 563 human subjects responded to the researchers’ questions. The researchers determined how people react to the debate over AI ethical decisions once they become part of social and political discussions. In this experiment, there were two scenarios. One involved a hypothetical government that had already decided to allow autonomous cars to make ethical decisions. Their other scenario allowed the subjects to “vote” whether to allow the autonomous cars to make ethical decisions. In both cases, the subjects could choose to be in favor of or against the decisions made by the technology. This second experiment was designed to test the effect of two alternative ways of introducing AI into society.
    The researchers observed that when the subjects were asked to evaluate the ethical decisions of either a human or AI driver, they did not have a definitive preference for either. However, when the subjects were asked their explicit opinion on whether a driver should be allowed to make ethical decisions on the road, the subjects had a stronger opinion against AI-operated cars. The researchers believe that the discrepancy between the two results is caused by a combination of two elements.
    The first element is that individual people believe society as a whole does not want AI ethical decision making, and so they assign a positive weight to their beliefs when asked for their opinion on the matter. “Indeed, when participants are asked explicitly to separate their answers from those of society, the difference between the permissibility for AI and human drivers vanishes,” said Johann Caro-Burnett, an assistant professor in the Graduate School of Humanities and Social Sciences, Hiroshima University.
    The second element is that when introducing this new technology into society, allowing discussion of the topic has mixed results depending on the country. “In regions where people trust their government and have strong political institutions, information and decision-making power improve how subjects evaluate the ethical decisions of AI. In contrast, in regions where people do not trust their government and have weak political institutions, decision-making capability deteriorates how subjects evaluate the ethical decisions of AI,” said Caro-Burnett.
    “We find that there is a social fear of AI ethical decision-making. However, the source of this fear is not intrinsic to individuals. Indeed, this rejection of AI comes from what individuals believe is the society’s opinion,” said Shinji Kaneko, a professor in the Graduate School of Humanities and Social Sciences, Hiroshima University, and the Network for Education and Research on Peace and Sustainability. So when not being asked explicitly, people do not show any signs of bias against AI ethical decision-making. However, when asked explicitly, people show an aversion to AI. Furthermore, where there is added discussion and information on the topic, the acceptance of AI improves in developed countries and worsens in developing countries.
    The researchers believe this rejection of a new technology, which is mostly due to individuals’ beliefs about society’s opinion, is likely to apply to other machines and robots. “Therefore, it will be important to determine how to aggregate individual preferences into one social preference. Moreover, this task will also have to be different across countries, as our results suggest,” said Kaneko.
    Story Source: Materials provided by Hiroshima University.

  • An atomic-scale window into superconductivity paves the way for new quantum materials

    Superconductors are materials with no electrical resistance whatsoever, commonly requiring extremely low temperatures. They are used in a wide range of domains, from medical applications to a central role in quantum computers. Superconductivity is caused by specially linked pairs of electrons known as Cooper pairs. So far, the occurrence of Cooper pairs has been measured only indirectly, macroscopically and in bulk, but a new technique developed by researchers at Aalto University and Oak Ridge National Laboratory in the US can detect their occurrence with atomic precision.
    The experiments were carried out by Wonhee Ko and Petro Maksymovych at Oak Ridge National Laboratory with the theoretical support of Professor Jose Lado of Aalto University. Electrons can quantum tunnel across energy barriers, jumping from one system to another through space in a way that cannot be explained with classical physics. For example, if an electron pairs with another electron right at the point where a metal and superconductor meet, it could form a Cooper pair that enters the superconductor while also “kicking back” another kind of particle into the metal in a process known as Andreev reflection. The researchers looked for these Andreev reflections to detect Cooper pairs.
    To do this, they measured the electrical current between an atomically sharp metallic tip and a superconductor, as well as how the current depended on the separation between the tip and the superconductor. This enabled them to detect the amount of Andreev reflection going back to the superconductor, while maintaining an imaging resolution comparable to individual atoms. The results of the experiment corresponded exactly to Lado’s theoretical model.
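    A textbook-level note on why the distance dependence is informative (our gloss, not necessarily the authors’ exact analysis): single-electron tunneling scales linearly with the barrier transparency, while Andreev reflection transfers two electrons and scales with its square, so the two contributions decay at different rates as the tip is withdrawn.

    ```latex
    % Standard tunneling-regime scalings (our gloss, not the paper's fitted model):
    % the single-particle and Andreev (two-electron) channels fall off at
    % different rates with tip-sample distance d.
    G_{\mathrm{single}} \propto T(d) \propto e^{-2\kappa d}, \qquad
    G_{\mathrm{Andreev}} \propto T(d)^{2} \propto e^{-4\kappa d}, \qquad
    \kappa = \frac{\sqrt{2 m \phi}}{\hbar}
    % with T the barrier transparency and \phi the effective barrier height.
    ```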
    This experimental detection of Cooper pairs at the atomic scale provides an entirely new method for understanding quantum materials. For the first time, researchers can uniquely determine how the wave functions of Cooper pairs are reconstructed at the atomic scale and how they interact with atomic-scale impurities and other obstacles.
    ‘This technique establishes a critical new methodology for understanding the internal quantum structure of exotic types of superconductors known as unconventional superconductors, potentially allowing us to tackle a variety of open problems in quantum materials,’ Lado says. Unconventional superconductors are a potential fundamental building block for quantum computers and could provide a platform to realize superconductivity at room temperature. Cooper pairs have unique internal structures in unconventional superconductors which so far have been challenging to understand.
    This discovery allows for the direct probing of the state of Cooper pairs in unconventional superconductors, establishing a critical new technique for a whole family of quantum materials. It represents a major step forward in our understanding of quantum materials and helps push forward the work of developing quantum technologies.
    Story Source: Materials provided by Aalto University.

  • Creating artificial intelligence that acts more human by 'knowing that it knows'

    A research group from the Graduate School of Informatics, Nagoya University, has taken a big step towards creating a neural network with metamemory through a computer-based evolution experiment.
    In recent years, there has been rapid progress in designing artificial intelligence technology using neural networks that imitate brain circuits. One goal of this field of research is understanding the evolution of metamemory to use it to create artificial intelligence with a human-like mind.
    Metamemory is the process by which we ask ourselves whether we remember what we had for dinner yesterday and then use that memory to decide whether to eat something different tonight. While this may seem like a simple question, answering it involves a complex process. Metamemory is important because it involves a person having knowledge of their own memory capabilities and adjusting their behavior accordingly.
    “In order to elucidate the evolutionary basis of the human mind and consciousness, it is important to understand metamemory,” explains lead author Professor Takaya Arita. “A truly human-like artificial intelligence, which can be interacted with and enjoyed like a family member in a person’s home, is an artificial intelligence that has a certain amount of metamemory, as it has the ability to remember things that it once heard or learned.”
    When studying metamemory, researchers often employ a ‘delayed matching-to-sample task’. In humans, this task consists of the participant seeing an object, such as a red circle, remembering it, and then taking part in a test to select the thing that they had previously seen from multiple similar objects. Correct answers are rewarded and wrong answers punished. However, the subject can choose not to do the test and still earn a smaller reward.
    A human performing this task would naturally use their metamemory to consider if they remembered seeing the object. If they remembered it, they would take the test to get the bigger reward, and if they were unsure, they would avoid risking the penalty and receive the smaller reward instead. Previous studies reported that monkeys could perform this task as well.
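    The reward structure above lends itself to a small simulation. The following Python sketch (ours; the payoffs and the confidence model are invented, not taken from the study) shows how a metamemory-style opt-out rule pays off at different memory strengths.

    ```python
    import random

    # Toy delayed matching-to-sample trial: an agent either takes the memory
    # test (big reward if right, penalty if wrong) or opts out for a small sure
    # reward, based on its confidence in its own memory. All payoff values and
    # the noise model are invented for illustration.

    BIG_REWARD, SMALL_REWARD, PENALTY = 1.0, 0.3, -1.0

    def trial(memory_strength: float) -> float:
        confidence = memory_strength + random.gauss(0.0, 0.1)  # noisy self-assessment
        if confidence < 0.5:        # unsure: decline the test, take the safe reward
            return SMALL_REWARD
        remembered = random.random() < memory_strength
        return BIG_REWARD if remembered else PENALTY

    for strength in (0.2, 0.5, 0.9):
        average = sum(trial(strength) for _ in range(10_000)) / 10_000
        print(f"memory strength {strength:.1f}: average payoff {average:+.2f}")
    ```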
    The Nagoya University team comprising Professor Takaya Arita, Yusuke Yamato, and Reiji Suzuki of the Graduate School of Informatics created an artificial neural network model that performed the delayed matching-to-sample task and analyzed how it behaved.
    Despite starting from random neural networks that did not even have a memory function, the model was able to evolve to the point that it performed similarly to the monkeys in previous studies. The neural network could examine its memories, keep them, and separate outputs. It managed this without any assistance or intervention from the researchers, suggesting that it plausibly evolved metamemory mechanisms. “The need for metamemory depends on the user’s environment. Therefore, it is important for artificial intelligence to have a metamemory that adapts to its environment by learning and evolving,” says Professor Arita of the finding. “The key point is that the artificial intelligence learns and evolves to create a metamemory that adapts to its environment.”
    Creating an adaptable intelligence with metamemory is a big step towards making machines that have memories like ours. The team is enthusiastic about the future, “This achievement is expected to provide clues to the realization of artificial intelligence with a ‘human-like mind’ and even consciousness.”
    The research results were published in the online edition of the international scientific journal Scientific Reports. The study was partly supported by a JSPS/MEXT Grant-in-Aid for Scientific Research KAKENHI (JP17H06383 in #4903).
    Story Source: Materials provided by Nagoya University.