More stories

  • A new type of photonic time crystal gives light a boost

    Researchers have developed a way to create photonic time crystals and shown that these bizarre, artificial materials amplify the light that shines on them. These findings, described in a paper in Science Advances, could lead to more efficient and robust wireless communications and significantly improved lasers.
    Time crystals were first conceived by Nobel laureate Frank Wilczek in 2012. Mundane, familiar crystals have a structural pattern that repeats in space, but in a time crystal, the pattern repeats in time instead. While some physicists were initially sceptical that time crystals could exist, recent experiments have succeeded in creating them. Last year, researchers at Aalto University’s Low Temperature Laboratory created paired time crystals that could be useful for quantum devices.
    Now, another team has made photonic time crystals, which are time-based versions of optical materials. The researchers created photonic time crystals that operate at microwave frequencies, and they showed that the crystals can amplify electromagnetic waves. This ability has potential applications in various technologies, including wireless communication, integrated circuits, and lasers.
    So far, research on photonic time crystals has focused on bulk materials — that is, three-dimensional structures. This has proven enormously challenging, and the experiments haven’t gotten past model systems with no practical applications. So the team, which included researchers from Aalto University, the Karlsruhe Institute of Technology (KIT), and Stanford University, tried a new approach: building a two-dimensional photonic time crystal, known as a metasurface.
    ‘We found that reducing the dimensionality from a 3D to a 2D structure made the implementation significantly easier, which made it possible to realise photonic time crystals in reality,’ says Xuchen Wang, the study’s lead author, who was a doctoral student at Aalto and is currently at KIT.
    The new approach enabled the team to fabricate a photonic time crystal and experimentally verify the theoretical predictions about its behaviour. ‘We demonstrated for the first time that photonic time crystals can amplify incident light with high gain,’ says Wang.
    ‘In a photonic time crystal, the photons are arranged in a pattern that repeats over time. This means that the photons in the crystal are synchronized and coherent, which can lead to constructive interference and amplification of the light,’ explains Wang. The periodic arrangement of the photons means they can also interact in ways that boost the amplification.
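    As a point of reference, the standard textbook picture of a photonic time crystal (illustrative only, not the specific model in the Science Advances paper) is a permittivity that is modulated periodically in time; wave vectors falling inside the resulting momentum bandgap correspond to fields that grow exponentially rather than merely oscillating:
    ```latex
    % Generic photonic-time-crystal picture (illustrative, not the paper's model):
    % the permittivity is periodic in time,
    \varepsilon(t) = \varepsilon(t + T), \qquad \text{e.g. } \varepsilon(t) = \varepsilon_0\,[\,1 + \alpha \cos(\Omega t)\,],
    % and for wave vectors k inside the momentum bandgap opened by the modulation,
    % the Floquet solutions grow or decay exponentially in time,
    E_k(t) \;\sim\; e^{\pm\gamma t}\, e^{ikx}, \qquad \gamma > 0,
    % so the growing branch amplifies an incident wave, with the energy supplied
    % by whatever drives the modulation.
    ```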
    Two-dimensional photonic time crystals have a range of potential applications. By amplifying electromagnetic waves, they could make wireless transmitters and receivers more powerful or more efficient. Wang points out that coating surfaces with 2D photonic time crystals could also help with signal decay, which is a significant problem in wireless transmission. Photonic time crystals could also simplify laser designs by removing the need for bulk mirrors that are typically used in laser cavities.
    Another application emerges from the finding that 2D photonic time crystals don’t just amplify electromagnetic waves that hit them in free space but also waves travelling along the surface. Surface waves are used for communication between electronic components in integrated circuits. ‘When a surface wave propagates, it suffers from material losses, and the signal strength is reduced. With 2D photonic time crystals integrated into the system, the surface wave can be amplified, and communication efficiency enhanced,’ says Wang.

  • Social consequences of using AI in conversations

    Cornell University researchers have found people have more efficient conversations, use more positive language and perceive each other more positively when using an artificial intelligence-enabled chat tool.
    The study, published in Scientific Reports, examined how the use of AI in conversations impacts the way that people express themselves and view each other.
    “Technology companies tend to emphasize the utility of AI tools to accomplish tasks faster and better, but they ignore the social dimension,” said Malte Jung, associate professor of information science. “We do not live and work in isolation, and the systems we use impact our interactions with others.”
    However, in addition to greater efficiency and positivity, the group found that when participants think their partner is using more AI-suggested responses, they perceive that partner as less cooperative, and feel less affiliation toward them.
    “I was surprised to find that people tend to evaluate you more negatively simply because they suspect that you’re using AI to help you compose text, regardless of whether you actually are,” said Jess Hohenstein, lead author and postdoctoral researcher. “This illustrates the persistent overall suspicion that people seem to have around AI.”
    For their first experiment, researchers developed a smart-reply platform the group called “Moshi” (Japanese for “hello”), patterned after the now-defunct Google “Allo” (French for “hello”), the first smart-reply platform, unveiled in 2016. Smart replies are generated by large language models (LLMs) that predict plausible next responses in chat-based interactions.
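    As a rough illustration of what a smart-reply back end does (a generic sketch with an off-the-shelf model, not the researchers’ “Moshi” platform; the model choice, prompt format and length limit are assumptions), candidate replies can be generated like this:
    ```python
    # Generic smart-reply sketch using an off-the-shelf language model.
    # This is NOT the "Moshi" platform from the study; the model name ("gpt2"),
    # the prompt format and the length limit are illustrative assumptions.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    def suggest_replies(chat_history: str, n_suggestions: int = 3) -> list[str]:
        """Return a few short candidate replies that continue the chat history."""
        prompt = chat_history + "\nReply:"
        outputs = generator(
            prompt,
            max_new_tokens=20,          # keep suggestions short, like smart replies
            num_return_sequences=n_suggestions,
            do_sample=True,             # sample so the suggestions differ
            pad_token_id=generator.tokenizer.eos_token_id,
        )
        # The pipeline returns the prompt plus the continuation; keep only the latter.
        return [o["generated_text"][len(prompt):].strip() for o in outputs]

    print(suggest_replies("A: What do you think about the new policy?"))
    ```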
    Participants were asked to talk about a policy issue and were assigned to one of three conditions: both participants could use smart replies, only one participant could, or neither could.
    Researchers found that using smart replies increased communication efficiency, positive emotional language and positive evaluations by communication partners. On average, smart replies accounted for 14.3% of sent messages (1 in 7).
    But participants whom their partners suspected of responding with smart replies were evaluated more negatively than those thought to have typed their own responses, consistent with common assumptions about the negative implications of AI.
    “While AI might be able to help you write,” Hohenstein said, “it’s altering your language in ways you might not expect, especially by making you sound more positive. This suggests that by using text-generating AI, you’re sacrificing some of your own personal voice.”
    Said Jung: “What we observe in this study is the impact that AI has on social dynamics and some of the unintended consequences that could result from integrating AI in social contexts. This suggests that whoever is in control of the algorithm may have influence on people’s interactions, language and perceptions of each other.”
    This work was supported by the National Science Foundation.

  • Is artificial intelligence better at assessing heart health?

    Who can assess and diagnose cardiac function best after reading an echocardiogram: artificial intelligence (AI) or a sonographer?
    According to Cedars-Sinai investigators and their research published today in the peer-reviewed journal Nature, AI proved superior in assessing and diagnosing cardiac function when compared with echocardiogram assessments made by sonographers.
    The findings are based on a first-of-its-kind, blinded, randomized clinical trial of AI in cardiology led by investigators in the Smidt Heart Institute and the Division of Artificial Intelligence in Medicine at Cedars-Sinai.
    “The results have immediate implications for patients undergoing cardiac function imaging as well as broader implications for the field of cardiac imaging,” said cardiologist David Ouyang, MD, principal investigator of the clinical trial and senior author of the study. “This trial offers rigorous evidence that utilizing AI in this novel way can improve the quality and effectiveness of echocardiogram imaging for many patients.”
    Investigators are confident that this technology will be found beneficial when deployed across the clinical system at Cedars-Sinai and health systems nationwide.
    “This successful clinical trial sets a superb precedent for how novel clinical AI algorithms can be discovered and tested within health systems, increasing the likelihood of seamless deployment for improved patient care,” said Sumeet Chugh, MD, director of the Division of Artificial Intelligence in Medicine and the Pauline and Harold Price Chair in Cardiac Electrophysiology Research.

    In 2020, researchers at the Smidt Heart Institute and Stanford University developed one of the first AI technologies to assess cardiac function, specifically, left ventricular ejection fraction — the key heart measurement used in diagnosing cardiac function. Their research also was published in Nature.
    Building on those findings, the new study assessed whether AI was more accurate by comparing initial assessments of 3,495 transthoracic echocardiogram studies made by AI with those made by a sonographer — also known as an ultrasound technician.
    Among the findings: Cardiologists more frequently agreed with the AI initial assessment, making corrections to only 16.8% of the initial assessments made by AI, compared with 27.2% of those made by the sonographers. The physicians were unable to tell which assessments were made by AI and which were made by sonographers, and the AI assistance saved both cardiologists and sonographers time.
    “We asked our cardiologists to guess if the preliminary interpretation was performed by AI or by a sonographer, and it turns out that they couldn’t tell the difference,” said Ouyang. “This speaks to the strong performance of the AI algorithm as well as the seamless integration into clinical software. We believe these are all good signs for future AI trial research in the field.”
    The hope, Ouyang says, is to save clinicians time and minimize the more tedious parts of the cardiac imaging workflow. The cardiologist, however, remains the final expert adjudicator of the AI model output.
    The clinical trial and subsequent published research also shed light on the opportunity for regulatory approvals.
    “This work raises the bar for artificial intelligence technologies being considered for regulatory approval, as the Food and Drug Administration has previously approved artificial intelligence tools without data from prospective clinical trials,” said Susan Cheng, MD, MPH, director of the Institute for Research on Healthy Aging in the Department of Cardiology at the Smidt Heart Institute and co-senior author of the study. “We believe this level of evidence offers clinicians extra assurance as health systems work to adopt artificial intelligence more broadly as part of efforts to increase efficiency and quality overall.”

  • Robots predict human intention for faster builds

    Humans have a way of understanding others’ goals, desires and beliefs, a crucial skill that allows us to anticipate people’s actions. Taking bread out of the toaster? You’ll need a plate. Sweeping up leaves? I’ll grab the green trash can.
    This skill, often referred to as “theory of mind,” comes easily to us as humans, but is still challenging for robots. But, if robots are to become truly collaborative helpers in manufacturing and in everyday life, they need to learn the same abilities.
    In a new paper, a best paper award finalist at the ACM/IEEE International Conference on Human-Robot Interaction (HRI), USC Viterbi computer science researchers aim to teach robots how to predict human preferences in assembly tasks, so they can one day help out on everything from building a satellite to setting a table.
    “When working with people, a robot needs to constantly guess what the person will do next,” said lead author Heramb Nemlekar, a USC computer science PhD student working under the supervision of Stefanos Nikolaidis, an assistant professor of computer science. “For example, if the robot thinks the person will need a screwdriver to assemble the next part, it can get the screwdriver ahead of time so that the person does not have to wait. This way the robot can help people finish the assembly much faster.”
    But, as anyone who has co-built furniture with a partner can attest, predicting what a person will do next is difficult: different people prefer to build the same product in different ways. While some people want to start with the most difficult parts to get them over with, others may want to start with the easiest parts to save energy.
    Making predictions
    Most of the current techniques require people to show the robot how they would like to perform the assembly, but this takes time and effort and can defeat the purpose, said Nemlekar. “Imagine having to assemble an entire airplane just to teach the robot your preferences,” he said.

    In this new study, however, the researchers found similarities in how an individual will assemble different products. For instance, if you start with the hardest part when building an Ikea sofa, you are likely to take the same approach when putting together a baby’s crib.
    So, instead of "showing" the robot their preferences in a complex task, they created a small assembly task (called a "canonical" task) that people can easily and quickly perform: in this case, putting together parts of a simple model airplane, such as the wings, tail and propeller.
    The robot “watched” the human complete the task using a camera placed directly above the assembly area, looking down. To detect the parts operated by the human, the system used AprilTags, similar to QR codes, attached to the parts.
    Then, the system used machine learning to learn a person’s preference based on their sequence of actions in the canonical task.
    “Based on how a person performs the small assembly, the robot predicts what that person will do in the larger assembly,” said Nemlekar. “For example, if the robot sees that a person likes to start the small assembly with the easiest part, it will predict that they will start with the easiest part in the large assembly as well.”
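    As a toy illustration of this transfer idea (not the actual learning algorithm from the paper; the difficulty scores and the simple rank-correlation rule below are made-up assumptions), a preference inferred from the canonical task can be reused to order the parts of a larger assembly:
    ```python
    # Toy sketch: infer an "easy-first vs hard-first" preference from a short
    # canonical assembly, then reuse it to predict the order of a larger build.
    # The difficulty scores and the rank-correlation rule are illustrative
    # assumptions, not the method used in the HRI paper.
    import numpy as np
    from scipy.stats import spearmanr

    # Difficulty of the canonical-task parts, listed in the order this person
    # chose to assemble them (hypothetical values).
    canonical_difficulty_in_chosen_order = np.array([0.9, 0.7, 0.4, 0.2])

    # A negative correlation between step index and difficulty means difficulty
    # drops as the build progresses, i.e. the person starts with the hardest parts.
    steps = np.arange(len(canonical_difficulty_in_chosen_order))
    rho, _ = spearmanr(steps, canonical_difficulty_in_chosen_order)
    prefers_hard_first = bool(rho < 0)

    # Parts of the larger assembly with (hypothetical) difficulty scores.
    large_task = {"fuselage": 0.95, "wing": 0.8, "landing gear": 0.5, "decal": 0.1}

    # Predict the assembly order on the large task from the learned preference.
    predicted_order = sorted(large_task, key=large_task.get, reverse=prefers_hard_first)
    print("Predicted order:", predicted_order)
    print("Robot should fetch tools for:", predicted_order[0])
    ```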
    Building trust

    In the researchers’ user study, their system was able to predict the actions that humans would take with around 82% accuracy.
    “We hope that our research can make it easier for people to show robots what they prefer,” said Nemlekar. “By helping each person in their preferred way, robots can reduce their work, save time and even build trust with them.”
    For instance, imagine you’re assembling a piece of furniture at home, but you’re not particularly handy and struggle with the task. A robot that has been trained to predict your preferences could provide you with the necessary tools and parts ahead of time, making the assembly process easier.
    This technology could also be useful in industrial settings where workers are tasked with assembling products on a mass scale, saving time and reducing the risk of injury or accidents. Additionally, it could help persons with disabilities or limited mobility to more easily assemble products and maintain independence.
    Quickly learning preferences
    The goal is not to replace humans on the factory floor, say the researchers. Instead, they hope this research will lead to significant improvements in the safety and productivity of assembly workers in human-robot hybrid factories, with robots taking on “the non-value-added or ergonomically challenging tasks that are currently being performed by workers.”
    As for the next steps, the researchers plan to develop a method to automatically design canonical tasks for different types of assembly tasks. They also aim to evaluate the benefit of learning human preferences from short tasks and predicting their actions in a complex task in different contexts, for instance, personal assistance in homes.
    “While we observed that human preferences transfer from canonical to actual tasks in assembly manufacturing, I expect similar findings in other applications as well,” said Nikolaidis. “A robot that can quickly learn our preferences can help us prepare a meal, rearrange furniture or do house repairs, having a significant impact in our daily lives.”

  • DMI allows magnon-magnon coupling in hybrid perovskites

    An international group of researchers has created a mixed magnon state in an organic hybrid perovskite material by utilizing the Dzyaloshinskii-Moriya interaction (DMI). The resulting material has potential for processing and storing quantum computing information. The work also expands the number of potential materials that can be used to create hybrid magnonic systems.
    In magnetic materials, quasi-particles called magnons (quantized spin waves) describe the collective precession of electron spins within the material. There are two types of magnons — optical and acoustic — which differ in whether the spins on the material’s magnetic sublattices precess in phase or out of phase.
    “Both optical and acoustic magnons propagate spin waves in antiferromagnets,” says Dali Sun, associate professor of physics and member of the Organic and Carbon Electronics Lab (ORaCEL) at North Carolina State University. “But in order to use spin waves to process quantum information, you need a mixed spin wave state.”
    “Normally two magnon modes cannot generate a mixed spin state due to their different symmetries,” Sun says. “But by harnessing the DMI we discovered a hybrid perovskite with a mixed magnon state.” Sun is also a corresponding author of the research.
    The researchers accomplished this by adding an organic cation to the material, which created a particular interaction called the DMI. In short, the DMI breaks the symmetry of the material, allowing the spins to mix.
    The team utilized a copper-based magnetic hybrid organic-inorganic perovskite, which has a unique octahedral structure. These octahedrons can tilt and deform in different ways. Adding an organic cation to the material breaks the symmetry, creating angles within the material that allow the different magnon modes to couple and the spins to mix.
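    For context, the DMI enters a spin Hamiltonian as an antisymmetric exchange term alongside the usual Heisenberg exchange (this is the generic textbook form, not the specific model parameters of the study):
    ```latex
    % Standard spin Hamiltonian with Heisenberg exchange plus DMI
    % (a generic textbook expression, not the fitted model of this paper).
    H = -\sum_{\langle i,j \rangle} J_{ij}\, \mathbf{S}_i \cdot \mathbf{S}_j
        + \sum_{\langle i,j \rangle} \mathbf{D}_{ij} \cdot \left( \mathbf{S}_i \times \mathbf{S}_j \right)
    % The DMI vector D_ij is nonzero only when inversion symmetry between sites
    % i and j is broken; because S_i x S_j is antisymmetric, this term cants the
    % spins and can couple magnon modes that symmetric exchange leaves independent.
    ```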
    “Beyond the quantum implications, this is the first time we’ve observed broken symmetry in a hybrid organic-inorganic perovskite,” says Andrew Comstock, NC State graduate research assistant and first author of the research.
    “We found that the DMI allows magnon coupling in copper-based hybrid perovskite materials with the correct symmetry requirements,” Comstock says. “Adding different cations creates different effects. This work really opens up ways to create magnon coupling from a lot of different materials — and studying the dynamic effects of this material can teach us new physics as well.”
    The work appears in Nature Communications and was primarily supported by the U.S. Department of Energy’s Center for Hybrid Organic Inorganic Semiconductors for Energy (CHOISE). Chung-Tao Chou of the Massachusetts Institute of Technology is co-first author of the work. Luqiao Liu of MIT, and Matthew Beard and Haipeng Lu of the National Renewable Energy Laboratory are co-corresponding authors of the research.

  • Students use machine learning in lesson designed to reveal issues, promise of A.I.

    In a new study, North Carolina State University researchers had 28 high school students create their own machine-learning artificial intelligence (AI) models for analyzing data. The goals of the project were to help students explore the challenges, limitations and promise of AI, and to ensure a future workforce is prepared to make use of AI tools.
    The study was conducted in conjunction with a high school journalism class in the Northeast. Since then, researchers have expanded the program to high school classrooms in multiple states, including North Carolina. NC State researchers are looking to partner with additional schools to collaborate in bringing the curriculum into classrooms.
    “We want students, from a very young age, to open up that black box so they aren’t afraid of AI,” said the study’s lead author Shiyan Jiang, assistant professor of learning design and technology at NC State. “We want students to know the potential and challenges of AI, so that they think about how they, the next generation, can respond to the evolving role of AI in society. We want to prepare students for the future workforce.”
    For the study, researchers developed a computer program called StoryQ that allows students to build their own machine-learning models. Then, researchers hosted a teacher workshop about the machine-learning curriculum and technology, meeting in one-and-a-half-hour sessions each week for a month. For teachers who signed up to participate further, researchers gave another recap of the curriculum and worked out logistics.
    “We created the StoryQ technology to allow students in high school or undergraduate classrooms to build what we call ‘text classification’ models,” Jiang said. “We wanted to lower the barriers so students can really know what’s going on in machine-learning, instead of struggling with the coding. So we created StoryQ, a tool that allows students to understand the nuances in building machine-learning and text classification models.”
    A teacher who decided to participate led a journalism class through a 15-day lesson where they used StoryQ to evaluate a series of Yelp reviews about ice cream stores. Students developed models to predict if reviews were “positive” or “negative” based on the language.

    “The teacher saw the relevance of the program to journalism,” Jiang said. “This was a very diverse class with many students who are under-represented in STEM and in computing. Overall, we found students enjoyed the lessons a lot, and had great discussions about the use and mechanism of machine-learning.”
    Researchers saw that students made hypotheses about specific words in the Yelp reviews which they thought would predict whether a review would be positive or negative. For example, they expected reviews containing the word “like” to be positive. Then, the teacher guided the students to analyze whether their models correctly classified reviews. For example, a student who used the word “like” to predict reviews found that more than half of the reviews containing the word were actually negative. Researchers said students then used trial and error to try to improve the accuracy of their models.
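    For readers who want a feel for what such a text-classification model looks like under the hood, here is a minimal sketch in scikit-learn rather than StoryQ; the example reviews and labels are invented for illustration:
    ```python
    # Minimal sketch of the kind of text-classification model the students built,
    # here with scikit-learn rather than StoryQ. The tiny example reviews and
    # labels are made up for illustration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    reviews = [
        "I really like the salted caramel, great place",
        "Would not go back, the service was awful",
        "Best ice cream in town, friendly staff",
        "Feels like a tourist trap, overpriced and bland",
    ]
    labels = ["positive", "negative", "positive", "negative"]

    # Bag-of-words features plus a simple linear classifier.
    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(reviews, labels)

    # Note that "like" appears in both a positive and a negative review, which is
    # exactly the kind of surprise the students ran into with their hypotheses.
    print(model.predict(["I like the flavors but the line was too long"]))
    ```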
    “Students learned how these models make decisions, and the role that humans can play in creating these technologies, and the kind of perspectives that can be brought in when they create AI technology,” Jiang said.
    From their discussions, researchers found that students had mixed reactions to AI technologies. Students were deeply concerned, for example, about the potential to use AI to automate processes for selecting students or candidates for opportunities like scholarships or programs.
    For future classes, researchers created a shorter, five-hour program. They’ve launched the program in two high schools in North Carolina, as well as schools in Georgia, Maryland and Massachusetts. In the next phase of their research, they are looking to study how teachers across disciplines collaborate to launch an AI-focused program and create a community of AI learning.
    “We want to expand the implementation in North Carolina,” Jiang said. “If there are any schools interested, we are always ready to bring this program to a school. Since we know teachers are super busy, we’re offering a shorter professional development course, and we also provide a stipend for teachers. We will go into the classroom to teach if needed, or demonstrate how we would teach the curriculum so teachers can replicate, adapt, and revise it. We will support teachers in all the ways we can.”
    The study, “High school students’ data modeling practices and processes: From modeling unstructured data to evaluating automated decisions,” was published online March 13 in the journal Learning, Media and Technology. Co-authors included Hengtao Tang, Cansu Tatar, Carolyn P. Rosé and Jie Chao. The work was supported by the National Science Foundation under grant number 1949110.

  • New classification of chess openings

    Using real data from an online chess platform, scientists at the Complexity Science Hub and the Centro Ricerche Enrico Fermi (CREF) studied how similar different chess openings are to one another. Based on these similarities, they developed a new classification method that can complement the standard classification.
    “To find out how similar chess openings actually are to each other — meaning in real game behavior — we drew on the wisdom of the crowd,” Giordano De Marzo of the Complexity Science Hub and the Centro Ricerche Enrico Fermi (CREF) explains. The researchers analyzed 3,746,135 chess games, 18,253 players and 988 different openings from the chess platform Lichess and observed who plays which opening games. If largely the same players choose two specific opening games over and over again, it stands to reason that those openings are similar. Opening games that are so popular that they occur together with most others were excluded. “We also only included players in our analyses that had a rating above 2,000 on the platform Lichess. Total novices could randomly play any opening games, which would skew our analyses,” explains Vito D.P. Servedio of the Complexity Science Hub.
    Ten Clusters Clearly Delineated
    In this way, the researchers found that certain opening games group together. Ten different clusters clearly stood out according to actual similarities in playing behavior. “And these clusters don’t necessarily coincide with the common classification of chess openings,” says De Marzo. For example, certain opening games from different classes were played repeatedly by the same players; although these strategies sit in different classes, they must share some similarity, so they all land in the same cluster. Each cluster thus represents a certain style of play — for example, rather defensive or very offensive. Moreover, the classification method the researchers developed here can be applied not only to chess but also to similar games such as Go or Stratego.
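    A rough sketch of how such co-play data can be turned into clusters is shown below (a deliberate simplification: the toy counts, cosine similarity and agglomerative clustering are illustrative choices, not the exact network method used in the study):
    ```python
    # Rough sketch: cluster openings by which players co-play them.
    # The toy counts, cosine similarity and agglomerative clustering are
    # illustrative simplifications, not the exact pipeline used in the study.
    import numpy as np
    from sklearn.metrics.pairwise import cosine_similarity
    from sklearn.cluster import AgglomerativeClustering

    openings = ["Sicilian", "French", "King's Gambit", "Italian"]

    # counts[i, j] = how often opening i was played by player j (hypothetical data)
    counts = np.array([
        [30,  2, 25,  0,  1],
        [28,  1, 20,  0,  2],
        [ 0, 15,  1, 22, 18],
        [ 1, 14,  0, 25, 16],
    ])

    # Two openings are similar if largely the same players keep choosing both.
    similarity = cosine_similarity(counts)

    # Group openings using the similarity structure (1 - similarity as a distance).
    clustering = AgglomerativeClustering(
        n_clusters=2, metric="precomputed", linkage="average"
    ).fit(1 - similarity)

    for name, label in zip(openings, clustering.labels_):
        print(f"{name}: cluster {label}")
    ```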
    Complement the Standard Classification
    The opening phase in chess usually lasts fewer than 20 moves. Depending on which pieces are moved first, one speaks of an open, half-open, closed or irregular opening. The standard classification, the so-called ECO Code (Encyclopaedia of Chess Openings), divides openings into five main groups: A, B, C, D and E. “Since this has evolved historically, it contains very useful information. Our clustering represents a new order that is close to the existing one and can add to it by showing players how similar openings actually are to each other,” Servedio explains. After all, something that has grown historically cannot be reordered from scratch. “You can’t say A20 now becomes B3. That would be like trying to exchange words in a language,” adds De Marzo.
    Rate Players and Opening Games
    In addition, their method allowed the researchers to determine how good a player is and how difficult a particular opening game is. The basic assumption: if a particular opening game is played by many people, it is likely to be rather easy. So, they examined which opening games were played the most and who played them. This gave the researchers a measure of how difficult an opening game is (complexity) and a measure of how good a player is (fitness). Matching these with the players’ ratings on the chess platform itself showed a significant correlation. “On the one hand, this underlines the significance of our two newly introduced measures, but also the accuracy of our analysis,” explains Servedio. To ensure the relevance and validity of these results from a chess theory perspective, the researchers sought the expertise of a chess grandmaster who wishes to remain anonymous.
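    The description above matches the standard fitness-complexity scheme from economic-complexity research; for reference (the paper’s exact definitions and normalisation may differ), the iterative equations read:
    ```latex
    % Fitness-complexity iteration over the player-opening matrix M
    % (M_{po} = 1 if player p plays opening o). This is the standard scheme from
    % economic complexity, adapted to chess; the study's definitions may differ.
    \tilde{F}_p^{(n)} = \sum_{o} M_{po}\, Q_o^{(n-1)}, \qquad
    \tilde{Q}_o^{(n)} = \frac{1}{\displaystyle\sum_{p} M_{po}\, \frac{1}{F_p^{(n-1)}}},
    % with both quantities rescaled by their mean after every step:
    F_p^{(n)} = \frac{\tilde{F}_p^{(n)}}{\big\langle \tilde{F}^{(n)} \big\rangle}, \qquad
    Q_o^{(n)} = \frac{\tilde{Q}_o^{(n)}}{\big\langle \tilde{Q}^{(n)} \big\rangle}.
    % An opening played mostly by low-fitness players ends up with low complexity
    % (easy), while a player's fitness is boosted by handling difficult openings.
    ```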

  • Absolute zero in the quantum computer

    The absolute lowest temperature possible is -273.15 degrees Celsius. It is never possible to cool any object exactly to this temperature — one can only approach absolute zero. This is the third law of thermodynamics.
    A research team at TU Wien (Vienna) has now investigated the question: How can this law be reconciled with the rules of quantum physics? They succeeded in developing a “quantum version” of the third law of thermodynamics: Theoretically, absolute zero is attainable. But for any conceivable recipe for it, you need three ingredients: energy, time and complexity. And only if you have an infinite amount of one of these ingredients can you reach absolute zero.
    Information and thermodynamics: an apparent contradiction
    When quantum particles reach absolute zero, their state is precisely known: They are guaranteed to be in the state with the lowest energy. The particles then no longer contain any information about what state they were in before. Everything that may have happened to the particle before is perfectly erased. From a quantum physics point of view, cooling and deleting information are thus closely related.
    At this point, two important physical theories meet: Information theory and thermodynamics. But the two seem to contradict each other: “From information theory, we know the so-called Landauer principle. It says that a very specific minimum amount of energy is required to delete one bit of information,” explains Prof. Marcus Huber from the Atomic Institute of TU Wien. Thermodynamics, however, says that you need an infinite amount of energy to cool anything down exactly to absolute zero. But if deleting information and cooling to absolute zero are the same thing — how does that fit together?
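    For reference, the Landauer bound mentioned by Huber is the textbook result (quoted here for context, not taken from the new paper) for the minimum energy dissipated when one bit of information is erased in an environment at temperature T:
    ```latex
    % Landauer's principle: minimum energy cost of erasing one bit at temperature T
    % (k_B is Boltzmann's constant).
    E_{\min} = k_B\, T \ln 2 .
    % This cost is finite at any T > 0, yet perfectly erasing a qubit means cooling
    % it to its ground state, which the third law ties to a diverging resource
    % (infinite energy, infinite time or, as the team shows, infinite complexity).
    ```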
    Energy, time and complexity
    The roots of the problem lie in the fact that thermodynamics was formulated in the 19th century for classical objects — for steam engines, refrigerators or glowing pieces of coal. At that time, people had no idea about quantum theory. If we want to understand the thermodynamics of individual particles, we first have to analyse how thermodynamics and quantum physics interact — and that is exactly what Marcus Huber and his team did.
    “We quickly realised that you don’t necessarily have to use infinite energy to reach absolute zero,” says Marcus Huber. “It is also possible with finite energy — but then you need an infinitely long time to do it.” Up to this point, the considerations are still compatible with classical thermodynamics as we know it from textbooks. But then the team came across an additional detail of crucial importance:
    “We found that quantum systems can be defined that allow the absolute ground state to be reached even at finite energy and in finite time — none of us had expected that,” says Marcus Huber. “But these special quantum systems have another important property: they are infinitely complex.” So you would need infinitely precise control over infinitely many details of the quantum system — then you could cool a quantum object to absolute zero in finite time with finite energy. In practice, of course, this is just as unattainable as infinitely high energy or infinitely long time.
    Erasing data in the quantum computer
    “So if you want to perfectly erase quantum information in a quantum computer, and in the process transfer a qubit to a perfectly pure ground state, then theoretically you would need an infinitely complex quantum computer that can perfectly control an infinite number of particles,” says Marcus Huber. In practice, however, perfection is not necessary — no machine is ever perfect. It is enough for a quantum computer to do its job fairly well. So the new results are not an obstacle in principle to the development of quantum computers.
    In practical applications of quantum technologies, temperature plays a key role today — the higher the temperature, the easier it is for quantum states to break and become unusable for any technical use. “This is precisely why it is so important to better understand the connection between quantum theory and thermodynamics,” says Marcus Huber. “There is a lot of interesting progress in this area at the moment. It is slowly becoming possible to see how these two important parts of physics intertwine.”