More stories

  •

    Is artificial intelligence better at assessing heart health?

    Who can assess and diagnose cardiac function best after reading an echocardiogram: artificial intelligence (AI) or a sonographer?
    According to Cedars-Sinai investigators and their research published today in the peer-reviewed journal Nature, AI proved superior in assessing and diagnosing cardiac function when compared with echocardiogram assessments made by sonographers.
    The findings are based on a first-of-its-kind, blinded, randomized clinical trial of AI in cardiology led by investigators in the Smidt Heart Institute and the Division of Artificial Intelligence in Medicine at Cedars-Sinai.
    “The results have immediate implications for patients undergoing cardiac function imaging as well as broader implications for the field of cardiac imaging,” said cardiologist David Ouyang, MD, principal investigator of the clinical trial and senior author of the study. “This trial offers rigorous evidence that utilizing AI in this novel way can improve the quality and effectiveness of echocardiogram imaging for many patients.”
    Investigators are confident that this technology will be found beneficial when deployed across the clinical system at Cedars-Sinai and health systems nationwide.
    “This successful clinical trial sets a superb precedent for how novel clinical AI algorithms can be discovered and tested within health systems, increasing the likelihood of seamless deployment for improved patient care,” said Sumeet Chugh, MD, director of the Division of Artificial Intelligence in Medicine and the Pauline and Harold Price Chair in Cardiac Electrophysiology Research.

    In 2020, researchers at the Smidt Heart Institute and Stanford University developed one of the first AI technologies to assess cardiac function, specifically, left ventricular ejection fraction — the key heart measurement used in diagnosing cardiac function. Their research also was published in Nature.
    Building on those findings, the new study evaluated 3,495 transthoracic echocardiogram studies, comparing initial assessments made by AI with those made by a sonographer, also known as an ultrasound technician, to see which was more accurate.
    Among the findings: Cardiologists more frequently agreed with the AI’s initial assessment, correcting only 16.8% of the initial assessments made by AI versus 27.2% of those made by sonographers. The physicians were unable to tell which assessments were made by AI and which were made by sonographers, and the AI assistance saved both cardiologists and sonographers time.
    “We asked our cardiologists to guess if the preliminary interpretation was performed by AI or by a sonographer, and it turns out that they couldn’t tell the difference,” said Ouyang. “This speaks to the strong performance of the AI algorithm as well as the seamless integration into clinical software. We believe these are all good signs for future AI trial research in the field.”
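    As a rough illustration of how two such correction rates might be compared (this is not the trial’s actual statistical analysis, and the arm sizes below are assumed), a simple two-proportion test looks like this:

    ```python
    # Illustrative only: comparing the reported correction rates with a
    # two-proportion z-test. Arm sizes are assumed (an even split of the
    # 3,495 studies); the trial's actual analysis may differ.
    from math import sqrt
    from statistics import NormalDist

    n_ai, n_sono = 1748, 1747        # assumed arm sizes
    p_ai, p_sono = 0.168, 0.272      # correction rates reported above

    pooled = (p_ai * n_ai + p_sono * n_sono) / (n_ai + n_sono)
    se = sqrt(pooled * (1 - pooled) * (1 / n_ai + 1 / n_sono))
    z = (p_ai - p_sono) / se
    p_value = 2 * NormalDist().cdf(-abs(z))   # two-sided p-value

    print(f"z = {z:.1f}, p = {p_value:.1e}")  # a large |z| means the rates differ markedly
    ```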
    The hope, Ouyang says, is to save clinicians time and minimize the more tedious parts of the cardiac imaging workflow. The cardiologist, however, remains the final expert adjudicator of the AI model output.
    The clinical trial and subsequent published research also shed light on the opportunity for regulatory approvals.
    “This work raises the bar for artificial intelligence technologies being considered for regulatory approval, as the Food and Drug Administration has previously approved artificial intelligence tools without data from prospective clinical trials,” said Susan Cheng, MD, MPH, director of the Institute for Research on Healthy Aging in the Department of Cardiology at the Smidt Heart Institute and co-senior author of the study. “We believe this level of evidence offers clinicians extra assurance as health systems work to adopt artificial intelligence more broadly as part of efforts to increase efficiency and quality overall.”

  •

    Robots predict human intention for faster builds

    Humans have a way of understanding others’ goals, desires and beliefs, a crucial skill that allows us to anticipate people’s actions. Taking bread out of the toaster? You’ll need a plate. Sweeping up leaves? I’ll grab the green trash can.
    This skill, often referred to as “theory of mind,” comes easily to us as humans, but is still challenging for robots. But, if robots are to become truly collaborative helpers in manufacturing and in everyday life, they need to learn the same abilities.
    In a new paper, a best paper award finalist at the ACM/IEEE International Conference on Human-Robot Interaction (HRI), USC Viterbi computer science researchers aim to teach robots how to predict human preferences in assembly tasks, so they can one day help out on everything from building a satellite to setting a table.
    “When working with people, a robot needs to constantly guess what the person will do next,” said lead author Heramb Nemlekar, a USC computer science PhD student working under the supervision of Stefanos Nikolaidis, an assistant professor of computer science. “For example, if the robot thinks the person will need a screwdriver to assemble the next part, it can get the screwdriver ahead of time so that the person does not have to wait. This way the robot can help people finish the assembly much faster.”
    But, as anyone who has co-built furniture with a partner can attest, predicting what a person will do next is difficult: different people prefer to build the same product in different ways. While some people want to start with the most difficult parts to get them over with, others may want to start with the easiest parts to save energy.
    Making predictions
    Most of the current techniques require people to show the robot how they would like to perform the assembly, but this takes time and effort and can defeat the purpose, said Nemlekar. “Imagine having to assemble an entire airplane just to teach the robot your preferences,” he said.

    In this new study, however, the researchers found similarities in how an individual will assemble different products. For instance, if you start with the hardest part when building an Ikea sofa, you are likely to take the same approach when putting together a baby’s crib.
    So, instead of “showing” the robot their preferences in a complex task, they created a small assembly task (called a “canonical” task) that people can easily and quickly perform. In this case, putting together parts of a simple model airplane, such as the wings, tail and propeller.
    The robot “watched” the human complete the task using a camera placed directly above the assembly area, looking down. To detect the parts operated by the human, the system used AprilTags, similar to QR codes, attached to the parts.
    Then, the system used machine learning to learn a person’s preference based on their sequence of actions in the canonical task.
    “Based on how a person performs the small assembly, the robot predicts what that person will do in the larger assembly,” said Nemlekar. “For example, if the robot sees that a person likes to start the small assembly with the easiest part, it will predict that they will start with the easiest part in the large assembly as well.”
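    A minimal sketch of that transfer idea, with made-up parts, features and a scoring rule that are assumptions rather than the USC team’s actual model, might look like this:

    ```python
    # Sketch: infer an "easiest-first" vs "hardest-first" tendency from a short
    # canonical assembly, then reuse it to predict the order in a larger one.
    # Parts, feature values and the scoring rule are illustrative assumptions.

    # Each part gets simple features in [0, 1], e.g. (difficulty, size).
    canonical_parts = {"wing": (0.8, 0.6), "tail": (0.4, 0.3), "propeller": (0.2, 0.2)}
    observed_order = ["propeller", "tail", "wing"]  # sequence seen by the overhead camera

    def infer_weights(parts, order):
        """+1 per feature if the person went low-to-high, -1 if high-to-low, else 0."""
        n_feats = len(next(iter(parts.values())))
        weights = []
        for f in range(n_feats):
            ranked = sorted(order, key=lambda p: parts[p][f])
            weights.append(1.0 if ranked == order else -1.0 if ranked == order[::-1] else 0.0)
        return weights

    weights = infer_weights(canonical_parts, observed_order)

    # Apply the learned tendency to a larger assembly.
    large_parts = {"frame": (0.9, 0.9), "shelf": (0.5, 0.4), "knob": (0.1, 0.1), "door": (0.7, 0.6)}
    predicted_order = sorted(large_parts,
                             key=lambda p: sum(w * v for w, v in zip(weights, large_parts[p])))
    print(predicted_order)  # easiest-first preference -> ['knob', 'shelf', 'door', 'frame']
    ```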
    Building trust

    In the researchers’ user study, their system was able to predict the actions that humans would take with around 82% accuracy.
    “We hope that our research can make it easier for people to show robots what they prefer,” said Nemlekar. “By helping each person in their preferred way, robots can reduce their work, save time and even build trust with them.”
    For instance, imagine you’re assembling a piece of furniture at home, but you’re not particularly handy and struggle with the task. A robot that has been trained to predict your preferences could provide you with the necessary tools and parts ahead of time, making the assembly process easier.
    This technology could also be useful in industrial settings where workers are tasked with assembling products on a mass scale, saving time and reducing the risk of injury or accidents. Additionally, it could help persons with disabilities or limited mobility to more easily assemble products and maintain independence.
    Quickly learning preferences
    The goal is not to replace humans on the factory floor, say the researchers. Instead, they hope this research will lead to significant improvements in the safety and productivity of assembly workers in human-robot hybrid factories. “Robots can perform the non-value-added or ergonomically challenging tasks that are currently being performed by workers.”
    As for the next steps, the researchers plan to develop a method to automatically design canonical tasks for different types of assembly task. They also aim to evaluate the benefit of learning human preferences from short tasks and predicting their actions in a complex task in different contexts, for instance, personal assistance in homes.
    “While we observed that human preferences transfer from canonical to actual tasks in assembly manufacturing, I expect similar findings in other applications as well,” said Nikolaidis. “A robot that can quickly learn our preferences can help us prepare a meal, rearrange furniture or do house repairs, having a significant impact in our daily lives.”

  •

    Here’s why the geometric patterns in salt flats worldwide look so similar

    From Death Valley to Chile to Iran, similarly sized polygons of salt form in playas all over the world — and subterranean fluid flows might be the key to solving the long-standing puzzle of why.

    Geometric shapes such as pentagons and hexagons spontaneously form in a wide range of geologic settings. Dried mud, ice and rock often crack into polygons, but these patterns tend to vary dramatically in size.

    So why are all playas so persistently similar? The answer lies underground, physicist Jana Lasser and colleagues propose February 24 in Physical Review X. With sophisticated mathematical models, computer simulations and experiments performed at Owens Lake in California, the team connected what they saw on the surface with what is going on beneath.

    “Fluid flows and convection underground are uniquely able to explain why the patterns form,” says Lasser, of the Graz University of Technology in Austria.

    This 3-D approach was key to explaining the universality of salty polygons.

    Salt flats form in places where rainfall is scarce and there’s a lot of evaporation (SN: 12/5/07). Groundwater seeping up to the surface evaporates, leaving a crust of salts and other minerals that had been dissolved in the water. Most striking, this process results in low ridges of concentrated salt that divide the playa into polygons: mostly hexagons with a smattering of pentagons and other geometric shapes.

    The type of salt varies from one playa to another. Table salt, or sodium chloride, dominates in some playas, but others have more sulfate salts. And the salt crusts themselves range in thickness from a few millimeters to several meters. That variation seems to be why previous attempts to describe the playas’ patterns failed.

    Whether the crusts are meter- or millimeter-thick, salt pans feature polygons that are 1 to 2 meters across. Previous models based on cracking, expansion and other phenomena that describe how mud and rock fracture instead produce polygons with sizes that vary according to crust thickness.

    As groundwater evaporates from the surface, it concentrates salt in the remaining groundwater. That salty water, now denser and heavier, sinks, forcing other less dense water upward. Lasser and colleagues showed that over time, the circulation, known as convection, tends to push the descending plumes of saltier water into a network of vertical sheets. The surface above these sheets accrues more salt, so thick salt ridges grow there. Thinner crusts of salt form between, where less salty water upwells, spontaneously making the characteristic polygons shared by playas around the world.

    Computer simulations of the fluid dynamics beneath the surface of salt flats demonstrate how the sinking of high-salinity groundwater (purple plumes) forms distinctive polygons on the surface (red marks areas with the highest downward flow). Image: J. Lasser et al/Physical Review X

    The equations the researchers used describe the relative salinity of the groundwater, the pressure within the fluid and the speed at which the water circulates. Computer simulations that embraced the full complexity of the 3-D problem started with no salt crust or polygons and produced something that looks very much like real playas.
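    The paper’s exact notation is not reproduced here, but a standard porous-medium convection model of this kind couples Darcy flow, driven by a salinity-dependent density, to advection and diffusion of the dissolved salt; a sketch of that assumed general form:

    ```latex
    % Assumed general form of a porous-medium convection model (not the paper's
    % exact notation): Darcy flow plus salt transport in the groundwater.
    \begin{align}
      \mathbf{u} &= -\frac{\kappa}{\mu}\left(\nabla p + \rho(S)\, g\, \hat{\mathbf{z}}\right), \\
      \frac{\partial S}{\partial t} + \mathbf{u}\cdot\nabla S &= D\,\nabla^{2} S, \\
      \nabla\cdot\mathbf{u} &= 0.
    \end{align}
    ```

    Here the density ρ(S) grows with the salinity S, so the saltiest water sinks; that sinking is what drives the convection cells described above.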

    “This fluid dynamical model makes much more sense than a model that ignores what’s happening beneath the surface,” says physicist Julyan Cartwright of the Spanish National Research Council, who is based in Granada and was not involved in the research.

    Tests at Owens Lake helped the team verify and refine the model. “Physics is so much more than just sitting in front of a computer,” Lasser says, “and I wanted to do something that involves experiments.”

    The lake dried up in the 1920s as water was diverted to Los Angeles. The deposited minerals on the remaining salt flat include large natural concentrations of arsenic, which blows away with the dust kicked up by wind — creating serious health hazards. Among other remediation efforts, brine has been pumped onto the lake bed to try to create a more stable salt crust (SN: 11/28/01). That human intervention gave the researchers the opportunity to test their ideas in a controlled way.

    “The whole area is destroyed,” Lasser says, “but for us it was the perfect research environment.”

  •

    DMI allows magnon-magnon coupling in hybrid perovskites

    An international group of researchers has created a mixed magnon state in an organic hybrid perovskite material by utilizing the Dzyaloshinskii-Moriya interaction (DMI). The resulting material has potential for processing and storing quantum computing information. The work also expands the number of potential materials that can be used to create hybrid magnonic systems.
    In magnetic materials, quasi-particles called magnons direct the electron spin within the material. There are two types of magnons — optical and acoustic — which refer to the direction of their spin.
    “Both optical and acoustic magnons propagate spin waves in antiferromagnets,” says Dali Sun, associate professor of physics and member of the Organic and Carbon Electronics Lab (ORaCEL) at North Carolina State University. “But in order to use spin waves to process quantum information, you need a mixed spin wave state.”
    “Normally two magnon modes cannot generate a mixed spin state due to their different symmetries,” Sun says. “But by harnessing the DMI we discovered a hybrid perovskite with a mixed magnon state.” Sun is also a corresponding author of the research.
    The researchers accomplished this by adding an organic cation to the material, which created a particular interaction called the DMI. In short, the DMI breaks the symmetry of the material, allowing the spins to mix.
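    For reference, the DMI is usually written as an antisymmetric exchange term in the spin Hamiltonian; this is the textbook form, not notation taken from the paper:

    ```latex
    % Standard (textbook) form of the Dzyaloshinskii-Moriya interaction:
    % an antisymmetric exchange that favors canting between neighboring spins.
    H_{\mathrm{DMI}} = \sum_{\langle i,j \rangle} \mathbf{D}_{ij} \cdot \left( \mathbf{S}_i \times \mathbf{S}_j \right)
    ```

    The vector D_ij vanishes when the bond between the two spins has inversion symmetry, which is why breaking that symmetry with the organic cation is what switches the interaction on.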
    The team utilized a copper-based magnetic hybrid organic-inorganic perovskite, which has a unique octahedral structure. These octahedrons can tilt and deform in different ways. Adding an organic cation to the material breaks the symmetry, creating angles within the material that allow the different magnon modes to couple and the spins to mix.
    “Beyond the quantum implications, this is the first time we’ve observed broken symmetry in a hybrid organic-inorganic perovskite,” says Andrew Comstock, NC State graduate research assistant and first author of the research.
    “We found that the DMI allows magnon coupling in copper-based hybrid perovskite materials with the correct symmetry requirements,” Comstock says. “Adding different cations creates different effects. This work really opens up ways to create magnon coupling from a lot of different materials — and studying the dynamic effects of this material can teach us new physics as well.”
    The work appears in Nature Communications and was primarily supported by the U.S. Department of Energy’s Center for Hybrid Organic Inorganic Semiconductors for Energy (CHOISE). Chung-Tao Chou of the Massachusetts Institute of Technology is co-first author of the work. Luqiao Liu of MIT, and Matthew Beard and Haipeng Lu of the National Renewable Energy Laboratory are co-corresponding authors of the research.

  •

    Students use machine learning in lesson designed to reveal issues, promise of A.I.

    In a new study, North Carolina State University researchers had 28 high school students create their own machine-learning artificial intelligence (AI) models for analyzing data. The goals of the project were to help students explore the challenges, limitations and promise of AI, and to ensure a future workforce is prepared to make use of AI tools.
    The study was conducted in conjunction with a high school journalism class in the Northeast. Since then, researchers have expanded the program to high school classrooms in multiple states, including North Carolina. NC State researchers are looking to partner with additional schools to collaborate in bringing the curriculum into classrooms.
    “We want students, from a very young age, to open up that black box so they aren’t afraid of AI,” said the study’s lead author Shiyan Jiang, assistant professor of learning design and technology at NC State. “We want students to know the potential and challenges of AI, and to think about how they, the next generation, can respond to the evolving role of AI in society. We want to prepare students for the future workforce.”
    For the study, researchers developed a computer program called StoryQ that allows students to build their own machine-learning models. Researchers then hosted a teacher workshop on the machine-learning curriculum and technology, held in one-and-a-half-hour sessions each week for a month. For teachers who signed up to participate further, researchers recapped the curriculum and worked out logistics.
    “We created the StoryQ technology to allow students in high school or undergraduate classrooms to build what we call ‘text classification’ models,” Jiang said. “We wanted to lower the barriers so students can really know what’s going on in machine-learning, instead of struggling with the coding. So we created StoryQ, a tool that allows students to understand the nuances in building machine-learning and text classification models.”
    A teacher who decided to participate led a journalism class through a 15-day lesson where they used StoryQ to evaluate a series of Yelp reviews about ice cream stores. Students developed models to predict if reviews were “positive” or “negative” based on the language.

    “The teacher saw the relevance of the program to journalism,” Jiang said. “This was a very diverse class with many students who are under-represented in STEM and in computing. Overall, we found students enjoyed the lessons a lot, and had great discussions about the use and mechanism of machine-learning.”
    Researchers saw that students made hypotheses about specific words in the Yelp reviews, which they thought would predict if a review would be positive or negative. For example, they expected reviews containing the word “like” to be positive. Then, the teacher guided the students to analyze whether their models correctly classified reviews. For example, a student who used the word “like” to predict reviews found that more than half of reviews containing the word were actually negative. Researchers said students then used trial and error to try to improve the accuracy of their models.
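    A small sketch of the kind of model the students built, shown here with scikit-learn and made-up reviews rather than StoryQ and the actual Yelp data, makes the “like” check concrete:

    ```python
    # Illustrative sketch (not StoryQ): a bag-of-words text classifier plus a
    # check of how the single word "like" splits across labels. Data is made up.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    reviews = ["I like the mint chip, great place", "would not go back, too sweet",
               "like a dream, best sundae ever", "looks like old ice cream, awful",
               "friendly staff and huge scoops", "melted mess, did not like it"]
    labels = ["pos", "neg", "pos", "neg", "pos", "neg"]

    vec = CountVectorizer()
    model = LogisticRegression().fit(vec.fit_transform(reviews), labels)

    # Hypothesis check: does "like" really signal a positive review?
    print([lab for text, lab in zip(reviews, labels) if "like" in text.split()])
    # -> mixed labels, so "like" alone is a weak predictor
    print(model.predict(vec.transform(["i like the chocolate here"])))
    ```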
    “Students learned how these models make decisions, and the role that humans can play in creating these technologies, and the kind of perspectives that can be brought in when they create AI technology,” Jiang said.
    From their discussions, researchers found that students had mixed reactions to AI technologies. Students were deeply concerned, for example, about the potential to use AI to automate processes for selecting students or candidates for opportunities like scholarships or programs.
    For future classes, researchers created a shorter, five-hour program. They’ve launched the program in two high schools in North Carolina, as well as schools in Georgia, Maryland and Massachusetts. In the next phase of their research, they are looking to study how teachers across disciplines collaborate to launch an AI-focused program and create a community of AI learning.
    “We want to expand the implementation in North Carolina,” Jiang said. “If there are any schools interested, we are always ready to bring this program to a school. Since we know teachers are super busy, we’re offering a shorter professional development course, and we also provide a stipend for teachers. We will go into the classroom to teach if needed, or demonstrate how we would teach the curriculum so teachers can replicate, adapt, and revise it. We will support teachers in all the ways we can.”
    The study, “High school students’ data modeling practices and processes: From modeling unstructured data to evaluating automated decisions,” was published online March 13 in the journal Learning, Media and Technology. Co-authors included Hengtao Tang, Cansu Tatar, Carolyn P. Rosé and Jie Chao. The work was supported by the National Science Foundation under grant number 1949110.

  •

    New classification of chess openings

    Using real data from an online chess platform, scientists of the Complexity Science Hub and the Centro Ricerche Enrico Fermi (CREF) studied similarities of different chess openings. Based on these similarities, they developed a new classification method which can complement the standard classification.
    “To find out how similar chess openings actually are to each other — meaning in real game behavior — we drew on the wisdom of the crowd,” Giordano De Marzo of the Complexity Science Hub and the Centro Ricerche Enrico Fermi (CREF) explains. The researchers analyzed 3,746,135 chess games, 18,253 players and 988 different openings from the chess platform Lichess and observed who plays which opening games. If the same players choose two specific openings over and over again, it stands to reason that those openings are similar. Opening games that are so popular that they occur together with most others were excluded. “We also only included players in our analyses that had a rating above 2,000 on the platform Lichess. Total novices could randomly play any opening games, which would skew our analyses,” explains Vito D.P. Servedio of the Complexity Science Hub.
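    A rough sketch of that co-play idea, using toy data rather than the authors’ exact method, could look like this:

    ```python
    # Sketch: openings repeatedly chosen by the same players get a high
    # co-play count, which serves as a similarity score for clustering.
    # Toy repertoires; the study used millions of Lichess games.
    from collections import defaultdict
    from itertools import combinations

    repertoires = {
        "player_a": {"Sicilian Najdorf", "King's Indian", "Grunfeld"},
        "player_b": {"Sicilian Najdorf", "King's Indian"},
        "player_c": {"Italian Game", "Ruy Lopez"},
    }

    co_play = defaultdict(int)
    for openings in repertoires.values():
        for pair in combinations(sorted(openings), 2):
            co_play[pair] += 1  # how many players share this pair of openings

    for pair, count in sorted(co_play.items(), key=lambda kv: -kv[1]):
        print(pair, count)  # high counts suggest the two openings belong together
    ```

    Clustering the resulting similarity graph, for example with a standard community-detection algorithm, would then yield groups of openings that are played in similar styles.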
    Ten Clusters Clearly Delineated
    In this way, the researchers found that certain opening games group together. Ten different clusters clearly stood out according to actual similarities in playing behavior. “And these clusters don’t necessarily coincide with the common classification of chess openings,” says De Marzo. For example, certain opening games from different classes were played repeatedly by the same players. Therefore, although these strategies are classified in different classes, they must have some similarity. So, they are all in the same cluster. Each cluster thus represents a certain style of play — for example, rather defensive or very offensive. Moreover, the method of classification that the researchers have developed here can be applied not only to chess, but to similar games such as Go or Stratego.
    Complement the Standard Classification
    The opening phase in chess is usually less than 20 moves. Depending on which pieces are moved first, one speaks of an open, half-open, closed or irregular opening. The standard classification, the so-called ECO Code (Encyclopaedia of Chess Openings), divides them into five main groups: A, B, C, D and E. “Since this has evolved historically, it contains very useful information. Our clustering represents a new order that is close to the used one and can add to it by showing players how similar openings actually are to each other,” Servedio explains. After all, something that grows historically cannot be reordered from scratch. “You can’t say A20 now becomes B3. That would be like trying to exchange words in a language,” adds De Marzo.
    Rate Players and Opening Games
    In addition, their method allowed the researchers to determine how good a player is and how difficult a particular opening game is. The basic assumption: if a particular opening game is played by many people, it is likely to be rather easy. So, they examined which opening games were played the most and who played them. This gave the researchers a measure of how difficult an opening game is (= complexity) and a measure of how good a player is (= fitness). Matching these with the players’ rating on the chess platform itself showed a significant correlation. “On the one hand, this underlines the significance of our two newly introduced measures, but also the accuracy of our analysis,” explains Servedio. To ensure the relevance and validity of these results from a chess theory perspective, the researchers sought the expertise of a chess grandmaster who wishes to remain anonymous.
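    The press release does not spell out the update rule, but an iterative fitness-and-complexity calculation on a player-opening matrix, in the spirit described above, can be sketched as follows (an assumed scheme, not necessarily the study’s exact algorithm):

    ```python
    # Sketch of an iterative fitness/complexity estimate on a player-opening
    # matrix (assumed scheme, not necessarily the study's exact algorithm).
    import numpy as np

    # M[p, o] = 1 if player p plays opening o (toy matrix).
    M = np.array([[1, 1, 1, 0],
                  [1, 1, 0, 0],
                  [1, 0, 0, 1]], dtype=float)

    fitness = np.ones(M.shape[0])
    complexity = np.ones(M.shape[1])
    for _ in range(50):
        fitness = M @ complexity                    # playing many hard openings -> high fitness
        complexity = 1.0 / (M.T @ (1.0 / fitness))  # openings played even by weak players -> easy
        fitness /= fitness.mean()                   # keep the scales fixed
        complexity /= complexity.mean()

    print(np.round(fitness, 2), np.round(complexity, 2))
    ```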

  •

    Absolute zero in the quantum computer

    The absolute lowest temperature possible is -273.15 degrees Celsius. It is never possible to cool any object exactly to this temperature — one can only approach absolute zero. This is the third law of thermodynamics.
    A research team at TU Wien (Vienna) has now investigated the question: How can this law be reconciled with the rules of quantum physics? They succeeded in developing a “quantum version” of the third law of thermodynamics: Theoretically, absolute zero is attainable. But for any conceivable recipe for it, you need three ingredients: Energy, time and complexity. And only if you have an infinite amount of one of these ingredients can you reach absolute zero.
    Information and thermodynamics: an apparent contradiction
    When quantum particles reach absolute zero, their state is precisely known: They are guaranteed to be in the state with the lowest energy. The particles then no longer contain any information about what state they were in before. Everything that may have happened to the particle before is perfectly erased. From a quantum physics point of view, cooling and deleting information are thus closely related.
    At this point, two important physical theories meet: Information theory and thermodynamics. But the two seem to contradict each other: “From information theory, we know the so-called Landauer principle. It says that a very specific minimum amount of energy is required to delete one bit of information,” explains Prof. Marcus Huber from the Atomic Institute of TU Wien. Thermodynamics, however, says that you need an infinite amount of energy to cool anything down exactly to absolute zero. But if deleting information and cooling to absolute zero are the same thing — how does that fit together?
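    The Landauer principle mentioned here is usually stated as a minimum energy cost per erased bit:

    ```latex
    % Landauer's principle: the minimum energy dissipated to erase one bit of
    % information at temperature T, with k_B Boltzmann's constant.
    E_{\min} = k_{B} T \ln 2
    ```

    This cost per bit is finite at any nonzero temperature, which is what appears to clash with the third law’s demand for unlimited resources to reach absolute zero.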
    Energy, time and complexity
    The roots of the problem lie in the fact that thermodynamics was formulated in the 19th century for classical objects — for steam engines, refrigerators or glowing pieces of coal. At that time, people had no idea about quantum theory. If we want to understand the thermodynamics of individual particles, we first have to analyse how thermodynamics and quantum physics interact — and that is exactly what Marcus Huber and his team did.
    “We quickly realised that you don’t necessarily have to use infinite energy to reach absolute zero,” says Marcus Huber. “It is also possible with finite energy — but then you need an infinitely long time to do it.” Up to this point, the considerations are still compatible with classical thermodynamics as we know it from textbooks. But then the team came across an additional detail of crucial importance:
    “We found that quantum systems can be defined that allow the absolute ground state to be reached even at finite energy and in finite time — none of us had expected that,” says Marcus Huber. “But these special quantum systems have another important property: they are infinitely complex.” So you would need infinitely precise control over infinitely many details of the quantum system — then you could cool a quantum object to absolute zero in finite time with finite energy. In practice, of course, this is just as unattainable as infinitely high energy or infinitely long time.
    Erasing data in the quantum computer
    “So if you want to perfectly erase quantum information in a quantum computer, and in the process transfer a qubit to a perfectly pure ground state, then theoretically you would need an infinitely complex quantum computer that can perfectly control an infinite number of particles,” says Marcus Huber. In practice, however, perfection is not necessary — no machine is ever perfect. It is enough for a quantum computer to do its job fairly well. So the new results are not an obstacle in principle to the development of quantum computers.
    In practical applications of quantum technologies, temperature plays a key role today — the higher the temperature, the easier it is for quantum states to break and become unusable for any technical use. “This is precisely why it is so important to better understand the connection between quantum theory and thermodynamics,” says Marcus Huber. “There is a lot of interesting progress in this area at the moment. It is slowly becoming possible to see how these two important parts of physics intertwine.”

  •

    New cyber software can verify how much knowledge AI really knows

    With a growing interest in generative artificial intelligence (AI) systems worldwide, researchers at the University of Surrey have created software that is able to verify how much information an AI has farmed from an organisation’s digital database.
    Surrey’s verification software can be used as part of a company’s online security protocol, helping an organisation understand whether an AI has learned too much or even accessed sensitive data.
    The software is also capable of identifying whether AI has identified and is capable of exploiting flaws in software code. For example, in an online gaming context, it could identify whether an AI has learned to always win in online poker by exploiting a coding fault.
    Dr Solofomampionona Fortunat Rajaona is Research Fellow in formal verification of privacy at the University of Surrey and the lead author of the paper. He said:
    “In many applications, AI systems interact with each other or with humans, such as self-driving cars on a highway or hospital robots. Working out what an intelligent AI data system knows is an ongoing problem which we have taken years to find a working solution for.
    “Our verification software can deduce how much AI can learn from their interaction, whether they have enough knowledge to enable successful cooperation, and whether they have too much knowledge that will break privacy. Through the ability to verify what AI has learned, we can give organisations the confidence to safely unleash the power of AI into secure settings.”
    The study about Surrey’s software won the best paper award at the 25th International Symposium on Formal Methods.
    Professor Adrian Hilton, Director of the Institute for People-Centred AI at the University of Surrey, said:
    “Over the past few months there has been a huge surge of public and industry interest in generative AI models fuelled by advances in large language models such as ChatGPT. Creation of tools that can verify the performance of generative AI is essential to underpin their safe and responsible deployment. This research is an important step towards maintaining the privacy and integrity of datasets used in training.”
    Further information: https://openresearch.surrey.ac.uk/esploro/outputs/99723165702346