More stories

    Robo-teammate can detect, share 3D changes in real-time

    Something is different, and you can’t quite put your finger on it. But your robot can.
    Even small changes in your surroundings could indicate danger. Imagine a robot could detect those changes, and a warning could immediately alert you through a display in your eyeglasses. That is what U.S. Army scientists are developing with sensors, robots, real-time change detection and augmented reality wearables.
    Army researchers demonstrated, in a real-world environment, the first human-robot team in which the robot detects physical changes in 3D and shares that information with a human in real time through augmented reality; the human can then evaluate the information received and decide on follow-on action.
    “This could let robots inform their Soldier teammates of changes in the environment that might be overlooked by or not perceptible to the Soldier, giving them increased situational awareness and offset from potential adversaries,” said Dr. Christopher Reardon, a researcher at the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. “This could detect anything from camouflaged enemy soldiers to IEDs.”
    Part of the lab’s effort in contextual understanding through the Artificial Intelligence for Mobility and Maneuver Essential Research Program, this research explores how to provide contextual awareness to autonomous robotic ground platforms in maneuver and mobility scenarios. Researchers also participate with international coalition partners in the Technical Cooperation Program’s Contested Urban Environment Strategic Challenge, or TTCP CUESC, events to test and evaluate human-robot teaming technologies.
    Most academic research in the use of mixed reality interfaces for human-robot teaming does not enter real-world environments, but rather uses external instrumentation in a lab to manage the calculations necessary to share information between a human and robot. Likewise, most engineering efforts to provide humans with mixed-reality interfaces do not examine teaming with autonomous mobile robots, Reardon said.
    Reardon and his colleagues from the Army and the University of California, San Diego, published their research, Enabling Situational Awareness via Augmented Reality of Autonomous Robot-Based Environmental Change Detection, at the 12th International Conference on Virtual, Augmented, and Mixed Reality, part of the International Conference on Human-Computer Interaction.
    The research paired a human teammate wearing augmented reality glasses with a small autonomous mobile ground robot equipped with a laser-ranging sensor, known as LIDAR, which the robot used to build a representation of the environment. As the robot patrolled the environment, it compared its current and previous readings to detect changes. Those changes were then instantly displayed in the human’s eyewear, so the researchers could determine whether the human could interpret the changes in the environment.
    In studying communication between the robot and the human, the researchers tested LIDAR sensors of different resolutions on the robot to collect measurements of the environment and detect changes. When those changes were shared with the human through augmented reality, the researchers found that human teammates could interpret changes that even the lower-resolution LIDARs detected. This indicates that, depending on the size of the changes a team expects to encounter, lighter, smaller and less expensive sensors could perform just as well, and run faster in the process.
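    The comparison step can be pictured with a short sketch. The Python snippet below voxelizes a previous and a current point cloud and flags voxels whose occupancy changed between patrols, roughly the kind of 3D change that would then be highlighted in the teammate’s augmented reality display. The voxel-occupancy approach and all names here are illustrative assumptions, not the paper’s actual pipeline.

    ```python
    # Sketch of LIDAR-based change detection: voxelize two point clouds
    # of the same area and report voxels whose occupancy differs.
    import numpy as np

    def voxelize(points: np.ndarray, voxel_size: float = 0.1) -> set:
        """Map an (N, 3) point cloud to the set of occupied voxel indices."""
        return set(map(tuple, np.floor(points / voxel_size).astype(int)))

    def detect_changes(previous: np.ndarray, current: np.ndarray,
                       voxel_size: float = 0.1):
        """Return voxels that appeared or disappeared between two scans."""
        before = voxelize(previous, voxel_size)
        after = voxelize(current, voxel_size)
        return after - before, before - after  # (appeared, disappeared)

    # Example: a reference scan, then the same scene with one new object.
    rng = np.random.default_rng(0)
    reference = rng.uniform(0, 10, size=(5000, 3))
    new_object = rng.uniform(0, 0.3, size=(50, 3)) + np.array([4.0, 4.0, 0.0])
    appeared, disappeared = detect_changes(reference,
                                           np.vstack([reference, new_object]))
    print(f"{len(appeared)} voxels appeared")  # these would be flagged in AR
    ```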
    This capability has the potential to be incorporated into future Soldier mixed-reality interfaces such as the Army’s Integrated Visual Augmentation System goggles, or IVAS.
    “Incorporating mixed reality into Soldiers’ eye protection is inevitable,” Reardon said. “This research aims to fill gaps by incorporating useful information from robot teammates into the Soldier-worn visual augmentation ecosystem, while simultaneously making the robots better teammates to the Soldier.”
    Future studies will continue to explore how to strengthen the teaming between humans and autonomous agents by allowing the human to interact with the detected changes, which will provide more information to the robot about the context of the change: for example, changes made by adversaries versus natural environmental changes or false positives, Reardon said. This will improve the autonomous context understanding and reasoning capabilities of the robotic platform, such as by enabling the robot to learn and predict which types of changes constitute a threat. In turn, providing this understanding to autonomy will help researchers learn how to improve the teaming of Soldiers with autonomous platforms.

    The mathematical magic of bending grids

    How can you turn something flat into something three-dimensional? In architecture and design this question often plays an important role. A team of mathematicians from TU Wien (Vienna) has now presented a technique that solves this problem in an amazingly simple way: You choose any curved surface and from its shape you can calculate a flat grid of straight bars that can be folded out to the desired curved structure with a single movement. The result is a stable form that can even carry loads due to its mechanical tension.
    The step into the third dimension
    Suppose you screw ordinary straight bars together at right angles to form a grid, so that a completely regular pattern of small squares is created. Such a grid can be distorted: all angles of the grid change simultaneously, parallel bars remain parallel, and the squares become parallelograms. But this does not change the fact that all bars are in the same plane. The structure is still flat.
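    One way to make this planar case concrete is to write the distortion as a linear map (an illustrative formulation, not notation from the paper). Sending the two grid directions to unit vectors separated by an angle $\theta$,

    $$
    M(\theta) = \begin{pmatrix} 1 & \cos\theta \\ 0 & \sin\theta \end{pmatrix},
    \qquad
    M(\theta)\begin{pmatrix}1\\0\end{pmatrix} = \begin{pmatrix}1\\0\end{pmatrix},
    \quad
    M(\theta)\begin{pmatrix}0\\1\end{pmatrix} = \begin{pmatrix}\cos\theta\\ \sin\theta\end{pmatrix},
    $$

    keeps every bar straight and parallel bars parallel, while each unit square becomes a parallelogram with interior angle $\theta$. The image lies in the plane for every $\theta$, which is exactly why such a grid stays flat.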
    The crucial question now is: What happens if the bars are not parallel at the beginning, but are joined together at different angles? “Such a grid can no longer be distorted within the plane,” explains Przemyslaw Musialski. “When you open it up, the bars have to bend. They move out of the plane into the third dimension and form a curved shape.”
    At the Center for Geometry and Computational Design (GCD) (Institute for Discrete Mathematics and Geometry) at TU Wien, Musialski and his team developed a method that can be used to calculate what the flat, two-dimensional grid must look like in order to produce exactly the desired three-dimensional shape when it is unfolded. “Our method is based on findings in differential geometry; it is relatively simple and does not require computationally intensive simulations,” says Stefan Pillwein, first author of the current publication, which was presented at the SIGGRAPH conference and published in the journal ACM Transactions on Graphics.
    Experiments with the laser scanner
    The team then put the mathematical method to the test: the calculated grids were made of wood, screwed together and unfolded. The resulting 3D shapes were then measured with a laser scanner. The scans confirmed that the unfolded structures did indeed correspond closely to the calculated shapes.
    The researchers even produced a mini pavilion roof measuring 3.1 x 2.1 x 0.9 metres. “We wanted to know whether this technology would also work on a large scale — and it worked out perfectly,” says Stefan Pillwein.
    “Transforming a simple 2D grid into a 3D form with a single opening movement not only looks amazing, it has many technical advantages,” says Przemyslaw Musialski. “Such grids are simple and inexpensive to manufacture, they are easy to transport and set up. Our method makes it possible to create even sophisticated shapes, not just simple domes.”
    The structures also have very good static properties: “The curved elements are under tension and have a natural structural stability — in architecture this is called active bending,” explains Musialski. Very large distances can be spanned with very thin rods. This is ideal for architectural applications.

    Story Source:
    Materials provided by Vienna University of Technology. Note: Content may be edited for style and length.

    Predicting computational power of early quantum computers

    Quantum physicists at the University of Sussex have created an algorithm that speeds up the rate of calculations in the early quantum computers which are currently being developed. They have created a new way to route the ions — or charged atoms — around the quantum computer to boost the efficiency of the calculations.
    The Sussex team have shown how calculations in such a quantum computer can be done most efficiently by using their new ‘routing algorithm’. Their paper “Efficient Qubit Routing for a Globally Connected Trapped Ion Quantum Computer” is published in the journal Advanced Quantum Technologies.
    The team working on this project was led by Professor Winfried Hensinger and included Mark Webber, Dr Steven Herbert and Dr Sebastian Weidt. The scientists have created a new algorithm which regulates traffic within the quantum computer just like managing traffic in a busy city. In the trapped ion design the qubits can be physically transported over long distances, so they can easily interact with other qubits. Their new algorithm means that data can flow through the quantum computer without any ‘traffic jams’. This in turn gives rise to a more powerful quantum computer.
    Quantum computers are expected to be able to solve problems that are too complex for classical computers. Quantum computers use quantum bits (qubits) to process information in a new and powerful way. The particular quantum computer architecture the team analysed first is a ‘trapped ion’ quantum computer, consisting of silicon microchips with individual charged atoms, or ions, levitating above the surface of the chip. These ions are used to store data, where each ion holds one quantum bit of information. Executing calculations on such a quantum computer involves moving around ions, similar to playing a game of Pacman, and the faster and more efficiently the data (the ions) can be moved around, the more powerful the quantum computer will be.
    In the global race to build a large scale quantum computer there are two leading methods, ‘superconducting’ devices which groups such as IBM and Google focus on, and ‘trapped ion’ devices which are used by the University of Sussex’s Ion Quantum Technology group, and the newly emerged company Universal Quantum, among others.
    Superconducting quantum computers have stationary qubits which are typically only able to interact with qubits that are immediately next to each other. Calculations involving distant qubits are done by communicating through a chain of adjacent qubits, a process similar to the telephone game (also referred to as ‘Chinese Whispers’), where information is whispered from one person to another along a line of people. In the same way as in the telephone game, the information tends to get more corrupted the longer the chain is. Indeed, the researchers found that this process will limit the computational power of superconducting quantum computers.
    In contrast, by deploying their new routing algorithm for their trapped ion architecture, the Sussex scientists have discovered that their quantum computing approach can achieve an impressive level of computational power. ‘Quantum Volume’ is a new benchmark which is being used to compare the computational power of near term quantum computers. They were able to use Quantum Volume to compare their architecture against a model for superconducting qubits, where they assumed similar levels of errors for both approaches. They found that the trapped-ion approach performed consistently better than the superconducting qubit approach, because their routing algorithm essentially allows qubits to directly interact with many more qubits, which in turn gives rise to a higher expected computational power.
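    The Quantum Volume benchmark mentioned above can be made concrete with a toy calculation. In the standard formulation, a device’s Quantum Volume is 2^n for the largest “square” random circuit (n qubits, n layers) it can run successfully. The sketch below estimates that width under a crude uniform-error model; the gate counts, error model and pass threshold are illustrative assumptions, not the Sussex team’s analysis.

    ```python
    # Toy Quantum Volume estimate: find the largest square circuit
    # (n qubits, n layers, ~n/2 two-qubit gates per layer) whose
    # estimated fidelity stays above the conventional 2/3 threshold.

    def estimated_quantum_volume(error_per_gate: float,
                                 threshold: float = 2 / 3) -> int:
        n = 1
        while (1 - error_per_gate) ** (n * n / 2) >= threshold:
            n += 1
        return 2 ** (n - 1)  # Quantum Volume = 2**n for the last passing n

    # Lower effective error per operation (for instance, less overhead
    # spent relaying data between distant qubits) means a larger volume.
    print(estimated_quantum_volume(0.01))  # 256
    print(estimated_quantum_volume(0.03))  # 32
    ```

    This is the qualitative effect the routing result points at: anything that lowers the effective cost of letting distant qubits interact raises the achievable Quantum Volume.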
    Mark Webber, a doctoral researcher in the Sussex Centre for Quantum Technologies at the University of Sussex, said:
    “We can now predict the computational power of the quantum computers we are constructing. Our study indicates a fundamental advantage for trapped ion devices, and the new routing algorithm will allow us to maximize the performance of early quantum computers.”
    Professor Hensinger, director of the Sussex Centre for Quantum Technologies at the University of Sussex, said:
    “Indeed, this work is yet another stepping stone towards building practical quantum computers that can solve real world problems.”
    Professor Winfried Hensinger and Dr Sebastian Weidt have recently launched their spin-out company Universal Quantum, which aims to build the world’s first large-scale quantum computer. It has attracted backing from some of the world’s most powerful tech investors. In 2017, the team was the first to publish a blueprint for how to build a large-scale trapped ion quantum computer.

    Story Source:
    Materials provided by University of Sussex. Original written by Anna Ford. Note: Content may be edited for style and length.

    Machine learning peeks into nano-aquariums

    In the nanoworld, tiny particles such as proteins appear to dance as they transform and assemble to perform various tasks while suspended in a liquid. Recently developed methods have made it possible to watch and record these otherwise-elusive tiny motions, and researchers now take a step forward by developing a machine learning workflow to streamline the process.
    The new study, led by Qian Chen, a professor of materials science and engineering at the University of Illinois, Urbana-Champaign, builds upon her past work with liquid-phase electron microscopy and is published in the journal ACS Central Science.
    Being able to see — and record — the motions of nanoparticles is essential for understanding a variety of engineering challenges. Liquid-phase electron microscopy, which allows researchers to watch nanoparticles interact inside tiny aquariumlike sample containers, is useful for research in medicine, energy and environmental sustainability and in fabrication of metamaterials, to name a few. However, it is difficult to interpret the dataset, the researchers said. The video files produced are large, filled with temporal and spatial information, and are noisy due to background signals — in other words, they require a lot of tedious image processing and analysis.
    “Developing a method even to see these particles was a huge challenge,” Chen said. “Figuring out how to efficiently get the useful data pieces from a sea of outliers and noise has become the new challenge.”
    To confront this problem, the team developed a machine learning workflow based on an artificial neural network that mimics, in part, the learning ability of the human brain. The program builds on an existing neural network, known as U-Net, that does not require handcrafted features or predetermined input and has yielded significant breakthroughs in identifying irregular cellular features in other types of microscopy, the study reports.
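    For readers who want a picture of the architecture, the following PyTorch sketch shows a deliberately tiny U-Net-style model: an encoder, a downsampled middle block, and a decoder joined to the encoder by a skip connection, ending in a per-pixel prediction. The depth, channel counts and training setup are illustrative assumptions, not the authors’ exact configuration.

    ```python
    # Tiny U-Net-style encoder-decoder for per-pixel segmentation.
    import torch
    import torch.nn as nn

    def double_conv(c_in: int, c_out: int) -> nn.Sequential:
        """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc = double_conv(1, 16)      # encoder features
            self.down = nn.MaxPool2d(2)
            self.mid = double_conv(16, 32)
            self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.dec = double_conv(32, 16)     # 32 = 16 skip + 16 upsampled
            self.head = nn.Conv2d(16, 1, 1)    # particle/background logits

        def forward(self, x):
            e = self.enc(x)
            u = self.up(self.mid(self.down(e)))
            # skip connection: reuse encoder detail lost in downsampling
            return self.head(self.dec(torch.cat([e, u], dim=1)))

    frame = torch.randn(1, 1, 64, 64)          # one noisy microscopy frame
    print(TinyUNet()(frame).shape)             # torch.Size([1, 1, 64, 64])
    ```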
    “Our new program processed information for three types of nanoscale dynamics including motion, chemical reaction and self-assembly of nanoparticles,” said lead author and graduate student Lehan Yao. “These represent the scenarios and challenges we have encountered in the analysis of liquid-phase electron microscopy videos.”
    The researchers collected measurements from approximately 300,000 pairs of interacting nanoparticles, the study reports.
    As found in past studies by Chen’s group, contrast continues to be a problem while imaging certain types of nanoparticles. In their experimental work, the team used particles made out of gold, which is easy to see with an electron microscope. However, particles with lower elemental or molecular weights like proteins, plastic polymers and other organic nanoparticles show very low contrast when viewed under an electron beam, Chen said.
    “Biological applications, like the search for vaccines and drugs, underscore the urgency in our push to have our technique available for imaging biomolecules,” she said. “There are critical nanoscale interactions between viruses and our immune systems, between the drugs and the immune system, and between the drug and the virus itself that must be understood. The fact that our new processing method allows us to extract information from samples as demonstrated here gets us ready for the next step of application and model systems.”
    The team has made the source code for the machine learning program used in this study publicly available through the supplemental information section of the new paper. “We feel that making the code available to other researchers can benefit the whole nanomaterials research community,” Chen said.
    See liquid-phase electron microscopy combined with machine learning in action: https://www.youtube.com/watch?v=0NESPF8Rwsc

    A measurement of positronium’s energy levels confounds scientists

    Positronium is positively puzzling.
    A new measurement of the exotic “atom” — consisting of an electron and its antiparticle, a positron — disagrees with theoretical calculations, scientists report in the Aug. 14 Physical Review Letters. And physicists are at a loss to explain it.
    A flaw in either the calculations or the experiment seems unlikely, researchers say. And new phenomena, such as undiscovered particles, also don’t provide an easy answer, adds theoretical physicist Jesús Pérez Ríos of the Fritz Haber Institute of the Max Planck Society in Berlin. “Right now, the best I can tell you is, we don’t know,” says Pérez Ríos, who was not involved with the new research.
    Positronium is composed of an electron, with a negative charge, circling in orbit with a positron, with a positive charge — making what’s effectively an atom without a nucleus (SN: 9/12/07). With just two particles and free from the complexities of a nucleus, positronium is appealingly simple. Its simplicity means it can be used to precisely test the theory of quantum electrodynamics, which explains how electrically charged particles interact.

    A team of physicists from University College London measured the separation between two specific energy levels of positronium, what’s known as its fine structure. The researchers formed positronium by colliding a beam of positrons with a target, where they met up with electrons. After manipulating the positronium atoms with a laser to put them in the appropriate energy level, the team hit them with microwave radiation to induce some of them to jump to another energy level.
    The researchers pinpointed the frequency of radiation needed to make the atoms take the leap, which is equivalent to finding the size of the gap between the energy levels. While the frequency predicted from calculations was about 18,498 megahertz, the researchers measured about 18,501 megahertz, a difference of about 0.02 percent. Given that the estimated experimental error was only about 0.003 percent, that’s a wide gap.
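    Using the rounded figures quoted above, the size of the disagreement can be checked directly:

    $$
    \frac{\Delta\nu}{\nu} \approx \frac{18\,501 - 18\,498}{18\,498} \approx 1.6\times10^{-4} \approx 0.02\,\%,
    \qquad
    \sigma \approx 0.003\,\% \times 18\,498\ \text{MHz} \approx 0.6\ \text{MHz},
    $$

    so the roughly 3 MHz discrepancy amounts to about five experimental standard deviations on these rounded numbers, far too large to dismiss easily as a statistical fluke.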
    The team searched for experimental issues that could explain the result, but came up empty. Additional experiments are now needed to help investigate the mismatch, says physicist Akira Ishida of the University of Tokyo, who was not involved with the study. “If there is still significant discrepancy after further precise measurements, the situation becomes much more exciting.”
    The theoretical prediction also seems solid. In quantum electrodynamics, making predictions involves calculating to a certain level of precision, leaving out terms that are less significant and more difficult to calculate. Those additional terms are expected to be too small to account for the discrepancy. But, “it’s conceivable that you could be surprised,” says theoretical physicist Greg Adkins of Franklin & Marshall College in Lancaster, Pa., also not involved with the research.
    If the experiments and the theoretical calculations check out, the discrepancy might be due to a new particle, but that explanation also seems unlikely. A new particle’s effects probably would have shown up in earlier experiments. For example, says Pérez Ríos, positronium’s energy levels could be affected by a hypothetical axion-like particle. That’s a lightweight particle that has the potential to explain dark matter, an invisible type of matter thought to permeate the universe. But if that type of particle was causing this mismatch, researchers would also have seen its effects in measurements of the magnetic properties of the electron and its heavier cousin, the muon.
    That leaves scientists still searching for an answer, says physicist David Cassidy, a coauthor of the study. “It’s going to be something surprising. I just don’t know what.”

    Electronic alert reduces excessive prescribing of short-acting asthma relievers

    An automatic, electronic alert on general practitioners’ (GPs) computer screens can help to prevent excessive prescribing of short-acting asthma reliever medication, according to research presented at the ‘virtual’ European Respiratory Society International Congress.
    The alert pops up when GPs open the medical records for a patient who has been issued with three prescriptions for short-acting reliever inhalers, such as salbutamol, within a three-month period. It suggests the patient should have an asthma review to assess symptoms and improve asthma control. Short-acting beta2-agonists (SABAs), usually described as blue inhalers, afford short-term relief of asthma symptoms by expanding the airways, but do not deal with the underlying inflammatory cause.
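    The triggering rule is simple enough to state in a few lines of code. The sketch below flags a patient when three or more SABA prescriptions fall inside a rolling three-month window; the data layout and names are illustrative, since the real alert lives inside EMIS clinical software.

    ```python
    # Flag patients whose SABA prescribing pattern should trigger the alert.
    from datetime import date, timedelta

    def needs_asthma_review(saba_issue_dates: list,
                            window: timedelta = timedelta(days=91),
                            threshold: int = 3) -> bool:
        """True if `threshold` SABA prescriptions fall within one window."""
        issues = sorted(saba_issue_dates)
        return any(issues[i + threshold - 1] - issues[i] <= window
                   for i in range(len(issues) - threshold + 1))

    # Three salbutamol prescriptions in under three months -> alert fires.
    print(needs_asthma_review(
        [date(2015, 1, 5), date(2015, 2, 10), date(2015, 3, 20)]))  # True
    ```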
    “Excessive use of reliever inhalers such as salbutamol is an indicator of poorly controlled asthma and a risk factor for asthma attacks. It has also been implicated in asthma-related deaths. Yet, despite national and international asthma guidelines, excessive prescribing of short-acting beta2-agonists persists,” said Dr Shauna McKibben, an honorary research fellow at the Institute of Population Health Sciences, Queen Mary University of London (QMUL), UK, and clinical nurse specialist in asthma and allergy at Imperial College Healthcare NHS Trust, London, who led the research. “This research aimed to identify and target excessive SABA prescribing using an electronic alert in GPs’ computer systems to identify at-risk patients, change prescribing behaviour and improve asthma management.”
    The study of 18,244 asthma patients in 132 general practices in north-east London found a 6% reduction in excessive prescribing of reliever inhalers in the 12 months after the alert first appeared on patients’ records. In addition, asthma reviews increased by 12% in the three months after the alert; within six months, repeat prescribing of SABAs fell by 5% and asthma exacerbations requiring treatment with oral steroids fell by 8%.
    The alert to identify excessive SABA prescribing was introduced in 2015 on GPs’ computer systems that used EMIS clinical software. At the time of the research EMIS was used by almost all general practices in north-east London, and 56% of English practices used it by 2017.
    Dr McKibben analysed data on SABA prescribing for patients in all practices in the north-east London boroughs of City and Hackney, Tower Hamlets and Newham between 2015 and 2016. She compared these data with excessive SABA prescribing between 2013 and 2014, before the alert was introduced.

    She said: “The most important finding is the small but potentially clinically significant reduction in SABA prescribing in the 12 months after the alert. This, combined with the other results, suggests that the alert prompts a review of patients who may have poor asthma control. An asthma review facilitates the assessment of SABA use and is an important opportunity to improve asthma management.”
    Dr McKibben also asked a sample of GPs, receptionists and nurses in general practice about their thoughts on the alert.
    “The alert was viewed as a catalyst for asthma review; however, the provision of timely review was challenging and response to the alert was dependent on local practice resources and clinical priorities,” she said.
    A limitation of the research was that the alert assumed that only one SABA inhaler was issued per prescription, when often two at a time may be issued. “Therefore, excessive SABA prescribing and the subsequent reduction in prescribing following the alert may be underestimated,” said Dr McKibben.
    She continued: “Excessive SABA use is only one indicator for poor asthma control but the risks are not well understood by patients and are often overlooked by healthcare professionals. Further research into the development and robust evaluation of tools to support primary care staff in the management of people with asthma is essential to improve asthma control and reduce hospital admissions.”
    The study’s findings are now being used to support and inform the REAL-HEALTH Respiratory initiative, a Barts Charity funded three-year programme with the clinical effectiveness group at QMUL. The initiative provides general practices with EMIS IT tools to support the identification of patients with high-risk asthma. This includes an electronic alert for excessive SABA prescribing and an asthma prescribing tool to identify patients with poor asthma control who may be at risk of hospital admission.
    Daiana Stolz, who was not involved in the research, is the European Respiratory Society Education Council Chair and Professor of Respiratory Medicine and a leading physician at the University Hospital Basel, Switzerland. She said: “This study shows how a relatively simple intervention, an electronic alert popping up on GPs’ computers when they open a patient’s records, can prompt a review of asthma medication and can lead to a reduction in excessive prescribing of short-acting asthma relievers and better asthma control. However, the fact that general practices often struggled to provide a timely asthma review in a period before the COVID-19 pandemic suggests that far more resources need to be made available to primary care, particularly in this pandemic period.”

    'Selfies' could be used to detect heart disease

    Sending a “selfie” to the doctor could be a cheap and simple way of detecting heart disease, according to the authors of a new study published today (Friday) in the European Heart Journal.
    The study is the first to show that it’s possible to use a deep learning computer algorithm to detect coronary artery disease (CAD) by analysing four photographs of a person’s face.
    Although the algorithm needs to be developed further and tested in larger groups of people from different ethnic backgrounds, the researchers say it has the potential to be used as a screening tool that could identify possible heart disease in people in the general population or in high-risk groups, who could be referred for further clinical investigations.
    “To our knowledge, this is the first work demonstrating that artificial intelligence can be used to analyse faces to detect heart disease. It is a step towards the development of a deep learning-based tool that could be used to assess the risk of heart disease, either in outpatient clinics or by means of patients taking ‘selfies’ to perform their own screening. This could guide further diagnostic testing or a clinical visit,” said Professor Zhe Zheng, who led the research and is vice director of the National Center for Cardiovascular Diseases and vice president of Fuwai Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, People’s Republic of China.
    He continued: “Our ultimate goal is to develop a self-reported application for high risk communities to assess heart disease risk in advance of visiting a clinic. This could be a cheap, simple and effective way of identifying patients who need further investigation. However, the algorithm requires further refinement and external validation in other populations and ethnicities.”
    It is known already that certain facial features are associated with an increased risk of heart disease. These include thinning or grey hair, wrinkles, ear lobe crease, xanthelasmata (small, yellow deposits of cholesterol underneath the skin, usually around the eyelids) and arcus corneae (fat and cholesterol deposits that appear as a hazy white, grey or blue opaque ring in the outer edges of the cornea). However, they are difficult for humans to use successfully to predict and quantify heart disease risk.

    Prof. Zheng, Professor Xiang-Yang Ji, who is director of the Brain and Cognition Institute in the Department of Automation at Tsinghua University, Beijing, and other colleagues enrolled 5,796 patients from eight hospitals in China to the study between July 2017 and March 2019. The patients were undergoing imaging procedures to investigate their blood vessels, such as coronary angiography or coronary computed tomography angiography (CCTA). They were divided randomly into training (5,216 patients, 90%) or validation (580, 10%) groups.
    Trained research nurses took four facial photos with digital cameras: one frontal, two profiles and one view of the top of the head. They also interviewed the patients to collect data on socioeconomic status, lifestyle and medical history. Radiologists reviewed the patients’ angiograms and assessed the degree of heart disease depending on how many blood vessels were narrowed by 50% or more (≥ 50% stenosis), and their location. This information was used to create, train and validate the deep learning algorithm.
    The researchers then tested the algorithm on a further 1,013 patients from nine hospitals in China, enrolled between April 2019 and July 2019. The majority of patients in all the groups were of Han Chinese ethnicity.
    They found that the algorithm out-performed existing methods of predicting heart disease risk (the Diamond-Forrester model and the CAD consortium clinical score). In the validation group of patients, the algorithm correctly detected heart disease in 80% of cases (the true positive rate, or ‘sensitivity’) and correctly ruled out heart disease in 61% of the cases where it was absent (the true negative rate, or ‘specificity’). In the test group, the sensitivity was 80% and the specificity was 54%.
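    To see what those percentages mean in practice, here is the arithmetic behind sensitivity and specificity; the confusion-matrix counts below are invented purely so the rates match the reported validation values.

    ```python
    # Sensitivity and specificity from a confusion matrix.
    def sensitivity(tp: int, fn: int) -> float:
        """True positive rate: detected disease / everyone with disease."""
        return tp / (tp + fn)

    def specificity(tn: int, fp: int) -> float:
        """True negative rate: correctly ruled out / everyone without it."""
        return tn / (tn + fp)

    print(f"sensitivity = {sensitivity(tp=240, fn=60):.0%}")   # 80%
    print(f"specificity = {specificity(tn=171, fp=109):.0%}")  # 61%
    # Low specificity means many false positives: here 109 of 280
    # disease-free people would be referred for unnecessary testing.
    ```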
    Prof. Ji said: “The algorithm had a moderate performance, and additional clinical information did not improve its performance, which means it could be used easily to predict potential heart disease based on facial photos alone. The cheek, forehead and nose contributed more information to the algorithm than other facial areas. However, we need to improve the specificity as a false positive rate of as much as 46% may cause anxiety and inconvenience to patients, as well as potentially overloading clinics with patients requiring unnecessary tests.”
    As well as requiring testing in other ethnic groups, limitations of the study include the fact that only one centre in the test group differed from the centres that provided patients for developing the algorithm, which may further limit the algorithm’s generalisability to other populations.

    In an accompanying editorial, Charalambos Antoniades, Professor of Cardiovascular Medicine at the University of Oxford, UK, and Dr Christos Kotanidis, a DPhil student working under Prof. Antoniades at Oxford, write: “Overall, the study by Lin et al. highlights a new potential in medical diagnostics… The robustness of the approach of Lin et al. lies in the fact that their deep learning algorithm requires simply a facial image as the sole data input, rendering it highly and easily applicable at large scale.”
    They continue: “Using selfies as a screening method can enable a simple yet efficient way to filter the general population towards more comprehensive clinical evaluation. Such an approach can also be highly relevant to regions of the globe that are underfunded and have weak screening programmes for cardiovascular disease. A selection process that can be done as easily as taking a selfie will allow for a stratified flow of people that are fed into healthcare systems for first-line diagnostic testing with CCTA. Indeed, the ‘high risk’ individuals could have a CCTA, which would allow reliable risk stratification with the use of the new, AI-powered methodologies for CCTA image analysis.”
    They highlight some of the limitations that Prof. Zheng and Prof. Ji also include in their paper. These include the low specificity of the test, that the test needs to be improved and validated in larger populations, and that it raises ethical questions about “misuse of information for discriminatory purposes. Unwanted dissemination of sensitive health record data, that can easily be extracted from a facial photo, renders technologies such as that discussed here a significant threat to personal data protection, potentially affecting insurance options. Such fears have already been expressed over misuse of genetic data, and should be extensively revisited regarding the use of AI in medicine.”
    The authors of the research paper agree on this point. Prof. Zheng said: “Ethical issues in developing and applying these novel technologies is of key importance. We believe that future research on clinical tools should pay attention to the privacy, insurance and other social implications to ensure that the tool is used only for medical purposes.”
    Prof. Antoniades and Dr. Kotanidis also write in their editorial that defining CAD as ≥ 50% stenosis in one major coronary artery “may be a simplistic and rather crude classification as it pools in the non-CAD group individuals that are truly healthy, but also people who have already developed the disease but are still at early stages (which might explain the low specificity observed).”

    Skat and poker: More luck than skill?

    Chess requires playing ability and strategic thinking; in roulette, chance determines victory or defeat, gain or loss. But what about skat and poker? Are they games of chance or games of skill in game theory? This classification also determines whether play may involve money. Prof. Dr Jörg Oechssler and his team of economists at Heidelberg University studied this question, developing a rating system similar to the Elo system used for chess. According to their study, both skat and poker involve more than 50 per cent luck, yet over the long term, skill prevails.
    “Whether a game is one of skill or luck also determines whether it can be played for money. But assigning a game to these categories is difficult owing to the many shades of gradation between extremes like roulette and chess,” states Prof. Oechssler. Courts in Germany legally classify poker as a game of chance that can be played only in government-sanctioned casinos, whereas skat is considered a game of skill. This classification stems from a court decision taken in 1906. One frequently used assessment criterion is whether the outcome for one player depends more than 50 per cent on luck. But how can this be measured objectively?
    It is this question the Heidelberg researchers investigated in their game-theoretic study. Using data from more than four million online games of chess, poker, and skat, they developed a rating system for poker and skat based on the Elo method for chess, which calculates the relative skill levels of individual players. “Because chess is purely a game of skill, the rating distribution is very wide, ranging from 1,000 for a novice to over 2,800 for the current world champion. So the wider the distribution, the more important skill is,” explains Dr Peter Dürsch. In a game involving more luck and chance, the numbers are therefore not likely to be so far apart.
    The Heidelberg research confirms exactly that: the distribution is much narrower in poker and skat. Whereas the standard deviation — the average deviation from the mean — for chess is over 170, the other two games did not exceed 30. To create a standard of comparison for a game involving more than 50 per cent luck, the researchers replaced every other game in their chess data set with a coin toss. This produced a deviation of 45, which is still much higher than poker and skat. “Both games fall below the 50 per cent skill level, and therefore depend mainly on luck,” states Marco Lambrecht. “Skill, however, does prevail in the long run. Our analyses show that after about one hundred games, a poker player who is one standard deviation better than his opponent is 75 per cent more likely to have won more games than his opponent.”
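    The rating machinery is the same as in chess. A minimal Elo-style update is sketched below; the K-factor and starting ratings are common chess defaults, assumed here for illustration.

    ```python
    # Minimal Elo rating: expected score and post-game update.
    def expected_score(r_a: float, r_b: float) -> float:
        """Expected score of player A against player B."""
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

    def update(r_a: float, r_b: float, score_a: float, k: float = 20.0):
        """New ratings after one game (score_a: 1 = win, 0.5, 0 = loss)."""
        e_a = expected_score(r_a, r_b)
        return r_a + k * (score_a - e_a), r_b - k * (score_a - e_a)

    print(round(expected_score(2800, 1000), 5))  # top player vs novice: ~1.0
    print(update(1500, 1500, 1))                 # (1510.0, 1490.0)
    ```

    In a luck-heavy game, outcomes track these expectations only weakly, so ratings stay bunched near the starting value; in chess, repeated wins by stronger players pull the distribution apart, which is the spread the researchers measured.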
    In principle, the method can be applied to all games where winners are determined, the researchers report. The percentage of skill in the popular card game Mau-Mau, for example, is far lower than in poker, whereas the Chinese board game Go involves even more skill than chess.

    Story Source:
    Materials provided by University of Heidelberg. Note: Content may be edited for style and length.