More stories

  • Statistical oversight could explain inconsistencies in nutritional research

    People often wonder why one nutritional study tells them that eating too many eggs, for instance, will lead to heart disease while another tells them the opposite. The answer to this and other conflicts between food studies may lie in the use of statistics, according to a report published today in the American Journal of Clinical Nutrition.
    The research, led by scientists at the University of Leeds and The Alan Turing Institute, the national institute for data science and artificial intelligence, reveals that the standard and most common statistical approach to studying the relationship between food and health can give misleading and meaningless results.
    Lead author Georgia Tomova, a PhD researcher in the University of Leeds’ Institute for Data Analytics and The Alan Turing Institute, said: “These findings are relevant to everything we think we know about the effect of food on health.
    “It is well known that different nutritional studies tend to find different results. One week a food is apparently harmful and the next week it is apparently good for you.”
    The researchers found that the widespread practice of statistically controlling, or allowing for, someone’s total energy intake can lead to dramatic changes in the interpretation of the results.
    Controlling for other foods eaten can then further skew the results, so that a harmful food appears beneficial or vice versa.
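    The kind of interpretive shift the researchers describe can be illustrated with a toy simulation (an illustrative sketch, not the authors' actual analysis). Suppose an outcome depends only on total energy intake: regressing it on one food without adjustment makes the food look harmful, while adjusting for total energy makes its apparent effect vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
eggs = rng.normal(500, 100, n)    # energy from the food of interest (kcal)
other = rng.normal(1500, 300, n)  # energy from everything else (kcal)
total = eggs + other              # total energy intake
# Suppose risk depends ONLY on total energy, not on eggs specifically.
risk = 0.001 * total + rng.normal(0, 0.1, n)

def ols(y, columns):
    """Ordinary least squares; returns [intercept, slope_1, slope_2, ...]."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_unadj = ols(risk, [eggs])[1]       # eggs alone: appears harmful
b_adj = ols(risk, [eggs, total])[1]  # adjusted for total energy: effect vanishes
print(f"unadjusted: {b_unadj:.4f}, energy-adjusted: {b_adj:.4f}")
```

    The point of the sketch is that neither coefficient is "the" effect of the food; which adjustment is appropriate depends on the causal question being asked.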

  • Smelling in VR environment possible with new gaming technology

    An odor machine, a so-called olfactometer, makes it possible to smell in VR environments. First up is a “wine tasting game” in which the user smells wine in a virtual wine cellar and earns points for correctly guessing the aromas in each wine. The new technology, which can be produced on 3D printers, was developed in a collaboration between Stockholm University and Malmö University. The research, funded by the Marianne and Marcus Wallenberg Foundation, was recently published in the International Journal of Human-Computer Studies.
    “We hope that the new technical possibilities will lead to scents having a more important role in game development,” says Jonas Olofsson, professor of psychology and leader of the research project at Stockholm University.
    In the past, computer games have focused mostly on what we can see — moving images on screens. Other senses have not been present. But an interdisciplinary research group at Stockholm University and Malmö University has now constructed a scent machine that can be controlled by a gaming computer. In the game, the participant moves in a virtual wine cellar, picking up virtual wine glasses containing different types of wine, guessing the aromas. The small scent machine is attached to the VR system’s controller, and when the player lifts the glass, it releases a scent.
    “The possibility to move on from a passive to a more active sense of smell in the game world paves the way for the development of completely new smell-based game mechanics based on the players’ movements and judgments,” says Simon Niedenthal, interaction and game researcher at Malmö University.
    The olfactometer consists of four valves, each connected to a channel; in the middle, a fan draws the air into a tube. Through the computer, the player controls how far each of the four channels opens, producing different scent mixtures: blends that can mimic the complexity of a real glass of wine. The game has several difficulty levels of increasing complexity.
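    The blending step described above can be sketched in a few lines. This is a hypothetical control interface, not the device's actual API (which the article does not describe): a four-component aroma recipe is normalized into per-valve openings that preserve the mixture's ratios.

```python
def blend_to_openings(recipe, max_open=1.0):
    """Map a 4-channel aroma recipe (arbitrary non-negative weights)
    to valve openings in [0, max_open], preserving the mixture's ratios."""
    if len(recipe) != 4:
        raise ValueError("olfactometer has four channels")
    peak = max(recipe)
    if peak <= 0:
        return [0.0] * 4  # all valves closed
    # Scale so the strongest component's valve is fully open.
    return [max_open * w / peak for w in recipe]

# e.g. a hypothetical "wine" blend: fruity, oaky, floral, earthy
openings = blend_to_openings([0.8, 0.4, 0.2, 0.0])
print(openings)  # [1.0, 0.5, 0.25, 0.0]
```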
    “In the same way that a normal computer game becomes more difficult the better the player becomes, the scent game can also challenge players who already have a sensitive nose. This means the scent machine could even be used to train wine tasters or perfumers,” says Jonas Olofsson.

  • Does reducing screen time increase productivity? Not necessarily

    Have you ever been accused (or accused someone else) of wasting too much time by looking at a cellphone? Turns out, that time might not be wasted time at all.
    According to research by Kaveh Abhari of San Diego State University and Isaac Vaghefi of the City University of New York, using existing smartphone applications to monitor cellphone screen time can enhance focused or mindful cellphone usage, which, in turn, leads to higher perceived productivity and user satisfaction. The research was recently published in AIS Transactions on Human-Computer Interaction (THCI).
    The Positive Effect of Self-Monitoring
    Abhari (associate professor of management information systems at SDSU’s Fowler College of Business) and Vaghefi (assistant professor of information systems at the Zicklin School of Business at Baruch College) said while there was substantial research establishing the negative effects of cellphone screen time (intolerance, withdrawal, and conflict with job-related tasks), their research was designed to determine if self-regulatory behaviors could lead to modified user behavior for more positive outcomes.
    “We theorized that individuals who tracked their cellphone usage and set goals surrounding that usage tended to have enhanced productivity and contentment with their productivity as they met their stated objectives,” said Abhari. “Previous research has shown that goal setting tends to raise performance expectations and we wanted to see if this theory held true for smartphone screen time as well.”
    Putting it to the Test
    To make this determination, the researchers surveyed 469 university undergraduates in California, New York, and Hawaii. The three-week survey required all participants to complete four questionnaires, and about half of them were also required to download a screen-monitoring application to their phones. This app allowed users to monitor their cellphone screen time and to set limits or goals for it.
    When the results were analyzed, researchers measured the perceived productivity of screen time reported by those surveyed, as well as the amount of screen time and the fatigue associated with self-monitoring. They also reviewed participants’ contentment with their productivity achieved through cellphone screen time. “Self-monitoring appears necessary to encourage the optimized use of smartphones,” said Abhari. “The results suggest that optimizing but not minimizing screen time is more likely to increase user productivity.”
    The Effect of Fatigue
    However, the researchers also found that self-monitoring induces fatigue, which weakens its effect on productivity, though fatigue was not a significant factor in the relationship between self-monitoring and contentment with productivity achievement.
    In conclusion, Abhari and Vaghefi determined that while uncontrolled cellphone use (or cellphone addiction) could negatively impact people’s lives, monitored screen time — particularly monitored screen time with specific goals in mind — can result in positive outcomes and higher overall user satisfaction. “This study could lead system developers to embed features into mobile devices that enable self-monitoring,” said Abhari. “These features could improve quality screen time and enhance the relationship between humans and digital technology.”
    Story Source:
    Materials provided by San Diego State University. Original written by Suzanne Finch. Note: Content may be edited for style and length.

  • Seeing electron movement at fastest speed ever could help unlock next-level quantum computing

    The key to maximizing traditional or quantum computing speeds lies in our ability to understand how electrons behave in solids, and a collaboration between the University of Michigan and the University of Regensburg has captured electron movement at attosecond timescales, the finest time resolution yet.
    Seeing electrons move in increments of one quintillionth of a second could help push processing speeds up to a billion times faster than what is currently possible. In addition, the research offers a “game-changing” tool for the study of many-body physics.
    “Your current computer’s processor operates in gigahertz, that’s one billionth of a second per operation,” said Mackillo Kira, U-M professor of electrical engineering and computer science, who led the theoretical aspects of the study published in Nature. “In quantum computing, that’s extremely slow because electrons within a computer chip collide trillions of times a second and each collision terminates the quantum computing cycle.
    “What we’ve needed, in order to push performance forward, are snapshots of that electron movement that are a billion times faster. And now we have it.”
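    The "billion times faster" figure in the quote can be checked with simple arithmetic: a gigahertz clock cycle lasts about one nanosecond, while attosecond snapshots resolve quintillionths of a second.

```python
# Rough orders of magnitude behind the "billion times faster" claim.
GHZ_CYCLE = 1e-9    # one operation per nanosecond at ~1 GHz
ATTOSECOND = 1e-18  # one quintillionth of a second

ratio = GHZ_CYCLE / ATTOSECOND
print(f"attosecond snapshots are ~{ratio:.0e}x finer than a GHz cycle")
# ~1e+09, i.e. about a billion times
```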
    Rupert Huber, professor of physics at the University of Regensburg and corresponding author of the study, said the result’s potential impact in the field of many-body physics could surpass its computing impact.
    “Many-body interactions are the microscopic driving forces behind the most coveted properties of solids — ranging from optical and electronic feats to intriguing phase transitions — but they have been notoriously difficult to access,” said Huber, who led the experiment. “Our solid-state attoclock could become a real game changer, allowing us to design novel quantum materials with more precisely tailored properties and help develop new materials platforms for future quantum information technology.”
    To see electron movement within two-dimensional quantum materials, researchers typically use short bursts of focused extreme ultraviolet (XUV) light. Those bursts can reveal the activity of electrons attached to an atom’s nucleus. But the large amounts of energy carried in those bursts prevent clear observation of the electrons that travel through semiconductors — as in current computers and in materials under exploration for quantum computers.

  • Are smartwatch health apps to detect atrial fibrillation smart enough?

    Extended cardiac monitoring in patients and the use of implantable cardiovascular electronic devices can increase detection of atrial fibrillation (AF), but the devices have limitations including short battery life and lack of immediate feedback. Can new smartphone tools that can record an electrocardiogram (ECG) strip and make an automated diagnosis overcome these limitations and facilitate timely diagnosis? The largest study to date, in the Canadian Journal of Cardiology, published by Elsevier, finds that the use of these devices is challenging in patients with abnormal ECGs. Better algorithms and machine learning may help these tools provide more accurate diagnoses, investigators say.
    “Earlier studies have validated the accuracy of the Apple Watch for the diagnosis of AF in a limited number of patients with similar clinical profiles,” explained lead investigator Marc Strik, MD, PhD, LIRYC institute, Bordeaux University Hospital, Bordeaux, France. “We tested the accuracy of the Apple Watch ECG app to detect AF in patients with a variety of coexisting ECG abnormalities.”
    The study included 734 consecutive hospitalized patients. Each patient underwent a 12-lead ECG, immediately followed by a 30-second Apple Watch recording. The smartwatch’s automated single-lead ECG AF detections were classified as “no signs of atrial fibrillation,” “atrial fibrillation,” or “inconclusive reading.” Smartwatch recordings were given to an electrophysiologist who conducted a blinded interpretation, assigning each tracing a diagnosis of “AF,” “absence of AF,” or “diagnosis unclear.” A second blinded electrophysiologist interpreted 100 randomly selected traces to determine the extent to which the observers agreed.
    In approximately one in every five patients, the smartwatch ECG failed to produce an automatic diagnosis. The risk of a false positive automated AF detection was higher for patients with premature atrial and ventricular contractions (PACs/PVCs), sinus node dysfunction, and second- or third-degree atrioventricular block. For patients in AF, the risk of a false negative tracing (missed AF) was higher for patients with ventricular conduction abnormalities (interventricular conduction delay) or rhythms controlled by an implanted pacemaker.
    The cardiac electrophysiologists had a high level of agreement in differentiating AF from non-AF. The smartwatch app correctly identified 78% of the patients who were in AF and 81% of those who were not; the electrophysiologists identified 97% of the patients who were in AF and 89% of those who were not.
    Patients with PVCs were three times more likely to have false positive AF diagnoses from the smartwatch ECG, and the identification of patients with atrial tachycardia (AT) and atrial flutter (AFL) was very poor.
    “These observations are not surprising, as smartwatch automated detection algorithms are based solely on cycle variability,” Dr. Strik noted, explaining that PVCs cause short and long cycles, which increase cycle variability. “Ideally, an algorithm would better discriminate between PVCs and AF. Any algorithm limited to the analysis of cycle variability will have poor accuracy in detecting AT/AFL. Machine learning approaches may increase smartwatch AF detection accuracy in these patients.”
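    A naive detector of the kind Dr. Strik describes, based purely on cycle (RR-interval) variability, can be sketched as follows. This is an illustrative toy with an arbitrary threshold, not the Apple Watch's actual algorithm: it flags a chaotic rhythm as possible AF, but the short-long cycles caused by a single PVC can push an otherwise regular rhythm over the same threshold, producing exactly the false positives the study reports.

```python
def rr_irregular(rr_ms, cv_threshold=0.10):
    """Flag possible AF from a list of RR intervals (ms) using the
    coefficient of variation of cycle length -- variability alone,
    with no beat-morphology information."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    var = sum((x - mean) ** 2 for x in rr_ms) / n
    cv = var ** 0.5 / mean
    return cv > cv_threshold

sinus = [800] * 12                                  # steady 75 bpm
af = [640, 910, 720, 1050, 580, 880, 760, 990,
      610, 940, 700, 860]                           # chaotic cycles
pvc = [800] * 9 + [500, 1100, 800]                  # one PVC: short-long pair

print(rr_irregular(sinus))  # False
print(rr_irregular(af))     # True
print(rr_irregular(pvc))    # True -- false positive from a single PVC
```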
    In an accompanying editorial, Andrés F. Miranda-Arboleda, MD, and Adrian Baranchuk, MD, Division of Cardiology, Kingston Health Science Center, Kingston, ON, Canada, observed that this is the first “real-world” study focusing on the use of the Apple Watch as a diagnostic tool for AF.
    “It is of remarkable importance because it allowed us to learn the performance of the Apple Watch in the diagnosis of AF is significantly affected by the presence of underlying ECG abnormalities. In a certain manner, the smartwatch algorithms for the detection of AF in patients with cardiovascular conditions are not yet smart enough. But they may soon be,” Dr. Miranda-Arboleda and Dr. Baranchuk said.
    “With the growing use of smartwatches in medicine, it is important to know which medical conditions and ECG abnormalities could impact and alter the detection of AF by the smartwatch in order to optimize the care of our patients,” Dr Strik said. “Smartwatch detection of AF has great potential, but it is more challenging in patients with pre-existing cardiac disease.”
    Story Source:
    Materials provided by Elsevier. Note: Content may be edited for style and length.

  • Physicists use 'electron correlations' to control topological materials

    For the first time, U.S. and European physicists have found a way to switch the topological state of a quantum material on and off.
    Because they are extremely stable and have immutable features that cannot be erased or lost to quantum decoherence, topological states play an important role in materials research and quantum computing. In a study published in Nature Communications, researchers from Rice University, Austria’s Vienna University of Technology (TU Wien), Los Alamos National Laboratory and the Netherlands’ Radboud University described their method of using a magnetic field to activate and deactivate a topological state in a strongly correlated metal.
    “Topological properties are usually found in insulating materials with weak electron correlations,” said Rice study co-author Qimiao Si, a member of the Rice Quantum Initiative and director of the Rice Center for Quantum Materials (RCQM). “The material we study is metallic and is strongly correlated.”
    Strongly correlated quantum materials are those where the interactions of billions upon billions of electrons give rise to collective behaviors like unconventional superconductivity or electrons that behave as if they have more than 1,000 times their normal mass. Though physicists have studied topological materials for decades, they have only recently begun investigating topological metals that host strongly correlated interactions.
    Si, a theoretical physicist, has long collaborated with the study’s corresponding author, Silke Bühler-Paschen at TU Wien’s Institute of Solid State Physics. Si and Bühler-Paschen’s research groups previously made notable discoveries on topological states in strongly correlated quantum materials. In late 2017, Si’s theoretical group found a metallic topological state caused by the quintessential example of strong-correlation physics called the Kondo effect, and Bühler-Paschen’s experimental group observed the state in a composite material made of cerium, bismuth and palladium. The two teams named the strongly correlated state of matter a Weyl-Kondo semimetal.
    In the new study, Bühler-Paschen’s team found that small impurities or external disturbances did not bring about a dramatic change in the material’s topological properties, but the application of a laboratory-scale external magnetic field could.

  • Materials science engineers work on new material for computer chips

    The amount of energy used for computing is climbing at an exponential rate. Business intelligence and consulting firm Enerdata reports that the information and communication technology sector accounts for 5% to 9% of total electricity consumption worldwide.
    If growth continues unabated, computing could demand up to 20% of the world’s power generation by 2030. With power grids already under strain from weather-related events and the economy transitioning from fossil fuel to renewables, engineers desperately need to flatten computing’s energy demand curve.
    Members of Jon Ihlefeld’s multifunctional thin film group are doing their part. They are investigating a material system that will allow the semiconductor industry to co-locate computation and memory on a single chip.
    “Right now we have a computer chip that does its computing activities with a little bit of memory on it,” said Ihlefeld, associate professor of materials science and engineering and electrical and computer engineering at the University of Virginia School of Engineering and Applied Science.
    Every time the computer chip wants to talk to the larger memory bank, it sends a signal down the line, and that requires energy. The longer the distance, the more energy it takes. Today the distance can be quite far, up to several centimeters.
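    The distance-energy relationship can be put in rough numbers. The figures below are back-of-envelope assumptions, not from the article: the energy to drive a signal scales with wire capacitance, which grows roughly linearly with length.

```python
# Back-of-envelope: switching energy E = C * V^2, with wire capacitance
# proportional to length. ~0.2 pF/mm is a typical order of magnitude for
# long on-chip/package interconnect; both constants here are assumptions.
CAP_PER_MM = 0.2e-12  # farads per millimeter (assumed)
VDD = 1.0             # supply voltage in volts (assumed)

def signal_energy_joules(distance_mm):
    """Energy to charge the wire once for one signal transition."""
    return CAP_PER_MM * distance_mm * VDD ** 2

on_chip = signal_energy_joules(1)    # memory right next to the logic
off_chip = signal_energy_joules(30)  # several centimeters away
print(f"per-signal energy grows ~{off_chip / on_chip:.0f}x with distance")
```

    Under these assumptions, co-locating memory with computation cuts the per-signal interconnect energy in direct proportion to the shortened wire.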
    “In a perfect world, we would get them in direct contact with each other,” Ihlefeld said.

  • Predicting risk of aneurysm rupture

    Cerebral aneurysms appear in 5% to 8% of the general population. Rupture of the blood vessel and the resulting bleeding within the brain can lead to severe stroke or death. Over one quarter of patients who experience a hemorrhagic stroke die before reaching a health care facility.
    Predicting the rupture of aneurysms is crucial for medical prevention and treatment. In Physics of Fluids, by AIP Publishing, researchers from the Sree Chitra Tirunal Institute for Medical Sciences and Technology, Trivandrum, and the Indian Institute of Technology Madras developed a patient-specific mathematical model to examine which aneurysm parameters influence rupture risk prior to surgery.
    Aneurysms occur when the weakest point of a blood vessel thins, expands, and, past a certain limit, bursts. In the case of cerebral aneurysms such as an internal carotid artery bifurcation aneurysm, blood leaks into the intracranial cavity.
    “Since clinicians encounter these aneurysms at various growth stages, it motivated us to analyze internal carotid artery aneurysms in a systematic manner,” said B. Jayanand Sudhir, of the Sree Chitra Tirunal Institute for Medical Sciences and Technology. “The current study is a sincere and systematic attempt to address the dynamics of blood flow at various stages to understand the initiation, progression, and rupture risk.”
    The team examined the aspect ratio and size ratio of aneurysms, which together describe the shape and size of the bulge in a holistic manner. As these parameters increase and the aneurysm expands, both the stress on the aneurysm walls and the time blood spends within the aneurysm increase, raising the probability of rupture.
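    The two parameters have commonly used geometric definitions in the aneurysm literature: aspect ratio is dome height over neck width, and size ratio relates aneurysm height to the parent vessel's diameter. The function below is an illustrative sketch of those textbook definitions, not the authors' model.

```python
def aneurysm_ratios(dome_height_mm, neck_width_mm, parent_diameter_mm):
    """Return (aspect ratio, size ratio) for a saccular aneurysm.
    Higher values of either ratio are associated in this study with
    greater wall stress, longer blood residence time, and rupture risk."""
    if min(dome_height_mm, neck_width_mm, parent_diameter_mm) <= 0:
        raise ValueError("measurements must be positive")
    aspect_ratio = dome_height_mm / neck_width_mm
    size_ratio = dome_height_mm / parent_diameter_mm
    return aspect_ratio, size_ratio

# Hypothetical measurements for a mid-sized bifurcation aneurysm:
ar, sr = aneurysm_ratios(dome_height_mm=7.0, neck_width_mm=3.5,
                         parent_diameter_mm=4.0)
print(ar, sr)  # 2.0 1.75
```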
    Patient-specific computed tomography scans are fed into the model, which reconstructs the geometry and blood flow of the aneurysm. It then uses mathematical equations to describe the fluid flow, generating information about the blood vessel walls and blood flow patterns.
    “This was feasible due to the access we had to the national supercomputing cluster for performing the computational fluid dynamics-based simulations,” said S.V. Patnaik of the Indian Institute of Technology Madras.
    “The novelty of this work lies in close collaboration and amalgamation of expertise from clinical and engineering backgrounds,” said Sudhir. “The aneurysm models were of different shapes, which helped us build and understand the complexity of flow structures in multilobed cerebral aneurysms.”
    Multilobed aneurysms, which include more than one balloonlike pocket of expanding blood, contained more complex blood flow structures than their single-lobed counterparts.
    The authors hope to transform the rupture risk predictions into a user-friendly software to help clinicians and neurosurgeons prioritize and manage high-risk patients. They plan to use the model to assess the effectiveness of different treatment options for aneurysms.
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.