More stories

  • Artificial intelligence tool may help predict heart attacks

    Investigators from Cedars-Sinai have created an artificial intelligence-enabled tool that may make it easier to predict if a person will have a heart attack.
    The tool, described in The Lancet Digital Health, accurately predicted which patients would experience a heart attack in five years based on the amount and composition of plaque in arteries that supply blood to the heart.
    Plaque buildup can cause arteries to narrow, which makes it difficult for blood to get to the heart, increasing the likelihood of a heart attack. A medical test called a coronary computed tomography angiography (CTA) takes 3D images of the heart and arteries and can give doctors an estimate of how much a patient’s arteries have narrowed. Until now, however, there has not been a simple, automated and rapid way to measure the plaque visible in the CTA images.
    “Coronary plaque is often not measured because there is not a fully automated way to do it,” said Damini Dey, PhD, director of the quantitative image analysis lab in the Biomedical Imaging Research Institute at Cedars-Sinai and senior author of the study. “When it is measured, it takes an expert at least 25 to 30 minutes, but now we can use this program to quantify plaque from CTA images in five to six seconds.”
    Dey and colleagues analyzed CTA images from 1,196 people who underwent a coronary CTA at 11 sites in Australia, Germany, Japan, Scotland and the United States. The investigators trained the AI algorithm to measure plaque by having it learn from coronary CTA images from 921 people that had already been analyzed by trained doctors.
    The algorithm works by first outlining the coronary arteries in 3D images, then identifying the blood and plaque deposits within the coronary arteries. Investigators found the tool’s measurements corresponded with plaque amounts seen in coronary CTAs. They also matched results with images taken by two invasive tests considered to be highly accurate in assessing coronary artery plaque and narrowing: intravascular ultrasound and catheter-based coronary angiography.
    Finally, the investigators discovered that measurements made by the AI algorithm from CTA images accurately predicted heart attack risk within five years for 1,611 people who were part of a multicenter trial called the SCOT-HEART trial.
    “More studies are needed, but it’s possible we may be able to predict if and how soon a person is likely to have a heart attack based on the amount and composition of the plaque imaged with this standard test,” said Dey, who is also professor of Biomedical Sciences at Cedars-Sinai.
    Dey and colleagues are continuing to study how well their AI algorithm quantifies plaque deposits in patients who undergo coronary CTA.
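    In rough outline, that two-stage measurement might be sketched as follows. This is a minimal illustration assuming hypothetical vessel_model and plaque_model segmentation networks standing in for the trained components; it is not the published tool.

    ```python
    # A minimal sketch of the two-stage pipeline described above, with
    # hypothetical segmentation models; interfaces are illustrative,
    # not the published Cedars-Sinai tool.
    import numpy as np

    def quantify_plaque(cta_volume: np.ndarray,
                        vessel_model,
                        plaque_model,
                        voxel_volume_mm3: float) -> dict:
        """Estimate plaque and lumen volumes from a 3D coronary CTA volume."""
        # Stage 1: outline the coronary arteries in the 3D image.
        vessel_mask = vessel_model.predict(cta_volume) > 0.5
        # Stage 2: within the vessels, separate plaque deposits from blood (lumen).
        plaque_mask = (plaque_model.predict(cta_volume) > 0.5) & vessel_mask
        lumen_mask = vessel_mask & ~plaque_mask
        # Convert voxel counts to physical volumes.
        return {
            "plaque_volume_mm3": float(plaque_mask.sum()) * voxel_volume_mm3,
            "lumen_volume_mm3": float(lumen_mask.sum()) * voxel_volume_mm3,
        }
    ```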
    Funding: The study was funded by the National Heart, Lung, and Blood Institute under award number 1R01HL148787-01A1.
    Story Source:
    Materials provided by Cedars-Sinai Medical Center.

  • Design tweak helps prevent malfunction in yarns designed to store energy

    In a new study, North Carolina State University researchers found a way to prevent electrical malfunctions in yarns designed to store electrical energy. Ultimately, the findings could help advance the development of “smart textiles” that would capture energy from the wearer’s movements and power sensors and wearable electronics.
    The researchers reported in npj Flexible Electronics that they were able to prevent short-circuiting in yarns that act as supercapacitors — which are electrical devices that store energy — by wrapping the yarns with an insulating thread. They also tested the strength and durability of the yarns to make sure they could still work after going through knitting and weaving processes.
    “A supercapacitor functions like a battery, but in this case, we’re working on a flexible battery shaped as a textile yarn that you could weave or knit into your T-shirt or sweater,” said Wei Gao, associate professor of textile engineering, chemistry and science and a University Faculty Scholar at NC State. “In this study, we have woven this yarn into a piece of fabric so that it can store electrical energy, and eventually we want to use it to power whatever electronic devices you need, whether it be a sensor, a light or even a cell phone.”
    While research into these so-called “yarn-shaped supercapacitors” is promising, researchers say developers face a consistent problem with their design: the yarn-shaped supercapacitors become more likely to short-circuit as their length increases. Short-circuiting occurs when electric current flows through an unintended path. It is a safety concern because a short circuit can produce a burst of heat or even a fire.
    “Everybody is trying to make smart electronics that can be incorporated into cloth or fabric,” Gao said. “What we found is if you try to make a supercapacitor yarn longer than 8 inches, it’s pretty easy for this device to short-circuit. It’s pretty dangerous, and it’s something nobody wants to encounter when wearing a smart suit.”
    To solve that problem, the researchers tested what would happen when they wrapped the supercapacitor yarn electrodes with insulating threads. The idea was that the threads would act as a physical barrier, keeping the opposite electrodes from contacting each other and preventing short-circuiting. They tested their device’s performance by connecting the electrodes to a power source and recording the device’s current response. They also tested how well the yarns were able to hold a charge, finding that the yarns kept 90% of their initial energy after 10,000 charge-discharge cycles.
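    For a sense of the arithmetic behind that durability figure: in a constant-current discharge test, capacitance is C = I·dt/dV, and retention is simply the ratio across cycles. A minimal sketch with invented numbers, not the study’s data:

    ```python
    # A hedged sketch of a capacitance-retention calculation from
    # constant-current discharge data; all numbers are made up, not the
    # study's measurements.

    def capacitance_farads(current_a: float, discharge_time_s: float,
                           voltage_drop_v: float) -> float:
        """C = I * dt / dV for a constant-current discharge segment."""
        return current_a * discharge_time_s / voltage_drop_v

    c_initial = capacitance_farads(0.001, 120.0, 0.8)  # cycle 1
    c_cycled = capacitance_farads(0.001, 108.0, 0.8)   # after 10,000 cycles

    # At a fixed operating voltage, stored energy E = C * V^2 / 2 scales
    # with capacitance, so 90% capacitance retention means ~90% energy
    # retention.
    print(f"retention: {c_cycled / c_initial:.0%}")  # -> 90%
    ```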

  • Social media data could help predict the next COVID surge

    In the summer of 2021, as the third wave of the COVID-19 pandemic wore on in the United States, infectious disease forecasters began to call attention to a disturbing trend.
    The previous January, as models warned that U.S. infections would continue to rise, cases plummeted instead. In July, as forecasts predicted infections would flatten, the Delta variant soared, leaving public health agencies scrambling to reinstate mask mandates and social distancing measures.
    “Existing forecast models generally did not predict the big surges and peaks,” said geospatial data scientist Morteza Karimzadeh, an assistant professor of geography at CU Boulder. “They failed when we needed them most.”
    New research from Karimzadeh and his colleagues suggests that a new approach, using artificial intelligence and vast, anonymized datasets from Facebook, could not only yield more accurate COVID-19 forecasts but also revolutionize the way we track other infectious diseases, including the flu.
    Their findings, published in the International Journal of Data Science and Analytics, conclude that this short-term forecasting method significantly outperforms conventional models at projecting COVID trends at the county level.
    Karimzadeh’s team is now one of about a dozen groups, including teams from Columbia University and the Massachusetts Institute of Technology (MIT), submitting weekly projections to the COVID-19 Forecast Hub, a repository that aggregates the best data possible to create an “ensemble forecast” for the Centers for Disease Control and Prevention. Their forecasts generally rank in the top two for accuracy each week.
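    As a rough illustration of how a hub-style ensemble can combine many teams’ submissions, a simple week-by-week median across models looks like this; the Hub’s actual aggregation is more sophisticated, and the numbers are invented:

    ```python
    # A minimal sketch of an ensemble forecast: combine several models'
    # weekly case projections with a median. The Hub's real aggregation
    # is more involved, and these numbers are invented.
    import numpy as np

    model_forecasts = {
        "model_a": [1200, 1350, 1500, 1700],
        "model_b": [1100, 1250, 1400, 1600],
        "model_c": [1300, 1450, 1650, 1900],
    }

    weekly = np.array(list(model_forecasts.values()))
    ensemble = np.median(weekly, axis=0)  # week-by-week median across models
    print(ensemble)  # -> [1200. 1350. 1500. 1700.]
    ```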

  • Tiny, cheap solution for quantum-secure encryption

    It’s fairly reasonable to assume that an encrypted email can’t be seen by prying eyes. That’s because, unless you are the intended recipient, breaking through most of the encryption systems we use on a day-to-day basis would require the answer to a mathematical problem that’s nearly impossible for a computer to solve in a reasonable amount of time.
    Nearly impossible for modern-day computers, at least.
    “If quantum computing becomes a reality, however, some of those problems are not hard anymore,” said Shantanu Chakrabartty, the Clifford W. Murphy Professor and vice dean for research and graduate education in the Preston M. Green Department of Electrical & Systems Engineering at the McKelvey School of Engineering.
    These new computing paradigms are already becoming a reality and could soon be deployable. Hackers are preparing by storing encrypted transactions now, with the expectation that they will be able to decipher the information later.
    Chakrabartty’s lab at Washington University in St. Louis proposes a security system that is not only resistant to quantum attacks, but is also inexpensive, more convenient, and scalable without the need for fancy new equipment.
    This research will appear in IEEE Transactions on Information Forensics and Security.
    Security today is often managed by key distribution systems in which one person sends information hidden behind a key, perhaps a long string of seemingly unassociated numbers. The receiver can access that information only with another, specific key. The two keys are related in a mathematical way that is nearly impossible to guess, but that can be solved easily with the right algorithm or a quantum computer.
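    To make that relationship concrete, here is a textbook RSA toy example of two mathematically related keys; the primes are tiny for illustration, while real deployments use keys thousands of bits long:

    ```python
    # A toy illustration of mathematically related keys (textbook RSA
    # with tiny primes). Real systems use 2048-bit or larger keys; these
    # numbers are for demonstration only.

    p, q = 61, 53             # secret primes
    n = p * q                 # public modulus (3233)
    phi = (p - 1) * (q - 1)   # Euler's totient (3120), kept secret
    e = 17                    # public exponent
    d = pow(e, -1, phi)       # private exponent: modular inverse of e

    message = 65
    ciphertext = pow(message, e, n)    # anyone can encrypt with (e, n)
    recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
    assert recovered == message

    # Recovering d from (e, n) requires factoring n into p and q, which
    # is infeasible classically at real key sizes but easy for a large
    # quantum computer running Shor's algorithm.
    ```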

  • Tackling large data sets and many-parameter problems in particle physics

    One of the major challenges in particle physics is how to interpret large data sets that consist of many different observables in the context of models with different parameters.
    A new paper published in EPJ Plus, authored by Ursula Laa from the Institute of Statistics at BOKU University, Vienna, and German Valencia from the School of Physics and Astronomy, Monash University, Clayton, Australia, looks at simplifying large-data-set, many-parameter problems using tools that split large parameter spaces into a small number of regions.
    “We applied our tools to the so-called B-anomaly problem. In this problem there is a large number of experimental results and a theory that predicts them in terms of several parameters,” Laa says. “The problem has received much attention because the preferred parameters to explain the observations do not correspond to those predicted by the standard model of particle physics, and as such the results would imply new physics.”
    Valencia continues by explaining the paper shows how the Pandemonium tool can provide an interactive graphical way to study the connections between characteristics in the observations and regions of parameter space.
    “In the B-anomaly problem, for example, we can clearly visualise the tension between two important observables that have been singled out in the past,” Valencia says. “We can also see which improved measurements would be best to address that tension.
    “This can be most helpful in prioritising future experiments to address unresolved questions.”
    Laa elaborates by explaining that the methods the duo developed and used are applicable to many other problems, in particular for models and observables that are less well understood than the applications discussed in the paper, such as multi-Higgs models.
    “A challenge is the visualization of multidimensional parameter spaces; the current interface only allows the user to visualise high-dimensional data spaces interactively,” Laa concludes. “The challenge is to automate this, which will be addressed in future work, using techniques from dimension reduction.”
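    As a heavily simplified sketch of the splitting idea, not the Pandemonium tool itself, one can sample a many-parameter model, evaluate its predicted observables, and cluster the samples into a few regions that give similar predictions; the toy model below is a hypothetical stand-in:

    ```python
    # A heavily simplified sketch of splitting a parameter space into a
    # few regions by the predictions it produces. The toy model below is
    # hypothetical, not the paper's B-physics setup.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    params = rng.uniform(-1.0, 1.0, size=(5000, 6))  # 6 model parameters

    def predict_observables(theta: np.ndarray) -> np.ndarray:
        """Hypothetical map from parameter points to three observables."""
        return np.column_stack([
            theta[:, 0] * theta[:, 1],
            np.sin(theta[:, 2]) + theta[:, 3] ** 2,
            theta[:, 4] - theta[:, 5],
        ])

    obs = predict_observables(params)

    # Group parameter points whose predicted observables are similar,
    # yielding a small number of regions to examine interactively.
    regions = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(obs)
    print(np.bincount(regions))  # sampled points falling in each region
    ```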
    Story Source:
    Materials provided by Springer.

  • The opto-ionic effect: Light may increase performance of fuel cells and lithium-ion batteries

    Lithium-ion batteries, fuel cells and many other devices depend on the high mobility of ions in order to work properly. But there are a large number of obstacles to such mobility. A research team led by Jennifer L. M. Rupp of the Technical University of Munich (TUM) and Harry L. Tuller of the Massachusetts Institute of Technology (MIT) has now shown for the first time that light can be used to increase the mobility of ions and improve the performance of such devices.
    A charge can be transported by a material in a number of different ways. The most familiar is the electrical conductivity of metals, where the charge is borne by electrons. In many devices, however, ions transport the charge. One example is lithium-ion batteries in which lithium ions move during charging and discharging. Similarly, fuel cells rely on the transport of hydrogen and oxygen ions in order to conduct electricity.
    Ceramics are currently being investigated as solid electrolytes for transporting oxygen ions. But: “What we find is that the ionic conductivity — the rate at which the ions can move and, therefore, how efficient the resulting device can be — is often markedly degraded by the fact that the ions get blocked at grain boundaries,” says Prof. Harry L. Tuller of the Massachusetts Institute of Technology.
    Light puts ions on the go
    In their current publication, Tuller and his colleague Jennifer L. M. Rupp, professor of solid-state electrolyte chemistry at the Technical University of Munich, show how light can be used to reduce the barriers encountered by ions at ceramic grain boundaries.
    Many devices based on ion conductivity, such as solid-oxide fuel cells, have to operate at very high temperatures in order for the ions to be able to overcome the grain-boundary barriers. Operating temperatures of up to 700 degrees Celsius, however, present their own challenges: materials age faster, and the infrastructure for maintaining these high temperatures is costly.
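    That temperature dependence follows the thermally activated (Arrhenius) form of ionic conductivity, which is why lowering the effective grain-boundary barrier lets a device run cooler. A numeric sketch with illustrative barrier values, not measurements from the study:

    ```python
    # A hedged sketch of Arrhenius-type ionic transport: conductivity
    # scales with exp(-Ea / kT), so lowering the effective grain-boundary
    # barrier Ea gives the same activation factor at a lower temperature.
    # Barrier values are illustrative, not from the study.
    import math

    K_B = 8.617e-5  # Boltzmann constant in eV/K

    def arrhenius_factor(barrier_ev: float, temp_k: float) -> float:
        """Thermal activation factor exp(-Ea / kT), up to a material prefactor."""
        return math.exp(-barrier_ev / (K_B * temp_k))

    # A 1.0 eV barrier at ~973 K (700 degrees Celsius) ...
    f_high_t = arrhenius_factor(1.0, 973.0)
    # ... matches a 20% lower barrier at a 20% lower absolute temperature.
    f_low_t = arrhenius_factor(0.8, 0.8 * 973.0)
    print(f_high_t, f_low_t)  # the two activation factors are equal
    ```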

  • Active video games provide alternative workout

    Working out isn’t known for being fun. But new active video and virtual reality games may help change that.
    Exergaming, or active video gaming, may be the perfect introduction to helping people be more active, according to new research from the University of Georgia.
    Just about anyone can find an exergame to match their interests. Dance Dance Revolution, EA Sports Active and Beat Saber are some of the more popular exergames. Active gaming options exist for most gaming consoles, like Xbox and Nintendo. And previous research has shown that exergaming can have physical benefits, particularly when it takes the place of traditional sedentary video gaming.
    This study showed that exergamers felt high levels of satisfaction and a sense of autonomy over their exercise regimen.
    “When an individual feels autonomous, they’re more likely to exercise or exergame on their own,” said Sami Yli-Piipari, co-author of the study and an associate professor in the Department of Kinesiology in the Mary Frances Early College of Education. “They feel ownership over what they are doing, and they’re doing it for themselves, so it’s more likely they will keep up the activity.”
    Traditional exercise, such as weightlifting or running, doesn’t appeal to some people. But they might be open to active video gaming because it doesn’t seem like exercise. It’s just fun.

  • 'Off label' use of imaging databases could lead to bias in AI algorithms

    Significant advances in artificial intelligence (AI) over the past decade have relied upon extensive training of algorithms using massive, open-source databases. But when such datasets are used “off label” and applied in unintended ways, the results are subject to machine learning bias that compromises the integrity of the AI algorithm, according to a new study by researchers at the University of California, Berkeley, and the University of Texas at Austin.
    The findings, published this week in the Proceedings of the National Academy of Sciences, highlight the problems that arise when data published for one task are used to train algorithms for a different one.
    The researchers noticed this issue when they failed to replicate the promising results of a medical imaging study. “After several months of work, we realized that the image data used in the paper had been preprocessed,” said study principal investigator Michael Lustig, UC Berkeley professor of electrical engineering and computer sciences. “We wanted to raise awareness of the problem so researchers can be more careful and publish results that are more realistic.”
    The proliferation of free online databases over the years has helped support the development of AI algorithms in medical imaging. For magnetic resonance imaging (MRI), in particular, improvements in algorithms can translate into faster scanning. Obtaining an MR image involves first acquiring raw measurements that code a representation of the image. Image reconstruction algorithms then decode the measurements to produce the images that clinicians use for diagnostics.
    Some datasets, such as the well-known ImageNet, include millions of images. Datasets of medical images can be used to train the AI algorithms that decode the measurements obtained in a scan. Study lead author Efrat Shimron, a postdoctoral researcher in Lustig’s lab, said new and inexperienced AI researchers may be unaware that the files in these medical databases are often preprocessed, not raw.
    As many digital photographers know, raw image files contain more data than their compressed counterparts, so training AI algorithms on databases of raw MRI measurements is important. But such databases are scarce, so software developers sometimes download databases with processed MR images, synthesize seemingly raw measurements from them, and then use those to develop their image reconstruction algorithms.
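    In simplified form, that synthesis amounts to Fourier-transforming a processed image to stand in for the scanner’s raw measurements. The single-coil, magnitude-only sketch below is illustrative, since real raw data are complex-valued and multi-coil:

    ```python
    # A simplified sketch of synthesizing "raw" MRI measurements from an
    # already-processed image via a Fourier transform. Real raw data are
    # complex-valued and multi-coil; this example is illustrative only.
    import numpy as np

    processed_image = np.random.rand(256, 256)  # stand-in for a database image

    # Simulate the scanner's frequency-domain (k-space) measurements.
    synthetic_kspace = np.fft.fft2(processed_image)

    # "Reconstruct" by decoding the measurements back into an image.
    reconstruction = np.abs(np.fft.ifft2(synthetic_kspace))
    assert np.allclose(reconstruction, processed_image)

    # The round trip is exact because the measurements came from the
    # image itself. Any preprocessing baked into the database image
    # (filtering, compression, discarded phase) is inherited by the
    # synthetic k-space, which can make reconstruction algorithms look
    # better in testing than they would on genuinely raw scanner data.
    ```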