More stories

  • Avatar marketing: Moving beyond gimmicks to results

    Researchers from the University of Texas-Arlington, the University of Virginia, Sun Yat-Sen University, and the University of Washington have published a new paper in the Journal of Marketing that seeks to advance the discipline of avatar-based marketing.
    The study, titled “An Emerging Theory of Avatar Marketing,” is authored by Fred Miao, Irina Kozlenkova, Haizhong Wang, Tao Xie, and Robert Palmatier.
    Samsung’s Star Labs brought digital avatars to CES 2020, but the promotion was undercut by its own fanfare. The avatars looked realistic and successfully answered some questions, but only when they were heavily controlled. As this example illustrates, avatar-based marketing is still in its nascent stage.
    A pressing question is: how should firms design effective avatars? Given the considerable ambiguity surrounding the definition of an avatar, the researchers first identify and evaluate key conceptual elements of the term and offer this definition: digital entities with anthropomorphic appearance, controlled by a human or software, that are able to interact.
    Based on this definition, they present a typology of avatar design to isolate elements that academics and managers can leverage to ensure avatars’ effectiveness for achieving specific goals (e.g., providing standard vs. personalized solutions). Design elements affect avatars’ form realism and behavioral realism. Form realism refers to the extent to which the avatar’s shape appears human, while behavioral realism captures the degree to which it behaves as a human would in the physical world. Form realism includes design elements such as spatial dimension (2D/3D), movement (static vs. dynamic), and human characteristics (e.g., name, gender), whereas behavioral realism captures the avatar’s communication modality (e.g., verbal), response type (scripted vs. natural response), social content, and its controlling entity.
    The study reveals a key limitation in avatar design: lack of consideration of the alignment between form and behavioral realism of avatars. As Miao explains, “If the levels of form and behavioral realism are mismatched, the consequences for avatars’ effectiveness may be profound and can help explain inconsistent avatar performance.”
    Integrating form and behavioral realism, the study features a 2 x 2 avatar taxonomy that identifies four distinct categories of avatars: simplistic, superficial, intelligent unrealistic, and digital human avatars. A simplistic avatar has an unrealistic human appearance (e.g., 2D, visually static, cartoonish image) and engages in low intelligence behaviors (e.g., scripted, only task-specific communication). For example, in the Netherlands, ING Bank’s 2D, cartoonish-looking avatar Inge responds to simple customer inquiries from a set of predetermined answers. In contrast, a superficial avatar has a realistic anthropomorphic appearance (e.g., 3D, visually dynamic, photorealistic image), such as NatWest Bank’s Cora, but low behavioral realism in that it is only able to offer preprogrammed answers to specific questions. An intelligent unrealistic avatar (e.g., REA) is characterized by humanlike cognitive and emotional intelligence, but exhibits an unrealistic (e.g., cartoonish) human image. These avatars can engage customers in real-time, complex transactions without being mistaken for human agents. Finally, a digital human avatar such as SK-II’s YUMI is the most advanced category of avatars, characterized by both a highly realistic anthropomorphic appearance and humanlike cognitive and emotional intelligence, and is designed to provide the highest degree of realism during interactions with human users.
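    To make the taxonomy concrete, here is a minimal Python sketch that maps an avatar’s two realism levels onto the four categories (the numeric scores and threshold are illustrative assumptions; the paper treats realism as a low/high distinction, not a number):

```python
from dataclasses import dataclass

@dataclass
class Avatar:
    form_realism: float        # how humanlike the avatar looks, 0-1 (illustrative)
    behavioral_realism: float  # how humanlike it behaves, 0-1 (illustrative)

def classify(avatar: Avatar, threshold: float = 0.5) -> str:
    """Map an avatar onto the study's 2 x 2 taxonomy."""
    high_form = avatar.form_realism >= threshold
    high_behavior = avatar.behavioral_realism >= threshold
    if high_form and high_behavior:
        return "digital human"            # e.g., SK-II's YUMI
    if high_form:
        return "superficial"              # e.g., NatWest's Cora
    if high_behavior:
        return "intelligent unrealistic"  # e.g., REA
    return "simplistic"                   # e.g., ING's Inge

print(classify(Avatar(form_realism=0.9, behavioral_realism=0.2)))  # superficial
```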

    Based on observations of the relative effectiveness of these avatars in practice, the researchers present propositions that predict the outcomes of avatar marketing (a toy numerical sketch follows the list). In particular:
    - As the form realism of an avatar increases, so do customers’ expectations for its behavioral realism.
    - Differences between an avatar’s form and behavioral realism have asymmetric effects: customers experience positive (negative) disconfirmation when an avatar’s behavioral realism is greater (less) than its form realism.
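    As a toy numerical illustration of these propositions (my own sketch, not the authors’ model), form realism can be read as setting the customer’s expectation and behavioral realism as what the avatar actually delivers; the sign of the gap then gives the direction of disconfirmation:

```python
def disconfirmation(form_realism: float, behavioral_realism: float) -> str:
    """Toy expectation-disconfirmation check: form realism sets expectations,
    behavioral realism is what the avatar actually delivers."""
    gap = behavioral_realism - form_realism
    if gap > 0:
        return f"positive disconfirmation (behavior exceeds expectations by {gap:.1f})"
    if gap < 0:
        return f"negative disconfirmation (behavior falls short by {-gap:.1f})"
    return "expectations confirmed (form and behavior aligned)"

# Samsung's CES avatar: photorealistic look (high form), scripted behavior (low).
print(disconfirmation(form_realism=0.9, behavioral_realism=0.3))
```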
    Recall the avatar of Samsung’s Star Labs, which is high in form realism but low in behavioral realism. “Our analysis indicates that Samsung’s avatar sets audience expectations too high, which may have led to a negative disconfirmation, thereby resulting in an unfavorable customer experience,” says Kozlenkova.
    Avatars’ effectiveness may also be highly contingent on the level of perceived uncertainty users experience during their interactions with avatars, as well as on the choice of media channel (e.g., smartphones vs. desktops). Finally, design efforts should take the customer relationship phase into account, because the relative effects of customers’ cognitive, affective, and social responses differ across relationship stages.
    The framework generates practical implications, urging firms to consider five interrelated areas for optimal avatar-based marketing: (1) when to deploy avatars, (2) avatar form realism, (3) avatar behavioral realism, (4) form-behavioral realism alignment, and (5) avatar contingency effects.

  • Problematic internet use and teen depression are closely linked

    Most teenagers don’t remember life before the internet. They have grown up in a connected world, and being online has become one of their main sources of learning, entertainment and socializing.
    As many previous studies have pointed out, and as many parents worry, this reality does not come risk-free. While time on the internet can be informative, instructive and even pleasant, there is already a significant literature on the potential harm caused by young children’s problematic internet use (PIU).
    However, a new study led by István Tóth-Király, a Horizon Postdoctoral Fellow at the Substantive-Methodological Synergy Research Laboratory in Concordia’s Department of Psychology, is one of only a few that examine PIU’s effects on older adolescents. The paper was co-written by psychology professor Alexandre Morin, along with Lauri Hietajärvi and Katariina Salmela-Aro of the University of Helsinki.
    The paper, published in the journal Child Development, looks at data gathered by a longitudinal study of 1,750 high school students in Helsinki over three years.
    It begins by asking three big questions: What are some of the predictors or determinants of PIU? How does PIU change over the course of late adolescence, in this case ages 16 to 19? And what are the consequences of PIU among this age group?
    At-risk signals
    The researchers identified three principal determinants of PIU among adolescents. The first was loneliness, defined as a lack of satisfying interpersonal relationships or the perceived inadequacy of social networks. Other studies on PIU also identified loneliness as a predictor.

    Parenting practices, as perceived by the teen, also predicted PIU. The researchers looked at both parental caring, such as expressions of warmth, empathy, interest and closeness towards the child, and parental neglect, defined as uneven availability or unresponsiveness to the child’s needs.
    Not surprisingly, better parenting is linked to lower PIU, while neglectful parenting is linked to higher PIU. The researchers noted the differences in how maternal and paternal behaviour affected usage.
    Maternal caring in particular was associated with lower PIU, suggesting that high-quality mother-child relationships may reduce the need to use the internet excessively. Paternal neglect, on the other hand, had a stronger relationship with higher PIU, suggesting that a lack of guidance and limits hinders a teen’s ability to set personal boundaries.
    Finally, the researchers considered gender. They found boys more likely to engage in PIU than girls, as they tend to be more prone to addictive-like behaviour, are more impulsive and, as suggested by other studies, may have more online options, such as gaming or watching YouTube videos or pornography. Girls may be more likely to be online for socializing purposes.
    Circular and harmful effects
    The researchers then looked at outcomes associated with PIU, again identifying three broad categories.

    The first is depressive symptoms. If left unchecked, PIU appears to come with higher levels of depression. The two have been linked in previous studies, but Tóth-Király says their findings suggest a new interpretation.
    “Our study tries to understand this relationship in a bi-directional or reciprocal way,” he says. “We think that PIU and depressive symptoms are likely to be co-occurring instead of one determining the other. They likely reinforce one another over time.”
    The other outcomes linked to PIU are higher levels of substance abuse and lower levels of academic achievement. These were to be expected, and were also believed to be co-occurring.
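    As a loose illustration of what a reciprocal, co-occurring relationship means in a longitudinal design (a toy simulation with made-up coefficients, not the study’s statistical model), each construct at one wave feeds into both constructs at the next wave:

```python
import random

def simulate_reciprocal(waves: int = 3, seed: int = 1) -> None:
    """Toy cross-lagged dynamic: PIU and depressive symptoms each persist
    over time (autoregressive term) and reinforce the other (cross-lagged
    term). All coefficients are invented for illustration."""
    random.seed(seed)
    piu, dep = 0.4, 0.3  # arbitrary starting levels on a 0-1 scale
    for wave in range(1, waves + 1):
        piu, dep = (
            0.6 * piu + 0.2 * dep + random.gauss(0, 0.05),
            0.6 * dep + 0.2 * piu + random.gauss(0, 0.05),
        )
        print(f"wave {wave}: PIU={piu:.2f}, depression={dep:.2f}")

simulate_reciprocal()
```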
    Tóth-Király says some teens go through a phase of heavy internet use, usually around mid-adolescence. Time spent online tends to decrease as teens mature, develop their own goals and boundaries and form their first romantic relationships. He adds that being online for hours is not necessarily damaging, even if it does seem excessive to parents.
    “If adolescents spend a lot of time on the internet but it doesn’t really impact their mental health or their grades or doesn’t seem to have any substantial negative consequences, then we cannot really say this is problematic behaviour,” he says.

  • Predicting motion sickness severity from virtual reality

    A new study led by Bas Rokers, head of the Rokers Vision Laboratory and associate professor of psychology at NYUAD, explored why the severity of motion sickness varies from person to person by investigating sources of cybersickness during VR use.
    In the new study, “Variations in visual sensitivity predict motion sickness in virtual reality,” published in the journal Entertainment Computing, Rokers and his team used VR headsets to simulate visual cues and present videos that induced moderate levels of motion sickness. They found that a person’s ability to detect visual cues predicted the severity of their motion sickness symptoms. Specifically, discomfort was tied to sensitivity to a particular sensory cue, motion parallax, defined as the apparent relative movement of different parts of the environment as the observer moves.
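    A minimal sketch of the kind of individual-differences analysis this implies (hypothetical data; the study’s actual measures and statistics may differ): correlate each participant’s sensitivity to motion parallax cues with their sickness severity score.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical per-participant values: sensitivity to motion parallax cues
# (higher = detects subtler cues) and a motion sickness severity score.
sensitivity = [0.2, 0.5, 0.6, 0.8, 0.9, 1.2, 1.4, 1.7]
sickness = [5, 9, 8, 14, 13, 18, 21, 24]

r = correlation(sensitivity, sickness)
print(f"Pearson r = {r:.2f}")  # strongly positive, mirroring the reported pattern
```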
    A previously reported source of variability in motion sickness severity, gender, was also evaluated but not confirmed. The researchers conclude that previously reported gender differences may have been due to poor personalization of VR displays, most of which default to male settings.
    These findings suggest a number of strategies to mitigate motion sickness in VR, including reducing or eliminating specific sensory cues, and ensuring device settings are personalized to each user. Understanding the sources of motion sickness, especially while using technology, not only has the potential to alleviate discomfort, but also to make VR technology a more widely accessible resource for education, job training, healthcare, and entertainment.
    “As we tested sensitivity to sensory cues, a robust relationship emerged. It was clear that the greater an individual’s sensitivity to motion parallax cues, the more severe the motion sickness symptoms,” said Rokers. “It is our hope that these findings will help lead to the more widespread use of powerful VR technologies by removing barriers that prevent many people from taking advantage of its potential.”

    Story Source:
    Materials provided by New York University.

  • New tool makes students better at detecting fake imagery and videos

    Researchers at Uppsala University have developed a digital self-test that trains users to assess news items, images and videos presented on social media. The self-test has also been evaluated in a scientific study, which confirmed the researchers’ hypothesis that the tool genuinely improved the students’ ability to apply critical thinking to digital sources.
    The new tool and the scientific review of it are part of the News Evaluator project to investigate new methods of enhancing young people’s capacity for critical awareness of digital sources, a key component of digital literacy.
    “As research leader in the project, I’m surprised how complicated it is to develop this type of tool against misleading information — one that’s usable on a large scale. Obviously, critically assessing digital sources is complicated. We’ve been working on various designs and tests, with major experiments in school settings, for years. Now we’ve finally got a tool that evidently works. The effect is clearly positive, and now we are launching the self-test on our News Evaluator website http://www.newsevaluator.com, so that anyone can test themselves for free,” says Thomas Nygren, associate professor at Uppsala University.
    The tool is structured in a way that allows students to work with it, online, on their own. They get to see news articles in a social-media format, with pictures or videos, and the task is to determine how credible they are. Is there really wood pulp in Parmesan cheese, for instance?
    “The aim is for the students to get better at uncovering what isn’t true, but also improve their understanding of what may be true even if it seems unlikely at first,” Nygren says.
    As user support, the tool contains guidance. Students can follow how a professional would have gone about investigating the authenticity of the statements or images — by opening a new window and doing a separate search alongside the test, or doing a reverse image search, for example. The students are encouraged to learn “lateral reading” (verifying what you read by cross-checking it against other sources). After solving the tasks, the students get feedback on their performance.
    When the tool was tested with the help of just over 200 students, it proved to have a beneficial effect on their ability to assess sources critically. Students who had received guidance and feedback from the tool showed distinctly better results than those who had not been given this support. The tool also delivered better results, in terms of this ability, than other comparable initiatives that require teacher participation and more time.
    Apart from practical tips such as opening a new search tab, doing reverse image searches and not always choosing the search result at the top of the hit page (but, rather, the one that comes from a source you recognise), Nygren has a general piece of advice that can help us all become more critically aware in the digital world:
    “Make sure you keep up to date with information and news from trustworthy sources with credible practices of fact-checking, such as the national TV news programmes or an established daily newspaper. It’s difficult and arduous being critical about sources all the time.”

    Story Source:
    Materials provided by Uppsala University.

  • Microwave-assisted recording technology promises high-density hard disk performance

    Researchers at Toshiba Corporation in Japan have studied the operation of a small device fabricated in the write gap of a hard disk drive’s write head to extend its recording density. The device, developed by HWY Technologies, is based on a design concept known as microwave-assisted magnetic recording, or MAMR.
    This technology, reported in the Journal of Applied Physics, published by AIP Publishing, uses a microwave field generator known as a spin-torque oscillator. The spin-torque oscillator emits a microwave field that causes the magnetic particles of the recording medium to wobble the way a spinning top does. This makes them much easier to flip over when the write head applies a recording magnetic field in the writing process.
    In a computer’s hard drive, each bit of data is stored in magnetic particles known as grains. The magnetic orientation of the grains determines whether the bit is a 0 or a 1.
    Making the grains smaller allows them to be packed together more tightly. This increases the storage capacity, but it also makes the data bits unstable. The development of MAMR allows more stable magnetic materials to be used but also limits the type of recording media that can be developed.
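    The instability referred to here is the superparamagnetic effect: a grain’s resistance to random thermal flipping scales with the ratio of its magnetic anisotropy energy to thermal energy, so shrinking grains eventually demands higher-anisotropy, harder-to-write materials. A back-of-the-envelope sketch with illustrative material values (not Toshiba’s figures):

```python
from math import pi

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stability_ratio(k_u: float, diameter_nm: float, height_nm: float,
                    temp_k: float = 300.0) -> float:
    """Delta = K_u * V / (k_B * T) for a cylindrical grain; ratios around
    60 are commonly quoted for ~10-year data retention."""
    radius_m = diameter_nm * 1e-9 / 2
    volume_m3 = pi * radius_m ** 2 * (height_nm * 1e-9)
    return k_u * volume_m3 / (K_B * temp_k)

# Shrinking grains from 8 nm to 5 nm slashes the stability ratio, which is
# why denser media need more stable (higher-anisotropy) magnetic materials.
for d in (8, 5):
    print(f"{d} nm grain: Delta = {stability_ratio(k_u=7e5, diameter_nm=d, height_nm=10):.0f}")
```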
    The investigators focused on another effect known as the flux control (FC) effect, which also occurs in MAMR. This effect improves the recording field and is maximized when the magnetization of the spin torque oscillator is completely reversed against the gap field.
    The advantage of the FC effect is that improvement is obtained in any magnetic recording, according to author Hirofumi Suto. This is significant, since it would no longer be necessary to use recording media specially designed for the MAMR technology.
    The FC device, a type of spin-torque oscillator designed to maximize the FC effect, consists of two magnetic layers fabricated directly in the write gap of the write head. A bias current supplied to the device reverses the magnetization of one of the layers through an effect known as spin-transfer torque.
    The investigators experimented with different bias currents and found the reversal of magnetization occurred more quickly at higher currents. Upon comparing their experiments to a computational model, they also determined the recording field was enhanced by the FC effect, improving the writability of the write head and exceeding the performance of conventional write heads.
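    A rough sketch of why higher bias current speeds up the reversal (a commonly used spin-transfer-torque scaling with made-up parameters; the paper’s quantitative model is more detailed): above a critical current, the switching time falls roughly as the inverse of the overdrive.

```python
def switching_time_ns(bias_ma: float, critical_ma: float = 1.0,
                      tau0_ns: float = 2.0) -> float:
    """Toy spin-transfer-torque scaling: t ~ tau0 / (I/Ic - 1) above the
    critical current Ic; all numbers are invented for illustration."""
    if bias_ma <= critical_ma:
        return float("inf")  # below Ic, no fast deterministic reversal
    return tau0_ns / (bias_ma / critical_ma - 1)

for i_ma in (1.5, 2.0, 3.0, 4.0):
    print(f"{i_ma:.1f} mA -> reversal in ~{switching_time_ns(i_ma):.1f} ns")
```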
    The FC device operates effectively at a fast write rate of approximately 3 gigabits per second, according to Suto. These results provide evidence that the FC device operates as designed and show that FC-MAMR is a promising technology for extending the areal density of hard disk drives.
    Toshiba plans to introduce hard disk drives using MAMR technology that will increase hard disk capacity to 16-18 terabytes.

    Story Source:
    Materials provided by the American Institute of Physics.

  • Microchips of the future: Suitable insulators are still missing

    For decades, there has been a trend in microelectronics towards ever smaller and more compact transistors. 2D materials such as graphene are seen as a beacon of hope here: they are the thinnest material layers that can possibly exist, consisting of only one or a few atomic layers. Nevertheless, they can conduct electrical currents — conventional silicon technology, on the other hand, no longer works properly if the layers become too thin.
    However, such materials are not used in a vacuum; they have to be combined with suitable insulators, both to seal them off from unwanted environmental influences and to control the flow of current via the so-called field effect. Until now, hexagonal boron nitride (hBN) has frequently been used for this purpose, as it forms an excellent environment for 2D materials. However, studies conducted by TU Wien in cooperation with ETH Zurich, the Russian Ioffe Institute and researchers from Saudi Arabia and Japan now show that, contrary to previous assumptions, thin hBN layers are not suitable as insulators for future miniaturised field-effect transistors, as excessive leakage currents occur. So if 2D materials are really to revolutionise the semiconductor industry, other insulator materials will be needed. The study has now been published in the scientific journal Nature Electronics.
    The supposedly perfect insulator material
    “At first glance, hexagonal boron nitride fits graphene and two-dimensional materials better than any other insulator,” says Theresia Knobloch, first author of the study, who is currently working on her dissertation in Tibor Grasser’s team at the Institute of Microelectronics at TU Wien. “Just like the 2D semiconducting materials, hBN consists of individual atomic layers that are only weakly bonded to each other.”
    As a result, hBN can easily be used to make atomically smooth surfaces that do not interfere with the transport of electrons through 2D materials. “You might therefore think that hBN is the perfect material — both as a substrate on which to place thin-film semiconductors and also as a gate insulator needed to build field-effect transistors,” says Tibor Grasser.
    Small leakage currents with big effects
    A transistor can be compared to a water tap — only instead of a stream of water, electric current is switched on and off. As with a water tap, it is very important for a transistor that nothing leaks out of the valve itself.
    This is exactly what the gate insulator is responsible for in the transistor: It isolates the controlling electrode, via which the current flow is switched on and off, from the semiconducting channel itself, through which the current then flows. A modern microprocessor contains about 50 billion transistors — so even a small loss of current at the gates can play an enormous role, because it significantly increases the total energy consumption.
    In this study, the research team investigated the leakage currents that flow through thin hBN layers, both experimentally and using theoretical calculations. They found that some of the properties that make hBN such a suitable substrate also significantly increase the leakage currents through hBN. Boron nitride has a small dielectric constant, which means that the material interacts only weakly with electric fields. In consequence, the hBN layers used in miniaturised transistors must only be a few atomic layers thick so that the gate’s electric field can sufficiently control the channel. At the same time, however, the leakage currents become too large in this case, as they increase exponentially when reducing the layer thickness.
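    The exponential dependence follows from quantum tunneling: the probability that an electron crosses the barrier falls off exponentially with its thickness. A simplified direct-tunneling sketch in the WKB approximation (illustrative barrier height, not the paper’s extracted parameters):

```python
from math import exp, sqrt

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def relative_transmission(thickness_nm: float, barrier_ev: float = 3.0) -> float:
    """WKB direct-tunneling transmission ~ exp(-2 * kappa * d);
    relative scale only, with an illustrative barrier height."""
    kappa = sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return exp(-2 * kappa * thickness_nm * 1e-9)

# Each atomic layer removed (~0.33 nm for hBN) multiplies the leakage
# by a large constant factor.
for d in (2.0, 1.0, 0.66, 0.33):
    print(f"{d:.2f} nm: transmission ~ {relative_transmission(d):.1e}")
```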
    The search for insulators
    “Our results show that hBN is not suitable as a gate insulator for miniaturised transistors based on 2D materials,” says Tibor Grasser. “This finding is an important guide for future studies, but it is only the beginning of the search for suitable insulators for the smallest transistors. Currently, no known material system can meet all the requirements, but it is only a matter of time and resources until a suitable material system is found.”
    “The problem is complex, but this makes it all the more important that many scientists devote themselves to the search for a solution, because our society will need small, fast and, above all, energy-efficient computer chips in the future,” says Theresia Knobloch.

  • Making the role of AI in medicine explainable

    Researchers at Charité — Universitätsmedizin Berlin and TU Berlin, as well as the University of Oslo, have developed a new tissue-section analysis system for diagnosing breast cancer based on artificial intelligence (AI). Two further developments make this system unique: First, morphological, molecular and histological data are integrated in a single analysis. Second, the system provides a clarification of the AI decision process in the form of heatmaps. Pixel by pixel, these heatmaps show which visual information influenced the AI decision process and to what extent, thus enabling doctors to understand and assess the plausibility of the results of the AI analysis. This represents a decisive and essential step forward for the future regular use of AI systems in hospitals. The results of this research have now been published in Nature Machine Intelligence.
    Cancer treatment is increasingly concerned with the molecular characterization of tumor tissue samples. Studies are conducted to determine whether and/or how the DNA has changed in the tumor tissue as well as the gene and protein expression in the tissue sample. At the same time, researchers are becoming increasingly aware that cancer progression is closely related to intercellular cross-talk and the interaction of neoplastic cells with the surrounding tissue — including the immune system.
    Although microscopic techniques enable biological processes to be studied with high spatial detail, they only permit a limited measurement of molecular markers. These are instead determined using proteins or DNA extracted from tissue, so spatial detail is lost and the relationship between these markers and the microscopic structures is typically unclear. “We know that in the case of breast cancer, the number of immigrated immune cells, known as lymphocytes, in tumor tissue has an influence on the patient’s prognosis. There are also discussions as to whether this number has a predictive value — in other words, if it enables us to say how effective a particular therapy is,” says Prof. Dr. Frederick Klauschen of Charité’s Institute of Pathology.
    “The problem we have is the following: We have good and reliable molecular data and we have good histological data with high spatial detail. What we don’t have as yet is the decisive link between imaging data and high-dimensional molecular data,” adds Prof. Dr. Klaus-Robert Müller, professor of machine learning at TU Berlin. Both researchers have been working together for a number of years now at the national AI center of excellence the Berlin Institute for the Foundations of Learning and Data (BIFOLD) located at TU Berlin.
    It is precisely this symbiosis which the newly published approach makes possible. “Our system facilitates the detection of pathological alterations in microscopic images. Parallel to this, we are able to provide precise heatmap visualizations showing which pixel in the microscopic image contributed to the diagnostic algorithm and to what extent,” explains Prof. Müller. The research team has also succeeded in significantly further developing this process: “Our analysis system has been trained using machine learning processes so that it can also predict various molecular characteristics, including the condition of the DNA, the gene expression as well as the protein expression in specific areas of the tissue, on the basis of the histological images.”
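    The summary does not spell out the attribution method (the Müller group is known for techniques such as layer-wise relevance propagation). As a simpler stand-in, here is a minimal gradient-times-input saliency heatmap in PyTorch; the model and tensors are illustrative placeholders, not the published system:

```python
import torch
import torch.nn as nn

# Illustrative stand-in classifier; the published system uses a trained
# histopathology network, not this toy model.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in tissue tile

# Gradient x input: how strongly each pixel pushed the "tumor" logit.
logits = model(image)
logits[0, 1].backward()  # class index 1 plays the role of "tumor" here
heatmap = (image.grad * image).sum(dim=1).squeeze().abs()

print(heatmap.shape)  # torch.Size([224, 224]): one relevance value per pixel
```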
    Next on the agenda are certification and further clinical validations — including tests in tumor routine diagnostics. However, Prof. Klauschen is already convinced of the value of the research: “The methods we have developed will make it possible in the future to make histopathological tumor diagnostics more precise, more standardized and qualitatively better.”

    Story Source:
    Materials provided by Charité – Universitätsmedizin Berlin.

  • First AI system for contactless monitoring of heart rhythm using smart speakers

    Smart speakers, such as Amazon Echo and Google Home, have proven adept at monitoring certain health care issues at home. For example, researchers at the University of Washington have shown that these devices can detect cardiac arrests or monitor a baby’s breathing.
    But what about tracking something even smaller: the minute motion of individual heartbeats in a person sitting in front of a smart speaker?
    UW researchers have developed a new skill for a smart speaker that for the first time monitors both regular and irregular heartbeats without physical contact. The system sends inaudible sounds from the speaker out into a room and, based on the way the sounds are reflected back to the speaker, it can identify and monitor individual heartbeats. Because the heartbeat is such a tiny motion on the chest surface, the team’s system uses machine learning to help the smart speaker locate signals from both regular and irregular heartbeats.
    When the researchers tested this system on healthy participants and hospitalized cardiac patients, the smart speaker detected heartbeats that closely matched the beats detected by standard heartbeat monitors. The team published these findings March 9 in Communications Biology.
    “Regular heartbeats are easy enough to detect even if the signal is small, because you can look for a periodic pattern in the data,” said co-senior author Shyam Gollakota, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering. “But irregular heartbeats are really challenging because there is no such pattern. I wasn’t sure that it would be possible to detect them, so I was pleasantly surprised that our algorithms could identify irregular heartbeats during tests with cardiac patients.”
    While many people are familiar with the concept of a heart rate, doctors are more interested in the assessment of heart rhythm. Heart rate is the average of heartbeats over time, whereas a heart rhythm describes the pattern of heartbeats.

    For example, if a person has a heart rate of 60 beats per minute, they could have a regular heart rhythm — one beat every second — or an irregular heart rhythm — beats are randomly scattered across that minute but they still average out to 60 beats per minute.
    “Heart rhythm disorders are actually more common than some other well-known heart conditions. Cardiac arrhythmias can cause major morbidities such as strokes, but can be highly unpredictable in occurrence, and thus difficult to diagnose,” said co-senior author Dr. Arun Sridhar, assistant professor of cardiology at the UW School of Medicine. “Availability of a low-cost test that can be performed frequently and at the convenience of home can be a game-changer for certain patients in terms of early diagnosis and management.”
    The key to assessing heart rhythm lies in identifying the individual heartbeats. For this system, the search for heartbeats begins when a person sits within 1 to 2 feet in front of the smart speaker. Then the system plays an inaudible continuous sound, which bounces off the person and then returns to the speaker. Based on how the returned sound has changed, the system can isolate movements on the person — including the rise and fall of their chest as they breathe.
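    The sensing principle resembles an acoustic interferometer: tiny changes in the speaker-to-chest path length shift the phase of the reflected tone. A toy sketch of that relationship for an idealized single reflector (made-up parameters; the actual system is far more sophisticated):

```python
from math import pi

SPEED_OF_SOUND = 343.0  # m/s in room-temperature air

def phase_shift_rad(displacement_mm: float, freq_hz: float = 20_000.0) -> float:
    """Round-trip phase change of a reflected tone when the reflector
    moves by d: delta_phi = 4 * pi * d / lambda."""
    wavelength_m = SPEED_OF_SOUND / freq_hz  # ~17 mm at 20 kHz
    return 4 * pi * (displacement_mm / 1000) / wavelength_m

# A ~0.5 mm chest-surface motion (heartbeat scale) is visible in phase,
# while breathing (~5 mm) dominates by an order of magnitude.
print(f"heartbeat: {phase_shift_rad(0.5):.2f} rad, breathing: {phase_shift_rad(5.0):.2f} rad")
```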
    “The motion from someone’s breathing is orders of magnitude larger on the chest wall than the motion from heartbeats, so that poses a pretty big challenge,” said lead author Anran Wang, a doctoral student in the Allen School. “And the breathing signal is not regular so it’s hard to simply filter it out. Using the fact that smart speakers have multiple microphones, we designed a new beam-forming algorithm to help the speakers find heartbeats.”
    The team designed what’s called a self-supervised machine learning algorithm, which learns on the fly instead of from a training set. This algorithm combines signals from all of the smart speaker’s microphones to identify the elusive heartbeat signal.
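    The team’s algorithm is self-supervised and tailored to heartbeats; as a much simpler illustration of the multi-microphone idea it builds on, here is a plain delay-and-sum beamformer on synthetic signals (all delays and noise levels are invented):

```python
import numpy as np

def delay_and_sum(mic_signals: np.ndarray, delays: list[int]) -> np.ndarray:
    """Undo each microphone's known delay toward the target, then average:
    the coherent signal adds up while uncorrelated noise averages out."""
    aligned = [np.roll(sig, -d) for sig, d in zip(mic_signals, delays)]
    return np.mean(aligned, axis=0)

rng = np.random.default_rng(0)
n, delays = 1000, [0, 3, 6, 9]  # four mics, invented per-mic delays
target = np.sin(2 * np.pi * 5 * np.linspace(0, 1, n))  # weak target signal
mics = np.stack([np.roll(target, d) + rng.normal(0, 2.0, n) for d in delays])

beamformed = delay_and_sum(mics, delays)
print(f"correlation with target: single mic {np.corrcoef(mics[0], target)[0, 1]:.2f}, "
      f"beamformed {np.corrcoef(beamformed, target)[0, 1]:.2f}")
```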

    “This is similar to how Alexa can always find my voice even if I’m playing a video or if there are multiple people talking in the room,” Gollakota said. “When I say, ‘Hey, Alexa,’ the microphones are working together to find me in the room and listen to what I say next. That’s basically what’s happening here but with the heartbeat.”
    The heartbeat signals that the smart speaker detects don’t look like the typical peaks that are commonly associated with traditional heartbeat monitors. The researchers used a second algorithm to segment the signal into individual heartbeats so that the system could extract what is known as the inter-beat interval, or the amount of time between two heartbeats.
    “With this method, we are not getting the electric signal of the heart contracting. Instead we’re seeing the vibrations on the skin when the heart beats,” Wang said.
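    A minimal sketch of the inter-beat-interval step (synthetic waveform and scipy’s off-the-shelf peak finder; the team’s segmentation algorithm is more involved):

```python
import numpy as np
from scipy.signal import find_peaks

FS = 100  # Hz, sampling rate of the synthetic motion signal

# Synthetic heartbeat-like trace: narrow bumps roughly every second.
t = np.arange(0, 10, 1 / FS)
beat_times = np.cumsum(0.95 + 0.1 * np.random.default_rng(0).random(10))
signal = sum(np.exp(-((t - bt) ** 2) / 0.001) for bt in beat_times)

# Segment into individual beats, then take the time between successive peaks.
peaks, _ = find_peaks(signal, height=0.5, distance=FS // 2)
ibi_ms = np.diff(peaks) / FS * 1000  # inter-beat intervals in milliseconds
print(ibi_ms.round(1))
```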
    The researchers tested a prototype smart speaker running this system on two groups: 26 healthy participants and 24 hospitalized patients with a diversity of cardiac conditions, including atrial fibrillation and heart failure. The team compared the smart speaker’s inter-beat interval with one from a standard heartbeat monitor. Of the nearly 12,300 heartbeats measured for the healthy participants, the smart speaker’s median inter-beat interval was within 28 milliseconds of the standard monitor. The smart speaker performed almost as well with cardiac patients: of the more than 5,600 heartbeats measured, the median inter-beat interval was within 30 milliseconds of the standard.
    Currently this system is set up for spot checks: If a person is concerned about their heart rhythm, they can sit in front of a smart speaker to get a reading. But the research team hopes that future versions could continuously monitor heartbeats while people are asleep, something that could help doctors diagnose conditions such as sleep apnea.
    “If you have a device like this, you can monitor a patient on an extended basis and define patterns that are individualized for the patient. For example, we can figure out when arrhythmias are happening for each specific patient and then develop corresponding care plans that are tailored for when the patients actually need them,” Sridhar said. “This is the future of cardiology. And the beauty of using these kinds of devices is that they are already in people’s homes.”
    This research was funded by the National Science Foundation.