More stories

  •

    Direction decided by rate of coin flip in quantum world

    Flip a coin. Heads? Take a step to the left. Tails? Take a step to the right. In the quantum world? Go in both directions at once, like a wave spreading out. Called a walk, this random process can be applied in both classical and quantum algorithms used in state-of-the-art technologies such as artificial intelligence and data search. However, the randomness also makes the walk difficult to control, which complicates the precise design of such systems.
    A research team based in Japan may be moving toward a more controlled walk by unveiling the mechanism underlying the directional decision of each quantum step and introducing a way to potentially control the direction of movement. They published their results on October 16 in Scientific Reports, a Nature Research journal.
    “In our study, we focused on the coin determining the behavior of the quantum walk to explore controllability,” said paper author Haruna Katayama, a graduate student in the Graduate School of Integrated Arts and Sciences at Hiroshima University.
    In classical systems, the coin directs the walker in space: right or left. In quantum systems, the coin is far less predictable, since the walker behaves both as a particle localized at one position and as a wave stretched out across every possibility.
    “We introduced the time-dependent coin of which the probability of landing on heads or tails varies temporally for unveiling the function of the coin,” Katayama said.
    This time-dependent coin can shift the walker’s position, the researchers found, but the wave characteristic of the walker obscured how much physical space the coin controlled.
    “We succeeded in clarifying the equivalence of two completely different concepts — the equivalence of the rate of change in coin probability and the velocity of the wave — for the first time,” Katayama said. “This unveiled mechanism enables us to control the quantum walk on demand by manipulating the coin with preserving the random process, providing core fundamental elements of innovative quantum information processing technologies such as quantum computing.”
    The researchers determined that how quickly the coin flipped directly affected how quickly the wave moved, resulting in some control of the walker’s movement.
    “The walking mechanism enables us to tailor quantum walks as we desire by manipulating the coin flipping rate,” Katayama said. “In addition, we have found that the quantum walk with the desired trajectory can be realized on demand by designing the coin. Our results open the path towards the control of quantum walks.”
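The paper's formalism isn't reproduced in this summary, but the object under discussion — a discrete-time quantum walk whose coin varies with time — can be sketched in a few lines. The Hadamard-form coin, the symmetric initial state, and the idea of feeding in a step-dependent coin angle below are illustrative assumptions, not the authors' actual model:

```python
import math

def quantum_walk(steps, theta_of_t):
    """Discrete-time quantum walk on a line with a time-dependent coin.

    theta_of_t(t) returns the coin angle at step t; theta = pi/4 is the
    balanced (Hadamard) coin. The state maps position -> [amp_L, amp_R].
    """
    state = {0: [1 / math.sqrt(2), 1j / math.sqrt(2)]}  # symmetric start
    for t in range(steps):
        c, s = math.cos(theta_of_t(t)), math.sin(theta_of_t(t))
        new = {}
        for x, (aL, aR) in state.items():
            bL = c * aL + s * aR                      # coin rotation
            bR = s * aL - c * aR
            new.setdefault(x - 1, [0, 0])[0] += bL    # then shift left
            new.setdefault(x + 1, [0, 0])[1] += bR    # or shift right
        state = new
    return state

def mean_position(state):
    """Expected position of the walker."""
    return sum(x * (abs(aL) ** 2 + abs(aR) ** 2)
               for x, (aL, aR) in state.items())
```

With a constant balanced coin the distribution stays symmetric about the origin; letting the coin angle vary from step to step biases the walk, which is the kind of coin-rate control the study describes.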

    Story Source:
    Materials provided by Hiroshima University. Note: Content may be edited for style and length.

  •

    Early results from DETECT study suggest fitness trackers can predict COVID-19 infections

    Examining data from the first six weeks of their landmark DETECT study, a team of scientists from the Scripps Research Translational Institute sees encouraging signs that wearable fitness devices can improve public health efforts to control COVID-19.
    The DETECT study, launched on March 25, uses a mobile app to collect smartwatch and activity tracker data from consenting participants, and also gathers their self-reported symptoms and diagnostic test results. Any adult living in the United States is eligible to participate in the study by downloading the research app, MyDataHelps.
    In a study that appears today in Nature Medicine, the Scripps Research team reports that wearable devices like Fitbit are capable of identifying cases of COVID-19 by evaluating changes in heart rate, sleep and activity levels, along with self-reported symptom data — and can identify cases with greater success than looking at symptoms alone.
    “What’s exciting here is that we now have a validated digital signal for COVID-19. The next step is to use this to prevent emerging outbreaks from spreading,” says Eric Topol, MD, director and founder of the Scripps Research Translational Institute and executive vice president of Scripps Research. “Roughly 100 million Americans already have a wearable tracker or smartwatch and can help us; all we need is a tiny fraction of them — just 1 percent or 2 percent — to use the app.”
    With data from the app, researchers can see when participants fall out of their normal range for sleep, activity level or resting heart rate; deviations from individual norms can be a sign of viral illness or infection.
    But how do they know if the illness causing those changes is COVID-19? To answer that question, the team reviewed data from those who reported developing symptoms and were tested for the novel coronavirus. Knowing the test results enabled them to pinpoint specific changes indicative of COVID-19 versus other illnesses.
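The study's actual model is fit to data rather than hand-thresholded, but the signal it relies on — days when a participant deviates from their own baseline, combined with self-reported symptoms — can be sketched as follows. The thresholds and the scoring rule here are hypothetical, chosen only for illustration:

```python
from statistics import median

def deviation_flags(baseline_days, today,
                    rhr_thresh=4.0, sleep_thresh=0.75, act_thresh=0.25):
    """Flag deviations from an individual's own baseline.

    baseline_days: list of dicts with 'rhr' (resting heart rate, bpm),
    'sleep' (hours) and 'steps'; today: one such dict for the current day.
    """
    rhr0 = median(d['rhr'] for d in baseline_days)
    sleep0 = median(d['sleep'] for d in baseline_days)
    steps0 = median(d['steps'] for d in baseline_days)
    return {
        'elevated_rhr': today['rhr'] - rhr0 > rhr_thresh,
        'more_sleep': today['sleep'] - sleep0 > sleep_thresh,
        'less_activity': today['steps'] < (1 - act_thresh) * steps0,
    }

def risk_score(flags, has_symptoms):
    # naive score: each physiological deviation counts once, symptoms twice
    return sum(flags.values()) + (2 if has_symptoms else 0)
```

The point of the individual baseline is that a resting heart rate of 68 bpm is unremarkable in the population but notable for someone who normally sits at 60.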


    “One of the greatest challenges in stopping COVID-19 from spreading is the ability to quickly identify, trace and isolate infected individuals,” says Giorgio Quer, PhD, director of artificial intelligence at Scripps Research Translational Institute and first author of the study. “Early identification of those who are pre-symptomatic or even asymptomatic would be especially valuable, as people may potentially be even more infectious during this period. That’s the ultimate goal.”
    For the study, the team used health data from fitness wearables and other devices to identify — with roughly 80% prediction accuracy — whether a person who reported symptoms was likely to have COVID-19. This is a significant improvement over models that evaluate self-reported symptoms alone.
    As of June 7, 30,529 individuals had enrolled in the study, with representation from every U.S. state. Of these, 3,811 reported symptoms, 54 tested positive for the coronavirus and 279 tested negative. More sleep and less activity than an individual’s normal levels were significant factors in predicting coronavirus infection.
    The predictive model under development in DETECT might someday help public health officials spot coronavirus hotspots early. It also may encourage people who are potentially infected to immediately seek diagnostic testing and, if necessary, quarantine themselves to avoid spreading the virus.
    “We know that common screening practices for the coronavirus can easily miss pre-symptomatic or asymptomatic cases,” says Jennifer Radin, PhD, an epidemiologist at the Scripps Research Translational Institute who is leading the study. “And infrequent viral tests, with often-delayed results, don’t offer the real-time insights we need to control the spread of the virus.”
    The DETECT team is now actively recruiting more participants for this important research. The goal is to enroll more than 100,000 people, which will help the scientists improve their predictions of who will get sick, including those who are asymptomatic. In addition, Radin and her colleagues plan to incorporate data from frontline essential workers who are at especially high risk of infection.
    Learn more about DETECT at detectstudy.org.
    The study, “Wearable Sensor Data and Self-reported Symptoms for COVID-19 Detection,” is authored by Giorgio Quer, Jennifer M. Radin, Matteo Gadaleta, Katie Baca-Motes, Lauren Ariniello, Edward Ramos, Vik Kheterpal, Eric J. Topol and Steven R. Steinhubl.
    Funding for the research was provided by the National Center for Advancing Translational Sciences at the National Institutes of Health [UL1TR00255].

  •

    Corporations directing our attention online more than we realize

    It’s still easy to think we’re in control when browsing the internet, but a new study argues much of that is ‘an illusion.’ Corporations are ‘nudging’ us online more than we realize, and often in hidden ways. Researchers analyzed click-stream data on a million people over one month of internet use to find common browsing sequences, then connected that with site and platform ownership and partnerships, as well as site design and other factors.
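The study's pipeline isn't described in detail here, but its first step — finding common browsing sequences in click-stream data — can be sketched as a simple k-gram count. The site names and parameters below are made up for illustration:

```python
from collections import Counter

def common_sequences(clickstreams, k=3, top=3):
    """Return the `top` most common length-k browsing sequences
    (consecutive site k-grams) across a set of users' click-streams."""
    counts = Counter()
    for stream in clickstreams:
        for i in range(len(stream) - k + 1):
            counts[tuple(stream[i:i + k])] += 1
    return counts.most_common(top)
```

In the study, sequences found this way were then cross-referenced with who owns or partners with each site, to see how often a "free" choice of next page stays inside one corporate ecosystem.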

  •

    Trust levels in AI predicted by people's relationship style

    A University of Kansas interdisciplinary team led by relationship psychologist Omri Gillath has published a new paper in the journal Computers in Human Behavior showing people’s trust in artificial intelligence (AI) is tied to their relationship or attachment style.
    The research indicates for the first time that people who are anxious about their relationships with humans tend to have less trust in AI as well. Importantly, the research also suggests trust in artificial intelligence can be increased by reminding people of their secure relationships with other humans.
    Grand View Research estimated the global artificial-intelligence market at $39.9 billion in 2019, projected to expand at a compound annual growth rate of 42.2% from 2020 to 2027. However, lack of trust remains a key obstacle to adopting new artificial intelligence technologies.
    The new research by Gillath and colleagues suggests new ways to boost trust in artificial intelligence.
    In three studies, attachment style, thought to play a central role in romantic and parent-child relationships, was shown also to affect people’s trust in artificial intelligence. Some of the research’s key findings:
    People’s attachment anxiety predicts less trust in artificial intelligence.
    Enhancing attachment anxiety reduced trust in artificial intelligence.
    Conversely, enhancing attachment security increased trust in artificial intelligence.
    These effects are unique to attachment security and were not found with exposure to positive-affect cues.
    “Most research on trust in artificial intelligence focuses on cognitive ways to boost trust. Here we took a different approach by focusing on a ‘relational affective’ route to boost trust, seeing AI as a partner or a team member rather than a device,” said Gillath, professor of psychology at KU.
    “Finding associations between one’s attachment style — an individual difference representing the way people feel, think and behave in close relationships — and her trust in AI paves the way to new understandings and potentially new interventions to induce trust.”
    The research team includes investigators from a wide array of disciplines, including psychology, engineering, business and medicine. This interdisciplinary approach provides a new perspective on artificial intelligence, trust and associations with relational and affective factors.
    “The findings show you can predict and increase people’s trust levels in non-humans based on their early relationships with humans,” Gillath said. “This has the potential to improve adoption of new technologies and the integration of AI in the workplace.”

    Story Source:
    Materials provided by University of Kansas. Note: Content may be edited for style and length.

  •

    Researchers take a stand on algorithm design for job centers: Landing a job isn't always the right goal

    Imagine that you are a job consultant. You are sitting across from your client, an unemployed individual.
    After you locate them in the system, the following text pops up on the computer screen: ‘increased risk of long-term unemployment’.
    Such assessments are made by an algorithm that, via data on the citizen’s gender, age, residence, education, income, ethnicity, history of illness, etc., spits out an estimate of how long the person — compared to other people from similar backgrounds — is expected to remain in the system and receive benefits.
    But is it reasonable to characterize individual citizens on the basis of what those with similar backgrounds have managed in their job searches? According to a new study from the University of Copenhagen, no.
    “You have to understand that people are human. We get older, become ill and experience tragedies and triumphs. So instead of trying to predict risks for individuals, we ought to look at implementing improved and more transparent courses in the job center arena,” says Naja Holten Møller, an assistant professor at the Department of Computer Science, and one of the researchers behind the study.
    Together with two colleagues from the same department, Professor Thomas Hildebrandt and Professor Irina Shklovski, Møller has explored possible alternatives to using algorithms that predict job readiness for unemployed individuals as well as the ethical aspects that may arise.


    “We studied how to develop algorithms in an ethical and responsible manner, where the goals determined for the algorithm make sense to job consultants as well. Here, it is crucial to find a balance, where the unemployed individual’s current situation is assessed by a job consultant, while at the same time, one learns from similar trajectories using an algorithm,” says Naja Holten Møller.
    Job consultants need to help create the algorithm
    The use of job-search algorithms is a scenario that has not been well thought through. Nevertheless, the Danish Agency for Labour Market and Recruitment has already rolled out this type of algorithm to predict the risk of long-term unemployment among the citizenry — despite criticism from several data law experts.
    “Algorithms used in the public sphere must not harm citizens, obviously. By challenging the scenario and the very assumption that the goal of an unemployed person at a job centre is always to land a job, we are better equipped to understand ethical challenges. Unemployment can have many causes. Thus, the study shows that a quick clarification of time frames for the most vulnerable citizens may be a better goal. By doing so, we can avoid the deployment of algorithms that do great harm,” explains Naja Holten Møller.
    The job consultants surveyed in the study expressed concern about how the algorithm’s assessment would affect their own judgment, specifically in relation to vulnerable citizens.


    “A framework must be established in which job consultants can have a real influence on the underlying targets that guide the algorithm. Accomplishing this is difficult and will take time, but is crucial for the outcome. At the same time, it should be kept in mind that algorithms which help make decisions can greatly alter the work of job consultants. Thus, an ethical approach involves considering their advice,” explains Naja Holten Møller.
    We must consider the ethical aspects
    While algorithms can be useful for providing an idea of, for example, how long an individual citizen might expect to be unemployed, this does not mean that it is ethically justifiable to use such predictions in job centers, points out Naja Holten Møller.
    “There is a dream that the algorithm can identify patterns that others are oblivious to. Perhaps it can seem that, for those who have experienced a personal tragedy, a particular path through the system is best. For example, the algorithm could determine that because you’ve been unemployed due to illness or a divorce, your ability to avoid long-term unemployment depends on such and such,” she says, concluding:
    “But what will we do with this information, and can it be deployed in a sensible way to make better decisions? Job consultants are often able to assess for themselves whether a person is likely to be unemployed for an extended period of time. These assessments are shaped by in-person meetings, professionalism and experience — and it is here, within these meetings, that an ethical development of new systems for the public can best be spawned.”

  •

    Graphene-based memory resistors show promise for brain-based computing

    As progress in traditional computing slows, new forms of computing are coming to the forefront. At Penn State, a team of engineers is attempting to pioneer a type of computing that mimics the efficiency of the brain’s neural networks while exploiting the brain’s analog nature.
    Modern computing is digital, made up of two states, on-off or one and zero. An analog computer, like the brain, has many possible states. It is the difference between flipping a light switch on or off and turning a dimmer switch to varying amounts of lighting.
    Neuromorphic or brain-inspired computing has been studied for more than 40 years, according to Saptarshi Das, the team leader and Penn State assistant professor of engineering science and mechanics. What’s new is that as the limits of digital computing have been reached, the need for high-speed image processing, for instance for self-driving cars, has grown. The rise of big data, which requires types of pattern recognition for which the brain architecture is particularly well suited, is another driver in the pursuit of neuromorphic computing.
    “We have powerful computers, no doubt about that, the problem is you have to store the memory in one place and do the computing somewhere else,” Das said.
    The shuttling of this data from memory to logic and back again takes a lot of energy and slows the speed of computing. In addition, this computer architecture requires a lot of space. If the computation and memory storage could be located in the same space, this bottleneck could be eliminated.
    “We are creating artificial neural networks, which seek to emulate the energy and area efficiencies of the brain,” explained Thomas Shranghamer, a doctoral student in the Das group and first author on a paper recently published in Nature Communications. “The brain is so compact it can fit on top of your shoulders, whereas a modern supercomputer takes up a space the size of two or three tennis courts.”
    Like the reconfigurable synapses connecting neurons in the brain, the artificial neural networks the team is building can be reconfigured by applying a brief electric field to a sheet of graphene, a one-atom-thick layer of carbon. In this work they show at least 16 possible memory states, as opposed to the two in most oxide-based memristors, or memory resistors.
    “What we have shown is that we can control a large number of memory states with precision using simple graphene field effect transistors,” Das said.
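The paper's device physics isn't reproduced here, but the practical consequence of 16 memory states — synaptic weights snapped to one of 16 conductance levels instead of just two — can be sketched as follows. The number of levels and the weight range are illustrative assumptions:

```python
def quantize(w, levels=16, w_min=-1.0, w_max=1.0):
    """Snap an analog weight to one of `levels` evenly spaced states,
    mimicking a multi-state memristor (levels=2 would be binary)."""
    w = min(max(w, w_min), w_max)            # clip to the device's range
    step = (w_max - w_min) / (levels - 1)
    return w_min + round((w - w_min) / step) * step

def program(w, delta, levels=16):
    """Apply an analog update, then snap to the nearest hardware state."""
    return quantize(w + delta, levels)
```

With levels=2 this reduces to a binary memristor; the 16-level version gives finer, analog-like weight resolution in the same device footprint, which is what makes in-memory neural computation attractive.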
    The team thinks that ramping up this technology to a commercial scale is feasible. With many of the largest semiconductor companies actively pursuing neuromorphic computing, Das believes they will find this work of interest.
    The Army Research Office supported this work. The team has filed for a patent on this invention.

    Story Source:
    Materials provided by Penn State. Original written by Walt Mills. Note: Content may be edited for style and length.

  •

    Physicists circumvent centuries-old theory to cancel magnetic fields

    A team of scientists including two physicists at the University of Sussex has found a way to circumvent a 178-year-old theorem, allowing them to effectively cancel magnetic fields at a distance. They are the first to do so in a way that has practical benefits.
    The work is hoped to have a wide variety of applications. For example, patients with neurological disorders such as Alzheimer’s or Parkinson’s might in future receive a more accurate diagnosis. With the ability to cancel out ‘noisy’ external magnetic fields, doctors using magnetic field scanners will be able to see more accurately what is happening in the brain.
    The study “Tailoring magnetic fields in inaccessible regions” is published in Physical Review Letters. It is an international collaboration between Dr Mark Bason and Jordi Prat-Camps at the University of Sussex, and Rosa Mach-Batlle and Nuria Del-Valle from the Universitat Autonoma de Barcelona and other institutions.
    “Earnshaw’s Theorem,” dating from 1842, limits the ability to shape magnetic fields. The team calculated an innovative way to circumvent this theorem in order to effectively cancel other magnetic fields that can confuse readings in experiments.
    In practical terms, they achieved this by creating a device composed of a careful arrangement of electrical wires, which generates additional fields that counteract the effects of the unwanted magnetic field. Scientists have been struggling with this challenge for years, but the team has now found a new strategy to deal with these problematic fields. While a similar effect had been achieved at much higher frequencies, this is the first time it has been achieved at low frequencies and in static fields — such as biological frequencies — which will unlock a host of useful applications.
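The paper's wire arrangement isn't specified in this summary, but the superposition principle it builds on — add a source whose field cancels the unwanted one at a target location — can be sketched for the textbook case of two long straight wires. This is an illustration of point-wise cancellation only, not the paper's actual geometry:

```python
MU0_OVER_2PI = 2e-7  # mu0 / (2*pi), in T*m/A

def field_from_wire(current, distance):
    """Field magnitude of a long straight wire: B = mu0*I / (2*pi*d)."""
    return MU0_OVER_2PI * current / distance

def cancelling_current(i1, d1, d2):
    """Current for a second, oppositely oriented wire at distance d2 whose
    field cancels wire 1's field (current i1, distance d1) at the target."""
    return i1 * d2 / d1
```

Cancelling a field at a single point is the easy part; the hard problem the paper addresses is tailoring the field throughout an inaccessible region, which is exactly what Earnshaw's theorem constrains.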
    Other possible future applications for this work include:
    Quantum technology and quantum computing, in which ‘noise’ from exterior magnetic fields can affect experimental readings
    Neuroimaging, in which a technique called ‘transcranial magnetic stimulation’ activates different areas of the brain through magnetic fields. Using the techniques in this paper, doctors might be able to more carefully address areas of the brain needing stimulation.
    Biomedicine, to better control and manipulate nanorobots and magnetic nanoparticles that are moved inside a body by means of external magnetic fields. Potential applications for this development include improved drug delivery and magnetic hyperthermia therapies.
    Dr Rosa Mach-Batlle, the lead author on the paper from the Universitat Autonoma de Barcelona, said: “Starting from the fundamental question of whether it was possible or not to create a magnetic source at a distance, we came up with a strategy for controlling magnetism remotely that we believe could have a significant impact in technologies relying on the magnetic field distribution in inaccessible regions, such as inside of a human body.”
    Dr Mark Bason from the School of Mathematical and Physical Sciences at the University of Sussex said: “We’ve discovered a way to circumvent Earnshaw’s theorem which many people didn’t imagine was possible. As a physicist, that’s pretty exciting. But it’s not just a theoretical exercise as our research might lead to some really important applications: more accurate diagnosis for Motor Neurone Disease patients in future, for example, better understanding of dementia in the brain, or speeding the development of quantum technology.”

    Story Source:
    Materials provided by University of Sussex. Note: Content may be edited for style and length.

  •

    Forecasting elections with a model of infectious diseases

    Forecasting elections is a high-stakes problem. Politicians and voters alike are often desperate to know the outcome of a close race, but providing them with incomplete or inaccurate predictions can be misleading. And election forecasting is already an innately challenging endeavor — the modeling process is rife with uncertainty, incomplete information, and subjective choices, all of which must be deftly handled. Political pundits and researchers have implemented a number of successful approaches for forecasting election outcomes, with varying degrees of transparency and complexity. However, election forecasts can be difficult to interpret and may leave many questions unanswered after close races unfold.
    These challenges led researchers to wonder if applying a disease model to elections could widen the community involved in political forecasting. In a paper publishing today in SIAM Review, Alexandria Volkening (Northwestern University), Daniel F. Linder (Augusta University), Mason A. Porter (University of California, Los Angeles), and Grzegorz A. Rempala (The Ohio State University) borrowed ideas from epidemiology to develop a new method for forecasting elections. The team hoped to expand the community that engages with polling data and raise research questions from a new perspective; the multidisciplinary nature of their infectious disease model was a virtue in this regard. “Our work is entirely open-source,” Porter said. “Hopefully that will encourage others to further build on our ideas and develop their own methods for forecasting elections.”
    In their new paper, the authors propose a data-driven mathematical model of the evolution of political opinions during U.S. elections. They found their model’s parameters using aggregated polling data, which enabled them to track the percentages of Democratic and Republican voters over time and forecast the vote margins in each state. The authors emphasized simplicity and transparency in their approach and consider these traits to be particular strengths of their model. “Complicated models need to account for uncertainty in many parameters at once,” Rempala said.
    This study predominantly focused on the influence that voters in different states may exert on each other, since accurately accounting for interactions between states is crucial for the production of reliable forecasts. The election outcomes in states with similar demographics are often correlated, and states may also influence each other asymmetrically; for example, the voters in Ohio may more strongly influence the voters in Pennsylvania than the reverse. The strength of a state’s influence can depend on a number of factors, including the amount of time that candidates spend campaigning there and the state’s coverage in the news.
    To develop their forecasting approach, the team repurposed ideas from the compartmental modeling of biological diseases. Mathematicians often utilize compartmental models — which categorize individuals into a few distinct types (i.e., compartments) — to examine the spread of infectious diseases like influenza and COVID-19. A widely studied compartmental model called the susceptible-infected-susceptible (SIS) model divides a population into two groups: those who are susceptible to becoming sick and those who are currently infected. The SIS model then tracks the fractions of susceptible and infected individuals in a community over time, based on the factors of transmission and recovery. When an infected person interacts with a susceptible person, the susceptible individual may become infected. An infected person also has a certain chance of recovering and becoming susceptible again.
    Because there are two major political parties in the U.S., the authors employed a modified version of an SIS model with two types of infections. “We used techniques from mathematical epidemiology because they gave us a means of framing relationships between states in a familiar, multidisciplinary way,” Volkening said. While elections and disease dynamics are certainly different, the researchers treated Democratic and Republican voting inclinations as two possible kinds of “infections” that can spread between states. Undecided, independent, or minor-party voters all fit under the category of susceptible individuals. “Infection” was interpreted as adopting Democratic or Republican opinions, and “recovery” represented the turnover of committed voters to undecided ones.
    In the model, committed voters can transmit their opinions to undecided voters, but the opposite is not true. The researchers took a broad view of transmission, interpreting opinion persuasion as occurring through both direct communication between voters and more indirect methods like campaigning, news coverage, and debates. Individuals can interact and lead to other people changing their opinions both within and between states.
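The article doesn't print the model's equations, but a single-region toy version of the two-"infection" SIS dynamics just described can be sketched with a simple Euler integration. The parameter values below are illustrative, not the authors' fitted ones, and the real model additionally couples states to one another:

```python
def simulate(days, D0, R0, betaD, betaR, gammaD, gammaR, dt=0.1):
    """Euler-integrate a two-'infection' SIS model for one region.

    S = undecided fraction, D/R = committed Democratic/Republican fractions.
    Committed voters 'infect' undecided ones at rates betaD, betaR;
    'recovery' (gammaD, gammaR) returns a committed voter to undecided.
    """
    D, R = D0, R0
    for _ in range(int(days / dt)):
        S = 1.0 - D - R                      # susceptible = undecided
        dD = (betaD * S * D - gammaD * D) * dt
        dR = (betaR * S * R - gammaR * R) * dt
        D += dD
        R += dR
    return D, R
```

With equal parameters the two parties stay exactly tied; giving one side a higher transmission rate lets it pull ahead over the campaign year, which is the kind of asymmetry the fitted model extracts from polling data.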
    To determine the values of their models’ mathematical parameters, the authors used polling data on senatorial, gubernatorial, and presidential races from HuffPost Pollster for 2012 and 2016 and RealClearPolitics for 2018. They fit the model to the data for each individual race and simulated the evolution of opinions in the year leading up to each election by tracking the fractions of undecided, Democratic, and Republican voters in each state from January until Election Day. The researchers simulated their final forecasts as if they made them on the eve of Election Day, including all of the polling data but omitting the election results.
    Despite its basis in an unconventional field for election forecasting — namely, epidemiology — the resulting model performed surprisingly well. It forecast the 2012 and 2016 U.S. races for governor, Senate, and presidential office with a similar success rate as popular analyst sites FiveThirtyEight and Sabato’s Crystal Ball. For example, the authors’ success rate for predicting party outcomes at the state level in the 2012 and 2016 presidential elections was 94.1 percent, while FiveThirtyEight had a success rate of 95.1 percent and Sabato’s Crystal Ball had a success rate of 93.1 percent. “We were all initially surprised that a disease-transmission model could produce meaningful forecasts of elections,” Volkening said.
    After establishing their model’s capability to forecast outcomes on the eve of Election Day, the authors sought to determine how early the model could create accurate forecasts. Predictions that are made in the weeks and months before Election Day are particularly meaningful, but producing early forecasts is challenging because fewer polling data are available for model training. By employing polling data from the 2018 senatorial races, the team’s model was able to produce stable forecasts from early August onward with the same success rate as FiveThirtyEight’s final forecasts for those races.
    Despite clear differences between contagion and voting dynamics, this study suggests a valuable approach for describing how political opinions change across states. Volkening is currently applying this model — in collaboration with Northwestern University undergraduate students Samuel Chian, William L. He, and Christopher M. Lee — to forecast the 2020 U.S. presidential, senatorial, and gubernatorial elections. “This project has made me realize that it’s challenging to judge forecasts, especially when some elections are decided by a vote margin of less than one percent,” Volkening said. “The fact that our model does well is exciting, since there are many ways to make it more realistic in the future. We hope that our work encourages folks to think more critically about how they judge forecasts and get involved in election forecasting themselves.”