More stories

  • Virtual follow-up care is more convenient and just as beneficial to surgical patients

    Surgical patients who participate in virtual follow-up visits after their operations spend a similar amount of time with surgical team members as those who meet face-to-face. Moreover, these patients benefit by spending less time waiting at and traveling to the clinic for in-person appointments, according to research findings presented at the virtual American College of Surgeons Clinical Congress 2020.
    “I think it’s really valuable for patients to understand that, in the virtual space scenario, they are still going to get quality time with their surgical team,” said lead study author Caroline Reinke, MD, FACS, associate professor of surgery at Atrium Health in Charlotte, N.C. “A virtual appointment does not shorten that time, and there is still an ability to answer questions, connect, and address ongoing medical care.”
    Due to the Coronavirus Disease 2019 (COVID-19) pandemic and the widespread adoption of technology, many surgical patients are being offered virtual appointments in place of traditional in-person visits. The researchers say this is one of the first studies to look at how patients spend their time in post-operative virtual visits compared with face-to-face consultations.
    The study was a non-inferiority, randomized controlled trial involving more than 400 patients who underwent laparoscopic appendectomy or cholecystectomy at two hospitals in Charlotte, N.C., and were randomized 2:1 to a post-discharge virtual visit or an in-person visit. The study began in August 2017 but was put on hold in March 2020 due to COVID-19.
    “Other studies have looked at the total visit time, but they haven’t been able to break down the specific amount of time the patient spends with the provider. And we wanted to know if that was the same or different between a virtual visit and an in-person visit,” Dr. Reinke said. “We wanted to get down to the nitty gritty of how much face time was actually being spent between the surgical team member and the patient.”
    Researchers tracked the total time patients spent checking in, waiting in the waiting room and exam room, meeting with the surgical team member, and being discharged after the exam. For in-person visits, on-site waiting time and an estimated drive time were factored into the overall time commitment.

    Just 64 percent of patients completed the follow-up visit. “Sometimes, patients are doing so well after minimally invasive surgery that about 30 percent of these patients don’t show up for a post-operative visit,” Dr. Reinke said.
    Overall, results showed that the total clinic time was longer for in-person visits than virtual visits (58 minutes vs. 19 minutes). However, patients in both groups spent the same amount of face time with a member of their surgical team (8.3 minutes vs. 8.2 minutes) discussing their post-operative recovery.
    “I was pleasantly surprised that the amount of time patients spent with the surgical team member was the same, because one of the main concerns with virtual visits is that patients feel disconnected and that there isn’t as much value in it,” Dr. Reinke said.
    Importantly, patients placed a high value on convenience and flexibility. “We received overwhelmingly positive responses to this patient-centered care option,” Dr. Reinke said. “Patients were able to do the post-operative visit at work or at home while caring for children, without having to disrupt their day in such a significant way.”
    The researchers also found that patients embraced the virtual scenario. Satisfaction rates were similar in the two groups (94 percent vs. 98 percent).

    In addition, wait time was much less for patients who received virtual care. “Even for virtual visits, the amount of time the patients spent checking in and waiting was about 55 percent of total time. Because virtual visits have the same regulations as in-person visits, even if you take out the components of waiting room and patient flow within the clinic, patients are still spending about half of their time on the logistics of check-in,” Dr. Reinke said. “Yet, with virtual visits, there is still much less time spent waiting, about 80 percent less time.”
    Still, some patients are not comfortable with the technology. The number of patients who couldn’t or didn’t want to do a virtual visit was higher than expected, according to the authors.
    “I think there are some patients that would really just rather come in and shake someone’s hand,” Dr. Reinke said. “I think for surgery it’s a little bit different, because with surgical care there are incisions to check on. However, we were able to check on incisions pretty easily, having patients show us their incisions virtually on the video screen.”
    This research was supported by the American College of Surgeons Franklin H. Martin Faculty Research Fellowship. “FACS” designates that a surgeon is a Fellow of the American College of Surgeons.
    Citation: The Value of Time: Analysis of Surgical Post-Discharge Virtual vs. In-Person Visits. Scientific Forum, American College of Surgeons Clinical Congress 2020, October 3-7, 2020.

  • New model examines how societal influences affect U.S. political opinions

    Northwestern University researchers have developed the first quantitative model that captures how politicized environments affect U.S. political opinion formation and evolution.
    Using the model, the researchers seek to understand how populations change their opinions when exposed to political content, such as news media, campaign ads and ordinary personal exchanges. The math-based framework is flexible, allowing future data to be incorporated as it becomes available.
    “It’s really powerful to understand how people are influenced by the content that they see,” said David Sabin-Miller, a Northwestern graduate student who led the study. “It could help us understand how populations become polarized, which would be hugely beneficial.”
    “Quantitative models like this allow us to run computational experiments,” added Northwestern’s Daniel Abrams, the study’s senior author. “We could simulate how various interventions might help fix extreme polarization to promote consensus.”
    The paper will be published on Thursday (Oct. 1) in the journal Physical Review Research.
    Abrams is an associate professor of engineering sciences and applied mathematics in Northwestern’s McCormick School of Engineering. Sabin-Miller is a graduate student in Abrams’ laboratory.

    Researchers have been modeling social behavior for hundreds of years. But most modern quantitative models rely on network science, which simulates person-to-person human interactions.
    The Northwestern team takes a different, but complementary, approach. They break down all interactions into perceptions and reactions. A perception takes into account how people perceive a politicized experience based on their current ideology. A far-right Republican, for example, likely will perceive the same experience differently than a far-left Democrat.
    After perceiving new ideas or information, people might change their opinions based on three established psychological effects: attraction/repulsion, tribalism and perceptual filtering. Northwestern’s quantitative model incorporates all three of these and examines their impact.
    “Typically, ideas that are similar to your beliefs can be convincing or attractive,” Sabin-Miller said. “But once ideas go past a discomfort point, people start rejecting what they see or hear. We call this the ‘repulsion distance,’ and we are trying to define that limit through modeling.”
    People also react differently depending on whether the new idea or information comes from a trusted source. In an effect known as tribalism, people tend to give the benefit of the doubt to a perceived ally. In perceptual filtering, people — either knowingly through direct decisions or unknowingly through algorithms that curate content — determine what content they see.
    “Perceptual filtering is the ‘media bubble’ that people talk about,” Abrams explained. “You’re more likely to see things that are consistent with your existing beliefs.”
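    The paper’s exact equations aren’t given in this summary, so the following is only a minimal Python sketch of the three effects; every parameter, threshold and functional form here is an illustrative assumption, not a value from the study.

    ```python
    import random

    REPULSION_DISTANCE = 1.5  # discomfort point beyond which ideas repel (assumed value)
    STEP = 0.05               # how far one experience moves an opinion (assumed value)

    def react(opinion, idea, same_tribe):
        """Attraction/repulsion with a tribal trust weight (all forms assumed)."""
        gap = idea - opinion
        trust = 1.0 if same_tribe else 0.5        # tribalism: allies get the benefit of the doubt
        if abs(gap) <= REPULSION_DISTANCE:
            return opinion + STEP * trust * gap   # nearby ideas attract
        return opinion - STEP * trust * (1 if gap > 0 else -1)  # distant ideas repel

    def perceive(opinion, ideas, bubble=2.0):
        """Perceptual filtering: people mostly see content near their own position."""
        return [i for i in ideas if abs(i - opinion) <= bubble]

    # Evolve 1,000 opinions on a left-right axis from -5 to +5.
    population = [random.uniform(-5, 5) for _ in range(1000)]
    for _ in range(100):
        content = [random.uniform(-5, 5) for _ in range(50)]  # the day's political content
        new_population = []
        for o in population:
            seen = perceive(o, content)
            if seen:
                o = react(o, random.choice(seen), same_tribe=random.random() < 0.7)
            new_population.append(o)
        population = new_population
    ```

    Running many such steps over a whole population is what lets a model of this kind study distribution-level outcomes such as polarization, in the thermodynamic spirit the authors describe below.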
    Abrams and Sabin-Miller liken their new model to thermodynamics in physics — treating individual people like gas molecules that distribute around a room.
    “Thermodynamics does not focus on individual particles but the average of a whole system, which includes many, many particles,” Abrams said. “We hope to do the same thing with political opinions. Even though we can’t say how or when one individual’s opinion might change, we can look at how the whole population changes, on average.”

    Story Source:
    Materials provided by Northwestern University. Original written by Amanda Morris. Note: Content may be edited for style and length.

  • New tool shows main highways of disease development

    As people get older, they often jump from disease to disease and carry the burden of multiple chronic diseases at once. But is there a pattern to the way diseases follow one another? Over the past six years, Danish researchers have developed a comprehensive tool, the Danish Disease Trajectory Browser, that uses 25 years of public health data from Danish patients to explore what they call the main highways of disease development.
    “A lot of research focus is on investigating one disease at a time. We try to add a time perspective and look at multiple diseases following each other to discover where are the most common trajectories — what are the disease highways that we as people encounter,” says professor Søren Brunak from the Novo Nordisk Foundation Center for Protein Research at University of Copenhagen.
    To illustrate the use of the tool, the research group looked at data for Down syndrome patients and showed, as expected, that these patients are in general diagnosed with Alzheimer’s disease at an earlier age than others. Other frequently co-occurring diseases are displayed as well.
    The Danish Disease Trajectory Browser is published in Nature Communications.
    Making health data accessible for research
    In general, there is a barrier to working with health data in research, both in terms of getting approval from authorities to handle patient data and because researchers need specific technical skills to extract meaningful information from the data.

    “We wanted to make an easily accessible tool for researchers and health professionals where they don’t necessarily need to know all the details. The statistical summary data on disease to disease jumps in the tool are not person-sensitive. We compute statistics over many patients and have boiled it down to data points that visualize how often patients with one disease get a specific other disease at a later point. So we are focusing on the sequence of diseases,” says Søren Brunak.
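    The browser itself is built on 25 years of registry data, but the basic bookkeeping it describes, counting how often one diagnosis follows another across many patients, can be illustrated in a few lines of Python (the diagnosis codes and patient histories below are invented):

    ```python
    from collections import Counter
    from itertools import combinations

    # Each patient is a time-ordered list of diagnosis codes (invented examples).
    patients = [
        ["I10", "E11", "N18"],  # hypertension -> type 2 diabetes -> chronic kidney disease
        ["I10", "I25", "I50"],  # hypertension -> ischaemic heart disease -> heart failure
        ["E11", "N18"],
        ["I10", "E11", "N18"],
    ]

    # Count directed disease pairs: (A, B) means B was diagnosed after A.
    pair_counts = Counter()
    for trajectory in patients:
        for a, b in combinations(trajectory, 2):  # combinations preserve temporal order
            pair_counts[(a, b)] += 1

    # The most frequent directed pairs are candidate segments of a "disease highway."
    for (a, b), n in pair_counts.most_common(3):
        print(f"{a} -> {b}: {n} patients")
    ```

    This is a sketch only: the actual browser computes summary statistics over the full patient population and presents them per disease pair, as described above.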
    The Danish Disease Trajectory Browser is freely available to the scientific community and uses the WHO’s disease codes. Even though there are regional differences in disease patterns, the tool is highly relevant in an international context, for example to compare how fast diseases progress in different countries.
    Disease trajectories can help in personalized medicine
    For Søren Brunak the tool has a great potential in personalized medicine.
    “In personalized medicine a part of the job is to divide patients into subgroups that will benefit most from a specific treatment. By knowing the disease trajectories you can create subgroups of patients not just by their current disease, but based on their previous conditions and expected future conditions as well. In that way you find different subgroups of patients that may need different treatment strategies,” Søren Brunak explains.
    Currently, the Disease Trajectory Browser contains data from 1994 to 2018, and it will be updated continuously as new data become available.
    The Danish Disease Trajectory Browser is freely accessible here: http://dtb.cpr.ku.dk

    Story Source:
    Materials provided by the University of Copenhagen, Faculty of Health and Medical Sciences. Note: Content may be edited for style and length.

  • Tool helps clear biases from computer vision

    Researchers at Princeton University have developed a tool that flags potential biases in sets of images used to train artificial intelligence (AI) systems. The work is part of a larger effort to remedy and prevent the biases that have crept into AI systems that influence everything from credit services to courtroom sentencing programs.
    Although the sources of bias in AI systems are varied, one major cause is stereotypical images contained in large sets of images collected from online sources that engineers use to develop computer vision, a branch of AI that allows computers to recognize people, objects and actions. Because the foundation of computer vision is built on these data sets, images that reflect societal stereotypes and biases can unintentionally influence computer vision models.
    To help stem this problem at its source, researchers in the Princeton Visual AI Lab have developed an open-source tool that automatically uncovers potential biases in visual data sets. The tool allows data set creators and users to correct issues of underrepresentation or stereotypical portrayals before image collections are used to train computer vision models. In related work, members of the Visual AI Lab published a comparison of existing methods for preventing biases in computer vision models themselves, and proposed a new, more effective approach to bias mitigation.
    The first tool, called REVISE (REvealing VIsual biaSEs), uses statistical methods to inspect a data set for potential biases or issues of underrepresentation along three dimensions: object-based, gender-based and geography-based. A fully automated tool, REVISE builds on earlier work that involved filtering and balancing a data set’s images in a way that required more direction from the user. The study was presented Aug. 24 at the virtual European Conference on Computer Vision.
    REVISE takes stock of a data set’s content using existing image annotations and measurements such as object counts, the co-occurrence of objects and people, and images’ countries of origin. From these measurements, the tool exposes patterns that differ from the median distribution.
    For example, in one of the tested data sets, REVISE showed that images including both people and flowers differed between males and females: Males more often appeared with flowers in ceremonies or meetings, while females tended to appear in staged settings or paintings. (The analysis was limited to annotations reflecting the perceived binary gender of people appearing in images.)
    Once the tool reveals these sorts of discrepancies, “then there’s the question of whether this is a totally innocuous fact, or if something deeper is happening, and that’s very hard to automate,” said Olga Russakovsky, an assistant professor of computer science and principal investigator of the Visual AI Lab. Russakovsky co-authored the paper with graduate student Angelina Wang and Arvind Narayanan, an associate professor of computer science.
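    REVISE’s statistical checks aren’t reproduced here, but the flavor of a gender-based co-occurrence measurement can be sketched as follows (the annotations and numbers are invented for illustration):

    ```python
    from collections import Counter

    # Toy image annotations: each image lists the objects it contains and the
    # perceived binary gender label of the person in it (invented data).
    annotations = [
        {"objects": {"flower", "suit"}, "gender": "male"},
        {"objects": {"flower", "easel"}, "gender": "female"},
        {"objects": {"flower"}, "gender": "female"},
        {"objects": {"bicycle"}, "gender": "male"},
    ]

    def cooccurrence_ratio(obj):
        """Fraction of images containing `obj` that show each gender."""
        counts = Counter(a["gender"] for a in annotations if obj in a["objects"])
        total = sum(counts.values())
        return {g: n / total for g, n in counts.items()}

    # A ratio far from the data set's overall gender balance flags `obj` for review.
    print(cooccurrence_ratio("flower"))  # {'male': 0.33..., 'female': 0.66...}
    ```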

    For example, REVISE revealed that objects including airplanes, beds and pizzas were more likely to be large in the images including them than a typical object in one of the data sets. Such an issue might not perpetuate societal stereotypes, but could be problematic for training computer vision models. As a remedy, the researchers suggest collecting images of airplanes that also include the labels mountain, desert or sky.
    The underrepresentation of regions of the globe in computer vision data sets, however, is likely to lead to biases in AI algorithms. Consistent with previous analyses, the researchers found that for images’ countries of origin (normalized by population), the United States and European countries were vastly overrepresented in data sets. Beyond this, REVISE showed that for images from other parts of the world, image captions were often not in the local language, suggesting that many of them were captured by tourists and potentially leading to a skewed view of a country.
    Researchers who focus on object detection may overlook issues of fairness in computer vision, said Russakovsky. “However, this geography analysis shows that object recognition can still be quite biased and exclusionary, and can affect different regions and people unequally,” she said.
    “Data set collection practices in computer science haven’t been scrutinized that thoroughly until recently,” said co-author Angelina Wang, a graduate student in computer science. She said images are mostly “scraped from the internet, and people don’t always realize that their images are being used [in data sets]. We should collect images from more diverse groups of people, but when we do, we should be careful that we’re getting the images in a way that is respectful.”
    “Tools and benchmarks are an important step … they allow us to capture these biases earlier in the pipeline and rethink our problem setup and assumptions as well as data collection practices,” said Vicente Ordonez-Roman, an assistant professor of computer science at the University of Virginia who was not involved in the studies. “In computer vision there are some specific challenges regarding representation and the propagation of stereotypes. Works such as those by the Princeton Visual AI Lab help elucidate and bring to the attention of the computer vision community some of these issues and offer strategies to mitigate them.”
    A related study from the Visual AI Lab examined approaches to prevent computer vision models from learning spurious correlations that may reflect biases, such as overpredicting activities like cooking in images of women, or computer programming in images of men. Visual cues such as the fact that zebras are black and white, or basketball players often wear jerseys, contribute to the accuracy of the models, so developing effective models while avoiding problematic correlations is a significant challenge in the field.

    In research presented in June at the virtual Conference on Computer Vision and Pattern Recognition, electrical engineering graduate student Zeyu Wang and colleagues compared four different techniques for mitigating biases in computer vision models.
    They found that a popular technique known as adversarial training, or “fairness through blindness,” harmed the overall performance of image recognition models. In adversarial training, the model cannot consider information about the protected variable — in the study, the researchers used gender as a test case. A different approach, known as domain-independent training, or “fairness through awareness,” performed much better in the team’s analysis.
    “Essentially, this says we’re going to have different frequencies of activities for different genders, and yes, this prediction is going to be gender-dependent, so we’re just going to embrace that,” said Russakovsky.
    The technique outlined in the paper mitigates potential biases by considering the protected attribute separately from other visual cues.
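    In code, the idea of domain-independent training can be sketched roughly as follows: a shared feature extractor feeds one classifier head per protected-attribute value, and the heads’ scores are combined at inference. The sizes and names below are placeholders, not the paper’s implementation.

    ```python
    import torch
    import torch.nn as nn

    class DomainIndependentClassifier(nn.Module):
        """One classifier head per protected-attribute value over shared features."""
        def __init__(self, feature_dim=512, n_classes=10, n_domains=2):
            super().__init__()
            self.backbone = nn.Sequential(nn.Linear(2048, feature_dim), nn.ReLU())
            self.heads = nn.ModuleList(
                nn.Linear(feature_dim, n_classes) for _ in range(n_domains)
            )

        def forward(self, x, domain=None):
            feats = self.backbone(x)
            if domain is not None:            # training: supervise only that domain's head
                return self.heads[domain](feats)
            # Inference: combine scores across domains instead of ignoring the attribute.
            return torch.stack([h(feats) for h in self.heads]).sum(0)

    model = DomainIndependentClassifier()
    logits = model(torch.randn(4, 2048))  # predict without knowing the protected attribute
    ```

    In a real training loop each example would supervise the head matching its own protected attribute; the sketch passes a single domain per batch for brevity.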
    “How we really address the bias issue is a deeper problem, because of course we can see it’s in the data itself,” said Zeyu Wang. “But in the real world, humans can still make good judgments while being aware of our biases” — and computer vision models can be set up to work in a similar way, he said.

  • AI can detect COVID-19 in the lungs like a virtual physician, new study shows

    A University of Central Florida researcher is part of a new study showing that artificial intelligence can be nearly as accurate as a physician in diagnosing COVID-19 in the lungs.
    The study, recently published in Nature Communications, shows the new technique can also overcome some of the challenges of current testing.
    Researchers demonstrated that an AI algorithm could be trained to classify COVID-19 pneumonia in computed tomography (CT) scans with up to 90 percent accuracy, as well as correctly identify positive cases 84 percent of the time and negative cases 93 percent of the time.
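    In standard terms, those last two figures are the model’s sensitivity and specificity. As a quick reminder of how such numbers fall out of a confusion matrix, consider this small calculation (the counts are invented, chosen only to reproduce the reported rates):

    ```python
    # Confusion-matrix counts for a hypothetical test set (invented numbers).
    tp, fn = 84, 16  # COVID-19 scans correctly flagged / missed
    tn, fp = 93, 7   # non-COVID scans correctly cleared / wrongly flagged

    sensitivity = tp / (tp + fn)                # 0.84: positives correctly identified
    specificity = tn / (tn + fp)                # 0.93: negatives correctly identified
    accuracy = (tp + tn) / (tp + fn + tn + fp)  # overall fraction correct
    print(sensitivity, specificity, accuracy)
    ```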
    CT scans offer a deeper insight into COVID-19 diagnosis and progression as compared to the often-used reverse transcription-polymerase chain reaction, or RT-PCR, tests. These tests have high false negative rates, delays in processing and other challenges.
    Another benefit of CT scans is that they can detect COVID-19 in people without symptoms, in those who have early symptoms, during the height of the disease and after symptoms resolve.
    However, CT is not always recommended as a diagnostic tool for COVID-19 because the disease often looks similar to influenza-associated pneumonias on the scans.

    The new UCF co-developed algorithm can overcome this problem by accurately identifying COVID-19 cases, as well as distinguishing them from influenza, thus serving as a great potential aid for physicians, says Ulas Bagci, an assistant professor in UCF’s Department of Computer Science.
    Bagci was a co-author of the study and helped lead the research.
    “We demonstrated that a deep learning-based AI approach can serve as a standardized and objective tool to assist healthcare systems as well as patients,” Bagci says. “It can be used as a complementary test tool in very specific limited populations, and it can be used rapidly and at large scale in the unfortunate event of a recurrent outbreak.”
    Bagci is an expert in developing AI to assist physicians, including using it to detect pancreatic and lung cancers in CT scans.
    He also has two large, National Institutes of Health grants exploring these topics, including $2.5 million for using deep learning to examine pancreatic cystic tumors and more than $2 million to study the use of artificial intelligence for lung cancer screening and diagnosis.

    To perform the study, the researchers trained a computer algorithm to recognize COVID-19 in lung CT scans of 1,280 multinational patients from China, Japan and Italy.
    Then they tested the algorithm on CT scans of 1,337 patients with lung diseases ranging from COVID-19 to cancer and non-COVID pneumonia.
    When they compared the computer’s diagnoses with ones confirmed by physicians, they found that the algorithm was extremely proficient in accurately diagnosing COVID-19 pneumonia in the lungs and distinguishing it from other diseases, especially when examining CT scans in the early stages of disease progression.
    “We showed that robust AI models can achieve up to 90 percent accuracy in independent test populations, maintain high specificity in non-COVID-19 related pneumonias, and demonstrate sufficient generalizability to unseen patient populations and centers,” Bagci says.
    The UCF researcher is a longtime collaborator with study co-authors Baris Turkbey and Bradford J. Wood. Turkbey is an associate research physician at the NIH’s National Cancer Institute Molecular Imaging Branch, and Wood is the director of NIH’s Center for Interventional Oncology and chief of interventional radiology with NIH’s Clinical Center.
    This research was supported with funds from the NIH Center for Interventional Oncology and the Intramural Research Program of the National Institutes of Health, intramural NIH grants, the NIH Intramural Targeted Anti-COVID-19 program, the National Cancer Institute and NIH.
    Bagci received his doctorate in computer science from the University of Nottingham in England and joined UCF’s Department of Computer Science, part of the College of Engineering and Computer Science, in 2015. He is the Science Applications International Corp (SAIC) chair in UCF’s Department of Computer Science and a faculty member of UCF’s Center for Research in Computer Vision. SAIC is a Virginia-based government support and services company.

  • Drugs aren't typically tested on women — artificial intelligence could correct that bias

    Researchers at Columbia University have developed AwareDX — Analysing Women At Risk for Experiencing Drug toXicity — a machine learning algorithm that identifies and predicts differences in adverse drug effects between men and women by analyzing 50 years’ worth of reports in an FDA database. The algorithm, described September 22 in the journal Patterns, automatically corrects for the biases in these data that stem from an overrepresentation of male subjects in clinical research trials.
    Though men and women can have different responses to medications — the sleep aid Ambien, for example, metabolizes more slowly in women, causing next-day grogginess — even doctors may not know about these differences because most clinical trial data are biased toward men. This trickles down to impact prescribing guidelines, drug marketing and, ultimately, patients’ health.
    “Pharma has a history of ignoring complex problems. Traditionally, clinical trials have not even included women in their studies. The old-fashioned way used to be to get a group of healthy guys together to give them the drug, make sure it didn’t kill them, and you’re off to the races. As a result, we have a lot less information about how women respond to drugs than men,” says Nicholas Tatonetti (@nicktatonetti), an associate professor of biomedical informatics at Columbia University and a co-author on the paper. “We haven’t had the ability to evaluate these differences before, or even to quantify them.”
    Tatonetti teamed up with one of his students — Payal Chandak, a senior biomedical informatics major at Columbia University and the other co-author on the paper. Together they developed AwareDX. Because it is a machine learning algorithm, AwareDX can automatically adjust for sex-based biases in a way that would take concerted effort to do manually.
    “Machine learning is definitely a buzzword, but essentially the idea is to correct for these biases before you do any other statistical analysis by building a balanced subset of patients with equal parts men and women for each drug,” says Chandak.
    The algorithm uses data from the FDA Adverse Event Reporting System (FAERS), which contains reports of adverse drug effects from consumers, healthcare providers, and manufacturers all the way back to 1968. AwareDX groups the data into sex-balanced subsets before looking for patterns and trends. To improve the results, the algorithm then repeats the whole process 25 times.
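    Neither the paper’s code nor the FAERS schema is reproduced here, but the balanced-resampling idea can be sketched in a few lines of Python; the column names are assumptions for illustration.

    ```python
    import pandas as pd

    def balanced_subsets(reports: pd.DataFrame, n_repeats: int = 25):
        """Yield sex-balanced resamples of an adverse-event report table.

        The column names ('drug', 'sex') are assumed for this sketch; the real
        AwareDX pipeline works on FAERS report fields.
        """
        for _ in range(n_repeats):
            parts = []
            for _, group in reports.groupby("drug"):
                if group["sex"].nunique() < 2:          # need reports from both sexes
                    continue
                n = group.groupby("sex").size().min()   # size of the smaller sex group
                parts.append(group.groupby("sex").sample(n=n))  # equal parts men and women
            yield pd.concat(parts)

    # Downstream, each balanced subset is mined for sex-associated adverse events,
    # and effects that recur across the repetitions are retained.
    ```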
    The researchers compiled the results into a bank of over 20,000 potential sex-specific drug effects, which can then be verified either by looking back at older data or by conducting new studies down the line. Though there is a lot of work left to do, the researchers have already had success verifying the results for several drugs based on previous genetic research.
    For example, the ABCB1 gene, which affects how much of a drug is usable by the body and for how long, is known to be more active in men than women. Because of this, the researchers expected to see a greater risk of muscle aches for men taking simvastatin — a cholesterol medication — and a greater risk of slowing heart rate for women taking risperidone — an antipsychotic. AwareDX successfully predicted both of these effects.
    “The most exciting thing to me is that not only do we have a database of adverse events that we’ve developed from this FDA resource, but we’ve shown that for some of these events, there is preexisting knowledge of genetic differences between men and women,” says Chandak. “Using that knowledge, we can actually predict different responses that men and women should have and validate our method against those. That gives us a lot of confidence in the method itself.”
    By continuing to verify their results, the researchers hope that the insights from AwareDX will help doctors make more informed choices when prescribing drugs, especially to women. “Doctors actually look at adverse effect information specific to the drug they prescribe. So once this information is studied further and corroborated, it’s actually going to impact drug prescriptions and people’s health,” says Tatonetti.
    This work was supported by the National Institutes of Health.

    Story Source:
    Materials provided by Cell Press. Note: Content may be edited for style and length.

  • Screen time can change visual perception — and that's not necessarily bad

    The coronavirus pandemic has shifted many of our interactions online, with Zoom video calls replacing in-person classes, work meetings, conferences and other events. Will all that screen time damage our vision?
    Maybe not. It turns out that our visual perception is highly adaptable, according to research from Psychology Professor and Cognitive and Brain Sciences Coordinator Peter Gerhardstein’s lab at Binghamton University.
    Gerhardstein, Daniel Hipp and Sara Olsen — his former doctoral students — will publish “Mind-Craft: Exploring the Effect of Digital Visual Experience on Changes in Orientation Sensitivity in Visual Contour Perception,” in an upcoming issue of the academic journal Perception. Hipp, the lead author and main originator of the research, is now at the VA Eastern Colorado Health Care System’s Laboratory for Clinical and Translational Research. Olsen, who designed stimuli for the research and aided in the analysis of the results, is now at the University of Minnesota’s Department of Psychiatry.
    “The finding in the work is that the human perceptual system rapidly adjusts to a substantive alteration in the statistics of the visual world, which, as we show, is what happens when someone is playing video games,” Gerhardstein said.
    The experiments
    The research focuses on a basic element of vision: our perception of orientation in the environment.

    Take a walk through the Binghamton University Nature Preserve and look around. Stimuli — trees, branches, bushes, the path — are oriented at many different angles. According to an analysis by Hipp, there is a slight predominance of horizontal and then vertical planes — think of the ground and the trees — but no shortage of oblique angles.
    Then consider the “carpentered world” of a cityscape — downtown Binghamton, perhaps. The percentage of horizontal and vertical orientations increases dramatically, while the obliques fall away. Buildings, roofs, streets, lampposts: The cityscape is a world of sharp angles, like the corner of a rectangle. The digital world ramps up the predominance of the horizontal and vertical planes, Gerhardstein explained.
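    Hipp’s scene-statistics analysis isn’t reproduced here, but one common way to quantify the orientation content of an image, a gradient-based orientation histogram, can be sketched as follows (the method and parameters are illustrative, not the paper’s):

    ```python
    import numpy as np

    def orientation_histogram(image: np.ndarray, n_bins: int = 18):
        """Histogram of local edge orientations in a grayscale image (0-180 degrees)."""
        gy, gx = np.gradient(image.astype(float))        # vertical and horizontal gradients
        magnitude = np.hypot(gx, gy)                     # edge strength at each pixel
        angle = np.degrees(np.arctan2(gy, gx)) % 180.0   # orientation, not direction
        hist, _ = np.histogram(angle, bins=n_bins, range=(0, 180), weights=magnitude)
        return hist / hist.sum()

    # In a "carpentered" scene the mass piles up near 0 and 90 degrees;
    # a natural scene spreads more weight across oblique angles.
    ```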
    Research shows that we tend to pay more attention to horizontal and vertical orientations, at least in the lab; in real-world environments, these differences probably aren’t noticeable, although they likely still drive behavior. Painters, for example, tend to exaggerate these distinctions in their work, a phenomenon studied by a different research group.
    Orientation is a fundamental aspect of how our brain and eyes work together to build the visual world. Interestingly, it’s not fixed; our visual system can adapt to changes swiftly, as the group’s two experiments show.
    The first experiment established a method of eye tracking that doesn’t require an overt response, such as touching a screen. In the second, college students played four hours of Minecraft — one of the most popular computer games in the world — and were shown visual stimuli before and after the session. Researchers then used the eye-tracking method from the first experiment to determine subjects’ ability to perceive phenomena in the oblique and vertical/horizontal orientations.

    A single session produced a clearly detectable change. While the screen-less control group showed no changes in their perception, the game-players detected horizontal and vertical orientations more easily. Neither group changed their perception in oblique orientations.
    We still don’t know how temporary these changes are, although Gerhardstein speculates that the vision of the game-playing research subjects likely returned to normal quickly.
    “So, the immediate takeaway is the impressive extent to which the young adult visual system can rapidly adapt to changes in the statistics of the visual environment,” he said.
    In the next phase of research, Gerhardstein’s lab will track the visual development of two groups of children, one assigned to regularly play video games and the other to avoid screen-time, including television. If the current experiment is any indication, there may be no significant differences, at least when it comes to orientation sensitivity. The pandemic has put in-person testing plans on hold, although researchers have given a survey about children’s playing habits to local parents and will use the results to design a study.
    Adaptive vision
    Other research groups that have examined the effects of digital exposure on other aspects of visual perception have concluded that long-term changes do take place, at least some of which are seen as helpful.
    Helpful? Like other organisms, humans tend to adapt fully to the environment they experience. The first iPhone came out in 2007 and the first iPad in 2010. Children who are around 10 to 12 years old have grown up with these devices, and will live and operate in a digital world as adults, Gerhardstein pointed out.
    “Is it adaptive for them to develop a visual system that is highly sensitive to this particular environment? Many would argue that it is,” he said. “I would instead suggest that a highly flexible system that can shift from one perceptual ‘set’ to another rapidly, so that observers are responding appropriately to the statistics of a digital environment while interacting with digital media, and then shifting to respond appropriately to the statistics of a natural scene or a cityscape, would be most adaptive.”

  • New detector breakthrough pushes boundaries of quantum computing

    Physicists at Aalto University and VTT Technical Research Centre of Finland have developed a new detector for measuring energy quanta at unprecedented resolution. This discovery could help bring quantum computing out of the laboratory and into real-world applications. The results have been published today in Nature.
    The type of detector the team works on is called a bolometer, which measures the energy of incoming radiation by measuring how much it heats up the detector. Professor Mikko Möttönen’s Quantum Computing and Devices group at Aalto has been developing its expertise in bolometers for quantum computing over the past decade, and has now developed a device that can match current state-of-the-art detectors used in quantum computers.
    ‘It is amazing how we have been able to improve the specs of our bolometer year after year, and now we embark on an exciting journey into the world of quantum devices,’ says Möttönen.
    Measuring the energy of qubits is at the heart of how quantum computers operate. Most quantum computers currently measure a qubit’s energy state by measuring the voltage induced by the qubit. However, there are three problems with voltage measurements: firstly, measuring the voltage requires extensive amplification circuitry, which may limit the scalability of the quantum computer; secondly, this circuitry consumes a lot of power; and thirdly, the voltage measurements carry quantum noise which introduces errors in the qubit readout. Quantum computer researchers hope that by using bolometers to measure qubit energy, they can overcome all of these complications, and now Professor Möttönen’s team have developed one that is fast enough and sensitive enough for the job.
    ‘Bolometers are now entering the field of quantum technology and perhaps their first application could be in reading out the quantum information from qubits. The bolometer speed and accuracy seems now right for it,’ says Professor Möttönen.
    The team had previously produced a bolometer made of a gold-palladium alloy with unparalleled low noise levels in its measurements, but it was still too slow to measure qubits in quantum computers. The breakthrough in the new work came from making the bolometer out of graphene instead of a gold-palladium alloy. To do this, the team collaborated with Professor Pertti Hakonen’s NANO group — also at Aalto University — which has expertise in fabricating graphene-based devices. Graphene has a very low heat capacity, which means it is possible to detect very small changes in its energy quickly. This speed in detecting energy differences makes it perfect for a bolometer with applications in measuring qubits and other experimental quantum systems. With graphene, the researchers have produced a bolometer that can make measurements in well under a microsecond, as fast as the technology currently used to measure qubits.
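    The article doesn’t give the underlying noise argument, but a standard thermodynamic estimate (not a figure from the paper) shows why a small heat capacity C helps: a body at temperature T with heat capacity C has root-mean-square energy fluctuations

    $$\Delta E_{\mathrm{rms}} = \sqrt{k_B \, T^2 \, C},$$

    so a smaller C means smaller intrinsic energy fluctuations, and a faster thermal response as well, since the thermal time constant scales as C/G for a thermal conductance G to the bath.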
    ‘Changing to graphene increased the detector speed by 100 times, while the noise level remained the same. After these initial results, there is still a lot of optimisation we can do to make the device even better,’ says Professor Hakonen.
    Now that the new bolometers can compete when it comes to speed, the hope is to utilise the other advantages bolometers have in quantum technology. While the bolometers reported in the current work perform on par with the current state-of-the-art voltage measurements, future bolometers have the potential to outperform them. Current technology is limited by Heisenberg’s uncertainty principle: voltage measurements will always have quantum noise, but bolometers do not. This higher theoretical accuracy, combined with the lower energy demands and smaller size — the graphene flake could fit comfortably inside a single bacterium — means that bolometers are an exciting new device concept for quantum computing.
    The next steps for the research are to resolve the smallest energy packets ever observed using bolometers in real time, and to use the bolometer to measure the quantum properties of microwave photons, which have exciting applications not only in quantum technologies such as computing and communications, but also in the fundamental understanding of quantum physics.
    Many of the scientists involved in the research also work at IQM, a spin-out of Aalto University developing technology for quantum computers. ‘IQM is constantly looking for new ways to enhance its quantum-computer technology, and this new bolometer certainly fits the bill,’ explains Dr Kuan Yen Tan, co-founder of IQM, who was also involved in the research.

    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.