More stories

  • Scientists find evidence of exotic state of matter in candidate material for quantum computers

    Using a novel technique, scientists working at the Florida State University-headquartered National High Magnetic Field Laboratory have found evidence for a quantum spin liquid, a state of matter that is promising as a building block for the quantum computers of tomorrow.
    Researchers discovered the exciting behavior while studying the so-called electron spins in the compound ruthenium trichloride. Their findings, published today in the journal Nature Physics, show that electron spins interact across the material, effectively lowering the overall energy. This type of behavior — consistent with a quantum spin liquid — was detected in ruthenium trichloride at high temperatures and in high magnetic fields.
    Spin liquids, first theorized in 1973, remain something of a mystery. Despite some materials showing promising signs for this state of matter, it is extremely challenging to definitively confirm its existence. However, there is great interest in them because scientists believe they could be used for the design of smarter materials in a variety of applications, such as quantum computing.
    This study provides strong support that ruthenium trichloride is a spin liquid, said physicist Kim Modic, a former graduate student who worked at the MagLab’s pulsed field facility and is now an assistant professor at the Institute of Science and Technology Austria.
    “I think this paper provides a fresh perspective on ruthenium trichloride and demonstrates a new way to look for signatures of spin liquids,” said Modic, the paper’s lead author.
    For decades, physicists have extensively studied the charge of an electron, which carries electricity, paving the way for advances in electronics, energy and other areas. But electrons also have a property called spin. Scientists want to also leverage the spin aspect of electrons for technology, but the universal behavior of spins is not yet fully understood.

    In simple terms, electrons can be thought of as spinning on an axis, like a top, oriented in some direction. In magnetic materials, these spins align with one another, either in the same or opposite directions. Called magnetic ordering, this behavior can be induced or suppressed by temperature or magnetic field. Once the magnetic order is suppressed, more exotic states of matter could emerge, such as quantum spin liquids.
    In the search for a spin liquid, the research team homed in on ruthenium trichloride. Its honeycomb-like structure, featuring a spin at each site, is like a magnetic version of graphene — another hot topic in condensed matter physics.
    “Ruthenium is much heavier than carbon, which results in strong interactions among the spins,” said MagLab physicist Arkady Shekhter, a co-author on the paper.
    The team expected those interactions would enhance magnetic frustration in the material. That’s a kind of “three’s company” scenario in which two spins pair up, leaving the third in a magnetic limbo, which thwarts magnetic ordering. That frustration, the team hypothesized, could lead to a spin liquid state. Their data ended up confirming their suspicions.
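    As a toy illustration of that frustration (a generic textbook example, not an analysis from the paper), consider three Ising spins on a triangle with antiferromagnetic coupling: no configuration can satisfy all three bonds at once, so the lowest-energy state is highly degenerate.

        # Minimal sketch of geometric frustration: three Ising spins on a triangle
        # with antiferromagnetic coupling J > 0 can never anti-align on all three
        # bonds simultaneously, so many configurations tie for the lowest energy.
        from itertools import product

        J = 1.0  # antiferromagnetic coupling strength (illustrative value)

        def energy(s1, s2, s3):
            return J * (s1 * s2 + s2 * s3 + s3 * s1)

        configs = list(product([-1, +1], repeat=3))
        energies = {c: energy(*c) for c in configs}
        e_min = min(energies.values())
        ground_states = [c for c, e in energies.items() if e == e_min]
        print(e_min, len(ground_states), "of", len(configs))  # -1.0, 6 of 8 states
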
    “It seems like, at low temperatures and under an applied magnetic field, ruthenium trichloride shows signs of the behavior that we’re looking for,” Modic said. “The spins don’t simply orient themselves depending on the alignment of neighboring spins, but rather are dynamic — like swirling water molecules — while maintaining some correlation between them.”
    The findings were enabled by a new technique that the team developed called resonant torsion magnetometry, which precisely measures the behavior of electron spins in high magnetic fields and could lead to many other new insights about magnetic materials, Modic said.

    “We don’t really have the workhorse techniques or the analytical machinery for studying the excitations of electron spins, like we do for charge systems,” Modic said. “The methods that do exist typically require large sample sizes, which may not be available. Our technique is highly sensitive and works on tiny, delicate samples. This could be a game-changer for this area of research.”
    Modic developed the technique as a postdoctoral researcher and then worked with MagLab physicists Shekhter and Ross McDonald, another co-author on the paper, to measure ruthenium trichloride in high magnetic fields.
    Their technique involved mounting ruthenium trichloride samples onto a cantilever the size of a strand of hair. They repurposed a quartz tuning fork — similar to that in a quartz crystal watch — to vibrate the cantilever in a magnetic field. Instead of using it to tell time precisely, they measured the frequency of vibration to study the interaction between the spins in ruthenium trichloride and the applied magnetic field. They performed their measurements in two powerful magnets at the National MagLab.
    “The beauty of our approach is that it’s a relatively simple setup, which allowed us to carry out our measurements in both a 35-tesla resistive magnet and a 65-tesla pulsed field magnet,” Modic said.
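    The article does not spell out the underlying measurement relation, but cantilever-based magnetometry is commonly described by a small frequency shift proportional to the magnetic stiffness the sample adds to the lever. The sketch below illustrates that picture; the functional form of k_mag and every number in it are our assumptions, not the paper's analysis.

        import numpy as np

        # Toy picture of resonant torsion magnetometry (assumed relation, illustrative
        # numbers): the sample's response to the field adds a magnetic stiffness k_mag
        # to the lever stiffness K, shifting the resonance by df/f0 ~ k_mag / (2 K).
        K = 1e-4      # lever stiffness (illustrative)
        f0 = 50e3     # bare resonance frequency in Hz (illustrative)

        def k_mag(B, prefactor=2e-9):
            # A simple anisotropic paramagnet would give a stiffness growing as B^2;
            # the prefactor here is purely illustrative.
            return prefactor * B**2

        for B in [0.0, 10.0, 35.0, 65.0]:   # tesla; 35 T and 65 T match the magnets used
            df = f0 * k_mag(B) / (2 * K)
            print(f"B = {B:5.1f} T  ->  frequency shift ~ {df:8.1f} Hz")
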
    The next step in the research will be to study this system in the MagLab’s world-record 100-tesla pulsed magnet.
    “That high of a magnetic field should allow us to directly observe the suppression of the spin liquid state, which will help us learn even more about this compound’s inner workings,” Shekhter said.

  • New algorithm could unleash the power of quantum computers

    A new algorithm that fast-forwards simulations could bring greater usability to current and near-term quantum computers, opening the way for applications to run past the strict time limits that hamper many quantum calculations.
    “Quantum computers have a limited time to perform calculations before their useful quantum nature, which we call coherence, breaks down,” said Andrew Sornborger of the Computer, Computational, and Statistical Sciences division at Los Alamos National Laboratory, and senior author on a paper announcing the research. “With a new algorithm we have developed and tested, we will be able to fast forward quantum simulations to solve problems that were previously out of reach.”
    Computers built of quantum components, known as qubits, can potentially solve extremely difficult problems that exceed the capabilities of even the most powerful modern supercomputers. Applications include faster analysis of large data sets, drug development, and unraveling the mysteries of superconductivity, to name a few of the possibilities that could lead to major technological and scientific breakthroughs in the near future.
    Recent experiments have demonstrated the potential for quantum computers to solve problems in seconds that would take the best conventional computer millennia to complete. The challenge remains, however, to ensure a quantum computer can run meaningful simulations before quantum coherence breaks down.
    “We use machine learning to create a quantum circuit that can approximate a large number of quantum simulation operations all at once,” said Sornborger. “The result is a quantum simulator that replaces a sequence of calculations with a single, rapid operation that can complete before quantum coherence breaks down.”
    The Variational Fast Forwarding (VFF) algorithm that the Los Alamos researchers developed is a hybrid combining aspects of classical and quantum computing. Although well-established theorems exclude the potential of general fast forwarding with absolute fidelity for arbitrary quantum simulations, the researchers get around the problem by tolerating small calculation errors for intermediate times in order to provide useful, if slightly imperfect, predictions.
    In principle, the approach allows scientists to quantum-mechanically simulate a system for as long as they like. Practically speaking, the errors that build up as simulation times increase limit potential calculations. Still, the algorithm allows simulations far beyond the time scales that quantum computers can achieve without the VFF algorithm.
    One quirk of the process is that it takes twice as many qubits to fast forward a calculation as would make up the quantum computer being fast-forwarded. In the newly published paper, for example, the research group confirmed their approach by implementing a VFF algorithm on a two-qubit computer to fast forward the calculations that would be performed in a one-qubit quantum simulation.
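    The published algorithm runs on quantum hardware, but the fast-forwarding idea itself (learn an approximate diagonalization of one short simulation step, then take many steps by raising the diagonal part to a power) can be sketched classically. Everything below, from the toy Hamiltonian to the exact diagonalization standing in for the learned one, is an illustrative assumption, not the paper's implementation.

        import numpy as np

        # Classical sketch of fast forwarding: if one short step U = exp(-i H dt) can be
        # written as W D W† with D diagonal, then n steps collapse to U^n = W D^n W†,
        # i.e. one operation instead of a long sequence of them.
        rng = np.random.default_rng(0)
        A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        H = (A + A.conj().T) / 2          # toy Hermitian "Hamiltonian"
        dt = 0.1

        evals, W = np.linalg.eigh(H)      # stand-in for the variationally learned W, D
        D = np.exp(-1j * evals * dt)
        U_step = W @ np.diag(D) @ W.conj().T

        n = 1000                          # number of time steps to fast forward
        U_fast = W @ np.diag(D**n) @ W.conj().T        # single fast-forwarded operation
        U_slow = np.linalg.matrix_power(U_step, n)     # n sequential steps
        print("max deviation:", np.abs(U_fast - U_slow).max())   # tiny in this toy case
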
    In future work, the Los Alamos researchers plan to explore the limits of the VFF algorithm by increasing the number of qubits they fast forward, and checking the extent to which they can fast forward systems. The research was published September 18, 2020 in the journal npj Quantum Information.

    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory. Note: Content may be edited for style and length.

  • New shortcut enables faster creation of spin pattern in magnet

    Physicists have discovered a much faster approach to create a pattern of spins in a magnet. This ‘shortcut’ opens a new chapter in topology research. Interestingly, this discovery also offers an additional method to achieve more efficient magnetic data storage. The research will be published on 5 October in Nature Materials.
    Physicists previously demonstrated that laser light can create a pattern of magnetic spins. Now they have discovered a new route that enables this to be done much more quickly, in less than 300 picoseconds (a picosecond is one millionth of a millionth of a second). This is much faster than was previously thought possible.
    Useful for data storage: skyrmions
    Magnets consist of many small magnets, which are called spins. Normally, all the spins point in the same direction, which determines the north and south poles of the magnet. But the directions of the spins together sometimes form vortex-like configurations known as skyrmions.
    “These skyrmions in magnets could be used as a new type of data storage,” explains Johan Mentink, physicist at Radboud University. For a number of years, Radboud scientists have been looking for optimal ways to control magnetism with laser light and ultimately use it for more efficient data storage. In this technique, very short pulses of light are fired at a magnetic material. This reverses the magnetic spins in the material, which changes a bit from a 0 to a 1.
    “Once the magnetic spins take the vortex-like shape of a skyrmion, this configuration is hard to erase,” says Mentink. “Moreover, these skyrmions are only a few nanometers (one billionth of a meter) in size, so you can store a lot of data on a very small piece of material.”
    Shortcut
    The phase transition between these two states in a magnet — from all the spins pointing in one direction to a skyrmion — is comparable to a road over a high mountain. The researchers have discovered that you can take a ‘shortcut’ through the mountain by heating the material very quickly with a laser pulse, which briefly lowers the threshold for the phase transition.
    A remarkable aspect of this new approach is that the material is first brought into a very chaotic state, in which the topology — which can be seen as the number of skyrmions in the material — fluctuates strongly. The researchers discovered this approach by combining X-rays generated by the European free electron laser in Hamburg with extremely advanced electron microscopy and spin dynamics simulations. “This research therefore involved an enormous team effort,” Mentink emphasises.
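    The "number of skyrmions" mentioned above is usually quantified by the topological charge of the spin texture. As a hedged illustration (our own toy texture and grid, not data from the experiment), the sketch below builds a single skyrmion and evaluates that charge numerically.

        import numpy as np

        # Illustrative only: build one skyrmion-like spin texture on a grid and compute
        # its topological charge Q = (1/4π) ∫ n · (∂x n × ∂y n) dx dy, which counts
        # how many times the spins wrap the sphere, i.e. the number of skyrmions.
        N, R = 200, 30.0                          # grid size and core radius (arbitrary)
        y, x = np.mgrid[-N/2:N/2, -N/2:N/2]
        r, phi = np.hypot(x, y), np.arctan2(y, x)
        theta = np.pi * np.exp(-r / R)            # spins flip at the core, relax outside

        n = np.stack([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])             # unit spin field, shape (3, N, N)
        dx_n, dy_n = np.gradient(n, axis=2), np.gradient(n, axis=1)
        density = np.einsum('ijk,ijk->jk', n, np.cross(dx_n, dy_n, axis=0))
        print("topological charge Q ~", density.sum() / (4 * np.pi))  # close to ±1
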
    New possibilities
    This fundamental discovery has opened a new chapter in topology research. Mentink expects that many more scientists will now start to look for similar ways to ‘take a shortcut through the mountain’ in other materials.
    This discovery also enables new approaches to create faster and more efficient data storage. There is an increasing need for this, for example due to the gigantic, energy-guzzling data centres that are required for massive data storage in the cloud. Magnetic skyrmions can provide a solution to this problem. Because they are very small and can be created very quickly with light, a lot of information can potentially be stored very quickly and efficiently on a small area.

    Story Source:
    Materials provided by Radboud University Nijmegen. Note: Content may be edited for style and length.

  • Deep learning gives drug design a boost

    When you take a medication, you want to know precisely what it does. Pharmaceutical companies go through extensive testing to ensure that you do.
    With a new deep learning-based technique created at Rice University’s Brown School of Engineering, they may soon get a better handle on how drugs in development will perform in the human body.
    The Rice lab of computer scientist Lydia Kavraki has introduced Metabolite Translator, a computational tool that predicts metabolites, the products of interactions between small molecules like drugs and enzymes.
    The Rice researchers take advantage of deep-learning methods and the availability of massive reaction datasets to give developers a broad picture of what a drug will do. The method is unconstrained by rules that companies use to determine metabolic reactions, opening a path to novel discoveries.
    “When you’re trying to determine if a compound is a potential drug, you have to check for toxicity,” Kavraki said. “You want to confirm that it does what it should, but you also want to know what else might happen.”
    The research by Kavraki, lead author and graduate student Eleni Litsa and Rice alumna Payel Das of IBM’s Thomas J. Watson Research Center, is detailed in the Royal Society of Chemistry journal Chemical Science.
    The researchers trained Metabolite Translator to predict metabolites through any enzyme, but measured its success against the existing rules-based methods that are focused on the enzymes in the liver. These enzymes are responsible for detoxifying and eliminating xenobiotics, like drugs, pesticides and pollutants. However, metabolites can be formed through other enzymes as well.

    “Our bodies are networks of chemical reactions,” Litsa said. “They have enzymes that act upon chemicals and may break or form bonds that change their structures into something that could be toxic, or cause other complications. Existing methodologies focus on the liver because most xenobiotic compounds are metabolized there. With our work, we’re trying to capture human metabolism in general.
    “The safety of a drug does not depend only on the drug itself but also on the metabolites that can be formed when the drug is processed in the body,” Litsa said.
    The rise of machine learning architectures that operate on structured data, such as chemical molecules, makes the work possible, she said. The Transformer, a sequence translation method introduced in 2017, has found wide use in language translation.
    Metabolite Translator is based on SMILES (for “simplified molecular-input line-entry system”), a notation method that uses plain text rather than diagrams to represent chemical molecules.
    “What we’re doing is exactly the same as translating a language, like English to German,” Litsa said.
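    As a rough illustration of what such a model "reads" (our sketch of typical SMILES preprocessing; the paper's actual tokenizer is not described here), a SMILES string can be split into chemically meaningful tokens before being fed to a sequence-to-sequence Transformer.

        import re

        # Hedged sketch of SMILES tokenization (assumed preprocessing, not the published
        # code): break a molecule string into atom and bond tokens, much as a translation
        # model breaks a sentence into words.
        SMILES_TOKEN = re.compile(
            r"(\[[^\]]+\]|Br|Cl|Si|Se|@@|[BCNOFPSIbcnops]|[=#\-\+\(\)/\\%0-9@])"
        )

        def tokenize(smiles):
            return SMILES_TOKEN.findall(smiles)

        drug = "CC(=O)Nc1ccc(O)cc1"   # acetaminophen, as an example input molecule
        print(tokenize(drug))
        # ['C', 'C', '(', '=', 'O', ')', 'N', 'c', '1', 'c', 'c', 'c', '(', 'O', ')', 'c', 'c', '1']
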

    Due to the lack of experimental data, the lab used transfer learning to develop Metabolite Translator. They first pre-trained a Transformer model on 900,000 known chemical reactions and then fine-tuned it with data on human metabolic transformations.
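    A minimal sketch of that two-stage recipe in PyTorch (the sizes, vocabulary and hyperparameters below are placeholders, not the published configuration) might look like the following.

        import torch
        import torch.nn as nn

        # Hedged transfer-learning skeleton: pre-train a seq2seq Transformer on generic
        # reactions, then fine-tune the same weights on human metabolic reactions.
        # All sizes are placeholders; token tensors have shape (batch, sequence_length).
        VOCAB = 600
        model = nn.Transformer(d_model=256, nhead=8, num_encoder_layers=4,
                               num_decoder_layers=4, batch_first=True)
        embed, head = nn.Embedding(VOCAB, 256), nn.Linear(256, VOCAB)
        params = list(model.parameters()) + list(embed.parameters()) + list(head.parameters())

        def train(pairs, epochs, lr):
            opt = torch.optim.Adam(params, lr=lr)
            loss_fn = nn.CrossEntropyLoss()
            for _ in range(epochs):
                for src, tgt in pairs:                    # reactant / product token ids
                    out = model(embed(src), embed(tgt[:, :-1]))
                    loss = loss_fn(head(out).transpose(1, 2), tgt[:, 1:])
                    opt.zero_grad(); loss.backward(); opt.step()

        # Stage 1: the ~900,000 generic reactions; Stage 2: the smaller metabolic set.
        # train(generic_reaction_pairs, epochs=10, lr=1e-4)
        # train(metabolic_pairs, epochs=30, lr=1e-5)     # fine-tuning reuses the weights
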
    The researchers compared Metabolite Translator results with those from several other predictive techniques by analyzing known SMILES sequences of 65 drugs and 179 metabolizing enzymes. Though Metabolite Translator was trained on a general dataset not specific to drugs, it performed as well as commonly used rule-based methods that have been specifically developed for drugs. But it also identified enzymes that are not commonly involved in drug metabolism and were not found by existing methods.
    “We have a system that can predict equally well with rule-based systems, and we didn’t put any rules in our system that require manual work and expert knowledge,” Kavraki said. “Using a machine learning-based method, we are training a system to understand human metabolism without the need for explicitly encoding this knowledge in the form of rules. This work would not have been possible two years ago.”
    Kavraki is the Noah Harding Professor of Computer Science, a professor of bioengineering, mechanical engineering and electrical and computer engineering and director of Rice’s Ken Kennedy Institute. Rice University and the Cancer Prevention and Research Institute of Texas supported the research.

  • Efficient pollen identification

    From pollen forecasting and honey analysis to climate-related changes in plant-pollinator interactions, pollen analysis plays an important role in many areas of research. Microscopy is still the gold standard, but it is very time consuming and requires considerable expertise. In cooperation with Technische Universität (TU) Ilmenau, scientists from the Helmholtz Centre for Environmental Research (UFZ) and the German Centre for Integrative Biodiversity Research (iDiv) have now developed a method that allows them to efficiently automate the process of pollen analysis. Their study has been published in the specialist journal New Phytologist.
    Pollen is produced in a flower’s stamens and consists of a multitude of minute pollen grains, which contain the plant’s male genetic material necessary for its reproduction. The pollen grains get caught in the tiny hairs of nectar-feeding insects as they brush past and are thus transported from flower to flower. Once there, in the ideal scenario, a pollen grain will cling to the sticky stigma of the same plant species, which may then result in fertilisation. “Although pollinating insects perform this pollen delivery service entirely incidentally, its value is immeasurably high, both ecologically and economically,” says Dr. Susanne Dunker, head of the working group on imaging flow cytometry at the Department for Physiological Diversity at UFZ and iDiv. “Against the background of climate change and the accelerating loss of species, it is particularly important for us to gain a better understanding of these interactions between plants and pollinators.” Pollen analysis is a critical tool in this regard.
    Each species of plant has pollen grains of a characteristic shape, surface structure and size. When it comes to identifying and counting pollen grains — measuring between 10 and 180 micrometres — in a sample, microscopy has long been considered the gold standard. However, working with a microscope requires a great deal of expertise and is very time-consuming. “Although various approaches have already been proposed for the automation of pollen analysis, these methods are either unable to differentiate between closely related species or do not deliver quantitative findings about the number of pollen grains contained in a sample,” continues UFZ biologist Dr. Dunker. Yet it is precisely this information that is critical to many research subjects, such as the interaction between plants and pollinators.
    In their latest study, Susanne Dunker and her team of researchers have developed a novel method for the automation of pollen analysis. To this end they combined the high throughput of imaging flow cytometry — a technique used for particle analysis — with a form of artificial intelligence (AI) known as deep learning to design a highly efficient analysis tool, which makes it possible to both accurately identify the species and quantify the pollen grains contained in a sample.
    Imaging flow cytometry is a process that is primarily used in the medical field to analyse blood cells but is now also being repurposed for pollen analysis. “A pollen sample for examination is first added to a carrier liquid, which then flows through a channel that becomes increasingly narrow,” says Susanne Dunker, explaining the procedure. “The narrowing of the channel causes the pollen grains to separate and line up as if they are on a string of pearls, so that each one passes through the built-in microscope element on its own and images of up to 2,000 individual pollen grains can be captured per second.” Two normal microscopic images are taken plus ten fluorescence microscopic images per grain of pollen. When excited with light radiated at certain wavelengths by a laser, the pollen grains themselves emit light. “The area of the colour spectrum in which the pollen fluoresces — and at which precise location — is sometimes very specific. This information provides us with additional traits that can help identify the individual plant species,” reports Susanne Dunker.
    In the deep learning process, an algorithm works in successive steps to abstract the original pixels of an image to a greater and greater degree in order to finally extract the species-specific characteristics. “Microscopic images, fluorescence characteristics and high throughput have never been used in combination for pollen analysis before — this really is an absolute first.” Where the analysis of a relatively straightforward sample takes, for example, four hours under the microscope, the new process takes just 20 minutes. UFZ has therefore applied for a patent for the novel high-throughput analysis method, with its inventor, Susanne Dunker, receiving the UFZ Technology Transfer Award in 2019.
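    A minimal sketch of such a classifier (the architecture and image size below are placeholders chosen by us, not the published network) would take a 12-channel image stack per pollen grain, two brightfield plus ten fluorescence channels, and output one of the 35 species.

        import torch
        import torch.nn as nn

        # Placeholder classifier, not the published model: 12 input channels
        # (2 brightfield + 10 fluorescence images per grain), 35 output species.
        model = nn.Sequential(
            nn.Conv2d(12, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 35),
        )

        grains = torch.randn(8, 12, 64, 64)   # 8 grains, 12 channels, assumed 64x64 pixels
        print(model(grains).shape)            # torch.Size([8, 35]) -> one score per species
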
    The pollen samples examined in the study came from 35 species of meadow plants, including yarrow, sage, thyme and various species of clover such as white, mountain and red clover. In total, the researchers prepared around 430,000 images, which formed the basis for a data set. In cooperation with TU Ilmenau, this data set was then transferred using deep learning into a highly efficient tool for pollen identification. In subsequent analyses, the researchers tested the accuracy of their new method, comparing unknown pollen samples from the 35 plant species against the data set. “The result was more than satisfactory — the level of accuracy was 96 per cent,” says Susanne Dunker. Even species that are difficult to distinguish from one another, and indeed present experts with a challenge under the microscope, could be reliably identified. The new method is therefore not only extremely fast but also highly precise.
    In the future, the new process for automated pollen analysis will play a key role in answering critical research questions about interactions between plants and pollinators. How important are certain pollinators like bees, flies and bumblebees for particular plant species? What would be the consequences of losing a species of pollinating insect or a plant? “We are now able to evaluate pollen samples on a large scale, both qualitatively and, at the same time, quantitatively. We are constantly expanding our pollen data set of insect-pollinated plants for that purpose,” comments Susanne Dunker. She aims to expand the data set to include at least those 500 plant species whose pollen is significant as a food source for honeybees.

  • Virtual follow-up care is more convenient and just as beneficial to surgical patients

    Surgical patients who participate in virtual follow-up visits after their operations spend a similar amount of time with surgical team members as those who meet face-to-face. Moreover, these patients benefit by spending less time waiting at and traveling to the clinic for in-person appointments, according to research findings presented at the virtual American College of Surgeons Clinical Congress 2020.
    “I think it’s really valuable for patients to understand that, in the virtual space scenario, they are still going to get quality time with their surgical team,” said lead study author Caroline Reinke, MD, FACS, associate professor of surgery at Atrium Health in Charlotte, N.C. “A virtual appointment does not shorten that time, and there is still an ability to answer questions, connect, and address ongoing medical care.”
    Due to the Coronavirus Disease 2019 (COVID-19) pandemic and the widespread adoption of technology, many surgical patients are being offered virtual appointments in place of traditional in-person visits. The researchers say this is one of the first studies to look at how patients spend their time in post-operative virtual visits compared with face-to-face consultations.
    The study design was a non-inferiority, randomized controlled trial that involved more than 400 patients who underwent laparoscopic appendectomy or cholecystectomy at two hospitals in Charlotte, N.C. and were randomized 2:1 to a post-discharge virtual visit or to an in-person visit. The study began in August 2017 but was put on hold in March 2020 due to COVID-19.
    “Other studies have looked at the total visit time, but they haven’t been able to break down the specific amount of time the patient spends with the provider. And we wanted to know if that was the same or different between a virtual visit and an in-person visit,” Dr. Reinke said. “We wanted to get down to the nitty gritty of how much face time was actually being spent between the surgical team member and the patient.”
    Researchers tracked the total time patients spent checking in, waiting in the waiting room and exam room, meeting with the surgical team member, and being discharged after the exam. For in-person visits, on-site waiting time and an estimated drive time were factored into the overall time commitment.

    Just 64 percent of patients completed the follow-up visit. “Sometimes, patients are doing so well after minimally invasive surgery that about 30 percent of these patients don’t show up for a post-operative visit,” Dr. Reinke said.
    Overall, results showed that the total clinic time was longer for in-person visits than virtual visits (58 minutes vs. 19 minutes). However, patients in both groups spent the same amount of face time with a member of their surgical team (8.3 minutes vs. 8.2 minutes) discussing their post-operative recovery.
    “I was pleasantly surprised that the amount of time patients spent with the surgical team member was the same, because one of the main concerns with virtual visits is that patients feel disconnected and that there isn’t as much value in it,” Dr. Reinke said.
    Importantly, patients placed a high value on convenience and flexibility. “We received overwhelmingly positive responses to this patient-centered care option,” Dr. Reinke said. “Patients were able to do the post-operative visit at work or at home while caring for children, without having to disrupt their day in such a significant way.”
    The researchers also found that patients embraced the virtual scenario. The satisfaction rate between both groups of patients was similar (94 percent vs. 98 percent).

    In addition, wait time was much lower for patients who received virtual care. “Even for virtual visits, the amount of time the patients spent checking in and waiting was about 55 percent of total time. Because virtual visits have the same regulations as in-person visits, even if you take out the components of waiting room and patient flow within the clinic, patients are still spending about half of their time on the logistics of check-in,” Dr. Reinke said. “Yet, with virtual visits, there is still much less time spent waiting, about 80 percent less time.”
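    Those percentages are roughly consistent with the visit times reported above; here is a quick back-of-the-envelope check (our rounding, not the study's exact calculation).

        # Rough check using the figures quoted above: 58 vs. 19 minutes total time,
        # about 8.3 vs. 8.2 minutes of face time with the surgical team member.
        total_in_person, total_virtual = 58.0, 19.0
        face_in_person, face_virtual = 8.3, 8.2

        overhead_virtual = total_virtual - face_virtual        # check-in and waiting
        overhead_in_person = total_in_person - face_in_person

        print(f"virtual overhead share: {overhead_virtual / total_virtual:.0%}")                  # ~57%
        print(f"waiting reduction vs. in person: {1 - overhead_virtual / overhead_in_person:.0%}")  # ~78%
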
    Still, some patients are not comfortable with the technology. The number of patients who couldn’t or didn’t want to do a virtual visit was higher than expected, according to the authors.
    “I think there are some patients that would really just rather come in and shake someone’s hand,” Dr. Reinke said. “I think for surgery it’s a little bit different, because with surgical care there are incisions to check on. However, we were able to check on incisions pretty easily, having patients show us their incisions virtually on the video screen.”
    This research was supported by the American College of Surgeons Franklin H. Martin Faculty Research Fellowship. “FACS” designates that a surgeon is a Fellow of the American College of Surgeons.
    Citation: The Value of Time: Analysis of Surgical Post-Discharge Virtual vs. In-Person Visits. Scientific Forum, American College of Surgeons Clinical Congress 2020, October 3-7, 2020.

  • New model examines how societal influences affect U.S. political opinions

    Northwestern University researchers have developed the first quantitative model that captures how politicized environments affect U.S. political opinion formation and evolution.
    Using the model, the researchers seek to understand how populations change their opinions when exposed to political content, such as news media, campaign ads and ordinary personal exchanges. The math-based framework is flexible, allowing future data to be incorporated as it becomes available.
    “It’s really powerful to understand how people are influenced by the content that they see,” said David Sabin-Miller, a Northwestern graduate student who led the study. “It could help us understand how populations become polarized, which would be hugely beneficial.”
    “Quantitative models like this allow us to run computational experiments,” added Northwestern’s Daniel Abrams, the study’s senior author. “We could simulate how various interventions might help fix extreme polarization to promote consensus.”
    The paper will be published on Thursday (Oct. 1) in the journal Physical Review Research.
    Abrams is an associate professor of engineering sciences and applied mathematics in Northwestern’s McCormick School of Engineering. Sabin-Miller is a graduate student in Abrams’ laboratory.

    Researchers have been modeling social behavior for hundreds of years. But most modern quantitative models rely on network science, which simulates person-to-person human interactions.
    The Northwestern team takes a different, but complementary, approach. They break down all interactions into perceptions and reactions. A perception takes into account how people perceive a politicized experience based on their current ideology. A far-right Republican, for example, likely will perceive the same experience differently than a far-left Democrat.
    After perceiving new ideas or information, people might change their opinions based on three established psychological effects: attraction/repulsion, tribalism and perceptual filtering. Northwestern’s quantitative model incorporates all three of these and examines their impact.
    “Typically, ideas that are similar to your beliefs can be convincing or attractive,” Sabin-Miller said. “But once ideas go past a discomfort point, people start rejecting what they see or hear. We call this the ‘repulsion distance,’ and we are trying to define that limit through modeling.”
    People also react differently depending on whether or not the new idea or information comes from a trusted source. In an effect known as tribalism, people tend to give the benefit of the doubt to a perceived ally. In perceptual filtering, people — either knowingly through direct decisions or unknowingly through algorithms that curate content — determine what content they see.
    “Perceptual filtering is the ‘media bubble’ that people talk about,” Abrams explained. “You’re more likely to see things that are consistent with your existing beliefs.”
    Abrams and Sabin-Miller liken their new model to thermodynamics in physics — treating individual people like gas molecules that distribute around a room.
    “Thermodynamics does not focus on individual particles but the average of a whole system, which includes many, many particles,” Abrams said. “We hope to do the same thing with political opinions. Even though we can’t say how or when one individual’s opinion might change, we can look at how the whole population changes, on average.”
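    The article does not give the model's equations, but the three effects can be caricatured in a toy simulation. Every functional form and parameter below is our own assumption for illustration, not the published model.

        import numpy as np

        # Toy caricature of the three effects (assumed forms, illustrative numbers only):
        # opinions live on a left-right axis in [-1, 1]; each round, every person sees
        # one piece of content and moves toward it (attraction) or away (repulsion).
        rng = np.random.default_rng(1)
        opinions = rng.uniform(-1, 1, size=5000)
        repulsion_distance = 0.8   # beyond this gap, content pushes people away
        step = 0.02                # how far a single exposure shifts an opinion
        tribal_bonus = 1.5         # content from a perceived ally is weighted more heavily

        for _ in range(200):
            # Perceptual filtering: people mostly see content near their own position.
            content = np.clip(opinions + rng.normal(0, 0.5, size=opinions.size), -1, 1)
            gap = content - opinions
            attracted = np.abs(gap) < repulsion_distance
            weight = np.where(np.sign(content) == np.sign(opinions), tribal_bonus, 1.0)
            opinions += np.where(attracted, 1.0, -1.0) * weight * step * np.sign(gap)
            opinions = np.clip(opinions, -1, 1)

        print("share of opinions beyond ±0.9:", np.mean(np.abs(opinions) > 0.9))
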

    Story Source:
    Materials provided by Northwestern University. Original written by Amanda Morris. Note: Content may be edited for style and length.

  • New tool shows main highways of disease development

    As people get older, they often jump from disease to disease and carry the burden of several chronic diseases at once. But is there a pattern in the way diseases follow each other? Danish researchers have for the past six years developed a comprehensive tool, the Danish Disease Trajectory Browser, that utilizes 25 years of public health data from Danish patients to explore what they call the main highways of disease development.
    “A lot of research focus is on investigating one disease at a time. We try to add a time perspective and look at multiple diseases following each other to discover where are the most common trajectories — what are the disease highways that we as people encounter,” says professor Søren Brunak from the Novo Nordisk Foundation Center for Protein Research at University of Copenhagen.
    To illustrate the use of the tool, the research group looked at data for Down Syndrome patients and showed, as expected, that these patients in general are diagnosed with Alzheimer’s Disease at an earlier age than others. Other frequent diseases are displayed as well.
    The Danish Disease Trajectory Browser is published in Nature Communications.
    Making health data accessible for research
    In general, there are barriers to working with health data in research, both in terms of getting approval from authorities to handle patient data and because researchers need specific technical skills to extract meaningful information from the data.

    “We wanted to make an easily accessible tool for researchers and health professionals where they don’t necessarily need to know all the details. The statistical summary data on disease to disease jumps in the tool are not person-sensitive. We compute statistics over many patients and have boiled it down to data points that visualize how often patients with one disease get a specific other disease at a later point. So we are focusing on the sequence of diseases,” says Søren Brunak.
    The Danish Disease Trajectory Browser is freely available for the scientific community and uses WHO’s disease codes. Even though there are regional differences in disease patterns, the tool is highly relevant in an international context, for example to compare how fast diseases progress in different countries.
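    The core summary behind such a browser, how often one diagnosis is later followed by another, can be sketched over a toy set of patient histories. The ICD-10 codes and the tiny example histories below are ours, for illustration only.

        from collections import Counter
        from itertools import combinations

        # Toy sketch of disease-pair statistics: for each patient, count every ordered
        # pair of diagnoses (earlier -> later). Example ICD-10 codes, invented histories.
        patients = {
            "p1": ["E11", "I10", "N18"],   # diabetes -> hypertension -> kidney disease
            "p2": ["E11", "N18"],
            "p3": ["I10", "N18", "I50"],
        }

        pair_counts = Counter()
        for history in patients.values():          # diagnoses assumed ordered by date
            for earlier, later in combinations(history, 2):
                pair_counts[(earlier, later)] += 1

        for (a, b), count in pair_counts.most_common(3):
            print(f"{a} -> {b}: seen in {count} patient(s)")
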
    Disease trajectories can help in personalized medicine
    For Søren Brunak the tool has a great potential in personalized medicine.
    “In personalized medicine a part of the job is to divide patients into subgroups that will benefit most from a specific treatment. By knowing the disease trajectories you can create subgroups of patients not just by their current disease, but based on their previous conditions and expected future conditions as well. In that way you find different subgroups of patients that may need different treatment strategies,” Søren Brunak explains.
    Currently the Disease Trajectory Browser contains data from 1994 to 2018 and will continuously be updated with new data.
    The Danish Disease Trajectory Browser is freely accessible here: http://dtb.cpr.ku.dk

    Story Source:
    Materials provided by University of Copenhagen, Faculty of Health and Medical Sciences. Note: Content may be edited for style and length.