More stories

  • Smartphone app calculates genetic risk for heart attack

    A Scripps Research team developed a smartphone app that can calculate users’ genetic risk for coronary artery disease (CAD) — and found that users at high risk sought out appropriate medication after using the app.
    In the study, published in npj Digital Medicine in March 2022, the researchers detail how their app, MyGeneRank, takes in participating individuals’ genetic information from the genetic testing company 23andMe and outputs a CAD risk score based on that DNA data. Of the 721 participants who provided complete information, those with high-risk scores were much more likely to start using statins or other cholesterol-lowering therapies, compared to those with low-risk scores.
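    For readers curious about the mechanics, a genetic risk score of this kind is typically a polygenic score: a weighted sum of how many risk alleles a person carries at each of many DNA variants. The sketch below illustrates the idea in Python; the variant IDs, weights and reporting step are hypothetical placeholders, not the study's actual model.

        # Illustrative polygenic-score calculation (hypothetical variants and weights,
        # not the MyGeneRank model). Score = sum of (per-allele weight x allele count).
        cad_weights = {
            "rs0000001": 0.12,   # hypothetical variant -> per-allele effect weight
            "rs0000002": -0.05,
            "rs0000003": 0.08,
        }

        def polygenic_score(genotype):
            """genotype maps variant ID -> count of risk alleles carried (0, 1 or 2)."""
            return sum(w * genotype.get(snp, 0) for snp, w in cad_weights.items())

        person = {"rs0000001": 2, "rs0000002": 1, "rs0000003": 0}
        print(f"raw CAD polygenic score: {polygenic_score(person):.3f}")
        # In practice the raw score is compared to a reference population and
        # reported as a percentile or a low/intermediate/high risk category.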
    “We saw about twice the rate of statin initiation in the high genetic risk group vs the low genetic risk group, which indicates that strategies like this could make a big contribution to public health — heart disease being the largest cause of death globally,” says study senior author Ali Torkamani, PhD, professor and director of Genomics and Genome Informatics at the Scripps Research Translational Institute.
    According to the U.S. Centers for Disease Control and Prevention, about 18 million American adults have CAD, the most common form of heart disease, which features the hardening and narrowing of arteries feeding the heart muscle. More than 300,000 Americans die of resulting heart attacks every year.
    Statins such as atorvastatin and simvastatin, as well as other, non-statin drugs that reduce bloodstream levels of cholesterol and other fat-related molecules called lipids, are now widely used, and have helped reduce the annual death rate from CAD over the past two decades. But researchers estimate that in the US nearly half of men and about 10 percent of women between 45 and 65 years old are at least at intermediate risk of CAD — yet only about a third of these individuals take lipid-lowering drugs.
    Calculating CAD risk scores and communicating that information via smartphone apps is now being considered a highly scalable method for nudging more at-risk people to seek medical advice and get lipid-lowering medications when appropriate, thereby lowering the incidence of CAD and heart attacks.

  • Physicists shed light on the darkness

    Experimental physicists have succeeded for the first time in controlling protected quantum states — so-called dark states — in superconducting quantum bits. The entangled states are 500 times more robust and could be used, for example, in quantum simulations. The method could also be used on other technological platforms.
    In Gerhard Kirchmair’s laboratory at the Institute of Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences in Innsbruck, Austria, superconducting quantum bits are coupled to waveguides. When several of these quantum bits are incorporated into the waveguide, they interact with each other, resulting in so-called dark states. “These are entangled quantum states that are completely decoupled from the outside world,” explains Max Zanner, first author of the paper. “They are invisible, so to speak, which is why they are called dark states.” These states are of interest for quantum simulations or the processing of quantum information — corresponding proposals have been made several times in recent years. To date, however, it has not been possible to control and manipulate these dark states appropriately without breaking their invisibility. Now, the team led by Gerhard Kirchmair has developed a system with which the dark states of superconducting circuits in a microwave waveguide can be manipulated from the outside.
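    A toy example helps make the "dark" idea concrete (a sketch of the generic collective-decay picture, not the paper's model): when two qubits couple identically to a waveguide, only the symmetric combination of their excitations can radiate into it; the antisymmetric combination has zero coupling to the collective decay channel and is therefore invisible.

        import numpy as np

        # Two qubits coupled identically to a waveguide: the collective jump
        # (decay) operator is the sum of the individual lowering operators.
        sm = np.array([[0, 1], [0, 0]])        # single-qubit lowering operator |g><e|
        I2 = np.eye(2)
        L = np.kron(sm, I2) + np.kron(I2, sm)  # collective decay into the waveguide

        g = np.array([1.0, 0.0])               # ground state |g>
        e = np.array([0.0, 1.0])               # excited state |e>

        bright = (np.kron(e, g) + np.kron(g, e)) / np.sqrt(2)  # symmetric: couples
        dark   = (np.kron(e, g) - np.kron(g, e)) / np.sqrt(2)  # antisymmetric: decoupled

        print(np.linalg.norm(L @ bright))  # ~1.41: the bright state can decay
        print(np.linalg.norm(L @ dark))    # 0.0:  the dark state cannot radiate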
    Expandable as desired
    “Until now, the problem has always been how to control dark states that are completely decoupled from the environment,” says Gerhard Kirchmair, who is also a professor of experimental physics at the University of Innsbruck. “With a trick, we have now succeeded in finding access to these dark states.” His team built four superconducting quantum bits into a microwave waveguide and attached control lines via two lateral inlets. Using microwave radiation sent through these lines, the dark states can be manipulated. Together, the four superconducting circuits form a robust quantum bit with a storage time about 500 times longer than that of the individual circuits. Multiple dark states exist simultaneously in this quantum bit, which can be used for quantum simulation and quantum information processing. “In principle, this system can be extended arbitrarily,” says Matti Silveri from the Nano and Molecular Systems Research Unit at the University of Oulu, Finland.
    The successful experiment forms the starting point for further investigations of dark states and their possible applications. For the time being, these are mainly in the field of fundamental research, where there are still many open questions regarding the properties of such quantum systems. The concept developed by the Innsbruck physicists to control dark states can in principle be implemented not only with superconducting quantum bits, but also on other technological platforms. “However, the circuits we use, which function like artificial atoms, have advantages over real atoms, which are much more difficult to couple strongly to a waveguide,” Gerhard Kirchmair emphasizes.
    Nature Physics published the results in its current issue. The research was financially supported by the Austrian Science Fund FWF, the Academy of Finland, and the European Union, among others. Maximilian Zanner and Christian Schneider are members of the FWF Doctoral Program Atoms, Light and Molecules (DK-ALM) at the University of Innsbruck.
    Story Source: Materials provided by University of Innsbruck.

  • Laser flashes for cancer research

    Irradiation with fast protons is a more effective and less invasive cancer treatment than X-rays. However, modern proton therapy requires large particle accelerators, which has experts investigating alternative accelerator concepts, such as laser systems to accelerate protons. Such systems are deployed in preclinical studies to pave the way for optimal radiation therapy. A research team led by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) has now successfully tested irradiation with laser protons on animals for the first time, as the group reports in the journal Nature Physics.
    Radiation therapy is one of the main cancer treatment methods. It usually leverages strong, focused X-ray light. Protons — the nuclei of hydrogen atoms — accelerated to high energies and bundled into small, precisely targetable bunches are an alternative. They can penetrate deep into the tissue where they deposit most of their energy in the tumor, destroying the cancer while leaving the surrounding tissue largely intact. This makes the method both more effective and less invasive than X-ray therapy. “The method is particularly suitable for irradiating tumors at the base of the skull, in the brain, and in the central nervous system,” explains HZDR researcher Dr. Elke Beyreuther. “It is also used in pediatric cancer patients to reduce possible long-term effects.”
    However, the method is significantly more complex than X-ray therapy as it requires elaborate accelerator facilities to generate the fast protons and transport them to the patient. This is why there are only a few proton therapy centers in Germany, including one at Dresden University Hospital. Currently, experts are working to steadily improve the method and adapt it to patients. Laser-based proton accelerators could make a decisive contribution here.
    Customized laser flashes
    “The approach is based on a high-power laser to generate strong and extremely short light pulses, which are fired at a thin plastic or metal foil,” explains HZDR physicist Dr. Florian Kroll. These intense flashes knock swathes of electrons out of the foil, creating a strong electric field that can bundle protons into pulses and accelerate them to high energies. Fascinatingly, the scale of this process is minuscule: the acceleration path is merely a few micrometers long.
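    The micrometer scale implies astonishing field strengths. As a rough back-of-the-envelope estimate (the numbers below are assumed for illustration, not taken from the paper), a proton gaining a few tens of MeV over a few micrometers experiences an average field in the teravolt-per-meter range, far beyond what conventional accelerator cavities sustain.

        # Back-of-the-envelope estimate with assumed numbers (not from the paper):
        energy_gain_eV = 20e6     # assume the proton gains ~20 MeV
        path_length_m = 5e-6      # assume an acceleration path of ~5 micrometers
        # Average field = energy / (charge x distance); the proton charge cancels
        # when the energy is expressed in electronvolts.
        field_V_per_m = energy_gain_eV / path_length_m
        print(f"average accelerating field ~ {field_V_per_m:.1e} V/m")  # ~4e12 V/m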
    “We have been working on the project for 15 years, but so far, the protons hadn’t picked up enough energy for irradiation,” Beyreuther reports. “Also, the pulse intensity was too variable, so we couldn’t make sure we were delivering the right dose.” But over the past few years, scientists finally achieved crucial improvements, in particular thanks to a better understanding of the interaction between the laser flashes and the foil. “Above all, the precise shape of the laser flashes is particularly important,” Kroll explains. “We can now tailor them to create proton pulses that have sufficient energy and are also stable enough.”
    New research requirements
    Finally, the parameters had been optimized to the point that the HZDR team was able to launch a crucial series of experiments: the first-ever, controlled irradiation of tumors in mice with laser-accelerated protons. The experiments were carried out in cooperation with experts from Dresden University Hospital at the OncoRay — National Center for Radiation Research in Oncology and benchmarked with comparative experiments at the conventional proton therapy facility. “We found that our laser-driven proton source can generate biologically valuable data,” Kroll reports. “This sets the stage for further studies that will allow us to test and optimize our method.”
    Another special feature of laser-accelerated proton pulses is their enormous intensity. While in conventional proton therapy, the radiation dose is administered in a span of a few minutes, the laser-based process could occur within a millionth of a second. “There are indications that such a rapid dose administration helps spare the healthy surrounding tissue even better than before,” explains Elke Beyreuther. “We want to follow up on these indications with our experimental setup and conduct preclinical studies to investigate when and how this rapid irradiation method should be used to gain an advantage in cancer therapy.”
    Story Source: Materials provided by Helmholtz-Zentrum Dresden-Rossendorf.

  • Endless forms most beautiful: Why evolution favors symmetry

    From sunflowers to starfish, symmetry appears everywhere in biology. This isn’t just true for body plans — the molecular machines keeping our cells alive are also strikingly symmetric. But why? Does evolution have a built-in preference for symmetry?
    An international team of researchers believes so, and has combined ideas from biology, computer science and mathematics to explain why. As the team reports in PNAS, symmetric and other simple structures emerge so commonly because evolution has an overwhelming preference for simple “algorithms” — that is, simple instruction sets or recipes for producing a given structure.
    “Imagine having to tell a friend how to tile a floor using as few words as possible,” says Iain Johnston, a professor at the University of Bergen and author on the study. “You wouldn’t say: put diamonds here, long rectangles here, wide rectangles here. You’d say something like: put square tiles everywhere. And that simple, easy recipe gives a highly symmetric outcome.”
    The team used computational modeling to explore how this preference comes about in biology. They showed that many more possible genomes describe simple algorithms than more complex ones. As evolution searches over possible genomes, simple algorithms are more likely to be discovered — as are, in turn, the more symmetric structures that they produce. The scientists then connected this evolutionary picture to a deep result from the theoretical discipline of algorithmic information theory.
    “These intuitions can be formalized in the field of algorithmic information theory, which provides quantitative predictions for the bias towards descriptive simplicity,” says Ard Louis, professor at the University of Oxford and corresponding author on the study.
    The study’s key theoretical idea can be illustrated by a twist on a famous thought experiment in evolutionary biology, which pictures a room full of monkeys trying to write a book by typing randomly on a keyboard. Imagine the monkeys are instead trying to write a recipe. Each is far more likely to randomly hit the letters required to spell out a short, simple recipe than a long, complicated one. If we then follow any recipes the monkeys have produced — our metaphor for producing biological structures from genetic information — we will produce simple outcomes much more often than complicated ones.
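    The arithmetic behind the thought experiment is stark: the chance of typing a given recipe by accident falls off exponentially with its length, so short recipes win by enormous factors. A minimal sketch, with arbitrary stand-in recipes rather than anything from the paper:

        import string

        alphabet = string.ascii_lowercase + " "   # 27 equally likely keys
        short_recipe = "tile squares everywhere"
        long_recipe = "diamonds here long rectangles here wide rectangles here"

        def hit_probability(recipe):
            # Chance that len(recipe) random keystrokes reproduce the recipe exactly:
            # it falls off exponentially with the recipe's length.
            return (1 / len(alphabet)) ** len(recipe)

        print(f"P(short) ~ {hit_probability(short_recipe):.1e}")
        print(f"P(long)  ~ {hit_probability(long_recipe):.1e}")
        print(f"short is ~{hit_probability(short_recipe) / hit_probability(long_recipe):.1e}x more likely")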
    The scientists show that a wide range of biological structures and systems, from proteins to RNA and signaling networks, adopt algorithmically simple structures with probabilities as predicted by this theory. Going forward, they plan to investigate the predictions that their theory makes for biases in larger-scale developmental processes.
    Story Source: Materials provided by The University of Bergen.

  • Chemical reaction design goes virtual

    Researchers aim to streamline the time- and resource-intensive process of screening ligands during catalyst design by using virtual ligands.
    Researchers at the Institute for Chemical Reaction Design and Discovery and Hokkaido University have developed a virtual ligand-assisted (VLA) screening method, which could drastically reduce the amount of trial and error required in the lab during transition metal catalyst development. The method, published in the journal ACS Catalysis, may also lead to the discovery of unconventional catalyst designs outside the scope of chemists’ intuition.
    Ligands are molecules that are bonded to the central metal atom of a catalyst, and they affect the activity and selectivity of a catalyst. Finding the optimal ligand to catalyze a specific target reaction can be like finding a needle in a haystack. The VLA screening method provides a way to efficiently search that haystack, surveying a broad range of values for different properties to identify the features of ligands that should be most promising. This narrows down the search area for chemists in the lab and has the potential to greatly accelerate the reaction design process.
    This new work utilizes virtual ligands, which mimic the presence of real ligands; however, instead of being described by many individual constituent atoms — such as carbon or nitrogen — virtual ligands are described using only two metrics: their steric, or space-filling, properties and their electronic properties. Researchers developed approximations that describe each of these effects with a single parameter. Using this simplified description of a ligand enabled researchers to evaluate ligands in a computationally efficient way over a large range of values for these two effects. The result is a “contour map” that shows what combination of steric and electronic effects a ligand should have in order to best catalyze a specific reaction. Chemists can then focus on only testing real ligands that fit these criteria.
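    Conceptually, the screening loop looks like the sketch below: sweep the two virtual-ligand parameters over a grid, evaluate a predicted selectivity at each point, and plot the result as a contour map. The selectivity function here is a made-up placeholder; in the actual workflow each grid point corresponds to a quantum-chemical evaluation of the reaction pathway.

        import numpy as np
        import matplotlib.pyplot as plt

        def predicted_selectivity(steric, electronic):
            # Placeholder standing in for the computed reaction selectivity.
            return np.exp(-((steric - 0.6) ** 2 + (electronic + 0.3) ** 2) / 0.2)

        steric = np.linspace(0.0, 1.0, 50)        # assumed normalized steric parameter
        electronic = np.linspace(-1.0, 1.0, 50)   # assumed normalized electronic parameter
        S, E = np.meshgrid(steric, electronic)

        plt.contourf(S, E, predicted_selectivity(S, E), levels=20)
        plt.colorbar(label="predicted selectivity (arbitrary units)")
        plt.xlabel("steric parameter")
        plt.ylabel("electronic parameter")
        plt.title("Virtual-ligand screening map (illustrative)")
        plt.show()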
    Researchers used monodentate phosphorus (III) virtual ligands as a test group and verified their models for the electronic and steric properties of the virtual ligands against values calculated for corresponding real ligands.
    The VLA screening method was then employed to design ligands for a test reaction in which a CHO group and a hydrogen atom can be added to a double bond in two different possible configurations. The reaction pathway was evaluated for 20 virtual ligand cases (consisting of different assigned values for the electronic and steric parameters) to create a contour map that shows a visual trend for what types of ligands can be expected to result in a highly selective reaction.
    Computer models of real ligands were designed based on parameters extracted from the contour map and then evaluated computationally. The selectivity values predicted via the VLA screening method matched well with the values computed for the models of real ligands, showing the viability of the VLA screening method to provide guidance that aids in rational ligand design.
    Beyond saving valuable time and resources, corresponding author Satoshi Maeda anticipates the creation of powerful reaction prediction systems by combining the VLA screening method with other computational techniques.
    “Ligand screening is a pivotal process in the development of transition metal catalysis. As the VLA screening can be conducted in silico, it would save a lot of time and resources in the lab. We believe that this method not only streamlines finding an optimal ligand from a given library of ligands, but also stimulates researchers to explore the untapped chemical space of ligands,” commented corresponding author Satoshi Maeda. “Furthermore, we also expect that by combining this method with our reaction prediction technology using the Artificial Force Induced Reaction method, a new computer-driven discovery scheme of transition metal catalysis can be realized.”
    Story Source: Materials provided by Hokkaido University.

  • The next generation of robots will be shape-shifters

    Physicists have discovered a new way to coat soft robots in materials that allow them to move and function in a more purposeful way. The research, led by the UK’s University of Bath, is described today in Science Advances.
    Authors of the study believe their breakthrough modelling on ‘active matter’ could mark a turning point in the design of robots. With further development of the concept, it may be possible to determine the shape, movement and behaviour of a soft solid not by its natural elasticity but by human-controlled activity on its surface.
    The surface of an ordinary soft material always shrinks into a sphere. Think of the way water beads into droplets: the beading occurs because the surface of liquids and other soft material naturally contracts into the smallest surface area possible — i.e. a sphere. But active matter can be designed to work against this tendency. An example of this in action would be a rubber ball that’s wrapped in a layer of nano-robots, where the robots are programmed to work in unison to distort the ball into a new, pre-determined shape (say, a star).
    It is hoped that active matter will lead to a new generation of machines whose function will come from the bottom up. So, instead of being governed by a central controller (the way today’s robotic arms are controlled in factories), these new machines would be made from many individual active units that cooperate to determine the machine’s movement and function. This is akin to the workings of our own biological tissues, such as the fibres in heart muscle.
    Using this idea, scientists could design soft machines with arms made of flexible materials powered by robots embedded in their surface. They could also tailor the size and shape of drug delivery capsules by coating the surface of nanoparticles in a responsive, active material. This in turn could have a dramatic effect on how a drug interacts with cells in the body.
    Work on active matter challenges the assumption that the energetic cost of the surface of a liquid or soft solid must always be positive because a certain amount of energy is always necessary to create a surface.
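    In textbook terms (a standard relation, not a result of this study), creating a surface of area A costs an energy E = γ·A, where the surface tension γ is positive for ordinary liquids and soft solids; that is why droplets bead up to minimize A. The active coatings considered here behave as if γ were effectively negative, so the surface lowers its energy by growing rather than shrinking.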
    Dr Jack Binysh, study first author, said: “Active matter makes us look at the familiar rules of nature — rules like the fact that surface tension has to be positive — in a new light. Seeing what happens if we break these rules, and how we can harness the results, is an exciting place to be doing research.”
    Corresponding author Dr Anton Souslov added: “This study is an important proof of concept and has many useful implications. For instance, future technology could produce soft robots that are far squishier and better at picking up and manipulating delicate materials.”
    For the study, the researchers developed theory and simulations that described a 3D soft solid whose surface experiences active stresses. They found that these active stresses expand the surface of the material, pulling the solid underneath along with it, and causing a global shape change. The researchers found that the precise shape adopted by the solid could then be tailored by altering the elastic properties of the material.
    In the next phase of this work — which has already begun — the researchers will apply this general principle to design specific robots, such as soft arms or self-swimming materials. They will also look at collective behaviour — for example, what happens when you have many active solids, all packed together.
    This work was a collaboration between the Universities of Bath and Birmingham. It was funded by the Engineering and Physical Sciences Research Council (EPSRC) through New Investigator Award no. EP/T000961/1.
    Story Source: Materials provided by University of Bath.

  • Brain-based computing chips not just for AI anymore

    With the insertion of a little math, Sandia National Laboratories researchers have shown that neuromorphic computers, which synthetically replicate the brain’s logic, can solve more complex problems than those posed by artificial intelligence and may even earn a place in high-performance computing.
    The findings, detailed in a recent article in the journal Nature Electronics, show that neuromorphic simulations employing the statistical method called random walks can track X-rays passing through bone and soft tissue, disease passing through a population, information flowing through social networks and the movements of financial markets, among other uses, said Sandia theoretical neuroscientist and lead researcher James Bradley Aimone.
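    For readers unfamiliar with the method, a random walk simulation is simply a crowd of independent walkers taking random steps, with the statistics of where they end up standing in for the physical process being modelled. A minimal CPU sketch with arbitrary toy parameters (ordinary Python, not neuromorphic code):

        import random

        N_WALKERS = 10_000
        ABSORB_P = 0.02        # chance a walker is absorbed at each step
        MAX_STEPS = 200

        depths = []
        for _ in range(N_WALKERS):
            depth = 0.0
            for _ in range(MAX_STEPS):
                if random.random() < ABSORB_P:
                    break                         # absorbed (e.g. an X-ray stopped by tissue)
                depth += random.gauss(0.5, 1.0)   # random step with a drift into the medium
            depths.append(depth)

        print(f"mean penetration depth ~ {sum(depths) / len(depths):.2f} (arbitrary units)")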
    “Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”
    In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.
    The bold assertions should be of interest to the high-performance computing community because finding capabilities to solve statistical problems is of increasing concern, Aimone said.
    “These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”
    Sandia engineer and paper author Brian Franke said, “The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors like GPUs on next-generation computational efforts. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that may lead to a scalable and energy-efficient approach for solving problems of interest to us.”

  • Labeling key to success of software company innovations

    Companies in the software industry, where novel ideas are prized, use linguistic tactics to develop new labels for their innovations to stay ahead of competitors. Using language to signal that something is “new and different” is an important tool for success, University of California, Davis, research suggests.
    Category innovation during a study period from 1990 to 2002 included words and phrases such as “platform” and “supply chain management” — market categories that are now established.
    “There is an association between companies that use category innovation and their likelihood to IPO, suggesting category innovation is part of successful firm strategies,” said Elizabeth George Pontikes of the UC Davis Graduate School of Management who is the author of the study.
    The article, “Category Innovation in the Software Industry, 1990-2002,” was published in Strategic Management Journal in January. Pontikes looked at more than 400 labels used in news releases about innovations by more than 4,000 different software firms over 12 years. Researchers also interviewed 12 executives and venture capitalists in the software industry.
    Category innovation, as defined in the study, is a practice that involves firms claiming a new category label to describe the market they are in. A firm may do this to differentiate from rivals, or to try to become a market leader or even a “category king.”
    One executive interviewed for the study described the “tag management” label, for example, as something that “wasn’t super innovative, but it was labeling it … so it was strategic the way we were thinking about it.”
    The research found that 75% of the labels analyzed had only one or two firms using them in their first two years, a period when it is traditionally difficult to determine whether an innovation even has a nascent market. Such labels typically did not gain traction until their second year, the research showed.
    Firms sometimes engage in category innovation by borrowing and recasting a little-known term or are unaware another firm had claimed the label, Pontikes said.
    Story Source: Materials provided by University of California – Davis. Original written by Karen Michele Nikos-Rose.