More stories

  • Laser flashes for cancer research

    Irradiation with fast protons is a more effective and less invasive cancer treatment than X-rays. However, modern proton therapy requires large particle accelerators, which has experts investigating alternative accelerator concepts, such as laser systems to accelerate protons. Such systems are deployed in preclinical studies to pave the way for optimal radiation therapy. A research team led by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) has now successfully tested irradiation with laser protons on animals for the first time, as the group reports in the journal Nature Physics.
    Radiation therapy is one of the main cancer treatment methods. It usually leverages strong, focused X-ray light. Protons — the nuclei of hydrogen atoms — accelerated to high energies and bundled into small, precisely targetable bunches are an alternative. They can penetrate deep into the tissue where they deposit most of their energy in the tumor, destroying the cancer while leaving the surrounding tissue largely intact. This makes the method both more effective and less invasive than X-ray therapy. “The method is particularly suitable for irradiating tumors at the base of the skull, in the brain, and in the central nervous system,” explains HZDR researcher Dr. Elke Beyreuther. “It is also used in pediatric cancer patients to reduce possible long-term effects.”
    However, the method is significantly more complex than X-ray therapy as it requires elaborate accelerator facilities to generate the fast protons and transport them to the patient. This is why there are only a few proton therapy centers in Germany, including one at Dresden University Hospital. Currently, experts are working to steadily improve the method and adapt it to patients. Laser-based proton accelerators could make a decisive contribution here.
    Customized laser flashes
    “The approach is based on a high-power laser to generate strong and extremely short light pulses, which are fired at a thin plastic or metal foil,” explains HZDR physicist Dr. Florian Kroll. The intensity of these flashes knocks swathes of electrons out of the foil, creating a strong electric field that can bundle protons into pulses and accelerate them to high energies. Fascinatingly, the scale of this process is minuscule: The acceleration path is merely a few micrometers long.
    “We have been working on the project for 15 years, but so far, the protons hadn’t picked up enough energy for irradiation,” Beyreuther reports. “Also, the pulse intensity was too variable, so we couldn’t make sure we were delivering the right dose.” But over the past few years, scientists finally achieved crucial improvements, in particular thanks to a better understanding of the interaction between the laser flashes and the foil. “Above all, the precise shape of the laser flashes is particularly important,” Kroll explains. “We can now tailor them to create proton pulses that have sufficient energy and are also stable enough.”
    New research requirements
    Finally, the parameters had been optimized to the point that the HZDR team was able to launch a crucial series of experiments: the first-ever controlled irradiation of tumors in mice with laser-accelerated protons. The experiments were carried out in cooperation with experts from Dresden University Hospital at the OncoRay — National Center for Radiation Research in Oncology and benchmarked with comparative experiments at the conventional proton therapy facility. “We found that our laser-driven proton source can generate biologically valuable data,” Kroll reports. “This sets the stage for further studies that will allow us to test and optimize our method.”
    Another special feature of laser-accelerated proton pulses is their enormous intensity. While in conventional proton therapy, the radiation dose is administered in a span of a few minutes, the laser-based process could occur within a millionth of a second. “There are indications that such a rapid dose administration helps spare the healthy surrounding tissue even better than before,” explains Elke Beyreuther. “We want to follow up on these indications with our experimental setup and conduct preclinical studies to investigate when and how this rapid irradiation method should be used to gain an advantage in cancer therapy.”
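    To get a rough sense of the scale involved (illustrative numbers only, not values reported by the team), the short sketch below compares the instantaneous dose rates when the same dose is delivered over about two minutes versus about one microsecond; the difference is roughly eight orders of magnitude.

```python
# Illustrative dose-rate comparison (toy numbers, not from the study).
dose_gy = 2.0                # assumed dose in gray (Gy)

conventional_time_s = 120.0  # "a span of a few minutes" in conventional proton therapy
laser_time_s = 1e-6          # "a millionth of a second" for a laser-driven pulse

rate_conventional = dose_gy / conventional_time_s  # Gy per second
rate_laser = dose_gy / laser_time_s                # Gy per second

print(f"Conventional dose rate: {rate_conventional:.3g} Gy/s")
print(f"Laser-driven dose rate: {rate_laser:.3g} Gy/s")
print(f"Ratio: {rate_laser / rate_conventional:.1e}")
```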
    Story Source:
    Materials provided by Helmholtz-Zentrum Dresden-Rossendorf. Note: Content may be edited for style and length.

  • Endless forms most beautiful: Why evolution favors symmetry

    From sunflowers to starfish, symmetry appears everywhere in biology. This isn’t just true for body plans — the molecular machines keeping our cells alive are also strikingly symmetric. But why? Does evolution have a built-in preference for symmetry?
    An international team of researchers believe so, and have combined ideas from biology, computer science and mathematics to explain why. As they report in PNAS, symmetric and other simple structures emerge so commonly because evolution has an overwhelming preference for simple “algorithms” — that is, simple instruction sets or recipes for producing a given structure.
    “Imagine having to tell a friend how to tile a floor using as few words as possible,” says Iain Johnston, a professor at the University of Bergen and author on the study. “You wouldn’t say: put diamonds here, long rectangles here, wide rectangles here. You’d say something like: put square tiles everywhere. And that simple, easy recipe gives a highly symmetric outcome.”
    The team used computational modeling to explore how this preference comes about in biology. They showed that many more possible genomes describe simple algorithms than more complex ones. As evolution searches over possible genomes, simple algorithms are more likely to be discovered — as are, in turn, the more symmetric structures that they produce. The scientists then connected this evolutionary picture to a deep result from the theoretical discipline of algorithmic information theory.
    “These intuitions can be formalized in the field of algorithmic information theory, which provides quantitative predictions for the bias towards descriptive simplicity,” says Ard Louis, professor at the University of Oxford and corresponding author on the study.
    The study’s key theoretical idea can be illustrated by a twist on a famous thought experiment in evolutionary biology, which pictures a room full of monkeys trying to write a book by typing randomly on a keyboard. Imagine the monkeys are instead trying to write a recipe. Each is far more likely to randomly hit the letters required to spell out a short, simple recipe than a long, complicated one. If we then follow any recipes the monkeys have produced — our metaphor for producing biological structures from genetic information — we will produce simple outcomes much more often than complicated ones.
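    That argument can be made concrete with a small numerical toy (a sketch of the thought experiment only, not the model in the PNAS paper): because each keystroke is random, the probability of typing out a particular recipe falls off exponentially with its length, so short recipes, and the simple, symmetric structures they encode, are produced far more often than long ones.

```python
import random

# Toy "monkeys typing recipes" experiment (illustrative only, not the paper's model).
# Alphabet of tile-placing commands; a recipe is a string of commands.
ALPHABET = "SDRW"              # e.g. Square, Diamond, Rectangle, Wide tile
SHORT_RECIPE = "SSSS"          # "square tiles everywhere" -> a symmetric floor
LONG_RECIPE = "SDRWSDWRSDRW"   # one specific, complicated 12-tile pattern

def random_recipe(length, rng):
    return "".join(rng.choice(ALPHABET) for _ in range(length))

rng = random.Random(0)
trials = 200_000
hits_short = sum(random_recipe(len(SHORT_RECIPE), rng) == SHORT_RECIPE for _ in range(trials))
hits_long = sum(random_recipe(len(LONG_RECIPE), rng) == LONG_RECIPE for _ in range(trials))

print(f"Short recipe typed {hits_short} times "
      f"(expected ~{trials / len(ALPHABET) ** len(SHORT_RECIPE):.0f})")
print(f"Long recipe typed {hits_long} times "
      f"(expected ~{trials / len(ALPHABET) ** len(LONG_RECIPE):.2e})")
```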
    The scientists show that a wide range of biological structures and systems, from proteins to RNA and signaling networks, adopt algorithmically simple structures with probabilities as predicted by this theory. Going forward, they plan to investigate the predictions that their theory makes for biases in larger-scale developmental processes.
    Story Source:
    Materials provided by The University of Bergen. Note: Content may be edited for style and length.

  • Chemical reaction design goes virtual

    Researchers aim to streamline the time- and resource-intensive process of screening ligands during catalyst design by using virtual ligands.
    Researchers at the Institute for Chemical Reaction Design and Discovery and Hokkaido University have developed a virtual ligand-assisted (VLA) screening method, which could drastically reduce the amount of trial and error required in the lab during transition metal catalyst development. The method, published in the journal ACS Catalysis, may also lead to the discovery of unconventional catalyst designs outside the scope of chemists’ intuition.
    Ligands are molecules that are bonded to the central metal atom of a catalyst, and they affect the activity and selectivity of a catalyst. Finding the optimal ligand to catalyze a specific target reaction can be like finding a needle in a haystack. The VLA screening method provides a way to efficiently search that haystack, surveying a broad range of values for different properties to identify the features of ligands that should be most promising. This narrows down the search area for chemists in the lab and has the potential to greatly accelerate the reaction design process.
    This new work utilizes virtual ligands, which mimic the presence of real ligands; however, instead of being described by many individual constituent atoms — such as carbon or nitrogen — virtual ligands are described using only two metrics: their steric, or space-filling, properties and their electronic properties. Researchers developed approximations that describe each of these effects with a single parameter. Using this simplified description of a ligand enabled researchers to evaluate ligands in a computationally efficient way over a large range of values for these two effects. The result is a “contour map” that shows what combination of steric and electronic effects a ligand should have in order to best catalyze a specific reaction. Chemists can then focus on only testing real ligands that fit these criteria.
    Researchers used monodentate phosphorus (III) virtual ligands as a test group and verified their models for the electronic and steric properties of the virtual ligands against values calculated for corresponding real ligands.
    The VLA screening method was then employed to design ligands for a test reaction in which a CHO group and a hydrogen atom can be added to a double bond in two different possible configurations. The reaction pathway was evaluated for 20 virtual ligand cases (consisting of different assigned values for the electronic and steric parameters) to create a contour map that shows a visual trend for what types of ligands can be expected to result in a highly selective reaction.
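    The workflow can be pictured with a short schematic sketch (the parameter ranges and the scoring function below are placeholders of my own, not the actual VLA model): scan a grid of steric and electronic parameter values, score each virtual ligand for the target reaction, and read the most promising region off the resulting map.

```python
import numpy as np

# Schematic of a virtual-ligand parameter scan (placeholder model, not the VLA method itself).
steric = np.linspace(0.0, 1.0, 21)        # hypothetical steric (space-filling) parameter
electronic = np.linspace(-1.0, 1.0, 21)   # hypothetical electronic parameter
S, E = np.meshgrid(steric, electronic)

def predicted_selectivity(s, e):
    """Placeholder scoring function standing in for the quantum-chemical
    evaluation of the reaction pathway for each virtual ligand."""
    return np.exp(-((s - 0.7) ** 2 + (e - 0.3) ** 2) / 0.1)

score = predicted_selectivity(S, E)       # 2D array -> data for a contour map

best = np.unravel_index(np.argmax(score), score.shape)
print(f"Most promising region: steric ~ {S[best]:.2f}, electronic ~ {E[best]:.2f}")
# Real ligands whose measured steric/electronic descriptors fall near this region
# would be the ones chemists prioritize for testing in the lab.
```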
    Computer models of real ligands were designed based on parameters extracted from the contour map and then evaluated computationally. The selectivity values predicted via the VLA screening method matched well with the values computed for the models of real ligands, showing the viability of the VLA screening method to provide guidance that aids in rational ligand design.
    Beyond saving valuable time and resources, corresponding author Satoshi Maeda anticipates the creation of powerful reaction prediction systems by combining the VLA screening method with other computational techniques.
    “Ligand screening is a pivotal process in the development of transition metal catalysis. As the VLA screening can be conducted in silico, it would save a lot of time and resources in the lab. We believe that this method not only streamlines finding an optimal ligand from a given library of ligands, but also stimulates researchers to explore the untapped chemical space of ligands,” commented corresponding author Satoshi Maeda. “Furthermore, we also expect that by combining this method with our reaction prediction technology using the Artificial Force Induced Reaction method, a new computer-driven discovery scheme of transition metal catalysis can be realized.”
    Story Source:
    Materials provided by Hokkaido University. Note: Content may be edited for style and length.

  • The next generation of robots will be shape-shifters

    Physicists have discovered a new way to coat soft robots in materials that allow them to move and function in a more purposeful way. The research, led by the UK’s University of Bath, is described today in Science Advances.
    Authors of the study believe their breakthrough modelling on ‘active matter’ could mark a turning point in the design of robots. With further development of the concept, it may be possible to determine the shape, movement and behaviour of a soft solid not by its natural elasticity but by human-controlled activity on its surface.
    The surface of an ordinary soft material always shrinks into a sphere. Think of the way water beads into droplets: the beading occurs because the surfaces of liquids and other soft materials naturally contract into the smallest surface area possible — i.e. a sphere. But active matter can be designed to work against this tendency. An example of this in action would be a rubber ball that’s wrapped in a layer of nano-robots, where the robots are programmed to work in unison to distort the ball into a new, pre-determined shape (say, a star).
    It is hoped that active matter will lead to a new generation of machines whose function will come from the bottom up. So, instead of being governed by a central controller (the way today’s robotic arms are controlled in factories), these new machines would be made from many individual active units that cooperate to determine the machine’s movement and function. This is akin to the workings of our own biological tissues, such as the fibres in heart muscle.
    Using this idea, scientists could design soft machines with arms made of flexible materials powered by robots embedded in their surface. They could also tailor the size and shape of drug delivery capsules by coating the surface of nanoparticles in a responsive, active material. This in turn could have a dramatic effect on how a drug interacts with cells in the body.
    Work on active matter challenges the assumption that the energetic cost of the surface of a liquid or soft solid must always be positive because a certain amount of energy is always necessary to create a surface.
    Dr Jack Binysh, study first author, said: “Active matter makes us look at the familiar rules of nature — rules like the fact that surface tension has to be positive — in a new light. Seeing what happens if we break these rules, and how we can harness the results, is an exciting place to be doing research.”
    Corresponding author Dr Anton Souslov added: “This study is an important proof of concept and has many useful implications. For instance, future technology could produce soft robots that are far squishier and better at picking up and manipulating delicate materials.”
    For the study, the researchers developed theory and simulations that described a 3D soft solid whose surface experiences active stresses. They found that these active stresses expand the surface of the material, pulling the solid underneath along with it, and causing a global shape change. The researchers found that the precise shape adopted by the solid could then be tailored by altering the elastic properties of the material.
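    The basic mechanism can be illustrated with a deliberately crude one-parameter toy (my own simplification: a single spherical blob, a scalar surface energy and a quadratic elastic penalty; this is not the 3D model developed in the paper). Making the surface energy coefficient negative means the surface gains energy by growing, so the blob expands until the bulk elasticity balances the active surface term, and the equilibrium size depends on the elastic stiffness.

```python
import numpy as np

# Toy energy balance for an "active" spherical soft solid (illustrative assumptions only).
R0 = 1.0          # rest radius of the passive solid
K = 200.0         # toy elastic stiffness (arbitrary units); must be large enough
                  # relative to |gamma|, otherwise this toy expansion runs away
gamma = -2.0      # negative effective surface tension from the active coating

def total_energy(R):
    elastic = 0.5 * K * (R / R0 - 1.0) ** 2   # penalizes stretching the bulk
    surface = gamma * 4.0 * np.pi * R ** 2    # active surface term (negative -> wants to grow)
    return elastic + surface

R = np.linspace(0.5, 3.0, 10001)
R_eq = R[np.argmin(total_energy(R))]
print(f"Equilibrium radius: {R_eq:.2f} x R0")  # > 1: the active surface expands the solid

# A stiffer material resists the expansion more, so the final size (and, in the
# full 3D problem, the final shape) can be tuned through the elastic properties.
```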
    In the next phase of this work — which has already begun — the researchers will apply this general principle to design specific robots, such as soft arms or self-swimming materials. They will also look at collective behaviour — for example, what happens when you have many active solids, all packed together.
    This work was a collaboration between the Universities of Bath and Birmingham. It was funded by the Engineering and Physical Sciences Research Council (EPSRC) through New Investigator Award no. EP/T000961/1.
    Story Source:
    Materials provided by University of Bath. Note: Content may be edited for style and length.

  • Brain-based computing chips not just for AI anymore

    With the insertion of a little math, Sandia National Laboratories researchers have shown that neuromorphic computers, which synthetically replicate the brain’s logic, can solve more complex problems than those posed by artificial intelligence and may even earn a place in high-performance computing.
    The findings, detailed in a recent article in the journal Nature Electronics, show that neuromorphic simulations employing the statistical method called random walks can track X-rays passing through bone and soft tissue, disease passing through a population, information flowing through social networks and the movements of financial markets, among other uses, said Sandia theoretical neuroscientist and lead researcher James Bradley Aimone.
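    To make the statistical method concrete, here is a plain CPU-style Monte Carlo random walk of the kind listed above (a generic textbook example written from scratch; the paper’s contribution, mapping such walks onto spiking neuromorphic hardware, is not reproduced here).

```python
import random

# Standard Monte Carlo random walk (conventional CPU version of the kind of
# problem described above; the neuromorphic mapping itself is not shown).
def exit_right_probability(start, length, walkers, rng):
    """Fraction of unbiased walkers starting at `start` on 0..length
    that reach `length` before reaching 0 (exact answer: start / length)."""
    hits = 0
    for _ in range(walkers):
        x = start
        while 0 < x < length:
            x += rng.choice((-1, 1))
        hits += (x == length)
    return hits / walkers

rng = random.Random(42)
estimate = exit_right_probability(start=3, length=10, walkers=20_000, rng=rng)
print(f"Estimated exit probability: {estimate:.3f} (exact: 0.300)")
```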
    “Basically, we have shown that neuromorphic hardware can yield computational advantages relevant to many applications, not just artificial intelligence to which it’s obviously kin,” said Aimone. “Newly discovered applications range from radiation transport and molecular simulations to computational finance, biology modeling and particle physics.”
    In optimal cases, neuromorphic computers will solve problems faster and use less energy than conventional computing, he said.
    These bold assertions should be of interest to the high-performance computing community, where the capability to solve statistical problems efficiently is of increasing concern, Aimone said.
    “These problems aren’t really well-suited for GPUs [graphics processing units], which is what future exascale systems are likely going to rely on,” Aimone said. “What’s exciting is that no one really has looked at neuromorphic computing for these types of applications before.”
    Sandia engineer and paper author Brian Franke said, “The natural randomness of the processes you list will make them inefficient when directly mapped onto vector processors like GPUs on next-generation computational efforts. Meanwhile, neuromorphic architectures are an intriguing and radically different alternative for particle simulation that may lead to a scalable and energy-efficient approach for solving problems of interest to us.”

  • Labeling key to success of software company innovations

    Companies in the software industry, where novel ideas are prized, use linguistic tactics to develop new labels for their innovations to stay ahead of competitors. Using language to signal that something is “new and different” is an important tool for success, University of California, Davis, research suggests.
    Category innovation during a study period from 1990 to 2002 included words and phrases such as “platform” and “supply chain management” — market categories that are now established.
    “There is an association between companies that use category innovation and their likelihood to IPO, suggesting category innovation is part of successful firm strategies,” said Elizabeth George Pontikes of the UC Davis Graduate School of Management who is the author of the study.
    The article, “Category Innovation in the Software Industry, 1990-2002,” was published in Strategic Management Journal in January. Pontikes looked at more than 400 labels used in news releases about innovations by more than 4,000 different software firms over 12 years. Researchers also interviewed 12 executives and venture capitalists in the software industry.
    Category innovation, as defined in the study, is a practice that involves firms claiming a new category label to describe the market they are in. A firm may do this to differentiate from rivals, or to try to become a market leader or even a “category king.”
    One executive interviewed for the study described the “tag management” label, for example, as something that “wasn’t super innovative, but it was labeling it … so it was strategic the way we were thinking about it.”
    The research found that 75% of the labels analyzed had only one or two firms using them in their first two years, a period when it is traditionally difficult to determine whether an innovation even has a nascent market. Such labels don’t gain traction until their second year, the research showed.
    Firms sometimes engage in category innovation by borrowing and recasting a little-known term, or do so without realizing that another firm has already claimed the label, Pontikes said.
    Story Source:
    Materials provided by University of California – Davis. Original written by Karen Michele Nikos-Rose. Note: Content may be edited for style and length.

  • Magnetism helps electrons vanish in high-temp superconductors

    Superconductors — metals in which electricity flows without resistance — hold promise as the defining material of the near future, according to physicist Brad Ramshaw, and are already used in medical imaging machines, drug discovery research and quantum computers being built by Google and IBM.
    However, the super-low temperatures conventional superconductors need to function — a few degrees above absolute zero — make them too expensive for wide use.
    In their quest to find more useful superconductors, Ramshaw, the Dick & Dale Reis Johnson Assistant Professor of physics in the College of Arts and Sciences (A&S), and colleagues have discovered that magnetism is key to understanding the behavior of electrons in “high-temperature” superconductors. With this finding, they’ve solved a 30-year-old mystery surrounding this class of superconductors, which function at much higher temperatures, greater than 100 degrees above absolute zero. Their paper, “Fermi Surface Transformation at the Pseudogap Critical Point of a Cuprate Superconductor,” was published in Nature Physics on March 10.
    “We’d like to understand what makes these high-temperature superconductors work and engineer that property into some other material that is easier to adopt in technologies,” Ramshaw said.
    A central mystery to high-temperature superconductors is what happens with their electrons, Ramshaw said.
    “All metals have electrons, and when a metal becomes a superconductor, the electrons pair up with each other,” he said. “We measure something called the ‘Fermi surface,’ which you can think of as a map showing where all the electrons are in a metal.”
    To study how electrons pair up in high-temperature superconductors, researchers continuously change the number of electrons through a process known as chemical doping. In high-temperature superconductors, at a certain “critical point,” electrons seem to vanish from the Fermi surface map, Ramshaw said.

  • Physicists show how frequencies can easily be multiplied without special circuitry

    A new discovery by physicists at Martin Luther University Halle-Wittenberg (MLU) could make certain components in computers and smartphones obsolete. The team has succeeded in directly converting frequencies to higher ranges in a common magnetic material without the need for additional components. Frequency multiplication is a fundamental process in modern electronics. The team reports on its research in the latest issue of Science.
    Digital technologies and devices are already responsible for about ten percent of global electricity consumption, and the trend is rising sharply. “It is therefore necessary to develop more efficient components for information processing,” says Professor Georg Woltersdorf, a physicist from MLU.
    Non-linear electronic circuits are typically used to generate the high-frequency gigahertz signals needed to operate today’s devices. The team at MLU has now found a way to do this within a magnetic material without the electronic components that are usually used for this. Instead, the magnetization is excited by a low-frequency megahertz source. Using the newly discovered effect, the source generates several frequency components, each of which is a multiple of the excitation frequency. These cover a range of six octaves and reach up to several gigahertz. “This is like hitting the lowest note on a piano while also hearing the corresponding harmonic tones of the higher octaves,” explains Woltersdorf.
    The surprising effect of frequency multiplication is explained by synchronized switching of the dynamic magnetization on a micron scale. “Different areas do not switch at the same time. Instead, they are triggered by adjacent areas just like in a falling row of dominoes,” explains first author Chris Körner from the Institute of Physics at MLU.
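    The harmonic content of such a switching response can be illustrated with a generic numerical toy (my own idealization, treating the magnetization as an instantaneous sign switch rather than the micromagnetic dynamics studied at MLU): driving a strongly non-linear, saturating response with a single low frequency yields a comb of higher harmonics of that frequency.

```python
import numpy as np

# Toy illustration: a saturating/switching response to a single-frequency drive
# contains many harmonics of the drive frequency (idealized, not the MLU model).
f_drive = 1e6                        # 1 MHz excitation
fs = 256e6                           # sampling rate
t = np.arange(0, 64e-6, 1 / fs)      # 64 periods of the drive

drive = np.sin(2 * np.pi * f_drive * t)
response = np.sign(drive)            # idealized magnetization switching (square wave)

spectrum = np.abs(np.fft.rfft(response))
freqs = np.fft.rfftfreq(len(response), 1 / fs)

# The strongest spectral lines sit at odd multiples of the 1 MHz drive.
peaks = freqs[np.argsort(spectrum)[-5:]] / 1e6
print("Strongest components (MHz):", np.sort(peaks))
```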
    The discovery could also help make digital technologies more energy efficient in the future, and it is important for new applications. Today’s microelectronics use electron charges as information carriers. A major disadvantage of this approach is that transporting electric charge releases heat and therefore requires a lot of energy. Spin electronics could provide a promising solution: in addition to the electron’s charge, it also exploits the electron’s magnetic moment, or spin, whose properties open up the possibility of significantly improving energy efficiency. The newly discovered effect could enable space-saving and efficient frequency sources for spin electronics in the gigahertz range.
    The study was funded in part by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation).
    Story Source:
    Materials provided by Martin-Luther-Universität Halle-Wittenberg. Note: Content may be edited for style and length.