More stories

  • Rebooting evolution

    The building blocks of life-saving therapeutics could be developed in days instead of years thanks to new software that simulates evolution.
    Proseeker is the name of a new computational tool that mimics the processes of natural selection, producing proteins that can be used for a range of medicinal and household uses.
    The enzymes in your laundry detergent, the insulin in your diabetes medication or the antibodies used in cancer therapy are currently made in the laboratory using a painstaking process called directed evolution.
    Laboratory evolution mimics natural evolution: mutations are introduced into naturally sourced proteins, the best mutants are selected, and the cycle of mutation and selection is repeated, a time-intensive and laborious process that eventually yields useful proteins.
    Scientists at the ARC Centre of Excellence in Synthetic Biology have now discovered a way to perform the entire process of directed evolution using a computer. It can reduce the time required from many months or even years to just days.
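    The mutate-select-repeat loop described above can be sketched as a toy simulation. This is purely illustrative: the mutation scheme and the similarity-based fitness function below are invented stand-ins, not the scoring used by Proseeker.

```python
import random

random.seed(0)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mutate(seq, rate=0.1):
    """Randomly substitute residues, mimicking mutagenesis."""
    return "".join(random.choice(AMINO_ACIDS) if random.random() < rate else aa
                   for aa in seq)

def fitness(seq, target):
    """Toy stand-in for a real scoring function (e.g. predicted DNA binding)."""
    return sum(a == b for a, b in zip(seq, target))

def directed_evolution(start, target, pop_size=50, generations=100):
    """Repeatedly mutate the current best sequence and keep the fittest mutant."""
    best = start
    for _ in range(generations):
        population = [mutate(best) for _ in range(pop_size)]
        best = max(population + [best], key=lambda s: fitness(s, target))
        if best == target:
            break
    return best

start = "".join(random.choice(AMINO_ACIDS) for _ in range(20))
target = "".join(random.choice(AMINO_ACIDS) for _ in range(20))
evolved = directed_evolution(start, target)
print(fitness(evolved, target))  # fitness never decreases across generations
```

    Because selection keeps the best candidate from each round, fitness is monotonically non-decreasing, which is what makes the in-silico loop so much faster than its laboratory counterpart.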
    The team was led by Professor Oliver Rackham, Curtin University, in collaboration with Professor Aleksandra Filipovska, the University of Western Australia, and is based at the Harry Perkins Institute of Medical Research in Perth, Western Australia.
    To prove how useful this process could be they took a protein with no function at all and gave it the ability to bind DNA.
    ‘Proteins that bind DNA are currently revolutionising the field of gene therapy where scientists are using them to reverse disease-causing mutations,’ says Professor Rackham. ‘So this could be of great use in the future.
    ‘Reconstituting the entire process of directed evolution represents a radical advance for the field.’
    Story Source:
    Materials provided by Curtin University. Original written by Lucien Wilkinson. Note: Content may be edited for style and length.

  • New methods for network visualizations enable change of perspectives and views

    When visualizing data using networks, the type of representation is crucial for extracting hidden information and relationships. The research group of Jörg Menche, Adjunct Principal Investigator at the CeMM Research Center for Molecular Medicine of the Austrian Academy of Sciences, Professor at the University of Vienna, and Group Leader at Max Perutz Labs, developed a new method for generating network layouts that allows different kinds of network information to be visualized in two- and three-dimensional virtual space and explored from different perspectives. The results could also facilitate future research on rare diseases by providing more versatile, comprehensible representations of complex protein interactions.
    Network visualizations allow for exploring connections between individual data points. However, the more complex and larger the networks, the more difficult it becomes to find the information you are looking for. For lack of suitable layouts, so-called “hairball” visualizations emerge that obscure network structure rather than elucidate it. Scientists from Jörg Menche’s research group at CeMM and Max Perutz Labs (a joint venture of the University of Vienna and the Medical University of Vienna) developed a method that makes it possible to specify in advance which network properties and information should be visually represented in order to explore them interactively. The results have now been published in Nature Computational Science.
    Reducing complexity
    For the study, first author Christiane V. R. Hütter, a PhD student in Jörg Menche’s research group, used the latest dimensionality reduction techniques that allow visualizations for networks with thousands of points to be computed within a very short time on a standard laptop. “The key idea behind our research was to develop different views for large networks to capture the complexity and get a more comprehensive view and present it in a visually understandable way — similar to looking at maps of the same region with different information content, detailed views and perspectives.” Menche Lab scientists developed four different network layouts, which they termed cartographs, as well as two- and three-dimensional visualizations, each following different rules to open up new perspectives on a given dataset. Any network information can be encoded and visualized in this fashion, for example, the structural significance of a particular point, but also functional features. Users can switch between different layouts to get a comprehensive picture. Study leader Jörg Menche explains: “Using the new layouts, we can now specify in advance that we want to see, for example, the number of connections of a point within the network represented, or a particular functional characteristic. In a biological network, for instance, I can explore connections between genes that are associated with a particular disease and what they might have in common.”
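    The core idea of encoding a chosen node property directly into the layout can be illustrated with a toy "cartograph". This sketch is not the published algorithm: it simply places nodes on a circle and lets one feature (here, degree) set the radial coordinate, so hubs end up near the center.

```python
import math
from collections import defaultdict

# Toy feature-driven layout: angle by node index, radius by degree,
# so the encoded feature is directly readable from the picture.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4), (4, 5)]

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

nodes = sorted(degree)
max_deg = max(degree.values())

layout = {}
for i, n in enumerate(nodes):
    angle = 2 * math.pi * i / len(nodes)
    radius = 1.0 - degree[n] / (max_deg + 1)   # high-degree hubs sit near the center
    layout[n] = (radius * math.cos(angle), radius * math.sin(angle))

# Node 0 has the highest degree (3) and therefore the smallest radius.
```

    Swapping `degree` for any other per-node quantity (a disease association score, say) yields a different "view" of the same network, which is the change of perspective the cartograph layouts provide.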
    The interplay of genes
    The scientists performed a proof-of-concept on both simple model networks and the complex interactome network, which maps all the proteins of the human body and their interactions. This consists of more than 16,000 points and over 300,000 connections. Christiane V.R. Hütter explains: “Using our new layouts, we are now able to visually represent different features of proteins and their connections, such as the close relationship between the biological importance of a protein and its centrality within the network. We can also visualize connection patterns between a group of proteins associated with the same disease that are difficult to decipher using conventional methods.”
    Tailored solutions
    The flexibility of the new framework allows users to tailor network visualizations for a specific application. For example, the study authors were able to develop 3D interactome layouts specifically for studying the biological functions of certain genes whose mutations are suspected to cause rare diseases. Jörg Menche adds, “To facilitate the visual representation and also analysis of large networks such as the interactome, our layouts can also be integrated into a virtual reality platform.”

  • Visualization of the origin of magnetic forces by atomic resolution electron microscopy

    The joint development team of Professor Shibata (the University of Tokyo), JEOL Ltd. and Monash University succeeded in directly observing an atomic magnetic field, the origin of magnetism (magnetic force), for the first time in the world. The observation was conducted using the newly developed Magnetic-field-free Atomic-Resolution STEM (MARS) (1). The team had already succeeded in observing the electric field inside atoms for the first time in 2012. However, since the magnetic fields in atoms are extremely weak compared with electric fields, the technology to observe them had remained undeveloped since the invention of the electron microscope. This is an epoch-making achievement that will rewrite the history of microscope development.
    Electron microscopes have the highest spatial resolution among all currently used microscopes. However, to achieve ultra-high resolution so that atoms can be observed directly, the sample must be placed in an extremely strong lens magnetic field. Therefore, atomic-resolution observation of magnetic materials that are strongly affected by the lens magnetic field, such as magnets and steels, had been impossible for many years. To overcome this problem, the team developed a lens with a completely new structure in 2019. Using this new lens, the team achieved atomic-resolution observation of magnetic materials free from the influence of the lens magnetic field. The team’s next goal was to observe the magnetic fields of atoms themselves, the origin of magnetic force, and they continued technological development to achieve it.
    This time, the joint development team took on the challenge of observing the magnetic fields of iron (Fe) atoms in a hematite crystal (α-Fe2O3) by loading MARS with a newly developed high-sensitivity high-speed detector, and further using computer image processing technology. To observe the magnetic fields, they used the Differential Phase Contrast (DPC) method (2) at atomic resolution, which is an ultrahigh-resolution local electromagnetic field measurement method using a scanning transmission electron microscope (STEM) (3), developed by Professor Shibata et al. The results directly demonstrated that iron atoms themselves are small magnets (atomic magnet). The results also clarified the origin of magnetism (antiferromagnetism (4)) exhibited by hematite at the atomic level.
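    The principle behind DPC can be sketched numerically: the local field deflects the electron beam, producing an intensity imbalance between opposite segments of a segmented detector, and to first order the field component is proportional to the normalized difference. The function, signal values, and proportionality constant below are invented for illustration, not the team's calibration.

```python
# Toy differential phase contrast (DPC) estimate from a four-segment detector.
def dpc_signal(i_left, i_right, i_top, i_bottom, k=1.0):
    """Estimate in-plane field components from segment intensities.

    k is a proportionality constant that a real experiment would calibrate.
    """
    total = i_left + i_right + i_top + i_bottom
    ex = k * (i_right - i_left) / total   # field component along x
    ey = k * (i_top - i_bottom) / total   # field component along y
    return ex, ey

# A slight rightward imbalance implies a small deflection along +x.
ex, ey = dpc_signal(0.24, 0.26, 0.25, 0.25)
```

    Scanning such a probe across the crystal and evaluating this difference at every position is, in outline, how a per-atom field map is assembled.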
    The present results demonstrate the direct observation of atomic magnetic fields and establish a method for measuring them. This method is expected to lead future research and development of various magnetic materials and devices, such as magnets, steels, magnetic devices, magnetic memory, magnetic semiconductors, spintronics and topological materials.
    This research was conducted by the joint development team of Professor Naoya Shibata (Director of the Institute of Engineering Innovation, School of Engineering, the University of Tokyo) and Dr. Yuji Kohno et al. (Specialists of JEOL Ltd.) in collaboration with Monash University, Australia, under the Advanced Measurement and Analysis Systems Development (SENTAN), Japan Science and Technology Agency (JST).
    Terms
    (1) Magnetic-field-free Atomic-Resolution STEM (MARS)

  • How a single nerve cell can multiply

    Neurons are constantly performing complex calculations to process sensory information and infer the state of the environment. For example, to localize a sound or to recognize the direction of visual motion, individual neurons are thought to multiply two signals. However, how such a computation is carried out has been a mystery for decades. Researchers at the Max Planck Institute for Biological Intelligence, in foundation (i.f.), have now discovered in fruit flies the biophysical basis that enables a specific type of neuron to multiply two incoming signals. This provides fundamental insights into the algebra of neurons — the computations that may underlie countless processes in the brain.
    We easily recognize objects and the direction in which they move. The brain calculates this information based on local changes in light intensity detected by our retina. The calculations occur at the level of individual neurons. But what does it mean when neurons calculate? In a network of communicating nerve cells, each cell must calculate its outgoing signal based on a multitude of incoming signals. Certain types of signals will increase and others will reduce the outgoing signal — processes that neuroscientists refer to as ‘excitation’ and ‘inhibition’.
    Theoretical models assume that seeing motion requires the multiplication of two signals, but how such arithmetic operations are performed at the level of single neurons was previously unknown. Researchers from Alexander Borst’s department at the Max Planck Institute for Biological Intelligence, i.f., have now solved this puzzle in a specific type of neuron.
    Recording from T4 cells
    The scientists focused on so-called T4 cells in the visual system of the fruit fly. These neurons only respond to visual motion in one specific direction. The lead authors Jonatan Malis and Lukas Groschner succeeded for the first time in measuring both the incoming and the outgoing signals of T4 cells. To do so, the neurobiologists placed the animal in a miniature cinema and used minuscule electrodes to record the neurons’ electrical activities. Since T4 cells are among the smallest of all neurons, the successful measurements were a methodological milestone.
    Together with computer simulations, the data revealed that the activity of a T4 cell is constantly inhibited. However, if a visual stimulus moves in a certain direction, the inhibition is briefly lifted. Within this short time window, an incoming excitatory signal is amplified: Mathematically, constant inhibition is equivalent to a division; removing the inhibition results in a multiplication. “We have discovered a simple basis for a complex calculation in a single neuron,” explains Lukas Groschner. “The inverse operation of a division is a multiplication. Neurons seem to be able to exploit this relationship,” adds Jonatan Malis.
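    The division/multiplication relationship described above can be made concrete with a toy steady-state membrane model, where the response is excitation divided by the sum of leak and inhibitory conductances. All parameter values here are illustrative, not measured T4-cell quantities.

```python
# Toy membrane model: tonic inhibition divides the excitatory signal;
# briefly removing it multiplies the response.
def response(excitation, inhibition, leak=1.0):
    """Steady-state response = excitation / (leak + inhibition)."""
    return excitation / (leak + inhibition)

excitation = 2.0
tonic_inhibition = 3.0

inhibited = response(excitation, tonic_inhibition)  # 2 / (1 + 3) = 0.5
released = response(excitation, 0.0)                # 2 / (1 + 0) = 2.0

gain = released / inhibited  # lifting inhibition amplifies the response 4x
```

    In this picture, the stronger the tonic inhibition, the larger the multiplicative gain when it is lifted, which is the sense in which "the inverse operation of a division is a multiplication."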
    Relevance for behavior
    The T4 cell’s ability to multiply is linked to a certain receptor molecule on its surface. “Animals lacking this receptor misperceive visual motion and fail to maintain a stable course in behavioral experiments,” explains co-author Birte Zuidinga, who analyzed the walking trajectories of fruit flies in a virtual reality setup. This illustrates the importance of this type of computation for the animals’ behavior. “So far, our understanding of the basic algebra of neurons was rather incomplete,” says Alexander Borst. “However, the comparatively simple brain of the fruit fly has allowed us to gain insight into this seemingly intractable puzzle.” The researchers assume that similar neuronal computations underlie, for example, our abilities to localize sounds, to focus our attention, or to orient ourselves in space.
    Story Source:
    Materials provided by Max-Planck-Gesellschaft. Note: Content may be edited for style and length.

  • Fingertip sensitivity for robots

    In a paper published on February 23, 2022 in Nature Machine Intelligence, a team of scientists at the Max Planck Institute for Intelligent Systems (MPI-IS) introduces a robust soft haptic sensor named “Insight” that uses computer vision and a deep neural network to accurately estimate where objects come into contact with the sensor and how large the applied forces are. The research project is a significant step toward robots being able to feel their environment as accurately as humans and animals. Like its natural counterpart, the fingertip sensor is very sensitive, robust, and high-resolution.
    The thumb-shaped sensor is made of a soft shell built around a lightweight stiff skeleton. This skeleton holds up the structure much like bones stabilize the soft finger tissue. The shell is made from an elastomer mixed with dark but reflective aluminum flakes, resulting in an opaque greyish color which prevents any external light from finding its way in. Hidden inside this finger-sized cap is a tiny 160-degree fish-eye camera which records colorful images illuminated by a ring of LEDs.
    When any objects touch the sensor’s shell, the appearance of the color pattern inside the sensor changes. The camera records images many times per second and feeds a deep neural network with this data. The algorithm detects even the smallest change in light in each pixel. Within a fraction of a second, the trained machine-learning model can map out where exactly the finger is contacting an object, determine how strong the forces are and indicate the force direction. The model infers what scientists call a force map: it provides a force vector for every point in the three-dimensional fingertip.
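    The inference step can be sketched as a mapping from per-pixel brightness changes to a force vector. Here a tiny linear model with invented weights stands in for Insight's deep network; the pixel values and weight matrix are fabricated for illustration.

```python
# Sketch: map flattened pixel changes (image vs. resting image) to (fx, fy, fz).
def force_from_pixels(pixel_deltas, weights):
    """Apply a trained weight matrix row-by-row to the pixel-change vector."""
    return tuple(sum(w * d for w, d in zip(row, pixel_deltas))
                 for row in weights)

pixel_deltas = [0.1, 0.0, -0.2, 0.05]  # brightness change per pixel
weights = [                             # invented 3x4 "trained" weight matrix
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
]

fx, fy, fz = force_from_pixels(pixel_deltas, weights)
# fz pools the last two pixels: -0.2 + 0.05 = -0.15
```

    The real system evaluates such a mapping (learned end-to-end, and far from linear) at every point of the fingertip, which is what produces the force map described above.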
    “We achieved this excellent sensing performance through the innovative mechanical design of the shell, the tailored imaging system inside, automatic data collection, and cutting-edge deep learning,” says Georg Martius, Max Planck Research Group Leader at MPI-IS, where he heads the Autonomous Learning Group. His Ph.D. student Huanbo Sun adds: “Our unique hybrid structure of a soft shell enclosing a stiff skeleton ensures high sensitivity and robustness. Our camera can detect even the slightest deformations of the surface from one single image.” Indeed, while testing the sensor, the researchers realized it was sensitive enough to feel its own orientation relative to gravity.
    The third member of the team is Katherine J. Kuchenbecker, the Director of the Haptic Intelligence Department at MPI-IS. She confirms that the new sensor will be useful: “Previous soft haptic sensors had only small sensing areas, were delicate and difficult to make, and often could not feel forces parallel to the skin, which are essential for robotic manipulation like holding a glass of water or sliding a coin along a table,” says Kuchenbecker.
    But how does such a sensor learn? Huanbo Sun designed a testbed to generate the training data needed for the machine-learning model to understand the correlation between the change in raw image pixels and the forces applied. The testbed probes the sensor all around its surface and records the true contact force vector together with the camera image inside the sensor. In this way, about 200,000 measurements were generated. It took nearly three weeks to collect the data and another day to train the machine-learning model. Surviving this long experiment with so many different contact forces helped prove the robustness of Insight’s mechanical design, and tests with a larger probe showed how well the sensing system generalizes.
    Another special feature of the thumb-shaped sensor is that it possesses a nail-shaped zone with a thinner elastomer layer. This tactile fovea is designed to detect even tiny forces and detailed object shapes. For this super-sensitive zone, the scientists chose an elastomer thickness of 1.2 mm rather than the 4 mm they used on the rest of the finger sensor.
    “The hardware and software design we present in our work can be transferred to a wide variety of robot parts with different shapes and precision requirements. The machine-learning architecture, training, and inference process are all general and can be applied to many other sensor designs,” Huanbo Sun concludes.
    Video: https://youtu.be/lTAJwcZopAA
    Story Source:
    Materials provided by Max Planck Institute for Intelligent Systems. Note: Content may be edited for style and length.

  • Inorganic borophene liquid crystals: A superior new material for optoelectronic devices

    Liquid crystals derived from borophene have risen in popularity, owing to their immense applicability in optoelectronic and photonic devices. However, their development requires a very narrow temperature range, which hinders their large-scale application. Now, Tokyo Tech researchers investigated a liquid-state borophene oxide, discovering that it exhibited high thermal stability and optical switching behavior even at low voltages. These findings highlight the strong potential of borophene oxide-derived liquid crystals for use in widespread applications.
    Two-dimensional (2D) atomic layered materials, such as the carbon-based graphene and the boron-based borophene, are highly sought after for their applications in a variety of optoelectronic devices, owing to their desirable electronic properties. The monolayer structure of borophene with a network of boron bonds endows it with high flexibility, which can be beneficial for the generation of a liquid state at low temperatures. Thus, it is not surprising that liquid crystals derived from 2D networked structures are in high demand. However, the poor stability of borophene, in particular, makes it difficult for it to undergo a phase transition to the liquid state.
    In contrast, borophene oxide — a derivative of borophene — can improve the stability of the internal boron network, in turn stabilizing the entire structure. This property of borophene oxide is different from that of other 2D materials, which are unable to yield liquid crystals without the use of solvents.
    To compensate for the lack of suitable liquid crystals, a team of researchers from Japan, including Assistant Professor Tetsuya Kambe and Professor Kimihisa Yamamoto from Tokyo Institute of Technology, investigated the properties of a borophene oxide analogue as a fully inorganic liquid with a layered structure. Their study was recently published in Nature Communications.
    Initially, the team used previously tested methods to generate borophene oxide layers (BoL) as crystals (BoL-C). They then converted BoL-C to liquid crystals (BoL-LC) by heating them to temperatures of 105-200°C. They observed that the resultant dehydration weakened the interactions between the interlayers of BoL-C, which is desirable for its flexibility.
    The team then analyzed the structural properties of BoL-LC using polarized optical microscopy, discovering that BoL-LC sheets are found stacked parallel to the surface of the liquid drop with a slightly curved form. This spherulite orientation of borophene sheets was confirmed using scanning electron microscopy.
    An analysis of the phase transition features revealed that phase transition (P-ii/P-i) occurred at around 100°C for BoL-LC. In fact, both transition phases exhibited high thermal stability at extreme temperatures. The team also observed a highly ordered orientation of the P-ii phase.
    To test its optical switching behavior, the team created a dynamic scattering device using BoL-LC, and found that unlike other organic liquid crystals, the BoL-based device responded well to voltages as low as 1V. These findings highlight the feasibility of inorganic liquid devices in harsh environments.
    “Although a liquid crystal device using graphene oxide has been reported previously, it was a lyotropic liquid crystal, with a strong dependence on the solution concentration. Therefore, the previously reported material is different from the liquid borophene created in this study, without the use of any solvents,” says Dr. Kambe, while discussing the advantages of BoL-LC over other 2D liquid crystals.
    What’s more, they found that even upon exposure to direct fire, BoL-LC was noncombustible! This confirms that BoL-LC in a liquid state with an ordered layer structure can exist over a wide range of temperatures — a property which has not been observed so far for other organic materials.
    When asked about the implications of these findings, Dr. Kambe and Dr. Yamamoto stated, “BoL-LC exhibits strong potential for use in widespread applications that are unavailable to conventional organic liquid crystals or inorganic materials.”
    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  • Automation is fueling increasing mortality among U.S. adults, study finds

    The automation of U.S. manufacturing — robots replacing people on factory floors — is fueling a rising mortality rate among America’s working-age adults, according to a new study by researchers at Yale and the University of Pennsylvania.
    The study, published Feb. 23 in the journal Demography, found evidence of a causal link between automation and increasing mortality, driven largely by increased “deaths of despair,” such as suicides and drug overdoses. This is particularly true for males and females aged 45 to 54, according to the study. But researchers also found evidence of increased mortality across multiple age and sex groups from causes as varied as cancer and heart disease.
    Public policy, including strong social-safety net programs, higher minimum wages, and limiting the supply of prescription opioids, can blunt automation’s effects on a community’s health, the researchers concluded.
    “For decades, manufacturers in the United States have turned to automation to remain competitive in a global marketplace, but this technological innovation has reduced the number of quality jobs available to adults without a college degree — a group that has faced increased mortality in recent years,” said lead author Rourke O’Brien, assistant professor of sociology in Yale’s Faculty of Arts and Sciences. “Our analysis shows that automation exacts a toll on the health of individuals both directly — by reducing employment, wages, and access to healthcare — as well as indirectly, by reducing the economic vitality of the broader community.”
    Since 1980, mortality rates in the United States have diverged from those in other high-income countries. Today, Americans on average die three years sooner than their counterparts in other wealthy nations.
    Automation is a major source of the decline in U.S. manufacturing jobs, along with other factors including competition with manufacturers in countries with lower labor costs, such as China and Mexico. Previous research has shown that the adoption of industrial robots caused the loss of an estimated 420,000 to 750,000 jobs during the 1990s and 2000s, the majority of which were in manufacturing.
    To understand the role of automation in increased mortality, O’Brien and co-authors Elizabeth F. Blair and Atheendar Venkataramani, both of the University of Pennsylvania, used newly available measures that chart the adoption of automation across U.S. industries and localities between 1993 and 2007. They combined these measures with U.S. death-certificate data over the same period to estimate the causal effect of automation on the mortality of working-age adults at the county level and for specific types of deaths.
    According to the study, each new robot per 1,000 workers led to about eight additional deaths per 100,000 males aged 45 to 54 and nearly four additional deaths per 100,000 females in the same age group. The analysis showed that automation caused a substantial increase in suicides among middle-aged men and drug overdose deaths among men of all ages and women aged 20 to 29. Overall, automation could be linked to 12% of the increase in drug overdose mortality among all working-age adults during the study period. The researchers also discovered evidence associating the lost jobs and reduced wages caused by automation with increased homicide, cancer, and cardiovascular disease within specific age-sex groups.
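    The headline estimates translate into a simple back-of-the-envelope calculation. The county size and robot-adoption figures below are hypothetical inputs chosen only to show how the reported rates scale.

```python
# Study estimates: each additional robot per 1,000 workers raised mortality
# among 45- to 54-year-olds by ~8 per 100,000 males and ~4 per 100,000 females.
DEATHS_PER_ROBOT = {"male_45_54": 8.0, "female_45_54": 4.0}  # per 100,000 people

def excess_deaths(robots_per_1000, population, rate_per_100k):
    """Scale the per-100,000 rate to a population for a given robot increase."""
    return robots_per_1000 * rate_per_100k * population / 100_000

# Hypothetical county with 50,000 men aged 45-54 adding 2 robots per 1,000 workers:
extra = excess_deaths(2, 50_000, DEATHS_PER_ROBOT["male_45_54"])
print(extra)  # 8.0 additional deaths
```

    The same function with the female rate, or with the drug-overdose-specific estimates, gives the corresponding group-level figures.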
    The researchers examined policy areas that could mitigate automation’s harmful effects. They found that robust social safety net programs, such as Medicaid and unemployment benefits, at the state level moderated the effects of automation among middle-aged males, particularly suicide and drug overdose deaths. Labor market policies also soften automation’s effects on middle-aged men: The effects of automation were more pronounced in states with “right to work” laws, which contribute to lower rates of unionization, and states with lower minimum wages, according to the study.
    The study found suggestive evidence that the effect of automation on drug overdose deaths might be higher in areas with higher per capita supplies of prescription opioids.
    “Our findings underscore the importance of public policy in supporting the individuals and communities who have lost their jobs or seen their wages cut due to automation,” said Venkataramani, co-author of the study. “A strong social safety net and labor market policies that improve the quality of jobs available to workers without a college degree may help reduce deaths of despair and strengthen the general health of communities, particularly those in our nation’s industrial heartland.”
    The study’s authors are members of Opportunity for Health, a research group that explores how economic opportunity affects the health of individuals and communities. The study was supported by the U.S. Social Security Administration.
    Story Source:
    Materials provided by Yale University. Original written by Mike Cummings. Note: Content may be edited for style and length.

  • Navigation tools could be pointing drivers to the shortest route — but not the safest

    Time for a road trip. You punch the destination into your GPS and choose the suggested route. But is this shortest route the safest? Not necessarily, according to new findings from Texas A&M University researchers.
    Dominique Lord and Soheil Sohrabi, with funding from the A.P. and Florence Wiley Faculty Fellow at Texas A&M, designed a study to examine the safety of navigational tools. Comparing the safest and shortest routes between five metropolitan areas in Texas — Dallas-Fort Worth, Waco, Austin, Houston and Bryan-College Station — including more than 29,000 road segments, they found that taking a route with an 8% reduction in travel time could increase the risk of being in a crash by 23%.
    “As route guidance systems aim to find the shortest path between a beginning and ending point, they can misguide drivers to take routes that may minimize travel time, but concurrently, carry a greater risk of crashes,” said Lord, professor in the Zachry Department of Civil and Environmental Engineering.
    The researchers collected and combined road and traffic characteristics, including geometry design, number of lanes, lane width, lighting and average daily traffic, weather conditions and historical crash data to analyze and develop statistical models for predicting the risk of being involved in crashes.
    The study revealed inconsistencies between the shortest and safest routes. In clear weather conditions, taking the shortest route instead of the safest between Dallas-Fort Worth and Bryan-College Station will reduce the travel time by 8%, but the probability of a crash increases by 20%. The analysis suggests that taking a longer route between Austin and Houston, with an 11% increase in travel time, results in a 1% decrease in the daily probability of crashes.
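    The trade-off the study describes can be sketched as a route-scoring rule that weighs travel time against crash risk. The figures mirror the reported example (8% faster, roughly 20% higher daily crash probability), but the utility weighting itself is an invented illustration, not the authors' model.

```python
# Toy route chooser: score = normalized travel time + risk_weight * crash risk.
def pick_route(routes, risk_weight=0.5):
    """Return the route name with the lowest combined score (lower is better)."""
    base_time = min(t for t, _ in routes.values())
    def score(route):
        t, risk = routes[route]
        return t / base_time + risk_weight * risk
    return min(routes, key=score)

routes = {
    "shortest": (0.92, 0.20),  # 8% faster, higher daily crash probability
    "safest":   (1.00, 0.00),  # baseline time, baseline risk
}

print(pick_route(routes, risk_weight=0.5))  # prints "safest"
```

    Lowering `risk_weight` (a driver who cares mostly about time) flips the choice back to "shortest", which is essentially what a time-only navigation system does implicitly.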
    Overall, local roads with a higher risk of crashes feature poor geometric designs, drainage problems, a lack of lighting and a higher risk of wildlife-vehicle collisions.