More stories

  • Graphene-based memory resistors show promise for brain-based computing

    As progress in traditional computing slows, new forms of computing are coming to the forefront. At Penn State, a team of engineers is attempting to pioneer a type of computing that mimics the efficiency of the brain’s neural networks while exploiting the brain’s analog nature.
    Modern computing is digital, made up of two states, on-off or one and zero. An analog computer, like the brain, has many possible states. It is the difference between flipping a light switch on or off and turning a dimmer switch to varying amounts of lighting.
    Neuromorphic or brain-inspired computing has been studied for more than 40 years, according to Saptarshi Das, the team leader and Penn State assistant professor of engineering science and mechanics. What’s new is that as the limits of digital computing have been reached, the need for high-speed image processing, for instance for self-driving cars, has grown. The rise of big data, which requires types of pattern recognition for which the brain architecture is particularly well suited, is another driver in the pursuit of neuromorphic computing.
    “We have powerful computers, no doubt about that, the problem is you have to store the memory in one place and do the computing somewhere else,” Das said.
    The shuttling of this data from memory to logic and back again takes a lot of energy and slows the speed of computing. In addition, this computer architecture requires a lot of space. If the computation and memory storage could be located in the same space, this bottleneck could be eliminated.
    “We are creating artificial neural networks, which seek to emulate the energy and area efficiencies of the brain,” explained Thomas Shranghamer, a doctoral student in the Das group and first author on a paper recently published in Nature Communications. “The brain is so compact it can fit on top of your shoulders, whereas a modern supercomputer takes up a space the size of two or three tennis courts.”
    Like the synapses connecting neurons in the brain, which can be reconfigured, the artificial neural networks the team is building can be reconfigured by applying a brief electric field to a sheet of graphene, a one-atom-thick layer of carbon atoms. In this work they show at least 16 possible memory states, as opposed to the two in most oxide-based memristors, or memory resistors.
    “What we have shown is that we can control a large number of memory states with precision using simple graphene field effect transistors,” Das said.
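    A rough sense of what 16 states per device buys can be had from a generic quantization sketch (an illustration of multi-level storage in general, not a model of the Penn State device; the weight values below are arbitrary):

```python
# Generic illustration: a 16-level memory element stores log2(16) = 4 bits,
# and quantizing analog synaptic weights to 16 levels loses far less
# information than forcing them to be binary. All values are invented.
import numpy as np

levels_binary, levels_multi = 2, 16
print("bits per device:", np.log2(levels_binary), "vs", np.log2(levels_multi))

rng = np.random.default_rng(0)
weights = rng.uniform(-1, 1, size=(4, 4))      # toy synaptic weight matrix

def quantize(w, n_levels):
    """Snap weights in [-1, 1] onto n_levels evenly spaced states."""
    codes = np.round((w + 1) / 2 * (n_levels - 1))
    return codes / (n_levels - 1) * 2 - 1

err_2 = np.abs(weights - quantize(weights, levels_binary)).mean()
err_16 = np.abs(weights - quantize(weights, levels_multi)).mean()
print(f"mean quantization error: 2 levels {err_2:.3f}, 16 levels {err_16:.3f}")
```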
    The team thinks that ramping up this technology to a commercial scale is feasible. With many of the largest semiconductor companies actively pursuing neuromorphic computing, Das believes they will find this work of interest.
    The Army Research Office supported this work. The team has filed for a patent on this invention.

    Story Source:
    Materials provided by Penn State. Original written by Walt Mills. Note: Content may be edited for style and length.

  • Physicists circumvent centuries-old theory to cancel magnetic fields

    A team of scientists including two physicists at the University of Sussex has found a way to circumvent a 178-year-old theorem, which means they can effectively cancel magnetic fields at a distance. They are the first to be able to do so in a way which has practical benefits.
    The work is hoped to have a wide variety of applications. For example, patients with neurological disorders such as Alzheimer’s or Parkinson’s might in future receive a more accurate diagnosis. With the ability to cancel out ‘noisy’ external magnetic fields, doctors using magnetic field scanners will be able to see more accurately what is happening in the brain.
    The study “Tailoring magnetic fields in inaccessible regions” is published in Physical Review Letters. It is an international collaboration between Dr Mark Bason and Jordi Prat-Camps at the University of Sussex, and Rosa Mach-Batlle and Nuria Del-Valle from the Universitat Autonoma de Barcelona and other institutions.
    “Earnshaw’s Theorem,” from 1842, limits the ability to shape magnetic fields. The team were able to calculate an innovative way to circumvent this theorem in order to effectively cancel other magnetic fields that can confuse readings in experiments.
    In practical terms, they achieved this by creating a device made up of a careful arrangement of electrical wires. This device creates additional fields that counteract the effects of the unwanted magnetic field. Scientists have struggled with this challenge for years, but the team has now found a new strategy to deal with these problematic fields. While a similar effect has been achieved at much higher frequencies, this is the first time it has been achieved at low frequencies and static fields — such as biological frequencies — which will unlock a host of useful applications.
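    The underlying superposition idea, that the fields of several wires add linearly so a suitable set of currents can null an unwanted field over a chosen region, can be seen in a toy calculation (a generic active-compensation sketch in two dimensions; the wire layout, target grid and field values are invented for illustration and are not the arrangement used in the paper):

```python
# Toy 2-D illustration: pick currents for a fixed ring of long straight wires
# so their summed field cancels a uniform stray field over a small region.
import numpy as np

MU0 = 4e-7 * np.pi

def wire_field(wire_xy, targets_xy):
    """Field per unit current of an infinite straight wire (current out of plane)."""
    d = targets_xy - wire_xy                      # (n_targets, 2)
    r2 = np.sum(d**2, axis=1, keepdims=True)
    # B = mu0*I/(2*pi) * (-dy, dx) / r^2 for a wire along the z axis
    return MU0 / (2 * np.pi) * np.hstack([-d[:, 1:2], d[:, 0:1]]) / r2

# Target region: a small grid where the net field should vanish.
g = np.linspace(-0.01, 0.01, 5)
targets = np.array([(x, y) for x in g for y in g])

# Eight wires on a 10 cm circle around the target region (invented layout).
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
wires = 0.1 * np.column_stack([np.cos(angles), np.sin(angles)])

# Stray field to cancel: 50 microtesla along +x at every target point.
b_ext = np.tile([50e-6, 0.0], (len(targets), 1)).ravel()

# Each column of A is the flattened field pattern of one wire per ampere;
# least squares picks the currents whose summed field best cancels b_ext.
A = np.column_stack([wire_field(w, targets).ravel() for w in wires])
currents, *_ = np.linalg.lstsq(A, -b_ext, rcond=None)

residual = A @ currents + b_ext
print("worst residual field (T):", np.abs(residual).max())
```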
    Other possible future applications for this work include:
    • Quantum technology and quantum computing, in which ‘noise’ from exterior magnetic fields can affect experimental readings
    • Neuroimaging, in which a technique called ‘transcranial magnetic stimulation’ activates different areas of the brain through magnetic fields. Using the techniques in this paper, doctors might be able to more precisely target the areas of the brain needing stimulation.
    • Biomedicine, to better control and manipulate nanorobots and magnetic nanoparticles that are moved inside a body by means of external magnetic fields. Potential applications for this development include improved drug delivery and magnetic hyperthermia therapies.
    Dr Rosa Mach-Batlle, the lead author on the paper from the Universitat Autonoma de Barcelona, said: “Starting from the fundamental question of whether it was possible or not to create a magnetic source at a distance, we came up with a strategy for controlling magnetism remotely that we believe could have a significant impact in technologies relying on the magnetic field distribution in inaccessible regions, such as inside of a human body.”
    Dr Mark Bason from the School of Mathematical and Physical Sciences at the University of Sussex said: “We’ve discovered a way to circumvent Earnshaw’s theorem which many people didn’t imagine was possible. As a physicist, that’s pretty exciting. But it’s not just a theoretical exercise as our research might lead to some really important applications: more accurate diagnosis for Motor Neurone Disease patients in future, for example, better understanding of dementia in the brain, or speeding the development of quantum technology.”

    Story Source:
    Materials provided by University of Sussex. Note: Content may be edited for style and length.

  • Forecasting elections with a model of infectious diseases

    Forecasting elections is a high-stakes problem. Politicians and voters alike are often desperate to know the outcome of a close race, but providing them with incomplete or inaccurate predictions can be misleading. And election forecasting is already an innately challenging endeavor — the modeling process is rife with uncertainty, incomplete information, and subjective choices, all of which must be deftly handled. Political pundits and researchers have implemented a number of successful approaches for forecasting election outcomes, with varying degrees of transparency and complexity. However, election forecasts can be difficult to interpret and may leave many questions unanswered after close races unfold.
    These challenges led researchers to wonder if applying a disease model to elections could widen the community involved in political forecasting. In a paper publishing today in SIAM Review, Alexandria Volkening (Northwestern University), Daniel F. Linder (Augusta University), Mason A. Porter (University of California, Los Angeles), and Grzegorz A. Rempala (The Ohio State University) borrowed ideas from epidemiology to develop a new method for forecasting elections. The team hoped to expand the community that engages with polling data and raise research questions from a new perspective; the multidisciplinary nature of their infectious disease model was a virtue in this regard. “Our work is entirely open-source,” Porter said. “Hopefully that will encourage others to further build on our ideas and develop their own methods for forecasting elections.”
    In their new paper, the authors propose a data-driven mathematical model of the evolution of political opinions during U.S. elections. They found their model’s parameters using aggregated polling data, which enabled them to track the percentages of Democratic and Republican voters over time and forecast the vote margins in each state. The authors emphasized simplicity and transparency in their approach and consider these traits to be particular strengths of their model. “Complicated models need to account for uncertainty in many parameters at once,” Rempala said.
    This study predominantly focused on the influence that voters in different states may exert on each other, since accurately accounting for interactions between states is crucial for the production of reliable forecasts. The election outcomes in states with similar demographics are often correlated, and states may also influence each other asymmetrically; for example, the voters in Ohio may more strongly influence the voters in Pennsylvania than the reverse. The strength of a state’s influence can depend on a number of factors, including the amount of time that candidates spend campaigning there and the state’s coverage in the news.
    To develop their forecasting approach, the team repurposed ideas from the compartmental modeling of biological diseases. Mathematicians often utilize compartmental models — which categorize individuals into a few distinct types (i.e., compartments) — to examine the spread of infectious diseases like influenza and COVID-19. A widely-studied compartmental model called the susceptible-infected-susceptible (SIS) model divides a population into two groups: those who are susceptible to becoming sick and those who are currently infected. The SIS model then tracks the fractions of susceptible and infected individuals in a community over time, based on the factors of transmission and recovery. When an infected person interacts with a susceptible person, the susceptible individual may become infected. An infected person also has a certain chance of recovering and becoming susceptible again.
    Because there are two major political parties in the U.S., the authors employed a modified version of an SIS model with two types of infections. “We used techniques from mathematical epidemiology because they gave us a means of framing relationships between states in a familiar, multidisciplinary way,” Volkening said. While elections and disease dynamics are certainly different, the researchers treated Democratic and Republican voting inclinations as two possible kinds of “infections” that can spread between states. Undecided, independent, or minor-party voters all fit under the category of susceptible individuals. “Infection” was interpreted as adopting Democratic or Republican opinions, and “recovery” represented the turnover of committed voters to undecided ones.
    In the model, committed voters can transmit their opinions to undecided voters, but the opposite is not true. The researchers took a broad view of transmission, interpreting opinion persuasion as occurring through both direct communication between voters and more indirect methods like campaigning, news coverage, and debates. Individuals can interact and lead to other people changing their opinions both within and between states.
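    A minimal sketch of this two-“infection” bookkeeping is shown below; the state-to-state influence matrix, transmission and recovery rates, and starting fractions are invented illustration values, not the parameters the authors fit to polling data.

```python
# Two-"infection" compartmental sketch: undecided (susceptible) voters can be
# converted by committed Democratic (D) or Republican (R) voters, and committed
# voters can "recover" back to undecided. All numbers are illustrative only.
import numpy as np

# Two states, with state 0 influencing state 1 more strongly than the reverse.
A = np.array([[1.0, 0.2],
              [0.6, 1.0]])
beta_d, beta_r = 0.5, 0.5      # "transmission" rates of D / R leanings
gamma_d, gamma_r = 0.2, 0.2    # "recovery" rates back to undecided

D = np.array([0.30, 0.25])     # initial committed Democratic fraction per state
R = np.array([0.28, 0.30])     # initial committed Republican fraction per state
S = 1.0 - D - R                # undecided / susceptible fraction per state

dt, steps = 0.1, 3000          # forward-Euler integration of the rate equations
for _ in range(steps):
    new_d = beta_d * S * (A @ D)   # undecided voters persuaded toward D
    new_r = beta_r * S * (A @ R)   # undecided voters persuaded toward R
    D = D + dt * (new_d - gamma_d * D)
    R = R + dt * (new_r - gamma_r * R)
    S = 1.0 - D - R

print("illustrative vote margins (D - R) by state:", np.round(D - R, 3))
```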
    To determine the values of their model’s mathematical parameters, the authors used polling data on senatorial, gubernatorial, and presidential races from HuffPost Pollster for 2012 and 2016 and from RealClearPolitics for 2018. They fit the model to the data for each individual race and simulated the evolution of opinions in the year leading up to each election by tracking the fractions of undecided, Democratic, and Republican voters in each state from January until Election Day. The researchers simulated their final forecasts as if they were making them on the eve of Election Day, including all of the polling data but omitting the election results.
    Despite its basis in an unconventional field for election forecasting — namely, epidemiology — the resulting model performed surprisingly well. It forecast the 2012 and 2016 U.S. races for governor, Senate, and presidential office with a success rate similar to those of the popular analyst sites FiveThirtyEight and Sabato’s Crystal Ball. For example, the authors’ success rate for predicting party outcomes at the state level in the 2012 and 2016 presidential elections was 94.1 percent, while FiveThirtyEight had a success rate of 95.1 percent and Sabato’s Crystal Ball had a success rate of 93.1 percent. “We were all initially surprised that a disease-transmission model could produce meaningful forecasts of elections,” Volkening said.
    After establishing their model’s capability to forecast outcomes on the eve of Election Day, the authors sought to determine how early the model could create accurate forecasts. Predictions that are made in the weeks and months before Election Day are particularly meaningful, but producing early forecasts is challenging because fewer polling data are available for model training. By employing polling data from the 2018 senatorial races, the team’s model was able to produce stable forecasts from early August onward with the same success rate as FiveThirtyEight’s final forecasts for those races.
    Despite clear differences between contagion and voting dynamics, this study suggests a valuable approach for describing how political opinions change across states. Volkening is currently applying this model — in collaboration with Northwestern University undergraduate students Samuel Chian, William L. He, and Christopher M. Lee — to forecast the 2020 U.S. presidential, senatorial, and gubernatorial elections. “This project has made me realize that it’s challenging to judge forecasts, especially when some elections are decided by a vote margin of less than one percent,” Volkening said. “The fact that our model does well is exciting, since there are many ways to make it more realistic in the future. We hope that our work encourages folks to think more critically about how they judge forecasts and get involved in election forecasting themselves.”

  • Toward ultrafast computer chips that retain data even when there is no power

    Spintronic devices are attractive alternatives to conventional computer chips, providing digital information storage that is highly energy efficient and also relatively easy to manufacture on a large scale. However, these devices, which rely on magnetic memory, are still hindered by their relatively slow speeds, compared to conventional electronic chips.
    In a paper published in the journal Nature Electronics, an international team of researchers has reported a new technique for magnetization switching — the process used to “write” information into magnetic memory — that is nearly 100 times faster than state-of-the-art spintronic devices. The advance could lead to the development of ultrafast magnetic memory for computer chips that would retain data even when there is no power.
    In the study, the researchers report using extremely short, 6-picosecond electrical pulses to switch the magnetization of a thin film in a magnetic device with great energy efficiency. A picosecond is one-trillionth of a second.
    The research was led by Jon Gorchon, a researcher at the French National Centre for Scientific Research (CNRS) working at the University of Lorraine’s Institut Jean Lamour in France, in collaboration with Jeffrey Bokor, professor of electrical engineering and computer sciences at the University of California, Berkeley, and Richard Wilson, assistant professor of mechanical engineering and of materials science and engineering at UC Riverside. The project began at UC Berkeley when Gorchon and Wilson were postdoctoral researchers in Bokor’s lab.
    In conventional computer chips, the 0s and 1s of binary data are stored as the “on” or “off” states of individual silicon transistors. In magnetic memory, this same information can be stored as the opposite polarities of magnetization, which are usually thought of as the “up” or “down” states. This magnetic memory is the basis for magnetic hard drive memory, the technology used to store the vast amounts of data in the cloud.
    A key feature of magnetic memory is that the data is “non-volatile,” which means that information is retained even when there is no electrical power applied.

    “Integrating magnetic memory directly into computer chips has been a long-sought goal,” said Gorchon. “This would allow local data on-chip to be retained when the power is off, and it would enable the information to be accessed far more quickly than pulling it in from a remote disk drive.”
    The potential of magnetic devices for integration with electronics is being explored in the field of spintronics, in which tiny magnetic devices are controlled by conventional electronic circuits, all on the same chip.
    State-of-the-art spintronics is done with the so-called spin-orbit torque device. In such a device, a small area of a magnetic film (a magnetic bit) is deposited on top of a metallic wire. A current flowing through the wire leads to a flow of electrons with a magnetic moment, which is also called the spin. That, in turn, exerts a magnetic torque — called the spin-orbit torque — on the magnetic bit. The spin-orbit torque can then switch the polarity of the magnetic bit.
    State-of-the-art spin-orbit torque devices developed so far have required current pulses of at least a nanosecond, or a billionth of a second, to switch the magnetic bit, while the transistors in state-of-the-art computer chips switch in only 1 to 2 picoseconds. As a result, the speed of the overall circuit is limited by the slow magnetic switching.
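    A back-of-the-envelope comparison of the timescales quoted above (taking the previous-device figure as exactly 1 nanosecond and the transistor figure as the midpoint of the 1 to 2 picosecond range) illustrates the roughly two-orders-of-magnitude speed-up and the gap that remains to transistor speeds:

```python
# Rough timescale comparison using the figures quoted in the article.
pulse_new = 6e-12      # 6-picosecond write pulse reported in this study
pulse_prev = 1e-9      # earlier spin-orbit torque devices: at least 1 nanosecond
transistor = 1.5e-12   # modern transistors: roughly 1 to 2 picoseconds

print(f"speed-up over earlier devices: ~{pulse_prev / pulse_new:.0f}x")
print(f"remaining gap to transistors:  ~{pulse_new / transistor:.1f}x")
```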
    In this study, the researchers launched the 6-picosecond-wide electrical current pulses along a transmission line into a cobalt-based magnetic bit. The magnetization of the cobalt bit was then demonstrated to be reliably switched by the spin-orbit torque mechanism.

    While heating by electric currents is a debilitating problem in most modern devices, the researchers note that, in this experiment, the ultrafast heating aids the magnetization reversal.
    “The magnet reacts differently to heating on long versus short time scales,” said Wilson. “When heating is this fast, only a small amount can change the magnetic properties to help reverse the magnet’s direction.”
    Indeed, preliminary energy usage estimates are incredibly promising; the energy needed in this “ultrafast” spin-orbit torque device is almost two orders of magnitude smaller than in conventional spintronic devices that operate at much longer time scales.
    “The high energy efficiency of this novel, ultrafast magnetic switching process was a big, and very welcome, surprise,” said Bokor. “Such a high-speed, low-energy spintronic device can potentially tackle the performance limitations of current processor level memory systems, and it could also be used for logic applications.”
    The experimental methods used by the researchers also offer a new way of triggering and probing spintronic phenomena at ultrafast time scales, which could help better understand the underlying physics at play in phenomena like spin-orbit torque.

  • Machine learning helps hunt for COVID-19 therapies

    Michigan State University Foundation Professor Guowei Wei wasn’t preparing machine learning techniques for a global health crisis. Still, when one broke out, he and his team were ready to help.
    The group already has one machine learning model at work in the pandemic, predicting consequences of mutations to SARS-CoV-2. Now, Wei’s team has deployed another to help drug developers zero in on their most promising leads for attacking one of the virus’s most compelling targets. The researchers shared their findings in the peer-reviewed journal Chemical Science.
    Prior to the pandemic, Wei and his team were already developing machine learning computer models — specifically, models that use what’s known as deep learning — to help save drug developers time and money. The researchers “train” their deep learning models with datasets filled with information about proteins that drug developers want to target with therapeutics. The models can then make predictions about unknown quantities of interest to help guide drug design and testing.
    Over the past three years, the Spartans’ models have been among the top performers in a worldwide competition series for computer-aided drug design known as the Drug Design Data Resource, or D3R, Grand Challenge. Then COVID-19 came.
    “We knew this was going to be bad. China shut down an entire city with 10 million people,” said Wei, who is a professor in the Departments of Mathematics as well as Electrical and Computer Engineering. “We had a technique at hand, and we knew this was important.”
    Wei and his team have repurposed their deep learning models to focus on a specific SARS-CoV-2 protein called its main protease. The main protease is a cog in the coronavirus’s protein machinery that’s critical to how the pathogen makes copies of itself. Drugs that disable that cog could thus stop the virus from replicating.

    What makes the main protease an even more attractive target is that it’s distinct from all known human proteases, which isn’t always the case. Drugs that attack the viral protease are thus less likely to disrupt people’s natural biochemistry.
    Another advantage of the SARS-CoV-2 main protease is that it’s nearly identical to that of the coronavirus responsible for the 2003 SARS outbreak. This means that drug developers and Wei’s team weren’t starting completely from scratch. They had information about the structure of the main protease and about chemical compounds called protease inhibitors that interfere with the protein’s function.
    Still, gaps remained in understanding where those protease inhibitors latch onto the viral protein and how tightly. That’s where the Spartans’ deep learning models came in.
    Wei’s team used its models to predict those details for over 100 known protease inhibitors. That data also let the team rank those inhibitors and highlight the most promising ones, which can be very valuable information for labs and companies developing new drugs, Wei said.
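    The ranking step can be pictured with a short generic sketch; the scoring function here is a stand-in for the team’s trained deep-learning models (which are not reproduced), and the candidate compounds are everyday placeholder molecules rather than actual protease inhibitors.

```python
# Generic sketch of ranking candidate inhibitors by a predicted binding score.
# `predict_affinity` stands in for a trained model; lower (more negative)
# scores mean tighter predicted binding. All compounds and scores are fake.
from typing import Callable

def rank_inhibitors(candidates: dict[str, str],
                    predict_affinity: Callable[[str], float]) -> list[tuple[str, float]]:
    """Score each candidate SMILES string and return (name, score) best-first."""
    scored = [(name, predict_affinity(smiles)) for name, smiles in candidates.items()]
    return sorted(scored, key=lambda pair: pair[1])

candidates = {
    "candidate_1": "CC(=O)OC1=CC=CC=C1C(=O)O",   # aspirin, used only as a placeholder
    "candidate_2": "C1=CC=CC=C1",                # benzene, placeholder
    "candidate_3": "CCO",                        # ethanol, placeholder
}
fake_scores = {"CC(=O)OC1=CC=CC=C1C(=O)O": -7.4,
               "C1=CC=CC=C1": -5.2,
               "CCO": -3.1}

print(rank_inhibitors(candidates, lambda smiles: fake_scores[smiles]))
```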
    “In the early days of a drug discovery campaign, you might have 1,000 candidates,” Wei said. Typically, all those candidates would move to preclinical tests in animals, then maybe the most promising 10 or so can safely advance to clinical trials in humans, Wei explained.

    By focusing on drugs that are most attracted to the protease’s most vulnerable spots, drug developers can whittle down that list of 1,000 from the start, saving money and months, if not years, Wei said.
    “This is a way to help drug developers prioritize. They don’t have to waste resources to check every single candidate,” he said.
    But Wei also had a reminder: the team’s models do not replace the need for experimental validation or preclinical and clinical trials. Drug developers still need to prove their products are safe before providing them to patients, which can take many years.
    For that reason, Wei said, antibody treatments that resemble what immune systems produce naturally to fight the coronavirus will most likely be the first therapies approved during the pandemic. These antibodies, however, target the virus’s spike protein rather than its main protease. Developing protease inhibitors would thus provide a welcome addition to the arsenal for fighting a deadly and constantly evolving enemy.
    “If developers want to design a new set of drugs, we’ve shown basically what they need to do,” Wei said.

  • Artificial intelligence-based algorithm for the early diagnosis of Alzheimer's

    Alzheimer’s disease (AD) is a neurodegenerative disorder that affects a significant proportion of the older population worldwide. It causes irreparable damage to the brain and severely impairs the quality of life in patients. Unfortunately, AD cannot be cured, but early detection can allow medication to manage symptoms and slow the progression of the disease.
    Functional magnetic resonance imaging (fMRI) is a noninvasive diagnostic technique for brain disorders. It measures minute changes in blood oxygen levels within the brain over time, giving insight into the local activity of neurons. Despite its advantages, fMRI has not been used widely in clinical diagnosis. The reason is twofold. First, the changes in fMRI signals are so small that they are overly susceptible to noise, which can throw off the results. Second, fMRI data are complex to analyze. This is where deep-learning algorithms come into the picture.
    In a recent study published in the Journal of Medical Imaging, scientists from Texas Tech University employed machine-learning algorithms to classify fMRI data. They developed a type of deep-learning algorithm known as a convolutional neural network (CNN) that can differentiate among the fMRI signals of healthy people, people with mild cognitive impairment, and people with AD.
    CNNs can autonomously extract features from input data that are hidden from human observers. They obtain these features through training, for which a large amount of pre-classified data is needed. CNNs are predominantly used for 2D image classification, which means that four-dimensional fMRI data (three spatial dimensions and one temporal) present a challenge: they are incompatible with most existing CNN designs.
    To overcome this problem, the researchers developed a CNN architecture that can appropriately handle fMRI data with minimal pre-processing steps. The first two layers of the network focus on extracting features from the data solely based on temporal changes, without regard for 3D structural properties. Then, the three subsequent layers extract spatial features at different scales from the previously extracted time features. This yields a set of spatiotemporal characteristics that the final layers use to classify the input fMRI data from either a healthy subject, one with early or late mild cognitive impairment, or one with AD.
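    The paper’s exact network is not reproduced here, but a rough PyTorch sketch of this temporal-then-spatial design (with invented layer counts, channel sizes and a toy input shape) might look like the following:

```python
# Sketch of a CNN that first extracts per-voxel temporal features, then spatial
# features from those, then classifies the scan. Architecture details are
# assumptions for illustration, not the authors' published design.
import torch
import torch.nn as nn

class SpatioTemporalCNN(nn.Module):
    def __init__(self, n_classes=4):   # healthy / early MCI / late MCI / AD
        super().__init__()
        # Stage 1: 1-D convolutions over each voxel's time course.
        self.temporal = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # one 16-dim temporal feature vector per voxel
        )
        # Stage 2: 3-D convolutions pick up spatial structure at growing scales.
        self.spatial = nn.Sequential(
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        # Stage 3: classify the pooled spatiotemporal features.
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, time, depth, height, width)
        b, t, d, h, w = x.shape
        v = x.permute(0, 2, 3, 4, 1).reshape(b * d * h * w, 1, t)
        v = self.temporal(v)                                    # (b*d*h*w, 16, 1)
        v = v.reshape(b, d, h, w, 16).permute(0, 4, 1, 2, 3)    # (b, 16, d, h, w)
        v = self.spatial(v).flatten(1)                          # (b, 64)
        return self.classifier(v)

model = SpatioTemporalCNN()
logits = model(torch.randn(2, 140, 16, 16, 16))   # two toy 4-D fMRI scans
print(logits.shape)                               # torch.Size([2, 4])
```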
    This strategy offers many advantages over previous attempts to combine machine learning with fMRI for AD diagnosis. Harshit Parmar, doctoral student at Texas Tech University and lead author of the study, explains that the most important aspect of their work lies in the qualities of their CNN architecture. The new design is simple yet effective for handling complex fMRI data, which can be fed as input to the CNN without any significant manipulation or modification of the data structure. In turn, this reduces the computational resources needed and allows the algorithm to make predictions faster.
    Can deep learning methods improve the field of AD detection and diagnosis? Parmar thinks so. “Deep learning CNNs could be used to extract functional biomarkers related to AD, which could be helpful in the early detection of AD-related dementia,” he explains.
    The researchers trained and tested their CNN with fMRI data from a public database, and the initial results were promising: the classification accuracy of their algorithm was as high as or higher than that of other methods.
    If these results hold up for larger datasets, their clinical implications could be tremendous. “Alzheimer’s has no cure yet. Although brain damage cannot be reversed, the progression of the disease can be reduced and controlled with medication,” according to the authors. “Our classifier can accurately identify the mild cognitive impairment stages which provide an early warning before progression into AD.”

  • How computer scientists and marketers can create a better CX with AI

    Researchers from Erasmus University, The Ohio State University, York University, and London Business School published a new paper in the Journal of Marketing that examines the tension between AI’s benefits and costs and then offers recommendations to guide managers and scholars investigating these challenges.
    The study, forthcoming in the Journal of Marketing, is titled “Consumers and Artificial Intelligence: An Experiential Perspective” and is authored by Stefano Puntoni, Rebecca Walker Reczek, Markus Giesler, and Simona Botti.
    Not long ago, artificial intelligence (AI) was the stuff of science fiction. Now it is changing how consumers eat, sleep, work, play, and even date. Consumers can interact with AI throughout the day, from Fitbit’s fitness tracker and Alibaba’s Tmall Genie smart speaker to Google Photo’s editing suggestions and Spotify’s music playlists. Given the growing ubiquity of AI in consumers’ lives, marketers operate in organizations with a culture increasingly shaped by computer science. Software developers’ objective of creating technical excellence, however, may not naturally align with marketers’ objective of creating valued consumer experiences. For example, computer scientists often characterize algorithms as neutral tools evaluated on efficiency and accuracy, an approach that may overlook the social and individual complexities of the contexts in which AI is increasingly deployed. Thus, whereas AI can improve consumers’ lives in very concrete and relevant ways, a failure to incorporate behavioral insight into technological developments may undermine consumers’ experiences with AI.
    This article seeks to bridge these two perspectives. On one hand, the researchers acknowledge the benefits that AI can provide to consumers. On the other hand, they build on and integrate sociological and psychological scholarship to examine the costs consumers can experience in their interactions with AI. As Puntoni explains, “A key problem with optimistic celebrations that view AI’s alleged accuracy and efficiency as automatic promoters of democracy and human inclusion is their tendency to efface intersectional complexities.”
    The article begins by presenting a framework that conceptualizes AI as an ecosystem with four capabilities: data capture, classification, delegation, and social. It focuses on the consumer experience of these capabilities, including the tensions felt. Reczek adds, “To articulate a customer-centric view of AI, we move attention away from the technology toward how the AI capabilities are experienced by consumers. Consumer experience relates to the interactions between the consumer and the company during the customer journey and encompasses multiple dimensions: emotional, cognitive, behavioral, sensorial, and social.”
    The researchers then discuss the experience of these tensions at a macro level, by exposing relevant and often explosive narratives in the sociological context, and at the micro level, by illustrating them with real-life examples grounded in relevant psychological literature. Using these insights, the researchers provide marketers with recommendations regarding how to learn about and manage the tensions. Paralleling the joint emphasis on social and individual responses, they outline both the organizational learning in which firms should engage to lead the deployment of consumer AI and concrete steps to design improved consumer AI experiences. The article closes with a research agenda that cuts across the four consumer experiences and ideas for how researchers might contribute new knowledge on this important topic.

    Story Source:
    Materials provided by American Marketing Association. Original written by Matt Weingarden. Note: Content may be edited for style and length.

  • The sweet spot of flagellar assembly

    To build the machinery that enables bacteria to swim, over 50 proteins have to be assembled in a logical, well-defined order to form the flagellum, the cellular equivalent of a boat’s outboard motor. To be functional, the flagellum is assembled piece by piece, ending with the helical propeller known as the flagellar filament, which is composed of six different subunits called flagellins. Microbiologists from the University of Geneva (UNIGE) have demonstrated that adding sugars to the flagellins is crucial for the flagellum’s assembly and functionality. This glycosylation is carried out by a newly discovered enzyme, FlmG, whose role was hitherto unknown. Based on this observation — which you can read all about in the journal eLife — the researchers followed up with a second discovery, published in Developmental Cell: among the six flagellins of Caulobacter crescentus, the model bacterium in the two studies, one is special, serving a signalling role that triggers the final assembly of the flagellum.
    The flagellum is the locomotive engine of bacteria. Thanks to the flagellum, bacteria can swim towards food, whether in Lake Geneva or inside a host during an infection. The flagellum — which, due to its complexity, resembles an outboard motor — is made up of a basic structure, a rotary motor and a helical propeller. It is synthesized inside the bacterium, in its cytosol. “The 50 proteins must be produced sequentially and assembled in the right order,” begins Patrick Viollier, a researcher in UNIGE’s Department of Microbiology and Molecular Medicine. At the same time, the flagellum must be embedded within the bacterial envelope, which contains up to three cell layers, before ending up on the outside. While the flagellar subunits are known, many of the subtleties of flagellar assembly control and targeting mechanisms are still poorly understood.
    Sweet surprise
    The UNIGE microbiologists studied the bacterium Caulobacter crescentus. “These bacteria are very interesting for studying flagella since they produce two different daughter cells: one has a flagellum and the other doesn’t. They’re ideal for understanding what is needed for building a flagellum,” explains Nicolas Kint, co-author of the study. Another peculiarity is that the flagellar filament of this bacterium is an assembly of six flagellin subunits, meaning it isn’t the result of the polymerisation of a single protein, as is the case for many other bacteria. “When analysing these six flagellins, we discovered they were decorated with sugars, indicating that a glycosylation step — an enzymatic reaction that adds sugars to proteins — was taking place and was needed for assembly. It was a surprising discovery, since this reaction is not very common and not well understood in bacteria,” continues Professor Viollier.
    Viollier’s research team succeeded in demonstrating that the glycosylation of the six flagellins that make up the filament is essential for the formation and functionality of the flagellum. “To demonstrate this, we first identified the gene that produces the glycosylation enzyme, FlmG. When it’s absent, the result is bacteria without a flagellum. Secondly, we genetically modified another type of bacterium, Escherichia coli, to express one of the six flagellins, the glycosylation enzyme and the sugar-producing enzymes from Caulobacter crescentus. All these elements are required to obtain a modified flagellin,” adds Nicolas Kint.
    A versatile black sheep
    “The different elements of the flagellum are produced one after the other: the molecules of the base first, then those of the rotor and finally the propeller. The scientific literature indicates that this sequential process is important. However, we don’t know how the order of manufacturing the subunits is controlled.” The researcher and his team focused on the synthesis of the six flagellins, discovering a black sheep among them: a subunit that has only 50% sequence homology with the other five. “This subunit serves as a checkpoint protein, a repressive molecular traffic cop restraining the synthesis of the other flagellin proteins,” says Professor Viollier. It is present before the synthesis of the other five subunits, and it acts as a negative regulator. As long as it is present in the cytosol, the synthesis of the other subunits is prevented. Once the elements of the flagellum are assembled, apart from the filament, the cop is exported to the membrane and thus removed. The synthesis of the last five subunits can then begin. “It is a sensor for protein synthesis and a component of the flagellar filament at the same time: a dual function that is unique of its kind,” says the microbiologist with great enthusiasm.
    This discovery is fundamental for understanding the motility of bacteria and the assembly of proteins. “It also provides clues for understanding the synthesis and assembly of tubulin, an essential part of the cytoskeleton,” concludes Professor Viollier.

    Story Source:
    Materials provided by Université de Genève. Note: Content may be edited for style and length.