More stories

  • New study suggests better approach in search for COVID-19 drugs

    Research from the University of Kent, Goethe-University in Frankfurt am Main, and the Philipps-University in Marburg has provided crucial insights into the biological composition of SARS-CoV-2, the cause of COVID-19, revealing vital clues for the discovery of antiviral drugs.
    Researchers compared SARS-CoV-2 and the closely related virus SARS-CoV, the cause of the 2002/03 SARS outbreak. Despite being about 80% identical at the genetic level, the viruses differ in crucial properties. SARS-CoV-2 is more contagious but less deadly, with a fatality rate of 2% compared to SARS-CoV’s 10%. Moreover, SARS-CoV-2 can be spread by asymptomatic individuals, whereas SARS-CoV was only transmitted by those who were already ill.
    Most functions in cells are carried out by proteins: large molecules made up of amino acids. The amino acid sequence determines the function of a protein. Viruses encode proteins that reprogramme infected cells to produce more viruses. Although the proteins of SARS-CoV-2 and SARS-CoV have largely the same amino acid sequences, the study identifies a small subset of sequence positions that differ between them and are responsible for the observed differences in the behaviour of the two viruses.
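    To make this concrete, the toy sketch below compares two aligned protein sequences position by position; the sequences are invented for illustration, and a real analysis would use the full, properly aligned viral proteins.

    ```python
    # Toy comparison of two aligned amino acid sequences (invented strings,
    # purely illustrative; not actual viral protein data).
    seq_sars_cov  = "MKTAYIAKQRQISFVK"
    seq_sars_cov2 = "MKTSYIAKQRQLSFVK"

    diffs = [(i, a, b)
             for i, (a, b) in enumerate(zip(seq_sars_cov, seq_sars_cov2))
             if a != b]
    print(diffs)  # [(3, 'A', 'S'), (11, 'I', 'L')] -> the differing positions
    ```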
    Crucially, these dissimilarities between SARS-CoV-2 and SARS-CoV also result in different sensitivities to drugs for the treatment of COVID-19. This is vitally important, as many attempts to identify COVID-19 drugs are based on drug response data from other coronaviruses like SARS-CoV. However, the study findings show that the effectiveness of drugs against SARS-CoV or other coronaviruses does not indicate their effectiveness against SARS-CoV-2.
    Martin Michaelis, Professor of Molecular Medicine at Kent’s School of Biosciences, said: “We have now a much better idea how the small differences between SARS-CoV and SARS-CoV-2 can have such a massive impact on the behaviour of these viruses and the diseases that they cause. Our data also show that we must be more careful with the experimental systems that are used for the discovery of drugs for COVID-19. Only research using SARS-CoV-2 produces reliable results.”
    Professor Jindrich Cinatl, Goethe-University, said: “Since the COVID-19 pandemic started, I have been amazed that two so similar viruses can behave so differently. Now we start to understand this. This also includes a better idea of what we have to do to get better at finding drugs for COVID-19.”

    Story Source:
    Materials provided by University of Kent. Original written by Sam Wood. Note: Content may be edited for style and length.

  • Wafer-scale production of graphene-based photonic devices

    Our world needs reliable telecommunications more than ever before. However, classic devices have limitations in terms of size and cost and, especially, power consumption — which is directly related to greenhouse gas emissions. Graphene could change this and transform the future of broadband. Now, Graphene Flagship researchers have devised a wafer-scale fabrication technology that, thanks to predetermined graphene single-crystal templates, allows for integration into silicon wafers, enabling automation and paving the way to large-scale production.
    This work, published in the journal ACS Nano, is a prime example of a collaboration fostered by the Graphene Flagship ecosystem. It involved several Graphene Flagship partner institutions, including CNIT and the Istituto Italiano di Tecnologia (IIT) in Italy, the Cambridge Graphene Centre at the University of Cambridge, UK, and Graphene Flagship Associated Member and spin-off CamGraphIC. Graphene Flagship-linked third party INPHOTEC and researchers at the TeCIP Institute in Italy fabricated the graphene photonic integrated circuits. Through the Wafer-scale Integration Work Package and Spearhead Projects such as Metrograph, the Graphene Flagship fosters collaboration between academia and leading industries to develop prototypes and products with high technology readiness levels, up to the point of market exploitation.
    The new fabrication technique is enabled by the adoption of single-crystal graphene arrays. “Traditionally, when aiming at wafer-scale integration, one grows a wafer-sized layer of graphene and then transfers it onto silicon,” explains Camilla Coletti, coordinator of IIT’s Graphene Labs, who co-led the study. “Transferring an atom-thick layer of graphene over wafers while maintaining its integrity and quality is challenging,” she adds. “The crystal seeding, growth and transfer technique adopted in this work ensures wafer-scale, high-mobility graphene exactly where it is needed: a great advantage for the scalable fabrication of photonic devices like modulators,” continues Coletti.
    It is estimated that, by 2023, the world will have over 28 billion connected devices, most of which will require 5G. These challenging requirements will demand new technologies. “Silicon and germanium alone have limitations; however, graphene provides many advantages,” says Marco Romagnoli from Graphene Flagship partner CNIT, linked third party INPHOTEC and associated member CamGraphIC, who co-led the study. “This methodology allows us to obtain over 12,000 graphene crystals in one wafer, matching the exact configuration and disposition we need for graphene-enabled photonic devices,” he adds. Furthermore, the process is compatible with existing automated fabrication systems, which will accelerate its industrial uptake and implementation.
    In another publication, in Nature Communications, researchers from Graphene Flagship partners CNIT and the Istituto Italiano di Tecnologia (IIT) in Italy, Nokia (including its teams in Italy and Germany), Graphene Flagship-linked third party INPHOTEC, and researchers at TeCIP used this approach to demonstrate a practical implementation: “We used our technique to design high-speed graphene photodetectors,” says Coletti. “Together, these advances will accelerate the commercial implementation of graphene-based photonic devices,” she adds.
    Graphene-enabled photonic devices offer several advantages. They absorb light from the ultraviolet to the far-infrared, which allows for ultra-broadband communications. Graphene devices can have ultra-high mobility of carriers — electrons and holes — enabling data transmission that exceeds the best-performing Ethernet networks, breaking the barrier of 100 gigabits per second.
    Reducing the energy demands of telecom and datacom is fundamental to providing more sustainable solutions. At present, information and communication technologies are already responsible for almost 4% of all greenhouse gas emissions, a footprint comparable to that of the airline industry and projected to grow to around 14% by 2040. “In graphene, almost all the energy of light can be converted into electric signals, which massively reduces power consumption and maximises efficiency,” adds Romagnoli.
    Frank Koppens, Graphene Flagship Leader for Photonics and Optoelectronics, says: “This is the first time that high-quality graphene has been integrated on the wafer-scale. The work shows direct relevance by revealing high-yield and high-speed absorption modulators. These impressive achievements bring commercialisation of graphene devices into 5G communications very close.”
    Andrea C. Ferrari, Science and Technology Officer of the Graphene Flagship and Chair of its Management Panel added: “This work is a major milestone for the Graphene Flagship. A close collaboration between academic and industrial partners has finally developed a wafer-scale process for graphene integration. The Graphene Foundry is no more a distant goal, but it starts today.”

    Story Source:
    Materials provided by Graphene Flagship. Original written by Fernando Gomollón-Bel. Note: Content may be edited for style and length.

  • Smartphone app to change your personality

    Personality traits such as conscientiousness or sociability are patterns of experience and behavior that can change throughout our lives. Individual changes usually take place slowly as people gradually adapt to the demands of society and their environment. However, it is unclear whether certain personality traits can also be psychologically influenced in a short-term and targeted manner.
    Researchers from the universities of Zurich, St. Gallen, Brandeis, and Illinois, together with ETH Zurich, have now investigated this question using a digital intervention. In their study, around 1,500 participants were provided with a specially developed smartphone app for three months, after which the researchers assessed whether and how their personalities had changed. The five major personality traits of openness, conscientiousness, sociability (extraversion), considerateness (agreeableness), and emotional vulnerability (neuroticism) were examined. The app included elements of knowledge transfer, behavioral and resource activation, self-reflection, and feedback on progress. All communication with the digital coach and companion (a chatbot) took place virtually. The chatbot supported the participants on a daily basis to help them make the desired changes.
    Changes after three months
    The majority of participants said that they wanted to reduce their emotional vulnerability, increase their conscientiousness, or increase their extraversion. Those who participated in the intervention for more than three months reported greater success in achieving their change goals than the control group who took part for only two months. Close friends and family members also observed changes in those participants who wanted to increase expression of a certain personality trait. However, for those who wanted to reduce expression of a trait, the people close to them noticed little change. This group mainly comprised those participants who wanted to become less emotionally vulnerable, an inner process that is less observable from the outside.
    “The participants and their friends alike reported that three months after the end of the intervention, the personality changes brought about by using the app had persisted,” says Mathias Allemand, professor of psychology at UZH. “These surprising results show that we are not just slaves to our personality, but that we can deliberately make changes to routine experience and behavior patterns.”
    Important for health promotion and prevention
    The findings also indicate that the personality structure can develop more quickly than was previously believed. “In addition, change processes accompanied by digital tools can be used in everyday life,” explains first author Mirjam Stieger of Brandeis University in the USA, who did her doctorate at UZH. However, more evidence of the effectiveness of digital interventions is needed. For example, it is not yet clear whether the changes achieved are permanent or merely reflect temporary fluctuations.
    The present findings are not only interesting for research, but could also find application in a variety of areas of life. In health promotion and prevention, for example, such apps could boost the resources of individuals, as people’s attitude to their situation and personality traits such as conscientiousness have an influence on health and healthy aging.
    The Smartphone App PEACH (PErsonality coACH)
    The smartphone application PEACH was developed as part of a project funded by the Swiss National Science Foundation (SNSF) to study personality change through a digital intervention. The application provides scalable communication capabilities using a digital agent that mimics a conversation with a human. The PEACH app also includes digital journaling, reminders of individual goals, video clips, opportunities for self-reflection and feedback on progress. Weekly core topics and small interventions aim to address and activate the desired changes and thus the development of personality traits.
    The app was developed as a research tool. In the future, however, it is thought that research apps such as PEACH will be made widely available.

    Story Source:
    Materials provided by University of Zurich. Note: Content may be edited for style and length.

  • Silicon chip provides low cost solution to help machines see the world clearly

    Researchers in Southampton and San Francisco have developed the first compact 3D LiDAR imaging system that can match, and even exceed, the performance and accuracy of the most advanced mechanical systems currently in use.
    3D LiDAR can provide accurate imaging and mapping for many applications; it is the “eyes” of autonomous cars and is used in facial recognition software and by autonomous robots and drones. Accurate imaging is essential for machines to map and interact with the physical world, but the size and cost of the technology currently required have limited LiDAR’s use in commercial applications.
    Now a team of researchers from Pointcloud Inc in San Francisco and the University of Southampton’s Optoelectronics Research Centre (ORC) has developed a new integrated system that uses silicon photonic components and CMOS electronic circuits in the same microchip. Their prototype offers a low-cost solution and could pave the way to large-volume production of compact, high-performance 3D imaging cameras for use in robotics, autonomous navigation systems, the mapping of building sites to increase safety, and healthcare.
    Graham Reed, Professor of Silicon Photonics within the ORC, said: “LiDAR has been promising a lot but has not always delivered on its potential in recent years because, although experts have recognised that integrated versions can scale down costs, the necessary performance has not been there. Until now.
    “The silicon photonics system we have developed provides much higher accuracy at distance compared to other chip-based LiDAR systems to date, and most mechanical versions, showing that the much sought-after integrated system for LiDAR is viable.”
    Remus Nicolaescu, CEO of Pointcloud Inc added, “The combination of high performance and low cost manufacturing, will accelerate existing applications in autonomy and augmented reality, as well as open new directions, such as industrial and consumer digital twin applications requiring high depth accuracy, or preventive healthcare through remote behavioural and vital signs monitoring requiring high velocity accuracy.
    “The collaboration with the world class team at the ORC has been instrumental, and greatly accelerated the technology development.”
    The latest tests of the prototype, published in the journal Nature, show that it has an accuracy of 3.1 millimetres at a distance of 75 metres.
    Among the problems faced by previous integrated systems is the difficulty of providing a dense array of pixels that can be easily addressed; this has restricted them to fewer than 20 pixels, whereas the new system is the first large-scale 2D coherent detector array, consisting of 512 pixels. The research teams are now working to extend the pixel arrays and the beam-steering technology to make the system even better suited to real-world applications and to further improve performance.

    Story Source:
    Materials provided by University of Southampton. Note: Content may be edited for style and length.

  • Computational medicine: Moving from uncertainty to precision

    Individual choices in medicine carry a certain amount of uncertainty.
    An innovative partnership at The University of Texas at Austin aims to bring medicine down to the individual level by applying state-of-the-art computation to medical care.
    “Medicine in its essence is decision-making under uncertainty, decisions about tests and treatments,” said Radek Bukowski, MD, PhD, professor and associate chair of Investigation and Discovery in the Department of Women’s Health at Dell Medical School at UT Austin.
    “The human body and the healthcare system are complex systems made of a vast number of intensely interacting elements,” he said. “In such complex systems, there are many different pathways along which an outcome can occur. Our bodies are robust, but this also makes us very individualized, and the practice of medicine challenging. Everyone is made of different combinations of risk factors and protective characteristics. This is why precision medicine is paramount going forward.”
    To that effect, in the January 2021 edition of the American Journal of Obstetrics & Gynecology, experts at Dell Med, the Oden Institute for Computational Engineering and Sciences (Oden Institute), and the Texas Advanced Computing Center (TACC), along with stakeholders across healthcare, industry, and government, stated that the emergence of computational medicine will revolutionize the future of medicine and health care. Craig Cordola of Ascension and Christopher Zarins of HeartFlow co-authored this editorial review with Bukowski and others.
    According to Bukowski, this interdisciplinary group provides a unique combination of resources that are poised to make Texas a leader in providing computational solutions to today’s and tomorrow’s health care issues.

    “At UT Austin we’re fortunate to have found ourselves at a very opportune point in time for computational medical research,” Bukowski said. “The Oden Institute has world-class expertise in mathematical modeling, applied math, and computational medicine; TACC is home to the world’s largest supercomputer for open science and is also committed to improving medical care, including outcomes for women and babies.”
    Powered by such collaborations, the emerging discipline of computational medicine focuses on developing quantitative approaches to understanding the mechanisms, diagnosis, and treatment of human disease through the application of methods more commonly found in mathematics, engineering, and computational science. These computational approaches are well suited to modeling complex systems such as the human body.
    An On-Point Area of Study for Obstetrics
    While computation is pivotal to all domains of medicine, it is especially promising in obstetrics because it concerns at least two patients, mother and baby, who frequently have conflicting interests, making medical decision-making particularly difficult and the stakes exceptionally high.
    According to state Rep. Donna Howard, D-Austin, a co-author of the editorial review, Texas legislators should be concerned about the unacceptably high rate of maternal morbidity and mortality in the state.

    “When I became aware of the efforts to bring computational medical approaches to addressing maternal morbidity and mortality, I was immediately intrigued,” Howard said. “And when I learned of the interdisciplinary expertise that has found itself conveniently positioned to create this new frontier of medicine, I was sold.”
    Individualized medicine is becoming possible now because of advances in computing power and mathematical modeling that can solve problems which were previously intractable.
    Case in point: in 2018 the National Science Foundation awarded UT Austin a $1.2 million grant to support research using computational medicine and smartphones to monitor the activity and behavior of 1,000 pregnant women in the Austin area.
    In particular, the growing array of data sources, including health records, administrative databases, randomized controlled trials, and internet-connected sensors, provides a wealth of information at multiple timescales with which to develop sophisticated data-driven models and inform theoretical formulations.
    “When combined with analysis platforms via high-performance computing, we now have the capability to provide patients and medical providers with analysis of outcomes and risk assessment on a per-individual basis to improve the shared decision-making process,” Bukowski concluded.

  • New wearable device turns the body into a battery

    Researchers at the University of Colorado Boulder have developed a new, low-cost wearable device that transforms the human body into a biological battery.
    The device, described today in the journal Science Advances, is stretchy enough that you can wear it like a ring, a bracelet or any other accessory that touches your skin. It also taps into a person’s natural heat — employing thermoelectric generators to convert the body’s internal temperature into electricity.
    “In the future, we want to be able to power your wearable electronics without having to include a battery,” said Jianliang Xiao, senior author of the new paper and an associate professor in the Paul M. Rady Department of Mechanical Engineering at CU Boulder.
    The concept may sound like something out of The Matrix film series, in which a race of robots has enslaved humans to harvest their precious organic energy. Xiao and his colleagues aren’t that ambitious: their devices can generate about 1 volt for every square centimeter of skin space — less voltage per unit area than most existing batteries provide but still enough to power electronics like watches or fitness trackers.
    Scientists have previously experimented with similar thermoelectric wearable devices, but Xiao’s is stretchy, can heal itself when damaged and is fully recyclable — making it a cleaner alternative to traditional electronics.
    “Whenever you use a battery, you’re depleting that battery and will, eventually, need to replace it,” Xiao said. “The nice thing about our thermoelectric device is that you can wear it, and it provides you with constant power.”
    High-tech bling

    The project isn’t Xiao’s first attempt to meld human with robot. He and his colleagues previously experimented with designing “electronic skin,” wearable devices that look, and behave, much like real human skin. That android epidermis, however, has to be connected to an external power source to work.
    Until now. The group’s latest innovation begins with a base made out of a stretchy material called polyimine. The scientists then stick a series of thin thermoelectric chips into that base, connecting them all with liquid metal wires. The final product looks like a cross between a plastic bracelet and a miniature computer motherboard or maybe a techy diamond ring.
    “Our design makes the whole system stretchable without introducing much strain to the thermoelectric material, which can be really brittle,” Xiao said.
    Just pretend that you’re out for a jog. As you exercise, your body heats up, and that heat will radiate out to the cool air around you. Xiao’s device captures that flow of energy rather than letting it go to waste.
    “The thermoelectric generators are in close contact with the human body, and they can use the heat that would normally be dissipated into the environment,” he said.

    Lego blocks
    He added that you can easily boost that power by adding in more blocks of generators. In that sense, he compares his design to a popular children’s toy.
    “What I can do is combine these smaller units to get a bigger unit,” he said. “It’s like putting together a bunch of small Lego pieces to make a large structure. It gives you a lot of options for customization.”
    Xiao and his colleagues calculated, for example, that a person taking a brisk walk could use a device the size of a typical sports wristband to generate about 5 volts of electricity — which is more than what many watch batteries can muster.
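    As a rough sanity check on those numbers, here is a back-of-the-envelope sketch; the wristband area is our assumption, not a figure from the study.

    ```python
    # Back-of-the-envelope check of the quoted figures.
    volts_per_cm2 = 1.0   # ~1 V per square centimeter of skin, per the article
    band_area_cm2 = 5.0   # assumed active area of a typical sports wristband
    print(f"{volts_per_cm2 * band_area_cm2:.0f} V")  # ~5 V, in line with the quote
    ```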
    Like Xiao’s electronic skin, the new devices are as resilient as biological tissue. If your device tears, for example, you can pinch together the broken ends, and they’ll seal back up in just a few minutes. And when you’re done with the device, you can dunk it into a special solution that will separate out the electronic components and dissolve the polyimine base — each and every one of those ingredients can then be reused.
    “We’re trying to make our devices as cheap and reliable as possible, while also having as close to zero impact on the environment as possible,” Xiao said.
    While there are still kinks to work out in the design, he thinks that his group’s devices could appear on the market in five to 10 years. Just don’t tell the robots. We don’t want them getting any ideas.
    Coauthors on the new paper include researchers from China’s Harbin Institute of Technology, Southeast University, Zhejiang University, Tongji University and Huazhong University of Science and Technology.
    Video: https://www.youtube.com/watch?v=hexScHvEFwQ&feature=emb_logo

  • A language learning system that pays attention — more efficiently than ever before

    Human language can be inefficient. Some words are vital. Others, expendable.
    Reread the first sentence of this story. Just two words, “language” and “inefficient,” convey almost the entire meaning of the sentence. The importance of key words underlies a popular new tool for natural language processing (NLP) by computers: the attention mechanism. When coded into a broader NLP algorithm, the attention mechanism homes in on key words rather than treating every word with equal importance. That yields better results in NLP tasks like detecting positive or negative sentiment or predicting which words should come next in a sentence.
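    For readers who want to see the computation itself, here is a minimal NumPy sketch of scaled dot-product attention, the standard form of the mechanism; the inputs are toy values, and this is the generic textbook computation rather than any particular system’s implementation.

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        """Scaled dot-product attention: each word (query) scores every word
        (key), and the scores become weights over the values. Large weights
        mark the words the model treats as important."""
        scores = Q @ K.T / np.sqrt(Q.shape[-1])  # (n_words, n_words)
        weights = softmax(scores)                # each row sums to 1
        return weights @ V, weights

    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 toy words, 8-dim
    out, w = attention(Q, K, V)
    print(w.round(2))  # attention weights: one row of importances per word
    ```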
    The attention mechanism’s accuracy often comes at the expense of speed and computing power, however. It runs slowly on general-purpose processors like you might find in consumer-grade computers. So, MIT researchers have designed a combined software-hardware system, dubbed SpAtten, specialized to run the attention mechanism. SpAtten enables more streamlined NLP with less computing power.
    “Our system is similar to how the human brain processes language,” says Hanrui Wang. “We read very fast and just focus on key words. That’s the idea with SpAtten.”
    The research will be presented this month at the IEEE International Symposium on High-Performance Computer Architecture. Wang is the paper’s lead author and a PhD student in the Department of Electrical Engineering and Computer Science. Co-authors include Zhekai Zhang and their advisor, Assistant Professor Song Han.
    Since its introduction in 2015, the attention mechanism has been a boon for NLP. It’s built into state-of-the-art NLP models like Google’s BERT and OpenAI’s GPT-3. The attention mechanism’s key innovation is selectivity — it can infer which words or phrases in a sentence are most important, based on comparisons with word patterns the algorithm has previously encountered in a training phase. Despite the attention mechanism’s rapid adoption into NLP models, it’s not without cost.

    NLP models require a hefty load of computer power, thanks in part to the high memory demands of the attention mechanism. “This part is actually the bottleneck for NLP models,” says Wang. One challenge he points to is the lack of specialized hardware to run NLP models with the attention mechanism. General-purpose processors, like CPUs and GPUs, have trouble with the attention mechanism’s complicated sequence of data movement and arithmetic. And the problem will get worse as NLP models grow more complex, especially for long sentences. “We need algorithmic optimizations and dedicated hardware to process the ever-increasing computational demand,” says Wang.
    The researchers developed a system called SpAtten to run the attention mechanism more efficiently. Their design encompasses both specialized software and hardware. One key software advance is SpAtten’s use of “cascade pruning,” or eliminating unnecessary data from the calculations. Once the attention mechanism helps pick a sentence’s key words (called tokens), SpAtten prunes away unimportant tokens and eliminates the corresponding computations and data movements. The attention mechanism also includes multiple computation branches (called heads). As with tokens, the unimportant heads are identified and pruned away. Once pruned, the extraneous tokens and heads don’t factor into the algorithm’s downstream calculations, reducing both computational load and memory access.
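    The sketch below illustrates the token-pruning idea in miniature: score each token by the total attention it receives and keep only the top fraction. The scoring rule and keep-ratio here are illustrative assumptions, not the paper’s exact criteria.

    ```python
    import numpy as np

    def prune_tokens(tokens, attn_weights, keep_ratio=0.5):
        """Toy cascade pruning: rank tokens by attention received and keep
        only the top fraction, so downstream layers skip the rest.
        (Illustrative scoring, not SpAtten's exact criterion.)"""
        importance = attn_weights.sum(axis=0)             # attention each token receives
        n_keep = max(1, int(len(tokens) * keep_ratio))
        keep = np.sort(np.argsort(importance)[-n_keep:])  # top tokens, original order
        return [tokens[i] for i in keep]

    tokens = ["the", "movie", "was", "absolutely", "wonderful"]
    rng = np.random.default_rng(1)
    w = rng.dirichlet(np.ones(5), size=5)  # stand-in attention matrix, rows sum to 1
    print(prune_tokens(tokens, w, keep_ratio=0.4))  # the 2 highest-scoring tokens
    ```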
    To further trim memory use, the researchers also developed a technique called “progressive quantization.” The method allows the algorithm to wield data in smaller bitwidth chunks and to fetch as little of it from memory as possible. Lower data precision, corresponding to smaller bitwidth, is used for simple sentences, and higher precision is used for complicated ones. Intuitively, it’s like fetching the phrase “cmptr progm” as the low-precision version of “computer program.”
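    In code, the idea might look like the following sketch; the uniform quantizer and the “nearly tied top scores” trigger for fetching more bits are our assumptions about one plausible realization, not the paper’s exact scheme.

    ```python
    import numpy as np

    def quantize(x, bits, x_max=1.0):
        """Uniformly quantize values in [-x_max, x_max] to a given bitwidth."""
        step = 2 * x_max / (2 ** bits - 1)
        return np.round(np.clip(x, -x_max, x_max) / step) * step

    def progressive_scores(scores, low_bits=4, high_bits=8, margin=0.05):
        """Try low precision first; fetch higher-precision data only when the
        low-precision result looks ambiguous (top two scores nearly tied)."""
        approx = quantize(scores, low_bits)
        top2 = np.sort(approx)[-2:]
        if top2[1] - top2[0] < margin:  # ambiguous: pay for more bits
            return quantize(scores, high_bits)
        return approx                   # confident: the cheap answer suffices

    scores = np.array([0.81, 0.12, 0.05, 0.79])  # toy attention scores
    print(progressive_scores(scores))  # falls back to 8 bits: top two are close
    ```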
    Alongside these software advances, the researchers also developed a hardware architecture specialized to run SpAtten and the attention mechanism while minimizing memory access. Their architecture design employs a high degree of “parallelism,” meaning multiple operations are processed simultaneously on multiple processing elements, which is useful because the attention mechanism analyzes every word of a sentence at once. The design enables SpAtten to rank the importance of tokens and heads (for potential pruning) in a small number of computer clock cycles. Overall, the software and hardware components of SpAtten combine to eliminate unnecessary or inefficient data manipulation, focusing only on the tasks needed to complete the user’s goal.
    The philosophy behind the system is captured in its name. SpAtten is a portmanteau of “sparse attention,” and the researchers note in the paper that SpAtten is “homophonic with ‘spartan,’ meaning simple and frugal.” “That’s just like our technique here: making the sentence more concise,” says Wang. That concision was borne out in testing.

    The researchers coded a simulation of SpAtten’s hardware design — they haven’t fabricated a physical chip yet — and tested it against competing general-purpose processors. SpAtten ran more than 100 times faster than the next best competitor (a TITAN Xp GPU). Further, SpAtten was more than 1,000 times more energy efficient than its competitors, indicating that it could help trim NLP’s substantial electricity demands.
    The researchers also integrated SpAtten into their previous work to help validate their philosophy that hardware and software are best designed in tandem. They built a specialized NLP model architecture for SpAtten using their Hardware-Aware Transformer (HAT) framework, and achieved a roughly twofold speedup over a more general model.
    The researchers think SpAtten could be useful to companies that employ NLP models for the majority of their artificial intelligence workloads. “Our vision for the future is that new algorithms and hardware that remove the redundancy in languages will reduce cost and save on the power budget for data center NLP workloads,” says Wang.
    On the opposite end of the spectrum, SpAtten could bring NLP to smaller, personal devices. “We can improve the battery life for mobile phones or IoT devices,” says Wang, referring to internet-connected “things” — televisions, smart speakers, and the like. “That’s especially important because in the future, numerous IoT devices will interact with humans by voice and natural language, so NLP will be the first application we want to employ.”
    Han says SpAtten’s focus on efficiency and redundancy removal is the way forward in NLP research. “Human brains are sparsely activated [by key words]. NLP models that are sparsely activated will be promising in the future,” he says. “Not all words are equal — pay attention only to the important ones.”

  • New mathematical method for generating random connected networks

    Many natural and human-made networks, such as computer, biological, or social networks, have a connectivity structure that critically shapes their behavior. The academic field of network science is concerned with analyzing such real-world complex networks and understanding how their structure influences their function or behavior. Examples are the vascular network of our bodies, the network of neurons in our brain, or the network through which an epidemic spreads in a society.
    The need for reliable null models
    The analysis of such networks often focuses on finding interesting properties and features. For example, does the structure of a particular contact network help diseases spread especially quickly? In order to find out, we need a baseline: a set of random networks, a so-called “null model,” to compare against. Furthermore, since more connections obviously create more opportunities for infection, the number of connections of each node in the baseline should be matched to the network we analyze. Then, if our network appears to facilitate spreading more than the baseline, we know it must be due to its specific network structure. However, creating truly random, unbiased null models that are matched in some property is difficult, and usually requires a different approach for each property of interest. Existing algorithms that create connected networks with a specific number of connections for each node all suffer from uncontrolled bias, which means that some networks are generated more often than others, potentially compromising the conclusions of the study. One such commonly used approach is sketched below.
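    For orientation, here is a naive rejection-sampling baseline of the kind the study cautions against, written with the networkx library: sample a configuration-model graph with the prescribed degrees, simplify it, and reject samples that are disconnected or whose degrees were altered by the simplification. This is a sketch of the problem setup, not the authors’ bias-controlled algorithm.

    ```python
    # Naive degree-matched "connected null model" via rejection sampling.
    import random
    import networkx as nx

    def naive_connected_null_model(degree_sequence, max_tries=10_000, seed=None):
        rng = random.Random(seed)
        for _ in range(max_tries):
            g = nx.Graph(nx.configuration_model(degree_sequence, seed=rng))
            g.remove_edges_from(list(nx.selfloop_edges(g)))
            # Collapsing multi-edges and self-loops can change degrees and is
            # one place where uncontrolled sampling bias creeps in.
            degrees_ok = sorted(d for _, d in g.degree()) == sorted(degree_sequence)
            if degrees_ok and nx.is_connected(g):
                return g
        raise RuntimeError("no connected sample found within max_tries")

    g = naive_connected_null_model([3, 3, 2, 2, 2, 2], seed=42)
    print(nx.is_connected(g), sorted(d for _, d in g.degree()))
    ```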
    A new method that eliminates bias
    Szabolcs Horvát and Carl Modes at the Center for Systems Biology Dresden (CSBD) and the Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG) developed such a null model, one in which the bias is controlled and can be factored out. Szabolcs Horvát describes: “We developed a null model for connected networks where the bias is under control and can be factored out. Specifically, we created an algorithm which can generate random connected networks with a prescribed number of connections for each node. With our method, we demonstrated that more naïve but commonly used approaches may lead to invalid conclusions.” The coordinating author of the study, Carl Modes, concludes: “This finding illustrates the need for mathematically well-founded methods. We hope that our work will be useful to the broader network science community. In order to make it as easy as possible for other researchers to use, we have also developed software and made it publicly available.”

    Story Source:
    Materials provided by Max-Planck-Gesellschaft. Note: Content may be edited for style and length.