More stories

  • Light-carrying chips advance machine learning

    In the digital age, data traffic is growing at an exponential rate. The demands on computing power for applications in artificial intelligence, in particular for pattern and speech recognition or for self-driving vehicles, often exceed the capacities of conventional computer processors. Working together with an international team, researchers at the University of Münster are developing new approaches and process architectures which can cope with these tasks extremely efficiently. They have now shown that so-called photonic processors, in which data is processed by means of light, can process information much more rapidly and in parallel — something electronic chips are incapable of doing.
    Background and methodology
    Light-based processors for speeding up tasks in the field of machine learning enable complex mathematical tasks to be processed at enormously fast speeds (10¹²-10¹⁵ operations per second). Conventional chips such as graphics cards or specialized hardware like Google’s TPU (Tensor Processing Unit) are based on electronic data transfer and are much slower. The team of researchers led by Prof. Wolfram Pernice from the Institute of Physics and the Center for Soft Nanoscience at the University of Münster implemented a hardware accelerator for so-called matrix multiplications, which represent the main processing load in the computation of neural networks. Neural networks are a series of algorithms, modelled on the human brain, that are helpful, for example, for classifying objects in images and for speech recognition.
    The researchers combined the photonic structures with phase-change materials (PCMs) as energy-efficient storage elements. PCMs are usually used in optical data storage, for example in DVDs or Blu-ray discs. In the new processor this makes it possible to store and preserve the matrix elements without the need for an energy supply. To carry out matrix multiplications on multiple data sets in parallel, the Münster physicists used a chip-based frequency comb as a light source. A frequency comb provides a variety of optical wavelengths which are processed independently of one another in the same photonic chip. As a result, this enables highly parallel data processing by calculating on all wavelengths simultaneously — also known as wavelength multiplexing. “Our study is the first one to apply frequency combs in the field of artificial neural networks,” says Wolfram Pernice.
    In the experiment the physicists used a so-called convolutional neural network for the recognition of handwritten numbers. These networks are a concept in the field of machine learning inspired by biological processes. They are used primarily in the processing of image or audio data, as they currently achieve the highest classification accuracies. “The convolutional operation between input data and one or more filters — which can be a highlighting of edges in a photo, for example — can be transferred very well to our matrix architecture,” explains Johannes Feldmann, the lead author of the study. “Exploiting light for signal transfer enables the processor to perform parallel data processing through wavelength multiplexing, which leads to a higher computing density and many matrix multiplications being carried out in just one timestep. In contrast to traditional electronics, which usually work in the low GHz range, optical modulation speeds of 50 to 100 GHz can be achieved.” This means that the process permits data rates and computing densities, i.e. operations per area of processor, never previously attained.
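    To see why matrix multiplication carries the main load, note that each output pixel of a convolution is a weighted sum over an input patch, so the whole operation can be unrolled into a single matrix product. The short Python sketch below illustrates this equivalence; the image size, filter values and function names are purely illustrative and are not taken from the study. On the photonic chip, many such products would run simultaneously on different wavelengths rather than sequentially as here.
    ```python
    import numpy as np

    # Toy 6x6 "image" and a 3x3 edge-highlighting filter (values are illustrative only).
    image = np.arange(36, dtype=float).reshape(6, 6)
    kernel = np.array([[-1., -1., -1.],
                       [-1.,  8., -1.],
                       [-1., -1., -1.]])

    def conv2d_direct(img, k):
        """Valid-mode convolution (as used in CNNs) computed by sliding the filter."""
        kh, kw = k.shape
        h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
        out = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
        return out

    def conv2d_as_matmul(img, k):
        """The same convolution expressed as one matrix product (the 'im2col' trick)."""
        kh, kw = k.shape
        h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
        patches = np.array([img[i:i + kh, j:j + kw].ravel()
                            for i in range(h) for j in range(w)])  # shape (h*w, kh*kw)
        return (patches @ k.ravel()).reshape(h, w)                  # one matrix-vector product

    assert np.allclose(conv2d_direct(image, kernel), conv2d_as_matmul(image, kernel))
    ```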
    The results have a wide range of applications. In the field of artificial intelligence, for example, more data can be processed simultaneously while saving energy. The use of larger neural networks allows more accurate, and hitherto unattainable, forecasts and more precise data analysis. Photonic processors also support the evaluation of large quantities of data in medical diagnoses, for instance the high-resolution 3D data produced by special imaging methods. Further applications are in the fields of self-driving vehicles, which depend on the fast evaluation of sensor data, and of IT infrastructures such as cloud computing which provide storage space, computing power or applications software.

    Story Source:
    Materials provided by University of Münster. Note: Content may be edited for style and length.

  • A bit too much: Reducing the bit width of Ising models for quantum annealing

    Given a list of cities and the distances between each pair of cities, how do you determine the shortest route that visits each city exactly once and returns to the starting location? This famous problem is called the “traveling salesman problem” and is an example of a combinatorial optimization problem. Solving these problems using conventional computers can be very time-consuming, and special devices called “quantum annealers” have been created for this purpose.
    Quantum annealers are designed to find the lowest energy state (or “ground state”) of what’s known as an “Ising model.” Such models are abstract representations of a quantum mechanical system involving interacting spins that are also influenced by external magnetic fields. In the late 90s, scientists found that combinatorial optimization problems could be formulated as Ising models, which in turn could be physically implemented in quantum annealers. To obtain the solution to a combinatorial optimization problem, one simply has to observe the ground state reached in its associated quantum annealer after a short time.
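    As a purely illustrative example of such a formulation (not taken from the paper), the Python sketch below maps a tiny number-partitioning problem onto an Ising model and finds the ground state by brute force; a quantum annealer would instead settle into that lowest-energy state physically.
    ```python
    import itertools

    # Illustrative instance: split these numbers into two sets of equal sum.
    numbers = [4, 5, 6, 7, 8]
    n = len(numbers)

    # Ising formulation of number partitioning: spin s_i = +1 or -1 assigns number i to a set.
    # Because (sum_i n_i*s_i)^2 = const + sum_{i<j} 2*n_i*n_j*s_i*s_j, the couplings are
    # J_ij = 2*n_i*n_j and no external magnetic fields are needed.
    J = {(i, j): 2 * numbers[i] * numbers[j] for i in range(n) for j in range(i + 1, n)}

    def energy(spins):
        return sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

    # A quantum annealer would relax into the lowest-energy spin configuration; for five
    # spins we can simply enumerate all 2^5 = 32 configurations instead.
    ground = min(itertools.product([-1, 1], repeat=n), key=energy)
    set_a = [v for v, s in zip(numbers, ground) if s == +1]
    set_b = [v for v, s in zip(numbers, ground) if s == -1]
    print(set_a, sum(set_a), "|", set_b, sum(set_b))   # two subsets with sum 15 each
    ```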
    One of the biggest challenges in this process is the transformation of the “logical” Ising model into a physically implementable Ising model suitable for quantum annealing. Sometimes, the numerical values of the spin interactions or the external magnetic fields require a bit width (the number of bits needed to represent them) that is too large for a physical system. This severely limits the versatility and applicability of quantum annealers to real-world problems. Fortunately, in a recent study published in IEEE Transactions on Computers, scientists from Japan have tackled this issue. Based purely on mathematical theory, they developed a method by which a given logical Ising model can be transformed into an equivalent model with a desired bit width so as to make it “fit” a desired physical implementation.
    Their approach consists of adding auxiliary spins to the Ising model for problematic interactions or magnetic fields in such a way that the ground state (solution) of the transformed model is the same as that of the original model while also requiring a lower bit width. The technique is relatively simple and guaranteed to produce an equivalent Ising model with the same solution as the original. “Our strategy is the world’s first to efficiently and theoretically address the bit-width reduction problem in the spin interactions and magnetic field coefficients in Ising models,” remarks Professor Nozomu Togawa from Waseda University, Japan, who led the study.
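    The following Python sketch is only a schematic illustration of the auxiliary-spin idea, not the transformation published in the paper: a spin carrying a large field coefficient is duplicated, the coefficient is split across the two copies, and a coupling forces the copies to agree, so the ground state restricted to the original spins is preserved while the largest coefficient shrinks.
    ```python
    import itertools

    def ground_states(h, J, n):
        """All minimum-energy configurations of E = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j."""
        def E(s):
            return (sum(h.get(i, 0) * s[i] for i in range(n))
                    + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))
        configs = list(itertools.product([-1, 1], repeat=n))
        emin = min(E(s) for s in configs)
        return [s for s in configs if E(s) == emin]

    # Original model: spin 0 carries a field of +8 (a wide coefficient), plus a small coupling.
    h_orig = {0: 8, 1: -1}
    J_orig = {(0, 1): 2}
    print(ground_states(h_orig, J_orig, 2))

    # Transformed model: spin 2 is an auxiliary copy of spin 0. The field 8 is split as 4 + 4,
    # and a ferromagnetic coupling of -6 forces spins 0 and 2 to align in the ground state.
    h_red = {0: 4, 1: -1, 2: 4}
    J_red = {(0, 1): 2, (0, 2): -6}
    print(ground_states(h_red, J_red, 3))
    # Restricted to spins 0 and 1, the ground states of both models agree, while the
    # largest coefficient magnitude dropped from 8 to 6 (fewer bits needed).
    ```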
    The scientists also put their method to the test in several experiments, which further confirmed its validity. Prof. Togawa has high hopes, and he concludes by saying, “The approach developed in this study will widen the applicability of quantum annealers and make them much more attractive for people dealing with not only physical Ising models but all kinds of combinatorial optimization problems. Such problems are common in cryptography, logistics, and artificial intelligence, among many other fields.”

    Story Source:
    Materials provided by Waseda University. Note: Content may be edited for style and length.

  • 'Virtual biopsies' could replace tissue biopsies in future thanks to new technique

    A new advanced computing technique that uses routine medical scans to enable doctors to take fewer, more accurate tumour biopsies has been developed by cancer researchers at the University of Cambridge.
    This is an important step towards precision tissue sampling for cancer patients to help select the best treatment. In future the technique could even replace clinical biopsies with ‘virtual biopsies’, sparing patients invasive procedures.
    The research published in European Radiology shows that combining computed tomography (CT) scans with ultrasound images creates a visual guide for doctors to ensure they sample the full complexity of a tumour with fewer targeted biopsies.
    Capturing the patchwork of different types of cancer cell within a tumour — known as tumour heterogeneity — is critical for selecting the best treatment because genetically-different cells may respond differently to treatment.
    Most cancer patients undergo one or several biopsies to confirm diagnosis and plan their treatment. But because this is an invasive clinical procedure, there is an urgent need to reduce the number of biopsies taken and to make sure biopsies accurately sample the genetically-different cells in the tumour, particularly for ovarian cancer patients.
    High-grade serous ovarian (HGSO) cancer, the most common type of ovarian cancer, is referred to as a ‘silent killer’ because early symptoms can be difficult to pick up. By the time the cancer is diagnosed, it is often at an advanced stage, and survival rates have not changed much over the last 20 years.

    But late diagnosis isn’t the only problem. HGSO tumours tend to have a high level of tumour heterogeneity and patients with more genetically-different patches of cancer cells tend to have a poorer response to treatment.
    Professor Evis Sala from the Department of Radiology, co-lead of the CRUK Cambridge Centre Advanced Cancer Imaging Programme, leads a multi-disciplinary team of radiologists, physicists, oncologists and computational scientists who use innovative computing techniques to reveal tumour heterogeneity from standard medical images. This new study, led by Professor Sala, involved a small group of patients with advanced ovarian cancer who were due to have ultrasound-guided biopsies prior to starting chemotherapy.
    For the study, the patients first had a standard-of-care CT scan. A CT scanner uses x-rays and computing to create a 3D image of the tumour from multiple image ‘slices’ through the body.
    The researchers then used a process called radiomics — using high-powered computing methods to analyse and extract additional information from the data-rich images created by the CT scanner — to identify and map distinct areas and features of the tumour. The tumour map was then superimposed on the ultrasound image of the tumour and the combined image used to guide the biopsy procedure.
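    As a rough illustration of what such a radiomics step involves (the study used dedicated clinical pipelines; the array, threshold and feature set below are invented for the example), a few first-order features can be computed per tumour sub-region directly from voxel intensities:
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for a CT sub-volume of a tumour: two intermixed tissue "habitats"
    # with different intensity statistics (arbitrary values, not real Hounsfield units).
    volume = np.where(rng.random((20, 20, 20)) < 0.5,
                      rng.normal(40, 5, (20, 20, 20)),
                      rng.normal(70, 8, (20, 20, 20)))

    def first_order_features(voxels, bins=32):
        """A few classic first-order radiomic features of an intensity sample."""
        hist, _ = np.histogram(voxels, bins=bins)
        p = hist / hist.sum()
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return {"mean": voxels.mean(), "std": voxels.std(),
                "p10": np.percentile(voxels, 10), "p90": np.percentile(voxels, 90),
                "entropy": entropy}

    # Crude "habitat" map: threshold the intensities, then summarize each region.
    habitat = volume > np.median(volume)
    for label, mask in [("low-intensity habitat", ~habitat), ("high-intensity habitat", habitat)]:
        print(label, first_order_features(volume[mask]))
    ```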
    By taking targeted biopsies using this method, the research team reported that the diversity of cancer cells within the tumour was successfully captured.

    Co-first author Dr Lucian Beer, from the Department of Radiology and CRUK Cambridge Centre Ovarian Cancer Programme, said of the results: “Our study is a step forward to non-invasively unravel tumour heterogeneity by using standard-of-care CT-based radiomic tumour habitats for ultrasound-guided targeted biopsies.”
    Co-first author Paula Martin-Gonzalez, from the Cancer Research UK Cambridge Institute and CRUK Cambridge Centre Ovarian Cancer Programme, added: “We will now be applying this method in a larger clinical study.”
    Professor Sala said: “This study provides an important milestone towards precision tissue sampling. We are truly pushing the boundaries in translating cutting edge research to routine clinical care.”
    Fiona Barve (56) is a science teacher who lives near Cambridge. She was diagnosed with ovarian cancer in 2017 after visiting her doctor with abdominal pain. She was diagnosed with stage 4 ovarian cancer and immediately underwent surgery and a course of chemotherapy. Since March 2019 she has been cancer free and is now back to teaching three days a week.
    “I was diagnosed at a late stage and I was fortunate my surgery, which I received within four weeks of being diagnosed, and chemotherapy worked for me. I feel lucky to be around,” said Barve.
    “When you are first undergoing the diagnosis of cancer, you feel as if you are on a conveyor belt, every part of the journey being extremely stressful. This new enhanced technique will reduce the need for several procedures and allow patients more time to adjust to their circumstances. It will enable more accurate diagnosis with less invasion of the body and mind. This can only be seen as positive progress.”

  • COVID-19 unmasked: Math model suggests optimal treatment strategies

    Getting control of COVID-19 will take more than widespread vaccination; it will also require better understanding of why the disease causes no apparent symptoms in some people but leads to rapid multi-organ failure and death in others, as well as better insight into what treatments work best and for which patients.
    To meet this unprecedented challenge, researchers at Massachusetts General Hospital (MGH), in collaboration with investigators from Brigham and Women’s Hospital and the University of Cyprus, have created a mathematical model based on biology that incorporates information about the known infectious machinery of SARS-CoV-2, the virus that causes COVID-19, and about the potential mechanisms of action of various treatments that have been tested in patients with COVID-19.
    The model and its important clinical applications are described in the journal Proceedings of the National Academy of Sciences (PNAS).
    “Our model predicts that antiviral and anti-inflammatory drugs that were first employed to treat COVID-19 might have limited efficacy, depending on the stage of the disease progression,” says corresponding author Rakesh K. Jain, PhD, from the Edwin L. Steele Laboratories in the Department of Radiation Oncology at MGH and Harvard Medical School (HMS).
    Jain and colleagues found that in all patients, the viral load (the level of SARS-CoV-2 particles in the bloodstream) increases during early lung infection, but then may go in different directions starting after Day 5, depending on levels of key immune guardian cells, called T cells. T cells are the first responders of the immune system that effectively coordinate other aspects of immunity. The T cell response is known as adaptive immunity because it is flexible and responds to immediate threats.
    In patients younger than 35 who have healthy immune systems, a sustained recruitment of T cells occurs, accompanied by a reduction in viral load and inflammation and a decrease in nonspecific immune cells (so-called “innate” immunity). All of these processes lead to lower risk for blood clot formation and to restoring oxygen levels in lung tissues, and these patients tend to recover.
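    The published model is far more detailed, but the qualitative behaviour described above can be caricatured with a toy pair of differential equations (entirely illustrative parameter values; this is not the MGH/University of Cyprus model): viral load grows during early infection and is cleared at a rate set by a T-cell response that is recruited after roughly day five.
    ```python
    # Minimal illustrative dynamics (NOT the published model): viral load V grows
    # logistically in the lung and is cleared by an adaptive T-cell response T that is
    # recruited in proportion to the viral load after a delay of about five days.
    def simulate(t_recruit=0.8, days=20, dt=0.01):
        V, T = 1e-3, 0.0
        for step in range(int(days / dt)):
            t = step * dt
            dV = 0.9 * V * (1 - V) - 2.0 * T * V              # growth minus T-cell clearance
            dT = (t_recruit * V - 0.1 * T) if t > 5 else 0.0  # adaptive response after ~day 5
            V, T = max(V + dV * dt, 0.0), max(T + dT * dt, 0.0)
        return V

    # Strong T-cell recruitment (e.g. a younger patient with a healthy immune system)
    # drives the viral load down; weak recruitment lets it stay high.
    for recruit in (0.8, 0.05):
        print(f"recruitment={recruit}: viral load at day 20 = {simulate(recruit):.4f}")
    ```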

    In contrast, people who have higher levels of inflammation at the time of infection — such as those with diabetes, obesity or high blood pressure — or whose immune systems are tilted toward more active innate immune responses but less effective adaptive immune responses tend to have poor outcomes.
    The investigators also sought to answer the question of why men tend to have more severe COVID-19 compared with women, and found that although the adaptive immune response is not as vigorous in women as in men, women have lower levels of a protein called TMPRSS2 that allows SARS-CoV-2 to enter and infect normal cells.
    Based on their findings, Jain and colleagues propose that optimal treatment for older patients — who are likely to already have inflammation and impaired immune responses compared with younger patients — should include the clot-preventing drug heparin and/or the use of an immune response-modifying drug (checkpoint inhibitor) in early stages of the disease, and the anti-inflammatory drug dexamethasone at later stages.
    In patients with pre-existing conditions such as obesity, diabetes and high blood pressure or immune system abnormalities, treatment might also include drugs specifically targeted against inflammation-promoting substances (cytokines, such as interleukin-6) in the body, as well as drugs that can inhibit the renin-angiotensin system (the body’s main blood pressure control mechanism), thereby preventing activation of abnormal blood pressure and resistance to blood flow that can occur in response to viral infections.
    This work shows how tools originally developed for cancer research can be useful for understanding COVID-19: The model was first created to analyze involvement of the renin angiotensin system in the development of fibrous tissues in tumors, but was modified to include SARS-CoV-2 infection and COVID-19-specific mechanisms. The team is further developing the model and plans to use it to examine the dynamics of the immune system in response to different types of COVID-19 vaccines as well as cancer-specific comorbidities that might require special considerations for treatment.
    Co-corresponding authors are Lance L. Munn, MGH, and Triantafyllos Stylianopoulos, University of Cyprus. Other authors are Chrysovalantis Voutouri, U. Cyprus; Mohammad Reza Nikmaneshi, Sharif University of Technology, Iran; C. Corey Hardin, Melin J. Khandekar and Sayon Dutta, all from MGH; and Ankit B. Patel and Ashish Verma from Brigham and Women’s Hospital.
    Jain’s research is supported by an Investigator Award and grants from the National Foundation for Cancer Research, Jane’s Trust Foundation, American Medical Research Foundation and Harvard Ludwig Cancer Center. Munn’s research is supported by a National Institutes of Health grant. Stylianopoulos’s research is supported by the European Research Council and the Cyprus Research and Innovation Foundation. Patel is supported by an American Society of Nephrology Joseph A. Carlucci Research Fellowship.

  • Advanced materials in a snap

    If everything moved 40,000 times faster, you could eat a fresh tomato three minutes after planting a seed. You could fly from New York to L.A. in half a second. And you’d have waited in line at airport security for that flight for 30 milliseconds.
    Thanks to machine learning, designing materials for new, advanced technologies could accelerate that much.
    A research team at Sandia National Laboratories has successfully used machine learning — computer algorithms that improve themselves by learning patterns in data — to complete cumbersome materials science calculations more than 40,000 times faster than normal.
    Their results, published Jan. 4 in npj Computational Materials, could herald a dramatic acceleration in the creation of new technologies for optics, aerospace, energy storage and potentially medicine while simultaneously saving laboratories money on computing costs.
    “We’re shortening the design cycle,” said David Montes de Oca Zapiain, a computational materials scientist at Sandia who helped lead the research. “The design of components grossly outpaces the design of the materials you need to build them. We want to change that. Once you design a component, we’d like to be able to design a compatible material for that component without needing to wait for years, as it happens with the current process.”
    The research, funded by the U.S. Department of Energy’s Basic Energy Sciences program, was conducted at the Center for Integrated Nanotechnologies, a DOE user research facility jointly operated by Sandia and Los Alamos national labs.

    Machine learning speeds up computationally expensive simulations
    Sandia researchers used machine learning to accelerate a computer simulation that predicts how changing a design or fabrication process, such as tweaking the amounts of metals in an alloy, will affect a material. A project might require thousands of simulations, which can take weeks, months or even years to run.
    The team clocked a single, unaided simulation on a high-performance computing cluster with 128 processing cores (a typical home computer has two to six processing cores) at 12 minutes. With machine learning, the same simulation took 60 milliseconds using only 36 cores, equivalent to a speed-up of more than 42,000 times when the two runs are compared on the same number of cores. This means researchers can now learn in under 15 minutes what would normally take a year.
    Sandia’s new algorithm arrived at an answer that was 5% different from the standard simulation’s result, a very accurate prediction for the team’s purposes. Machine learning trades some accuracy for speed because it makes approximations to shortcut calculations.
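    The 42,000-fold figure follows from normalizing the two runs to the same number of cores, and the surrogate idea itself is straightforward to sketch: fit a cheap model to input/output pairs of an expensive calculation and accept a small approximation error in exchange for speed. The Python example below is a generic stand-in (a scikit-learn neural network fitted to a made-up function), not Sandia's framework or its actual simulation.
    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Core-normalized speedup implied by the reported numbers (12 min on 128 cores
    # vs. 60 ms on 36 cores): (720 s / 0.060 s) * (128 / 36) is roughly 42,700x.
    print((720 / 0.060) * (128 / 36))

    rng = np.random.default_rng(1)

    def expensive_simulation(x):
        """Stand-in for a costly materials simulation mapping design parameters to a property."""
        return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1]) + 0.5 * x[:, 2] ** 2

    # Generate training data by running the "simulation" once, offline, on sampled inputs.
    X = rng.uniform(0, 1, size=(2000, 3))
    y = expensive_simulation(X)

    # Train a small neural-network surrogate; evaluating it is far cheaper than re-running
    # the simulation, at the cost of a small approximation error (cf. the reported ~5%).
    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X, y)

    X_test = rng.uniform(0, 1, size=(500, 3))
    rel_err = np.abs(surrogate.predict(X_test) - expensive_simulation(X_test)).mean() / np.abs(y).mean()
    print(f"mean relative error of the surrogate: {rel_err:.1%}")
    ```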
    “Our machine-learning framework achieves essentially the same accuracy as the high-fidelity model but at a fraction of the computational cost,” said Sandia materials scientist Rémi Dingreville, who also worked on the project.
    Benefits could extend beyond materials
    Dingreville and Montes de Oca Zapiain are going to use their algorithm first to research ultrathin optical technologies for next-generation monitors and screens. Their research, though, could prove widely useful because the simulation they accelerated describes a common event — the change, or evolution, of a material’s microscopic building blocks over time.
    Machine learning previously has been used to shortcut simulations that calculate how interactions between atoms and molecules change over time. The published results, however, demonstrate the first use of machine learning to accelerate simulations of materials at relatively large, microscopic scales, which the Sandia team expects will be of greater practical value to scientists and engineers.
    For instance, scientists can now quickly simulate how minuscule droplets of melted metal will glob together when they cool and solidify, or conversely, how a mixture will separate into layers of its constituent parts when it melts. Many other natural phenomena, including the formation of proteins, follow similar patterns. And while the Sandia team has not tested the machine-learning algorithm on simulations of proteins, they are interested in exploring the possibility in the future.

  • Breaking through the resolution barrier with quantum-limited precision

    Researchers at Paderborn University have developed a new method of distance measurement for systems such as GPS, which achieves more precise results than ever before. Using quantum physics, the team led by Leibniz Prize winner Professor Christine Silberhorn has successfully overcome the so-called resolution limit, which causes the “noise” we may see in photos, for example. Their findings have just been published in the academic journal Physical Review X Quantum (PRX Quantum).
    Physicist Dr Benjamin Brecht explains the problem of the resolution limit: “In laser distance measurements a detector registers two light pulses of different intensities with a time difference. The more precise the time measurement is, the more accurately the distance can be determined. Provided the time separation between the pulses is greater than the length of the pulses, this works well.” Problems arise, however, as Brecht explains, if the pulses overlap: “Then you can no longer measure the time difference using conventional methods. This is known as the ‘resolution limit’ and is a well-known effect in photos. Very small structures or textures can no longer be resolved. That’s the same problem — just with position rather than time.”
    A further challenge, according to Brecht, is to determine the different intensities of two light pulses, simultaneously with their time difference and the arrival time. But this is exactly what the researchers have managed to do — “with quantum-limited precision,” adds Brecht. Working with partners from the Czech Republic and Spain, the Paderborn physicists were even able to measure these values when the pulses overlapped by 90 per cent. Brecht says: “This is far beyond the resolution limit. The precision of the measurement is 10,000 times better. Using methods from quantum information theory, we can find new forms of measurement which overcome the limitations of established methods.”
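    A purely classical Python sketch (illustrative pulse shapes and thresholds, not the quantum measurement used in the study) shows why conventional peak detection breaks down once the pulses overlap strongly:
    ```python
    import numpy as np

    def detector_trace(separation, width=1.0, amp2=0.6, t=np.linspace(-10, 10, 4001)):
        """Summed intensity of two Gaussian pulses of different amplitude on a slow detector."""
        pulse = lambda t0, a: a * np.exp(-((t - t0) ** 2) / (2 * width ** 2))
        return pulse(-separation / 2, 1.0) + pulse(+separation / 2, amp2)

    def count_peaks(s):
        """Naive peak counting: local maxima above 5% of the global maximum."""
        local_max = (s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]) & (s[1:-1] > 0.05 * s.max())
        return int(local_max.sum())

    for sep in (4.0, 0.4):   # well-separated pulses vs. strongly overlapping pulses
        print(f"separation {sep}: naive peak detection sees {count_peaks(detector_trace(sep))} pulse(s)")
    ```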
    These findings could allow significant improvements in the future to the precision of applications such as LIDAR, a method of optical distance and speed measurement, and GPS. It will take some time, however, before this is ready for the market, points out Brecht.

    Story Source:
    Materials provided by Universität Paderborn. Note: Content may be edited for style and length.

  • Deep neural network predicts transcription factors

    A joint research team from KAIST and UCSD has developed a deep neural network named DeepTFactor that predicts transcription factors from protein sequences. DeepTFactor will serve as a useful tool for understanding the regulatory systems of organisms, accelerating the use of deep learning for solving biological problems.
    A transcription factor is a protein that binds specifically to DNA sequences to control transcription initiation. Analyzing transcriptional regulation enables the understanding of how organisms control gene expression in response to genetic or environmental changes. In this regard, finding the transcription factors of an organism is the first step in the analysis of its transcriptional regulatory system.
    Previously, transcription factors have been predicted by analyzing sequence homology with already characterized transcription factors or by data-driven approaches such as machine learning. Conventional machine learning models require a rigorous feature selection process that relies on domain expertise such as calculating the physicochemical properties of molecules or analyzing the homology of biological sequences. Meanwhile, deep learning can inherently learn latent features for the specific task.
    A joint research team comprised of Ph.D. candidate Gi Bae Kim and Distinguished Professor Sang Yup Lee of the Department of Chemical and Biomolecular Engineering at KAIST, and Ye Gao and Professor Bernhard O. Palsson of the Department of Biochemical Engineering at UCSD reported a deep learning-based tool for the prediction of transcription factors. Their research paper “DeepTFactor: A deep learning-based tool for the prediction of transcription factors” was published online in PNAS.
    Their article reports the development of DeepTFactor, a deep learning-based tool that predicts whether a given protein sequence is a transcription factor using three parallel convolutional neural networks. The joint research team predicted 332 transcription factors of Escherichia coli K-12 MG1655 using DeepTFactor and validated the performance of DeepTFactor by experimentally confirming the genome-wide binding sites of three predicted transcription factors (YqhC, YiaU, and YahB).
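    The idea of parallel convolutional branches over one-hot-encoded protein sequences can be sketched in a few lines of Python; the layer sizes, filter widths and the toy sequence below are illustrative and do not reproduce the published DeepTFactor architecture or its trained weights.
    ```python
    import torch
    import torch.nn as nn

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues

    def one_hot(seq, max_len=1000):
        """One-hot encode a protein sequence into a (20, max_len) tensor, truncating or padding."""
        x = torch.zeros(len(AMINO_ACIDS), max_len)
        for i, aa in enumerate(seq[:max_len]):
            if aa in AMINO_ACIDS:
                x[AMINO_ACIDS.index(aa), i] = 1.0
        return x

    class ParallelCNNClassifier(nn.Module):
        """Three parallel 1D-convolution branches with different filter widths, pooled and
        merged into a single transcription-factor probability (sizes are illustrative)."""
        def __init__(self, widths=(4, 8, 16), channels=32):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Sequential(nn.Conv1d(20, channels, w, padding=w // 2),
                              nn.ReLU(),
                              nn.AdaptiveMaxPool1d(1))
                for w in widths)
            self.head = nn.Sequential(nn.Linear(channels * len(widths), 64),
                                      nn.ReLU(),
                                      nn.Linear(64, 1))

        def forward(self, x):                       # x: (batch, 20, seq_len)
            feats = torch.cat([b(x).squeeze(-1) for b in self.branches], dim=1)
            return torch.sigmoid(self.head(feats))  # probability of being a transcription factor

    model = ParallelCNNClassifier()
    batch = torch.stack([one_hot("MKALTARQQEVFDLIRDHISQTGMPPTRAEIA")])  # toy sequence fragment
    print(model(batch).shape)   # torch.Size([1, 1]) -> an untrained score; training data is needed
    ```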
    The joint research team further used a saliency method to understand the reasoning process of DeepTFactor. The researchers confirmed that even though information on the DNA-binding domains of transcription factors was not explicitly given during the training process, DeepTFactor implicitly learned and used them for prediction. Unlike previous transcription factor prediction tools that were developed only for the protein sequences of specific organisms, DeepTFactor is expected to be used in the analysis of the transcription systems of all organisms at a high level of performance.
    Distinguished Professor Sang Yup Lee said, “DeepTFactor can be used to discover unknown transcription factors from numerous protein sequences that have not yet been characterized. It is expected that DeepTFactor will serve as an important tool for analyzing the regulatory systems of organisms of interest.”

    Story Source:
    Materials provided by The Korea Advanced Institute of Science and Technology (KAIST). Note: Content may be edited for style and length.

  • Supercapacitors challenge batteries

    A team working with Roland Fischer, Professor of Inorganic and Metal-Organic Chemistry at the Technical University of Munich (TUM), has developed a highly efficient supercapacitor. The basis of the energy storage device is a novel, powerful and also sustainable graphene hybrid material with performance data comparable to that of currently utilized batteries.
    Usually, energy storage is associated with batteries and accumulators that provide energy for electronic devices. However, so-called supercapacitors are increasingly being installed these days in laptops, cameras, cellphones and vehicles.
    Unlike batteries, they can store large amounts of energy quickly and release it just as fast. If, for instance, a train brakes when entering the station, supercapacitors store the energy and provide it again when the train needs a lot of energy very quickly while starting up.
    However, one problem with supercapacitors to date has been their low energy density. While lithium accumulators reach an energy density of up to 265 watt-hours per kilogram (Wh/kg), supercapacitors have so far delivered only around a tenth of that.
    Sustainable material provides high performance
    The team working with TUM chemist Roland Fischer has now developed a novel, powerful as well as sustainable graphene hybrid material for supercapacitors. It serves as the positive electrode in the energy storage device. The researchers combined it with a proven negative electrode based on titanium and carbon.

    The new energy storage device not only attains an energy density of up to 73 Wh/kg, roughly equivalent to that of a nickel-metal hydride battery, but also performs much better than most other supercapacitors, with a power density of 16 kW/kg. The secret of the new supercapacitor is the combination of different materials — hence, chemists refer to the supercapacitor as “asymmetrical.”
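    A quick back-of-the-envelope comparison using only the figures quoted in this article (and an idealized constant-power discharge, which real devices do not achieve) puts the numbers side by side:
    ```python
    # Back-of-the-envelope comparison using the figures quoted in the text.
    li_ion_wh_per_kg = 265                       # energy density of lithium accumulators
    typical_supercap = li_ion_wh_per_kg / 10     # "only a tenth thereof" ~ 26.5 Wh/kg
    new_device_wh_per_kg = 73
    power_density_kw_per_kg = 16

    print(new_device_wh_per_kg / typical_supercap)   # roughly 2.8x a typical supercapacitor

    # Idealized full-discharge time at the quoted power density (constant power assumed):
    print(3600 * new_device_wh_per_kg / (power_density_kw_per_kg * 1000), "seconds")  # ~16 s
    ```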
    Hybrid materials: Nature is the role model
    The researchers are betting on a new strategy to overcome the performance limits of standard materials — they utilize hybrid materials. “Nature is full of highly complex, evolutionarily optimized hybrid materials — bones and teeth are examples. Their mechanical properties, such as hardness and elasticity, were optimized by nature through the combination of various materials,” says Roland Fischer.
    The research team transferred this abstract idea of combining basic materials to supercapacitors. As the basis for the novel positive electrode of the storage unit, they used chemically modified graphene and combined it with a nano-structured metal-organic framework, a so-called MOF.
    Powerful and stable
    Decisive for the performance of graphene hybrids are, on the one hand, a large specific surface area with controllable pore sizes and, on the other hand, high electrical conductivity. “The high performance capability of the material is based on the combination of the microporous MOFs with the conductive graphene acid,” explains first author Jayaramulu Kolleboyina, a former guest scientist working with Roland Fischer.

    A large surface area is important for good supercapacitors. It allows for the collection of a correspondingly large number of charge carriers within the material — this is the basic principle for the storage of electrical energy.
    Through skillful material design, the researchers achieved the feat of linking the graphene acid with the MOFs. The resulting hybrid MOFs have a very large inner surface of up to 900 square meters per gram and are highly performant as positive electrodes in a supercapacitor.
    Long stability
    However, that is not the only advantage of the new material. To achieve a chemically stable hybrid, one needs strong chemical bonds between the components. The bonds are apparently the same as those between amino acids in proteins, according to Fischer: “In fact, we have connected the graphene acid with a MOF-amino acid, which creates a type of peptide bond.”
    The stable connection between the nano-structured components has huge advantages in terms of long term stability: The more stable the bonds, the more charging and discharging cycles are possible without significant performance impairment.
    For comparison: A classic lithium accumulator has a useful life of around 5,000 cycles. The new cell developed by the TUM researchers retains close to 90 percent capacity even after 10,000 cycles.
    International network of experts
    Fischer emphasizes how important self-directed, unfettered international cooperation was for the development of the new supercapacitor. The team was built by Jayaramulu Kolleboyina, a guest scientist from India invited by the Alexander von Humboldt Foundation, who is now head of the chemistry department at the newly established Indian Institute of Technology in Jammu.
    “Our team also networked with electrochemistry and battery research experts in Barcelona as well as graphene derivative experts from the Czech Republic,” reports Fischer. “Furthermore, we have integrated partners from the USA and Australia. This wonderful, international co-operation promises much for the future.”
    The research was supported by the Deutsche Forschungsgemeinschaft (DFG) within the cluster of excellence e-conversion, the Alexander von Humboldt Foundation, the Indian Institute of Technology Jammu, the Queensland University of Technology and the Australian Research Council (ARC). Further funding came from the European Regional Development Fund provided by the Ministry of Education, Youth and Sports of the Czech Republic.