More stories

  • Do sweat it! Wearable microfluidic sensor to measure lactate concentration in real time

    With the seemingly unstoppable advances in miniaturization and materials science, all sorts of electronic devices have emerged to help us lead easier and healthier lives. Wearable sensors fall into this category, and they have received much attention lately as useful tools for monitoring a person’s health in real time. Many such sensors operate by quantifying biomarkers, that is, measurable indicators that reflect one’s health condition. Widely used biomarkers include heart rate and body temperature, which can be monitored continuously with relative ease. By contrast, chemical biomarkers in bodily fluids, such as blood, saliva, and sweat, are more challenging to quantify with wearable sensors.
    For instance, lactate, which is produced during the breakdown of glucose when tissues run short of oxygen, is an important biomarker present in both blood and sweat that reflects both the intensity of physical exercise and the oxygenation of muscles. During exercise, muscles requiring energy can rapidly run out of oxygen and fall back on a different metabolic pathway that provides energy at the ‘cost’ of accumulating lactate, which causes pain and fatigue. Lactate is then released into the bloodstream, and part of it is eliminated through sweat. This means that a wearable chemical sensor could measure the concentration of lactate in sweat to give a real-time picture of the intensity of exercise or the condition of muscles.
    Although lactate-measuring wearable sensors have already been proposed, most of them are composed of materials that can cause irritation of the skin. To address this problem, a team of scientists in Japan recently carried out a study to bring us a more comfortable and practical sensor. Their work, which was published in Electrochimica Acta, was led by Associate Professor Isao Shitanda, Mr. Masaya Mitsumoto, and Dr. Noya Loew from the Department of Pure and Applied Chemistry at the Tokyo University of Science, Japan.
    The team first focused on the sensing mechanism that they would employ in the sensor. Most lactate biosensors are made by immobilizing lactate oxidase (an enzyme) and an appropriate mediator on an electrode. A chemical reaction involving lactate oxidase, the mediator, and free lactate results in the generation of a measurable current between electrodes — a current that is roughly proportional to the concentration of lactate.
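    In essence, reading out such a sensor is a calibration problem: the electronics convert a measured current back into a concentration. The sketch below illustrates the idea with a simple linear calibration; the sensitivity, blank current, and linear range are hypothetical placeholders, not values reported in the study.

        # Toy amperometric calibration: convert a measured current into a lactate concentration.
        # All numbers are hypothetical placeholders, not values from the study.
        BLANK_CURRENT_UA = 0.15        # background current with no lactate (microamps)
        SENSITIVITY_UA_PER_MM = 0.80   # slope of the calibration line (microamps per mmol/L)
        LINEAR_RANGE_MM = (0.5, 25.0)  # assumed linear range of the sensor (mmol/L)

        def lactate_concentration(current_ua: float) -> float:
            """Invert the linear calibration I = I_blank + S * C to get C in mmol/L."""
            conc = (current_ua - BLANK_CURRENT_UA) / SENSITIVITY_UA_PER_MM
            if not (LINEAR_RANGE_MM[0] <= conc <= LINEAR_RANGE_MM[1]):
                raise ValueError(f"{conc:.2f} mmol/L is outside the calibrated range")
            return conc

        for current in (1.0, 4.2, 10.5):  # example readings in microamps
            print(f"{current:4.1f} uA -> {lactate_concentration(current):5.2f} mmol/L")
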
    A tricky aspect here is how to immobilize the enzyme and mediator on an electrode. To do this, the scientists employed a method called “electron beam-induced graft polymerization,” by which functional molecules were bonded to a carbon-based material that can spontaneously bind to the enzyme. The researchers then turned the material into a liquid ink that can be used to print electrodes. This last part turns out to be an important aspect for the future commercialization of the sensor, as Dr. Shitanda explains, “The fabrication of our sensor is compatible with screen printing, an excellent method for fabricating lightweight, flexible electrodes that can be scaled up for mass production.”
    With the sensing mechanism complete, the team then designed an appropriate system for collecting sweat and delivering it to the sensor. They achieved this with a microfluidic sweat collection system made out of polydimethylsiloxane (PDMS); it comprised multiple small inlets, an outlet, and a chamber for the sensor in between. “We decided to use PDMS because it is a soft, nonirritating material suitable for our microfluidic sweat collection system, which is to be in direct contact with the skin,” comments Mr. Mitsumoto.
    The detection limits of the sensor and its operating range for lactate concentrations were confirmed to be suitable for investigating the “lactate threshold” — the point at which aerobic (with oxygen) metabolism switches to anaerobic (without oxygen) metabolism during exercise. Real-time monitoring of this bodily phenomenon is important for several applications, as Dr. Loew remarks, “Monitoring the lactate threshold will help optimize the training of athletes and the exercise routines of rehabilitation patients and the elderly, as well as control the exertion of high-performance workers such as firefighters.”
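    The article does not say how the threshold itself would be extracted from the sensor’s readings, but a common field heuristic is to flag the workload at which lactate rises a fixed amount above its resting baseline. A minimal sketch of that heuristic, on invented data:

        # Hypothetical sweat-lactate readings (mmol/L) at increasing exercise intensities (watts).
        # The "baseline + 1 mmol/L" rule below is a common heuristic, not the study's method.
        readings = [
            (100, 1.1), (125, 1.2), (150, 1.4), (175, 1.9),
            (200, 2.8), (225, 4.1), (250, 6.0),
        ]

        def lactate_threshold(data, delta=1.0):
            baseline = data[0][1]
            for intensity, lactate in data:
                if lactate > baseline + delta:
                    return intensity
            return None  # threshold not reached within the tested range

        print(f"Estimated lactate threshold: {lactate_threshold(readings)} W")
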
    The team is already testing the implementation of this sensor in practical scenarios. With any luck, the progress made in this study will help develop the field of wearable chemical sensors, helping us to keep better track of our bodily processes and maintain better health.

    Story Source:
    Materials provided by Tokyo University of Science. Note: Content may be edited for style and length.

  • You snooze, you lose – with some sleep trackers

    Wearable sleep tracking devices — from Fitbit to Apple Watch to never-heard-of brands stashed away in the electronics clearance bin — have infiltrated the market at a rapid pace in recent years.
    And as with any consumer product, not all sleep trackers are created equal, according to West Virginia University neuroscientists.
    Prompted by a lack of independent, third-party evaluations of these devices, a research team led by Joshua Hagen, director of the Human Performance Innovation Center at the WVU Rockefeller Neuroscience Institute, tested the efficacy of eight commercial sleep trackers.
    Fitbit and Oura came out on top in measuring total sleep time, total wake time and sleep efficiency, the results indicate. All other devices, however, either overestimated or underestimated at least one of those sleep metrics, and none of the eight could quantify sleep stages (REM, non-REM) accurately enough to be useful when compared with an electroencephalogram, or EEG, which records electrical activity in the brain.
    The study was published in Nature and Science of Sleep.
    “The biggest takeaway is that not all consumer devices are created equal, and for the end user to take care in selecting the technology to suit their application based on the data,” Hagen said. “Some devices are currently performing well for total sleep time and sleep efficiency, but the community at large seems to still struggle with sleep staging (deep, REM, light). This is not surprising, since typically brain waves are needed to properly measure this. However, when thinking about what you generally have control over with your sleep — time to bed, time in bed, choices before bed that impact sleep efficiency — these can be accurately measured in some devices.”
    Researchers observed five healthy adults — two males, ages 26 and 41, and three females, ages 22, 23 and 27 — who participated by wearing the sleep trackers for a combined total of 98 nights.

    The commercial sleep technologies displayed lower error and bias values when quantifying sleep/wake states than when quantifying sleep staging durations. Still, the findings revealed a remarkably high degree of variability in the accuracy of commercial sleep technologies, the researchers stated.
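    Bias and error here are simply summary statistics of each device’s nightly estimates against the EEG reference. A minimal sketch of how such numbers can be computed; the nightly values below are invented for illustration, not data from the WVU study.

        # Compare device-reported total sleep time (TST, minutes) against an EEG reference.
        # Bias = mean signed error; MAE = mean absolute error. All values are made up.
        reference_tst = [412, 388, 455, 367, 430]          # EEG-scored nights

        device_tst = {
            "DeviceA": [420, 395, 450, 380, 441],
            "DeviceB": [380, 350, 470, 330, 400],
        }

        def bias_and_mae(estimates, reference):
            errors = [est - ref for est, ref in zip(estimates, reference)]
            bias = sum(errors) / len(errors)
            mae = sum(abs(e) for e in errors) / len(errors)
            return bias, mae

        for name, estimates in device_tst.items():
            bias, mae = bias_and_mae(estimates, reference_tst)
            print(f"{name}: bias = {bias:+.1f} min, MAE = {mae:.1f} min")
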
    “While technology, both hardware and software, continually advances, it is critical to evaluate the accuracy of these devices in an ongoing fashion,” Hagen said. “Updates to hardware, firmware and algorithms happen continuously, and we must understand how this affects accuracy.”
    Research in this area will evolve with the technology, added Hagen, who himself utilizes four to five sleep devices to keep monitoring his ZZZs.
    “I’m a big believer in living the research,” he said. “I need to understand what the consumer sees in the smartphone apps, what the usability of the devices is, etc. Without that objective sleep data, you can only rely on how you feel when you wake up — and while that is important, that doesn’t tell the whole story. If your alarm goes off and you happen to be in a deep sleep stage, you will wake up very groggy, and could feel as though that sleep was not restorative, when in fact it could have been. It’s just not subjectively noticeable right at that moment.”
    At the end of the day, however, it’s up to the user’s needs as to which product may be most suited for that person, Hagen added.

    “After accuracy, it comes down to logistics. Do you prefer a watch with a display? A ring? A mattress sensor? What is the price of each? Which smartphone app is most appealing? But again, that is if all accuracies are close to equal. If the price is right and the form factor is ideal, but the data accuracy is extremely poor, then those factors don’t matter.”
    The Human Performance Innovation Center works with members of the US military along with collegiate and professional athletes to better understand and optimize human performance, resiliency, and recovery, applying these findings to solutions for the general and clinical populations.
    Joining Hagen in the study from WVU were Jason Stone, Lauren Rentz, Jillian Forsey, Jad Ramadan, Victor Finomore, Scott Galster and Ali Rezai.
    Citation: Evaluations of Commercial Sleep Technologies for Objective Monitoring During Routine Sleeping Conditions

  • A sharper look at the interior of semiconductors

    Images provide information — what we can observe with our own eyes enables us to understand. Constantly expanding the field of perception into dimensions that are initially hidden from the naked eye drives science forward. Today, increasingly powerful microscopes let us see into the cells and tissues of living organisms, into the world of microorganisms as well as into inanimate nature. But even the best microscopes have their limits. “To be able to observe structures and processes down to the nanoscale level and below, we need new methods and technologies,” says Dr Silvio Fuchs from the Institute of Optics and Quantum Electronics at the University of Jena. This applies in particular to technological areas such as materials research or data processing. “These days, electronic components, computer chips or circuits are becoming increasingly small,” adds Fuchs. Together with colleagues, he has now developed a method that makes it possible to display and study such tiny, complex structures and even “see inside” them without destroying them. In the current issue of the scientific journal Optica, the researchers present their method — Coherence Tomography with Extreme Ultraviolet Light (XCT for short) — and show its potential in research and application.
    Light penetrates the sample and is reflected by internal structures
    The imaging procedure is based on optical coherence tomography (OCT), which has been established in ophthalmology for a number of years, explains doctoral candidate Felix Wiesner, the lead author of the study. “These devices have been developed to examine the retina of the eye non-invasively, layer by layer, to create 3-dimensional images.” At the ophthalmologist, OCT uses infrared light to illuminate the retina. The radiation is selected in such a way that the tissue to be examined does not absorb it too strongly and it can be reflected by the inner structures. However, the physicists in Jena use extremely short-wave UV light instead of long-wave infrared light for their OCT. “This is due to the size of the structures we want to image,” says Felix Wiesner. In order to look into semiconductor materials with structure sizes of only a few nanometres, light with a wavelength of only a few nanometres is needed.
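    The argument for shorter wavelengths can be made quantitative: for a broadband source, the depth (axial) resolution of OCT scales with the square of the centre wavelength divided by the spectral bandwidth. The comparison below uses wavelength and bandwidth figures chosen as typical examples, not values taken from the paper.

        import math

        def oct_axial_resolution_nm(center_wavelength_nm, bandwidth_nm):
            """Axial resolution for a Gaussian spectrum: (2 ln 2 / pi) * lambda0^2 / dlambda."""
            return (2 * math.log(2) / math.pi) * center_wavelength_nm ** 2 / bandwidth_nm

        # Typical ophthalmic OCT: near-infrared light around 840 nm with ~50 nm bandwidth.
        print(f"IR OCT : {oct_axial_resolution_nm(840, 50):7.0f} nm (a few micrometres)")

        # Broadband XUV spanning roughly 10-80 nm (centre ~45 nm, ~70 nm bandwidth).
        print(f"XUV CT : {oct_axial_resolution_nm(45, 70):7.0f} nm (nanometre scale)")
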
    Nonlinear optical effect generates coherent extremely short-wave UV light
    Generating such extremely short-wave UV light (XUV) used to be a challenge and was possible almost exclusively in large-scale research facilities. The Jena physicists, however, generate broadband XUV in an ordinary laboratory, using what are called high harmonics: radiation produced by the interaction of laser light with a medium, with a frequency many times that of the original light. The higher the harmonic order, the shorter the resulting wavelength. “In this way, we generate light with a wavelength of between 10 and 80 nanometres using infrared lasers,” explains Prof. Gerhard Paulus, Professor of Nonlinear Optics at the University of Jena. “Like the irradiated laser light, the resulting broadband XUV light is also coherent, which means that it has laser-like properties.”
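    Because the n-th harmonic simply divides the driving wavelength by n, the quoted 10 to 80 nanometre range follows directly from an infrared driver. The short sketch below assumes an 800 nm drive laser as a typical value (the article does not state the laser wavelength); in gas targets only odd harmonic orders are emitted.

        # Wavelength of the n-th high harmonic of a drive laser: lambda_n = lambda_drive / n.
        # The 800 nm infrared driver is an assumed typical value, not taken from the article.
        DRIVE_WAVELENGTH_NM = 800

        for order in (11, 41, 79):  # a few representative odd harmonic orders
            print(f"harmonic {order:2d}: {DRIVE_WAVELENGTH_NM / order:5.1f} nm")
        # harmonics 11 through 79 span roughly 73 nm down to 10 nm, i.e. the XUV range
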
    In the work described in their current paper, the physicists exposed nanoscopic layer structures in silicon to the coherent XUV radiation and analysed the reflected light. The silicon samples contained thin layers of other metals, such as titanium or silver, at different depths. Because these materials have different reflective properties from the silicon, they can be detected in the reflected radiation. The method is so precise that not only can the deep structure of the tiny samples be displayed with nanometre accuracy, but — due to the differing reflective behaviour — the chemical composition of the samples can also be determined precisely and, above all, in a non-destructive manner. “This makes coherence tomography an interesting application for inspecting semiconductors, solar cells or multilayer optical components,” says Paulus. It could be used for quality control in the manufacturing process of such nanomaterials, to detect internal defects or chemical impurities.

    Story Source:
    Materials provided by Friedrich-Schiller-Universitaet Jena. Original written by Ute Schönfelder. Note: Content may be edited for style and length.

  • Can evolution be predicted?

    Scientists created a framework to test the predictions of biological optimality theories, including evolution.
    Evolution adapts and optimizes organisms to their ecological niche. This could be used to predict how an organism evolves, but how can such predictions be rigorously tested? The Biophysics and Computational Neuroscience group led by Professor Gašper Tkačik at the Institute of Science and Technology (IST) Austria has now created a mathematical framework to do exactly that.
    Evolutionary adaptation often finds clever solutions to challenges posed by different environments, from how to survive in the dark depths of the oceans to creating intricate organs such as an eye or an ear. But can we mathematically predict these outcomes?
    This is the key question that motivates the Tkačik research group. Working at the intersection of biology, physics, and mathematics, they apply theoretical concepts to complex biological systems, or as Tkačik puts it: “We simply want to show that it is sometimes possible to predict change in biological systems, even when dealing with such a complex beast as evolution.”
    Climbing mountains in many dimensions
    In joint work, postdoctoral fellow Wiktor Młynarski and PhD student Michal Hledík, assisted by group alumnus Thomas Sokolowski, who is now at the Frankfurt Institute for Advanced Studies, took an essential step towards this goal. They developed a statistical framework that uses experimental data from complex biological systems to rigorously test and quantify how well such a system is adapted to its environment. Examples of such adaptations are the design of the eye’s retina, which optimally collects light to form a sharp image, and the wiring diagram of a worm’s nervous system, which connects all the muscles and sensors efficiently, using the least amount of neural wiring.
    The established model on which the scientists base their results represents adaptation as movement on a landscape with mountains and valleys. The features of an organism determine where it is located on this landscape. As evolution progresses and the organism adapts to its ecological niche, it climbs towards the peak of one of the mountains. Better adaptation results in better performance in the environment — for example, producing more offspring — which in turn is reflected in a higher elevation on this landscape. Therefore, a falcon with its sharp eyesight sits at a higher point on this landscape than an ancestor with poorer vision in the same environment.
    The new framework by Młynarski, Hledík, and colleagues allows them to quantify how well the organisms are adapted to their niche. On a two-dimensional landscape with mountains and valleys, calculating the elevation appears trivial, but real biological systems are much more complex. There are many more factors influencing it, which results in landscapes with many more dimensions. Here, intuition breaks down and the researchers need rigorous statistical tools to quantify adaptation and test its predictions against experimental data. This is what the new framework delivers.
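    A toy example shows why intuition fails in many dimensions. Take the simplest possible landscape, a single Gaussian peak in d-dimensional trait space: a randomly placed organism sits at a vanishingly low “elevation” once d is large, so judging whether an observed system is genuinely near a peak calls for statistics rather than eyeballing. The sketch below is purely illustrative and is not the IST Austria framework.

        import math
        import random

        def elevation(traits, optimum, width=1.0):
            """Toy fitness landscape: one Gaussian peak centred on 'optimum' (1.0 = perfectly adapted)."""
            sq_dist = sum((t - o) ** 2 for t, o in zip(traits, optimum))
            return math.exp(-sq_dist / (2 * width ** 2))

        random.seed(0)
        for dim in (2, 10, 100):
            optimum = [0.0] * dim
            random_org = [random.uniform(-1, 1) for _ in range(dim)]  # unadapted organism
            adapted_org = [0.1 * x for x in random_org]               # organism close to the peak
            print(f"d={dim:3d}: random elevation = {elevation(random_org, optimum):.3g}, "
                  f"adapted elevation = {elevation(adapted_org, optimum):.3g}")
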
    Building bridges in science
    IST Austria provides a fertile ground for interdisciplinary collaborations. Wiktor Młynarski, originally coming from computer science, is interested in applying mathematical concepts to biological systems. “This paper is a synthesis of many of my scientific interests, bringing together different biological systems and conceptual approaches,” he describes this most recent study. In his interdisciplinary research, Michal Hledík works with both the Tkačik group and the research group led by Nicholas Barton in the field of evolutionary genetics at IST Austria. Gašper Tkačik himself was inspired to study complex biological systems through the lens of physics by his PhD advisor William Bialek at Princeton University. “There, I learned that the living world is not always messy, complex, and unapproachable by physical theories. In contrast, it can drive completely new developments in applied and fundamental physics,” he explains.
    “Our legacy should be the ability to point a finger at selected biological systems and predict, from first principles, why these systems are as they are, rather than being limited to describing how they work,” Tkačik describes his motivation. Prediction should be possible in a controlled environment, such as with the relatively simple E. coli bacteria growing under optimal conditions. Another avenue for prediction is systems that operate under hard physical limits, which strongly constrain evolution. One example is our eyes, which need to convey high-resolution images to the brain while using the minimal amount of energy. Tkačik summarizes, “Theoretically deriving even a bit of an organism’s complexity would be the ultimate answer to the ‘Why?’ question that humans have grappled with throughout the ages. Our recent work creates a tool to approach this question, by building a bridge between mathematics and biology.”

  • Mathematical modeling to identify factors that determine adaptive therapy success

    One of the most challenging issues in cancer therapy is the development of drug resistance and subsequent disease progression. In a new article featured on this month’s cover of Cancer Research, Moffitt Cancer Center researchers, in collaboration with Oxford University, report results from their study using mathematical modeling to show that cell turnover impacts drug resistance and is an important factor that governs the success of adaptive therapy.
    Cancer treatment options have increased substantially over the past few decades; however, many patients eventually develop drug resistance. Physicians strive to overcome resistance by either trying to target cancer cells through an alternative approach or targeting the resistance mechanism itself, but success with these approaches is often limited, as additional resistance mechanisms can arise.
    Researchers in Moffitt’s Integrated Mathematical Oncology Department and Center of Excellence for Evolutionary Therapy believe that resistance may partly develop because of the high doses of drugs that are commonly used during treatment. Patients are typically administered a maximum tolerated dose of therapy that kills as many cancer cells as possible with the fewest side effects. However, according to evolutionary theories, this maximum tolerated dose approach could lead to drug resistance because of the existence of drug resistant cells before treatment even begins. Once sensitive cells are killed by anti-cancer therapies, these drug resistant cells are given free rein to divide and multiply. Moffitt researchers believe an alternative treatment strategy called adaptive therapy may be a better approach to kill cancer cells and minimize the development of drug resistance.
    “Adaptive therapy aims not to eradicate the tumor, but to control it. Therapy is applied to reduce tumor burden to a tolerable level but is subsequently modulated or withdrawn to maintain a pool of drug-sensitive cancer cells,” said Alexander Anderson, Ph.D., chair of the Integrated Mathematical Oncology Department and founding director of the Center of Excellence for Evolutionary Therapy.
    Previous laboratory studies have shown that adaptive therapy can prolong the time to cancer progression for several different tumor types, including ovarian, breast and melanoma. Additionally, a clinical trial in prostate cancer patients at Moffitt has shown that compared to standard treatment, adaptive therapy increased the time to cancer progression by approximately 10 months and reduced the cumulative drug usage by 53%.
    Despite these encouraging results, it is unclear which tumor types will respond best to adaptive therapy in the clinic. Recent studies have shown that the success of adaptive therapy is dependent on different factors, including levels of spatial constraint, the fitness of the resistant cell population, the initial number of resistant cells and the mechanisms of resistance. However, it is unclear how the cost of resistance factors into a tumor’s response to adaptive therapy.

    The cost of resistance refers to the idea that cells that become resistant have a fitness advantage over non-resistant cells when a drug is present, but this may come at a cost, such as a slower growth rate. However, drug resistance is not always associated with a cost and it is unclear whether a cost of resistance is necessary for the success of adaptive therapy.
    The research team at Moffitt used mathematical modeling to determine how the cost of resistance is associated with adaptive therapy. They modeled the growth of drug sensitive and resistant cell populations under both continuous therapy and adaptive therapy conditions and compared their time to disease progression in the presence and absence of a cost of resistance.
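    The article does not give the model’s equations, but the flavour of such a comparison can be captured with a minimal two-population competition model: drug-sensitive cells are killed while the drug is on, resistant cells pay a small growth-rate cost, and adaptive therapy switches the drug off once the tumour has shrunk enough. The sketch below is an illustration under made-up parameters, not the Moffitt/Oxford model.

        # Minimal sketch of continuous vs. adaptive therapy with two competing cell populations.
        # Parameters, thresholds and the 50% on/off rule are illustrative, not the published model.
        def time_to_progression(adaptive, days=3000, dt=0.1):
            S, R = 0.70, 0.03            # drug-sensitive / drug-resistant fractions of capacity
            r_s, r_r = 0.035, 0.025      # growth rates; resistance carries a growth-rate cost
            kill = 0.06                  # extra death rate of sensitive cells while the drug is on
            initial_burden = S + R
            drug_on, t = True, 0.0
            while t < days:
                burden = S + R
                if burden > 1.2 * initial_burden:
                    return t             # "progression": burden 20% above its starting value
                if adaptive:             # pause the drug after a 50% reduction, resume at baseline
                    if drug_on and burden < 0.5 * initial_burden:
                        drug_on = False
                    elif not drug_on and burden >= initial_burden:
                        drug_on = True
                crowding = 1.0 - burden  # shared, normalised carrying capacity
                dS = r_s * S * crowding - (kill * S if drug_on else 0.0)
                dR = r_r * R * crowding
                S, R = max(S + dS * dt, 0.0), max(R + dR * dt, 0.0)
                t += dt
            return t                     # no progression within the simulated window

        print(f"continuous therapy: progression after ~{time_to_progression(False):.0f} days")
        print(f"adaptive therapy  : progression after ~{time_to_progression(True):.0f} days")
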
    The researchers showed that tumors with higher cell density and those with smaller levels of pre-existing resistance did better under adaptive therapy conditions. They also showed that cell turnover is a key factor that impacts the cost of resistance and the outcome of adaptive therapy by increasing competition between sensitive and resistant cells. To do so, they made use of phase plane techniques, which provide a visual way to dissect the dynamics of mathematical models.
    “I’m a very visual person and find that phase planes make it easy to gain an intuition for a model. You don’t need to manipulate equations, which makes them great for communicating with experimental and clinical collaborators. We are honored that Cancer Research selected our collage of phase planes for their cover and hope this will help make mathematical oncology accessible to more people,” said Maximilian Strobl, lead study author and doctoral candidate at the University of Oxford.
    To confirm their models, the researchers analyzed data from 67 prostate cancer patients undergoing intermittent therapy treatment, a predecessor of adaptive therapy.
    “We find that even though our model was constructed as a conceptual tool, it can recapitulate individual patient dynamics for a majority of patients, and that it can describe patients who continuously respond, as well as those who eventually relapse,” said Anderson.
    While more studies are needed to understand how adaptive therapies may benefit patients, researchers are hopeful their data will lead to better indicators of which tumors will respond to adaptive therapy.
    “With better understanding of tumor growth, resistance costs, and turnover rates, adaptive therapy can be more carefully tailored to patients who stand to benefit from it the most and, more importantly, highlight which patients may benefit from multi-drug approaches,” said Anderson.

  • Switching to firm contracts may prevent natural gas fuel shortages at US power plants

    Between January 2012 and March 2018, there were an average of 1,000 failures each year at large North American gas power plants due to unscheduled fuel shortages and fuel conservation interruptions. This is a problem because the power grid depends on these plants, which in turn depend on reliable natural gas delivery, in order to function: more than a third of all U.S. electricity is generated from natural gas. New research now indicates that these fuel shortages are not due to failures of pipelines and that, in certain areas of the country, a change in how gas is purchased can significantly reduce generator outages.
    The paper, “What Causes Natural Gas Fuel Shortages at U.S. Power Plants?” by researchers at Carnegie Mellon University and the North American Electric Reliability Corporation, was published in Energy Policy.
    Gas shortages at generators have caused simultaneous failures of several power plants. Physical failures and disruptions of the natural gas pipeline network are rare; the authors found that they account for no more than 5% of the power plant generation lost to fuel shortages over the six years examined. The vast majority of natural gas generator outages attributed to fuel unavailability were instead caused by curtailment of gas deliveries when supplies were tight. In the Midwest and Mid-Atlantic states, natural gas was available, but power plants that did not purchase firm contracts were out-prioritized by commercial and industrial customers.
    “While it is unsurprising that plants using the spot market or interruptible pipeline contracts for their fuel were somewhat more likely to experience fuel shortages than those with firm contracts, these contracts can still make a big difference in reliability in certain regions,” says Jay Apt, a Professor and the Co-Director of Carnegie Mellon’s Electricity Industry Center, who co-authored the paper. “Still, firm contracts are not a solution for areas such as New England that have few gas pipelines and further discussion on other mitigation strategies should be explored.”
    Natural gas is increasingly used to generate power in the U.S. and the North American Electric Reliability Corporation (NERC) projects that the natural gas generating capacity will further expand by 12 GW over the next decade, about a 5% increase. Fuel shortages have been a problem at power plants that are used exclusively at times of peak demand, such as during extreme cold and hot weather, as well as at more heavily-used gas power plants. This indicates that fuel shortages affect the power grid’s ability to operate whether it’s responding to an emergency or merely serving load during normal operation.
    Previous research has focused on technical reports from reliability organizations or regional transmission organizations. For the first time, researchers for this paper used historical data collected by NERC to examine fuel shortages between 2012 and 2018 at natural gas power plants in North America to determine their cause. The researchers’ primary goal was to identify how many of these fuel shortage failures were caused by physical interruptions of gas flow as opposed to operational procedures on the pipeline network, such as gas service curtailment priority. They also sought to respond to policy questions regarding whether generators could mitigate fuel shortage failures by switching to firm pipeline contracts.
    Along with analyzing the NERC data from 2012 to 2018, the researchers developed a systematic approach to match the NERC failure data to U.S. Energy Information Administration generator characteristic data in order to evaluate how gas pipeline system characteristics have historically affected natural gas fuel shortage failures. They calculated a time series of unscheduled, unavailable capacity due to fuel shortages and time-matched the beginning times of fuel shortage power plant failure events with time windows of pipeline failures to determine if pipeline failures could have caused fuel shortage outages at power plants. They then completed a similar process of spatial matching of power plants to gas trading hubs in order to assess the historical availability of natural gas for transactions by power plants.
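    The time-matching step is conceptually simple: for each fuel-shortage outage, ask whether its start time falls within, or shortly after, any reported pipeline failure window. A rough sketch of that logic on invented records follows; the real NERC and EIA datasets have different fields and formats.

        from datetime import datetime, timedelta

        # Invented example records, for illustration only.
        outage_starts = [
            ("Plant_A", datetime(2014, 1, 6, 7, 30)),
            ("Plant_B", datetime(2014, 1, 6, 21, 0)),
            ("Plant_C", datetime(2015, 2, 16, 5, 45)),
        ]
        pipeline_failures = [  # (pipeline, window start, window end)
            ("Pipeline_X", datetime(2014, 1, 6, 6, 0), datetime(2014, 1, 6, 12, 0)),
        ]

        BUFFER = timedelta(hours=6)  # let outages shortly after a pipeline failure still match

        def coincides(outage_time, window_start, window_end, buffer=BUFFER):
            return window_start - buffer <= outage_time <= window_end + buffer

        for plant, start in outage_starts:
            hits = [name for name, ws, we in pipeline_failures if coincides(start, ws, we)]
            verdict = f"possible pipeline cause: {hits}" if hits else "no coincident pipeline failure"
            print(f"{plant} outage at {start:%Y-%m-%d %H:%M}: {verdict}")
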
    Ultimately, the researchers observed that both plants with firm contracts and plants without firm contracts experienced fuel shortages and conservation interruptions, but that non-firm plants were overrepresented in the fuel shortage failure data. This suggests that curtailment priority on pipeline networks is the likely reason for most correlated failures. However, the data also suggests that firm contracts will not solve everything and other strategies should be explored, especially in areas such as New England where the pipeline network has historically been constrained.

    Story Source:
    Materials provided by Carnegie Mellon University. Note: Content may be edited for style and length.

  • Supercomputer turns back cosmic clock

    Astronomers have tested a method for reconstructing the state of the early Universe by applying it to 4000 simulated universes using the ATERUI II supercomputer at the National Astronomical Observatory of Japan (NAOJ). They found that together with new observations the method can set better constraints on inflation, one of the most enigmatic events in the history of the Universe. The method can shorten the observation time required to distinguish between various inflation theories.
    Just after the Universe came into existence 13.8 billion years ago, it suddenly increased more than a trillion trillion times in size, in less than a trillionth of a trillionth of a microsecond; but no one knows how or why. This sudden “inflation” is one of the most important mysteries in modern astronomy. Inflation should have created primordial density fluctuations that would have affected the distribution of the galaxies that later developed. Thus, mapping the distribution of galaxies can rule out models for inflation that don’t match the observed data.
    However, processes other than inflation also impact galaxy distribution, making it difficult to derive information about inflation directly from observations of the large-scale structure of the Universe, the cosmic web comprised of countless galaxies. In particular, the gravitationally driven growth of groups of galaxies can obscure the primordial density fluctuations.
    A research team led by Masato Shirasaki, an assistant professor at NAOJ and the Institute of Statistical Mathematics, decided to apply a “reconstruction method” to turn back the clock and remove the gravitational effects from the large-scale structure. They used ATERUI II, the world’s fastest supercomputer dedicated to astronomy simulations, to create 4000 simulated universes and evolve them through gravitationally driven growth. They then applied the reconstruction method to see how well it recovered the starting state of the simulations. The team found that their method can correct for the gravitational effects and improve the constraints on primordial density fluctuations.
    “We found that this method is very effective,” says Shirasaki. “Using this method, we can verify inflation theories with roughly one tenth the amount of data. This method can shorten the required observing time in upcoming galaxy survey missions such as SuMIRe by NAOJ’s Subaru Telescope.”

    Story Source:
    Materials provided by National Institutes of Natural Sciences. Note: Content may be edited for style and length.

  • Graphene 'nano-origami' creates tiniest microchips yet

    The tiniest microchips yet can be made from graphene and other 2D-materials, using a form of ‘nano-origami’, physicists at the University of Sussex have found.
    This is the first time any researchers have done this, and the work is covered in a paper published in the journal ACS Nano.
    By creating kinks in the structure of graphene, researchers at the University of Sussex have made the nanomaterial behave like a transistor. They have shown that a strip of graphene crinkled in this way can behave like a microchip that is around 100 times smaller than conventional microchips.
    Prof Alan Dalton, of the School of Mathematical and Physical Sciences at the University of Sussex, said:
    “We’re mechanically creating kinks in a layer of graphene. It’s a bit like nano-origami.
    “Using these nanomaterials will make our computer chips smaller and faster. It is absolutely critical that this happens as computer manufacturers are now at the limit of what they can do with traditional semiconducting technology. Ultimately, this will make our computers and phones thousands of times faster in the future.
    “This kind of technology — “straintronics” using nanomaterials as opposed to electronics — allows space for more chips inside any device. Everything we want to do with computers — to speed them up — can be done by crinkling graphene like this.”
    Dr Manoj Tripathi, Research Fellow in Nano-structured Materials at the University of Sussex and lead author on the paper, said:
    “Instead of having to add foreign materials into a device, we’ve shown we can create structures from graphene and other 2D materials simply by adding deliberate kinks into the structure. By making this sort of corrugation we can create a smart electronic component, like a transistor, or a logic gate.”
    The development also makes for a greener, more sustainable technology: because no additional materials need to be added and the process works at room temperature rather than at high temperature, it uses less energy to create.

    Story Source:
    Materials provided by University of Sussex. Note: Content may be edited for style and length.