More stories

  • Identifying individual proteins using nanopores and supercomputers

    The amount and types of proteins our cells produce tell us important details about our health and how our bodies work. But the methods we have of identifying and quantifying individual proteins are inadequate to the task. Not only is the diversity of proteins unknown, but often, amino acids are changed after synthesis through post-translational modifications.
    In recent years, much progress has been made in reading DNA using nanopores: tiny holes in a membrane just large enough to let an unspooled DNA strand through, but just barely. By carefully measuring the ionic current through the nanopore as the DNA passes, biologists have been able to rapidly identify the order of bases in the sequence. In fact, this year, nanopores were used to finally sequence the entire human genome, something that was not previously possible with other technologies.
    In new research published in the journal Science, researchers from Delft University of Technology in the Netherlands and the University of Illinois at Urbana-Champaign (UIUC) in the U.S. have extended these DNA nanopore successes and provided a proof of concept that the same method works for single-protein identification, characterizing proteins with single-amino-acid resolution and a vanishingly small (10^-6, or one in a million) margin of error.
    “This nanopore peptide reader provides site-specific information about the peptide’s primary sequence that may find applications in single-molecule protein fingerprinting and variant identification,” the authors wrote.
    The workhorses of our cells, proteins are long peptide strings made of 20 different types of amino acids. The researchers used an enzyme, the DNA helicase Hel308, that can attach to DNA-peptide hybrids and pull them, in a controlled way, through a biological nanopore known as MspA (Mycobacterium smegmatis porin A). They chose Hel308 because it pulls the strand through the pore in observable half-nucleotide steps, which correspond closely to single amino acids.
    Each step through the narrow gate theoretically produces a unique current signal as the amino acid partially blocks an electrical current carried by ions through the nanopore.
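    To make the idea concrete, here is a toy sketch (not the authors’ analysis pipeline) of how per-step current levels could in principle be matched to amino acids; the reference blockade values, the noise level and the residue palette are invented for illustration.

```python
# Illustrative sketch only: a toy nearest-reference classifier for per-step
# nanopore current levels. The reference values are made up for the example;
# real reads are far noisier and require probabilistic models.
import numpy as np

# Hypothetical mean blockade currents (arbitrary units) for a few residues.
REFERENCE_LEVELS = {"G": 0.42, "A": 0.47, "S": 0.51, "L": 0.63, "W": 0.81}

def classify_steps(step_currents, references=REFERENCE_LEVELS):
    """Assign each step's mean current to the closest reference residue."""
    residues, levels = zip(*references.items())
    levels = np.asarray(levels)
    return "".join(residues[int(np.argmin(np.abs(levels - c)))] for c in step_currents)

# One simulated read: true sequence "GASLW" plus a little measurement noise.
rng = np.random.default_rng(0)
true_seq = "GASLW"
observed = [REFERENCE_LEVELS[r] + rng.normal(0, 0.01) for r in true_seq]
print(classify_steps(observed))  # expected to recover "GASLW" at this noise level
```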

  • Data available for training AI to spot skin cancer are insufficient and lacking in pictures of darker skin

    The images and accompanying data available for training artificial intelligence (AI) to spot skin cancer are insufficient and include very few images of darker skin, according to research presented at the NCRI Festival and published in The Lancet Digital Health.
    AI is increasingly being used in medicine because it can make diagnosis of diseases like skin cancer quicker and more effective. However, AI needs to be ‘trained’ on data and images from a large number of patients for whom the diagnosis has already been established, so an AI program depends heavily on the information it is trained on.
    Researchers say there is an urgent need for better sets of data on skin cancers and other skin lesions which contain information on who is represented in the datasets.
    The research was presented by Dr David Wen from the University of Oxford, UK. He said: “AI programs hold a lot of potential for diagnosing skin cancer because it can look at pictures and quickly and cost-effectively evaluate any worrying spots on the skin. However, it’s important to know about the images and patients used to develop programs, as these influence which groups of people the programs will be most effective for in real-life settings. Research has shown that programs trained on images taken from people with lighter skin types only might not be as accurate for people with darker skin, and vice versa.”
    Dr Wen and his colleagues carried out the first ever review of all freely accessible sets of data on skin lesions around the world. They found 21 sets including more than 100,000 pictures.
    Diagnosis of skin cancer normally requires a photo of the worrying lesion as well as a picture taken with a special hand-held magnifier, called a dermatoscope, but only two of the 21 datasets included images taken with both of these methods. The datasets were also missing other important information, such as how images were chosen to be included, and evidence of ethical approval or patient consent.
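    The kind of audit the review performed can be illustrated with a short, hypothetical sketch; the metadata fields below (Fitzpatrick skin type, image modalities, ethics approval) are assumed column names, not those of any specific dataset.

```python
# Hypothetical metadata audit in the spirit of the review: how complete is the
# labelling, and how well are darker skin types represented? Column names and
# the tiny example table are invented for illustration.
import pandas as pd

def audit_dataset(meta: pd.DataFrame) -> dict:
    """Summarise skin-type representation and modality/consent completeness."""
    return {
        "images": len(meta),
        "skin_type_recorded": meta["fitzpatrick_skin_type"].notna().mean(),
        "skin_type_V_or_VI": meta["fitzpatrick_skin_type"].isin([5, 6]).mean(),
        "clinical_and_dermoscopic": (
            meta["has_clinical_image"] & meta["has_dermoscopic_image"]
        ).mean(),
        "ethics_statement": meta["ethics_approval"].notna().mean(),
    }

# Made-up stand-in for one dataset's per-image metadata file.
example = pd.DataFrame({
    "fitzpatrick_skin_type": [2, 3, None, 6],
    "has_clinical_image": [True, False, True, True],
    "has_dermoscopic_image": [True, True, False, True],
    "ethics_approval": ["IRB-123", None, None, "IRB-456"],
})
print(audit_dataset(example))
```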

  • Engineers design autonomous robot that can open doors, find wall outlet to recharge

    One flaw in the notion that robots will take over the world is that the world is full of doors.
    And doors are kryptonite to robots, said Ou Ma, an aerospace engineering professor at the University of Cincinnati.
    “Robots can do many things, but if you want one to open a door by itself and go through the doorway, that’s a tremendous challenge,” Ma said.
    Students in UC’s Intelligent Robotics and Autonomous Systems Laboratory have solved this complex problem in three-dimensional digital simulations. Now they’re building an autonomous robot that not only can open its own doors but also can find the nearest electric wall outlet to recharge without human assistance.
    This simple advance in independence represents a huge leap forward for helper robots that vacuum and disinfect office buildings, airports and hospitals. Helper robots are part of a $27 billion robotics industry, which includes manufacturing and automation.
    The study was published in the journal IEEE Access.
    UC College of Engineering and Applied Science doctoral student Yufeng Sun, the study’s lead author, said some researchers have addressed the problem by scanning an entire room to create a 3D digital model so the robot can locate a door. But that is a time-consuming custom solution that works only for the particular room that is scanned.
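    As a rough illustration of how this kind of autonomy is often decomposed, the sketch below runs a toy mission sequencer for “open the door, drive through, dock at an outlet”; every perception and actuation call is a hypothetical stub, and none of it reflects the UC team’s actual controller or learned policies.

```python
# Toy mission sequencer for "open door, pass through, dock at outlet".
# The robot interface is a hypothetical stub; real systems replace each call
# with perception and manipulation modules (often trained in simulation).
from enum import Enum, auto

class State(Enum):
    FIND_DOOR = auto()
    OPEN_DOOR = auto()
    TRAVERSE = auto()
    FIND_OUTLET = auto()
    DOCK = auto()
    DONE = auto()

def step(state: State, robot) -> State:
    """Advance the mission one decision step based on simple success checks."""
    if state is State.FIND_DOOR:
        return State.OPEN_DOOR if robot.detect_door() else State.FIND_DOOR
    if state is State.OPEN_DOOR:
        return State.TRAVERSE if robot.pull_handle_and_swing() else State.OPEN_DOOR
    if state is State.TRAVERSE:
        return State.FIND_OUTLET if robot.drive_through_doorway() else State.TRAVERSE
    if state is State.FIND_OUTLET:
        return State.DOCK if robot.detect_outlet() else State.FIND_OUTLET
    if state is State.DOCK:
        return State.DONE if robot.plug_in() else State.DOCK
    return State.DONE

class MockRobot:
    """Stand-in that 'succeeds' at everything, just to exercise the sequencer."""
    def __getattr__(self, _name):
        return lambda: True

state, robot = State.FIND_DOOR, MockRobot()
while state is not State.DONE:
    state = step(state, robot)
    print(state.name)
```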

  • Global river database documents 40 years of change

    A first-ever database compiling movement of the largest rivers in the world over time could become a crucial tool for urban planners to better understand the deltas that are home to these rivers and a large portion of Earth’s population.
    The database, created by researchers at The University of Texas at Austin, uses publicly available remote sensing data to show how the river centerlines of the world’s 48 most threatened deltas have moved during the past 40 years. The data can be used to predict how rivers will continue to move over time and help governments manage population density and future development.
    “When we think about river management strategies, we have very little to no information about how rivers are moving over time,” said Paola Passalacqua, an associate professor in the Cockrell School of Engineering’s Department of Civil, Architectural and Environmental Engineering who leads the ongoing river analysis research.
    The research was published today in Proceedings of the National Academy of Sciences.
    The database includes three U.S. rivers: the Mississippi, the Colorado and the Rio Grande. Although some areas of these deltas are experiencing migration, overall, they are mostly stable, the data show. Aggressive containment strategies to keep those rivers in their place, especially near population centers, play a role in that, Passalacqua said.
    Average migration rates for each river delta help identify which areas are stable and which are experiencing major river shifts. The researchers also published more extensive data online that includes information about how different segments of rivers have moved over time. It could help planners see what’s going on in rural areas vs. urban areas when making decisions about how to manage the rivers and what to do with development.
    The researchers leaned on techniques from a variety of disciplines to compile the data and published their methods online. Machine learning and image processing software helped them examine decades’ worth of images. The researchers worked with Alan Bovik of the Department of Electrical and Computer Engineering and doctoral student Leo Isikdogan to develop that technology. They also borrowed from fluid mechanics, using tools designed to monitor water particles in turbulence experiments to instead track changes to river locations over the years.
    “We got the idea to use tools from fluid mechanics while attending a weekly department seminar where other researchers at the university share their work,” said Tess Jarriel, a graduate research assistant in Passalacqua’s lab and lead author of the paper. “It just goes to show how important it is to collaborate across disciplines.”
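    As a much-simplified illustration of what tracking centerline movement involves, the sketch below estimates the average migration between two dates as the mean nearest-vertex distance between centerlines; the coordinates are invented, and the published method uses particle-tracking-style image analysis on satellite-derived centerlines rather than this crude metric.

```python
# Simplified sketch: average migration of a river centerline between two dates,
# measured as the mean nearest-vertex distance from the old line to the new one.
# The toy centerlines below are synthetic.
import numpy as np

def mean_migration(old_xy: np.ndarray, new_xy: np.ndarray) -> float:
    """Mean distance (in input units) from each old vertex to the new centerline."""
    d = np.linalg.norm(old_xy[:, None, :] - new_xy[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

# Toy centerlines for 1985 vs. 2015, in metres: the channel shifts ~50 m laterally.
x = np.linspace(0, 5000, 200)
line_1985 = np.column_stack([x, 100 * np.sin(x / 500)])
line_2015 = np.column_stack([x, 100 * np.sin(x / 500) + 50])

shift = mean_migration(line_1985, line_2015)
print(f"mean centerline migration: {shift:.1f} m over 30 years (~{shift / 30:.1f} m/yr)")
```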
    Rivers that have high sediment flux and flood frequency naturally move more; that mobility is part of an important tradeoff that underpins Passalacqua’s research.
    By knowing more about these river deltas where millions of people live, planners can have a better idea of how best to balance these tradeoffs. Passalacqua, as well as researchers in her lab, have recently published research about these tradeoffs between the need for river freedom and humanity’s desire for stability.
    Passalacqua has been working on this topic for more than eight years. The team and collaborators are in the process of publishing another paper as part of this work that expands beyond the centerlines of rivers and will also look at riverbanks. That additional information will give an even clearer picture about river movement over time, with more nuance, because sides of the river can move in different directions and at different speeds.
    The research was funded through Passalacqua’s National Science Foundation CAREER award; grants from the NSF’s Ocean Sciences and Earth Sciences divisions; and Planet Texas 2050, a UT Austin initiative to support research to make communities more resilient. Co-authors on the paper are Jarriel and postdoctoral researcher John Swartz.
    Story Source:
    Materials provided by University of Texas at Austin.

  • Hidden behavior of supercapacitor materials

    Researchers from the University of Surrey’s Advanced Technology Institute (ATI) and the University of São Paulo have developed a new analysis technique that will help scientists improve renewable energy storage by making better supercapacitors. The team’s new approach enables researchers to investigate the complex inter-connected behaviour of supercapacitor electrodes made from layers of different materials.
    Improvements in energy storage are vital if countries are to deliver carbon reduction targets. The inherent unpredictability of energy from solar and wind means effective storage is required to ensure consistency in supply, and supercapacitors are seen as an important part of the solution.
    Supercapacitors could also be the answer to charging electric vehicles much faster than is possible using lithium-ion batteries. However, more supercapacitor development is needed to enable them to effectively store enough electricity.
    Surrey’s peer-reviewed paper, published in Electrochimica Acta, explains how the research team used a cheap polymer material called Polyaniline (PANI), which stores energy through a mechanism known as pseudocapacitance. PANI is conductive and can be used as the electrode in a supercapacitor device, storing charge by trapping ions. To maximise energy storage, the researchers have developed a novel method of depositing a thin layer of PANI onto a forest of conductive carbon nanotubes. This composite material makes an excellent supercapacitive electrode, but the fact that it is made up of different materials makes it difficult to separate and fully understand the complex processes which occur during charging and discharging. This is a problem across the field of pseudocapacitor development.
    To tackle this problem, the researchers adopted a technique known as the Distribution of Relaxation Times (DRT). This analysis method allows scientists to separate and identify complex electrode processes, making it possible to optimise fabrication methods to maximise useful reactions and reduce reactions that damage the electrode. The technique can also be applied by researchers using different materials in supercapacitor and pseudocapacitor development.
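    The flavour of a DRT analysis can be shown with a minimal numerical sketch: fit a distribution over relaxation times to a synthetic single-RC impedance spectrum using Tikhonov-regularised least squares. The discretisation, regularisation strength and synthetic spectrum below are illustrative choices, not those used in the Surrey study.

```python
# Minimal Distribution of Relaxation Times (DRT) sketch on synthetic data:
# Z(w) ~ R_inf + sum_k g_k / (1 + j*w*tau_k), solved by Tikhonov-regularised
# least squares. Parameters are illustrative only.
import numpy as np

# Synthetic spectrum: series resistance plus one RC element (R = 1 ohm, tau = 1 ms).
freq = np.logspace(-1, 5, 60)                    # Hz
omega = 2 * np.pi * freq
R_inf, R_ct, tau0 = 0.1, 1.0, 1e-3
Z = R_inf + R_ct / (1 + 1j * omega * tau0)

# Relaxation-time grid and kernel matrix.
tau = np.logspace(-6, 1, 80)
K = 1.0 / (1.0 + 1j * omega[:, None] * tau[None, :])

# Stack real and imaginary parts, subtract the high-frequency resistance, and
# add a Tikhonov penalty sqrt(lam)*I to stabilise the ill-posed inversion.
lam = 1e-3
A = np.vstack([K.real, K.imag, np.sqrt(lam) * np.eye(len(tau))])
b = np.concatenate([(Z - R_inf).real, (Z - R_inf).imag, np.zeros(len(tau))])
g, *_ = np.linalg.lstsq(A, b, rcond=None)

print("recovered polarisation resistance:", round(float(g.sum()), 3))  # ~1 ohm expected
print("peak relaxation time:", tau[np.argmax(g)])                      # ~1e-3 s expected
```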
    Ash Stott, a postgraduate research student at the University of Surrey who was the lead scientist on the project, said:
    “The future of global energy use will depend on consumers and industry generating, storing and using energy more efficiently, and supercapacitors will be one of the leading technologies for intermittent storage, energy harvesting and high-power delivery. Our work will help make that happen more effectively.”
    Professor Ravi Silva, Director of the ATI and principal author, said:
    “Following on from world leaders pledging their support for green energy at COP26, our work shows researchers how to accelerate the development of high-performance materials for use as energy storage elements, a key component of solar or wind energy systems. This research brings us one step closer to a clean, cost-effective energy future.”
    Story Source:
    Materials provided by University of Surrey.

  • AI behind deepfakes may power materials design innovations

    The person staring back from the computer screen may not actually exist, thanks to artificial intelligence (AI) capable of generating convincing but ultimately fake images of human faces. Now this same technology may power the next wave of innovations in materials design, according to Penn State scientists.
    “We hear a lot about deepfakes in the news today — AI that can generate realistic images of human faces that don’t correspond to real people,” said Wesley Reinhart, assistant professor of materials science and engineering and Institute for Computational and Data Sciences faculty co-hire at Penn State. “That’s exactly the same technology we used in our research. We’re basically just swapping out this example of images of human faces for elemental compositions of high-performance alloys.”
    The scientists trained a generative adversarial network (GAN) to create novel refractory high-entropy alloys, materials that can withstand ultra-high temperatures while maintaining their strength and that are used in technology from turbine blades to rockets.
    “There are a lot of rules about what makes an image of a human face or what makes an alloy, and it would be really difficult for you to know what all those rules are or to write them down by hand,” Reinhart said. “The whole principle of this GAN is you have two neural networks that basically compete in order to learn what those rules are, and then generate examples that follow the rules.”
    The team combed through hundreds of published examples of alloys to create a training dataset. The network features a generator that creates new compositions and a critic that tries to discern whether they look realistic compared to the training dataset. If the generator is successful, it is able to make alloys that the critic believes are real, and as this adversarial game continues over many iterations, the model improves, the scientists said.
    After this training, the scientists asked the model to focus on creating alloy compositions with specific properties that would be ideal for use in turbine blades.
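    The generator-versus-critic setup described above can be sketched in a few dozen lines; the element palette, the network sizes and the synthetic “training alloys” below are placeholders, not the Penn State group’s model or dataset.

```python
# Minimal GAN sketch over alloy composition vectors (fractions over a fixed
# element palette). Everything here is illustrative: the palette, the network
# sizes and the synthetic stand-in for the curated training set.
import torch
import torch.nn as nn

ELEMENTS = ["Nb", "Mo", "Ta", "W", "V"]   # illustrative refractory palette
LATENT = 8

generator = nn.Sequential(
    nn.Linear(LATENT, 32), nn.ReLU(),
    nn.Linear(32, len(ELEMENTS)), nn.Softmax(dim=-1),   # fractions sum to 1
)
critic = nn.Sequential(
    nn.Linear(len(ELEMENTS), 32), nn.ReLU(),
    nn.Linear(32, 1),
)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(critic.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Synthetic stand-in for published alloys: near-equiatomic compositions plus noise.
real = torch.softmax(torch.randn(256, len(ELEMENTS)) * 0.2, dim=-1)

for _ in range(2000):
    # Critic step: learn to tell curated compositions from generated ones.
    fake = generator(torch.randn(64, LATENT)).detach()
    batch = real[torch.randint(len(real), (64,))]
    d_loss = (loss_fn(critic(batch), torch.ones(64, 1))
              + loss_fn(critic(fake), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: produce compositions the critic accepts as real.
    g_loss = loss_fn(critic(generator(torch.randn(64, LATENT))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Sample a few candidate compositions from the trained generator.
with torch.no_grad():
    for comp in generator(torch.randn(3, LATENT)):
        print({e: round(float(f), 3) for e, f in zip(ELEMENTS, comp)})
```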

  • Biodiversity ‘time machine’ uses artificial intelligence to learn from the past

    Experts can make crucial decisions about future biodiversity management by using artificial intelligence to learn from past environmental change, according to research at the University of Birmingham.
    A team, led by the University’s School of Biosciences, has proposed a ‘time machine framework’ that will help decision-makers effectively go back in time to observe the links between biodiversity, pollution events and environmental changes such as climate change as they occurred and examine the impacts they had on ecosystems.
    In a new paper, published in Trends in Ecology and Evolution, the team sets out how these insights can be used to forecast the future of ecosystem services such as climate change mitigation, food provisioning and clean water.
    Using this information, stakeholders can prioritise actions which will provide the greatest impact.
    Principal investigator, Dr Luisa Orsini, is an Associate Professor at the University of Birmingham and Fellow of The Alan Turing Institute. She explained: “Biodiversity sustains many ecosystem services. Yet these are declining at an alarming rate. As we discuss vital issues like these at the COP26 Summit in Glasgow, we might be more aware than ever that future generations may not be able to enjoy nature’s services if we fail to protect biodiversity.”
    Biodiversity loss happens over many years and is often caused by the cumulative effect of multiple environmental threats. Only by quantifying biodiversity before, during and after pollution events, can the causes of biodiversity and ecosystem service loss be identified, say the researchers.
    Managing biodiversity whilst ensuring the delivery of ecosystem services is a complex problem because of limited resources, competing objectives and the need for economic profitability. Protecting every species is impossible. The time machine framework offers a way to prioritize conservation approaches and mitigation interventions.
    Dr Orsini added: “We have already seen how a lack of understanding of the interlinked processes underpinning ecosystem services has led to mismanagement, with negative impacts on the environment, the economy and on our wellbeing. We need a whole-system, evidence-based approach in order to make the right decisions in the future. Our time-machine framework is an important step towards that goal.”
    Lead author, Niamh Eastwood, is a PhD student at the University of Birmingham. She said: “We are working with stakeholders (e.g. UK Environment Agency) to make this framework accessible to regulators and policy makers. This will support decision-making in regulation and conservation practices.”
    The framework draws on the expertise of biologists, ecologists, environmental scientists, computer scientists and economists. It is the result of a cross-disciplinary collaboration among the University of Birmingham, The Alan Turing Institute, the University of Leeds, Cardiff University, the University of California, Berkeley, The American University of Paris and Goethe University Frankfurt.
    Story Source:
    Materials provided by University of Birmingham.

  • How monitoring a quantum Otto engine affects its performance

    A classical Otto engine consists of a gas confined to a piston, which undergoes four subsequent strokes: the gas is first compressed, then heated up, expanded, and finally cooled down to its initial temperature. Today, with significant advancements in nano-fabrication, the quantum revolution is upon us, bringing quantum heat engines into the limelight.
    Like their classical counterparts, quantum heat engines can be operated in various protocols, which might be continuous or cyclic. Unlike a classical engine, which uses a macroscopic amount of the working substance, the working substance of a quantum engine has pronounced quantum features. The most prominent of these is the discreteness of the possible energies it can take. Even more outlandish from the classical point of view is the fact that a quantum system may stay in two or more of its allowed energies at the same time. This property, which has no classical analog, is known as ‘coherence’. Otherwise, a quantum Otto engine is characterized by the same four strokes as its classical counterpart.
    Determining the quantum Otto engine’s performance metrics, such as power output or efficiency, is the key to improving its design and tailoring better working substances. A direct diagnosis of such metrics requires measuring the energies of the engine at the beginning and end of each stroke. While a classical engine is only negligibly affected by measurements, in a quantum engine the act of measurement itself severely disturbs the engine’s quantum state. Most importantly, any coherence in the system at the end of the cycle would be completely removed by the measurement.
    It has long been believed that these measurement-induced effects are irrelevant to the understanding of quantum engines, and they have therefore been neglected in traditional quantum thermodynamics. Moreover, little thought has been put into the design of monitoring protocols that yield a reliable diagnosis of the engine’s performance while minimally altering it.
    New research performed at the Center for Theoretical Physics of Complex Systems within the Institute for Basic Science, South Korea, may change this perspective. The researchers investigated the impact of different measurement-based diagnostic schemes on the performance of a quantum Otto engine and discovered a minimally invasive measurement method that preserves coherence across the cycles.
    The researchers used the so-called ‘repeated contacts scheme’, in which the engine’s state is recorded by an ancillary probe and measurements of the probe are performed only at the end of the engine’s working cycles. This bypasses the need to measure the engine repeatedly after each stroke and avoids undesirable measurement-induced effects such as the removal of any coherence built up during the cycle. The preservation of coherence throughout the engine’s lifetime enhanced critical performance metrics like the maximum power output and reliability, making the engine more capable and dependable. According to Prof. Thingna, “this is the first example in which the influence of an experimenter, who wants to know whether the engine does what it is designed to do, has been properly considered.”
    Covering a broad spectrum of different modes of operation of engines whose working substance has just two quantum states, the researchers found that only for idealized cycles that run infinitely slowly does the choice of monitoring scheme make no difference. All engines that run in finite time, and hence are of practical interest, deliver considerably better power output and reliability when they are monitored according to the repeated contacts scheme.
    Overall, the researchers concluded that accounting for the nature of the measurement technique can bring theory closer to experimental data. Hence, it is vital to take these factors into account when monitoring and testing quantum heat engines. This research was published in PRX Quantum.
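    For reference, the textbook quantities at stake here, the work per cycle and the efficiency of an ideal (unmonitored, infinitely slow) two-level Otto cycle, can be computed in a few lines; the energy gaps and bath temperatures below are arbitrary, and the sketch does not model the measurement schemes compared in the paper.

```python
# Ideal two-level quantum Otto cycle (no monitoring, perfectly slow strokes).
# This only fixes the textbook quantities discussed above; it does not model
# the repeated-contacts vs. per-stroke measurement schemes from the paper.
import numpy as np

k_B = 1.0                      # natural units

def excited_population(gap, T):
    """Thermal excited-state population of a two-level system with energy gap `gap`."""
    return 1.0 / (1.0 + np.exp(gap / (k_B * T)))

# The gap is squeezed/stretched during the work strokes; the qubit thermalises
# with the cold (hot) bath at the small (large) gap. Values are arbitrary.
gap_cold, gap_hot = 1.0, 2.5
T_cold, T_hot = 0.5, 2.0

p_cold = excited_population(gap_cold, T_cold)   # after contact with the cold bath
p_hot = excited_population(gap_hot, T_hot)      # after contact with the hot bath

heat_in = gap_hot * (p_hot - p_cold)            # absorbed on the hot isochore
work_out = (gap_hot - gap_cold) * (p_hot - p_cold)
print(f"work per cycle: {work_out:.3f}")
print(f"efficiency: {work_out / heat_in:.3f} "
      f"(Otto value 1 - gap_cold/gap_hot = {1 - gap_cold / gap_hot:.3f})")
```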
    Story Source:
    Materials provided by Institute for Basic Science.