More stories

  • Love-hate relationship of solvent and water leads to better biomass breakup

    Scientists at the Department of Energy’s Oak Ridge National Laboratory used neutron scattering and supercomputing to better understand how an organic solvent and water work together to break down plant biomass, creating a pathway to significantly improve the production of renewable biofuels and bioproducts.
    The discovery, published in the Proceedings of the National Academy of Sciences, sheds light on a previously unknown nanoscale mechanism that occurs during biomass deconstruction and identifies optimal temperatures for the process.
    “Understanding this fundamental mechanism can aid in the rational design of even more efficient technologies for processing biomass,” said Brian Davison, ORNL chief scientist for systems biology and biotechnology.
    Producing biofuels from plant material requires breaking its polymeric cellulose and hemicellulose components into fermentable sugars while removing the intact lignin — a structural polymer also found in plant cell walls — for use in value-added bioproducts such as plastics. Liquid chemicals known as solvents are often employed in this process to dissolve the biomass into its molecular components.
    Paired with water, a solvent called tetrahydrofuran, or THF, is particularly effective at breaking down biomass. Discovered by Charles Wyman and Charles Cai of the University of California, Riverside, during a study supported by DOE’s BioEnergy Science Center at ORNL, the THF-water mixture produces high yields of sugars while preserving the structural integrity of lignin for use in bioproducts. The success of these cosolvents intrigued ORNL scientists.
    “Using THF and water to pretreat biomass was a very important technological advance,” said ORNL’s Loukas Petridis of the University of Tennessee/ORNL Center for Molecular Biophysics. “But the science behind it was not known.”
    Petridis and his colleagues first ran a series of molecular dynamics simulations on the Titan and Summit supercomputers at the Oak Ridge Leadership Computing Facility, a DOE Office of Science user facility at ORNL. Their simulations showed that THF and water, which stay mixed in bulk, separate at the nanoscale to form clusters on biomass.

    THF selectively forms nanoclusters around the hydrophobic, or water-repelling, portions of lignin and cellulose while complementary water-rich nanoclusters form on the hydrophilic, or water-loving, portions. This dual action drives the deconstruction of biomass as each of the solvents dissolves portions of the cellulose while preventing lignin from forming clumps that would limit access to the cellulosic sugars — a common occurrence when biomass is mixed in water alone.
    “This was an interesting finding,” Petridis said. “But it is always important to validate simulations with experiments to make sure that what the simulations report corresponds to reality.”
    This phenomenon occurs at the tiny scale of three to four nanometers. For comparison, a human hair is typically 80,000 to 100,000 nanometers wide. Demonstrating the effect in a physical experiment was therefore a significant challenge.
    Scientists at the High Flux Isotope Reactor, a DOE Office of Science user facility at ORNL, overcame this challenge using neutron scattering and a technique called contrast matching. This technique selectively replaces hydrogen atoms with deuterium, a form of hydrogen with an added neutron, to make certain components of the complex mixture in the experiment more visible to neutrons than others.
    “Neutrons see a hydrogen atom and a deuterium atom very differently,” said ORNL’s Sai Venkatesh Pingali, a Bio-SANS instrument scientist who performed the neutron scattering experiments. “We use this approach to selectively highlight parts of the whole system, which otherwise would not be visible, especially when they’re really small.”
    The use of deuterium rendered the cellulose invisible to neutrons and made the THF nanoclusters visually pop out against the cellulose like the proverbial needle in a haystack.
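    As a rough illustration of the idea behind contrast matching (not the team’s actual analysis), the short calculation below estimates the neutron scattering length density of H2O/D2O mixtures from standard tabulated scattering lengths and finds the level of deuteration at which the solvent would “match” an assumed value for a biomass component. The cellulose target used here is a placeholder of the right order of magnitude, not a number from this study.
    ```python
    # Illustrative sketch of neutron contrast matching (not the ORNL analysis).
    # Coherent scattering lengths (femtometers) are standard tabulated values.
    N_A = 6.022e23  # Avogadro's number, mol^-1

    b = {"H": -3.739, "D": 6.671, "O": 5.803}  # fm

    def sld_water(d2o_fraction, density_g_cm3=1.0, molar_mass=18.015):
        """Scattering length density of an H2O/D2O mixture, in 1e-6 / angstrom^2.

        Density and molar mass of light water are used throughout (a simplification).
        """
        # Average scattering length per water molecule (fm)
        b_mol = 2 * ((1 - d2o_fraction) * b["H"] + d2o_fraction * b["D"]) + b["O"]
        molecules_per_cm3 = density_g_cm3 / molar_mass * N_A
        # Units: 1 fm = 1e-13 cm, and 1 cm^-2 = 1e-16 angstrom^-2
        sld_per_A2 = b_mol * 1e-13 * molecules_per_cm3 * 1e-16
        return sld_per_A2 * 1e6  # report in units of 1e-6 / angstrom^2

    # Placeholder SLD for protiated cellulose, order of ~1.8e-6 / A^2 (assumption).
    target = 1.8
    fractions = [i / 100 for i in range(101)]
    match = min(fractions, key=lambda f: abs(sld_water(f) - target))
    print(f"Pure H2O SLD ~ {sld_water(0.0):+.2f} x 1e-6 / A^2")
    print(f"Pure D2O SLD ~ {sld_water(1.0):+.2f} x 1e-6 / A^2")
    print(f"Approx. D2O fraction matching the placeholder cellulose SLD: {match:.2f}")
    ```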
    To mimic biorefinery processing, researchers developed an experimental setup to heat the mixture of biomass and solvents and observe the changes in real time. The team found the action of the THF-water mix on biomass effectively kept lignin from clumping at all temperatures, enabling easier deconstruction of the cellulose. Increasing the temperature to 150 degrees Celsius triggered cellulose microfibril breakdown. These data provide new insights into the ideal processing temperature for these cosolvents to deconstruct biomass.
    “This was a collaborative effort with biologists, computational experts and neutron scientists working in tandem to answer the scientific challenge and provide industry-relevant knowledge,” Davison said. “The method could fuel further discoveries about other solvents and help grow the bioeconomy.”

  • Giving robots human-like perception of their physical environments

    Wouldn’t we all appreciate a little help around the house, especially if that help came in the form of a smart, adaptable, uncomplaining robot? Sure, there are the one-trick Roombas of the appliance world. But MIT engineers are envisioning robots more like home helpers, able to follow high-level, Alexa-type commands, such as “Go to the kitchen and fetch me a coffee cup.”
    To carry out such high-level tasks, researchers believe robots will have to be able to perceive their physical environment as humans do.
    “In order to make any decision in the world, you need to have a mental model of the environment around you,” says Luca Carlone, assistant professor of aeronautics and astronautics at MIT. “This is something so effortless for humans. But for robots it’s a painfully hard problem, where it’s about transforming the pixel values that they see through a camera into an understanding of the world.” Now Carlone and his students have developed a representation of spatial perception for robots that is modeled after the way humans perceive and navigate the world.
    The new model, which they call 3D Dynamic Scene Graphs, enables a robot to quickly generate a 3D map of its surroundings that also includes objects and their semantic labels (a chair versus a table, for instance), as well as people, rooms, walls, and other structures that the robot is likely seeing in its environment.
    The model also allows the robot to extract relevant information from the 3D map, to query the location of objects and rooms, or the movement of people in its path.

    “This compressed representation of the environment is useful because it allows our robot to quickly make decisions and plan its path,” Carlone says. “This is not too far from what we do as humans. If you need to plan a path from your home to MIT, you don’t plan every single position you need to take. You just think at the level of streets and landmarks, which helps you plan your route faster.”
    Beyond domestic helpers, Carlone says robots that adopt this new kind of mental model of the environment may also be suited for other high-level jobs, such as working side by side with people on a factory floor or exploring a disaster site for survivors.
    He and his students, including lead author and MIT graduate student Antoni Rosinol, will present their findings this week at the Robotics: Science and Systems virtual conference.
    A mapping mix
    At the moment, robotic vision and navigation have advanced mainly along two routes: 3D mapping, which enables robots to reconstruct their environment in three dimensions as they explore in real time; and semantic segmentation, which helps a robot classify features in its environment as semantic objects, such as a car versus a bicycle, and which so far is mostly done on 2D images.

    Carlone and Rosinol’s new model of spatial perception is the first to generate a 3D map of the environment in real time, while also labeling objects, people (which are dynamic, unlike objects), and structures within that 3D map.
    The key component of the team’s new model is Kimera, an open-source library that the team previously developed to simultaneously construct a 3D geometric model of an environment, while encoding the likelihood that an object is, say, a chair versus a desk.
    “Like the mythical creature that is a mix of different animals, we wanted Kimera to be a mix of mapping and semantic understanding in 3D,” Carlone says.
    Kimera works by taking in streams of images from a robot’s camera, as well as inertial measurements from onboard sensors, to estimate the trajectory of the robot or camera and to reconstruct the scene as a 3D mesh, all in real-time.
    To generate a semantic 3D mesh, Kimera uses an existing neural network trained on millions of real-world images, to predict the label of each pixel, and then projects these labels in 3D using a technique known as ray-casting, commonly used in computer graphics for real-time rendering.
    The result is a map of a robot’s environment that resembles a dense, three-dimensional mesh, where each face is color-coded as part of the objects, structures, and people within the environment.
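    As a rough sketch of the label-projection idea described above (a simplified stand-in, not Kimera’s actual ray-casting code), the example below back-projects a labeled depth image into 3D using a pinhole camera model and a known camera pose, then keeps a majority-vote label per coarse voxel.
    ```python
    # Simplified sketch: projecting per-pixel semantic labels into 3D
    # (a stand-in for the ray-casting step described above, not Kimera itself).
    import numpy as np
    from collections import Counter, defaultdict

    def backproject_labels(depth, labels, K, R, t, voxel=0.2):
        """Return {voxel index: majority label} from one labeled depth frame.

        depth  : (H, W) depth in meters
        labels : (H, W) integer class id per pixel (e.g. from a 2D CNN)
        K      : (3, 3) camera intrinsics
        R, t   : camera-to-world rotation (3, 3) and translation (3,)
        """
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
        rays = (np.linalg.inv(K) @ pix.T).T                  # camera-frame rays
        pts_cam = rays * depth.reshape(-1, 1)                # scale by depth
        pts_world = (R @ pts_cam.T).T + t                    # move to world frame

        votes = defaultdict(Counter)
        for p, lab in zip(pts_world, labels.reshape(-1)):
            votes[tuple(np.floor(p / voxel).astype(int))][int(lab)] += 1
        return {vox: c.most_common(1)[0][0] for vox, c in votes.items()}

    # Toy usage: a flat plane 2 m away, every pixel labeled as class 1 ("wall")
    K = np.array([[500., 0, 64], [0, 500., 48], [0, 0, 1]])
    depth = np.full((96, 128), 2.0)
    labels = np.ones((96, 128), dtype=int)
    vox_map = backproject_labels(depth, labels, K, np.eye(3), np.zeros(3))
    print(len(vox_map), "labeled voxels")
    ```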
    A layered scene
    If a robot were to rely on this mesh alone to navigate through its environment, it would be a computationally expensive and time-consuming task. So the researchers built off Kimera, developing algorithms to construct 3D dynamic “scene graphs” from Kimera’s initial, highly dense, 3D semantic mesh.
    Scene graphs are popular computer graphics models that manipulate and render complex scenes, and are typically used in video game engines to represent 3D environments.
    In the case of the 3D dynamic scene graphs, the associated algorithms abstract, or break down, Kimera’s detailed 3D semantic mesh into distinct semantic layers, such that a robot can “see” a scene through a particular layer, or lens. The layers progress in hierarchy from objects and people, to open spaces and structures such as walls and ceilings, to rooms, corridors, and halls, and finally whole buildings.
    Carlone says this layered representation avoids a robot having to make sense of billions of points and faces in the original 3D mesh.
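    To make the layered idea concrete, here is a hypothetical, minimal scene-graph sketch; the node names, layers and query method are illustrative inventions, not the project’s actual data structures or API. The point is that a high-level query touches a handful of nodes rather than the billions of mesh elements mentioned above.
    ```python
    # Hypothetical, minimal layered scene-graph sketch (illustrative only;
    # not the 3D Dynamic Scene Graphs / Kimera API).
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        layer: str                     # "building" | "room" | "place" | "object" | "agent"
        position: tuple = (0.0, 0.0, 0.0)
        children: list = field(default_factory=list)

        def add(self, child):
            self.children.append(child)
            return child

        def find(self, layer=None, label=None):
            """Depth-first search restricted by layer and/or name substring."""
            hits = []
            if (layer is None or self.layer == layer) and (label is None or label in self.name):
                hits.append(self)
            for c in self.children:
                hits.extend(c.find(layer, label))
            return hits

    # Build a tiny scene: building -> rooms -> objects/people
    building = Node("office_building", "building")
    kitchen = building.add(Node("kitchen", "room"))
    corridor = building.add(Node("corridor", "room"))
    kitchen.add(Node("red_mug", "object", position=(2.1, 0.4, 0.9)))
    corridor.add(Node("person_1", "agent", position=(5.0, 1.2, 0.0)))

    # High-level query: locate mugs without touching the dense mesh
    for mug in building.find(layer="object", label="mug"):
        print(mug.name, "at", mug.position)
    ```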
    Within the layer of objects and people, the researchers have also been able to develop algorithms that track the movement and the shape of humans in the environment in real time.
    The team tested their new model in a photo-realistic simulator, developed in collaboration with MIT Lincoln Laboratory, that simulates a robot navigating through a dynamic office environment filled with people moving around.
    “We are essentially enabling robots to have mental models similar to the ones humans use,” Carlone says. “This can impact many applications, including self-driving cars, search and rescue, collaborative manufacturing, and domestic robotics.
    Another domain is virtual and augmented reality (AR). Imagine wearing AR goggles that run our algorithm: The goggles would be able to assist you with queries such as ‘Where did I leave my red mug?’ and ‘What is the closest exit?’
    You can think about it as an Alexa which is aware of the environment around you and understands objects, humans, and their relations.”
    “Our approach has just been made possible thanks to recent advances in deep learning and decades of research on simultaneous localization and mapping,” Rosinol says. “With this work, we are making the leap toward a new era of robotic perception called spatial-AI, which is just in its infancy but has great potential in robotics and large-scale virtual and augmented reality.”
    This research was funded, in part, by the Army Research Laboratory, the Office of Naval Research, and MIT Lincoln Laboratory.
    Paper: “3D Dynamic scene graphs: Actionable spatial perception with places, objects, and humans” https://roboticsconference.org/program/papers/79/
    Video: https://www.youtube.com/watch?v=SWbofjhyPzI

  • Researchers give robots intelligent sensing abilities to carry out complex tasks

    Picking up a can of soft drink may be a simple task for humans, but it is a complex one for a robot: it has to locate the object, deduce its shape, determine the right amount of strength to use, and grasp the object without letting it slip. Most of today’s robots operate solely based on visual processing, which limits their capabilities. To perform more complex tasks, robots have to be equipped with an exceptional sense of touch and the ability to process sensory information quickly and intelligently.
    A team of computer scientists and materials engineers from the National University of Singapore (NUS) has recently demonstrated an exciting approach to make robots smarter. They developed a sensory integrated artificial brain system that mimics biological neural networks, which can run on a power-efficient neuromorphic processor, such as Intel’s Loihi chip. This novel system integrates artificial skin and vision sensors, equipping robots with the ability to draw accurate conclusions about the objects they are grasping based on the data captured by the vision and touch sensors in real-time.
    “The field of robotic manipulation has made great progress in recent years. However, fusing both vision and tactile information to provide a highly precise response in milliseconds remains a technology challenge. Our recent work combines our ultra-fast electronic skins and nervous systems with the latest innovations in vision sensing and AI for robots so that they can become smarter and more intuitive in physical interactions,” said Assistant Professor Benjamin Tee from the NUS Department of Materials Science and Engineering. He co-leads this project with Assistant Professor Harold Soh from the Department of Computer Science at the NUS School of Computing.
    The findings of this cross-disciplinary work were presented at the Robotics: Science and Systems conference in July 2020.
    Human-like sense of touch for robots
    Enabling a human-like sense of touch in robotics could significantly improve current functionality, and even lead to new uses. For example, on the factory floor, robotic arms fitted with electronic skins could easily adapt to different items, using tactile sensing to identify and grip unfamiliar objects with the right amount of pressure to prevent slipping.

    In the new robotic system, the NUS team applied an advanced artificial skin known as Asynchronous Coded Electronic Skin (ACES) developed by Asst Prof Tee and his team in 2019. This novel sensor detects touches more than 1,000 times faster than the human sensory nervous system. It can also identify the shape, texture and hardness of objects 10 times faster than the blink of an eye.
    “Making an ultra-fast artificial skin sensor solves about half the puzzle of making robots smarter. They also need an artificial brain that can ultimately achieve perception and learning as another critical piece in the puzzle,” added Asst Prof Tee, who is also from the NUS Institute for Health Innovation & Technology.
    A human-like brain for robots
    To break new ground in robotic perception, the NUS team explored neuromorphic technology — an area of computing that emulates the neural structure and operation of the human brain — to process sensory data from the artificial skin. As Asst Prof Tee and Asst Prof Soh are members of the Intel Neuromorphic Research Community (INRC), it was a natural choice to use Intel’s Loihi neuromorphic research chip for their new robotic system.
    In their initial experiments, the researchers fitted a robotic hand with the artificial skin, and used it to read braille, passing the tactile data to Loihi via the cloud to convert the micro bumps felt by the hand into a semantic meaning. Loihi achieved over 92 per cent accuracy in classifying the Braille letters, while using 20 times less power than a normal microprocessor.

    Asst Prof Soh’s team improved the robot’s perception capabilities by combining both vision and touch data in a spiking neural network. In their experiments, the researchers tasked a robot equipped with both artificial skin and vision sensors to classify various opaque containers containing differing amounts of liquid. They also tested the system’s ability to identify rotational slip, which is important for stable grasping.
    In both tests, the spiking neural network that used both vision and touch data was able to classify objects and detect object slippage. The classification was 10 per cent more accurate than a system that used only vision. Moreover, using a technique developed by Asst Prof Soh’s team, the neural networks could classify the sensory data while it was being accumulated, unlike the conventional approach where data is classified after it has been fully gathered. In addition, the researchers demonstrated the efficiency of neuromorphic technology: Loihi processed the sensory data 21 per cent faster than a top performing graphics processing unit (GPU), while using more than 45 times less power.
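    As a deliberately simplified illustration of classifying while evidence accumulates (a toy stand-in, not the NUS spiking network or its Loihi implementation), the sketch below adds weighted evidence from two made-up event streams, one standing in for vision and one for touch, and reads out a running guess at any point rather than waiting for all the data.
    ```python
    # Toy evidence accumulator for two event streams (illustrative only;
    # not the NUS spiking network or the Loihi implementation).
    import numpy as np

    rng = np.random.default_rng(0)
    n_classes = 3

    # Made-up per-class weights for "vision" and "touch" event channels
    w_vision = rng.normal(size=(n_classes, 16))
    w_touch = rng.normal(size=(n_classes, 8))

    def stream(true_class, w, n_events, noise=0.5):
        """Generate sparse binary events that weakly favor the true class."""
        for _ in range(n_events):
            channel = rng.integers(w.shape[1])
            bias = w[true_class, channel]
            yield channel, (1 if bias + noise * rng.normal() > 0 else 0)

    true_class = 1
    evidence = np.zeros(n_classes)

    # Interleave vision and touch events and classify as data accumulate
    for t, ((vc, vs), (tc, ts)) in enumerate(
            zip(stream(true_class, w_vision, 200), stream(true_class, w_touch, 200))):
        evidence *= 0.99                      # leaky accumulation
        evidence += vs * w_vision[:, vc]      # add vision evidence
        evidence += ts * w_touch[:, tc]       # add touch evidence
        if t % 50 == 49:
            print(f"after {t + 1} time steps, running guess: class {evidence.argmax()}")
    ```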
    Asst Prof Soh shared, “We’re excited by these results. They show that a neuromorphic system is a promising piece of the puzzle for combining multiple sensors to improve robot perception. It’s a step towards building power-efficient and trustworthy robots that can respond quickly and appropriately in unexpected situations.”
    “This research from the National University of Singapore provides a compelling glimpse to the future of robotics where information is both sensed and processed in an event-driven manner combining multiple modalities. The work adds to a growing body of results showing that neuromorphic computing can deliver significant gains in latency and power consumption once the entire system is re-engineered in an event-based paradigm spanning sensors, data formats, algorithms, and hardware architecture,” said Mr Mike Davies, Director of Intel’s Neuromorphic Computing Lab.
    This research was supported by the National Robotics R&D Programme Office (NR2PO), a set-up that nurtures the robotics ecosystem in Singapore through funding research and development (R&D) to enhance the readiness of robotics technologies and solutions. Key considerations for NR2PO’s R&D investments include the potential for impactful applications in the public sector, and the potential to create differentiated capabilities for our industry.
    Next steps
    Moving forward, Asst Prof Tee and Asst Prof Soh plan to further develop their novel robotic system for applications in the logistics and food manufacturing industries, where there is a high demand for robotic automation, especially in the post-COVID era.
    Video: https://www.youtube.com/watch?v=08XyaXlxWno&feature=emb_logo

  • Move over, Siri! Researchers develop improv-based chatbot

    What would conversations with Alexa be like if she were a regular at The Second City?
    Jonathan May, research lead at the USC Information Sciences Institute (ISI) and research assistant professor of computer science at USC’s Viterbi School of Engineering, is exploring this question with Justin Cho, an ISI programmer analyst and prospective USC Viterbi Ph.D. student, through their Selected Pairs Of Learnable ImprovisatioN (SPOLIN) project. Their research incorporates improv dialogues into chatbots to produce more engaging interactions.
    The SPOLIN research collection is made up of over 68,000 English dialogue pairs, or conversational dialogues of a prompt and subsequent response. These pairs model yes-and dialogues, a foundational principle in improvisation that encourages more grounded and relatable conversations. After gathering the data, Cho and May built SpolinBot, an improv agent programmed with the first yes-and research collection large enough to train a chatbot.
    The project research paper, “Grounding Conversations with Improvised Dialogues,” was presented on July 6 at the Association for Computational Linguistics conference, held July 5-10.
    Finding Common Ground
    May was looking for new research ideas in his work. His love for language analysis had led him to work on Natural Language Processing (NLP) projects, and he began searching for more interesting forms of data he could work with.

    “I’d done some improv in college and pined for those days,” he said. “Then a friend who was in my college improv troupe suggested that it would be handy to have a ‘yes-and’ bot to practice with, and that gave me the inspiration — it wouldn’t just be fun to make a bot that can improvise, it would be practical!”
    The deeper May explored this idea, the more valid he found it to be. Yes-and is a pillar of improvisation that prompts a participant to accept the reality that another participant says (“yes”) and then build on that reality by providing additional information (“and”). This technique is key in establishing a common ground in interaction. As May put it, “Yes-and is the improv community’s way of saying ‘grounding.'”
    Yes-ands are important because they help participants build a reality together. In movie scripts, for example, maybe 10-11% of the lines can be considered yes-ands, whereas in improv, at least 25% of the lines are yes-ands. This is because, unlike movies, which have settings and characters that are already established for audiences, improvisers act without scene, props, or any objective reality.
    “Because improv scenes are built from almost no established reality, dialogue taking place in improv actively tries to reach mutual assumptions and understanding,” said Cho. “This makes dialogue in improv more interesting than most ordinary dialogue, which usually takes place with many assumptions already in place (from common sense, visual signals, etc.).”
    But finding a source to extract improv dialogue from was a challenge. Initially, May and Cho examined typical dialogue sets such as movie scripts and subtitle collections, but those sources didn’t contain enough yes-ands to mine. Moreover, it can be difficult to find recorded, let alone transcribed, improv.

    The Friendly Neighborhood Improv Bot
    Before visiting USC as an exchange student in Fall 2018, Cho reached out to May, inquiring about NLP research projects that he could participate in. Once Cho came to USC, he learned about the improv project that May had in mind.
    “I was interested in how it touched on a niche that I wasn’t familiar with, and I was especially intrigued that there was little to no prior work in this area,” Cho said. “I was hooked when Jon said that our project will be answering a question that hasn’t even been asked yet: the question of how modeling grounding in improv through the yes-and act can contribute to improving dialogue systems.”
    Cho investigated multiple approaches to gathering improv data. He finally came across Spontaneanation, an improv podcast hosted by prolific actor and comedian Paul F. Tompkins that ran from 2015 to 2019.
    With its open-topic episodes of roughly 30 minutes of continuous improvisation each, its high-quality recordings, and its substantial size, Spontaneanation was the perfect source of yes-ands for the project. The duo fed their Spontaneanation data into a program, and SpolinBot was born.
    “One of the cool parts of the project is that we figured out a way to just use improv,” May explained. “Spontaneanation was a great resource for us, but is fairly small as data sets go; we only got about 10,000 yes-ands from it. But we used those yes-ands to build a classifier (program) that can look at new lines of dialogue and determine whether they’re yes-ands.”
    Working with improv dialogues first helped the researchers find yes-ands from other sources as well, as most of the SPOLIN data comes from movie scripts and subtitles. “Ultimately, the SPOLIN corpus contains more than five times as many yes-ands from non-improv sources than from improv, but we only were able to get those yes-ands by starting with improv,” May said.
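    To make the classifier step concrete, here is one minimal way such a yes-and detector could be put together; the model, features and tiny training examples below are illustrative stand-ins, not the SPOLIN authors’ actual setup. Each prompt-response pair is turned into text features and scored by a binary classifier.
    ```python
    # Minimal illustration of a yes-and classifier (not the SPOLIN model;
    # the training examples below are made up for demonstration).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    pairs = [
        ("I can't believe we're stuck on the moon.",
         "Yes, and the oxygen tank only has an hour left, so let's move."),      # yes-and
        ("This soup tastes like my grandmother's recipe.",
         "She always added too much thyme, and now so do you, apparently."),     # yes-and
        ("Want to grab lunch later?", "No, I'm busy."),                          # not yes-and
        ("It's freezing in here.", "What time is it?"),                          # not yes-and
    ]
    labels = [1, 1, 0, 0]

    # Join prompt and response into one string per pair
    texts = [f"{p} [SEP] {r}" for p, r in pairs]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(texts, labels)

    new_pair = ("The dragon just landed on the roof. [SEP] "
                "And it's wearing my hat, so it must be the one from the pet store.")
    print("yes-and probability:", clf.predict_proba([new_pair])[0, 1])
    ```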
    SpolinBot has a few controls that can refine its responses, taking them from safe and boring to funny and wacky, and also generates five response options that users can choose from to continue the conversation.
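    One common way to implement this kind of safe-to-wacky control and multiple candidate replies is to sample from a language model at different temperatures and return several sequences. The sketch below uses an off-the-shelf GPT-2 from the Hugging Face transformers library purely as a stand-in; SpolinBot’s own model, decoding settings and prompt format are not shown here.
    ```python
    # Illustrative decoding sketch (a generic GPT-2 stand-in, not SpolinBot itself).
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "A: I just found a treasure map in my attic.\nB:"
    ids = tok.encode(prompt, return_tensors="pt")

    # Lower temperature tends to give safer replies; higher temperature, wackier ones.
    for temperature in (0.7, 1.3):
        outputs = model.generate(
            ids,
            do_sample=True,
            temperature=temperature,
            top_p=0.9,
            num_return_sequences=5,   # five candidate responses to choose from
            max_length=ids.shape[1] + 30,
            pad_token_id=tok.eos_token_id,
        )
        print(f"--- temperature {temperature} ---")
        for out in outputs:
            print(tok.decode(out[ids.shape[1]:], skip_special_tokens=True).strip())
    ```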
    SpolinBot #Goals
    The duo has a lot of plans for SpolinBot, along with extending its conversational abilities beyond yes-ands. “We want to explore other factors that make improv interesting, such as character-building, scene-building, ‘if this (usually an interesting anomaly) is true, what else is also true?,’ and call-backs (referring to objects/events mentioned in previous dialogue turns),” Cho said. “We have a long way to go, and that makes me more excited for what I can explore throughout my PhD and beyond.”
    May echoed Cho’s sentiments. “Ultimately, we want to build a good conversational partner and a good creative partner,” he said, noting that even in improv, yes-ands only mark the beginning of a conversation. “Today’s bots, SpolinBot included, aren’t great at keeping the thread of the conversation going. There should be a sense that both participants aren’t just establishing a reality, but are also experiencing that reality together.”
    That latter point is key, because, as May explained, a good partner should be an equal, not subservient in the way that Alexa and Siri are. “I’d like my partner to be making decisions and brainstorming along with me,” he said. “We should ultimately be able to reap the benefits of teamwork and cooperation that humans have long benefited from by working together. And the virtual partner has the added benefit of being much better and faster at math than me, and not actually needing to eat!”

  • New organic material unlocks faster and more flexible electronic devices

    Mobile phones and other electronic devices made from an organic material that is thin, bendable and more powerful are now a step closer thanks to new research led by scientists at The Australian National University (ANU).
    Lead researchers Dr Ankur Sharma and Associate Professor Larry Lu say it would help create the next generation of ultra-fast electronic chips, which promise to be much faster than current electronic chips we use.
    “Conventional devices run on electricity — but this material allows us to use light or photons, which travels much faster,” Dr Sharma said.
    “The interesting properties we have observed in this material make it a contender for super-fast electronic processors and chips.
    “We now have the perfect building block, to achieve flexible next generation electronics.”
    Associate Professor Lu said they observed previously unseen functions and capabilities in their organic material.

    “The capabilities we observed in this material can help us achieve ultra-fast electronic devices,” said Associate Professor Lu.
    The team were able to control the growth of a novel organic semiconductor material — stacking one molecule precisely over the other.
    “The material is just one carbon atom thick, a hundred times thinner than a human hair, which gives it the flexibility to be bent into any shape. This will lead to its application in flexible electronic devices.”
    In 2018 the same team developed a material that combined both organic and inorganic elements.
    Now, they’ve been able to improve the organic part of the material, allowing them to completely remove the inorganic component.
    “It’s made from just carbon and hydrogen, which would mean devices can be biodegradable or easily recyclable, thus avoiding the tonnes of e-waste generated by current generation electronic devices,” Dr Sharma said.
    Dr Sharma says while the actual devices might still be some way off, this new study is an important next step, and a key demonstration of this new material’s immense capabilities.

    Story Source:
    Materials provided by Australian National University. Note: Content may be edited for style and length.

  • Renewable energy transition makes dollars and sense

    Making the transition to a renewable energy future will have environmental and long-term economic benefits and is possible in terms of energy return on energy invested (EROI), UNSW Sydney researchers have found.
    Their research, published in the international journal Ecological Economics recently, disproves the claim that a transition to large-scale renewable energy technologies and systems will damage the macro-economy by taking up too large a chunk of global energy generation.
    Honorary Associate Professor Mark Diesendorf, in collaboration with Prof Tommy Wiedmann of UNSW Engineering, analysed dozens of studies on renewable electricity systems in regions where wind and/or solar could provide most of the electricity generation in future, such as Australia and the United States.
    The Clean Energy Australia report states that renewable energy’s contribution to Australia’s total electricity generation is already at 24 per cent.
    Lead author A/Prof Diesendorf is a renewable energy researcher with expertise in electricity generation, while co-author Prof Tommy Wiedmann is a sustainability scientist.
    A/Prof Diesendorf said their findings were controversial because some fossil fuel and nuclear power supporters, as well as some economists, reject a transition to large-scale renewable electricity.

    “These critics claim the world’s economy would suffer because they argue renewables require too much lifecycle energy to build, to the point of diverting all that energy away from other uses,” he said.
    “Our paper shows that there is no credible scientific evidence to support such claims, many of which are founded upon a study published in 2014 that used data up to 30 years old.
    “There were still research papers coming out in 2018 using the old data and that prompted me to examine the errors made by those perpetuating the misconception.”
    A/Prof Diesendorf said critics’ reliance on outdated figures was “ridiculous” for both solar and wind technology.
    “It was very early days back then and those technologies have changed so dramatically just in the past 10 years, let alone the past three decades,” he said.

    “This evolution is reflected in their cost reductions: wind by about 30 per cent and solar by 85 to 90 per cent in the past decade. These cost reductions reflect increasing EROIs.”
    A/Prof Diesendorf said fears about macro-economic damage from a transition to renewable energy had been exaggerated.
    “Not only did these claims rely on outdated data, but they also failed to consider the energy efficiency advantages of transitioning away from fuel combustion and they also overestimated storage requirements,” he said.
    “I was unsurprised by our results, because I have been following the literature for several years and doubted the quality of the studies that supported the previous beliefs about low EROIs for wind and solar.”
    Spotlight on wind and solar
    A/Prof Diesendorf said the study focused on wind and solar renewables which could provide the vast majority of electricity, and indeed almost all energy, for many parts of the world in future.
    “Wind and solar are the cheapest of all existing electricity generation technologies and are also widely available geographically,” he said.
    “We critically examined the case for large-scale electricity supply-demand systems in regions with high solar and/or high wind resources that could drive the transition to 100 per cent renewable electricity, either within these regions or where power could be economically transmitted to these regions.
    “In these regions — including Australia, the United States, Middle East, North Africa, China, parts of South America and northern Europe — variable renewable energy (VRE) such as wind and/or solar can provide the major proportion of annual electricity generation.
    “For storage, we considered hydroelectricity, including pumped hydro, batteries charged with excess wind and/or solar power, and concentrated solar thermal (CST) with thermal storage, which is a solar energy technology that uses sunlight to generate heat.”
    Energy cost/benefit ratio approach
    Co-author Prof Wiedmann said the researchers used Net Energy Analysis as their conceptual framework within which to identify the strengths and weaknesses of past studies in determining the EROI of renewable energy technologies and systems.
    “We used the established Net Energy Analysis method because it’s highly relevant to the issue of EROI: it aims to calculate all energy inputs into making a technology in order to understand the full impact,” Prof Wiedmann said.
    “From mining the raw materials and minerals processing, to building and operating the technology, and then deconstructing it at the end of its life. So, it’s a lifecycle assessment of all energy which humans use to make a technology.”
    Renewable transition possible
    A/Prof Diesendorf said their findings revealed that a transition from fossil fuels to renewable energy was worthwhile, contradicting the assumptions and results of many previous studies on the EROIs of wind and solar.
    “We found that the EROIs of wind and solar technologies are generally high and increasing; typically, solar at a good site could generate the lifecycle primary energy required to build itself in one to two years of operation, while large-scale wind does it in three to six months,” he said.
    “The impact of storage on EROI depends on the quantities and types of storage adopted and their operational strategies. In the regions we considered, the quantity of storage required to maintain generation reliability is relatively small.
    “We also discovered that taking into account the low energy conversion efficiency of fossil-fuelled electricity greatly increases the relative EROIs of wind and solar.
    “Finally, we found the macro-economic impact of a rapid transition to renewable electricity would at worst be temporary and would be unlikely to be major.”
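    The payback times quoted above translate directly into EROI with a line of arithmetic: if a plant repays its lifecycle energy input in a payback time T and then keeps generating for a lifetime L at similar output, its EROI is roughly L divided by T. The lifetimes in the sketch below are common illustrative assumptions, not figures from the paper.
    ```python
    # Rough EROI estimate from energy payback time (illustrative assumptions only).
    def eroi(lifetime_years, payback_years):
        """EROI ~= energy delivered over the lifetime / lifecycle energy invested."""
        return lifetime_years / payback_years

    # Assumed lifetimes: ~30 years for utility solar, ~25 years for wind.
    print("Solar, 1 to 2 yr payback   :", eroi(30, 2.0), "to", eroi(30, 1.0))
    print("Wind, 3 to 6 month payback :", eroi(25, 0.5), "to", eroi(25, 0.25))
    ```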
    A more sustainable future
    A/Prof Diesendorf said he hoped the study’s results would give renewed confidence to businesses and governments considering or already making a transition to more sustainable electricity technologies and systems.
    “This could be supported by government policy, which is indeed the case in some parts of Australia — including the ACT, Victoria and South Australia — where there’s strong support for the transition,” he said.
    “A number of mining companies in Australia are also going renewable, such as a steel producer which has a power purchase agreement with a solar farm to save money, while a zinc refinery built its own solar farm to supply cheaper electricity.”
    A/Prof Diesendorf said the Australian Government, however, could help with more policies to smooth the transition to renewable energy.
    “In Australia the transition is happening because renewable energy is much cheaper than fossil fuels, but there are many roadblocks and potholes in the way,” he said.
    “For example, wind and solar farms have inadequate transmission lines to feed power into cities and major industries, and we need more support for storage to better balance the variability of wind and solar.
    “So, I hope our research will help bolster support to continuing with the transition, because we have discredited the claim that the EROIs of electricity renewables are so low that a transition could displace investment in other sectors.”

  • A Raspberry Pi-based virtual reality system for small animals

    The Raspberry Pi Virtual Reality system (PiVR) is a versatile tool for presenting virtual reality environments to small, freely moving animals (such as flies and fish larvae), according to a study published July 14, 2020 in the open-access journal PLOS Biology by David Tadres and Matthieu Louis of the University of California, Santa Barbara. The use of PiVR, together with techniques like optogenetics, will facilitate the mapping and characterization of neural circuits involved in behavior.
    PiVR consists of a behavioral arena, a camera, a Raspberry Pi microcomputer, an LED controller, and a touchscreen. This system can implement a feedback loop between real-time behavioral tracking and delivery of a stimulus. PiVR is a versatile, customizable system that costs less than $500, takes less than six hours to build (using a 3D printer), and was designed to be accessible to a wide range of neuroscience researchers.
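    To give a flavour of what such a feedback loop looks like in code, here is a heavily simplified sketch in the spirit of PiVR rather than its actual software: grab a camera frame, estimate the animal’s position by thresholding, and set an LED’s brightness from a virtual intensity gradient at that position through the Pi’s PWM output. The pin number, threshold and gradient are placeholders.
    ```python
    # Heavily simplified closed-loop sketch in the spirit of PiVR (not its real code).
    # Placeholder pin number, threshold, and virtual gradient; requires a Raspberry Pi.
    import cv2
    import RPi.GPIO as GPIO

    LED_PIN = 18                       # placeholder BCM pin driving the LED controller
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LED_PIN, GPIO.OUT)
    pwm = GPIO.PWM(LED_PIN, 1000)      # 1 kHz PWM
    pwm.start(0)

    cap = cv2.VideoCapture(0)          # Pi camera exposed as a V4L2 device

    def virtual_gradient(x, width):
        """Virtual light intensity (0-100 %) increasing from left to right."""
        return 100.0 * x / max(width - 1, 1)

    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Dark animal on a bright arena: threshold and take the blob centroid
            _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
            m = cv2.moments(mask)
            if m["m00"] > 0:
                x = m["m10"] / m["m00"]
                pwm.ChangeDutyCycle(virtual_gradient(x, gray.shape[1]))
    finally:
        pwm.stop()
        GPIO.cleanup()
        cap.release()
    ```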
    In the new study, Tadres and Louis used their PiVR system to present virtual realities to small, freely moving animals during optogenetic experiments. Optogenetics is a technique that enables researchers to use light to control the activity of neurons in living animals, allowing them to examine causal relationships between the activity of genetically-labeled neurons and specific behaviors.
    As a proof-of-concept, Tadres and Louis used PiVR to study sensory navigation in response to gradients of chemicals and light in a range of animals. They showed how fruit fly larvae change their movements in response to real and virtual odor gradients. They then demonstrated how adult flies adapt their speed of movement to avoid locations associated with bitter tastes evoked by optogenetic activation of their bitter-sensing neurons. In addition, they showed that zebrafish larvae modify their turning maneuvers in response to changes in the intensity of light mimicking spatial gradients. According to the authors, PiVR represents a low-barrier technology that should empower many labs to characterize animal behavior and study the functions of neural circuits.
    “More than ever,” the authors note, “neuroscience is technology-driven. In recent years, we have witnessed a boom in the use of closed-loop tracking and optogenetics to create virtual sensory realities. Integrating new interdisciplinary methodology in the lab can be daunting. With PiVR, our goal has been to make virtual reality paradigms accessible to everyone, from professional scientists to high-school students. PiVR should help democratize cutting-edge technology to study behavior and brain functions.”

    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • Ups and downs in COVID-19 data may be caused by data reporting practices

    As data accumulates on COVID-19 cases and deaths, researchers have observed patterns of peaks and valleys that repeat on a near-weekly basis. But understanding what’s driving those patterns has remained an open question.
    A study published this week in mSystems reports that those oscillations arise from variations in testing practices and data reporting, rather than from societal practices around how people are infected or treated. The findings suggest that epidemiological models of infectious disease should take problems with diagnosis and reporting into account.
    “The practice of acquiring data is as important at times as the data itself,” said computational biologist Aviv Bergman, Ph.D., at the Albert Einstein College of Medicine in New York City, and microbiologist Arturo Casadevall, M.D., Ph.D., at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. Bergman and Casadevall worked on the study with Yehonatan Sella, Ph.D., at Albert Einstein, and physician-scientist Peter Agre, Ph.D., at Johns Hopkins.
    The study began when Agre, who co-won the 2003 Nobel Prize in Chemistry, noticed that precise weekly fluctuations in the data were clearly linked to the day of the week. “We became very suspicious,” said Bergman.
    The researchers collected the total number of daily tests, positive tests, and deaths in U.S. national data over 161 days, from January through the end of June. They also collected New York City-specific data and Los Angeles-specific data from early March through late June. To better understand the oscillating patterns, they conducted a power spectrum analysis, which is a methodology for identifying different frequencies within a signal. (It’s often used in signal and image processing, but the authors believe the new work represents the first application to epidemiological data.)
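    For readers who want to see what such an analysis looks like, here is a small self-contained sketch on synthetic data rather than the study’s data: a daily count series with a built-in weekly reporting pattern produces a clear spectral peak near a seven-day period.
    ```python
    # Power-spectrum sketch on synthetic daily counts (not the study's data).
    import numpy as np
    from scipy.signal import periodogram

    rng = np.random.default_rng(1)
    days = np.arange(161)                                       # same length as the national series
    weekly_effect = 1.0 + 0.3 * np.sin(2 * np.pi * days / 7)    # weekday reporting pattern
    counts = rng.poisson(1000 * weekly_effect)

    freqs, power = periodogram(counts, fs=1.0)    # fs = 1 sample per day
    peak = freqs[np.argmax(power[1:]) + 1]        # skip the zero-frequency term
    print(f"Dominant period: {1 / peak:.1f} days")
    ```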
    The analysis pointed to a 7-day cycle in the rise and fall of national new cases, and 6.8-day and 6.9-day cycles in New York City and Los Angeles, respectively. Those oscillations are reflected in analyses that have found, for example, that the mortality rate is higher at the end of the week or on the weekend.
    Alarmed by the consistency of the signal, the researchers looked for an explanation. They reported that an increase in social gatherings on the weekends was likely not a factor, since the time from exposure to the coronavirus to showing symptoms can range from 4 to 14 days. Previous analyses have also suggested that patients receive lower-quality care later in the week, but the new analysis didn’t support that hypothesis.
    The researchers then examined reporting practices. Some areas, like New York City and Los Angeles, report deaths according to when the individual died. But national data publishes deaths according to when the death was reported — not when it occurred. In large datasets that report the date of death, rather than the date of the report, the apparent oscillations vanish. Similar discrepancies in case reporting explained the oscillations found in new case data.
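    That reporting effect can be reproduced with a tiny simulation (purely illustrative, not the paper’s analysis): deaths occur at a constant daily rate, but each is reported after a delay that skips a two-day “weekend,” so tallying by report date produces a weekly ripple that is absent when tallying by date of death.
    ```python
    # Tiny simulation of the reporting artifact (illustrative, not the paper's analysis).
    import numpy as np

    rng = np.random.default_rng(2)
    n_days = 161
    deaths_per_day = 100

    by_death_date = np.full(n_days, deaths_per_day)
    by_report_date = np.zeros(n_days + 7, dtype=int)

    for day in range(n_days):
        for _ in range(deaths_per_day):
            report = day + rng.integers(0, 3)        # 0-2 day processing delay
            while report % 7 in (5, 6):              # offices closed on the "weekend"
                report += 1                          # push reporting to the next weekday
            by_report_date[report] += 1

    print("std by date of death :", by_death_date.std())
    print("std by date of report:", by_report_date[:n_days].std())
    ```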
    The authors of the new study note that weekend interactions or health care quality may influence outcomes, but these societal factors do not significantly contribute to the repeated patterns.
    “These oscillations are a harbinger of problems in the public health response,” said Casadevall.
    The researchers emphasized that no connection exists between the number of tests and the number of cases, and that unless data reporting practices change, the oscillations will remain. “And as long as there are infected people, these oscillations, due to fluctuations in the number of tests administered and reporting, will always be observed,” said Bergman, “even if the number of cases drops.”