More stories

  • Bay Area storms get wetter in a warming world

    The December 2014 North American Storm Complex was a powerful winter storm, referred to by some as California’s “Storm of the Decade.” Fueled by an atmospheric river originating over the tropical waters of the Pacific Ocean, the storm dropped 8 inches of rainfall in 24 hours, sported wind gusts of 139 miles per hour, and left 150,000 households without power across the San Francisco Bay Area.
    Writing in Weather and Climate Extremes this week, researchers described the potential impacts of climate change on extreme storms in the San Francisco Bay area, among them the December 2014 North American Storm Complex.
    Re-simulating five of the most powerful storms that have hit the area, they determined that under future conditions some of these extreme events would deliver 26-37% more rain, even more than is predicted simply by accounting for air’s ability to carry more water in warmer conditions.
    However, they found these increases would not occur with every storm, only those that include an atmospheric river accompanied by an extratropical cyclone.
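    For context, the baseline the researchers compare against is the well-known Clausius-Clapeyron relationship: warmer air can hold roughly 7% more water vapor per degree Celsius of warming. Below is a minimal sketch of that comparison; the amount of warming is an assumed value for illustration only, not a figure from the study.

```python
# Rough comparison of the Clausius-Clapeyron (CC) moisture baseline with the
# rainfall increases reported for the re-simulated storms. The warming amount
# is an assumption for illustration, not a value taken from the study.
warming_c = 3.0               # assumed warming (degrees Celsius)
cc_rate = 0.07                # ~7% more water vapor per degree C of warming

cc_baseline = (1 + cc_rate) ** warming_c - 1
print(f"CC baseline for {warming_c:.0f} C of warming: {cc_baseline:.0%} more moisture")

for reported in (0.26, 0.37): # range reported for the strongest storms
    print(f"reported {reported:.0%} increase exceeds the CC baseline by "
          f"{reported - cc_baseline:+.0%}")
```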
    The research — funded by the City and County of San Francisco and in partnership with agencies including the San Francisco Public Utilities Commission, Port of San Francisco, and San Francisco International Airport — will help the region plan its future infrastructure with mitigation and sustainability in mind.
    “Having this level of detail is a game changer,” said Dennis Herrera, General Manager of the San Francisco Public Utilities Commission, which was the lead City agency on the study. “This groundbreaking data will help us develop tools to allow our port, airport, utilities, and the City as a whole to adapt to our changing climate and increasingly extreme storms.”
    These first-of-their-kind forecasts for the city were made possible by the Stampede2 supercomputer at the Texas Advanced Computing Center (TACC) and the Cori system at the National Energy Research Scientific Computing Center (NERSC) — two of the most powerful supercomputers in the world, supported by the National Science Foundation and Department of Energy respectively.

  • Engineers get under the skin of ionic skin

    In the quest to build smart skin that mimics the sensing capabilities of natural skin, ionic skins have shown significant advantages. They’re made of flexible, biocompatible hydrogels that use ions to carry an electrical charge. In contrast to smart skins made of plastics and metals, the hydrogels have the softness of natural skin. This offers a more natural feel to the prosthetic arm or robot hand they are mounted on, and makes them comfortable to wear.
    These hydrogels can generate voltages when touched, but scientists did not clearly understand how — until a team of researchers at UBC devised a unique experiment, published today in Science.
    “How hydrogel sensors work is they produce voltages and currents in reaction to stimuli, such as pressure or touch — what we are calling a piezoionic effect. But we didn’t know exactly how these voltages are produced,” said the study’s lead author Yuta Dobashi, who started the work as part of his master’s in biomedical engineering at UBC.
    Working under the supervision of UBC researcher Dr. John Madden, Dobashi devised hydrogel sensors containing salts with positive and negative ions of different sizes. He and collaborators in UBC’s physics and chemistry departments applied magnetic fields to track precisely how the ions moved when pressure was applied to the sensor.
    “When pressure is applied to the gel, that pressure spreads out the ions in the liquid at different speeds, creating an electrical signal. Positive ions, which tend to be smaller, move faster than larger, negative ions. This results in an uneven ion distribution which creates an electric field, which is what makes a piezoionic sensor work.”
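    As a rough illustration of the mechanism described above (a toy model, not the UBC team's analysis), the sketch below lets two ion species spread from a compressed spot at different assumed rates; the transient mismatch between their concentrations is the net charge that a piezoionic sensor reads out as a voltage.

```python
import numpy as np

# Toy 1-D picture of the piezoionic effect (illustrative values only): pressure
# pushes ions out of a compressed spot at x = 0; small positive ions spread
# faster than large negative ions, so a transient net charge appears nearby,
# which is the voltage signal a piezoionic sensor reads out.
x = np.linspace(-1.0, 1.0, 201)      # position across the gel (arbitrary units)
D_pos, D_neg = 1.0, 0.3              # assumed mobilities: fast cations, slow anions

def profile(D, t):
    """Concentration of ions released at x = 0, after diffusing for time t."""
    return np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

probe = int(np.searchsorted(x, 0.5)) # a point just outside the pressed region
for t in (0.05, 0.2, 1.0):
    net_charge = profile(D_pos, t) - profile(D_neg, t)
    print(f"t = {t:4.2f}: net charge at the probe point = {net_charge[probe]:+.3f}")
```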
    The researchers say this new knowledge confirms that hydrogels sense pressure much as human skin does, through the movement of ions, inspiring potential new applications for ionic skins.

  • Researchers design simpler magnets for twisty facilities that could lead to steady-state fusion operation

    Harnessing the power that makes the sun and stars shine could be made easier by powerful magnets with straighter shapes than have been made before. Researchers linked to the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) have found a way to create such magnets for fusion facilities known as stellarators.
    Such facilities have complex twisted magnetic coils, compared with the straight up-and-down coils in more widely used tokamak facilities, and can produce fusion reactions without the risk of disruptions that tokamaks face. This advantage makes stellarators a candidate to serve as the model for a next-generation fusion pilot plant.
    Now, by adding sections to the stellarator coils that are relatively straight, researchers could both reduce the manufacturing cost and make it easier to install openings that would allow technicians to repair the device’s interior. Both innovations could aid the development of a stellarator power plant, replicating fusion on Earth for a virtually inexhaustible supply of power to generate electricity without producing greenhouse gases or long-lived radioactive waste.
    “In the future, people will have to replace components within stellarators as they wear out, which requires large openings between the coils of the magnets,” said physicist Caoxiang Zhu, an author of the paper reporting the results in Nuclear Fusion, who completed the research while on staff at PPPL and is now at the University of Science and Technology of China. “But it’s hard to have large openings in stellarators because the electromagnetic coils zig and zag and are really complex.”
    By using a mathematical technique known as “spline representation,” Zhu and his collaborators were able to design magnets with straighter sections than before while still creating magnetic fields that can confine plasma. Those straight sections could provide good locations for windows.
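    The following sketch illustrates the spline idea in a generic way (it is not the FOCUS code, and the control points are invented): describing a closed coil with a spline lets a designer force a nearly straight section simply by placing several control points along a line, while the rest of the coil stays smoothly curved.

```python
import numpy as np
from scipy.interpolate import splev, splprep

# Generic sketch of a spline-represented coil (not the FOCUS code; control
# points are invented). Several collinear control points force a nearly
# straight section, while the rest of the closed curve stays smooth.
points = np.array([
    [0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0],   # collinear -> straight leg
    [4.0, 1.0], [4.0, 3.0], [2.0, 4.5], [0.0, 3.0], [-1.0, 1.5],
])

tck, _ = splprep(points.T, s=0, per=True)   # closed (periodic) cubic spline
u = np.linspace(0.0, 1.0, 400)
dx, dy = splev(u, tck, der=1)
ddx, ddy = splev(u, tck, der=2)
curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# The collinear run shows up as a stretch of near-zero curvature; the curved
# legs of the coil have much higher curvature.
print(f"lowest curvature along the coil:  {curvature.min():.4f}")
print(f"highest curvature along the coil: {curvature.max():.4f}")
```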
    Invented by astrophysicist Lyman Spitzer, PPPL’s first director, stellarators are fusion facility concepts that use high-powered magnets to create interweaving magnetic fields that confine plasma, hot gas consisting of electrons and bare atomic nuclei. Stellarators have advantages over tokamaks, doughnut-shaped devices that are currently the most popular fusion facility concept worldwide, but their fantastically complicated magnets have made design and construction challenging.
    Zhu and the researchers added the spline capability to Zhu’s FOCUS computer code. To test the concept, the team designed magnets that could fit on the Helically Symmetric eXperiment (HSX), a stellarator at the University of Wisconsin-Madison.
    The updated code showed that researchers could create straighter magnets than before while preserving their strength and accuracy. “In principle, you can always make straighter coils, but the trade-off is that their magnetic fields might not confine the plasma as well as those produced by twistier coils,” said Nicola Lonigro, a student in the DOE’s Science Undergraduate Laboratory Internship (SULI) program at the time of the research, lead author of the paper, and now a Ph.D. candidate at the University of York in Britain. “But our research showed that you could make a simpler coil with straighter sections that makes the same magnetic field shape and strength as conventional ones do.”
    Creating simpler magnets could aid the development of a stellarator fusion power plant. “In the long term, this work is a contribution to the larger effort trying to make stellarators commercially viable,” Lonigro said.
    This research was supported by the DOE’s Office of Science (Fusion Energy Sciences and Workforce Development for Teachers and Scientists).
    Story Source:
    Materials provided by DOE/Princeton Plasma Physics Laboratory. Original written by Raphael Rosen.

  • Scientific advance leads to a new tool in the fight against hackers

    A new form of security identification could soon see the light of day and help us protect our data from hackers and cybercriminals. Quantum mathematicians at the University of Copenhagen have solved a mathematical riddle that allows for a person’s geographical location to be used as a personal ID that is secure against even the most advanced cyber attacks.
    People have used codes and encryption to protect information from falling into the wrong hands for thousands of years. Today, encryption is widely used to protect our digital activity from hackers and cybercriminals who assume false identities and exploit the internet and our increasing number of digital devices to steal from us.
    As such, there is an ever-growing need for new security measures to detect hackers posing as our banks or other trusted institutions. Within this realm, researchers from the University of Copenhagen’s Department of Mathematical Sciences have just made a giant leap.
    “There is a constant battle in cryptography between those who want to protect information and those seeking to crack it. New security keys are being developed and later broken and so the cycle continues. Until, that is, a completely different type of key has been found,” says Professor Matthias Christandl.
    For nearly twenty years, researchers around the world have been trying to solve the riddle of how to securely determine a person’s geographical location and use it as a secure ID. Until now, this had not been possible by way of normal methods like GPS tracking.
    “Today, there are no traditional ways, whether by internet or radio signals for example, to determine where another person is situated geographically with one hundred percent accuracy. Current methods are not unbreakable, and hackers can impersonate someone you trust even when they are far, far away. However, quantum physics opens up a few entirely different possibilities,” says Matthias Christandl.

  • A sharper image for proteins

    Proteins may be the most important and varied biomolecules within living systems. These strings of amino acids, assuming complex 3-dimensional forms, are essential for the growth and maintenance of tissue, the initiation of thousands of biochemical reactions, and the protection of the body from pathogens through the immune system. They play a central role in health and disease and are primary targets for pharmaceutical drugs.
    To fully understand proteins and their myriad functions, researchers have developed sophisticated means to see and study them through advanced microscopy, improving light detection, imaging software, and the integration of advanced hardware systems.
    In a new study, corresponding author Shaopeng Wang and his colleagues at Arizona State University describe a new technique that promises to revolutionize the imaging of proteins and other vital biomolecules, allowing these tiny entities to be visualized with unprecedented clarity and by simpler means than existing methods.
    “The method we report in this study uses normal cover glass instead of gold-coated cover glass, which has two advantages over our previously reported label-free single-protein imaging method,” Wang says. “It is compatible with fluorescence imaging for in-situ cross validation, and it reduces the light-induced heating effect that could harm the biological samples. Pengfei Zhang, an outstanding postdoctoral researcher in my group, is the technical lead of this project.”
    Wang has a joint faculty position in the Biodesign Center for Bioelectronics and Biosensors and the School of Biological and Health Systems Engineering. The group’s research findings appear in the current issue of the journal Nature Communications.
    The new method, known as evanescent scattering microscopy (ESM), is based on an optical property recognized since antiquity: total internal reflection. This occurs when light passes from a medium with a higher refractive index (like glass) into one with a lower refractive index (like water).
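    Total internal reflection only happens above a critical angle of incidence set by the two refractive indices, sin(θc) = n_low / n_high. Here is a quick check with typical textbook values for glass and water (illustrative numbers, not parameters from the study).

```python
import math

# Critical angle for total internal reflection: sin(theta_c) = n_low / n_high.
# Typical textbook refractive indices (illustrative, not values from the study).
n_glass = 1.52   # higher-index medium the light travels in
n_water = 1.33   # lower-index medium on the other side of the interface

theta_c = math.degrees(math.asin(n_water / n_glass))
print(f"critical angle at a glass-water interface: {theta_c:.1f} degrees")
# Beyond this angle the light is totally reflected back into the glass and only
# an evanescent field leaks into the water, the effect that ESM builds on.
```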

  • From blurry to bright: AI tech helps researchers peer into the brains of mice

    Johns Hopkins biomedical engineers have developed an artificial intelligence (AI) training strategy to capture images of mouse brain cells in action. The researchers say the AI system, in concert with specialized ultra-small microscopes, makes it possible to find precisely where and when cells are activated during movement, learning and memory. The data gathered with this technology could someday allow scientists to understand how the brain functions and is affected by disease.
    The researchers’ experiments in mice were published in Nature Communications on March 22.
    “When a mouse’s head is restrained for imaging, its brain activity may not truly represent its neurological function,” says Xingde Li, Ph.D., professor of biomedical engineering at the Johns Hopkins University School of Medicine. “To map brain circuits that control daily functions in mammals, we need to see precisely what is happening among individual brain cells and their connections, while the animal is freely moving around, eating and socializing.”
    To gather this extremely detailed data, Li’s team developed ultra-small microscopes that the mice can wear on top of their heads. Measuring a couple of millimeters in diameter, these microscopes can carry only limited imaging technology on board. Compared with benchtop models, their frame rate is low, which makes them susceptible to interference from motion: disturbances such as the mouse’s breathing or heart rate would affect the accuracy of the data they capture. The researchers estimate that Li’s miniature microscope would need to exceed 20 frames per second to eliminate all the disturbances from the motion of a freely moving mouse.
    “There are two ways to increase frame rate,” says Li. “You can increase the scanning speed and you can decrease the number of points scanned.”
    In previous research, Li’s engineering team quickly found they hit the physical limits of the scanner, reaching six frames per second, which maintained excellent image quality but was far below the required rate. So, the team moved on to the second strategy for increasing frame rate — decreasing the number of points scanned. However, similar to reducing the number of pixels in an image, this strategy would cause the microscope to capture lower-resolution data.
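    The trade-off is easy to quantify with back-of-the-envelope arithmetic: at a fixed scanning speed, frame rate is inversely proportional to the number of points scanned per frame. In the sketch below, only the 6 and 20 frames-per-second figures come from the story; the full-resolution grid size is an assumed example value.

```python
# Back-of-the-envelope for the frame-rate trade-off: at a fixed scanning speed,
# frame rate scales inversely with the number of points scanned per frame.
# Only the 6 fps and 20 fps figures come from the story; the full-resolution
# grid size is an assumed example value.
full_fps = 6.0        # rate at the scanner's physical limit, full resolution
target_fps = 20.0     # rate needed to outrun motion from a freely moving mouse
full_grid = 256       # assumed full-resolution scan of 256 x 256 points

keep_fraction = full_fps / target_fps                 # points we can afford to keep
reduced_grid = int(full_grid * keep_fraction ** 0.5)  # shrink both scan axes equally
print(f"only {keep_fraction:.0%} of the points can be scanned at {target_fps:.0f} fps,")
print(f"so a {full_grid} x {full_grid} scan drops to about {reduced_grid} x {reduced_grid}")
```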

  • New brain learning mechanism calls for revision of long-held neuroscience hypothesis

    The brain is a complex network containing billions of neurons. Each of these neurons communicates simultaneously with thousands of others via their synapses (links), and collects incoming signals through several extremely long, branched “arms,” called dendritic trees.
    For the last 70 years a core hypothesis of neuroscience has been that brain learning occurs by modifying the strength of the synapses, following the relative firing activity of their connecting neurons. This hypothesis has been the basis for machine and deep learning algorithms which increasingly affect almost all aspects of our lives. But after seven decades, this long-lasting hypothesis has now been called into question.
    In an article published today in Scientific Reports, researchers from Bar-Ilan University in Israel reveal that the brain learns completely differently than has been assumed since the 20th century. The new experimental observations suggest that learning is mainly performed in neuronal dendritic trees, where the trunk and branches of the tree modify their strength, as opposed to modifying solely the strength of the synapses (dendritic leaves), as was previously thought. These observations also indicate that the neuron is actually a much more complex, dynamic and computational element than a binary element that can fire or not. Just one single neuron can realize deep learning algorithms, which previously required an artificial complex network consisting of thousands of connected neurons and synapses.
    “We’ve shown that efficient learning on dendritic trees of a single neuron can artificially achieve success rates approaching unity for handwritten digit recognition. This finding paves the way for an efficient biologically inspired new type of AI hardware and algorithms,” said Prof. Ido Kanter, of Bar-Ilan’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research. “This simplified learning mechanism represents a step towards a plausible biological realization of backpropagation algorithms, which are currently the central technique in AI,” added Shiri Hodassman, a PhD student and one of the key contributors to this work.
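    To make the architectural claim concrete, here is a deliberately simple toy model (an illustration of the idea only, not the Bar-Ilan team's model or data): the synaptic weights are fixed and random, and learning adjusts only a handful of branch and trunk strengths, yet the unit can still learn a simple classification task.

```python
import numpy as np

# Toy sketch of the "dendritic" learning idea (an illustration only, not the
# Bar-Ilan model or data): synaptic weights are FIXED and random, and learning
# adjusts only the strengths of a few branches and of the trunk, using plain
# gradient descent on a logistic loss for a toy classification task.
rng = np.random.default_rng(0)
n_inputs, n_branches, n_samples = 10, 8, 500
X = rng.normal(size=(n_samples, n_inputs))
y = (X[:, :3].sum(axis=1) > 0).astype(float)          # toy binary target

syn = rng.normal(size=(n_branches, n_inputs)) / np.sqrt(n_inputs)  # fixed synapses
branch_gain = np.ones(n_branches)                     # trainable branch strengths
trunk = rng.normal(scale=0.1, size=n_branches)        # trainable trunk weights

def forward(X):
    drive = X @ syn.T                                 # fixed synaptic summation
    branch_out = np.tanh(branch_gain * drive)         # branch nonlinearity
    prob = 1.0 / (1.0 + np.exp(-(branch_out @ trunk)))
    return drive, branch_out, prob

lr = 0.5
for _ in range(500):
    drive, branch_out, prob = forward(X)
    err = prob - y                                    # gradient of the logistic loss
    grad_trunk = (branch_out.T @ err) / n_samples
    grad_gain = (err[:, None] * trunk * (1 - branch_out**2) * drive).mean(axis=0)
    trunk -= lr * grad_trunk
    branch_gain -= lr * grad_gain

_, _, prob = forward(X)
print(f"training accuracy with fixed synapses: {((prob > 0.5) == y).mean():.1%}")
```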
    The efficient learning on dendritic trees is based on Kanter and his research team’s experimental evidence for sub-dendritic adaptation using neuronal cultures, together with other anisotropic properties of neurons, like different spike waveforms, refractory periods and maximal transmission rates.
    The brain’s clock is a billion times slower than existing parallel GPUs, yet it achieves comparable success rates in many perceptual tasks.
    The new demonstration of efficient learning on dendritic trees calls for new approaches in brain research, as well as for the generation of counterpart hardware aiming to implement advanced AI algorithms. If one can implement slow brain dynamics on ultrafast computers, the sky is the limit.
    Story Source:
    Materials provided by Bar-Ilan University.

  • Ionic liquid-based reservoir computing: The key to efficient and flexible edge computing

    Physical reservoir computing (PRC), which relies on the transient response of physical systems, is an attractive machine learning framework that can perform high-speed processing of time-series signals at low power. However, PRC systems have low tunability, limiting the range of signals they can process. Now, researchers from Japan present ionic liquids as an easily tunable physical reservoir device that can be optimized to process signals over a broad range of timescales simply by changing their viscosity.
    Artificial intelligence (AI) is fast becoming ubiquitous in modern society and will see even broader adoption in the coming years. In applications involving sensors and internet-of-things devices, the norm is often edge AI, a technology in which computing and analysis are performed close to the user (where the data is collected) rather than far away on a centralized server. This is because edge AI has low power requirements as well as high-speed data processing capabilities, traits that are particularly desirable for processing time-series data in real time.
    In this regard, physical reservoir computing (PRC), which relies on the transient dynamics of physical systems, can greatly simplify the computing paradigm of edge AI. This is because PRC can transform analog signals into a form that edge AI can efficiently work with and analyze. However, the dynamics of solid PRC systems are characterized by specific timescales that are not easily tunable and are usually too fast for most physical signals. This mismatch in timescales, together with their low controllability, makes PRC largely unsuitable for real-time processing of signals in living environments.
    To address this issue, a research team from Japan involving Professor Kentaro Kinoshita and Sang-Gyu Koh, a PhD student, from the Tokyo University of Science, and senior researchers Dr. Hiroyuki Akinaga, Dr. Hisashi Shima, and Dr. Yasuhisa Naitoh from the National Institute of Advanced Industrial Science and Technology, proposed, in a new study published in Scientific Reports, the use of liquid PRC systems instead. “Replacing conventional solid reservoirs with liquid ones should lead to AI devices that can directly learn at the time scales of environmentally generated signals, such as voice and vibrations, in real time,” explains Prof. Kinoshita. “Ionic liquids are stable molten salts that are completely made up of free-roaming electrical charges. The dielectric relaxation of the ionic liquid, or how its charges rearrange as a response to an electric signal, could be used as a reservoir and holds much promise for edge AI physical computing.”
    In their study, the team designed a PRC system with an ionic liquid (IL) of an organic salt, 1-alkyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide ([Rmim+][TFSI-], with R = ethyl (e), butyl (b), hexyl (h), or octyl (o)), whose cationic part (the positively charged ion) can be easily varied with the length of a chosen alkyl chain. They fabricated gold gap electrodes and filled the gaps with the IL. “We found that the timescale of the reservoir, while complex in nature, can be directly controlled by the viscosity of the IL, which depends on the length of the cationic alkyl chain. Changing the alkyl group in organic salts is easy to do, and presents us with a controllable, designable system for a range of signal lifetimes, allowing a broad range of computing applications in the future,” says Prof. Kinoshita. By adjusting the alkyl chain length between 2 and 8 units, the researchers achieved characteristic response times ranging between 1 and 20 ms, with longer alkyl side chains leading to longer response times and tunable AI learning performance of the devices.
    The tunability of the system was demonstrated using an AI image identification task. The AI was presented with a handwritten digit as input, represented by rectangular voltage pulses 1 ms in width. By increasing the side chain length, the team brought the transient dynamics closer to those of the target signal, with the discrimination rate improving for longer chains. This is because, compared to [emim+][TFSI-], in which the current relaxed in about 1 ms, the ILs with longer side chains and, in turn, longer relaxation times retained the history of the time-series data better, improving identification accuracy. When the longest side chain of 8 units was used, the discrimination rate reached a peak value of 90.2%.
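    The computing recipe behind any physical reservoir is the same: drive a fixed dynamical system with the input and train only a simple linear readout on its transient response. The sketch below is an echo-state-style software analogue (not the ionic-liquid device); a single leak-rate parameter plays the role that viscosity plays in the study, setting the reservoir's timescale.

```python
import numpy as np

# Minimal echo-state-style reservoir in software (an analogue for intuition,
# not the ionic-liquid device). A fixed random network is driven by the input;
# only a linear readout is trained. The leak rate sets the reservoir's
# timescale, the role viscosity plays for the ionic liquid in the study.
rng = np.random.default_rng(1)
n_res, T, delay = 100, 2000, 5
u = rng.uniform(-1.0, 1.0, T)                 # input time series
target = np.roll(u, delay)                    # toy task: recall the input 5 steps back

W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale to spectral radius 0.9

def run(leak):
    """Collect reservoir states for a given leak rate (the timescale knob)."""
    x = np.zeros(n_res)
    states = np.empty((T, n_res))
    for t in range(T):
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * u[t])
        states[t] = x
    return states

for leak in (1.0, 0.3):                       # a "fast" and a "slow" reservoir
    S, y = run(leak)[100:], target[100:]      # discard the initial transient
    w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)  # ridge readout
    nmse = np.mean((S @ w_out - y) ** 2) / np.var(y)
    print(f"leak rate {leak:.1f}: delayed-recall error (NMSE) = {nmse:.3f}")
```

    In this software analogue the leak rate is what gets matched to the timescale of the task; in the reported device the same matching is done physically, by choosing an alkyl chain length, and hence a viscosity, whose relaxation time suits the input signal.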
    These findings are encouraging as they clearly show that the proposed PRC system based on the dielectric relaxation at an electrode-ionic liquid interface can be suitably tuned according to the input signals by simply changing the IL’s viscosity. This could pave the way for edge AI devices that can accurately learn the various signals produced in the living environment in real time.
    Story Source:
    Materials provided by Tokyo University of Science.