More stories

  • Prototype taps into the sensing capabilities of any smartphone to screen for prediabetes

    According to the U.S. Centers for Disease Control and Prevention, one out of every three adults in the United States has prediabetes, a condition marked by elevated blood sugar levels that could lead to the development of Type 2 diabetes. The good news is that, if detected early, prediabetes can be reversed through lifestyle changes such as improved diet and exercise. The bad news? Eight out of 10 Americans with prediabetes don’t know that they have it, putting them at increased risk of developing diabetes as well as complications that include heart disease, kidney failure and vision loss.
    Current screening methods typically involve a visit to a health care facility for laboratory testing and/or the use of a portable glucometer for at-home testing, meaning access and cost may be barriers to more widespread screening. But researchers at the University of Washington may have found the sweet spot when it comes to increasing early detection of prediabetes. The team developed GlucoScreen, a new system that leverages the capacitive touch sensing capabilities of any smartphone to measure blood glucose levels without the need for a separate reader.
    The researchers describe GlucoScreen in a new paper published March 28 in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.
    The researchers’ results suggest GlucoScreen’s accuracy is comparable to that of standard glucometer testing. The team found the system to be accurate at the crucial threshold between a normal blood glucose level, at or below 99 mg/dL, and prediabetes, defined as a blood glucose level between 100 and 125 mg/dL. This approach could make glucose testing less costly and more accessible — particularly for one-time screening of a large population.
    “In conventional screening a person applies a drop of blood to a test strip, where the blood reacts chemically with the enzymes on the strip. A glucometer is used to analyze that reaction and deliver a blood glucose reading,” said lead author Anandghan Waghmare, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “We took the same test strip and added inexpensive circuitry that communicates data generated by that reaction to any smartphone through simulated tapping on the screen. GlucoScreen then processes the data and displays the result right on the phone, alerting the person if they are at risk so they know to follow up with their physician.”
    Specifically, the GlucoScreen test strip samples the amplitude of the electrochemical reaction between the blood sample and the enzymes on the strip five times each second.

    The strip then transmits the amplitude data to the phone through a series of touches at variable speeds, using a technique called pulse-width modulation. Here the pulse width is the time between successive taps, and each interval represents one value along the amplitude curve: the greater the gap between taps for a particular sample, the higher the amplitude of the electrochemical reaction on the strip at that point.
    “You communicate with your phone by tapping the screen with your finger,” Waghmare said. “That’s basically what the strip is doing, only instead of a single tap to produce a single action, it’s doing multiple taps at varying speeds. It’s comparable to how Morse code transmits information through tapping patterns.”
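    To make the encoding concrete, here is a minimal decoding sketch in Python. It assumes a simple linear mapping from tap interval to amplitude; the timing range, amplitude range, and function name are illustrative assumptions, not details taken from the GlucoScreen paper.

      # Illustrative sketch (not the authors' code): recover amplitude samples
      # that were encoded as the time intervals between simulated screen taps.
      # Assumed convention: a longer gap between taps means a higher amplitude.

      def decode_tap_intervals(tap_times_s, min_gap_s=0.02, max_gap_s=0.20,
                               min_amp_uA=0.0, max_amp_uA=50.0):
          """Map each inter-tap interval linearly onto an amplitude value.

          tap_times_s: sorted timestamps (seconds) of detected touch events.
          The gap and amplitude ranges are hypothetical calibration constants.
          """
          amplitudes = []
          for earlier, later in zip(tap_times_s, tap_times_s[1:]):
              gap = min(max(later - earlier, min_gap_s), max_gap_s)
              frac = (gap - min_gap_s) / (max_gap_s - min_gap_s)
              amplitudes.append(min_amp_uA + frac * (max_amp_uA - min_amp_uA))
          return amplitudes

      # Five taps yield four decoded amplitude samples.
      print(decode_tap_intervals([0.00, 0.05, 0.12, 0.20, 0.31]))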
    The advantage of this technique is that it does not require complicated electronic components. This minimizes the cost to manufacture the strip and the power required for it to operate compared to more conventional communication methods, like Bluetooth and WiFi. All data processing and computation occurs on the phone, which simplifies the strip and further reduces the cost.
    The test strip also doesn’t need batteries. Instead, it uses photodiodes to draw what little power it needs from the phone’s flash.
    The flash is automatically engaged by the GlucoScreen app, which walks the user through each step of the testing process. First, a user affixes each end of the test strip to the front and back of the phone as directed. Next, they prick their finger with a lancet, as they would in a conventional test, and apply a drop of blood to the biosensor attached to the test strip. After the data is transmitted from the strip to the phone, the app applies machine learning to analyze the data and calculate a blood glucose reading.
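    The article does not specify the features or the model GlucoScreen uses, so the sketch below only illustrates the final step in spirit: summarizing the decoded amplitude curve, converting it to a glucose value with a hypothetical calibration, and screening it against the thresholds mentioned earlier (at or below 99 mg/dL normal, 100 to 125 mg/dL prediabetes). The calibration constants and function names are assumptions.

      # Illustrative sketch (not GlucoScreen's trained model): convert the decoded
      # amplitude curve to a glucose estimate and screen it against CDC thresholds.

      def glucose_from_curve(amplitudes_uA, slope=2.4, intercept=10.0):
          """Hypothetical linear calibration from peak reaction amplitude (uA)
          to blood glucose (mg/dL); the real system learns this mapping."""
          return slope * max(amplitudes_uA) + intercept

      def screen(glucose_mg_dl):
          if glucose_mg_dl <= 99:
              return "normal"
          if glucose_mg_dl <= 125:
              return "prediabetes range: follow up with a physician"
          return "diabetes range: follow up with a physician"

      reading = glucose_from_curve([12.0, 31.5, 40.2, 38.7, 35.1])
      print(round(reading), screen(reading))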

    That stage of the process is similar to that performed on a commercial glucometer. What sets GlucoScreen apart, in addition to its novel touch technique, is its universality.
    “Because we use the built-in capacitive touch screen that’s present in every smartphone, our solution can be easily adapted for widespread use. Additionally, our approach does not require low-level access to the capacitive touch data, so you don’t have to access the operating system to make GlucoScreen work,” said co-author Jason Hoffman, a UW doctoral student in the Allen School. “We’ve designed it to be ‘plug and play.’ You don’t need to root the phone — in fact, you don’t need to do anything with the phone, other than install the app. Whatever model you have, it will work off the shelf.”
    The researchers evaluated their approach using a combination of in vitro and clinical testing. Due to the COVID-19 pandemic, they had to delay the latter until 2021 when, on a trip home to India, Waghmare connected with Dr. Shailesh Pitale at Dew Medicare and Trinity Hospital. Upon learning about the UW project, Dr. Pitale agreed to facilitate a clinical study involving 75 consenting patients who were already scheduled to have blood drawn for a laboratory blood glucose test. Using that laboratory test as the ground truth, Waghmare and the team evaluated GlucoScreen’s performance against that of a conventional strip and glucometer.
    Given how common prediabetes and diabetes are globally, this type of technology has the potential to change clinical care, the researchers said.
    “One of the barriers I see in my clinical practice is that many patients can’t afford to test themselves, as glucometers and their test strips are too expensive. And, it’s usually the people who most need their glucose tested who face the biggest barriers,” said co-author Dr. Matthew Thompson, a UW professor of family medicine in the UW School of Medicine and of global health. “Given how many of my patients use smartphones now, a system like GlucoScreen could really transform our ability to screen and monitor people with prediabetes and even diabetes.”
    GlucoScreen is presently a research prototype. Additional user-focused and clinical studies, along with alterations to how test strips are manufactured and packaged, would be required before the system could be made widely available, the team said.
    But, the researchers added, the project demonstrates how we have only begun to tap into the potential of smartphones as a health screening tool.
    “Now that we’ve shown we can build electrochemical assays that can work with a smartphone instead of a dedicated reader, you can imagine extending this approach to expand screening for other conditions,” said senior author Shwetak Patel, the Washington Research Foundation Entrepreneurship Endowed Professor in Computer Science & Engineering and Electrical & Computer Engineering at the UW.
    Additional co-authors are Farshid Salemi Parizi, a former UW doctoral student in electrical and computer engineering who is now a senior machine learning engineer at OctoML, and Yuntao Wang, a research professor at Tsinghua University and former visiting professor at the Allen School. This research was funded in part by the Bill & Melinda Gates Foundation.

  • New details of SARS-CoV-2 structure

    A new study led by Worcester Polytechnic Institute (WPI) brings into sharper focus the structural details of the COVID-19 virus, revealing an elliptical shape that “breathes,” or changes shape, as it moves in the body. The discovery, which could lead to new antiviral therapies for the disease and quicker development of vaccines, is featured in the April edition of the peer-reviewed Cell Press structural biology journal Structure.
    “This is critical knowledge we need to fight future pandemics,” said Dmitry Korkin, Harold L. Jurist ’61 and Heather E. Jurist Dean’s Professor of Computer Science and lead researcher on the project. “Understanding the SARS-CoV-2 virus envelope should allow us to model the actual process of the virus attaching to the cell and apply this knowledge to our understanding of the therapies at the molecular level. For instance, how can the viral activity be inhibited by antiviral drugs? How much antiviral blocking is needed to prevent virus-to-host interaction? We don’t know. But this is the best thing we can do right now — to be able to simulate actual processes.”
    Feeding genetic sequencing information and massive amounts of real-world data about the pandemic virus into a supercomputer in Texas, Korkin and his team, working in partnership with a group led by Siewert-Jan Marrink at the University of Groningen, Netherlands, produced a computational model of the virus’s envelope, or outer shell, in “near atomistic detail” that had until now been beyond the reach of even the most powerful microscopes and imaging techniques.
    Essentially, the computer used structural bioinformatics and computational biophysics to create its own picture of what the SARS-CoV-2 particle looks like. And that picture showed that the virus is more elliptical than spherical and can change its shape. Korkin said the work also led to a better understanding of the M proteins in particular: underappreciated and overlooked components of the virus’s envelope.
    The M proteins pair up with copies of themselves to form entities called dimers, and they play a role in the particle’s shape-shifting by keeping the overall structure flexible while providing a triangular mesh-like structure on the interior that makes it remarkably resilient, Korkin said. In contrast, on the exterior, the proteins assemble into mysterious filament-like structures that have puzzled scientists who have seen Korkin’s results and will require further study.
    Korkin said the structural model developed by the researchers expands what was already known about the envelope architecture of the SARS-CoV-2 virus and of the related coronaviruses behind earlier SARS and MERS outbreaks. The computational protocol used to create the model could also be applied to more rapidly model future coronaviruses, he said. A clearer picture of the virus’s structure could reveal crucial vulnerabilities.
    “The envelope properties of SARS-CoV-2 are likely to be similar to other coronaviruses,” he said. “Eventually, knowledge about the properties of coronavirus membrane proteins could lead to new therapies and vaccines for future viruses.”
    The new findings published in Structure were three years in the making and built upon Korkin’s work in the early days of the pandemic to provide the first 3D roadmap of the virus, based on genetic sequence information from the first isolated strain in China.

  • New algorithm keeps drones from colliding in midair

    When multiple drones are working together in the same airspace, perhaps spraying pesticide over a field of corn, there’s a risk they might crash into each other.
    To help avoid these costly crashes, MIT researchers presented a system called MADER in 2020. This multiagent trajectory-planner enables a group of drones to formulate optimal, collision-free trajectories. Each agent broadcasts its trajectory so fellow drones know where it is planning to go. Agents then consider each other’s trajectories when optimizing their own to ensure they don’t collide.
    But when the team tested the system on real drones, they found that if a drone doesn’t have up-to-date information on the trajectories of its partners, it might inadvertently select a path that results in a collision. The researchers revamped their system and are now rolling out Robust MADER, a multiagent trajectory planner that generates collision-free trajectories even when communications between agents are delayed.
    “MADER worked great in simulations, but it hadn’t been tested in hardware. So, we built a bunch of drones and started flying them. The drones need to talk to each other to share trajectories, but once you start flying, you realize pretty quickly that there are always communication delays that introduce some failures,” says Kota Kondo, an aeronautics and astronautics graduate student.
    The algorithm incorporates a delay-check step during which a drone waits a specific amount of time before it commits to a new, optimized trajectory. If it receives additional trajectory information from fellow drones during the delay period, it might abandon its new trajectory and start the optimization process over again.
    When Kondo and his collaborators tested Robust MADER, both in simulations and flight experiments with real drones, it achieved a 100 percent success rate at generating collision-free trajectories. While the drones’ travel time was a bit slower than it would be with some other approaches, no other baselines could guarantee safety.

    “If you want to fly safer, you have to be careful, so it is reasonable that if you don’t want to collide with an obstacle, it will take you more time to get to your destination. If you collide with something, no matter how fast you go, it doesn’t really matter because you won’t reach your destination,” Kondo says.
    Kondo wrote the paper with Jesus Tordesillas, a postdoc; Parker C. Lusk, a graduate student; Reinaldo Figueroa, Juan Rached, and Joseph Merkel, MIT undergraduates; and senior author Jonathan P. How, the Richard C. Maclaurin Professor of Aeronautics and Astronautics and a member of the MIT-IBM Watson AI Lab. The research will be presented at the International Conference on Robotics and Automation.
    Planning trajectories
    MADER is an asynchronous, decentralized, multiagent trajectory-planner. This means that each drone formulates its own trajectory and that, while all agents must agree on each new trajectory, they don’t need to agree at the same time. This makes MADER more scalable than other approaches, since it would be very difficult for thousands of drones to agree on a trajectory simultaneously. Due to its decentralized nature, the system would also work better in real-world environments where drones may fly far from a central computer.
    With MADER, each drone optimizes a new trajectory using an algorithm that incorporates the trajectories it has received from other agents. By continually optimizing and broadcasting their new trajectories, the drones avoid collisions.

    But perhaps one agent shared its new trajectory several seconds ago and a fellow agent didn’t receive it right away because the communication was delayed. In real-world environments, signals are often delayed by interference from other devices or environmental factors like stormy weather. Due to this unavoidable delay, a drone might inadvertently commit to a new trajectory that sets it on a collision course.
    Robust MADER prevents such collisions because each agent has two trajectories available. It keeps one trajectory that it knows is safe, which it has already checked for potential collisions. While following that original trajectory, the drone optimizes a new trajectory but does not commit to the new trajectory until it completes a delay-check step.
    During the delay-check period, the drone spends a fixed amount of time repeatedly checking for communications from other agents to see if its new trajectory is safe. If it detects a potential collision, it abandons the new trajectory and starts the optimization process over again.
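    In schematic Python, the delay-check logic reads roughly as follows. The class, method names, timing value, and collision test are simplifications for illustration; they are not the published Robust MADER implementation.

      import time

      # Schematic sketch of the delay-check idea (not the published algorithm).
      class Agent:
          def __init__(self, committed_traj, delay_check_s=0.5):
              self.committed = committed_traj     # trajectory already known to be safe
              self.delay_check_s = delay_check_s  # tuned to agent spacing and link quality

          def try_commit(self, candidate, receive_peer_trajs, conflicts):
              """Keep flying `committed` while vetting `candidate` for a fixed period.

              receive_peer_trajs(): returns any trajectories newly received from peers.
              conflicts(a, b): returns True if trajectories a and b could collide.
              """
              deadline = time.monotonic() + self.delay_check_s
              while time.monotonic() < deadline:
                  for peer_traj in receive_peer_trajs():
                      if conflicts(candidate, peer_traj):
                          return False            # abandon candidate and re-optimize
                  time.sleep(0.01)                # meanwhile, the old safe trajectory is flown
              self.committed = candidate          # delay check passed: commit
              return True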
    The length of the delay-check period depends on the distance between agents and environmental factors that could hamper communications, Kondo says. If the agents are many miles apart, for instance, then the delay-check period would need to be longer.
    Completely collision-free
    The researchers tested their new approach by running hundreds of simulations in which they artificially introduced communication delays. In each simulation, Robust MADER was 100 percent successful at generating collision-free trajectories, while all the baselines caused crashes.
    The researchers also built six drones and two aerial obstacles and tested Robust MADER in a multiagent flight environment. They found that, while using the original version of MADER in this environment would have resulted in seven collisions, Robust MADER did not cause a single crash in any of the hardware experiments.
    “Until you actually fly the hardware, you don’t know what might cause a problem. Because we know that there is a difference between simulations and hardware, we made the algorithm robust, so it worked in the actual drones, and seeing that in practice was very rewarding,” Kondo says.
    Drones were able to fly 3.4 meters per second with Robust MADER, although they had a slightly longer average travel time than some baselines. But no other method was perfectly collision-free in every experiment.
    In the future, Kondo and his collaborators want to put Robust MADER to the test outdoors, where many obstacles and types of noise can affect communications. They also want to outfit drones with visual sensors so they can detect other agents or obstacles, predict their movements, and include that information in trajectory optimizations.
    This work was supported by Boeing Research and Technology.

  • Can AI predict how you'll vote in the next election?

    Artificial intelligence technologies like ChatGPT are seemingly doing everything these days: writing code, composing music, and even creating images so realistic you’ll think they were taken by professional photographers. Add thinking and responding like a human to the conga line of capabilities. A recent study from BYU suggests that artificial intelligence can respond to complex survey questions much like a real human.
    To determine the possibility of using artificial intelligence as a substitute for human respondents in survey-style research, a team of political science and computer science professors and graduate students at BYU tested the accuracy of a GPT-3 language model programmed to mimic the complicated relationships among human ideas, attitudes, and sociocultural contexts of subpopulations.
    In one experiment, the researchers created artificial personas by assigning the AI certain characteristics like race, age, ideology, and religiosity, and then tested whether the artificial personas would vote the same way humans did in the 2012, 2016, and 2020 U.S. presidential elections. Using the American National Election Studies (ANES) as their comparative human database, they found a high correspondence between how the AI and humans voted.
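    The summary does not reproduce the study’s prompts, so the snippet below is only a hedged illustration of how a demographic persona could be turned into a prompt for a GPT-3-class model; the wording and the query_llm placeholder are assumptions, not the authors’ setup.

      # Illustrative sketch only: composing a persona-conditioned prompt of the kind
      # described above. query_llm is a placeholder, not a real API call.

      def persona_prompt(race, age, ideology, religiosity, election_year):
          return (
              f"I am a {age}-year-old {race} American. Politically I consider myself "
              f"{ideology}, and religion is {religiosity} in my life. In the "
              f"{election_year} U.S. presidential election, I voted for"
          )

      def query_llm(prompt):
          # Stand-in for a call to a large language model.
          return "<model completion goes here>"

      prompt = persona_prompt("white", 45, "moderately conservative",
                              "very important", 2016)
      print(prompt)
      print(query_llm(prompt))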
    “I was absolutely surprised to see how accurately it matched up,” said David Wingate, BYU computer science professor, and co-author on the study. “It’s especially interesting because the model wasn’t trained to do political science — it was just trained on a hundred billion words of text downloaded from the internet. But the consistent information we got back was so connected to how people really voted.”
    In another experiment, they conditioned artificial personas to offer responses from a list of options in an interview-style survey, again using the ANES as their human sample. They found high similarity between nuanced patterns in human and AI responses.
    This innovation holds exciting prospects for researchers, marketers, and pollsters. Researchers envision a future where artificial intelligence is used to craft better survey questions, refining them to be more accessible and representative, and even to simulate populations that are difficult to reach. It could also be used to test surveys, slogans, and taglines as a precursor to focus groups.
    “We’re learning that AI can help us understand people better,” said BYU political science professor Ethan Busby. “It’s not replacing humans, but it is helping us more effectively study people. It’s about augmenting our ability rather than replacing it. It can help us be more efficient in our work with people by allowing us to pre-test our surveys and our messaging.”
    And while the expansive possibilities of large language models are intriguing, the rise of artificial intelligence poses a host of questions — how much does AI really know? Which populations will benefit from this technology and which will be negatively impacted? And how can we protect ourselves from scammers and fraudsters who will manipulate AI to create more sophisticated phishing scams?
    While much of that is still to be determined, the study lays out a set of criteria that future researchers can use to determine how accurate an AI model is for different subject areas.
    “We’re going to see positive benefits because it’s going to unlock new capabilities,” said Wingate, noting that AI can help people in many different jobs be more efficient. “We’re also going to see negative things happen because sometimes computer models are inaccurate and sometimes they’re biased. It will continue to churn society.”
    Busby says surveying artificial personas shouldn’t replace the need to survey real people and that academics and other experts need to come together to define the ethical boundaries of artificial intelligence surveying in research related to social science.

  • New chip design to provide greatest precision in memory to date

    Everyone is talking about the newest AI and the power of neural networks, forgetting that software is limited by the hardware on which it runs. But it is hardware, says USC Professor of Electrical and Computer Engineering Joshua Yang, that has become “the bottleneck.” Now, Yang’s new research with collaborators might change that. They believe that they have developed a new type of chip with the best memory of any chip thus far for edge AI (AI in portable devices).
    For approximately the past 30 years, while the size of the neural networks needed for AI and data science applications doubled every 3.5 months, the hardware capability needed to process them doubled only every 3.5 years. According to Yang, hardware presents an increasingly severe problem for which few have patience.
    Governments, industry, and academia are trying to address this hardware challenge worldwide. Some continue to work on hardware solutions with silicon chips, while others are experimenting with new types of materials and devices. Yang’s work falls into the middle — focusing on exploiting and combining the advantages of the new materials and traditional silicon technology that could support heavy AI and data science computation.
    Their new paper in Nature focuses on the fundamental physics that leads to a drastic increase in the memory capacity needed for AI hardware. The team led by Yang, with researchers from USC (including Han Wang’s group), MIT, and the University of Massachusetts, developed a protocol for devices to reduce “noise” and demonstrated the practicality of using this protocol in integrated chips. This demonstration was made at TetraMem, a startup company co-founded by Yang and his co-authors (Miao Hu, Qiangfei Xia, and Glenn Ge), to commercialize AI acceleration technology. According to Yang, this new memory chip has the highest information density per device (11 bits) among all types of known memory technologies thus far. Such small but powerful devices could play a critical role in bringing incredible power to the devices in our pockets. The chips are not just for memory but also for processing. And millions of them working in parallel in a small chip to rapidly run AI tasks could require only a small battery to power them.
    The chips that Yang and his colleagues are creating combine silicon with metal oxide memristors in order to create powerful but energy-efficient chips. The technique focuses on using the positions of atoms to represent information rather than the number of electrons (which is the current technique used in computations on chips). The positions of the atoms offer a compact and stable way to store more information in an analog, rather than digital, fashion. Moreover, the information can be processed where it is stored instead of being sent to one of the few dedicated ‘processors,’ eliminating the so-called ‘von Neumann bottleneck’ in current computing systems. In this way, says Yang, computing for AI is “more energy efficient with a higher throughput.”
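    To make the in-memory computing idea concrete, here is a toy numerical sketch of how a memristor crossbar evaluates a matrix-vector product in the analog domain: weights are stored as device conductances, input voltages drive the rows, and each column current sums the products automatically by Kirchhoff’s current law. The sizes and values are illustrative assumptions, not TetraMem’s design.

      import numpy as np

      # Toy model of analog in-memory compute in a memristor crossbar (illustrative
      # only): weight w[i, j] is stored as conductance G[i, j]; applying row
      # voltages V gives column currents I[j] = sum_i G[i, j] * V[i], i.e. a
      # matrix-vector product carried out where the data is stored.

      rng = np.random.default_rng(0)
      G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # conductances in siemens
      V = np.array([0.2, 0.0, 0.1, 0.3])         # input voltages in volts

      I = V @ G                                   # column currents in amperes
      print(I)

      # At 11 bits per device, as reported for the new chip, each conductance
      # could in principle take one of 2**11 = 2048 distinguishable levels.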
    How it works
    Yang explains that the electrons manipulated in traditional chips are “light.” This lightness makes them prone to moving around, and the information they store is therefore more volatile. Instead of storing memory through electrons, Yang and collaborators store it in full atoms. Here is why this memory matters. Normally, says Yang, when one turns off a computer, the information in memory is gone; if you need that information to run a new computation, you have lost both time and energy getting it back. This new method, which stores information in atoms rather than electrons, does not require battery power to maintain it. Similar scenarios arise in AI computations, where a stable memory with high information density is crucial. Yang imagines this new technology may enable powerful AI capabilities in edge devices such as Google Glass, which he says previously suffered from frequent recharging issues.
    Further, chips that rely on atoms as opposed to electrons can be made smaller. Yang adds that with this new method there is more computing capacity at a smaller scale. And this method, he says, could offer “many more levels of memory to help increase information density.”
    To put it in context: right now, ChatGPT runs in the cloud. This innovation, followed by some further development, could put the power of a mini version of ChatGPT in everyone’s personal device. It could make such high-powered tech more affordable and accessible for all sorts of applications.

  • AI could set a new bar for designing hurricane-resistant buildings

    Being able to withstand hurricane-force winds is the key to a long life for many buildings on the Eastern Seaboard and Gulf Coast of the U.S. Determining the right level of winds to design for is tricky business, but support from artificial intelligence may offer a simple solution.
    Equipped with 100 years of hurricane data and modern AI techniques, researchers at the National Institute of Standards and Technology (NIST) have devised a new method of digitally simulating hurricanes. The results of a study published today in Artificial Intelligence for the Earth Systems demonstrate that the simulations can accurately represent the trajectory and wind speeds of a collection of actual storms. The authors suggest that simulating numerous realistic hurricanes with the new approach can help to develop improved guidelines for the design of buildings in hurricane-prone regions.
    State and local laws that regulate building design and construction — more commonly known as building codes — point designers to standardized maps. On these maps, engineers can find the level of wind their structure must handle based on its location and its relative importance (i.e., the bar is higher for a hospital than for a self-storage facility). The wind speeds in the maps are derived from scores of hypothetical hurricanes simulated by computer models, which are themselves based on real-life hurricane records.
    “Imagine you had a second Earth, or a thousand Earths, where you could observe hurricanes for 100 years and see where they hit on the coast, how intense they are. Those simulated storms, if they behave like real hurricanes, can be used to create the data in the maps almost directly,” said NIST mathematical statistician Adam Pintar, a study co-author.
    The researchers who developed the latest maps did so by simulating the complex inner workings of hurricanes, which are influenced by physical parameters such as sea surface temperatures and the Earth’s surface roughness. However, the requisite data on these specific factors is not always readily available.
    More than a decade after those maps were developed, advances in AI-based tools and years of additional hurricane records have made an unprecedented approach possible, one that could result in more realistic hurricane wind maps down the road.

    NIST postdoctoral researcher Rikhi Bose, together with Pintar and NIST Fellow Emil Simiu, used these new techniques and resources to tackle the issue from a different angle. Rather than having their model mathematically build a storm from the ground up, the authors of the new study taught it to mimic actual hurricane data with machine learning, Pintar said.
    Studying for a physics exam by only looking at the questions and answers of previous assignments may not play out in a student’s favor, but for powerful AI-based techniques, this type of approach could be worthwhile.
    With enough quality information to study, machine-learning algorithms can construct models based on patterns they uncover within datasets that other methods may miss. Those models can then simulate specific behaviors, such as the wind strength and movement of a hurricane.
    In the new research, the study material came in the form of the National Hurricane Center’s Atlantic Hurricane Database (HURDAT2), which contains information about hurricanes going back more than 100 years, such as the coordinates of their paths and their wind speeds.
    The researchers split data on more than 1,500 storms into sets for training and testing their model. When challenged with concurrently simulating the trajectory and wind of historical storms it had not seen before, the model scored highly.
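    As a hedged sketch of that general recipe, the toy example below stands a random-walk dataset in for HURDAT2, fits a model that predicts a storm’s state six hours ahead, and rolls it forward to produce a synthetic track. The features, model class, and step size are assumptions for illustration only, not NIST’s method (which, among other things, splits the data by storm rather than by row).

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      # Illustrative sketch only (not NIST's model): learn to predict a storm's
      # state [latitude, longitude, max wind] six hours ahead, then roll the
      # model forward to simulate a hypothetical track. Toy random-walk data
      # stands in for the HURDAT2 records.

      rng = np.random.default_rng(1)
      states = rng.uniform([10, -80, 30], [35, -40, 140], size=(2000, 3))
      next_states = states + rng.normal([0.3, 0.5, 0.0], [0.2, 0.3, 5.0], states.shape)

      X_train, X_test, y_train, y_test = train_test_split(
          states, next_states, test_size=0.25, random_state=0)
      model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)
      print("held-out R^2:", round(model.score(X_test, y_test), 3))

      # Generate one hypothetical 5-day track (20 six-hour steps).
      state = np.array([15.0, -50.0, 60.0])
      for _ in range(20):
          state = model.predict(state.reshape(1, -1))[0]
      print("simulated endpoint:", np.round(state, 2))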

    “It performs very well. Depending on where you’re looking at along the coast, it would be quite difficult to identify a simulated hurricane from a real one, honestly,” Pintar said.
    They also used the model to generate sets of 100 years’ worth of hypothetical storms. It produced the simulations in a matter of seconds, and the authors saw a large degree of overlap with the general behavior of the HURDAT2 storms, suggesting that their model could rapidly produce collections of realistic storms.
    However, there were some discrepancies, such as in the Northeastern coastal states. In these regions, HURDAT2 data was sparse, and thus, the model generated less realistic storms.
    “Hurricanes are not as frequent in, say, Boston as in Miami, for example. The less data you have, the larger the uncertainty of your predictions,” Simiu said.
    As a next step, the team plans to use simulated hurricanes to develop coastal maps of extreme wind speeds as well as quantify uncertainty in those estimated speeds.
    Since the model’s understanding of storms is limited to historical data for now, it cannot simulate the effects that climate change will have on storms of the future. The traditional approach of simulating storms from the ground up is better suited to that task. However, in the short term, the authors are confident that wind maps based on their model — which is less reliant on elusive physical parameters than other models are — would better reflect reality.
    Within the next several years they aim to produce and propose new maps for inclusion in building standards and codes.

  • Machine learning model helps forecasters improve confidence in storm prediction

    When severe weather is brewing and life-threatening hazards like heavy rain, hail or tornadoes are possible, advance warning and accurate predictions are of utmost importance. Colorado State University weather researchers have given storm forecasters a powerful new tool to improve confidence in their forecasts and potentially save lives.
    Over the last several years, Russ Schumacher, professor in the Department of Atmospheric Science and Colorado State Climatologist, has led a team developing a sophisticated machine learning model for advancing skillful prediction of hazardous weather across the continental United States. First trained on historical records of excessive rainfall, the model is now smart enough to make accurate predictions of events like tornadoes and hail four to eight days in advance — the crucial sweet spot for forecasters to get information out to the public so they can prepare. The model is called CSU-MLP, or Colorado State University-Machine Learning Probabilities.
    Led by research scientist Aaron Hill, who has worked on refining the model for the last two-plus years, the team recently published their medium-range (four to eight days) forecasting ability in the American Meteorological Society journal Weather and Forecasting.
    Working with Storm Prediction Center forecasters
    The researchers have now teamed with forecasters at NOAA’s Storm Prediction Center in Norman, Oklahoma, to test the model and refine it based on practical considerations from actual weather forecasters. The tool is not a stand-in for the invaluable skill of human forecasters, but rather provides an agnostic, confidence-boosting measure to help forecasters decide whether to issue public warnings about potentially hazardous weather.
    “Our statistical models can benefit operational forecasters as a guidance product, not as a replacement,” Hill said.

    Israel Jirak, M.S. ’02, Ph.D. ’05, is science and operations officer at the Storm Prediction Center and co-author of the paper. He called the collaboration with the CSU team “a very successful research-to-operations project.”
    “They have developed probabilistic machine learning-based severe weather guidance that is statistically reliable and skillful while also being practically useful for forecasters,” Jirak said. The forecasters in Oklahoma are using the CSU guidance product daily, particularly when they need to issue medium-range severe weather outlooks.
    Nine years of historical weather data
    The model is trained on a very large dataset containing about nine years of detailed historical weather observations over the continental U.S. These data are combined with meteorological retrospective forecasts, which are model “re-forecasts” created from outcomes of past weather events. The CSU researchers pulled the environmental factors from those model forecasts and associated them with past events of severe weather like tornadoes and hail. The result is a model that can run in real time with current weather events and produce a probability of those types of hazards with a four- to eight-day lead time, based on current environmental factors like temperature and wind.
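    The sketch below illustrates, under stated assumptions, the kind of pipeline that paragraph describes: environmental predictors paired with past severe-weather labels, a model fit to them, and a hazard probability produced for a new forecast point. The predictor choices, the synthetic labels, and the classifier are illustrative; they are not the CSU-MLP implementation.

      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier

      # Illustrative sketch only (not CSU-MLP): map reforecast environmental
      # predictors to the probability that severe weather (tornadoes, hail)
      # was reported, then apply the fitted model to a new forecast point.

      rng = np.random.default_rng(2)
      n = 5000
      # Assumed predictors: instability (CAPE), deep-layer shear, 2 m temperature.
      X = np.column_stack([rng.gamma(2.0, 800.0, n),   # CAPE (J/kg)
                           rng.uniform(0, 40, n),      # 0-6 km shear (m/s)
                           rng.normal(22, 6, n)])      # temperature (deg C)
      # Toy labels: severe reports become likelier with high CAPE and shear.
      p_true = 1 / (1 + np.exp(-(X[:, 0] / 1500 + X[:, 1] / 15 - 3)))
      y = rng.random(n) < p_true

      model = GradientBoostingClassifier().fit(X, y)

      # Probability of severe weather at one hypothetical day-6 forecast point.
      day6_env = np.array([[2500.0, 25.0, 27.0]])
      print("hazard probability:", round(model.predict_proba(day6_env)[0, 1], 2))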
    Ph.D. student Allie Mazurek is working on the project and is seeking to understand which atmospheric data inputs are the most important to the model’s predictive capabilities. “If we can better decompose how the model is making its predictions, we can hopefully better diagnose why the model’s predictions are good or bad during certain weather setups,” she said.
    Hill and Mazurek are working to make the model not only more accurate, but also more understandable and transparent for the forecasters using it.
    For Hill, it’s most gratifying to know that years of work refining the machine learning tool are now making a difference in a public, operational setting.
    “I love fundamental research. I love understanding new things about our atmosphere. But having a system that is providing improved warnings and improved messaging around the threat of severe weather is extremely rewarding,” Hill said.

  • Can a solid be a superfluid? Engineering a novel supersolid state from layered 2D materials

    A collaboration of Australian and European physicists predicts that layered electronic 2D semiconductors can host a curious quantum phase of matter called the supersolid.
    The supersolid is a very counterintuitive phase indeed. It is made up of particles that form a rigid crystal and yet, at the same time, flow without friction, since all the particles belong to the same single quantum state.
    A solid becomes ‘super’ when its quantum properties match the well-known quantum properties of superconductors. A supersolid simultaneously has two orders, solid and super: solid because of the spatially repeating pattern of particles, super because the particles can flow without resistance. “Although a supersolid is rigid, it can flow like a liquid without resistance,” explains lead author Dr Sara Conti (University of Antwerp).
    The study was conducted at UNSW (Australia), University of Antwerp (Belgium) and University of Camerino (Italy).
    A 50-Year Journey Towards the Exotic Supersolid
    Geoffrey Chester, a professor at Cornell University, predicted in 1970 that solid helium-4 under pressure should at low temperatures display two kinds of order at once: crystalline solid order, with each helium atom at a specific point in a regularly ordered lattice, and Bose-Einstein condensation of the atoms, with every atom in the same single quantum state so that they flow without resistance.

    However, in the five decades since, the Chester supersolid has not been unambiguously detected.
    Alternative approaches to forming a supersolid-like state have reported supersolid-like phases in cold-atom systems in optical lattices. These are either clusters of condensates or condensates with varying density determined by the trapping geometries. These supersolid-like phases should be distinguished from the original Chester supersolid in which each single particle is localised in its place in the crystal lattice purely by the forces acting between the particles.
    The new Australia-Europe study predicts that such a state could instead be engineered in two-dimensional (2D) electronic materials in a semiconductor structure, fabricated with two conducting layers separated by an insulating barrier of thickness d.
    One layer is doped with negatively-charged electrons and the other with positively-charged holes.
    The particles forming the supersolid are interlayer excitons, bound states of an electron and hole tied together by their strong electrical attraction. The insulating barrier prevents fast self-annihilation of the exciton bound pairs. Voltages applied to top and bottom metal ‘gates’ tune the average separation r0 between excitons.

    The research team predicts that excitons in this structure will form a supersolid over a wide range of layer separations and average separations between the excitons. The electrical repulsion between the excitons can constrain them into a fixed crystalline lattice.
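    For a sense of the interaction scale behind that rigidity: treating each interlayer exciton as an electric dipole of length d (the barrier thickness), two excitons at in-plane distance r much larger than d repel approximately as in the standard dipole-dipole expression below. This estimate is added for context and is not quoted from the paper.

      % Approximate repulsion between two interlayer excitons (SI units),
      % each an electron-hole pair separated by the barrier thickness d,
      % at in-plane distance r >> d, in a medium of dielectric constant eps_r:
      V(r) \approx \frac{e^{2} d^{2}}{4\pi \varepsilon_{0} \varepsilon_{r}\, r^{3}}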
    “A key novelty is that a supersolid phase with Bose-Einstein quantum coherence appears at layer separations much smaller than the separation predicted for the non-super exciton solid that is driven by the same electrical repulsion between excitons,” says co-corresponding author Prof David Neilson (University of Antwerp).
    “In this way, the supersolid pre-empts the non-super exciton solid. At still larger separations, the non-super exciton solid eventually wins, and the quantum coherence collapses.”
    “This is an extremely robust state, readily achievable in experimental setups,” adds co-corresponding author Prof Alex Hamilton (UNSW). “Ironically, the layer separations are relatively large and are easier to fabricate than the extremely small layer separations in such systems that have been the focus of recent experiments aimed at maximising the interlayer exciton binding energies.”
    As for detection: it is well known that a superfluid cannot be set rotating until it can host a quantum vortex, analogous to a whirlpool. Forming such a vortex requires a finite amount of energy, and hence a sufficiently strong rotational force, so up to that point the measured rotational moment of inertia (the extent to which an object resists rotational acceleration) remains zero. In the same way, a supersolid can be identified by detecting such an anomaly in its rotational moment of inertia.
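    This rotational anomaly is conventionally quantified through the non-classical rotational inertia fraction: the measured moment of inertia drops below the classical rigid-body value by the superfluid (here, supersolid) fraction. The relation below is standard background added for context, not a result from the paper.

      % Non-classical rotational inertia: the measured moment of inertia I is
      % reduced relative to the classical rigid-body value I_cl by the
      % superfluid fraction f_s.
      I = (1 - f_{s})\, I_{\mathrm{cl}}, \qquad
      f_{s} = \frac{I_{\mathrm{cl}} - I}{I_{\mathrm{cl}}}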
    The research team has reported the complete phase diagram of this system at low temperatures.
    “By changing the layer separation relative to the average exciton spacing, the strength of the exciton-exciton interactions can be tuned to stabilise either the superfluid, or the supersolid, or the normal solid,” says Dr Sara Conti.
    “The existence of a triple point is also particularly intriguing. At this point, the boundaries of supersolid and normal-solid melting, and the supersolid to normal-solid transition, all cross. There should be exciting physics coming from the exotic interfaces separating these domains, for example, Josephson tunnelling between supersolid puddles embedded in a normal background.”