More stories

  • New algorithm flies drones faster than human racing pilots can

    To be useful, drones need to be quick. Because of their limited battery life, they must complete whatever task they have — searching for survivors at a disaster site, inspecting a building, delivering cargo — in the shortest possible time. And they may have to do it by passing through a series of waypoints, such as windows, rooms, or specific locations to inspect, adopting the best trajectory and the right acceleration or deceleration on each segment.
    Algorithm outperforms professional pilots
    The best human drone pilots are very good at doing this and have so far always outperformed autonomous systems in drone racing. Now, a research group at the University of Zurich (UZH) has created an algorithm that can find the quickest trajectory to guide a quadrotor — a drone with four propellers — through a series of waypoints on a circuit. “Our drone beat the fastest lap of two world-class human pilots on an experimental race track,” says Davide Scaramuzza, who heads the Robotics and Perception Group at UZH and the Rescue Robotics Grand Challenge of the NCCR Robotics, which funded the research.
    “The novelty of the algorithm is that it is the first to generate time-optimal trajectories that fully consider the drones’ limitations,” says Scaramuzza. Previous works relied on simplifications of either the quadrotor system or the description of the flight path, and thus they were sub-optimal. “The key idea is, rather than assigning sections of the flight path to specific waypoints, that our algorithm just tells the drone to pass through all waypoints, but not how or when to do that,” adds Philipp Foehn, PhD student and first author of the paper.
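    The paper's full time-optimal formulation is beyond a short example, but the general shape of the problem can be sketched as a generic minimum-time trajectory optimization: minimize total flight time subject to dynamic limits and waypoint constraints. The sketch below is only illustrative, uses a 1-D point mass with made-up gate positions and acceleration limit, and, unlike the UZH method, fixes in advance when each waypoint is reached.

    ```python
    # Illustrative only: a generic minimum-time trajectory optimization for a 1-D
    # point mass passing through waypoints. Positions and limits are hypothetical;
    # the real algorithm lets the optimizer decide when each waypoint is passed.
    import numpy as np
    from scipy.optimize import minimize

    waypoints = np.array([0.0, 4.0, 1.0, 6.0])   # hypothetical gate positions (m)
    N = 40                                        # trajectory samples
    a_max = 5.0                                   # assumed acceleration limit (m/s^2)

    def unpack(z):
        return z[0], z[1:]                        # time step, positions

    def total_time(z):
        dt, _ = unpack(z)
        return (N - 1) * dt

    def acceleration_margin(z):
        dt, x = unpack(z)
        acc = (x[2:] - 2 * x[1:-1] + x[:-2]) / dt ** 2   # finite-difference acceleration
        return a_max - np.abs(acc)                       # must stay non-negative

    def waypoint_errors(z):
        _, x = unpack(z)
        idx = np.linspace(0, N - 1, len(waypoints)).astype(int)  # fixed passage indices (a simplification)
        return x[idx] - waypoints

    z0 = np.concatenate(([0.1], np.linspace(waypoints[0], waypoints[-1], N)))
    res = minimize(total_time, z0, method="SLSQP",
                   bounds=[(1e-3, 1.0)] + [(None, None)] * N,
                   constraints=[{"type": "ineq", "fun": acceleration_margin},
                                {"type": "eq", "fun": waypoint_errors}])
    dt, x = unpack(res.x)
    print(f"optimized flight time: {(N - 1) * dt:.2f} s")
    ```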
    External cameras provide position information in real-time
    The researchers had the algorithm and two human pilots fly the same quadrotor through a race circuit. They employed external cameras to precisely capture the motion of the drones and — in the case of the autonomous drone — to give real-time information to the algorithm on where the drone was at any moment. To ensure a fair comparison, the human pilots were given the opportunity to train on the circuit before the race. But the algorithm won: all its laps were faster than the human ones, and the performance was more consistent. This is not surprising, because once the algorithm has found the best trajectory it can reproduce it faithfully many times, unlike human pilots.
    Before commercial applications, the algorithm will need to become less computationally demanding, as it currently takes up to an hour for a computer to calculate the time-optimal trajectory for the drone. Also, for now, the drone relies on external cameras to compute its position in real time; in future work, the scientists want to use onboard cameras instead. But the demonstration that an autonomous drone can, in principle, fly faster than human pilots is promising. “This algorithm can have huge applications in package delivery with drones, inspection, search and rescue, and more,” says Scaramuzza.
    Story Source:
    Materials provided by University of Zurich. Note: Content may be edited for style and length.

  • Rounding errors could make certain stopwatches pick wrong race winners, researchers show

    As the Summer Olympics draw near, the world will shift its focus to photo finishes and races determined by mere fractions of a second. Obtaining such split-second measurements relies on faultlessly rounding a raw time recorded by a stopwatch or electronic timing system to a submitted time.
    Researchers at the University of Surrey found that certain stopwatches commit rounding errors when converting raw times into final submitted times. In the American Journal of Physics, published by AIP Publishing, David Faux and Janet Godolphin describe a series of computer simulations based on the procedures used to convert raw race times for display.
    Faux was inspired when he encountered the issue firsthand while volunteering at a swim meet. While helping enter times into the computer, he noticed that a large share of the entered times were rounded to either the nearest half-second or the nearest full second.
    “Later, when the frequencies of the digit pairs were plotted, a distinct pattern emerged,” he said. “We discovered that the distribution of digit pairs was statistically inconsistent with the hypothesis that each digit pair was equally likely, as one would expect from stopwatches.”
    Stopwatches and electronic timing systems use quartz oscillators to measure time intervals, with each oscillation counted as 0.0001 seconds. These raw times are then processed for display to 0.01 seconds, for example to the public at a sporting venue.
    Faux and Godolphin set to work simulating roughly 3 million race times corresponding to swimmers of all ages and abilities. As expected, the raw times indicated that each fraction of a second had the same chance of being a race time: for example, there was a 1% chance that a time ended in 0.55 seconds and the same chance that it ended in 0.60 seconds.
    When they processed raw times through the standard display routine, the uniform distribution disappeared. Most times were correctly displayed.
    Where rounding errors occurred, they usually resulted in changes of one one-hundredth of a second. One raw time of 28.3194 was converted to a displayed time of 28.21.
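    The press release does not give the display routine itself, but the effect is easy to reproduce with a plausible one. The sketch below is an assumption, not the authors' code: it treats a raw time as an integer count of 0.0001-second ticks, converts it to seconds with floating-point arithmetic, truncates to hundredths for display, and compares the result with exact decimal arithmetic.

    ```python
    # A plausible reconstruction, not the authors' routine: count quartz ticks of
    # 0.0001 s, convert with floating-point arithmetic, truncate to 0.01 s for
    # display, and check against exact decimal truncation.
    import random
    from decimal import Decimal, ROUND_DOWN
    from math import floor

    def displayed_time(ticks: int) -> float:
        seconds = ticks * 0.0001            # floating-point conversion (assumed)
        return floor(seconds * 100) / 100   # truncate to hundredths for the display

    def exact_time(ticks: int) -> Decimal:
        return (Decimal(ticks) / 10000).quantize(Decimal("0.01"), rounding=ROUND_DOWN)

    random.seed(0)
    trials, errors = 1_000_000, 0           # a smaller run than the paper's ~3 million
    for _ in range(trials):
        ticks = random.randint(200_000, 600_000)    # raw times between 20 s and 60 s
        if Decimal(str(displayed_time(ticks))) != exact_time(ticks):
            errors += 1
    print(f"{errors} of {trials} simulated times were displayed one hundredth off")
    ```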
    “The question we really need to answer is whether rounding errors are uncorrected in electronic timing systems used in sporting events worldwide,” Faux said. “We have so far been unable to unearth the actual algorithm that is used to translate a count of quartz oscillations to a display.”
    The researchers collected more than 30,000 race times from swimming competitions and will investigate if anomalous timing patterns appear in the collection, which would suggest the potential for rounding errors in major sporting events.
    The article, “The floating point: Rounding error in timing devices,” by David A. Faux and Janet Godolphin, appears in the American Journal of Physics.
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Wearable brain-machine interface turns intentions into actions

    A new wearable brain-machine interface (BMI) system could improve the quality of life for people with motor dysfunction or paralysis, even those struggling with locked-in syndrome — when a person is fully conscious but unable to move or communicate.
    A multi-institutional, international team of researchers led by the lab of Woon-Hong Yeo at the Georgia Institute of Technology combined wireless soft scalp electronics and virtual reality in a BMI system that allows the user to imagine an action and wirelessly control a wheelchair or robotic arm.
    The team, which included researchers from the University of Kent (United Kingdom) and Yonsei University (Republic of Korea), describes the new motor imagery-based BMI system this month in the journal Advanced Science.
    “The major advantage of this system to the user, compared to what currently exists, is that it is soft and comfortable to wear, and doesn’t have any wires,” said Yeo, associate professor in the George W. Woodruff School of Mechanical Engineering.
    BMI systems are rehabilitation technologies that analyze a person’s brain signals and translate that neural activity into commands, turning intentions into actions. The most common non-invasive method for acquiring those signals is electroencephalography (EEG), which typically requires a cumbersome electrode skull cap and a tangled web of wires.
    These devices rely heavily on gels and pastes to maintain skin contact, require extensive set-up times, and are generally inconvenient and uncomfortable to use. They also often suffer from poor signal acquisition due to material degradation or motion artifacts — the ancillary “noise” that can be caused by something like teeth grinding or eye blinking. This noise shows up in the brain data and must be filtered out.
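    As a rough illustration of the kind of clean-up such systems need (and not the processing used in this study), the following sketch band-pass filters a simulated EEG trace to keep the 8-30 Hz motor-imagery band while suppressing a slow, large eye-blink artifact. The sampling rate, amplitudes and frequencies are invented for the example.

    ```python
    # Generic EEG clean-up sketch (not the Georgia Tech pipeline): keep the 8-30 Hz
    # motor-imagery band and suppress a slow eye-blink artifact. All signal
    # parameters are hypothetical.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 250.0                                       # assumed sampling rate (Hz)
    t = np.arange(0.0, 4.0, 1.0 / fs)
    rng = np.random.default_rng(0)

    mu_rhythm = 10e-6 * np.sin(2 * np.pi * 12 * t)               # 12 Hz signal of interest
    blink = 80e-6 * np.exp(-((t - 2.0) ** 2) / 0.01)             # slow, large blink artifact
    noise = 2e-6 * rng.standard_normal(t.size)
    raw = mu_rhythm + blink + noise

    b, a = butter(4, [8.0, 30.0], btype="bandpass", fs=fs)       # 4th-order Butterworth
    clean = filtfilt(b, a, raw)                                  # zero-phase filtering

    print(f"peak amplitude before: {np.max(np.abs(raw)) * 1e6:.1f} uV, "
          f"after: {np.max(np.abs(clean)) * 1e6:.1f} uV")
    ```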

  • Novel method predicts if COVID-19 clinical trials will fail or succeed

    In order to win the battle against COVID-19, studies to develop vaccines, drugs, devices and re-purposed drugs are urgently needed. Randomized clinical trials are used to provide evidence of safety and efficacy as well as to better understand this novel and evolving virus. As of July 15, more than 6,180 COVID-19 clinical trials have been registered through ClinicalTrials.gov, the national registry and database for privately and publicly funded clinical studies conducted around the world. Knowing which ones are likely to succeed is imperative.
    Researchers from Florida Atlantic University’s College of Engineering and Computer Science are the first to model completion versus cessation of COVID-19 clinical trials using machine learning algorithms and ensemble learning. The study, published in PLOS ONE, provides the most extensive set of features yet for clinical trial reports, including features that model trial administration, study information and design, eligibility, keywords, drugs and other characteristics.
    This research shows that computational methods can deliver effective models for understanding the difference between completed and ceased COVID-19 trials. In addition, these models can also predict COVID-19 trial status with satisfactory accuracy.
    Because COVID-19 is a relatively novel disease, very few trials have been formally terminated. For the study, the researchers therefore considered three types of trials as cessation trials: terminated, withdrawn, and suspended. These trials represent research efforts that were halted for particular reasons, and resources that did not pay off.
    “The main purpose of our research was to predict whether a COVID-19 clinical trial will be completed or terminated, withdrawn or suspended. Clinical trials involve a great deal of resources and time, including planning and recruiting human subjects,” said Xingquan “Hill” Zhu, Ph.D., senior author and a professor in the Department of Computer and Electrical Engineering and Computer Science, who conducted the research with first author Magdalyn “Maggie” Elkin, a second-year Ph.D. student in computer science who also works full-time. “If we can predict the likelihood of whether a trial might be terminated or not down the road, it will help stakeholders better plan their resources and procedures. Eventually, such computational approaches may help our society save time and resources to combat the global COVID-19 pandemic.”
    For the study, Zhu and Elkin collected 4,441 COVID-19 trials from ClinicalTrials.gov to build a testbed. They designed four types of features (statistics features, keyword features, drug features and embedding features) to characterize clinical trial administration, eligibility, study information, criteria, drug types and study keywords, along with embedding features commonly used in state-of-the-art machine learning. In total, 693 features were created to represent each clinical trial. For comparison, the researchers used four models: a neural network, random forest, XGBoost, and logistic regression.
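    The study's own code and data layout are not reproduced here, but the general experimental setup can be sketched with scikit-learn: build a feature matrix, train several model families, and score each with AUC and balanced accuracy under cross-validation. The sketch below uses a synthetic, imbalanced dataset as a stand-in for the 693 trial features and omits XGBoost to avoid an extra dependency.

    ```python
    # Schematic comparison of model families (not the FAU code): synthetic data
    # stands in for the 693-dimensional clinical trial features and the
    # completed-versus-ceased labels.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_validate

    X, y = make_classification(n_samples=4441, n_features=693, n_informative=50,
                               weights=[0.9, 0.1], random_state=0)   # imbalanced classes

    models = {
        "logistic regression": LogisticRegression(max_iter=2000),
        "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "neural network": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
    }

    for name, model in models.items():
        scores = cross_validate(model, X, y, cv=5,
                                scoring=["roc_auc", "balanced_accuracy"])
        print(f"{name}: AUC={scores['test_roc_auc'].mean():.3f}, "
              f"balanced accuracy={scores['test_balanced_accuracy'].mean():.3f}")
    ```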
    Feature selection and ranking showed that keyword features, derived from the MeSH (medical subject headings) terms of the clinical trial reports, were the most informative for COVID-19 trial prediction, followed by drug features, statistics features and embedding features. Although drug features and study keywords were the most informative, all four types of features are essential for accurate trial prediction.
    Using ensemble learning and sampling, the model in this study achieved area under the curve (AUC) scores above 0.87 and balanced accuracy above 0.81, indicating the high efficacy of computational methods for COVID-19 clinical trial prediction. Results also showed single models with balanced accuracy as high as 70 percent and an F1-score of 50.49 percent, suggesting that modeling clinical trials works best when research areas or diseases are segregated.
    “Clinical trials that have stopped for various reasons are costly and often represent a tremendous loss of resources. As future outbreaks of COVID-19 are likely even after the current pandemic has declined, it is critical to optimize efficient research efforts,” said Stella Batalama, Ph.D., dean, College of Engineering and Computer Science. “Machine learning and AI-driven computational approaches have been developed for COVID-19 health care applications, and deep learning techniques have been applied to medical image processing to predict outbreaks, track virus spread, and support COVID-19 diagnosis and treatment. The new approach developed by Professor Zhu and Maggie will be helpful in designing computational approaches to predict whether or not a COVID-19 clinical trial will be completed, so that stakeholders can leverage the predictions to plan resources, reduce costs, and minimize the time of the clinical study.”
    The study was funded by a National Science Foundation award to Zhu.
    Story Source:
    Materials provided by Florida Atlantic University. Original written by Gisele Galoustian. Note: Content may be edited for style and length.

  • Cancer: Information theory to fight resistance to treatments

    One of the major challenges in modern cancer therapy is the adaptive response of cancer cells to targeted therapies: initially, these therapies are very often effective, but adaptive resistance then sets in, allowing the tumor cells to proliferate again. Although this adaptive response is theoretically reversible, such a reversal is hampered by numerous molecular mechanisms that allow the cancer cells to adapt to the treatment. Analysis of these mechanisms is limited by the complexity of cause-and-effect relationships that are extremely difficult to observe in vivo in tumor samples. To overcome this challenge, a team from the University of Geneva (UNIGE) and the University Hospitals of Geneva (HUG), Switzerland, has used information theory for the first time to objectively characterize, in vivo, the molecular regulations at play in the mechanisms of the adaptive response and their modulation by a therapeutic combination. These results are published in the journal Neoplasia.
    The adaptive response limits the efficacy of the targeted therapies used to fight tumor development: after an effective treatment phase that reduces the tumor size, an adaptation to the drug occurs that allows the tumor cells to proliferate again. “We now know that this resistance to treatment has a large reversible component that does not involve mutations, which are an irreversible process,” explains Rastine Merat, a researcher in the Department of Pathology and Immunology at the UNIGE Faculty of Medicine, head of the Onco-Dermatology Unit at the HUG and principal investigator of the study.
    Research confronted with the complexity of biological regulations
    To prevent resistance to targeted therapies, scientists need to understand the molecular mechanisms of the adaptive response. “These mechanisms may involve variations in gene expression, for example,” explains Rastine Merat. It is then necessary to counteract or prevent these variations with a therapeutic combination. One challenge remains: the description of these mechanisms, and of their modulation by a therapeutic combination, is very often carried out on isolated cultured cells and not validated in tumor tissue in the body. “This is essentially due to the difficulty of demonstrating these mechanisms objectively, as they may occur in a transient manner and only in a minority of cells in tumor tissue, and above all involve non-linear cause-and-effect relationships,” explains the Geneva researcher.
    Applying information theory to tumors
    To counter these difficulties, the UNIGE and HUG team came up with the idea of using information theory, more specifically of quantifying mutual information. This approach has previously been used in biology, mainly to quantify cell signaling and to understand gene regulatory networks. “This statistical method makes it possible to link two parameters involved in a mechanism by measuring the reduction in the uncertainty of one parameter when the value of the other is known,” explains Rastine Merat.
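    For readers unfamiliar with the quantity, mutual information measures, in bits or nats, how much knowing one variable reduces the uncertainty about the other. The sketch below is not the study's pipeline; it only illustrates how mutual information can be estimated from two per-cell protein readouts, here simulated and binned into a joint histogram.

    ```python
    # Minimal mutual-information estimate between two simulated protein readouts
    # (illustrative only; the markers and their relationship are invented).
    import numpy as np
    from sklearn.metrics import mutual_info_score

    rng = np.random.default_rng(1)
    n_cells = 5000
    protein_a = rng.normal(size=n_cells)                            # hypothetical marker A
    protein_b = 0.6 * protein_a + 0.8 * rng.normal(size=n_cells)    # partially driven by A

    # Discretize each readout into 10 intensity bins, then estimate I(A; B) in nats.
    edges_a = np.quantile(protein_a, np.linspace(0, 1, 11)[1:-1])
    edges_b = np.quantile(protein_b, np.linspace(0, 1, 11)[1:-1])
    mi = mutual_info_score(np.digitize(protein_a, edges_a), np.digitize(protein_b, edges_b))
    print(f"estimated mutual information: {mi:.3f} nats")
    ```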
    In practice, the scientists proceed step by step: they take biopsies of tumors (in this case melanomas) in a mouse model at different stages of their development during therapy. Using immunohistochemical analyses of tumor sections, they measure, with an automated approach, the expression of proteins involved in the adaptive response. “The proposed mathematical approach is easily applicable to routine techniques such as immunohistochemistry and makes it possible to validate in vivo the relevance of the mechanisms under study, even if they occur in a minority of cells and in a transient manner,” the Geneva researcher explains. In this way, scientists can validate in the organism not only the molecular mechanisms they are studying, but also the impact of innovative therapeutic combinations that arise from an understanding of these mechanisms. “Similarly, we could use this approach in therapeutic trials as a predictive marker of response to therapeutic combinations that seek to prevent adaptive resistance,” he continues.
    A method suitable for all types of cancer
    “This method, developed in a melanoma model, could be applied to other types of cancer for which the same issues of adaptive resistance to targeted therapies occur and for which combination therapy approaches based on an understanding of the mechanisms involved are under development,” concludes Rastine Merat.
    Story Source:
    Materials provided by Université de Genève. Note: Content may be edited for style and length.

  • Using ultra-low temperatures to understand high-temperature superconductivity

    At low temperatures, certain materials lose their electrical resistance and conduct electricity without any loss — this phenomenon of superconductivity has been known since 1911, but it is still not fully understood. And that is a pity, because finding a material that would still have superconducting properties even at high temperatures would probably trigger a technological revolution.
    A discovery made at TU Wien (Vienna) could be an important step in this direction: A team of solid-state physicists studied an unusual material — a so-called “strange metal” made of ytterbium, rhodium and silicon. Strange metals show an unusual relationship between electrical resistance and temperature. In the case of this material, this correlation can be seen in a particularly wide temperature range, and the underlying mechanism is known. Contrary to previous assumptions, it now turns out that this material is also a superconductor and that superconductivity is closely related to strange metal behaviour. This could be the key to understanding high-temperature superconductivity in other classes of materials as well.
    Strange metal: linear relationship between resistance and temperature
    In ordinary metals, electrical resistance at low temperatures increases with the square of the temperature. In some high-temperature superconductors, however, the situation is completely different: at low temperatures, below the so-called superconducting transition temperature, they show no electrical resistance at all, and above this temperature the resistance increases linearly instead of quadratically with temperature. This is what defines “strange metals.”
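    In the standard textbook notation (not quoted from the study), the contrast is between a low-temperature resistivity that grows quadratically and one that grows linearly above the superconducting transition temperature Tc:

    ```latex
    \rho_{\text{ordinary}}(T) \approx \rho_0 + A\,T^{2}
    \qquad\text{versus}\qquad
    \rho_{\text{strange}}(T) \approx \rho_0 + A'\,T \;\; (T > T_c),
    \quad \rho_{\text{strange}}(T) = 0 \;\; (T < T_c).
    ```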
    “It has therefore already been suspected in recent years that this linear relationship between resistance and temperature is of great importance for superconductivity,” says Prof. Silke Bühler-Paschen, who heads the research area “Quantum Materials” at the Institute of Solid State Physics at TU Wien. “But unfortunately, until now we didn’t know of a suitable material to study this in great depth.” In the case of high-temperature superconductors, the linear relationship between temperature and resistance is usually only detectable in a relatively small temperature range, and, furthermore, various effects that inevitably occur at higher temperatures can influence this relationship in complicated ways.
    Many experiments have already been carried out with an exotic material (YbRh2Si2) that displays strange metal behaviour over an extremely wide temperature range — but, surprisingly, no superconductivity seemed to emerge from this extreme “strange metal” state. “Theoretical considerations have already been put forward to justify why superconductivity is simply not possible here,” says Silke Bühler-Paschen. “Nevertheless, we decided to take another look at this material.”
    Record-breaking temperatures
    At TU Wien, a particularly powerful low-temperature laboratory is available. “There we can study materials under more extreme conditions than other research groups have been able to do so far,” explains Silke Bühler-Paschen. First, the team was able to show that in YbRh2Si2 the linear relationship between resistance and temperature exists in an even larger temperature range than previously thought — and then they made the key discovery: at extremely low temperatures of only one millikelvin, the strange metal turns into a superconductor.
    “This makes our material ideally suited for finding out in what way the strange metal behaviour leads to superconductivity,” says Silke Bühler-Paschen.
    Paradoxically, the very fact that the material only becomes superconducting at very low temperatures ensures that it can be used to study high-temperature superconductivity particularly well: “The mechanisms that lead to superconductivity are visible particularly well at these extremely low temperatures because they are not overlaid by other effects in this regime. In our material, this is the localisation of some of the conduction electrons at a quantum critical point. There are indications that a similar mechanism may also be responsible for the behaviour of high-temperature superconductors such as the famous cuprates,” says Silke Bühler-Paschen.
    Story Source:
    Materials provided by Vienna University of Technology. Note: Content may be edited for style and length.

  • New technology shows promise in detecting, blocking grid cyberattacks

    Researchers from Idaho National Laboratory and New Mexico-based Visgence Inc. have designed and demonstrated a technology that can block cyberattacks from impacting the nation’s electric power grid.
    During a recent live demonstration at INL’s Critical Infrastructure Test Range Complex, the Constrained Cyber Communication Device (C3D) was tested against a series of remote access attempts indicative of a cyberattack. The device alerted operators to the abnormal commands and blocked them automatically, preventing the attacks from accessing and damaging critical power grid components.
    “Protecting our critical infrastructure from foreign adversaries is a key component in the department’s national security posture,” said Patricia Hoffman, acting assistant secretary for the U.S. Department of Energy. “It’s accomplishments like this that expand our efforts to strengthen our electric system against threats while mitigating vulnerabilities. Leveraging the capabilities of Idaho National Laboratory and the other national laboratories will accelerate the modernization of our grid hardware, protecting us from cyberattacks.”
    The C3D device uses advanced communication capabilities to autonomously review and filter commands being sent to protective relay devices. Relays are the heart and soul of the nation’s power grid and are designed to rapidly command breakers to turn off the flow of electricity when a disturbance is detected. For instance, relays can prevent expensive equipment from being damaged when a power line fails because of a severe storm.
    However, relays are not traditionally designed to block the speed and stealthiness of a cyberattack, which can send wild commands to grid equipment in milliseconds. To prevent this kind of attack, an intelligent and automatic filtering technology is needed.
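    The C3D's internal logic has not been published, so the following is only a toy illustration of what command filtering for a protective relay can look like: commands outside a whitelist are rejected, and bursts of breaker-trip commands arriving faster than any human operator would send them are blocked. Every name and threshold here is invented.

    ```python
    # Toy relay-command filter (illustrative only; not the C3D design): enforce a
    # command whitelist and a rate limit on breaker trips.
    import time
    from collections import deque
    from typing import Optional

    ALLOWED_COMMANDS = {"read_status", "trip_breaker", "close_breaker"}
    MAX_TRIPS_PER_MINUTE = 2                 # invented operational policy

    recent_trips = deque()

    def filter_command(command: str, now: Optional[float] = None) -> bool:
        """Return True if the command may be forwarded to the relay."""
        now = time.time() if now is None else now
        if command not in ALLOWED_COMMANDS:
            return False                                   # unknown command: block
        if command == "trip_breaker":
            while recent_trips and now - recent_trips[0] > 60:
                recent_trips.popleft()                     # forget trips older than a minute
            if len(recent_trips) >= MAX_TRIPS_PER_MINUTE:
                return False                               # burst of trips: likely an attack
            recent_trips.append(now)
        return True

    # A burst of trip commands a few milliseconds apart gets cut off after the limit.
    for i in range(5):
        print(i, filter_command("trip_breaker", now=i * 0.005))
    ```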
    “As cyberattacks against the nation’s critical infrastructure have grown more sophisticated, there is a need for a device to provide a last line of defense against threats,” said INL program manager Jake Gentle. “The C3D device sits deep inside a utility’s network, monitoring and blocking cyberattacks before they impact relay operations.”
    To test the technology’s effectiveness, researchers spent nearly a year collaborating with industry experts, including longtime partners from Power Engineers, an international engineering and environmental consulting firm. INL and the Department of Energy also established an industry advisory board consisting of power grid and cybersecurity experts from across the federal government, private industry and academia.
    After thoroughly assessing industry needs and analyzing the makeup of modern cyber threats, researchers designed an electronic device that could be wired into a protective relay’s communication network. Then they constructed a 36-foot mobile substation and connected it to INL’s full-scale electric power grid test bed to establish an at-scale power grid environment.
    With the entire system online, researchers sent a sudden power spike command to the substation relays and monitored the effects from a nearby command center. Instantly, the C3D device blocked the command and prevented the attack from damaging the larger grid.
    The development of the device was funded by DOE’s Office of Electricity under the Protective Relay Permission Communication project. The technology and an associated software package will undergo further testing over the next several months before being made available for licensing to private industry.
    Story Source:
    Materials provided by DOE/Idaho National Laboratory. Note: Content may be edited for style and length.

  • Machine learning models to help photovoltaic systems find their place in the sun

    Although photovoltaic systems constitute a promising way of harnessing solar energy, power grid managers need to accurately predict their power output to schedule generation and maintenance operations efficiently. Scientists from Incheon National University, Korea, have developed a machine learning-based approach that can more accurately estimate the output of photovoltaic systems than similar algorithms, paving the way to a more sustainable society.
    With the looming threat of climate change, it is high time we embrace renewable energy sources on a larger scale. Photovoltaic systems, which generate electricity from the nearly limitless supply of sunlight energy, are one of the most promising ways of generating clean energy. However, integrating photovoltaic systems into existing power grids is not a straightforward process. Because the power output of photovoltaic systems depends heavily on environmental conditions, power plant and grid managers need estimations of how much power will be injected by photovoltaic systems so as to plan optimal generation and maintenance schedules, among other important operational aspects.
    In line with modern trends, if something needs predicting, you can safely bet that artificial intelligence will make an appearance. To date, there are many algorithms that can estimate the power produced by photovoltaic systems several hours ahead by learning from previous data and analyzing current variables. One of them, called adaptive neuro-fuzzy inference system (ANFIS), has been widely applied for forecasting the performance of complex renewable energy systems. Since its inception, many researchers have combined ANFIS with a variety of machine learning algorithms to improve its performance even further.
    In a recent study published in Renewable and Sustainable Energy Reviews, a research team led by Jong Wan Hu from Incheon National University, Korea, developed two new ANFIS-based models to better estimate the power generated by photovoltaic systems ahead of time by up to a full day. These two models are ‘hybrid algorithms’ because they combine the traditional ANFIS approach with two different particle swarm optimization methods, which are powerful and computationally efficient strategies for finding optimal solutions to optimization problems.
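    Particle swarm optimization itself is simple to sketch. The example below is not the authors' ANFIS hybrids: a small swarm fits the two parameters of a toy irradiance-to-power model by minimizing the mean squared forecast error on synthetic data, whereas in the paper the swarm instead tunes the parameters of an ANFIS forecaster.

    ```python
    # Minimal global-best particle swarm optimization fitting a toy PV forecast
    # model (gain and offset). All data and constants are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    irradiance = rng.uniform(100, 1000, size=200)                     # hypothetical W/m^2
    true_power = 0.18 * irradiance + 5.0 + rng.normal(0, 3, 200)      # synthetic PV output

    def forecast_error(params):
        gain, offset = params
        return np.mean((gain * irradiance + offset - true_power) ** 2)

    n_particles, n_iters = 30, 100
    pos = rng.uniform([-1.0, -20.0], [1.0, 20.0], size=(n_particles, 2))
    vel = np.zeros_like(pos)
    best_pos = pos.copy()
    best_val = np.array([forecast_error(p) for p in pos])
    g_best = best_pos[best_val.argmin()].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (best_pos - pos) + 1.5 * r2 * (g_best - pos)
        pos = pos + vel
        vals = np.array([forecast_error(p) for p in pos])
        improved = vals < best_val
        best_pos[improved], best_val[improved] = pos[improved], vals[improved]
        g_best = best_pos[best_val.argmin()].copy()

    print(f"fitted gain={g_best[0]:.3f}, offset={g_best[1]:.2f}, MSE={best_val.min():.2f}")
    ```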
    To assess the performance of their models, the team compared them with other ANFIS-based hybrid algorithms. They tested the predictive abilities of each model using real data from an actual photovoltaic system deployed in Italy in a previous study. The results, as Dr. Hu remarks, were very promising: “One of the two models we developed outperformed all the hybrid models tested, and hence showed great potential for predicting the photovoltaic power of solar systems at both short- and long-time horizons.”
    The findings of this study could have immediate implications in the field of photovoltaic systems from software and production perspectives. “In terms of software, our models can be turned into applications that accurately estimate photovoltaic system values, leading to enhanced performance and grid operation. In terms of production, our methods can translate into a direct increase in photovoltaic power by helping select variables that can be used in the photovoltaic system’s design,” explains Dr. Hu. Let us hope this work helps us in the transition to sustainable energy sources!
    Story Source:
    Materials provided by Incheon National University. Note: Content may be edited for style and length.