More stories

  • Helping doctors manage COVID-19

    Artificial intelligence (AI) technology developed by researchers at the University of Waterloo is capable of assessing the severity of COVID-19 cases with a promising degree of accuracy.
    The study, part of the COVID-Net open-source initiative launched more than a year ago, involved researchers from Waterloo and spin-off start-up company DarwinAI, as well as radiologists at the Stony Brook School of Medicine and the Montefiore Medical Center in New York.
    Deep-learning AI was trained to analyze the extent and opacity of infection in the lungs of COVID-19 patients based on chest x-rays. Its scores were then compared to assessments of the same x-rays by expert radiologists.
    For both extent and opacity, two key indicators of infection severity, the predictions made by the AI software aligned well with the scores provided by the human experts.
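    The paper reports how closely the network’s scores track the radiologists’ assessments; as a hedged illustration of that kind of comparison (the score arrays below are invented for illustration, not the study’s data), agreement can be summarized with a simple correlation:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical severity scores for five chest x-rays (invented numbers).
ai_extent      = np.array([2.1, 5.4, 7.0, 1.2, 4.8])   # model-predicted geographic extent
expert_extent  = np.array([2.0, 5.0, 6.5, 1.5, 5.0])   # radiologist-assigned extent
ai_opacity     = np.array([1.8, 4.9, 6.2, 1.0, 4.1])   # model-predicted opacity
expert_opacity = np.array([2.0, 4.5, 6.0, 1.2, 4.4])   # radiologist-assigned opacity

# Pearson correlation as one simple measure of AI-versus-expert agreement.
for name, ai, expert in [("extent", ai_extent, expert_extent),
                         ("opacity", ai_opacity, expert_opacity)]:
    r, _ = pearsonr(ai, expert)
    print(f"{name}: r = {r:.2f}")
```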
    Alexander Wong, a systems design engineering professor and co-founder of DarwinAI, said the technology could give doctors an important tool to help them manage cases.
    “Assessing the severity of a patient with COVID-19 is a critical step in the clinical workflow for determining the best course of action for treatment and care, be it admitting the patient to ICU, giving a patient oxygen therapy, or putting a patient on a mechanical ventilator,” Wong said.
    “The promising results in this study show that artificial intelligence has a strong potential to be an effective tool for supporting frontline healthcare workers in their decisions and improving clinical efficiency, which is especially important given how much stress the ongoing pandemic has placed on healthcare systems around the world.”
    A paper on the research, “Towards computer-aided severity assessment via deep neural networks for geographic and opacity extent scoring of SARS-CoV-2 chest X-rays,” appears in the journal Scientific Reports.
    Story Source:
    Materials provided by University of Waterloo. Original written by Brian Caldwell. Note: Content may be edited for style and length.

  • Driving in the snow is a team effort for AI sensors

    Nobody likes driving in a blizzard, including autonomous vehicles. To make self-driving cars safer on snowy roads, engineers look at the problem from the car’s point of view.
    A major challenge for fully autonomous vehicles is navigating bad weather. Snow especially confounds crucial sensor data that helps a vehicle gauge depth, find obstacles and keep on the correct side of the yellow line, assuming it is visible. Averaging more than 200 inches of snow every winter, Michigan’s Keweenaw Peninsula is the perfect place to push autonomous vehicle tech to its limits. In two papers presented at SPIE Defense + Commercial Sensing 2021, researchers from Michigan Technological University discuss solutions for snowy driving scenarios that could help bring self-driving options to snowy cities like Chicago, Detroit, Minneapolis and Toronto.
    Just like the weather at times, autonomy is not a sunny or snowy yes-no designation. Autonomous vehicles cover a spectrum of levels, from cars already on the market with blind spot warnings or braking assistance, to vehicles that can switch in and out of self-driving modes, to others that can navigate entirely on their own. Major automakers and research universities are still tweaking self-driving technology and algorithms. Occasionally accidents occur, either due to a misjudgment by the car’s artificial intelligence (AI) or a human driver’s misuse of self-driving features.
    Humans have sensors, too: our scanning eyes, our sense of balance and movement, and the processing power of our brain help us understand our environment. These seemingly basic inputs allow us to drive in virtually every scenario, even if it is new to us, because human brains are good at generalizing novel experiences. In autonomous vehicles, two cameras mounted on gimbals scan and perceive depth using stereo vision to mimic human vision, while balance and motion can be gauged using an inertial measurement unit. But computers can only react to scenarios they have encountered before or been programmed to recognize.
    Since artificial brains aren’t around yet, task-specific artificial intelligence (AI) algorithms must take the wheel — which means autonomous vehicles must rely on multiple sensors. Fisheye cameras widen the view while other cameras act much like the human eye. Infrared picks up heat signatures. Radar can see through the fog and rain. Light detection and ranging (lidar) pierces through the dark and weaves a neon tapestry of laser beam threads.
    “Every sensor has limitations, and every sensor covers another one’s back,” said Nathir Rawashdeh, assistant professor of computing in Michigan Tech’s College of Computing and one of the study’s lead researchers. He works on bringing the sensors’ data together through an AI process called sensor fusion.
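    The article does not spell out the fusion algorithm the Michigan Tech team uses; as a rough, hypothetical sketch of the general idea, noisy distance estimates from several sensors can be combined by weighting each reading by how much it is trusted, so a snow-blinded sensor contributes little:

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Inverse-variance weighted fusion of independent sensor readings.
    A sensor degraded by snow (large variance) contributes less to the result."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(weights * estimates) / np.sum(weights))

# Hypothetical distance-to-obstacle readings (meters) in a snow squall:
# lidar is badly scattered by falling snow, radar is barely affected.
readings  = {"camera": 41.0, "lidar": 55.0,  "radar": 39.5}
variances = {"camera": 9.0,  "lidar": 100.0, "radar": 1.0}

fused = fuse_estimates(list(readings.values()), list(variances.values()))
print(f"fused distance estimate: {fused:.1f} m")
```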

  • Mass gatherings during Malaysian election directly and indirectly boosted COVID-19 spread, study suggests

    New estimates suggest that mass gatherings during an election in the Malaysian state of Sabah directly caused 70 percent of COVID-19 cases detected in Sabah after the election, and indirectly caused 64.4 percent of cases elsewhere in Malaysia. Jue Tao Lim of the National University of Singapore, Kenwin Maung of the University of Rochester, New York, and colleagues present these findings in the open-access journal PLOS Computational Biology.
    Mass gatherings of people pose high risks of spreading COVID-19. However, it is difficult to accurately estimate the direct and indirect effects of such events on increased case counts.
    To address this difficulty, Lim, Maung, and colleagues developed a new computational method for estimating both direct and spill-over effects of mass gatherings. Departing from traditional epidemiological approaches, they employed a statistical strategy known as a synthetic control method, which enabled comparison between the aftermath of mass gatherings and what might have happened if the gatherings had not occurred.
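    The exact implementation is in the paper rather than this summary; a minimal sketch of the synthetic control idea, on invented data, is to find weights on unaffected control regions that reproduce the treated region’s pre-event case counts, then read the post-event gap between the real and synthetic series as the event’s effect:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical daily case counts: rows = days, columns = control regions.
rng = np.random.default_rng(0)
pre_controls  = rng.poisson(15, size=(60, 5)).astype(float)   # 60 pre-event days
post_controls = rng.poisson(18, size=(17, 5)).astype(float)   # 17 post-event days
pre_treated   = pre_controls @ np.array([0.4, 0.3, 0.1, 0.1, 0.1]) + rng.normal(0, 1, 60)
post_treated  = post_controls.mean(axis=1) * 10               # sharp post-event surge

# Find non-negative weights summing to 1 that best reproduce the treated
# region's pre-event trajectory from the control regions.
def loss(w):
    return np.sum((pre_treated - pre_controls @ w) ** 2)

n = pre_controls.shape[1]
res = minimize(loss, np.full(n, 1.0 / n),
               bounds=[(0, 1)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])

synthetic_post = post_controls @ res.x          # counterfactual "no event" series
effect = post_treated - synthetic_post          # estimated excess cases per day
print(f"estimated excess cases attributable to the event: {effect.sum():.0f}")
```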
    The researchers then applied this method to the Sabah state election. This election involved mandated in-person voting and political rallies, both of which resulted in a significant increase in inter-state travel and in-person gatherings by voters, politicians, and campaign workers. Prior to the election, Malaysia had experienced an average of about 16 newly diagnosed COVID-19 cases per day for nearly four months. After the election, that number jumped to 190 cases per day for 17 days until lockdown policies were reinstated.
    Using their novel method, the researchers estimated that mass gatherings during the election directly caused 70 percent of COVID-19 cases in Sabah during the 17 days after the election, amounting to a total of 2,979 cases. Meanwhile, 64.4 percent of post-election cases elsewhere in Malaysia — 1,741 cases total — were indirectly attributed to the election.
    “Our work underscores the serious risk that mass gatherings in a single region could spill over into other regions and cause a national-scale outbreak,” Lim says.
    Lim and colleagues say that the same synthetic control framework could be applied to death rates and genetic data to deepen understanding of the impact of the Sabah election.
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • The robot smiled back

    While our facial expressions play a huge role in building trust, most robots still sport the blank and static visage of a professional poker player. As robots are increasingly used in settings where they and humans must work closely together, from nursing homes to warehouses and factories, the need for a more responsive, facially realistic robot is growing more urgent.
    Long interested in the interactions between robots and humans, researchers in the Creative Machines Lab at Columbia Engineering have been working for five years to create EVA, a new autonomous robot with a soft and expressive face that responds to match the expressions of nearby humans. The research will be presented at the ICRA conference on May 30, 2021, and the robot blueprints are open-sourced on HardwareX (April 2021).
    “The idea for EVA took shape a few years ago, when my students and I began to notice that the robots in our lab were staring back at us through plastic, googly eyes,” said Hod Lipson, James and Sally Scapa Professor of Innovation (Mechanical Engineering) and director of the Creative Machines Lab.
    Lipson observed a similar trend in the grocery store, where he encountered restocking robots wearing name badges, and in one case, decked out in a cozy, hand-knit cap. “People seemed to be humanizing their robotic colleagues by giving them eyes, an identity, or a name,” he said. “This made us wonder, if eyes and clothing work, why not make a robot that has a super-expressive and responsive human face?”
    While this sounds simple, creating a convincing robotic face has been a formidable challenge for roboticists. For decades, robotic body parts have been made of metal or hard plastic, materials that were too stiff to flow and move the way human tissue does. Robotic hardware has been similarly crude and difficult to work with — circuits, sensors, and motors are heavy, power-intensive, and bulky.
    The first phase of the project began in Lipson’s lab several years ago when undergraduate student Zanwar Faraj led a team of students in building the robot’s physical “machinery.” They constructed EVA as a disembodied bust that bears a strong resemblance to the silent but facially animated performers of the Blue Man Group. EVA can express the six basic emotions of anger, disgust, fear, joy, sadness, and surprise, as well as an array of more nuanced emotions, by using artificial “muscles” (i.e., cables and motors) that pull on specific points on EVA’s face, mimicking the movements of the more than 42 tiny muscles attached at various points to the skin and bones of human faces.
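    The open-sourced blueprints hold the real actuation details; purely as a hypothetical sketch of how target expressions might be turned into cable-motor commands (the cable names and intensity values below are invented), a controller could store a pull pattern per emotion and blend smoothly between them:

```python
import numpy as np

# Hypothetical: each expression is a vector of pull intensities (0-1) for a
# small set of face cables; the real robot uses many more actuation points.
CABLES = ["brow_left", "brow_right", "mouth_corner_left", "mouth_corner_right", "jaw"]
EXPRESSIONS = {
    "neutral":  np.array([0.0, 0.0, 0.0, 0.0, 0.0]),
    "joy":      np.array([0.2, 0.2, 0.9, 0.9, 0.1]),
    "surprise": np.array([0.9, 0.9, 0.3, 0.3, 0.8]),
}

def blend(current, target, alpha=0.2):
    """Move a fraction of the way toward the target pose each control step,
    so the face transitions smoothly instead of snapping."""
    return (1 - alpha) * current + alpha * target

pose = EXPRESSIONS["neutral"]
for _ in range(10):                      # ten control steps toward "joy"
    pose = blend(pose, EXPRESSIONS["joy"])
print(dict(zip(CABLES, np.round(pose, 2))))
```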

  • Artificial neurons recognize biosignals in real time

    Current neural network algorithms produce impressive results that help solve an incredible number of problems. However, the electronic devices used to run these algorithms still require too much processing power. These artificial intelligence (AI) systems simply cannot compete with an actual brain when it comes to processing sensory information or interactions with the environment in real time.
    Neuromorphic chip detects high-frequency oscillations
    Neuromorphic engineering is a promising new approach that bridges the gap between artificial and natural intelligence. An interdisciplinary research team at the University of Zurich, ETH Zurich, and the University Hospital Zurich has used this approach to develop a chip based on neuromorphic technology that reliably and accurately recognizes complex biosignals. The scientists were able to use this technology to successfully detect previously recorded high-frequency oscillations (HFOs). These specific waves, measured using an intracranial electroencephalogram (iEEG), have proven to be promising biomarkers for identifying the brain tissue that causes epileptic seizures.
    Complex, compact and energy efficient
    The researchers first designed an algorithm that detects HFOs by simulating the brain’s natural neural network: a tiny so-called spiking neural network (SNN). The second step involved implementing the SNN in a fingernail-sized piece of hardware that receives neural signals by means of electrodes and which, unlike conventional computers, is massively energy efficient. This makes calculations with a very high temporal resolution possible, without relying on the internet or cloud computing. “Our design allows us to recognize spatiotemporal patterns in biological signals in real time,” says Giacomo Indiveri, professor at the Institute for Neuroinformatics of UZH and ETH Zurich.
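    The chip’s SNN design is not detailed here; a leaky integrate-and-fire neuron, the basic building block of spiking networks, can be simulated in a few lines (all parameters and the input trace below are illustrative, not those of the UZH/ETH hardware):

```python
import numpy as np

def lif_spikes(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    rest, integrates its input, and emits a spike whenever it crosses threshold."""
    v, spikes = 0.0, []
    for i in input_current:
        v += dt / tau * (-v + i)          # leaky integration
        if v >= v_thresh:                 # threshold crossing -> spike, then reset
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

# A burst of strong input (standing in for a high-frequency oscillation in an
# iEEG trace; purely synthetic numbers) drives the neuron to fire.
signal = np.concatenate([np.full(50, 0.5), np.full(20, 3.0), np.full(50, 0.5)])
print("spikes emitted during the burst:", lif_spikes(signal).sum())
```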
    Measuring HFOs in operating theaters and outside of hospitals
    The researchers are now planning to use their findings to create an electronic system that reliably recognizes and monitors HFOs in real time. When used as an additional diagnostic tool in operating theaters, the system could improve the outcome of neurosurgical interventions.
    However, this is not the only field where HFO recognition can play an important role. The team’s long-term target is to develop a device for monitoring epilepsy that could be used outside of the hospital and that would make it possible to analyze signals from a large number of electrodes over several weeks or months. “We want to integrate low-energy, wireless data communications in the design — to connect it to a cellphone, for example,” says Indiveri. Johannes Sarnthein, a neurophysiologist at the University Hospital Zurich, elaborates: “A portable or implantable chip such as this could identify periods with a higher or lower rate of incidence of seizures, which would enable us to deliver personalized medicine.” This research on epilepsy is being conducted at the Zurich Center of Epileptology and Epilepsy Surgery, which is run as part of a partnership between the University Hospital Zurich, the Swiss Epilepsy Clinic and the University Children’s Hospital Zurich.
    Story Source:
    Materials provided by University of Zurich. Note: Content may be edited for style and length.

  • Mathematical model developed to prevent botulism

    For years, food producers who make lightly preserved, ready-to-eat food have had to follow a set of guidelines to stop growth of Clostridium botulinum bacteria and production of a strong neurotoxin. The toxin can cause a serious illness called botulism.
    For refrigerated products, the guidelines for controlling Clostridium botulinum indicate that the water contained in the products should have a salt content of at least 3.5%. Unfortunately, this hampers efforts to develop salt-reduced products, even though such products would benefit public health, as most consumers eat more salt than recommended.
    Food producers wanting to launch products with less salt, for example, have so far had to conduct laboratory experiments to document that such a change in recipe will not compromise food safety. This is a time-consuming and costly process.
    Reduced need for costly product testing
    Researchers at the National Food Institute have now developed a mathematical model that replaces costly laboratory experiments, a tool the industry has been requesting for years. The new model can predict whether a particular recipe for chilled products can prevent the growth of Clostridium botulinum and production of the toxin.
    The model is the most comprehensive of its kind in the world and can show how storage temperature, pH, salt and the use of five different preservatives (such as acetic and lactic acids) affect potential bacterial growth and production of the toxin. Previous models have only incorporated the effect of half of these factors.
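    The National Food Institute model’s equations are not given in this summary; predictive microbiology models of this kind are often built on the so-called gamma concept, in which each environmental factor contributes a multiplicative term between 0 (growth fully blocked) and 1 (no inhibition). A rough sketch under that assumption, with invented cardinal values:

```python
def gamma_temperature(T, T_min=3.0, T_opt=37.0):
    """Cardinal-temperature-style term: 0 at or below T_min, 1 at T_opt.
    All cardinal values here are illustrative, not the validated model's."""
    if T <= T_min:
        return 0.0
    return min(((T - T_min) / (T_opt - T_min)) ** 2, 1.0)

def gamma_salt(water_phase_salt_pct, max_salt_pct=5.0):
    """Growth-inhibiting effect of salt in the water phase, reaching 0 at an
    illustrative maximum tolerated concentration."""
    if water_phase_salt_pct >= max_salt_pct:
        return 0.0
    return 1.0 - water_phase_salt_pct / max_salt_pct

def gamma_pH(pH, pH_min=5.0, pH_opt=7.0):
    if pH <= pH_min:
        return 0.0
    return min((pH - pH_min) / (pH_opt - pH_min), 1.0)

def relative_growth_rate(T, salt_pct, pH):
    """Gamma-concept combination: the factors multiply, so any single factor
    that is fully inhibitory (term = 0) predicts no growth."""
    return gamma_temperature(T) * gamma_salt(salt_pct) * gamma_pH(pH)

# A chilled, low-salt recipe: refrigeration keeps the predicted rate near zero
# even though the salt term alone would permit growth.
print(relative_growth_rate(T=5.0, salt_pct=2.0, pH=6.2))
```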

  • Spacetime crystals proposed by placing space and time on an equal footing

    A Penn State scientist studying crystal structures has developed a new mathematical formula that may solve a decades-old problem in understanding spacetime, the fabric of the universe proposed in Einstein’s theories of relativity.
    “Relativity tells us space and time can mix to form a single entity called spacetime, which is four-dimensional: three space-axes and one time-axis,” said Venkatraman Gopalan, professor of materials science and engineering and physics at Penn State. “However, something about the time-axis sticks out like a sore thumb.”
    For calculations to work within relativity, scientists must insert a negative sign on time values that they do not have to place on space values. Physicists have learned to work with the negative values, but it means that spacetime cannot be dealt with using traditional Euclidean geometry and instead must be viewed with the more complex hyperbolic geometry.
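    Concretely, the squared spacetime interval of special relativity carries that extra sign on the time term, unlike an ordinary Euclidean distance:

```latex
% Minkowski interval of special relativity: the time term enters with the
% opposite sign from the three space terms.
s^2 = -\,c^2 t^2 + x^2 + y^2 + z^2
% An ordinary four-dimensional Euclidean distance, by contrast, has all
% terms with the same sign:
d^2 = w^2 + x^2 + y^2 + z^2
```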
    Gopalan developed a two-step mathematical approach that allows the differences between space and time to be blurred, removing the negative-sign problem and serving as a bridge between the two geometries.
    “For more than 100 years, there has been an effort to put space and time on the same footing,” Gopalan said. “But that has really not happened because of this minus sign. This research removes that problem at least in special relativity. Space and time are truly on the same footing in this work.” The paper, published today (May 27) in the journal Acta Crystallographica A, is accompanied by a commentary in which two physicists write that Gopalan’s approach may hold the key to unifying quantum mechanics and gravity, two foundational fields of physics that are yet to be fully unified.
    “Gopalan’s idea of general relativistic spacetime crystals and how to obtain them is both powerful and broad,” said Martin Bojowald, professor of physics at Penn State. “This research, in part, presents a new approach to a problem in physics that has remained unresolved for decades.”
    In addition to providing a new approach to relate spacetime to traditional geometry, the research has implications for developing new structures with exotic properties, known as spacetime crystals.
    Crystals contain repeating arrangements of atoms, and in recent years scientists have explored the concept of time crystals, in which the state of a material changes and repeats in time as well, like a dance. However, time is disconnected from space in those formulations. The method developed by Gopalan would allow for a new class of spacetime crystals to be explored, where space and time can mix.
    “These possibilities could usher in an entirely new class of metamaterials with exotic properties otherwise not available in nature, besides understanding the fundamental attributes of a number of dynamical systems,” said Avadh Saxena, a physicist at Los Alamos National Laboratory.
    Gopalan’s method involves blending two separate observations of the same event. Blending occurs when two observers exchange time coordinates but keep their own space coordinates. With an additional mathematical step called renormalization, this leads to “renormalized blended spacetime.”
    “Let’s say I am on the ground and you are flying on the space station, and we both observe an event like a comet fly by,” Gopalan said. “You make your measurement of when and where you saw it, and I make mine of the same event, and then we compare notes. I then adopt your time measurement as my own, but I retain my original space measurement of the comet. You in turn adopt my time measurement as your own, but retain your own space measurement of the comet. From a mathematical point of view, if we do this blending of our measurements, the annoying minus sign goes away.”
    The National Science Foundation funded this research.
    Story Source:
    Materials provided by Penn State. Note: Content may be edited for style and length.

  • Improving computer vision for AI

    Researchers from UTSA, the University of Central Florida (UCF), the Air Force Research Laboratory (AFRL) and SRI International have developed a new method that improves how artificial intelligence learns to see.
    Led by Sumit Jha, professor in the Department of Computer Science at UTSA, the team has changed the conventional approach to explaining machine-learning decisions, which relies on a single injection of noise into the input layer of a neural network.
    The team shows that adding noise — also known as pixelation — along multiple layers of a network provides a more robust representation of an image that’s recognized by the AI and creates more robust explanations for AI decisions. This work aids in the development of what’s been called “explainable AI,” which seeks to enable high-assurance applications of AI such as medical imaging and autonomous driving.
    “It’s about injecting noise into every layer,” Jha said. “The network is now forced to learn a more robust representation of the input in all of its internal layers. If every layer experiences more perturbations in every training, then the image representation will be more robust and you won’t see the AI fail just because you change a few pixels of the input image.”
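    The paper’s exact architecture is not given in this summary; a minimal sketch of the general idea of perturbing every layer during training (layer sizes, noise level, and the PyTorch framing are illustrative assumptions, not the authors’ code) could look like this:

```python
import torch
import torch.nn as nn

class NoisyMLP(nn.Module):
    """Toy classifier that perturbs the activations of every layer while
    training, in the spirit of the multi-layer noise injection described above."""
    def __init__(self, noise_std=0.1):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Linear(784, 256), nn.Linear(256, 128), nn.Linear(128, 10)
        ])
        self.noise_std = noise_std

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if i < len(self.layers) - 1:
                x = torch.relu(x)
            if self.training:                 # noise in every layer, not just the input
                x = x + self.noise_std * torch.randn_like(x)
        return x

model = NoisyMLP()
model.train()
logits = model(torch.randn(32, 784))          # noisy forward pass on a dummy batch
```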
    Computer vision — the ability to recognize images — has many business applications. Computer vision can better identify areas of concern in the livers and brains of cancer patients. This type of machine learning can also be employed in many other industries. Manufacturers can use it to detect defect rates, drones can use it to help detect pipeline leaks, and agriculturists have begun using it to spot early signs of crop disease to improve their yields.
    Through deep learning, a computer is trained to perform behaviors, such as recognizing speech, identifying images or making predictions. Instead of organizing data to run through set equations, deep learning works within basic parameters about a data set and trains the computer to learn on its own by recognizing patterns using many layers of processing.
    The team’s work, led by Jha, is a major advancement to previous work he’s conducted in this field. In a 2019 paper presented at the AI Safety workshop co-located with that year’s International Joint Conference on Artificial Intelligence (IJCAI), Jha, his students and colleagues from the Oak Ridge National Laboratory demonstrated how poor conditions in nature can lead to dangerous neural network performance. A computer vision system was asked to recognize a minivan on a road, and did so correctly. His team then added a small amount of fog and posed the same query again to the network: the AI identified the minivan as a fountain. As a result, their paper was a best paper candidate.
    In most models that rely on neural ordinary differential equations (ODEs), a machine is trained with one input through one network, which then spreads through the hidden layers to create one response in the output layer. This team of UTSA, UCF, AFRL and SRI researchers uses a more dynamic approach known as stochastic differential equations (SDEs). By exploiting the connection with dynamical systems, they show that neural SDEs lead to less noisy, visually sharper, and quantitatively more robust attributions than those computed using neural ODEs.
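    In symbols, the difference between the two formulations is the added diffusion term that injects noise into the hidden-state dynamics (a standard textbook form, not notation taken from the paper):

```latex
% Neural ODE: deterministic evolution of the hidden state h(t)
\frac{\mathrm{d}h(t)}{\mathrm{d}t} = f_\theta\bigl(h(t), t\bigr)

% Neural SDE: the same learned drift plus a diffusion term driven by a
% Brownian motion W_t, which perturbs the dynamics throughout the network
\mathrm{d}h(t) = f_\theta\bigl(h(t), t\bigr)\,\mathrm{d}t + g_\theta\bigl(h(t), t\bigr)\,\mathrm{d}W_t
```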
    The SDE approach learns not just from one image but from a set of nearby images due to the injection of the noise in multiple layers of the neural network. As more noise is injected, the machine will learn evolving approaches and find better ways to make explanations or attributions simply because the model created at the onset is based on evolving characteristics and/or the conditions of the image. It’s an improvement on several other attribution approaches including saliency maps and integrated gradients.
    Jha’s new research is described in the paper “On Smoother Attributions using Neural Stochastic Differential Equations.” Fellow contributors to this novel approach include UCF’s Richard Ewetz, AFRL’s Alvaro Velazquez and SRI’s Sumit Jha. The lab is funded by the Defense Advanced Research Projects Agency, the Office of Naval Research and the National Science Foundation. Their research will be presented at the 2021 IJCAI, a conference with about a 14% acceptance rate for submissions. Past presenters at this highly selective conference have included Facebook and Google.
    “I am delighted to share the fantastic news that our paper on explainable AI has just been accepted at IJCAI,” Jha added. “This is a big opportunity for UTSA to be part of the global conversation on how a machine sees.”
    Story Source:
    Materials provided by University of Texas at San Antonio. Original written by Milady Nazir. Note: Content may be edited for style and length.