More stories

  • Artificial intelligence makes great microscopes better than ever

    To observe the swift neuronal signals in a fish brain, scientists have started to use a technique called light-field microscopy, which makes it possible to image such fast biological processes in 3D. But the images are often lacking in quality, and it takes hours or days for massive amounts of data to be converted into 3D volumes and movies.
    Now, EMBL scientists have combined artificial intelligence (AI) algorithms with two cutting-edge microscopy techniques — an advance that shortens the time for image processing from days to mere seconds, while ensuring that the resulting images are crisp and accurate. The findings are published in Nature Methods.
    “Ultimately, we were able to take ‘the best of both worlds’ in this approach,” says Nils Wagner, one of the paper’s two lead authors and now a PhD student at the Technical University of Munich. “AI enabled us to combine different microscopy techniques, so that we could image as fast as light-field microscopy allows and get close to the image resolution of light-sheet microscopy.”
    Although light-sheet microscopy and light-field microscopy sound similar, these techniques have different advantages and challenges. Light-field microscopy captures large 3D images that allow researchers to track and measure remarkably fine movements, such as a fish larva’s beating heart, at very high speeds. But this technique produces massive amounts of data, which can take days to process, and the final images usually lack resolution.
    Light-sheet microscopy homes in on a single 2D plane of a given sample at one time, so researchers can image samples at higher resolution. Compared with light-field microscopy, light-sheet microscopy produces images that are quicker to process, but the data are not as comprehensive, since they only capture information from a single 2D plane at a time.
    To take advantage of the benefits of each technique, EMBL researchers developed an approach that uses light-field microscopy to image large 3D samples and light-sheet microscopy to train the AI algorithms, which then create an accurate 3D picture of the sample.
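    As a rough illustration of how such a pipeline can be put together, the sketch below, which is not the EMBL team’s code, trains a small 3D convolutional network to map fast light-field reconstructions onto paired light-sheet volumes that serve as ground truth; the volume shapes, network size and placeholder data are assumptions made for the example.
    ```python
    # A rough sketch, not the EMBL team's code: train a small 3D convolutional network
    # to map fast light-field reconstructions onto paired light-sheet volumes used as
    # ground truth. Volume shapes, network size and the placeholder data are assumptions.
    import torch
    import torch.nn as nn

    class LightFieldToLightSheet(nn.Module):
        """Toy 3D CNN: light-field volume in, light-sheet-quality volume out."""
        def __init__(self, channels=8):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(channels, 1, kernel_size=3, padding=1),
            )

        def forward(self, volume):
            return self.net(volume)

    model = LightFieldToLightSheet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    # lf_volume: fast light-field reconstruction; ls_volume: paired light-sheet ground truth.
    lf_volume = torch.randn(1, 1, 16, 32, 32)   # placeholder data (batch, channel, z, y, x)
    ls_volume = torch.randn(1, 1, 16, 32, 32)

    for step in range(10):                      # toy training loop
        optimizer.zero_grad()
        loss = loss_fn(model(lf_volume), ls_volume)
        loss.backward()
        optimizer.step()
    ```
    Once trained on enough paired volumes, a network along these lines can be applied to new light-field acquisitions in a fraction of a second, which is consistent with the shortening of processing time from days to seconds described above.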
    “If you build algorithms that produce an image, you need to check that these algorithms are constructing the right image,” explains Anna Kreshuk, the EMBL group leader whose team brought machine learning expertise to the project. In the new study, the researchers used light-sheet microscopy to make sure the AI algorithms were working, Anna says. “This makes our research stand out from what has been done in the past.”
    Robert Prevedel, the EMBL group leader whose group contributed the novel hybrid microscopy platform, notes that the real bottleneck in building better microscopes often isn’t optics technology, but computation. That’s why, back in 2018, he and Anna decided to join forces. “Our method will be really key for people who want to study how brains compute. Our method can image an entire brain of a fish larva, in real time,” Robert says.
    He and Anna say this approach could potentially be modified to work with different types of microscopes too, eventually allowing biologists to look at dozens of different specimens and see much more, much faster. For example, it could help to find genes that are involved in heart development, or could measure the activity of thousands of neurons at the same time.
    Next, the researchers plan to explore whether the method can be applied to larger species, including mammals.

  • Researchers develop artificial intelligence that can detect sarcasm in social media

    Computer science researchers at the University of Central Florida have developed a sarcasm detector.
    Social media has become a dominant form of communication for individuals, and for companies looking to market and sell their products and services. Properly understanding and responding to customer feedback on Twitter, Facebook and other social media platforms is critical for success, but it is incredibly labor intensive.
    That’s where sentiment analysis comes in. The term refers to the automated process of identifying the emotion — positive, negative or neutral — conveyed by a piece of text. Where artificial intelligence refers broadly to logical data analysis and response, sentiment analysis is concerned with correctly identifying emotional communication. A UCF team has developed a technique that accurately detects sarcasm in social media text.
    The team’s findings were recently published in the journal Entropy.
    In effect, the team taught the computer model to find patterns that often indicate sarcasm, and to pick out the cue words within a sequence that make sarcasm more likely. They trained the model by feeding it large data sets and then checked its accuracy.
    “The presence of sarcasm in text is the main hindrance in the performance of sentiment analysis,” says Assistant Professor of engineering Ivan Garibay ’00MS ’04PhD. “Sarcasm isn’t always easy to identify in conversation, so you can imagine it’s pretty challenging for a computer program to do it and do it well. We developed an interpretable deep learning model using multi-head self-attention and gated recurrent units. The multi-head self-attention module aids in identifying crucial sarcastic cue-words from the input, and the recurrent units learn long-range dependencies between these cue-words to better classify the input text.”
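    A minimal sketch of that kind of architecture is shown below; it is not the authors’ implementation, and the vocabulary size, dimensions and head count are assumptions. Token embeddings pass through multi-head self-attention, whose weights hint at influential cue words, a GRU then models dependencies along the sequence, and a linear head scores the text as sarcastic or not.
    ```python
    # Hedged sketch of a multi-head self-attention + GRU classifier (not the authors' code).
    import torch
    import torch.nn as nn

    class SarcasmClassifier(nn.Module):
        def __init__(self, vocab_size=10000, embed_dim=128, heads=4, hidden=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.attn = nn.MultiheadAttention(embed_dim, heads, batch_first=True)
            self.gru = nn.GRU(embed_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, tokens):                 # tokens: (batch, seq_len) integer ids
            x = self.embed(tokens)
            x, attn_weights = self.attn(x, x, x)   # attention weights hint at cue words
            _, h = self.gru(x)                     # final hidden state summarizes the sequence
            return self.head(h[-1]).squeeze(-1), attn_weights

    model = SarcasmClassifier()
    tokens = torch.randint(0, 10000, (2, 20))      # two placeholder token sequences
    logits, weights = model(tokens)
    loss = nn.BCEWithLogitsLoss()(logits, torch.tensor([1.0, 0.0]))  # 1 = sarcastic
    ```
    Returning the attention weights alongside the prediction is one simple way to keep such a model interpretable, since the weights can be inspected to see which words the classifier leaned on.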
    The team, which includes computer science doctoral student Ramya Akula, began working on this problem under a DARPA grant that supports the organization’s Computational Simulation of Online Social Behavior program.
    “Sarcasm has been a major hurdle to increasing the accuracy of sentiment analysis, especially on social media, since sarcasm relies heavily on vocal tones, facial expressions and gestures that cannot be represented in text,” says Brian Kettler, a program manager in DARPA’s Information Innovation Office (I2O). “Recognizing sarcasm in textual online communication is no easy task as none of these cues are readily available.”
    This is one of the challenges Garibay’s Complex Adaptive Systems Lab (CASL) is studying. CASL is an interdisciplinary research group dedicated to the study of complex phenomena such as the global economy, the global information environment, innovation ecosystems, sustainability, and social and cultural dynamics and evolution. CASL scientists study these problems using data science, network science, complexity science, cognitive science, machine learning, deep learning, social sciences, team cognition, among other approaches.
    “In face-to-face conversation, sarcasm can be identified effortlessly using facial expressions, gestures, and tone of the speaker,” Akula says. “Detecting sarcasm in textual communication is not a trivial task, as none of these cues are readily available. Especially with the explosion of internet usage, sarcasm detection in online communications from social networking platforms is much more challenging.”
    Garibay is an assistant professor in Industrial Engineering and Management Systems. He has several degrees including a Ph.D. in computer science from UCF. Garibay is the director of UCF’s Artificial Intelligence and Big Data Initiative of CASL and of the master’s program in data analytics. His research areas include complex systems, agent-based models, information and misinformation dynamics on social media, artificial intelligence and machine learning. He has more than 75 peer-reviewed papers and more than $9.5 million in funding from various national agencies.
    Akula is a doctoral scholar and graduate research assistant at CASL. She has a master’s degree in computer science from Technical University of Kaiserslautern in Germany and a bachelor’s degree in computer science engineering from Jawaharlal Nehru Technological University, India.
    Story Source:
    Materials provided by University of Central Florida. Original written by Zenaida Gonzalez Kotala. Note: Content may be edited for style and length.

  • Algorithms show accuracy in gauging unconsciousness under general anesthesia

    Anesthetic drugs act on the brain, but most anesthesiologists rely on heart rate, respiratory rate, and movement to infer whether surgery patients remain unconscious to the desired degree. In a new study, a research team based at MIT and Massachusetts General Hospital shows that a straightforward artificial intelligence approach, attuned to the kind of anesthetic being used, can yield algorithms that assess unconsciousness in patients based on brain activity with high accuracy and reliability.
    “One of the things that is foremost in the minds of anesthesiologists is ‘Do I have somebody who is lying in front of me who may be conscious and I don’t realize it?’ Being able to reliably maintain unconsciousness in a patient during surgery is fundamental to what we do,” said senior author Emery N. Brown, Edward Hood Taplin Professor in The Picower Institute for Learning and Memory and the Institute for Medical Engineering and Science at MIT, and an anesthesiologist at MGH. “This is an important step forward.”
    More than providing a good readout of unconsciousness, Brown added, the new algorithms offer the potential to let anesthesiologists maintain it at the desired level while using less drug than they might administer when relying on less direct, accurate and reliable indicators. That can improve patients’ post-operative outcomes, for instance by reducing the risk of delirium.
    “We may always have to be a little bit ‘overboard’,” said Brown, who is also a professor at Harvard Medical School. “But can we do it with sufficient accuracy so that we are not dosing people more than is needed?”
    Used to drive an infusion pump, for instance, algorithms could help anesthesiologists precisely throttle drug delivery to optimize a patient’s state and the doses they are receiving.
    Artificial intelligence, real-world testing
    To develop the technology to do so, postdocs John Abel and Marcus Badgeley led the study, published in PLOS ONE [LINK TBD], in which they trained machine learning algorithms on a remarkable data set the lab gathered back in 2013. In that study, 10 healthy volunteers in their 20s underwent anesthesia with the commonly used drug propofol. As the dose was methodically raised using computer-controlled delivery, the volunteers were asked to respond to a simple request until they couldn’t anymore. Then, as the dose was later lessened and they were brought back to consciousness, they became able to respond again. All the while, neural rhythms reflecting their brain activity were recorded with electroencephalogram (EEG) electrodes, providing a direct, real-time link between measured brain activity and exhibited unconsciousness.
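    The sketch below gives a feel for the general recipe rather than the study’s exact pipeline: each EEG segment is summarized by its power in standard frequency bands, and a simple classifier is fit to predict responsive versus unresponsive. The sampling rate, band limits and choice of logistic regression are assumptions, and the data are placeholders.
    ```python
    # Hedged sketch: band-power features from EEG segments plus a simple classifier.
    import numpy as np
    from scipy.signal import welch
    from sklearn.linear_model import LogisticRegression

    FS = 250                                  # sampling rate in Hz (assumed)
    BANDS = {"slow/delta": (0.1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 25)}

    def band_powers(segment):
        """Average spectral power of one EEG segment in each frequency band."""
        freqs, psd = welch(segment, fs=FS, nperseg=FS * 2)
        return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]

    # X_raw: (n_segments, n_samples) EEG segments; y: 1 = unresponsive, 0 = responsive.
    rng = np.random.default_rng(0)
    X_raw = rng.standard_normal((200, FS * 10))        # placeholder 10-second segments
    y = rng.integers(0, 2, size=200)                   # placeholder labels

    X = np.array([band_powers(seg) for seg in X_raw])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```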

  • Hologram experts can now create real-life images that move in the air

    They may be tiny weapons, but Brigham Young University’s holography research group has figured out how to create lightsabers — green for Yoda and red for Darth Vader, naturally — with actual luminous beams rising from them.
    Inspired by the displays of science fiction, the researchers have also engineered battles between equally small versions of the Starship Enterprise and a Klingon Battle Cruiser that incorporate photon torpedoes launching and striking the enemy vessel that you can see with the naked eye.
    “What you’re seeing in the scenes we create is real; there is nothing computer generated about them,” said lead researcher Dan Smalley, a professor of electrical engineering at BYU. “This is not like the movies, where the lightsabers or the photon torpedoes never really existed in physical space. These are real, and if you look at them from any angle, you will see them existing in that space.”
    It’s the latest work from Smalley and his team of researchers, who garnered national and international attention three years ago when they figured out how to draw screenless, free-floating objects in space. Called optical trap displays, they’re created by trapping a single particle in the air with a laser beam and then moving that particle around, leaving behind a laser-illuminated path that floats in midair, like “a 3D printer for light.”
    The research group’s new project, funded by a National Science Foundation CAREER grant, goes to the next level and produces simple animations in thin air. The development paves the way for an immersive experience where people can interact with holographic-like virtual objects that co-exist in their immediate space.
    “Most 3D displays require you to look at a screen, but our technology allows us to create images floating in space — and they’re physical, not some mirage,” Smalley said. “This technology can make it possible to create vibrant animated content that orbits around or crawls on or explodes out of everyday physical objects.”
    To demonstrate that principle, the team has created virtual stick figures that walk in thin air. They were able to demonstrate the interaction between their virtual images and humans by having a student place a finger in the middle of the volumetric display and then filming the same stick figure walking along and jumping off that finger.
    Smalley and Rogers detail these and other recent breakthroughs in a new paper published in Nature Scientific Reports this month. The work overcomes a limiting factor of optical trap displays: on their own, they cannot show virtual images. Smalley and Rogers show it is possible to simulate virtual images by employing a time-varying perspective projection backdrop.
    “We can play some fancy tricks with motion parallax and we can make the display look a lot bigger than it physically is,” Rogers said. “This methodology would allow us to create the illusion of a much deeper display up to theoretically an infinite size display.”
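    To illustrate the geometry behind such a backdrop (a sketch, not the BYU implementation), the snippet below projects a virtual 3D point onto a flat display plane along the viewer’s line of sight. As the viewer moves, the drawn point shifts, which is the motion parallax that makes the image appear to sit behind the display; the coordinates are illustrative.
    ```python
    # Hedged sketch of perspective projection onto a backdrop plane (not the team's code).
    import numpy as np

    def project_to_backdrop(point, viewer):
        """Intersect the viewer->point line of sight with the backdrop plane z = 0."""
        p, v = np.asarray(point, float), np.asarray(viewer, float)
        t = v[2] / (v[2] - p[2])          # parameter where the ray crosses z = 0
        return v + t * (p - v)            # (x, y, 0) location to draw on the backdrop

    virtual_point = [0.0, 0.0, -0.5]      # half a meter "behind" the display plane
    for viewer_x in (-0.2, 0.0, 0.2):     # viewer moves side to side, one meter away
        print(project_to_backdrop(virtual_point, [viewer_x, 0.0, 1.0]))
    ```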
    Video: https://www.youtube.com/watch?v=N12i_FaHvOU&list=TLGGbyUMLSISdIswNzA1MjAyMQ&t=1s
    Story Source:
    Materials provided by Brigham Young University. Original written by Todd Hollingshead. Note: Content may be edited for style and length.

  • Mathematical model predicting disease spread patterns

    Early on in the COVID-19 pandemic, health officials seized on contact tracing as the most effective way to anticipate the virus’s migration from the initial, densely populated hot spots and try to curb its spread. Months later, infections were nonetheless recorded in similar patterns in nearly every region of the country, both urban and rural.
    A team of environmental engineers, prompted by the unusual wealth of data published regularly by county health agencies throughout the pandemic, began researching new methods to describe what was happening on the ground in a way that does not require obtaining information on individuals’ movements or contacts. Funding for their effort came through a National Science Foundation RAPID research grant (CBET 2028271).
    In a paper published May 6 in the Proceedings of the National Academy of Sciences, they presented their results: a model that predicts where the disease will spread from an outbreak, in what patterns and how quickly.
    “Our model should be helpful to policymakers because it predicts disease spread without getting into granular details, such as personal travel information, which can be tricky to obtain from a privacy standpoint and difficult to gather in terms of resources,” explained Xiaolong Geng, a research assistant professor of environmental engineering at NJIT who built the model and is one of the paper’s authors.
    “We did not think a high level of intrusion would work in the United States so we sought an alternative way to map the spread,” noted Gabriel Katul, the Theodore S. Coile Distinguished Professor of Hydrology and Micrometeorology at Duke University and a co-author.
    Their numerical scheme mapped the classic SIR epidemic model (computations based on a division of the population into groups of susceptible, infectious and recovered people) onto the population agglomeration template. Their calculations closely approximated the multiphase COVID-19 epidemics recorded in each U.S. state.
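    For reference, the snippet below is a minimal sketch of the classic SIR model itself, not the paper’s full spatial scheme: the population is divided into susceptible, infectious and recovered fractions whose evolution is governed by coupled differential equations. The transmission and recovery rates are illustrative assumptions.
    ```python
    # Minimal SIR sketch (illustrative rates, not the paper's calibrated spatial model).
    import numpy as np
    from scipy.integrate import solve_ivp

    beta, gamma = 0.3, 0.1                # transmission and recovery rates per day (assumed)

    def sir(t, y):
        S, I, R = y                       # susceptible, infectious, recovered fractions
        return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

    # Start with 0.1% of the population infectious.
    sol = solve_ivp(sir, (0, 160), [0.999, 0.001, 0.0], t_eval=np.linspace(0, 160, 161))
    peak_day = sol.t[np.argmax(sol.y[1])]
    print(f"Infections peak around day {peak_day:.0f}")
    ```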

  • In graphene process, resistance is useful

    A Rice University laboratory has adapted its laser-induced graphene technique to make high-resolution, micron-scale patterns of the conductive material for consumer electronics and other applications.
    Laser-induced graphene (LIG), introduced in 2014 by Rice chemist James Tour, involves burning away everything that isn’t carbon from polymers or other materials, leaving the carbon atoms to reconfigure themselves into films of characteristic hexagonal graphene.
    The process employs a commercial laser that “writes” graphene patterns into surfaces that to date have included wood, paper and even food.
    The new iteration writes fine patterns of graphene into photoresist polymers, light-sensitive materials used in photolithography and photoengraving.
    Baking the film increases its carbon content, and subsequent lasing solidifies the robust graphene pattern, after which unlased photoresist is washed away.
    Details of the PR-LIG process appear in the American Chemical Society journal ACS Nano.
    “This process permits the use of graphene wires and devices in a more conventional silicon-like process technology,” Tour said. “It should allow a transition into mainline electronics platforms.”
    The Rice lab produced lines of LIG about 10 microns wide and hundreds of nanometers thick, comparable to that now achieved by more cumbersome processes that involve lasers attached to scanning electron microscopes, according to the researchers.
    Achieving lines of LIG small enough for circuitry prompted the lab to optimize its process, according to graduate student Jacob Beckham, lead author of the paper.
    “The breakthrough was a careful control of the process parameters,” Beckham said. “Small lines of photoresist absorb laser light depending on their geometry and thickness, so optimizing the laser power and other parameters allowed us to get good conversion at very high resolution.”
    Because the positive photoresist is a liquid before being spun onto a substrate for lasing, it’s a simple matter to dope the raw material with metals or other additives to customize it for applications, Tour said.
    Potential applications include on-chip microsupercapacitors, functional nanocomposites and microfluidic arrays.
    Story Source:
    Materials provided by Rice University. Note: Content may be edited for style and length.

  • Evading the uncertainty principle in quantum physics

    The uncertainty principle, first introduced by Werner Heisenberg in the late 1920s, is a fundamental concept of quantum mechanics. In the quantum world, particles like the electrons that power all electrical products can also behave like waves. As a result, particles cannot have a well-defined position and momentum simultaneously. For instance, measuring the momentum of a particle disturbs its position, and therefore the position cannot be precisely defined.
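    In symbols, the relation Heisenberg formulated sets a floor on how sharply position and momentum can be defined at the same time (ħ is the reduced Planck constant):
    ```latex
    \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
    ```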
    In recent research, published in Science, a team led by Prof. Mika Sillanpää at Aalto University in Finland has shown that there is a way to get around the uncertainty principle. The team included Dr. Matt Woolley from the University of New South Wales in Australia, who developed the theoretical model for the experiment.
    Instead of elementary particles, the team carried out the experiments using much larger objects: two vibrating drumheads one-fifth of the width of a human hair. The drumheads were carefully coerced into behaving quantum mechanically.
    “In our work, the drumheads exhibit a collective quantum motion. The drums vibrate in an opposite phase to each other, such that when one of them is in an end position of the vibration cycle, the other is in the opposite position at the same time. In this situation, the quantum uncertainty of the drums’ motion is cancelled if the two drums are treated as one quantum-mechanical entity,” explains the lead author of the study, Dr. Laure Mercier de Lepinay.
    This means that the researchers were able to simultaneously measure the position and the momentum of the two drumheads — which should not be possible according to the Heisenberg uncertainty principle. Breaking the rule allows them to be able to characterize extremely weak forces driving the drumheads.
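    A textbook way to see how a pair of oscillators can sidestep this limit, sketched here for illustration rather than taken from the paper, is that certain collective variables commute, so both can in principle be known exactly at once:
    ```latex
    % With [x_j, p_k] = i\hbar\,\delta_{jk}, the summed position and the momentum difference commute:
    [\,x_1 + x_2,\; p_1 - p_2\,] = [x_1, p_1] - [x_2, p_2] = i\hbar - i\hbar = 0
    ```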
    “One of the drums responds to all the forces of the other drum in the opposing way, kind of with a negative mass,” Sillanpää says.
    Furthermore, the researchers also exploited this result to provide the most solid evidence to date that such large objects can exhibit what is known as quantum entanglement. Entangled objects cannot be described independently of each other, even though they may have an arbitrarily large spatial separation. Entanglement allows pairs of objects to behave in ways that contradict classical physics, and is the key resource behind emerging quantum technologies. A quantum computer can, for example, carry out the types of calculations needed to invent new medicines much faster than any supercomputer ever could.
    In macroscopic objects, quantum effects like entanglement are very fragile and are destroyed easily by disturbances from the surrounding environment. The experiments were therefore carried out at a very low temperature, only a hundredth of a degree above absolute zero (-273 degrees Celsius).
    In the future, the research group will use these ideas in laboratory tests aiming at probing the interplay of quantum mechanics and gravity. The vibrating drumheads may also serve as interfaces for connecting nodes of large-scale, distributed quantum networks.
    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  • Trial demonstrates early AI-guided detection of heart disease in routine practice

    Heart disease can take a number of forms, but some types of heart disease, such as asymptomatic low ejection fraction, can be hard to recognize, especially in the early stages when treatment would be most effective. The ECG AI-Guided Screening for Low Ejection Fraction, or EAGLE, trial set out to determine whether an artificial intelligence (AI) screening tool developed to detect low ejection fraction using data from an EKG could improve the diagnosis of this condition in routine practice. Study findings are published in Nature Medicine.
    Systolic low ejection fraction is defined as the heart’s inability to contract strongly enough with each beat to pump at least 50% of the blood from its chamber. An echocardiogram can readily diagnose low ejection fraction, but this time-consuming imaging test requires more resources than a 12-lead EKG, which is fast, inexpensive and readily available. The AI-enabled EKG algorithm was developed using a convolutional neural network and validated in subsequent studies.
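    As a rough illustration only, and not Mayo Clinic’s actual algorithm, the sketch below shows the general shape of such a model: a small one-dimensional convolutional network that takes a 12-lead EKG trace and outputs a probability of low ejection fraction. The input length and layer sizes are assumptions.
    ```python
    # Hedged sketch of a 1D CNN over a 12-lead EKG (not the licensed Mayo Clinic algorithm).
    import torch
    import torch.nn as nn

    class EkgCnn(nn.Module):
        def __init__(self, leads=12):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(leads, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
                nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.head = nn.Linear(64, 1)

        def forward(self, ekg):                    # ekg: (batch, 12 leads, samples)
            x = self.features(ekg).squeeze(-1)
            return torch.sigmoid(self.head(x))     # probability of low ejection fraction

    model = EkgCnn()
    prob = model(torch.randn(1, 12, 5000))         # placeholder trace, e.g. 10 s at 500 Hz
    ```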
    The EAGLE trial took place in 45 medical institutions in Minnesota and Wisconsin, including rural clinics, and community and academic medical centers. In all, 348 primary care clinicians from 120 medical care teams were randomly assigned to usual care or intervention. The intervention group was alerted to a positive screening result for low ejection fraction via the electronic health record, prompting them to order an echocardiogram to confirm.
    “The AI-enabled EKG facilitated the diagnosis of patients with low ejection fraction in a real-world setting by identifying people who previously would have slipped through the cracks,” says Peter Noseworthy, M.D., a Mayo Clinic cardiac electrophysiologist. Dr. Noseworthy is senior author on the study.
    In eight months, 22,641 adult patients had an EKG under the care of the clinicians in the trial. The AI found positive results in 6% of the patients. The proportion of patients who received an echocardiogram was similar overall, but among patients with a positive screening result, a higher percentage of intervention patients received an echocardiogram.
    “The AI intervention increased the diagnosis of low ejection fraction overall by 32% relative to usual care. Among patients with a positive AI result, the relative increase of diagnosis was 43%,” says Xiaoxi Yao, Ph.D., a health outcomes researcher in cardiovascular diseases at Mayo Clinic and first author on the study. “To put it in absolute terms, for every 1,000 patients screened, the AI screening yielded five new diagnoses of low ejection fraction over usual care.”
    “With EAGLE, the information was readily available in the electronic health record, and care teams could see the results and decide how to use that information,” says Dr. Noseworthy. “The takeaway is that we are likely to see more AI use in the practice of medicine as time goes on. It’s up to us to figure out how to use this in a way that improves care and health outcomes but does not overburden front-line clinicians.”
    Also, the EAGLE trial used a positive deviance approach to evaluate the top five care team users and the top five nonusers of the AI screening information. Dr. Yao says this cycle of learning and feedback from physicians will demonstrate ways of improving adaptation and application of AI technology in the practice.
    EAGLE is one of the first large-scale trials to demonstrate value of AI in routine practice. The low ejection fraction algorithm, which has received Food and Drug Administration breakthrough designation, is one of several algorithms developed by Mayo and licensed to Anumana Inc., a new company focusing on unlocking hidden biomedical knowledge to enable early detection as well as accelerate treatment of heart disease. The low ejection fraction algorithm was also previously licensed to Eko Devices Inc., specifically for hand-held devices that are externally applied to the chest.
    The EAGLE trial was funded by Mayo Clinic’s Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, in collaboration with the departments of Cardiovascular Medicine and Family Medicine, and the Division of Community Internal Medicine.
    Story Source:
    Materials provided by Mayo Clinic. Original written by Terri Malloy. Note: Content may be edited for style and length.