More stories

    ChatGPT is debunking myths on social media around vaccine safety, say experts

    ChatGPT could help to increase vaccine uptake by debunking myths around jab safety, say the authors of a study published in the peer-reviewed journal Human Vaccines & Immunotherapeutics.
    The researchers asked the artificial intelligence (AI) chatbot the 50 most frequently asked Covid-19 vaccine questions. These included queries based on myths and fake stories, such as the vaccine causing Long Covid.
    Results show that ChatGPT scored an average of nine out of 10 for accuracy. Where it fell short of full marks, its answers were still correct but left some gaps in the information provided, according to the study.
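    A minimal sketch of how such a query step can be set up, assuming the OpenAI Python client; the model name and sample questions are illustrative only, not the study's exact protocol:

    ```python
    # Hypothetical reconstruction of the query step: send each frequently
    # asked vaccine question to a chat model and collect the answers.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    questions = [
        "Can the Covid-19 vaccine cause Long Covid?",
        "Do mRNA vaccines alter human DNA?",
        # ... the rest of the 50 frequently asked questions
    ]

    answers = {}
    for q in questions:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=[{"role": "user", "content": q}],
        )
        answers[q] = resp.choices[0].message.content

    # In the study, answers like these were then rated for accuracy by
    # subject-matter experts on a 10-point scale.
    ```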
    Based on these findings, the experts who led the study, from the GenPoB research group at the Instituto de Investigación Sanitaria (IDIS) — Hospital Clínico Universitario of Santiago de Compostela, say the AI tool is a “reliable source of non-technical information to the public,” especially for people without specialist scientific knowledge.
    However, the findings do highlight some concerns about the technology such as ChatGPT changing its answers in certain situations.
    “Overall, ChatGPT constructs a narrative in line with the available scientific evidence, debunking myths circulating on social media,” says lead author Antonio Salas, who as well as leading the GenPoB research group, is also a Professor at the Faculty of Medicine at the University of Santiago de Compostela, in Spain.
    “Thereby it potentially facilitates an increase in vaccine uptake. ChatGPT can detect counterfeit questions related to vaccines and vaccination. The language this AI uses is not too technical and therefore easily understandable to the public, but without losing scientific rigor.”

    Better cybersecurity with new material

    Digital information exchange can be safer, cheaper and more environmentally friendly with the help of a new type of random number generator for encryption developed at Linköping University, Sweden. The researchers behind the study believe that the new technology paves the way for a new type of quantum communication.
    In an ever more connected world, cybersecurity is becoming increasingly important to protect not just the individual, but also, for example, national infrastructure and banking systems. There is an ongoing race between hackers and those trying to protect information, and the most common way to protect information is through encryption. So when we send emails, pay bills and shop online, the information is digitally encrypted.
    To encrypt information, a random number generator is used, which can be either a computer programme or the hardware itself. The random number generator provides keys that are used both to encrypt the information and to unlock it at the receiving end.
    Different types of random number generators provide different levels of randomness and thus security. Hardware is the much safer option as randomness is controlled by physical processes. And the hardware method that provides the best randomness is based on quantum phenomena — what researchers call the Quantum Random Number Generator, QRNG.
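    Whatever the physical source, its raw output is rarely perfectly unbiased, so generators typically post-process it before use. As a simple illustration of that post-processing idea (not the Linköping design), the classic von Neumann extractor turns independent but biased raw bits into unbiased output bits:

    ```python
    import random

    def von_neumann_extract(raw_bits):
        """Map bit pairs 01 -> 0 and 10 -> 1, discarding 00 and 11.
        If the raw bits are independent, the output is unbiased
        regardless of how biased the source is."""
        out = []
        for a, b in zip(raw_bits[::2], raw_bits[1::2]):
            if a != b:
                out.append(a)
        return out

    # Simulate a biased physical source (e.g. photon detections, p = 0.7)
    raw = [1 if random.random() < 0.7 else 0 for _ in range(100_000)]
    clean = von_neumann_extract(raw)
    print(f"raw bias:    {sum(raw) / len(raw):.3f}")
    print(f"output bias: {sum(clean) / len(clean):.3f} ({len(clean)} bits kept)")
    ```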
    “In cryptography, it’s not only important that the numbers are random, but that you’re the only one who knows about them. With QRNGs, we can certify that a large amount of the generated bits is private and thus completely secure. And if the laws of quantum physics are true, it should be impossible to eavesdrop without the recipient finding out,” says Guilherme B Xavier, researcher at the Department of Electrical Engineering at Linköping University.
    His research group, together with researchers at the Department of Physics, Chemistry and Biology (IFM), has developed a new type of QRNG that can be used for encryption, but also for betting and computer simulations. The new feature of the Linköping researchers’ QRNG is the use of light-emitting diodes made from the crystal-like material perovskite.
    Their random number generator is among the best produced and compares well with similar products. Thanks to the properties of perovskites, it has the potential to be cheaper and more environmentally friendly.

    Software analyzes calcium ‘sparks’ that can contribute to arrhythmia

    A team of UC Davis and University of Oxford researchers has developed an innovative tool: SparkMaster 2. The open-source software allows scientists to automatically analyze normal and abnormal calcium signals in cells.
    Calcium is a key signaling molecule in all cells, including muscles like the heart. The new software enables the automatic analysis of distinct patterns of calcium release in cells. This includes calcium “sparks,” microscopic releases of calcium within cardiac cells associated with irregular heartbeats, also known as arrhythmia.
    A research article demonstrating the capabilities of SparkMaster 2 was published in Circulation Research.
    Jakub Tomek, the first author of the research article, is a Sir Henry Wellcome Fellow in the Department of Physiology, Anatomy and Genetics at the University of Oxford. He spent his fellowship year at UC Davis, working with Distinguished Professor Donald M. Bers.
    “It was great to present SparkMaster 2 at recent conferences and see the enthusiastic response. I felt it would be an outlier and that few people would care. But many people were excited about having a new analysis tool that overcomes many of the limitations they have experienced with prior tools,” Tomek said.
    Fellowship at UC Davis leads to updated tool
    Problems with how and when calcium is released by cells can have an impact on a range of diseases, including arrhythmia and hypertension. To understand the mechanisms behind these diseases, researchers use fluorescent calcium indicators and microscopic imaging that can measure calcium changes at the cellular level.
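    As a rough illustration of the kind of analysis such tools automate (this is not SparkMaster 2's algorithm), calcium sparks in a normalized fluorescence trace can be flagged as peaks that rise several standard deviations above baseline:

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    rng = np.random.default_rng(0)

    # Synthetic fluorescence trace: noisy baseline (F/F0 ~ 1) plus three
    # "sparks", each a sudden rise followed by an exponential decay.
    t = np.arange(5000)
    trace = rng.normal(1.0, 0.05, t.size)
    for start in (800, 2300, 4100):
        trace[start:] += 0.6 * np.exp(-(t[start:] - start) / 60.0)

    # Flag sparks as peaks exceeding baseline + 3.5 standard deviations,
    # a common style of detection criterion in spark analysis.
    baseline = np.median(trace)
    sd = np.std(trace[:500])  # quiet segment before the first spark
    peaks, props = find_peaks(trace, height=baseline + 3.5 * sd, distance=100)
    print("spark times:", peaks)
    print("amplitudes: ", np.round(props["peak_heights"], 2))
    ```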

    Optics and AI find viruses faster

    Researchers have developed an automated version of the viral plaque assay, the gold-standard method for detecting and quantifying viruses. The new method uses time-lapse holographic imaging and deep learning to greatly reduce detection time and eliminate staining and manual counting. This advance could help streamline the development of new vaccines and antiviral drugs.
    Yuzhu Li from the Ozcan Lab at the University of California, Los Angeles (UCLA), will present this research at Frontiers in Optics + Laser Science (FiO LS), held 9-12 October 2023 at the Greater Tacoma Convention Center in Tacoma (Greater Seattle Area), Washington.
    “This technique might help expedite vaccine and drug development research by significantly reducing the detection time needed compared to traditional viral plaque assays, and by eliminating chemical staining and manual counting entirely,” explains Li. “In the event of a new virus outbreak, vaccines or antiviral treatments could be developed, tested, and made available to the public at a significantly accelerated rate, resulting in a faster response time to virus-induced health emergencies.”
    Although the viral plaque assay is a cost-effective way to assess virus infectivity and quantify the amount of virus in a sample, it is time-consuming to perform. Samples are first diluted and then added to cultured cells. If the virus kills the infected cells, a region free of cells — a plaque — develops. Experts then manually count the stained plaque-forming units (PFUs), a process that is susceptible to staining irregularities and human counting errors.
    The new stain-free automated viral plaque assay system replaces manual plaque counting with a lens-free holographic imaging system that images the spatiotemporal features of PFUs during incubation. A deep learning algorithm is then used to detect, classify and locate PFUs based on changes observed.
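    A minimal PyTorch sketch of this idea, with a toy architecture and invented shapes rather than the UCLA team's actual network: several time-lapse frames are stacked as input channels so a small classifier can react to how a candidate region changes during incubation.

    ```python
    import torch
    import torch.nn as nn

    class PlaquePatchNet(nn.Module):
        """Toy classifier: does a 32x32 patch from a time-lapse stack
        contain a growing plaque? T frames are stacked as channels so the
        network sees temporal change, not just a single snapshot."""
        def __init__(self, frames: int = 4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(frames, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, 2),  # logits: [background, plaque]
            )

        def forward(self, x):
            return self.net(x)

    model = PlaquePatchNet()
    patches = torch.randn(8, 4, 32, 32)   # batch of 8 synthetic patch stacks
    print(model(patches).argmax(dim=1))   # predicted class per patch
    ```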
    To show the efficacy of their system, the researchers infected cultured cells with vesicular stomatitis virus. After just 20 hours of incubation, the automated system detected more than 90% of the viral PFUs without any false positives. This was much faster than the traditional plaque assay, which requires 48 hours of incubation for this virus. They also applied the automated approach to herpes simplex virus type-1 and encephalomyocarditis virus, demonstrating even shorter incubation times for these viruses and saving an average of around 48 and 20 hours, respectively.
    The researchers report that no false positives were detected across all time points. In addition, because the system can identify individual PFUs during their early growth, before the formation of PFU clusters, it can be used to analyze viral samples with about 10 times higher virus concentrations than traditional approaches.
    As for next steps, the UCLA researchers are improving their system design to further increase its sensitivity and specificity for various types of viruses, paving the way for broad adoption in laboratory and industrial settings, said Li. The team is also exploring other potential applications of the technique in virology research, such as high-throughput and cost-effective screening of antiviral drugs.

    A system to keep cloud-based gamers in sync

    Cloud gaming, which involves playing a video game remotely from the cloud, witnessed unprecedented growth during the lockdowns and gaming hardware shortages at the height of the Covid-19 pandemic. Today, the burgeoning industry encompasses a $6 billion global market and more than 23 million players worldwide.
    However, interdevice synchronization remains a persistent problem in cloud gaming and the broader field of networking. In cloud gaming, video, audio, and haptic feedback are streamed from one central source to multiple devices, such as a player’s screen and controller, which typically operate on separate networks. These networks aren’t synchronized, leading to a lag between these two separate streams. A player might see something happen on the screen and then hear it on their controller a half second later.
    Inspired by this problem, scientists from MIT and Microsoft Research took a unique approach to synchronizing streams transmitted to two devices. Their system, called Ekho, adds inaudible white noise sequences to the game audio streamed from the cloud server. Then it listens for those sequences in the audio recorded by the player’s controller.
    Ekho uses the mismatch between these noise sequences to continuously measure and compensate for the interstream delay.
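    The underlying signal processing is classic cross-correlation: white noise has a sharp autocorrelation peak, so the lag that maximizes the correlation between the known sequence and the controller's recording gives the interstream delay. A self-contained sketch with illustrative numbers (not Ekho's implementation):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    fs = 48_000                            # audio sample rate (Hz)

    # Server-side marker: a known pseudorandom noise sequence that would
    # be mixed into the game audio at a low level.
    marker = rng.standard_normal(fs // 5)  # 200 ms of white noise

    # Controller-side recording: the marker arrives 23 ms late, buried in
    # unrelated background audio.
    true_delay = int(0.023 * fs)
    recording = rng.standard_normal(fs)    # 1 s of background audio
    recording[true_delay:true_delay + marker.size] += 0.1 * marker

    # The correlation peak sits at the lag equal to the delay.
    corr = np.correlate(recording, marker, mode="valid")
    estimated = int(np.argmax(corr))
    print(f"true delay: {1000 * true_delay / fs:.1f} ms, "
          f"estimated: {1000 * estimated / fs:.1f} ms")
    ```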
    In real cloud gaming sessions, the researchers showed that Ekho is highly reliable, keeping streams synchronized to within 10 milliseconds of each other most of the time. Other synchronization methods resulted in consistent delays of more than 50 milliseconds.
    And while Ekho was designed for cloud gaming, this technique could be used more broadly to synchronize media streams traveling to different devices, such as in training situations that utilize multiple augmented or virtual reality headsets.
    “Sometimes, all it takes for a good solution to come out is to think outside what has been defined for you. The entire community has been fixed on how to solve this problem by synchronizing through the network. Synchronizing two streams by listening to the audio in the room sounded crazy, but it turned out to be a very good solution,” says Pouya Hamadanian, an electrical engineering and computer science (EECS) graduate student and lead author of a paper describing Ekho.

    An ‘introspective’ AI finds diversity improves performance

    An artificial intelligence with the ability to look inward and fine-tune its own neural network performs better when it chooses diversity over uniformity, a new study finds. The resulting diverse neural networks were particularly effective at solving complex tasks.
    “We created a test system with a non-human intelligence, an artificial intelligence (AI), to see if the AI would choose diversity over the lack of diversity and if its choice would improve the performance of the AI,” says William Ditto, professor of physics at North Carolina State University, director of NC State’s Nonlinear Artificial Intelligence Laboratory (NAIL) and co-corresponding author of the work. “The key was giving the AI the ability to look inward and learn how it learns.”
    Neural networks are an advanced type of AI loosely based on the way that our brains work. Our natural neurons exchange electrical impulses according to the strengths of their connections. Artificial neural networks create similarly strong connections by adjusting numerical weights and biases during training sessions. For example, a neural network can be trained to identify photos of dogs by sifting through a large number of photos, making a guess about whether the photo is of a dog, seeing how far off it is and then adjusting its weights and biases until they are closer to reality.
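    That guess-and-adjust cycle is ordinary gradient-descent training. A minimal sketch in PyTorch, with synthetic features standing in for dog photos:

    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    x = torch.randn(256, 8)                      # stand-in for image features
    y = (x.sum(dim=1) > 0).float().unsqueeze(1)  # synthetic "dog / not dog" labels

    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(200):
        opt.zero_grad()
        loss = loss_fn(model(x), y)  # how far off the guesses are
        loss.backward()              # gradients w.r.t. weights and biases
        opt.step()                   # nudge them closer to reality
    print(f"final loss: {loss.item():.3f}")
    ```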
    Conventional AI uses neural networks to solve problems, but these networks are typically composed of large numbers of identical artificial neurons. The number and strength of connections between those identical neurons may change as it learns, but once the network is optimized, those static neurons are the network.
    Ditto’s team, on the other hand, gave its AI the ability to choose the number, shape and connection strength between neurons in its neural network, creating sub-networks of different neuron types and connection strengths within the network as it learns.
    “Our real brains have more than one type of neuron,” Ditto says. “So we gave our AI the ability to look inward and decide whether it needed to modify the composition of its neural network. Essentially, we gave it the control knob for its own brain. So it can solve the problem, look at the result, and change the type and mixture of artificial neurons until it finds the most advantageous one. It’s meta-learning for AI.
    “Our AI could also decide between diverse or homogenous neurons,” Ditto says. “And we found that in every instance the AI chose diversity as a way to strengthen its performance.”
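    One simple way to picture such diversity (a loose sketch, not the NAIL group's architecture) is a layer in which each hidden unit blends several activation types through learnable mixing weights, so that training itself can favor some neuron types over others:

    ```python
    import torch
    import torch.nn as nn

    class DiverseLayer(nn.Module):
        """Each hidden unit mixes several activation types; the mixing
        weights are learned, so the network can shift toward whichever
        mixture of neuron types works best."""
        def __init__(self, n_in, n_out):
            super().__init__()
            self.linear = nn.Linear(n_in, n_out)
            self.acts = [torch.tanh, torch.relu, torch.sin, torch.sigmoid]
            # one logit per activation type per unit; softmax picks the blend
            self.mix = nn.Parameter(torch.zeros(n_out, len(self.acts)))

        def forward(self, x):
            z = self.linear(x)                                  # (batch, n_out)
            w = torch.softmax(self.mix, dim=-1)                 # (n_out, n_types)
            stacked = torch.stack([a(z) for a in self.acts], dim=-1)
            return (stacked * w).sum(dim=-1)

    layer = DiverseLayer(4, 8)
    print(layer(torch.randn(2, 4)).shape)  # torch.Size([2, 8])
    ```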
    The team tested the AI’s accuracy by asking it to perform a standard numerical classification exercise, and saw that its accuracy increased as the number of neurons and neuronal diversity increased. A standard, homogeneous AI could identify the numbers with 57% accuracy, while the meta-learning, diverse AI was able to reach 70% accuracy.

    A step closer to digitizing the sense of smell: Model describes odors better than human panelists

    A central question in neuroscience is how our senses translate light into sight, sound into hearing, food into taste, and texture into touch. Smell is where these sensory relationships get more complex and perplexing.
    To address this question, a research team co-led by the Monell Chemical Senses Center and start-up Osmo, a Cambridge, Mass.-based company spun out of machine learning research done at Google Research and Google DeepMind (formerly known as Google Brain), is investigating how airborne chemicals connect to odor perception in the brain. The team found that a machine-learning model has achieved human-level proficiency at describing, in words, how chemicals might smell. The research appears in the September 1 issue of Science.
    “The model addresses age-old gaps in the scientific understanding of the sense of smell,” said senior co-author Joel Mainland, PhD, a member of the Monell Center. The collaboration moves the world closer to digitizing odors so they can be recorded and reproduced. It may also identify new odors for the fragrance and flavor industry, which could decrease dependence on naturally sourced endangered plants and yield new functional scents for uses such as mosquito repellent or malodor masking.
    How our brains and noses work together
    Humans have about 400 functional olfactory receptors. These are proteins at the ends of olfactory nerves that connect with airborne molecules to transmit an electrical signal to the olfactory bulb. That is far more receptor types than we use for color vision (four) or even taste (about 40).
    “In olfaction research, however, the question of what physical properties make an airborne molecule smell the way it does to the brain has remained an enigma,” said Mainland. “But if a computer can discern the relationship between how molecules are shaped and how we ultimately perceive their odors, scientists could use that knowledge to advance the understanding of how our brains and noses work together.”
    To address this, Osmo CEO Alex Wiltschko, PhD and his team created a model that learned how to match the prose descriptions of a molecule’s odor with the odor’s molecular structure. The resulting map of these interactions is essentially groupings of similarly smelling odors, like floral sweet and candy sweet. “Computers have been able to digitize vision and hearing, but not smell — our deepest and oldest sense,” said Wiltschko. “This study proposes and validates a novel data-driven map of human olfaction, matching chemical structure to odor perception.”
    What is the smell of garlic or of ozone?
    The model was trained using an industry dataset that included the molecular structures and odor qualities of 5,000 known odorants. The input to the model is the shape of a molecule, and the output is a prediction of which odor words best describe its smell.
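    To make that input/output framing concrete, here is a toy multi-label setup on synthetic data; the descriptor list and "fingerprint" features are invented, and the actual study used learned representations of molecular structure rather than random forests:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.multioutput import MultiOutputClassifier

    rng = np.random.default_rng(0)

    DESCRIPTORS = ["floral", "sweet", "garlic", "ozone", "woody"]
    X = rng.integers(0, 2, size=(500, 64))  # 500 "molecules", 64 fingerprint bits
    # Fake ground truth: each descriptor depends on a few fingerprint bits.
    Y = np.column_stack([
        (X[:, 3 * i : 3 * i + 3].sum(axis=1) >= 2).astype(int)
        for i in range(len(DESCRIPTORS))
    ])

    clf = MultiOutputClassifier(
        RandomForestClassifier(n_estimators=50, random_state=0))
    clf.fit(X[:400], Y[:400])

    pred = clf.predict(X[400:401])[0]
    print("predicted descriptors:",
          [d for d, on in zip(DESCRIPTORS, pred) if on])
    ```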

    Electrical noise stimulation applied to the brain could be key to boosting math learning

    Exciting a brain region using electrical noise stimulation can help improve mathematical learning in those who struggle with the subject, according to a new study from the Universities of Surrey and Oxford, Loughborough University, and Radboud University in The Netherlands.
    In this unique study, researchers investigated the impact of neurostimulation on learning. Despite growing interest in this non-invasive technique, little is known about the neurophysiological changes it induces or the effect it has on learning.
    Researchers found that electrical noise stimulation over the frontal part of the brain improved the mathematical ability of people whose brains were less excited by mathematics before stimulation was applied. No improvement in mathematical scores was identified in those who showed a high level of brain excitation during the initial assessment, or in the placebo groups. The researchers believe that electrical noise stimulation acts on sodium channels in the brain, interfering with the cell membranes of neurons, which increases cortical excitability.
    Professor Roi Cohen Kadosh, Professor of Cognitive Neuroscience and Head of the School of Psychology at the University of Surrey who led this project, said:
    “Learning is key to everything we do in life — from developing new skills, such as driving a car, to learning how to code. Our brains are constantly absorbing and acquiring new knowledge.
    “Previously, we have shown that a person’s ability to learn is associated with neuronal excitation in their brains. What we wanted to discover in this case is if our novel stimulation protocol could boost, in other words excite, this activity and improve mathematical skills.”
    For the study, 102 participants were recruited and their mathematical skills assessed through a series of multiplication problems. Participants were then split into four groups: a learning group exposed to high-frequency random electrical noise stimulation, and an overlearning group in which participants practised the multiplication problems beyond the point of mastery, also with high-frequency random electrical noise stimulation. The remaining two groups were matched learning and overlearning groups exposed to a sham (i.e., placebo) condition, an experience akin to real stimulation but without significant electrical current. EEG recordings were taken at the beginning and at the end of the stimulation to measure brain activity.
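    For intuition about the stimulus itself, high-frequency random noise stimulation is commonly delivered as band-limited noise. A sketch of such a waveform (the roughly 100-640 Hz band and 1 mA peak below are typical values from the tRNS literature, not parameters reported for this study):

    ```python
    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 10_000      # sample rate (Hz)
    duration = 2.0   # seconds
    rng = np.random.default_rng(0)

    # White noise, band-pass filtered to the high-frequency tRNS band.
    white = rng.standard_normal(int(fs * duration))
    sos = butter(4, [100, 640], btype="bandpass", fs=fs, output="sos")
    noise = sosfilt(sos, white)

    # Scale to a typical +/- 1 mA peak current.
    current = 1.0 * noise / np.abs(noise).max()
    print(f"samples: {current.size}, peak: {current.max():.2f} mA")
    ```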