More stories

  •

    Scientists develop an energy-efficient wireless power and information transfer system

    The Industrial Internet of Things (IIoT) refers to technology that combines wireless sensors, controllers, and mobile communication technologies to make every aspect of industrial production processes intelligent and efficient. Since IIoT deployments can involve many small battery-driven devices and sensors, there is a growing need for a robust network for data transmission and power transfer to monitor the IIoT environment.
    In this regard, wireless power transfer is a promising technology: it uses radio frequency signals to power small devices that consume minimal power. Recently, simultaneous wireless information and power transfer (SWIPT), which uses a single radio frequency signal to perform energy harvesting and information decoding at the same time, has attracted significant interest for IIoT applications. Additionally, with smart devices rapidly growing in number, SWIPT has been combined with nonorthogonal multiple access (NOMA), a promising candidate for IIoT owing to its ability to extend the battery life of sensors and other devices. However, the energy efficiency of this combined system falls significantly as the transmission distance from the central controller grows.
    To overcome this limitation, a team of researchers from South Korea, led by Associate Professor Dong-Wook Seo from the Division of Electronics and Electrical Information Engineering at Korea Maritime and Ocean University, has developed a new framework by applying SWIPT-aided NOMA to a distributed antenna system (DAS), significantly improving the energy and spectral efficiencies of IIoTs. “By applying a DAS with supporting antennas relatively close to edge users alongside a central base station, SWIPT-NOMA’s loss with growing distance can be reduced efficiently. This improves information decoding and energy harvesting performance,” explains Dr. Seo.
    Their study was made available online on 27 October 2022 and published in Volume 19, Issue 7 of the journal IEEE Transactions on Industrial Informatics on 1 July 2023.
    The researchers formulated a three-step iterative algorithm to maximize the energy efficiency of the SWIPT-NOMA-DAS system. They first optimized the power allocation for the central IoT controller. Next, the power allocation for NOMA signaling and the power splitting (PS) assignment for SWIPT were jointly optimized while satisfying the minimum data rate and harvested energy requirements. Finally, the team analyzed outage events, in which the system cannot provide sufficient energy or data rates, and extended the joint power allocation and PS assignment optimization to the multi-cluster scenario.
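    The alternating structure of such an optimizer can be pictured with a toy model. Everything below is invented for illustration (the channel model, constraint values, and grid search are not from the paper); it only shows the shape of an iterate-between-subproblems scheme: fix the power-splitting ratio and optimize transmit power, then fix power and optimize the splitting ratio, repeating until the energy-efficiency objective settles.

```python
import math

def rate(p, rho, gain=2.0, noise=0.1):
    # Shannon-style rate on the information-decoding branch (toy channel).
    return math.log2(1 + rho * p * gain / noise)

def harvested(p, rho, gain=2.0, eta=0.6):
    # Energy harvested from the (1 - rho) power-splitting branch.
    return eta * (1 - rho) * p * gain

def energy_efficiency(p, rho, p_circuit=0.2):
    return rate(p, rho) / (p + p_circuit)

def alternate_optimize(r_min=1.0, e_min=0.3, steps=10):
    p, rho = 1.0, 0.5
    grid = [i / 100 for i in range(1, 100)]
    for _ in range(steps):
        # Subproblem 1: best transmit power for the current splitting ratio.
        feasible = [2 * g for g in grid
                    if rate(2 * g, rho) >= r_min and harvested(2 * g, rho) >= e_min]
        if feasible:
            p = max(feasible, key=lambda q: energy_efficiency(q, rho))
        # Subproblem 2: best splitting ratio for the current power.
        feasible = [r for r in grid
                    if rate(p, r) >= r_min and harvested(p, r) >= e_min]
        if feasible:
            rho = max(feasible, key=lambda r: energy_efficiency(p, r))
    return p, rho, energy_efficiency(p, rho)

p, rho, ee = alternate_optimize()
print(f"p={p:.2f}, rho={rho:.2f}, EE={ee:.2f} (toy units)")
```

    A real implementation would replace the grid searches with the authors' own subproblem solutions; the toy only illustrates the iteration between power allocation and PS assignment under rate and energy constraints.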
    They validated their algorithm through extensive numerical simulations, finding that the proposed SWIPT-NOMA-DAS system is five times more energy efficient than SWIPT-NOMA without DAS. It also shows a more than 10% performance improvement over SWIPT-OMA-DAS.
    Highlighting the significance of their study, Dr. Seo says: “This technology ensures very efficient energy consumption and offers various advantages such as convenience, low power, and battery life extension. Thus, it can be applied to smartphones, laptops, wearable devices, and electric vehicles. Most importantly, the SWIPT-NOMA-DAS system can optimize resource allocation and efficiently perform wireless charging and information transmission for users in an IoT environment.”

  •

    AI performs comparably to human readers of mammograms

    Using a standardized assessment, researchers in the UK compared the performance of a commercially available artificial intelligence (AI) algorithm with human readers of screening mammograms. Results of their findings were published in Radiology, a journal of the Radiological Society of North America (RSNA).
    Mammographic screening does not detect every breast cancer. False-positive interpretations can result in women without cancer undergoing unnecessary imaging and biopsy. To improve the sensitivity and specificity of screening mammography, one solution is to have two readers interpret every mammogram.
    According to the researchers, double reading increases cancer detection rates by 6 to 15% and keeps recall rates low. However, this strategy is labor-intensive and difficult to achieve during reader shortages.
    “There is a lot of pressure to deploy AI quickly to solve these problems, but we need to get it right to protect women’s health,” said Yan Chen, Ph.D., professor of digital screening at the University of Nottingham, United Kingdom.
    Prof. Chen and her research team used test sets from the Personal Performance in Mammographic Screening, or PERFORMS, quality assurance assessment used by the UK’s National Health Service Breast Screening Program (NHSBSP) to compare the performance of human readers with AI. A single PERFORMS test consists of 60 challenging exams from the NHSBSP with abnormal, benign and normal findings. For each test mammogram, the reader’s score is compared to the ground truth of the results.
    “It’s really important that human readers working in breast cancer screening demonstrate satisfactory performance,” she said. “The same will be true for AI once it enters clinical practice.”
    The research team used data from two consecutive PERFORMS test sets, or 120 screening mammograms, and the same two sets to evaluate the performance of the AI algorithm. The researchers compared the AI test scores with the scores of the 552 human readers, including 315 (57%) board-certified radiologists and 237 non-radiologist readers consisting of 206 radiographers and 31 breast clinicians.
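    As a small, hedged illustration of how any reader's calls (human or AI) can be scored against a test set's ground truth, the sketch below computes sensitivity and specificity for one toy reader. The case labels and recall decisions are invented, and PERFORMS scoring itself is more involved than a single sensitivity/specificity pair.

```python
# Hypothetical sketch of scoring one reader against ground truth.
def sensitivity_specificity(truth, calls):
    # truth[i]: 1 if case i truly contains cancer; calls[i]: 1 if recalled.
    tp = sum(1 for t, c in zip(truth, calls) if t == 1 and c == 1)
    tn = sum(1 for t, c in zip(truth, calls) if t == 0 and c == 0)
    positives = sum(truth)
    negatives = len(truth) - positives
    return tp / positives, tn / negatives

truth = [1, 1, 1, 0, 0, 0, 0, 1]   # 8 toy cases, 4 with cancer
calls = [1, 1, 0, 0, 0, 1, 0, 1]   # one missed cancer, one false recall
sens, spec = sensitivity_specificity(truth, calls)
print(sens, spec)  # → 0.75 0.75
```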

  •

    ChatGPT is debunking myths on social media around vaccine safety, say experts

    ChatGPT could help to increase vaccine uptake by debunking myths around jab safety, say the authors of a study published in the peer-reviewed journal Human Vaccines and Immunotherapeutics.
    The researchers asked the artificial intelligence (AI) chatbot the 50 most frequently asked Covid-19 vaccine questions, including queries based on myths and fake stories such as the vaccine causing Long Covid.
    Results show that ChatGPT scored nine out of 10 on average for accuracy; the rest of the time its answers were correct but left some gaps in the information provided, according to the study.
    Based on these findings, experts who led the study from the GenPoB research group based at the Instituto de Investigación Sanitaria (IDIS) — Hospital Clinico Universitario of Santiago de Compostela, say the AI tool is a “reliable source of non-technical information to the public,” especially for people without specialist scientific knowledge.
    However, the findings do highlight some concerns about the technology such as ChatGPT changing its answers in certain situations.
    “Overall, ChatGPT constructs a narrative in line with the available scientific evidence, debunking myths circulating on social media,” says lead author Antonio Salas, who as well as leading the GenPoB research group, is also a Professor at the Faculty of Medicine at the University of Santiago de Compostela, in Spain.
    “Thereby it potentially facilitates an increase in vaccine uptake. ChatGPT can detect counterfeit questions related to vaccines and vaccination. The language this AI uses is not too technical and therefore easily understandable to the public, but without losing scientific rigor.”

  •

    Better cybersecurity with new material

    Digital information exchange can be safer, cheaper and more environmentally friendly with the help of a new type of random number generator for encryption developed at Linköping University, Sweden. The researchers behind the study believe that the new technology paves the way for a new type of quantum communication.
    In an increasingly connected world, cybersecurity is becoming increasingly important to protect not just the individual, but also, for example, national infrastructure and banking systems. And there is an ongoing race between hackers and those trying to protect information. The most common way to protect information is through encryption. So when we send emails, pay bills and shop online, the information is digitally encrypted.
    To encrypt information, a random number generator is used, which can either be a computer programme or the hardware itself. The random number generator provides keys that are used to both encrypt and unlock the information at the receiving end.
    Different types of random number generators provide different levels of randomness and thus security. Hardware is the much safer option as randomness is controlled by physical processes. And the hardware method that provides the best randomness is based on quantum phenomena — what researchers call the Quantum Random Number Generator, QRNG.
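    The gap between software and hardware generators can be pictured in a few lines. The sketch below has nothing to do with the Linköping QRNG itself; it only shows why a seeded software generator is the weaker option: anyone who learns the seed can reproduce every bit, while the operating system's entropy pool (which can mix in hardware noise) has no seed to learn.

```python
# Illustrative contrast only (not the Linköping QRNG).
import random
import secrets

# Two software PRNGs with the same seed produce identical "random" keys.
key_a = random.Random(42).getrandbits(128)
key_b = random.Random(42).getrandbits(128)
assert key_a == key_b  # reproducible, hence predictable if the seed leaks

# Bits drawn from the operating system's entropy pool instead.
hardware_style_key = secrets.randbits(128)
```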
    “In cryptography, it’s not only important that the numbers are random, but that you’re the only one who knows about them. With QRNGs, we can certify that a large amount of the generated bits is private and thus completely secure. And if the laws of quantum physics are true, it should be impossible to eavesdrop without the recipient finding out,” says Guilherme B Xavier, researcher at the Department of Electrical Engineering at Linköping University.
    His research group, together with researchers at the Department of Physics, Chemistry and Biology (IFM), has developed a new type of QRNG that can be used for encryption, but also for betting and computer simulations. The new feature of the Linköping researchers’ QRNG is the use of light-emitting diodes made from the crystal-like material perovskite.
    Their random number generator is among the best produced and compares well with similar products. Thanks to the properties of perovskites, it has the potential to be cheaper and more environmentally friendly.

  •

    Software analyzes calcium ‘sparks’ that can contribute to arrhythmia

    A team of UC Davis and University of Oxford researchers has developed an innovative tool: SparkMaster 2. The open-source software allows scientists to automatically analyze normal and abnormal calcium signals in cells.
    Calcium is a key signaling molecule in all cells, including muscles like the heart. The new software enables the automatic analysis of distinct patterns of calcium release in cells. This includes calcium “sparks,” microscopic releases of calcium within cardiac cells associated with irregular heartbeats, also known as arrhythmia.
    A research article demonstrating the capabilities of SparkMaster 2 was published in Circulation Research.
    Jakub Tomek, the first author of the research article, is a Sir Henry Wellcome Fellow in the Department of Physiology, Anatomy and Genetics at the University of Oxford. He spent his fellowship year at UC Davis, working with Distinguished Professor Donald M. Bers.
    “It was great to present SparkMaster 2 at recent conferences and see the enthusiastic response. I felt it would be an outlier and that few people would care. But many people were excited about having a new analysis tool that overcomes many of the limitations they have experienced with prior tools,” Tomek said.
    Fellowship at UC Davis leads to updated tool
    Problems with how and when calcium is released by cells can have an impact on a range of diseases, including arrhythmia and hypertension. To understand the mechanisms behind these diseases, researchers use fluorescent calcium indicators and microscopic imaging that can measure the calcium changes at the cellular level.
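    As a hedged illustration of the kind of analysis such tools automate, the sketch below flags "sparks" in a one-dimensional fluorescence trace as local maxima rising well above the baseline. This is not SparkMaster 2's algorithm (which works on microscopy images and uses far more robust statistics); the trace values and threshold rule are invented.

```python
# Toy spark detector: not SparkMaster 2, just the general idea.
def detect_sparks(trace, n_sd=2.0):
    # Threshold = trace mean plus n_sd standard deviations (invented rule).
    mean = sum(trace) / len(trace)
    var = sum((x - mean) ** 2 for x in trace) / len(trace)
    threshold = mean + n_sd * var ** 0.5
    # Return the index of each local maximum that clears the threshold.
    return [i for i in range(1, len(trace) - 1)
            if trace[i] > threshold
            and trace[i] >= trace[i - 1] and trace[i] >= trace[i + 1]]

# Invented fluorescence trace with two spark-like transients.
trace = [1.0, 1.1, 0.9, 1.0, 6.0, 1.2, 1.0, 0.9, 5.8, 1.1, 1.0, 1.0]
print(detect_sparks(trace))  # → [4, 8]
```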

  •

    Optics and AI find viruses faster

    Researchers have developed an automated version of the viral plaque assay, the gold-standard method for detecting and quantifying viruses. The new method uses time-lapse holographic imaging and deep learning to greatly reduce detection time and eliminate staining and manual counting. This advance could help streamline the development of new vaccines and antiviral drugs.
    Yuzhu Li from Ozcan Lab at the University of California, Los Angeles (UCLA), will present this research at Frontiers in Optics + Laser Science (FiO LS), which will be held 9–12 October 2023 at the Greater Tacoma Convention Center in Tacoma (Greater Seattle Area), Washington.
    “By cutting down the detection time compared to traditional viral plaque assays, this technique might help expedite vaccine and drug development research by significantly reducing the detection time needed and eliminating chemical staining and manual counting entirely,” explains Li. “In the event of a new virus outbreak, vaccines or antiviral treatments could be developed, tested, and made available to the public at a significantly accelerated rate, resulting in a faster response time to virus-induced health emergencies.”
    Although the viral plaque assay is a cost-effective way to assess virus infectivity and quantify the amount of a virus in a sample, it is time consuming to perform. Samples are first diluted and then added to cultured cells. If the virus kills the infected cells, a region free of cells — a plaque — develops. Experts then manually count the stained plaque-forming units (PFUs), a process that is susceptible to staining irregularities and human counting errors.
    The new stain-free automated viral plaque assay system replaces manual plaque counting with a lens-free holographic imaging system that images the spatiotemporal features of PFUs during incubation. A deep learning algorithm is then used to detect, classify and locate PFUs based on changes observed.
    To show the efficacy of their system, the researchers infected cultured cells with vesicular stomatitis virus. After just 20 hours of incubation, the automated system detected more than 90% of the viral PFUs without any false positives. This was much faster than the traditional plaque assay, which requires 48 hours of incubation for this virus. They also applied the automated approach to herpes simplex virus type-1 and encephalomyocarditis virus, demonstrating even shorter incubation times and saving an average of around 48 and 20 hours, respectively.
    The researchers report that no false positives were detected across all time points. In addition, because the system can identify individual PFUs during their early growth, before the formation of PFU clusters, it can be used to analyze viral samples with about 10 times higher virus concentrations than traditional approaches.
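    One way to picture the classification step is that genuine plaques grow across the time-lapse while artifacts do not. The toy below is not the UCLA deep-learning pipeline; the area series and the growth rule are invented purely to illustrate how temporal features can separate PFUs from false positives.

```python
# Toy PFU classifier based on invented growth-over-time rule.
def classify_pfu(area_series, min_growth=2.0):
    # A real plaque expands as the infection spreads outward, so keep a
    # candidate only if its cleared area never shrinks and ends up at
    # least min_growth times larger than it started.
    grows = all(b >= a for a, b in zip(area_series, area_series[1:]))
    return grows and area_series[-1] >= min_growth * max(area_series[0], 1e-9)

candidates = {
    "site A": [1.0, 2.5, 5.0, 9.0],   # steadily expanding -> PFU
    "site B": [3.0, 3.1, 2.9, 3.0],   # static debris -> rejected
    "site C": [0.5, 0.4, 4.0, 0.3],   # transient artifact -> rejected
}
pfus = [name for name, areas in candidates.items() if classify_pfu(areas)]
print(pfus)  # → ['site A']
```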
    “As for the next steps, UCLA researchers are improving their system design to further increase its sensitivity and specificity for various types of viruses, paving the way for broad adoption in laboratory and industrial settings,” said Li. “They are also exploring other potential applications of this technique in virology research for high-throughput and cost-effective screening of antiviral drugs.”

  •

    A system to keep cloud-based gamers in sync

    Cloud gaming, which involves playing a video game remotely from the cloud, witnessed unprecedented growth during the lockdowns and gaming hardware shortages that occurred during the heart of the Covid-19 pandemic. Today, the burgeoning industry encompasses a $6 billion global market and more than 23 million players worldwide.
    However, interdevice synchronization remains a persistent problem in cloud gaming and the broader field of networking. In cloud gaming, video, audio, and haptic feedback are streamed from one central source to multiple devices, such as a player’s screen and controller, which typically operate on separate networks. These networks aren’t synchronized, leading to a lag between these two separate streams. A player might see something happen on the screen and then hear it on their controller a half second later.
    Inspired by this problem, scientists from MIT and Microsoft Research took a unique approach to synchronizing streams transmitted to two devices. Their system, called Ekho, adds inaudible white noise sequences to the game audio streamed from the cloud server. Then it listens for those sequences in the audio recorded by the player’s controller.
    Ekho uses the mismatch between these noise sequences to continuously measure and compensate for the interstream delay.
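    The delay-measurement idea lends itself to a compact sketch: embed a known pseudo-noise sequence in the audio, then find the lag at which it best matches the recording. The signal sizes and values below are invented, and Ekho's actual processing is far more sophisticated (inaudible sequences, continuous operation), but the cross-correlation core looks like this:

```python
# Toy delay estimation via cross-correlation (not Ekho itself).
import random

def estimate_delay(reference, recording):
    # Slide the reference over the recording and return the shift with
    # the largest dot product (the cross-correlation peak).
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(recording) - len(reference) + 1):
        score = sum(r * x for r, x in zip(reference, recording[lag:]))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

rng = random.Random(0)
noise = [rng.choice((-1.0, 1.0)) for _ in range(256)]  # embedded PN sequence
true_delay = 37
recording = [0.0] * true_delay + noise + [0.0] * 64    # silence, noise, silence
print(estimate_delay(noise, recording))  # → 37
```

    A pseudo-noise sequence works well here because its autocorrelation is sharply peaked: it matches itself strongly at the true lag and only weakly everywhere else.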
    In real cloud gaming sessions, the researchers showed that Ekho is highly reliable. The system can keep streams synchronized to within less than 10 milliseconds of each other, most of the time. Other synchronization methods resulted in consistent delays of more than 50 milliseconds.
    And while Ekho was designed for cloud gaming, this technique could be used more broadly to synchronize media streams traveling to different devices, such as in training situations that utilize multiple augmented or virtual reality headsets.
    “Sometimes, all it takes for a good solution to come out is to think outside what has been defined for you. The entire community has been fixed on how to solve this problem by synchronizing through the network. Synchronizing two streams by listening to the audio in the room sounded crazy, but it turned out to be a very good solution,” says Pouya Hamadanian, an electrical engineering and computer science (EECS) graduate student and lead author of a paper describing Ekho.

  •

    An ‘introspective’ AI finds diversity improves performance

    An artificial intelligence with the ability to look inward and fine tune its own neural network performs better when it chooses diversity over lack of diversity, a new study finds. The resulting diverse neural networks were particularly effective at solving complex tasks.
    “We created a test system with a non-human intelligence, an artificial intelligence (AI), to see if the AI would choose diversity over the lack of diversity and if its choice would improve the performance of the AI,” says William Ditto, professor of physics at North Carolina State University, director of NC State’s Nonlinear Artificial Intelligence Laboratory (NAIL) and co-corresponding author of the work. “The key was giving the AI the ability to look inward and learn how it learns.”
    Neural networks are an advanced type of AI loosely based on the way that our brains work. Our natural neurons exchange electrical impulses according to the strengths of their connections. Artificial neural networks create similarly strong connections by adjusting numerical weights and biases during training sessions. For example, a neural network can be trained to identify photos of dogs by sifting through a large number of photos, making a guess about whether the photo is of a dog, seeing how far off it is and then adjusting its weights and biases until they are closer to reality.
    Conventional AI uses neural networks to solve problems, but these networks are typically composed of large numbers of identical artificial neurons. The number and strength of connections between those identical neurons may change as it learns, but once the network is optimized, those static neurons are the network.
    Ditto’s team, on the other hand, gave its AI the ability to choose the number, shape and connection strength between neurons in its neural network, creating sub-networks of different neuron types and connection strengths within the network as it learns.
    “Our real brains have more than one type of neuron,” Ditto says. “So we gave our AI the ability to look inward and decide whether it needed to modify the composition of its neural network. Essentially, we gave it the control knob for its own brain. So it can solve the problem, look at the result, and change the type and mixture of artificial neurons until it finds the most advantageous one. It’s meta-learning for AI.
    “Our AI could also decide between diverse or homogenous neurons,” Ditto says. “And we found that in every instance the AI chose diversity as a way to strengthen its performance.”
    The team tested the AI’s accuracy by asking it to perform a standard numerical classifying exercise, and saw that its accuracy increased as the number of neurons and neuronal diversity increased. A standard, homogenous AI could identify the numbers with 57% accuracy, while the meta-learning, diverse AI was able to reach 70% accuracy.
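    A toy forward pass can make the "control knob" concrete: give each neuron its own activation type that an outer meta-learning loop could change. This is not the NC State system; the weights, inputs, and activation menu below are invented for illustration only.

```python
# Toy layer with per-neuron activation types (invented example).
import math

ACTIVATIONS = {
    "relu": lambda x: max(0.0, x),
    "tanh": math.tanh,
    "sin": math.sin,
}

def diverse_layer(inputs, weights, neuron_types):
    # One output per neuron; neuron_types selects that neuron's nonlinearity,
    # the "knob" a meta-learner could turn while training.
    outputs = []
    for w_row, kind in zip(weights, neuron_types):
        pre = sum(w * x for w, x in zip(w_row, inputs))
        outputs.append(ACTIVATIONS[kind](pre))
    return outputs

x = [0.5, -1.0, 2.0]
w = [[0.2, 0.1, 0.4], [1.0, 0.3, -0.2], [0.5, 0.5, 0.1]]
homogeneous = diverse_layer(x, w, ["relu", "relu", "relu"])
diverse = diverse_layer(x, w, ["relu", "tanh", "sin"])
```

    With identical relu neurons, the two negative pre-activations are both clipped to zero; the mixed layer keeps those signals alive, which hints at why a richer neuron mixture can carry more information through the network.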