More stories

  • Intelligent optical chip to improve telecommunications

    From the internet to fibre and satellite communications and medical diagnostics, our everyday lives rely on optical technologies. These technologies use pulsed optical sources to transfer, retrieve, or compute information. Gaining control over optical pulse shapes thus paves the way for further advances.
    PhD student Bennet Fischer and postdoctoral researcher Mario Chemnitz, in the team of Professor Roberto Morandotti of the Institut national de la recherche scientifique (INRS), developed a smart pulse-shaper integrated on a chip. The device output can autonomously adjust to a user-defined target waveform with strikingly low technical and computational requirements.
    An Innovative Design
    Ideally, an optical waveform generator should autonomously output a target waveform for user-friendliness, and it should minimize the experimental requirements for driving the system and reading out the waveform, to ease online monitoring. It should also feature long-term reliability, low losses, fibre connectivity, and maximal functionality.
    In practice, however, imperfections such as device-to-device variations degrade the performance achievable relative to what a system was designed or simulated for. “We find that evolutionary optimization can help in overcoming the inherent design limitations of on-chip systems and hence elevate their performance and reconfigurability to a new level,” says the postdoctoral researcher.
    Machine Learning for Smart Photonics
    The device builds on the recent emergence of machine-learning concepts in photonics, which promises unprecedented capabilities and system performance. “The optics community is eager to learn about new methods and smart device implementations. In our work, we present an interlinked bundle of machine-learning enabling methods of high relevance, for both the technical and academic optical communities.”
    The researchers used evolutionary optimization algorithms as a key tool for repurposing a programmable photonic chip beyond its original use. Evolutionary algorithms are nature-inspired computer programs that can efficiently optimize many-parameter systems with significantly reduced computational resources.
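    To illustrate the principle (a minimal sketch, not the authors' actual system), the toy example below uses a simple evolutionary strategy to tune a set of hypothetical shaper parameters so that a simulated waveform approaches a user-defined target; the harmonic-sum waveform model and all numbers are illustrative assumptions.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 256)
    target = np.abs(np.sin(2 * np.pi * t)) ** 3       # user-defined target shape

    def waveform(params):
        # Toy stand-in for the pulse shaper: each parameter sets the amplitude
        # of one harmonic. The real on-chip response is far more complex.
        harmonics = np.arange(1, len(params) + 1)
        return params @ np.sin(np.outer(harmonics, 2 * np.pi * t))

    def error(params):
        # Mean-squared deviation from the target waveform (lower is fitter).
        return np.mean((waveform(params) - target) ** 2)

    # Simple (mu + lambda) evolutionary strategy: rank, keep the best, mutate.
    pop = rng.normal(size=(30, 8))
    for _ in range(300):
        pop = pop[np.argsort([error(p) for p in pop])]   # rank by fitness
        parents = pop[:10]                               # survivors
        children = parents[rng.integers(0, 10, size=20)]
        children = children + 0.05 * rng.normal(size=children.shape)
        pop = np.vstack([parents, children])
    print("best error:", error(pop[0]))
    ```
    In the experiment itself, the error is evaluated on the chip's measured output rather than on a simulation, which helps keep the computational requirements low.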
    This innovative research was published in the journal Optica. “For us young researchers, PhDs and postdocs, it is of paramount importance for our careers that our research is visible and shared. Thus, we are truly grateful and overwhelmed with the news that our work is published in such an outstanding and interdisciplinary journal. It heats up our ambitions to continue our work and search for even better implementations and breakthrough applications. It endorses our efforts and it is simply a great honour,” says Mario Chemnitz.
    The team’s next steps include investigating more complex chip designs, with the target of improving device performance, as well as integrating the optical sampling (detection scheme) on-chip. In time, this could provide a single, compact, ready-to-use device.
    Story Source:
    Materials provided by Institut national de la recherche scientifique – INRS. Original written by Audrey-Maude Vézina. Note: Content may be edited for style and length.

  • Disease outbreak simulations reveal influence of 'seeding' by multiple infected people

    A new computational analysis suggests that, beyond the initial effect of one infected person arriving and spreading disease to a previously uninfected population, the continuous arrival of more infected individuals has a significant influence on the evolution and severity of the local outbreak. Mattia Mazzoli, Jose Javier Ramasco, and colleagues present these findings in the open-access journal PLOS Computational Biology.
    In light of the ongoing COVID-19 pandemic, much research has investigated the dynamics of local outbreaks caused by the first detected cases in a population, which are linked to travel. However, few studies have explored whether and how the arrival of multiple infected individuals might impact the development of a local outbreak — a situation termed “multi-seeding.”
    To examine the impact of multi-seeding, Mazzoli and colleagues first simulated local outbreaks in Europe using a computational modeling approach. To capture travel and seeding events, the simulations incorporated real-world location data from mobile phones during March 2020, when the COVID-19 pandemic began.
    These simulations suggested that there is indeed an association between the number of “seed” arrivals per local population and the speed of spread, the final number of people infected, and the peak incidence rate experienced by the population. This relationship appears to be complex and non-linear, and it depends on the details of the social contact network within the affected population, including the effects of lockdowns.
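    As a concrete, simplified illustration (a toy model, not the authors' data-driven simulations): in the sketch below, a discrete-time SIR outbreak receives a constant influx of infected arrivals, and raising the seeding rate visibly shifts how early the epidemic peaks.
    ```python
    def sir_outbreak(population=100_000, beta=0.25, gamma=0.1,
                     seeds_per_day=1.0, days=365):
        """Toy discrete-time SIR model with a constant influx of infected arrivals."""
        s, i, r = float(population), 0.0, 0.0
        peak, peak_day = 0.0, 0
        for day in range(days):
            i += seeds_per_day                        # imported cases (the "seeds")
            new_infections = min(s, beta * i * s / population)
            new_recoveries = gamma * i
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
            if i > peak:
                peak, peak_day = i, day
        return r + i, peak, peak_day

    for rate in (1, 5, 25):
        final, peak, when = sir_outbreak(seeds_per_day=rate)
        print(f"{rate:>2} seeds/day -> peak of {peak:,.0f} infections on day {when}")
    ```
    In this homogeneous toy model, a higher seeding rate mainly accelerates the outbreak; the network-based simulations in the study capture the richer, non-linear dependence on contact structure and lockdowns described above.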
    To test whether the simulations accurately reflect real-world outbreaks, the researchers looked for similar associations between mobility data and COVID-19 incidence and mortality during the first wave of COVID-19 infection in England, France, Germany, Italy, and Spain. This analysis revealed strong signs of real-world multi-seeding effects similar to those observed in the simulations.
    Based on these findings, the researchers propose a method to understand and reconstruct the spatial spreading patterns of the main outbreak-producing events in every country.
    “Now that the relevance of multi-seeding is understood, it is crucial to develop containment measures that take it into account,” Ramasco says. Next, the researchers hope to incorporate the effects of vaccinations and antibodies acquired through infection into their simulations.
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • Many US adults worry about facial image data in healthcare settings

    Uses of facial images and facial recognition technologies, such as unlocking a phone or passing through airport security, are becoming increasingly common in everyday life. But how do people feel about the use of such data in healthcare and biomedical research?
    In a survey of over 4,000 US adults, researchers found that a significant proportion of respondents (15-25 percent, depending on the scenario) considered the use of facial image data in healthcare unacceptable across eight varying scenarios. Taken together with those who were unsure whether the uses were acceptable, roughly 30-50 percent of respondents indicated some degree of concern about uses of facial recognition technologies in healthcare. Whereas a majority found it acceptable to use facial image data in some cases, such as avoiding medical errors, diagnosis and screening, or security, more than half of respondents did not accept, or were uncertain about, healthcare providers using this data to monitor patients’ emotions or symptoms, or its use in health research.
    In the biomedical research setting, most respondents expressed similar levels of concern about the use of medical records, DNA data, and facial image data in a study.
    While respondents were a diverse group in terms of age, geographic region, gender, racial and ethnic background, educational attainment, household income, and political views, their perspectives on these issues did not differ by demographics. Findings were published in the journal PLOS ONE.
    “Our results show that a large segment of the public perceives a potential privacy threat when it comes to using facial image data in healthcare,” said lead author Sara Katsanis, who heads the Genetics and Justice Laboratory at Ann & Robert H. Lurie Children’s Hospital of Chicago and is a Research Assistant Professor of Pediatrics at Northwestern University Feinberg School of Medicine. “To ensure public trust, we need to consider greater protections for personal information in healthcare settings, whether it relates to medical records, DNA data, or facial images. As facial recognition technologies become more common, we need to be prepared to explain how patient and participant data will be kept confidential and secure.”
    Senior author Jennifer K. Wagner, Assistant Professor of Law, Policy and Engineering in Penn State’s School of Engineering Design, Technology, and Professional Programs, adds: “Our study offers an important opportunity for those pursuing possible use of facial analytics in healthcare settings and biomedical research to think about human-centeredness in a more meaningful way. The research that we are doing hopefully will help decisionmakers find ways to facilitate biomedical innovation in a thoughtful, responsible way that does not undermine public trust.”
    The research team, which includes co-authors with expertise in bioethics, law, genomics, facial analytics, and bioinformatics, hopes to conduct further research to understand the nuances where public trust is lacking.
    Story Source:
    Materials provided by Ann & Robert H. Lurie Children’s Hospital of Chicago. Note: Content may be edited for style and length.

  • Bridging optics and electronics

    Spatial light modulators are common optical components found in everything from home theater projectors to cutting-edge laser imaging and optical computing. These components can control various aspects of light, such as intensity and phase, pixel by pixel. Most spatial light modulators today rely on moving mechanical parts to achieve this control, but that approach results in bulky and slow optical devices.
    Now, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), in collaboration with a team from the University of Washington, have developed a simple spatial light modulator made from gold electrodes covered by a thin film of electro-optic material that changes its optical properties in response to electric signals.
    This is a first step towards more compact, high-speed and precise spatial light modulators that could one day be used in everything from imaging to virtual reality, quantum communications and sensing.
    The research is published in Nature Communications.
    “This simple spatial light modulator is a bridge between the realms of optics and electronics,” said Cristina Benea-Chelmus, a postdoctoral fellow at SEAS and first author of the paper.
    “When you interface optics with electronics, you can use the entire backbone of electronics that has been developed to open up new functionalities in optics.”
    The researchers used electro-optic materials designed by chemists Delwin L. Elder and Larry R. Dalton at the University of Washington. When an electric signal is applied to this material, the refractive index of the material changes. By dividing the material into pixels, the researchers could control the intensity of light in each pixel separately with interlocking electrodes.
    With only a small amount of power, the device can dramatically change the intensity of light at each pixel and can efficiently modulate light across the visible spectrum.
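    As a rough, textbook-level illustration of the physics involved (all numbers below are assumptions for illustration, not the device's specifications): in a Pockels-type electro-optic material, the refractive index shifts in proportion to the applied field, and the resulting phase shift can be converted into an intensity change, for example by placing the pixel in one arm of an interferometer.
    ```python
    import math

    # Illustrative values only; not the parameters of the SEAS/UW device.
    wavelength = 633e-9   # m, red light
    n0 = 1.7              # unperturbed refractive index of the film
    r33 = 100e-12         # m/V, Pockels coefficient of a strong organic EO material
    thickness = 1e-6      # m, film thickness across which the voltage is applied
    length = 50e-6        # m, distance over which the light interacts with the film

    for volts in (0.0, 1.0, 2.0, 5.0):
        field = volts / thickness                       # applied field, V/m
        delta_n = 0.5 * n0**3 * r33 * field             # Pockels index change
        delta_phi = 2 * math.pi * delta_n * length / wavelength
        transmission = math.cos(delta_phi / 2) ** 2     # interferometric read-out
        print(f"{volts:4.1f} V -> dn = {delta_n:.2e}, "
              f"phase = {delta_phi:.3f} rad, T = {transmission:.3f}")
    ```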
    The researchers used the new spatial light modulators for image projection and remote sensing by single-pixel imaging.
    “We consider our work to mark the beginning of an up-and-coming field of hybrid organic-nanostructured electro-optics with broad applications in imaging, remote control, environmental monitoring, adaptive optics and laser ranging,” said Federico Capasso, Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering, senior author of the paper.
    Harvard’s Office of Technology Development has protected the intellectual property associated with this project and is exploring commercialization opportunities.
    The research was co-authored by Maryna L. Meretska, Delwin L. Elder, Michele Tamagnone and Larry R. Dalton. It was supported in part by the Office of Naval Research (ONR) MURI program, under grant no. N00014-20-1-2450.
    Story Source:
    Materials provided by Harvard John A. Paulson School of Engineering and Applied Sciences. Original written by Leah Burrows. Note: Content may be edited for style and length.

  • Artificial intelligence-based technology quickly identifies genetic causes of serious disease

    An artificial intelligence (AI)-based technology rapidly diagnoses rare disorders in critically ill children with high accuracy, according to a report by scientists from University of Utah Health and Fabric Genomics, collaborators on a study led by Rady Children’s Hospital in San Diego. The benchmark finding, published in Genome Medicine, foreshadows the next phase of medicine, where technology helps clinicians quickly determine the root cause of disease so they can give patients the right treatment sooner.
    “This study is an exciting milestone demonstrating how rapid insights from AI-powered decision support technologies have the potential to significantly improve patient care,” says Mark Yandell, Ph.D., co-corresponding author on the paper. Yandell is a professor of human genetics and Edna Benning Presidential Endowed Chair at U of U Health, and a founding scientific advisor to Fabric.
    Worldwide, about seven million infants are born with serious genetic disorders each year. For these children, life usually begins in intensive care. A handful of NICUs in the U.S., including at U of U Health, are now searching for genetic causes of disease by reading, or sequencing, the three billion DNA letters that make up the human genome. While it takes hours to sequence the whole genome, it can take days or weeks of computational and manual analysis to diagnose the illness.
    For some infants, that is not fast enough, Yandell says. Understanding the cause of the newborn’s illness is critical for effective treatment. Arriving at a diagnosis within the first 24 to 48 hours after birth gives these patients the best chance to improve their condition. Knowing that speed and accuracy are essential, Yandell’s group worked with Fabric to develop the new Fabric GEM algorithm, which incorporates AI to find DNA errors that lead to disease.
    In this study, the scientists tested GEM by analyzing whole genomes from 179 previously diagnosed pediatric cases from Rady Children’s Hospital and five other medical centers around the world. GEM identified the causative gene as one of its top two candidates 92% of the time, outperforming existing tools that accomplished the same task less than 60% of the time.
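    The figure of merit here is what machine-learning practitioners call top-k accuracy. A minimal illustration of how such a score is computed (with hypothetical gene rankings, not the study's data):
    ```python
    def top_k_accuracy(ranked_candidates, true_genes, k=2):
        """Fraction of cases whose causative gene is among the top k candidates."""
        hits = sum(truth in ranks[:k]
                   for ranks, truth in zip(ranked_candidates, true_genes))
        return hits / len(true_genes)

    # Hypothetical output: each inner list is one case's candidates, best first.
    predictions = [["GJB2", "MYO7A"], ["SCN1A", "KCNQ2"], ["TTN", "PKD1"]]
    truth = ["GJB2", "KCNQ2", "LMNA"]     # made-up causative gene per case

    print(top_k_accuracy(predictions, truth, k=2))   # 2 of 3 hits -> 0.67
    ```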
    “Dr. Yandell and the Utah team are at the forefront of applying AI research in genomics,” says Martin Reese, Ph.D., CEO of Fabric Genomics and a co-author on the paper. “Our collaboration has helped Fabric achieve an unprecedented level of accuracy, opening the door for broad use of AI-powered whole genome sequencing in the NICU.”
    GEM leverages AI to learn from a vast and ever-growing body of knowledge that has become challenging to keep up with for clinicians and scientists. GEM cross-references large databases of genomic sequences from diverse populations, clinical disease information, and other repositories of medical and scientific data, combining all this with the patient’s genome sequence and medical records. To assist with the medical record search, GEM can be coupled with a natural language processing tool, Clinithink’s CLiX focus, which scans reams of doctors’ notes for the clinical presentations of the patient’s disease.
    “Critically ill children rapidly accumulate many pages of clinical notes,” Yandell says. “The need for physicians to manually review and summarize note contents as part of the diagnostic process is a massive time sink. The ability of Clinithink’s tool to automatically convert the contents of these notes in seconds for consumption by GEM is critical for speed and scalability.”
    Existing technologies mainly identify small genomic variants that include single DNA letter changes, or insertions or deletions of a small string of DNA letters. By contrast, GEM can also find “structural variants” as causes of disease. These changes are larger and are often more complex. It’s estimated that structural variants are behind 10 to 20% of genetic disease.
    “To be able to diagnose with more certainty opens a new frontier,” says Luca Brunelli, M.D., neonatologist and professor of pediatrics at U of U Health, who leads a team using GEM and other genome analysis technologies to diagnose patients in the NICU. His goal is to provide answers to families who would have had to live with uncertainty before the development of these tools. He says these advances now provide an explanation for why a child is sick, enable doctors to improve disease management, and, at times, lead to recovery.
    “This is a major innovation, one made possible through AI,” Yandell says. “GEM makes genome sequencing more cost-effective and scalable for NICU applications. It took an international team of clinicians, scientists, and software engineers to make this happen. Seeing GEM at work for such a critical application is gratifying.”
    Fabric and Yandell’s team at the Utah Center for Genetic Discovery have had their collaborative research supported by several national agencies, including the National Institutes of Health and American Heart Association, and by the U of U’s Center for Genomic Medicine. Yandell will continue to advise the Fabric team to further optimize GEM’s accuracy and interface for use in the clinic.

  • Storing data as mixtures of fluorescent dyes

    As the world’s data storage needs grow, new strategies for preserving information over long periods with reduced energy consumption are needed. Now, researchers reporting in ACS Central Science have developed a data storage approach based on mixtures of fluorescent dyes, which are deposited onto an epoxy surface in tiny spots with an inkjet printer. The mixture of dyes at each spot encodes binary information that is read with a fluorescent microscope. 
    Current devices for data storage, such as optical media, magnetic media and flash memory, typically last less than 20 years, and they require substantial energy to maintain stored information. Scientists have explored using different molecules, such as DNA or other polymers, to store information at high density and without power, for thousands of years or longer. But these approaches are limited by factors such as high relative cost and slow read/write speeds. George Whitesides, Amit Nagarkar and colleagues wanted to develop a molecular strategy that stores information with high density, fast read/write speeds and acceptable cost.
    The researchers chose seven commercially available fluorescent dye molecules that emit light at different wavelengths. They used the dyes as bits for American Standard Code for Information Interchange (ASCII) characters, where each bit is a “0” or “1,” depending on whether a particular dye is absent or present, respectively. A sequence of 0s and 1s was used to encode the first section of a seminal research paper by the famous scientist Michael Faraday. The team used an inkjet printer to place the dye mixtures in tiny spots on an epoxy surface, where they became covalently bound. Then, they used a fluorescence microscope to read the emission spectra of the dye molecules at each spot and decode the message. The fluorescent data could be read 1,000 times without a significant loss in intensity. The researchers also demonstrated the technique’s ability to write and read an image of Faraday. The strategy has a read rate of 469 bits/s, which is the fastest reported for any molecular information storage method, the researchers say.
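    A minimal sketch of the encoding scheme (the dye names and read-out here are placeholders; the actual chemistry and fluorescence detection are far more involved): each ASCII character becomes a 7-bit pattern, and each bit determines whether one of the seven dyes is included in that spot's mixture.
    ```python
    DYES = ["dye1", "dye2", "dye3", "dye4", "dye5", "dye6", "dye7"]  # placeholders

    def char_to_mixture(ch):
        """Map one ASCII character (7 bits) to the dyes printed at one spot."""
        bits = format(ord(ch), "07b")
        return [dye for bit, dye in zip(bits, DYES) if bit == "1"]

    def mixture_to_char(dyes_present):
        """Decode a spot: a dye's presence reads as 1, its absence as 0."""
        bits = "".join("1" if dye in dyes_present else "0" for dye in DYES)
        return chr(int(bits, 2))

    message = "Faraday"
    spots = [char_to_mixture(ch) for ch in message]    # one printed spot per char
    print(spots[0])                                    # dyes present in the 'F' spot
    print("".join(mixture_to_char(s) for s in spots))  # -> "Faraday"
    ```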
    Story Source:
    Materials provided by American Chemical Society. Note: Content may be edited for style and length.

  • Attention-based deep neural network increases detection capability in sonar systems

    In underwater acoustics, deep learning is gaining traction in improving sonar systems to detect ships and submarines in distress or in restricted waters. However, noise interference from the complex marine environment becomes a challenge when attempting to detect targeted ship-radiated sounds.
    In the Journal of the Acoustical Society of America, published by the Acoustical Society of America through AIP Publishing, researchers in China and the United States explore an attention-based deep neural network (ABNN) to tackle this problem.
    “We found the ABNN was highly accurate in target recognition, exceeding a conventional deep neural network, particularly when using limited single-target data to detect multiple targets,” co-author Qunyan Ren said.
    Deep learning is a machine-learning method that uses artificial neural networks inspired by the human brain to recognize patterns. Each layer of artificial neurons, or nodes, learns a distinct set of features based on the information contained in the previous layer.
    ABNN uses an attention module to mimic elements in the cognitive process that enable us to focus on the most important parts of an image, language, or other pattern and tune out the rest. This is accomplished by adding more weight to certain nodes to enhance specific pattern elements in the machine-learning process.
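    As a generic illustration of that idea (a standard squeeze-and-excitation-style channel-attention block, not the authors' actual architecture), the module below learns a weight for every feature channel and rescales the feature map so that informative channels are amplified and the rest are suppressed.
    ```python
    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """Reweight feature channels: 'attend' to the informative ones."""
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool1d(1),            # squeeze: average each channel
                nn.Flatten(),
                nn.Linear(channels, channels // reduction),
                nn.ReLU(),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),                       # per-channel weights in (0, 1)
            )

        def forward(self, x):                       # x: (batch, channels, time)
            weights = self.gate(x).unsqueeze(-1)    # (batch, channels, 1)
            return x * weights                      # emphasize, don't discard

    x = torch.randn(8, 32, 128)                     # e.g. 32 acoustic feature channels
    print(ChannelAttention(32)(x).shape)            # torch.Size([8, 32, 128])
    ```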
    Incorporating an ABNN system into sonar equipment for targeted ship detection, the researchers tested the approach on two ships in a shallow, 135-square-mile area of the South China Sea, and compared their results with those of a typical deep neural network (DNN). Radar and other equipment were used to identify more than 17 interfering vessels in the experimental area.
    They found that the ABNN's predictions improve considerably as it gravitates toward the features most closely correlated with the training goals. Detection becomes more pronounced as the network continually cycles through the entire training dataset, accentuating the weighted nodes and disregarding irrelevant information.
    While the ABNN's accuracy in detecting ships A and B separately was slightly higher than the DNN's (98% versus 97.4%), its accuracy in detecting both ships in the same vicinity was significantly higher (74% versus 58.4%).
    For multiple-target identification, an ABNN model would traditionally be trained using multiship data, but this can be a complicated and computationally costly process. The researchers instead trained their ABNN to detect each target separately, then merged the individual-target networks by extending the output layer.
    “The need to detect multiple ships at one time is a common scenario, and our model significantly exceeds DNN in detecting two ships in the same vicinity,” Ren said. “Moreover, our ABNN focused on the inherent features of the two ships simultaneously.”
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Toward more energy efficient power converters

    Scientists from Nara Institute of Science and Technology (NAIST) used the mathematical method called automatic differentiation to find the optimal fit to experimental data up to four times faster. This research can be applied to multivariable models of electronic devices, which may allow them to be designed with increased performance while consuming less power.
    Wide-bandgap devices, such as silicon carbide (SiC) metal-oxide-semiconductor field-effect transistors (MOSFETs), are a critical element for making converters faster and more sustainable, because they offer larger switching frequencies with smaller energy losses over a wide range of temperatures compared with conventional silicon-based devices. However, calculating the parameters that determine how the electrical current in a MOSFET responds as a function of the applied voltage remains difficult in circuit simulation. A better approach for fitting experimental data to extract the important parameters would give chip manufacturers the ability to design more efficient power converters.
    Now, a team of scientists led by NAIST has successfully used the mathematical method called automatic differentiation (AD) to significantly accelerate these calculations. While AD has been used extensively in training artificial neural networks, the current project extends its application to model parameter extraction.
    For problems involving many variables, the task of minimizing the error is often accomplished by a process of “gradient descent,” in which an initial guess is repeatedly refined by making small adjustments in the direction that reduces the error the quickest. This is where AD can be much faster than previous alternatives, such as symbolic or numerical differentiation, at finding the direction of steepest descent. AD breaks the problem down into combinations of basic arithmetic operations, each of which only needs to be performed once. “With AD, the partial derivatives with respect to each of the input parameters are obtained simultaneously, so there is no need to repeat the model evaluation for each parameter,” first author Michihiro Shintani says. By contrast, symbolic differentiation provides exact solutions but consumes a large amount of time and computational resources as the problem becomes more complex.
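    As a schematic sketch of this workflow (using JAX for the automatic differentiation; the square-law transistor model and synthetic data are textbook stand-ins, not the authors' SiC MOSFET model): jax.grad returns the partial derivatives of the fitting error with respect to all parameters at once, and plain gradient descent then refines the guesses.
    ```python
    import jax
    import jax.numpy as jnp

    def drain_current(params, vgs):
        """Textbook square-law model: Id = k * (Vgs - Vth)^2 above threshold."""
        k, vth = params
        return k * jnp.maximum(vgs - vth, 0.0) ** 2

    # Synthetic "measurements" generated from known parameters (k=0.5, Vth=1.0).
    vgs = jnp.linspace(0.0, 3.0, 50)
    measured = 0.5 * jnp.maximum(vgs - 1.0, 0.0) ** 2

    def loss(params):
        # Mean-squared fitting error between the model and the measured points.
        return jnp.mean((drain_current(params, vgs) - measured) ** 2)

    grad_loss = jax.jit(jax.grad(loss))   # every partial derivative in one pass

    params = jnp.array([0.1, 0.3])        # initial guess for (k, Vth)
    for _ in range(5000):
        params = params - 0.1 * grad_loss(params)   # gradient-descent refinement
    print(params)                         # approaches [0.5, 1.0]
    ```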
    To show the effectiveness of this method, the team applied it to experimental data collected from a commercially available SiC MOSFET. “Our approach reduced the computation time by 3.5× in comparison to the conventional numerical-differentiation method, which is close to the maximum improvement theoretically possible,” Shintani says. This method can be readily applied in many other areas of research involving multiple variables, since it preserves the physical meanings of the model parameters. The application of AD for the enhanced extraction of model parameters will support new advances in MOSFET development and improved manufacturing yields.
    Story Source:
    Materials provided by Nara Institute of Science and Technology. Note: Content may be edited for style and length.