More stories

  • Researchers use SPAD detector to achieve 3D quantum ghost imaging

    Researchers have reported the first 3D measurements acquired with quantum ghost imaging. The new technique enables 3D imaging at the single-photon level, yielding the lowest photon dose possible for any measurement.
    “3D imaging with single photons could be used for various biomedical applications, such as eye care diagnostics,” said researcher Carsten Pitsch from the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation and Karlsruhe Institute of Technology, both in Germany. “It can be applied to image materials and tissues that are sensitive to light or drugs that become toxic when exposed to light without any risk of damage.”
    In the Optica Publishing Group journal Applied Optics, the researchers describe their new approach, which incorporates new single-photon avalanche diode (SPAD) array detectors. They apply this scheme, which they call asynchronous detection, to perform 3D quantum ghost imaging.
    “Asynchronous detection might also be useful for military or security applications since it could be used to observe without being detected while also reducing the effects of over-illumination, turbulence and scattering,” said Pitsch. “We also want to investigate its use in hyperspectral imaging, which could allow multiple spectral regions to be recorded simultaneously while using a very low photon dose. This could be very useful for biological analysis.”
    Adding a third dimension
    Quantum ghost imaging creates images using entangled photon pairs in which only one member of the pair interacts with the object. The detection time of each photon is then used to identify entangled pairs, which allows an image to be reconstructed. This approach not only allows imaging at extremely low light levels but also means that the objects being imaged do not have to interact with the photons used for imaging.
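    The article does not include the team’s reconstruction code, but the pairing step it describes can be illustrated with a minimal sketch: detection timestamps from the two arms are matched within a coincidence window, and the pixel positions of the photons whose partners passed through the object are accumulated into an image. The data format, window size, and function below are assumptions for illustration, not the authors’ pipeline; with time-resolving SPAD detectors, the arrival-time differences could additionally be binned into depth for the 3D case.

    ```python
    import numpy as np

    # Minimal sketch of coincidence-based ghost-image reconstruction.
    # Assumed inputs (hypothetical format, not the authors' data):
    #   ref_times  : 1D array of reference-arm photon timestamps (s)
    #   ref_pixels : list of (x, y) pixel coordinates, one per reference photon
    #   obj_times  : 1D array of object-arm photon timestamps (s)
    def reconstruct_ghost_image(ref_times, ref_pixels, obj_times,
                                shape=(64, 64), window=1e-9):
        """Accumulate reference-arm pixels whose partner photon was detected
        in the object arm within the coincidence window (seconds)."""
        image = np.zeros(shape)
        obj_times = np.sort(np.asarray(obj_times))
        for t, (x, y) in zip(ref_times, ref_pixels):
            i = np.searchsorted(obj_times, t)          # nearest object-arm events
            for j in (i - 1, i):
                if 0 <= j < len(obj_times) and abs(obj_times[j] - t) <= window:
                    image[y, x] += 1                   # count the coincidence
                    break
        return image
    ```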
    Previous setups for quantum ghost imaging were not capable of 3D imaging because they relied on intensified charge-coupled device (ICCD) cameras. Although these cameras have good spatial resolution, they are time-gated and don’t allow the independent temporal detection of single photons.

  • The ‘unknome’: A database of human genes we know almost nothing about

    Researchers from the United Kingdom hope that a new, publicly available database they have created will shrink, not grow, over time. That’s because it is a compendium of the thousands of understudied proteins encoded by genes in the human genome, whose existence is known but whose functions are mostly not. The database, dubbed the “unknome,” is the work of Matthew Freeman of the Dunn School of Pathology, University of Oxford, England, and Sean Munro of MRC Laboratory of Molecular Biology in Cambridge, England, and colleagues, and is described in the open access journal PLOS Biology. Their own investigations of a subset of proteins in the database reveal that a majority contribute to important cellular functions, including development and resilience to stress.
    The sequencing of the human genome has made it clear that it encodes thousands of likely protein sequences whose identities and functions are still unknown. There are multiple reasons for this, including the tendency to focus scarce research dollars on already-known targets, and the lack of tools, including antibodies, to interrogate cells about the function of these proteins. But the risks of ignoring these proteins are significant, the authors argue, since it is likely that some, perhaps many, play important roles in critical cell processes and may provide both insight and targets for therapeutic intervention.
    To promote more rapid exploration of such proteins, the authors created the unknome database (www.unknome.org), which assigns to every protein a “knownness” score reflecting the information in the scientific literature about its function, conservation across species, subcellular compartmentalization, and other elements. Based on this system, there are many thousands of proteins whose knownness is near zero. Proteins from model organisms are included, along with those from the human genome. The database is open to all and is customizable: users can assign their own weights to the different elements and thereby generate their own knownness scores to prioritize their research.
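    The article does not specify how the score is computed; as a loose illustration of the customizable weighting it describes, a knownness value could be a weighted sum over evidence categories, as in the sketch below. The category names and default weights are hypothetical, not the database’s actual scheme.

    ```python
    # Illustrative weighted "knownness" score. The evidence categories and
    # default weights are hypothetical, not the unknome database's schema.
    DEFAULT_WEIGHTS = {
        "function": 2.0,       # direct literature evidence about function
        "conservation": 1.0,   # conservation across species
        "localization": 1.0,   # subcellular compartmentalization
        "interactions": 0.5,   # known binding partners
    }

    def knownness(evidence, weights=DEFAULT_WEIGHTS):
        """Weighted sum of per-category evidence values (each scaled 0..1).
        Users supplying their own weights would re-rank proteins accordingly."""
        return sum(w * evidence.get(k, 0.0) for k, w in weights.items())

    # A barely studied protein scores close to zero:
    print(knownness({"conservation": 0.3}))  # -> 0.3
    ```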
    To test the utility of the database, the authors chose 260 genes in humans for which there were comparable genes in flies, and which had knownness scores of 1 or less in both species, indicating that almost nothing was known about them. For many of them, a complete knockout of the gene was incompatible with life in the fly; partial knockdowns or tissue-specific knockdowns led to the discovery that a large fraction contributed to essential functions influencing fertility, development, tissue growth, protein quality control, or stress resistance.
    The results suggest that, despite decades of detailed study, there are thousands of fly genes that remain to be understood at even the most basic level, and the same is clearly true for the human genome. “These uncharacterized genes have not deserved their neglect,” Munro said. “Our database provides a powerful, versatile and efficient platform to identify and select important genes of unknown function for analysis, thereby accelerating the closure of the gap in biological knowledge that the unknome represents.”
    Munro adds, “The role of thousands of human proteins remains unclear and yet research tends to focus on those that are already well understood. To help address this we created an Unknome database that ranks proteins based on how little is known about them, and then performed functional screens on a selection of these mystery proteins to demonstrate how ignorance can drive biological discovery.”

  • Texting while walking makes college students more likely to fall

    When it comes to college-aged adults who are glued to their smartphones, experts argue over whether texting while walking increases the risk of an accident. Some studies have shown that texting pedestrians are more likely to walk into oncoming traffic, while other studies suggest that young adults have mastered the art of multitasking and are able to text accurately while navigating obstacles. However, few studies have measured how texters respond to unpredictable hazard conditions. By simulating an environment with random slipping threats, researchers report in the journal Heliyon on August 8th that texting increases the risk of falling in response to walkway hazards.
    “On any day it seems as many as 80% of people, both younger and older, may be head down and texting. I wondered: is this safe?” says senior author Matthew A. Brodie, a neuroscientist and engineer at the University of New South Wales (UNSW) Graduate School of Biomedical Engineering. “This has made me want to investigate the dangers of texting while walking. I wanted to know if these dangers are real or imagined and to measure the risk in a repeatable way.”
    The team recruited 50 UNSW undergraduate students from Brodie’s “Mechanics of the Human Body” course for the experiment. Brodie and co-author Yoshiro Okubo built a tiled hazard walkway at Neuroscience Research Australia’s gait laboratory; halfway along, one tile could be adjusted to slide out of place, so anyone who stepped on it would slip as if on a banana peel. Students wore a safety harness — preventing any slip from becoming a fall — and sensors that collected their motion data. They were then asked to walk along the walkway either without texting or while typing “The quick brown fox jumps over the lazy dog.”
    To better simulate the uncertainty of real life, students were told only that they might or might not slip. This allowed the researchers to study how texting pedestrians might anticipate and try to prevent a potential slip, such as by leaning forward.
    “What surprised me is how differently people responded to the threat of slipping,” says Brodie. “Some slowed down and took a more cautious approach. Others sped up in anticipation of slipping. Such different approaches reinforce how no two people are the same, and to better prevent accidents from texting while walking, multiple strategies may be needed.”
    Although motion data showed that texting participants tried to be more cautious in response to a threat, this caution did not counteract their risk of falling. When participants went from leaning forwards (such as over a phone) to slipping backwards, their motion sensors registered an increase in the range of their “trunk angle.” The researchers used this range to gauge whether the texting condition made students more likely to fall, and they found that the average trunk angle range during a fall was significantly larger when a student was texting.
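    The article does not detail how the trunk angle is extracted from the sensor data. As a rough sketch of the kind of metric involved, the pitch of a trunk-worn accelerometer can be estimated from gravity and its range taken over the slip window, as below; the gravity-tilt estimate, axis conventions, and parameters are assumptions for illustration, not the authors’ processing pipeline.

    ```python
    import numpy as np

    def trunk_angle_range(accel, fs=100.0, window=None):
        """Rough trunk-pitch range (degrees) from a trunk-worn accelerometer.

        accel: (N, 3) array of samples in sensor axes (m/s^2), x roughly
        forward, z roughly vertical. A static gravity-tilt estimate is used
        for simplicity; a real pipeline would fuse gyroscope data to reject
        motion artefacts.
        """
        accel = np.asarray(accel, dtype=float)
        pitch = np.degrees(np.arctan2(accel[:, 0],
                                      np.hypot(accel[:, 1], accel[:, 2])))
        if window is not None:                 # (start, end) seconds around the slip
            i0, i1 = (int(t * fs) for t in window)
            pitch = pitch[i0:i1]
        return pitch.max() - pitch.min()       # larger range ~ less controlled trunk motion
    ```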
    Walking also reduced the texters’ accuracy. The highest texting accuracy occurred when participants were seated, and accuracy decreased even when walking participants were cautioned about a potential slip that never occurred. The lowest accuracy, however, occurred in the trials where participants actually slipped.
    The researchers note that young people may be more likely to take risks even if they are aware that texting and walking could increase their likelihood of falling. For that reason, the authors suggest that educational initiatives such as signs might be less effective in reaching this population. In addition to education, the researchers also suggest that phones could implement locking technology similar to what is used when users are driving. The technology could detect walking activity and activate a screen lock to prevent texting during that time. In future research, the team plans on looking into the effectiveness of this intervention.

  • Quantum material exhibits ‘non-local’ behavior that mimics brain function

    We often believe computers are more efficient than humans. After all, a computer can solve a complex math equation in a moment and can recall the name of that one actor we keep forgetting. However, human brains can process complicated layers of information quickly, accurately, and with almost no energy input: recognizing a face after seeing it only once, or instantly knowing the difference between a mountain and the ocean. These simple human tasks demand enormous processing and energy from computers, and even then the results come with varying degrees of accuracy.
    Creating brain-like computers with minimal energy requirements would revolutionize nearly every aspect of modern life. Funded by the Department of Energy, Quantum Materials for Energy Efficient Neuromorphic Computing (Q-MEEN-C) — a nationwide consortium led by the University of California San Diego — has been at the forefront of this research.
    UC San Diego Assistant Professor of Physics Alex Frañó is co-director of Q-MEEN-C and thinks of the center’s work in phases. In the first phase, he worked closely with President Emeritus of the University of California and Professor of Physics Robert Dynes, as well as Rutgers Professor of Engineering Shriram Ramanathan. Together, their teams succeeded in finding ways to create or mimic the properties of a single brain element (such as a neuron or synapse) in a quantum material.
    Now, in phase two, new research from Q-MEEN-C, published in Nano Letters, shows that electrical stimuli passed between neighboring electrodes can also affect non-neighboring electrodes. Known as non-locality, this discovery is a crucial milestone in the journey toward new types of devices that mimic brain functions known as neuromorphic computing.
    “In the brain it’s understood that these non-local interactions are nominal — they happen frequently and with minimal exertion,” stated Frañó, one of the paper’s co-authors. “It’s a crucial part of how the brain operates, but similar behaviors replicated in synthetic materials are scarce.”
    Like many research projects now bearing fruit, the idea to test whether non-locality in quantum materials was possible came about during the pandemic. Physical lab spaces were shuttered, so the team ran calculations on arrays that contained multiple devices to mimic the multiple neurons and synapses in the brain. In running these tests, they found that non-locality was theoretically possible.
    When labs reopened, they refined this idea further and enlisted UC San Diego Jacobs School of Engineering Associate Professor Duygu Kuzum, whose work in electrical and computer engineering helped them turn a simulation into an actual device.

  • Accurate measurement of permittivity advances radio telescope receivers and next generation telecommunication networks

    Researchers have developed a novel method to measure the permittivity of insulators 100 times more accurately than before. This technology is expected to contribute to the efficient development of sensitive radio receivers for radio telescopes, as well as to the development of devices for the next-generation communication networks known as “Beyond 5G/6G.”
    Permittivity is a value that indicates how the charges inside an insulator respond when an electric field is applied to it. It is an important parameter for understanding the behavior of radio waves as they travel through insulators. In the development of telecommunications equipment, it is necessary to accurately determine the permittivity of materials used for circuit boards and for building columns and walls. For radio astronomy, researchers also need to know the permittivity of components used in radio receivers.
    By devising a calculation method for electromagnetic wave propagation, the research team developed an analytical algorithm that derives the permittivity directly rather than by approximation. The team, consisting of researchers and engineers from the National Astronomical Observatory of Japan (NAOJ) and the National Institute of Information and Communications Technology (NICT), then used the new method to measure lens material for a receiver being developed for the Atacama Large Millimeter/submillimeter Array (ALMA) and confirmed that the results were consistent with other methods, demonstrating its effectiveness in actual device development.
    “The newly developed method is expected to contribute to not only the design of radio telescope components, but also to the development of high-frequency materials and devices for the realization of next-generation communication networks (Beyond 5G/6G) using the millimeter wave/terahertz band,” says Ryo Sakai, an engineer at NAOJ and the lead author of the research paper published recently.
    Reducing the error due to approximation by a factor of 100 speeds up the development process. If the permittivity of individual materials is measured inaccurately, the actual fabricated product may not meet the target performance. If the permittivity is known accurately from the design stage, unnecessary trial and error can be reduced and costs can be cut.
    Several methods are conventionally used to measure permittivity. One that can do so accurately is the “resonance method,” but it requires placing the sample inside a device called a resonator, which means the material must be precisely machined, sometimes to a thickness of less than several hundred micrometers. Another drawback is that the permittivity can be measured only at a few specific frequencies. Since the permittivity of many materials must be measured during the development of a device, requiring high-precision processing for every measurement makes development very slow. The “free-space method” has fewer of these drawbacks, but its measurement results have conventionally been analyzed with an approximation, and the resulting error makes accurate measurement difficult.
    “Compared to other measurement methods, the free-space method has fewer restrictions on the shape of the measurement sample, and it is easy to extend the measurement frequency band,” says Sakai. Because the new analysis is used with the free-space method, permittivity can now be measured accurately under these less restrictive conditions.
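    The article does not reproduce the team’s direct analytical derivation. Purely as an illustration of how a free-space transmission measurement constrains permittivity, the sketch below numerically fits a complex relative permittivity to a measured transmission coefficient using the textbook flat-slab (Airy) model; the model, conventions, and parameter names are assumptions for illustration, not the NAOJ/NICT algorithm.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    C = 299_792_458.0  # speed of light (m/s)

    def slab_transmission(eps_r, freq, thickness):
        """Complex transmission of a flat dielectric slab in free space
        (normal incidence, e^{+j*omega*t} convention, eps_r = eps' - j*eps'')."""
        n = np.sqrt(eps_r + 0j)
        k0 = 2 * np.pi * freq / C
        r = (1 - n) / (1 + n)                       # air -> sample Fresnel coefficient
        p = np.exp(-1j * k0 * n * thickness)        # one-way propagation factor
        return (1 - r**2) * p / (1 - r**2 * p**2)   # multiple internal reflections summed

    def fit_permittivity(t_measured, freq, thickness, guess=(3.0, 0.01)):
        """Fit eps' and eps'' so the slab model reproduces one measured
        complex transmission value (for example, a calibrated S21)."""
        def residual(params):
            t = slab_transmission(params[0] - 1j * params[1], freq, thickness)
            return [t.real - t_measured.real, t.imag - t_measured.imag]
        e1, e2 = least_squares(residual, guess).x
        return e1 - 1j * e2
    ```

    In practice such a measurement would be calibrated against an empty reference path and taken over a band of frequencies, but the underlying idea of inverting a full propagation model for the permittivity, rather than relying on an approximation, reflects the point the article emphasizes.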
    NAOJ and NICT have jointly been conducting research and development for high-precision material property measurement systems at millimeter-wave and terahertz-wave frequencies. The team is aiming for further technological innovation by combining the knowledge gained through the development of astronomical instruments with that gained from developing communication technology.

  • New model reduces bias and enhances trust in AI decision-making and knowledge organization

    University of Waterloo researchers have developed a new explainable artificial intelligence (AI) model to reduce bias and enhance trust and accuracy in machine learning-generated decision-making and knowledge organization.
    Traditional machine learning models often yield biased results, favouring groups with large populations or being influenced by unknown factors, and these biases take extensive effort to identify when instances contain patterns and sub-patterns coming from different classes or primary sources.
    The medical field is one area where there are severe implications for biased machine learning results. Hospital staff and medical professionals rely on datasets containing thousands of medical records and complex computer algorithms to make critical decisions about patient care. Machine learning is used to sort the data, which saves time. However, specific patient groups with rare symptomatic patterns may go undetected, and mislabeled patients and anomalies could impact diagnostic outcomes. This inherent bias and pattern entanglement lead to misdiagnoses and inequitable healthcare outcomes for specific patient groups.
    Thanks to new research led by Dr. Andrew Wong, a distinguished professor emeritus of systems design engineering at Waterloo, an innovative model aims to eliminate these barriers by untangling complex patterns in data and relating them to specific underlying causes unaffected by anomalies and mislabeled instances. It can enhance trust and reliability in explainable artificial intelligence (XAI).
    “This research represents a significant contribution to the field of XAI,” Wong said. “While analyzing a vast amount of protein binding data from X-ray crystallography, my team revealed the statistics of the physicochemical amino acid interacting patterns which were masked and mixed at the data level due to the entanglement of multiple factors present in the binding environment. That was the first time we showed entangled statistics can be disentangled to give a correct picture of the deep knowledge missed at the data level with scientific evidence.”
    This revelation led Wong and his team to develop the new XAI model called Pattern Discovery and Disentanglement (PDD).
    “With PDD, we aim to bridge the gap between AI technology and human understanding to help enable trustworthy decision-making and unlock deeper knowledge from complex data sources,” said Dr. Peiyuan Zhou, the lead researcher on Wong’s team.

  • Smart devices: Putting a premium on peace of mind

    Two out of five homes worldwide have at least one smart device that is vulnerable to cyber-attacks. Soon, that new smart TV or robot vacuum you’ve been considering for your home will come with a label that helps you gauge whether the device is secure and protected from bad actors trying to spy on you or sell your data.
    In July, the White House announced plans to roll out voluntary labeling for internet-connected devices like refrigerators, thermostats and baby monitors that meet certain cybersecurity standards, such as requiring data de-identification and automatic security updates.
    For tech companies that choose to participate, the good news is that there is a market for such a guarantee. A new survey of U.S. consumers shows that they are willing to pay a significant premium to know, before they buy, which gadgets respect their privacy and are safe from security attacks.
    But voluntary product labels may not be enough if the program is going to protect consumers in the long run, the authors of the study caution.
    “Device manufacturers that do not care about security and privacy might decide not to disclose at all,” said Duke University assistant professor of computer science Pardis Emami-Naeini, who conducted the survey with colleagues at Carnegie Mellon University. “That’s not what we want.”
    The average household in the U.S. now has more than 20 devices connected to the internet, all collecting and sharing data. Fitness trackers measure your steps and monitor the quality of your sleep. Smart lights track your phone’s location and turn on as soon as you pull in the driveway. Video doorbells let you see who’s at the door — even when you’re not home.

  • Uncovering the Auger-Meitner Effect’s crucial role in electron energy loss

    Defects often limit the performance of devices such as light-emitting diodes (LEDs). The mechanisms by which defects annihilate charge carriers are well understood in materials that emit light at red or green wavelengths, but an explanation has been lacking for such loss in shorter-wavelength (blue or ultraviolet) emitters.
    Researchers in the Department of Materials at UC Santa Barbara, however, recently uncovered the crucial role of the Auger-Meitner effect, a mechanism that allows an electron to lose energy by kicking another electron up to a higher-energy state.
    “It is well known that defects or impurities — collectively referred to as ‘traps’ — reduce the efficiency of LEDs and other electronic devices,” said Materials Professor Chris Van de Walle, whose group performed the research.
    The new methodology revealed that the trap-assisted Auger-Meitner effect can produce loss rates that are orders of magnitude greater than those caused by other previously considered mechanisms, thus resolving the puzzle of how defects affect the efficiency of blue or UV light emitters. The findings are published in the journal Physical Review Letters.
    Observations of this phenomenon date back to the 1950s, when researchers at Bell Labs and General Electric observed its detrimental impact on transistors. Van de Walle explained that electrons can get trapped at defects and become unable to perform their intended role in the device, be it amplifying a charge in a transistor or emitting light by recombining with a hole (an unoccupied lower-energy state) in an LED. The energy lost in this recombination process was assumed to be released in the form of phonons, i.e., lattice vibrations that heat up the device.
    Van de Walle’s group had previously modeled this phonon-mediated process and found that it accurately accounted for the observed efficiency loss in LEDs that emit light in the red or green regions of the spectrum. For blue or ultraviolet LEDs, however, the model failed: the larger amount of energy carried by the electrons at these shorter wavelengths simply cannot be dissipated in the form of phonons.
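    The article itself contains no equations. As background context only (not a formula from the paper), efficiency loss in LEDs is commonly summarized with the standard ABC recombination model, in which trap-assisted (Shockley-Read-Hall) and Auger-Meitner processes compete with radiative recombination:

    ```latex
    % Standard ABC model for carrier recombination in an LED active region.
    % Background context; not an equation from the paper discussed above.
    \[
      R(n) \;=\; A\,n \;+\; B\,n^{2} \;+\; C\,n^{3},
      \qquad
      \eta_{\mathrm{IQE}}(n) \;=\; \frac{B\,n^{2}}{A\,n + B\,n^{2} + C\,n^{3}},
    \]
    % n: carrier density; A: trap-assisted (Shockley-Read-Hall) coefficient;
    % B: radiative coefficient; C: Auger-Meitner coefficient. A trap-assisted
    % Auger-Meitner channel acts as an additional nonradiative pathway that
    % strengthens the defect-related term, which matters most at the higher
    % recombination energies of blue/UV emitters, where phonons alone cannot
    % carry the energy away.
    ```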
    “This is where the Auger-Meitner process comes in,” explained Fangzhou Zhao, a postdoctoral researcher in Van de Walle’s group and the project’s lead researcher. The researchers found that, instead of releasing energy in the form of phonons, the electron transfers its energy to another electron that gets kicked up to a higher energy state. This process is often referred to as the Auger effect, after Pierre Auger, who reported it in 1923. However, Lise Meitner — whose many accomplishments were never properly recognized during her lifetime — had already described the same phenomenon in 1922.
    Experimental work in the group of UC Santa Barbara materials professor James Speck had previously suggested that trap-assisted Auger-Meitner processes could occur; however, based on measurements alone, it is difficult to rigorously distinguish between different recombination channels. Zhao and his co-researchers developed a first-principles methodology that, combined with cutting-edge computations, conclusively established the crucial role of the Auger-Meitner process. In the case of gallium nitride, the key material used in commercial LEDs, the results showed trap-assisted recombination rates that were more than a billion times greater than if only the phonon-mediated process had been considered. Clearly, not every trap will show such a huge enhancement; but with the new methodology in hand, researchers can now accurately assess which defects or impurities are actually detrimental to efficiency.
    “The computational formalism is completely general and can be applied to any defect or impurity in semiconducting or insulating materials,” said Mark Turiansky, another postdoctoral researcher in Van de Walle’s group who was involved in the project. The researchers hope that these results will increase understanding of recombination mechanisms not only in semiconductor light emitters, but also in any wide-band-gap material in which defects limit efficiency.
    The research was supported by the Department of Energy Office of Basic Energy Sciences and a Department of Defense Vannevar Bush Faculty Fellowship, which was awarded to Van de Walle in 2022. Zhao was the recipient of an Elings Prize Postdoctoral Fellowship. The computations were performed at the National Energy Research Scientific Computing Center (NERSC).