More stories

  • Quantum emitters: Beyond crystal clear to single-photon pure

    Photons — fundamental particles of light — are carrying these words to your eyes from your computer screen or phone. Photons also play a key role in next-generation quantum information technologies, such as quantum computing and communications. A quantum emitter capable of producing a single, pure photon is the crux of such technologies, but many issues have yet to be solved, according to KAIST researchers.
    A research team under Professor Yong-Hoon Cho has developed a technique that can isolate a quantum emitter of the desired quality by reducing the noise around it with what the researchers have dubbed a ‘nanoscale focus pinspot.’ They published their results on June 24 in ACS Nano.
    “The nanoscale focus pinspot is a structurally nondestructive technique under an extremely low dose ion beam and is generally applicable for various platforms to improve their single-photon purity while retaining the integrated photonic structures,” said lead author Yong-Hoon Cho from the Department of Physics at KAIST.
    To produce single photons from solid-state materials, the researchers used wide-bandgap semiconductor quantum dots — fabricated nanostructures with properties suited to practical applications, such as direct current injection into a small chip and room-temperature operation. By placing a quantum dot in a photonic structure that propagates light, and then irradiating it with helium ions, the researchers theorized that they could develop a quantum emitter that suppresses the unwanted noisy background and produces a single, pure photon on demand.
    Professor Cho explained, “Despite its high resolution and versatility, a focused ion beam typically suppresses the optical properties around the bombarded area due to the accelerated ion beam’s high momentum. We focused on the fact that, if the focused ion beam is well controlled, only the background noise can be selectively quenched with high spatial resolution without destroying the structure.”
    In other words, the researchers focused the ion beam on a mere pinprick, effectively cutting off the interactions around the quantum dot and quenching the surrounding emission that would otherwise degrade the purity of the photons emitted from the quantum dot.
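    How pure is “pure”? In standard practice, single-photon purity is quantified by the second-order correlation function g2(0), measured in a Hanbury Brown-Twiss setup: the closer g2(0) is to zero, the purer the single-photon stream. The sketch below is a minimal illustration of that conventional analysis on hypothetical coincidence counts, not the KAIST team’s own code.

        import numpy as np

        # Hypothetical coincidence histogram from a Hanbury Brown-Twiss setup,
        # one bin per delay (in units of the excitation pulse period).
        delays = np.array([-3, -2, -1, 0, 1, 2, 3])
        coincidences = np.array([980, 1010, 995, 60, 1005, 990, 1002])

        center = coincidences[delays == 0][0]      # events at zero delay
        side = coincidences[delays != 0].mean()    # uncorrelated reference level

        g2_zero = center / side
        print(f"g2(0) = {g2_zero:.3f}")            # ~0.06 for these numbers
        # g2(0) < 0.5 indicates dominant single-photon emission; quenching the
        # noisy background around the dot removes accidental coincidences and
        # therefore pushes g2(0) toward zero.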
    “It is the first developed technique that can quench the background noise without changing the optical properties of the quantum emitter and the built-in photonic structure,” Professor Cho asserted.
    Professor Cho compared it to stimulated emission depletion microscopy, a technique that dims the light around the area of focus while leaving the focal point illuminated, increasing the resolution of the desired visual target.
    “By adjusting the focused ion beam-irradiated region, we can select the target emitter with nanoscale resolution by quenching the surrounding emitter,” Professor Cho said. “This nanoscale selective-quenching technique can be applied to various material and structural platforms and further extended for applications such as optical memory and high-resolution micro displays.”
    Korea’s National Research Foundation and the Samsung Science and Technology Foundation supported this work.
    Story Source:
    Materials provided by The Korea Advanced Institute of Science and Technology (KAIST).

  • New molecular device has unprecedented reconfigurability reminiscent of brain plasticity

    In a discovery published in the journal Nature, an international team of researchers has described a novel molecular device with exceptional computing prowess.
    Reminiscent of the plasticity of connections in the human brain, the device can be reconfigured on the fly for different computational tasks by simply changing applied voltages. Furthermore, like nerve cells can store memories, the same device can also retain information for future retrieval and processing.
    “The brain has the remarkable ability to change its wiring around by making and breaking connections between nerve cells. Achieving something comparable in a physical system has been extremely challenging,” said Dr. R. Stanley Williams, professor in the Department of Electrical and Computer Engineering at Texas A&M University. “We have now created a molecular device with dramatic reconfigurability, which is achieved not by changing physical connections like in the brain, but by reprogramming its logic.”
    Dr. T. Venkatesan, director of the Center for Quantum Research and Technology (CQRT) at the University of Oklahoma, scientific affiliate at the National Institute of Standards and Technology, Gaithersburg, and adjunct professor of electrical and computer engineering at the National University of Singapore, added that their molecular device might in the future help design next-generation processing chips with enhanced computational power and speed, while consuming significantly less energy.
    Whether it is a familiar laptop or a sophisticated supercomputer, digital technologies face a common nemesis: the von Neumann bottleneck. This delay in computational processing is a consequence of current computer architectures, in which the memory, containing data and programs, is physically separated from the processor. As a result, computers spend a significant amount of time shuttling information between the two systems, causing the bottleneck. And despite extremely fast processor speeds, these units can idle for extended periods while information is exchanged.
    As an alternative to conventional electronic parts used for designing memory units and processors, devices called memristors offer a way to circumvent the von Neumann bottleneck. Memristors, such as those made of niobium dioxide and vanadium dioxide, transition from being an insulator to a conductor at a set temperature. This property gives these types of memristors the ability to perform computations and store data.
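    As a rough illustration of that insulator-to-conductor threshold behavior (a toy model with made-up parameter values, not a simulation of the molecular device reported in Nature), the sketch below treats a memristor as an element whose resistance collapses once Joule heating drives its internal temperature past a transition point:

        # Toy model of a threshold-switching memristor, illustrative only.
        # The element heats up from I^2*R losses and cools toward ambient;
        # crossing T_C flips it from insulating to conducting.
        T_AMBIENT, T_C = 300.0, 1000.0    # kelvin (illustrative values)
        R_INS, R_MET = 1e5, 1e3           # insulating / metallic resistance, ohms
        C_TH, G_TH = 1e-15, 5e-8          # thermal capacitance (J/K), conductance (W/K)

        def simulate(v_applied, t_end=2e-6, dt=1e-9):
            temperature, history = T_AMBIENT, []
            for _ in range(int(t_end / dt)):
                resistance = R_MET if temperature > T_C else R_INS
                power_in = v_applied**2 / resistance           # Joule heating
                power_out = G_TH * (temperature - T_AMBIENT)   # cooling
                temperature += dt * (power_in - power_out) / C_TH
                history.append(resistance)
            return history

        for v in (0.5, 2.0):
            final = simulate(v)[-1]
            print(f"V = {v} V -> {'metallic' if final == R_MET else 'insulating'}")

    At low voltage the element settles below the transition temperature and stays insulating; at higher voltage it switches and stays conducting, which is the state-holding behavior that lets such devices both compute and remember.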

  • Machine learning tool detects the risk of genetic syndromes in children with diverse backgrounds

    With an average accuracy of 88%, a deep learning technology offers rapid genetic screening that could accelerate the diagnosis of genetic syndromes, recommending further investigation or referral to a specialist in seconds, according to a study published in The Lancet Digital Health. Trained with data from 2,800 pediatric patients from 28 countries, the technology also accounts for facial variability related to sex, age, and racial and ethnic background, according to the study led by Children’s National Hospital researchers.
    “We built a software device to increase access to care and a machine learning technology to identify the disease patterns not immediately obvious to the human eye or intuition, and to help physicians non-specialized in genetics,” said Marius George Linguraru, D.Phil., M.A., M.Sc., principal investigator in the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children’s National Hospital and senior author of the study. “This technological innovation can help children without access to specialized clinics, which are unavailable in most of the world. Ultimately, it can help reduce health inequality in under-resourced societies.”
    This machine learning technology indicates the presence of a genetic syndrome from a facial photograph captured at the point of care, such as in pediatricians’ offices, maternity wards and general practitioner clinics.
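    As an illustration of the kind of inference step such a screening tool performs (a minimal sketch with a hypothetical, untrained architecture, not Children’s National’s actual model), the code below passes a face-sized image tensor plus encoded demographic metadata through a small convolutional network and outputs a single risk score:

        import torch
        import torch.nn as nn

        # Minimal binary screening network (hypothetical architecture; a deployed
        # tool would load weights learned from thousands of patient photos).
        class ScreeningNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                # Metadata head: the study reports accounting for sex, age,
                # and racial/ethnic background alongside the photo.
                self.classifier = nn.Linear(32 + 3, 1)

            def forward(self, image, metadata):
                x = self.features(image).flatten(1)
                x = torch.cat([x, metadata], dim=1)
                return torch.sigmoid(self.classifier(x))   # risk score, 0..1

        model = ScreeningNet().eval()
        photo = torch.rand(1, 3, 224, 224)        # stand-in point-of-care photo
        meta = torch.tensor([[0.2, 1.0, 0.0]])    # hypothetical encoded demographics
        with torch.no_grad():
            print(f"syndrome risk score: {model(photo, meta).item():.2f}")

    A score above a chosen threshold would trigger the “refer for further workup” recommendation described in the study.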
    “Unlike other technologies, the strength of this program is distinguishing ‘normal’ from ‘not-normal,’ which makes it an effective screening tool in the hands of community caregivers,” said Marshall L. Summar, M.D., director of the Rare Disease Institute at Children’s National. “This can substantially accelerate the time to diagnosis by providing a robust indicator for patients that need further workup. This first step is often the greatest barrier to moving towards a diagnosis. Once a patient is in the workup system, then the likelihood of diagnosis (by many means) is significantly increased.”
    Every year, millions of children are born with genetic disorders — including Down syndrome, a condition in which a child is born with an extra copy of chromosome 21, causing developmental delays and disabilities; Williams-Beuren syndrome, a rare multisystem condition caused by a submicroscopic deletion from a region of chromosome 7; and Noonan syndrome, a genetic disorder in which a faulty gene prevents normal development in various parts of the body.
    Most children with genetic syndromes live in regions with limited resources and limited access to genetic services, and genetic screening can come with a hefty price tag. There are also too few specialists to help identify genetic syndromes early in life, when preventive care can save lives, especially in low-income, under-resourced and isolated communities.

  • Bionic arm restores natural behaviors in patients with upper limb amputations

    Cleveland Clinic researchers have engineered a first-of-its-kind bionic arm for patients with upper-limb amputations that allows wearers to think, behave and function like a person without an amputation, according to new findings published in Science Robotics.
    The Cleveland Clinic-led international research team developed the bionic system, which combines three important functions: intuitive motor control, touch, and grip kinesthesia (the intuitive feeling of opening and closing the hand). Collaborators included the University of Alberta and the University of New Brunswick.
    “We modified a standard-of-care prosthetic with this complex bionic system which enables wearers to move their prosthetic arm more intuitively and feel sensations of touch and movement at the same time,” said lead investigator Paul Marasco, Ph.D., associate professor in Cleveland Clinic Lerner Research Institute’s Department of Biomedical Engineering. “These findings are an important step towards providing people with amputation with complete restoration of natural arm function.”
    The system is the first to test all three sensory and motor functions in a neural-machine interface all at once in a prosthetic arm. The neural-machine interface connects with the wearer’s limb nerves. It enables patients to send nerve impulses from their brains to the prosthetic when they want to use or move it, and to receive physical information from the environment and relay it back to their brain through their nerves.
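    In software terms, such a bi-directional interface amounts to two concurrent data paths: decoded nerve signals drive the motors, while sensor readings are encoded back into nerve stimulation. The toy event loop below sketches that architecture only; every function name here is hypothetical, and the real system’s decoding and stimulation are far more sophisticated:

        import random

        def read_nerve_activity():
            """Stand-in for decoding motor intent from limb-nerve signals."""
            return random.uniform(0.0, 1.0)     # desired hand aperture, 0..1

        def read_touch_and_position():
            """Stand-in for the prosthetic's touch and joint-angle sensors."""
            return {"pressure": random.uniform(0.0, 1.0),
                    "aperture": random.uniform(0.0, 1.0)}

        def set_hand_aperture(target):
            print(f"moving hand toward aperture {target:.2f}")

        def stimulate_sensory_nerves(feedback):
            """Stand-in for encoding sensor data back into nerve stimulation."""
            print(f"stimulating nerves with {feedback}")

        # One cycle of the bi-directional loop: intent out, sensation back in.
        for _ in range(3):
            set_hand_aperture(read_nerve_activity())             # motor path
            stimulate_sensory_nerves(read_touch_and_position())  # sensory path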
    The artificial arm’s bi-directional feedback and control enabled study participants to perform tasks with a similar degree of accuracy as non-disabled people.
    “Perhaps what we were most excited to learn was that they made judgments and decisions, and calculated and corrected for their mistakes, like a person without an amputation,” said Dr. Marasco, who leads the Laboratory for Bionic Integration. “With the new bionic limb, people behaved like they had a natural hand. Normally, these brain behaviors are very different between people with and without upper limb prosthetics.” Dr. Marasco also holds appointments in Cleveland Clinic’s Charles Shor Epilepsy Center and the Cleveland VA Medical Center’s Advanced Platform Technology Center.

  • This rainbow-making tech could help autonomous vehicles read signs

    A new study explains the science behind microscale concave interfaces (MCIs) — structures that reflect light to produce beautiful and potentially useful optical phenomena.
    “It is vital to be able to explain how a technology works to someone before you attempt to adopt it. Our new paper defines how light interacts with microscale concave interfaces,” says University at Buffalo engineering researcher Qiaoqiang Gan, noting that future applications of these effects could include aiding autonomous vehicles in recognizing traffic signs.
    The research was published online on Aug. 15 in Applied Materials Today, and is featured in the journal’s September issue.
    Gan, PhD, professor of electrical engineering in the UB School of Engineering and Applied Sciences, led the collaborative study, which was conducted by a team from UB, the University of Shanghai for Science and Technology, Fudan University, Texas Tech University and Hubei University. The first authors are Jacob Rada, a UB PhD student in electrical engineering, and Haifeng Hu, PhD, professor of optical-electrical and computer engineering at the University of Shanghai for Science and Technology.
    Reflections that form concentric rings of light
    The study focuses on a retroreflective material — a thin film that consists of polymer microspheres laid down on the sticky side of a transparent tape. The microspheres are partially embedded in the tape, and the parts that protrude form MCIs.
    White light shining on this film is reflected in a way that causes it to form concentric rainbow rings, the new paper reports. Alternatively, hitting the material with a single-colored laser (red, green or blue, in the case of this study) generates a pattern of bright and dark rings. Reflections from infrared lasers also produced distinctive signals consisting of concentric rings.
    The research describes these effects in detail and reports on experiments that used the thin film in a stop sign. The patterns formed by the material showed up clearly both on a visual camera that detects visible light and on a LIDAR (laser imaging, detection and ranging) camera that detects infrared signals, says Rada, the co-first author from UB.
    “Currently, autopilot systems face many challenges in recognizing traffic signs, especially in real-world conditions,” Gan says. “Smart traffic signs made from our material could provide more signals for future systems that use LIDAR and visible pattern recognition together to identify important traffic signs. This may be helpful to improve the traffic safety for autonomous cars.”
    “We demonstrated a new combined strategy to enhance the LIDAR signal and visible pattern recognition that are currently performed by both visible and infrared cameras,” Rada says. “Our work showed that the MCI is an ideal target for LIDAR cameras, due to the constantly strong signals that are produced.”
    A U.S. patent for the retroreflective material has been issued, as well as a counterpart in China, with Fudan University and UB as the patent-holders. The technology is available for licensing.
    Gan says future plans include testing the film using different wavelengths of light, and different materials for the microspheres, with the goal of enhancing performance for possible applications such as traffic signs designed for future autonomous systems.
    Story Source:
    Materials provided by University at Buffalo. Original written by Charlotte Hsu.

  • Machine learning algorithm revolutionizes how scientists study behavior

    To Eric Yttri, assistant professor of biological sciences and Neuroscience Institute faculty at Carnegie Mellon University, the best way to understand the brain is to watch how organisms interact with the world.
    “Behavior drives everything we do,” Yttri said.
    As a behavioral neuroscientist, Yttri studies what happens in the brain when animals walk, eat, sniff or perform any other action. This kind of research could help answer questions about neurological diseases and disorders such as Parkinson’s disease or stroke. But identifying and predicting animal behavior is extremely difficult.
    Now, a new unsupervised machine learning algorithm developed by Yttri and Alex Hsu, a biological sciences Ph.D. candidate in his lab, makes studying behavior much easier and more accurate. The researchers published a paper on the new tool, B-SOiD (Behavioral segmentation of open field in DeepLabCut), in Nature Communications.
    Previously, the standard method to capture animal behavior was to track very simple actions, like whether a trained mouse pressed a lever or whether an animal was eating food or not. Alternatively, the experimenter could spend hours and hours manually identifying behavior, usually frame by frame on a video, a process prone to human error and bias.
    Hsu realized he could let an unsupervised learning algorithm do the time-consuming work. B-SOiD discovers behaviors by identifying patterns in the position of an animal’s body. The algorithm works with computer vision software and can tell researchers what behavior is happening at every frame in a video.
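    The published pipeline embeds pose-derived features with UMAP and groups them with HDBSCAN. The sketch below captures that embed-then-cluster idea on synthetic stand-in features (the umap-learn and hdbscan packages are assumed installed; the three feature columns are invented for illustration and are not B-SOiD’s exact feature set):

        import numpy as np
        import umap       # pip install umap-learn
        import hdbscan    # pip install hdbscan

        rng = np.random.default_rng(0)

        # Synthetic stand-in for pose features (e.g., limb speeds, joint angles,
        # inter-keypoint distances) computed per video frame from pose estimates.
        walking = rng.normal([5.0, 0.2, 1.0], 0.3, size=(500, 3))
        grooming = rng.normal([0.5, 1.5, 0.3], 0.3, size=(500, 3))
        resting = rng.normal([0.1, 0.1, 0.1], 0.1, size=(500, 3))
        features = np.vstack([walking, grooming, resting])

        # Embed to a low-dimensional space, then cluster without any labels.
        embedding = umap.UMAP(n_neighbors=30, min_dist=0.0).fit_transform(features)
        labels = hdbscan.HDBSCAN(min_cluster_size=50).fit_predict(embedding)

        print("behaviors discovered:", len(set(labels) - {-1}))  # -1 = noise frames

    Each cluster of frames is a candidate behavior, discovered without any human labeling of the video.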

  • Exploring the past: Computational models shed new light on the evolution of prehistoric languages

    A new linguistic study sheds light on the nature of languages spoken before the written period, using computational modeling to reconstruct the grammar of the 6,500- to 7,000-year-old Proto-Indo-European language, the ancestor of most languages of Eurasia, including English and Hindi. The model employed makes it possible to observe evolutionary trends in language over the millennia. The article, “Reconstructing the evolution of Indo-European grammar,” authored by Gerd Carling (Lund University) and Chundra Cathcart (University of Zurich), will be published in the September 2021 issue of the scholarly journal Language.
    In the article, Carling and Cathcart use a database of features from 125 different languages of the Indo-European family, including extinct languages such as Sanskrit and Latin. The features cover most of the differences that make these languages difficult to learn, such as differences in word order (the girl throws the stone in English versus caitheann an cailín an chloch, “throws the girl the stone,” in Irish), gender (the apple in English versus der Apfel in German), the number of cases, the number of verb forms, and whether a language has prepositions or postpositions (to the house in English but ghar ko, “house-to,” in Hindi). With the aid of methods adopted from computational biology, the authors use the known grammars to reconstruct the grammars of unknown prehistoric periods.
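    To see in miniature how known grammars can constrain an unknown ancestor, here is a much-simplified sketch using Fitch parsimony on a hand-drawn four-language tree, rather than the Bayesian phylogenetic model of the actual paper; the tree shape and the single binary feature (prepositions versus postpositions) are illustrative only:

        # Fitch parsimony for one binary grammar feature on a tiny hand-made tree.
        # 1 = prepositions (e.g., English "to the house"), 0 = postpositions
        # (e.g., Hindi "ghar ko"). Tree shape and values are illustrative only.
        tree = ("root", [
            ("node", [("English", 1), ("German", 1)]),
            ("node", [("Hindi", 0), ("Sanskrit", 0)]),
        ])

        def fitch(node):
            """Return the set of most-parsimonious states at this node."""
            _, children = node
            if not isinstance(children, list):    # leaf: observed feature value
                return {children}
            child_sets = [fitch(child) for child in children]
            common = set.intersection(*child_sets)
            return common if common else set.union(*child_sets)

        print("reconstructed ancestral states:", fitch(tree))
        # Prints {0, 1}: with only these four languages the root is ambiguous,
        # which is why the study needs many languages and probabilistic models
        # to decide what the proto-language most likely had.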
    The reconstruction of Indo-European grammar has been the subject of lengthy discussion for over a century. In the 19th century, scholars held the view that the ancient written languages, such as Classical Greek, were most similar to the reconstructed Proto-Indo-European language. The discovery of the archaic but highly dissimilar Hittite language in the early 20th century shifted the focus. Instead, scholars believed that Proto-Indo-European was a language with a structure more similar to non-Indo-European languages of Eurasia such as Basque or languages of the Caucasus region.
    The study confirms that Proto-Indo-European was similar to Classical Greek and Sanskrit, supporting the theory of the 19th century scholars. However, the study also provides new insights into the mechanisms of language change. Some features of the proto-language were very stable and dominant over time. Moreover, features of higher prominence and frequency were less likely to change.
    Though this study focused on a single family (Indo-European), the methods used in the paper can be applied to many other language families to reconstruct earlier stages of languages and to observe how language evolves over time. The model also forms a basis for predicting future changes in language evolution.
    Story Source:
    Materials provided by the Linguistic Society of America.

  • Revealing the hidden structure of quantum entangled states

    Quantum states that are entangled in many dimensions are key to emerging quantum technologies: more dimensions mean higher quantum bandwidth (speed) and better resilience to noise (security), both crucial for fast, secure communication and for error-free quantum computing. Now researchers at the University of the Witwatersrand in Johannesburg, South Africa, together with collaborators from Scotland, have invented a new approach to probing these “high-dimensional” quantum states, reducing the measurement time from decades to minutes.
    The study was published in the scientific journal Nature Communications on Friday, 27 August 2021. Wits PhD student Isaac Nape worked with Distinguished Professor Andrew Forbes, lead investigator on the study and director of the Structured Light Laboratory in the School of Physics at Wits University, as well as postdoctoral fellow Dr Valeria Rodriguez-Fajardo, visiting Taiwanese researcher Dr Hsiao-Chih Huang, and Dr Jonathan Leach and Dr Feng Zhu from Heriot-Watt University in Scotland.
    In their paper, titled “Measuring dimensionality and purity of high-dimensional entangled states,” the team outlined a new approach to quantum measurement and tested it on a 100-dimensional quantum entangled state. With traditional approaches, the measurement time increases unfavourably with dimension, so that unravelling a 100-dimensional state by a full ‘quantum state tomography’ would take decades. Instead, the team showed that the salient information about the quantum system — how many dimensions are entangled, and to what level of purity? — could be deduced in just minutes. The new approach requires only simple ‘projections’ that could easily be done in most laboratories with conventional tools. Using light as an example, the team employed an all-digital approach to perform the measurements.
    The problem, explains Forbes, is that while high-dimensional states are easily made, particularly with entangled particles of light (photons), they are not easy to measure — our toolbox for measuring and controlling them is almost empty. You can think of a high-dimensional quantum state as a dice with many faces. A conventional dice has 6 faces, numbered 1 through 6, giving a six-dimensional alphabet that can be used for computing, or for transferring information in communication. A “high-dimensional dice” has many more faces: 100 dimensions means 100 faces — a rather complicated polyhedron. In our everyday world it would be easy to count the faces to know what sort of resource was available, but not so in the quantum world: there, you can never see the whole dice, so counting the faces is very difficult.
    The way around this is to do a tomography, as in the medical world, building up a picture from many, many slices of the object. But the information in quantum objects can be enormous, so the time needed for this process is prohibitive. A faster approach is a ‘Bell measurement’, a famous test of whether what you have in front of you is entangled, like asking it, “are you quantum or not?” But while this confirms the quantum correlations of the dice, it says little about the number of faces it has.
    “Our work circumvented the problem by a chance discovery: there is a set of measurements that is not a tomography and not a Bell measurement, but that holds important information from both,” says Isaac Nape, the PhD student who executed the research. “In technical parlance, we blended these two measurement approaches, performing multiple projections that look like a tomography but measuring the visibilities of the outcomes as if they were Bell measurements. This revealed the hidden information that could be extracted from the strength of the quantum correlations across many dimensions.” The combination of speed from the Bell-like approach and information from the tomography-like approach meant that key quantum parameters, such as the dimensionality and purity of the quantum state, could be determined quickly and quantitatively, the first approach to do so.
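    As a toy illustration of the visibility measurement at the heart of that blend (not the authors’ estimator; the counts below are simulated), the fringe visibility V = (max - min)/(max + min) of projection outcomes distinguishes strongly correlated two-dimensional subspaces from noisy ones:

        import numpy as np

        # Hypothetical coincidence counts as one analyzer is rotated, for two
        # of the many two-dimensional subspaces probed within a larger state.
        angles = np.linspace(0, np.pi, 16)
        counts_strong = 500 * (1 + 0.96 * np.cos(2 * angles))  # nearly pure subspace
        counts_weak = 500 * (1 + 0.30 * np.cos(2 * angles))    # noisy subspace

        def visibility(counts):
            """Fringe visibility V = (max - min) / (max + min), between 0 and 1."""
            return (counts.max() - counts.min()) / (counts.max() + counts.min())

        for label, counts in (("strong", counts_strong), ("weak", counts_weak)):
            print(f"{label} correlations: V = {visibility(counts):.2f}")
        # Aggregating such visibilities across many subspaces is the kind of
        # information that lets the approach infer how many dimensions are
        # entangled and how pure the state is.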
    “We are not suggesting that our approach should replace other techniques,” says Forbes. “Rather, we see it as a fast probe to reveal what you are dealing with, so that you can make an informed decision on what to do next. A case of horses for courses.” For example, the team sees the approach as a game changer for real-world quantum communication links, where a fast measurement of how noisy a quantum state has become, and what that noise has done to the useful dimensions, is crucial.
    Story Source:
    Materials provided by the University of the Witwatersrand.