More stories

  • Silicon nanopillars for quantum communication

    Around the world, specialists are working on implementing quantum information technologies. One important path involves light: single light packages, also known as light quanta or photons, could one day transmit data that is both encoded and effectively tap-proof. To this end, new photon sources are required that emit single light quanta in a controlled fashion, on demand. Only recently has it been discovered that silicon can host single-photon sources with properties suitable for quantum communication. So far, however, no one has known how to integrate the sources into modern photonic circuits. A team led by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) has now, for the first time, presented a suitable production technology using silicon nanopillars: a chemical etching method followed by ion bombardment.
    “Silicon and single-photon sources in the telecommunication field have long been the missing link in speeding up the development of quantum communication by optical fibers. Now we have created the necessary preconditions for it,” explains Dr. Yonder Berencén of HZDR’s Institute of Ion Beam Physics and Materials Research, who led the current study. Although single-photon sources have been fabricated in materials like diamond, only silicon-based sources generate light particles at the right wavelength to propagate in optical fibers — a considerable advantage for practical purposes.
    The researchers achieved this technical breakthrough by choosing a wet etching technique — what is known as MacEtch (metal-assisted chemical etching) — rather than the conventional dry etching techniques for processing the silicon on a chip. The standard methods used to create silicon photonic structures rely on highly reactive ions, whose radiation damage induces light-emitting defects in the silicon. Because these defects are randomly distributed, they overlay the desired optical signal with noise. Metal-assisted chemical etching, on the other hand, does not generate these defects: instead, the material is etched away chemically under a kind of metallic mask.
    The goal: single-photon sources compatible with the fiber-optic network
    Using the MacEtch method, the researchers initially fabricated the simplest form of a potential light wave-guiding structure: silicon nanopillars on a chip. They then bombarded the finished nanopillars with carbon ions, just as they would a massive silicon block, and thus generated photon sources embedded in the pillars. Employing the new etching technique means the size, spacing, and surface density of the nanopillars can be precisely controlled and adjusted to be compatible with modern photonic circuits. On every square millimeter of chip, thousands of silicon nanopillars conduct and bundle the light from the sources by directing it vertically through the pillars.
    The researchers varied the diameter of the pillars because “we had hoped this would mean we could perform single defect creation on thin pillars and actually generate a single photon source per pillar,” explains Berencén. “It didn’t work perfectly the first time. Even for the thinnest pillars, the dose of our carbon bombardment was still too high. But now it’s just a short step to single photon sources.”
    A step on which the team is already working intensively because the new technique has also unleashed something of a race for future applications. “My dream is to integrate all the elementary building blocks, from a single photon source via photonic elements through to a single photon detector, on one single chip and then connect lots of chips via commercial optical fibers to form a modular quantum network,” says Berencén.
    Story Source:
    Materials provided by Helmholtz-Zentrum Dresden-Rossendorf. Note: Content may be edited for style and length.

  • Can eyes on self-driving cars reduce accidents?

    Robotic eyes on autonomous vehicles could improve pedestrian safety, according to a new study at the University of Tokyo. Participants played out scenarios in virtual reality (VR) and had to decide whether to cross a road in front of a moving vehicle or not. When that vehicle was fitted with robotic eyes, which either looked at the pedestrian (registering their presence) or away (not registering them), the participants were able to make safer or more efficient choices.
    Self-driving vehicles seem to be just around the corner. Whether they’ll be delivering packages, plowing fields or busing kids to school, a lot of research is underway to turn a once futuristic idea into reality.
    While the main concern for many is the practical side of creating vehicles that can autonomously navigate the world, researchers at the University of Tokyo have turned their attention to a more “human” concern of self-driving technology. “There is not enough investigation into the interaction between self-driving cars and the people around them, such as pedestrians. So, we need more investigation and effort into such interaction to bring safety and assurance to society regarding self-driving cars,” said Professor Takeo Igarashi from the Graduate School of Information Science and Technology.
    One key difference with self-driving vehicles is that drivers may become more like passengers, so they may not be paying full attention to the road, or there may be nobody at the wheel at all. This makes it difficult for pedestrians to gauge whether a vehicle has registered their presence or not, as there might be no eye contact or indications from the people inside it.
    So, how could pedestrians be made aware of when an autonomous vehicle has noticed them and is intending to stop? Like a character from the Pixar movie Cars, a self-driving golf cart was fitted with two large, remote-controlled robotic eyes. The researchers called it the “gazing car.” They wanted to test whether putting moving eyes on the cart would affect people’s more risky behavior, in this case, whether people would still cross the road in front of a moving vehicle when in a hurry.
    The team set up four scenarios, two where the cart had eyes and two without. The cart had either noticed the pedestrian and was intending to stop or had not noticed them and was going to keep driving. When the cart had eyes, the eyes would either be looking towards the pedestrian (going to stop) or looking away (not going to stop).

  • New software platform advances understanding of the surface finish of manufactured components

    Scientists from the University of Freiburg, Germany, and the University of Pittsburgh have developed a software platform that facilitates and standardizes the analysis of surfaces. The contact.engineering platform enables users to create a digital twin of a surface and thus to help predict, for example, how quickly it wears out, how well it conducts heat, or how well it adheres to other materials. The team included Michael Röttger from the Department of Microsystems Engineering, Lars Pastewka and Antoine Sanner from the Department of Microsystems Engineering and the University of Freiburg’s Cluster of Excellence livMatS, and Tevis Jacobs from the Department of Mechanical Engineering and Materials Science at the University of Pittsburgh’s Swanson School of Engineering.
    Topography influences material properties
    All engineered materials have surface roughness, even if they appear smooth when seen with the naked eye. Viewed under a microscope, they resemble the surfaces of a mountain landscape. “It is of particular interest, in both industrial applications and scientific research, to have precise knowledge of a surface’s topography, as this influences properties like the adhesion, friction, wettability, and durability of the material,” says Pastewka.
    Saving time and cost in manufacturing
    Manufacturers must carefully control the surface finish of, for example, automobiles or medical devices to ensure proper performance of the final application. At present, the optimal surface finish is found primarily by a trial-and-error process, where a series of components are made with different machining practices and then their properties are tested to determine which is best. This is a slow and costly process. “It would be far more efficient to use scientific models to design the optimal topography for a given application, but this is not possible at present,” says Jacobs. “It would require scientific advancements in linking topography to properties, and technical advancements in measuring and describing a surface.”
    The contact.engineering platform facilitates both of these advances and standardizes the procedure: It automatically integrates the various data from different tools, corrects measurement errors, and uses the data to create a digital twin of the surface. The platform calculates statistical metrics and applies mechanical models to the surfaces, helping to predict behavior. “The users can thus identify which topographical features influence which properties. This allows a systematic optimization of finishing processes,” says Pastewka.
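    The flavor of the statistical metrics such a platform computes can be sketched in a few lines. The code below is a generic roughness calculation in Python with numpy, not contact.engineering's actual API; the synthetic height map, grid spacing, and function names are illustrative assumptions.

```python
import numpy as np

# Toy height map (micrometers) standing in for measured topography data;
# a real measurement would come from profilometry or atomic force microscopy.
rng = np.random.default_rng(0)
heights = rng.normal(loc=0.0, scale=0.05, size=(256, 256))
dx = 0.1  # grid spacing in micrometers (assumed)

def rms_roughness(h):
    """Root-mean-square deviation of heights from the mean plane."""
    return np.sqrt(np.mean((h - h.mean()) ** 2))

def rms_slope(h, dx):
    """RMS magnitude of the local surface gradient (finite differences)."""
    gy, gx = np.gradient(h, dx)
    return np.sqrt(np.mean(gx ** 2 + gy ** 2))

print(f"RMS roughness: {rms_roughness(heights):.4f} um")
print(f"RMS slope:     {rms_slope(heights, dx):.4f}")
```

    Metrics like these are what mechanical models take as input when linking topography to properties such as adhesion or friction.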
    Facilitating open science
    The software platform also serves as a database on which users can share their measurements with colleagues or collaborators. Users can also choose to make their surface measurements available to the public. When they publish the data, a digital object identifier (DOI) is generated that can be referenced in scientific publications.
    “We are continually developing contact.engineering and would like to add even more analysis tools, for example for the chemical composition of surfaces,” says Pastewka. “The goal is to provide users with a digital twin that is as comprehensive as possible. That’s why we also welcome suggestions for improvements to the software platform from users in industry and research.”
    The development of contact.engineering was funded by the European Research Council and the US National Science Foundation, as well as by the University of Freiburg’s Cluster of Excellence Living, Adaptive, and Energy-autonomous Materials Systems (livMatS).
    Story Source:
    Materials provided by University of Pittsburgh. Note: Content may be edited for style and length.

  • Machine learning generates 3D model from 2D pictures

    Researchers from the McKelvey School of Engineering at Washington University in St. Louis have developed a machine learning algorithm that can create a continuous 3D model of cells from a partial set of 2D images that were taken using the same standard microscopy tools found in many labs today.
    Their findings were published Sept. 16 in the journal Nature Machine Intelligence.
    “We train the model on the set of digital images to obtain a continuous representation,” said Ulugbek Kamilov, assistant professor of electrical and systems engineering and of computer science and engineering. “Now, I can show it any way I want. I can zoom in smoothly and there is no pixelation.”
    The key to this work was the use of a neural field network, a particular kind of machine learning system that learns a mapping from spatial coordinates to the corresponding physical quantities. When the training is complete, researchers can point to any coordinate and the model can provide the image value at that location.
    A particular strength of neural field networks is that they do not need to be trained on copious amounts of similar data. Instead, as long as there is a sufficient number of 2D images of the sample, the network can represent it in its entirety, inside and out.
    The image used to train the network is just like any other microscopy image. In essence, a cell is lit from below; the light travels through it and is captured on the other side, creating an image.
    “Because I have some views of the cell, I can use those images to train the model,” Kamilov said. This is done by feeding the model information about a point in the sample where the image captured some of the internal structure of the cell.
    Then the network takes its best shot at recreating that structure. If the output is wrong, the network is tweaked. If it’s correct, that pathway is reinforced. Once the predictions match real-world measurements, the network is ready to fill in parts of the cell that were not captured by the original 2D images.
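    The core idea — learning a continuous mapping from coordinates to image values from sparse samples — can be illustrated in miniature. The sketch below stands in for a neural field with a much simpler stand-in model (random Fourier features plus a least-squares fit) trained on scattered 2D samples of a synthetic "cell"; the target function and every name are illustrative assumptions, not the authors' network.

```python
import numpy as np

rng = np.random.default_rng(1)

def target(xy):
    # Synthetic "cell image": a smooth Gaussian blob of intensity.
    return np.exp(-np.sum((xy - 0.5) ** 2, axis=-1) / 0.08)

# Sparse training samples, like a handful of 2D microscopy measurements.
train_xy = rng.uniform(0, 1, size=(500, 2))
train_val = target(train_xy)

# Random Fourier feature encoding of the spatial coordinates.
B = rng.normal(scale=3.0, size=(2, 128))
def features(xy):
    proj = 2 * np.pi * xy @ B
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

# "Training": fit linear weights on the features by least squares.
w, *_ = np.linalg.lstsq(features(train_xy), train_val, rcond=None)

# Query the continuous model at any coordinate -- no pixel grid involved.
query = np.array([[0.5, 0.5], [0.3, 0.7]])
print(features(query) @ w)  # predicted intensities
print(target(query))        # ground truth for comparison
```

    As in a neural field, once the weights are fitted the representation is continuous: any coordinate can be queried, with no pixelation.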
    The model now contains a full, continuous representation of the cell — there’s no need to save a data-heavy image file because it can always be recreated by the neural field network.
    And, Kamilov said, not only is the model an easy-to-store, true representation of the cell, but also, in many ways, it’s more useful than the real thing.
    “I can put any coordinate in and generate that view,” he said. “Or I can generate entirely new views from different angles.” He can use the model to spin a cell like a top or zoom in for a closer look; use the model to do other numerical tasks; or even feed it into another algorithm.
    Story Source:
    Materials provided by Washington University in St. Louis. Note: Content may be edited for style and length.

  • A smartphone's camera and flash could help people measure blood oxygen levels at home

    First, pause and take a deep breath.
    When we breathe in, our lungs fill with oxygen, which is distributed to our red blood cells for transportation throughout our bodies. Our bodies need a lot of oxygen to function, and healthy people have at least 95% oxygen saturation all the time.
    Conditions like asthma or COVID-19 make it harder for bodies to absorb oxygen from the lungs. This leads to oxygen saturation percentages that drop to 90% or below, an indication that medical attention is needed.
    In a clinic, doctors monitor oxygen saturation using pulse oximeters — those clips you put over your fingertip or ear. But monitoring oxygen saturation at home multiple times a day could help patients keep an eye on COVID symptoms, for example.
    In a proof-of-principle study, University of Washington and University of California San Diego researchers have shown that smartphones are capable of detecting blood oxygen saturation levels down to 70%. This is the lowest value that pulse oximeters should be able to measure, as recommended by the U.S. Food and Drug Administration.
    The technique involves participants placing their finger over the camera and flash of a smartphone, which uses a deep-learning algorithm to decipher the blood oxygen levels. When the team delivered a controlled mixture of nitrogen and oxygen to six subjects to artificially bring their blood oxygen levels down, the smartphone correctly predicted whether the subject had low blood oxygen levels 80% of the time.
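    The study's deep-learning model isn't reproduced here, but the classical baseline it improves on — the "ratio of ratios" used in conventional pulse oximetry — can be sketched on synthetic camera-channel signals. The channel choices, waveforms, and calibration constants below are textbook-style assumptions, not the study's method.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 300)  # 10 s of fingertip video at 30 fps

# Synthetic red and blue channel intensities: a steady (DC) level plus a
# small pulsatile (heartbeat, ~1.2 Hz) component, with sensor noise.
red  = 180 + 2.0 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.1, t.size)
blue = 60  + 1.1 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.1, t.size)

def ratio_of_ratios(ch1, ch2):
    # Pulsatile amplitude (AC) relative to baseline (DC), channel vs channel.
    ac1, dc1 = ch1.std(), ch1.mean()
    ac2, dc2 = ch2.std(), ch2.mean()
    return (ac1 / dc1) / (ac2 / dc2)

R = ratio_of_ratios(red, blue)
spo2 = 110 - 25 * R  # empirical linear calibration (illustrative constants)
print(f"R = {R:.3f}, estimated SpO2 = {spo2:.1f}%")
```

    A deep model can instead learn the mapping from raw channel signals to saturation directly, which is what lets a phone camera and flash substitute for the dedicated red/infrared LEDs in a clip-on oximeter.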

  • This environmentally friendly quantum sensor runs on sunlight

    Quantum tech is going green.

    A new take on highly sensitive magnetic field sensors ditches the power-hungry lasers that previous devices have relied on to make their measurements and replaces them with sunlight. Lasers can gobble 100 watts or so of power — like keeping a bright lightbulb burning. The innovation potentially untethers quantum sensors from that energy need. The result is an environmentally friendly prototype at the forefront of technology, researchers report in an upcoming issue of Physical Review X Energy.


    The big twist is in how the device uses sunlight. It doesn’t use solar cells to convert light into electricity. Instead, the sunlight does the job of the laser’s light, says Jiangfeng Du, a physicist at the University of Science and Technology of China in Hefei.   

    Quantum magnetometers often include a powerful green laser to measure magnetic fields. The laser shines on a diamond that contains atomic defects (SN: 2/26/08). The defects result when nitrogen atoms replace some of the carbon atoms that pure diamonds are made of. The green laser causes the nitrogen defects to fluoresce, emitting red light with an intensity that depends on the strength of the surrounding magnetic fields.

    The new quantum sensor needs green light too. There’s plenty of that in sunlight, as seen in the green wavelengths reflected from tree leaves and grass. To collect enough of it to run their magnetometer, Du and colleagues replaced the laser with a lens 15 centimeters across to gather sunlight. They then filtered the light to remove all colors but green and focused it on a diamond with nitrogen atom defects. The result is red fluorescence that reveals magnetic field strengths just as laser-equipped magnetometers do.
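    The standard way a diamond magnetometer turns that fluorescence into a field reading is optically detected magnetic resonance: the red glow dips at two microwave frequencies whose splitting grows linearly with the field. A minimal model with textbook nitrogen-vacancy constants — the authors' apparatus may differ in detail:

```python
# Textbook NV-center constants (approximate).
D = 2.87e9    # zero-field splitting, Hz
GAMMA = 28e9  # NV gyromagnetic ratio, Hz per tesla

def resonance_splitting(b_field_tesla):
    """Frequencies of the two fluorescence dips for an axial field B."""
    return D - GAMMA * b_field_tesla, D + GAMMA * b_field_tesla

def field_from_splitting(f_minus, f_plus):
    """Invert the measured dip splitting to recover the magnetic field."""
    return (f_plus - f_minus) / (2 * GAMMA)

f1, f2 = resonance_splitting(1e-3)  # a 1 millitesla field
print(f1, f2)                       # dip frequencies, Hz
print(field_from_splitting(f1, f2)) # recovered field, tesla
```

    Whether the green excitation comes from a laser or from filtered sunlight, the readout physics is the same; only the light source changes.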

    Green-colored light shining on the diamond-based sensor in a quantum device can be used to measure magnetic fields. In this prototype, a lens (top) collects sunlight, which is filtered to leave only green wavelengths of light. That green light provides an environmentally friendly alternative to the light created by power-hungry lasers that conventional quantum devices rely on. Credit: Yunbin Zhu/University of Science and Technology of China

    Changing energy from one type to another, as happens when solar cells collect light and produce electricity, is an inherently inefficient process (SN: 7/26/17). The researchers claim that avoiding the conversion of sunlight to electricity to run lasers makes their approach three times more efficient than would be possible with solar cells powering lasers.
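    That factor of three is easy to sanity-check with assumed component efficiencies; the numbers below are illustrative round figures, not values from the paper.

```python
# Assumed efficiencies for the two routes from sunlight to green light
# on the diamond (illustrative numbers, not the paper's measurements).
solar_cell_eff = 0.20   # sunlight -> electricity in a solar cell
laser_eff      = 0.30   # electricity -> green laser light
direct_optics  = 0.18   # sunlight collected and filtered straight to the diamond

# Indirect route: solar cell powering a laser chains both losses together.
indirect = solar_cell_eff * laser_eff
print(direct_optics / indirect)  # advantage of the direct optical path, ~3x
```

    Skipping the light-to-electricity-to-light round trip is the whole point: each conversion stage multiplies in another loss.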

    “I’ve never seen any other reports that connect solar research to quantum technologies,” says Yen-Hung Lin, a physicist at the University of Oxford who was not involved with the study. “It might well ignite a spark of interest in this unexplored direction, and we could see more interdisciplinary research in the field of energy.”

    Quantum devices sensitive to other things, like electric fields or pressure, could also benefit from the sunlight-driven approach, the researchers say. In particular, space-based quantum technology might use the intense sunlight available outside Earth’s atmosphere to provide light tailored for quantum sensors. The remaining light, in wavelengths that the quantum sensors don’t use, could be relegated to solar cells that power electronics to process the quantum signals.

    The sunlight-driven magnetometer is just a first step in the melding of quantum and environmentally sustainable technology. “In the current state, this device is primarily for developmental purposes,” Du says. “We expect that the devices will be used for practical purposes. But there [is] lots of work to be done.”

  • Even smartest AI models don't match human visual processing

    Deep convolutional neural networks (DCNNs) don’t see objects the way humans do — using configural shape perception — and that could be dangerous in real-world AI applications, says Professor James Elder, co-author of a York University study published today.
    Published in the Cell Press journal iScience, “Deep learning models fail to capture the configural nature of human shape perception” is a collaborative study by Elder, who holds the York Research Chair in Human and Computer Vision and is Co-Director of York’s Centre for AI & Society, and Assistant Psychology Professor Nicholas Baker at Loyola College in Chicago, a former VISTA postdoctoral fellow at York.
    The study employed novel visual stimuli called “Frankensteins” to explore how the human brain and DCNNs process holistic, configural object properties.
    “Frankensteins are simply objects that have been taken apart and put back together the wrong way around,” says Elder. “As a result, they have all the right local features, but in the wrong places.”
    The investigators found that while the human visual system is confused by Frankensteins, DCNNs are not — revealing an insensitivity to configural object properties.
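    The flavor of such stimuli can be shown with a toy transform: split an image into patches and reassemble them in the wrong arrangement, so every local feature survives but the global configuration breaks. This sketch is illustrative only; the study used carefully constructed object images, not simple quadrant swaps.

```python
import numpy as np

def frankenstein(img):
    """Swap the four quadrants of an image diagonally: all local patches
    are preserved, but the global arrangement is scrambled."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    tl, tr = img[:h, :w], img[:h, w:2*w]
    bl, br = img[h:2*h, :w], img[h:2*h, w:2*w]
    top = np.hstack([br, bl])
    bottom = np.hstack([tr, tl])
    return np.vstack([top, bottom])

img = np.arange(16).reshape(4, 4)
scrambled = frankenstein(img)
print(scrambled)  # same values as img, rearranged by quadrant
```

    A network that scores such scrambled images the same as the originals is responding to local features alone, which is exactly the insensitivity the study reports.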
    “Our results explain why deep AI models fail under certain conditions and point to the need to consider tasks beyond object recognition in order to understand visual processing in the brain,” Elder says. “These deep models tend to take ‘shortcuts’ when solving complex recognition tasks. While these shortcuts may work in many cases, they can be dangerous in some of the real-world AI applications we are currently working on with our industry and government partners,” Elder points out.
    One such application is traffic video safety systems: “The objects in a busy traffic scene — the vehicles, bicycles and pedestrians — obstruct each other and arrive at the eye of a driver as a jumble of disconnected fragments,” explains Elder. “The brain needs to correctly group those fragments to identify the correct categories and locations of the objects. An AI system for traffic safety monitoring that is only able to perceive the fragments individually will fail at this task, potentially misunderstanding risks to vulnerable road users.”
    According to the researchers, modifications to training and architecture aimed at making networks more brain-like did not lead to configural processing, and none of the networks were able to accurately predict trial-by-trial human object judgements. “We speculate that to match human configural sensitivity, networks must be trained to solve a broader range of object tasks beyond category recognition,” notes Elder.
    Story Source:
    Materials provided by York University. Note: Content may be edited for style and length.

  • The magneto-optic modulator

    Many state-of-the-art technologies work at incredibly low temperatures. Superconducting microprocessors and quantum computers promise to revolutionize computation, but scientists need to keep them just above absolute zero (-459.67° Fahrenheit) to protect their delicate states. Still, ultra-cold components have to interface with room temperature systems, providing both a challenge and an opportunity for engineers.
    An international team of scientists, led by UC Santa Barbara’s Paolo Pintus, has designed a device to help cryogenic computers talk with their fair-weather counterparts. The mechanism uses a magnetic field to convert data from electrical current to pulses of light. The light can then travel via fiber-optic cables, which can transmit more information than regular electrical cables while minimizing the heat that leaks into the cryogenic system. The team’s results appear in the journal Nature Electronics.
    “A device like this could enable seamless integration with cutting-edge technologies based on superconductors, for example,” said Pintus, a project scientist in UC Santa Barbara’s Optoelectronics Research Group. Superconductors can carry electrical current without any energy loss, but typically require temperatures below -450° Fahrenheit to work properly.
    Right now, cryogenic systems use standard metal wires to connect with room-temperature electronics. Unfortunately, these wires transfer heat into the cold circuits and can only transmit a small amount of data at a time.
    Pintus and his collaborators wanted to address both these issues at once. “The solution is using light in an optical fiber to transfer information instead of using electrons in a metal cable,” he said.
    Fiber optics are standard in modern telecommunications. These thin glass cables carry information as pulses of light far faster than metal wires can carry electrical charges. As a result, fiber-optic cables can relay 1,000 times more data than conventional wires over the same time span. And glass is a good insulator, meaning it will transfer far less heat to the cryogenic components than a metal wire.