More stories

  • Can eyes on self-driving cars reduce accidents?

    Robotic eyes on autonomous vehicles could improve pedestrian safety, according to a new study at the University of Tokyo. Participants played out scenarios in virtual reality (VR) and had to decide whether to cross a road in front of a moving vehicle or not. When that vehicle was fitted with robotic eyes, which either looked at the pedestrian (registering their presence) or away (not registering them), the participants were able to make safer or more efficient choices.
    Self-driving vehicles seem to be just around the corner. Whether they’ll be delivering packages, plowing fields or busing kids to school, a lot of research is underway to turn a once futuristic idea into reality.
    While the main concern for many is the practical side of creating vehicles that can autonomously navigate the world, researchers at the University of Tokyo have turned their attention to a more “human” concern of self-driving technology. “There is not enough investigation into the interaction between self-driving cars and the people around them, such as pedestrians. So, we need more investigation and effort into such interaction to bring safety and assurance to society regarding self-driving cars,” said Professor Takeo Igarashi from the Graduate School of Information Science and Technology.
    One key difference with self-driving vehicles is that the driver may become more of a passenger, so they may not be paying full attention to the road, or there may be nobody at the wheel at all. This makes it difficult for pedestrians to gauge whether a vehicle has registered their presence, as there might be no eye contact or other indications from the people inside it.
    So, how could pedestrians be made aware of when an autonomous vehicle has noticed them and is intending to stop? Like a character from the Pixar movie Cars, a self-driving golf cart was fitted with two large, remote-controlled robotic eyes. The researchers called it the “gazing car.” They wanted to test whether putting moving eyes on the cart would affect people’s riskier behavior, in this case, whether people would still cross the road in front of a moving vehicle when in a hurry.
    The team set up four scenarios, two where the cart had eyes and two without. The cart had either noticed the pedestrian and was intending to stop or had not noticed them and was going to keep driving. When the cart had eyes, the eyes would either be looking towards the pedestrian (going to stop) or looking away (not going to stop).

  • New software platform advances understanding of the surface finish of manufactured components

    Scientists from the University of Freiburg, Germany, and the University of Pittsburgh have developed a software platform that facilitates and standardizes the analysis of surfaces. The contact.engineering platform enables users to create a digital twin of a surface, which helps predict, for example, how quickly it wears out, how well it conducts heat, or how well it adheres to other materials. The team included Michael Röttger from the Department of Microsystems Engineering, Lars Pastewka and Antoine Sanner from the Department of Microsystems Engineering and the University of Freiburg’s Cluster of Excellence livMatS, and Tevis Jacobs from the Department of Mechanical Engineering and Materials Science at the University of Pittsburgh’s Swanson School of Engineering.
    Topography influences material properties
    All engineered materials have surface roughness, even if they appear smooth when seen with the naked eye. Viewed under a microscope, they resemble the surfaces of a mountain landscape. “It is of particular interest, in both industrial applications and scientific research, to have precise knowledge of a surface’s topography, as this influences properties like the adhesion, friction, wettability, and durability of the material,” says Pastewka.
    Saving time and cost in manufacturing
    Manufacturers must carefully control the surface finish of, for example, automobiles or medical devices to ensure proper performance of the final application. At present, the optimal surface finish is found primarily by a trial-and-error process, where a series of components are made with different machining practices and then their properties are tested to determine which is best. This is a slow and costly process. “It would be far more efficient to use scientific models to design the optimal topography for a given application, but this is not possible at present,” says Jacobs. “It would require scientific advancements in linking topography to properties, and technical advancements in measuring and describing a surface.”
    The contact.engineering platform facilitates both of these advances and standardizes the procedure: It automatically integrates the various data from different tools, corrects measurement errors, and uses the data to create a digital twin of the surface. The platform calculates statistical metrics and applies mechanical models to the surfaces, helping to predict behavior. “The users can thus identify which topographical features influence which properties. This allows a systematic optimization of finishing processes,” says Pastewka.
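    The article does not list which statistical metrics the platform computes, but the kind of roughness analysis it describes can be sketched in a few lines of Python; the height map below is a made-up stand-in for an imported measurement, and the two quantities shown (RMS roughness and RMS gradient) are common surface metrics rather than contact.engineering's exact output.
    ```python
    import numpy as np

    # Hypothetical 1 µm x 1 µm height map (in meters) standing in for an
    # imported topography measurement.
    rng = np.random.default_rng(0)
    heights = 1e-9 * rng.standard_normal((512, 512))   # roughly 1 nm roughness
    pixel_size = 1e-6 / 512                             # meters per pixel

    # RMS roughness: root-mean-square deviation from the mean plane.
    rms_height = np.sqrt(np.mean((heights - heights.mean()) ** 2))

    # RMS gradient: a slope-based metric often used in contact mechanics.
    gy, gx = np.gradient(heights, pixel_size)
    rms_gradient = np.sqrt(np.mean(gx ** 2 + gy ** 2))

    print(f"RMS roughness: {rms_height:.2e} m")
    print(f"RMS gradient:  {rms_gradient:.2e}")
    ```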
    Facilitating open science
    The software platform also serves as a database on which users can share their measurements with colleagues or collaborators. Users can also choose to make their surface measurements available to the public. When they publish the data, a digital object identifier (DOI) is generated that can be referenced in scientific publications.
    “We are continually developing contact.engineering and would like to add even more analysis tools, for example for the chemical composition of surfaces,” says Pastewka. “The goal is to provide users with a digital twin that is as comprehensive as possible. That’s why we also welcome suggestions for improvements to the software platform from users in industry and research.”
    The development of contact.engineering was funded from the European Research Council and the US National Science Foundation, as well as from the University of Freiburg’s Cluster of Excellence Living, Adaptive, and Energy-autonomous Materials Systems (livMatS).
    Story Source:
    Materials provided by University of Pittsburgh. Note: Content may be edited for style and length.

  • Machine learning generates 3D model from 2D pictures

    Researchers from the McKelvey School of Engineering at Washington University in St. Louis have developed a machine learning algorithm that can create a continuous 3D model of cells from a partial set of 2D images that were taken using the same standard microscopy tools found in many labs today.
    Their findings were published Sept. 16 in the journal Nature Machine Intelligence.
    “We train the model on the set of digital images to obtain a continuous representation,” said Ulugbek Kamilov, assistant professor of electrical and systems engineering and of computer science and engineering. “Now, I can show it any way I want. I can zoom in smoothly and there is no pixelation.”
    The key to this work was the use of a neural field network, a particular kind of machine learning system that learns a mapping from spatial coordinates to the corresponding physical quantities. When the training is complete, researchers can point to any coordinate and the model can provide the image value at that location.
    A particular strength of neural field networks is that they do not need to be trained on copious amounts of similar data. Instead, as long as there is a sufficient number of 2D images of the sample, the network can represent it in its entirety, inside and out.
    The image used to train the network is just like any other microscopy image. In essence, a cell is lit from below; the light travels through it and is captured on the other side, creating an image.
    “Because I have some views of the cell, I can use those images to train the model,” Kamilov said. This is done by feeding the model information about a point in the sample where the image captured some of the internal structure of the cell.
    Then the network takes its best shot at recreating that structure. If the output is wrong, the network is tweaked. If it’s correct, that pathway is reinforced. Once the predictions match real-world measurements, the network is ready to fill in parts of the cell that were not captured by the original 2D images.
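    The paper's architecture and training details are not given here, but the general idea of a neural field (a small network that maps spatial coordinates to image intensity, is fitted against measured values, and can then be queried at any coordinate) can be sketched in PyTorch; the layer sizes and the random stand-in data below are illustrative assumptions, not the authors' setup.
    ```python
    import torch
    import torch.nn as nn

    # A coordinate network: (x, y, z) in, predicted intensity out.
    model = nn.Sequential(
        nn.Linear(3, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, 1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Stand-in training data: coordinates where the 2D images sampled the
    # cell, paired with the intensities measured at those points.
    coords = torch.rand(1024, 3)      # hypothetical sample coordinates
    measured = torch.rand(1024, 1)    # hypothetical measured intensities

    for step in range(200):
        optimizer.zero_grad()
        predicted = model(coords)
        loss = ((predicted - measured) ** 2).mean()  # mismatch drives the "tweak"
        loss.backward()
        optimizer.step()

    # Once fitted, any coordinate can be queried, including points that
    # were never imaged directly.
    new_view = model(torch.tensor([[0.5, 0.5, 0.5]]))
    ```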
    The model now contains a full, continuous representation of the cell — there’s no need to save a data-heavy image file because it can always be recreated by the neural field network.
    And, Kamilov said, not only is the model an easy-to-store, true representation of the cell, but also, in many ways, it’s more useful than the real thing.
    “I can put any coordinate in and generate that view,” he said. “Or I can generate entirely new views from different angles.” He can use the model to spin a cell like a top or zoom in for a closer look; use the model to do other numerical tasks; or even feed it into another algorithm.
    Story Source:
    Materials provided by Washington University in St. Louis. Note: Content may be edited for style and length.

  • A smartphone's camera and flash could help people measure blood oxygen levels at home

    First, pause and take a deep breath.
    When we breathe in, our lungs fill with oxygen, which is distributed to our red blood cells for transportation throughout our bodies. Our bodies need a lot of oxygen to function, and healthy people have at least 95% oxygen saturation all the time.
    Conditions like asthma or COVID-19 make it harder for bodies to absorb oxygen from the lungs. This leads to oxygen saturation percentages that drop to 90% or below, an indication that medical attention is needed.
    In a clinic, doctors monitor oxygen saturation using pulse oximeters — those clips you put over your fingertip or ear. But monitoring oxygen saturation at home multiple times a day could help patients keep an eye on COVID symptoms, for example.
    In a proof-of-principle study, University of Washington and University of California San Diego researchers have shown that smartphones are capable of detecting blood oxygen saturation levels down to 70%. This is the lowest value that pulse oximeters should be able to measure, as recommended by the U.S. Food and Drug Administration.
    The technique involves participants placing their finger over the camera and flash of a smartphone; a deep-learning algorithm then deciphers the blood oxygen levels from what the camera records. When the team delivered a controlled mixture of nitrogen and oxygen to six subjects to artificially bring their blood oxygen levels down, the smartphone correctly predicted whether the subject had low blood oxygen levels 80% of the time.
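    The study's model and preprocessing are not described in detail here, but the raw signal the phone records (the frame-by-frame brightness of the flash-lit fingertip, essentially a photoplethysmography trace) can be sketched as follows; the video file name and the idea of feeding per-channel means to a downstream model are assumptions for illustration.
    ```python
    import cv2
    import numpy as np

    # Hypothetical video of a fingertip held over the camera with the flash on.
    capture = cv2.VideoCapture("fingertip.mp4")

    red_means, blue_means = [], []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # OpenCV frames are BGR; average each channel over the whole frame.
        blue_means.append(frame[:, :, 0].mean())
        red_means.append(frame[:, :, 2].mean())
    capture.release()

    # The resulting time series (one value per frame) is the kind of input a
    # deep-learning model could map to an oxygen-saturation estimate.
    signal = np.stack([red_means, blue_means], axis=1)
    print(signal.shape)
    ```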

  • This environmentally friendly quantum sensor runs on sunlight

    Quantum tech is going green.

    A new take on highly sensitive magnetic field sensors ditches the power-hungry lasers that previous devices have relied on to make their measurements and replaces them with sunlight. Lasers can gobble 100 watts or so of power — like keeping a bright lightbulb burning. The innovation potentially untethers quantum sensors from that energy need. The result is an environmentally friendly prototype on the forefront of technology, researchers report in an upcoming issue of Physical Review X Energy.

    The big twist is in how the device uses sunlight. It doesn’t use solar cells to convert light into electricity. Instead, the sunlight does the job of the laser’s light, says Jiangfeng Du, a physicist at the University of Science and Technology of China in Hefei.   

    Quantum magnetometers often include a powerful green laser to measure magnetic fields. The laser shines on a diamond that contains atomic defects (SN: 2/26/08). The defects result when nitrogen atoms replace some of the carbon atoms that pure diamonds are made of. The green laser causes the nitrogen defects to fluoresce, emitting red light with an intensity that depends on the strength of the surrounding magnetic fields.

    The new quantum sensor needs green light too. There’s plenty of that in sunlight, as seen in the green wavelengths reflected from tree leaves and grass. To collect enough of it to run their magnetometer, Du and colleagues replaced the laser with a lens 15 centimeters across to gather sunlight. They then filtered the light to remove all colors but green and focused it on a diamond with nitrogen atom defects. The result is red fluorescence that reveals magnetic field strengths just as laser-equipped magnetometers do.
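
    The read-out arithmetic is not given in the article, but nitrogen-vacancy magnetometers commonly infer the field from how far the defect's spin resonances split apart, using a gyromagnetic ratio of roughly 28 GHz per tesla; the sketch below uses a made-up measured splitting purely to show that conversion.
    ```python
    # Hypothetical read-out: magnetic field from the measured splitting of the
    # NV center's two spin resonances, detected via the red fluorescence.
    GYROMAGNETIC_RATIO_HZ_PER_T = 28.0e9   # roughly 28 GHz/T for the NV ground state

    measured_splitting_hz = 5.6e6          # assumed 5.6 MHz splitting between resonances

    # The two resonances move apart by 2 * gamma * B (field along the NV axis), so:
    field_tesla = measured_splitting_hz / (2 * GYROMAGNETIC_RATIO_HZ_PER_T)
    print(f"Estimated field: {field_tesla * 1e6:.1f} µT")   # prints 100.0 µT
    ```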

    Green-colored light shining on the diamond-based sensor in a quantum device can be used to measure magnetic fields. In this prototype, a lens (top) collects sunlight, which is filtered to leave only green wavelengths of light. That green light provides an environmentally friendly alternative to the light created by power-hungry lasers that conventional quantum devices rely on. (Image: Yunbin Zhu/University of Science and Technology of China)

    Changing energy from one type to another, as happens when solar cells collect light and produce electricity, is an inherently inefficient process (SN: 7/26/17). The researchers claim that avoiding the conversion of sunlight to electricity to run lasers makes their approach three times more efficient than would be possible with solar cells powering lasers.

    “I’ve never seen any other reports that connect solar research to quantum technologies,” says Yen-Hung Lin, a physicist at the University of Oxford who was not involved with the study. “It might well ignite a spark of interest in this unexplored direction, and we could see more interdisciplinary research in the field of energy.”

    Quantum devices sensitive to other things, like electric fields or pressure, could also benefit from the sunlight-driven approach, the researchers say. In particular, space-based quantum technology might use the intense sunlight available outside Earth’s atmosphere to provide light tailored for quantum sensors. The remaining light, in wavelengths that the quantum sensors don’t use, could be relegated to solar cells that power electronics to process the quantum signals.

    The sunlight-driven magnetometer is just a first step in the melding of quantum and environmentally sustainable technology. “In the current state, this device is primarily for developmental purposes,” Du says. “We expect that the devices will be used for practical purposes. But there [is] lots of work to be done.”

  • Even smartest AI models don’t match human visual processing

    Deep convolutional neural networks (DCNNs) don’t see objects the way humans do — using configural shape perception — and that could be dangerous in real-world AI applications, says Professor James Elder, co-author of a York University study published today.
    Published in the Cell Press journal iScience, “Deep learning models fail to capture the configural nature of human shape perception” is a collaborative study by Elder, who holds the York Research Chair in Human and Computer Vision and is Co-Director of York’s Centre for AI & Society, and Assistant Psychology Professor Nicholas Baker at Loyola College in Chicago, a former VISTA postdoctoral fellow at York.
    The study employed novel visual stimuli called “Frankensteins” to explore how the human brain and DCNNs process holistic, configural object properties.
    “Frankensteins are simply objects that have been taken apart and put back together the wrong way around,” says Elder. “As a result, they have all the right local features, but in the wrong places.”
    The investigators found that while the human visual system is confused by Frankensteins, DCNNs are not — revealing an insensitivity to configural object properties.
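    The “Frankenstein” stimuli themselves are not reproduced here, but the spirit of the manipulation (preserving local features while breaking the global configuration) can be illustrated with a simple patch-shuffling sketch; this is an analogy to the idea, not the authors' stimulus-generation code.
    ```python
    import numpy as np

    def shuffle_patches(image, patch, seed=0):
        """Rearrange square patches so local content survives but the
        overall (configural) arrangement is destroyed."""
        h, w = image.shape[:2]
        h, w = h - h % patch, w - w % patch          # crop to whole patches
        tiles = [image[y:y + patch, x:x + patch]
                 for y in range(0, h, patch)
                 for x in range(0, w, patch)]
        order = np.random.default_rng(seed).permutation(len(tiles))
        cols = w // patch
        rows = [np.hstack([tiles[i] for i in order[r * cols:(r + 1) * cols]])
                for r in range(h // patch)]
        return np.vstack(rows)

    # A human observer struggles to recognize the scrambled version, while a
    # network relying only on local features may classify it much as before.
    example = np.arange(64 * 64, dtype=float).reshape(64, 64)
    print(shuffle_patches(example, patch=16).shape)   # (64, 64)
    ```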
    “Our results explain why deep AI models fail under certain conditions and point to the need to consider tasks beyond object recognition in order to understand visual processing in the brain,” Elder says. “These deep models tend to take ‘shortcuts’ when solving complex recognition tasks. While these shortcuts may work in many cases, they can be dangerous in some of the real-world AI applications we are currently working on with our industry and government partners,” Elder points out.
    One such application is traffic video safety systems: “The objects in a busy traffic scene — the vehicles, bicycles and pedestrians — obstruct each other and arrive at the eye of a driver as a jumble of disconnected fragments,” explains Elder. “The brain needs to correctly group those fragments to identify the correct categories and locations of the objects. An AI system for traffic safety monitoring that is only able to perceive the fragments individually will fail at this task, potentially misunderstanding risks to vulnerable road users.”
    According to the researchers, modifications to training and architecture aimed at making networks more brain-like did not lead to configural processing, and none of the networks were able to accurately predict trial-by-trial human object judgements. “We speculate that to match human configural sensitivity, networks must be trained to solve a broader range of object tasks beyond category recognition,” notes Elder.
    Story Source:
    Materials provided by York University. Note: Content may be edited for style and length.

  • The magneto-optic modulator

    Many state-of-the-art technologies work at incredibly low temperatures. Superconducting microprocessors and quantum computers promise to revolutionize computation, but scientists need to keep them just above absolute zero (-459.67° Fahrenheit) to protect their delicate states. Still, ultra-cold components have to interface with room temperature systems, providing both a challenge and an opportunity for engineers.
    An international team of scientists, led by UC Santa Barbara’s Paolo Pintus, has designed a device to help cryogenic computers talk with their fair-weather counterparts. The mechanism uses a magnetic field to convert data from electrical current to pulses of light. The light can then travel via fiber-optic cables, which can transmit more information than regular electrical cables while minimizing the heat that leaks into the cryogenic system. The team’s results appear in the journal Nature Electronics.
    “A device like this could enable seamless integration with cutting-edge technologies based on superconductors, for example,” said Pintus, a project scientist in UC Santa Barbara’s Optoelectronics Research Group. Superconductors can carry electrical current without any energy loss, but typically require temperatures below -450° Fahrenheit to work properly.
    Right now, cryogenic systems use standard metal wires to connect with room-temperature electronics. Unfortunately, these wires transfer heat into the cold circuits and can only transmit a small amount of data at a time.
    Pintus and his collaborators wanted to address both these issues at once. “The solution is using light in an optical fiber to transfer information instead of using electrons in a metal cable,” he said.
    Fiber optics are standard in modern telecommunications. These thin glass cables carry information as pulses of light far faster than metal wires can carry electrical charges. As a result, fiber-optic cables can relay 1,000 times more data than conventional wires over the same time span. And glass is a good insulator, meaning it will transfer far less heat to the cryogenic components than a metal wire.
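    The heat-leak point can be made concrete with a rough Fourier-law estimate comparing a copper wire and a silica fiber of the same size; the dimensions, temperatures and textbook conductivities below are illustrative assumptions, not figures from the study.
    ```python
    # Rough steady-state conduction estimate: Q = k * A * dT / L
    import math

    length_m = 1.0                   # assumed 1 m run from room temperature to the cold stage
    diameter_m = 125e-6              # assumed 125 µm diameter for both links
    area_m2 = math.pi * (diameter_m / 2) ** 2
    delta_t = 300.0 - 4.0            # room temperature down to about 4 K

    k_copper = 400.0                 # W/(m*K), textbook value near room temperature
    k_silica = 1.4                   # W/(m*K), typical for fused silica

    heat_copper = k_copper * area_m2 * delta_t / length_m
    heat_fiber = k_silica * area_m2 * delta_t / length_m

    print(f"Copper wire heat leak: {heat_copper * 1e3:.2f} mW")
    print(f"Silica fiber heat leak: {heat_fiber * 1e3:.4f} mW")
    ```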

  • Data science reveals universal rules shaping cells' power stations

    Mitochondria are compartments — so-called “organelles” — in our cells that provide the chemical energy supply we need to move, think, and live. Chloroplasts are organelles in plants and algae that capture sunlight and perform photosynthesis. At a first glance, they might look worlds apart. But an international team of researchers, led by the University of Bergen, have used data science and computational biology to show that the same “rules” have shaped how both organelles — and more — have evolved throughout life’s history.
    Both types of organelle were once independent organisms, with their own full genomes. Billions of years ago, those organisms were captured and imprisoned by other cells — the ancestors of modern species. Since then, the organelles have lost most of their genomes, with only a handful of genes remaining in modern-day mitochondrial and chloroplast DNA. These remaining genes are essential for life and important in many devastating diseases, but why they stay in organelle DNA — when so many others have been lost — has been debated for decades.
    For a fresh perspective on this question, the scientists took a data-driven approach. They gathered data on all the organelle DNA that has been sequenced across life. They then used modelling, biochemistry, and structural biology to represent a wide range of different hypotheses about gene retention as a set of numbers associated with each gene. Using tools from data science and statistics, they asked which ideas could best explain the patterns of retained genes in the data they had compiled — testing the results with unseen data to check their power.
    “Some clear patterns emerged from the modelling,” explains Kostas Giannakis, a postdoctoral researcher at Bergen and joint first author on the paper. “Lots of these genes encode subunits of larger cellular machines, which are assembled like a jigsaw. Genes for the pieces in the middle of the jigsaw are most likely to stay in organelle DNA.”
    The team believe that this is because keeping local control over the production of such central subunits helps the organelle quickly respond to change — a version of the so-called “CoRR” model. They also found support for other existing, debated, and new ideas. For example, if a gene product is hydrophobic — and hard to import to the organelle from outside — the data shows that it is often retained there. Genes that are themselves encoded using stronger-binding chemical groups are also more often retained — perhaps because they are more robust in the harsh environment of the organelle.
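    The team's actual models and gene data are not shown here, but the shape of the approach (scoring each gene on hypothesis-linked features and asking which features predict retention, checked against held-out data) can be sketched with a toy logistic regression; all feature values and labels below are invented for illustration.
    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy data: one row per gene, columns standing in for hypothesis-linked
    # features (e.g. hydrophobicity of the product, centrality in its complex,
    # binding strength of its encoding). Label 1 = retained in organelle DNA.
    rng = np.random.default_rng(1)
    features = rng.normal(size=(200, 3))
    labels = (features @ np.array([1.5, 2.0, 0.8]) + rng.normal(size=200) > 0).astype(int)

    model = LogisticRegression().fit(features[:150], labels[:150])

    # Held-out check, mirroring the idea of testing hypotheses on unseen data.
    print("held-out accuracy:", model.score(features[150:], labels[150:]))
    print("feature weights:", model.coef_)
    ```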
    “These different hypotheses have usually been thought of as competing in the past,” says Iain Johnston, a professor at Bergen and leader of the team. “But actually no single mechanism can explain all the observations — it takes a combination. A strength of this unbiased, data-driven approach is that it can show that lots of ideas are partly right, but none exclusively so — perhaps explaining the long debate on these topics.”
    To their surprise, the team also found that their models trained to describe mitochondrial genes also predicted the retention of chloroplast genes, and vice versa. They also found that the same genetic features shaping mitochondrial and chloroplast DNA also appear to play a role in the evolution of other endosymbionts — organisms which have been more recently captured by other hosts, from algae to insects.
    “That was a wow moment,” says Johnston. “We — and others — have had this idea that similar pressures might apply to the evolution of different organelles. But to see this universal, quantitative link — data from one organelle precisely predicting patterns in another, and in more recent endosymbionts — was really striking.”
    The research is part of a broader project funded by the European Research Council, and the team are now working on a parallel question — how different organisms maintain the organelle genes that they do retain. Mutations in mitochondrial DNA can cause devastating inherited diseases; the team are using modelling, statistics, and experiments to explore how these mutations are dealt with in humans, plants, and more.
    Story Source:
    Materials provided by The University of Bergen. Note: Content may be edited for style and length.