More stories

  • Deep learning networks prefer the human voice — just like us

    The digital revolution is built on a foundation of invisible 1s and 0s called bits. As decades pass, and more and more of the world’s information and knowledge morph into streams of 1s and 0s, the notion that computers prefer to “speak” in binary numbers is rarely questioned. According to new research from Columbia Engineering, this could be about to change.
    A new study from Mechanical Engineering Professor Hod Lipson and his PhD student Boyuan Chen shows that artificial intelligence systems can reach higher levels of performance if they are trained with sound files of human language rather than with numerical data labels. In a side-by-side comparison, the researchers found that a neural network whose “training labels” consisted of sound files identified objects in images more accurately than a network trained in the more traditional manner, using simple binary inputs.
    “To understand why this finding is significant,” said Lipson, James and Sally Scapa Professor of Innovation and a member of Columbia’s Data Science Institute, “it’s useful to understand how neural networks are usually programmed, and why using the sound of the human voice is a radical experiment.”
    When used to convey information, the language of binary numbers is compact and precise. In contrast, spoken human language is more tonal and analog, and, when captured in a digital file, non-binary. Because numbers are such an efficient way to digitize data, programmers rarely deviate from a numbers-driven process when they develop a neural network.
    Lipson, a highly regarded roboticist, and Chen, a former concert pianist, had a hunch that neural networks might not be reaching their full potential. They speculated that neural networks might learn faster and better if the systems were “trained” to recognize animals, for instance, by using the power of one of the world’s most highly evolved sounds — the human voice uttering specific words.
    One of the more common exercises AI researchers use to test out the merits of a new machine learning technique is to train a neural network to recognize specific objects and animals in a collection of different photographs. To check their hypothesis, Chen, Lipson and two students, Yu Li and Sunand Raghupathi, set up a controlled experiment. They created two new neural networks with the goal of training both of them to recognize 10 different types of objects in a collection of 50,000 photographs known as “training images.”
    One AI system was trained the traditional way, by uploading a giant data table containing thousands of rows, each corresponding to a single training photo. The first column held the image file containing a photo of a particular object or animal; the next 10 columns corresponded to 10 possible object types: cats, dogs, airplanes, etc. A “1” in one of those columns indicated the correct answer, and 0s in the other nine indicated the incorrect answers.
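    To make the comparison concrete, here is a minimal sketch of the two labeling schemes, assuming the dataset is CIFAR-10 (its 10 classes and 50,000 training images match the study’s description). The file paths and the use of the soundfile library are illustrative assumptions, not details from the paper.

```python
import numpy as np
import soundfile as sf  # any WAV reader would do; an assumption, not the paper's tooling

# The 10 CIFAR-10 categories (assumed to be the dataset described).
CLASSES = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]

def one_hot_label(class_name: str) -> np.ndarray:
    """Traditional label: a 1 in the correct column, 0s in the other nine."""
    label = np.zeros(len(CLASSES), dtype=np.float32)
    label[CLASSES.index(class_name)] = 1.0
    return label

def voice_label(class_name: str) -> np.ndarray:
    """Voice-based label: the raw waveform of a person speaking the word.
    'labels/cat.wav' etc. are hypothetical pre-recorded clips."""
    waveform, _sample_rate = sf.read(f"labels/{class_name}.wav")
    return waveform.astype(np.float32)

print(one_hot_label("cat"))  # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
```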

  • Understanding fruit fly behavior may be next step toward autonomous vehicles

    With over 70% of respondents to an annual AAA survey on autonomous driving reporting they would fear being in a fully self-driving car, automakers like Tesla may be back to the drawing board before rolling out fully autonomous systems. But new research from Northwestern University suggests we may be better off putting fruit flies behind the wheel instead of robots.
    Drosophila have been subjects of science for as long as humans have been running experiments in labs. But given their size, it’s easy to wonder what can be learned by observing them. Research published today in the journal Nature Communications demonstrates that fruit flies use decision-making, learning and memory to perform simple functions like escaping heat. And researchers are using this understanding to challenge the way we think about self-driving cars.
    “The discovery that flexible decision-making, learning and memory are used by flies during such a simple navigational task is both novel and surprising,” said Marco Gallio, the corresponding author on the study. “It may make us rethink what we need to do to program safe and flexible self-driving vehicles.”
    According to Gallio, an associate professor of neurobiology in the Weinberg College of Arts and Sciences, the questions behind this study are similar to those vexing engineers building cars that move on their own. How does a fruit fly (or a car) cope with novelty? How can we build a car that is flexibly able to adapt to new conditions?
    This discovery reveals brain functions in the household pest that are typically associated with more complex brains like those of mice and humans.
    “Animal behavior, especially that of insects, is often considered largely fixed and hard-wired — like machines,” Gallio said. “Most people have a hard time imagining that animals as different from us as a fruit fly may possess complex brain functions, such as the ability to learn, remember or make decisions.”
    To study how fruit flies escape heat, the Gallio lab built a tiny plastic chamber with four floor tiles whose temperatures could be independently controlled, and confined flies inside. They then used high-resolution video recordings to map how a fly reacted when it encountered a boundary between a warm tile and a cool tile. They found flies were remarkably good at treating heat boundaries as invisible barriers to avoid pain or harm.
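    The paper’s analysis pipeline isn’t reproduced here, but a minimal sketch of the kind of measurement involved, assuming tracked (x, y) fly positions and a known map of hot tiles, might look like this. The tile size, edge threshold, and lookahead window are all illustrative assumptions.

```python
import numpy as np

TILE = 10.0  # mm; assumed tile width, with the chamber a 2x2 grid of tiles

def on_hot_tile(pos: np.ndarray, hot_tiles: set) -> bool:
    """Map an (x, y) position to its (row, col) tile and look up its state."""
    x, y = pos
    return (int(y // TILE), int(x // TILE)) in hot_tiles

def avoidance_fraction(track: np.ndarray, hot_tiles: set, lookahead: int = 15) -> float:
    """Fraction of boundary encounters that end in avoidance.

    track: (N, 2) array of per-frame fly positions in mm. An 'encounter'
    is a frame where the fly sits on a cool tile within 1 mm of a tile
    edge; it counts as 'avoided' if the fly is still on a cool tile
    `lookahead` frames later.
    """
    encounters = avoided = 0
    for i in range(len(track) - lookahead):
        offsets = track[i] % TILE  # distances from the tile's lower edges
        near_edge = offsets.min() < 1.0 or offsets.max() > TILE - 1.0
        if near_edge and not on_hot_tile(track[i], hot_tiles):
            encounters += 1
            if not on_hot_tile(track[i + lookahead], hot_tiles):
                avoided += 1
    return avoided / max(encounters, 1)
```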

  • A new, positive approach could be the key to next-generation, transparent electronics

    A new study, out this week, could pave the way to revolutionary, transparent electronics.
    Such see-through devices could potentially be integrated in glass, in flexible displays and in smart contact lenses, bringing to life futuristic devices that seem like the product of science fiction.
    For several decades, researchers have sought a new class of electronics based on semiconducting oxides, whose optical transparency could enable these fully transparent electronics.
    Oxide-based devices could also find use in power electronics and communication technology, reducing the carbon footprint of our utility networks.
    An RMIT-led team has now introduced ultrathin beta-tellurite to the two-dimensional (2D) semiconducting material family, providing an answer to this decades-long search for a high-mobility p-type oxide.
    “This new, high-mobility p-type oxide fills a crucial gap in the materials spectrum to enable fast, transparent circuits,” says team leader Dr Torben Daeneke, who led the collaboration across three FLEET nodes.

  • Qubits composed of holes could be the trick to building faster, larger quantum computers

    A new study indicates that holes may be the solution to the operational speed/coherence trade-off, potentially allowing qubits to be scaled up into a mini quantum computer.
    Quantum computers are predicted to be much more powerful and functional than today’s ‘classical’ computers.
    One way to make a quantum bit is to use the ‘spin’ of an electron, which can point either up or down. To make quantum computers as fast and power-efficient as possible we would like to operate them using only electric fields, which are applied using ordinary electrodes.
    Although spin does not ordinarily ‘talk’ to electric fields, in some materials spins can interact with electric fields indirectly, and these are some of the hottest materials currently studied in quantum computing.
    The interaction that enables spins to talk to electric fields is called the spin-orbit interaction, and it can be traced all the way back to Einstein’s theory of relativity.
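    For reference, the textbook picture can be written down compactly; the equations below are the generic spin-qubit state and a generic Rashba-type spin-orbit term, not the specific hole-spin Hamiltonian analyzed in the study.

```latex
% A spin qubit is a superposition of the up and down spin states:
\[
  |\psi\rangle = \alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle ,
  \qquad |\alpha|^2 + |\beta|^2 = 1 .
\]
% A generic Rashba-type spin-orbit term couples spin (Pauli matrices
% \sigma_x, \sigma_y) to momentum k, so an electric field that shifts k
% can also rotate the spin:
\[
  H_{\mathrm{SO}} = \alpha_R \,(\sigma_x k_y - \sigma_y k_x) .
\]
```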
    The fear of quantum-computing researchers has been that when this interaction is strong, any gain in operation speed would be offset by a loss in coherence (essentially, how long we can preserve quantum information).

  • Kirigami-style fabrication may enable new 3D nanostructures

    A new technique that mimics the ancient Japanese art of kirigami may offer an easier way to fabricate complex 3D nanostructures for use in electronics, manufacturing and health care.
    Kirigami builds on the Japanese art form of origami, which involves folding paper to create 3D structural designs, by strategically incorporating cuts in the paper prior to folding. The method enables artists to create sophisticated three-dimensional structures more easily.
    “We used kirigami at the nanoscale to create complex 3D nanostructures,” said Daniel Lopez, Penn State Liang Professor of Electrical Engineering and Computer Science, and leader of the team that published this research in Advanced Materials. “These 3D structures are difficult to fabricate because current nanofabrication processes are based on the technology used to fabricate microelectronics, which use only planar, or flat, films. Without kirigami techniques, complex three-dimensional structures would be much more complicated to fabricate or simply impossible to make.”
    Lopez said that if force is applied to a uniform structural film, nothing really happens other than stretching it a bit, like what happens when a piece of paper is stretched. But when cuts are introduced to the film, and forces are applied in a certain direction, a structure pops up, similar to when a kirigami artist applies force to a cut paper. The geometry of the planar pattern of cuts determines the shape of the 3D architecture.
    “We demonstrated that it is possible to use conventional planar fabrication methods to create different 3D nanostructures from the same 2D cut geometry,” Lopez said. “By introducing minimum changes to the dimensions of the cuts in the film, we can drastically change the three-dimensional shape of the pop-up architectures. We demonstrated nanoscale devices that can tilt or change their curvature just by changing the width of the cuts a few nanometers.”
    This new field of kirigami-style nanoengineering enables the development of machines and structures that can change from one shape to another, or morph, in response to changes in the environment. One example is an electronic component that changes shape in elevated temperatures to enable more air flow within a device to keep it from overheating.
    “This kirigami technique will allow the development of adaptive flexible electronics that can be incorporated onto surfaces with complicated topography, such as a sensor resting on the human brain,” Lopez said. “We could use these concepts to design sensors and actuators that can change shape and configuration to perform a task more efficiently. Imagine the potential of structures that can change shape with minuscule changes in temperature, illumination or chemical conditions.”
    Lopez will focus his future research on applying these kirigami techniques to materials that are one atom thick, and thin actuators made of piezoelectrics. These 2D materials open new possibilities for applications of kirigami-induced structures. Lopez said his goal is to work with other researchers at Penn State’s Materials Research Institute (MRI) to develop a new generation of miniature machines that are atomically flat and are more responsive to changes in the environment.
    “MRI is a world leader in the synthesis and characterization of 2D materials, which are the ultimate thin-films that can be used for kirigami engineering,” Lopez said. “Moreover, by incorporating ultra-thin piezo and ferroelectric materials onto kirigami structures, we will develop agile and shape-morphing structures. These shape-morphing micro-machines would be very useful for applications in harsh environments and for drug delivery and health monitoring. I am working at making Penn State and MRI the place where we develop these super-small machines for a specific variety of applications.”
    Story Source:
    Materials provided by Penn State. Original written by Jamie Oberdick. Note: Content may be edited for style and length.

  • New method uses device cameras to measure pulse, breathing rate and could help telehealth

    Telehealth has become a critical way for doctors to still provide health care while minimizing in-person contact during COVID-19. But with phone or Zoom appointments, it’s harder for doctors to get important vital signs from a patient, such as their pulse or respiration rate, in real time.
    A University of Washington-led team has developed a method that uses the camera on a person’s smartphone or computer to take their pulse and respiration signal from a real-time video of their face. The researchers presented this state-of-the-art system in December at the Neural Information Processing Systems conference.
    Now the team is proposing a better system to measure these physiological signals. This system is less likely to be tripped up by different cameras, lighting conditions or facial features, such as skin color. The researchers will present these findings April 8 at the ACM Conference on Health, Inference, and Learning.
    “Machine learning is pretty good at classifying images. If you give it a series of photos of cats and then tell it to find cats in other images, it can do it. But for machine learning to be helpful in remote health sensing, we need a system that can identify the region of interest in a video that holds the strongest source of physiological information — pulse, for example — and then measure that over time,” said lead author Xin Liu, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering.
    “Every person is different,” Liu said. “So this system needs to be able to quickly adapt to each person’s unique physiological signature, and separate this from other variations, such as what they look like and what environment they are in.”
    The team’s system is privacy preserving — it runs on the device instead of in the cloud — and uses machine learning to capture subtle changes in how light reflects off a person’s face, which are correlated with changing blood flow. It then converts these changes into both pulse and respiration rate.
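    The UW system itself is a neural network and is not reproduced here, but the underlying signal it exploits can be illustrated with the classic signal-processing baseline for camera-based pulse measurement: average the green channel over a face region in each frame, band-pass filter around plausible heart rates, and read off the dominant frequency. The frame rate, filter band, and the `frames` array below are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FPS = 30.0  # assumed camera frame rate

def pulse_from_video(frames: np.ndarray) -> float:
    """Estimate pulse (beats per minute) from a stack of face crops.
    `frames` has shape (N, H, W, 3), RGB; the green channel reflects
    blood-volume changes most strongly, so we track its spatial mean."""
    green = frames[..., 1].mean(axis=(1, 2))  # (N,) time series
    green = green - green.mean()              # remove the DC offset
    # Keep only 0.7-4 Hz (42-240 bpm), the physiological pulse band.
    b, a = butter(3, [0.7, 4.0], btype="band", fs=FPS)
    sig = filtfilt(b, a, green)
    # The dominant frequency of the filtered signal is the pulse.
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / FPS)
    return 60.0 * freqs[np.argmax(spectrum)]
```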

  • A robot that senses hidden objects

    In recent years, robots have gained artificial vision, touch, and even smell. “Researchers have been giving robots human-like perception,” says MIT Associate Professor Fadel Adib. In a new paper, Adib’s team is pushing the technology a step further. “We’re trying to give robots superhuman perception,” he says.
    The researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF-Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view. The advance could one day streamline e-commerce fulfillment in warehouses or help a machine pluck a screwdriver from a jumbled toolkit.
    The research will be presented in May at the IEEE International Conference on Robotics and Automation. The paper’s lead author is Tara Boroushaki, a research assistant in the Signal Kinetics Group at the MIT Media Lab. Her MIT co-authors include Adib, who is the director of the Signal Kinetics Group; and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. Other co-authors include Junshan Leng, a research engineer at Harvard University, and Ian Clester, a PhD student at Georgia Tech.
    As e-commerce continues to grow, warehouse work is still usually the domain of humans, not robots, despite sometimes-dangerous working conditions. That’s in part because robots struggle to locate and grasp objects in such a crowded environment. “Perception and picking are two roadblocks in the industry today,” says Rodriguez. Using optical vision alone, robots can’t perceive the presence of an item packed away in a box or hidden behind another object on the shelf — visible light waves, of course, don’t pass through walls.
    But radio waves can.
    For decades, radio frequency (RF) identification has been used to track everything from library books to pets. RF identification systems have two main components: a reader and a tag. The tag is a tiny computer chip that gets attached to — or, in the case of pets, implanted in — the item to be tracked. The reader then emits an RF signal, which gets modulated by the tag and reflected back to the reader.
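    RF-Grasp’s actual localization pipeline isn’t described here, but as a generic illustration of how a reflected RF signal can carry location information, here is the standard log-distance path-loss model for estimating a tag’s range from received signal strength. The reference power, distance, and exponent are illustrative assumptions.

```python
import math  # stdlib only; no RF-Grasp code is involved

# Log-distance path-loss model (a standard textbook model, not
# RF-Grasp's method): received power falls off with distance as
#   RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0)
RSSI_D0 = -40.0  # dBm measured at the reference distance (assumed)
D0 = 1.0         # reference distance in meters (assumed)
N = 2.0          # path-loss exponent; ~2 in free space (assumed)

def distance_from_rssi(rssi_dbm: float) -> float:
    """Invert the path-loss model to estimate tag range in meters."""
    return D0 * 10 ** ((RSSI_D0 - rssi_dbm) / (10 * N))

print(round(distance_from_rssi(-52.0), 2))  # ~3.98 m under these assumptions
```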

  • Dynamic model of SARS-CoV-2 spike protein reveals potential new vaccine targets

    A new, detailed model of the surface of the SARS-CoV-2 spike protein reveals previously unknown vulnerabilities that could inform development of vaccines. Mateusz Sikora of the Max Planck Institute of Biophysics in Frankfurt, Germany, and colleagues present these findings in the open-access journal PLOS Computational Biology.
    SARS-CoV-2 is the virus responsible for the COVID-19 pandemic. A key feature of SARS-CoV-2 is its spike protein, which extends from its surface and enables it to target and infect human cells. Extensive research has resulted in detailed static models of the spike protein, but these models do not capture the flexibility of the spike protein itself or the movements of the protective glycans — chains of sugar molecules — that coat it.
    To support vaccine development, Sikora and colleagues aimed to identify novel potential target sites on the surface of the spike protein. To do so, they developed molecular dynamics simulations that capture the complete structure of the spike protein and its motions in a realistic environment.
    These simulations show that glycans on the spike protein act as a dynamic shield that helps the virus evade the human immune system. Similar to car windshield wipers, the glycans cover nearly the entire spike surface by flopping back and forth, even though their coverage is minimal at any given instant.
    By combining the dynamic spike protein simulations with bioinformatic analysis, the researchers identified spots on the surface of the spike proteins that are least protected by the glycan shields. Some of the detected sites have been identified in previous research, but some are novel. The vulnerability of many of these novel sites was confirmed by other research groups in subsequent lab experiments.
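    The authors’ exact analysis isn’t reproduced here, but the core idea of scoring glycan shielding from a dynamics trajectory can be sketched simply: for each spike residue, measure the fraction of simulation frames in which some glycan atom sits within a cutoff distance, then flag the residues with the lowest coverage as candidate target sites. The coordinate arrays, cutoff, and exposure threshold below are illustrative assumptions.

```python
import numpy as np

CUTOFF = 6.0       # Angstroms; assumed "shielded" distance threshold
EXPOSED_MAX = 0.3  # residues covered in <30% of frames count as exposed

def glycan_coverage(residue_xyz: np.ndarray, glycan_xyz: np.ndarray) -> np.ndarray:
    """Per-residue fraction of frames with a glycan atom within CUTOFF.
    residue_xyz: (frames, n_residues, 3) trajectory coordinates.
    glycan_xyz:  (frames, n_glycan_atoms, 3)."""
    n_frames, n_res, _ = residue_xyz.shape
    covered = np.zeros((n_frames, n_res), dtype=bool)
    for f in range(n_frames):
        # Distance from every residue to every glycan atom in this frame.
        d = np.linalg.norm(
            residue_xyz[f, :, None, :] - glycan_xyz[f, None, :, :], axis=-1
        )
        covered[f] = (d < CUTOFF).any(axis=1)
    return covered.mean(axis=0)  # time-averaged shielding per residue

def exposed_residues(coverage: np.ndarray) -> np.ndarray:
    """Indices of residues the glycan shield rarely protects."""
    return np.where(coverage < EXPOSED_MAX)[0]
```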
    “We are in a phase of the pandemic driven by the emergence of new variants of SARS-CoV-2, with mutations concentrated in particular in the spike protein,” Sikora says. “Our approach can support the design of vaccines and therapeutic antibodies, especially when established methods struggle.”
    The method developed for this study could also be applied to identify potential vulnerabilities of other viral proteins.
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.