More stories

  • Screening for skin disease on your laptop

    The founding chair of the Biomedical Engineering Department at the University of Houston is reporting a new deep neural network architecture that provides early diagnosis of systemic sclerosis (SSc), a rare autoimmune disease marked by hardening and fibrosis of the skin and internal organs. The proposed network, implemented on a standard laptop computer (2.5 GHz Intel Core i7), can immediately differentiate between images of healthy skin and skin with systemic sclerosis.
    “Our preliminary study, intended to show the efficacy of the proposed network architecture, holds promise in the characterization of SSc,” reports Metin Akay, John S. Dunn Endowed Chair Professor of biomedical engineering. The work is published in the IEEE Open Journal of Engineering in Medicine and Biology.
    “We believe that the proposed network architecture could easily be implemented in a clinical setting, providing a simple, inexpensive and accurate screening tool for SSc.”
    For patients with SSc, early diagnosis is critical, but often elusive. Several studies have shown that organ involvement can occur far earlier than expected in the early phase of the disease, but early diagnosis and determining the extent of disease progression pose a significant challenge for physicians, even at expert centers, resulting in delays in therapy and management.
    In artificial intelligence, deep learning organizes algorithms into layers (an artificial neural network) that can make their own intelligent decisions. To speed up the learning process, the new network was trained using the parameters of MobileNetV2, a lightweight network designed for mobile vision applications, pre-trained on the ImageNet dataset of 1.4 million images.
    “By scanning the images, the network learns from the existing images and decides which new image is normal or in an early or late stage of disease,” said Akay.
    Among several deep learning networks, Convolutional Neural Networks (CNNs) are most commonly used in engineering, medicine and biology, but their success in biomedical applications has been limited due to the size of the available training sets and networks.
    To overcome these difficulties, Metin Akay and his partner Yasemin Akay combined UNet, a modified CNN architecture, with additional layers, and developed a mobile training module. The results showed that the proposed deep learning architecture outperforms conventional CNNs for the classification of SSc images.
    “After fine tuning, our results showed the proposed network reached 100% accuracy on the training image set, 96.8% accuracy on the validation image set, and 95.2% on the testing image set,” said Yasemin Akay, UH instructional associate professor of biomedical engineering.
    The training time was less than five hours.
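    The transfer-learning setup described above can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the authors' code: the dataset folder, the class labels and the training details are hypothetical.

    ```python
    # Minimal transfer-learning sketch (not the authors' implementation):
    # start from MobileNetV2 pre-trained on ImageNet and fine-tune a new
    # classification head on skin images (assumed classes: normal, early, late SSc).
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),                   # MobileNetV2's expected input size
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406],      # ImageNet channel statistics
                             [0.229, 0.224, 0.225]),
    ])

    # Hypothetical folder layout: skin_images/train/<class_name>/*.jpg
    train_set = datasets.ImageFolder("skin_images/train", transform=preprocess)
    train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    model = models.mobilenet_v2(weights="IMAGENET1K_V1") # ImageNet pre-trained weights
    for p in model.features.parameters():
        p.requires_grad = False                          # freeze the pre-trained features
    model.classifier[1] = nn.Linear(model.last_channel, len(train_set.classes))

    optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(10):                              # a few epochs; laptop-CPU friendly
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
    ```

    Freezing the pre-trained feature extractor and training only the new head is what keeps the training time short enough for a standard laptop.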
    In addition to Metin Akay and Yasemin Akay, the paper was co-authored by Yong Du, Cheryl Shersen, Ting Chen and Chandra Mohan, all of the University of Houston; and Minghua Wu and Shervin Assassi of the University of Texas Health Science Center (UT Health).
    Story Source:
    Materials provided by the University of Houston. Original written by Laurie Fickman.

  • Deep learning networks prefer the human voice — just like us

    The digital revolution is built on a foundation of invisible 1s and 0s called bits. As decades pass, and more and more of the world’s information and knowledge morph into streams of 1s and 0s, the notion that computers prefer to “speak” in binary numbers is rarely questioned. According to new research from Columbia Engineering, this could be about to change.
    A new study from Mechanical Engineering Professor Hod Lipson and his PhD student Boyuan Chen shows that artificial intelligence systems can actually reach higher levels of performance if they are trained with sound files of human language rather than with numerical data labels. The researchers discovered that in a side-by-side comparison, a neural network whose “training labels” consisted of sound files reached higher levels of performance in identifying objects in images, compared to another network that had been programmed in a more traditional manner, using simple binary inputs.
    “To understand why this finding is significant,” said Lipson, James and Sally Scapa Professor of Innovation and a member of Columbia’s Data Science Institute, “it’s useful to understand how neural networks are usually programmed, and why using the sound of the human voice is a radical experiment.”
    When used to convey information, the language of binary numbers is compact and precise. In contrast, spoken human language is more tonal and analog, and, when captured in a digital file, non-binary. Because numbers are such an efficient way to digitize data, programmers rarely deviate from a numbers-driven process when they develop a neural network.
    Lipson, a highly regarded roboticist, and Chen, a former concert pianist, had a hunch that neural networks might not be reaching their full potential. They speculated that neural networks might learn faster and better if the systems were “trained” to recognize animals, for instance, by using the power of one of the world’s most highly evolved sounds — the human voice uttering specific words.
    One of the more common exercises AI researchers use to test out the merits of a new machine learning technique is to train a neural network to recognize specific objects and animals in a collection of different photographs. To check their hypothesis, Chen, Lipson and two students, Yu Li and Sunand Raghupathi, set up a controlled experiment. They created two new neural networks with the goal of training both of them to recognize 10 different types of objects in a collection of 50,000 photographs known as “training images.”
    One AI system was trained the traditional way, by uploading a giant data table containing thousands of rows, each row corresponding to a single training photo. The first column held the image file containing a photo of a particular object or animal; the next 10 columns corresponded to 10 possible object types: cats, dogs, airplanes, etc. A “1” in one of those columns marked the correct answer, and 0s in the other nine marked the incorrect answers.
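    The “traditional” labeling scheme described here is one-hot encoding, sketched below as a generic illustration (not the study's code); the particular class names are assumptions for the example. In the voice-label variant, each such vector would instead be replaced by an audio recording of the spoken class name.

    ```python
    # One-hot "training label" sketch (generic illustration, not the study's code).
    # Each image gets a 10-element vector: a 1 in the position of the correct
    # class and 0s everywhere else.
    import numpy as np

    classes = ["airplane", "automobile", "bird", "cat", "deer",
               "dog", "frog", "horse", "ship", "truck"]   # assumed 10 categories

    def one_hot(label: str) -> np.ndarray:
        vec = np.zeros(len(classes))
        vec[classes.index(label)] = 1.0
        return vec

    print(one_hot("cat"))   # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
    ```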

  • Understanding fruit fly behavior may be next step toward autonomous vehicles

    With over 70% of respondents to an annual AAA survey on autonomous driving reporting they would fear being in a fully self-driving car, makers like Tesla may be back to the drawing board before rolling out fully autonomous self-driving systems. But new research from Northwestern University shows we may be better off putting fruit flies behind the wheel instead of robots.
    Drosophila have been subjects of science as long as humans have been running experiments in labs. But given their size, it’s easy to wonder what can be learned by observing them. Research published today in the journal Nature Communications demonstrates that fruit flies use decision-making, learning and memory to perform simple functions like escaping heat. And researchers are using this understanding to challenge the way we think about self-driving cars.
    “The discovery that flexible decision-making, learning and memory are used by flies during such a simple navigational task is both novel and surprising,” said Marco Gallio, the corresponding author on the study. “It may make us rethink what we need to do to program safe and flexible self-driving vehicles.”
    According to Gallio, an associate professor of neurobiology in the Weinberg College of Arts and Sciences, the questions behind this study are similar to those vexing engineers building cars that move on their own. How does a fruit fly (or a car) cope with novelty? How can we build a car that is flexibly able to adapt to new conditions?
    This discovery reveals brain functions in the household pest that are typically associated with more complex brains like those of mice and humans.
    “Animal behavior, especially that of insects, is often considered largely fixed and hard-wired — like machines,” Gallio said. “Most people have a hard time imagining that animals as different from us as a fruit fly may possess complex brain functions, such as the ability to learn, remember or make decisions.”
    To study how fruit flies escape heat, the Gallio lab built a tiny plastic chamber with four floor tiles whose temperatures could be independently controlled, and confined flies inside. They then used high-resolution video recordings to map how a fly reacted when it encountered a boundary between a warm tile and a cool tile. They found flies were remarkably good at treating heat boundaries as invisible barriers to avoid pain or harm.

  • A new, positive approach could be the key to next-generation, transparent electronics

    A new study, out this week, could pave the way to revolutionary, transparent electronics.
    Such see-through devices could potentially be integrated in glass, in flexible displays and in smart contact lenses, bringing to life futuristic devices that seem like the product of science fiction.
    For several decades, researchers have sought a new class of electronics based on semiconducting oxides, whose optical transparency could enable these fully-transparent electronics.
    Oxide-based devices could also find use in power electronics and communication technology, reducing the carbon footprint of our utility networks.
    An RMIT-led team has now introduced ultrathin beta-tellurite to the two-dimensional (2D) semiconducting material family, providing an answer to this decades-long search for a high-mobility p-type oxide.
    “This new, high-mobility p-type oxide fills a crucial gap in the materials spectrum to enable fast, transparent circuits,” says team leader Dr Torben Daeneke, who led the collaboration across three FLEET nodes.

  • Qubits made of holes could be the trick to building faster, larger quantum computers

    A new study indicates that holes may be the solution to the operational speed/coherence trade-off, potentially allowing qubits to be scaled up into a mini quantum computer.
    Quantum computers are predicted to be much more powerful and functional than today’s ‘classical’ computers.
    One way to make a quantum bit is to use the ‘spin’ of an electron, which can point either up or down. To make quantum computers as fast and power-efficient as possible we would like to operate them using only electric fields, which are applied using ordinary electrodes.
    Although spin does not ordinarily ‘talk’ to electric fields, in some materials spins can interact with electric fields indirectly, and these are some of the hottest materials currently studied in quantum computing.
    The interaction that enables spins to talk to electric fields is called the spin-orbit interaction, and it can be traced all the way back to Einstein’s theory of relativity.
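    As a rough illustration (a generic textbook form, not the specific model used in this work), a Rashba-type spin-orbit term couples a carrier's momentum to its spin, which is why an applied electric field can steer the spin:

    ```latex
    % Generic Rashba-type spin-orbit coupling (illustrative only):
    % \alpha is the spin-orbit strength, tunable by an applied electric field;
    % k_x, k_y are momentum components; \sigma_x, \sigma_y are Pauli spin matrices.
    H_{\mathrm{SO}} = \alpha \left( k_y \sigma_x - k_x \sigma_y \right)
    ```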
    The fear of quantum-computing researchers has been that when this interaction is strong, any gain in operation speed would be offset by a loss in coherence (essentially, how long we can preserve quantum information).

  • Kirigami-style fabrication may enable new 3D nanostructures

    A new technique that mimics the ancient Japanese art of kirigami may offer an easier way to fabricate complex 3D nanostructures for use in electronics, manufacturing and health care.
    Kirigami enhances the Japanese art form of origami, which involves folding paper to create 3D structural designs, by strategically incorporating cuts in the paper prior to folding. The method enables artists to create sophisticated three-dimensional structures more easily.
    “We used kirigami at the nanoscale to create complex 3D nanostructures,” said Daniel Lopez, Penn State Liang Professor of Electrical Engineering and Computer Science, and leader of the team that published this research in Advanced Materials. “These 3D structures are difficult to fabricate because current nanofabrication processes are based on the technology used to fabricate microelectronics which only use planar, or flat, films. Without kirigami techniques, complex three-dimensional structures would be much more complicated to fabricate or simply impossible to make.”
    Lopez said that if force is applied to a uniform structural film, nothing really happens other than stretching it a bit, like what happens when a piece of paper is stretched. But when cuts are introduced to the film, and forces are applied in a certain direction, a structure pops up, similar to when a kirigami artist applies force to a cut paper. The geometry of the planar pattern of cuts determines the shape of the 3D architecture.
    “We demonstrated that it is possible to use conventional planar fabrication methods to create different 3D nanostructures from the same 2D cut geometry,” Lopez said. “By introducing minimum changes to the dimensions of the cuts in the film, we can drastically change the three-dimensional shape of the pop-up architectures. We demonstrated nanoscale devices that can tilt or change their curvature just by changing the width of the cuts a few nanometers.”
    This new field of kirigami-style nanoengineering enables the development of machines and structures that can change from one shape to another, or morph, in response to changes in the environment. One example is an electronic component that changes shape at elevated temperatures to allow more air flow within a device and keep it from overheating.
    “This kirigami technique will allow the development of adaptive flexible electronics that can be incorporated onto surfaces with complicated topography, such as a sensor resting on the human brain,” Lopez said. “We could use these concepts to design sensors and actuators that can change shape and configuration to perform a task more efficiently. Imagine the potential of structures that can change shape with minuscule changes in temperature, illumination or chemical conditions.”
    Lopez will focus his future research on applying these kirigami techniques to materials that are one atom thick, and thin actuators made of piezoelectrics. These 2D materials open new possibilities for applications of kirigami-induced structures. Lopez said his goal is to work with other researchers at Penn State’s Materials Research Institute (MRI) to develop a new generation of miniature machines that are atomically flat and are more responsive to changes in the environment.
    “MRI is a world leader in the synthesis and characterization of 2D materials, which are the ultimate thin-films that can be used for kirigami engineering,” Lopez said. “Moreover, by incorporating ultra-thin piezo and ferroelectric materials onto kirigami structures, we will develop agile and shape-morphing structures. These shape-morphing micro-machines would be very useful for applications in harsh environments and for drug delivery and health monitoring. I am working at making Penn State and MRI the place where we develop these super-small machines for a specific variety of applications.”
    Story Source:
    Materials provided by Penn State. Original written by Jamie Oberdick.

  • New method uses device cameras to measure pulse, breathing rate and could help telehealth

    Telehealth has become a critical way for doctors to still provide health care while minimizing in-person contact during COVID-19. But with phone or Zoom appointments, it’s harder for doctors to get important vital signs from a patient, such as their pulse or respiration rate, in real time.
    A University of Washington-led team has developed a method that uses the camera on a person’s smartphone or computer to take their pulse and respiration signal from a real-time video of their face. The researchers presented this state-of-the-art system in December at the Neural Information Processing Systems conference.
    Now the team is proposing a better system to measure these physiological signals. This system is less likely to be tripped up by different cameras, lighting conditions or facial features, such as skin color. The researchers will present these findings April 8 at the ACM Conference on Health, Inference, and Learning.
    “Machine learning is pretty good at classifying images. If you give it a series of photos of cats and then tell it to find cats in other images, it can do it. But for machine learning to be helpful in remote health sensing, we need a system that can identify the region of interest in a video that holds the strongest source of physiological information — pulse, for example — and then measure that over time,” said lead author Xin Liu, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering.
    “Every person is different,” Liu said. “So this system needs to be able to quickly adapt to each person’s unique physiological signature, and separate this from other variations, such as what they look like and what environment they are in.”
    The team’s system is privacy-preserving — it runs on the device instead of in the cloud — and uses machine learning to capture subtle changes in how light reflects off a person’s face, which is correlated with changing blood flow. It then converts these changes into both pulse and respiration rate.
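    A classic baseline for camera-based pulse estimation, sketched below, averages the green channel over a face region in each frame and reads the heart rate off the dominant frequency of that signal. This is a generic remote-photoplethysmography illustration, not the UW team's machine-learning model, and the function name and parameters are hypothetical.

    ```python
    # Generic remote-photoplethysmography baseline (illustrative, not the UW system):
    # average the green channel over a face crop frame by frame, then take the
    # dominant frequency in the plausible heart-rate band as the pulse estimate.
    import numpy as np

    def estimate_pulse_bpm(frames: np.ndarray, fps: float) -> float:
        """frames: RGB face-crop video, shape (num_frames, height, width, 3)."""
        green = frames[..., 1].mean(axis=(1, 2))    # one brightness value per frame
        green = green - green.mean()                # remove the constant (DC) component
        spectrum = np.abs(np.fft.rfft(green))
        freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
        band = (freqs >= 0.7) & (freqs <= 4.0)      # plausible heart rates: 42-240 bpm
        peak = freqs[band][np.argmax(spectrum[band])]
        return peak * 60.0                          # convert Hz to beats per minute
    ```

    The published system goes further, using machine learning to adapt to the lighting, camera and skin-tone differences that easily break a simple baseline like this one.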

  • A robot that senses hidden objects

    In recent years, robots have gained artificial vision, touch, and even smell. “Researchers have been giving robots human-like perception,” says MIT Associate Professor Fadel Adib. In a new paper, Adib’s team is pushing the technology a step further. “We’re trying to give robots superhuman perception,” he says.
    The researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF-Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view. The advance could one day streamline e-commerce fulfillment in warehouses or help a machine pluck a screwdriver from a jumbled toolkit.
    The research will be presented in May at the IEEE International Conference on Robotics and Automation. The paper’s lead author is Tara Boroushaki, a research assistant in the Signal Kinetics Group at the MIT Media Lab. Her MIT co-authors include Adib, who is the director of the Signal Kinetics Group; and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. Other co-authors include Junshan Leng, a research engineer at Harvard University, and Ian Clester, a PhD student at Georgia Tech.
    As e-commerce continues to grow, warehouse work is still usually the domain of humans, not robots, despite sometimes-dangerous working conditions. That’s in part because robots struggle to locate and grasp objects in such a crowded environment. “Perception and picking are two roadblocks in the industry today,” says Rodriguez. Using optical vision alone, robots can’t perceive the presence of an item packed away in a box or hidden behind another object on the shelf — visible light waves, of course, don’t pass through walls.
    But radio waves can.
    For decades, radio frequency (RF) identification has been used to track everything from library books to pets. RF identification systems have two main components: a reader and a tag. The tag is a tiny computer chip that gets attached to — or, in the case of pets, implanted in — the item to be tracked. The reader then emits an RF signal, which gets modulated by the tag and reflected back to the reader.
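    The reader-tag exchange can be pictured with a toy simulation (purely conceptual, not RF-Grasp's code; the frequencies and tag ID are made up for the example): the tag encodes its ID by switching how strongly it reflects the reader's carrier, and the reader recovers the bits from the reflected signal.

    ```python
    # Toy RFID backscatter simulation (conceptual illustration, not RF-Grasp's code):
    # the reader broadcasts a carrier wave, the tag encodes its ID bits by switching
    # its reflection strength (on-off keying), and the reader demodulates the result.
    import numpy as np

    fs = 1_000_000           # sample rate in Hz (assumed for the example)
    carrier_hz = 100_000     # reader carrier frequency (assumed)
    bit_duration = 0.001     # seconds per tag bit (assumed)
    tag_id_bits = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical tag ID

    samples_per_bit = int(fs * bit_duration)
    t = np.arange(samples_per_bit * len(tag_id_bits)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)             # reader's transmitted signal

    # Tag: reflect strongly for a 1 bit, weakly for a 0 bit.
    reflection = np.repeat([0.9 if b else 0.1 for b in tag_id_bits], samples_per_bit)
    received = reflection * carrier                          # backscattered signal at the reader

    # Reader: envelope-detect each bit interval to recover the ID.
    envelope = np.abs(received).reshape(len(tag_id_bits), samples_per_bit).mean(axis=1)
    decoded = (envelope > envelope.mean()).astype(int)
    print(decoded.tolist())                                  # -> [1, 0, 1, 1, 0, 0, 1, 0]
    ```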