More stories

  • Robots can be more aware of human co-workers, with system that provides context

    Working safely is not only about processes, but context — understanding the work environment and circumstances, and being able to predict what other people will do next. A new system empowers robots with this level of context awareness, so they can work side-by-side with humans on assembly lines more efficiently and without unnecessary interruptions.
    Rather than only judging the distance between itself and its human co-workers, the human-robot collaboration system can identify each worker it interacts with, as well as that person’s skeleton model, an abstraction of body volume, says Hongyi Liu, a researcher at KTH Royal Institute of Technology. Using this information, the context-aware robot system can recognize the worker’s pose and even predict the next pose. These abilities give the robot a context to be aware of while interacting.
    Liu says that the system operates with artificial intelligence that requires less computational power and smaller datasets than traditional machine learning methods. It relies instead on a form of machine learning called transfer learning, which reuses knowledge developed in earlier training and adapts it into an operational model.
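    The paper’s code is not reproduced here, but the core idea of forecasting a worker’s next pose from a stream of skeleton frames can be sketched in a few lines. In the hypothetical Python sketch below, the joint count, history length and model size are illustrative assumptions only; in the transfer-learning spirit Liu describes, a real system would initialize such a predictor from a model pre-trained on a large motion dataset rather than training it from scratch.
    ```python
    # Hypothetical sketch: predict a worker's next pose from recent
    # skeleton frames. Joint count, window length and layer sizes are
    # assumptions; the paper's actual model is not reproduced here.
    import torch
    import torch.nn as nn

    NUM_JOINTS = 17                   # assumed COCO-style skeleton
    FRAME_DIM = NUM_JOINTS * 3        # (x, y, z) per joint

    class PosePredictor(nn.Module):
        """GRU that maps a short history of poses to the next pose."""
        def __init__(self, hidden=128):
            super().__init__()
            self.rnn = nn.GRU(FRAME_DIM, hidden, batch_first=True)
            self.head = nn.Linear(hidden, FRAME_DIM)

        def forward(self, frames):        # frames: (batch, time, FRAME_DIM)
            out, _ = self.rnn(frames)
            return self.head(out[:, -1])  # predicted next frame

    model = PosePredictor()
    history = torch.randn(1, 30, FRAME_DIM)   # 30 frames of dummy data
    next_pose = model(history)
    print(next_pose.shape)                    # torch.Size([1, 51])
    ```
    Feeding the predicted pose into a motion planner is what would let a robot route its arm around a worker instead of simply stopping.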
    The research was published in a recent issue of Robotics and Computer-Integrated Manufacturing, and was co-authored by KTH Professor Lihui Wang.
    Liu says that the technology is out ahead of today’s International Organization for Standardization (ISO) requirements for collaborative robot safety, so implementing it would require action from industry. But the context awareness offers better efficiency than the one-dimensional interaction workers now experience with robots, he says.
    “Under the ISO standard and technical specification, when a human approaches a robot it slows down, and if he or she comes close enough it will stop. If the person moves away it resumes. That’s a pretty low level of context awareness,” he says.
    “It jeopardizes efficiency. Production is slowed and humans cannot work close to robots.”
    Liu compares the context-aware robot system to a self-driving car that recognizes how long a stoplight has been red and anticipates when it will change. Instead of braking or downshifting at the last moment, it adjusts its speed and cruises toward the intersection, sparing the brakes and transmission further wear.
    Experiments with the system showed that with context, a robot can operate more safely and efficiently without slowing down production.
    In one test performed with the system, a robot arm’s path was blocked unexpectedly by someone’s hand. But rather than stopping, the robot adjusted: it predicted the future trajectory of the hand and moved its arm around it.
    “This is safety not just from the technical point of view in avoiding collisions, but being able to recognize the context of the assembly line,” he says. “This gives an additional layer of safety.”
    The research was an extension of the Symbiotic Human Robot Collaborative Assembly project, which was completed in 2019.
    Story Source:
    Materials provided by KTH, Royal Institute of Technology. Note: Content may be edited for style and length.

  • Do school-based interventions help improve reading and math in at-risk children?

    School-based interventions that target students with, or at risk of, academic difficulties in kindergarten to grade 6 have positive effects on reading and mathematics, according to an article published in Campbell Systematic Reviews.
    The review analyzed evidence from 205 studies, 186 of which were randomized controlled trials, to examine the effects of targeted school-based interventions on students’ performance on standardized tests in reading and math.
    Peer-assisted instruction and small-group instruction by adults were among the most effective interventions. The authors noted that these have substantial potential to boost skills in students experiencing academic difficulties.
    “It is exciting to see that there are many interventions with substantial impacts on math and reading skills, especially in these times when many students have not been able to attend school and the number of students who need extra help may be even larger than usual,” said lead author Jens Dietrichson, PhD, of VIVE, the Danish Center for Social Science Research. “It is also interesting that there is large variation: far from all interventions have positive effects, and there are substantial and robust differences between the types of interventions. Thus, schools can boost the skills of students with difficulties by implementing targeted interventions, but it matters greatly how they do it.”
    Story Source:
    Materials provided by Wiley. Note: Content may be edited for style and length.

  • The incredible bacterial 'homing missiles' that scientists want to harness

    Imagine there are arrows that are lethal when fired on your enemies yet harmless if they fall on your friends. It’s easy to see how these would be an amazing advantage in warfare, if they were real. However, something just like these arrows does indeed exist, and they are used in warfare … just on a different scale.
    These weapons are called tailocins, and the reality is almost stranger than fiction.
    “Tailocins are extremely strong protein nanomachines made by bacteria,” explained Vivek Mutalik, a research scientist at Lawrence Berkeley National Laboratory (Berkeley Lab) who studies tailocins and phages, the bacteria-infecting viruses that tailocins appear to be remnants of. “They look like phages but they don’t have the capsid, which is the ‘head’ of the phage that contains the viral DNA and replication machinery. So, they’re like a spring-powered needle that goes and sits on the target cell, then appears to poke all the way through the cell membrane making a hole to the cytoplasm, so the cell loses its ions and contents and collapses.”
    A wide variety of bacteria are capable of producing tailocins, and seem to do so under stress conditions. Because the tailocins are only lethal to specific strains — so specific, in fact, that they have earned the nickname “bacterial homing missiles” — tailocins appear to be a tool used by bacteria to compete with their rivals. Due to their similarity with phages, scientists believe that the tailocins are produced by DNA that was originally inserted into bacterial genomes during viral infections (viruses give their hosts instructions to make more of themselves), and over evolutionary time, the bacteria discarded the parts of the phage DNA that weren’t beneficial but kept the parts that could be co-opted for their own benefit.
    But, unlike most abilities that are selected through evolution, tailocins do not save the individual. According to Mutalik, bacteria are killed if they produce tailocins, just as they would be if infected by a true phage, because the pointed nanomachines erupt through the membrane to exit the producing cell, much like replicated viral particles. But once released, the tailocins target only certain strains, sparing the other cells of the host lineage.
    “They benefit kin but the individual is sacrificed, which is a type of altruistic behavior. But we don’t yet understand how this phenomenon happens in nature,” said Mutalik. Scientists also don’t know precisely how the stabbing needle plunger of the tailocin functions.

  • Scientists harness chaos to protect devices from hackers

    Researchers have found a way to use chaos to help develop digital fingerprints for electronic devices that may be unique enough to foil even the most sophisticated hackers.
    Just how unique are these fingerprints? The researchers believe it would take longer than the lifetime of the universe to test every possible combination.
    “In our system, chaos is very, very good,” said Daniel Gauthier, senior author of the study and professor of physics at The Ohio State University.
    The study was recently published online in the journal IEEE Access.
    The researchers created a new version of an emerging technology called physically unclonable functions, or PUFs, that are built into computer chips.
    Gauthier said these new PUFs could potentially be used to create secure ID cards, to track goods in supply chains and as part of authentication applications, where it is vital to know that you’re not communicating with an impostor.
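    The team’s design realizes the unclonable function in chaotic circuitry on the chip itself, but the challenge-response protocol that any PUF supports can be illustrated in software. In the hedged Python sketch below, a keyed hash merely stands in for the chip’s physical randomness (a real PUF stores no key at all), and the closing arithmetic hints at why exhaustively testing even a 128-bit challenge space would outlast the universe.
    ```python
    # Illustrative sketch of PUF-style challenge-response authentication.
    # A real PUF derives responses from physical chip variation; here a
    # keyed hash stands in for the physics so the protocol flow runs.
    import hashlib
    import os

    class SimulatedPUF:
        def __init__(self):
            # Stand-in for manufacturing randomness (a real PUF has no key).
            self._physics = os.urandom(32)

        def respond(self, challenge: bytes) -> bytes:
            return hashlib.sha256(self._physics + challenge).digest()

    # Enrollment: the verifier records challenge-response pairs.
    device = SimulatedPUF()
    challenge = os.urandom(16)
    expected = device.respond(challenge)

    # Authentication: replay the challenge; only the genuine device matches.
    assert device.respond(challenge) == expected

    # A 128-bit challenge space has 2**128 combinations; at a billion
    # guesses per second, exhausting it takes on the order of 1e22 years.
    print(2**128 / 1e9 / 3.15e7, "years")
    ```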

  • Screening for skin disease on your laptop

    The founding chair of the Biomedical Engineering Department at the University of Houston is reporting a new deep neural network architecture that provides early diagnosis of systemic sclerosis (SSc), a rare autoimmune disease marked by hardened or fibrous skin and internal organs. The proposed network, implemented using a standard laptop computer (2.5 GHz Intel Core i7), can immediately differentiate between images of healthy skin and skin with systemic sclerosis.
    “Our preliminary study, intended to show the efficacy of the proposed network architecture, holds promise in the characterization of SSc,” reports Metin Akay, John S. Dunn Endowed Chair Professor of biomedical engineering. The work is published in the IEEE Open Journal of Engineering in Medicine and Biology.
    “We believe that the proposed network architecture could easily be implemented in a clinical setting, providing a simple, inexpensive and accurate screening tool for SSc.”
    For patients with SSc, early diagnosis is critical, but often elusive. Several studies have shown that organ involvement could occur far earlier than expected in the early phase of the disease, but early diagnosis and determining the extent of disease progression pose a significant challenge for physicians, even at expert centers, resulting in delays in therapy and management.
    In artificial intelligence, deep learning organizes algorithms into layers (an artificial neural network) that can make their own intelligent decisions. To speed up the learning process, the new network was trained using the parameters of MobileNetV2, a mobile vision model pre-trained on the 1.4 million images of the ImageNet dataset.
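    The article names the starting point explicitly: MobileNetV2 weights pre-trained on ImageNet. A minimal Keras sketch of that transfer-learning step might look like the following; the input size, classification head and three example classes (healthy, early SSc, late SSc) are assumptions, and the authors’ full network also incorporates UNet-derived layers not shown here.
    ```python
    # Minimal transfer-learning sketch: start from MobileNetV2 weights
    # pre-trained on ImageNet, as the article describes. The head, input
    # size and class count are assumptions, not the authors' architecture.
    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # freeze pre-trained features before fine-tuning

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        # assumed classes: healthy, early-stage SSc, late-stage SSc
        tf.keras.layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_images, train_labels, validation_data=..., epochs=...)
    ```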
    “By scanning the images, the network learns from the existing images and decides which new image is normal or in an early or late stage of disease,” said Akay.
    Among several deep learning networks, Convolutional Neural Networks (CNNs) are most commonly used in engineering, medicine and biology, but their success in biomedical applications has been limited due to the size of the available training sets and networks.
    To overcome these difficulties, Akay and partner Yasemin Akay combined UNet, a modified CNN architecture, with added layers and developed a mobile training module. The results showed that the proposed deep learning architecture outperforms conventional CNNs for the classification of SSc images.
    “After fine-tuning, our results showed the proposed network reached 100% accuracy on the training image set, 96.8% accuracy on the validation image set, and 95.2% on the testing image set,” said Yasemin Akay, UH instructional associate professor of biomedical engineering.
    The training time was less than five hours.
    Joining Metin Akay and Yasemin Akay, the paper was co-authored by Yong Du, Cheryl Shersen, Ting Chen and Chandra Mohan, all of University of Houston; and Minghua Wu and Shervin Assassi of the University of Texas Health Science Center (UT Health).
    Story Source:
    Materials provided by University of Houston. Original written by Laurie Fickman. Note: Content may be edited for style and length.

  • Deep learning networks prefer the human voice — just like us

    The digital revolution is built on a foundation of invisible 1s and 0s called bits. As decades pass, and more and more of the world’s information and knowledge morph into streams of 1s and 0s, the notion that computers prefer to “speak” in binary numbers is rarely questioned. According to new research from Columbia Engineering, this could be about to change.
    A new study from Mechanical Engineering Professor Hod Lipson and his PhD student Boyuan Chen shows that artificial intelligence systems might actually reach higher levels of performance if they are programmed with sound files of human language rather than with numerical data labels. The researchers discovered that in a side-by-side comparison, a neural network whose “training labels” consisted of sound files reached higher levels of performance in identifying objects in images, compared to another network that had been programmed in a more traditional manner, using simple binary inputs.
    “To understand why this finding is significant,” said Lipson, James and Sally Scapa Professor of Innovation and a member of Columbia’s Data Science Institute, “it’s useful to understand how neural networks are usually programmed, and why using the sound of the human voice is a radical experiment.”
    When used to convey information, the language of binary numbers is compact and precise. In contrast, spoken human language is more tonal and analog, and, when captured in a digital file, non-binary. Because numbers are such an efficient way to digitize data, programmers rarely deviate from a numbers-driven process when they develop a neural network.
    Lipson, a highly regarded roboticist, and Chen, a former concert pianist, had a hunch that neural networks might not be reaching their full potential. They speculated that neural networks might learn faster and better if the systems were “trained” to recognize animals, for instance, by using the power of one of the world’s most highly evolved sounds — the human voice uttering specific words.
    One of the more common exercises AI researchers use to test out the merits of a new machine learning technique is to train a neural network to recognize specific objects and animals in a collection of different photographs. To check their hypothesis, Chen, Lipson and two students, Yu Li and Sunand Raghupathi, set up a controlled experiment. They created two new neural networks with the goal of training both of them to recognize 10 different types of objects in a collection of 50,000 photographs known as “training images.”
    One AI system was trained the traditional way, by uploading a giant data table containing thousands of rows, each row corresponding to a single training photo. The first column held an image file containing a photo of a particular object or animal; the next 10 columns corresponded to 10 possible object types: cats, dogs, airplanes, etc. A “1” in one of those columns indicated the correct answer, and 0s in the other nine indicated the incorrect answers.
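    The difference between the two labeling schemes is easy to make concrete. In the hypothetical sketch below, the conventional target is a one-hot vector with a single 1 in the correct column, while the voice-label alternative makes the training target an audio clip of the spoken class name; the clip length and sample rate are assumptions, since the article does not specify the audio format the team used.
    ```python
    # Sketch contrasting conventional one-hot labels with voice labels.
    # The audio representation is an assumed stand-in (dummy 1 s clips
    # at 16 kHz); the article does not specify the team's exact format.
    import numpy as np

    classes = ["cat", "dog", "airplane"]      # 3 of the 10 object types

    def one_hot(name):
        """Conventional label: a 1 in the correct column, 0s elsewhere."""
        vec = np.zeros(len(classes), dtype=np.float32)
        vec[classes.index(name)] = 1.0
        return vec

    # Voice label: the target is the sound of the spoken word itself.
    voice_labels = {name: np.random.randn(16000).astype(np.float32)
                    for name in classes}

    print(one_hot("dog"))             # [0. 1. 0.]
    print(voice_labels["dog"].shape)  # (16000,)
    ```
    Under the voice scheme, the network learns to produce or match the waveform for the correct word rather than to emit a 10-way one-hot vector.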

  • Understanding fruit fly behavior may be next step toward autonomous vehicles

    With over 70% of respondents to an annual AAA survey on autonomous driving reporting they would fear being in a fully self-driving car, makers like Tesla may be back to the drawing board before rolling out fully autonomous systems. But new research from Northwestern University shows we may be better off putting fruit flies behind the wheel instead of robots.
    Drosophila have been subjects of science as long as humans have been running experiments in labs. But given their size, it’s easy to wonder what can be learned by observing them. Research published today in the journal Nature Communications demonstrates that fruit flies use decision-making, learning and memory to perform simple functions like escaping heat. And researchers are using this understanding to challenge the way we think about self-driving cars.
    “The discovery that flexible decision-making, learning and memory are used by flies during such a simple navigational task is both novel and surprising,” said Marco Gallio, the corresponding author on the study. “It may make us rethink what we need to do to program safe and flexible self-driving vehicles.”
    According to Gallio, an associate professor of neurobiology in the Weinberg College of Arts and Sciences, the questions behind this study are similar to those vexing engineers building cars that move on their own. How does a fruit fly (or a car) cope with novelty? How can we build a car that is flexibly able to adapt to new conditions?
    This discovery reveals brain functions in the household pest that are typically associated with more complex brains like those of mice and humans.
    “Animal behavior, especially that of insects, is often considered largely fixed and hard-wired — like machines,” Gallio said. “Most people have a hard time imagining that animals as different from us as a fruit fly may possess complex brain functions, such as the ability to learn, remember or make decisions.”
    To study how fruit flies escape heat, the Gallio lab built a tiny plastic chamber with four floor tiles whose temperatures could be independently controlled, and confined flies inside it. The researchers then used high-resolution video recordings to map how a fly reacted when it encountered a boundary between a warm tile and a cool tile. They found flies were remarkably good at treating heat boundaries as invisible barriers to avoid pain or harm.
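    As a rough illustration of the kind of analysis such recordings enable, the hedged Python sketch below flags video frames in which a tracked fly crosses from a cool floor tile onto a warm one; the chamber size, tile temperatures and tracking format are all assumptions rather than the lab’s actual setup.
    ```python
    # Hedged sketch: detect warm/cool tile-boundary crossings from a
    # tracked fly trajectory. Chamber size, temperatures and the data
    # format are assumptions, not the Gallio lab's actual pipeline.
    import numpy as np

    tile_temp = np.array([[25.0, 36.0],    # 2x2 tile grid, deg C
                          [25.0, 25.0]])
    CHAMBER = 20.0                         # assumed side length, mm

    def tile_of(x, y):
        return int(y >= CHAMBER / 2), int(x >= CHAMBER / 2)

    # Dummy tracked (x, y) positions, one row per video frame.
    track = np.array([[4.0, 4.0], [9.0, 4.0], [10.5, 4.0], [9.5, 4.0]])

    for i in range(1, len(track)):
        t_prev = tile_temp[tile_of(*track[i - 1])]
        t_curr = tile_temp[tile_of(*track[i])]
        if t_curr > t_prev:
            print(f"frame {i}: crossed onto warm tile ({t_prev} -> {t_curr} C)")
        elif t_curr < t_prev:
            print(f"frame {i}: retreated to cool tile ({t_prev} -> {t_curr} C)")
    ```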

  • A new, positive approach could be the key to next-generation, transparent electronics

    A new study, out this week, could pave the way to revolutionary, transparent electronics.
    Such see-through devices could potentially be integrated in glass, in flexible displays and in smart contact lenses, bringing to life futuristic devices that seem like the product of science fiction.
    For several decades, researchers have sought a new class of electronics based on semiconducting oxides, whose optical transparency could enable these fully-transparent electronics.
    Oxide-based devices could also find use in power electronics and communication technology, reducing the carbon footprint of our utility networks.
    An RMIT-led team has now introduced ultrathin beta-tellurite to the two-dimensional (2D) semiconducting material family, providing an answer to this decades-long search for a high-mobility p-type oxide.
    “This new, high-mobility p-type oxide fills a crucial gap in the materials spectrum to enable fast, transparent circuits,” says team leader Dr Torben Daeneke, who led the collaboration across three FLEET nodes.