More stories

  • Face off for best ID checkers

    Psychologists from UNSW Sydney have developed a new face identification ability test that will help find facial recognition experts for a variety of police and government agencies, including for contact tracing.
    The Glasgow Face Matching Test 2 [GFMT2] targets high-performing facial recognition individuals known as super-recognisers, who have an extraordinary ability to memorise and recall faces.
    The types of professional roles that involve face identification and could benefit from the test include visa processors, passport issuers, border control officers, police and contact tracers, as well as security staff in private industry.
    “Being able to recognise faces of friends and family is a skill that most of us take for granted,” Scientia Fellow Dr David White from UNSW Science’s School of Psychology says. “But comparing images of unfamiliar faces and deciding if they show the same person is a task that most of our participants find challenging, even passport officers with many years of experience in the task. A major finding in our field in recent years has been that some people are much better than others at identifying faces from photographs.
    This is an insight that has changed the way staff, such as passport and police officers, are recruited.”
    The lead investigator at UNSW Sydney’s Face Research Lab says the GFMT2 is valuable for identifying super-recognisers, who have been used by the London Metropolitan Police Service in criminal investigations, most famously in the alleged poisoning of former Russian spies in Salisbury.

  • Smart technology is not making us dumber

    There are plenty of negatives associated with smart technology — tech neck, texting and driving, blue light rays — but there is also a positive: the digital age is not making us stupid, says University of Cincinnati social/behavioral expert Anthony Chemero.
    “Despite the headlines, there is no scientific evidence that shows that smartphones and digital technology harm our biological cognitive abilities,” says the UC professor of philosophy and psychology, who recently co-authored a paper making that case in Nature Human Behaviour.
    In the paper, Chemero and colleagues at the University of Toronto’s Rotman School of Management expound on the evolution of the digital age, explaining how smart technology supplements thinking, thus helping us to excel.
    “What smartphones and digital technology seem to do instead is to change the ways in which we engage our biological cognitive abilities,” Chemero says, adding “these changes are actually cognitively beneficial.”
    For example, he says, your smart phone knows the way to the baseball stadium so that you don’t have to dig out a map or ask for directions, which frees up brain energy to think about something else. The same holds true in a professional setting: “We’re not solving complex mathematical problems with pen and paper or memorizing phone numbers in 2021.”
    Computers, tablets and smart phones, he says, function as auxiliaries: tools that are good at memorization, calculation and information storage, and at presenting information when you need it.
    Additionally, smart technology augments decision-making skills that we would be hard pressed to accomplish on our own, says the paper’s lead author Lorenzo Cecutti, a PhD candidate at the University of Toronto. Using GPS technology on our phones, he says, not only helps us get where we are going but also lets us choose a route based on traffic conditions. “That would be a challenging task when driving around in a new city.”
    Chemero adds: “You put all this technology together with a naked human brain and you get something that’s smarter…and the result is that we, supplemented by our technology, are actually capable of accomplishing much more complex tasks than we could with our un-supplemented biological abilities.”
    While there may be other consequences to smart technology, “making us stupid is not one of them,” says Chemero.
    Story Source:
    Materials provided by University of Cincinnati. Original written by Angela Koenig. Note: Content may be edited for style and length.

  • Insect-sized robot navigates mazes with the agility of a cheetah

    Many insects and spiders get their uncanny ability to scurry up walls and walk upside down on ceilings with the help of specialized sticky footpads that allow them to adhere to surfaces in places where no human would dare to go.
    Engineers at the University of California, Berkeley, have used the principle behind some of these footpads, called electrostatic adhesion, to create an insect-scale robot that can swerve and pivot with the agility of a cheetah, giving it the ability to traverse complex terrain and quickly avoid unexpected obstacles.
    The robot is constructed from a thin, layered material that bends and contracts when an electric voltage is applied. In a 2019 paper, the research team demonstrated that this simple design can be used to create a cockroach-sized robot that can scurry across a flat surface at a rate of 20 body lengths per second, or about 1.5 miles per hour — nearly the speed of living cockroaches themselves, and the fastest relative speed of any insect-sized robot.
    In a new study, the research team added two electrostatic footpads to the robot. Applying a voltage to either of the footpads increases the electrostatic force between the footpad and a surface, making that footpad stick more firmly to the surface and forcing the rest of the robot to rotate around the foot.
    The two footpads give operators full control over the trajectory of the robot, and allow the robot to make turns with a centripetal acceleration that exceeds that of most insects.
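    As a rough illustration of the physics at work, a footpad pressed against a surface can be modeled as a parallel-plate capacitor, and the pivot as circular motion. All numbers below (pad size, drive voltage, effective air gap, speed, turn radius) are illustrative assumptions, not values from the paper.

    ```python
    # Toy estimate of electrostatic adhesion and turning, treating the
    # footpad and surface as a parallel-plate capacitor:
    #   F = eps0 * A * V^2 / (2 * d^2)
    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def electrostatic_force(area_m2, voltage_v, gap_m):
        """Attractive force between footpad and surface, in newtons."""
        return EPS0 * area_m2 * voltage_v**2 / (2 * gap_m**2)

    # Assumed footpad: 2 mm x 2 mm pad, 1 kV drive, 1 micron effective gap.
    f = electrostatic_force(4e-6, 1000.0, 1e-6)   # roughly 18 mN of grip

    # Centripetal acceleration if the robot pivots around a stuck foot,
    # a = v^2 / r, assuming 0.5 m/s speed and a 1 cm turn radius.
    a = 0.5**2 / 0.01   # 25 m/s^2, i.e. about 2.5 g
    ```

    Even with these modest assumed numbers, a millinewton-scale anchoring force on a sub-gram robot is ample to force the body to rotate around the stuck foot.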
    “Our original robot could move very, very fast, but we could not really control whether the robot went left or right, and a lot of the time it would move randomly, because if there was a slight difference in the manufacturing process — if the robot was not symmetrical — it would veer to one side,” said Liwei Lin, a professor of mechanical engineering at UC Berkeley. “In this work, the major innovation was adding these footpads that allow it to make very, very fast turns.”
    To demonstrate the robot’s agility, the research team filmed the robot navigating Lego mazes while carrying a small gas sensor and swerving to avoid falling debris. Because of its simple design, the robot can also survive being stepped on by a 120-pound human.
    Small, robust robots like these could be ideal for conducting search and rescue operations or investigating other hazardous situations, such as scoping out potential gas leaks, Lin said. While the team demonstrated most of the robot’s skills while it was “tethered,” or powered and controlled through a small electrical wire, they also created an “untethered” version that can operate on battery power for up to 19 minutes and 31 meters while carrying a gas sensor.
    “One of the biggest challenges today is making smaller scale robots that maintain the power and control of bigger robots,” Lin said. “With larger-scale robots, you can include a big battery and a control system, no problem. But when you try to shrink everything down to a smaller and smaller scale, the weight of those elements becomes difficult for the robot to carry, and the robot generally moves very slowly. Our robot is very fast, quite strong, and requires very little power, allowing it to carry sensors and electronics while also carrying a battery.”
    Video: https://www.youtube.com/watch?v=TmRol48_DKs
    Story Source:
    Materials provided by University of California – Berkeley. Original written by Kara Manke. Note: Content may be edited for style and length.

  • Researchers explore how children learn language

    Small children learn language at a pace far faster than teenagers or adults. One explanation for this learning advantage comes not from differences between children and adults, but from the differences in the way that people talk to children and adults.
    For the first time, a team of researchers has developed a method to experimentally evaluate how parents use what they know about their children’s language when they talk to them. They found that parents have extremely precise models of their children’s language knowledge, and use these models to tune the language they use when speaking to them. The results are available as an advance online publication in the journal Psychological Science.
    “We have known for years that parents talk to children differently than to other adults in a lot of ways, for example simplifying their speech, reduplicating words and stretching out vowel sounds,” said Daniel Yurovsky, assistant professor in psychology at Carnegie Mellon University. “This stuff helps young kids get a toehold into language, but we didn’t know whether parents change the way they talk as children are acquiring language, giving children language input that is ‘just right’ for learning the next thing.”
    Adults tend to speak to children more slowly and at a higher pitch. They also use more exaggerated enunciation, repetition and simplified language structure. Adults also pepper their communication with questions to gauge the child’s comprehension. As the child’s language fluency increases, the sentence structure and complexity used by adults increases.
    Yurovsky likens this to the progression a student follows when learning math in school.
    “When you go to school, you start with algebra and then take plane geometry before moving on to calculus,” said Yurovsky. “People talk to kids using the same kind of structure without thinking about it. They are tracking how much their child knows about language and modifying how they speak so that their children can understand them.”
    Yurovsky and his team sought to understand exactly how caregivers tune their interactions to match their child’s speech development. The team developed a game where parents helped their children to pick a specific animal from a set of three, a game that toddlers (aged 15 to 23 months) and their parents play routinely in their daily lives. Half of the animals in the matching game were animals that children typically learn before age 2 (e.g. cat, cow), and the other half were animals that are typically learned later (e.g. peacock, leopard).

  • Skin in the game: Transformative approach uses the human body to recharge smartwatches

    As smartwatches become increasingly able to monitor the vital signs of health, including what’s going on when we sleep, a problem has emerged: these wearable, wireless devices are often disconnected from our bodies overnight, charging at the bedside.
    “Quality of sleep and its patterns contain a lot of important information about patients’ health conditions,” says Sunghoon Ivan Lee, assistant professor in the University of Massachusetts Amherst College of Information and Computer Sciences and director of the Advanced Human Health Analytics Laboratory.
    But that information can’t be tracked on smartwatches if the wearable devices are being charged as users sleep, which prior research has shown is frequently the case. Lee adds, “The main reason users discontinue the long-term use of wearable devices is because they have to frequently charge the on-device battery.”
    Pondering this problem, Lee brainstormed with UMass Amherst wearable computing engineer Jeremy Gummeson to find a solution to continuously recharge these devices on the body so they can monitor the user’s health 24/7.
    The scientists’ aha moment came when they realized “human skin is a conductive material,” Lee recalls. “Why can’t we instrument daily objects, such as the office desk, chair and car steering wheel, so they can seamlessly transfer power through human skin to charge up a watch or any wearable sensor while the users interact with them? Like, using human skin as a wire.
    “Then we can motivate people to do things like sleep tracking because they never have to take their watch off to charge it,” he adds.

  • Understanding potential topological quantum bits

    Quantum computers promise great advances in many fields, from cryptography to the simulation of protein folding. Yet which physical system works best for building the underlying quantum bits is still an open question. Unlike the regular bits in your computer, these so-called qubits can take not only the values 0 and 1 but also mixtures of the two. While this potentially makes them very useful, it also makes them very unstable.
    One approach to solving this problem bets on topological qubits, which encode information in their spatial arrangement. That could provide a more stable and error-resistant basis for computation than other setups. The problem is that no one has ever definitively found a topological qubit.
    An international team of researchers from Austria, Copenhagen and Madrid, led by Marco Valentini from the Nanoelectronics group at IST Austria, has now examined a setup that was predicted to produce so-called Majorana zero modes, the core ingredient for a topological qubit. They found that an apparently valid signal for such modes can in fact be a false flag.
    Half of an Electron
    The experimental setup is built around a tiny wire just a few hundred nanometers (a few ten-thousandths of a millimeter) long, grown by Peter Krogstrup of Microsoft Quantum and the University of Copenhagen. These aptly named nanowires form a free-floating connection between two metal conductors on a chip. They are coated with a superconducting material that loses all electrical resistance at very low temperatures. The coating covers the whole wire except for a tiny section left bare at one end, which forms a crucial part of the setup: the junction. The whole contraption is then exposed to a magnetic field.
    The scientists’ theories predicted that Majorana zero modes, the basis for the topological qubit they were looking for, should appear in the nanowire. Majorana zero modes are a strange phenomenon: they started out as a mathematical trick for describing one electron in the wire as composed of two halves. Physicists do not usually think of electrons as something that can be split, but with this nanowire setup it should have been possible to separate these “half-electrons” and use them as qubits.
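    The expected signal can be illustrated with the standard Kitaev-chain toy model of Majorana zero modes, a textbook simplification rather than the actual semiconductor-superconductor nanowire of the experiment. At the so-called sweet spot of its parameters, an open chain hosts two zero-energy states localized at its ends; away from it, the spectrum is gapped.

    ```python
    import numpy as np

    def bdg_spectrum(n_sites, mu, t, delta):
        """Eigenvalues of the Bogoliubov-de Gennes Hamiltonian of an
        open Kitaev chain with chemical potential mu, hopping t, and
        p-wave pairing delta."""
        h = -mu * np.eye(n_sites)          # on-site chemical potential
        d = np.zeros((n_sites, n_sites))   # pairing matrix (antisymmetric)
        for i in range(n_sites - 1):
            h[i, i + 1] = h[i + 1, i] = -t  # nearest-neighbor hopping
            d[i, i + 1] = delta
            d[i + 1, i] = -delta
        bdg = np.block([[h, d], [-d, -h]])  # particle-hole structure
        return np.linalg.eigvalsh(bdg)

    # Sweet spot (mu = 0, t = delta): two end states at exactly zero energy.
    topo = bdg_spectrum(20, mu=0.0, t=1.0, delta=1.0)
    # Trivial phase (|mu| > 2t): a finite gap, no zero modes.
    triv = bdg_spectrum(20, mu=4.0, t=1.0, delta=1.0)
    ```

    Diagonalizing both cases shows the smallest energy in the first spectrum sitting at zero to machine precision, while the second stays gapped; the experimental difficulty is that other physics (such as the quantum dot described below) can mimic that zero-energy signature.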
    “We were excited to work on this very promising material platform,” explains Marco Valentini, who joined IST Austria as an intern before becoming a PhD student in the Nanoelectronics group. “What we expected to see was the signal of Majorana zero modes in the nanowire, but we found nothing. First, we were confused, then frustrated. Eventually, and in close collaboration with our colleagues from the Theory of Quantum Materials and Solid State Quantum Technologies group in Madrid, we examined the setup, and found out what was wrong with it.”
    A False Flag
    After attempting to find the signatures of the Majorana zero modes, the researchers began to vary the nanowire setup to check whether any effects from its architecture were disturbing their experiment. “We did several experiments on different setups to find out what was going wrong,” Valentini explains. “It took us a while, but when we doubled the length of the uncoated junction from a hundred nanometers to two hundred, we found our culprit.”
    When the junction was big enough the following happened: The exposed inner nanowire formed a so-called quantum dot — a tiny speck of matter that shows special quantum mechanical properties due to its confined geometry. The electrons in this quantum dot could then interact with the ones in the coating superconductor next to it, and by that mimic the signal of the “half-electrons” — the Majorana zero modes — which the scientists were looking for.
    “This unexpected conclusion came after we established the theoretical model of how the quantum dot interacts with the superconductor in a magnetic field and compared the experimental data with detailed simulations performed by Fernando Peñaranda, a PhD student in the Madrid team,” says Valentini.
    “Mistaking this mimicking signal for a Majorana zero mode shows us how careful we have to be in our experiments and in our conclusions,” Valentini cautions. “While this may seem like a step back in the search for Majorana zero modes, it actually is a crucial step forward in understanding nanowires and their experimental signals. This finding shows that the cycle of discovery and critical examination among international peers is central to the advancement of scientific knowledge.”

  • Using AI to predict 3D printing processes

    Additive manufacturing has the potential to enable on-demand creation of parts and products in manufacturing, automotive engineering, and even outer space. However, it’s a challenge to know in advance how a 3D-printed object will perform, now and in the future.
    Physical experiments — especially for metal additive manufacturing (AM) — are slow and costly. Even modeling these systems computationally is expensive and time-consuming.
    “The problem is multi-phase and involves gas, liquids, solids, and phase transitions between them,” said University of Illinois Ph.D. student Qiming Zhu. “Additive manufacturing also has a wide range of spatial and temporal scales. This has led to large gaps between the physics that happens on the small scale and the real product.”
    Zhu, Zeliang Liu (a software engineer at Apple), and Jinhui Yan (professor of Civil and Environmental Engineering at the University of Illinois), are trying to address these challenges using machine learning. They are using deep learning and neural networks to predict the outcomes of complex processes involved in additive manufacturing.
    “We want to establish the relationship between processing, structure, properties, and performance,” Zhu said.
    Current neural network models need large amounts of data for training. But in the additive manufacturing field, obtaining high-fidelity data is difficult, according to Zhu. To reduce the need for data, Zhu and Yan are pursuing “physics-informed neural networks,” or PINNs.
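    The core idea of a PINN is to add the residual of the governing equation to the training loss, so the model is penalized for violating the physics even where no data exists. The sketch below shows that physics term for a toy ODE, du/dt = -u with u(0) = 1; a real PINN would evaluate the residual of a neural network’s output via automatic differentiation, whereas here finite differences on candidate functions make the idea concrete.

    ```python
    import numpy as np

    def physics_loss(u, t):
        """Mean squared residual of du/dt + u = 0, plus a penalty for
        violating the initial condition u(0) = 1."""
        dudt = np.gradient(u, t)        # finite-difference derivative
        residual = dudt + u             # zero iff the ODE is satisfied
        return np.mean(residual**2) + (u[0] - 1.0)**2

    t = np.linspace(0.0, 2.0, 201)
    good = physics_loss(np.exp(-t), t)      # true solution: near-zero loss
    bad = physics_loss(1.0 - 0.5 * t, t)    # wrong guess: large loss
    ```

    Training then minimizes this physics term (plus an ordinary data-fit term on whatever sparse measurements are available), which is how PINNs trade expensive experimental data for knowledge of the governing equations.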

  • Novel microscopy method provides look into future of cell biology

    What if a microscope allowed us to explore the 3D microcosm of blood vessels, nerves, and cancer cells instantaneously in virtual reality? What if it could provide views from multiple directions in real time without physically moving the specimen and worked up to 100 times faster than current technology?
    UT Southwestern scientists collaborated with colleagues in England and Australia to build and test a novel optical device that converts commonly used microscopes into multiangle projection imaging systems. The invention, described in an article in today’s Nature Methods, could open new avenues in advanced microscopy, the researchers say.
    “It is a completely new technology, although the theoretical foundations for it can be found in old computer science literature,” says corresponding author Reto Fiolka, Ph.D. Both he and co-author Kevin Dean, Ph.D., are assistant professors of cell biology and in the Lyda Hill Department of Bioinformatics at UT Southwestern.
    “It is as if you are holding the biological specimen with your hand, rotating it, and inspecting it, which is an incredibly intuitive way to interact with a sample. By rapidly imaging the sample from two different perspectives, we can interactively visualize the sample in virtual reality on the fly,” says Dean, director of the UTSW Microscopy Innovation Laboratory, which collaborates with researchers across campus to develop custom instruments that leverage advances in light microscopy.
    Currently, acquiring 3D-image information from a microscope requires a data-intensive process, in which hundreds of 2D images of the specimen are assembled into a so-called image stack. To visualize the data, the image stack is then loaded into a graphics software program that performs computations to form two-dimensional projections from different viewing perspectives on a computer screen, the researchers explain.
    “Those two steps require a lot of time and may need a very powerful and expensive computer to interact with the data,” Fiolka says.
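    The software projection step that the new optical device replaces can be sketched in a few lines. A maximum-intensity projection collapses one axis of the 3D stack into a 2D view; the synthetic volume below, a bright blob in a noisy background, is an illustrative stand-in for a real image stack.

    ```python
    import numpy as np

    # Synthetic 3D image stack (z, y, x): Poisson background noise
    # plus one bright "cell" to act as the specimen.
    rng = np.random.default_rng(0)
    stack = rng.poisson(2.0, size=(64, 128, 128)).astype(float)
    stack[30:34, 60:68, 60:68] += 50.0

    # Maximum-intensity projections: each collapses one axis into a
    # 2D image, giving a view of the volume from that direction.
    top_view = stack.max(axis=0)    # project along z -> (y, x) image
    side_view = stack.max(axis=1)   # project along y -> (z, x) image
    ```

    Computing many such views from a multi-gigabyte stack is what makes the conventional pipeline slow; the device described here forms equivalent projections optically, before any stack is ever recorded.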