More stories

  • Making computer servers worldwide more climate friendly

    An elegant new algorithm developed by Danish researchers can significantly reduce the resource consumption of the world’s computer servers. Computer servers are as taxing on the climate as global air traffic combined, thereby making the green transition in IT an urgent matter. The researchers, from the University of Copenhagen, expect major IT companies to deploy the algorithm immediately.
    One of the flipsides of our runaway internet usage is its impact on climate due to the massive amount of electricity consumed by computer servers. Current CO2 emissions from data centres are as high as from global air traffic combined — with emissions expected to double within just a few years.
    Only a handful of years have passed since Professor Mikkel Thorup was among a group of researchers behind an algorithm that addressed part of this problem by producing a groundbreaking recipe to streamline computer server workflows. Their work saved energy and resources. Tech giants including Vimeo and Google enthusiastically implemented the algorithm in their systems, with online video platform Vimeo reporting that the algorithm had reduced their bandwidth usage by a factor of eight.
    Now, Thorup and two fellow UCPH researchers have perfected the already clever algorithm, making it possible to address a fundamental problem in computer systems — the fact that some servers become overloaded while other servers have capacity left — many times faster than today.
    “We have found an algorithm that removes one of the major causes of overloaded servers once and for all. Our initial algorithm was a huge improvement over the way industry had been doing things, but this version is many times better and reduces resource usage to the greatest extent possible. Furthermore, it is free to use for all,” says Professor Thorup of the University of Copenhagen’s Department of Computer Science, who developed the algorithm alongside department colleagues Anders Aamand and Jakob Bæk Tejs Knudsen.
    Soaring internet traffic
    The algorithm addresses the problem of servers becoming overloaded as they receive more requests from clients than they have the capacity to handle. This happens as users pile in to watch a certain Vimeo video or Netflix film. As a result, systems often need to shift clients around many times to achieve a balanced distribution among servers.
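    The broad idea behind this line of work — consistent hashing with a cap on how much load any one server may take — can be sketched in a few lines. The sketch below is a simplified illustration with made-up server and client names, not the researchers’ actual algorithm:

        import bisect
        import hashlib

        def ring_hash(key: str) -> int:
            # Map any key (server or client name) to a point on the hash ring.
            return int(hashlib.md5(key.encode()).hexdigest(), 16)

        class BoundedLoadRing:
            """Consistent hashing with a load cap: each client goes to the first
            server clockwise from its hash position that still has spare capacity."""

            def __init__(self, servers, capacity):
                self.capacity = capacity
                self.load = {s: 0 for s in servers}
                self.ring = sorted((ring_hash(s), s) for s in servers)
                self.points = [p for p, _ in self.ring]

            def assign(self, client: str) -> str:
                start = bisect.bisect(self.points, ring_hash(client)) % len(self.ring)
                for step in range(len(self.ring)):
                    _, server = self.ring[(start + step) % len(self.ring)]
                    if self.load[server] < self.capacity:  # skip servers that are full
                        self.load[server] += 1
                        return server
                raise RuntimeError("all servers are at capacity")

        # Example: five servers, each capped at three concurrent clients.
        ring = BoundedLoadRing([f"server-{i}" for i in range(5)], capacity=3)
        assignments = {f"client-{c}": ring.assign(f"client-{c}") for c in range(12)}
        print(assignments)

    Because the cap is enforced at assignment time, no server ever exceeds its capacity, and a client is only moved along the ring when the server it would naturally map to is already full.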

  • New report aims to improve VR use in healthcare education

    A new report that could help improve how immersive technologies such as Virtual Reality (VR) and Augmented Reality (AR) are used in healthcare education and training has been published with significant input from the University of Huddersfield.
    Professor David Peebles, Director of the University’s Centre for Cognition and Neuroscience, and Huddersfield PhD graduate Matthew Pears contributed to the report — ‘Immersive technologies in healthcare training and education: Three principles for progress’ — recently published by the University of Leeds with input from a range of academics, technologists and health professionals.
    The principles have also been expanded upon in a letter to the prestigious journal BMJ Simulation and Technology Enhanced Learning.
    The Huddersfield contribution to the report stems from research conducted over several years, which involved another former Huddersfield PhD researcher, Yeshwanth Pulijala, and Professor Eunice Ma, now with Falmouth University.
    “Yeshwanth had an interest in technology and education, and in using VR for dentistry training. Matthew was looking at soft skills and situation awareness, which could be applied to investigating how dentists were able to keep a track of what was going on around them. They were similar subjects, although with different emphases, and so it seemed a natural area for collaboration.”
    With only a relatively small number of dental schools in the UK, the quartet visited seven dental schools in India in early 2017, with support from travel grants from Santander Bank, to test their VR-based training materials on students. The experience gained from that visit contributed to both researchers’ PhDs, and ultimately led to the involvement of Professor Peebles and Matthew Pears in the new report.
    The report argues for greater standardisation of how to use immersive technologies in healthcare training and education. As Professor Peebles explains, “It’s about developing a set of principles and guidelines for the use of immersive technology in medical treatment. Immersive technology is becoming increasingly popular and, as the technology is advancing, it’s becoming clear that there is great potential to make training more accessible and effective.
    “It is important, however, that research is driven by the needs of the user and existing evidence rather than by the technology. Rather than thinking ‘we have a new bit of VR or AR kit, what can we do with it?’, we should be looking at the problem that needs solving — what are the learning needs, and how do we use the technology to meet them?
    “Developing immersive training materials can be very time-consuming and difficult to evaluate properly. Getting surgeons and medical students to take time out to test your VR training is challenging. In our case we were lucky to have a surgeon, Professor Ashraf Ayoub, a Professor of Oral and Maxillofacial Surgery at the University of Glasgow, who granted us permission to film a surgical procedure that was then transformed into a 3D environment to train students about situation awareness while in the operating theatre.”
    Professor Peebles hopes the work so far will provide a basis for more investigations that could help get the most from the potential that VR and immersive technology have to offer.
    “Conducting these kinds of studies is difficult to do well, in particular getting sufficient quantitative data that allows you to rigorously evaluate them.
    “As the report recommends, more collaboration is required to pool technological and intellectual resources, to try to develop a set of standards and a community that works together to boost and improve research in this area.”
    Story Source:
    Materials provided by University of Huddersfield. Note: Content may be edited for style and length.

  • Face off for best ID checkers

    Psychologists from UNSW Sydney have developed a new face identification ability test that will help find facial recognition experts for a variety of police and government agencies, and for tasks such as contact tracing.
    The Glasgow Face Matching Test 2 (GFMT2) is designed to identify high-performing individuals known as super-recognisers, who have an extraordinary ability to memorise and recall faces.
    The type of professional roles that involve face identification and that could benefit from the test include visa processors, passport issuers, border control officers, police and contact tracers, as well as security staff in private industry.
    “Being able to recognise faces of friends and family is a skill that most of us take for granted,” Scientia Fellow Dr David White from UNSW Science’s School of Psychology says. “But comparing images of unfamiliar faces and deciding if they show the same person is a task that most of our participants find challenging, even passport officers with many years of experience in the task. A major finding in our field in recent years has been that some people are much better than others at identifying faces from photographs. This is an insight that has changed the way staff are recruited, for example passport officers and police officers.”
    The lead investigator at UNSW Sydney’s Face Research Lab says the GFMT2 test is valuable for identifying super-recognisers, who have been used by the London Metropolitan Police Service in criminal investigations, and famously in the alleged poisoning of former Russian spies in Salisbury.
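    The article does not describe how the GFMT2 is scored, but the general idea of screening for super-recognisers on a same/different face-matching task can be illustrated with a toy sketch (the participant IDs and responses below are made up):

        from collections import defaultdict

        # Made-up trial records: (participant, answered_correctly) for a series of
        # "same person or different people?" judgements on pairs of face images.
        trials = [
            ("p01", True), ("p01", True), ("p01", True), ("p01", False),
            ("p02", True), ("p02", False), ("p02", False), ("p02", True),
            ("p03", True), ("p03", True), ("p03", True), ("p03", True),
        ]

        def accuracy_per_participant(records):
            correct, total = defaultdict(int), defaultdict(int)
            for participant, ok in records:
                total[participant] += 1
                correct[participant] += ok
            return {p: correct[p] / total[p] for p in total}

        scores = accuracy_per_participant(trials)
        mean_score = sum(scores.values()) / len(scores)
        # Flag anyone well above the group average as a candidate super-recogniser.
        candidates = [p for p, acc in scores.items() if acc >= mean_score + 0.2]
        print(scores)
        print("candidate super-recognisers:", candidates)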

  • Smart technology is not making us dumber

    There are plenty of negatives associated with smart technology — tech neck, texting and driving, blue light rays — but there is also a positive: the digital age is not making us stupid, says University of Cincinnati social/behavioral expert Anthony Chemero.
    “Despite the headlines, there is no scientific evidence that shows that smartphones and digital technology harm our biological cognitive abilities,” says the UC professor of philosophy and psychology who recently co-authored a paper stating such in Nature Human Behaviour.
    In the paper, Chemero and colleagues at the University of Toronto’s Rotman School of Management expound on the evolution of the digital age, explaining how smart technology supplements thinking, thus helping us to excel.
    “What smartphones and digital technology seem to do instead is to change the ways in which we engage our biological cognitive abilities,” Chemero says, adding “these changes are actually cognitively beneficial.”
    For example, he says, your smart phone knows the way to the baseball stadium so that you don’t have to dig out a map or ask for directions, which frees up brain energy to think about something else. The same holds true in a professional setting: “We’re not solving complex mathematical problems with pen and paper or memorizing phone numbers in 2021.”
    Computers, tablets and smartphones, he says, function as auxiliaries, serving as tools that are good at memorization and calculation, and at storing information and presenting it when you need it.
    Additionally, smart technology augments decision-making in ways that we would be hard pressed to accomplish on our own, says the paper’s lead author Lorenzo Cecutti, a PhD candidate at the University of Toronto. Using GPS technology on our phones, he says, can not only help us get where we are going, but also lets us choose a route based on traffic conditions. “That would be a challenging task when driving around in a new city.”
    Chemero adds: “You put all this technology together with a naked human brain and you get something that’s smarter…and the result is that we, supplemented by our technology, are actually capable of accomplishing much more complex tasks than we could with our un-supplemented biological abilities.”
    While there may be other consequences to smart technology, “making us stupid is not one of them,” says Chemero.
    Story Source:
    Materials provided by University of Cincinnati. Original written by Angela Koenig. Note: Content may be edited for style and length.

  • Insect-sized robot navigates mazes with the agility of a cheetah

    Many insects and spiders get their uncanny ability to scurry up walls and walk upside down on ceilings with the help of specialized sticky footpads that allow them to adhere to surfaces in places where no human would dare to go.
    Engineers at the University of California, Berkeley, have used the principle behind some of these footpads, called electrostatic adhesion, to create an insect-scale robot that can swerve and pivot with the agility of a cheetah, giving it the ability to traverse complex terrain and quickly avoid unexpected obstacles.
    The robot is constructed from a thin, layered material that bends and contracts when an electric voltage is applied. In a 2019 paper, the research team demonstrated that this simple design can be used to create a cockroach-sized robot that can scurry across a flat surface at a rate of 20 body lengths per second, or about 1.5 miles per hour — nearly the speed of living cockroaches themselves, and the fastest relative speed of any insect-sized robot.
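    As a rough sanity check of those figures (the robot’s exact body length is not given here, so a cockroach-scale length of about 3 cm is assumed):

        # Rough conversion of the reported relative speed to absolute speed.
        body_length_m = 0.03                    # assumed body length of ~3 cm
        speed_body_lengths_per_s = 20           # reported: 20 body lengths per second
        speed_m_per_s = body_length_m * speed_body_lengths_per_s
        speed_mph = speed_m_per_s / 0.44704     # 1 mph = 0.44704 m/s
        print(f"{speed_m_per_s:.2f} m/s ≈ {speed_mph:.1f} mph")  # ~0.60 m/s ≈ 1.3 mph

    With a body length closer to 3.5 cm, the same 20 body lengths per second works out to roughly the 1.5 miles per hour quoted above.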
    In a new study, the research team added two electrostatic footpads to the robot. Applying a voltage to either of the footpads increases the electrostatic force between the footpad and a surface, making that footpad stick more firmly to the surface and forcing the rest of the robot to rotate around the foot.
    The two footpads give operators full control over the trajectory of the robot, and allow the robot to make turns with a centripetal acceleration that exceeds that of most insects.
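    A minimal control sketch of that steering principle, assuming a hypothetical driver interface for the two footpads (the actual hardware, drive voltages and control electronics are not described in this article, and the voltage used below is only a placeholder):

        import time

        def set_footpad_voltage(side: str, volts: float) -> None:
            """Hypothetical driver call: energising a footpad increases the
            electrostatic adhesion on that side, so the robot pivots around it."""
            print(f"footpad {side}: {volts:.0f} V")

        def turn(direction: str, duration_s: float, v_on: float = 300.0) -> None:
            # Stick the footpad on the inside of the turn; leave the other one free.
            inside, outside = ("left", "right") if direction == "left" else ("right", "left")
            set_footpad_voltage(inside, v_on)
            set_footpad_voltage(outside, 0.0)
            time.sleep(duration_s)            # keep pivoting around the stuck foot
            set_footpad_voltage(inside, 0.0)  # release to resume straight running

        turn("left", 0.05)   # brief left pivot
        turn("right", 0.05)  # brief right pivot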
    “Our original robot could move very, very fast, but we could not really control whether the robot went left or right, and a lot of the time it would move randomly, because if there was a slight difference in the manufacturing process — if the robot was not symmetrical — it would veer to one side,” said Liwei Lin, a professor of mechanical engineering at UC Berkeley. “In this work, the major innovation was adding these footpads that allow it to make very, very fast turns.”
    To demonstrate the robot’s agility, the research team filmed the robot navigating Lego mazes while carrying a small gas sensor and swerving to avoid falling debris. Because of its simple design, the robot can also survive being stepped on by a 120-pound human.
    Small, robust robots like these could be ideal for conducting search and rescue operations or investigating other hazardous situations, such as scoping out potential gas leaks, Lin said. While the team demonstrated most of the robot’s skills while it was “tethered,” or powered and controlled through a small electrical wire, they also created an “untethered” version that can operate on battery power for up to 19 minutes and 31 meters while carrying a gas sensor.
    “One of the biggest challenges today is making smaller scale robots that maintain the power and control of bigger robots,” Lin said. “With larger-scale robots, you can include a big battery and a control system, no problem. But when you try to shrink everything down to a smaller and smaller scale, the weight of those elements becomes difficult for the robot to carry and the robot generally moves very slowly. Our robot is very fast, quite strong, and requires very little power, allowing it to carry sensors and electronics while also carrying a battery.”
    Video: https://www.youtube.com/watch?v=TmRol48_DKs
    Story Source:
    Materials provided by University of California – Berkeley. Original written by Kara Manke. Note: Content may be edited for style and length.

  • Researchers explore how children learn language

    Small children learn language at a pace far faster than teenagers or adults. One explanation for this learning advantage comes not from differences between children and adults, but from the differences in the way that people talk to children and adults.
    For the first time, a team of researchers developed a method to experimentally evaluate how parents use what they know about their children’s language when they talk to them. They found that parents have extremely precise models of their children’s language knowledge, and use these models to tune the language they use when speaking to them. The results are available in an advance online publication of the journal Psychological Science.
    “We have known for years that parents talk to children differently than to other adults in a lot of ways, for example simplifying their speech, reduplicating words and stretching out vowel sounds,” said Daniel Yurovsky, assistant professor in psychology at Carnegie Mellon University. “This stuff helps young kids get a toehold into language, but we didn’t know whether parents change the way they talk as children are acquiring language, giving children language input that is ‘just right’ for learning the next thing.”
    Adults tend to speak to children more slowly and at a higher pitch. They also use more exaggerated enunciation, repetition and simplified language structure. Adults also pepper their communication with questions to gauge the child’s comprehension. As the child’s language fluency increases, the sentence structure and complexity used by adults increases.
    Yurovsky likens this to the progression a student follows when learning math in school.
    “When you go to school, you start with algebra and then take plane geometry before moving on to calculus,” said Yurovsky. “People talk to kids using the same kind of structure without thinking about it. They are tracking how much their child knows about language and modifying how they speak so that children understand them.”
    Yurovsky and his team sought to understand exactly how caregivers tune their interactions to match their child’s speech development. The team developed a game where parents helped their children to pick a specific animal from a set of three, a game that toddlers (aged 15 to 23 months) and their parents play routinely in their daily lives. Half of the animals in the matching game were animals that children typically learn before age 2 (e.g. cat, cow), and the other half were animals that are typically learned later (e.g. peacock, leopard).
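    A small sketch of how trials for such a game might be assembled (the word lists and trial structure below are illustrative, not the study’s actual stimuli):

        import random

        # Illustrative animal words split by typical age of acquisition.
        early_learned = ["cat", "cow", "dog", "duck"]
        late_learned = ["peacock", "leopard", "hippo", "lobster"]

        def make_trial(rng: random.Random) -> dict:
            """One trial: a target animal plus two distractors, shown as three choices."""
            target = rng.choice(early_learned + late_learned)
            distractors = rng.sample(
                [a for a in early_learned + late_learned if a != target], 2)
            choices = [target] + distractors
            rng.shuffle(choices)
            return {"target": target, "choices": choices}

        rng = random.Random(0)
        for trial in (make_trial(rng) for _ in range(3)):
            print(trial)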

  • Skin in the game: Transformative approach uses the human body to recharge smartwatches

    As smartwatches are increasingly able to monitor the vital signs of health, including what’s going on when we sleep, a problem has emerged: those wearable, wireless devices are often disconnected from our bodies overnight, charging at the bedside.
    “Quality of sleep and its patterns contain a lot of important information about patients’ health conditions,” says Sunghoon Ivan Lee, assistant professor in the University of Massachusetts Amherst College of Information and Computer Sciences and director of the Advanced Human Health Analytics Laboratory.
    But that information can’t be tracked on smartwatches if the wearable devices are being charged as users sleep, which prior research has shown is frequently the case. Lee adds, “The main reason users discontinue the long-term use of wearable devices is because they have to frequently charge the on-device battery.”
    Pondering this problem, Lee brainstormed with UMass Amherst wearable computing engineer Jeremy Gummeson to find a solution to continuously recharge these devices on the body so they can monitor the user’s health 24/7.
    The scientists’ aha moment came when they realized “human skin is a conductible material,” Lee recalls. “Why can’t we instrument daily objects, such as the office desk, chair and car steering wheel, so they can seamlessly transfer power through human skin to charge up a watch or any wearable sensor while the users interact with them? Like, using human skin as a wire.
    “Then we can motivate people to do things like sleep tracking because they never have to take their watch off to charge it,” he adds.

  • Understanding potential topological quantum bits

    Quantum computers promise great advances in many fields — from cryptography to the simulation of protein folding. Yet, which physical system works best to build the underlying quantum bits is still an open question. Unlike regular bits in your computer, these so-called qubits can not only take the values 0 and 1, but also mixtures of the two. While this potentially makes them very useful, it also makes them very unstable.
    One approach to solving this problem bets on topological qubits, which encode the information in their spatial arrangement. That could provide a more stable and error-resistant basis for computation than other setups. The problem is that no one has ever definitively found a topological qubit yet.
    An international team of researchers from Austria, Copenhagen, and Madrid, centred around Marco Valentini from the Nanoelectronics group at IST Austria, has now examined a setup which was predicted to produce the so-called Majorana zero modes — the core ingredient for a topological qubit. They found that a seemingly valid signal for such modes can in fact be a false flag.
    Half of an Electron
    The experimental setup is composed of a tiny wire just a few hundred nanometers — a few hundred millionths of a millimeter — long, grown by Peter Krogstrup from Microsoft Quantum and the University of Copenhagen. These appropriately named nanowires form a free-floating connection between two metal conductors on a chip. They are coated with a superconducting material that loses all electrical resistance at very low temperatures. The coating covers the wire except for a tiny part left bare at one end, which forms a crucial part of the setup: the junction. The whole contraption is then exposed to a magnetic field.
    The scientists’ theories predicted that Majorana zero modes — the basis for the topological qubit they were looking for — should appear in the nanowire. These Majorana zero modes are a strange phenomenon, because they started out as a mathematical trick to describe one electron in the wire as composed of two halves. Usually, physicists do not think of electrons as something that can be split, but using this nanowire setup it should have been possible to separate these “half-electrons” and to use them as qubits.
    “We were excited to work on this very promising material platform,” explains Marco Valentini, who joined IST Austria as an intern before becoming a PhD student in the Nanoelectronics group. “What we expected to see was the signal of Majorana zero modes in the nanowire, but we found nothing. First, we were confused, then frustrated. Eventually, and in close collaboration with our colleagues from the Theory of Quantum Materials and Solid State Quantum Technologies group in Madrid, we examined the setup, and found out what was wrong with it.”
    A False Flag
    After attempting to find the signatures of the Majorana zero modes, the researchers began to vary the nanowire setup to check whether any effects from its architecture were disturbing their experiment. “We did several experiments on different setups to find out what was going wrong,” Valentini explains. “It took us a while, but when we doubled the length of the uncoated junction from a hundred nanometers to two hundred, we found our culprit.”
    When the junction was big enough, the following happened: the exposed inner nanowire formed a so-called quantum dot — a tiny speck of matter that shows special quantum mechanical properties due to its confined geometry. The electrons in this quantum dot could then interact with the ones in the coating superconductor next to it, and thereby mimic the signal of the “half-electrons” — the Majorana zero modes — which the scientists were looking for.
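    For context (general background rather than a detail from this article): the experimental fingerprint usually sought for a Majorana zero mode is a peak in the junction’s conductance at zero bias voltage, ideally quantized at G = 2e²/h, roughly 77.5 microsiemens. The difficulty described here is that states formed on such an accidental quantum dot, coupled to the neighbouring superconductor, can produce a very similar zero-bias signal, which is why the two are easy to confuse.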
    “This unexpected conclusion came after we established the theoretical model of how the quantum dot interacts with the superconductor in a magnetic field and compared the experimental data with detailed simulations performed by Fernando Peñaranda, a PhD student in the Madrid team,” says Valentini.
    “Mistaking this mimicking signal for a Majorana zero mode shows us how careful we have to be in our experiments and in our conclusions,” Valentini cautions. “While this may seem like a step back in the search for Majorana zero modes, it actually is a crucial step forward in understanding nanowires and their experimental signals. This finding shows that the cycle of discovery and critical examination among international peers is central to the advancement of scientific knowledge.”