More stories

  • Faster path planning for rubble-roving robots

    Robots that need to use their arms to make their way across treacherous terrain just got a speed upgrade with a new path planning approach developed by University of Michigan researchers.
    The new algorithm speeds up path planning for robots that use arm-like appendages to maintain balance on treacherous terrain such as disaster areas or construction sites. The improved path planning algorithm found successful paths three times as often as standard algorithms, while needing much less processing time.
    “In a collapsed building or on very rough terrain, a robot won’t always be able to balance itself and move forward with just its feet,” said Dmitry Berenson, associate professor of electrical and computer engineering and core faculty at the Robotics Institute.
    “You need new algorithms to figure out where to put both feet and hands. You need to coordinate all these limbs together to maintain stability, and what that boils down to is a very difficult problem.”
    The research enables robots to determine how difficult the terrain is before calculating a successful path forward, which might include bracing against a wall with one or two hands while taking the next step forward.
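    The article does not include code, but the idea of grading terrain difficulty before searching for a path can be sketched briefly. The Python toy below is our own illustration, not the U-M algorithm: a best-first grid search in which a cell’s difficulty raises its traversal cost, and cells above a hypothetical hand_threshold incur an extra penalty reflecting the cost of coordinating hand contacts.

    ```python
    import heapq

    # Illustrative sketch (not the U-M planner): grade each terrain cell by
    # difficulty first, then run a best-first search whose step cost grows
    # with difficulty. Cells above hand_threshold are assumed to require
    # bracing with the hands, which adds cost rather than blocking the cell.
    def plan_path(difficulty, start, goal, hand_threshold=0.7):
        rows, cols = len(difficulty), len(difficulty[0])
        frontier = [(0.0, start, [start])]
        visited = set()
        while frontier:
            cost, cell, path = heapq.heappop(frontier)
            if cell == goal:
                return cost, path
            if cell in visited:
                continue
            visited.add(cell)
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols:
                    step = 1.0 + difficulty[nr][nc]
                    if difficulty[nr][nc] > hand_threshold:
                        step += 2.0  # extra cost of coordinating hand contacts
                    heapq.heappush(frontier, (cost + step, (nr, nc), path + [(nr, nc)]))
        return None  # no feasible path found

    terrain = [[0.1, 0.2, 0.9],
               [0.3, 0.8, 0.4],
               [0.2, 0.3, 0.1]]
    print(plan_path(terrain, (0, 0), (2, 2)))
    ```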

  • Gender, personality influence use of interactive tools online

    People’s personality — such as how extroverted or introverted they are — and their gender can be linked to how they interact online, and whether they prefer interacting with a system rather than with other people.
    In a study, a team of researchers found that people considered websites more interactive if they had tools to facilitate communication between users, often referred to as computer-mediated communication, or CMC. However, extroverted men also considered sites with tools that let them interact with the computer itself, known as human-computer interaction, or HCI, to be more interactive, whereas extroverted women viewed sites with CMC tools as more interactive.
    “When you go to a website — for example, the Google search engine — you’re essentially engaging in HCI, which is different from CMC, where you’re communicating with other humans through computer technology,” said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory. “When we talk about HCI here, it’s really about the degree to which the system or the machine allows us to interact with it, and it includes everything from how we swipe and tap on our mobile devices, to how we try to access different information through links on a website. When we talk about CMC, it is about the tools to chat with somebody else, like a customer service agent through an online portal, or when we’re having a video chat via Zoom, for example.”
    Knowing who your web visitors are and what engages them is an important part of creating good user experiences, added Sundar, who is also an affiliate of the Institute for Computational and Data Sciences. “For developers, it’s useful to know who will appreciate what types of interactivity you have to offer, or what kind of interactivity you should offer to which kinds of people.
    “These are actually quite important business decisions, because they cost a lot of money and have a lot of backend consequences,” said Sundar. For example, on an e-commerce site, which may be primarily trafficked by women, the findings suggest that efforts should be made to provide ways to talk to other people, such as chat tools, rather than simply tools to interact with the computer, such as being able to rotate an image of a product in all directions.
    Real-world behaviors in the virtual world
    When people use websites, many of the habits and behaviors they have adopted in real life influence their behaviors online, said Yan Huang, assistant professor of integrated strategic communication in the Jack J. Valenti School of Communication at the University of Houston and first author of the paper. The study is in line with that, she added, demonstrating that people who are extroverted in real life also like to interact in virtual settings.

  • New algorithm can help improve cellular materials design

    New research published in Scientific Reports has revealed that a simple but robust algorithm can help engineers improve the design of cellular materials used in a wide variety of applications, ranging from defence and biomedical devices to smart structures and the aerospace sector.
    The way in which cellular materials will perform can be uncertain, so calculations that help engineers predict how a particular design will react to a given set of loads, conditions and constraints can help optimise their design and subsequent performance.
    The research collaborators at the Faculty of Science and Engineering, Swansea University, Indian Institute of Technology Delhi and Brown University, USA, found that running specialised calculations can help engineers to find the optimum micro-structure for cellular materials that are used for a wide range of purposes, from advanced aerospace applications to stents used for blocked arteries.
    Research author Dr Tanmoy Chatterjee said: “This paper is the result of one year of sustained collaborative research. The results illustrate that uncertainties in the micro-scale can drastically impact the mechanical performance of metamaterials. Our formulation achieved novel microstructure designs by employing computational algorithms which follow the evolutionary principles of nature.”
    Co-author Professor Sondipon Adhikari explains:
    “This approach allowed us to achieve extreme mechanical properties involving negative Poisson’s ratio (auxetic metamaterial) and elastic modulus. The ability to manipulate extreme mechanical properties through novel optimal micro-architecture designs will open up new possibilities for manufacturing and applications.”
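    To make the quoted idea of algorithms that “follow the evolutionary principles of nature” concrete, here is a minimal evolutionary-optimization loop in Python. Everything in it is a toy assumption rather than the authors’ method: toy_cost stands in for a real micro-mechanical model, and the target Poisson’s ratio of -0.3 is purely illustrative.

    ```python
    import random

    # Toy objective: pretend the mean of the microstructure parameters maps
    # to a Poisson's ratio, and score designs by distance from a target.
    # A real study would evaluate a finite-element or homogenization model.
    def toy_cost(params, target=-0.3):
        predicted_ratio = sum(params) / len(params) - 0.5  # stand-in model
        return abs(predicted_ratio - target)

    def evolve(pop_size=30, n_params=6, generations=100, mutation=0.05):
        population = [[random.random() for _ in range(n_params)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=toy_cost)
            survivors = population[: pop_size // 2]  # selection
            children = [[g + random.gauss(0, mutation) for g in parent]  # mutation
                        for parent in survivors]
            population = survivors + children
        return min(population, key=toy_cost)

    best = evolve()
    print("best parameters:", [round(g, 3) for g in best])
    print("cost:", round(toy_cost(best), 4))
    ```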
    Story Source:
    Materials provided by Swansea University.

  • Progress in algorithms makes small, noisy quantum computers viable

    As reported in a new article in Nature Reviews Physics, instead of waiting for fully mature quantum computers to emerge, Los Alamos National Laboratory and other leading institutions have developed hybrid classical/quantum algorithms to extract the most performance — and potentially quantum advantage — from today’s noisy, error-prone hardware. Known as variational quantum algorithms, they use quantum devices to manipulate quantum systems while shifting much of the workload to classical computers to let them do what they currently do best: solve optimization problems.
    “Quantum computers have the promise to outperform classical computers for certain tasks, but on currently available quantum hardware they can’t run long algorithms. They have too much noise as they interact with the environment, which corrupts the information being processed,” said Marco Cerezo, a physicist specializing in quantum computing, quantum machine learning, and quantum information at Los Alamos and a lead author of the paper. “With variational quantum algorithms, we get the best of both worlds. We can harness the power of quantum computers for tasks that classical computers can’t do easily, then use classical computers to complement the computational power of quantum devices.”
    Current noisy, intermediate-scale quantum computers have between 50 and 100 qubits, lose their “quantumness” quickly, and lack error correction, which would require many more qubits. Since the late 1990s, however, theoreticians have been developing algorithms designed to run on an idealized large, error-corrected, fault-tolerant quantum computer.
    “We can’t implement these algorithms yet because they give nonsense results or they require too many qubits. So people realized we needed an approach that adapts to the constraints of the hardware we have — an optimization problem,” said Patrick Coles, a theoretical physicist developing algorithms at Los Alamos and the senior lead author of the paper.
    “We found we could turn all the problems of interest into optimization problems, potentially with quantum advantage, meaning the quantum computer beats a classical computer at the task,” Coles said. Those problems include simulations for material science and quantum chemistry, factoring numbers, big-data analysis, and virtually every application that has been proposed for quantum computers.
    The algorithms are called variational because the optimization process varies the algorithm on the fly, as a kind of machine learning. It changes parameters and logic gates to minimize a cost function, which is a mathematical expression that measures how well the algorithm has performed the task. The problem is solved when the cost function reaches its lowest possible value.
    In an iterative loop, the quantum computer estimates the cost function, then passes that result back to the classical computer. The classical computer adjusts the input parameters and sends them back to the quantum computer, which runs the optimization again.
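    The division of labor is easy to sketch in code. The Python loop below is a minimal illustration of the variational pattern, not any specific algorithm from the review: estimate_cost is a toy stand-in for the expectation value a quantum processor would return for a given set of circuit parameters, and a standard classical optimizer (COBYLA) plays the outer-loop role.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy stand-in for the quantum step: on real hardware this would prepare
    # a parameterized circuit, measure it, and return an estimated cost.
    def estimate_cost(params):
        return np.sum(np.sin(params) ** 2) + 0.1 * np.sum(params ** 2)

    # Classical outer loop: vary the parameters to drive the cost down.
    rng = np.random.default_rng(seed=0)
    initial_params = rng.uniform(-np.pi, np.pi, size=4)
    result = minimize(estimate_cost, initial_params, method="COBYLA")

    print("optimized parameters:", result.x)
    print("final cost:", result.fun)
    ```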
    The review article is meant to be a comprehensive introduction and pedagogical reference for researchers entering this nascent field. In it, the authors discuss all the applications of these algorithms and how they work, and cover the challenges and pitfalls and how to address them. Finally, the article looks to the future, considering the best opportunities for achieving quantum advantage on the computers that will be available in the next couple of years.
    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory.

  • Best of both worlds — Combining classical and quantum systems to meet supercomputing demands

    One of the most interesting phenomena in quantum mechanics is “quantum entanglement.” This phenomenon describes how certain particles are inextricably linked, such that their states can only be described with reference to each other. This particle interaction also forms the basis of quantum computing, which is why, in recent years, physicists have looked for techniques to generate entanglement. However, these techniques confront a number of engineering hurdles, including limitations in creating large numbers of “qubits” (quantum bits, the basic unit of quantum information), the need to maintain extremely low temperatures (around 1 K), and the use of ultrapure materials.
    Surfaces and interfaces are crucial in the formation of quantum entanglement. Unfortunately, electrons confined to surfaces are prone to “decoherence,” a condition in which there is no longer a defined phase relationship between two distinct states. Thus, to obtain stable, coherent qubits, the spin states of surface atoms (or equivalently, protons) must be determined.
    Recently, a team of scientists in Japan, including Prof. Takahiro Matsumoto from Nagoya City University, Prof. Hidehiko Sugimoto from Chuo University, Dr. Takashi Ohhara from the Japan Atomic Energy Agency, and Dr. Susumu Ikeda from High Energy Accelerator Research Organization, recognized the need for stable qubits. By looking at the surface spin states, the scientists discovered an entangled pair of protons on the surface of a silicon nanocrystal.
    Prof. Matsumoto, the lead scientist, outlines the significance of their study, “Proton entanglement has been previously observed in molecular hydrogen and plays an important role in a variety of scientific disciplines. However, the entangled state was found in gas or liquid phases only. Now, we have detected quantum entanglement on a solid surface, which can lay the groundwork for future quantum technologies.” Their pioneering study was published in a recent issue of Physical Review B.
    The scientists studied the spin states using a technique known as “inelastic neutron scattering spectroscopy” to determine the nature of surface vibrations. By modeling these surface atoms as “harmonic oscillators,” they showed the anti-symmetry of the protons. Since the protons were identical (and hence indistinguishable), the oscillator model restricted their possible spin states, resulting in strong entanglement. Compared to the proton entanglement in molecular hydrogen, this entanglement harbored a massive energy difference between its states, ensuring its longevity and stability. Additionally, the scientists theoretically demonstrated a cascade transition of terahertz entangled photon pairs using the proton entanglement.
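    The restriction follows from a textbook exchange argument (our gloss, not a formula quoted from the paper): protons are fermions, so the pair’s total wavefunction must be antisymmetric under exchange. With the spatially symmetric vibrational ground state of the oscillator model, the spin part is forced into the antisymmetric singlet, a maximally entangled state:

    ```latex
    % Spin singlet of the two surface protons, forced by exchange antisymmetry
    \lvert \psi \rangle = \frac{1}{\sqrt{2}} \left( \lvert \uparrow\downarrow \rangle - \lvert \downarrow\uparrow \rangle \right)
    ```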
    The confluence of proton qubits with contemporary silicon technology could result in an organic union of classical and quantum computing platforms, enabling a much larger number of qubits (10^6) than currently available (10^2), and ultra-fast processing for new supercomputing applications. “Quantum computers can handle intricate problems, such as integer factorization and the ‘traveling salesman problem,’ which are virtually impossible to solve with traditional supercomputers. This could be a game-changer in quantum computing with regard to storing, processing, and transferring data, potentially even leading to a paradigm shift in pharmaceuticals, data security, and many other areas,” concludes an optimistic Prof. Matsumoto.
    We could be on the verge of witnessing a technological revolution in quantum computing!
    Story Source:
    Materials provided by Nagoya City University.

  • A mobility-based approach to optimize pandemic lockdown strategies

    A new strategy for modeling the spread of COVID-19 incorporates smartphone-captured data on people’s movements and shows promise for aiding development of optimal lockdown policies. Ritabrata Dutta of Warwick University, U.K., and colleagues present these findings in the open-access journal PLOS Computational Biology.
    Evidence shows that lockdowns are effective in mitigating the spread of COVID-19. However, they come at a high economic cost, and in practice, not everybody follows government guidance on lockdowns. Thus, Dutta and colleagues propose, an optimal lockdown strategy would balance controlling the ongoing COVID-19 pandemic against minimizing the economic costs of lockdowns.
    To help guide such a strategy, the researchers developed new mathematical models that simulate the spread of COVID-19. The models focus on England and France and — using a statistical approach known as approximate Bayesian computation — they incorporate both public health data and data on changes in people’s movements, as captured by Google via Android devices; this mobility data serves as a measure of the effectiveness of lockdown policies.
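    As a concrete illustration of the statistical machinery, here is a deliberately simplified rejection-ABC sketch in Python. The simulator is a toy SIR-style model of our own, not the paper’s model, and a real application would condition on mobility and public health data rather than a synthetic curve.

    ```python
    import numpy as np

    # Toy SIR-style simulator: given a transmission rate beta, produce a
    # synthetic curve of active infections. Purely illustrative.
    def simulate_infections(beta, gamma=0.1, days=60, n=1_000_000, i0=100):
        s, i = n - i0, i0
        curve = []
        for _ in range(days):
            new_inf = beta * s * i / n
            s -= new_inf
            i += new_inf - gamma * i
            curve.append(i)
        return np.array(curve)

    # Rejection ABC: draw parameters from a prior and keep draws whose
    # simulated output lies close to the observed data; the kept draws
    # approximate the posterior distribution over beta.
    def abc_rejection(observed, n_draws=10_000, tolerance=0.1):
        accepted = []
        for _ in range(n_draws):
            beta = np.random.uniform(0.05, 0.5)  # prior draw
            simulated = simulate_infections(beta)
            distance = np.linalg.norm(simulated - observed) / np.linalg.norm(observed)
            if distance < tolerance:
                accepted.append(beta)
        return np.array(accepted)

    observed = simulate_infections(beta=0.25)  # stand-in "observed" data
    posterior = abc_rejection(observed)
    print(f"accepted draws: {posterior.size}, posterior mean beta: {posterior.mean():.3f}")
    ```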
    Then, the researchers demonstrated how their models could be applied to design optimal lockdown strategies for England and France using a mathematical technique called optimal control. They showed that it is possible to design effective lockdown protocols that allow partial reopening of workplaces and schools, while taking into account both public health costs and economic costs. The models can be updated in real time, and they can be adapted to any country for which reliable public health and Google mobility data are available.
    “Our work opens the door to a larger integration between epidemiological models and real-world data to, through the use of supercomputers, determine best public policies to mitigate the effects of a pandemic,” Dutta says. “In a not-so-distant future, policy makers may be able to express certain prioritization criteria, and a computational engine, with an extensive use of different datasets, could determine the best course of action.”
    Next, the researchers plan to refine their country-wide models to work at smaller scales, specifically at the level of each of the 348 local district authorities of the U.K.
    The researchers add, “The integration of big data, epidemiological models and supercomputers can help us design an optimal lockdown strategy in real time, while balancing both public health and economic costs.”
    Story Source:
    Materials provided by PLOS.

  • Is your mobile provider tracking your location? New technology could stop it

    Right now, there is a good chance your phone is tracking your location — even with GPS services turned off. That’s because, to receive service, our phones reveal personal identifiers to cell towers owned by major network operators. This has led to vast and largely unregulated data-harvesting industries based around selling users’ location data to third parties without consent.
    For the first time, researchers at the University of Southern California (USC) Viterbi School of Engineering and Princeton University have found a way to stop this privacy breach using existing cellular networks. The new system, presented at the USENIX Security conference on Aug. 11, protects users’ mobile privacy while providing normal mobile connectivity.
    The new architecture, called “Pretty Good Phone Privacy” or PGPP, decouples phone connectivity from authentication and billing by anonymizing personal identifiers sent to cell towers. The software-based solution, described by the researchers as an “architecture change,” does not alter cellular network hardware.
    “We’ve unwittingly accepted that our phones are tracking devices in disguise, but until now we’ve had no other option — using mobile devices meant accepting this tracking,” said study co-author Barath Raghavan, an assistant professor in computer science at USC. “We figured out how to decouple authentication from connectivity and ensure privacy while maintaining seamless connectivity, and it is all done in software.”
    Decoupling authentication and phone connectivity
    Currently, for your phone to work, the network has to know your location and identify you as a paying customer. As such, both your identity and location data are tracked by the device at all times. Data brokers and major operators have taken advantage of this system to profit off revealing sensitive user data — to date, in the United States, there are no federal laws restricting the use of location data.
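    PGPP’s core move, separating “who is paying” from “who is connecting,” can be pictured with a short Python sketch. This is our simplification, not the published protocol: the real design uses cryptographic blind signatures so that even the issuer cannot link a token to a subscriber, a step this toy version omits.

    ```python
    import hashlib
    import secrets

    # Hypothetical billing backend: verifies that a subscriber has paid, then
    # issues single-use connectivity tokens that carry no subscriber identity.
    class BillingBackend:
        def __init__(self):
            self.valid_tokens = set()

        def issue_token(self, subscriber_in_good_standing=True):
            if not subscriber_in_good_standing:
                raise PermissionError("subscriber not in good standing")
            token = secrets.token_hex(16)
            # Only a hash of the token is stored, with no link to the
            # subscriber, so later use cannot be traced back to a person.
            self.valid_tokens.add(hashlib.sha256(token.encode()).hexdigest())
            return token

        def redeem(self, token):
            digest = hashlib.sha256(token.encode()).hexdigest()
            if digest in self.valid_tokens:
                self.valid_tokens.remove(digest)  # single use
                return True
            return False

    # The connectivity layer checks only token validity, never a persistent
    # identifier such as an IMSI.
    backend = BillingBackend()
    token = backend.issue_token()                    # issued at billing time
    print("attach granted:", backend.redeem(token))  # redeemed anonymously
    print("replay rejected:", backend.redeem(token))
    ```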

  • Toward next-generation brain-computer interface systems

    Brain-computer interfaces (BCIs) are emerging assistive devices that may one day help people with brain or spinal injuries to move or communicate. BCI systems depend on implantable sensors that record electrical signals in the brain and use those signals to drive external devices like computers or robotic prosthetics.
    Most current BCI systems use one or two sensors to sample up to a few hundred neurons, but neuroscientists are interested in systems that are able to gather data from much larger groups of brain cells.
    Now, a team of researchers has taken a key step toward a new concept for a future BCI system — one that employs a coordinated network of independent, wireless microscale neural sensors, each about the size of a grain of salt, to record and stimulate brain activity. The sensors, dubbed “neurograins,” independently record the electrical pulses made by firing neurons and send the signals wirelessly to a central hub, which coordinates and processes the signals.
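    The hub-side coordination can be pictured as a merge of many time-stamped event streams. The Python sketch below is purely illustrative, with hypothetical names rather than anything from the Nature Electronics paper: each neurograin reports ordered spike events, and the hub fuses them into a single time-ordered stream for downstream processing.

    ```python
    import heapq
    from dataclasses import dataclass, field

    # Hypothetical spike event reported by one wireless sensor. Ordering is
    # by timestamp only, so streams from different sensors interleave cleanly.
    @dataclass(order=True)
    class SpikeEvent:
        timestamp_us: int
        sensor_id: int = field(compare=False)

    def merge_streams(streams):
        """Merge per-sensor, time-ordered spike streams into one stream."""
        return list(heapq.merge(*streams))

    # Three toy neurograins, each with its own ordered spike times (microseconds).
    streams = [
        [SpikeEvent(100, 0), SpikeEvent(450, 0), SpikeEvent(900, 0)],
        [SpikeEvent(120, 1), SpikeEvent(480, 1)],
        [SpikeEvent(300, 2), SpikeEvent(700, 2)],
    ]
    for event in merge_streams(streams):
        print(f"t={event.timestamp_us}us from neurograin {event.sensor_id}")
    ```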
    In a study published on August 12 in Nature Electronics, the research team demonstrated the use of nearly 50 such autonomous neurograins to record neural activity in a rodent.
    The results, the researchers say, are a step toward a system that could one day enable the recording of brain signals in unprecedented detail, leading to new insights into how the brain works and new therapies for people with brain or spinal injuries.
    “One of the big challenges in the field of brain-computer interfaces is engineering ways of probing as many points in the brain as possible,” said Arto Nurmikko, a professor in Brown’s School of Engineering and the study’s senior author. “Up to now, most BCIs have been monolithic devices — a bit like little beds of needles. Our team’s idea was to break up that monolith into tiny sensors that could be distributed across the cerebral cortex. That’s what we’ve been able to demonstrate here.”
    The team, which includes experts from Brown, Baylor University, University of California at San Diego and Qualcomm, began the work of developing the system about four years ago. The challenge was twofold, said Nurmikko, who is affiliated with Brown’s Carney Institute for Brain Science. The first part required shrinking the complex electronics involved in detecting, amplifying and transmitting neural signals into the tiny silicon neurograin chips. The team first designed and simulated the electronics on a computer, and went through several fabrication iterations to develop operational chips.