More stories

  • Quantum steering for more precise measurements

    Quantum systems consisting of several particles can be used to measure magnetic or electric fields more precisely. A young physicist at the University of Basel has now proposed a new scheme for such measurements that uses a particular kind of correlation between quantum particles.
    In quantum information, the fictitious agents Alice and Bob are often used to illustrate complex communication tasks. In one such process, Alice can use entangled quantum particles such as photons to transmit or “teleport” a quantum state — unknown even to herself — to Bob, something that is not feasible using traditional communications.
    However, it has been unclear whether the Alice-Bob team can use similar quantum states for tasks other than communication. A young physicist at the University of Basel has now shown how particular types of quantum states can be used to perform measurements with higher precision than would otherwise be possible. The results have been published in the scientific journal Nature Communications.
    Quantum steering at a distance
    Together with researchers in Great Britain and France, Dr. Matteo Fadel, who works at the Physics Department of the University of Basel, has thought about how high-precision measurement tasks can be tackled with the help of so-called quantum steering.
    Quantum steering describes the fact that in certain quantum states of systems consisting of two particles, a measurement on the first particle allows one to make more precise predictions about possible measurement results on the second particle than quantum mechanics would allow if only the measurement on the second particle had been made. It is just as if the measurement on the first particle had “steered” the state of the second one.
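    The steering idea can be illustrated with a textbook two-qubit example (a toy calculation, not the multiparticle measurement scheme proposed in the paper): for the singlet state, the second particle's outcome is a coin flip on its own, but becomes certain once the first particle's result is known. A minimal numpy sketch, where the state, basis and labels are illustrative choices:

      import numpy as np

      # Toy illustration: for the two-qubit singlet state, a measurement on
      # particle A sharpens the prediction for particle B.
      ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
      psi = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)  # singlet state
      rho = np.outer(psi, psi)                                        # 4x4 density matrix

      # Without any measurement on A, B's reduced state is maximally mixed:
      rho_B = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)  # partial trace over A
      print(np.round(rho_B, 3))  # [[0.5 0. ] [0.  0.5]] -> B's outcome is a coin flip

      # Condition on A being measured in the Z basis with outcome |0>:
      P0_A = np.kron(np.outer(ket0, ket0), np.eye(2))
      rho_given_0 = P0_A @ rho @ P0_A
      rho_given_0 /= rho_given_0.trace()
      rho_B_given_0 = rho_given_0.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
      print(np.round(rho_B_given_0, 3))  # [[0. 0.] [0. 1.]] -> B is certainly |1>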

  • Machine learning model generates realistic seismic waveforms

    A new machine-learning model that generates realistic seismic waveforms will reduce manual labor and improve earthquake detection, according to a study published recently in JGR Solid Earth.
    “To verify the efficacy of our generative model, we applied it to seismic field data collected in Oklahoma,” said Youzuo Lin, a computational scientist in Los Alamos National Laboratory’s Geophysics group and principal investigator of the project. “Through a sequence of qualitative and quantitative tests and benchmarks, we saw that our model can generate high-quality synthetic waveforms and improve machine learning-based earthquake detection algorithms.”
    Quickly and accurately detecting earthquakes can be a challenging task. Visual detection done by people has long been considered the gold standard, but requires intensive manual labor that scales poorly to large data sets. In recent years, automatic detection methods based on machine learning have improved the accuracy and efficiency of data collection; however, the accuracy of those methods relies on access to a large amount of high-quality, labeled training data, often tens of thousands of records or more.
    To resolve this data dilemma, the research team developed SeismoGen based on a generative adversarial network (GAN), which is a type of deep generative model that can generate high-quality synthetic samples in multiple domains. In other words, deep generative models learn from real data how to create new samples that could pass as real.
    Once trained, the SeismoGen model is capable of producing realistic seismic waveforms of multiple labels. When the model was applied to real Earth seismic datasets in Oklahoma, the team saw that data augmentation with SeismoGen-generated synthetic waveforms could be used to improve earthquake detection algorithms in instances when only small amounts of labeled training data are available.
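    The GAN recipe behind SeismoGen can be sketched in a few lines of PyTorch. The snippet below is a generic conditional GAN for 1-D signals, not the published SeismoGen architecture; the waveform length, latent size, layer widths and the two label classes are illustrative assumptions:

      import torch
      import torch.nn as nn

      n_samples, latent_dim, n_classes = 1024, 64, 2  # assumed waveform length, noise size, label count

      class Generator(nn.Module):
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(latent_dim + n_classes, 256), nn.ReLU(),
                  nn.Linear(256, n_samples), nn.Tanh(),  # synthetic waveform in [-1, 1]
              )

          def forward(self, z, label_onehot):
              return self.net(torch.cat([z, label_onehot], dim=1))

      class Discriminator(nn.Module):
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(n_samples + n_classes, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1),  # real/fake logit
              )

          def forward(self, waveform, label_onehot):
              return self.net(torch.cat([waveform, label_onehot], dim=1))

      # One adversarial step on a fake batch (training on real traces omitted for brevity):
      G, D = Generator(), Discriminator()
      loss_fn = nn.BCEWithLogitsLoss()
      z = torch.randn(8, latent_dim)
      labels = nn.functional.one_hot(torch.randint(n_classes, (8,)), n_classes).float()
      fake = G(z, labels)
      d_loss_fake = loss_fn(D(fake.detach(), labels), torch.zeros(8, 1))  # D learns to reject fakes
      g_loss = loss_fn(D(fake, labels), torch.ones(8, 1))                 # G learns to fool D

    In the augmentation setting described above, waveforms drawn from a trained generator would simply be added to the small labeled set before training a detector.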
    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory. Note: Content may be edited for style and length.

  • Artificial intelligence model predicts which key of the immune system opens the locks of coronavirus

    With an artificial intelligence (AI) method developed by researchers at Aalto University and University of Helsinki, researchers can now link immune cells to their targets and, for example, uncouple which white blood cells recognize SARS-CoV-2. The developed tool has broad applications in understanding the function of the immune system in infections, autoimmune disorders, and cancer.
    The human immune defense is based on the ability of white blood cells to accurately identify disease-causing pathogens and to initiate a defense reaction against them. The immune defense can recall the pathogens it has encountered previously, which is, for example, what the effectiveness of vaccines is based on. The immune defense is thus the most accurate patient record system, carrying a history of all the pathogens an individual has faced. This information, however, has previously been difficult to obtain from patient samples.
    The learning immune system can be roughly divided into two parts, of which B cells are responsible for producing antibodies against pathogens, while T cells are responsible for destroying their targets. The measurement of antibodies by traditional laboratory methods is relatively simple, which is why antibodies already have several uses in healthcare.
    “Although it is known that the role of T cells in the defense response against for example viruses and cancer is essential, identifying the targets of T cells has been difficult despite extensive research,” says Satu Mustjoki, Professor of Translational Hematology.
    AI helps to identify new key-lock pairs
    T cells identify their targets on a key-and-lock principle, where the key is the T cell receptor on the surface of the T cell and the lock is the protein presented on the surface of an infected cell. An individual is estimated to carry more distinct T cell keys than there are stars in the Milky Way, making the mapping of T cell targets with laboratory techniques cumbersome.
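    To make the underlying machine-learning task concrete, here is a deliberately crude sketch (not the published model): the target of a new receptor is guessed from the most similar receptor whose target is already known. The sequences and epitope names below are invented for illustration:

      from difflib import SequenceMatcher

      # Toy nearest-neighbour matcher for T cell receptor "keys" and their targets.
      labeled_tcrs = {
          "CASSLAPGATNEKLFF": "epitope_A",
          "CASSLSFGTEAFF": "epitope_A",
          "CASSQDRGGYEQYF": "epitope_B",
      }

      def similarity(a: str, b: str) -> float:
          # Crude string similarity as a stand-in for a learned receptor representation.
          return SequenceMatcher(None, a, b).ratio()

      def predict_target(new_tcr: str) -> str:
          # Copy the label of the most similar known receptor.
          best = max(labeled_tcrs, key=lambda known: similarity(new_tcr, known))
          return labeled_tcrs[best]

      print(predict_target("CASSLAPGATNEKLYF"))  # -> epitope_A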

  • Scientists glimpse signs of a puzzling state of matter in a superconductor

    Unconventional superconductors contain a number of exotic phases of matter that are thought to play a role, for better or worse, in their ability to conduct electricity with 100% efficiency at much higher temperatures than scientists had thought possible — although still far short of the temperatures that would allow their wide deployment in perfectly efficient power lines, maglev trains and so on.
    Now scientists at the Department of Energy’s SLAC National Accelerator Laboratory have glimpsed the signature of one of those phases, known as pair-density waves or PDW, and confirmed that it’s intertwined with another phase known as charge density wave (CDW) stripes — wavelike patterns of higher and lower electron density in the material.
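    A toy picture of how such density waves reveal themselves (an illustration of the general idea, not the SLAC experiment): a periodic modulation of the electron density produces a peak at its wavevector in a Fourier, scattering-style analysis, even when the modulation is faint compared with the noise. All the numbers below are made up:

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.arange(1024)                        # lattice sites
      q_stripe = 2 * np.pi / 8                   # assumed stripe period of 8 sites
      density = 1.0 + 0.02 * np.cos(q_stripe * x) + 0.05 * rng.standard_normal(x.size)

      spectrum = np.abs(np.fft.rfft(density - density.mean()))
      q_axis = 2 * np.pi * np.fft.rfftfreq(x.size)
      peak_q = q_axis[spectrum.argmax()]
      print(f"strongest modulation near q = {peak_q:.3f} (stripe wavevector {q_stripe:.3f})")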
    Observing and understanding PDW and its correlations with other phases may be essential for understanding how superconductivity emerges in these materials, allowing electrons to pair up and travel with no resistance, said Jun-Sik Lee, a SLAC staff scientist who led the research at the lab’s Stanford Synchrotron Radiation Lightsource (SSRL).
    Even indirect evidence of the PDW phase intertwined with charge stripes, he said, is an important step on the long road toward understanding the mechanism behind unconventional superconductivity, which has eluded scientists over more than 30 years of research.
    Lee added that the method his team used to make this observation involved dramatically increasing the sensitivity of a standard X-ray technique known as resonant soft X-ray scattering (RSXS) so that it could pick up the extremely faint signals given off by these phenomena. The approach, he said, has the potential to directly sight both the PDW signature and its correlations with other phases in future experiments. That’s what they plan to work on next.
    The scientists described their findings today in Physical Review Letters.

  • Mechanical engineers develop new high-performance artificial muscle technology

    In the field of robotics, researchers are continually looking for the fastest, strongest, most efficient and lowest-cost ways to actuate, or enable, robots to make the movements needed to carry out their intended functions.
    The quest for new and better actuation technologies and ‘soft’ robotics is often based on principles of biomimetics, in which machine components are designed to mimic the movement of human muscles — and ideally, to outperform them. Despite the performance of actuators like electric motors and hydraulic pistons, their rigid form limits how they can be deployed. As robots transition to more biological forms and as people ask for more biomimetic prostheses, actuators need to evolve.
    Associate professor (and alum) Michael Shafer and professor Heidi Feigenbaum of Northern Arizona University’s Department of Mechanical Engineering, along with graduate student researcher Diego Higueras-Ruiz, published a paper in Science Robotics presenting a new, high-performance artificial muscle technology they developed in NAU’s Dynamic Active Systems Laboratory. The paper, titled “Cavatappi artificial muscles from drawing, twisting, and coiling polymer tubes,” details how the new technology enables more human-like motion due to its flexibility and adaptability, but outperforms human skeletal muscle in several metrics.
    “We call these new linear actuators cavatappi artificial muscles based on their resemblance to the Italian pasta,” Shafer said.
    Because of their coiled, or helical, structure, the actuators can generate more power, making them an ideal technology for bioengineering and robotics applications. In the team’s initial work, they demonstrated that cavatappi artificial muscles exhibit specific work and power metrics ten and five times higher than human skeletal muscles, respectively, and as they continue development, they expect to produce even higher levels of performance.
    “The cavatappi artificial muscles are based on twisted polymer actuators (TPAs), which were pretty revolutionary when they first came out because they were powerful, lightweight and cheap. But they were very inefficient and slow to actuate because you had to heat and cool them. Additionally, their efficiency is only about two percent,” Shafer said. “For the cavatappi, we get around this by using pressurized fluid to actuate, so we think these devices are far more likely to be adopted. These devices respond about as fast as we can pump the fluid. The big advantage is their efficiency. We have demonstrated contractile efficiency of up to about 45 percent, which is a very high number in the field of soft actuation.”
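    For a rough sense of what contractile efficiency means for a fluid-driven actuator, here is a back-of-the-envelope calculation; the force, stroke, pressure and volume values are made up for illustration and do not come from the NAU paper:

      # Contractile efficiency = mechanical work out over a stroke / hydraulic energy in.
      force_N = 10.0             # assumed axial force produced by the coiled actuator
      stroke_m = 0.02            # assumed contraction over one actuation cycle
      pressure_Pa = 0.5e6        # assumed working fluid pressure
      delta_volume_m3 = 1.0e-6   # assumed fluid volume pushed into the actuator

      work_out_J = force_N * stroke_m              # useful mechanical work
      energy_in_J = pressure_Pa * delta_volume_m3  # hydraulic energy input (p * dV)
      print(f"contractile efficiency ~ {work_out_J / energy_in_J:.0%}")  # ~40% with these numbers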
    The engineers think this technology could be used in soft robotics applications, conventional robotic actuators (for example, for walking robots), or even potentially in assistive technologies like exoskeletons or prostheses.
    “We expect that future work will include the use of cavatappi artificial muscles in many applications due to their simplicity, low-cost, lightweight, flexibility, efficiency and strain energy recovery properties, among other benefits,” Shafer said.
    Technology available for licensing and partnering opportunities
    Working with the NAU Innovations team, the inventors have taken steps to protect their intellectual property. The technology has entered the protection and early commercialization stage and is available for licensing and partnering opportunities. For more information, please contact NAU Innovations.
    Shafer joined NAU in 2013. His other research interests are related to energy harvesting, wildlife telemetry systems and unmanned aerial systems. Feigenbaum joined NAU in 2007, and her other research interests include ratcheting in metals and smart materials. The graduate student on this project, Diego Higueras-Ruiz, received his MS in Mechanical Engineering from NAU in 2018 and will be completing his PhD in Bioengineering in Fall 2021. This work has been supported through a grant from NAU’s Research and Development Preliminary Studies program.

  • AI algorithms can influence people's voting and dating decisions in experiments

    In a new series of experiments, artificial intelligence (A.I.) algorithms were able to influence people’s preferences for fictitious political candidates or potential romantic partners, depending on whether recommendations were explicit or covert. Ujué Agudo and Helena Matute of Universidad de Deusto in Bilbao, Spain, present these findings in the open-access journal PLOS ONE on April 21, 2021.
    From Facebook to Google search results, many people encounter A.I. algorithms every day. Private companies are conducting extensive research on the data of their users, generating insights into human behavior that are not publicly available. Academic social science research lags behind private research, and public knowledge on how A.I. algorithms might shape people’s decisions is lacking.
    To shed new light on this question, Agudo and Matute conducted a series of experiments that tested the influence of A.I. algorithms in different contexts. They recruited participants to interact with algorithms that presented photos of fictitious political candidates or online dating candidates, and asked the participants to indicate whom they would vote for or message. The algorithms promoted some candidates over others, either explicitly (e.g., “90% compatibility”) or covertly, such as by showing their photos more often than others’.
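    The covert condition can be pictured with a few lines of Python (a hypothetical sketch, not the study's materials): one candidate's photo is simply sampled for display more often than the others', with no explicit recommendation attached.

      import random

      candidates = ["candidate_A", "candidate_B", "candidate_C", "candidate_D"]
      target = "candidate_B"                                     # assumed covertly promoted candidate
      weights = [3 if c == target else 1 for c in candidates]    # over-expose the target

      shown = random.choices(candidates, weights=weights, k=30)  # photos shown to one participant
      print({c: shown.count(c) for c in candidates})             # the target appears ~3x as often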
    Overall, the experiments showed that the algorithms had a significant influence on participants’ decisions of whom to vote for or message. For political decisions, explicit manipulation significantly influenced decisions, while covert manipulation was not effective. The opposite effect was seen for dating decisions.
    The researchers speculate that these results might reflect a preference for explicit advice from humans on subjective matters such as dating, whereas people might prefer algorithmic advice on rational political decisions.
    In light of their findings, the authors express support for initiatives that seek to boost the trustworthiness of A.I., such as the European Commission’s Ethics Guidelines for Trustworthy AI and DARPA’s explainable AI (XAI) program. Still, they caution that more publicly available research is needed to understand human vulnerability to algorithms.
    Meanwhile, the researchers call for efforts to educate the public on the risks of blind trust in recommendations from algorithms. They also highlight the need for discussions around ownership of the data that drives these algorithms.
    The authors add: “If a fictitious and simplistic algorithm like ours can achieve such a level of persuasion without establishing actually customized profiles of the participants (and using the same photographs in all cases), a more sophisticated algorithm such as those with which people interact in their daily lives should certainly be able to exert a much stronger influence.”
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • Pepper the robot talks to itself to improve its interactions with people

    Ever wondered why your virtual home assistant doesn’t understand your questions? Or why your navigation app routed you down a side street instead of the highway? In a study published April 21st in the journal iScience, Italian researchers designed a robot that “thinks out loud” so that users can hear its thought process and better understand the robot’s motivations and decisions.
    “If you were able to hear what the robots are thinking, then the robot might be more trustworthy,” says co-author Antonio Chella, describing first author Arianna Pipitone’s idea that launched the study at the University of Palermo. “The robots will be easier to understand for laypeople, and you don’t need to be a technician or engineer. In a sense, we can communicate and collaborate with the robot better.”
    Inner speech is common in people and can be used to gain clarity, seek moral guidance, and evaluate situations in order to make better decisions. To explore how inner speech might impact a robot’s actions, the researchers built a robot called Pepper that speaks to itself. They then asked people to set the dinner table with Pepper according to etiquette rules to study how Pepper’s self-dialogue skills influence human-robot interactions.
    The scientists found that, with the help of inner speech, Pepper is better at solving dilemmas. In one experiment, the user asked Pepper to place the napkin at the wrong spot, contradicting the etiquette rule. Pepper started asking itself a series of self-directed questions and concluded that the user might be confused. To be sure, Pepper confirmed the user’s request, which led to further inner speech.
    “Ehm, this situation upsets me. I would never break the rules, but I can’t upset him, so I’m doing what he wants,” Pepper said to itself, placing the napkin at the requested spot. Through Pepper’s inner voice, the user can trace its thoughts to learn that Pepper was facing a dilemma and solved it by prioritizing the human’s request. The researchers suggest that the transparency could help establish human-robot trust.
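    The napkin episode follows a simple pattern: check the rule, voice the conflict, then act on the user's request. A minimal hypothetical sketch of that loop (not the actual Pepper software):

      # Toy "inner speech" for a table-setting robot; the etiquette rule and phrases are invented.
      ETIQUETTE = {"napkin": "left of the fork"}

      def place(item: str, requested_spot: str) -> None:
          correct_spot = ETIQUETTE.get(item)
          if correct_spot and requested_spot != correct_spot:
              print(f"(inner speech) The rule says the {item} goes {correct_spot}, "
                    f"but the user asked for {requested_spot}.")
              print("(inner speech) Breaking the rule bothers me, but the user's request comes first.")
          print(f"Placing the {item} at {requested_spot}.")

      place("napkin", "right of the plate")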
    Comparing Pepper’s performance with and without inner speech, Pipitone and Chella discovered that the robot had a higher task-completion rate when engaging in self-dialogue. Thanks to inner speech, Pepper outperformed the international standard functional and moral requirements for collaborative robots — guidelines that machines, from humanoid AI to mechanical arms on the manufacturing line, follow.
    “People were very surprised by the robot’s ability,” says Pipitone. “The approach makes the robot different from typical machines because it has the ability to reason, to think. Inner speech enables alternative solutions for the robots and humans to collaborate and get out of stalemate situations.”
    Although hearing the inner voice of robots enriches the human-robot interaction, some people might find it inefficient because the robot spends more time completing tasks when it talks to itself. The robot’s inner speech is also limited to the knowledge that researchers gave it. Still, Pipitone and Chella say their work provides a framework to further explore how self-dialogue can help robots focus, plan, and learn.
    “In some sense, we are creating a generational robot that likes to chat,” says Chella. The authors say that, from navigation apps and the camera on your phone to medical robots in operating rooms, machines and computers alike can benefit from this chatty feature. “Inner speech could be useful in all the cases where we trust the computer or a robot for the evaluation of a situation,” Chella says.
    Story Source:
    Materials provided by Cell Press. Note: Content may be edited for style and length.

  • Augmented reality in retail and its impact on sales

    Augmented reality (AR) is a technology that superimposes virtual objects onto a live view of physical environments, helping users visualize how these objects fit into their physical world. Researchers from City University of Hong Kong and Singapore Management University published a new paper in the Journal of Marketing that identifies four broad uses of AR in retail settings and examines the impact of AR on retail sales.
    The study, forthcoming in the Journal of Marketing, is titled “Augmented Reality in Retail and Its Impact on Sales” and is authored by Yong-Chin Tan, Sandeep Chandukala, and Srinivas Reddy. The researchers discuss the following uses of AR in retail settings:
    * To entertain customers. AR transforms static objects into interactive, animated three-dimensional objects, helping marketers create fresh experiences that captivate and entertain customers. Marketers can use AR-enabled experiences to drive traffic to their physical locations. For example, Walmart collaborated with DC Comics and Marvel to place special thematic displays with exclusive superhero-themed AR experiences in its stores. In addition to creating novel and engaging experiences for customers, the displays also encouraged customers to explore different areas of the stores.
    * To educate customers. Due to its interactive and immersive format, AR is also an effective medium for delivering content and information to customers. To help customers better appreciate their new car models, Toyota and Hyundai have used AR to demonstrate key features and innovative technologies in a vivid and visually appealing manner. AR can also be used to provide in-store wayfinding and product support. Walgreens and Lowe’s have developed in-store navigation apps that overlay directional signals onto a live view of the path in front of users to guide them to product locations and notify them of special promotions along the way.
    * To facilitate product evaluation. By retaining the physical environment as a backdrop for virtual elements, AR also helps users visualize how products would appear in their actual consumption contexts, so they can assess product fit more accurately prior to purchase. For example, Ikea’s Place app uses AR to overlay true-to-scale, three-dimensional models of furniture onto a live view of customers’ rooms. Customers can easily determine whether the products fit in a space without taking any measurements. Uniqlo and Topshop have also deployed the same technology in their physical stores, offering customers greater convenience by reducing the need to change in and out of different outfits. An added advantage of AR is its ability to accommodate a wide assortment of products, which is particularly useful for made-to-order or bulky products. BMW and Audi have used AR to provide customers with true-to-scale, three-dimensional visual representations of car models based on customized features such as paint color, wheel design, and interior aesthetics.
    * To enhance the post-purchase consumption experience. Lastly, AR can be used to enhance and redefine the way products are experienced or consumed after they have been purchased. For example, Lego recently launched several specially designed brick sets that combine physical and virtual gameplay. Through the companion AR app, animated Lego characters spring to life and interact with the physical Lego sets, creating a whole new play experience. In a bid to address skepticism about the quality of its food ingredients, McDonald’s has also used AR to let customers discover the origins of the ingredients in the food they purchased through storytelling and three-dimensional animations.
    The research also focuses on the promising application of AR to facilitate product evaluation prior to purchase and examines how it affects sales in online retail. Key findings include:
    * The availability and usage of AR has a positive impact on sales. The overall impact appears to be small, but certain products are more likely to benefit from the technology than others.
    * The impact of AR is stronger for products and brands that are less popular. Thus, retailers carrying wide product assortments can use AR to stimulate demand for niche products at the long tail of the sales distribution. AR may also help to level the playing field for less-popular brands. With the launch of AR-enabled display ads on advertising platforms such as Facebook and YouTube, less-established brands could consider investing in this new ad format because they stand to benefit most from the technology.
    * The impact of AR is also greater for products that are more expensive, indicating that AR could increase overall revenues for retailers. Retailers selling premium products may also leverage AR to improve decision comfort and reduce customers’ hesitation in the purchase process.
    * Customers who are new to the online channel or product category are more likely to purchase after using AR, suggesting that AR has the potential to promote online channel adoption and category expansion. As prior research has shown that multichannel customers are more profitable, omni-channel retailers can use AR to encourage their offline customers to adopt the online channel.
    Taken together, these findings provide converging evidence that AR is most effective when product-related uncertainty is high. Managers can thus use AR to reduce customer uncertainty and improve sales.
    Story Source:
    Materials provided by American Marketing Association. Original written by Matt Weingarden. Note: Content may be edited for style and length.