More stories

  • Disclosing ‘true normal price’ recommended to protect consumers from deceptive pricing

    Fifty years ago, the Federal Trade Commission (FTC) stopped enforcing deceptive pricing regulations, assuming that competition would keep retailers honest.
    Since then, competition has increased significantly — yet the practice of posting false, inflated comparison prices alongside sale prices has continued unchecked.
    Think of an advertisement from a furniture store that touts a $599 sale price for a couch as an $800 savings from a promoted regular price of $1,399. The problem is that the store may have never offered the couch for sale at the higher price.
    This practice, called “fictitious pricing,” is ubiquitous in the retail trade. One recent investigation tracked the prices of 25 major retailers and found that “most stores’ sale prices … are bogus discounts” because the listed regular price is seldom, if ever, the price charged for the products.
    “Competition and the Regulation of Fictitious Pricing” is forthcoming in the Journal of Marketing from Joe Urbany, professor of marketing at the University of Notre Dame’s Mendoza College of Business, along with Rick Staelin from Duke University and Donald Ngwe, a senior researcher at Microsoft.
    The paper critically evaluates two assumptions underlying the FTC’s decision to halt deceptive pricing prosecution.
    The first is that inflated reference prices are largely ignored by consumers, who focus primarily on the sale prices, leading to price competition that pushes selling prices lower and renders reference prices harmless.

  • People’s everyday pleasures may improve cognitive arousal and performance

    Listening to music and drinking coffee are the sorts of everyday pleasures that can impact a person’s brain activity in ways that improve cognitive performance, including in tasks requiring concentration and memory.
    That’s a finding of a new NYU Tandon School of Engineering study involving MINDWATCH, a groundbreaking brain-monitoring technology.
    Developed over the past six years by NYU Tandon’s Biomedical Engineering Associate Professor Rose Faghih, MINDWATCH is an algorithm that analyzes a person’s brain activity from data collected via any wearable device that can monitor electrodermal activity (EDA). EDA reflects changes in the skin’s electrical conductance driven by the sweat responses that accompany emotional stress.
    In this recent MINDWATCH study, published in Nature Scientific Reports, subjects wearing skin-monitoring wristbands and brain monitoring headbands completed cognitive tests while listening to music, drinking coffee and sniffing perfumes reflecting their individual preferences. They also completed those tests without any of those stimulants.
    The MINDWATCH algorithm revealed that music and coffee measurably altered subjects’ brain arousal, essentially putting them in a physiological “state of mind” that could modulate their performance in the working memory tasks they were performing.
    Specifically, MINDWATCH determined the stimulants triggered increased “beta band” brain wave activity, a state associated with peak cognitive performance. Perfume had a modest positive effect as well, suggesting the need for further study.
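    The beta-band observation comes from the EEG headband recordings. As a generic illustration only (this is standard spectral analysis, not the MINDWATCH decoder, whose arousal estimates are derived from EDA; the sampling rate and synthetic signal below are assumptions), beta-band activity in a single EEG channel can be quantified by integrating its power spectrum over roughly 13-30 Hz:
    ```python
    # Generic sketch: estimate beta-band (~13-30 Hz) power from one EEG channel.
    # Illustrative only; this is not the MINDWATCH algorithm.
    import numpy as np
    from scipy.signal import welch

    def beta_band_power(eeg, fs, band=(13.0, 30.0)):
        """Summed spectral power of `eeg` (1D array, sampling rate `fs` in Hz)
        within the beta band, estimated with Welch's method."""
        freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * int(fs)))
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return float(psd[mask].sum() * (freqs[1] - freqs[0]))

    # Synthetic check: adding a 20 Hz rhythm to noise raises beta-band power.
    fs = 256
    t = np.arange(0, 10, 1 / fs)
    noise = np.random.default_rng(0).normal(size=t.size)
    print(beta_band_power(noise, fs))
    print(beta_band_power(noise + 0.5 * np.sin(2 * np.pi * 20 * t), fs))
    ```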
    “The pandemic has impacted the mental well-being of many people across the globe and now more than ever, there is a need to seamlessly monitor the negative impact of everyday stressors on one’s cognitive function,” said Faghih. “Right now MINDWATCH is still under development, but our eventual goal is that it will contribute to technology that could allow any person to monitor his or her own brain cognitive arousal in real time, detecting moments of acute stress or cognitive disengagement, for example. At those times, MINDWATCH could ‘nudge’ a person towards simple and safe interventions — perhaps listening to music — so they could get themselves into a brain state in which they feel better and perform job or school tasks more successfully.”
    The specific cognitive test used in this study — a working memory task called the n-back test — involves presenting a sequence of stimuli (in this case, images or sounds) one by one and asking the subject to indicate whether the current stimulus matches the one presented “n” items back in the sequence. This study employed a 1-back test — the participant responds “yes” when the current stimulus is the same as the one presented one item back — and a more challenging 3-back test, which asks the same for three items back.
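    The logic of the n-back task is compact enough to sketch in code. The fragment below illustrates only the task’s scoring rule (it is not the study’s software, and the function names and example stimulus stream are invented):
    ```python
    # Sketch of n-back target detection and response scoring (illustrative only).
    from typing import Sequence

    def nback_targets(stimuli: Sequence[str], n: int) -> list:
        """For each trial, True if the stimulus matches the one presented n items back."""
        return [i >= n and stimuli[i] == stimuli[i - n] for i in range(len(stimuli))]

    def nback_accuracy(stimuli: Sequence[str], responses: Sequence[bool], n: int) -> float:
        """Fraction of trials where the yes/no response matches the target status."""
        targets = nback_targets(stimuli, n)
        return sum(r == t for r, t in zip(responses, targets)) / len(stimuli)

    # The same stimulus stream scored as a 1-back test and as a 3-back test.
    stream = list("ABBACABBA")
    print(nback_targets(stream, 1))  # targets where a letter repeats immediately
    print(nback_targets(stream, 3))  # targets where a letter matches three items back
    ```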

  • Researchers use SPAD detector to achieve 3D quantum ghost imaging

    Researchers have reported the first 3D measurements acquired with quantum ghost imaging. The new technique enables 3D imaging at the single-photon level, yielding the lowest photon dose possible for any measurement.
    “3D imaging with single photons could be used for various biomedical applications, such as eye care diagnostics,” said researcher Carsten Pitsch from the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation and Karlsruhe Institute of Technology, both in Germany. “It can be applied to image materials and tissues that are sensitive to light or drugs that become toxic when exposed to light without any risk of damage.”
    In the Optica Publishing Group journal Applied Optics, the researchers describe their new approach, which incorporates new single photon avalanche diode (SPAD) array detectors. They apply the new imaging scheme, which they call asynchronous detection, to perform 3D imaging with quantum ghost imaging.
    “Asynchronous detection might also be useful for military or security applications since it could be used to observe without being detected while also reducing the effects of over-illumination, turbulence and scattering,” said Pitsch. “We also want to investigate its use in hyperspectral imaging, which could allow multiple spectral regions to be recorded simultaneously while using a very low photon dose. This could be very useful for biological analysis.”
    Adding a third dimension
    Quantum ghost imaging creates images using entangled photon-pairs in which only one member of the photon pair interacts with the object. The detection time for each photon is then used to identify entangled pairs, which allows an image to be reconstructed. This approach not only allows imaging at extremely low light levels but also means that the objects being imaged do not have to interact with the photons used for imaging.
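    As a toy illustration of that reconstruction idea (this is not the authors’ analysis code; the function and data layout are invented for the sketch), pairing detection timestamps from the two arms with a coincidence window might look like this:
    ```python
    # Sketch: pair photon detection times from the object arm and the camera arm.
    # Camera-arm pixels whose timestamps coincide with an object-arm detection are
    # kept, even though those photons never interacted with the object.
    import numpy as np

    def coincident_pixels(t_object_ns, t_camera_ns, camera_pixels, window_ns=1.0):
        """Return the camera pixels whose detection time lies within `window_ns`
        of some object-arm detection (hypothetical data layout)."""
        t_object = np.sort(np.asarray(t_object_ns, dtype=float))
        hits = []
        for t, pixel in zip(t_camera_ns, camera_pixels):
            i = np.searchsorted(t_object, t)
            neighbours = t_object[max(i - 1, 0): i + 1]
            if neighbours.size and np.min(np.abs(neighbours - t)) <= window_ns:
                hits.append(pixel)
        return hits

    # Histogramming `hits` over pixel coordinates yields the ghost image; per the
    # article, it is the detectors' independent photon timing that opens up the
    # third (depth) dimension.
    ```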
    Previous setups for quantum ghost imaging were not capable of 3D imaging because they relied on intensified charge-coupled device (ICCD) cameras. Although these cameras have good spatial resolution, they are time-gated and don’t allow the independent temporal detection of single photons.

  • The ‘unknome’: A database of human genes we know almost nothing about

    Researchers from the United Kingdom hope that a new, publicly available database they have created will shrink, not grow, over time. That’s because it is a compendium of the thousands of understudied proteins encoded by genes in the human genome, whose existence is known but whose functions are mostly not. The database, dubbed the “unknome,” is the work of Matthew Freeman of the Dunn School of Pathology, University of Oxford, England, and Sean Munro of MRC Laboratory of Molecular Biology in Cambridge, England, and colleagues, and is described in the open access journal PLOS Biology. Their own investigations of a subset of proteins in the database reveal that a majority contribute to important cellular functions, including development and resilience to stress.
    The sequencing of the human genome has made it clear that it encodes thousands of likely protein sequences whose identities and functions are still unknown. There are multiple reasons for this, including the tendency to focus scarce research dollars on already-known targets, and the lack of tools, including antibodies, to interrogate cells about the function of these proteins. But the risks of ignoring these proteins are significant, the authors argue, since it is likely that some, perhaps many, play important roles in critical cell processes, and may provide both insight and targets for therapeutic intervention.
    To promote more rapid exploration of such proteins, the authors created the unknome database (www.unknome.org), which assigns to every protein a “knownness” score reflecting the information in the scientific literature about its function, conservation across species, subcellular compartmentalization, and other elements. Based on this system, there are many thousands of proteins whose knownness is near zero. Proteins from model organisms are included, along with those from the human genome. The database is open to all and is customizable: users can assign their own weights to the different elements and thereby generate custom knownness scores to prioritize their research.
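    As a minimal sketch of the scoring idea only (the evidence fields and weights below are invented for illustration and are not the scheme used by unknome.org), a customizable, weighted knownness score could look like this:
    ```python
    # Sketch of a weighted "knownness" score with user-adjustable weights.
    from dataclasses import dataclass

    @dataclass
    class ProteinAnnotation:          # hypothetical evidence fields
        function_terms: int           # annotated molecular-function terms
        conservation_hits: int        # orthologues found across model species
        localization_terms: int       # known subcellular compartments

    DEFAULT_WEIGHTS = {"function_terms": 2.0, "conservation_hits": 0.5, "localization_terms": 1.0}

    def knownness(p: ProteinAnnotation, weights=DEFAULT_WEIGHTS) -> float:
        """Weighted evidence sum; a near-zero score marks an essentially unstudied protein."""
        return sum(w * getattr(p, field) for field, w in weights.items())

    print(knownness(ProteinAnnotation(0, 1, 0)))    # 0.5 -> lands in the unknome
    print(knownness(ProteinAnnotation(12, 40, 4)))  # 48.0 -> well characterized
    ```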
    To test the utility of the database, the authors chose 260 genes in humans for which there were comparable genes in flies, and which had knownness scores of 1 or less in both species, indicating that almost nothing was known about them. For many of them, a complete knockout of the gene was incompatible with life in the fly; partial knockdowns or tissue-specific knockdowns led to the discovery that a large fraction contributed to essential functions influencing fertility, development, tissue growth, protein quality control, or stress resistance.
    The results suggest that, despite decades of detailed study, there are thousands of fly genes that remain to be understood at even the most basic level, and the same is clearly true for the human genome. “These uncharacterized genes have not deserved their neglect,” Munro said. “Our database provides a powerful, versatile and efficient platform to identify and select important genes of unknown function for analysis, thereby accelerating the closure of the gap in biological knowledge that the unknome represents.”
    Munro adds, “The role of thousands of human proteins remains unclear and yet research tends to focus on those that are already well understood. To help address this we created an Unknome database that ranks proteins based on how little is known about them, and then performed functional screens on a selection of these mystery proteins to demonstrate how ignorance can drive biological discovery.”

  • Texting while walking makes college students more likely to fall

    When it comes to college-aged adults who are glued to their smartphones, experts argue over whether texting while walking increases the risk of an accident. Some studies have shown that texting pedestrians are more likely to walk into oncoming traffic, while other studies suggest that young adults have mastered the art of multitasking and are able to text accurately while navigating obstacles. However, few studies have measured how texters respond to unpredictable hazard conditions. By simulating an environment with random slipping threats, researchers report in the journal Heliyon on August 8th that texting increases the risk of falling in response to walkway hazards.
    “On any day it seems as many as 80% of people, both younger and older, may be head down and texting. I wondered: is this safe?” says senior author Matthew A. Brodie, a neuroscientist and engineer at the University of New South Wales (UNSW) Graduate School of Biomedical Engineering. “This has made me want to investigate the dangers of texting while walking. I wanted to know if these dangers are real or imagined and to measure the risk in a repeatable way.”
    The team recruited 50 UNSW undergraduate students from Brodie’s “Mechanics of the Human Body” course for this experiment. Brodie and co-author Yoshiro Okubo built a tiled hazard walkway at Neuroscience Research Australia’s gait laboratory; halfway along it, one tile could be adjusted to slide out of place, so that anyone who stepped on it would slip as if on a banana peel. Students wore a safety harness — preventing any slip from becoming a fall — and sensors that collected their motion data. They were then asked to walk along the walkway either without texting or while typing “The quick brown fox jumps over the lazy dog.”
    To better simulate the uncertainty of real life, students were only told that they may or may not slip. This allowed the researchers to study how texting pedestrians might anticipate and try to prevent a potential slip, such as by leaning forward.
    “What surprised me is how differently people responded to the threat of slipping,” says Brodie. “Some slowed down and took a more cautious approach. Others sped up in anticipation of slipping. Such different approaches reinforce how no two people are the same, and to better prevent accidents from texting while walking, multiple strategies may be needed.”
    Motion data showed that texting participants tried to be more cautious in response to a threat, but this caution did not offset their risk of falling. When participants went from leaning forwards (such as over a phone) to slipping backwards, their motion sensors recorded an increase in the range of their “trunk angle.” The researchers used this range to gauge how likely students were to fall, and found that the average trunk angle range during a fall was significantly larger when a student was texting.
    Walking also caused the texters’ accuracy to decrease. The highest texting accuracy occurred when participants were seated, but accuracy decreased even when walking participants were warned of a potential slip that did not occur. The lowest accuracy, however, occurred in conditions where participants did slip.
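    As a rough illustration (the paper’s exact scoring procedures are not described here, and the function names and example numbers below are invented), both outcome measures reduce to a few lines: the trunk angle range is the spread of trunk pitch over a trial, and texting accuracy can be scored by comparing the typed text against the target sentence:
    ```python
    # Sketch of the two outcome measures; not the study's analysis pipeline.
    import difflib
    import numpy as np

    TARGET = "The quick brown fox jumps over the lazy dog."

    def trunk_angle_range(pitch_deg):
        """Range (max - min) of trunk pitch, in degrees, over one walkway trial."""
        pitch = np.asarray(pitch_deg, dtype=float)
        return float(pitch.max() - pitch.min())

    def texting_accuracy(typed, target=TARGET):
        """Similarity (0 to 1) between the typed text and the target sentence."""
        return difflib.SequenceMatcher(None, typed, target).ratio()

    # Hypothetical traces: pitching from a forward lean into a backwards slip
    # spans a far larger trunk angle range than steady walking does.
    print(trunk_angle_range([2.0, 3.1, 2.5, 3.4, 2.8]))      # 1.4 degrees
    print(trunk_angle_range([8.0, 9.5, 7.8, -14.0, -20.5]))  # 30.0 degrees
    print(texting_accuracy("The quick brwn fox jumps over teh lazy dog."))
    ```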
    The researchers note that young people may be more likely to take risks even if they are aware that texting and walking could increase their likelihood of falling. For that reason, the authors suggest that educational initiatives such as signs might be less effective in reaching this population. In addition to education, the researchers also suggest that phones could implement locking technology similar to what is used when users are driving. The technology could detect walking activity and activate a screen lock to prevent texting during that time. In future research, the team plans on looking into the effectiveness of this intervention.

  • Quantum material exhibits ‘non-local’ behavior that mimics brain function

    We often believe computers are more efficient than humans. After all, computers can solve a complex math problem in a moment and recall the name of that one actor we keep forgetting. However, human brains can process complicated layers of information quickly, accurately, and with almost no energy input: recognizing a face after only seeing it once or instantly knowing the difference between a mountain and the ocean. These simple human tasks require enormous processing power and energy from computers, and even then the results come with varying degrees of accuracy.
    Creating brain-like computers with minimal energy requirements would revolutionize nearly every aspect of modern life. Funded by the Department of Energy, Quantum Materials for Energy Efficient Neuromorphic Computing (Q-MEEN-C) — a nationwide consortium led by the University of California San Diego — has been at the forefront of this research.
    UC San Diego Assistant Professor of Physics Alex Frañó is co-director of Q-MEEN-C and thinks of the center’s work in phases. In the first phase, he worked closely with President Emeritus of the University of California and Professor of Physics Robert Dynes, as well as Rutgers Professor of Engineering Shriram Ramanathan. Together, their teams were successful in finding ways to create or mimic the properties of a single brain element (such as a neuron or synapse) in a quantum material.
    Now, in phase two, new research from Q-MEEN-C, published in Nano Letters, shows that electrical stimuli passed between neighboring electrodes can also affect non-neighboring electrodes. This effect, known as non-locality, is a crucial milestone in the journey toward new types of devices that mimic brain function, an approach known as neuromorphic computing.
    “In the brain it’s understood that these non-local interactions are nominal — they happen frequently and with minimal exertion,” stated Frañó, one of the paper’s co-authors. “It’s a crucial part of how the brain operates, but similar behaviors replicated in synthetic materials are scarce.”
    Like many research projects now bearing fruit, the idea to test whether non-locality in quantum materials was possible came about during the pandemic. Physical lab spaces were shuttered, so the team ran calculations on arrays that contained multiple devices to mimic the multiple neurons and synapses in the brain. In running these tests, they found that non-locality was theoretically possible.
    When labs reopened, they refined this idea further and enlisted UC San Diego Jacobs School of Engineering Associate Professor Duygu Kuzum, whose work in electrical and computer engineering helped them turn a simulation into an actual device.

  • Accurate measurement of permittivity advances radio telescope receivers and next generation telecommunication networks

    Researchers have invented a novel method to measure the permittivity of insulators 100 times more accurately than before. This technology is expected to contribute to the efficient development of sensitive radio receivers for radio telescopes, as well as to the development of devices for next-generation communication networks, “Beyond 5G/6G.”
    Permittivity is a value that indicates how the charges inside an insulator respond when a voltage is applied to it. It is an important parameter for understanding the behavior of radio waves as they travel through insulators. In the development of telecommunications equipment, it is necessary to accurately determine the permittivity of the materials used for circuit boards and for building columns and walls. For radio astronomy, researchers also need to know the permittivity of components used in radio receivers.
    By devising a calculation method for electromagnetic wave propagation, the research team developed an analytical algorithm that derives the permittivity directly rather than by approximation. The team, consisting of researchers and engineers from the National Astronomical Observatory of Japan (NAOJ) and the National Institute of Information and Communications Technology (NICT), then used the new method to measure lens material for a receiver being developed for the Atacama Large Millimeter/submillimeter Array (ALMA) and confirmed that the results were consistent with other methods, demonstrating its effectiveness in actual device development.
    “The newly developed method is expected to contribute to not only the design of radio telescope components, but also to the development of high-frequency materials and devices for the realization of next-generation communication networks (Beyond 5G/6G) using the millimeter wave/terahertz band,” says Ryo Sakai, an engineer at NAOJ and the lead author of the research paper published recently.
    Reducing the error due to approximation by a factor of 100 speeds up the development process. If the permittivity of individual materials is measured inaccurately, the actual fabricated product may not meet the target performance. If the permittivity is known accurately from the design stage, unnecessary trial and error can be reduced and costs can be cut.
    Several methods are conventionally used to measure permittivity. One that can do so accurately is the “resonance method,” but it requires the material under test to be placed inside a device called a resonator, which means machining the sample to precise dimensions, sometimes to a thickness of less than a few hundred micrometers. Another drawback is that the permittivity can be measured only at a few specific frequencies. Because the permittivity of many different materials must be measured during device development, requiring high-precision machining for every measurement slows the process considerably. The “free-space method,” by contrast, has fewer of these drawbacks, but its measurement results have conventionally been analyzed using an approximation, and the resulting error makes accurate measurement difficult.
    “Compared to other measurement methods, the free-space method has fewer restrictions on the shape of the measurement sample, and it is easy to extend the measurement frequency band,” says Sakai. Because the new analysis method is used with the free-space method, permittivity can now be measured accurately under fewer constraints.
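    As a generic, textbook-style illustration only (this is not the NAOJ/NICT algorithm; the slab parameters and the particular approximation shown are assumptions made for the sketch), the difference between an approximate and an exact analysis of a free-space transmission measurement can be seen for a flat, lossless dielectric slab at normal incidence:
    ```python
    # Sketch: exact vs. approximate extraction of relative permittivity from the
    # complex transmission coefficient of a lossless dielectric slab (normal incidence).
    # Generic illustration only; not the method described in the paper.
    import numpy as np
    from scipy.optimize import minimize_scalar

    c = 299_792_458.0  # speed of light, m/s

    def slab_transmission(eps_r, d, f):
        """Exact transmission coefficient of a slab with relative permittivity
        eps_r and thickness d (m), at frequency f (Hz)."""
        n = np.sqrt(eps_r)
        k0 = 2 * np.pi * f / c
        gamma = (1 - n) / (1 + n)            # air-dielectric reflection coefficient
        phase = np.exp(-1j * k0 * n * d)
        return (1 - gamma**2) * phase / (1 - gamma**2 * phase**2)

    def invert_exact(t_meas, d, f):
        """Fit the exact slab model to a measured complex transmission coefficient."""
        err = lambda e: abs(slab_transmission(e, d, f) - t_meas)
        return minimize_scalar(err, bounds=(1.0, 20.0), method="bounded").x

    def invert_phase_approx(t_meas, d, f):
        """Crude approximation: ignore internal reflections and read the
        refractive index from the transmission phase alone."""
        k0 = 2 * np.pi * f / c
        n = -np.angle(t_meas) / (k0 * d)
        return n**2

    # Simulated measurement: a 2 mm slab with eps_r = 3.8 at 10 GHz.
    eps_true, d, f = 3.8, 2e-3, 10e9
    t = slab_transmission(eps_true, d, f)
    print(invert_exact(t, d, f))         # ~3.8: exact inversion recovers the true value
    print(invert_phase_approx(t, d, f))  # ~4.8: the approximation is noticeably off
    ```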
    NAOJ and NICT have jointly been conducting research and development for high-precision material property measurement systems at millimeter-wave and terahertz-wave frequencies. The team is aiming for further technological innovation by combining the knowledge gained through the development of astronomical instruments with that gained from developing communication technology.

  • New model reduces bias and enhances trust in AI decision-making and knowledge organization

    University of Waterloo researchers have developed a new explainable artificial intelligence (AI) model to reduce bias and enhance trust and accuracy in machine learning-generated decision-making and knowledge organization.
    Traditional machine learning models often yield biased results, favouring groups with large populations or being influenced by unknown factors, and such biases take extensive effort to identify when instances contain patterns and sub-patterns coming from different classes or primary sources.
    The medical field is one area where biased machine learning results can have severe implications. Hospital staff and medical professionals rely on datasets containing thousands of medical records and complex computer algorithms to make critical decisions about patient care. Machine learning is used to sort the data, which saves time. However, specific patient groups with rare symptomatic patterns may go undetected, and mislabeled patients and anomalies could impact diagnostic outcomes. This inherent bias and pattern entanglement leads to misdiagnoses and inequitable healthcare outcomes for specific patient groups.
    New research led by Dr. Andrew Wong, a distinguished professor emeritus of systems design engineering at Waterloo, has produced an innovative model that aims to eliminate these barriers by untangling complex patterns in data and relating them to specific underlying causes unaffected by anomalies and mislabeled instances. The model can enhance trust and reliability in Explainable Artificial Intelligence (XAI).
    “This research represents a significant contribution to the field of XAI,” Wong said. “While analyzing a vast amount of protein binding data from X-ray crystallography, my team revealed the statistics of the physicochemical amino acid interacting patterns which were masked and mixed at the data level due to the entanglement of multiple factors present in the binding environment. That was the first time we showed entangled statistics can be disentangled to give a correct picture of the deep knowledge missed at the data level with scientific evidence.”
    This revelation led Wong and his team to develop the new XAI model called Pattern Discovery and Disentanglement (PDD).
    “With PDD, we aim to bridge the gap between AI technology and human understanding to help enable trustworthy decision-making and unlock deeper knowledge from complex data sources,” said Dr. Peiyuan Zhou, the lead researcher on Wong’s team.