More stories

  • Turning ChatGPT into a ‘chemistry assistant’

    Developing new materials requires significant time and labor, but some chemists are now hopeful that artificial intelligence (AI) could one day shoulder much of this burden. In a new study in the Journal of the American Chemical Society, a team prompted a popular AI model, ChatGPT, to perform one particularly time-consuming task: searching scientific literature. With that data, they built a second tool, a model to predict experimental results.
    Reports from previous studies offer a vast trove of information that chemists need, but finding and parsing the most relevant details can be laborious. For example, those interested in designing highly porous, crystalline metal-organic frameworks (MOFs) — which have potential applications in areas such as clean energy — must sort through hundreds of scientific papers describing a variety of experimental conditions. Researchers have previously attempted to coax AI to take over this task; however, the language processing models they used required significant technical expertise, and applying them to new topics meant changing the program. Omar Yaghi and colleagues wanted to see if the next generation of language models, which includes ChatGPT, could offer a more accessible, flexible way to extract information.
    To analyze text from scientific papers, the team gave ChatGPT prompts, or instructions, guiding it through three processes intended to identify and summarize the experimental information the manuscripts contained. The researchers carefully constructed these prompts to minimize the model’s tendency to make up responses, a phenomenon known as hallucination, and to ensure the best responses possible.
    When tested on 228 papers describing MOF syntheses, this system extracted more than 26,000 factors relevant for making roughly 800 of these compounds. With these data, the team trained a separate AI model to predict the crystalline state of MOFs based on the reported conditions. And finally, to make the data more user-friendly, they built a chatbot to answer questions about it. The team notes that, unlike previous AI-based efforts, this one does not require expertise in coding. What’s more, scientists can shift its focus simply by adjusting the narrative language in the prompts. This new system, which they dub the “ChatGPT Chemistry Assistant,” could also be useful in other fields of chemistry, according to the researchers. More
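    To make the literature-extraction step concrete, here is a minimal sketch of how a chat model can be prompted to pull synthesis conditions out of a single paper excerpt as structured data. It is not the authors’ actual prompt or code: the field names, the model choice and the use of the OpenAI Python client are illustrative assumptions.

    ```python
    # Illustrative sketch only (not the published prompts): ask a chat model to
    # extract MOF synthesis conditions from a paper excerpt as JSON.
    import json
    from openai import OpenAI  # assumes the openai Python package (v1 interface)

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    EXTRACTION_PROMPT = (
        "You are a chemistry assistant. From the synthesis paragraph below, extract "
        "only facts that are explicitly stated, as JSON with the keys metal_source, "
        "organic_linker, solvent, temperature_C, reaction_time_h, and product_yield. "
        "If a value is not reported, use null. Do not guess or invent values."
    )

    def extract_synthesis_conditions(paragraph: str) -> dict:
        """Return a dict of synthesis parameters extracted from one paper excerpt."""
        response = client.chat.completions.create(
            model="gpt-4",  # placeholder model name
            messages=[
                {"role": "system", "content": EXTRACTION_PROMPT},
                {"role": "user", "content": paragraph},
            ],
            temperature=0,  # low temperature discourages made-up ("hallucinated") values
        )
        # Real code would validate the reply before trusting it as JSON.
        return json.loads(response.choices[0].message.content)
    ```

    Records returned this way, one per paper, could then be pooled into the kind of tabular dataset on which a downstream prediction model is trained.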

  • How sure is sure? Incorporating human error into machine learning

    Researchers are developing a way to incorporate one of the most human of characteristics — uncertainty — into machine learning systems.
    Human error and uncertainty are concepts that many artificial intelligence systems fail to grasp, particularly in systems where a human provides feedback to a machine learning model. Many of these systems are programmed to assume that humans are always certain and correct, but real-world decision-making includes occasional mistakes and uncertainty.
    Researchers from the University of Cambridge, along with The Alan Turing Institute, Princeton, and Google DeepMind, have been attempting to bridge the gap between human behaviour and machine learning, so that uncertainty can be more fully accounted for in AI applications where humans and machines are working together. This could help reduce risk and improve trust and reliability of these applications, especially where safety is critical, such as medical diagnosis.
    The team adapted a well-known image classification dataset so that humans could provide feedback and indicate their level of uncertainty when labelling a particular image. The researchers found that training with these uncertain labels can improve the systems’ handling of uncertain feedback, although incorporating human input also lowers the overall performance of these hybrid systems; a simplified sketch of this soft-label idea appears at the end of this story. Their results will be reported at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society (AIES 2023) in Montréal.
    ‘Human-in-the-loop’ machine learning systems — a type of AI system that enables human feedback — are often framed as a promising way to reduce risks in settings where automated models cannot be relied upon to make decisions alone. But what if the humans are unsure?
    “Uncertainty is central in how humans reason about the world but many AI models fail to take this into account,” said first author Katherine Collins from Cambridge’s Department of Engineering. “A lot of developers are working to address model uncertainty, but less work has been done on addressing uncertainty from the person’s point of view.”
    We are constantly making decisions based on the balance of probabilities, often without really thinking about it. Most of the time — for example, if we wave at someone who looks just like a friend but turns out to be a total stranger — there’s no harm if we get things wrong. However, in certain applications, uncertainty comes with real safety risks. More
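    As a rough illustration of the underlying idea, the sketch below trains against “soft” labels that encode an annotator’s stated uncertainty instead of a single hard class. It is not the study’s code; the loss function and the example numbers are assumptions made for clarity.

    ```python
    # Minimal sketch: cross-entropy against a probability distribution over classes,
    # so an annotator who is 70% sure of "cat" and 30% sure of "dog" is represented as such.
    import torch
    import torch.nn.functional as F

    def soft_label_loss(logits: torch.Tensor, soft_targets: torch.Tensor) -> torch.Tensor:
        """Cross-entropy where the target is a distribution, not a single class index."""
        log_probs = F.log_softmax(logits, dim=1)
        return -(soft_targets * log_probs).sum(dim=1).mean()

    # One image, three classes; the annotator was unsure between classes 0 and 1.
    logits = torch.randn(1, 3, requires_grad=True)   # model output for one image
    soft_targets = torch.tensor([[0.7, 0.3, 0.0]])   # annotator's stated uncertainty
    loss = soft_label_loss(logits, soft_targets)
    loss.backward()                                  # gradients flow as in ordinary training
    ```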

  • How randomized data can improve our security

    Huge streams of data pass through our computers and smartphones every day. In simple terms, technical devices contain two essential units to process this data: a processor, which acts as a kind of control center, and the main memory, or RAM. Modern processors use a cache as a bridge between the two, since memory is much slower at providing data than the processor is at processing it. This cache often contains private data that could be an attractive target for attackers. A team of scientists from Bochum, Germany, in cooperation with researchers from Japan, has now developed an innovative cipher that not only offers greater security than previous approaches, but is also more efficient and faster. They are presenting their work at the prestigious USENIX Security Symposium in Anaheim, California (USA).
    The team includes Dr. Federico Canale and Professor Gregor Leander from the Chair of Symmetric Cryptography, Jan Philipp Thoma and Professor Tim Güneysu from the Chair of Security Engineering, all from Ruhr University Bochum, as well as Yosuke Todo from NTT Social Informatics Laboratories and Rei Ueno from Tohoku University (Japan).
    Cache not well protected against side-channel attacks until now
    Years ago, CASA PI Professor Yuval Yarom, who has been at Ruhr University since April 2023, discovered that the cache is not well protected against a certain type of attack. The serious Spectre and Meltdown vulnerabilities made headlines at the time because they affected all popular microprocessors as well as cloud services. Caches are unobtrusive, but they perform an important task: they store data that is requested very frequently. Their main function is to reduce latency: if the CPU had to fetch from the slower RAM every time it needed data, the system would slow down considerably. This is why the CPU fetches frequently used data from the cache instead. However, attackers can exploit this communication between CPU and cache. Their method: they overwrite the cache’s unsecured data. The system then has to request the data from main memory because it can no longer find it in the cache, and this process is measurably slower. “In so-called timing side-channel attacks, attackers can measure the time differences and use them to observe memory accesses by other programs. Thus, they can steal private keys for encryption algorithms, for example,” explains Jan Philipp Thoma from the Chair of Security Engineering.
    Innovative mathematical solution
    While patches have been developed to fix the vulnerability for certain attacks, they have failed to provide provable security. However, the team from Bochum and Japan has now come up with an innovative solution: “Our idea is to use mathematical processes to randomize the data in the cache,” explains Gregor Leander, who recently received an ERC Advanced Grant for his research. This randomization of the CPU’s caches can help prevent attacks by keeping attackers from selectively evicting data from the cache.
    “The interdisciplinary approach of combining cryptography and hardware security considerations is a novelty in computer security. While there have been previous ideas for randomized cache architectures, none have been very efficient and none have been able to completely withstand strong attackers,” said Tim Güneysu, who heads the Chair of Security Engineering. The new SCARF model uses block cipher encryption, a completely new idea for the field, according to the researchers. “Normally, we encrypt data with 128 bits; in the cache, we sometimes work with only 10 bits. This is a complex process, because it takes much longer to mix this data with a large key,” said Gregor Leander. The large key is needed because a shorter encryption of such small amounts of data could be more easily broken by attackers. More
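    The core idea, hiding which cache set an address maps to behind a keyed secret permutation, can be illustrated with a toy sketch. This is not the SCARF cipher: the HMAC-based permutation, the key handling and the 10-bit index width below are stand-ins chosen purely for illustration, whereas a real design would use a dedicated low-latency block cipher in hardware.

    ```python
    # Toy illustration of a randomized cache index (not the SCARF design itself).
    import hmac, hashlib, secrets

    INDEX_BITS = 10                   # a cache with 2**10 = 1024 sets
    NUM_SETS = 1 << INDEX_BITS
    KEY = secrets.token_bytes(16)     # per-boot secret key held by the hardware

    # Build a keyed permutation of the index space by sorting indices by their MAC.
    _perm = sorted(range(NUM_SETS),
                   key=lambda i: hmac.new(KEY, i.to_bytes(2, "big"), hashlib.sha256).digest())
    _randomized_index = {plain: position for position, plain in enumerate(_perm)}

    def cache_set(address: int) -> int:
        """Return the randomized cache set for a memory address."""
        plain_index = (address >> 6) & (NUM_SETS - 1)  # drop 6 offset bits of a 64-byte line
        return _randomized_index[plain_index]
    ```

    Without the key, an attacker cannot tell which addresses collide in the same cache set, which is exactly the knowledge a cache timing attack relies on.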

  • Turning big data into better breeds and varieties: Can AI help feed the planet?

    Artificial intelligence could hold the key to feeding 10 billion people by 2050 in the face of climate change and rapidly evolving pests and pathogens, according to researchers at The University of Queensland.
    Professor Lee Hickey from UQ’s Queensland Alliance for Agriculture and Food Innovation said AI offered opportunities to accelerate the development of high-performing plants and animals for better farm sustainability and profitability.
    “Breeders are collecting billions of data points, but the big challenge is how we turn this colossal amount of data into knowledge to support smarter decisions in the breeding process,” Professor Hickey said.
    “AI can help to identify which plants and animals we use for crossing or carry forward to the next generation.”
    Professor Ben Hayes, the co-inventor of genomic prediction, said the QAAFI team had identified four applications for AI in crop and livestock breeding.
    “The first one is deciding what to breed — it might sound simple, but this decision is becoming more complex,” Professor Hayes said.
    “In an increasingly challenging environment, consumer acceptance will be more important, so AI is a good way to pull together the preferences of millions of people. More

  • A new weapon in the war on robocall scams

    The latest weapon in the war on robocalls is an automated system that analyzes the content of these unsolicited bulk calls to shed light on both the scope of the problem and the type of scams being perpetrated by robocalls. The tool, called SnorCall, is designed to help regulators, phone carriers and other stakeholders better understand and monitor robocall trends — and take action against related criminal activity.
    “Although telephone service providers, regulators and researchers have access to call metadata — such as the number being called and the length of the call — they do not have tools to investigate what is being said on robocalls at the vast scale required,” says Brad Reaves, corresponding author of a paper on the work and an assistant professor of computer science at North Carolina State University.
    “For one thing, providers don’t want to listen in on calls — it raises significant privacy concerns. But robocalls are a huge problem, and are often used to conduct criminal fraud. To better understand the scope of this problem, and gain insights into these scams, we need to know what is being said on these robocalls.
    “We’ve developed a tool that allows us to characterize the content of robocalls,” Reaves says. “And we’ve done it without violating anyone’s privacy; in collaboration with a telecommunications company called Bandwidth, we operate more than 60,000 phone numbers that are used solely by us to monitor unsolicited robocalls. We did not use any phone numbers of actual customers.”
    The new tool, SnorCall, essentially records all robocalls received on the monitored phone lines. It bundles together robocalls that use the same audio, reducing the number of robocalls whose content needs to be analyzed by around an order of magnitude. These recorded robocalls are then transcribed and analyzed by a machine learning framework called Snorkel that can be used to characterize each call.
    “SnorCall essentially uses labels to identify what each robocall is about,” Reaves says. “Does it mention a specific company or government program? Does it request specific personal information? If so, what kind? Does it request money? If so, how much? This is all fed into a database that we can use to identify trends or behaviors.”
    As a proof of concept, the researchers used SnorCall to assess 232,723 robocalls collected over 23 months on the more than 60,000 phone lines dedicated to the study. More
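    As a rough illustration of how Snorkel-style labeling functions can tag transcripts, the sketch below applies a few regex rules to a robocall transcript. The label names and patterns are invented here for clarity and are not the actual SnorCall labeling functions.

    ```python
    # Simplified, made-up labeling functions in the spirit of Snorkel weak supervision.
    import re

    ABSTAIN, SSA_SCAM, WARRANTY_SCAM, PAYMENT_REQUEST = -1, 0, 1, 2

    def lf_social_security(transcript: str) -> int:
        """Flag calls that claim to be from the Social Security Administration."""
        return SSA_SCAM if re.search(r"social security", transcript, re.I) else ABSTAIN

    def lf_car_warranty(transcript: str) -> int:
        """Flag calls about an 'expiring' vehicle warranty."""
        return WARRANTY_SCAM if re.search(r"(vehicle|car).{0,40}warranty", transcript, re.I) else ABSTAIN

    def lf_asks_for_money(transcript: str) -> int:
        """Flag calls that mention a dollar amount, e.g. '$499'."""
        return PAYMENT_REQUEST if re.search(r"\$\d+", transcript) else ABSTAIN

    def label_call(transcript: str) -> list[int]:
        """Run every labeling function; non-ABSTAIN votes feed the downstream database."""
        return [lf(transcript) for lf in (lf_social_security, lf_car_warranty, lf_asks_for_money)]
    ```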

  • Disclosing ‘true normal price’ recommended to protect consumers from deceptive pricing

    Fifty years ago, the Federal Trade Commission (FTC) stopped enforcing deceptive pricing regulations, assuming that competition would keep retailers honest.
    Since then, competition has increased significantly — yet the practice of posting false, inflated comparison prices alongside sale prices has continued unchecked.
    Think of an advertisement from a furniture store that touts a $599 sale price for a couch as an $800 savings from a promoted regular price of $1,399. The problem is that the store may have never offered the couch for sale at the higher price.
    This practice, called “fictitious pricing,” is ubiquitous in the retail trade. One recent investigation tracked the prices of 25 major retailers and found that “most stores’ sale prices … are bogus discounts” because the listed regular price is seldom, if ever, the price charged for the products.
    “Competition and the Regulation of Fictitious Pricing” is forthcoming in the Journal of Marketing from Joe Urbany, professor of marketing at the University of Notre Dame’s Mendoza College of Business, along with Rick Staelin from Duke University and Donald Ngwe, a senior researcher at Microsoft.
    The paper critically evaluates two assumptions underlying the FTC’s decision to halt deceptive pricing prosecution.
    The first is that inflated reference prices are largely ignored by consumers, who focus primarily on the sale prices, leading to price competition that pushes selling prices lower and renders reference prices harmless. More

  • People’s everyday pleasures may improve cognitive arousal and performance

    Listening to music and drinking coffee are the sorts of everyday pleasures that can impact a person’s brain activity in ways that improve cognitive performance, including in tasks requiring concentration and memory.
    That’s a finding of a new NYU Tandon School of Engineering study involving MINDWATCH, a groundbreaking brain-monitoring technology.
    Developed over the past six years by NYU Tandon’s Biomedical Engineering Associate Professor Rose Faghih, MINDWATCH is an algorithm that analyzes a person’s brain activity from data collected via any wearable device that can monitor electrodermal activity (EDA), that is, changes in the skin’s electrical conductance that are triggered by emotional stress and linked to sweat responses.
    In this recent MINDWATCH study, published in Nature Scientific Reports, subjects wearing skin-monitoring wristbands and brain monitoring headbands completed cognitive tests while listening to music, drinking coffee and sniffing perfumes reflecting their individual preferences. They also completed those tests without any of those stimulants.
    The MINDWATCH algorithm revealed that music and coffee measurably altered subjects’ brain arousal, essentially putting them in a physiological “state of mind” that could modulate their performance in the working memory tasks they were performing.
    Specifically, MINDWATCH determined the stimulants triggered increased “beta band” brain wave activity, a state associated with peak cognitive performance. Perfume had a modest positive effect as well, suggesting the need for further study.
    “The pandemic has impacted the mental well-being of many people across the globe and now more than ever, there is a need to seamlessly monitor the negative impact of everyday stressors on one’s cognitive function,” said Faghih. “Right now MINDWATCH is still under development, but our eventual goal is that it will contribute to technology that could allow any person to monitor his or her own brain cognitive arousal in real time, detecting moments of acute stress or cognitive disengagement, for example. At those times, MINDWATCH could ‘nudge’ a person towards simple and safe interventions — perhaps listening to music — so they could get themselves into a brain state in which they feel better and perform job or school tasks more successfully.”
    The specific cognitive test used in this study — a working memory task, called the n-back test — involves presenting a sequence of stimuli (in this case, images or sounds) one by one and asking the subject to indicate whether the current stimulus matches the one presented “n” items back in the sequence. This study employed a 1-back test — the participant responded “yes” when the current stimulus is the same as the one presented one item back — and a more challenging 3-back test, asking the same for three items back. More
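    For readers unfamiliar with the task, a minimal sketch of how n-back targets are defined follows; the scoring function and the example letter sequence are illustrative and not taken from the study.

    ```python
    # Which positions in a stimulus sequence are n-back matches?
    def nback_targets(stimuli: list[str], n: int) -> list[bool]:
        """True where the current stimulus equals the stimulus n positions back."""
        return [i >= n and stimuli[i] == stimuli[i - n] for i in range(len(stimuli))]

    seq = list("ABBACABA")
    print(nback_targets(seq, 1))  # [False, False, True, False, False, False, False, False]
    print(nback_targets(seq, 3))  # [False, False, False, True, False, False, False, False]
    ```

    A participant’s 1-back or 3-back score is then simply how often their “match” responses agree with these target positions.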

  • Researchers use SPAD detector to achieve 3D quantum ghost imaging

    Researchers have reported the first 3D measurements acquired with quantum ghost imaging. The new technique enables 3D imaging on a single photon level, yielding the lowest photon dose possible for any measurement.
    “3D imaging with single photons could be used for various biomedical applications, such as eye care diagnostics,” said researcher Carsten Pitsch from the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation and Karlsruhe Institute of Technology, both in Germany. “It can be applied, without any risk of damage, to image light-sensitive materials and tissues, as well as drugs that become toxic when exposed to light.”
    In the Optica Publishing Group journal Applied Optics, the researchers describe their new approach, which incorporates new single photon avalanche diode (SPAD) array detectors. They apply the new imaging scheme, which they call asynchronous detection, to perform 3D imaging with quantum ghost imaging.
    “Asynchronous detection might also be useful for military or security applications since it could be used to observe without being detected while also reducing the effects of over-illumination, turbulence and scattering,” said Pitsch. “We also want to investigate its use in hyperspectral imaging, which could allow multiple spectral regions to be recorded simultaneously while using a very low photon dose. This could be very useful for biological analysis.”
    Adding a third dimension
    Quantum ghost imaging creates images using entangled photon-pairs in which only one member of the photon pair interacts with the object. The detection time for each photon is then used to identify entangled pairs, which allows an image to be reconstructed. This approach not only allows imaging at extremely low light levels but also means that the objects being imaged do not have to interact with the photons used for imaging.
    Previous setups for quantum ghost imaging were not capable of 3D imaging because they relied on intensified charge-coupled device (ICCD) cameras. Although these cameras have good spatial resolution, they are time-gated and don’t allow the independent temporal detection of single photons. More
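    The coincidence step at the heart of this reconstruction can be sketched in a simplified way: pairing detection timestamps from the camera arm and the object arm that fall within a short window, so that only correlated (entangled) detections contribute to the image. The window width, data layout and greedy matching strategy below are assumptions made for illustration, not the authors’ code.

    ```python
    # Simplified coincidence matching between two detector arms.
    COINCIDENCE_WINDOW_NS = 1.0  # assumed window width; the real value depends on the setup

    def match_pairs(camera_events, object_events, window=COINCIDENCE_WINDOW_NS):
        """camera_events: iterable of (timestamp_ns, pixel); object_events: sorted timestamps.
        Returns the pixels whose photon has a partner detection within the window."""
        matched_pixels = []
        j = 0
        for t_cam, pixel in sorted(camera_events):
            # skip object-arm timestamps that are already too early to match
            while j < len(object_events) and object_events[j] < t_cam - window:
                j += 1
            if j < len(object_events) and abs(object_events[j] - t_cam) <= window:
                matched_pixels.append(pixel)
        return matched_pixels
    ```

    Accumulating the matched pixels over many photon pairs is what builds up the ghost image.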