More stories

  • Reverse engineering Jackson Pollock

    Can a machine be trained to paint like Jackson Pollock? More specifically, can 3D printing harness Pollock’s distinctive techniques to quickly and accurately print complex shapes?
    “I wanted to know, can one replicate Jackson Pollock, and reverse engineer what he did,” said L. Mahadevan, the Lola England de Valpine Professor of Applied Mathematics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), and Professor of Organismic and Evolutionary Biology, and of Physics in the Faculty of Arts and Sciences (FAS).
    Mahadevan and his team combined physics and machine learning to develop a new 3D-printing technique that can quickly create complex physical patterns — including replicating a segment of a Pollock painting — by leveraging the same natural fluid instability that Pollock used in his work.
    The research is published in Soft Matter.
    3D and 4D printing have revolutionized manufacturing, but the process is still painstakingly slow.
    The issue, as usual, is physics. Liquid inks are bound by the rules of fluid dynamics, which means that when they fall from a height they become unstable, folding and coiling in on themselves. You can observe this at home by drizzling honey on a piece of toast.
    More than two decades ago, Mahadevan provided a simple physical explanation of this process, and later suggested how Pollock could have intuitively used these ideas to paint from a distance.
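    As a rough illustration of how a coiling liquid thread deposited from a moving source lays down loops and waves, here is a toy kinematic sketch. It is not the team's model; the nozzle speed, coiling radius and coiling frequency below are made-up parameters chosen only to show how the deposited trace changes with speed.
    ```python
    # Toy illustration (not the paper's model): a thread that coils at angular
    # frequency `omega` with radius `R` while its source translates at speed `v`
    # lays down a looping or wavy trace. All parameter values are invented.
    import numpy as np
    import matplotlib.pyplot as plt

    def deposited_trace(v, R, omega, t_max=20.0, n=4000):
        """Kinematic sketch of the pattern left on the canvas."""
        t = np.linspace(0.0, t_max, n)
        x = v * t + R * np.cos(omega * t)   # translation plus circular coiling
        y = R * np.sin(omega * t)
        return x, y

    fig, axes = plt.subplots(3, 1, figsize=(8, 5))
    for ax, v in zip(axes, [0.2, 1.0, 2.5]):   # slow, intermediate, fast source motion
        x, y = deposited_trace(v=v, R=1.0, omega=2.0)
        ax.plot(x, y, lw=0.8)
        ax.set_ylabel(f"v = {v}")
        ax.set_aspect("equal")
    plt.tight_layout()
    plt.show()
    ```
    In this toy picture, loops appear when the source moves slower than the coiling speed and give way to waves when it moves faster, which is the kind of qualitative control a moving nozzle (or a painter's hand) has over the deposited pattern.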

  • New techniques efficiently accelerate sparse tensors for massive AI models

    Researchers from MIT and NVIDIA have developed two techniques that accelerate the processing of sparse tensors, a type of data structure that’s used for high-performance computing tasks. The complementary techniques could result in significant improvements to the performance and energy efficiency of systems like the massive machine-learning models that drive generative artificial intelligence.
    Tensors are data structures used by machine-learning models. Both of the new methods seek to efficiently exploit what’s known as sparsity — zero values — in the tensors. When processing these tensors, the hardware can skip over the zeros and save on both computation and memory. For instance, anything multiplied by zero is zero, so that operation can be skipped. The tensor can also be compressed (zeros don’t need to be stored) so a larger portion can be kept in on-chip memory.
    However, there are several challenges to exploiting sparsity. Finding the nonzero values in a large tensor is no easy task. Existing approaches often limit the locations of nonzero values by enforcing a sparsity pattern to simplify the search, but this limits the variety of sparse tensors that can be processed efficiently.
    Another challenge is that the number of nonzero values can vary across different regions of the tensor, which makes it difficult to determine how much space is required to store each region in memory. To make sure a region fits, more space is often allocated than is needed, leaving the storage buffer underutilized. This increases off-chip memory traffic, which requires extra computation. A toy sketch at the end of this item illustrates both issues.
    The MIT and NVIDIA researchers crafted two solutions to address these problems. For one, they developed a technique that allows the hardware to efficiently find the nonzero values for a wider variety of sparsity patterns.
    For the other solution, they created a method that can handle the case where the data do not fit in memory, which increases the utilization of the storage buffer and reduces off-chip memory traffic.
    Both methods boost the performance and reduce the energy demands of hardware accelerators specifically designed to speed up the processing of sparse tensors.
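    To make the sparsity ideas above concrete, here is a toy Python sketch, not the MIT/NVIDIA hardware technique: it compresses away zeros, skips zero multiplications during a matrix-vector product, and shows how uneven per-row nonzero counts complicate fixed-size buffer allocation. The matrix is a made-up example.
    ```python
    # Toy sketch of the two ideas in the summary (not the researchers' technique):
    # store only nonzero values plus their positions, skip multiplications by zero,
    # and note how uneven nonzero counts complicate fixed-size buffer allocation.
    import numpy as np

    def compress_rows(dense):
        """CSR-like compression: keep (column, value) pairs per row."""
        return [[(j, v) for j, v in enumerate(row) if v != 0.0] for row in dense]

    def sparse_matvec(compressed, x):
        """Multiply using only the stored nonzeros; zeros are never touched."""
        return np.array([sum(v * x[j] for j, v in row) for row in compressed])

    dense = np.array([[0.0, 3.0, 0.0, 0.0],
                      [1.0, 0.0, 2.0, 5.0],
                      [0.0, 0.0, 0.0, 0.0],
                      [4.0, 0.0, 0.0, 0.0]])
    x = np.array([1.0, 2.0, 3.0, 4.0])

    compressed = compress_rows(dense)
    print(sparse_matvec(compressed, x))   # matches the dense product below
    print(dense @ x)

    # Nonzeros per row vary (1, 3, 0, 1), so a buffer sized for the worst case (3)
    # sits mostly empty for the other rows: the over-allocation problem above.
    nnz = [len(row) for row in compressed]
    print("nonzeros per row:", nnz, "worst-case buffer size:", max(nnz))
    ```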

  • Human input boosts citizens’ acceptance of AI and perceptions of fairness, study shows

    Increasing human input when AI is used for public services boosts acceptance of the technology, a new study shows.
    The research shows citizens are concerned not only about AI fairness but also about potential human biases. They are in favour of AI being used in cases where administrative discretion is perceived as too large.
    Researchers found that citizens’ knowledge about AI does not alter their acceptance of the technology. More accurate and lower-cost systems also increased acceptance, and the cost and accuracy of the technology mattered more to respondents than human involvement.
    The study, by Laszlo Horvath from Birkbeck, University of London and Oliver James, Susan Banducci and Ana Beduschi from the University of Exeter, is published in the journal Government Information Quarterly.
    Academics carried out an experiment with 2,143 people in the UK. Respondents were asked whether they would prefer more or less AI in systems used to process immigration visas and parking permits.
    Researchers found more human involvement tended to increase acceptance of AI. Yet, when substantial human discretion was introduced in parking permit scenarios, respondents preferred more limited human input.
    System-level factors such as high accuracy, the presence of an appeals system, increased transparency, reduced cost, non-sharing of data, and the absence of private company involvement all boosted both acceptance and perceived procedural fairness.

  • Hey, Siri: Moderate AI voice speed encourages digital assistant use

    Voice speed and interaction style may determine whether a user sees a digital assistant like Alexa or Siri as a helpful partner or something to control, according to a team led by Penn State researchers. The findings reveal insights into the parasocial, or one-sided, relationships that people can form with digital assistants, according to the researchers.
    They reported their findings in the Journal of Business Research.
    “We endow these digital assistants with personalities and human characteristics, and it impacts how we interact with the devices,” said Brett Christenson, assistant clinical professor of marketing at Penn State and first author of the study. “If you could design the perfect voice for every consumer, it could be a very useful tool.”
    The researchers found that a digital assistant’s moderate talking speed, compared to faster and slower speeds, increased the likelihood that a person would use the assistant. In addition, conversation-like interactions, rather than monologues, mitigated the negative effects of faster and slower voice speeds and increased user trust in the digital assistant, according to the researchers.
    “As people adopt devices that can speak to them, having a consistent, branded voice can be used as a strategic competitive tool,” Christenson said. “What this paper shows is that when you’re designing the voice of a digital assistant, not all voices are equal in terms of their impact on the customer.”
    Christenson and his colleagues conducted three experiments to measure how changing the voice speed and interaction style of a digital assistant affected a user’s likelihood to use and trust the device. In the first study, they asked 753 participants to use a digital assistant to help them create a personal budget. The digital assistant recited a monological, or one-way, script at either a slow, moderate or fast pace.
    The researchers then asked the participants how likely they would be to use the digital assistant to create a personal budget, on a scale from one (not at all likely) to seven (very likely). They found that participants who heard the moderate voice speed were more likely to use the digital assistant than those who heard the slow or fast voices.
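    As a purely hypothetical illustration of how likelihood ratings like these might be compared across the three voice-speed conditions, here is a minimal sketch with simulated data; it is not the study's dataset or analysis, and the means below are invented.
    ```python
    # Hypothetical sketch: compare 1-7 likelihood ratings across voice-speed
    # conditions (slow / moderate / fast). The data are simulated, not the study's.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    ratings = {
        "slow":     np.clip(rng.normal(4.2, 1.3, 250).round(), 1, 7),
        "moderate": np.clip(rng.normal(5.1, 1.2, 250).round(), 1, 7),
        "fast":     np.clip(rng.normal(4.0, 1.4, 250).round(), 1, 7),
    }

    for condition, values in ratings.items():
        print(f"{condition:>8}: mean likelihood = {values.mean():.2f}")

    # One-way ANOVA across the three conditions
    f_stat, p_value = stats.f_oneway(*ratings.values())
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
    ```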

  • Scientists train AI to illuminate drugs’ impact

    An ideal medicine for one person may prove ineffective or harmful for someone else, and predicting who could benefit from a given drug has been difficult. Now, an international team led by neuroscientist Kirill Martemyanov, Ph.D., based at The Herbert Wertheim UF Scripps Institute for Biomedical Innovation & Technology, is training artificial intelligence to assist.
    Martemyanov’s group used a powerful molecular tracking technology to profile the action of more than 100 prominent cellular drug targets, including their more common genetic variations. The scientists then used that data to develop and train an AI-anchored platform. In a study that appears in the Oct. 31 issue of the journal Cell Reports, Martemyanov and colleagues report that their algorithm predicted with more than 80% accuracy how cell surface receptors would respond to drug-like molecules; a generic sketch of this kind of prediction task appears at the end of this item.
    The data used to train the algorithm were gathered over a decade of experimentation. The team’s long-range goal is to refine the tool and use it to help power the design of true precision medications, said Martemyanov, who chairs the institute’s neuroscience department.
    “We all think of ourselves as more or less normal, but we are not. We are all basically mutants. We have tremendous variability in our cell receptors,” Martemyanov said. “If doctors don’t know what exact genetic alteration you have, you just have this one-size-fits-all approach to prescribing, so you have to experiment to find what works for you.”
    One-third of all drugs work by binding to cell-surface receptors called G protein-coupled receptors, or GPCRs. These are complexes that cross the cell membrane, with a “docking station” on the cell’s exterior and a branch that extends into the cell. When a drug pulls into its GPCR dock, the branch moves, triggering a G protein inside the cell and setting off a cascade of changes, like falling dominoes.
    The result of activating or blocking this process might be anything from pain relief to quieting allergies or reducing blood pressure. Besides medications, hormones, neurotransmitters and even scents dock with GPCRs to direct biological activities.
    Scientists have catalogued about 800 GPCRs in humans. About half are dedicated to senses, especially smell. About 250 more receive medicines or other known molecules. Martemyanov’s team had to invent a new protocol to observe and document them. They found many surprises. Some GPCRs worked as expected, but others didn’t, notably those for neurotransmitters called glutamate.
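    The team's AI-anchored platform is not detailed in this summary. As a generic, hypothetical illustration of the shape of such a prediction task, the sketch below trains an off-the-shelf classifier on synthetic receptor/compound features and reports held-out accuracy; the features, labels and dataset size are placeholders, not the group's data.
    ```python
    # Generic sketch of a receptor-response prediction task (not the team's platform):
    # train a classifier on featurized receptor/compound pairs and report held-out
    # accuracy. The features and labels below are random placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(1)
    n_pairs, n_features = 2000, 64              # e.g., receptor-variant + compound descriptors
    X = rng.normal(size=(n_pairs, n_features))
    y = (X[:, :8].sum(axis=1) > 0).astype(int)  # stand-in for "responds" vs "does not"

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
    ```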

  • Late not great — imperfect timekeeping places significant limit on quantum computers

    New research from a consortium of quantum physicists, led by Trinity College Dublin’s Dr Mark Mitchison, shows that imperfect timekeeping places a fundamental limit on quantum computers and their applications. The team claims that even tiny timing errors add up to have a significant impact on any large-scale algorithm, posing another problem that must eventually be solved if quantum computers are to fulfil the lofty aspirations that society has for them.
    It is difficult to imagine modern life without clocks to help organise our daily schedules; with a digital clock in every person’s smartphone or watch, we take precise timekeeping for granted — although that doesn’t stop people from being late!
    And for quantum computers, precise timing is even more essential, as they exploit the bizarre behaviour of tiny particles — such as atoms, electrons, and photons — to process information. While this technology is still at an early stage, it promises to dramatically speed up the solution of important problems, like the discovery of new pharmaceuticals or materials. This potential has driven significant investment across the private and public sector, such as the establishment of the Trinity Quantum Alliance academic-industrial partnership launched earlier this year.
    Currently, however, quantum computers are still too small to be useful. A major challenge to scaling them up is the extreme fragility of the quantum states that are used to encode information. In the macroscopic world, this is not a problem. For example, you can add numbers perfectly using an abacus, in which wooden beads are pushed back and forth to represent arithmetic operations. The wooden beads have very stable states: each one sits in a specific place and it will stay in place unless intentionally moved. Importantly, whether you move the bead quickly or slowly does not affect the result.
    But in quantum physics, it is more complicated.
    “Mathematically speaking, changing a quantum state in a quantum computer corresponds to a rotation in an abstract high-dimensional space,” says Jake Xuereb from the Atomic Institute at the Vienna University of Technology, the first author of the paper. “In order to achieve the desired state in the end, the rotation must be applied for a very specific period of time — otherwise you turn the state either too little or too far.”
    Given that real clocks are never perfect, the team investigated the impact of imperfect timing on quantum algorithms.
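    To make the over- and under-rotation point concrete, here is a minimal single-qubit sketch with illustrative parameters (not the paper's analysis): a gate is modelled as a rotation applied for a set duration, so Gaussian jitter on that duration rotates the state by the wrong angle and lowers the fidelity with the intended state.
    ```python
    # Minimal single-qubit sketch (illustrative numbers, not the paper's analysis):
    # a gate is a rotation applied for a set time, so a timing error over- or
    # under-rotates the state and reduces fidelity with the target state.
    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    I2 = np.eye(2, dtype=complex)

    def rx(theta):
        """Rotation about the x-axis by angle theta."""
        return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X

    omega = np.pi            # assumed drive rate: rotation angle = omega * time
    t_ideal = 0.5            # ideal duration for a pi/2 rotation
    psi0 = np.array([1, 0], dtype=complex)
    target = rx(omega * t_ideal) @ psi0

    rng = np.random.default_rng(7)
    for jitter in [0.001, 0.01, 0.05]:          # relative timing error (std. dev.)
        t_actual = t_ideal * (1 + rng.normal(0, jitter, 10000))
        states = np.array([rx(omega * t) @ psi0 for t in t_actual])
        fidelity = np.abs(states @ target.conj()) ** 2
        print(f"timing jitter {jitter:5.1%}: mean gate fidelity = {fidelity.mean():.6f}")
    ```
    Each individual error is tiny, but a large-scale algorithm applies many such rotations in sequence, which is why the accumulated effect of imperfect clocks matters.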

  • Accelerating AI tasks while preserving data security

    With the proliferation of computationally intensive machine-learning applications, such as chatbots that perform real-time language translation, device manufacturers often incorporate specialized hardware components to rapidly move and process the massive amounts of data these systems demand.
    Choosing the best design for these components, known as deep neural network accelerators, is challenging because they can have an enormous range of design options. This difficult problem becomes even thornier when a designer seeks to add cryptographic operations to keep data safe from attackers.
    Now, MIT researchers have developed a search engine that can efficiently identify optimal designs for deep neural network accelerators that preserve data security while boosting performance.
    Their search tool, known as SecureLoop, is designed to consider how the addition of data encryption and authentication measures will impact the performance and energy usage of the accelerator chip. An engineer could use this tool to obtain the optimal design of an accelerator tailored to their neural network and machine-learning task; a toy version of such a security-aware search is sketched at the end of this item.
    When compared to conventional scheduling techniques that don’t consider security, SecureLoop can improve performance of accelerator designs while keeping data protected.
    Using SecureLoop could help a user improve the speed and performance of demanding AI applications, such as autonomous driving or medical image classification, while ensuring sensitive user data remains safe from some types of attacks.
    “If you are interested in doing a computation where you are going to preserve the security of the data, the rules that we used before for finding the optimal design are now broken. So all of that optimization needs to be customized for this new, more complicated set of constraints. And that is what [lead author] Kyungmi has done in this paper,” says Joel Emer, an MIT professor of the practice in computer science and electrical engineering and co-author of a paper on SecureLoop.
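    As a toy illustration of why security changes the search (this is not SecureLoop itself), the sketch below exhaustively scores a handful of tile sizes with a crude cost model in which each tile carries an authentication tag that consumes buffer space and adds off-chip traffic; all of the numbers are invented.
    ```python
    # Toy design-space search (not SecureLoop): choose the tile size that minimizes
    # estimated off-chip traffic, where adding authentication means each tile also
    # carries a tag that must fit in the buffer and be transferred. Numbers are made up.
    N = 4096                  # elements streamed through the accelerator
    BUFFER = 512              # on-chip buffer capacity, in elements
    TAG = 32                  # per-tile authentication tag, in element-equivalents
    CONTROL = 4               # fixed per-tile scheduling overhead

    def cost(tile, secure):
        """Crude model: per-tile data + optional tag + fixed control overhead."""
        n_tiles = -(-N // tile)                       # ceiling division
        per_tile = tile + (TAG if secure else 0) + CONTROL
        return n_tiles * per_tile

    def fits(tile, secure):
        """A tile (plus its tag, if secured) must fit in the on-chip buffer."""
        return tile + (TAG if secure else 0) <= BUFFER

    for secure in (False, True):
        candidates = [t for t in (32, 64, 128, 256, 512) if fits(t, secure)]
        best = min(candidates, key=lambda t: cost(t, secure))
        label = "with" if secure else "without"
        print(f"{label} security: best tile = {best}, estimated traffic = {cost(best, secure)}")
    ```
    In this toy model the tag both consumes buffer capacity and adds traffic, so the best tile size shifts once security is included, which is the kind of constraint-aware optimization the quote above describes.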

  • The brain may learn about the world the same way some computational models do

    To make our way through the world, our brain must develop an intuitive understanding of the physical world around us, which we then use to interpret sensory information coming into the brain.
    How does the brain develop that intuitive understanding? Many scientists believe that it may use a process similar to what’s known as “self-supervised learning.” This type of machine learning, originally developed as a way to create more efficient models for computer vision, allows computational models to learn about visual scenes based solely on the similarities and differences between them, with no labels or other information; a minimal sketch of this kind of similarity-based objective appears at the end of this item.
    A pair of studies from researchers at the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT offers new evidence supporting this hypothesis. The researchers found that when they trained models known as neural networks using a particular type of self-supervised learning, the resulting models generated activity patterns very similar to those seen in the brains of animals that were performing the same tasks as the models.
    The findings suggest that these models are able to learn representations of the physical world that they can use to make accurate predictions about what will happen in that world, and that the mammalian brain may be using the same strategy, the researchers say.
    “The theme of our work is that AI designed to help build better robots ends up also being a framework to better understand the brain more generally,” says Aran Nayebi, a postdoc in the ICoN Center. “We can’t say if it’s the whole brain yet, but across scales and disparate brain areas, our results seem to be suggestive of an organizing principle.”
    Nayebi is the lead author of one of the studies, co-authored with Rishi Rajalingham, a former MIT postdoc now at Meta Reality Labs, and senior authors Mehrdad Jazayeri, an associate professor of brain and cognitive sciences and a member of the McGovern Institute for Brain Research; and Robert Yang, an assistant professor of brain and cognitive sciences and an associate member of the McGovern Institute. Ila Fiete, director of the ICoN Center, a professor of brain and cognitive sciences, and an associate member of the McGovern Institute, is the senior author of the other study, which was co-led by Mikail Khona, an MIT graduate student, and Rylan Schaeffer, a former senior research associate at MIT.
    Both studies will be presented at the 2023 Conference on Neural Information Processing Systems (NeurIPS) in December.
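    The particular self-supervised method used in these studies is not specified in this summary. As a generic illustration of learning from similarities and differences between views, here is a minimal contrastive-style (InfoNCE-like) sketch on random data; the batch size, dimensions and temperature are arbitrary placeholders, not the researchers' setup.
    ```python
    # Minimal generic sketch of a contrastive self-supervised objective (one common
    # way to learn from similarities/differences between views); random data stands
    # in for real inputs, and this is not the models used in the MIT studies.
    import numpy as np

    rng = np.random.default_rng(0)
    batch, dim, temperature = 8, 32, 0.1

    # Embeddings of two augmented "views" of the same batch of inputs.
    z1 = rng.normal(size=(batch, dim))
    z2 = z1 + 0.1 * rng.normal(size=(batch, dim))   # second view: slightly perturbed

    def normalize(z):
        return z / np.linalg.norm(z, axis=1, keepdims=True)

    z1, z2 = normalize(z1), normalize(z2)
    logits = (z1 @ z2.T) / temperature              # pairwise cosine similarities

    # InfoNCE-style loss: each view-1 embedding should be most similar to its own
    # view-2 partner (the diagonal) and dissimilar to the other samples in the batch.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss = -np.mean(np.diag(log_probs))
    print(f"contrastive loss on this batch: {loss:.3f}")
    ```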