More stories

  • Late not great — imperfect timekeeping places significant limit on quantum computers

    New research from a consortium of quantum physicists, led by Trinity College Dublin’s Dr Mark Mitchison, shows that imperfect timekeeping places a fundamental limit on quantum computers and their applications. The team claims that even tiny timing errors add up to have a significant impact on any large-scale algorithm, posing another problem that must eventually be solved if quantum computers are to fulfil the lofty aspirations that society has for them.
    It is difficult to imagine modern life without clocks to help organise our daily schedules; with a digital clock in every person’s smartphone or watch, we take precise timekeeping for granted — although that doesn’t stop people from being late!
    And for quantum computers, precise timing is even more essential, as they exploit the bizarre behaviour of tiny particles — such as atoms, electrons, and photons — to process information. While this technology is still at an early stage, it promises to dramatically speed up the solution of important problems, like the discovery of new pharmaceuticals or materials. This potential has driven significant investment across the private and public sectors, such as the Trinity Quantum Alliance academic-industrial partnership launched earlier this year.
    Currently, however, quantum computers are still too small to be useful. A major challenge to scaling them up is the extreme fragility of the quantum states that are used to encode information. In the macroscopic world, this is not a problem. For example, you can add numbers perfectly using an abacus, in which wooden beads are pushed back and forth to represent arithmetic operations. The wooden beads have very stable states: each one sits in a specific place and it will stay in place unless intentionally moved. Importantly, whether you move the bead quickly or slowly does not affect the result.
    But in quantum physics, it is more complicated.
    “Mathematically speaking, changing a quantum state in a quantum computer corresponds to a rotation in an abstract high-dimensional space,” says Jake Xuereb from the Atomic Institute at the Vienna University of Technology, the first author of the paper. “In order to achieve the desired state in the end, the rotation must be applied for a very specific period of time — otherwise you turn the state either too little or too far.”
    Given that real clocks are never perfect, the team investigated the impact of imperfect timing on quantum algorithms.
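
    To make the over- or under-rotation concrete, here is a minimal NumPy sketch (an illustration, not the team’s model): a single-qubit X rotation is applied with Gaussian timing jitter, and the average fidelity against the ideal rotation is estimated.

    ```python
    import numpy as np

    # Pauli-X generator: the rotation R_x(theta) = exp(-i * theta/2 * X)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    I2 = np.eye(2, dtype=complex)

    def rx(theta):
        """Single-qubit rotation about the X axis by angle theta."""
        return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X

    theta_target = np.pi / 2   # intended rotation angle
    jitter = 0.01              # fractional timing error (assumed, illustrative)
    rng = np.random.default_rng(0)

    psi0 = np.array([1, 0], dtype=complex)    # start in |0>
    psi_ideal = rx(theta_target) @ psi0

    # Average fidelity over many shots: each shot over- or under-rotates
    # because the pulse ran slightly too long or too short.
    fids = []
    for _ in range(10_000):
        theta_actual = theta_target * (1 + jitter * rng.standard_normal())
        psi_noisy = rx(theta_actual) @ psi0
        fids.append(abs(np.vdot(psi_ideal, psi_noisy)) ** 2)

    print(f"mean fidelity with {jitter:.0%} timing jitter: {np.mean(fids):.6f}")
    ```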

  • Accelerating AI tasks while preserving data security

    With the proliferation of computationally intensive machine-learning applications, such as chatbots that perform real-time language translation, device manufacturers often incorporate specialized hardware components to rapidly move and process the massive amounts of data these systems demand.
    Choosing the best design for these components, known as deep neural network accelerators, is challenging because they can have an enormous range of design options. This difficult problem becomes even thornier when a designer seeks to add cryptographic operations to keep data safe from attackers.
    Now, MIT researchers have developed a search engine that can efficiently identify optimal designs for deep neural network accelerators that preserve data security while boosting performance.
    Their search tool, known as SecureLoop, is designed to consider how the addition of data encryption and authentication measures will impact the performance and energy usage of the accelerator chip. An engineer could use this tool to obtain the optimal design of an accelerator tailored to their neural network and machine-learning task.
    When compared to conventional scheduling techniques that don’t consider security, SecureLoop can improve performance of accelerator designs while keeping data protected.
    Using SecureLoop could help a user improve the speed and performance of demanding AI applications, such as autonomous driving or medical image classification, while ensuring sensitive user data remains safe from some types of attacks.
    “If you are interested in doing a computation where you are going to preserve the security of the data, the rules that we used before for finding the optimal design are now broken. So all of that optimization needs to be customized for this new, more complicated set of constraints. And that is what [lead author] Kyungmi has done in this paper,” says Joel Emer, an MIT professor of the practice in computer science and electrical engineering and co-author of a paper on SecureLoop.
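
    The tool itself is not reproduced here, but the core idea, folding cryptographic overhead into the schedule search, can be sketched as follows. Everything below (cost constants, block sizes, the cost model) is an illustrative assumption, not SecureLoop’s actual formulation.

    ```python
    # Illustrative sketch of a security-aware schedule search (not the real
    # SecureLoop). Assumptions: off-chip data is encrypted and authenticated in
    # fixed-size blocks, so every DRAM tile transfer pays a per-block crypto
    # cost; larger tiles amortize that cost but must fit in the on-chip buffer.

    BUFFER_WORDS = 4096          # on-chip buffer capacity (assumed)
    DRAM_WORD_COST = 2.0         # cost units per word moved off-chip (assumed)
    PER_TILE_OVERHEAD = 300.0    # fixed control cost per tile (assumed)
    CRYPTO_BLOCK = 64            # words per authentication block (assumed)
    CRYPTO_BLOCK_COST = 50.0     # cost to encrypt + MAC one block (assumed)

    def schedule_cost(tile_words, total_words):
        """Total cost of streaming a tensor in tiles, crypto overhead included."""
        n_tiles = -(-total_words // tile_words)            # ceil division
        blocks_per_tile = -(-tile_words // CRYPTO_BLOCK)   # tag granularity
        crypto = n_tiles * blocks_per_tile * CRYPTO_BLOCK_COST
        return (n_tiles * tile_words * DRAM_WORD_COST
                + n_tiles * PER_TILE_OVERHEAD
                + crypto)

    total = 1 << 16                                        # tensor size in words
    candidates = [2 ** k for k in range(5, 13) if 2 ** k <= BUFFER_WORDS]
    for t in candidates:
        print(f"tile={t:5d}  cost={schedule_cost(t, total):10.0f}")
    best = min(candidates, key=lambda t: schedule_cost(t, total))
    print("best tile size under security constraints:", best)
    ```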

  • The brain may learn about the world the same way some computational models do

    To make our way through the world, our brain must develop an intuitive understanding of the physical world around us, which we then use to interpret incoming sensory information.
    How does the brain develop that intuitive understanding? Many scientists believe that it may use a process similar to what’s known as “self-supervised learning.” This type of machine learning, originally developed as a way to create more efficient models for computer vision, allows computational models to learn about visual scenes based solely on the similarities and differences between them, with no labels or other information.
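    One common flavour of self-supervised learning is contrastive: two augmented views of the same image are pulled together in embedding space while views of different images are pushed apart. Below is a minimal PyTorch sketch of such a loss; it is illustrative only, as the studies’ exact objectives are not specified in this excerpt.

    ```python
    import torch
    import torch.nn.functional as F

    def info_nce_loss(z1, z2, temperature=0.5):
        """Contrastive (SimCLR-style) loss: two augmented views of the same
        image are pulled together; views of different images are pushed apart.
        No labels are used, only similarities and differences between scenes."""
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        n = z1.shape[0]
        z = torch.cat([z1, z2], dim=0)            # (2n, d) stacked embeddings
        sim = z @ z.T / temperature               # pairwise cosine similarities
        sim.fill_diagonal_(float('-inf'))         # a view is not its own pair
        # The positive for row i is its partner view at index i+n (mod 2n).
        targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
        return F.cross_entropy(sim, targets)

    # Toy usage: random "embeddings" standing in for an encoder's outputs.
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(info_nce_loss(z1, z2).item())
    ```
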
    A pair of studies from researchers at the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT offers new evidence supporting this hypothesis. The researchers found that when they trained models known as neural networks using a particular type of self-supervised learning, the resulting models generated activity patterns very similar to those seen in the brains of animals that were performing the same tasks as the models.
    The findings suggest that these models are able to learn representations of the physical world that they can use to make accurate predictions about what will happen in that world, and that the mammalian brain may be using the same strategy, the researchers say.
    “The theme of our work is that AI designed to help build better robots ends up also being a framework to better understand the brain more generally,” says Aran Nayebi, a postdoc in the ICoN Center. “We can’t say if it’s the whole brain yet, but across scales and disparate brain areas, our results seem to be suggestive of an organizing principle.”
    Nayebi is the lead author of one of the studies, co-authored with Rishi Rajalingham, a former MIT postdoc now at Meta Reality Labs, and senior authors Mehrdad Jazayeri, an associate professor of brain and cognitive sciences and a member of the McGovern Institute for Brain Research; and Robert Yang, an assistant professor of brain and cognitive sciences and an associate member of the McGovern Institute. Ila Fiete, director of the ICoN Center, a professor of brain and cognitive sciences, and an associate member of the McGovern Institute, is the senior author of the other study, which was co-led by Mikail Khona, an MIT graduate student, and Rylan Schaeffer, a former senior research associate at MIT.
    Both studies will be presented at the 2023 Conference on Neural Information Processing Systems (NeurIPS) in December.

  • Powder engineering adds AI to the mix

    A research team at Osaka Metropolitan University has developed a new simulation method that accurately predicts powder mixing using AI, and has succeeded in increasing calculation speed by approximately 350 times while maintaining the same level of accuracy as conventional methods. This method is expected to not only pave the way for more efficient and precise powder mixing processes but also open up new possibilities for industries seeking to enhance product quality and streamline production.
    Imagine a world without powders. It may sound exaggerated, but our daily lives are intricately connected to powders in various ways, from foods, pharmaceuticals, and cosmetics to batteries and ceramics. In all these industries, powder mixing is an important unit operation in which different types of powders are mixed to achieve uniformity. However, it can be difficult to predict which conditions are optimal for achieving the desired uniformity, as the process often relies on trial and error as well as engineers’ expertise.
    Numerical simulations using the discrete element method (DEM) have been widely used as an approach that can accurately predict powder mixing. The method calculates the motion of every particle over a very short time interval (one millionth of a second), uses those values to update the motion of the entire powder, and then repeats the process over and over to advance each particle a short step forward in time. As a result, predicting powder mixing takes a substantial amount of time, which hampers the simulation of large-scale, long-duration mixing processes.
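    A toy sketch of that time-stepping loop (purely illustrative, not the team’s code) shows why the cost grows so quickly: each pass advances the whole powder by only a microsecond.

    ```python
    import numpy as np

    # Minimal 1-D DEM-style loop: advance every particle by a tiny timestep,
    # then repeat many times to cover real mixing time.
    dt = 1e-6                      # one-microsecond timestep, as described above
    n, steps = 1000, 10_000        # particle count and repeat count (toy values)

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 1, n)       # particle positions
    v = np.zeros(n)                # particle velocities

    for _ in range(steps):
        # Real contact forces and gravity would be computed here; a toy spring
        # toward the container centre stands in for the actual contact model.
        f = -5.0 * (x - 0.5)
        v += f * dt                # integrate velocity (unit mass)
        x += v * dt                # integrate position
        # ...each pass advances the whole powder by just 1e-6 s, which is
        # why long mixing simulations are so expensive.
    ```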
    A research team led by Associate Professor Hideya Nakamura, Associate Professor Shuji Ohsaki, Professor Satoru Watano, and Ph.D. student Naoki Kishida from the Graduate School of Engineering at Osaka Metropolitan University has developed a new simulation method using AI. Additionally, the team has succeeded in enhancing computational speed by about 350 times. This new method is characterized by using a recurrent neural network (RNN) that enables a long-time-scale powder mixing simulation with low computational costs while maintaining the same level of accuracy as conventional methods.
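    A surrogate along those lines might look like the following sketch, in which an LSTM predicts the next coarse mixing state from a short history; the architecture and dimensions here are assumptions for illustration, not the published model.

    ```python
    import torch
    import torch.nn as nn

    # Illustrative surrogate: an RNN that, given a short history of coarse
    # mixing states, predicts the next state directly, replacing millions of
    # microsecond DEM steps with one network evaluation per step.
    class MixingRNN(nn.Module):
        def __init__(self, state_dim=32, hidden=64):
            super().__init__()
            self.rnn = nn.LSTM(state_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, state_dim)

        def forward(self, history):          # history: (batch, time, state_dim)
            out, _ = self.rnn(history)
            return self.head(out[:, -1])     # predicted next coarse state

    model = MixingRNN()
    history = torch.randn(4, 16, 32)         # toy batch of state histories
    next_state = model(history)              # one cheap step per prediction
    print(next_state.shape)                  # torch.Size([4, 32])
    ```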
    “We have successfully harnessed our knowledge in powder technology, which we have honed over many years, and combined it with machine learning to rapidly predict the unique behavior of complex powders,” explained Professor Nakamura. “We would like to build upon this achievement to contribute to the future of industries seeking to enhance product quality and streamline production.”

  • Virtual meetings tire people because we’re doing them wrong

    Earlier studies suggested that fatigue from virtual meetings stems from mental overload, but new research from Aalto University shows that sleepiness during virtual meetings might actually be a result of mental underload and boredom.
    ‘I expected to find that people get stressed in remote meetings. But the result was the opposite — especially those who were not engaged in their work quickly became drowsy during remote meetings,’ says Assistant Professor Niina Nurmi, who led the study.
    The researchers measured heart rate variability during virtual meetings and face-to-face meetings, examining different types of fatigue experiences among 44 knowledge workers across nearly 400 meetings. The team at Aalto collaborated with researchers at the Finnish Institute of Occupational Health, where stress and recovery are studied using heart rate monitors. The paper was published in the Journal of Occupational Health Psychology.
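    Heart rate variability is often summarized with RMSSD, the root mean square of successive differences between heartbeat (RR) intervals. The study’s exact measures are not given in this excerpt, so the sketch below is generic.

    ```python
    import numpy as np

    def rmssd(rr_intervals_ms):
        """RMSSD: root mean square of successive differences between RR
        intervals. Lower values indicate lower heart rate variability."""
        diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
        return np.sqrt(np.mean(diffs ** 2))

    rr = [812, 798, 841, 825, 830, 799, 815]   # toy RR intervals, milliseconds
    print(f"RMSSD: {rmssd(rr):.1f} ms")
    ```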
    ‘We combined physiological methods with ethnographic research. We shadowed each subject for two workdays, recording all events with time stamps, to find out the sources of human physiological responses,’ Nurmi says.
    The study also included a questionnaire to identify people’s general attitude and work engagement.
    ‘The format of a meeting had little effect on people who were highly engaged and enthusiastic about their work. They were able to stay active even during virtual meetings. On the other hand, workers whose work engagement was low and who were not very enthusiastic about their work found virtual meetings very tiring.’
    It’s easier to maintain focus in face-to-face meetings than virtual ones, as the latter have limited cognitive cues and sensory input. ‘Especially when cameras are off, the participant is left under-stimulated and may start to compensate by multitasking,’ Nurmi explains.
    Although an appropriate level of stimulation is generally beneficial for the brain, multitasking during virtual meetings is problematic. Only highly automated tasks, such as walking, can be properly carried out during a virtual meeting.
    ‘Walking and other automated activities can boost your energy levels and help you to concentrate on the meeting. But if you’re trying to focus on two things that require cognitive attention simultaneously, you can’t hear if something important is happening in the meeting. Alternatively, you have to constantly switch between tasks. It’s really taxing for the brain,’ Nurmi says.

  • AI can alert urban planners and policymakers to cities’ decay

    More than two-thirds of the world’s population is expected to live in cities by 2050, according to the United Nations. As urbanization advances around the globe, researchers at the University of Notre Dame and Stanford University said the quality of the urban physical environment will become increasingly critical to human well-being and to sustainable development initiatives.
    However, measuring and tracking the quality of an urban environment, its evolution and its spatial disparities is difficult due to the amount of on-the-ground data needed to capture these patterns. To address the issue, Yong Suk Lee, assistant professor of technology, economy and global affairs in the Keough School of Global Affairs at the University of Notre Dame, and Andrea Vallebueno from Stanford University used machine learning to develop a scalable method to measure urban decay at a spatially granular level over time.
    Their findings were recently published in Scientific Reports.
    “As the world urbanizes, urban planners and policymakers need to make sure urban design and policies adequately address critical issues such as infrastructure and transportation improvements, poverty and the health and safety of urbanites, as well as the increasing inequality within and across cities,” Lee said. “Using machine learning to recognize patterns of neighborhood development and urban inequality, we can help urban planners and policymakers better understand the deterioration of urban space and its importance in future planning.”
    Traditionally, the measurement of urban quality and quality of life in urban spaces has used sociodemographic and economic characteristics such as crime rates and income levels, survey data of urbanites’ perception and valued attributes of the urban environment, or image datasets describing the urban space and its socioeconomic qualities. The growing availability of street view images presents new prospects in identifying urban features, Lee said, but the reliability and consistency of these methods across different locations and time remain largely unexplored.
    In their study, Lee and Vallebueno used the YOLOv5 model (a form of artificial intelligence that can detect objects) to detect eight object classes that indicate urban decay or contribute to an unsightly urban space — things like potholes, graffiti, garbage, tents, barred or broken windows, discolored or dilapidated façades, weeds and utility markings. They focused on three cities: San Francisco, Mexico City and South Bend, Indiana. They chose neighborhoods in these cities based on factors including urban diversity, stages of urban decay and the authors’ familiarity with the cities.
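    For a sense of the pipeline, here is a minimal sketch using the public YOLOv5 hub interface. Note that the stock pretrained checkpoint only knows generic COCO categories, so detecting decay-specific classes like those above would require a model fine-tuned on labeled street-view data; the image path is hypothetical.

    ```python
    import torch

    # Load a pretrained YOLOv5 model from the official repo via torch.hub.
    # The stock 'yolov5s' checkpoint detects generic COCO categories;
    # decay-specific classes (potholes, graffiti, tents, ...) would need
    # a checkpoint fine-tuned on labeled street-view images.
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

    results = model('street_view.jpg')   # hypothetical street-view image path
    df = results.pandas().xyxy[0]        # one row per detection: box, conf, class

    # Count detections per class as a crude per-image tally.
    print(df['name'].value_counts())
    ```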
    Using comparative data, they evaluated their method in three contexts: homelessness in the Tenderloin District of San Francisco between 2009 and 2021, a set of small-scale housing projects carried out from 2017 through 2019 in a subset of Mexico City neighborhoods, and the western neighborhoods of South Bend from 2011 through 2019 — a part of the city that had been declining for decades but also saw urban revival initiatives.

  • Novel device promotes efficient, real-time and secure wireless access

    A new device from the lab of Dinesh Bharadia, an affiliate of the UC San Diego Qualcomm Institute (QI) and faculty member with the Jacobs School of Engineering’s Department of Electrical and Computer Engineering, offers a fresh tool for the challenge of increasing public access to the wireless network.
    Researchers developed prototype technology that filters out interference from other radio signals while sweeping underutilized frequency bands to spot periods of high traffic. The technology could help regulators distribute wireless access at an affordable cost during low-traffic periods.
    “Through meticulous analysis of spectrum usage, we can identify underutilized segments and hidden opportunities, which, when leveraged, would lead to a cost-effective connectivity solution for users around the globe,” said Bharadia. “Crescendo stands at the forefront of this initiative, offering a low-complexity yet highly effective solution with advanced algorithms that provides robust spectrum insights for all.”
    Accessing a “Quiet” Resource
    When unoccupied, broadband frequencies owned by users like the U.S. Navy or military can offer wireless connection to the public or corporations at low cost. The challenge is determining when the primary owners use the frequencies, and when they would be available for public use.
    Working with Associate Professor Aaron Schulman of the Jacobs School of Engineering Computer Science and Engineering Department, researchers from Bharadia’s Wireless Communications, Sensing and Networking Group created a novel device called “Crescendo.”
    Crescendo features adaptive software that allows it to sweep for activity across a range of frequencies within an agency-owned wideband spectrum. The device can adapt to signal interference in real time by dynamically adjusting which signals it receives, tuning out interference from nearby towers, base stations and other sources of high-power signals. The technology’s high signal fidelity also ensures that users can count on a secure connection, with any cyberattacks identified in real time.
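
    Crescendo’s algorithms are not detailed here, but a basic energy-detection sweep illustrates the underlying idea: step across sub-bands, estimate power in each capture, and flag sub-bands above a noise-floor threshold as occupied. All parameters below are assumed for illustration.

    ```python
    import numpy as np

    # Illustrative energy-detection sweep (not Crescendo's algorithm).
    rng = np.random.default_rng(2)
    fs = 1e6                      # sample rate per sub-band capture (assumed)
    n = 4096                      # samples per capture (assumed)

    def band_power_db(iq):
        """Average power of an IQ capture, in dB."""
        return 10 * np.log10(np.mean(np.abs(iq) ** 2))

    noise_floor_db = -30.0
    threshold_db = noise_floor_db + 6.0   # 6 dB above noise => occupied (assumed)

    for band in range(8):
        # Simulated capture: complex noise, plus a strong tone in band 3.
        iq = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * 0.02
        if band == 3:
            t = np.arange(n) / fs
            iq += 0.5 * np.exp(2j * np.pi * 1e5 * t)
        p = band_power_db(iq)
        status = "occupied" if p > threshold_db else "idle"
        print(f"band {band}: {p:6.1f} dB -> {status}")
    ```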

  • Robot stand-in mimics movements in VR

    Researchers from Cornell and Brown University have developed a souped-up telepresence robot that responds automatically and in real time to a remote user’s movements and gestures made in virtual reality.
    The robotic system, called VRoxy, allows a remote user in a small space, like an office, to collaborate via VR with teammates in a much larger space. VRoxy represents the latest in remote, robotic embodiment.
    Donning a VR headset, a user has access to two view modes: Live mode shows an immersive image of the collaborative space in real time for interactions with local collaborators, while navigational mode displays rendered pathways of the room, allowing remote users to “teleport” to where they’d like to go. This navigation mode allows for quicker, smoother mobility for the remote user and limits motion sickness.
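    As a rough illustration of the two modes (an assumption-laden sketch, not the published VRoxy code), the loop below mirrors headset yaw in live mode and, in navigation mode, drives the physical robot to a waypoint the user has already teleported to in VR.

    ```python
    import math
    from dataclasses import dataclass

    @dataclass
    class Pose:
        x: float
        y: float
        yaw: float   # radians

    def mirror_gaze(headset_yaw, robot):
        """Live mode: turn the robot to match the remote user's gaze."""
        robot.yaw = headset_yaw

    def go_to_waypoint(target, robot, speed=0.5, dt=0.1):
        """Navigation mode: the user jumps instantly in VR; the physical robot
        follows the path at its own pace, hiding the travel from the user."""
        while math.hypot(target.x - robot.x, target.y - robot.y) > 0.05:
            heading = math.atan2(target.y - robot.y, target.x - robot.x)
            robot.x += speed * dt * math.cos(heading)
            robot.y += speed * dt * math.sin(heading)
            robot.yaw = heading

    robot = Pose(0.0, 0.0, 0.0)
    mirror_gaze(1.2, robot)                      # user looks to the left
    go_to_waypoint(Pose(3.0, 2.0, 0.0), robot)   # user teleports across the room
    print(robot)
    ```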
    The system’s automatic nature lets remote teammates focus solely on collaboration rather than on manually steering the robot, researchers said.
    “The great benefit of virtual reality is we can leverage all kinds of locomotion techniques that people use in virtual reality games, like instantly moving from one position to another,” said Mose Sakashita, a doctoral student in the field of information science at Cornell. “This functionality enables remote users to physically occupy a very limited amount of space but collaborate with teammates in a much larger remote environment.”
    Sakashita is the lead author of “VRoxy: Wide-Area Collaboration From an Office Using a VR-Driven Robotic Proxy,” to be presented at the ACM Symposium on User Interface Software and Technology (UIST), held Oct. 29 through Nov. 1.
    VRoxy’s automatic, real-time responsiveness is key for both remote and local teammates, researchers said. With a robot proxy like VRoxy, a remote teammate confined to a small office can interact in a group activity held in a much larger space, like in a design collaboration scenario.