More stories

  • AI can alert urban planners and policymakers to cities’ decay

    More than two-thirds of the world’s population is expected to live in cities by 2050, according to the United Nations. As urbanization advances around the globe, researchers at the University of Notre Dame and Stanford University said the quality of the urban physical environment will become increasingly critical to human well-being and to sustainable development initiatives.
    However, measuring and tracking the quality of an urban environment, its evolution and its spatial disparities is difficult due to the amount of on-the-ground data needed to capture these patterns. To address the issue, Yong Suk Lee, assistant professor of technology, economy and global affairs in the Keough School of Global Affairs at the University of Notre Dame, and Andrea Vallebueno from Stanford University used machine learning to develop a scalable method to measure urban decay at a spatially granular level over time.
    Their findings were recently published in Scientific Reports.
    “As the world urbanizes, urban planners and policymakers need to make sure urban design and policies adequately address critical issues such as infrastructure and transportation improvements, poverty and the health and safety of urbanites, as well as the increasing inequality within and across cities,” Lee said. “Using machine learning to recognize patterns of neighborhood development and urban inequality, we can help urban planners and policymakers better understand the deterioration of urban space and its importance in future planning.”
    Traditionally, the measurement of urban quality and quality of life in urban spaces has relied on sociodemographic and economic characteristics such as crime rates and income levels, on surveys of urbanites’ perceptions and the attributes they value in the urban environment, or on image datasets describing the urban space and its socioeconomic qualities. The growing availability of street view images presents new prospects for identifying urban features, Lee said, but the reliability and consistency of these methods across different locations and times remain largely unexplored.
    In their study, Lee and Vallebueno used the YOLOv5 model (a form of artificial intelligence that can detect objects) to detect eight object classes that indicate urban decay or contribute to an unsightly urban space — things like potholes, graffiti, garbage, tents, barred or broken windows, discolored or dilapidated façades, weeds and utility markings. They focused on three cities: San Francisco, Mexico City and South Bend, Indiana. They chose neighborhoods in these cities based on factors including urban diversity, stages of urban decay and the authors’ familiarity with the cities.
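    To make the detection step concrete, here is a minimal sketch in Python of how per-image detections might be rolled up into a neighborhood-level decay score. The class names echo the eight categories above, but the counts and the simple averaging scheme are purely illustrative, not the authors’ actual method.

```python
# Hypothetical aggregation of object detections into an urban-decay index.
# Class names mirror the eight categories described in the article; the
# example counts and the averaging scheme are illustrative only.

DECAY_CLASSES = {
    "pothole", "graffiti", "garbage", "tent",
    "barred_or_broken_window", "dilapidated_facade",
    "weeds", "utility_markings",
}

def decay_index(detections_per_image):
    """Average number of decay-related detections per street-view image."""
    if not detections_per_image:
        return 0.0
    total = sum(
        sum(1 for cls in dets if cls in DECAY_CLASSES)
        for dets in detections_per_image
    )
    return total / len(detections_per_image)

# Example: three street-view images from one hypothetical neighborhood.
images = [
    ["pothole", "graffiti", "car"],   # 2 decay-related objects
    ["tent", "garbage", "garbage"],   # 3 decay-related objects
    ["tree"],                         # 0 decay-related objects
]
print(decay_index(images))  # 5 detections over 3 images ~ 1.67
```

    Tracking such an index per block and per year is one simple way a granular, image-based measure could reveal spatial disparities over time.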
    Using comparative data, they evaluated their method in three contexts: homelessness in San Francisco’s Tenderloin District between 2009 and 2021, a set of small-scale housing projects carried out from 2017 through 2019 in a subset of Mexico City neighborhoods, and the western neighborhoods of South Bend from 2011 through 2019, a part of the city that had been declining for decades but had also seen urban revival initiatives.

  • Novel device promotes efficient, real-time and secure wireless access

    A new device from the lab of Dinesh Bharadia, an affiliate of the UC San Diego Qualcomm Institute (QI) and faculty member with the Jacobs School of Engineering’s Department of Electrical and Computer Engineering, offers a fresh tool for the challenge of increasing public access to the wireless network.
    The researchers developed a prototype that filters out interference from other radio signals while sweeping underutilized frequency bands to detect high-traffic periods. The technology could help regulators offer affordable wireless access during low-traffic periods.
    “Through meticulous analysis of spectrum usage, we can identify underutilized segments and hidden opportunities, which, when leveraged, would lead to a cost-effective connectivity solution for users around the globe,” said Bharadia. “Crescendo stands at the forefront of this initiative, offering a low-complexity yet highly effective solution with advanced algorithms that provides robust spectrum insights for all.”
    Accessing a “Quiet” Resource
    When unoccupied, broadband frequencies owned by users like the U.S. Navy or military can offer wireless connection to the public or corporations at low cost. The challenge is determining when the primary owners use the frequencies, and when they would be available for public use.
    Working with Associate Professor Aaron Schulman of the Jacobs School of Engineering Computer Science and Engineering Department, researchers from Bharadia’s Wireless Communications, Sensing and Networking Group created a novel device called “Crescendo.”
    Crescendo features adaptive software that allows it to sweep for activity across a range of frequencies within an agency-owned wideband spectrum. The device can adapt to signal interference in real time by dynamically adjusting which signals it receives, tuning out interference from nearby towers, base stations and other sources of high-power signals. The technology’s high signal fidelity also ensures that users can count on a secure connection, with any cyberattacks identified in real time.
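    The article does not spell out Crescendo’s algorithms, but the basic idea of sweeping a band and flagging quiet segments can be sketched with a toy energy detector. The threshold and power values below are made up for illustration and are not from the actual system.

```python
import random

# Toy energy-detector sweep; illustrative only, not Crescendo's algorithm.
# Each band reports received power in dBm; bands below the threshold are
# treated as unoccupied and hence candidates for low-cost secondary use.

THRESHOLD_DBM = -90.0

def find_free_bands(band_powers_dbm, threshold=THRESHOLD_DBM):
    """Return indices of bands whose measured power is below the threshold."""
    return [i for i, p in enumerate(band_powers_dbm) if p < threshold]

random.seed(0)
# Simulate 12 bands: every 4th band is active (~-60 dBm), the rest sit
# near the noise floor (~-100 dBm with a little measurement jitter).
powers = [-60.0 if i % 4 == 0 else -100.0 + random.gauss(0, 2)
          for i in range(12)]
print(find_free_bands(powers))  # indices of the nine quiet bands
```

    A real sensor must also reject interference leaking in from adjacent bands before thresholding, which is where Crescendo’s adaptive filtering comes in.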

  • Robot stand-in mimics movements in VR

    Researchers from Cornell and Brown University have developed a souped-up telepresence robot that responds automatically and in real time to a remote user’s movements and gestures made in virtual reality.
    The robotic system, called VRoxy, allows a remote user in a small space, like an office, to collaborate via VR with teammates in a much larger space. VRoxy represents the latest in remote, robotic embodiment.
    Donning a VR headset, a user has access to two view modes: Live mode shows an immersive image of the collaborative space in real time for interactions with local collaborators, while navigational mode displays rendered pathways of the room, allowing remote users to “teleport” to where they’d like to go. This navigation mode allows for quicker, smoother mobility for the remote user and limits motion sickness.
    The system’s automatic nature lets remote teammates focus solely on collaboration rather than on manually steering the robot, researchers said.
    “The great benefit of virtual reality is we can leverage all kinds of locomotion techniques that people use in virtual reality games, like instantly moving from one position to another,” said Mose Sakashita, a doctoral student in the field of information science at Cornell. “This functionality enables remote users to physically occupy a very limited amount of space but collaborate with teammates in a much larger remote environment.”
    Sakashita is the lead author of “VRoxy: Wide-Area Collaboration From an Office Using a VR-Driven Robotic Proxy,” to be presented at the ACM Symposium on User Interface Software and Technology (UIST), held Oct. 29 through Nov. 1.
    VRoxy’s automatic, real-time responsiveness is key for both remote and local teammates, researchers said. With a robot proxy like VRoxy, a remote teammate confined to a small office can interact in a group activity held in a much larger space, like in a design collaboration scenario.

  • Certain online games use dark designs to collect player data

    Gaming is a $193 billion industry — nearly double the size of the film and music industries combined — and there are around three billion gamers worldwide. While online gaming can improve well-being and foster social relations, privacy and awareness issues could potentially offset these benefits and cause real harm to gamers.
    A new study by scientists at Aalto University’s Department of Computer Science reveals potentially questionable data collection practices in online games, along with misconceptions and concerns about privacy among players. The study also offers risk mitigation strategies for players and design recommendations for game developers to improve privacy in online games.
    ‘We had two supporting lines of inquiry in this study: what players think about games, and what games are really up to with respect to privacy,’ says Janne Lindqvist, associate professor of computer science at Aalto. ‘It was really surprising to us how nuanced the considerations of gamers were. For example, participants said that, to protect their privacy, they would avoid using voice chat in games unless it was absolutely necessary. Our game analysis revealed that some games try to nudge people to reveal their online identities by offering things like virtual rewards.’
    The authors identified instances of games using dark design — interface decisions that manipulate users into doing something they otherwise wouldn’t. These could facilitate the collection of player data and encourage players to integrate their social media accounts or allow data sharing with third parties.
    ‘When social media accounts are linked to games, players generally can’t know what access the games have to these accounts or what information they receive,’ says Amel Bourdoucen, doctoral researcher in usable security at Aalto. ‘For example, in some popular games, users can log in with (or link to) their social media accounts, but these games may not specify what data is collected through such integration.’
    The global gaming community has been subject to increased scrutiny over the past decade because of online harassment and the industry’s burnout culture. While these issues still linger, the push for more tech regulation in the EU and US has also brought privacy issues to the forefront.
    ‘Data handling practices of games are often hidden behind legal jargon in privacy policies,’ says Bourdoucen. ‘When users’ data are collected, games should make sure the players understand and consent to what is being collected. This can increase the player’s awareness and sense of control in games. Gaming companies should also protect players’ privacy and keep them safe while playing online.’

  • Controlling waves in magnets with superconductors for the first time

    Quantum physicists at Delft University of Technology have shown for the first time that it’s possible to control and manipulate spin waves on a chip using superconductors. These tiny waves in magnets may offer an alternative to electronics in the future, with potential applications in energy-efficient information technology and in connecting components of a quantum computer. The breakthrough, published in Science, primarily gives physicists new insight into the interaction between magnets and superconductors.
    Energy-efficient substitute
    “Spin waves are waves in a magnetic material that we can use to transmit information,” explains Michael Borst, who led the experiment. “Because spin waves can be a promising building block for an energy-efficient replacement for electronics, scientists have been searching for an efficient way to control and manipulate spin waves for years.”
    Theory predicts that metal electrodes give control over spin waves, but physicists have barely seen such effects in experiments until now. “The breakthrough of our research team is that we show that we can indeed control spin waves properly if we use a superconducting electrode,” says Toeno van der Sar, Associate Professor in the Department of Quantum Nanoscience.
    Superconducting mirror
    It works as follows: a spin wave generates a magnetic field that in turn generates a supercurrent in the superconductor. That supercurrent acts as a mirror for the spin wave: the superconducting electrode reflects the magnetic field back to the spin wave. The superconducting mirror causes spin waves to move up and down more slowly, and that makes the waves easily controllable. Borst: “When spin waves pass under the superconducting electrode, it turns out that their wavelength changes completely! And by varying the temperature of the electrode slightly, we can tune the magnitude of the change very accurately.”
    “We started with a thin magnetic layer of yttrium iron garnet (YIG), known as the best magnet on Earth. On top of that we laid a superconducting electrode and another electrode to induce the spin waves. By cooling to -268 degrees Celsius, we got the electrode into a superconducting state,” Van der Sar says. “It was amazing to see that the spin waves got slower and slower as it got colder. That gives us a unique handle to manipulate the spin waves; we can deflect them, reflect them, make them resonate and more. But it also gives us tremendous new insights into the properties of superconductors.”

  • A superatomic semiconductor sets a speed record

    The search is on for better semiconductors. Writing in Science, a team of chemists at Columbia University led by Jack Tulyag, a PhD student working with chemistry professor Milan Delor, describes the fastest and most efficient semiconductor yet: a superatomic material called Re6Se8Cl2.
    Semiconductors — most notably, silicon — underpin the computers, cellphones, and other electronic devices that power our daily lives, including the device on which you are reading this article. As ubiquitous as semiconductors have become, they come with limitations. The atomic structure of any material vibrates, which creates quantum particles called phonons. Phonons in turn cause the particles — either electrons or electron-hole pairs called excitons — that carry energy and information around electronic devices to scatter in a matter of nanometers and femtoseconds. This means that energy is lost in the form of heat, and that information transfer has a speed limit.
    Rather than scattering when they come into contact with phonons, excitons in Re6Se8Cl2 actually bind with phonons to create new quasiparticles called acoustic exciton-polarons. Although polarons are found in many materials, those in Re6Se8Cl2 have a special property: they are capable of ballistic, or scatter-free, flow. This ballistic behavior could mean faster and more efficient devices one day.
    In experiments run by the team, acoustic exciton-polarons in Re6Se8Cl2 moved fast — twice as fast as electrons in silicon — and crossed several microns of the sample in less than a nanosecond. Given that polarons can last for about 11 nanoseconds, the team thinks the exciton-polarons could cover more than 25 micrometers at a time. And because these quasiparticles are controlled by light rather than by an electrical current and gating, processing speeds in theoretical devices could reach femtoseconds — six orders of magnitude faster than the nanoseconds achievable in today’s gigahertz electronics. All at room temperature.
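    The 25-micrometer figure follows from simple kinematics. A quick back-of-the-envelope check, where the 2.3 µm/ns speed is an assumed round value consistent with “several microns in less than a nanosecond” rather than a number from the paper:

```python
# Back-of-the-envelope check of the exciton-polaron transport figures.
# The speed is an assumed value consistent with "several microns in less
# than a nanosecond"; the lifetime is the ~11 ns quoted in the article.
speed_um_per_ns = 2.3
lifetime_ns = 11.0
travel_um = speed_um_per_ns * lifetime_ns
print(travel_um)  # ~25 um, consistent with "more than 25 micrometers"
```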
    “In terms of energy transport, Re6Se8Cl2 is the best semiconductor that we know of, at least so far,” Delor said.
    A Quantum Version of the Tortoise and the Hare
    Re6Se8Cl2 is a superatomic semiconductor created in the lab of collaborator Xavier Roy. Superatoms are clusters of atoms bound together that behave like one big atom, but with different properties than the elements used to build them. Synthesizing superatoms is a specialty of the Roy lab, and they are a main focus of Columbia’s NSF-funded Materials Research Science and Engineering Center on Precision Assembled Quantum Materials. Delor is interested in controlling and manipulating the transport of energy through superatoms and other unique materials developed at Columbia. To do this, the team builds super-resolution imaging tools that can capture particles moving at ultrasmall, ultrafast scales.

  • Major milestone achieved in new quantum computing architecture

    Coherence stands as a pillar of effective communication, whether it is in writing, speaking or information processing. This principle extends to quantum bits, or qubits, the building blocks of quantum computing. A quantum computer could one day tackle previously insurmountable challenges in climate prediction, material design, drug discovery and more.
    A team led by the U.S. Department of Energy’s (DOE) Argonne National Laboratory has achieved a major milestone toward future quantum computing. They have extended the coherence time for their novel type of qubit to an impressive 0.1 milliseconds — nearly a thousand times better than the previous record.
    In everyday life, 0.1 milliseconds is as fleeting as a blink of an eye. However, in the quantum world, it represents a long enough window for a qubit to perform many thousands of operations.
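    To make “many thousands of operations” concrete: assuming a hypothetical gate time on the order of 10 nanoseconds (a round number for illustration, not a figure from the study), the arithmetic looks like this:

```python
# How many gate operations fit inside a 0.1 ms coherence window?
# The 10 ns gate time is an assumed round number, not from the study.
coherence_s = 0.1e-3     # 0.1 milliseconds of coherence
gate_time_s = 10e-9      # ~10 ns per operation (illustrative)
ops = coherence_s / gate_time_s
print(round(ops))  # 10000 operations before coherence is lost
```

    Shorter gate times would push the count higher still, which is why coherence time and gate speed are always read together.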
    Unlike classical bits, qubits can exist in a superposition of both states, 0 and 1. For any working qubit, maintaining this superposition for a sufficiently long coherence time is imperative. The challenge is to safeguard the qubit against the constant barrage of disruptive noise from the surrounding environment.
    The team’s qubits encode quantum information in the electron’s motional (charge) states. Because of that, they are called charge qubits.
    “Among various existing qubits, electron charge qubits are especially attractive because of their simplicity in fabrication and operation, as well as compatibility with existing infrastructures for classical computers,” said Dafei Jin, a professor at the University of Notre Dame with a joint appointment at Argonne and the lead investigator of the project. “This simplicity should translate into low cost in building and running large-scale quantum computers.”
    Jin is a former staff scientist at the Center for Nanoscale Materials (CNM), a DOE Office of Science user facility at Argonne. While there, he led the discovery of their new type of qubit, reported last year.

  • Energy-saving AI chip

    Hussam Amrouch has developed an AI-ready architecture that is twice as powerful as comparable in-memory computing approaches. As reported in the journal Nature, the professor at the Technical University of Munich (TUM) applies a new computational paradigm using special circuits known as ferroelectric field effect transistors (FeFETs). Within a few years, this could prove useful for generative AI, deep learning algorithms and robotic applications.
    The basic idea is simple: unlike previous chips, where transistors only carried out calculations, the transistors now store data as well. That saves time and energy. “As a result, the performance of the chips is also boosted,” says Amrouch, a professor of AI processor design at TUM. The transistors on which he performs calculations and stores data measure just 28 nanometers, with millions of them placed on each of the new AI chips. The chips of the future will have to be faster and more efficient than earlier ones, and they must not heat up as quickly. This is essential if they are to support applications such as real-time calculations when a drone is in flight. “Tasks like this are extremely complex and energy-hungry for a computer,” explains the professor.
    Modern chips: many steps, low energy consumption
    These key requirements for a chip are summed up mathematically by the parameter TOPS/W: “tera-operations per second per watt.” This can be seen as the currency for the chips of the future. The question is how many trillion operations (TOP) a processor can perform per second (S) when provided with one watt (W) of power. The new AI chip, developed in a collaboration between Bosch and Fraunhofer IPMS and supported in the production process by the US company GlobalFoundries, can deliver 885 TOPS/W. This makes it twice as powerful as comparable AI chips, including an MRAM chip by Samsung. CMOS chips, which are now commonly used, operate in the range of 10-20 TOPS/W. These results were recently published in Nature.
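    The TOPS/W figure is just operations per second divided by power, scaled to tera. A small sketch using the numbers quoted above (the 1-watt workload is an assumed example, not a measured operating point):

```python
# TOPS/W = (operations per second) / watts / 1e12.
# 885 TOPS/W means 8.85e14 operations per second for each watt drawn.
def tops_per_watt(ops_per_second, watts):
    return ops_per_second / watts / 1e12

new_chip = tops_per_watt(8.85e14, 1.0)  # the FeFET chip at an assumed 1 W
cmos_mid = 15.0                         # mid-range of the quoted 10-20 TOPS/W
print(round(new_chip), round(new_chip / cmos_mid))  # 885, ~59x CMOS
```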
    In-memory computing works like the human brain
    The researchers borrowed the principle of modern chip architecture from the human brain. “In the brain, neurons handle the processing of signals, while synapses are capable of remembering this information,” says Amrouch, describing how people are able to learn and recall complex interrelationships. To do this, the chip uses ferroelectric field effect transistors (FeFETs). These are electronic switches with a special additional property (their polarization reverses when a voltage is applied) that can store information even when cut off from the power source. They also allow data to be stored and processed simultaneously within the transistors. “Now we can build highly efficient chipsets that can be used for applications such as deep learning, generative AI or robotics, where data have to be processed where they are generated,” believes Amrouch.
    Market-ready chips will require interdisciplinary collaboration
    The goal is to use the chip to run deep learning algorithms, recognize objects in space or process data from drones in flight with no time lag. However, the professor from the Munich Institute of Robotics and Machine Intelligence (MIRMI) at TUM believes it will be a few years before this is achieved. He expects the first in-memory chips suitable for real-world applications to arrive in three to five years at the soonest. One reason, among others, lies in industry’s security requirements. Before a technology of this kind can be used in the automotive industry, for example, it is not enough for it to function reliably; it also has to meet the specific criteria of the sector. “This again highlights the importance of interdisciplinary collaboration with researchers from various disciplines such as computer science, informatics and electrical engineering,” says the hardware expert Amrouch. He sees this as a special strength of MIRMI.