More stories

  • Quantum drum duet measured

    Like conductors of a spooky symphony, researchers at the National Institute of Standards and Technology (NIST) have “entangled” two small mechanical drums and precisely measured their linked quantum properties. Entangled pairs like this might someday perform computations and transmit data in large-scale quantum networks.
    The NIST team used microwave pulses to entice the two tiny aluminum drums into a quantum version of the Lindy Hop, with one partner bopping in a cool and calm pattern while the other was jiggling a bit more. Researchers analyzed radar-like signals to verify that the two drums’ steps formed an entangled pattern — a duet that would be impossible in the everyday classical world.
    What’s new is not so much the dance itself but the researchers’ ability to measure the drumbeats, rising and falling by just one-quadrillionth of a meter, and verify their fragile entanglement by detecting subtle statistical relationships between their motions.
    The research is described in the May 7 issue of Science.
    “If you analyze the position and momentum data for the two drums independently, they each simply look hot,” NIST physicist John Teufel said. “But looking at them together, we can see that what looks like random motion of one drum is highly correlated with the other, in a way that is only possible through quantum entanglement.”
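The statistical signature Teufel describes can be illustrated with a purely classical toy model (invented numbers, not the NIST data, and classical correlation rather than true entanglement): each drum's position record looks like large random "hot" noise on its own, but the two records move together, so their difference is far quieter than either signal.

```python
import random
import statistics

random.seed(0)

# Toy model: a shared fluctuation drives both drums' positions in a
# correlated way, plus a little independent noise on each drum.
N = 10_000
common = [random.gauss(0, 1.0) for _ in range(N)]
x1 = [c + random.gauss(0, 0.1) for c in common]   # drum 1 position record
x2 = [c + random.gauss(0, 0.1) for c in common]   # drum 2 position record

var1 = statistics.pvariance(x1)   # large on its own: "simply looks hot"
var2 = statistics.pvariance(x2)
var_diff = statistics.pvariance([a - b for a, b in zip(x1, x2)])

print(f"var(x1)={var1:.3f}  var(x2)={var2:.3f}  var(x1-x2)={var_diff:.3f}")
```

Individually each variance is near 1, yet the relative coordinate fluctuates far less; detecting that kind of joint statistical relationship, under conditions where no classical explanation survives, is what certifies entanglement.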
    Quantum mechanics was originally conceived as the rulebook for light and matter at atomic scales. However, in recent years researchers have shown that the same rules can apply to increasingly large objects such as the drums. Their back-and-forth motion makes them a type of system known as a mechanical oscillator. Such systems were entangled for the first time at NIST about a decade ago, and in that case the mechanical elements were single atoms.

  • Physicists find a novel way to switch antiferromagnetism on and off

    When you save an image to your smartphone, those data are written onto tiny transistors that are electrically switched on or off in a pattern of “bits” to represent and encode that image. Most transistors today are made from silicon, an element that scientists have managed to switch at ever-smaller scales, enabling billions of bits, and therefore large libraries of images and other files, to be packed onto a single memory chip.
    But growing demand for data, and the means to store them, is driving scientists to search beyond silicon for materials that can push memory devices to higher densities, speeds, and security.
    Now MIT physicists have shown preliminary evidence that data might be stored as faster, denser, and more secure bits made from antiferromagnets.
    Antiferromagnetic, or AFM, materials are the lesser-known cousins of ferromagnets, or conventional magnetic materials. Where the electrons in ferromagnets spin in synchrony — a property that allows a compass needle to point north, collectively following the Earth’s magnetic field — electrons in an antiferromagnet prefer the opposite spin to their neighbors, an “antialignment” that effectively quenches magnetization even at the smallest scales.
    The absence of net magnetization in an antiferromagnet makes it impervious to any external magnetic field. If they were made into memory devices, antiferromagnetic bits could protect any encoded data from being magnetically erased. They could also be made into smaller transistors and packed in greater numbers per chip than traditional silicon.
    Now the MIT team has found that by doping extra electrons into an antiferromagnetic material, they can turn its collective antialigned arrangement on and off in a controllable way. They found this magnetic transition is reversible and sufficiently sharp, similar to switching a transistor’s state from 0 to 1. The results, published today in Physical Review Letters, demonstrate a potential new pathway to using antiferromagnets as a digital switch.

  • Open source tool can help identify gerrymandering in voting maps

    With state legislatures nationwide preparing for the once-a-decade redrawing of voting districts, a research team has developed a better computational method to help identify improper gerrymandering designed to favor specific candidates or political parties.
    In an article in the Harvard Data Science Review, the researchers describe the improved mathematical methodology of an open source tool called GerryChain (https://github.com/mggg/GerryChain). The tool can help observers detect gerrymandering in a voting district plan by creating a pool, or ensemble, of alternate maps that also meet legal voting criteria. This map ensemble can show whether the proposed plan is an extreme outlier — one far outside the norm of plans generated without bias, and therefore likely to have been drawn with partisan goals in mind.
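The outlier logic can be sketched in a few lines of Python. This is not GerryChain itself — the summary statistic, the vote-share model, and all numbers below are invented for illustration — but it shows how a percentile rank against an ensemble of unbiased plans flags an extreme map.

```python
import random

random.seed(42)

def partisan_seats(plan):
    """Toy summary statistic: districts won by party A (vote share > 0.5)."""
    return sum(1 for share in plan if share > 0.5)

def random_valid_plan(n_districts=10):
    """Stand-in for one plan drawn by a GerryChain-style Markov chain.
    Here we simply draw district vote shares around a 50/50 state; the
    real tool perturbs an actual map while preserving legal criteria
    such as contiguity and population balance."""
    return [random.gauss(0.5, 0.05) for _ in range(n_districts)]

# Build an ensemble of unbiased plans and score each one.
ensemble = [partisan_seats(random_valid_plan()) for _ in range(5_000)]

# A hypothetical proposed plan in which party A wins 9 of 10 districts.
proposed = 9

# Fraction of the ensemble at least as extreme as the proposed plan:
# if almost no unbiased plan does this well for party A, the proposal
# is an outlier and merits scrutiny.
rank = sum(1 for s in ensemble if s >= proposed) / len(ensemble)
print(f"fraction of ensemble at least as extreme: {rank:.4f}")
```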
    An earlier version of GerryChain was used to analyze maps proposed to remedy the Virginia House of Delegates districts that a federal court ruled in 2018 were unconstitutional racial gerrymanders. The updated tool will likely play a role in the upcoming redistricting using new census data.
    “We wanted to build an open-source software tool and make that available to people interested in reform, especially in states where there are skewed baselines,” said Daryl DeFord, assistant mathematics professor at Washington State University and a co-lead author on the paper. “It can be an impactful way for people to get involved in this process, particularly going into this year’s redistricting cycle where there are going to be a lot of opportunities for pointing out less than optimal behavior.”
    The GerryChain tool, first created by a team led by DeFord as part of the 2018 Voting Rights Data Institute, has already been downloaded 20,000 times. The new paper, authored by DeFord along with Moon Duchin of Tufts University and Justin Solomon of the Massachusetts Institute of Technology, focuses on how the mathematical and computational models implemented in GerryChain can be used to put proposed voting districting plans into context by creating large samples of alternative valid plans for comparison. These alternate plans are often used when a voting plan is challenged in court as unfair, as well as to analyze potential impacts of redistricting reform.
    For instance, the enacted 2010 House of Delegates plan in Virginia had 12 voting districts with a Black voting-age population at or above 55%. By comparing that plan against an ensemble of alternate plans that all fit the legal criteria, advocates showed the map was an extreme outlier among what was possible. In other words, it was likely drawn intentionally to “pack” some districts with Black voters and thereby “crack” other districts, diluting the influence of those voters.

  • T-GPS processes a graph with a trillion edges on a single computer

    A KAIST research team has developed a new technology that makes it possible to run a large-scale graph algorithm without storing the graph in main memory or on disks. Named T-GPS (Trillion-scale Graph Processing Simulation) by its developer, Professor Min-Soo Kim of the School of Computing at KAIST, it can process a graph with one trillion edges using a single computer.
    Graphs are widely used to represent and analyze real-world objects in many domains such as social networks, business intelligence, biology, and neuroscience. As the number of graph applications increases rapidly, developing and testing new graph algorithms is becoming more important than ever. Many industrial applications now require a graph algorithm to process a large-scale graph (e.g., one trillion edges), so when developing and testing graph algorithms at that scale, a synthetic graph is usually used instead of a real one. This is because sharing and utilizing large-scale real graphs is very limited: they tend to be proprietary or practically impossible to collect.
    Conventionally, developing and testing graph algorithms is done via the following two-step approach: generating and storing a graph and executing an algorithm on the graph using a graph processing engine.
    The first step generates a synthetic graph and stores it on disks. The synthetic graph is usually generated by either parameter-based generation methods or graph upscaling methods. The former extracts a small number of parameters that can capture some properties of a given real graph and generates the synthetic graph with the parameters. The latter upscales a given real graph to a larger one so as to preserve the properties of the original real graph as much as possible.
    The second step loads the stored graph into the main memory of a graph processing engine, such as Apache GraphX, and executes a given graph algorithm on the engine. Since the graph is too large to fit in the main memory of a single computer, the engine typically runs on a cluster of several tens or hundreds of computers. The cost of the conventional two-step approach is therefore very high.
    T-GPS avoids this problem: it does not generate and store a large-scale synthetic graph at all. Instead, it loads only the initial small real graph into main memory. T-GPS then processes a graph algorithm on the small real graph as if the large-scale synthetic graph that would be generated from it existed in main memory. After the algorithm finishes, T-GPS returns exactly the same result as the conventional two-step approach.
    The key idea of T-GPS is to generate, on the fly, only the part of the synthetic graph that the algorithm needs to access, and to modify the graph processing engine so that it treats the part generated on the fly as if it came from a fully generated synthetic graph.
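A minimal sketch of that idea (not the actual T-GPS implementation; the vertex count, degree model, and hash-based generator below are invented for illustration): derive each vertex's adjacency list deterministically whenever the algorithm asks for it, so the algorithm traverses a fixed virtual graph that is never materialized in memory or on disk.

```python
import hashlib

N = 1_000_000     # nominal vertex count of the "synthetic" graph
AVG_DEG = 4       # degree-scale parameter

def neighbors(v):
    """Generate vertex v's adjacency list on the fly from a deterministic
    hash instead of reading it from storage. Calling this twice yields
    identical edges, so an algorithm sees one fixed (virtual) graph."""
    h = hashlib.sha256(str(v).encode()).digest()
    deg = 1 + h[0] % (2 * AVG_DEG)                    # degree from the hash
    out = []
    for i in range(deg):
        hi = hashlib.sha256(f"{v}:{i}".encode()).digest()
        out.append(int.from_bytes(hi[:8], "big") % N)  # i-th neighbor id
    return out

def bfs_count(src, depth):
    """Run a graph algorithm (depth-bounded BFS) against the virtual graph;
    only the visited vertices' edges are ever generated."""
    frontier, seen = {src}, {src}
    for _ in range(depth):
        frontier = {w for v in frontier for w in neighbors(v)} - seen
        seen |= frontier
    return len(seen)

print("vertices reached within 3 hops of vertex 0:", bfs_count(0, 3))
```

Because the generator is deterministic, repeated queries are consistent, which is the property the modified engine relies on to return the same result as if the whole graph had been generated first.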
    The research team showed that T-GPS can process a graph of one trillion edges using a single computer, while the conventional two-step approach can only process a graph of one billion edges using a cluster of eleven computers of the same specification. Thus, T-GPS outperforms the conventional approach by a factor of 10,000 in terms of computing resources. The team also showed that T-GPS runs an algorithm up to 43 times faster than the conventional approach, because T-GPS incurs none of the network communication overhead that a cluster of computers does.
    Prof. Kim believes that this work will have a large impact on the IT industry where almost every area utilizes graph data, adding, “T-GPS can significantly increase both the scale and efficiency of developing a new graph algorithm.”
    This work was supported by the National Research Foundation (NRF) of Korea and the Institute of Information & Communications Technology Planning & Evaluation (IITP).

  • Better way to determine safe drug doses for children

    Determining safe yet effective drug dosages for children is an ongoing challenge for pharmaceutical companies and medical doctors alike. A new drug is usually first tested on adults, and results from these trials are used to select doses for pediatric trials. The underlying assumption is typically that children are like adults, just smaller, which often holds true, but may also overlook differences that arise from the fact that children’s organs are still developing.
    Compounding the problem, pediatric trials don’t always shed light on other differences that can affect recommendations for drug doses. There are many factors that limit children’s participation in drug trials — for instance, some diseases simply are rarer in children — and consequently, the generated datasets tend to be very sparse.
    To make drugs and their development safer for children, researchers at Aalto University and the pharmaceutical company Novartis have developed a method that makes better use of available data.
    ‘This is a method that could help determine safe drug doses more quickly and with fewer observations than before,’ says co-author Aki Vehtari, an associate professor of computer science at Aalto University and the Finnish Center for Artificial Intelligence FCAI.
    In their study, the research team created a model that improves our understanding of how organs develop.
    ‘The size of an organ is not necessarily the only thing that affects its performance. Kids’ organs are simply not as efficient as those of adults. In drug modeling, if we assume that size is the only thing that matters, we might end up giving doses that are too large,’ explains Eero Siivola, first author of the study and doctoral student at Aalto University.
    Whereas the standard approach of assessing pediatric data relies on subjective evaluations of model diagnostics, the new approach, based on Gaussian process regression, is more data-driven and consequently less prone to bias. It is also better at handling small sample sizes as uncertainties are accounted for.
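As a rough illustration of the underlying technique, the sketch below is a generic Gaussian process regression in pure Python — not the Aalto–Novartis model, and the age/clearance numbers are invented — showing how a handful of noisy observations yields both a prediction and an explicit uncertainty at an unobserved age.

```python
import math

def rbf(a, b, ell=2.0, sigma=1.0):
    """Squared-exponential covariance between two inputs."""
    return sigma**2 * math.exp(-(a - b)**2 / (2 * ell**2))

def solve(A, y):
    """Tiny Gauss-Jordan solver (with partial pivoting) for the small
    linear systems used here."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_predict(xs, ys, x_star, noise=0.05):
    """GP posterior mean and variance at x_star given data (xs, ys)."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise**2 if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, ys)                       # K^{-1} y
    k_star = [rbf(x, x_star) for x in xs]
    mean = sum(k * a for k, a in zip(k_star, alpha))
    v = solve(K, k_star)                       # K^{-1} k_*
    var = rbf(x_star, x_star) - sum(k * vi for k, vi in zip(k_star, v))
    return mean, var

# Hypothetical sparse pediatric data: relative drug clearance at a few ages.
ages  = [0.5, 2.0, 8.0, 30.0]   # years
clear = [0.2, 0.45, 0.8, 1.0]   # fraction of adult clearance

m, v = gp_predict(ages, clear, 4.0)
print(f"predicted relative clearance at age 4: {m:.2f} (variance {v:.3f})")
```

The variance term is the point: with sparse data the model reports how unsure it is, which is exactly what a subjective reading of model diagnostics tends to obscure.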
    The research comes out of FCAI’s research programme on Agile and probabilistic AI, offering a great example of a method that makes the best out of even very scarce datasets.
    In the study, the researchers demonstrate their approach by re-analyzing a pediatric trial investigating Everolimus, a drug used to prevent the rejection of organ transplants. But the possible benefits of their method are far reaching.
    ‘It works for any drug whose concentration we want to examine,’ Vehtari says, like allergy and pain medication.
    The approach could be particularly useful for situations where a new drug is tested on a completely new group — of children or adults — which is small in size, potentially making the trial phase much more efficient than it currently is. Another promising application relates to extending use of an existing drug to other symptoms or diseases; the method could support this process more effectively than current practices.
    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  • An uncrackable combination of invisible ink and artificial intelligence

    Coded messages in invisible ink sound like something only found in espionage books, but in real life, they can have important security purposes. Yet, they can be cracked if their encryption is predictable. Now, researchers reporting in ACS Applied Materials & Interfaces have printed complexly encoded data with normal ink and a carbon nanoparticle-based invisible ink, requiring both UV light and a computer that has been taught the code to reveal the correct messages.
    Even as electronic records advance, paper is still a common way to preserve data. Invisible ink can hide classified economic, commercial or military information from prying eyes, but many popular inks contain toxic compounds or can be seen with predictable methods, such as light, heat or chemicals. Carbon nanoparticles, which have low toxicity, can be essentially invisible under ambient lighting but can create vibrant images when exposed to ultraviolet (UV) light — a modern take on invisible ink. In addition, advances in artificial intelligence (AI) models — made by networks of processing algorithms that learn how to handle complex information — can ensure that messages are only decipherable on properly trained computers. So, Weiwei Zhao, Kang Li, Jie Xu and colleagues wanted to train an AI model to identify and decrypt symbols printed in a fluorescent carbon nanoparticle ink, revealing hidden messages when exposed to UV light.
    The researchers made carbon nanoparticles from citric acid and cysteine, which they diluted with water to create an invisible ink that appeared blue when exposed to UV light. The team loaded the solution into an ink cartridge and printed a series of simple symbols onto paper with an inkjet printer. Then they taught an AI model, composed of multiple algorithms, to recognize symbols illuminated by UV light and decode them using a special codebook. Finally, they tested the AI model’s ability to decode messages printed with a combination of regular red ink and the UV-fluorescent ink. With 100% accuracy, the AI model read the regular ink symbols as “STOP,” but when a UV light was shone on the writing, the invisible ink revealed the intended message, “BEGIN.” Because the algorithms can detect minute modifications in symbols, this approach has the potential to encrypt messages securely using hundreds of different unpredictable symbols, the researchers say.
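The decoding step can be caricatured as a codebook lookup over two print layers. The symbols and codebook below are invented for illustration; in the real system a trained model classifies the printed fluorescent symbols before the lookup.

```python
# Hypothetical codebook mapping printed symbols to letters. In the paper,
# only a computer trained on the codebook can perform this mapping.
CODEBOOK = {"△": "S", "□": "T", "○": "O", "▽": "P",
            "◆": "B", "◇": "E", "●": "G", "■": "I", "▲": "N"}

visible_layer = ["△", "□", "○", "▽"]       # symbols seen in ambient light
uv_layer      = ["◆", "◇", "●", "■", "▲"]  # symbols revealed under UV

def decode(symbols):
    """Map recognized symbols through the codebook to recover the message."""
    return "".join(CODEBOOK[s] for s in symbols)

print(decode(visible_layer))  # ambient light: decoy message
print(decode(uv_layer))       # UV light: hidden message
```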
    Story Source:
    Materials provided by American Chemical Society. Note: Content may be edited for style and length.

  • SMART evaluates impact of competition between autonomous vehicles and public transit

    The rapid advancement of Autonomous Vehicle (AV) technology in recent years has changed transport systems and consumer habits globally. As countries worldwide see a surge in AV usage, the rise of shared Autonomous Mobility on Demand (AMoD) services is likely to be next. Public Transit (PT), a critical component of urban transportation, will inevitably be affected by the coming influx of AMoD, and it remains an open question whether AMoD will coexist with or threaten the PT system.
    Researchers at the Future Urban Mobility (FM) Interdisciplinary Research Group (IRG) at Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, and Massachusetts Institute of Technology (MIT), conducted a case study in the first-mile mobility market from origins to subway stations in Tampines, Singapore, to find out.
    In a paper titled “Competition between Shared Autonomous Vehicles and Public Transit: A Case Study in Singapore” recently published in the journal Transportation Research Part C: Emerging Technologies, the first-of-its-kind study used Game Theory to analyse the competition between AMoD and PT.
    The study simulated and evaluated the market from a competitive perspective, in which both the AMoD and PT operators are profit-oriented, with dynamically adjustable supply strategies. Using an agent-based simulation, the competition process and system performance were evaluated from the standpoints of four stakeholders — the AMoD operator, the PT operator, passengers, and the transport authority.
    “The objective of our study is to envision cities of the future and to understand how competition between AMoD and PT will impact the evolution of transportation systems,” says the corresponding author of the paper, SMART FM Lead Principal Investigator and Associate Professor at MIT Department of Urban Studies and Planning, Jinhua Zhao. “Our study found that competition between AMoD and PT can be favourable, leading to increased profits and system efficiency for both operators when compared to the status quo, while also benefiting the public and the transport authorities. However, the impact of the competition on passengers is uneven and authorities may be required to provide support for people who suffer from higher travel costs or longer travel times in terms of discounts or other feeder modes.”
    The research found that competition between AMoD and PT would compel bus operators to reduce the frequency of inefficient routes and allow AMoD to fill the gaps in service coverage. “Although the overall bus supply was reduced, the change was not uniform,” says the first author of the paper, MIT PhD candidate Baichuan Mo. “We found that PT services will be spatially concentrated on shorter routes that feed directly into the subway station, and temporally concentrated in peak hours. On average, this reduces passengers’ travel time but increases their travel costs. However, the generalised travel cost is reduced once the value of time is incorporated.” The study also found that providing subsidies to PT would result in relatively higher supply, profit, and market share for PT compared to AMoD, but would increase passenger generalised travel cost and total system passenger car equivalent (PCE), which is measured by the average vehicle load and the total vehicle kilometers traveled.
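The competitive dynamic can be sketched as a best-response iteration between two profit-maximizing operators. All fares, costs, and functional forms below are invented for illustration — the actual study uses game theory on top of an agent-based simulation — but the sketch shows how each operator's supply settles into an equilibrium given the other's.

```python
# Toy two-operator supply game: demand splits in proportion to supply,
# and each operator picks the supply level maximizing its own profit
# given the other's current choice.
DEMAND = 1000                          # daily first-mile trips (hypothetical)
FARE = {"amod": 3.0, "pt": 1.2}        # revenue per trip
COST = {"amod": 40.0, "pt": 25.0}      # cost per unit of supply
LEVELS = list(range(1, 41))            # feasible supply levels

def profit(op, s_own, s_other):
    share = s_own / (s_own + s_other)  # proportional market share
    return FARE[op] * DEMAND * share - COST[op] * s_own

def best_response(op, s_other):
    return max(LEVELS, key=lambda s: profit(op, s, s_other))

s = {"amod": 10, "pt": 10}
for _ in range(50):  # iterate until neither operator wants to deviate
    new = {"amod": best_response("amod", s["pt"]),
           "pt": best_response("pt", s["amod"])}
    if new == s:
        break
    s = new

profits = {"amod": round(profit("amod", s["amod"], s["pt"]), 1),
           "pt": round(profit("pt", s["pt"], s["amod"]), 1)}
print("equilibrium supply:", s, "profits:", profits)
```

In this toy version both operators earn positive profit at the fixed point, echoing the paper's finding that competition can leave both operators better off than a static status quo.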
    The findings suggest that PT should be allowed to optimise its supply strategies under specific operational goals and constraints to improve efficiency. AMoD operations, on the other hand, should be regulated to reduce detrimental system impacts, including limiting the number of licenses, operating hours, and service areas, so that AMoD operates in a manner more complementary to the PT system.
    “Our research shows that under the right conditions, an AMoD-PT integrated transport system can effectively co-exist, with each complementing the other and benefiting all four stakeholders involved,” says SMART FM alumnus Hongmou Zhang, a PhD graduate of MIT’s Department of Urban Studies and Planning and now an assistant professor at the Peking University School of Government. “Our findings will help the industry, policy makers and government bodies create future policies and plans to maximise the efficiency and sustainability of transportation systems, as well as protect the social welfare of residents as passengers.”
    The findings of this study are important for future mobility industries and relevant government bodies, as they provide insight into possible evolutions of, and threats to, urban transportation systems with the rise of AVs and AMoD, and offer a predictive guide for future policy and regulation design for an AMoD-PT integrated transport system. Policymakers should consider the uneven social costs, such as increased travel costs or travel times, especially for vulnerable groups, by supporting them with discounts or other feeder modes.
    The research is carried out by SMART and supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) programme.

  • New algorithm uses a hologram to control trapped ions

    Researchers have discovered the most precise way to control individual ions using holographic optical engineering technology.
    The new technology uses the first known holographic optical engineering device to control trapped ion qubits. This technology promises to help create more precise controls of qubits that will aid the development of quantum industry-specific hardware to further new quantum simulation experiments and potentially quantum error correction processes for trapped ion qubits.
    “Our algorithm calculates the hologram’s profile and removes any aberrations from the light, which lets us develop a highly precise technique for programming ions,” says lead author Chung-You Shih, a PhD student at the University of Waterloo’s Institute for Quantum Computing (IQC).
    Kazi Rajibul Islam, a faculty member at IQC and in the Department of Physics and Astronomy at Waterloo, is the lead investigator on this work. His team has been trapping ions for quantum simulation in the Laboratory for Quantum Information since 2019 but needed a precise way to control them.
    A laser aimed at an ion can “talk” to it and change the quantum state of the ion, forming the building blocks of quantum information processing. However, laser beams have aberrations and distortions that can result in a messy, wide focus spot, which is a problem because the distance between trapped ions is a few micrometers — much narrower than a human hair.
    The laser beam profiles the team wanted to use to stimulate the ions needed to be precisely engineered. To achieve this, they took a laser, expanded its beam to one centimeter wide, and sent it through a digital micromirror device (DMD), which is programmable and functions like a movie projector. The DMD chip carries two million micron-scale mirrors, each individually controlled by an electric voltage. Using an algorithm Shih developed, the DMD chip is programmed to display a hologram pattern; the light produced from the DMD hologram can have its intensity and phase exactly controlled.
    In testing, the team was able to manipulate each ion with the holographic light. Previous research has struggled with crosstalk: when a laser focuses on one ion, some light leaks onto the surrounding ions. With this device, the team characterized the aberrations using an ion as a sensor, then canceled them by adjusting the hologram, obtaining the lowest crosstalk in the world.
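The compensation principle is simple to state in code. The 1-D sketch below uses toy numbers and ignores how a real DMD binarizes the pattern; it only shows the core step of subtracting the measured aberration phase from the target pattern, so that the optics add the aberration back and the ion receives exactly the intended phase.

```python
import math

TWO_PI = 2 * math.pi
N = 16  # toy 1-D "DMD" with N pixels

# Desired phase pattern at the ion (a simple linear ramp here).
target_phase = [TWO_PI * i / N for i in range(N)]

# Aberration phase added by the optics, as measured with the ion sensor
# (invented smooth profile for illustration).
aberration = [0.3 * math.sin(TWO_PI * i / N) for i in range(N)]

# Pre-compensated hologram: display target minus measured aberration.
hologram = [(t - a) % TWO_PI for t, a in zip(target_phase, aberration)]

# Propagation through the optics adds the aberration back.
delivered = [(h + a) % TWO_PI for h, a in zip(hologram, aberration)]

def wrapped(a, b):
    """Phase difference folded into [0, pi]."""
    x = (a - b) % TWO_PI
    return min(x, TWO_PI - x)

err = max(wrapped(d, t) for d, t in zip(delivered, target_phase))
print(f"max residual phase error: {err:.2e} rad")
```

The real experiment's extra difficulty is measuring the aberration in the first place, which is where using the ion itself as an in-situ sensor comes in.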
    “There is a challenge in using commercially available DMD technology,” Shih says. “Its controller is made for projectors and UV lithography, not quantum experiments. Our next step is to develop our own hardware for quantum computation experiments.”
    This research was supported in part by the Canada First Research Excellence Fund through Transformative Quantum Technologies.
    Story Source:
    Materials provided by University of Waterloo. Note: Content may be edited for style and length.