More stories

  • Laser attack blinds autonomous vehicles, deleting pedestrians and confusing cars

    Self-driving cars, like the human drivers that preceded them, need to see what’s around them to avoid obstacles and drive safely.
    The most sophisticated autonomous vehicles typically use lidar, a spinning sensor that works like radar but with laser light and acts as the eyes of the car. Lidar supplies constant information about the distance to surrounding objects so the car can decide what actions are safe to take.
    But these eyes, it turns out, can be tricked.
    New research reveals that expertly timed lasers shined at an approaching lidar system can create a blind spot in front of the vehicle large enough to completely hide moving pedestrians and other obstacles. The deleted data causes the cars to think the road is safe to continue moving along, endangering whatever may be in the attack’s blind spot.
    This is the first time that lidar sensors have been tricked into deleting data about obstacles.
    The vulnerability was uncovered by researchers from the University of Florida, the University of Michigan and the University of Electro-Communications in Japan. The scientists also propose upgrades that could eliminate this weakness and protect people from malicious attacks.
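    The effect described can be pictured with a toy point-cloud filter. The sketch below is not the researchers’ attack code; it merely simulates the outcome they report, deleting every lidar return inside an attacker-chosen angular window so that an obstacle directly ahead vanishes from the scan. All names and values are hypothetical.
    ```python
    import numpy as np

    def spoof_blind_spot(points, attack_center_deg, attack_width_deg):
        """Simulate the reported effect of the laser attack: drop every lidar
        return whose azimuth falls inside the attacker's angular window.
        `points` is an (N, 2) array of (x, y) returns in the sensor frame."""
        azimuth = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
        lo = attack_center_deg - attack_width_deg / 2
        hi = attack_center_deg + attack_width_deg / 2
        keep = (azimuth < lo) | (azimuth > hi)
        return points[keep]

    # A pedestrian 10 m straight ahead (azimuth ~0 degrees)...
    scan = np.array([[10.0, 0.0], [10.0, 0.3], [5.0, -4.0], [3.0, 6.0]])
    # ...disappears once a 20-degree window centered on the heading is wiped out.
    spoofed = spoof_blind_spot(scan, attack_center_deg=0.0, attack_width_deg=20.0)
    print(spoofed)  # only the off-axis returns survive; the road ahead looks clear
    ```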

  • 2D nanosheets as anodes in Li-ion batteries: The answer is in the sheets

    Lithium-ion batteries are ubiquitous in the world of electric vehicles. A significant challenge with their use, however, is their limited battery life and slow charging. Recent studies suggest that two-dimensional (2D) nanomaterials are strong candidates for enhancing their performance. A collaborative research team from Japan and India recently demonstrated the efficacy of 2D titanium diboride nanosheets in lithium-ion batteries. Their findings could have far-reaching consequences for electric vehicles and other electronics.
    As the electric vehicle (EV) industry advances, so do efforts to research and develop superior lithium (Li)-ion batteries to power these vehicles. Achieving rapid charge-discharge operation and extended battery life remain critical challenges in their development. A few factors, such as the diffusion of Li ions, the characteristics of the electrode-electrolyte interface, and electrode porosity, can help overcome these issues and achieve extremely fast charging and ultralong life.
    In recent years, two-dimensional (2D) nanomaterials, which are thin sheet-like structures with a thickness of a few nanometers, have emerged as potential anode materials for Li-ion batteries. These nanosheets possess a high aspect ratio and a high density of active sites, which enable fast charging and superior cycling performance. In particular, 2D nanomaterials based on transition-metal diborides (TMDs) have piqued the interest of researchers. TMDs have been found to offer high rate capability and long cycling stability for Li-ion storage, owing to their honeycomb planes of boron and multivalent transition-metal atoms.
    Recently, a group of scientists led by Prof. Noriyoshi Matsumi from the Japan Advanced Institute of Science and Technology (JAIST) and Prof. Kabeer Jasuja from the Indian Institute of Technology (IIT) Gandhinagar set out to further explore the potential of TMDs for energy storage. The team conducted the first experimental study on the storage potential of titanium diboride (TiB2)-based hierarchical nanosheets (THNS) as an anode material for Li-ion batteries. The team comprised Rajashekar Badam, former Senior Lecturer at JAIST; Akash Varma, former M.S. course student at JAIST; Koichi Higashimine, Technical Specialist at JAIST; and Asha Liza James, Ph.D. student at IIT Gandhinagar. Their study was published in ACS Applied Nano Materials and made available online on September 19, 2022.
    The THNS were developed by oxidizing TiB2 powder with hydrogen peroxide, followed by centrifuging and freeze-drying the solution. “What makes our work stand out is the scalability of the method developed for synthesizing these TiB2 nanosheets. For any nanomaterial to translate into a tangible technology, scalability is the limiting factor. Our synthesis method only requires stirring and no sophisticated equipment. This is on account of the dissolution and recrystallization behavior exhibited by TiB2, a serendipitous discovery that makes this work a promising bridge from lab to the field,” explains Prof. Kabeer.
    Thereafter, the team constructed an anodic Li-ion half-cell using the THNS as active anode material. The team studied the charge-storage characteristics of the THNS-based anodes.
    The team found that the THNS-based anode showed a high discharge capacity of 380 mAh/g at a low current density of 0.025 A/g. Furthermore, a discharge capacity of 174 mAh/g could be obtained at a high current density of 1 A/g, with a charge time of 10 minutes and a capacity retention of 89.7% after 1,000 cycles. Additionally, the THNS-based Li-ion anode could sustain very high current rates, on the order of 15 to 20 A/g, facilitating ultrafast charging in about 9 to 14 seconds. At these high current rates, a capacity retention greater than 80% was observed after 10,000 cycles.
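    As a quick sanity check, the reported charge times follow from capacity divided by current density. A minimal sketch of that arithmetic (the 50 mAh/g figure used for the highest rates is an assumption inferred from the 9-to-14-second charge window, not a number from the paper):
    ```python
    def charge_time_minutes(capacity_mAh_per_g: float, current_A_per_g: float) -> float:
        """Time to deliver `capacity` at a constant gravimetric current:
        t [h] = capacity [mAh/g] / (1000 * current [A/g]); returned in minutes."""
        return capacity_mAh_per_g / (1000.0 * current_A_per_g) * 60.0

    print(charge_time_minutes(174, 1.0))   # ~10.4 min, matching the reported ~10 min
    print(charge_time_minutes(50, 20.0))   # ~0.15 min (~9 s) at the highest rates,
                                           # assuming ~50 mAh/g is stored at 20 A/g
    ```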
    The results of this study indicate the suitability of the 2D TiB2 nanosheets as a candidate for fast-charging and long-life Li-ion batteries. They also highlight the advantage of nano-scaling bulk materials, like TiB2, to attain promising properties, including pseudocapacitive charge storage, excellent high-rate capability, and superior cyclability. Explaining the potential long-term effects of their research, Prof. Matsumi says, “Such quick-charging technology can accelerate the diffusion of EVs and significantly decrease waiting times for charging various mobile electronic devices. We hope our findings can stimulate more research in this field, which can eventually lead to the convenience of EV users, lesser air pollution in cities, and less stressful mobile life in order to enhance the productivity of our society.”
    Here’s hoping that we soon see this remarkable technology being used in EVs and other electronic devices.

  • Artificial intelligence and molecule machine join forces to generalize automated chemistry

    Artificial intelligence, “building-block” chemistry and a molecule-making machine teamed up to find the best general reaction conditions for synthesizing chemicals important to biomedical and materials research — a finding that could speed innovation and drug discovery as well as make complex chemistry automated and accessible.
    With the machine-generated optimized conditions, researchers at the University of Illinois Urbana-Champaign and collaborators in Poland and Canada doubled the average yield of a special, hard-to-optimize type of reaction linking carbon atoms together in pharmaceutically important molecules. The researchers say their system provides a platform that also could be used to find general conditions for other classes of reactions and solutions for similarly complex problems. They reported their findings in the journal Science.
    “Generality is critical for automation, and thus making molecular innovation accessible even to nonchemists,” said study co-leader Dr. Martin D. Burke, an Illinois professor of chemistry and of the Carle Illinois College of Medicine, as well as a medical doctor. “The challenge is the haystack of possible reaction conditions is astronomical, and the needle is hidden somewhere inside. By leveraging the power of artificial intelligence and building-block chemistry to create a feedback loop, we were able to shrink the haystack. And we found the needle.”
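    The feedback loop Burke describes can be pictured as a suggest-test-update cycle. The sketch below is a generic epsilon-greedy stand-in, not the study’s actual algorithm, and the conditions and yields are entirely hypothetical:
    ```python
    import random

    # Hypothetical discrete search space of reaction conditions; the real
    # study's space (ligands, bases, solvents, ...) is far larger.
    CONDITIONS = [("ligand_A", "base_1"), ("ligand_A", "base_2"),
                  ("ligand_B", "base_1"), ("ligand_B", "base_2")]

    def run_reaction(condition):
        """Stand-in for the automated synthesis machine: returns one measured
        yield per experiment. Here it is just a noisy placeholder."""
        true_yield = {"ligand_A": 0.3, "ligand_B": 0.6}[condition[0]]
        return max(0.0, min(1.0, random.gauss(true_yield, 0.05)))

    def feedback_loop(budget=40, epsilon=0.2):
        """Epsilon-greedy loop: mostly exploit the best-known conditions,
        occasionally explore, and keep a running mean yield per condition."""
        stats = {c: [] for c in CONDITIONS}
        for _ in range(budget):
            if random.random() < epsilon or all(not v for v in stats.values()):
                choice = random.choice(CONDITIONS)      # explore
            else:                                       # exploit best mean so far
                choice = max(stats, key=lambda c: sum(stats[c]) / max(len(stats[c]), 1))
            stats[choice].append(run_reaction(choice))  # experiment + feedback
        return max(stats, key=lambda c: sum(stats[c]) / max(len(stats[c]), 1))

    print(feedback_loop())  # the conditions with the best average yield so far
    ```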
    Automated synthesis machines for proteins and nucleic acids such as DNA have revolutionized research and chemical manufacturing in those fields, but many chemicals of importance for pharmaceutical, clinical, manufacturing and materials applications are small molecules with complex structures, the researchers say.
    Burke’s group has pioneered the development of simple chemical building blocks for small molecules. His lab also developed an automated molecule-making machine that snaps the building blocks together to create a wide range of possible structures.
    However, general reaction conditions that would make the automated process broadly applicable have remained elusive.

  • A faster experiment to find and study topological materials

    Topological materials, an exotic class of materials whose surfaces exhibit different electrical or functional properties than their interiors, have been a hot area of research since their experimental realization in 2007, a finding that sparked further research and precipitated a Nobel Prize in Physics in 2016. These materials are thought to have great potential in a variety of fields, and might someday be used in ultraefficient electronic or optical devices, or as key components of quantum computers.
    But there are many thousands of compounds that may theoretically have topological characteristics, and synthesizing and testing even one such material to determine its topological properties can take months of experiments and analysis. Now a team of researchers at MIT and elsewhere has come up with a new approach that can rapidly screen candidate materials and determine with more than 90 percent accuracy whether they are topological.
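    As an illustration of what such rapid screening can look like in practice, here is a minimal sketch of a supervised screening pipeline: train a classifier on materials with known labels, then rank unlabeled candidates. The features, labels, and model choice are placeholders, not the study’s actual method:
    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Placeholder feature matrix: one row per known material, one column per
    # descriptor (the real study uses measured/computed material signatures).
    X_known = rng.normal(size=(200, 8))
    y_known = rng.integers(0, 2, size=200)   # 1 = topological, 0 = trivial (toy labels)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_known, y_known)

    # Screen a batch of untested candidates and rank by predicted probability.
    X_candidates = rng.normal(size=(1000, 8))
    scores = clf.predict_proba(X_candidates)[:, 1]
    shortlist = np.argsort(scores)[::-1][:10]
    print(shortlist, scores[shortlist])      # top 10 most likely topological
    ```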
    Using this new method, the researchers have produced a list of candidate materials. A few of these were already known to have topological properties; the rest are newly predicted by this approach.
    The findings are reported in the journal Advanced Materials in a paper by Mingda Li, the Class of 1947 Career Development Professor at MIT; graduate students (and twin sisters) Nina Andrejevic at MIT and Jovana Andrejevic at Harvard University; and seven others at MIT, Harvard, Princeton University, and Argonne National Laboratory.
    Topological materials are named after a branch of mathematics that describes shapes based on their invariant characteristics, which persist no matter how much an object is continuously stretched or squeezed out of its original shape. Topological materials, similarly, have properties that remain constant despite changes in their conditions, such as external perturbations or impurities.
    There are several varieties of topological materials, including semiconductors, conductors, and semimetals, among others. Initially, it was thought that there were only a handful of such materials, but recent theory and calculations have predicted that in fact thousands of different compounds may have at least some topological characteristics. The hard part is figuring out experimentally which compounds may be topological.

  • Rewards only promote cooperation if the other person also learns about them

    Researchers at the Max Planck Institute in Plön show that reputation plays a key role in determining which rewarding policies people adopt. Using game theory, they explain why individuals learn to use rewards to specifically promote good behaviour.
    Often, we use positive incentives like rewards to promote cooperative behaviour. But why do we predominantly reward cooperation? Why is defection rarely rewarded? Or more generally, why do we bother to engage in any form of rewarding in the first place? Theoretical work done by researchers Saptarshi Pal and Christian Hilbe at the Max Planck Research Group ‘Dynamics of Social Behaviour’ suggests that reputation effects can explain why individuals learn to reward socially.
    With tools from evolutionary game theory, the researchers construct a model in which individuals in a population (the players) can adopt different strategies of cooperation and rewarding over time. In this model, the players’ reputation is a key element: the players know, with a degree of certainty (characterized by the information transmissibility of the population), how their interaction partners are going to react to their behaviour, that is, which behaviours they deem worthy of rewards. If the information transmissibility is sufficiently high, players learn to reward cooperation. In contrast, without sufficient information about peers, players refrain from using rewards. The researchers show that these reputation effects play out in a similar way when individuals interact in groups of more than two.
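    The role of information transmissibility can be illustrated with a toy simulation. The sketch below is not the authors’ model; it simply shows how an opportunistic player, who cooperates only when informed of a partner’s rewarding policy, cooperates more often as transmissibility rises. All payoff values are hypothetical.
    ```python
    import random

    def interaction(p_info, reward_benefit, reward_cost, coop_benefit, coop_cost):
        """One encounter between an opportunistic player and a partner who
        rewards cooperation. With probability `p_info` (the information
        transmissibility) the opportunist knows the partner's rewarding
        policy; it cooperates only when it expects the reward."""
        knows_policy = random.random() < p_info
        cooperates = knows_policy            # defects when uninformed
        payoff_opportunist = payoff_partner = 0.0
        if cooperates:
            payoff_opportunist += -coop_cost + reward_benefit
            payoff_partner += coop_benefit - reward_cost
        return cooperates, payoff_opportunist, payoff_partner

    # Higher transmissibility -> cooperation (and rewarding it) pays off more often.
    for p in (0.1, 0.5, 0.9):
        runs = [interaction(p, 1.5, 0.5, 2.0, 1.0) for _ in range(10000)]
        print(p, sum(r[0] for r in runs) / len(runs))  # fraction of cooperative encounters
    ```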
    Antisocial rewarding
    In addition to highlighting the role of reputation in catalyzing cooperation and social rewarding, the scientists identify a couple of scenarios in which antisocial rewarding may evolve. Antisocial rewarding requires either that populations be assorted or that rewards be mutually beneficial for both the recipient and the provider of the reward. “These conditions under which people may learn to reward defection are, however, a bit restrictive, since they additionally require information to be scarce,” adds Saptarshi Pal.
    The results from this study suggest that rewards are only effective in promoting cooperation when they can sway opportunistic individuals, players who only cooperate when they anticipate a reward for doing so. A higher information transmissibility increases both the incentive to reward others for cooperating and the incentive to cooperate in the first place. Overall, the model suggests that when people reward cooperation in an environment where information transmissibility is high, they ultimately benefit themselves. This interpretation takes the altruism out of social rewarding: people may not use rewards to enhance others’ welfare, but to help themselves.
    Story Source:
    Materials provided by Max-Planck-Gesellschaft. Note: Content may be edited for style and length.

  • New form of universal quantum computers

    The computing power of quantum machines is currently still very low, and increasing it is proving to be a major challenge. Physicists now present a new architecture for a universal quantum computer that overcomes such limitations and could soon form the basis of the next generation of quantum computers.
    Quantum bits (qubits) in a quantum computer serve as a computing unit and memory at the same time. Because quantum information cannot be copied, it cannot be stored in a memory as in a classical computer. Due to this limitation, all qubits in a quantum computer must be able to interact with each other. This is currently still a major challenge for building powerful quantum computers. In 2015, theoretical physicist Wolfgang Lechner, together with Philipp Hauke and Peter Zoller, addressed this difficulty and proposed a new architecture for a quantum computer, now named LHZ architecture after the authors.
    “This architecture was originally designed for optimization problems,” recalls Wolfgang Lechner of the Department of Theoretical Physics at the University of Innsbruck, Austria. “In the process, we reduced the architecture to a minimum in order to solve these optimization problems as efficiently as possible.” The physical qubits in this architecture do not represent individual bits but encode the relative coordination between the bits. “This means that not all qubits have to interact with each other anymore,” explains Wolfgang Lechner. With his team, he has now shown that this parity concept is also suitable for a universal quantum computer.
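    The parity idea can be illustrated with classical bits. The toy sketch below shows only the encoding step, in which each “physical” value stores the relative value (parity) of a pair of logical bits rather than a bit itself; a real parity computer additionally enforces consistency constraints among these qubits.
    ```python
    from itertools import combinations

    def parity_encode(logical_bits):
        """Toy illustration of the parity encoding behind the LHZ architecture:
        each physical qubit records whether a pair of logical bits agree (0)
        or differ (1), rather than storing a bit directly."""
        return {(i, j): logical_bits[i] ^ logical_bits[j]
                for i, j in combinations(range(len(logical_bits)), 2)}

    print(parity_encode([1, 0, 1]))
    # {(0, 1): 1, (0, 2): 0, (1, 2): 1} -- qubit (0, 2) says bits 0 and 2 agree
    ```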
    Complex operations are simplified
    Parity computers can perform operations between two or more qubits on a single qubit. “Existing quantum computers already implement such operations very well on a small scale,” Michael Fellner from Wolfgang Lechner’s team explains. “However, as the number of qubits increases, it becomes more and more complex to implement these gate operations.” In two publications in Physical Review Letters and Physical Review A, the Innsbruck scientists now show that parity computers can, for example, perform quantum Fourier transformations — a fundamental building block of many quantum algorithms — with significantly fewer computation steps and thus more quickly. “The high parallelism of our architecture means that, for example, the well-known Shor algorithm for factoring numbers can be executed very efficiently,” Fellner explains.
    Two-stage error correction
    The new concept also offers hardware-efficient error correction. Because quantum systems are very sensitive to disturbances, quantum computers must correct errors continuously, and significant resources must be devoted to protecting quantum information, which greatly increases the number of qubits required. “Our model operates with a two-stage error correction: one type of error (bit-flip or phase error) is prevented by the hardware used,” say Anette Messinger and Kilian Ender, also members of the Innsbruck research team. There are already initial experimental approaches for this on different platforms. “The other type of error can be detected and corrected via the software,” Messinger and Ender say. This would allow a next generation of universal quantum computers to be realized with manageable effort. The spin-off company ParityQC, co-founded by Wolfgang Lechner and Magdalena Hauser, is already working in Innsbruck with partners from science and industry on possible implementations of the new model.
    The research at the University of Innsbruck was financially supported by the Austrian Science Fund FWF and the Austrian Research Promotion Agency FFG.
    Story Source:
    Materials provided by University of Innsbruck. Note: Content may be edited for style and length.

  • Unveiling the dimensionality of complex networks through hyperbolic geometry

    Reducing redundant information to find simplifying patterns in data sets and complex networks is a scientific challenge in many knowledge fields. Moreover, detecting the dimensionality of the data is still a hard-to-solve problem. An article published in the journal Nature Communications presents a method to infer the dimensionality of complex networks through the application of hyperbolic geometry, which captures the complexity of the relational structures of the real world in many diverse domains.
    Among the authors of the study are the researchers M. Ángeles Serrano and Marián Boguñá, from the Faculty of Physics and the Institute of Complex Systems of the UB (UBICS), and Pedro Almagro, from the Higher Technical School of Engineering of the University of Sevilla. The research study provides a multidimensional hyperbolic model of complex networks that reproduces their connectivity, with an ultra-low and customizable dimensionality for each specific network. This enables a better characterization of a network’s structure, for instance at the community scale, and improves its predictive capability.
    The study reveals unexpected regularities, such as the extremely low dimensions of molecular networks associated with biological tissues; the slightly higher dimensionality required by social networks and the Internet; and the discovery that brain connectomes are close to three dimensions in their automatic organisation.
    Hyperbolic versus Euclidean geometry
    The intrinsic geometry of data sets or complex networks is not obvious, which becomes an obstacle in determining the dimensionality of real networks. Another challenge is that the definition of distance has to be established according to their relational and connectivity structure, and this also requires sophisticated models.
    Now, the new approach is based on the geometry of complex networks, and more specifically, on the configurational geometric model, or S^D model. “This model, which we have developed in previous work, describes the structure of complex networks based on fundamental principles,” says M. Ángeles Serrano, ICREA researcher at the Department of Condensed Matter Physics of the UB.
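    To give a flavour of the geometry involved: in the native polar representation of the hyperbolic plane H^2, the distance between two embedded nodes follows the hyperbolic law of cosines. The sketch below is a generic illustration of that formula, not code from the paper (whose S^D model is more general):
    ```python
    import math

    def hyperbolic_distance(r1, theta1, r2, theta2):
        """Distance between two points in the native representation of H^2,
        the kind of space hyperbolic network models embed nodes into:
        cosh(d) = cosh(r1)cosh(r2) - sinh(r1)sinh(r2)cos(dtheta)."""
        dtheta = math.pi - abs(math.pi - abs(theta1 - theta2))  # wrap to [0, pi]
        x = (math.cosh(r1) * math.cosh(r2)
             - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
        return math.acosh(max(x, 1.0))  # guard against rounding below 1

    # Nodes at similar angles sit close; opposite angles sit roughly 2*r apart.
    print(hyperbolic_distance(3.0, 0.0, 3.0, 0.1))      # small
    print(hyperbolic_distance(3.0, 0.0, 3.0, math.pi))  # ~6.0, i.e. ~2 * r
    ```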

  • Mathematical modeling suggests U.S. counties are still unprepared for COVID spikes

    America was unprepared for the magnitude of the pandemic, which overwhelmed many counties and filled some hospitals to capacity. A new paper in PNAS suggests there may have been a mathematical method, of sorts, to the madness of those early COVID days.
    The study tests a model that closely matches the patterns of case counts and deaths reported, county by county, across the United States between April 2020 and June 2021. The model suggests that unprecedented COVID spikes could, even now, overwhelm local jurisdictions.
    “Our best estimate, based on the data, is that the numbers of cases and deaths per county have infinite variance, which means that a county could get hit with a tremendous number of cases or deaths,” says Rockefeller’s Joel Cohen. “We cannot reasonably anticipate that any county will have the resources to cope with extremely large, rare events, so it is crucial that counties — as well as states and even countries — develop plans, ahead of time, to share resources.”
    Predicting 99 percent of a pandemic
    Ecologists might have guessed that the spread of COVID cases and deaths would at least roughly conform to Taylor’s Law, a formula that relates a population’s mean to its variance (a measure of the scatter around the average). From how crop yields fluctuate, to the frequency of tornado outbreaks, to how cancer cells multiply, Taylor’s Law forms the backbone of many statistical models that experts use to describe thousands of species, including humans.
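    Taylor’s Law states that the variance scales as a power of the mean, variance = a · mean^b, so it appears as a straight line on log-log axes. A minimal sketch of the standard fitting procedure, using synthetic data rather than the paper’s county-level COVID counts:
    ```python
    import numpy as np

    # Synthetic group means and variances obeying variance = a * mean^b
    rng = np.random.default_rng(1)
    a_true, b_true = 2.0, 1.8
    means = rng.uniform(1, 100, size=50)
    variances = a_true * means**b_true * rng.lognormal(0, 0.1, size=50)

    # Taylor's Law is linear in log-log space: log V = log a + b * log M
    b_hat, log_a_hat = np.polyfit(np.log(means), np.log(variances), 1)
    print(f"b = {b_hat:.2f}, a = {np.exp(log_a_hat):.2f}")  # recovers ~1.8 and ~2.0
    ```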
    But when Cohen began looking into whether Taylor’s Law could also describe the grim COVID statistics provided by The New York Times, he ran into a surprise.