More stories

  • Diamonds are for quantum sensing

    Scientists from the University of Tsukuba demonstrated how ultrafast spectroscopy can be used to improve the temporal resolution of quantum sensors. By measuring the orientation of coherent spins inside a diamond lattice, they showed that magnetic fields can be measured even over very short times. This work may advance quantum metrology, the field of ultra-high-accuracy measurement, as well as “spintronic” quantum computers that operate on electron spins.
    Quantum sensing offers the possibility of extremely accurate monitoring of temperature, as well as magnetic and electric fields, with nanometer resolution. By observing how these properties affect the energy level differences within a sensing molecule, new avenues in nanotechnology and quantum computing may become viable. However, the time resolution of conventional quantum sensing methods has so far been limited to the microsecond range by finite luminescence lifetimes, so a new approach is needed to refine quantum sensing.
    Now, a team of researchers led by the University of Tsukuba has developed a new method for implementing magnetic field measurements in a well-known quantum sensing system. Nitrogen-vacancy (NV) centers are specific defects in diamonds in which two adjacent carbon atoms have been replaced by a nitrogen atom and a vacancy. The spin state of an extra electron at this site can be read out or coherently manipulated using pulses of light.
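    The press release gives no equations, but the sensing principle can be summarized with the standard textbook form of the NV ground-state spin Hamiltonian (generic physics, not taken from the paper itself), in which an external magnetic field \mathbf{B} shifts the optically read-out spin levels:

      H = D S_z^2 + \gamma_e \, \mathbf{B} \cdot \mathbf{S},

    where D \approx 2.87 GHz is the zero-field splitting and \gamma_e \approx 28 GHz/T is the electron gyromagnetic ratio, both in frequency units; tracking how the m_s = \pm 1 sublevels shift yields the local magnetic field.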
    “For example, the negatively charged NV spin state can be used as a quantum magnetometer with an all-optical readout system, even at room temperature,” first author Ryosuke Sakurai says. The team used an “inverse Cotton-Mouton” effect to test their method. The normal Cotton-Mouton effect occurs when a transverse magnetic field creates birefringence, which converts linearly polarized light into elliptically polarized light. In this experiment, the scientists did the opposite, using light of different polarizations to create tiny, controlled local magnetic fields.
    “With nonlinear opto-magnetic quantum sensing, it will be possible to measure local magnetic fields, or spin currents, in advanced materials with high spatial and temporal resolution,” say senior author Muneaki Hase and his colleague Toshu An of the Japan Advanced Institute of Science and Technology. The team hopes that this work will help enable quantum spintronic computers that are sensitive to spin states, not just to electrical charge as in current computers. The research may also enable new experiments to observe dynamic changes in magnetic fields, or possibly even single spins, under realistic device-operating conditions.
    Story Source:
    Materials provided by University of Tsukuba.

  • Shedding light on linguistic diversity and its evolution

    Scholars from the Max Planck Institute for Evolutionary Anthropology in Germany and the University of Auckland in New Zealand have created a new global repository of linguistic data. The project is designed to facilitate new insights into the evolution of words and sounds of the languages spoken across the world today. The Lexibank database contains standardized lexical data for more than 2000 languages. It is the most extensive publicly available collection compiled so far.
    Is it true that many languages in the world use words similar to “mama” and “papa” for “mother” and “father”? If a language uses only one word for both “arm” and “hand,” does it also use only one word for both “leg” and “foot”? How do languages manage to use a relatively small number of words to express so many concepts? An interdisciplinary team of linguists, computational scientists and psychologists have created a large public database that can be used to study these and many more questions with the help of computational methods.
    “When our Department of Linguistic and Cultural Evolution was founded in 2014, I presented my colleagues with an ambitious goal: there are more than 7000 languages in the world. Create databases with the most extensive documentation of that linguistic diversity possible,” says Max Planck Director Russell Gray. “Our inspiration came from GenBank, a large genetic database where biologists from all over the world have deposited genomic data,” Gray continues. “GenBank was a game changer. The large amount of freely available sequence data revolutionized the ways we can analyze biological diversity. We hope that the first of our global linguistic databases, Lexibank, will help start to revolutionize our knowledge of linguistic diversity in a similar way.”
    New standards and new software
    The Lexibank repository provides data in the form of standardized wordlists for more than 2000 language varieties. “The work on Lexibank coincided with a push towards more consistent data formats in linguistic databases. Thus Lexibank can serve both as a large-scale example of the benefits of standardization and as a catalyst for further standardization,” reports Robert Forkel, who led the computational part of the data collection. “We decided to create our own standards, called Cross-Linguistic Data Formats, which have now been used successfully in a multitude of projects in which our department is involved.”
    The new standards proposed by the team are accompanied by new software tools that greatly facilitate linguists’ workflows. “We have designed new computer-assisted workflows that enable existing language datasets to be made comparable,” says Johann-Mattis List, who led the practical part of the data curation. “With these workflows, we have dramatically increased the efficiency of data standardization and data curation.”
    Identifying patterns of language evolution
    In addition to collecting and sharing the standardized language data, the authors also designed new computational techniques to answer questions about the evolution of linguistic diversity. They illustrate how these methods can be used by computing how languages differ or agree with respect to sixty different features.
    “Thanks to our standardized representation of language data, it is now easy to check how many languages use words like ‘mama’ and ‘papa’ for ‘mother’ and ‘father’,” reports List. “It turns out that this pattern can indeed be found in many languages of the world and in very different regions,” adds Simon J. Greenhill, one of the founders of the Lexibank project. “Since the languages showing this pattern are not all closely related to each other, it reflects independent parallel evolution, just as the great linguist Roman Jakobson suggested in 1968.”
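    As a rough, hypothetical illustration of the kind of query such standardized wordlists make possible (the file name, column labels, concept identifiers and sound classes below are illustrative assumptions, not the actual Lexibank schema), a check for the mama/papa pattern might look like this:

      # Hypothetical sketch: count languages whose words for "mother"/"father"
      # contain mama/papa-like syllables in a CLDF-style forms table.
      # File name, column names and concept labels are assumptions.
      import csv
      from collections import defaultdict

      MOTHER, FATHER = "MOTHER", "FATHER"      # assumed concept identifiers
      MAMA_LIKE = ("ma", "na")                 # nasal syllables
      PAPA_LIKE = ("pa", "ba", "ta")           # plosive syllables

      words = defaultdict(dict)                # language -> {concept: form}
      with open("forms.csv", newline="", encoding="utf-8") as f:
          for row in csv.DictReader(f):
              if row["Parameter_ID"] in (MOTHER, FATHER):
                  words[row["Language_ID"]][row["Parameter_ID"]] = row["Form"].lower()

      def has_pattern(forms):
          mother, father = forms.get(MOTHER, ""), forms.get(FATHER, "")
          return any(s in mother for s in MAMA_LIKE) and any(s in father for s in PAPA_LIKE)

      n_match = sum(has_pattern(f) for f in words.values())
      print(f"{n_match} of {len(words)} languages show a mama/papa-like pattern")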
    Expanding the data and developing new methods
    The new data collection and the automatically computed language features will contribute to new insights into open questions on linguistic diversity and language evolution. “Nobody thinks that the analysis must stop with the examples we give in our paper,” says List. “On the contrary, we hope that linguists, psychologists, and evolutionary scientists will feel encouraged to build on our example by expanding the data and developing new methods,” adds Forkel.
    Even in their current study, the authors present findings that warrant future investigations. “When investigating which languages use the same word for ‘arm’ and ‘hand’, we found that these languages typically also use the same word for ‘leg’ and ‘foot’,” List reports. “While this may seem to be a silly coincidence, it shows that the lexicon of human languages is often much more structured than one might assume when investigating one language in isolation.”

  • Let machines do the work: Automating semiconductor research with machine learning

    The semiconductor industry has been growing steadily ever since its first steps in the mid-twentieth century and, thanks to the high-speed information and communication technologies it enabled, it has paved the way for the rapid digitalization of society. Today, with global energy demand tightening, there is a growing need for faster, more integrated, and more energy-efficient semiconductor devices.
    However, modern semiconductor processes have already reached the nanometer scale, and the design of novel high-performance materials now involves the structural analysis of semiconductor nanofilms. Reflection high-energy electron diffraction (RHEED) is a widely used analytical method for this purpose. RHEED can be used to determine the structures that form on the surface of thin films at the atomic level and can even capture structural changes in real time as the thin film is being synthesized!
    Unfortunately, for all its benefits, RHEED is sometimes hindered by the fact that its output patterns are complex and difficult to interpret. In virtually all cases, a highly skilled experimenter is needed to make sense of the huge amounts of data that RHEED can produce in the form of diffraction patterns. But what if we could make machine learning do most of the work when processing RHEED data?
    A team of researchers led by Dr. Naoka Nagamura, a visiting associate professor at Tokyo University of Science (TUS) and a senior researcher at the National Institute for Materials Science (NIMS), Japan, has been working on just that. In their latest study, published online on 09 June 2022 in the international journal Science and Technology of Advanced Materials: Methods, the team explored the possibility of using machine learning to automatically analyze RHEED data. This work, which was supported by JST-PRESTO and JST-CREST, was the result of joint research by TUS and NIMS, Japan. It was co-authored by Ms. Asako Yoshinari and Prof. Masato Kotsugi, also from TUS, and Dr. Yuma Iwasaki from NIMS.
    The researchers focused on the surface superstructures that form on the first atomic layers of clean single-crystal silicon (one of the most versatile semiconductor materials), depending on the amount of indium adsorbed and slight differences in temperature. Surface superstructures are atomic arrangements unique to crystal surfaces where atoms stabilize in different periodic patterns than those inside the bulk of the crystal, depending on differences in the surrounding environment. Because they often exhibit unique physical properties, surface superstructures are the focus of much interest in materials science.
    First, the team used different hierarchical clustering methods, which are aimed at dividing samples into different clusters based on various measures of similarity. This approach serves to detect how many different surface superstructures are present. After trying different techniques, the researchers found that Ward’s method could best track the actual phase transitions in surface superstructures.
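    A minimal sketch of this kind of analysis, assuming flattened RHEED intensity patterns as input and a fixed number of clusters (neither of which is taken from the paper), might apply Ward-linkage agglomerative clustering as follows:

      # Illustrative sketch: group RHEED diffraction patterns with Ward-linkage
      # hierarchical clustering to detect distinct surface superstructures.
      # The input array and cluster count are placeholder assumptions.
      import numpy as np
      from sklearn.cluster import AgglomerativeClustering

      rng = np.random.default_rng(0)
      patterns = rng.random((120, 64 * 64))   # one flattened pattern per deposition step

      model = AgglomerativeClustering(n_clusters=4, linkage="ward")
      labels = model.fit_predict(patterns)

      # Phase transitions appear where the cluster label changes between steps
      transitions = np.nonzero(np.diff(labels))[0] + 1
      print("candidate phase-transition indices:", transitions)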
    The scientists then sought to determine the optimal process conditions for synthesizing each of the identified surface superstructures. They focused on the indium deposition time for which each superstructure was most extensively formed. Principal component analysis and other typical methods for dimensionality reduction did not perform well. Fortunately, non-negative matrix factorization, a different clustering and dimensionality reduction technique, could accurately and automatically obtain the optimal deposition times for each superstructure. Excited about these results, Dr. Nagamura remarks, “Our efforts will help automate the work that typically requires time-consuming manual analysis by specialists. We believe our study has the potential to change the way materials research is done and allow scientists to spend more time on creative pursuits.”
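    In the same spirit, the non-negative matrix factorization step can be sketched with scikit-learn; the component count and the reading of the weight matrix below are illustrative assumptions, not the study's exact procedure:

      # Illustrative sketch: factorize the (deposition step x pixel) matrix so each
      # NMF component stands for one superstructure; its weight profile peaks at the
      # deposition step where that phase is most fully developed.
      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(1)
      X = rng.random((120, 64 * 64))           # placeholder non-negative RHEED data

      nmf = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
      W = nmf.fit_transform(X)                 # (steps x components) weights
      H = nmf.components_                      # (components x pixels) basis patterns

      best_steps = W.argmax(axis=0)            # strongest step for each superstructure
      print("optimal deposition step per superstructure:", best_steps)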
    Overall, the findings reported in this study will hopefully lead to new and effective ways of using machine learning techniques for materials science, a central topic in the field of materials informatics. In turn, this would have implications in our everyday lives as existing devices and technologies are upgraded with better materials. “Our approach can be used to analyze the superstructures grown not only on thin-film silicon single-crystal surfaces, but also on metal crystal surfaces, sapphire, silicon carbide, gallium nitride, and various other important substrates. Thus, we expect our work to accelerate the research and development of next-generation semiconductors and high-speed communication devices,” concludes Dr. Nagamura.
    Story Source:
    Materials provided by Tokyo University of Science.

  • Biotechnology platforms enable fast, customizable vaccine production

    When COVID-19 created an urgent need for vaccines that could be made quickly, safely and cost-effectively, traditional manufacturing approaches were not sufficient to meet the demand. Biopharmaceutical companies therefore shifted to novel biotechnology platform-based techniques that could be more quickly adapted to manufacture COVID-19 vaccines, and that were more robust, customizable and flexible than traditional approaches. An examination of this transition by a Penn State-led team concludes that such smart manufacturing techniques could in the future be applied to other viruses, potentially allowing vaccine development to keep pace with constantly evolving pathogens, according to project lead Soundar Kumara, Allen E. Pearce and Allen M. Pearce Professor of Industrial Engineering at Penn State.
    The findings were published online by the American Society of Mechanical Engineers’ Journal of Computing and Information Science in Engineering and will appear in the journal’s August print issue.
    “Vaccines based on biotechnology platform-based techniques have ‘smart’ characteristics that are more versatile than vaccines designed and manufactured using traditional methods,” said Vishnu Kumar, industrial engineering doctoral candidate and co-author of the paper.
    Biotechnology platform-based vaccine development involves cultivating a flexible baseline structure that can be customized as needed to create new vaccines for related viruses. When pathogens mutate, researchers identify the changes and then apply them to the existing structure. This approach was underway when the COVID-19 pandemic began, and the massive global demand accelerated the large-scale and widespread adoption of the platform, Kumar said.
    Pfizer/BioNTech and Moderna used one such platform, based on messenger RNA, to develop their vaccines. The mRNA platform had already been designed to serve as the basis of a vaccine for coronaviruses, a rapidly mutating family that includes viruses responsible for the common cold. SARS-CoV-2, the virus that causes COVID-19, was sequenced soon after it emerged. Researchers used this information to modify the existing mRNA platform to develop a vaccine tailored to that version of SARS-CoV-2, a process that took less than a week once they had the genetic data. Johnson & Johnson used a similar approach based on a viral vector. In contrast, developing a traditional vaccine, which involves culturing disease-causing pathogens and injecting some form of them, can take 10 to 15 years.
    Biotechnology-based techniques have the potential to drive future research for viruses beyond COVID-19, such as the flu, according to Kumar. A smart manufacturing approach using systems that gather, store and transmit high-quality process data could facilitate connections between devices during each stage of the vaccine development and manufacturing process.
    “With an in-depth understanding of the COVID-19 vaccine as a ‘product,’ biopharmaceutical firms can appropriately identify and apply strategies, such as modular manufacturing, mass customization, automation and knowledge management to boost the vaccine development and manufacturing process,” Kumar said.
    Vijay Srinivasan of the Engineering Laboratory at the National Institute of Standards and Technology co-authored the paper.
    The Penn State Allen E. Pearce and Allen M. Pearce Professorship partially funded this research.
    Story Source:
    Materials provided by Penn State. Original written by Mary Fetzer.

  • Introducing a transceiver that can tap into the higher frequency bands of 5G networks

    5G networks are becoming more prevalent worldwide, and many consumer devices that support 5G are already benefiting from increased speeds and lower latency. However, some frequency bands allocated for 5G are not effectively utilized owing to technological limitations. One of them is the New Radio (NR) 39 GHz band, which actually spans 37 GHz to 43.5 GHz depending on the country. This NR band offers notable performance advantages over the lower-frequency bands 5G networks use today. For instance, it enables ultra-low-latency communication along with data rates of over 10 Gb/s and a massive capacity to accommodate many users.
    However, these feats come at a cost. High-frequency signals are attenuated quickly as they travel through space. It is therefore crucial that the transmitted power is concentrated in a narrow beam aimed directly at the receiver. This can, in principle, be achieved using phased-array beamformers, transmission devices composed of an array of carefully phase-controlled antennas. However, operating in the high-frequency regions of the NR band reduces the efficiency of power amplifiers, which tend to suffer from nonlinearity issues that distort the transmitted signal.
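    For intuition on how such a beamformer steers its beam, the short sketch below computes the progressive phase shift applied to each element of a generic uniform linear array; the 39 GHz carrier, half-wavelength spacing, element count and steering angle are illustrative assumptions, not parameters of the device in this story.

      # Illustrative sketch: per-element phase shifts that steer a uniform linear
      # array toward a chosen angle. All parameters are generic assumptions.
      import numpy as np

      c = 3e8                          # speed of light, m/s
      f = 39e9                         # carrier frequency, Hz (NR 39 GHz band)
      lam = c / f                      # wavelength, roughly 7.7 mm
      d = lam / 2                      # half-wavelength element spacing
      n = np.arange(8)                 # eight antenna elements
      steer_deg = 30.0                 # desired beam direction from broadside

      k = 2 * np.pi / lam
      phases = -k * d * n * np.sin(np.radians(steer_deg))   # radians per element
      print(np.degrees(phases) % 360)  # phase setting for each antenna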
    To address these issues, a team of researchers led by Professor Kenichi Okada from Tokyo Institute of Technology (Tokyo Tech), Japan, has developed a novel phased-array beamformer for 5G base stations. Their design adapts two well-known techniques, namely the Doherty amplifier and digital predistortion (DPD), to a mmWave phased-array transceiver, but with a few twists. The researchers will present their findings at the upcoming 2022 IEEE Symposium on VLSI Technology and Circuits.
    The Doherty amplifier, developed in 1936, has seen a resurgence in modern telecommunication devices owing to its good power efficiency and suitability for signals with a high peak-to-average ratio (such as 5G signals). The team at Tokyo Tech modified the conventional Doherty amplifier design to produce a bi-directional amplifier: the same circuit can both amplify a signal for transmission and amplify a received signal with low noise, fulfilling the crucial role of amplification in both directions. “Our proposed bidirectional implementation for the amplifier is very area-efficient. Additionally, thanks to its co-design with a wafer-level chip-scale packaging technology, it enables low insertion loss. This means that less power is lost as the signal traverses the amplifier,” explains Professor Okada.
    Despite its several advantages, however, the Doherty amplifier can exacerbate nonlinearity problems that arise from mismatches in the elements of the phased-array antenna. The team addressed this problem in two ways. First, they employed the DPD technique, which involves distorting the signal before transmission to effectively cancel out the distortion introduced by the amplifier. Their implementation, unlike conventional DPD approaches, used a shared look-up table (LUT) for all antennas, minimizing the complexity of the circuit. Second, they introduced inter-element mismatch compensation capabilities to the phased array, improving its overall linearity. “We compared the proposed device with other state-of-the-art 5G phased-array transceivers and found that, by compensating for the inter-element mismatches in the shared-LUT DPD module, ours demonstrates lower adjacent-channel leakage and transmission error,” remarks Professor Okada. “Hopefully, the device and techniques described in this study will let us all reap the benefits of 5G NR sooner!”
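    The general idea behind LUT-based predistortion can be sketched as follows; the toy amplifier model, table resolution and gain calculation are simplifying assumptions and do not represent the shared-LUT design described above.

      # Illustrative sketch of look-up-table digital predistortion (DPD): pre-distort
      # the signal so a compressive amplifier's output ends up nearly linear.
      import numpy as np

      def pa_model(x, sat=1.0):
          """Toy memoryless amplifier with soft saturation (gain compression)."""
          return x / np.sqrt(1 + (np.abs(x) / sat) ** 2)

      # LUT: for each desired output amplitude, store the gain that pre-inverts
      # the compression of the toy amplifier above.
      sat = 1.0
      amps = np.linspace(1e-3, 0.9 * sat, 64)           # desired output amplitudes
      drive = amps / np.sqrt(1 - (amps / sat) ** 2)     # exact inverse of pa_model
      lut_gain = drive / amps                           # gain to apply per amplitude

      def predistort(x):
          idx = np.clip(np.searchsorted(amps, np.abs(x)), 0, len(amps) - 1)
          return x * lut_gain[idx]

      x = 0.8 * np.exp(1j * np.linspace(0, 2 * np.pi, 8))   # sample complex tones
      print("without DPD:", np.round(np.abs(pa_model(x)), 3))
      print("with DPD   :", np.round(np.abs(pa_model(predistort(x))), 3))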
    Story Source:
    Materials provided by Tokyo Institute of Technology.

  • Ultra-fast photonic computing processor uses polarization

    New research uses multiple polarisation channels to carry out parallel processing, enhancing computing density by several orders of magnitude over conventional electronic chips.
    In a paper published today in Science Advances, researchers at the University of Oxford present a method that uses the polarisation of light to maximise information storage density and computing performance in nanowires.
    Light has an exploitable property: different wavelengths of light do not interact with each other, a characteristic used by fibre optics to carry parallel streams of data. Similarly, different polarisations of light do not interact with each other either. Each polarisation can be used as an independent information channel, enabling more information to be stored in multiple channels and hugely enhancing information density.
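    As a toy, generic illustration of why orthogonal polarisations behave as independent channels (this is textbook Jones-vector optics, not a model of the Oxford device), two bit streams can be carried on horizontal and vertical components and recovered separately:

      # Toy illustration: two bit streams ride on orthogonal polarisation components
      # of the same beam and are recovered independently by matched analysers.
      import numpy as np

      H = np.array([1.0, 0.0])          # horizontal polarisation basis vector
      V = np.array([0.0, 1.0])          # vertical polarisation basis vector

      bits_h = np.array([1, 0, 1, 1, 0])
      bits_v = np.array([0, 1, 1, 0, 1])

      # Each time slot carries one field vector: an H amplitude plus a V amplitude
      field = np.outer(bits_h, H) + np.outer(bits_v, V)

      # Analysers project onto each basis; because H and V are orthogonal,
      # the two channels do not leak into each other.
      print(field @ H)   # recovers bits_h
      print(field @ V)   # recovers bits_v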
    First author and DPhil student June Sang Lee, Department of Materials, University of Oxford said: ‘We all know that the advantage of photonics over electronics is that light is faster and more functional over large bandwidths. So, our aim was to fully harness such advantages of photonics combining with tunable material to realise faster and denser information processing.’
    In collaboration with Professor C David Wright, University of Exeter, the research team developed a HAD (hybridized-active-dielectric) nanowire, using a hybrid glassy material which shows switchable material properties upon the illumination of optical pulses. Each nanowire shows selective responses to a specific polarisation direction, so information can be simultaneously processed using multiple polarisations in different directions.
    Using this concept, researchers have developed the first photonic computing processor to utilise polarisations of light.
    Photonic computing is carried out through multiple polarisation channels, leading to an enhancement in computing density by several orders of magnitude compared with conventional electronic chips. The computing speeds are faster because these nanowires are modulated by nanosecond optical pulses.
    Since the invention of the first integrated circuit in 1958, packing more transistors into a given size of an electronic chip has been the go-to means of maximising computing density — the so-called ‘Moore’s Law’. However, with Artificial Intelligence and Machine Learning requiring specialised hardware that is beginning to push the boundaries of established computing, the dominant question in this area of electronic engineering has been ‘How do we pack more functionalities into a single transistor?’
    For over a decade, researchers in Professor Harish Bhaskaran’s lab in the Department of Materials, University of Oxford have been looking into using light as a means to compute.
    Professor Bhaskaran, who led the work, said: ‘This is just the beginning of what we would like to see in future, which is the exploitation of all degrees of freedom that light offers, including polarisation to dramatically parallelise information processing. Definitely early-stage work, but super exciting ideas that combine electronics, non-linear materials and computing. Lots of exciting prospects to work on which is always a great place to be in!’
    Story Source:
    Materials provided by University of Oxford.

  • What quantum information and snowflakes have in common, and what we can do about it

    Qubits are a basic building block for quantum computers, but they’re also notoriously fragile — tricky to observe without erasing their information in the process. Now, new research from the University of Colorado Boulder and the National Institute of Standards and Technology (NIST) could be a leap forward for handling qubits with a light touch.
    In the study, a team of physicists demonstrated that it could read out the signals from a type of qubit called a superconducting qubit using laser light, and without destroying the qubit at the same time.
    The group’s results could be a major step toward building a quantum internet, the researchers say. Such a network would link up dozens or even hundreds of quantum chips, allowing engineers to solve problems that are beyond the reach of even the fastest supercomputers around today. They could also, theoretically, use a similar set of tools to send unbreakable codes over long distances.
    The study, which will appear June 15 in the journal Nature, was led by JILA, a joint research institute between CU Boulder and NIST.
    “Currently, there’s no way to send quantum signals between distant superconducting processors like we send signals between two classical computers,” said Robert Delaney, lead author of the study and a former graduate student at JILA.
    Delaney explained that the traditional bits that run your laptop are pretty limited: They can only take on a value of zero or one, the numbers that underlie most computer programming to date. Qubits, in contrast, can be zeros, ones or, through a property called “superposition,” exist as zeros and ones at the same time.
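    In the standard notation of quantum information (generic textbook material, not specific to this study), a qubit state is written

      |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,

    so a measurement returns 0 with probability |\alpha|^2 and 1 with probability |\beta|^2; superposition simply means both amplitudes can be nonzero at once.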

  • Military cannot rely on AI for strategy or judgment, study suggests

    Using artificial intelligence (AI) for warfare has been the promise of science fiction and politicians for years, but new research from the Georgia Institute of Technology argues that only so much can be automated and shows the value of human judgment.
    “All of the hard problems in AI really are judgment and data problems, and the interesting thing about that is when you start thinking about war, the hard problems are strategy and uncertainty, or what is well known as the fog of war,” said Jon Lindsay, an associate professor in the School of Cybersecurity & Privacy and the Sam Nunn School of International Affairs. “You need human sense-making and to make moral, ethical, and intellectual decisions in an incredibly confusing, fraught, scary situation.”
    AI decision-making is based on four key components: data about a situation, interpretation of those data (or prediction), determining the best way to act in line with goals and values (or judgment), and action. Machine learning advancements have made predictions easier, which makes data and judgment even more valuable. Although AI can automate everything from commerce to transit, judgment is where humans must intervene, Lindsay and University of Toronto Professor Avi Goldfarb wrote in the paper, “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War,” published in International Security.
    Many policy makers assume human soldiers could be replaced with automated systems, ideally making militaries less dependent on human labor and more effective on the battlefield. This is called the substitution theory of AI, but Lindsay and Goldfarb state that AI should not be seen as a substitute, but rather as a complement to existing human strategy.
    “Machines are good at prediction, but they depend on data and judgment, and the most difficult problems in war are information and strategy,” he said. “The conditions that make AI work in commerce are the conditions that are hardest to meet in a military environment because of its unpredictability.”
    An example Lindsay and Goldfarb highlight is the Rio Tinto mining company, which uses self-driving trucks to transport materials, reducing costs and risks to human drivers. These operations rely on abundant, predictable, and unbiased data, such as traffic patterns and maps, and require little human intervention unless there are road closures or obstacles.
    War, however, usually lacks abundant unbiased data, and judgments about objectives and values are inherently controversial, but that doesn’t mean using AI there is impossible. The researchers argue AI would be best employed in bureaucratically stabilized environments on a task-by-task basis.
    “All the excitement and the fear are about killer robots and lethal vehicles, but the worst case for military AI in practice is going to be the classically militaristic problems where you’re really dependent on creativity and interpretation,” Lindsay said. “But what we should be looking at is personnel systems, administration, logistics, and repairs.”
    There are also consequences to using AI for both the military and its adversaries, according to the researchers. If humans are the central element to deciding when to use AI in warfare, then military leadership structure and hierarchies could change based on the person in charge of designing and cleaning data systems and making policy decisions. This also means adversaries will aim to compromise both data and judgment since they would largely affect the trajectory of the war. Competing against AI may push adversaries to manipulate or disrupt data to make sound judgment even harder. In effect, human intervention will be even more necessary.
    Yet this is just the start of the argument and innovations.
    “If AI is automating prediction, that’s making judgment and data really important,” Lindsay said. “We’ve already automated a lot of military action with mechanized forces and precision weapons, then we automated data collection with intelligence satellites and sensors, and now we’re automating prediction with AI. So, when are we going to automate judgment, or are there components of judgment that cannot be automated?”
    Until then, though, tactical and strategic decision making by humans continues to be the most important aspect of warfare.